\section{Introduction}
\label{sec:introduction}
The majority of the $\sim90$ known transiting extrasolar planets (TEPs) have
been found to lie in the 0.5\,\ensuremath{M_{\rm J}}\ to 2.0\,\ensuremath{M_{\rm J}}\ mass range. The
apparent drop in their mass distribution at $\sim$2\,\ensuremath{M_{\rm J}}\ has been
noted by, e.g., \citet{southworth:2009}, and by \citet{torres:2010}.
In the currently known sample, 75\% of the TEPs have planetary mass
$\ensuremath{M_{p}}<2.0\,\ensuremath{M_{\rm J}}$, and there appears to be a minor peak in their
occurrence rate at $\ensuremath{M_{p}}\approx2\,\ensuremath{M_{\rm J}}$, beyond which the rate
falls off sharply towards higher masses. Are there any biases present against
discovering massive planets? Such planets tend to be less inflated,
and theory dictates that their radii shrink as their mass increases
towards the brown dwarf regime. According to \citet{baraffe:2010},
this reversal of the $\ensuremath{M_{p}}-\ensuremath{R_{p}}$ relation happens around
$\ensuremath{M_{p}}\approx2-3\,\ensuremath{M_{\rm J}}$, and falls off as $\ensuremath{R_{p}}\propto\ensuremath{M_{p}}^{-1/8}$
\citep[see e.g.][]{fortney:2009}. The smaller radii for massive
planets yield a minor bias against discovering them via the transit
method, since they produce shallower transits. Very massive planets
can induce stellar variability of their host stars
\citep{shkolnik:2009}, somewhat decreasing the efficiency of detecting
their shallow transits via simple algorithms that expect constant
out-of-transit light curves. Also, the host stars of massive planets are
typically more rapid rotators: the average $\ensuremath{v \sin{i}}$ for host stars with
planets $\ensuremath{M_{p}}<2\ensuremath{M_{\rm J}}$ is 3.9\,\ensuremath{\rm km\,s^{-1}}\ (with a $3.8\,\ensuremath{\rm km\,s^{-1}}$ standard deviation
around the mean), whereas the same value for the massive-planet host
stars is $10.9$\,\ensuremath{\rm km\,s^{-1}}\ (with a $12.7\,\ensuremath{\rm km\,s^{-1}}$ standard deviation around the
mean)\footnote{This includes the 4 planets announced in this paper.}.
The five fastest rotators all harbor planets more massive than
2\,\ensuremath{M_{\rm J}}. This presents a bias against discovering them either via
radial velocity (RV) searches, which are more efficient around quiet
non-rotating dwarfs, or via transit searches, where the targets may be
discarded during the confirmation phase. Along the same lines, the
large RV amplitude of the host star, as caused by the planetary
companion, may even lead to erroneous rejection during the
reconnaissance phase of candidate confirmation, since such systems
resemble eclipsing binaries. Finally, massive planets tend to have
eccentric orbits\footnote{See
e.g.~exoplanets.org for statistics.} \citep{southworth:2009}, meaning
that they require more RV observations to properly map their
orbits, and thus their announcement rate is slower. On the other
hand, a strong bias {\em for} detecting such planets--compensating for
most of the effects above--is the fact that the large RV amplitudes of
the host stars are easier to detect, since they do not require internal
precisions at the \ensuremath{\rm m\,s^{-1}}\ level (see HAT-P-2, where valuable data were
contributed to the RV fit by modest-precision instruments yielding
$\sim1\,\ensuremath{\rm km\,s^{-1}}$ precision; \citealt{bakos:2007}). Altogether, while
there are minor biases for and against detecting massive transiting
planets, their overall effect appears to be negligible, and the drop in
frequency at $\gtrsim 2\,\ensuremath{M_{\rm J}}$ seems to be real.
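The detection bias discussed above can be made concrete with a toy calculation. In the hedged sketch below, the turnover mass, the radius normalization, and the stellar radius are assumed round numbers for illustration, not values from this paper; it simply propagates the $\ensuremath{R_{p}}\propto\ensuremath{M_{p}}^{-1/8}$ scaling into a transit depth.

```python
# Toy sketch of the R_p ~ M_p^(-1/8) scaling above the mass-radius
# turnover, and its effect on transit depth. The turnover mass
# (2.5 M_J), the 1.1 R_J normalization, and the 10 R_J stellar radius
# are illustrative assumptions, not results from this paper.

def planet_radius_rj(mass_mj, r_turn=1.1, m_turn=2.5):
    """Radius in R_J: roughly constant below the turnover mass,
    R_p proportional to M_p^(-1/8) above it."""
    if mass_mj <= m_turn:
        return r_turn
    return r_turn * (mass_mj / m_turn) ** (-1.0 / 8.0)

def transit_depth_mmag(mass_mj, r_star_rj=10.0):
    """Central transit depth in mmag for an assumed Sun-like star
    (R_sun is about 10 R_J); depth = (R_p / R_star)^2 in flux."""
    depth = (planet_radius_rj(mass_mj) / r_star_rj) ** 2
    return 1085.7 * depth  # 2.5/ln(10) * 1000, small-depth approximation
```

Under these assumptions a 10\,\ensuremath{M_{\rm J}}\ planet produces a transit of roughly 9\,mmag versus roughly 13\,mmag for a 2.5\,\ensuremath{M_{\rm J}}\ one, i.e., a modest but real bias against the most massive planets.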
Massive planets are important for many reasons. They provide very
strong constraints on formation and migration theories, which need to
explain the observed distribution of planetary system parameters in a
wide range \citep{baraffe:2008,baraffe:2010}, from 0.01\,\ensuremath{M_{\rm J}}\
\citep[Corot-7b; ][]{queloz:2009} to 26.4\,\ensuremath{M_{\rm J}}\
\citep[Corot-3b; ][]{deleuil:2008}. High-mass objects necessitate the
inclusion of additional physical mechanisms for formation and migration,
such as planet-planet scattering \citep{chatterjee:2008,ford:2008}, and
the Kozai-mechanism \citep{fabrycky:2007}.
They are borderline objects between planets and brown dwarfs, and help
us understand how these populations differ and overlap \citep[see ][for
a review]{leconte:2009}. For example, a traditional definition of
planets is that they exhibit no deuterium burning, where the
deuterium-burning limit is thought to be around 13\,\ensuremath{M_{\rm J}}. However,
this limit carries large uncertainties due to the numerous model
parameters and solutions, and to the fact that deuterium may be able to
burn in the H/He layers above the core \citep{baraffe:2008}. Another
possible definition of planets is based on their formation scenario,
i.e.~they are formed by accretion in a protoplanetary disk around their
young host star, as opposed to the gravitational collapse of a
molecular cloud (brown dwarfs).
Perhaps related to the formation and migration mechanisms, a number of
interesting correlations involving massive planets have been pointed
out. \citet{udry:2002} noted that short period massive planets are
predominantly found in binary stellar systems. \citet{southworth:2009}
noted that only 8.6\% of the low-mass planets show significantly
eccentric orbits, whereas 77\% of the massive planets have
eccentric orbits (although low-mass systems have lower S/N RV curves,
rendering the detection of eccentric orbits more difficult).
Curiously, there appears to be a lack of correlation between planetary
mass and host-star metallicity, whereas one would naively expect that the
formation of high mass planets (via core accretion) would require
higher metal content. Until this work, there was a hint of a correlation
between planetary and stellar mass \citep[e.g.][]{deleuil:2008}, in the
sense that the most massive planets orbited $\ensuremath{M_\star}\gtrsim1.2\,\ensuremath{M_\sun}$
stars, and there was a (biased) tendency that lower mass planets orbit
less massive stars.
All of these observations suffer from small-number statistics and heavy
biases. One way of improving our knowledge is to expand the
sample of well-characterized planets. In this work we report on 4 new
massive transiting planets around bright stars, namely
\setcounter{planetcounter}{1}
\ifnum\value{planetcounter}=4 and \else\fi\hatcurCCgsc{20}\ifnum\value{planetcounter}<4 ,\else. \fi
\setcounter{planetcounter}{2}
\ifnum\value{planetcounter}=4 and \else\fi\hatcurCCgsc{21}\ifnum\value{planetcounter}<4 ,\else. \fi
\setcounter{planetcounter}{3}
\ifnum\value{planetcounter}=4 and \else\fi\hatcurCChd{22}\ifnum\value{planetcounter}<4 ,\else. \fi
\setcounter{planetcounter}{4}
\ifnum\value{planetcounter}=4 and \else\fi\hatcurCCgsc{23}\ifnum\value{planetcounter}<4 ,\else. \fi
This extends the currently known sample of bright ($V<13.5$) and
massive ($\ensuremath{M_{p}}>2\,\ensuremath{M_{\rm J}}$) transiting planets by 30\% (from 13 to 17).
These discoveries were made by the Hungarian-made Automated Telescope
Network \citep[HATNet;][]{bakos:2004} survey. HATNet has been one of
the main contributors to the discovery of TEPs, among others such as
the ground-based SuperWASP \citep{pollacco:2006}, TrES
\citep{alonso:2004} and XO projects \citep{pmcc:2005}, and space-borne
searches such as CoRoT \citep{baglin:2006} and Kepler
\citep{borucki:2010}. In operation since 2003, HATNet has now covered
approximately 14\% of the sky, searching for TEPs around bright stars
($8\lesssim I \lesssim 14$). We operate six wide-field instruments:
four at the Fred Lawrence Whipple Observatory (FLWO) in Arizona, and
two on the roof of the hangar servicing the Smithsonian Astrophysical
Observatory's Submillimeter Array, in Hawaii.
The layout of the paper is as follows. In \refsecl{obs} we report the
detections of the photometric signals and the follow-up spectroscopic
and photometric observations for each of the planets. In
\refsecl{analysis} we describe the analysis of the data, beginning
with the determination of the stellar parameters, continuing with a
discussion of the methods used to rule out nonplanetary, false
positive scenarios which could mimic the photometric and spectroscopic
observations, and finishing with a description of our global modeling
of the photometry and radial velocities. Our findings are discussed
in \refsecl{discussion}.
\section{Observations}
\label{sec:obs}
\subsection{Photometric detection}
\label{sec:detection}
\reftabl{photobs} summarizes the HATNet discovery observations of each
new planetary system. The calibration of the HATNet frames was carried
out using standard procedures correcting for the CCD bias, dark
current, and flat-field structure. The calibrated images were then subjected to
star detection and astrometry, as described in \cite{pal:2006}.
Aperture photometry was performed on each image at the stellar
centroids derived from the Two Micron All Sky Survey
\citep[2MASS;][]{skrutskie:2006} catalog and the individual astrometric
solutions. For certain datasets (HAT-P-20, HAT-P-22, HAT-P-23) we also
carried out an image subtraction \citep{alard:2000} based photometric
reduction using discrete kernels \citep{bramich:2008}, as described in
\citet{pal:2009b}. The resulting light curves\ were decorrelated (cleaned of
trends) using the External Parameter Decorrelation \citep[EPD;
see][]{bakos:2009} technique in ``constant'' mode and the Trend
Filtering Algorithm \citep[TFA; see][]{kovacs:2005}. The light curves{} were
searched for periodic box-shaped signals using the Box Least-Squares
\citep[BLS; see][]{kovacs:2002} method. We detected significant signals
in the light curves\ of the stars as summarized below:
\begin{itemize}
\item {\em \hatcur{20}} -- \hatcurCCgsc{20} (also known as
\hatcurCCtwomass{20}; $\alpha = \hatcurCCra{20}$, $\delta =
\hatcurCCdec{20}$; J2000; V=\hatcurCCtassmv{20}). A signal was
detected for this star with an apparent depth of
$\sim$\hatcurLCdip{20}\,mmag, and a period of
$P=$\hatcurLCPshort{20}\,days (see \reffigl{hatnet20}).
Note that the depth was attenuated by the presence of a fainter
neighbor star that is not resolved at the coarse resolution
(14\ensuremath{\rm \arcsec\,pixel^{-1}}) of the HATNet images. Also, the depth obtained by
fitting a trapezoid instead of the correct \citet{mandel:2002} model is
somewhat shallower than the maximum depth of the \citet{mandel:2002}
model fit (which was 19.6\,mmag; see \refsecl{globmod}). The drop in brightness
had a first-to-last-contact duration, relative to the total period,
of $q = $\hatcurLCq{20}, corresponding to a total duration of $Pq =
$\hatcurLCdurhr{20}~hr. \hatcur{20} has a red companion (2MASS
07273963+2420171, $J-K = 0.92$) at 6.86\arcsec\ separation that is
fainter than \hatcur{20} by $\Delta R = 1.36$\,mag.
\item {\em \hatcur{21}} -- \hatcurCCgsc{21} (also known as
\hatcurCCtwomass{21}; $\alpha = \hatcurCCra{21}$, $\delta =
\hatcurCCdec{21}$; J2000; V=\hatcurCCtassmv{21}). A signal was
detected for this star with an apparent depth of
$\sim$\hatcurLCdip{21}\,mmag, and a period of
$P=$\hatcurLCPshort{21}\,days (see \reffigl{hatnet21}). The drop in
brightness had a first-to-last-contact duration, relative to the
total period, of $q = $\hatcurLCq{21}, corresponding to a total
duration of $Pq = $\hatcurLCdurhr{21}~hr.
\item {\em \hatcur{22}} -- \hatcurCChd{22} (also known as
\hatcurCCgsc{22} and \hatcurCCtwomass{22}; $\alpha =
\hatcurCCra{22}$, $\delta = \hatcurCCdec{22}$; J2000;
V=\hatcurCCtassmv{22}). A signal was detected for this star with an
apparent depth of $\sim$\hatcurLCdip{22}\,mmag, and a period of
$P=$\hatcurLCPshort{22}\,days (see \reffigl{hatnet22}). The drop in
brightness had a first-to-last-contact duration, relative to the
total period, of $q = $\hatcurLCq{22}, corresponding to a total
duration of $Pq = $\hatcurLCdurhr{22}~hr. \hatcur{22} has a close
red companion star (2MASS 10224397+5007504, $J-K = 0.86$) at
9.1\arcsec\ separation that is $\Delta i = 2.58$\,mag fainter.
\item {\em \hatcur{23}} -- \hatcurCCgsc{23} (also known as
\hatcurCCtwomass{23}; $\alpha = \hatcurCCra{23}$, $\delta =
\hatcurCCdec{23}$; J2000; V=\hatcurCCtassmv{23}). A signal was
detected for this star with an apparent depth of $\sim 11.5$\,mmag,
and a period of $P=$\hatcurLCPshort{23}\,days (see
\reffigl{hatnet23}). Similarly to \hatcur{20}, the depth was
attenuated by close-by faint neighbors. The drop in brightness had a
first-to-last-contact duration, relative to the total period, of $q =
$\hatcurLCq{23}, corresponding to a total duration of $Pq =
$\hatcurLCdurhr{23}~hr.
\end{itemize}
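The BLS stage of the pipeline above can be illustrated with a deliberately simplified box search. This toy version (function name hypothetical) scores each trial period by the depth of the deepest sliding phase box rather than by the full weighted signal-residue statistic of \citet{kovacs:2002}:

```python
import numpy as np

def simple_bls(t, flux, periods, q=0.05, nbins=200):
    """Toy box search: fold the light curve on each trial period, bin
    it in phase, and score the period by the depth of the deepest
    sliding box of fractional width q. A simplified stand-in for the
    full BLS statistic of Kovacs et al. (2002), for illustration only."""
    f = flux - np.median(flux)   # so empty phase bins sit at the baseline
    width = max(1, int(q * nbins))
    best_depth, best_period = -np.inf, periods[0]
    for period in periods:
        phase = (t / period) % 1.0
        idx = np.minimum((phase * nbins).astype(int), nbins - 1)
        sums = np.bincount(idx, weights=f, minlength=nbins)
        counts = np.maximum(np.bincount(idx, minlength=nbins), 1)
        means = sums / counts
        ext = np.concatenate([means, means[:width]])  # wrap in phase
        for i in range(nbins):
            depth = -ext[i:i + width].mean()          # deeper box = larger
            if depth > best_depth:
                best_depth, best_period = depth, period
    return best_period
```

Folding at the true period concentrates the in-transit points into a few adjacent phase bins, maximizing the box depth; at incorrect trial periods the transit smears across all bins and the statistic stays near zero.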
\begin{figure}[!ht]
\plotone{\hatcurhtr{20}-hatnet.eps}
\caption[]{
Unbinned light curve{} of \hatcur{20} including all 2600 instrumental
\band{R} 5.5 minute cadence measurements obtained with the HAT-7
and HAT-8 telescopes of HATNet (see \reftabl{photobs} for
details), and folded with the period $P =
\hatcurLCPprec{20}$\,days resulting from the global fit described
in \refsecl{analysis}. The solid line shows the ``P1P3'' transit
model fit to the light curve (\refsecl{globmod}). The bold points
in the lower panel show the light curve binned in phase with a
bin-size of 0.002.
\label{fig:hatnet20}}
\end{figure}
\begin{figure}[!ht]
\plotone{\hatcurhtr{21}-hatnet.eps}
\caption[]{
Unbinned light curve{} of \hatcur{21} including all 28,000 instrumental
\band{I} and \band{R} 5.5 minute cadence measurements obtained
with the HAT-5, HAT-6, HAT-8 and HAT-9 telescopes of HATNet (see
\reftabl{photobs} for details), and folded with the period $P =
\hatcurLCPprec{21}$\,days resulting from the global fit described
in \refsecl{analysis}. The solid line shows the ``P1P3'' transit
model fit to the light curve (\refsecl{globmod}). The bold points
in the lower panel show the light curve binned in phase with a
bin-size of 0.002.
\label{fig:hatnet21}}
\end{figure}
\begin{figure}[!ht]
\plotone{\hatcurhtr{22}-hatnet.eps}
\caption[]{
Unbinned light curve{} of \hatcur{22} including all
4200 instrumental \band{R} 5.5 minute cadence
measurements obtained with the HAT-5 telescope of HATNet (see
the text for details), and folded with the period $P =
\hatcurLCPprec{22}$\,days resulting from the global
fit described in \refsecl{analysis}. The solid line shows the
``P1P3'' transit model fit to the light curve
(\refsecl{globmod}). The bold points
in the lower panel show the light curve binned in phase with a
bin-size of 0.002.
\label{fig:hatnet22}}
\end{figure}
\begin{figure}[!ht]
\plotone{\hatcurhtr{23}-hatnet.eps}
\caption[]{
Unbinned light curve{} of \hatcur{23} including all 3500 instrumental
\band{R} 5.5 minute cadence measurements obtained with the HAT-6
and HAT-9 telescopes of HATNet (see \reftabl{photobs} for
details), and folded with the period $P =
\hatcurLCPprec{23}$\,days resulting from the global fit described
in \refsecl{analysis}. The solid line shows the ``P1P3'' transit
model fit to the light curve (\refsecl{globmod}). The bold points
in the lower panel show the light curve binned in phase with a
bin-size of 0.002.
\label{fig:hatnet23}}
\end{figure}
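The phase folding and binning used in the discovery figures above can be sketched as follows (function name hypothetical; the 0.002 bin size matches the figure captions):

```python
import numpy as np

def fold_and_bin(t, flux, period, t0=0.0, bin_size=0.002):
    """Fold times on `period` (phase 0 at epoch t0) and average the
    flux in phase bins of the given size, as for the bold binned points
    in the discovery light curves. Empty bins are returned as NaN."""
    phase = ((t - t0) / period) % 1.0
    nbins = int(round(1.0 / bin_size))
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    sums = np.bincount(idx, weights=flux, minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    centers = (np.arange(nbins) + 0.5) * bin_size
    return centers, means
```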
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{llrrr}
}{
\begin{deluxetable}{llrrr}
}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{
Summary of photometric observations
\label{tab:photobs}
}
\tablehead{
\colhead{~~~~~~~~Instrument/Field~~~~~~~~} &
\colhead{Date(s)} &
\colhead{Number of Images} &
\colhead{Cadence (s)} &
\colhead{Filter}
}
\startdata
\sidehead{\textbf{\hatcur{20}}}
~~~~HAT-7/G267 & 2007 Dec--2008 May & 802 & 330 & $R$ \\
~~~~HAT-8/G267 & 2007 Oct--2008 May & 1850 & 330 & $R$ \\
~~~~KeplerCam & 2009 Mar 11 & 268 & 43 & Sloan $i$ \\
~~~~KeplerCam & 2009 Oct 21 & 343 & 32 & Sloan $i$ \\
\sidehead{\textbf{\hatcur{21}}}
~~~~HAT-6/G183 & 2006 Dec--2007 May & 4528 & 330 & $I$ \\
~~~~HAT-9/G183 & 2006 Nov--2007 Jun & 4586 & 330 & $I$ \\
~~~~HAT-5/G184 & 2006 Dec--2007 Jun & 4040 & 330 & $I$ \\
~~~~HAT-8/G184 & 2006 Dec--2007 Jun & 5606 & 330 & $I$ \\
~~~~HAT-6/G141 & 2008 Jan--2008 Jun & 5142 & 330 & $R$ \\
~~~~HAT-9/G141 & 2008 Jan--2008 Jun & 3964 & 330 & $R$ \\
~~~~KeplerCam & 2009 Apr 20 & 243 & 53 & Sloan $i$ \\
~~~~KeplerCam & 2010 Feb 15 & 412 & 43 & Sloan $i$ \\
~~~~FTN/LCOGT\tablenotemark{a} & 2010 Feb 19 & 511 & 31 & Sloan $i$ \\
\sidehead{\textbf{\hatcur{22}}}
~~~~HAT-5/G139 & 2007 Dec--2008 May & 4288 & 330 & $R$ \\
~~~~KeplerCam & 2009 Feb 28 & 532 & 28 & Sloan $z$ \\
~~~~KeplerCam & 2009 Apr 30 & 353 & 33 & Sloan $g$ \\
\sidehead{\textbf{\hatcur{23}}}
~~~~HAT-6/G341 & 2007 Sep--2007 Dec & 1178 & 330 & $R$ \\
~~~~HAT-9/G341 & 2007 Sep--2007 Nov & 2351 & 330 & $R$ \\
~~~~KeplerCam & 2008 Jun 14 & 147 & 73 & Sloan $i$ \\
~~~~KeplerCam & 2008 Sep 08 & 246 & 73 & Sloan $i$ \\
~~~~KeplerCam & 2008 Sep 13 & 265 & 73 & Sloan $i$ \\
~~~~KeplerCam & 2008 Nov 03 & 117 & 89 & Sloan $i$ \\
~~~~KeplerCam & 2009 Apr 19 & 46 & 150 & Sloan $g$ \\
~~~~KeplerCam & 2009 Jul 13 & 150 & 73 & Sloan $i$
\enddata
\tablenotetext{a}{
Observations were performed without guiding due to a technical
problem with the guiding system, which resulted in decreased data
quality.
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\subsection{Reconnaissance Spectroscopy}
\label{sec:recspec}
As is routine in the HATNet project, all candidates are subjected to
careful scrutiny before investing valuable time on large telescopes.
This includes spectroscopic observations at relatively modest
facilities to establish whether the transit-like feature in the light
curve of a candidate might be due to astrophysical phenomena other than
a planet transiting a star. Many of these false positives are
associated with large radial-velocity variations in the star (tens of
\ensuremath{\rm km\,s^{-1}}) that are easily recognized. The reconnaissance spectroscopic
observations and results for each system are summarized in
\reftabl{reconspecobs}; below we provide a brief description of the
instruments used, the data reduction, and the analysis procedure.
One of the tools we have used for this purpose is the
Harvard-Smithsonian Center for Astrophysics (CfA) Digital Speedometer
\citep[DS;][]{latham:1992}, an echelle spectrograph mounted on the
\mbox{FLWO 1.5\,m}\ telescope. This instrument delivers high-resolution spectra
($\lambda/\Delta\lambda \approx 35,\!000$) over a single order
centered on the \ion{Mg}{1}\,b triplet ($\sim$5187\,\AA), with
typically low signal-to-noise (S/N) ratios that are nevertheless
sufficient to derive radial velocities (RVs) with moderate precisions
of 0.5--1.0\,\ensuremath{\rm km\,s^{-1}}\ for slowly rotating stars. The same spectra can be
used to estimate the effective temperature, surface gravity, and
projected rotational velocity of the host star, as described by
\cite{torres:2002}. With this facility we are able to reject many
types of false positives, such as F dwarfs orbited by M dwarfs,
grazing eclipsing binaries, or triple or quadruple star
systems. Additional tests are performed with other spectroscopic
material described in the next section.
Another of the tools we have used for this purpose is the FIbre-fed
\'Echelle Spectrograph (FIES) at the 2.5\,m Nordic Optical Telescope
(NOT) at La Palma, Spain \citep{djupvik:2010}. We used the
medium-resolution fiber which produces spectra at a resolution of
$\lambda/\Delta\lambda \approx 46,\!000$ and a wavelength coverage of
$\sim$3600--7400\,\AA\ to observe \hatcur{20}. The spectrum was
extracted and analyzed to measure the radial velocity, effective
temperature, surface gravity, and projected rotation velocity of the
host star, following the procedures described by \cite{buchhave:2010}.
Based on the observations summarized in \reftabl{reconspecobs} we find
that \hatcur{21}, \hatcur{22} and \hatcur{23} have rms residuals
consistent with no detectable RV variation within the precision of the
measurements. Curiously, \hatcur{20} showed significant RV variations,
even at the modest ($\sim0.5$\,\ensuremath{\rm km\,s^{-1}}) precision of the Digital
Speedometer, and the reconnaissance RV variations (including the FIES
spectrum; see later) phased up with the photometric ephemeris. All
spectra were single-lined, i.e., there is no evidence that any of these
targets consist of more than one star. The gravities for all of the
stars indicate that they are dwarfs.
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{llrrrrr}
}{
\begin{deluxetable}{llrrrrr}
}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{
Summary of reconnaissance spectroscopy observations
\label{tab:reconspecobs}
}
\tablehead{
\multicolumn{1}{c}{Instrument} &
\multicolumn{1}{c}{Date(s)} &
\multicolumn{1}{c}{Number of Spectra} &
\multicolumn{1}{c}{$\ensuremath{T_{\rm eff\star}}$} &
\multicolumn{1}{c}{$\ensuremath{\log{g_{\star}}}$} &
\multicolumn{1}{c}{$\ensuremath{v \sin{i}}$} &
\multicolumn{1}{c}{$\gamma_{\rm RV}$\tablenotemark{a}} \\
&
&
&
\multicolumn{1}{c}{(K)} &
\multicolumn{1}{c}{(cgs)} &
\multicolumn{1}{c}{(\ensuremath{\rm km\,s^{-1}})} &
\multicolumn{1}{c}{(\ensuremath{\rm km\,s^{-1}})}
}
\startdata
\sidehead{\textbf{\hatcur{20}}}
~~~~DS & 2009 Feb 11--2009 Feb 15 & 3 & \hatcurDSteff{20} & \hatcurDSlogg{20} & \hatcurDSvsini{20} & \hatcurDSgamma{20} \\
~~~~FIES & 2009 Oct 07 & 1 & \hatcurFIESteff{20} & \hatcurFIESlogg{20} & \hatcurFIESvsini{20} & \hatcurFIESgamma{20} \\
\sidehead{\textbf{\hatcur{21}}}
~~~~DS & 2009 Mar 08--2009 Apr 05 & 3 & \hatcurDSteff{21} & \hatcurDSlogg{21} & \hatcurDSvsini{21} & \hatcurDSgamma{21} \\
\sidehead{\textbf{\hatcur{22}}}
~~~~DS & 2009 Feb 11--2009 Feb 16 & 4 & \hatcurDSteff{22} & \hatcurDSlogg{22} & \hatcurDSvsini{22} & \hatcurDSgamma{22} \\
\sidehead{\textbf{\hatcur{23}}}
~~~~DS & 2008 May 19--2008 Sep 14 & 5 & \hatcurDSteff{23} & \hatcurDSlogg{23} & \hatcurDSvsini{23} & \hatcurDSgamma{23}
\enddata
\tablenotetext{a}{
The mean heliocentric RV of the target (in the IAU system). The
error gives the rms of the individual velocity measures for the
target with the given instrument.
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\subsection{High resolution, high S/N spectroscopy}
\label{sec:hispec}
We proceeded with the follow-up of each candidate by obtaining
high-resolution, high-S/N spectra to characterize the RV variations,
and to refine the determination of the stellar parameters. These
observations are summarized in \reftabl{highsnspecobs}. The RV
measurements and uncertainties are given in \reftabl{rvs20},
\reftabl{rvs21}, \reftabl{rvs22} and \reftabl{rvs23} for \hatcur{20}
through \hatcur{23}, respectively. The period-folded data, along with
our best fits described below in \refsecl{analysis}, are displayed in
\reffigl{rvbis20} through \reffigl{rvbis23} for \hatcur{20} through
\hatcur{23}. Below we briefly describe the instruments used, the data
reduction, and the analysis procedure.
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable}{llrr}
}{
\begin{deluxetable}{llrr}
}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{
Summary of high-resolution/high-S/N spectroscopic observations
\label{tab:highsnspecobs}
}
\tablehead{
\multicolumn{1}{c}{Instrument} &
\multicolumn{1}{c}{Date(s)} &
\multicolumn{1}{c}{Number of} \\
&
&
\multicolumn{1}{c}{RV obs.}
}
\startdata
\sidehead{\textbf{\hatcur{20}}}
~~~~Keck/HIRES & 2009 Apr--2009 Dec & 10 \\
\sidehead{\textbf{\hatcur{21}}}
~~~~Keck/HIRES & 2009 May--2010 Feb & 15 \\
\sidehead{\textbf{\hatcur{22}}}
~~~~Keck/HIRES & 2009 Apr--2009 Dec & 12 \\
\sidehead{\textbf{\hatcur{23}}}
~~~~Keck/HIRES & 2008 Jun--2009 Dec & 13
\enddata
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable}
}{
\end{deluxetable}
}
\begin{figure}[ht]
\plotone{HIRES_montage.eps}
\caption{
Keck/HIRES guider camera snapshots of HAT-P-20 through HAT-P-23
(labeled). North is up and East is to the left. The snapshots cover an
area of approximately $30\times20\arcsec$. The slit is also
visible, as positioned on the planet host stars.
}
\label{fig:kecksnap}
\end{figure}
Observations were made of all four planet host stars with the HIRES
instrument \citep{vogt:1994} on the Keck~I telescope located on Mauna
Kea, Hawaii. The width of the spectrometer slit was $0\farcs86$,
resulting in a resolving power of $\lambda/\Delta\lambda \approx
55,\!000$, with a wavelength coverage of $\sim$3800--8000\,\AA\@. We
typically used the B5 decker yielding a
$3.5\arcsec(H)\times0.861\arcsec(W)$ slit, and for the last few
observations on each target we used the C2 decker that enables a better
sky subtraction due to the longer slit
$14.0\arcsec(H)\times0.861\arcsec(W)$. The slit height was oriented
with altitude (vertical), except in rare cases when the slit would
have run through the faint companion to HAT-P-20 or HAT-P-22. A
Keck/HIRES snapshot for each planet host star is shown in
\reffigl{kecksnap}. Spectra were obtained through an iodine gas
absorption cell, which was used to superimpose a dense forest of
$\mathrm{I}_2$ lines on the stellar spectrum and establish an accurate
wavelength fiducial \citep[see][]{marcy:1992}. For each target an
additional exposure was taken without the iodine cell, for use as a
template in the reductions. Relative RVs in the solar system
barycentric frame were derived as described by \cite{butler:1996},
incorporating full modeling of the spatial and temporal variations of
the instrumental profile.
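As a minimal, hedged sketch of how such relative RVs constrain the orbit (circular case only; the global eccentric fits of \refsecl{analysis} are more involved, and the function name and sign convention below are assumptions):

```python
import numpy as np

def fit_semi_amplitude(t, rv, period, t0):
    """Least-squares semi-amplitude K and offset gamma_rel for a fixed
    photometric ephemeris (period, t0), modeling the relative RVs as
       rv = gamma_rel + K * sin(2*pi*(t - t0)/period).
    For a circular orbit the model is linear in K and gamma_rel, so a
    single linear solve suffices (no iteration needed)."""
    s = np.sin(2.0 * np.pi * (t - t0) / period)
    design = np.column_stack([s, np.ones_like(s)])
    (K, gamma), *_ = np.linalg.lstsq(design, rv, rcond=None)
    return K, gamma
```

This is why even modest-precision RVs suffice for massive planets: the semi-amplitude $K$ scales with the companion mass, so the sinusoid stands far above the per-point errors.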
\setcounter{planetcounter}{1}
\begin{figure}[ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{20}-rv.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{20}-rv.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
{\em Top panel:} Keck/HIRES RV measurements for
\hbox{\hatcur{20}{}} shown as a function of orbital
phase, along with our best-fit eccentric model (see
\reftabl{planetparam}). Zero phase corresponds to the
time of mid-transit. The center-of-mass velocity has been
subtracted. The rms around the best orbital fit is
\hatcurRVfitrms{20}\,\ensuremath{\rm m\,s^{-1}}.
{\em Second panel:} Velocity $O\!-\!C$ residuals from the best
fit. The error bars for both the top and second panel
include a component from the
jitter (\hatcurRVjitter{20}\,\ensuremath{\rm m\,s^{-1}}) added in quadrature to
the formal errors (see \refsecl{globmod}).
{\em Third panel:} Bisector spans (BS), with the mean value
subtracted. The measurement from the template spectrum is
included (see \refsecl{blend}).
{\em Bottom panel:} Relative chromospheric activity index $S$
measured from the Keck spectra.
}}{
\caption{
Keck/HIRES observations of \hatcur{20}. The panels are as in
\reffigl{rvbis20}. The parameters used in the
best-fit model are given in \reftabl{planetparam}.
}}
\label{fig:rvbis20}
\end{figure}
\setcounter{planetcounter}{2}
\begin{figure}[ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{21}-rv.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{21}-rv.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
{\em Top panel:} Keck/HIRES RV measurements for
\hbox{\hatcur{21}{}} shown as a function of orbital
phase, along with our best-fit model (see
\reftabl{planetparam}). Zero phase corresponds to the
time of mid-transit. The center-of-mass velocity has been
subtracted.
{\em Second panel:} Velocity $O\!-\!C$ residuals from the best
fit. The error bars include a component from astrophysical
jitter (\hatcurRVjitter{21}\,\ensuremath{\rm m\,s^{-1}}) added in quadrature to
the formal errors (see \refsecl{globmod}).
{\em Third panel:} Bisector spans (BS), with the mean value
subtracted. The measurement from the template spectrum is
included (see \refsecl{blend}).
{\em Bottom panel:} Relative chromospheric activity index $S$
measured from the Keck spectra.
}}{
\caption{
Keck/HIRES observations of \hatcur{21}. The panels are as in
\reffigl{rvbis20}. The parameters used in the best-fit model are given
in \reftabl{planetparam}, the RV jitter was
\hatcurRVjitter{21}\,\ensuremath{\rm m\,s^{-1}}, and the fit rms was \hatcurRVfitrms{21}\,\ensuremath{\rm m\,s^{-1}}.
Observations shown twice are represented with open symbols.
}}
\label{fig:rvbis21}
\end{figure}
\setcounter{planetcounter}{3}
\begin{figure}[ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{22}-rv.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{22}-rv.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
{\em Top panel:} Keck/HIRES RV measurements for
\hbox{\hatcur{22}{}} shown as a function of orbital
phase, along with our best-fit model (see
\reftabl{planetparam}). Zero phase corresponds to the
time of mid-transit. The center-of-mass velocity has been
subtracted.
{\em Second panel:} Velocity $O\!-\!C$ residuals from the best
fit. The error bars include a component from astrophysical
jitter (\hatcurRVjitter{22}\,\ensuremath{\rm m\,s^{-1}}) added in quadrature to
the formal errors (see \refsecl{globmod}).
{\em Third panel:} Bisector spans (BS), with the mean value
subtracted. The measurement from the template spectrum is
included (see \refsecl{blend}).
{\em Bottom panel:} Relative chromospheric activity index $S$
measured from the Keck spectra.
Note the different vertical scales of the panels. Observations
shown twice are represented with open symbols.
}}{
\caption{
Keck/HIRES observations of \hatcur{22}. The panels are as in
\reffigl{rvbis20}. The parameters used in the
best-fit model are given in \reftabl{planetparam}, the RV jitter
was \hatcurRVjitter{22}\,\ensuremath{\rm m\,s^{-1}}, and the fit rms was
\hatcurRVfitrms{22}\,\ensuremath{\rm m\,s^{-1}}.
Observations
shown twice are represented with open symbols.
}}
\label{fig:rvbis22}
\end{figure}
\setcounter{planetcounter}{4}
\begin{figure}[ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{23}-rv.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{23}-rv.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
{\em Top panel:} Keck/HIRES RV measurements for
\hbox{\hatcur{23}{}} shown as a function of orbital
phase, along with our best-fit model (see
\reftabl{planetparam}). Zero phase corresponds to the
time of mid-transit. The center-of-mass velocity has been
subtracted.
{\em Second panel:} Velocity $O\!-\!C$ residuals from the best
fit. The error bars include a component from astrophysical
jitter (\hatcurRVjitter{23}\,\ensuremath{\rm m\,s^{-1}}) added in quadrature to
the formal errors (see \refsecl{globmod}).
{\em Third panel:} Bisector spans (BS), with the mean value
subtracted. The measurement from the template spectrum is
included (see \refsecl{blend}).
{\em Bottom panel:} Relative chromospheric activity index $S$
measured from the Keck spectra.
Note the different vertical scales of the panels. Observations
shown twice are represented with open symbols.
}}{
\caption{
Keck/HIRES observations of \hatcur{23}. The panels are as in
\reffigl{rvbis20}. The parameters used in the
best-fit model are given in \reftabl{planetparam}, and the RV jitter
was \hatcurRVjitter{23}\,\ensuremath{\rm m\,s^{-1}}.
Observations
shown twice are represented with open symbols.
}}
\label{fig:rvbis23}
\end{figure}
In each of Figures \ref{fig:rvbis20}--\ref{fig:rvbis23} we also show
the relative $S$ index, which is a measure of the chromospheric
activity of the star derived from the flux in the cores of the
\ion{Ca}{2} H and K lines. This index was computed following the
prescription given by \citet{vaughan:1978}, and as described in
\citet{hartman:2009}.
Note that our relative $S$ index has not been calibrated to the scale
of \citet{vaughan:1978}. We do not detect any significant variation of
the index correlated with orbital phase; such a correlation might have
indicated that the RV variations could be due to stellar activity,
casting doubt on the planetary nature of the candidate.
There is no sign of emission in the cores of the \ion{Ca}{2} H and K
lines in any of our spectra, from which we conclude that all of the
targets have low chromospheric activity levels.
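A schematic version of such an index could look like the sketch below. The band centers and 20\,\AA\ continuum windows follow the \citet{vaughan:1978} layout, but the rectangular core bandpasses, the function name, and the absence of any calibration are illustrative assumptions:

```python
import numpy as np

def s_like_index(wave, flux):
    """Toy chromospheric index: flux summed in narrow windows at the
    Ca II K (3933.66 A) and H (3968.47 A) line cores, normalized by two
    nearby pseudo-continuum windows. Rectangular bandpasses stand in
    for the triangular ones of Vaughan et al. (1978); no calibration to
    their scale is applied."""
    def band(center, half_width):
        return flux[np.abs(wave - center) < half_width].sum()
    cores = band(3933.66, 0.545) + band(3968.47, 0.545)
    continuum = band(3901.07, 10.0) + band(4001.07, 10.0)
    return cores / continuum
```

Emission filling in the H and K cores raises the index, which is why a phase-correlated index would flag activity-induced RV variations.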
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{lrrrrrr}
}{
\begin{deluxetable}{lrrrrrr}
}
\tablewidth{0pc}
\tablecaption{
Relative radial velocities, bisector spans, and activity index
measurements of \hatcur{20}.
\label{tab:rvs20}
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$}\tablenotemark{a} &
\colhead{RV\tablenotemark{b}} &
\colhead{\ensuremath{\sigma_{\rm RV}}\tablenotemark{c}} &
\colhead{BS} &
\colhead{\ensuremath{\sigma_{\rm BS}}} &
\colhead{S\tablenotemark{d}} &
\colhead{\ensuremath{\sigma_{\rm S}}}\\
\colhead{\hbox{(2,454,000$+$)}} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{} &
\colhead{}
}
\startdata
\ifthenelse{\boolean{rvtablelong}}{
\input{\hatcurhtr{20}_rvtable.tex}
[-1.5ex]
}{
\input{\hatcurhtr{20}_rvtable_short.tex}
[-1.5ex]
}
\enddata
\tablenotetext{a}{
Barycentric Julian dates throughout the paper are calculated from
Coordinated Universal Time (UTC).
}
\tablenotetext{b}{
The zero-point of these velocities is arbitrary. An overall offset
$\gamma_{\rm rel}$ fitted to these velocities in \refsecl{globmod}
has {\em not} been subtracted.
}
\tablenotetext{c}{
Internal errors excluding the component of astrophysical jitter
considered in \refsecl{globmod}.
}
\tablenotetext{d}{
Relative chromospheric activity index, not calibrated to the scale
of \citet{vaughan:1978}.
}
\ifthenelse{\boolean{rvtablelong}}{
\tablecomments{
Note that for the iodine-free template exposures we do not
measure the RV but do measure the BS and S index. Such
template exposures can be distinguished by the missing RV
value.
}
}{
\tablecomments{
Note that for the iodine-free template exposures we do not
measure the RV but do measure the BS and S index. Such
template exposures can be distinguished by the missing RV
value. This table is presented in its entirety in the
electronic edition of the Astrophysical Journal. A portion is
shown here for guidance regarding its form and content.
}
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{lrrrrrr}
}{
\begin{deluxetable}{lrrrrrr}
}
\tablewidth{0pc}
\tablecaption{
Relative radial velocities, bisector spans, and activity index
measurements of \hatcur{21}.
\label{tab:rvs21}
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{RV\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm RV}}\tablenotemark{b}} &
\colhead{BS} &
\colhead{\ensuremath{\sigma_{\rm BS}}} &
\colhead{S\tablenotemark{c}} &
\colhead{\ensuremath{\sigma_{\rm S}}}\\
\colhead{\hbox{(2,454,000$+$)}} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{} &
\colhead{}
}
\startdata
\ifthenelse{\boolean{rvtablelong}}{
\input{\hatcurhtr{21}_rvtable.tex}
[-1.5ex]
}{
\input{\hatcurhtr{21}_rvtable_short.tex}
[-1.5ex]
}
\enddata
\ifthenelse{\boolean{rvtablelong}}{
\tablecomments{
Notes for this table are identical to that of \reftabl{rvs20}.
}
}{
\tablecomments{
Notes for this table are identical to that of \reftabl{rvs20}.
}
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{lrrrrrr}
}{
\begin{deluxetable}{lrrrrrr}
}
\tablewidth{0pc}
\tablecaption{
Relative radial velocities, bisector spans, and activity index
measurements of \hatcur{22}.
\label{tab:rvs22}
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{RV\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm RV}}\tablenotemark{b}} &
\colhead{BS} &
\colhead{\ensuremath{\sigma_{\rm BS}}} &
\colhead{S\tablenotemark{c}} &
\colhead{\ensuremath{\sigma_{\rm S}}}\\
\colhead{\hbox{(2,454,000$+$)}} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{} &
\colhead{}
}
\startdata
\ifthenelse{\boolean{rvtablelong}}{
\input{\hatcurhtr{22}_rvtable.tex}
[-1.5ex]
}{
\input{\hatcurhtr{22}_rvtable_short.tex}
[-1.5ex]
}
\enddata
\ifthenelse{\boolean{rvtablelong}}{
\tablecomments{
Notes for this table are identical to that of \reftabl{rvs20}.
}
}{
\tablecomments{
Notes for this table are identical to that of \reftabl{rvs20}.
}
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{lrrrrrr}
}{
\begin{deluxetable}{lrrrrrr}
}
\tablewidth{0pc}
\tablecaption{
Relative radial velocities, bisector spans, and activity index
measurements of \hatcur{23}.
\label{tab:rvs23}
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{RV\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm RV}}\tablenotemark{b}} &
\colhead{BS} &
\colhead{\ensuremath{\sigma_{\rm BS}}} &
\colhead{S\tablenotemark{c}} &
\colhead{\ensuremath{\sigma_{\rm S}}}\\
\colhead{\hbox{(2,454,000$+$)}} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{(\ensuremath{\rm m\,s^{-1}})} &
\colhead{} &
\colhead{}
}
\startdata
\ifthenelse{\boolean{rvtablelong}}{
\input{\hatcurhtr{23}_rvtable.tex}
[-1.5ex]
}{
\input{\hatcurhtr{23}_rvtable_short.tex}
[-1.5ex]
}
\enddata
\ifthenelse{\boolean{rvtablelong}}{
\tablecomments{
Notes for this table are identical to that of \reftabl{rvs20}.
}
}{
\tablecomments{
Notes for this table are identical to that of \reftabl{rvs20}.
}
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\subsection{Photometric follow-up observations}
\label{sec:phot}
\setcounter{planetcounter}{1}
\begin{figure}[!ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{20}-lc.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{20}-lc.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
Unbinned transit light curves{} for \hatcur{20}, acquired with
KeplerCam at the \mbox{FLWO 1.2\,m}{} telescope. The light curves have been
EPD and TFA processed, as described in \refsec{globmod}.
%
%
The dates of the events are indicated. Curves after the first are
displaced vertically for clarity. Our best fit from the global
modeling described in \refsecl{globmod} is shown by the solid
lines. Residuals from the fits are displayed at the bottom, in
the same order as the top curves. The error bars represent the
photon and background shot noise, plus the readout noise.
}}{
\caption{
Similar to \reffigl{lc20}; here we show the follow-up
light curves{} for \hatcur{20}.
}}
\label{fig:lc20}
\end{figure}
\setcounter{planetcounter}{2}
\begin{figure}[!ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{21}-lc.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{21}-lc.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
Unbinned transit light curves{} for \hatcur{21}, acquired with
KeplerCam at the \mbox{FLWO 1.2\,m}{} telescope. The light curves have been
EPD and TFA processed, as described in \refsec{globmod}.
%
%
The dates of the events are indicated. Curves after the first are
displaced vertically for clarity. Our best fit from the global
modeling described in \refsecl{globmod} is shown by the solid
lines. Residuals from the fits are displayed at the bottom, in the
same order as the top curves. The error bars represent the photon
and background shot noise, plus the readout noise.
}}{
\caption{
Similar to \reffigl{lc20}; here we show the follow-up
light curves{} for \hatcur{21}.
}}
\label{fig:lc21}
\end{figure}
\setcounter{planetcounter}{3}
\begin{figure}[!ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{22}-lc.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{22}-lc.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
Unbinned transit light curves{} for \hatcur{22}, acquired with
KeplerCam at the \mbox{FLWO 1.2\,m}{} telescope. The light curves have been
EPD and TFA processed, as described in \refsec{globmod}.
%
%
The dates of the events are indicated. Curves after the first are
displaced vertically for clarity. Our best fit from the global
modeling described in \refsecl{globmod} is shown by the solid
lines. Residuals from the fits are displayed at the bottom, in the
same order as the top curves. The error bars represent the photon
and background shot noise, plus the readout noise.
}}{
\caption{
Similar to \reffigl{lc20}; here we show the follow-up
light curves{} for \hatcur{22}.
}}
\label{fig:lc22}
\end{figure}
\setcounter{planetcounter}{4}
\begin{figure}[!ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{\hatcurhtr{23}-lc.eps}
}{
\includegraphics[scale=0.8]{\hatcurhtr{23}-lc.eps}
}
\ifthenelse{\value{planetcounter}=1}{
\caption{
Unbinned transit light curves{} for \hatcur{23}, acquired with
KeplerCam at the \mbox{FLWO 1.2\,m}{} telescope. The light curves have been
EPD and TFA processed, as described in \refsec{globmod}.
%
%
The dates of the events are indicated. Curves after the first are
displaced vertically for clarity. Our best fit from the global
modeling described in \refsecl{globmod} is shown by the solid
lines. Residuals from the fits are displayed at the bottom, in the
same order as the top curves. The error bars represent the photon
and background shot noise, plus the readout noise.
}}{
\caption{
Similar to \reffigl{lc20}; here we show the follow-up
light curves{} for \hatcur{23}.
}}
\label{fig:lc23}
\end{figure}
In order to permit a more accurate modeling of the light curves, we
conducted additional photometric observations with the KeplerCam CCD
camera on the \mbox{FLWO 1.2\,m}{} telescope for each star, and
with the Faulkes North Telescope (FTN) of the Las Cumbres Observatory
Global Network (LCOGT) at Hawaii for HAT-P-21 only. The observations for
each target are summarized in \reftabl{photobs}.
The reduction of these images, including basic calibration, astrometry,
and aperture photometry, was performed as described by
\citet{bakos:2009}. We found that the aperture photometry for
\hatcur{20} was significantly affected by the nearby neighbor star
2MASS 07273995+2420118, which lies at 6.86\arcsec\ separation with a
magnitude difference of $\Delta i = 1.1$ (\reffigl{kecksnap}). We
therefore performed image subtraction on the \mbox{FLWO 1.2\,m}\ images with the
same toolset used for the HATNet reductions, but applying a discrete
kernel with a half-size of 5 pixels and no spatial variations. Indeed,
for this stellar configuration, the image subtraction results proved to be
superior to the aperture photometry. For all of the follow-up light curves, we performed
EPD and TFA to remove trends simultaneously with the light curve
modeling (for more details, see \refsecl{analysis}, and
\citealt{bakos:2009}). The final time series, together with our
best-fit transit light curve{} model, are shown in the top portion of Figures
\ref{fig:lc20} through \ref{fig:lc23}; the individual measurements are
reported in Tables \ref{tab:phfu20}--\ref{tab:phfu23}, for \hatcur{20}
through \hatcur{23}, respectively.
\begin{deluxetable}{lrrrr}
\tablewidth{0pc}
\tablecaption{
High-precision differential photometry of
\hatcur{20}\label{tab:phfu20}.
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{Mag\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm Mag}}} &
\colhead{Mag(orig)\tablenotemark{b}} &
\colhead{Filter} \\
\colhead{\hbox{~~~~(2,400,000$+$)~~~~}} &
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{}
}
\startdata
\input{\hatcurhtr{20}_phfu_tab_short.tex}
[-1.5ex]
\enddata
\tablenotetext{a}{
The out-of-transit level has been subtracted. These magnitudes have
been subjected to the EPD and TFA procedures, carried out
simultaneously with the transit fit.
}
\tablenotetext{b}{
Raw magnitude values without application of the EPD and TFA
procedures.
}
\tablecomments{
This table is available in a machine-readable form in the online
journal. A portion is shown here for guidance regarding its form
and content.
}
\end{deluxetable}
\begin{deluxetable}{lrrrr}
\tablewidth{0pc}
\tablecaption{
High-precision differential photometry of
\hatcur{21}\label{tab:phfu21}.
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{Mag\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm Mag}}} &
\colhead{Mag(orig)\tablenotemark{b}} &
\colhead{Filter} \\
\colhead{\hbox{~~~~(2,400,000$+$)~~~~}} &
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{}
}
\startdata
\input{\hatcurhtr{21}_phfu_tab_short.tex}
[-1.5ex]
\enddata
\tablecomments{
Notes for this table are identical to those of \reftabl{phfu20}.
}
\end{deluxetable}
\begin{deluxetable}{lrrrr}
\tablewidth{0pc}
\tablecaption{
High-precision differential photometry of
\hatcur{22}\label{tab:phfu22}.
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{Mag\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm Mag}}} &
\colhead{Mag(orig)\tablenotemark{b}} &
\colhead{Filter} \\
\colhead{\hbox{~~~~(2,400,000$+$)~~~~}} &
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{}
}
\startdata
\input{\hatcurhtr{22}_phfu_tab_short.tex}
[-1.5ex]
\enddata
\tablecomments{
Notes for this table are identical to those of \reftabl{phfu20}.
}
\end{deluxetable}
\begin{deluxetable}{lrrrr}
\tablewidth{0pc}
\tablecaption{
High-precision differential photometry of
\hatcur{23}\label{tab:phfu23}.
}
\tablehead{
\colhead{$\mathrm{BJD_{UTC}}$} &
\colhead{Mag\tablenotemark{a}} &
\colhead{\ensuremath{\sigma_{\rm Mag}}} &
\colhead{Mag(orig)\tablenotemark{b}} &
\colhead{Filter} \\
\colhead{\hbox{~~~~(2,400,000$+$)~~~~}} &
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{}
}
\startdata
\input{\hatcurhtr{23}_phfu_tab_short.tex}
[-1.5ex]
\enddata
\tablecomments{
Notes for this table are identical to those of \reftabl{phfu20}.
}
\end{deluxetable}
\section{Analysis}
\label{sec:analysis}
\subsection{Properties of the parent stars}
\label{sec:stelparam}
Fundamental parameters for each of the host stars, including the mass
(\ensuremath{M_\star}) and radius (\ensuremath{R_\star}), which are needed to infer the planetary
properties, depend strongly on other stellar quantities that can be
derived spectroscopically. For this we have relied on our template
spectra obtained with the Keck/HIRES instrument, and the analysis
package known as Spectroscopy Made Easy \citep[SME;][]{valenti:1996},
along with the atomic line database of \cite{valenti:2005}. For each
star, SME yielded the following {\em initial} values and uncertainties
(which we have conservatively increased to include our estimates of the
systematic errors):
\begin{itemize}
\item {\em \hatcur{20}} --
effective temperature $\ensuremath{T_{\rm eff\star}}=\hatcurSMEiteff{20}$\,K,
stellar surface gravity $\ensuremath{\log{g_{\star}}}=\hatcurSMEilogg{20}$\,(cgs),
metallicity $\ensuremath{\rm [Fe/H]}=\hatcurSMEizfeh{20}$\,dex, and
projected rotational velocity $\ensuremath{v \sin{i}}=\hatcurSMEivsin{20}$\,\ensuremath{\rm km\,s^{-1}}.
\item {\em \hatcur{21}} --
effective temperature $\ensuremath{T_{\rm eff\star}}=\hatcurSMEiteff{21}$\,K,
stellar surface gravity $\ensuremath{\log{g_{\star}}}=\hatcurSMEilogg{21}$\,(cgs),
metallicity $\ensuremath{\rm [Fe/H]}=\hatcurSMEizfeh{21}$\,dex, and
projected rotational velocity $\ensuremath{v \sin{i}}=\hatcurSMEivsin{21}$\,\ensuremath{\rm km\,s^{-1}}.
\item {\em \hatcur{22}} --
effective temperature $\ensuremath{T_{\rm eff\star}}=\hatcurSMEiteff{22}$\,K,
stellar surface gravity $\ensuremath{\log{g_{\star}}}=\hatcurSMEilogg{22}$\,(cgs),
metallicity $\ensuremath{\rm [Fe/H]}=\hatcurSMEizfeh{22}$\,dex, and
projected rotational velocity $\ensuremath{v \sin{i}}=\hatcurSMEivsin{22}$\,\ensuremath{\rm km\,s^{-1}}.
\item {\em \hatcur{23}} --
effective temperature $\ensuremath{T_{\rm eff\star}}=\hatcurSMEiteff{23}$\,K,
stellar surface gravity $\ensuremath{\log{g_{\star}}}=\hatcurSMEilogg{23}$\,(cgs),
metallicity $\ensuremath{\rm [Fe/H]}=\hatcurSMEizfeh{23}$\,dex, and
projected rotational velocity $\ensuremath{v \sin{i}}=\hatcurSMEivsin{23}$\,\ensuremath{\rm km\,s^{-1}}.
\end{itemize}
In principle the effective temperature and metallicity, along with the
surface gravity taken as a luminosity indicator, could be used as
constraints to infer the stellar mass and radius by comparison with
stellar evolution models. However, the effect of \ensuremath{\log{g_{\star}}}\ on the
spectral line shapes is quite subtle, and as a result it is typically
difficult to determine accurately, making it a rather poor
luminosity indicator in practice. Unfortunately, a trigonometric
parallax is not available for any of the host stars, since they were not
included among the targets of the {\it Hipparcos\/} mission
\citep{perryman:1997}. For planetary transits, another possible
constraint is provided by the normalized semi-major axis \ensuremath{a/\rstar},
which is closely related to the mean stellar density \ensuremath{\rho_\star}.
The quantity \ensuremath{a/\rstar}\ can be derived directly from the combination
of the transit light curves\ \citep{sozzetti:2007} and the RV data (the
latter being required for eccentric cases; see \refsecl{globmod}).
This, in turn, allows us to
improve on the determination of the spectroscopic parameters by
supplying an indirect constraint on the weakly determined spectroscopic
value of \ensuremath{\log{g_{\star}}}\ that removes degeneracies. We take this approach
here, as described below. The validity of our underlying assumption,
namely that the appropriate physical model for our data is a planetary
transit (as opposed to a blend), is demonstrated later in \refsecl{blend}.
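The connection between \ensuremath{a/\rstar}\ and the mean stellar density follows from Kepler's third law: $\rho_\star \approx (3\pi/GP^2)(a/R_\star)^3$ when the planet's mass is neglected. A minimal numerical sketch of this relation (the function name and the input values are purely illustrative, not taken from this paper):

```python
import math

# Illustrative sketch (not the authors' code): mean stellar density from
# the normalized semi-major axis a/R_star and the orbital period, via
# Kepler's third law with the planetary mass neglected:
#   rho_star = (3*pi / (G * P**2)) * (a/R_star)**3
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
RHO_SUN = 1408.0     # mean solar density [kg m^-3], for comparison

def stellar_density(a_over_rstar, period_days):
    """Mean stellar density in kg m^-3."""
    period_s = period_days * 86400.0
    return 3.0 * math.pi / (G * period_s ** 2) * a_over_rstar ** 3

# Hypothetical transit parameters: a/R_star = 10, P = 3 days.
rho = stellar_density(10.0, 3.0)
print(rho, rho / RHO_SUN)
```

This is why a well-sampled transit light curve, which fixes \ensuremath{a/\rstar}, acts as an effective luminosity indicator in place of the weakly constrained spectroscopic \ensuremath{\log{g_{\star}}}.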
For each system, our initial values of \ensuremath{T_{\rm eff\star}}, \ensuremath{\log{g_{\star}}}, and \ensuremath{\rm [Fe/H]}\
were used to determine auxiliary quantities needed in the global
modeling of the follow-up photometry and radial velocities
(specifically, the limb-darkening coefficients). This modeling, the
details of which are described in \refsecl{globmod}, uses a Monte Carlo
approach to deliver the numerical probability distribution of \ensuremath{a/\rstar}\
and other fitted variables. For further details we refer the reader to
\cite{pal:2009b}. When combining \ensuremath{a/\rstar}\ (used as a proxy for
luminosity) with assumed Gaussian distributions for \ensuremath{T_{\rm eff\star}}\ and
\ensuremath{\rm [Fe/H]}\ based on the SME determinations, a comparison with stellar
evolution models allows the probability distributions of other stellar
properties to be inferred, including \ensuremath{\log{g_{\star}}}. Here we use the
stellar evolution calculations from the Yonsei-Yale group
\citep[YY; ][]{yi:2001} for all planets presented in this work. The
comparison against the model isochrones was carried out for each of
10,000 Monte Carlo trial sets for \hatcur{21}, \hatcur{22}, and
\hatcur{23}, and for 20,000 Monte Carlo trial sets for \hatcur{20} (see
\refsecl{globmod}). Parameter combinations corresponding to unphysical
locations in the \hbox{H-R} diagram (26\% of the trials for
\hatcur{20}, and less than 1\% of the trials for the other objects)
were ignored, and replaced with another randomly drawn parameter set.
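The redraw-on-rejection step of the Monte Carlo isochrone comparison can be sketched as follows. Everything here is a stand-in: the Gaussian widths, the chain values, and the `is_physical` cut are placeholders for the actual interpolation onto the model isochrone grid.

```python
import random

# Illustrative sketch of the Monte Carlo trial generation: each trial
# draws Teff and [Fe/H] from Gaussians and a/R_star from the global-fit
# chain; trials landing at "unphysical" locations in the H-R diagram
# are discarded and replaced by a fresh random draw.
def draw_trial(teff_mu, teff_sig, feh_mu, feh_sig, ar_chain):
    teff = random.gauss(teff_mu, teff_sig)
    feh = random.gauss(feh_mu, feh_sig)
    a_over_rstar = random.choice(ar_chain)
    return teff, feh, a_over_rstar

def is_physical(trial):
    # Placeholder test: a real implementation would check whether the
    # trial can be matched to the interpolated isochrone grid.
    teff, feh, a_over_rstar = trial
    return 3000.0 < teff < 10000.0 and a_over_rstar > 1.0

def monte_carlo_trials(n, *args):
    trials = []
    while len(trials) < n:
        trial = draw_trial(*args)
        if is_physical(trial):        # unphysical draws are redrawn
            trials.append(trial)
    return trials

# Hypothetical inputs: Teff = 5600 +/- 80 K, [Fe/H] = 0.00 +/- 0.08.
sets = monte_carlo_trials(1000, 5600.0, 80.0, 0.0, 0.08, [9.0, 10.0, 11.0])
print(len(sets))
```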
For each system we carried out a second SME iteration in which we
adopted the value of \ensuremath{\log{g_{\star}}}\ so determined and held it fixed in a
new SME analysis (coupled with a new global modeling of the RV and
light curves), adjusting only \ensuremath{T_{\rm eff\star}}, \ensuremath{\rm [Fe/H]}, and \ensuremath{v \sin{i}}. This gave:
\begin{itemize}
\item {\em \hatcur{20}}:
$\ensuremath{\log{g_{\star}}}=\hatcurSMEiilogg{20}$,
$\ensuremath{T_{\rm eff\star}}=\hatcurSMEiiteff{20}$\,K,
$\ensuremath{\rm [Fe/H]}=\hatcurSMEiizfeh{20}$, and
$\ensuremath{v \sin{i}}=\hatcurSMEiivsin{20}$\,\ensuremath{\rm km\,s^{-1}}.
\item {\em \hatcur{21}}:
$\ensuremath{\log{g_{\star}}}=\hatcurSMEiilogg{21}$,
$\ensuremath{T_{\rm eff\star}}=\hatcurSMEiiteff{21}$\,K,
$\ensuremath{\rm [Fe/H]}=\hatcurSMEiizfeh{21}$, and
$\ensuremath{v \sin{i}}=\hatcurSMEiivsin{21}$\,\ensuremath{\rm km\,s^{-1}}.
\item {\em \hatcur{22}}:
$\ensuremath{\log{g_{\star}}}=\hatcurSMEiilogg{22}$,
$\ensuremath{T_{\rm eff\star}}=\hatcurSMEiiteff{22}$\,K,
$\ensuremath{\rm [Fe/H]}=\hatcurSMEiizfeh{22}$, and
$\ensuremath{v \sin{i}}=\hatcurSMEiivsin{22}$\,\ensuremath{\rm km\,s^{-1}}.
\item {\em \hatcur{23}}:
$\ensuremath{\log{g_{\star}}}=\hatcurSMEiilogg{23}$,
$\ensuremath{T_{\rm eff\star}}=\hatcurSMEiiteff{23}$\,K,
$\ensuremath{\rm [Fe/H]}=\hatcurSMEiizfeh{23}$, and
$\ensuremath{v \sin{i}}=\hatcurSMEiivsin{23}$\,\ensuremath{\rm km\,s^{-1}}.
\end{itemize}
In each case the conservative uncertainties for $\ensuremath{T_{\rm eff\star}}$ and $\ensuremath{\rm [Fe/H]}$
have been increased by a factor of two over their formal values, as
before.
For each system, a further iteration did not change
\ensuremath{\log{g_{\star}}}\ significantly, so we adopted the values stated above as the
final atmospheric properties of the stars. They are collected in
\reftabl{stellar}.
With the adopted spectroscopic parameters the model isochrones yield
the stellar mass and radius, and other properties. These are listed
for each of the systems in
\reftabl{stellar}. According
to these models
\setcounter{planetcounter}{1}
\ifnum\value{planetcounter}=4 and \else\fi\hatcur{20} is a dwarf star with an
estimated age of
\hatcurISOage{20}\,Gyr\ifnum\value{planetcounter}<4 ,\else. \fi
\setcounter{planetcounter}{2}
\ifnum\value{planetcounter}=4 and \else\fi\hatcur{21} is a slightly evolved star with an
estimated age of
\hatcurISOage{21}\,Gyr\ifnum\value{planetcounter}<4 ,\else. \fi
\setcounter{planetcounter}{3}
\ifnum\value{planetcounter}=4 and \else\fi\hatcur{22} is a slightly evolved star with an
estimated age of
\hatcurISOage{22}\,Gyr\ifnum\value{planetcounter}<4 ,\else. \fi
\setcounter{planetcounter}{4}
\ifnum\value{planetcounter}=4 and \else\fi\hatcur{23} is a slightly evolved star with an
estimated age of
\hatcurISOage{23}\,Gyr\ifnum\value{planetcounter}<4 ,\else. \fi
The inferred location of each star in a diagram of \ensuremath{a/\rstar}\ versus
\ensuremath{T_{\rm eff\star}}, analogous to the classical H--R diagram, is shown in
\reffigl{iso}.
In all cases the stellar properties and their 1$\sigma$ and 2$\sigma$
confidence ellipsoids are displayed against the backdrop of model
isochrones for a range of ages, and the appropriate stellar
metallicity. For comparison, the locations implied by the initial SME
results are also shown (in each case with a triangle).
\setcounter{planetcounter}{1}
\ifthenelse{\boolean{emulateapj}}{
\begin{figure*}[!ht]
}{
\begin{figure}[!ht]
}
\plottwo{\hatcurhtr{20}-iso-ar.eps}{\hatcurhtr{21}-iso-ar.eps}
\plottwo{\hatcurhtr{22}-iso-ar.eps}{\hatcurhtr{23}-iso-ar.eps}
\caption{
Upper left: Model isochrones from \cite{\hatcurisocite{20}} for
the measured metallicity of \hatcur{20}, \ensuremath{\rm [Fe/H]} =
\hatcurSMEiizfehshort{20}, and ages of 3.0, 4.0, 5.0, 5.5, 6.0,
6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 10.0 and 11.0\,Gyr (left to right).
The adopted values of $\ensuremath{T_{\rm eff\star}}$ and \ensuremath{a/\rstar}\ are shown together
with their 1$\sigma$ and 2$\sigma$ confidence ellipsoids. The
initial values of \ensuremath{T_{\rm eff\star}}\ and \ensuremath{a/\rstar}\ from the first SME and
light curve\ analyses are represented with a triangle. Upper right: Same
as upper left, here we show the results for \hatcur{21}, with \ensuremath{\rm [Fe/H]}
= \hatcurSMEiizfehshort{21}, and ages of 5.0, 6.0, 7.0, 8.0, 9.0,
10.0, 11.0, 12.0, and 13.0\,Gyr (left to right). Lower left: Same
as upper left, here we show the results for \hatcur{22}, with \ensuremath{\rm [Fe/H]}
= \hatcurSMEiizfehshort{22}, and ages of 5.0, 6.0, 7.0, 8.0, 9.0,
10.0, 11.0, 12.0, 13.0, and 14.0\,Gyr (left to right). Lower right:
Same as upper left, here we show the results for \hatcur{23},
with \ensuremath{\rm [Fe/H]} = \hatcurSMEiizfehshort{23}, and ages of 0.2, 0.5, 1.0,
1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5 and 5.0\,Gyr (left to right).
}
\label{fig:iso}
\ifthenelse{\boolean{emulateapj}}{
\end{figure*}
}{
\end{figure}
}
The stellar evolution modeling provides color indices (see
\reftabl{stellar}) that may be compared against the measured values as
a sanity check. For each star, the best available measurements are the
near-infrared magnitudes from the 2MASS Catalogue
\citep{skrutskie:2006}, which are given in \reftabl{stellar}. These
are converted to the photometric system of the models (ESO system)
using the transformations by \citet{carpenter:2001}. The resulting
color indices are also shown in \reftabl{stellar} for \hatcur{20}
through \hatcur{23}, respectively. Indeed, the colors from the stellar
evolution models and from the observations agree for all of the host
stars within 2-$\sigma$. The distance to each object may be computed
from the absolute $K$ magnitude from the models and the 2MASS $K_s$
magnitudes, which has the advantage of being less affected by
extinction than optical magnitudes. The results are given in
\reftabl{stellar}, where in all cases the uncertainty excludes possible
systematics in the model isochrones that are difficult to quantify.
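The distance computation reduces to the standard distance modulus, $m - M = 5\log_{10}(d/10\,\mathrm{pc})$, applied to the 2MASS $K_s$ magnitude and the isochrone-based absolute $K$ magnitude. A minimal sketch, ignoring extinction (the input magnitudes below are made up, not those of any star in this paper):

```python
# Illustrative distance-modulus calculation: apparent K_s magnitude from
# 2MASS minus the model-based absolute K magnitude gives the distance,
# with extinction neglected (K band is comparatively insensitive to it).
def distance_pc(apparent_k, absolute_k):
    """Distance in parsecs from m - M = 5*log10(d) - 5."""
    return 10.0 ** ((apparent_k - absolute_k + 5.0) / 5.0)

# Hypothetical star with K_s = 9.0 and M_K = 4.0 (m - M = 5):
print(distance_pc(9.0, 4.0))  # -> 100.0 pc
```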
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{lccccl}
}{
\begin{deluxetable}{lccccl}
}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{
Stellar parameters for \hatcur{20}--\hatcur{23}
\label{tab:stellar}
}
\tablehead{
\colhead{~~~~~~~~Parameter~~~~~~~~} &
\colhead{\hatcur{20}} &
\colhead{\hatcur{21}} &
\colhead{\hatcur{22}} &
\colhead{\hatcur{23}} &
\colhead{Source}
}
\startdata
\noalign{\vskip -3pt}
\sidehead{Spectroscopic properties}
~~~~$\ensuremath{T_{\rm eff\star}}$ (K)\dotfill & \hatcurSMEteff{20} & \hatcurSMEteff{21} & \hatcurSMEteff{22} & \hatcurSMEteff{23} & SME\tablenotemark{a}\\
~~~~$\ensuremath{\rm [Fe/H]}$\dotfill & \hatcurSMEzfeh{20} & \hatcurSMEzfeh{21} & \hatcurSMEzfeh{22} & \hatcurSMEzfeh{23} & SME \\
~~~~$\ensuremath{v \sin{i}}$ (\ensuremath{\rm km\,s^{-1}})\dotfill & \hatcurSMEvsin{20} & \hatcurSMEvsin{21} & \hatcurSMEvsin{22} & \hatcurSMEvsin{23} & SME \\
~~~~$\ensuremath{v_{\rm mac}}$ (\ensuremath{\rm km\,s^{-1}})\dotfill & \hatcurSMEvmac{20} & \hatcurSMEvmac{21} & \hatcurSMEvmac{22} & \hatcurSMEvmac{23} & SME \\
~~~~$\ensuremath{v_{\rm mic}}$ (\ensuremath{\rm km\,s^{-1}})\dotfill & \hatcurSMEvmic{20} & \hatcurSMEvmic{21} & \hatcurSMEvmic{22} & \hatcurSMEvmic{23} & SME \\
~~~~$\gamma_{\rm RV}$ (\ensuremath{\rm km\,s^{-1}})\dotfill& \hatcurDSgamma{20} & \hatcurDSgamma{21} & \hatcurDSgamma{22} & \hatcurDSgamma{23} & DS \\
\sidehead{Photometric properties}
~~~~$V$ (mag)\dotfill & \hatcurCCtassmv{20} & \hatcurCCtassmv{21} & \hatcurCCtassmv{22} & \hatcurCCtassmv{23} & TASS \\
~~~~$\ensuremath{V\!-\!I_C}$ (mag)\dotfill & \hatcurCCtassvi{20} & \hatcurCCtassvi{21} & \hatcurCCtassvi{22} & \hatcurCCtassvi{23} & TASS \\
~~~~$J$ (mag)\dotfill & \hatcurCCtwomassJmag{20} & \hatcurCCtwomassJmag{21} & \hatcurCCtwomassJmag{22} & \hatcurCCtwomassJmag{23} & 2MASS \\
~~~~$H$ (mag)\dotfill & \hatcurCCtwomassHmag{20} & \hatcurCCtwomassHmag{21} & \hatcurCCtwomassHmag{22} & \hatcurCCtwomassHmag{23} & 2MASS \\
~~~~$K_s$ (mag)\dotfill & \hatcurCCtwomassKmag{20} & \hatcurCCtwomassKmag{21} & \hatcurCCtwomassKmag{22} & \hatcurCCtwomassKmag{23} & 2MASS \\
~~~~$J-K$ (mag,\hatcurjhkfilset{20})\dotfill & \hatcurCCesoJKmag{20} & \hatcurCCesoJKmag{21} & \hatcurCCesoJKmag{22} & \hatcurCCesoJKmag{23} & 2MASS \\
\sidehead{Derived properties}
~~~~$\ensuremath{M_\star}$ ($\ensuremath{M_\sun}$)\dotfill & \hatcurISOmlong{20} & \hatcurISOmlong{21} & \hatcurISOmlong{22} & \hatcurISOmlong{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \tablenotemark{b}\\
~~~~$\ensuremath{R_\star}$ ($\ensuremath{R_\sun}$)\dotfill & \hatcurISOrlong{20} & \hatcurISOrlong{21} & \hatcurISOrlong{22} & \hatcurISOrlong{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~$\ensuremath{\log{g_{\star}}}$ (cgs)\dotfill & \hatcurISOlogg{20} & \hatcurISOlogg{21} & \hatcurISOlogg{22} & \hatcurISOlogg{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~$\ensuremath{L_\star}$ ($\ensuremath{L_\sun}$)\dotfill & \hatcurISOlum{20} & \hatcurISOlum{21} & \hatcurISOlum{22} & \hatcurISOlum{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~$M_V$ (mag)\dotfill & \hatcurISOmv{20} & \hatcurISOmv{21} & \hatcurISOmv{22} & \hatcurISOmv{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~$M_K$ (mag,\hatcurjhkfilset{20})\dotfill & \hatcurISOMK{20} & \hatcurISOMK{21} & \hatcurISOMK{22} & \hatcurISOMK{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~$J-K$ (mag,\hatcurjhkfilset{20})\dotfill & \hatcurISOJK{20} & \hatcurISOJK{21} & \hatcurISOJK{22} & \hatcurISOJK{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~Age (Gyr)\dotfill & \hatcurISOage{20} & \hatcurISOage{21} & \hatcurISOage{22} & \hatcurISOage{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME \\
~~~~Distance (pc)\dotfill & \hatcurXdist{20}\phn & \hatcurXdist{21} & \hatcurXdist{22} & \hatcurXdist{23} & \hatcurisoshort{20}+\hatcurlumind{20}+SME\\
[-1.5ex]
\enddata
\tablenotetext{a}{
SME = ``Spectroscopy Made Easy'' package for the analysis of
high-resolution spectra \citep{valenti:1996}. These parameters
rely primarily on SME, but have a small dependence also on the
iterative analysis incorporating the isochrone search and global
modeling of the data, as described in the text.
}
\tablenotetext{b}{
\hatcurisoshort{20}+\hatcurlumind{20}+SME = Based on the \hatcurisoshort{20}
isochrones \citep{\hatcurisocite{20}}, \hatcurlumind{20} as a luminosity
indicator, and the SME results.
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\subsection{Excluding blend scenarios}
\label{sec:blend}
Our initial spectroscopic analyses discussed in \refsecl{recspec} and
\refsecl{hispec} rule out the most obvious astrophysical false positive
scenarios. However, more subtle phenomena, such as blends (contamination by
an unresolved eclipsing binary, whether in the background or physically
associated with the target), can still mimic both the photometric and
spectroscopic signatures we see. In the following section we investigate
whether such scenarios could have produced the observed features.
\subsubsection{Spectral line-bisector analysis}
\label{sec:bisec}
\begin{figure*}[!ht]
\ifthenelse{\boolean{emulateapj}}{
\plotone{SCF_residvsRV.eps}
}{
\includegraphics[scale=0.3]{SCF_residvsRV.eps}
}
\caption[]{
Panels on the left show the bisector spans (BS) as a function of Sky
Contamination Factor (SCF). Panels on the right exhibit the
SCF-corrected BS as a function of the radial velocities. The individual
planets are labeled.
\label{fig:bisecscf}}
\end{figure*}
Following \cite{torres:2007}, we explored the possibility that the
measured radial velocities are not real, but are instead caused by
distortions in the spectral line profiles due to contamination from a
nearby unresolved eclipsing binary. A bisector span (BS) analysis for
each system based on the Keck spectra was done as described in \S 5 of
\cite{bakos:2007}. In general, none of the Keck/HIRES spectra suffers
from significant sky contamination. Nevertheless, we calculated the Sky
Contamination Factors (SCF) as described in \citet{hartman:2009}, and
corrected for the minor correlation between SCF and BS. The results
are shown in \reffigl{bisecscf}, where we plot the SCF--BS and
RV--$\mathrm{BS_{SCF}}$ (BS after SCF correction) relations for each
planetary system. We also calculated the Spearman rank-order
correlation coefficients between the RV and BS quantities (denoted
$R_{s}$), together with the corresponding false alarm probabilities
(see \reftabl{bisec}).
There is no correlation for HAT-P-21, HAT-P-22 and HAT-P-23, and thus
the interpretation of these systems as transiting planets is clear.
There is an anti-correlation present for HAT-P-20, which is
strengthened when the SCF correction is applied. A plausible
explanation for this is that the neighboring star at 6\arcsec\
separation (see \reffigl{kecksnap}) is bleeding into the slit, even
though we were careful during the observations to keep the slit
centered on the main target, and adjusted the slit orientation to be
perpendicular to the direction to the neighbor. We simulated this
scenario, and calculated the expected BS as a function of RV due to the
neighbor, assuming that the two stars have the same systemic velocity
and the seeing is 1\arcsec. Indeed, we get a slight anti-correlation
from this simulation, and the magnitude of the BS variation is
consistent with the observations. It is also possible that some of the
anti-correlation is due to the fact that the slit was not in vertical
angle for many of the observations. The non-vertical slit mode may
result in wavelength-dependent slit losses due to atmospheric
dispersion, and this could bring in a correlation with the sky
background, and change the shape of the spectral lines.
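The rank-order statistic used above is straightforward to reproduce. The sketch below implements the Spearman coefficient from scratch (in practice one would use \texttt{scipy.stats.spearmanr}, which also returns a p-value; the FAPs quoted in \reftabl{bisec} play that role); the RV and BS values are illustrative placeholders, not the actual measurements.

```python
# Sketch of the Spearman rank-order test summarized in the table below.
# The rv/bs values are illustrative placeholders, not real measurements.

def ranks(x):
    # Average (1-based) ranks; tied values get the mean of their positions.
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman R_s = Pearson correlation of the two rank sequences.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rv = [-120.0, -60.0, -10.0, 40.0, 90.0, 130.0]  # m/s, illustrative
bs = [35.0, 20.0, 8.0, -5.0, -18.0, -30.0]      # m/s, illustrative
print(round(spearman(rv, bs), 2))  # -> -1.0 for this perfectly anti-correlated toy set
```

A permutation test, recomputing $R_s$ for many random shufflings of one column, then gives the false alarm probability as the fraction of shuffles with $|R_s|$ at least as large as observed.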
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable}{lrrrr}
}{
\begin{deluxetable}{lrrrr}
}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{
Summary of RV vs.~BS correlations.
\label{tab:bisec}
}
\tablehead{
\colhead{Name} &
\colhead{$R_{s1}$\tablenotemark{a}} &
\colhead{$\mathrm{FAP_1}$\tablenotemark{b}} &
\colhead{$R_{s2}$\tablenotemark{c}} &
\colhead{$\mathrm{FAP_2}$\tablenotemark{d}}
}
\startdata
HAT-P-20 & -0.73 & 2.46\% & -0.92 & 0.05\% \\
HAT-P-21 & -0.24 & 39\% & -0.20 & 47\% \\
HAT-P-22 & 0.33 & 30\% & 0.27 & 37\% \\
HAT-P-23 & 0.27 & 37\% & 0.27 & 37\% \\
[-1.5ex]
\enddata
\tablenotetext{a}{
The Spearman correlation coefficient between the bisector
(BS) variations and the radial velocities (RV).
}
\tablenotetext{b}{
False alarm probability for $R_{s1}$.
}
\tablenotetext{c}{
The Spearman correlation coefficient between BS corrected for the sky
contamination factor (SCF) and the RVs.
}
\tablenotetext{d}{
False alarm probability for $R_{s2}$.
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable}
}{
\end{deluxetable}
}
\subsection{Global modeling of the data}
\label{sec:globmod}
This section describes the procedure we followed for each system to
model the HATNet photometry, the follow-up photometry, and the radial
velocities simultaneously. Our model for the follow-up light curves\ used
analytic formulae based on \citet{mandel:2002} for the eclipse of a
star by a planet, with limb darkening being prescribed by a quadratic
law. The limb darkening coefficients for the Sloan \band{g}, Sloan
\band{i}, and Sloan \band{z} were interpolated from the tables by
\citet{claret:2004} for the spectroscopic parameters of each star as
determined from the SME analysis (\refsecl{stelparam}). The transit
shape was parametrized by the normalized planetary radius $p\equiv
\ensuremath{R_{p}}/\ensuremath{R_\star}$, the square of the impact parameter $b^2$, and the
reciprocal of the half duration of the transit $\ensuremath{\zeta/\rstar}$. We chose
these parameters because of their simple geometric meanings and the
fact that these show negligible correlations
\citep[see][]{bakos:2009}. The relation between $\ensuremath{\zeta/\rstar}$ and the
quantity \ensuremath{a/\rstar}, used in \refsecl{stelparam}, is given by
\begin{equation}
\ensuremath{a/\rstar} = P/2\pi (\ensuremath{\zeta/\rstar}) \sqrt{1-b^2} \sqrt{1-e^2}/(1+e \sin\omega)
\end{equation}
\citep[see, e.g.,][]{tingley:2005}. Note the subtle dependency of
\ensuremath{a/\rstar}\ on the $k \equiv e \cos\omega$ and $h \equiv e \sin\omega$
Lagrangian orbital parameters that are typically derived from the RV
data ($\omega$ is the longitude of periastron). This dependency is
often ignored in the literature, and \ensuremath{a/\rstar}\ is quoted as a ``pure''
light curve\ parameter. Of course, if high quality secondary eclipse
observations are available that determine both the location and
duration of the occultation, then $k$ and $h$ can be determined without
RV data. Our model for the HATNet data was the simplified ``P1P3''
version of the \citet{mandel:2002} analytic functions (an expansion in
terms of Legendre polynomials), for the reasons described in
\citet{bakos:2009}.
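As a numerical sanity check of the relation above, the following sketch evaluates \ensuremath{a/\rstar}\ from $\ensuremath{\zeta/\rstar}$, $b^2$, and the Lagrangian elements $k$ and $h$; the input values are round numbers chosen for illustration, not fitted parameters of any of the four systems.

```python
import math

# Evaluate a/R* = P/(2*pi) * (zeta/R*) * sqrt(1-b^2) * sqrt(1-e^2)/(1+e*sin(w)),
# written in terms of the Lagrangian elements k = e*cos(w), h = e*sin(w).
# Input values are illustrative only.

def a_over_rstar(P_days, zeta_over_rstar, b2, k, h):
    e = math.hypot(k, h)
    return (P_days / (2.0 * math.pi)) * zeta_over_rstar \
        * math.sqrt(1.0 - b2) * math.sqrt(1.0 - e * e) / (1.0 + h)

# Circular orbit (k = h = 0): the eccentricity factor drops out.
print(round(a_over_rstar(3.0, 18.0, 0.0, 0.0, 0.0), 3))  # -> 8.594
```

Setting $h \neq 0$ shows the dependency discussed above: a transit near periastron ($h > 0$) yields a smaller inferred \ensuremath{a/\rstar}\ for the same light-curve shape.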
Following the formalism presented by \citet{pal:2009}, the RVs were
fitted with an eccentric Keplerian model parametrized by the
semi-amplitude $K$ and Lagrangian elements $k$ and $h$. Note that we
allowed for an eccentric orbit for all planets, even if the results
were consistent with a circular orbit. There are several reasons for
this: i) many of the close-in hot Jupiters show eccentric orbits, thus
the assumption of fixing $e=0$ has no physical justification (while
this has been customary in early discoveries relying on very few
data-points) ii) the error-bars on various other derived quantities
(including $\ensuremath{a/\rstar}$) are more realistic with the inclusion of
eccentricity, and iii) non-zero eccentricities can be very important in
proper interpretation of these systems.
We assumed that there is a strict periodicity in the individual
transit times. For each system we assigned the transit number $N_{tr} =
0$ to a complete follow-up light curve. For
\hatcurb{20} this was the light curve\ gathered on 2009 Oct 21; for
\hatcurb{21}, the one from 2010 Feb 19; for \hatcurb{22}, 2009 Feb 28;
and for \hatcurb{23}, 2008 Sep 13.
The adjustable parameters in the fit that determine the ephemeris were
chosen to be the time of the first transit center observed with HATNet
($T_{c,-252}$, $T_{c,-286}$, $T_{c,-135}$, and $T_{c,-312}$ for
\hatcurb{20} through \hatcurb{23} respectively) and that of the last
transit center observed with the \mbox{FLWO 1.2\,m}\ telescope ($T_{c,0}$,
$T_{c,1}$, $T_{c,19}$, and $T_{c,250}$ for \hatcurb{20} through
\hatcurb{23} respectively). We used these as opposed to period and
reference epoch in order to minimize correlations between parameters
\citep[see][]{pal:2008}. Times of mid-transit for intermediate events
were interpolated using these two epochs and the corresponding transit
number of each event, $N_{tr}$. The eight main parameters describing
the physical model for each system were thus the first and last transit
center times, $\ensuremath{R_{p}}/\ensuremath{R_\star}$, $b^2$, $\ensuremath{\zeta/\rstar}$, $K$, $k \equiv
e\cos\omega$, and $h \equiv e\sin\omega$. For \hatcurb{20},
\hatcurb{22}, and \hatcurb{23} three additional parameters were
included (for each system) that have to do with the instrumental
configuration (blend factor, out-of-transit magnitudes, gamma
velocities; see later). For \hatcurb{21} seven additional parameters
were included, because it was observed in 3 different HATNet fields.
These are the HATNet blend factor $B_{\rm inst}$ (one for each HATNet
field), which accounts for possible dilution of the transit in the
HATNet light curve\ from background stars due to the broad PSF (20\arcsec\
FWHM), the HATNet out-of-transit magnitude $M_{\rm 0,HATNet}$ (one for
each HATNet field), and the relative zero-point $\gamma_{\rm rel}$ of
the Keck RVs.
We extended our physical model with an instrumental model that
describes brightness variations caused by systematic errors in the
measurements. This was done in a similar fashion to the analysis
presented by \citet{bakos:2009}. The HATNet photometry has already
been EPD- and TFA-corrected before the global modeling, so we only
considered corrections for systematics in the follow-up light curves. We chose
the ``ELTG'' method, i.e., EPD was performed in ``local'' mode with EPD
coefficients defined for each night, and TFA was performed in
``global'' mode using the same set of stars and TFA coefficients for
all nights. The five EPD parameters were the hour angle (representing
a monotonic trend that changes linearly over time), the square of the
hour angle (reflecting elevation), and the stellar profile parameters
(equivalent to FWHM, elongation, and position angle of the image).
The functional forms of the above parameters contained six
coefficients, including the auxiliary out-of-transit magnitude of the
individual events. For each system the EPD parameters were
independent for all nights, implying 12, 18, 12, and 36 additional
coefficients in the global fit for \hatcurb{20} through \hatcurb{23}
respectively. For the global TFA analysis we chose 20, 3, 10, and 20
template stars for \hatcurb{20} through \hatcurb{23} that had good
quality measurements for all nights and on all frames, implying an
additional 20, 3, 10, and 20 parameters in the fit for each system. In
all cases the total number of fitted parameters (43, 36, 33 and 67 for
\hatcurb{20} through \hatcurb{23}) was much smaller than the number of
data points (755, 1172, 892 and 953, counting only RV measurements and
follow-up photometry measurements).
The joint fit was performed as described in \citet{bakos:2009}. We
minimized \ensuremath{\chi^2}\ in the space of parameters by using a hybrid
algorithm, combining the downhill simplex method \citep[AMOEBA;
see][]{press:1992} with a classical linear least squares algorithm.
Uncertainties for the parameters were derived by applying the Markov
Chain Monte-Carlo method \citep[MCMC, see][]{ford:2006}.
This provided the full {\em a posteriori} probability distributions of
all adjusted variables. The {\em a priori} distributions of the
parameters for these chains were chosen to be Gaussian, with
eigenvalues and eigenvectors derived from the Fisher covariance matrix
for the best-fit solution. The Fisher covariance matrix was calculated
analytically using the partial derivatives given by \citet{pal:2009}.
Following this procedure we obtained the {\em a posteriori}
distributions for all fitted variables, and other quantities of
interest such as \ensuremath{a/\rstar}. As described in \refsecl{stelparam},
\ensuremath{a/\rstar}\ was used together with stellar evolution models to infer a
value for \ensuremath{\log{g_{\star}}}\ that is significantly more accurate
than the spectroscopic value. The improved estimate was in turn
applied to a second iteration of the SME analysis, as explained
previously, in order to obtain better estimates of \ensuremath{T_{\rm eff\star}}\ and
\ensuremath{\rm [Fe/H]}. The global modeling was then repeated with updated
limb-darkening coefficients based on those new spectroscopic
determinations. The resulting geometric parameters pertaining to the
light curves and velocity curves for each system are listed in
\reftabl{planetparam}.
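To make the error-estimation step concrete, here is a deliberately minimal Metropolis sampler for a one-parameter toy problem (a constant systemic velocity fit to invented RVs). The fixed Gaussian proposal stands in for the Fisher-matrix-derived proposal distribution described above; nothing here reproduces the actual multi-parameter fit.

```python
import math
import random

# Toy Metropolis MCMC: sample the posterior of a single systemic
# velocity gamma given four invented RV points with equal errors.
# In the real analysis the proposal width comes from the Fisher matrix.

random.seed(2)
data = [(10.2, 1.0), (9.7, 1.0), (10.4, 1.0), (9.9, 1.0)]  # (RV, sigma), invented

def chi2(gamma):
    return sum(((v - gamma) / s) ** 2 for v, s in data)

gamma, step, chain = 10.0, 0.5, []
c_old = chi2(gamma)
for _ in range(20000):
    trial = gamma + random.gauss(0.0, step)
    c_new = chi2(trial)
    # Accept if chi^2 improves, or with probability exp(-delta_chi2/2).
    if c_new < c_old or random.random() < math.exp(-(c_new - c_old) / 2.0):
        gamma, c_old = trial, c_new
    chain.append(gamma)

burned = chain[5000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((g - mean) ** 2 for g in burned) / len(burned)
print(round(mean, 2), round(var ** 0.5, 2))
```

For this toy data the posterior is Gaussian with mean equal to the average of the points ($\approx 10.05$) and width $\sigma/\sqrt{N} = 0.5$\,\ensuremath{\rm m\,s^{-1}}, which the chain should recover.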
Included in the table is the RV ``jitter'', a noise term that
we added in quadrature to the internal errors for the RVs in order to
achieve $\chi^{2}/{\rm dof} = 1$ from the RV data for the global fit.
The jitter is a combination of assumed astrophysical noise intrinsic to
the star and instrumental noise arising from uncorrected instrumental
effects (such as a template spectrum taken under sub-optimal conditions).
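The quadrature-added jitter can be found numerically; below is a sketch with invented residuals and internal errors, solving $\chi^{2}/{\rm dof} = 1$ by bisection.

```python
# Solve for the jitter s such that
#   chi^2/dof = sum(r_i^2 / (sig_i^2 + s^2)) / dof = 1.
# Residuals r and internal errors sig are invented for illustration.

def chi2_dof(resid, sig, s, dof):
    return sum(r * r / (e * e + s * s) for r, e in zip(resid, sig)) / dof

def solve_jitter(resid, sig, dof, hi=1000.0, tol=1e-8):
    if chi2_dof(resid, sig, 0.0, dof) <= 1.0:
        return 0.0  # internal errors already explain the scatter
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi2_dof(resid, sig, mid, dof) > 1.0:
            lo = mid  # still too much scatter: need more jitter
        else:
            hi = mid
    return 0.5 * (lo + hi)

resid = [12.0, -15.0, 9.0, -11.0, 14.0]  # m/s, invented
sig = [5.0, 5.0, 5.0, 5.0, 5.0]          # m/s, invented
print(round(solve_jitter(resid, sig, dof=len(resid)), 1))  # -> 11.3
```

Bisection works here because $\chi^{2}/{\rm dof}$ decreases monotonically as the jitter grows.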
The planetary parameters and their uncertainties were derived by
combining the {\em a posteriori} distributions for the stellar, light curve,
and RV parameters. In this way we find masses and radii for each
planet. These and other planetary parameters are listed at the bottom
of \reftabl{planetparam}, and further discussed in
\refsec{discussion}.
\ifthenelse{\boolean{emulateapj}}{
\begin{deluxetable*}{lcccc}
}{
\begin{deluxetable}{lcccc}
}
\tabletypesize{\tiny}
\tablecaption{Orbital and planetary parameters for
\hatcurb{20}--\hatcurb{23}\label{tab:planetparam}}
\tablehead{
\colhead{~~~~~~~~~~~~~~~Parameter~~~~~~~~~~~~~~~} &
\colhead{\hatcurb{20}} &
\colhead{\hatcurb{21}} &
\colhead{\hatcurb{22}} &
\colhead{\hatcurb{23}}
}
\startdata
\noalign{\vskip -3pt}
\sidehead{Light curve{} parameters}
~~~$P$ (days) \dotfill & $\hatcurLCP{20}$ & $\hatcurLCP{21}$ & $\hatcurLCP{22}$ & $\hatcurLCP{23}$ \\
~~~$T_c$ ($\mathrm{BJD_{UTC}}$)
\tablenotemark{a} \dotfill & $\hatcurLCT{20}$ & $\hatcurLCT{21}$ & $\hatcurLCT{22}$ & $\hatcurLCT{23}$ \\
~~~$T_{14}$ (days)
\tablenotemark{a} \dotfill & $\hatcurLCdur{20}$ & $\hatcurLCdur{21}$ & $\hatcurLCdur{22}$ & $\hatcurLCdur{23}$ \\
~~~$T_{12} = T_{34}$ (days)
\tablenotemark{a} \dotfill & $\hatcurLCingdur{20}$ & $\hatcurLCingdur{21}$ & $\hatcurLCingdur{22}$ & $\hatcurLCingdur{23}$ \\
~~~$\ensuremath{a/\rstar}$ \dotfill & $\hatcurPPar{20}$ & $\hatcurPPar{21}$ & $\hatcurPPar{22}$ & $\hatcurPPar{23}$ \\
~~~$\ensuremath{\zeta/\rstar}$ \dotfill & $\hatcurLCzeta{20}$ & $\hatcurLCzeta{21}$ & $\hatcurLCzeta{22}$ & $\hatcurLCzeta{23}$ \\
~~~$\ensuremath{R_{p}}/\ensuremath{R_\star}$ \dotfill & $\hatcurLCrprstar{20}$ & $\hatcurLCrprstar{21}$ & $\hatcurLCrprstar{22}$ & $\hatcurLCrprstar{23}$ \\
~~~$b^2$ \dotfill & $\hatcurLCbsq{20}$ & $\hatcurLCbsq{21}$ & $\hatcurLCbsq{22}$ & $\hatcurLCbsq{23}$ \\
~~~$b \equiv a \cos i/\ensuremath{R_\star}$
\dotfill & $\hatcurLCimp{20}$ & $\hatcurLCimp{21}$ & $\hatcurLCimp{22}$ & $\hatcurLCimp{23}$ \\
~~~$i$ (deg) \dotfill & $\hatcurPPi{20}$ & $\hatcurPPi{21}$ & $\hatcurPPi{22}$ & $\hatcurPPi{23}$ \\
\sidehead{Limb-darkening coefficients \tablenotemark{b}}
~~~$a_i$ (linear term) \dotfill & $\hatcurLBii{20}$ & $\hatcurLBii{21}$ & $\hatcurLBii{22}$ & $\hatcurLBii{23}$ \\
~~~$b_i$ (quadratic term) \dotfill & $\hatcurLBiii{20}$ & $\hatcurLBiii{21}$ & $\hatcurLBiii{22}$ & $\hatcurLBiii{23}$ \\
~~~$a_g$ \dotfill & $\cdots$ & $\cdots$ & $\hatcurLBig{22}$ & $\hatcurLBig{23}$ \\
~~~$b_g$ \dotfill & $\cdots$ & $\cdots$ & $\hatcurLBiig{22}$ & $\hatcurLBiig{23}$ \\
\sidehead{RV parameters}
~~~$K$ (\ensuremath{\rm m\,s^{-1}}) \dotfill & $\hatcurRVK{20}$ & $\hatcurRVK{21}$ & $\hatcurRVK{22}$ & $\hatcurRVK{23}$ \\
~~~$k_{\rm RV}$\tablenotemark{c}
\dotfill & $\hatcurRVk{20}$ & $\hatcurRVk{21}$ & $\hatcurRVk{22}$ & $\hatcurRVk{23}$ \\
~~~$h_{\rm RV}$\tablenotemark{c}
\dotfill & $\hatcurRVh{20}$ & $\hatcurRVh{21}$ & $\hatcurRVh{22}$ & $\hatcurRVh{23}$ \\
~~~$e$ \dotfill & $\hatcurRVeccen{20}$ & $\hatcurRVeccen{21}$ & $\hatcurRVeccen{22}$ & $\hatcurRVeccen{23}$ \\
~~~$\omega$ (deg) \dotfill & $\hatcurRVomega{20}$ & $\hatcurRVomega{21}$ & $\hatcurRVomega{22}$ & $\hatcurRVomega{23}$ \\
~~~RV jitter (\ensuremath{\rm m\,s^{-1}}) \dotfill & \hatcurRVjitter{20} & \hatcurRVjitter{21} & \hatcurRVjitter{22} & \hatcurRVjitter{23} \\
\sidehead{Secondary eclipse parameters (derived)}
~~~$T_s$ ($\mathrm{BJD_{UTC}}$) \dotfill & $\hatcurXsecondary{20}$ & $\hatcurXsecondary{21}$ & $\hatcurXsecondary{22}$ & $\hatcurXsecondary{23}$ \\
~~~$T_{s,14}$ \dotfill & $\hatcurXsecdur{20}$ & $\hatcurXsecdur{21}$ & $\hatcurXsecdur{22}$ & $\hatcurXsecdur{23}$ \\
~~~$T_{s,12}$ \dotfill & $\hatcurXsecingdur{20}$ & $\hatcurXsecingdur{21}$ & $\hatcurXsecingdur{22}$ & $\hatcurXsecingdur{23}$ \\
\sidehead{Planetary parameters}
~~~$\ensuremath{M_{p}}$ ($\ensuremath{M_{\rm J}}$) \dotfill & $\hatcurPPmlong{20}$ & $\hatcurPPmlong{21}$ & $\hatcurPPmlong{22}$ & $\hatcurPPmlong{23}$ \\
~~~$\ensuremath{R_{p}}$ ($\ensuremath{R_{\rm J}}$) \dotfill & $\hatcurPPrlong{20}$ & $\hatcurPPrlong{21}$ & $\hatcurPPrlong{22}$ & $\hatcurPPrlong{23}$ \\
~~~$C(\ensuremath{M_{p}},\ensuremath{R_{p}})$
\tablenotemark{d} \dotfill & $\hatcurPPmrcorr{20}$ & $\hatcurPPmrcorr{21}$ & $\hatcurPPmrcorr{22}$ & $\hatcurPPmrcorr{23}$ \\
~~~$\ensuremath{\rho_{p}}$ (\ensuremath{\rm g\,cm^{-3}}) \dotfill & $\hatcurPPrho{20}$ & $\hatcurPPrho{21}$ & $\hatcurPPrho{22}$ & $\hatcurPPrho{23}$ \\
~~~$\log g_p$ (cgs) \dotfill & $\hatcurPPlogg{20}$ & $\hatcurPPlogg{21}$ & $\hatcurPPlogg{22}$ & $\hatcurPPlogg{23}$ \\
~~~$a$ (AU) \dotfill & $\hatcurPParel{20}$ & $\hatcurPParel{21}$ & $\hatcurPParel{22}$ & $\hatcurPParel{23}$ \\
~~~$T_{\rm eq}$ (K) \dotfill & $\hatcurPPteff{20}$ & $\hatcurPPteff{21}$ & $\hatcurPPteff{22}$ & $\hatcurPPteff{23}$ \\
~~~$\Theta$\tablenotemark{e} \dotfill & $\hatcurPPtheta{20}$ & $\hatcurPPtheta{21}$ & $\hatcurPPtheta{22}$ & $\hatcurPPtheta{23}$ \\
~~~$F_{per}$ ($10^{\hatcurPPfluxperidim{20}}$\ensuremath{\rm erg\,s^{-1}\,cm^{-2}}) \tablenotemark{f}
\dotfill & $\hatcurPPfluxperi{20}$ & $\hatcurPPfluxperi{21}$ & $\hatcurPPfluxperi{22}$ & $\hatcurPPfluxperi{23}$ \\
~~~$F_{ap}$ ($10^{\hatcurPPfluxapdim{20}}$\ensuremath{\rm erg\,s^{-1}\,cm^{-2}}) \tablenotemark{f}
\dotfill & $\hatcurPPfluxap{20}$ & $\hatcurPPfluxap{21}$ & $\hatcurPPfluxap{22}$ & $\hatcurPPfluxap{23}$ \\
~~~$\langle F \rangle$ ($10^{\hatcurPPfluxavgdim{20}}$\ensuremath{\rm erg\,s^{-1}\,cm^{-2}}) \tablenotemark{f}
\dotfill & $\hatcurPPfluxavg{20}$ & $\hatcurPPfluxavg{21}$ & $\hatcurPPfluxavg{22}$ & $\hatcurPPfluxavg{23}$ \\
[-1.0ex]
\enddata
\tablenotetext{a}{
\ensuremath{T_c}: Reference epoch of mid transit that
minimizes the correlation with the orbital period. It
corresponds to $N_{tr} = -16$. BJD is calculated from UTC.
\ensuremath{T_{14}}: total transit duration, time
between first to last contact;
\ensuremath{T_{12}=T_{34}}: ingress/egress time, time between first
and second, or third and fourth contact.
}
\tablenotetext{b}{
Values for a quadratic law, adopted from the tabulations by
\cite{claret:2004} according to the spectroscopic (SME) parameters
listed in \reftabl{stellar}.
}
\tablenotetext{c}{
Lagrangian orbital parameters derived from the global modeling,
and primarily determined by the RV data.
}
\tablenotetext{d}{
Correlation coefficient between the planetary mass \ensuremath{M_{p}}\ and radius
\ensuremath{R_{p}}.
}
\tablenotetext{e}{
The Safronov number is given by $\Theta = \frac{1}{2}(V_{\rm
esc}/V_{\rm orb})^2 = (a/\ensuremath{R_{p}})(\ensuremath{M_{p}} / \ensuremath{M_\star} )$
\citep[see][]{hansen:2007}.
}
\tablenotetext{f}{
Incoming flux per unit surface area. $\langle F \rangle$ is
averaged over the orbit.
}
\ifthenelse{\boolean{emulateapj}}{
\end{deluxetable*}
}{
\end{deluxetable}
}
\section{Discussion}
\label{sec:discussion}
\subsection{\hatcurb{20}}
\label{sec:disc20}
\hatcurb{20} is a very massive ($\ensuremath{M_{p}} =
\hatcurPPmlong{20}\,\ensuremath{M_{\rm J}}=\hatcurPPme{20}\,\ensuremath{M_\earth}$) and very compact
($\ensuremath{R_{p}}=\hatcurPPrlong{20}\,\ensuremath{R_{\rm J}}$) hot Jupiter orbiting a
\hatcurISOspec{20} \citep{skiff:2009} star. \hatcurb{20} is the sixth
most massive, and second most dense transiting planet with
$\rho_p=$\hatcurPPrho{20}\,\ensuremath{\rm g\,cm^{-3}} (see \reffigl{exomr}). The only
planet (or brown dwarf) denser than \hatcurb{20} is CoRoT-3b
\citep{deleuil:2008} with $\ensuremath{\rho_{p}} \approx 27\ensuremath{\rm g\,cm^{-3}}$. Modeling
\hatcurb{20} may be a challenge, as the oldest (4\,Gyr, i.e.~yielding
the most compact planets) \citet{fortney:2007} models with
$\ensuremath{M_{p}}=2154\,\ensuremath{M_\earth}$ total mass and 100\,\ensuremath{M_\earth}\ core-mass predict a much
bigger radius (1.04\,\ensuremath{R_{\rm J}}). The observed radius of
\hatcurPPrshort{20}\,\ensuremath{R_{\rm J}}\ would require a very high metal content.
We note that the host star is one of the most metal rich stars that
have a transiting planet ($\ensuremath{\rm [Fe/H]}=\hatcurSMEzfeh{20}$). Curiously,
\hatcurb{20} orbits a fairly late type star (\hatcurISOspec{20}), as
compared to most of the massive hot Jupiters that orbit $\sim$F5
dwarfs. It is also different from the rest of the population in that
the orbit is consistent with circular at the 3$\sigma$ level. The
irradiation \hatcurb{20} receives is one of the lowest, clearly
making it a pL class exoplanet \citep{fortney:2008}: $\langle F \rangle
= (\hatcurPPfluxavg{20})\cdot 10^{\hatcurPPfluxavgdim{20}}$\ensuremath{\rm erg\,s^{-1}\,cm^{-2}},
comparable to the mean flux per orbit for another ``heavy'' planet
\hd{17156b} on a 21\,d period orbit. The only other massive planet
that receives less average flux (integrated over an orbit) is
\hd{80606b}. \hatcur{20} is an extreme outlier in the \ensuremath{M_{p}}--\ensuremath{M_\star}\
plane; it is a relatively small mass star harboring a very massive
planet. Another outlier (albeit to a much lesser extent) with similar
planetary radius and stellar mass is WASP-10b
\citep{johnson:2008b,christian:2009}, but this planet has less than
half of the mass of \hatcurb{20} (3.09\,\ensuremath{M_{\rm J}}). We also calculated the
maximum mass of a stable moon for both the prograde and retrograde
cases, and derived $0.128\,\ensuremath{M_\earth}$ and $8.31\,\ensuremath{M_\earth}$, respectively,
i.e.~\hatcurb{20} can harbor a fairly massive moon. An $8.31\,\ensuremath{M_\earth}$
retrograde moon would cause $\sim 10\,s$ variations in the transit
times, which is marginally detectable from the ground.
\hatcur{20} has a close-by faint and red companion star at $\sim
6.86\arcsec$ separation. Based on the Palomar sky survey archival
plates, we confirm that they form a close common-proper motion pair,
thus it is very likely that the two stars are physically associated.
The binary has appeared in the Washington Double Star compilation (WDS)
as POU2795, and was discovered by \citet{pourteau:1933}. Furthermore,
based on the summary of observations in the WDS, there is already a
hint of orbital motion of the companion to \hatcur{20} over the last
century. The position angle of the companion changed from
PA=323\arcdeg\ to PA=320\arcdeg\ over the course of 89 years (between
1909 and 1998), and it seems to be retrograde on the sky (clockwise).
Thus, \hatcur{20} is yet another example of a massive planet in a
binary system \citep{udry:2002}. The binary companion makes this
system ideal for high precision ground or space-based studies, as it
provides a natural comparison source, even though it has a later
spectral type.
\subsection{\hatcurb{21}}
\label{sec:disc21}
With a mass of $\ensuremath{M_{p}}=\hatcurPPmlong{21}\,\ensuremath{M_{\rm J}}$, \hatcurb{21} is the 11th
most massive transiting planet. \hatcurb{21} has a radius of
$\ensuremath{R_{p}}=\hatcurPPrlong{21}\,\ensuremath{R_{\rm J}}$, mean density
$\ensuremath{\rho_{p}}=\hatcurPPrho{21}\,\ensuremath{\rm g\,cm^{-3}}$, and orbits on a moderately eccentric
orbit with $e=\hatcurRVeccen{21}$, $\omega=\hatcurRVomega{21}\arcdeg$.
The transits occur near apastron. As noted by
\citet{buchhave:2010}, 4\,\ensuremath{M_{\rm J}}\ mass planets are very rare in the
sample of currently known transiting exoplanets, and the only siblings of
\hatcurb{21} are \hd{80606b} \citep[4.08\,\ensuremath{M_{\rm J}};][]{naef:2001} and
HAT-P-16b \citep[4.19\,\ensuremath{M_{\rm J}};][]{buchhave:2010}. Among these,
HAT-P-16b has a shorter period, also an eccentric orbit, and a much larger
radius (1.29\,\ensuremath{R_{\rm J}}). \hd{80606b}, on the other hand, has a similar
radius, and orbits on an extremely eccentric ($e=0.93$) orbit at
111\,day period. \hatcurb{21} thus appears to be an unusual
short-period, eccentric, massive, and compact planet.
The only models from \citet{fortney:2007} consistent with the observed
radius are 4\,Gyr models with 100\,\ensuremath{M_\earth}\ core mass, yielding
1.05\,\ensuremath{R_{\rm J}}\ radius. Probably \hatcurb{21} has an even higher metal
content. \hatcurb{21} has a very high mean density; it is 8th among
all TEPs, and very similar to \hd{80606b} and WASP-14b.
The flux received by the planet varies between
$(\hatcurPPfluxperi{21})\cdot 10^{\hatcurPPfluxperidim{21}}\,\ensuremath{\rm erg\,s^{-1}\,cm^{-2}}$ and
$(\hatcurPPfluxap{21})\cdot 10^{\hatcurPPfluxapdim{21}}\,\ensuremath{\rm erg\,s^{-1}\,cm^{-2}}$.
Interestingly, this puts \hatcurb{21} on the borderline between pL
(low irradiation) and pM (high irradiation) planets. At the time of
occultation, \hatcurb{21} is just approaching its periastron, thus
entering the irradiation level quoted for pM type planets.
\subsection{\hatcurb{22}}
\label{sec:disc22}
\hatcurb{22} has a mass of $\ensuremath{M_{p}}=\hatcurPPmlong{22}\,\ensuremath{M_{\rm J}}$, radius of
$\ensuremath{R_{p}}=\hatcurPPrlong{22}\,\ensuremath{R_{\rm J}}$, and mean density of
$\rho_p=\hatcurPPrho{22}\,\ensuremath{\rm g\,cm^{-3}}$. \hatcurb{22} orbits a fairly metal
rich ($\ensuremath{\rm [Fe/H]}=\hatcurSMEzfeh{22}$), bright (V=\hatcurCCtassmv{22}), and close-by
(\hatcurXdist{22}\,pc) star. Similarly to \hatcur{20}, the host star has a
faint and red neighbor at 9\arcsec\ separation that is co-moving with
\hatcur{22} (based on the POSS plates and recent Keck/HIRES snapshots),
thus they are likely to form a physical pair.
\hatcurb{22} belongs to the moderately massive ($\sim 2\,\ensuremath{M_{\rm J}}$) and
compact ($\ensuremath{R_{p}} \approx 1\,\ensuremath{R_{\rm J}}$) hot Jupiters, such as
HAT-P-15b \citep[\ensuremath{M_{p}}=1.95\ensuremath{M_{\rm J}}, \ensuremath{R_{p}}=1.07\,\ensuremath{R_{\rm J}}; ][]{kovacs:2010},
HAT-P-14b \citep[\ensuremath{M_{p}}=2.23\ensuremath{M_{\rm J}}, \ensuremath{R_{p}}=1.15\,\ensuremath{R_{\rm J}}; ][]{torres:2010},
and WASP-8b \citep[\ensuremath{M_{p}}=2.25\ensuremath{M_{\rm J}}, \ensuremath{R_{p}}=1.05\,\ensuremath{R_{\rm J}}; ][]{queloz:2010}.
The radius distribution is almost bi-modal for these planets (see
\reffigl{exomr}), with members of the inflated
($\ensuremath{R_{p}}\approx1.3\,\ensuremath{R_{\rm J}}$) group being:
HAT-P-23b (\ensuremath{M_{p}}=2.09\ensuremath{M_{\rm J}}, \ensuremath{R_{p}}=1.37\,\ensuremath{R_{\rm J}}; this work),
Kepler-5b \citep[\ensuremath{M_{p}}=2.10\ensuremath{M_{\rm J}}, \ensuremath{R_{p}}=1.31\,\ensuremath{R_{\rm J}};][]{kipping:2010,koch:2010},
CoRoT-11b \citep[\ensuremath{M_{p}}=2.33\ensuremath{M_{\rm J}}, \ensuremath{R_{p}}=1.43\,\ensuremath{R_{\rm J}}; ][]{gandolfi:2010}.
\hatcurb{22} is broadly consistent with the models of
\citet{fortney:2008}. For 300\,Myr, 1\,Gyr and 4\,Gyr models it
requires a 100\,\ensuremath{M_\earth}, 50\,\ensuremath{M_\earth}\ and 25\,\ensuremath{M_\earth}\ core,
respectively, to have a radius of $\sim\hatcurPPrshort{22}\,\ensuremath{R_{\rm J}}$.
The low incoming flux (see \reftabl{planetparam}) means that
\hatcurb{22} is a pL class planet. \hatcurb{22} can harbor a
$0.96\,\ensuremath{M_\earth}$ mass retrograde moon, which would cause transit timing
variations (TTVs) of $\sim 2$\,seconds.
\subsection{\hatcurb{23}}
\label{sec:disc23}
\hatcurb{23} belongs to the inflated group of $2\,\ensuremath{M_{\rm J}}$ planets (see
discussion above for \hatcurb{22}). This planet has a mass of
$\ensuremath{M_{p}}=\hatcurPPmlong{23}\,\ensuremath{M_{\rm J}}$, radius
$\ensuremath{R_{p}}=\hatcurPPrlong{23}\,\ensuremath{R_{\rm J}}$, and mean density
$\ensuremath{\rho_{p}}=\hatcurPPrho{23}\,\ensuremath{\rm g\,cm^{-3}}$.
The orbit is nearly circular, with the eccentricity being marginally
significant. The reason for the somewhat higher than usual errors in
the RV parameters is the high jitter of the star
(\hatcurRVjitter{23}\,\ensuremath{\rm m\,s^{-1}}), which may be related to the moderately high
$\ensuremath{v \sin{i}}=\hatcurSMEvsin{23}\,\ensuremath{\rm km\,s^{-1}}$ and the very close-in orbit of
\hatcurb{23}. The \citet{fortney:2008} models cannot reproduce the
observed radius of \hatcurb{23}; even for the youngest (300\,Myr)
core-less models, the theoretical radius for its mass is 1.25\,\ensuremath{R_{\rm J}}.
\hatcurb{23} orbits its host star on a very close-in orbit. The
orbital period is only \hatcurLCPshort{23}\,days; almost identical to
that of OGLE-TR-56b (1.21192\,days). The nominal planetary radius of
the two objects is also the same within 1\%, but OGLE-TR-56b is much
less massive (1.39\,\ensuremath{M_{\rm J}}). The flux falling on \hatcurb{23} from its
host star is one of the highest (i.e.~belongs to the pM class objects),
and is similar to that of HAT-P-7b and OGLE-TR-56b. We also calculated
the spiral in-fall timescale for each new discovery based on
\citet{levrard:2009} and \citet{dobbs-dixon:2004}. By assuming that
the stellar dissipation factor is $Q_\star=10^6$, the infall time for
\hatcurb{23} is $\tau_{infall} = \hatcurPPtinfall{23}$\,Myr, one of the
shortest among exoplanets.
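For orientation, the sketch below evaluates the widely used constant-$Q_\star$ estimate of the remaining tidal decay time, $\tau \sim (2/13)\,a/|\dot{a}|$ with $|\dot{a}|/a = (9/2)\,Q_\star^{-1}(\ensuremath{M_{p}}/\ensuremath{M_\star})(\ensuremath{R_\star}/a)^5\,(2\pi/P)$. Prefactor conventions differ between \citet{levrard:2009} and \citet{dobbs-dixon:2004}, and the inputs are round numbers of the right order rather than the fitted values, so this is an order-of-magnitude illustration only.

```python
import math

# Constant-Q tidal decay timescale, tau ~ (2/13) a/|da/dt| with
# |da/dt|/a = (9/2)(1/Q*)(Mp/M*)(R*/a)^5 (2*pi/P).
# Inputs are round numbers of the right order, not fitted values;
# prefactor conventions vary between references.

def infall_time_myr(P_days, a_over_rstar, mstar_over_mp, Q_star=1.0e6):
    n = 2.0 * math.pi / P_days                      # mean motion, 1/day
    decay_rate = 4.5 / Q_star * (1.0 / mstar_over_mp) \
        * a_over_rstar ** -5 * n                    # |da/dt|/a, 1/day
    tau_days = (2.0 / 13.0) / decay_rate
    return tau_days / 365.25 / 1.0e6

print(round(infall_time_myr(1.2, 5.9, 600.0), 1))  # -> 76.7 (Myr; order of magnitude only)
```

The steep $\left(a/\ensuremath{R_\star}\right)^5$ dependence is why the shortest-period systems, like \hatcurb{23}, dominate the low end of the in-fall timescale distribution.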
The Rossiter-McLaughlin effect for \hatcurb{23} should be quite
significant, given the moderately high
$\ensuremath{v \sin{i}}=\hatcurSMEvsin{23}\,\ensuremath{\rm km\,s^{-1}}$ of the host star, and the $\Delta i =
17$\,mmag deep transit. The impact parameter is also ``ideal''
($b=\hatcurLCimp{23}$), i.e.~it is not equatorial ($b=0$), where there
would be a strong degeneracy between the stellar rotational velocity
\ensuremath{v \sin{i}}\ and the sky-projected angle of the stellar spin axis and the
orbital normal, $\lambda$, and is also far from grazing, where the
transit is short, and other system parameters have lower accuracy. The
effective temperature of the star ($\ensuremath{T_{\rm eff\star}} = \hatcurSMEteff{23}\,K$)
is close to the critical temperature of 6250\,K noted recently by
\citet{winn:2010}, which may be a border-line between systems where the
stellar spin axes and planetary orbital normals are preferentially
aligned ($\ensuremath{T_{\rm eff\star}} < 6250\,K$) and those that are misaligned
($\ensuremath{T_{\rm eff\star}} > 6250\,K$). An alternative hypothesis has been brought up
by \citet{schlaufman:2010}, where misaligned stellar spin axes and
orbital normals are related to the mass of the host star. The mass of
\hatcur{23} (\hatcurISOm{23}) is sufficiently close to the suggested
dividing line of $\ensuremath{M_\star} = 1.2\,\ensuremath{M_\sun}$, thus it will provide an
excellent additional test for these ideas.
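A back-of-the-envelope check of the expected RM signal can be made with the standard approximation $\Delta V_{\rm RM} \approx \sqrt{1-b^2}\,(\ensuremath{R_{p}}/\ensuremath{R_\star})^2\,\ensuremath{v \sin{i}}$; the depth, impact parameter, and \ensuremath{v \sin{i}}\ used below are round numbers of the order quoted in the text, not the fitted values.

```python
import math

# Approximate Rossiter-McLaughlin semi-amplitude,
#   dV ~ sqrt(1 - b^2) * (Rp/R*)^2 * vsini.
# Inputs are round numbers for illustration, not fitted parameters.

def rm_amplitude(depth, b, vsini_kms):
    # depth = (Rp/R*)^2; returns the approximate amplitude in m/s
    return math.sqrt(1.0 - b * b) * depth * vsini_kms * 1000.0

print(round(rm_amplitude(0.017, 0.3, 8.0)))  # -> 130, i.e. ~130 m/s
```

An anomaly of this size is easily detectable at the $\sim$few \ensuremath{\rm m\,s^{-1}}\ precision of Keck/HIRES, which is why \hatcurb{23} is such a promising RM target.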
\subsection{Summary}
We presented the discovery of four new massive transiting planets, and
provided accurate characterization of the host star and planetary
parameters. These 4 new systems are very diverse, and significantly
expand the sample of $\sim13$ other massive ($\ensuremath{M_{p}}\gtrsim2\,\ensuremath{M_{\rm J}}$)
planets. Two of the new discoveries orbit stars that have fainter,
most probably physically associated companions. The new discoveries do
not strengthen the mass--eccentricity correlation, since only one
(\hatcurb{21}) is significantly eccentric. Also, the tentative
mass--\ensuremath{v \sin{i}}\ correlation noted in the Introduction is weakened by the
new discoveries. The heavier mass planets (\hatcurb{20} and
\hatcurb{21}) seem to be inconsistent with current theoretical models
in that they are too dense, and would require a huge core (or metal
content) to have such small radii. One planet (\hatcurb{23}) is also
inconsistent with the models (unless we assume that the planet is very
young), but in the opposite sense; it has an inflated radius. It has
been noted by \citet{winn:2010} and \citet{schlaufman:2010} that
systems exhibiting misalignment between the stellar spin axis and the
planetary orbital normal preferentially harbor eccentric and massive
planets (in addition to the key parameter being the effective
temperature or mass of the host star, respectively). The four new
planets presented in
this work will provide additional important tests for checking these
conjectures. The host stars are all bright ($9.7<V<12.4$), and thus
enable in-depth future characterization of these systems.
\begin{figure*}[!ht]
\plotone{exoplanet_m_r.eps}
\caption{ Mass--radius diagram of known TEPs (small filled
squares). \hatcurb{20}--\hatcurb{23}\ are shown as large filled
squares. Overlaid are \citet{fortney:2007} planetary isochrones
interpolated to the solar equivalent semi-major axis of \hatcurb{20}
for ages of 1.0\,Gyr (upper, solid lines) and 4\,Gyr (lower,
dashed-dotted lines) and core masses of 0 and 10\,\ensuremath{M_\earth} (upper and
lower lines respectively), as well as isodensity lines for 0.4, 0.7,
1.0, 1.33, 5.5 and 11.9\,\ensuremath{\rm g\,cm^{-3}} (dashed lines). Solar system planets
are shown with open triangles.
\label{fig:exomr}}
\end{figure*}
\acknowledgements
HATNet operations have been funded by NASA grants NNG04GN74G,
NNX08AF23G and SAO IR\&D grants. The work of G.\'A.B.~and J.~Johnson was
supported by the Postdoctoral Fellowship of the NSF Astronomy and
Astrophysics Program (AST-0702843 and AST-0702821, respectively). GT
acknowledges partial support from NASA grant NNX09AF59G. We
acknowledge partial support also from the Kepler Mission under NASA
Cooperative Agreement NCC2-1390 (D.W.L., PI). G.K.~thanks the
Hungarian Scientific Research Foundation (OTKA) for support through
grant K-81373. This research has made use of Keck telescope time
granted through NOAO and NASA, and uses observations
obtained with facilities of the Las Cumbres Observatory Global
Telescope.
\input{biblio.tex}
\end{document}
\section{Introduction and Summary}\label{first}
The recent proposal of Ho\v{r}ava \cite{Horava:2009uw,Horava:2008ih,
Horava:2008jf}
for a candidate theory of gravity
that is symmetric under the Lifshitz type of anisotropic scaling of space-time
coordinates $t \rightarrow l^{z} t , \>\>\>\>
x^i \rightarrow l x^i , $ where $z$ is the scaling exponent, has been
a very active area of research over the past year
\footnote{Some aspects of Ho\v{r}ava-Lifshitz theory have been discussed,
for example, in \cite{Tang:2009bu}--\cite{Ghodsi:2009rv}}.
This theory is constructed as a UV completion of Hilbert-Einstein
gravity so that it is perturbatively renormalizable.
This modification is possible only when
we sacrifice Lorentz symmetry at high energy; Lorentz symmetry is then
recovered at low energy. Among the various versions of Ho\v{r}ava-Lifshitz
theories, only the class of projectable theories, where the
so-called lapse function
depends only on time, is a consistent choice. It is often, as in the present
case, interesting to study theories with broken general covariance. The
so called Lorentz symmetry breaking Hamiltonian formalism has been used to
study the point particles and strings in Ho\v{r}ava gravity. The basic
idea of this Lorentz breaking Hamiltonian formalism is that time and
spatial components of momenta are treated differently. Indeed in
\cite{Kluson:2010aw} the construction of a new string theory, called
Lorentz-breaking string theory (LBS) has been studied extensively by
generalizing the point particle dynamics
\cite{Capasso:2009fh,Suyama:2009vy,Romero:2009qs,Mosaffa:2010ym,
Kiritsis:2009rx,Rama:2009xc}
in Ho\v{r}ava-Lifshitz gravity.
The basic idea of the construction of the LBS theory is as follows. We
start with the Hamiltonian formulation of a two dimensional theory where
the Hamiltonian is a linear combination of two constraints: the spatial
diffeomorphism constraint and the Hamiltonian constraint. In contrast
to the Hamiltonian formulation of the Polyakov string we consider a
Hamiltonian constraint that breaks the Lorentz invariance of the
target space-time in a similar way as in the point particle case
\cite{Capasso:2009fh,Suyama:2009vy,Romero:2009qs,Mosaffa:2010ym,
Kiritsis:2009rx,Rama:2009xc}. However, in contrast to the point particle
case, the world-sheet modes now depend on the spatial coordinate of the
world-sheet theory, so that it is possible to define many
LBS theories that reduce to the point particle Hamiltonian
constraint \cite{Capasso:2009fh,Suyama:2009vy,Romero:2009qs,Mosaffa:2010ym,
Kiritsis:2009rx,Rama:2009xc} upon world-sheet
dimensional reduction. In doing so, the consistency of the LBS theory
demands that the spatial component of the world-sheet metric
be dynamical.
As a by-product, the world-sheet theory is no longer invariant under the full
two dimensional diffeomorphism but only under the world-sheet foliation
preserving transformation. Furthermore, the consistency of the Hamiltonian
dynamics of LBS theory implies that the world-sheet lapse has to obey
a projectability condition analogous to that
of Ho\v{r}ava-Lifshitz gravity.
On the other hand it is natural
to demand that the Hamiltonian constraint of LBS theory
reduces to the Hamiltonian constraint of the relativistic
string in the case when the target Ho\v{r}ava-Lifshitz gravity
reduces to General Relativity in low energy regime.
This requirement now implies
that we should consider an LBS theory where the world-sheet mode
$x^0$ depends on
the world-sheet spatial coordinate $\sigma$ as well, which is a more
general situation than was considered in \cite{Kluson:2010aw}.
Then we will be able to show that the Hamiltonian
constraint reduces to the
Hamiltonian constraint for
the relativistic string in the limit which has
been used to recover General Theory of Relativity from
Ho\v{r}ava gravity. However, at this point we should stress
one important subtlety that makes the construction
of LBS theory as intricate as the construction of
Ho\v{r}ava-Lifshitz gravity. Explicitly, we argued in
\cite{Kluson:2010aw} that the consistency of the
Hamiltonian formulation of LBS theory forces us to
consider a world-sheet lapse function that depends
on the world-sheet time coordinate $\tau$ only. As a
result the LBS theory reduces to the Polyakov action,
however with a lapse function
that does not depend on the world-sheet spatial coordinate.
In other words, LBS theory does not reduce to the full Polyakov
string in the IR limit of target Ho\v{r}ava-Lifshitz gravity.
Despite this fact, we feel that it is interesting to
study LBS theory further as a toy model of a theory
with broken Lorentz invariance that is more general than
the corresponding point particle action. We discussed
the symmetries of
the action and showed that it is invariant under the target space
foliation preserving diffeomorphism
and under the world-sheet foliation preserving diffeomorphism
\cite{Kluson:2010aw}. We further derive
the T-duality rules for the LBS string
and show that they are the same as Buscher's T-duality rules for
relativistic strings \cite{Buscher:1985kb,Buscher:1987sk,Buscher:1987qj}.
It would be interesting to study other extended
objects like D-branes in LBS theory. In particular, it would be
desirable to look for the D-brane action in a Ho\v{r}ava-Lifshitz
background and examine its fate under the T-duality
transformation.
The rest of the paper is organized as follows. In section-2, we generalize the
construction of LBS theory of \cite{Kluson:2010aw} to include
other auxiliary fields and more general world sheet modes.
In section-3, we present the symmetries of the LBS action and show
that the action is invariant under the target space foliation preserving
diffeomorphism transformation. Finally, section-4 is devoted to
the study of T-duality transformation of the LBS theory.
\section{Review of LBS Theory}
In this section we review and slightly generalize
the construction of LBS theory
given in \cite{Kluson:2010aw}, where more
details and motivation for this construction can be found.
As in \cite{Kluson:2010aw}
we begin with the following Hamiltonian formulation
of LBS theory \begin{eqnarray}\label{defH}
H&=&\int_{\Sigma} d\sigma \mH(\sigma) \ , \quad
\mH(\sigma)=n_\tau(\tau) \mH_\tau(\sigma)
+ n^\sigma(\sigma)
\mH_\sigma(\sigma)+\nonumber \\
&+& \lambda_\tau(\tau)
\pi^\tau(\tau)+\lambda_\sigma(\sigma) \pi^\sigma(\sigma)+
v_A(\sigma) P_A(\sigma)+v_B(\sigma) P_B(\sigma) \ , \nonumber \\
\end{eqnarray}
where
\begin{eqnarray}\label{defHtausigma}
\mH_\tau&=&-\pi\alpha' \frac{1}{\sqrt{\omega}
N^2}
(p_0-N^ip_i)^2+\sqrt{\omega}
G\left(-\frac{1}{4\pi\alpha'\omega}
N^2\parts x^0 \parts x^0\right) +\nonumber
\\
&+&B\left( \pi\alpha' \frac{1}{\omega}
p_i h^{ij}p_j+
\frac{(z-1)}{2\sqrt{\omega}}
\pi^\omega \omega^2 \pi^\omega +\right. \nonumber \\
&+& \left.\frac{1}
{4\pi\alpha'}\frac{1}{\omega}
(\parts x^i+N^i\parts x^0)
h_{ij}(\parts x^j+N^j\parts x^0)-A\right)+\sqrt{\omega}F(A) \
\ , \nonumber \\
\mH_\sigma&=& p_M\parts x^M- 2\omega
\nabla_\sigma \pi^\omega
\ , \nonumber \\
\end{eqnarray}
where $x^M, M,N=0,\dots,D$ are world-sheet
modes that parameterize the embedding of the string
into
target space-time $\mathcal{M}$ with general
metric $g_{MN}$
and where $p_M$ are conjugate
momenta with following non-zero Poisson brackets
\begin{equation}
\pb{x^M(\sigma),p_N(\sigma')}=
\delta^M_N\delta(\sigma-\sigma') \ .
\end{equation}
Further, we
introduced two dimensional metric $\gamma_{\mu\nu}$
in $1+1$ formalism
\begin{equation}\label{gamma}
\gamma_{\alpha\beta}= \left(
\begin{array}{cc}
-n^2_\tau+ \frac{1}{\omega
}n_\sigma^2 &
n_\sigma \\
n_\sigma & \omega \\
\end{array}\right) \ ,
\end{equation}
where $n_\tau$ is a world-sheet lapse,
$n_\sigma$ is a world-sheet shift,
$ n^\sigma\equiv \frac{n_\sigma}{\omega}$
and
$\omega$ is a spatial part of world-sheet metric
and where $\pi^\tau,\pi^\sigma$ and $\pi^\omega$
are corresponding conjugate momenta with following
non-zero Poisson brackets
\begin{equation}
\pb{n_\tau,\pi^\tau}=1 \ ,
\quad \pb{n_\sigma(\sigma),\pi^\sigma(\sigma')}=
\delta(\sigma-\sigma') \ , \quad
\pb{\omega(\sigma),\pi^\omega(\sigma')}=
\delta(\sigma-\sigma') \ .
\end{equation}
We further defined world-sheet covariant
derivative
\begin{equation}
\nabla_\sigma n_\sigma=\parts
n_\sigma-\Gamma n_\sigma\ , \quad
\nabla_\sigma \pi^\omega=
\parts \pi^\omega+\Gamma \pi^\omega \ , \quad
\Gamma=\frac{1}{2\omega}\parts \omega
\ .
\end{equation}
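As a quick consistency check of the parametrization (\ref{gamma}), one can verify symbolically that $\det\gamma=-n^2_\tau\omega$, so that $\sqrt{-\gamma}=n_\tau\sqrt{\omega}$; the following is a small sympy sketch (the symbols stand in for the field values at a point):

```python
import sympy as sp

n_tau, n_sigma, omega = sp.symbols('n_tau n_sigma omega', positive=True)

# world-sheet metric in 1+1 form, eq. (gamma)
gamma = sp.Matrix([[-n_tau**2 + n_sigma**2/omega, n_sigma],
                   [n_sigma, omega]])

det = sp.simplify(gamma.det())
assert sp.simplify(det + n_tau**2*omega) == 0
# hence sqrt(-det gamma) = n_tau * sqrt(omega)
assert sp.simplify(sp.sqrt(-det) - n_tau*sp.sqrt(omega)) == 0
```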
We consider a target space-time
$\mathcal{M}$ labeled with coordinates
$t=x^0,\bx=(x^1,\dots,x^D)$ with the metric
in $D+1$ form
\begin{equation}
g_{00}=-N^2+N_ih^{ij}N_j \ , \quad
g_{0i}=N_i \ , \quad g_{ij}=h_{ij} \ ,
\quad \det g=-N^2\det h \
\end{equation}
with inverse
\begin{equation}
g^{00}=-\frac{1}{N^2} \ , \quad
g^{0i}=\frac{N^i}{N^2} \ , \quad
g^{ij}=h^{ij}-\frac{N^i N^j}{N^2} \ .
\end{equation}
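These $D+1$ relations can be checked with a short symbolic computation. The sketch below (sympy, one spatial dimension for simplicity) verifies that the inverse components, with $g^{0i}=N^i/N^2$, satisfy $g^{MN}g_{NK}=\delta^M_K$:

```python
import sympy as sp

N, Nu, h = sp.symbols('N N1 h11', positive=True)  # lapse N, shift N^1, metric h_{11}
N1 = h*Nu            # N_1 = h_{11} N^1
hinv = 1/h           # h^{11}

g = sp.Matrix([[-N**2 + N1*hinv*N1, N1],
               [N1, h]])
ginv = sp.Matrix([[-1/N**2, Nu/N**2],
                  [Nu/N**2, hinv - Nu*Nu/N**2]])

assert sp.simplify(g*ginv) == sp.eye(2)
```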
Note that the dynamics of the target space metric $g_{MN}$
is governed by the Ho\v{r}ava-Lifshitz gravity action.
Further, $A,B$ and corresponding
conjugate momenta $P_A,P_B$ are auxiliary modes
with Poisson brackets
\begin{equation}
\pb{A(\sigma),P_A(\sigma')}=
\delta(\sigma-\sigma') \ , \quad
\pb{B(\sigma),P_B(\sigma')}=
\delta(\sigma-\sigma') \ .
\end{equation}
Finally $\lambda_\tau,\lambda_\sigma, v_A,v_B$ are Lagrange multipliers
that ensure that $\pi^\tau\approx 0 \ , \pi^\sigma(\sigma)\approx 0 \ ,
P_A(\sigma)\approx 0 \ , P_B(\sigma)\approx 0$ are
primary constraints of the theory. Note that, following the arguments
given in \cite{Kluson:2010aw}, we presume that
$n_\tau$ depends on $\tau$ only.
Then
the requirement of the preservation of the primary constraints
implies the following secondary ones
\begin{eqnarray}
\partt \pi^\tau&=&\pb{\pi^\tau,H}=
-\int d\sigma \mH_\tau\approx 0 \ , \nonumber \\
\partt \pi^\sigma&=&\pb{\pi^\sigma,H}=
-\frac{1}{\omega}\mH_\sigma\approx 0 \ , \nonumber \\
\partt P_A&=&\pb{P_A,H}=B-\sqrt{\omega}F'(A)\equiv
G_A\approx 0 \ , \nonumber \\
\partt P_B&=&\pb{P_B,H}=-
\left( \pi\alpha' \frac{1}{\omega}
p_i h^{ij}p_j+
\frac{(z-1)}{2\sqrt{\omega}}
\pi^\omega \omega^2 \pi^\omega +\right. \nonumber \\
&+& \left.\frac{1}
{4\pi\alpha'}\frac{1}{\omega}
(\parts x^i+N^i\parts x^0)
h_{ij}(\parts x^j+N^j\parts x^0)-A\right)\equiv G_B\approx 0 \ .
\nonumber \\
\end{eqnarray}
It is easy to show that the secondary constraints $G_A\approx 0,G_B\approx 0$
together with the primary ones $P_A\approx 0, P_B\approx 0$ form
the collection of the second class constraints. Solving the
system of the second class constraints
$(P_A,P_B,G_A,G_B)$ we express $A,B$ as
functions of the canonical variables $x^M,p_M$
and we find a non-linear form of the Hamiltonian
constraint. This procedure was studied extensively
in \cite{Kluson:2010aw}, so we skip
the details and refer the interested reader to
that reference.
In contrast to the case of the Hamiltonian
formulation of the relativistic string
we see that the Hamiltonian constraint
(\ref{defHtausigma}) contains
a kinetic term for $\pi^\omega$.
Arguments for why
there are non-trivial dynamics for the
spatial part of the metric $\omega$
were given in \cite{Kluson:2010aw}
and we briefly recapitulate them here.
Let us imagine for the time
being that $\pi^\omega$ is a primary constraint
of the theory. However, we see from
(\ref{defH}) that the Hamiltonian constraint
depends on
$\omega$ in a non-trivial way. Then if
$\pi^\omega\approx 0$ were a primary
constraint of the theory we would find
an overconstrained theory, due to the
requirement of the consistency of this
constraint with the time evolution of
the system. Then, in order to avoid
imposing an additional constraint on the
system, we demand
that $\omega$ is a dynamical
mode with a kinetic term in the action.
We see that the Hamiltonian constraint
(\ref{defHtausigma}) is characterized by
the presence of two functions $F$
and $G$. As in \cite{Capasso:2009fh,Suyama:2009vy,Romero:2009qs,Mosaffa:2010ym,
Kiritsis:2009rx,Rama:2009xc}
we presume that $F(A)$ has
the form $F(A)=A+
\sum_{n=2}^{z} \lambda_n A^n$ with
$z$ being the critical exponent of the
Ho\v{r}ava-Lifshitz gravity. It is
believed that in the IR limit
Ho\v{r}ava-Lifshitz gravity reduces
to ordinary General Relativity
when $z=1$. Let us now study the properties
of the Hamiltonian constraint
(\ref{defHtausigma}) in this limit. Firstly
we see that the kinetic
term for the spatial part of the
metric vanishes for $z\rightarrow 1$.
Since we consider a more general
case than in \cite{Kluson:2010aw},
in which $x^0$ depends on $\sigma$, it
is natural to add
to (\ref{defHtausigma}) the term
$G(-\frac{1}{4\pi \alpha'\omega}N^2
\parts x^0\parts x^0)$.
We assume that
$G(A)=A+\sum_{n=2}^z \omega_n A^n$,
where $\omega_n$ are constants.
Then it is easy to see that in the limit $
z\rightarrow 1$ we have $F(A)\rightarrow A \ ,
\quad G(A)\rightarrow A$ and hence
we find that for $z\rightarrow 1$ the Hamiltonian
constraint given in (\ref{defHtausigma})
takes the following form
\begin{eqnarray}\label{Hamconflat}
\mH_\tau&=&-\pi\alpha' \frac{1}{\sqrt{\omega}
N^2}
(p_0-N^ip_i)^2
-\frac{1}{4\pi\alpha'\sqrt{\omega}}
N^2\parts x^0 \parts x^0 +\nonumber
\\
&+&B\left( \pi\alpha' \frac{1}{\omega}
p_i h^{ij}p_j+
\frac{1}
{4\pi\alpha'}\frac{1}{\omega}
(\parts x^i+N^i\parts x^0)
h_{ij}(\parts x^j+N^j\parts x^0)-A\right)+\sqrt{\omega}A \
\ . \nonumber \\
\end{eqnarray}
Solving the second class constraints
$P_A,P_B,G_A,G_B$ is now equivalent to
integrating out $A$ and $B$ from
(\ref{Hamconflat}) and we find
that the Hamiltonian constraint
(\ref{Hamconflat}) reduces to
\begin{eqnarray}
\mH_\tau&=&-\pi\alpha' \frac{1}{\sqrt{\omega}
N^2}
(p_0-N^ip_i)^2+
\pi\alpha' \frac{1}{\sqrt{\omega}}
p_i h^{ij}p_j-\nonumber \\
&-&\frac{1}{4\pi\alpha'\sqrt{\omega}}
N^2\parts x^0 \parts x^0 +
\frac{1}
{4\pi\alpha'}\frac{1}{\sqrt{\omega}}
(\parts x^i+N^i\parts x^0)
h_{ij}(\parts x^j+N^j\parts x^0)=
\nonumber \\
&=& -\pi\alpha' \frac{1}{\sqrt{\omega}}
p_M g^{MN}p_N
-\frac{1}{4\pi\alpha'\sqrt{\omega}}
\partial_\sigma x^M g_{MN}\partial_\sigma x^N \ .
\nonumber \\
\end{eqnarray}
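The final covariant rewriting uses the identities $p_M g^{MN}p_N=-\frac{1}{N^2}(p_0-N^ip_i)^2+p_ih^{ij}p_j$ and $\parts x^M g_{MN}\parts x^N=-N^2\parts x^0\parts x^0+(\parts x^i+N^i\parts x^0)h_{ij}(\parts x^j+N^j\parts x^0)$, which can be verified symbolically; here is a sympy sketch with one spatial dimension:

```python
import sympy as sp

N, Nu, h = sp.symbols('N N1 h11', positive=True)   # lapse, shift N^1, metric h_{11}
p0, p1, y0, y1 = sp.symbols('p0 p1 y0 y1')         # momenta and sigma-derivatives of x^M

N1 = h*Nu                                          # N_1 = h_{11} N^1
g = sp.Matrix([[-N**2 + N1*Nu, N1], [N1, h]])      # ADM form of g_{MN}
ginv = g.inv()
p = sp.Matrix([p0, p1]); y = sp.Matrix([y0, y1])

# momentum identity
lhs_p = (p.T*ginv*p)[0]
rhs_p = -(p0 - Nu*p1)**2/N**2 + p1*(1/h)*p1
assert sp.simplify(lhs_p - rhs_p) == 0

# gradient identity
lhs_x = (y.T*g*y)[0]
rhs_x = -N**2*y0**2 + (y1 + Nu*y0)*h*(y1 + Nu*y0)
assert sp.simplify(lhs_x - rhs_x) == 0
```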
This is clearly the Hamiltonian constraint
of the Polyakov action. We see
that the LBS action reduces to the Polyakov action
in the IR limit of the target space-time.
However, we should
stress a crucial point in the formulation
of LBS theory that resembles
the analogous problem in the
formulation of Ho\v{r}ava-Lifshitz gravity.
Explicitly, we argued in
\cite{Kluson:2010aw}
that LBS theory is
well defined only when the lapse function
$n_\tau$ depends on $\tau$ only, i.e. $n_\tau=n_\tau(\tau)$.
However, then we see that in the IR limit of the
target Ho\v{r}ava-Lifshitz gravity the LBS theory
reduces to the Polyakov action where
the lapse depends on $\tau$ only,
and hence the full diffeomorphism invariance is
not restored.
The next step is to find the Lagrangian
corresponding to the Hamiltonian
(\ref{defH}). To do this we determine
the time derivatives of $x^M,A,B,\omega$
\begin{eqnarray}
\partt x^0&=&\pb{x^0,H}=
-\frac{2\pi\alpha'}{N^2\sqrt{\omega}}(p_0-N^i p_i)
n_\tau +n^\sigma \parts x^0 \ , \nonumber \\
\partt
x^i&=& \frac{2\pi\alpha'}{N^2\sqrt{\omega}
}N^i(p_0-N^jp_j)n_\tau+
2\pi\alpha' B
\frac{1}{\omega}
h^{ij}p_jn_\tau+n^\sigma
\parts x^i \ , \nonumber \\
\partt A&=&\pb{A,H}=v_A \ , \quad
\partt B=\pb{B,H}=v_B \ , \nonumber \\
\partt \omega&=&\pb{\omega,H}=
n_\tau \frac{B(z-1)\omega^2}{\sqrt{\omega}}\pi^\omega
+2\nabla_\sigma n_\sigma \ .\nonumber \\
\end{eqnarray}
It is convenient to introduce the following
object
\begin{eqnarray}
K_\sigma&=&\frac{1}{n_\tau} (\partt
\omega-2\nabla_\sigma n_\sigma) \ .
\nonumber \\
\end{eqnarray}
Then it is a simple task to find the corresponding
Lagrangian
\begin{eqnarray}\label{mlAB}
\mL&=& p_M \partt x^M+\partt A P_A+\partt B P_B+
\partt \omega \pi^\omega-\mH= \nonumber \\
&=&-\frac{\sqrt{\omega}N^2}{4\pi\alpha'}\frac{1}{n_\tau}
(\partt x^0-n^\sigma\parts
x^0)^2-n_\tau \sqrt{\omega} G\left(
-\frac{1}{4\pi\alpha'\omega} N^2\parts x^0
\parts x^0\right)+
\nonumber \\
&+&\omega n_\tau \frac{1}{B}
\left(\frac{1}{4\pi\alpha'}\frac{1}{n^2_\tau}
(V^i_\tau-n^\sigma V^i_\sigma)
h_{ij}(V^j_\tau-n^\sigma V^j_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}K_\sigma\right)-
\nonumber \\
&-& Bn_\tau\left(\frac{1}{2\pi\alpha'\omega}
V^i_\sigma h_{ij}V^j_\sigma-A\right)-\sqrt{\omega} n_\tau F(A) \ ,
\nonumber \\
\end{eqnarray}
where
\begin{equation}
V_\tau^i=\partt x^i+N^i\partt x^0 \ ,
\quad
V_\sigma^i=\parts x^i+N^i\parts x^0 \ .
\end{equation}
Observe that this theory is well defined in
the case $z\rightarrow 1$ on the condition that
$K_\sigma=0$, which is in agreement with our
requirement that this theory reduces to the
Polyakov action in this limit, with the exception
that $\gamma_{00}$ depends on $\tau$ only.
Finally we integrate out $A$ and
$B$ from (\ref{mlAB}). The equation of motion for $A$
implies
\begin{equation}\label{BA}
B-\sqrt{\omega}F'(A)=0
\end{equation}
while the equation of motion for $B$
implies
\begin{eqnarray}\label{eqA}
&-&\frac{\omega }{B^2}
\left( \frac{1}{4\pi\alpha' n_\tau^2}
(V_\tau^i-n^\sigma V^i_\sigma) h_{ij}
(V_\tau^j-n^\sigma V^j_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}
K_\sigma\right)
-\nonumber \\
&-&
\left(\frac{1}{2\pi\alpha'\omega}
V^i_\sigma h_{ij}V^j_\sigma-A\right)=0 \ .
\nonumber \\
\end{eqnarray}
Inserting (\ref{BA}) into (\ref{eqA})
we find the equation for $A$ in the form
\begin{eqnarray}
&-&\frac{1}{F'^2(A)
}\left(\frac{1}{4\pi\alpha' n_\tau^2}
(V_\tau^i-n^\sigma V^i_\sigma) h_{ij}
(V_\tau^j-n^\sigma V^j_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}
K_\sigma\right)-\nonumber \\
&-&
\left(\frac{1}{2\pi\alpha'\omega}
V^i_\sigma h_{ij}V^j_\sigma-A\right)=0 \
\nonumber \\
\end{eqnarray}
that in principle allows us to find
$A$ as
\begin{equation}
A=\Psi\left( \frac{1}{4\pi\alpha'n_\tau^2}
(V^i_\tau-n^\sigma V^i_\sigma)
h_{ij}(V^j_\tau-n^\sigma V^j_\sigma)+
\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}
K_\sigma,
\frac{1}{2\pi\alpha'\omega} V_\sigma^i
h_{ij}V^j_\sigma\right) \ .
\end{equation}
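To illustrate that the implicit equation above really determines $A$, here is a small numerical sketch. We take $F(A)=A+\lambda_2A^2$ (so $F'(A)=1+2\lambda_2A$), abbreviate the kinetic combination by $K$ and the $\sigma$-gradient combination by $V$, and solve $A=V+K/F'(A)^2$ by bisection; the values of $\lambda_2$, $K$ and $V$ are purely illustrative, not taken from the text:

```python
# Bisection solve of the implicit equation A = V + K / F'(A)^2,
# a sketch with illustrative (hypothetical) numbers.
lam2 = 0.1          # lambda_2 in F(A) = A + lambda_2 A^2
K = 1.0             # kinetic combination (assumed positive)
V = 0.5             # sigma-gradient combination

def Fprime(A):
    return 1.0 + 2.0*lam2*A

def residual(A):
    return A - V - K/Fprime(A)**2

lo, hi = V, V + K   # residual(V) < 0, residual(V+K) >= 0 since F'(A) >= 1 there
for _ in range(100):
    mid = 0.5*(lo + hi)
    if residual(mid) > 0:
        hi = mid
    else:
        lo = mid
A_sol = 0.5*(lo + hi)
assert abs(residual(A_sol)) < 1e-10
```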
Collecting all these results together
we find the Lagrangian density in the form
\begin{eqnarray}\label{mLLBS}
\mL&=&\sqrt{\omega}n_\tau\left[
-\frac{N^2}{4\pi\alpha'}\frac{1}{n_\tau^2}
(\partt x^0-n^\sigma\parts
x^0)^2- G\left(-\frac{1}{4\pi\alpha'\omega} N^2\parts x^0
\parts x^0\right)-\right.
\nonumber \\
&-& \left. F'(\Psi)\left(\frac{1}{2\pi\alpha'\omega}
V^i_\sigma h_{ij}V^j_\sigma-\Psi\right)-2 F(\Psi) \right] \ .
\nonumber \\
\end{eqnarray}
This is the final form of the Lorentz breaking
string theory Lagrangian. In the next section we
study the invariance of the action $S=\int d\tau d\sigma
\mL$ under local and global world-sheet symmetries.
\section{Symmetries of the LBS Action}
We start with the global
transformations from the point
of view of the string world-sheet
theory. These transformations
correspond to the foliation preserving
diffeomorphism of the target space-time
\cite{Horava:2009uw,Horava:2008ih}
\begin{eqnarray}\label{deffpdts}
x'^0(\tau,\sigma)&=&x^0(\tau,\sigma)+f(x^0(\tau,\sigma))
\ ,
\nonumber \\
x'^i(\tau,\sigma)&=&
x^i(\tau,\sigma)+\zeta^i(\tau,\sigma) \ ,
\nonumber \\
\end{eqnarray}
where $f(x^0),\zeta^i(x^0,\bx)$ are infinitesimal parameters.
Note that under these transformations
the metric components transform as
\begin{eqnarray}\label{Ntr}
N'_i(x'^0,\bx')&=& N_i(x^0,\bx)
-N_i(x^0,\bx)
\dot{f}(x^0)-N_j(x^0,\bx)\partial_i
\zeta^j(x^0,\bx)-g_{ij}(x^0,\bx)
\dot{\zeta}^j(x^0,\bx) \ , \nonumber \\
N'^i(x'^0,\bx')
&=&N^i(x^0,\bx)+N^j(x^0,\bx)\partial_j
\zeta^i(x^0,\bx)-
N^i(x^0,\bx)\dot{f}-\dot{\zeta}^i(x^0,\bx)
\ ,
\nonumber \\
N'(x'^0)&=&N(x^0)-N(x^0) \dot{f}(x^0) \ \nonumber \\
\end{eqnarray}
and
\begin{eqnarray}\label{trm}
g'_{ij}(x'^0,\bx')&=&g_{ij}(x^0,\bx)-
g_{il}(x^0,\bx)\partial_j
\zeta^l(x^0,\bx)-\partial_i
\zeta^k(x^0,\bx) g_{kj}(x^0,\bx) \ , \nonumber \\
g'^{ij}(x'^0,\bx')&=& g^{ij}(x^0,\bx)+
\partial_n \zeta^i(x^0,\bx) g^{nj}(x^0,\bx)
+g^{in}(x^0,\bx)
\partial_n \zeta^j(x^0,\bx)
\ . \nonumber \\
\end{eqnarray}
Then it is easy to see that $V_\alpha^i$
transforms under (\ref{deffpdts}) as
\begin{eqnarray}
V'^i_\tau(\tau,\sigma) &=&
V^i_\tau(\tau,\sigma)+\partial_j
\zeta^i(\tau,\sigma)V^j_\tau(\tau,\sigma) \ ,
\nonumber \\
V'^i_\sigma(\tau,\sigma)
&=&V^i_\sigma(\tau,\sigma)+\partial_j \zeta^i(\tau,\sigma)
V_\sigma^j(\tau,\sigma) \ .
\nonumber \\
\end{eqnarray}
Performing the same analysis as in
\cite{Kluson:2010aw}
we find that the
Lagrangian density (\ref{mLLBS})
is invariant under target-space
foliation preserving diffeomorphism
(\ref{deffpdts}).
As the next step we check the
invariance of the action under
world-sheet foliation preserving
diffeomorphism that we define
as the world-sheet transformation
\begin{equation}\label{wsfpd}
\tau'=\tau+f(\tau) \ , \quad
\sigma'=\sigma+\epsilon(\tau,\sigma) \ .
\end{equation}
where $f,\epsilon$ are infinitesimal
parameters.
In the same way as in
\cite{Horava:2008ih} we find that
the world-sheet metric components transform
under (\ref{wsfpd}) as
\begin{eqnarray}
n'_\sigma(\tau',\sigma')&=&
n_\sigma(\tau,\sigma)
-n_\sigma(\tau,\sigma)\parts
\epsilon(\tau,\sigma)
-\partt f(\tau)
n_\sigma(\tau,\sigma)
-\partt \epsilon(\tau,\sigma) \omega(\tau,\sigma)
\ ,
\nonumber \\
n'_\tau(\tau',\sigma')&=&
n_\tau(\tau,\sigma)-n_\tau(\tau,\sigma)
\partt f(\tau) \ ,
\nonumber \\
\omega'(\tau',\sigma')&=&
\omega(\tau,\sigma) -2\parts
\epsilon(\tau,\sigma)
\omega(\tau,\sigma) \ ,
\nonumber \\
n'^\sigma(\tau',\sigma')&=&
n^\sigma(\tau,\sigma)
+n^\sigma(\tau,\sigma)
\parts \epsilon(\tau,\sigma)
-n^\sigma(\tau,\sigma)
\partt f(\tau)-
\partt \epsilon(\tau,\sigma) \ .
\nonumber \\
\end{eqnarray}
Then it is easy to see that
\begin{eqnarray}
d\tau' d\sigma' n'_\tau\sqrt{\omega'}
= d\tau d\sigma n_\tau
\sqrt{\omega} \ .
\nonumber \\
\end{eqnarray}
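This invariance of the measure can be checked to first order in the infinitesimal parameters; in the sympy sketch below, $t$ is a formal bookkeeping parameter multiplying $f$ and $\epsilon$:

```python
import sympy as sp

tau, sig, t = sp.symbols('tau sigma t')
f = sp.Function('f')(tau)
eps = sp.Function('epsilon')(tau, sig)
n = sp.Function('n_tau')(tau)
w = sp.Function('omega')(tau, sig)

# transformed fields (at the transformed point), to first order in t
n_p = n - t*n*sp.diff(f, tau)
w_p = w - 2*t*sp.diff(eps, sig)*w
# Jacobian determinant of (tau, sigma) -> (tau + t f, sigma + t epsilon)
J = (1 + t*sp.diff(f, tau))*(1 + t*sp.diff(eps, sig))

delta = sp.series(J*n_p*sp.sqrt(w_p) - n*sp.sqrt(w), t, 0, 2).removeO()
assert sp.simplify(delta) == 0   # measure invariant at first order
```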
Note that $\Gamma$ transforms under the
world-sheet foliation preserving
diffeomorphism (\ref{wsfpd}) as
\begin{eqnarray}
\Gamma'(\tau',\sigma')=
\Gamma(\tau,\sigma)-\Gamma(\tau,\sigma)
\parts \epsilon(\tau,\sigma)-
\parts^2\epsilon(\tau,\sigma) \ .
\nonumber \\
\end{eqnarray}
Then after some algebra we find that
$K_\sigma$ transforms as
\begin{eqnarray}
K_\sigma'(\tau',\sigma')=
K_\sigma(\tau,\sigma)-
2K_\sigma (\tau,\sigma)
\parts \epsilon(\tau,\sigma) \ . \nonumber \\
\end{eqnarray}
Clearly, the world-sheet
modes $x^M$ are
scalars under (\ref{wsfpd})
\begin{equation}
x'^M(\tau',\sigma')=
x^M(\tau,\sigma) \ .
\end{equation}
Collecting all these results together
and performing the same analysis as in
\cite{Kluson:2010aw}
we can show that the
Lagrangian density
(\ref{mLLBS}) is invariant under
world-sheet foliation preserving diffeomorphism
(\ref{wsfpd}).
\section{T-duality for LBS theory}\label{third}
In this section we analyze the properties of
LBS theory under T-duality transformations.
In other words, we would like to see whether
this theory shares the same properties
as the ordinary string theory action.
For further purposes we again write the Lagrangian
density for LBS theory
\begin{eqnarray}\label{mlABT}
\mL&=&
-\frac{\sqrt{\omega}N^2}{4\pi\alpha'}\frac{1}{n_\tau}
(\partt x^0-n^\sigma\parts
x^0)^2-n_\tau \sqrt{\omega} G\left(
-\frac{1}{4\pi\alpha'\omega} N^2\parts x^0
\parts x^0\right)+
\nonumber \\
&+&\omega n_\tau \frac{1}{B}
\left(\frac{1}{4\pi\alpha'}\frac{1}{n^2_\tau}
(V^i_\tau-n^\sigma V^i_\sigma)
h_{ij}(V^j_\tau-n^\sigma V^j_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}K_\sigma\right)-
\nonumber \\
&-& Bn_\tau\left(\frac{1}{2\pi\alpha'\omega}
V^i_\sigma h_{ij}V^j_\sigma-A\right)-\sqrt{\omega} n_\tau F(A) \ ,
\nonumber \\
\end{eqnarray}
where
\begin{equation}
V_\tau^i=\partt x^i+N^i\partt x^0 \ ,
\quad
V_\sigma^i=\parts x^i+N^i\parts x^0 \ .
\end{equation}
Let us now presume
that the background
possesses an isometry along the $\phi$ direction,
where we have performed the splitting of
the target space coordinates
$x^i=(x^\alpha,\phi), \alpha,\beta=1,\dots,D-1$.
The fact that there is an isometry of the background
along the $\phi$ direction implies that
the action is invariant under the
shift
\begin{equation}
\phi'(\tau,\sigma)
=\phi(\tau,\sigma)+\epsilon \ ,
\end{equation}
where $\epsilon=\mathrm{const}$.
The invariance of the action implies
the existence of the conserved
current
\begin{equation}
\mathcal{J}^\alpha
=\frac{\delta S}{\delta \partial_\alpha \phi}
\ , \quad \partial_\alpha \mathcal{J}^\alpha=0 \ .
\end{equation}
Explicitly, we find
\begin{eqnarray}
\mathcal{J}_\tau&=&
\frac{\omega}{2\pi\alpha'n_\tau B}h_{\phi i}
(V^i_\tau-n^\sigma V^i_\sigma) \ ,
\nonumber \\
\mathcal{J}_\sigma&=&
-\frac{ n_\sigma}{2\pi\alpha'n_\tau B}h_{\phi i}
(V^i_\tau-n^\sigma V^i_\sigma)-
B\frac{n_\tau}{\pi\alpha'\omega}h_{\phi i}V^i_\sigma
\ .
\nonumber \\
\end{eqnarray}
Let us now try to implement the T-duality rules
as in standard bosonic theory. We gauge the
shift symmetry so that $\epsilon\rightarrow
\epsilon(\tau,\sigma)$. Then in order to
ensure the invariance of the Lagrangian
(\ref{mlABT})
we have to introduce the gauge field $a_\alpha$
and replace
\begin{equation}
\partial_\alpha\phi\rightarrow D_\alpha\phi
=\partial_\alpha\phi+a_\alpha \ .
\end{equation}
Note that $a_\alpha$ transforms for
non-constant $\epsilon$ as
\begin{equation}
a_\alpha'(\tau,\sigma)=a_\alpha(\tau,\sigma)-
\partial_\alpha \epsilon(\tau,\sigma) \ .
\end{equation}
Then it is easy to see that
\begin{equation}
(D_\alpha \phi)'=D_\alpha \phi \ .
\end{equation}
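This gauge invariance of the covariant derivative can be confirmed with a short symbolic check (a sympy sketch):

```python
import sympy as sp

tau, sig = sp.symbols('tau sigma')
phi = sp.Function('phi')(tau, sig)
eps = sp.Function('epsilon')(tau, sig)
a_t = sp.Function('a_tau')(tau, sig)
a_s = sp.Function('a_sigma')(tau, sig)

for x, a in [(tau, a_t), (sig, a_s)]:
    D = sp.diff(phi, x) + a                                   # D_alpha phi
    D_prime = sp.diff(phi + eps, x) + (a - sp.diff(eps, x))   # gauged transformation
    assert sp.simplify(D_prime - D) == 0
```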
In the same way we perform the replacement
\begin{eqnarray}
V_\tau^\phi &=&\partt \phi+N^\phi\partt x^0
\rightarrow D_\tau \phi+N^\phi\partt x^0\equiv \tV_\tau^\phi \ ,
\nonumber \\
V_\sigma^\phi&=&\parts \phi+N^\phi\parts x^0
\rightarrow
D_\sigma \phi+N^\phi\parts x^0\equiv \tV_\sigma^\phi \ .
\nonumber \\
\end{eqnarray}
However, we also have to check that the terms containing
$a_\alpha$ are invariant under the world-sheet
foliation preserving diffeomorphism (\ref{wsfpd}).
To do this we presume that $a_\alpha$ transforms
under the world-sheet foliation preserving
diffeomorphism (\ref{wsfpd}) as
\begin{eqnarray}
a'_\tau(\tau',\sigma')&=&
a_\tau(\tau,\sigma)-a_\tau(\tau,\sigma)
\dot{f}(\tau)-a_\sigma(\tau,\sigma)
\partt \epsilon(\tau,\sigma) \ , \nonumber \\
a'_\sigma(\tau',\sigma')&=&
a_\sigma(\tau,\sigma)-a_\sigma(\tau,\sigma)
\parts \epsilon(\tau,\sigma) \ .
\nonumber \\
\end{eqnarray}
Then it is easy to see that the covariant derivatives transform
as \begin{eqnarray}
D'_\tau\phi(\tau',\sigma')&=&
D_\tau\phi(\tau,\sigma)-D_\tau\phi(\tau,\sigma)
\dot{f}(\tau)-D_\sigma\phi(\tau,\sigma)
\partt \epsilon(\tau,\sigma) \ , \nonumber \\
D'_\sigma\phi(\tau',\sigma')&=&
D_\sigma\phi(\tau,\sigma)-D_\sigma \phi(\tau,\sigma)
\parts \epsilon(\tau,\sigma) \ . \nonumber \\
\end{eqnarray}
As the next step we introduce $f_{\alpha\beta}$
defined as
\begin{equation}
f_{\tau\sigma}=\partt a_\sigma-
\parts a_\tau
\end{equation}
that transforms under the foliation preserving diffeomorphism
as
\begin{eqnarray}
f'_{\tau\sigma}(\tau',\sigma')=
f_{\tau\sigma}(\tau,\sigma)
-f_{\tau\sigma}(\tau,\sigma)\dot{f}(\tau)
-f_{\tau\sigma}(\tau,\sigma)
\parts \epsilon(\tau,\sigma) \ .
\nonumber \\
\end{eqnarray}
\end{eqnarray}
Then it is easy to see that $d\tau d\sigma
f_{\tau\sigma}$ is invariant under
(\ref{wsfpd}).
Collecting all these terms
together we find the following
Lagrangian density, invariant under
the foliation preserving diffeomorphism
(\ref{wsfpd})
\begin{eqnarray}\label{tildeL}
\tilde{\mL}&=&-\frac{\sqrt{\omega}n_\tau N^2}{4\pi\alpha'}\frac{1}{n_\tau^2}
(\partt x^0-n^\sigma\parts
x^0)^2-n_\tau \sqrt{\omega} G\left(-\frac{1}{4\pi\alpha'\omega} N^2\parts x^0
\parts x^0\right)+
\nonumber \\
&+&\omega n_\tau\frac{1}{B}
\left[\frac{1}{4\pi\alpha'n^2_\tau}
(\tV^\phi_\tau-n^\sigma \tV^\phi_\sigma)
h_{\phi\phi}(\tV^\phi_\tau-n^\sigma \tV^\phi_\sigma)
+\frac{2}{4\pi\alpha'n^2_\tau}
(\tV^\phi_\tau-n^\sigma \tV^\phi_\sigma)
h_{\phi \alpha}(V^\alpha_\tau-n^\sigma V^\alpha_\sigma)+\right.
\nonumber \\
&+& \left.\frac{1}{4\pi\alpha'n^2_\tau}(V^\alpha_\tau-n^\sigma V^\alpha_\sigma)
h_{\alpha\beta}(V^\beta_\tau-n^\sigma V^\beta_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}K_\sigma\right]
-\nonumber \\
&-&Bn_\tau\left(\frac{1}{2\pi\alpha'\omega}
\left[\tV^\phi_\sigma h_{\phi\phi}\tV^\phi_\sigma
+2\tV^\phi_\sigma h_{\phi \alpha}V^\alpha_\sigma+
V^\alpha_\sigma h_{\alpha\beta}V^\beta_\sigma\right]
-A\right)-\sqrt{\omega} n_\tau F(A)
+\tphi f_{\tau\sigma} \ ,
\nonumber \\
\end{eqnarray}
where $\tphi$ is the Lagrange multiplier that ensures
that $a_\alpha$ is a non-dynamical field. Note
that $\tphi$ transforms as a scalar under (\ref{wsfpd}).
The gauge invariance of the Lagrangian
density (\ref{tildeL}) can be fixed by
imposing the condition $\phi=0$ that
implies
\begin{equation}\label{Dtau}
D_\tau\phi=a_\tau \ , \quad
D_\sigma\phi=a_\sigma \ .
\end{equation}
Inserting (\ref{Dtau}) into (\ref{tildeL})
we obtain
\begin{eqnarray}\label{Lhat}
\mL&=&-\frac{\sqrt{\omega}N^2}{4\pi\alpha'}\frac{1}{n_\tau}
(\partt x^0-n^\sigma\parts
x^0)^2-n_\tau \sqrt{\omega} G\left(-\frac{1}{4\pi\alpha'\omega} N^2\parts x^0
\parts x^0\right)+
\nonumber \\
&+&\frac{\omega n_\tau}{B}
\left[
\frac{1}{4\pi\alpha'n_\tau^2}
(\hV^\phi_\tau-n^\sigma \hV^\phi_\sigma)
h_{\phi\phi}(\hV^\phi_\tau-n^\sigma \hV^\phi_\sigma)
+\frac{2}{4\pi\alpha'n_\tau^2}(\hV^\phi_\tau-n^\sigma \hV^\phi_\sigma)
h_{\phi \alpha}(V^\alpha_\tau-n^\sigma V^\alpha_\sigma)+\right.
\nonumber \\
&+& \left.
\frac{1}{4\pi\alpha'n_\tau^2}
(V^\alpha_\tau-n^\sigma V^\alpha_\sigma)
h_{\alpha\beta}(V^\beta_\tau-n^\sigma V^\beta_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}K_\sigma\right]
-\nonumber \\
&-& Bn_\tau\left(\frac{1}{2\pi\alpha'\omega}
[\hV^\phi_\sigma h_{\phi\phi}\hV^\phi_\sigma
+2\hV^\phi_\sigma h_{\phi \alpha}V^\alpha_\sigma+
V^\alpha_\sigma h_{\alpha\beta}V^\beta_\sigma]
-A\right)-\sqrt{\omega} n_\tau F(A)
+\tphi f_{\tau\sigma}
\ ,
\nonumber \\
\end{eqnarray}
where
\begin{equation}
\hat{V}^\phi_\tau=
a_\tau+N^\phi\partt x^0 \ ,
\quad
\hat{V}^\phi_\sigma=
a_\sigma+N^\phi\parts x^0 \ .
\end{equation}
Now it is easy to see that the
equation of motion for $\tilde{\phi}$
implies
\begin{equation}
f_{\tau\sigma}=0
\end{equation}
that can be solved as
\begin{equation}
a_\tau=\partt \theta \ , \quad
a_\sigma=\parts \theta \ .
\end{equation}
Inserting this result into (\ref{Lhat}) we
recover the original Lagrangian density
after the replacement $\theta\rightarrow \phi$.
However, the equations of motion for $a_\alpha$
that follow from (\ref{Lhat})
take a more complicated form
\begin{eqnarray}
& &\frac{\omega}{2\pi\alpha'B n_\tau}
h_{\phi\phi}(\hV^\phi_\tau-n^\sigma \hV^\phi_\sigma)
+\frac{\omega}{2\pi\alpha'B n_\tau}
h_{\phi\alpha}(\hV_\tau^\alpha-n^\sigma \hV_\sigma^\alpha)
+\parts \tphi=0 \ ,
\nonumber \\
& &-\frac{\omega n^\sigma}{2\pi\alpha'B n_\tau}
h_{\phi\phi}(\hV^\phi_\tau-n^\sigma \hV^\phi_\sigma)
-\frac{\omega n^\sigma}{2\pi\alpha' B n_\tau}
h_{\phi\alpha}(\hV^\alpha_\tau-n^\sigma \hV^\alpha_\sigma)
-\nonumber \\
&& -\frac{B n_\tau}{\pi\alpha'\omega}
(h_{\phi\phi}\hV_\sigma^\phi+h_{\phi\alpha} V^\alpha_\sigma)
-\partt \tphi=0 \ .
\nonumber \\
\end{eqnarray}
Solving these equations for
$a_\alpha$ we find
\begin{eqnarray}
a_\sigma
&=&-\frac{\pi \alpha'\omega}{h_{\phi\phi}
B n_\tau}(\partt \tphi-n^\sigma \parts \tphi)
-\frac{1}{h_{\phi\phi}}
(h_{\phi\phi}N^\phi \parts x^0+h_{\phi\alpha}
V^\alpha_\sigma) \ , \nonumber \\
a_\tau&=&
-\frac{n_\sigma\pi\alpha'}{Bh_{\phi\phi}n_\tau}
(\partt \tphi-n^\sigma \parts\tphi)-
\frac{2\pi\alpha'B}{\omega h_{\phi\phi}}
n_\tau \parts \tphi
-N^\phi\partt x^0-\frac{h_{\phi\alpha}}{h_{\phi\phi}}
V^\alpha_\tau \ . \nonumber \\
\end{eqnarray}
Inserting these results into the Lagrangian
density (\ref{Lhat}) we obtain the Lagrangian
density for T-dual theory
\begin{eqnarray}\label{LagTdual}
\mL&=&-\frac{\sqrt{\omega}n_\tau N^2}{4\pi\alpha'}\frac{1}{n_\tau^2}
(\partt x^0-n^\sigma\parts
x^0)^2-n_\tau \sqrt{\omega}
G\left(-\frac{1}{4\pi\alpha'\omega}N^2
\parts x^0 \parts x^0\right)+\nonumber \\
&+&\frac{\omega n_\tau }{B}
\left[\frac{1}{4\pi\alpha'n^2_\tau }
(\bV^i_\tau-n^\sigma \bV^i_\sigma)
\hat{h}_{ij}(\bV^j_\tau-n^\sigma \bV^j_\sigma)
+\frac{1}{2(z-1)}K_\sigma \frac{1}{\omega^2}
K_\sigma\right]-
\nonumber \\
&-& Bn_\tau\left(\frac{1}{2\pi\alpha'\omega}
\bV^i_\sigma \hat{h}_{ij}\bV^j_\sigma-A\right)-\sqrt{\omega} n_\tau F(A)+
\nonumber \\
&+&\frac{1}{\sqrt{2}\pi\alpha'}
N^\phi(\parts x^0 \partt \tphi-
\partt x^0\parts \tphi)+
\frac{1}{\sqrt{2}
\pi\alpha'}
\frac{h_{\phi\alpha}}{h_{\phi\phi}}
(\bV^\alpha_\sigma \partt \tphi-\bV^\alpha_\tau \parts \tphi) \ ,
\nonumber \\
\end{eqnarray}
where
\begin{eqnarray}
\bV_\alpha^\phi=\partial_\alpha\tphi+
\hat{N}^\phi\partial_\alpha x^0 \ ,
\quad \bV_\alpha^\alpha=\partial_\alpha x^\alpha+
\hat{N}^\alpha\partial_\alpha x^0 \ .
\nonumber \\
\end{eqnarray}
Note that the T-dual lapse, shift and metric
components take the form
\begin{eqnarray}\label{Tdualmet}
\hat{h}_{\alpha\beta}&=&h_{\alpha\beta}
-\frac{h_{\alpha\phi}h_{\phi \beta}}{h_{\phi\phi}} \ ,
\quad \hat{h}_{\phi\alpha}=0 \ ,
\nonumber \\
\hN&=&N \ , \quad \hat{N}^\phi=0 \ , \quad \hN^\alpha=N^\alpha \ . \nonumber \\
\end{eqnarray}
Note that the T-dual metric written in
the $D+1$ formalism takes the form
\begin{equation}
\hg_{00}=-\hN^2+\hN_i\hh^{ij}\hN_j \ , \quad
\hg_{0i}=\hN_i \ , \quad \hg_{ij}=\hh_{ij} \ ,
\end{equation}
with inverse
\begin{equation}
\hg^{00}=-\frac{1}{\hN^2} \ , \quad
\hg^{0i}=\frac{\hN^i}{\hN^2} \ , \quad
\hg^{ij}=\hh^{ij}-\frac{\hN^i \hN^j}{\hN^2} \ .
\end{equation}
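As a quick numerical cross-check of this standard ADM-type inversion (not part of the derivation), one can build the metric from illustrative lapse, shift and spatial-metric values and verify that it multiplies with the claimed inverse to the identity:

```python
# Numerical check of the D+1 (ADM) form of the metric and its inverse:
# g_00 = -N^2 + N_i h^{ij} N_j, g_0i = N_i, g_ij = h_ij, with inverse
# g^{00} = -1/N^2, g^{0i} = N^i/N^2, g^{ij} = h^{ij} - N^i N^j / N^2.
# Illustrative values; two spatial dimensions for brevity.
N = 1.3                       # lapse
Nu = [0.2, -0.4]              # shift N^i (upper index)
h = [[1.5, 0.3],
     [0.3, 2.0]]              # spatial metric h_ij

Nl = [sum(h[i][j] * Nu[j] for j in range(2)) for i in range(2)]  # N_i

g = [[-N**2 + sum(Nl[i] * Nu[i] for i in range(2)), Nl[0], Nl[1]],
     [Nl[0], h[0][0], h[0][1]],
     [Nl[1], h[1][0], h[1][1]]]

det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
hinv = [[h[1][1] / det, -h[0][1] / det],
        [-h[1][0] / det, h[0][0] / det]]                         # h^{ij}

ginv = [[-1 / N**2, Nu[0] / N**2, Nu[1] / N**2],
        [Nu[0] / N**2] + [hinv[0][j] - Nu[0] * Nu[j] / N**2 for j in range(2)],
        [Nu[1] / N**2] + [hinv[1][j] - Nu[1] * Nu[j] / N**2 for j in range(2)]]

prod = [[sum(g[i][k] * ginv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]    # should be the 3x3 identity
```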
From (\ref{Tdualmet})
we see that in the T-dual theory $\hg_{\phi 0}=
\hg_{\phi \alpha}=0$
and also
\begin{equation}
\hh^{\phi\phi}=\frac{1}{\hh_{\phi\phi}} \ ,
\quad
\hh^{\phi\alpha}=0 \ .
\end{equation}
Then with the help of (\ref{Tdualmet})
we find
\begin{eqnarray}\label{gTdual}
\hg_{00}
&=&-\hN^2+\hN^\alpha \hh_{\alpha\beta}\hN^\beta+
\hN^\phi \hh_{\phi\phi}\hN^\phi
=g_{00}-\frac{g_{0\phi}g_{\phi 0}}
{g_{\phi\phi}} \ .
\nonumber \\
\end{eqnarray}
Finally we consider the last term in
(\ref{LagTdual})
\begin{eqnarray}
& &\frac{1}{\sqrt{2}\pi\alpha'}
N^\phi(\parts x^0 \partt \tphi-
\partt x^0\parts \tphi)+
\frac{h_{\phi\alpha}}{\sqrt{2}
\pi\alpha'h_{\phi\phi}}
(V^\alpha_\sigma \partt \tphi-V^\alpha_\tau \parts \tphi)=
\nonumber \\
&=&-\frac{1}{\sqrt{2}\pi\alpha'}
\frac{N_\phi}{h_{\phi\phi}}
(\partt x^0\parts \tphi-\parts x^0 \partt \tphi)-
\frac{h_{\phi\alpha}}{\sqrt{2}
\pi\alpha'h_{\phi\phi}}
(\partt x^\alpha \parts \tphi-\parts x^\alpha \partt \tphi)=
\nonumber \\
&=&\frac{1}{\sqrt{2}\pi\alpha'}
\hat{b}_{0\phi}
(\partt x^0\parts \tphi-\parts x^0 \partt \tphi)+
\frac{1}{\sqrt{2}
\pi\alpha'}
\hat{b}_{\alpha\phi}
(\partt x^\alpha \parts \tphi-\parts x^\alpha \partt \tphi) \ ,
\nonumber \\
\end{eqnarray}
where
\begin{equation}\label{TdualB}
\hat{b}_{0\phi}=-\frac{N_\phi}{h_{\phi\phi}}=
-\frac{g_{0\phi}}{g_{\phi\phi}} \ ,
\quad
\hat{b}_{\alpha\phi}=
-\frac{h_{\phi\alpha}}{h_{\phi\phi}}
=-\frac{g_{\phi\alpha}}{g_{\phi\phi}} \ .
\end{equation}
We see that the relations between the original
and T-dual background fields given in
(\ref{Tdualmet}), (\ref{gTdual}) and (\ref{TdualB})
exactly coincide
with the standard Buscher rules
\cite{Buscher:1985kb,Buscher:1987sk,Buscher:1987qj} relating the original
and T-dual background components (see also
\cite{Bergshoeff:1995as,Breckenridge:1996tt})
\begin{eqnarray}
\hat{g}_{\phi\phi}&=&\frac{1}{g_{\phi\phi}} \ , \quad
\hat{g}_{\phi 0}=\frac{b_{\phi 0}}{g_{\phi\phi}} \ , \quad
\hat{g}_{\phi\alpha}=\frac{b_{\phi\alpha}}{g_{\phi\phi}} \ ,
\nonumber \\
\hat{g}_{00}&=&g_{00}-\frac{g_{0\phi}g_{\phi 0}}{g_{\phi\phi}} \ ,
\quad
\hat{g}_{\alpha\beta}=g_{\alpha\beta}-\frac{g_{\alpha\phi}
g_{\phi\beta}}{g_{\phi\phi}} \ , \nonumber \\
\hat{b}_{\phi 0}&=&\frac{g_{\phi 0}}{g_{\phi\phi}} \ ,
\quad \hat{b}_{0\phi}=-\frac{g_{\phi 0}}{g_{\phi\phi}} \ ,
\quad
\hat{b}_{\phi\alpha}=\frac{g_{\phi\alpha}}{g_{\phi\phi}} \ ,
\quad \hat{b}_{\alpha\phi}=-\frac{g_{\alpha\phi}}{g_{\phi\phi}} \ .
\nonumber \\
\end{eqnarray}
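A useful consistency check is that the Buscher map is an involution: applying the rules twice returns the original background. A minimal numerical sketch for the $(\phi,0)$ block (using the standard rule $\hat g_{00}=g_{00}-(g_{0\phi}^2-b_{0\phi}^2)/g_{\phi\phi}$ in the presence of a $b$-field; illustrative component values):

```python
# Buscher rules on the (phi,0) block, as a map on (g_pp, g_p0, b_p0, g_00);
# applying the map twice should give back the starting background.
def buscher(g_pp, g_p0, b_p0, g_00):
    return (1.0 / g_pp,                              # g'_pp = 1/g_pp
            b_p0 / g_pp,                             # g'_p0 = b_p0/g_pp
            g_p0 / g_pp,                             # b'_p0 = g_p0/g_pp
            g_00 - (g_p0**2 - b_p0**2) / g_pp)       # g'_00

bg = (2.5, 0.7, 0.3, -1.2)       # illustrative (g_pp, g_p0, b_p0, g_00)
bg2 = buscher(*buscher(*bg))     # double dual
```

In the case considered above the original $b$-field vanishes, and the dual $\hat b_{0\phi}$ of (\ref{TdualB}) is exactly what the first application of this map generates.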
\vskip 5mm
\noindent {\bf
Acknowledgements:}
J. K. would like to thank CERN PH-TH for
generous hospitality and financial
support during the course of this work. J.K. is
also supported by
the Czech Ministry of
Education under Contract No. MSM
0021622409. \vskip 5mm
\section{Introduction}
Exploration of the quantum chromodynamics (QCD) phase diagram at finite temperature and density is one of
the most challenging subjects in particle and nuclear physics. As a first-principle method, lattice QCD
(LQCD) simulations yield many meaningful results at vanishing baryon chemical potential
(see~\cite{DElia:2018fjp} and references therein). However, they are still unavailable in the region of
nonzero real chemical potential because of the well-known sign problem~\cite{Kogut}. To evade this difficulty, various methods
have been developed~\cite{Z.Fodor1,Z.Fodor2,C. R. Allton,S. Ejiri,Forcrand,Philipsen}. One useful approach is
the analytic continuation from imaginary to real chemical potential~\cite{Forcrand,Philipsen,M. D'Elia}, in
which the fermion determinant is real and thus free from the sign problem.
Introducing an imaginary chemical potential $\mu_{I}=i\theta T$ in QCD corresponds to replacing the fermion
anti-periodic boundary condition (ABC) by the twisted one. In this case, the partition function satisfies
$Z_{QCD}(\theta)=Z_{QCD}(\theta+2\pi/3)$, which is called the Roberge-Weiss (RW) periodicity~\cite{Weiss}.
Since the $\mathbb{Z}_{3}$ symmetry is broken by dynamical quarks, the effective thermal potentials
$\Omega_\phi\ (\phi=0,\pm2\pi/3)$ of the three $\mathbb{Z}_{3}$ sectors are shifted by $2\pi/3$ in $\theta$
from one another above some critical temperature $T_{RW}$, and the physical solution is determined by the
absolute minimum of the three. This leads to a discontinuity of $d\Omega_{\rm{QCD}}(\theta)/d\theta$ at
$\theta=\pi/3\,\,\text{mod}\, \,2\pi/3\,$, which is known as the RW transition~\cite{Weiss}.
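The mechanism can be mimicked with a toy model: take three branches shifted by $2\pi/3$ (a schematic $-\cos$ here, not the actual QCD potential) and define the ground state by their minimum; one recovers both the $2\pi/3$ periodicity and the jump of $d\Omega/d\theta$ at $\theta=\pi/3$:

```python
import math

PHASES = (0.0, 2 * math.pi / 3, -2 * math.pi / 3)   # the three Z3 sectors

def omega_gs(theta):
    # toy branch Omega_phi(theta) = -cos(theta - phi); ground state = minimum
    return min(-math.cos(theta - phi) for phi in PHASES)

def domega(theta, eps=1e-6):
    # numerical derivative of the ground-state potential
    return (omega_gs(theta + eps) - omega_gs(theta - eps)) / (2 * eps)

p1 = omega_gs(0.37)                        # RW periodicity check
p2 = omega_gs(0.37 + 2 * math.pi / 3)

left = domega(math.pi / 3 - 1e-3)          # slope just below theta = pi/3
right = domega(math.pi / 3 + 1e-3)         # slope just above: sign flips
```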
The RW transition is a true phase transition associated with a $\mathbb{Z}_2$ symmetry. LQCD simulations suggest that
the nature of the RW endpoint may depend on quark masses
\cite{FMRW,OPRW,CGFMRW,wumeng,PP_wilson,wumeng2,nf2PP,cuteri,Bonati:2016pwz}: For intermediate quark masses,
it is a critical end point (CEP), while for large and small quark masses it is a triple point. The latest LQCD
calculation provides evidence that the RW endpoint transition remains second order, in the $3$D Ising universality
class, over the explored mass range corresponding to $m_\pi\simeq 100$, $70$ and $50$ MeV \cite{Bonati12}. The
RW transition has also been investigated in effective models of QCD
~\cite{Sakai,Sakai:2009vb,Yahiro,Morita,H.K,Sugano}. Due to the analogy between $\theta$ and the Aharonov-Bohm
phase, it is proposed that the RW transition can be considered as a topological phase transition \cite{kashiwa}.
Note that special flavor-twisted boundary conditions (FTBCs) can lead to an unbroken $\mathbb{Z}_{N_c}$ center
symmetry. As shown in~\cite{ZN1,ZN6}, for $N_f$ flavors with a common mass in the fundamental representation,
the $SU(N_c)$ gauge theory with $d\equiv \gcd(N_f,N_c)>1$ has a $\mathbb{Z}_d$ color-flavor center symmetry when
imposing the $\mathbb{Z}_{N_f}$ symmetric FTBCs on $S^1$. The $\mathbb{Z}_d$ symmetry arises due to the intertwined
color center transformations and cyclic flavor permutations. The QCD-like theory for $N_c=N_f=3$ under such FTBCs
is termed $\mathbb{Z}_{3}$-QCD~\cite{ZN1}. In this theory, the Polyakov loop is the true order parameter for center
symmetry even when fermions are present. $\mathbb{Z}_{3}$-QCD is an interesting and instructive theory, useful for
understanding the deconfinement transition of QCD~\cite{ZN1,ZN2,ZN3,ZN4,ZN5,ZN6,Liu:2016yij,ZN7}.
As mentioned, the FTBCs on $S^1$ in $\mathbb{Z}_{3}$-QCD can be replaced with the standard fermion ABCs by
introducing $\mu_{f}=i\theta_{f} T$ (shifted by $i2\pi{}T/3$). Correspondingly, the center symmetry
of $\mathbb{Z}_{3}$-QCD can be explicitly broken by mass non-degeneracy of the quarks, by unequal $2\pi/3$
differences in $\theta_f$, or by both, if the color and flavor numbers are unchanged. Then, some natural and
interesting questions arise:
Can the $\mathbb{Z}_{3}$ symmetry breaking realized in such ways lead to RW transitions? How do these RW transitions
depend on the center symmetry breaking? What are the differences between these RW transitions and the traditional
ones in QCD with a flavor-independent $\mu_I$? Answering these questions may deepen our understanding of the relationship
between $\mathbb{Z}_{3}$ symmetry, the RW transition and the deconfinement transition. Actually, one advantage of
$\mathbb{Z}_{3}$-QCD is that we can use it to study how the pattern and degree of $\mathbb{Z}_{3}$ symmetry
breaking can affect the nature of RW and deconfinement transitions from a different perspective.
The main purpose of this work is to study how RW transitions depend on the center symmetry breaking patterns
by using a $\mathbb{Z}_{3}$-QCD model. We employ the three-flavor PNJL model~\cite{Fu:2007xc,Fukushima:2008wg}, which
possesses the so called extended $\mathbb{Z}_{3}$ symmetry and can correctly reproduce the RW periodicity~\cite{Sakai2}.
Without loss of generality, the flavor-dependent imaginary chemical potentials
$(\mu_{u},\mu_{d},\mu_{s})/iT=(\theta-2\pi{C}/3,\theta,\theta+2\pi{C}/3)$ with $0\leq C\leq 1$ are adopted, which
guarantees $Z(\theta)=Z(\theta+2\pi /3)$ \cite{Sugano}. When the quark masses are non-degenerate or $C\neq1$, the
center symmetry of $\mathbb{Z}_{3}$-QCD is explicitly broken and RW transitions should appear at high temperature.
We focus on five types of center symmetry breaking and study impacts of variations of $C$ and quark masses
on the RW and deconfinement transitions.
The paper is organized as follows. In Sec.~\ref{Sec_2}, $\mathbb{Z}_{3}$-QCD and the PNJL model with flavor-dependent
imaginary chemical potentials are introduced. In Sec.~\ref{Sec_3}, we present the results of the numerical calculations.
Sec.~\ref{Sec_4} gives the discussion and conclusion.
\section{ $\mathbb{Z}_{3}$-QCD and $\mathbb{Z}_{3}$-symmetric PNJL model}
\label{Sec_2}
\subsection{ $\mathbb{Z}_{3}$-QCD}
The $\mathbb{Z}_{3}$ transformation of QCD is defined as
\begin{equation}
q\rightarrow q'=Uq,\quad A_\mu\rightarrow A_\mu{'}=UA_\mu{U^{-1}}+i(\partial_\mu{U})U^{-1},
\end{equation}
where $U$ is an element of the $SU(3)$ group which satisfies the temporal boundary condition
\begin{equation}
U(x_4=\beta,\textbf{x})={z_k}U(x_4=0,\textbf{x}),
\label{z3bd}
\end{equation}
with $z_k=e^{-i2\pi{k}/3}$ being an element of the center group.
Although the QCD partition function $Z_{QCD}$ is invariant under the $\mathbb{Z}_{3}$ transformation,
the original quark ABC
\begin{equation}
q(x_4=\beta,\textbf{x})=-q(x_4=0,\textbf{x})
\end{equation}
is changed into
\begin{equation}
q(x_4=\beta,\textbf{x})=-e^{i2\pi{k}/3}q(x_4=0,\textbf{x}).
\label{modbc}
\end{equation}
Thus the center symmetry is explicitly broken due to~\eqref{modbc}.
This is why the Polyakov loop is no longer the true order parameter of deconfinement in QCD.
However, the $\mathbb{Z}_{3}$ symmetry can be recovered if one considers the FTBCs~\cite{ZN1}
\begin{equation}
q_f(x_4=\beta,\textbf{x})=-e^{-i\theta_f}q_f(x_4=0,\textbf{x}),
\label{FTBC1}
\end{equation}
with
\begin{equation}
\theta_f=\frac{2\pi}{3}f \quad (f=-1,0,1),
\end{equation}
instead of the ABC. For convenience, three numbers -1, 0, and 1 are used as the flavor indices.
Under the $\mathbb{Z}_{3}$ transformation, the FTBCs are transformed into
\begin{equation}
q_f(x_4=\beta,\textbf{x})=-e^{-i\theta_f'}q_f(x_4=0,\textbf{x}),
\label{FTBC2}
\end{equation}
with
\begin{equation}
\theta_f'=\frac{2\pi}{3}(f-k) \quad (f=-1,0,1).
\end{equation}
We can see that the modified boundary conditions~\eqref{FTBC2} return to the original ones~\eqref{FTBC1}
if the flavor indices $f-k$ are relabeled as $f$. This means that the QCD-like theory with the FTBCs~\eqref{FTBC1}
is invariant under the center transformation if the three flavors have a common mass. As mentioned, such a theory
is termed $\mathbb{Z}_{3}$-QCD, and it reduces to QCD when $T\rightarrow{0}$.
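The relabeling argument can also be checked numerically: the set of twist phases $\{e^{-i2\pi f/3}\}_{f=-1,0,1}$ in \eqref{FTBC1} is invariant under $f\to f-k$. A minimal sketch:

```python
import cmath
import math

def phase_set(k):
    # boundary twist phases exp(-i 2pi (f-k)/3) after a Z3 transformation z_k;
    # rounding absorbs floating-point noise before comparing the sets
    return sorted(
        (round(cmath.exp(-1j * 2 * math.pi * (f - k) / 3).real, 12),
         round(cmath.exp(-1j * 2 * math.pi * (f - k) / 3).imag, 12))
        for f in (-1, 0, 1))

base = phase_set(0)   # the original FTBC phase set
```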
The FTBCs~\eqref{FTBC1} can be changed back into the standard ABCs through the field transformation~\cite{Weiss}
\begin{equation}
{q_f}\rightarrow{e^{-i\theta_f{T}{x_4}}}q_f,
\label{bctomu}
\end{equation}
which gives rise to the flavor-dependent imaginary chemical potentials
\begin{equation}
{\mu_f}=i\theta_f{T}.
\label{imamu}
\end{equation}
This implies that the global $SU_V(3)\otimes{SU_A(3)}$ symmetry in the
chiral limit is broken to $(U_V(1))^2\otimes{(U_A(1))^2}$ in $\mathbb{Z}_{3}$-QCD \cite{ZN4}.
Equation~\eqref{modbc} and the transformation between the FTBCs~\eqref{FTBC1} and the imaginary chemical
potentials~\eqref{imamu} indicate the RW periodicity: the partition function $Z(\theta_f)$ is
periodic under the shifts
\begin{equation}
\mu_f/i{T}\rightarrow{\mu_f/iT+2\pi/3},
\end{equation}
i.e.
\begin{equation}
Z(\theta_f)=Z(\theta_f+2\pi/3).
\label{RWP}
\end{equation}
\subsection{$\mathbb{Z}_{3}$-symmetry breaking patterns in $\mathbb{Z}_{3}$-QCD}
The center symmetry of $\mathbb{Z}_{3}$-QCD is attributed to three conditions: 1) $N_f=N_c=3$
(namely, $\gcd(N_f,N_c){>1}$); 2) quark masses are degenerate; 3) the dimensionless flavor-dependent
imaginary chemical potentials (normalized by $iT$) form an arithmetic sequence with the common difference
$2\pi/{N_c}$. Correspondingly, the center symmetry of $\mathbb{Z}_{3}$-QCD will be broken explicitly
if any one of these conditions is not satisfied, which may lead to the RW transition at high temperature.
It is interesting to study how the possible RW transitions depend on the changes of conditions 2 and/or 3
in $\mathbb{Z}_{3}$-QCD by keeping $N_f=N_c=3$.
Here we express the imaginary chemical potential matrix
$\hat{\mu}=\textrm{diag}(\mu_{u},\mu_{d},\mu_{s})={iT}\hat{\theta}$ in terms of two real parameters $\theta$
and $C$, namely
\begin{equation}
\hat{\theta}=\begin{pmatrix}
\theta_{u}&&\\
&\theta_{d}&\\
&&\theta_{s}
\end{pmatrix}
=\begin{pmatrix}
\theta-\frac{2\pi{C}}{3}&&\\
&\theta&\\
&&\theta+\frac{2\pi{C}}{3}
\end{pmatrix},\label{arithmu}
\end{equation}
where $C\in[0,1]$. As mentioned, such a choice of $\hat{\theta}$ ensures the RW periodicity
$Z(\theta)=Z(\theta+2\pi/3)$. We concentrate on the following center symmetry breaking patterns:
(i) $N_f=3$ with varied $C\neq{1}$ (here and after $N_f=3$ means three flavors share the same mass);
(ii) $N_f= 2+1$ (two lighter flavors have the same mass) with $C=1$;
(iii) $N_f= 1+2$ (two heavier flavors have the same mass) with $C=1$;
(iv) $N_f=2+1$ with varied $C\neq{1}$;
(v) $N_f= 1+1+1$ with $C=1$.
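Before turning to the individual patterns, note that for $C=1$ the RW periodicity of \eqref{arithmu} is purely kinematic: shifting $\theta\to\theta+2\pi/3$ just permutes the three flavor angles cyclically modulo $2\pi$. A quick check of this bookkeeping:

```python
import math

def angles(theta, C):
    # flavor angles (theta_u, theta_d, theta_s) of Eq. (arithmu)
    return (theta - 2 * math.pi * C / 3, theta, theta + 2 * math.pi * C / 3)

def mod2pi_set(ths):
    # compare angle multisets modulo 2pi, rounding away float noise
    return sorted(round(t % (2 * math.pi), 12) for t in ths)

theta = 0.81
s1 = mod2pi_set(angles(theta, 1.0))
s2 = mod2pi_set(angles(theta + 2 * math.pi / 3, 1.0))   # shifted by 2pi/3
```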
For cases (i)-(iii), the thermodynamic potential $\Omega(\theta)$ is a $\theta$-even function even though
the imaginary chemical potentials are flavor-dependent. For case (i), the three flavors are mass-degenerate
and thus
\begin{align}
&\Omega(\theta)=\Omega(\theta_u,\theta_d,\theta_s)\notag\\
&=\Omega(\theta-{2\pi{C}}{/3},\theta,\theta+{2\pi{C}}{/3})\notag\\
&\xrightarrow{\mathcal{C}}\Omega(-\theta+{2\pi{C}}{/3},-\theta,-\theta-{2\pi{C}}{/3})\notag\\
&\xlongequal{u\leftrightarrow s}\Omega(-\theta-{2\pi{C}}{/3},-\theta,-\theta+{2\pi{C}}{/3})\notag\\
&=\Omega(-\theta),\label{evencase1}
\end{align}
where $\xrightarrow{\mathcal{C}}$ stands for the charge conjugation transformation
and $\Omega(\hat{\theta})=\Omega(-\hat{\theta})$ always holds.
For cases (ii) and (iii), two flavors are mass degenerate (e.g., $m_u=m_d$) and thus
\begin{align}
\Omega(\theta)&=\Omega(\theta_u,\theta_d,\theta_s)\notag\\
&=\Omega(\theta-{2\pi}{/3},\theta,\theta+{2\pi}{/3})\notag\\
&\xrightarrow{\mathcal{C}}\Omega(-\theta+{2\pi}{/3},-\theta,-\theta-{2\pi}{/3})\notag\\
&\xlongequal{\theta\rightarrow \theta+{4\pi}{/3}}\Omega(-\theta-{2\pi}{/3},-\theta-{4\pi}{/3},-\theta)\notag\\
&\xlongequal{u\leftrightarrow d}\Omega(-\theta-{4\pi}{/3},-\theta-{2\pi}{/3},-\theta)\notag\\
&\xlongequal{\theta\rightarrow \theta-{2\pi}{/3}}\Omega(-\theta-{2\pi}{/3},-\theta,-\theta+{2\pi}{/3})\notag\\
&=\Omega(-\theta).\label{evencase2}
\end{align}
Note that $\Omega(\theta)=\Omega(-\theta)$ does not hold for cases (iv) and (v).
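At the level of the flavor angles, the manipulations in \eqref{evencase2} use only three moves: $2\pi$ periodicity of each $\theta_f$, the common $2\pi/3$ RW shift, and the exchange of the two mass-degenerate flavors. A small numerical sketch (our own bookkeeping check, with an illustrative $\theta$) confirming that charge conjugation is undone by these moves:

```python
import math

TWOPI = 2 * math.pi

def angles(theta):
    # flavor angles for C = 1: (theta - 2pi/3, theta, theta + 2pi/3)
    return (theta - TWOPI / 3, theta, theta + TWOPI / 3)

def close_mod(x, y):
    # x == y modulo 2pi, up to floating-point tolerance
    d = (x - y) % TWOPI
    return min(d, TWOPI - d) < 1e-9

def equivalent(a, b):
    # equal up to: swap of the degenerate (u,d) pair, a common Z3 shift
    # k*2pi/3 of all three angles, and 2pi periodicity of each angle
    for u, d, s in ((a[0], a[1], a[2]), (a[1], a[0], a[2])):
        for k in range(3):
            sh = (u + k * TWOPI / 3, d + k * TWOPI / 3, s + k * TWOPI / 3)
            if all(close_mod(x, y) for x, y in zip(sh, b)):
                return True
    return False

theta = 0.53
conj = tuple(-t for t in angles(theta))   # charge conjugation theta_f -> -theta_f
ok = equivalent(conj, angles(-theta))
```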
\subsection{The center symmetric PNJL model}
The Lagrangian of the three-flavor PNJL model of QCD in Euclidean spacetime is defined
as~\cite{Fu:2007xc,Fukushima:2008wg}
\begin{align}
\mathcal{L}=
&\bar{q}\left(\gamma_{\mu}D_{\mu}+\hat{m}-\hat{\mu}\gamma_{4}\right)q-G_{\rm S}\sum_{a=0}^{8}
\left[
(\bar{q}\lambda^a_f q)^2+(\bar{q}i\gamma_5\lambda^a_f q)^2
\right]
\notag \\
&+G_{\rm D}\left[\det_{ij}\bar{q}_i(1+\gamma^5)q_j
+\det_{ij}\bar{q}_i(1-\gamma^5)q_j\right]
\notag \\
&+{\cal U} (\Phi[A],\Phi^{\ast}[A],T),
\label{Lag_PNJL}
\end{align}
where $D_{\mu}=\partial_{\mu}+ig_{s}\delta_{\mu 4}A^{a}_{\mu}\lambda^a/2$ is the covariant derivative
with the $SU(3)$ gauge coupling $g_{s}$ and Gell-Mann matrices $\lambda^a$, $\hat{m}=\textrm{diag}(m_{u},m_{d},m_{s})$
denotes the current quark mass matrix, and $\hat{\mu}=\textrm{diag}(\mu_{u},\mu_{d},\mu_{s})$ is the quark chemical
potential matrix. $G_{\rm S}$ and $G_{\rm D}$ are the coupling constants of the scalar-type four-quark interaction
and the Kobayashi-Maskawa-'t Hooft determinant interaction~\cite{tHooft,Kobayashi-Maskawa,Kobayashi-Kondo-Maskawa},
respectively.
The Polyakov-loop (PL) potential ${\cal U} \big( \Phi[A], \Phi^*[A], T \big)$ in the Lagrangian~(\ref{Lag_PNJL})
is center symmetric; it is a function of the Polyakov loop $\Phi$, its conjugate $\Phi^{\ast}$, and $T$.
The quantity $\Phi$ is the true order parameter for center symmetry in pure gauge theory
(and also in $\mathbb{Z}_{3}$-QCD), which is defined as
\begin{equation}
\Phi = \frac{1}{3}{\rm Tr}(L),
\end{equation}
with
\begin{equation}
L( \mathbf{x})=\mathcal{P}\mathrm{exp}\left[i\int_0^{\beta}d \tau
A_4(\mathbf{x},\tau)\right],
\end{equation}
where $\mathcal{P}$ is the path-integral ordering operator. One popular PL potential is the logarithmic one
proposed in~\cite{S.R}, which takes the form
\begin{align}
&{\cal U} \left(\Phi, \Phi^*, T \right)=T^4\left[-\frac{a(T)}{2}\Phi\Phi^{\ast}\right. \notag\\
&\left.+b(T)\ln (1-6\Phi\Phi^{\ast}+4(\Phi^3+\Phi^{\ast 3})-3(\Phi\Phi^{\ast})^2 )\right],
\label{polyakov_potential}
\end{align}
where
\begin{align}
a(T)=a_{0}+a_{1}\left(\frac{T_{0}}{T}\right)+a_{2}\left(\frac{T_{0}}{T}\right)^2,\ b(T)=b_{3}\left(\frac{T_{0}}{T}\right)^3.
\end{align}
The potential \eqref{polyakov_potential} will be used in our calculation.
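For concreteness, \eqref{polyakov_potential} with the Table~\ref{table1} parameters can be coded directly. A minimal sketch (restricted to real $\Phi=\Phi^{\ast}$) also shows the expected behavior: the minimum sits at $\Phi=0$ at low $T$ and moves to $\Phi>0$ at high $T$:

```python
import math

A0, A1, A2, B3, T0 = 3.51, -2.47, 15.2, -1.75, 195.0   # Table 1 values

def U_glue(Phi, Phib, T):
    """Logarithmic Polyakov-loop potential, in MeV^4 (T in MeV)."""
    a = A0 + A1 * (T0 / T) + A2 * (T0 / T) ** 2
    b = B3 * (T0 / T) ** 3
    arg = 1 - 6 * Phi * Phib + 4 * (Phi**3 + Phib**3) - 3 * (Phi * Phib) ** 2
    return T**4 * (-0.5 * a * Phi * Phib + b * math.log(arg))

# scan real Phi: confined minimum (Phi = 0) at low T, deconfined at high T
grid = [0.01 * i for i in range(95)]
phi_low = min(grid, key=lambda p: U_glue(p, p, 100.0))
phi_high = min(grid, key=lambda p: U_glue(p, p, 400.0))
```

Note that for real $\Phi$ the logarithm's argument factorizes as $(1-\Phi)^3(1+3\Phi)$, so it stays positive for $0\le\Phi<1$ and diverges at $\Phi=1$, which is what confines the scan to that range.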
In the Polyakov gauge, the matrix $L$ can be represented as a diagonal form in the color space
\begin{equation}
\begin{split}
L= \mathrm{e}^{i\beta A_4}={\rm diag}(\mathrm{e}^{i\beta\phi_1},\mathrm{e}^{i\beta\phi_2},\mathrm{e}^{i\beta\phi_3}),
\end{split}
\end{equation}
where $\phi_1+\phi_2+\phi_3=0$.
The mean-field thermodynamic potential of PNJL then reads
\begin{align}
&\Omega=
2G_{\rm S}\sum_{f}\sigma^2_{f}
-4G_{\rm D}\sigma_{u}\sigma_{d}\sigma_{s}
-\frac{2}{\beta}\sum_{f}\int\frac{d^3\mathbf{p}}{(2\pi)^3}
\bigl[
3\beta E_{f}+
\notag \\
&
\ln(1+3\Phi\textrm{e}^{-\beta(E_{f}-\mu_{f})}
+3\Phi^{\ast}\textrm{e}^{-2\beta(E_{f}-\mu_{f})}
+\textrm{e}^{-3\beta(E_{f}-\mu_{f})})+
\notag \\
&
\ln(1+3\Phi^{\ast}\textrm{e}^{-\beta(E_{f}+\mu_{f})}
+3\Phi\textrm{e}^{-2\beta(E_{f}+\mu_{f})}
+\textrm{e}^{-3\beta(E_{f}+\mu_{f})})
\bigr]
\notag\\
&+\mathcal{U},
\label{thermo_PNJL}
\end{align}
with $\sigma_{f}=\braket{\bar{q}_{f}q_{f}}$ and $E_{f}=\sqrt{\mathbf{p}^2+M^2_{f}}$ $(f=u,d,s)$.
The dynamical quark masses are defined by
\begin{align}
M_{f}=m_{f}-4G_{\rm S}\sigma_{f}+2G_{\rm D}\sigma_{f'}\sigma_{f''},
\end{align}
where $f\ne f'$, $f'\ne f''$, and $f\ne f''$. As usual, a three-dimensional cutoff $\Lambda$ is
introduced to regularize the vacuum contribution.
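The structure of the logarithmic terms in \eqref{thermo_PNJL} can be made concrete with a minimal numerical sketch of one flavor's thermal contribution at purely imaginary $\mu_f=i\theta_f T$ (a crude midpoint discretization of the momentum integral; our own illustration, omitting the vacuum $3\beta E_f$ piece): with $\Phi^{\ast}$ taken as the complex conjugate of $\Phi$, the two logarithms are complex conjugates, so the sum is real, and the result is $2\pi$-periodic in $\theta_f$:

```python
import cmath
import math

LAMBDA, T = 602.3, 250.0      # 3D cutoff and temperature in MeV
BETA = 1.0 / T

def omega_quark_thermal(M, theta, Phi):
    """Thermal part of one flavor's potential (MeV^4) at mu = i*theta*T.

    Phibar is taken as the complex conjugate of Phi; the vacuum 3*beta*E
    term of the full expression is omitted in this sketch.
    """
    Phib = Phi.conjugate()
    n = 400
    dp = LAMBDA / n
    total = 0.0 + 0.0j
    for i in range(n):
        p = (i + 0.5) * dp
        E = math.sqrt(p * p + M * M)
        zp = cmath.exp(-BETA * E + 1j * theta)   # e^{-beta(E - mu)}
        zm = cmath.exp(-BETA * E - 1j * theta)   # e^{-beta(E + mu)}
        lp = cmath.log(1 + 3 * Phi * zp + 3 * Phib * zp**2 + zp**3)
        lm = cmath.log(1 + 3 * Phib * zm + 3 * Phi * zm**2 + zm**3)
        total += p * p * dp * (lp + lm)          # lm = conj(lp): sum is real
    return (-2.0 / BETA) * total / (2 * math.pi**2)

w1 = omega_quark_thermal(300.0, 0.7, 0.5 + 0.1j)
w2 = omega_quark_thermal(300.0, 0.7 + 2 * math.pi, 0.5 + 0.1j)  # 2pi-periodic
```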
For the pure imaginary chemical potential case, we can write $\Phi$ and $\Phi^{*}$ as
\begin{equation}
\begin{split}
\Phi = Re^{i\phi}, \ \ \
\Phi^{\ast} = Re^{-i\phi}.
\end{split}
\label{loop}
\end{equation}
The condensates $\sigma_f$, the magnitude $R$,
and the phase $\phi$ are determined by the stationary conditions
\begin{equation}
\frac{ \partial \Omega}{\partial\sigma_u}
=\frac{ \partial \Omega}{\partial\sigma_d}
=\frac{ \partial \Omega}{\partial\sigma_s}
=\frac{ \partial \Omega}{\partial R}
=\frac{\partial \Omega}{\partial\phi} = 0.
\end{equation}
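In practice these conditions are solved numerically for each $(\theta,T)$, e.g.\ by direct minimization of $\Omega$ over $(\sigma_u,\sigma_d,\sigma_s,R,\phi)$. The pattern can be sketched with a naive coordinate-descent loop applied to a toy two-variable stand-in for $\Omega$ (illustration only; the toy minimum is at $(2, 0.5)$ by construction):

```python
def minimize(f, x0, step=0.1, tol=1e-10):
    """Naive coordinate-descent minimizer with step halving."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                if f(trial) < f(x):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5
    return x

# toy stand-in for Omega(sigma, R): strictly convex, minimum at (2, 0.5)
toy = lambda v: ((v[0] - 2.0) ** 2 + 3.0 * (v[1] - 0.5) ** 2
                 + 0.4 * (v[0] - 2.0) * (v[1] - 0.5))
sol = minimize(toy, [0.0, 0.0])
```

For the actual $\Omega$ of \eqref{thermo_PNJL} each evaluation additionally requires the momentum integral, but the outer loop has the same shape.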
Similar to $\mathbb{Z}_{3}$-QCD, the three-flavor PNJL with a common quark mass possesses
the exact $\mathbb{Z}_{3}$ symmetry when introducing the special flavor-dependent
imaginary chemical potentials ${\hat{\mu}}={iT}\hat{\theta}$, where
$\hat{\theta}={\rm diag}(\theta-2\pi/3,\theta,\theta+2\pi/3)$~\cite{ZN1}. Here we refer to
this center symmetric PNJL as $\mathbb{Z}_{3}$-PNJL, which can be regarded as a low-energy
effective theory of $\mathbb{Z}_{3}$-QCD. The RW transitions under the conditions (i)-(v)
will be studied in the $\mathbb{Z}_{3}$-PNJL formalism by breaking the center symmetry
explicitly.
\begin{table}[t]
\begin{center}
\caption{The parameter set in the PL potential sector.}
\begin{tabular}{cccccc}
\hline
& $a_{0}$\qquad\qquad & $a_{1}$\qquad\qquad & $a_{2}$\qquad\qquad & $b_{3}$\qquad\qquad & $T_{0}$ [MeV]\qquad \\ \hline
& 3.51\qquad\qquad & -2.47\qquad\qquad & 15.2\qquad\qquad & -1.75\qquad\qquad & 195\qquad \\ \hline
\end{tabular}
\label{table1}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\caption{The parameter set in the NJL sector. }
\begin{tabular}{cccccc}
\hline
& $m_{u(d)}$ [MeV] & $m_{s}$ [MeV] & $\Lambda$ [MeV] & $G_{\rm S}\Lambda^2$ & $G_{D}\Lambda^5$ \\ \hline
& 5.5 & 140.7 & 602.3 & 1.835 & 12.36 \\ \hline
\end{tabular}
\label{table2}
\end{center}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{z3thermplt.eps}
\includegraphics[width=0.9\columnwidth]{z3thermpht.eps}
\caption{Thermodynamic potential $\Omega$ as a function of $\theta$ for $T=150$ MeV (upper) and $T=250$
MeV (lower) in $\mathbb{Z}_3$-PNJL with $m_u=m_d=m_s=5.5$ MeV and $C=1$.}
\label{fig:1}
\end{figure}
\subsection{Model parameters}
The five parameters of the logarithmic PL potential \eqref{polyakov_potential} are listed in Table~\ref{table1}.
Originally, $T_0$ is the critical temperature of deconfinement for the pure $SU(3)$ gauge theory, which is around $270$
MeV~\cite{Boyd,Kaczmarek}. Note that the chiral $T_c$ at zero density obtained in the PNJL model with $T_0=270$ MeV is
significantly higher than the LQCD prediction~\cite{Laermann,Fodor_Katz_tem,Borsanyi:2010bp,Bazavov:2018mes}. Following~\cite{TS},
we adopt $T_0=195$ MeV here, which leads to a lower $T_c$.
The NJL part of the PNJL model has six parameters, and a typical parameter set obtained in~\cite{Reinberg_Klevansky_Hufner,Klevansky}
is listed in Table~\ref{table2}. These parameters are determined by the empirical values of the $\eta'$ and
$\pi$ meson masses, the $\pi$ decay constant $f_\pi$, and the quark condensates in vacuum. To qualitatively
investigate the sensitivity of the RW transition to the $Z_3$ symmetry breaking patterns, we take the current quark
masses as free parameters in this study while keeping $G_S$, $G_D$, and $\Lambda$ unchanged.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{nf3m5p5c0999.eps}
\includegraphics[width=0.9\columnwidth]{nf3m5p5c099.eps}
\includegraphics[width=0.9\columnwidth]{nf3m5p5c09.eps}
\includegraphics[width=0.9\columnwidth]{nf3m5p5c03.eps}
\caption{ Thermodynamic potentials of the $\mathbb{Z}_3$ sectors as functions of $\theta$ at $T=250$
MeV in PNJL for $N_f=3$ with fixed $m_u=m_d=m_s=5.5$ MeV and varied $C\neq{1}$.}
\label{fig:2}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{nf3m5p5thermp.eps}
\includegraphics[width=0.9\columnwidth]{nf3m5p5phased.eps}
\caption{Thermodynamic potential $\Omega$ as a function of $\theta$ at $T=250$ MeV (left) and the $\theta$-$T$ phase
diagram (right) for $m_u=m_d=m_s=5.5$ MeV with varied $C\neq{1}$. The black spots in the right panel indicate that
the RW endpoints at $C=0$ are still triple points.}
\label{fig:3}
\end{figure*}
\section{Numerical results }
\label{Sec_3}
In this section, we show numerical results of PNJL with the imaginary chemical potentials
$(\mu_{u},\mu_{d},\mu_{s})/iT =(\theta-2C\pi/3, \theta, \theta+2C\pi/3)$, where $0\leq C\leq1$.
We study the RW and deconfinement transitions under the conditions (i)-(v) respectively. We concentrate
on how these transitions depend on the pattern of center symmetry breaking.
At high temperature, the thermodynamic potential of $\mathbb{Z}_3$-PNJL has three degenerate local minima
at $\phi=0$ and $\pm2\pi/3$, which correspond to the three $\mathbb{Z}_3$ sectors. Correspondingly, the thermodynamic
potential of PNJL may have three non-degenerate solutions (namely $\Omega_\phi$, $\phi=0,\pm2\pi/3$), and
the ground state $\Omega_{gs}$ is determined by the absolute minimum of the three.
Without loss of generality, we perform the calculations at a fixed high temperature $T=250$ MeV,
at which the RW transition always occurs in this model.
\subsection{ Center symmetry breaking pattern (i): $N_f=3$ with varied $C\neq{1}$}
We first perform the calculation in $\mathbb{Z}_3$-PNJL with the small common quark mass $5.5~\text{MeV}$.
Fig.~\ref{fig:1} shows the thermodynamic potential $\Omega$ as a function of $\theta$ for two different
temperatures. We confirm that the ground state has a $3$-fold degeneracy at high temperature, and there is
no degeneracy at low temperature. The high-$T$ degeneracy indicates the spontaneous center symmetry breaking,
which rules out the RW transition. Numerical calculation indicates that the critical temperature is
$T_c\approx 195$ MeV, which is almost independent of $\theta$~\cite{ZN1}. We see that the RW periodicity always
holds, even though the $\theta$-dependence of $\Omega$ at low $T$ ($<T_c$) is much weaker than that at high $T$ ($>T_c$).
The upper panel displays that $\Omega(\theta)$ peaks at $\theta=(2k+1)\pi/3$ for $T=150$ MeV, while the lower
one shows that it peaks at $\theta=2k\pi/3$ for $T=250$ MeV. This implies that the $T$-driven first-order transition
related to center symmetry corresponds to a shift of the shape of $\Omega(\theta)$. This is a nontrivial
result in a center symmetric theory with fermions.
Figure~\ref{fig:2} shows $\Omega_{\phi}(\theta)$ at $T=250$ MeV for $N_f=3$ with the same common quark mass
as in Fig.~\ref{fig:1} but $C\neq{1}$, which corresponds to center symmetry breaking pattern (i). We see that
shifts between the three $\mathbb{Z}_3$ sectors appear in the $\theta$-$\Omega$ plane and cusps of $\Omega$
emerge at $\theta=\theta_{rw}=(2k+1)\pi/3$. Note that the angle $\theta_{rw}$ is consistent with the traditional
one in QCD with $C=0$. Fig.~\ref{fig:2} displays that each $\Omega_\phi(\theta)$ has period $2\pi$ and
is continuous (discontinuous) when center symmetry is weakly (strongly) broken, i.e. for $C$ near one (zero).
Fig.~\ref{fig:2}(d) shows that the solution $\Omega_0(\theta)$ for $C=0.3$ vanishes in the region
$0.6\pi<\theta<1.4\pi$, which is similar to that of the standard RW transition obtained in the two-flavor
PNJL~\cite{Yahiro}. We notice that the PNJL correctly reproduces the relation $\Omega_{gs}(\theta)=\Omega_{gs}(-\theta)$
required by pattern (i). So the RW transitions shown in Fig.~\ref{fig:2} still reflect the spontaneous breaking
of the $\mathbb{Z}_2$ symmetry, and the density $\partial\Omega/\partial(i\theta)$ is the order parameter.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{nf2plus1ms10thermp.eps}
\includegraphics[width=0.9\columnwidth]{nf2plus1ms1407thermp.eps}
\caption{ Thermodynamic potentials of the $\mathbb{Z}_3$ sectors as functions of $\theta$ at $T=250$
MeV with $C=1$ for $m_s=10$ MeV (upper) and $m_s=140.7$ MeV (lower). The masses of two light flavors are
fixed as $m_u=m_d=5.5$ MeV. The RW transitions appear at $\theta=2k\pi/3$.}
\label{fig:4}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{densityc1phymass.eps}
\caption{ The quark number density Im($n_q$) as a function of $\theta$ for $C=1$ at $T=150$
MeV (dotted line) and $T=250$ MeV (solid line). The quark masses are the same as those in the lower panel of Fig.~\ref{fig:4}.}
\label{fig:5}
\end{figure}
\begin{figure}[!tb]
\centering
\includegraphics[width=0.9\columnwidth]{highlowtemp.eps}
\caption{ Thermodynamic potential $\Omega$ as a function of $\theta$ for $C=1$ at $T=180$ MeV (dotted line)
and $T=190$ MeV (solid line). The quark masses are the same as those in the lower panel of Fig.~\ref{fig:4}.}
\label{fig:6}
\end{figure}
As expected, Fig.~\ref{fig:2} shows that the RW cusps become sharper as $C$ declines from one to zero.
That the RW transition becomes stronger with increasing center symmetry breaking is demonstrated more clearly
in the left panel of Fig.~\ref{fig:3}. In contrast, the deconfinement transition evaluated by the PL becomes
weaker with the decrease of $C$. This is shown in the right panel of Fig.~\ref{fig:3}, where the $\theta$-$T$
phase diagrams for three different values of $C$ are plotted (the solid line denotes the first-order transition).
In this panel, the vertical lines represent the RW transitions and the other lines the deconfinement transitions.
We see that for $C=0.8$, the whole deconfinement line is first-order; but for $C=0.5$, only the short segment near
the RW endpoint remains first-order. As $C$ approaches zero, the first-order line of deconfinement further shrinks
towards the RW line but does not vanish at $C=0$. So all the RW endpoints for pattern (i) with a small quark mass
are triple points in this model
\footnote{ This is different from the current lattice predictions that the physical RW endpoint may be
second-order. Note that in PNJL with $C=0$, the nature of the RW endpoint depends on the PL potential.}.
Note that the triple point may change into a critical end point if the common quark mass is large enough.
In this case, there should exist a critical value $C_c$ below which the RW endpoint is second-order.
\subsection{Center symmetry breaking pattern (ii): $N_f=2+1$ with $C=1$}
This subsection gives the numerical results for $N_f=2+1$ and $C=1$, namely Pattern (ii), where the center
symmetry of $\mathbb{Z}_3$-QCD is broken by the mass difference between the two degenerate light flavors ($u$ and $d$)
and the heavy one ($s$).
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{ud5.5c1thermp.eps}
\includegraphics[width=0.9\columnwidth]{ud5.5c1phased.eps}
\caption{ The upper panel shows $\theta$-dependences of the thermodynamic potential $\Omega$ at $T=250$ MeV for
$C=1$ and $m_u=m_d=5.5$ MeV with different strange quark masses. The lower panel shows the $\theta$-$T$ phase
diagrams under the same conditions.}
\label{fig:7}
\end{figure}
Figure~\ref{fig:4} presents the thermodynamic potential $\Omega_{\phi}$ as a function of $\theta$ at $T=250$
MeV for two different values of $m_s$, where $m_{l(u,d)}=5.5$ MeV. In the range $0\leq\theta{<2\pi}$, each
$\Omega_{\phi}$ has three local minima for $m_s=10$ MeV, but only one for $m_s=140.7$ MeV; the shifts between
the three $\mathbb{Z}_3$ sectors appear in both cases. As in Pattern (i), the relation $\Omega(\theta)=\Omega(-\theta)$
is reproduced correctly in PNJL. Unlike in Pattern (i), however, the RW cusps occur at $\theta=2k\pi/3$ rather
than at $(2k+1)\pi/3$.
Note that $\theta_{RW}=2k\pi/3$ can be explained using the previous study \cite{Sakai:2009vb}, in which the
RW transitions at finite imaginary baryon and isospin chemical potentials, namely $\mu_{q(I)}=iT\theta_{q(I)}$,
were investigated in an $N_f=2$ PNJL model. The predictions of \cite{Sakai:2009vb} are that:
(a) the RW transition emerges at $\theta_{q}=0$ mod $2\pi/3$ when $-\pi/2-\delta(T)<\theta_{I}<\pi/2+\delta(T)$
\footnote{Here $\delta(T)=0.00016\times{(T-250)}$.};
(b) it does at $\theta_{q}=\pi/3$ mod $2\pi/3$ when $\pi/2-\delta(T)<\theta_{I}<3\pi/2+\delta(T)$.
In our case with $N_f=2+1$ and $C=1$, $\theta_{q}$ and $\theta_{I}$ associated with two light flavors
are $((\theta-2\pi/3)+\theta)/2=\theta-\pi/3$ and $((\theta-2\pi/3)-\theta)/2=-\pi/3$, respectively.
According to (a) (ignoring the heavy flavor for the moment), $\theta_{I}=-\pi/3$ belongs to the range
$(-\pi/2-\delta(T),\pi/2+\delta(T))$, and thus the RW transition appears at $\theta_{q}=\theta_{RW}-\pi/3=\pi/3$
mod $2\pi/3$. Namely, $\theta_{RW}=2k\pi/3$. On the other hand, if we adopt
$(\mu_u, \mu_d, \mu_s)/iT=(\theta-2\pi/3, \theta+2\pi/3, \theta)$, the corresponding $\theta_{q}$ and $\theta_{I}$
are $\theta$ and $-2\pi/3$, respectively. In this case, $\theta_{I}+2\pi=4\pi/3$ belongs to the range
$(\pi/2-\delta(T),3\pi/2+\delta(T))$, and thus the RW transition occurs at $\theta_{q}=\theta_{RW}=0$ mod $2\pi/3$
according to (b). So we still obtain $\theta_{RW}=2k\pi/3$.
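The modular arithmetic above is simple enough to check mechanically. The sketch below is an illustrative script (not code from this work); the cusp positions it encodes follow the applications in the text — $\theta_q=\pi/3$ mod $2\pi/3$ when $\theta_I$ lies in $(-\pi/2,\pi/2)$ mod $2\pi$, and $\theta_q=0$ mod $2\pi/3$ when $\theta_I$ lies in $(\pi/2,3\pi/2)$ — with the small $\delta(T)$ corrections dropped:

```python
from math import pi

TWO_PI_3 = 2 * pi / 3

def theta_rw_mod(offset_u, offset_d):
    """Location of the RW cusp, mod 2*pi/3, for two light flavors with
    (mu_u, mu_d)/iT = (theta + offset_u, theta + offset_d).

    theta_q = theta + (offset_u + offset_d)/2 and
    theta_I = (offset_u - offset_d)/2 is theta-independent.
    """
    shift = (offset_u + offset_d) / 2
    theta_I = (offset_u - offset_d) / 2
    tI = theta_I % (2 * pi)
    if tI > 3 * pi / 2:
        tI -= 2 * pi                       # fold into (-pi/2, 3*pi/2]
    cusp_q = pi / 3 if abs(tI) < pi / 2 else 0.0
    # theta_q = cusp_q mod 2*pi/3  =>  theta_RW = cusp_q - shift mod 2*pi/3
    return (cusp_q - shift) % TWO_PI_3

# Pattern (ii) assignment used in the text: (theta - 2*pi/3, theta)
t1 = theta_rw_mod(-TWO_PI_3, 0.0)
# Alternative assignment: (theta - 2*pi/3, theta + 2*pi/3)
t2 = theta_rw_mod(-TWO_PI_3, TWO_PI_3)
# Standard degenerate N_f = 2 case: (theta, theta)
t3 = theta_rw_mod(0.0, 0.0)
```

Both assignments used in the text return $\theta_{RW}=0$ mod $2\pi/3$, i.e. $\theta_{RW}=2k\pi/3$, while the degenerate two-flavor case reproduces the traditional cusps at $(2k+1)\pi/3$.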
The consistency between our result and that in \cite{Sakai:2009vb} implies that the RW angle for $N_f=2+1$ with
$C=1$ is mainly determined by the two degenerate light flavors. Actually, we will show later that $\theta_{RW}$
is still $(2k+1)\pi/3$ for $N_f=1+2$ with $C=1$, in which there is only one light flavor.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{a0condensate.eps}
\caption{The isovector condensate $a_0$ as a function of $\theta$ for $C=1$ at $T=180$ MeV (dotted line) and
$190$ MeV (solid line). The quark masses are the same as those in the lower panel of Fig.~\ref{fig:4}. }
\label{fig:8}
\end{figure}
Figure~\ref{fig:5} displays the $\theta$-dependence of the imaginary part of the quark number density, $n_I=\mathrm{Im}(n_q)$, for
$m_{l(u,d)}=5.5$ MeV and $m_s=140.7$ MeV. In Pattern (ii), $n_I(\theta)$ is $\theta$-odd and is thus the order
parameter for the $\mathbb{Z}_2$ symmetry. We see that it is continuous for $T=150$ MeV but discontinuous
at $\theta=2k\pi/3$ for $T=250$ MeV.
In Fig.~\ref{fig:6}, we compare the thermodynamic potentials for temperatures below and above $T_{RW}$.
For $T=180$ MeV ($<T_{RW}$), $\Omega$ depends only weakly on $\theta$, with peaks and troughs located
at $\theta=(2k+1)\pi/3$ and $2k\pi/3$, respectively; for $T=190$ MeV ($>T_{RW}$), the $\theta$-dependence
is pronounced and the positions of the peaks and troughs are exchanged. Note that the peak and trough locations of
$\Omega$ at low and high temperature in Fig.~\ref{fig:6} coincide with those of $\mathbb{Z}_3$-PNJL
shown in Fig.~\ref{fig:1}.
This indicates that the center symmetry is only weakly broken in Pattern (ii) with the physical quark masses.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{5.5-5.5-3thermp.eps}
\includegraphics[width=0.9\columnwidth]{140.7-140.7-5.5thermp.eps}
\caption{ Thermodynamic potentials of the $\mathbb{Z}_3$ sectors as functions of $\theta$ at $T=250$ MeV
for $N_f=1+2$ with $C=1$. The upper (lower) panel corresponds to the case with $m_u=m_d=5.5$ MeV and $m_s=3$
MeV ($m_u=m_d=140.7$ MeV and $m_s=5.5$ MeV). The RW transitions appear at $\theta=(2k+1)\pi/3$.}
\label{fig:9}
\end{figure}
The upper panel of Fig.~\ref{fig:7} presents the thermodynamic potentials as functions of $\theta$ at $T=250$ MeV
for several larger values of $m_s$ ($m_s=140.7,300,600$ MeV) with fixed $m_{l}=5.5$ MeV. As expected, the RW cusps
become sharper with increasing $m_s$, but the change is mild (we only consider $m_s < \Lambda$ due to the limitation of the PNJL model).
The lower panel of Fig.~\ref{fig:7} displays the $\theta$-$T$ phase diagrams under the same conditions.
The deconfinement transitions for these $m_s$'s are all first-order, and thus the RW endpoints are triple points.
This suggests that the center symmetry is only weakly broken by the mass differences considered here.
Similar to Fig.~\ref{fig:3}, the lower panel shows that the higher the degree of center symmetry breaking, the lower
$T_{RW}$ becomes. Another common feature of Figs.~\ref{fig:3} and \ref{fig:7} is that $T_{RW}$ is the highest critical
temperature of deconfinement for a fixed $C\neq{1}$ (the former) or $m_s$ (the latter). This implies that the RW transition
has a significant impact on the deconfinement transition in both symmetry breaking patterns.
In Fig.~\ref{fig:8}, we show the isovector condensate $a_0=\left\langle\bar{u}u-\bar{d}d\right\rangle$ as a function
of $\theta$ for $C=1$ with the physical quark masses. Here $a_0$ is normalized by $\sigma_0\equiv \sigma(T=0,\mu_f=0)$,
where $\sigma\equiv \left(\sigma_u+\sigma_d+\sigma_s\right)/3$. For $T=180$ MeV, $a_0\thicksim0$ except at
$\theta=k\pi/3$, where $a_0=0$ exactly. This can be regarded as a remnant of the $N_f=3$ case, where $a_0=0$ at low $T$ \cite{ZN1}.
The approximate flavor symmetry at low $T$ is attributed to color confinement, where $\Phi\sim0$. When $\theta=k\pi/3$,
charge conjugation symmetry preserves the flavor symmetry between u and d. Indeed, under the $\mathcal{C}$ transformation
$\Phi\leftrightarrow\Phi^*$, the thermodynamic potential with $\theta=0$ satisfies
\begin{align}
&\Omega(-2\pi/3,0,2\pi/3)\xrightarrow{\mathcal{C}}\Omega(2\pi/3,0,-2\pi/3)\xlongequal{-2\pi/3}\notag \\
&\Omega(0,-2\pi/3,-4\pi/3)\xlongequal[u\leftrightarrow d]{-4\pi/3\rightarrow 2\pi/3}\Omega(-2\pi/3,0,2\pi/3),
\end{align}
and that with $\theta=\pi/3$ satisfies
\begin{eqnarray}
&\Omega(-\pi/3,\pi/3,\pi)\xrightarrow{\mathcal{C}}\Omega(\pi/3,-\pi/3,-\pi)\xlongequal[u\leftrightarrow d]{-\pi\rightarrow \pi}\notag \\
&\Omega(-\pi/3,\pi/3,\pi),
\end{eqnarray}
where $u\leftrightarrow d$ stands for the relabeling of u and d. For $T=190$ MeV, the two-flavor symmetry
is broken at $\theta=2k\pi/3$ due to the RW transition~\cite{ZN1,Yahiro}.
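The angle bookkeeping in the two transformation chains above can be verified mechanically. In the sketch below (an illustrative script, not part of the model calculation), a potential is represented only by its tuple of flavor angles: $\mathcal{C}$ negates every angle, a common constant can be added to all of them, angles are identified mod $2\pi$, and $u\leftrightarrow d$ swaps the first two entries:

```python
from math import pi

def norm(a):
    """Identify angles mod 2*pi, mapped to (-pi, pi]."""
    a = a % (2 * pi)
    return a - 2 * pi if a > pi else a

def c_transform(angles):
    """Charge conjugation: flip the sign of every flavor angle."""
    return tuple(-x for x in angles)

def shift(angles, d):
    """Shift all flavor angles by a common constant."""
    return tuple(norm(x + d) for x in angles)

def relabel_ud(angles):
    """Swap the labels of the two degenerate light flavors."""
    u, d, s = angles
    return (d, u, s)

# theta = 0 chain: C, then a common shift of -2*pi/3, then u <-> d
start0 = (-2 * pi / 3, 0.0, 2 * pi / 3)
end0 = relabel_ud(shift(c_transform(start0), -2 * pi / 3))

# theta = pi/3 chain: C, then u <-> d (with -pi identified with pi)
start1 = (-pi / 3, pi / 3, pi)
end1 = relabel_ud(tuple(norm(x) for x in c_transform(start1)))
```

In both chains the final tuple of angles coincides with the initial one, confirming that $\mathcal{C}$ maps each potential back to itself up to a flavor relabeling.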
\subsection{Center symmetry breaking pattern (iii): $N_f=1+2$ with $C=1$}
This subsection presents the numerical results for $N_f=1+2$ with $C=1$, i.e., Pattern (iii). The center
symmetry is broken by the mass difference between the light flavor and the two degenerate heavy ones.
Note that s refers to the only light flavor here.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{thermpnf1plus2.eps}
\includegraphics[width=0.9\columnwidth]{phasednf1plus2.eps}
\caption{ The upper panel shows $\theta$-dependences of the thermodynamic potential $\Omega$ at $T=250$ MeV for different
$m_s$'s and fixed $C=1$, where $m_u=m_d=140.7$ MeV. The lower one shows the $\theta$-$T$ phase diagrams under the same conditions.}
\label{fig:10}
\end{figure}
Figure~\ref{fig:9} shows the $\mathbb{Z}_3$ sectors of $\Omega$ as functions of $\theta$ at
$T=250~\text{MeV}$ for two cases: $m_{u(d)}=5.5$ MeV and $m_s=3$ MeV (upper panel), and
$m_{u(d)}=140.7$ MeV and $m_s=5.5$ MeV (lower panel). In contrast to Pattern (ii), both
panels show that the RW transitions occur at $\theta=(2k+1)\pi/3$. This difference can be
understood in the following way. In Fig.~\ref{fig:9}, the physical thermodynamic potentials in
intervals $-\frac{\pi}{3}<\theta<\frac{\pi}{3}$, $\frac{\pi}{3}<\theta<\pi$, and
$\pi<\theta<\frac{5\pi}{3}$ are $\Omega_{\phi=-\frac{2\pi}{3}}$, $\Omega_{\phi=\frac{2\pi}{3}}$,
and $\Omega_{\phi=0}$, respectively. Note that this $\Omega_{\phi}$ ordering of the physical
thermodynamic potential along the $\theta$ direction is the same as that of a one-flavor system with
$\mu=\mu_{s}=i(\theta+\frac{2\pi}{3})T$
\footnote{The $\Omega_{\phi}$ order of the thermodynamic potential for a one-flavor system with
$\mu/iT=\theta$ is $\Omega_{\phi=0}$, $\Omega_{\phi=-\frac{2\pi}{3}}$, and $\Omega_{\phi=\frac{2\pi}{3}}$
for the $\theta$ intervals mentioned above \cite{Weiss}. When $\mu/iT=\theta+\frac{2\pi}{3}$, the
thermodynamic potential is shifted by $-\frac{2\pi}{3}$ along the $\theta$ axis and thus the order
becomes $\Omega_{\phi=-\frac{2\pi}{3}}$, $\Omega_{\phi=\frac{2\pi}{3}}$, and $\Omega_{\phi=0}$.}.
This suggests that $\theta_{RW}$ for $N_f=1+2$ and $C=1$ is mainly determined by the only light flavor.
Such a conclusion also supports our argument that $\theta_{RW}$ for $N_f=2+1$ and $C=1$ is mainly
determined by the two degenerate light flavors.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{physicalmassc09.eps}
\includegraphics[width=0.9\columnwidth]{physicalmassc07.eps}
\includegraphics[width=0.9\columnwidth]{physicalmassc05.eps}
\includegraphics[width=0.9\columnwidth]{physicalmassc0.eps}
\caption{Thermodynamic potentials of the $\mathbb{Z}_3$ sectors for different $C$ at $T=250$ MeV in the case of $N_f=2+1$
with $m_u=m_d=5.5$ MeV and $m_s=140.7$ MeV. The RW transition point moves from $2k\pi/3$ to $(2k-1)\pi/3$
when $C$ changes from 1 to 0.}
\label{fig:11}
\end{figure*}
The upper panel of Fig.~\ref{fig:10} presents the $\theta$-dependences of $\Omega$ at $T=250$ MeV
for fixed $m_{u(d)}=140.7$ MeV and varied $m_s$ ($m_s<m_{u(d)}$). As anticipated, the cusps of $\Omega$
become sharper as $m_s$ decreases. The lower panel shows the $\theta$-$T$ phase diagrams under
the same conditions. Similar to Fig.~\ref{fig:7}, the deconfinement transitions are all first-order,
which suggests that the center symmetry breaking due to the mass difference is weak. We see that $T_{RW}$
increases as $m_s$ decreases. This means the higher the degree of center symmetry breaking,
the higher $T_{RW}$ becomes. This behavior is distinct from that shown in Figs.~\ref{fig:3} and \ref{fig:7},
and whether it is a model artifact is unclear.
\subsection{Center symmetry breaking pattern (iv): $N_f=2+1$ with varied $C\neq {1}$}
Figure~\ref{fig:11} shows the $\theta$-dependences of $\Omega_{\phi}$ at $T=250$ MeV for different
$C$ ($C=0.9, 0.7, 0.5, 0$) with the physical quark masses. In these cases, the original center symmetry
of $\mathbb{Z}_3$-QCD is explicitly broken by both the mass difference and $C\neq{1}$.
Note that for $N_f=2+1$, the relation $\Omega(\theta)=\Omega(-\theta)$, which is true for $C=0$ or $1$,
does not hold when $C\in(0,1)$. Correspondingly, the angle $\theta_{RW}$ for $0<C<1$ is located between $(2k-1)\pi/3$
and $2k\pi/3$, and moves towards $(2k-1)\pi/3$ ($2k\pi/3$) when $C\rightarrow{0}$ ($C\rightarrow{1}$),
as demonstrated in Fig.~\ref{fig:11}.
The figure clearly shows that the tip of the cusp becomes sharper as $C$ decreases, so the standard RW
transition, shown in Fig.~\ref{fig:11}(d), is the strongest.
We do not plot the $\theta$-$T$ phase diagrams for this pattern with the physical quark masses. The traditional
RW endpoint in the PNJL model with physical quark masses is a triple point~\cite{Sugano}, so we expect that the phase
diagrams for different values of $C$ are similar to Fig.~\ref{fig:3} and that the RW endpoints are triple points,
except that $\theta_{RW}\neq{k\pi/3}$.
\begin{figure}[htbp]
\includegraphics[width=0.9\columnwidth, clip]{5.5-55-140.7thermp.eps}
\caption{ Thermodynamic potentials of the $\mathbb{Z}_3$ sectors at $T=250$ MeV for $C=1$
in the case of $N_f=1+1+1$ with $m_u=5.5$ MeV, $m_d=55$ MeV and $m_s=140.7$ MeV.}
\label{fig:12}
\end{figure}
\subsection{Center symmetry breaking pattern (v): $N_f=1+1+1$ with $C={1}$}
The $\theta$-dependences of $\Omega_{\phi}$ at $T=250$ MeV in Pattern (v) are shown in Fig.~\ref{fig:12},
where $m_u=5.5$ MeV, $m_d=55$ MeV and $m_s=140.7$ MeV are adopted. Similar to Patterns (ii)-(iii), the
original center symmetry of $\mathbb{Z}_3$-QCD is explicitly broken due to mass non-degeneracy.
Note that $\Omega(\theta)\neq\Omega(-\theta)$ in this pattern since different flavors have different masses.
As a result, the RW transitions do not occur at $\theta=k\pi/3$. Figure~\ref{fig:12} shows that $\theta_{RW}$
lies between $2k\pi/3$ and $(2k+1)\pi/3$, which is different from Fig.~\ref{fig:11}. Fixing $m_u$ and $m_s$
and keeping $m_u<m_d<m_s$, we verify that the RW point moves towards $2k\pi/3$ ($(2k+1)\pi/3$) when
$m_d\rightarrow{m_u}$ ($m_d\rightarrow{m_s}$). This is easily understood, since the condition $m_u=m_d<m_s$ ($m_u<m_d=m_s$)
with $C=1$ corresponds to Pattern (ii) (Pattern (iii)).
In Pattern (v), the dependence of the RW transition on the quark masses is complicated, and the PNJL model is only
suited to studying systems with light quarks. Fig.~\ref{fig:12} shows that the strength of the RW transition
is similar to the cases of Pattern (ii) with $m_{u(d)}=5.5$ MeV and $m_s=140.7$ MeV and Pattern (iii) with
$m_{u(d)}=140.7$ MeV and $m_s=5.5$ MeV. This may suggest that in light-flavor cases the RW transition due
to mass non-degeneracy is considerably weaker than the traditional RW transition, and thus the center symmetry
breaking is not so severe.
\section{Discussion and conclusion}
\label{Sec_4}
In this paper, we use the three-flavor PNJL model as a $\mathbb{Z}_3$-QCD model to investigate the nature
of the RW and deconfinement transitions by breaking the center symmetry in different patterns. The FTBCs
are adopted, which correspond to the flavor-dependent imaginary chemical potentials
$(\mu_{u},\mu_{d},\mu_{s})/iT =(\theta-2C\pi/3, \theta, \theta+2C\pi/3)$. The center symmetry of
$\mathbb{Z}_3$-QCD is explicitly broken when the three flavors are mass-nondegenerate and/or $C\neq1$.
We first demonstrate that the thermodynamic potential $\Omega(\theta)$ for $N_f=3$ and $C=1$ peaks at
$\theta=(2k+1)\pi/3$ and $2k\pi/3$ ($k\in\mathbb{Z}$) at low and high temperatures, respectively. Namely,
the shift of the peak position of $\Omega(\theta)$ from $\theta=(2k+1)\pi/3$ to $\theta=2k\pi/3$ with $T$
corresponds precisely to the true first-order deconfinement transition. There is no RW transition in this
case because of the exact center symmetry.
For $N_f=3$ with $C\neq1$, the RW transitions occur at $\theta=(2k+1)\pi/3$ when $T>T_{RW}$. The transition
becomes stronger as $C$ decreases from one, and the strongest case corresponds to the traditional RW
transition with $C=0$. We verify that the RW endpoint is always a triple point in the light-flavor case with
$m_u=m_d=m_s=5.5$ MeV. The corresponding first-order deconfinement transition line in the $\theta$-$T$ plane
becomes shorter as $C$ approaches zero. For $C$ near zero, the first-order deconfinement transition only
appears in a very small region around the RW endpoint.
For $N_f=2+1$ with $C=1$, the RW transitions appear at $\theta=2k\pi/3$ rather than $\theta=(2k+1)\pi/3$.
We argue that the angle $\theta_{RW}$ in this case is mainly determined by the two mass-degenerate light
flavors, which is supported by the previous study of the two-flavor system at nonzero imaginary baryon
and isospin chemical potentials~\cite{Sakai:2009vb}. The single heavier flavor directly affects $T_{RW}$ and the RW
strength. For $m_u=m_d=5.5$ MeV, it is found that the tips of the RW cusps become sharper with increasing
$m_s$ ($m_s>m_u$); moreover, the RW endpoints are always triple points and only the first-order deconfinement
transition appears in the $\theta$-$T$ plane.
In contrast, the RW transitions for $N_f=1+2$ and $C=1$ still appear at $\theta=(2k+1)\pi/3$. This is because
$\theta_{RW}$ in this pattern is determined by the only light flavor rather than the two degenerate heavier
ones, which is consistent with the argument mentioned above. Similarly, the flavor mass mismatch significantly
affects $T_{RW}$ and the RW strength. For $m_u=m_d=140.7$ MeV and $m_s<m_u$, it is found that as $m_s$
decreases, the RW transition gets stronger but $T_{RW}$ becomes higher. The latter is unusual in
comparison with the two cases above, and the reason is unclear. Similar to the pattern of $N_f=2+1$
and $C=1$, the deconfinement transition is always first-order.
In the above three patterns, the relation $\Omega(\theta)=\Omega(-\theta)$ holds and the $\theta_{RW}$'s are
integer multiples of $\pi/3$. In general, $\Omega(\theta)$ is not $\theta$-even and $\theta_{RW}$ can take
other values. For $N_f=2+1$ with $C\in(0,1)$, $\theta_{RW}$ is located in $((2k-1)\pi/3,2k\pi/3)$ and moves
to $2k\pi/3$ ($(2k-1)\pi/3$) as $C$ approaches one (zero). In this pattern, the RW strength is more
sensitive to the deviation of $C$ from one than to the mass difference. In contrast, for $N_f=1+1+1$
with $C=1$, $\theta_{RW}$ is located in $(2k\pi/3,(2k+1)\pi/3)$ and moves towards $2k\pi/3$
($(2k+1)\pi/3$) when $N_f=1+1+1 \rightarrow N_f=2+1$ ($N_f=1+2$).
Our calculation suggests that the deconfinement transition always remains first-order for $C=1$, with or without
mass degeneracy, in the PNJL model. This indicates that the center symmetry breaking caused purely by the mass
difference is too weak to lead to a deconfinement crossover if the common difference of the $\mu_f/{iT}$ series is
$2\pi/3$ in this model. In contrast, when $C$ deviates from one and falls below some critical value $C_c(\theta)$,
the deconfinement crossover occurs at $\theta$ far from $\theta_{RW}$, which implies strong center symmetry breaking.
The first-order deconfinement transition line in the $\theta$-$T$ plane shrinks as $C$ decreases to zero.
Thus the strongest deconfinement transition happens at $\theta_{RW}$ and $C=0$.
This study gives predictions for how the RW and deconfinement transitions depend on the degree and manner of the
center symmetry breaking related to $\mathbb{Z}_3$-QCD. These results may be illuminating for understanding the
relationship between the $\mathbb{Z}_3$ symmetry, the RW transition, and the deconfinement transition in the region
of finite imaginary chemical potential and temperature where LQCD simulations are available. The conclusions obtained
here are mainly based on an effective-model analysis, which should be checked by other methods. Moreover, the quark
masses cannot be made large in our calculation, and it is unclear how the RW transition depends on the center
symmetry breaking in a heavy-quark system. Further study employing LQCD simulations, or perturbative strong-coupling
QCD with the FTBCs taken into account, is necessary.
\vspace{5pt}
\noindent{\textbf{\large{Acknowledgements}}}\\
This work was supported by the NSFC (No. 11875127 and No. 11275069).
\section{Introduction}
\label{intr}\setcounter{equation}{0}
Classical general relativity predicts that we can make black holes. If we have enough mass in a given region of space then we have a process of `gravitational collapse'; the matter is squeezed towards an infinite curvature singularity at $r=0$ and the
spacetime outside $r=0$ is described by the vacuum solution (we are assuming 3+1 dimensions, no charge for the black hole)
\begin{equation}
ds^2=-(1-{2M\over r})dt^2+(1-{2M\over r})^{-1} dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)
\label{one}
\end{equation}
We use units such that $G=1, c=1, \hbar=1$.
The surface $r=2M$ is the `horizon'; classically nothing can emerge from inside this region to the outside.
A curious paradox emerges when we add in the ideas of quantum mechanics to this picture. Quantum field theory tells us that the vacuum has fluctuations, which may be characterized by pairs of particles and anti-particles that are being continuously created and annihilated. Hawking \cite{hawking} showed that when such a pair is created near the horizon of a black hole then one member of the pair can fall into the hole (reducing its mass) while the other member escapes to infinity, giving `Hawking radiation'. Eventually the black hole disappears, and we are left with this radiation. While energy has been conserved in the process, we have a problem. The radiation quanta were created from vacuum fluctuations near the horizon, and at this location there is no information in the geometry about what kind of matter made up the mass $M$ of the black hole. So the radiation quanta do not carry information about the initial matter which collapsed to make the hole. If $|\psi\rangle_i$ is the state of the initial matter which underwent gravitational collapse, then we cannot describe the final configuration of radiation by a state arising from a normal quantum mechanical evolution
\begin{equation}
|\psi\rangle_f = e^{-iHt}|\psi\rangle_i
\end{equation}
since otherwise we {\it could} reconstruct the details of the initial matter by inverting the unitary evolution operator:
\begin{equation}
|\psi\rangle_i = e^{iHt}|\psi\rangle_f
\end{equation}
In fact Hawking found that the outgoing and infalling members of the pair are in an entangled state, and when the black hole vanishes (together with the members of the pairs that fell into the hole) then the radiation quanta left outside are entangled with `nothing'; i.e., they must be described not by a pure state but by a density matrix.
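Hawking's entangled pair is, schematically, of Bell type; tracing out the member that falls behind the horizon leaves the outside quantum in a maximally mixed state. A toy check of this standard quantum-mechanical fact (one qubit per mode, which simplifies the actual two-mode squeezed state):

```python
import numpy as np

# Schematic pair state (|00> + |11>)/sqrt(2): first index = outgoing
# quantum, second index = infalling partner.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

rho = np.outer(psi, psi)                    # pure two-mode density matrix
rho4 = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
rho_out = np.trace(rho4, axis1=1, axis2=3)  # trace over the infalling mode

# The reduced state is maximally mixed: rho_out = I/2, entropy ln 2.
entropy = -sum(p * np.log(p) for p in np.linalg.eigvalsh(rho_out))
```

Each emitted quantum thus contributes $\ln 2$ of entanglement entropy in this toy model; once the infalling partners are gone with the hole, the mixed state of the radiation cannot be traced back to a pure one.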
Closely associated to this problem is the `entropy puzzle'. Take a box of gas having an entropy $S$ and throw it into the black hole. Have we decreased the entropy of the Universe and violated the second law of thermodynamics? The work of Bekenstein \cite{bek} and Hawking \cite{hawking} shows that if we associate an entropy to the black hole equal to
\begin{equation}
S_{Bek}={A\over 4G}
\label{twonew}
\end{equation}
then the second law is saved; the decrease in entropy of the matter in the Universe is made up by the increase in the entropy of the black hole. But if we take (\ref{twonew}) seriously as the entropy of the hole then statistical mechanics tells us that there should be $e^{S_{Bek}}$ states of the black hole for the same mass. Can we see these states? The metric (\ref{one}) seems to be the unique one describing the endpoint of gravitational collapse; no small deformations are allowed, a fact encoded in the colloquial statement `Black holes have no hair'. The fact that we cannot find the $e^{S_{Bek}}$ states of the hole is the `entropy puzzle'. To see why this is tied to the information puzzle consider burning a piece of coal. The coal disappears and radiation is left, but there is no `information loss'. The state of the coal can be seen by examining the piece of the coal; a different piece of coal will have a different internal arrangement of atoms even though it might look similar at a coarse-grained level. The radiation leaves from the surface of the coal, so it can `see' the details of the internal structure of the coal. By contrast in the black hole the radiation leaves from the horizon, a region which is locally the {\it vacuum}. The initial matter went to $r=0$, which is separated by a macroscopic distance -- the horizon radius-- from the place where the radiation is created.
If the above Hawking process were valid then we must make a big change in our ideas of quantum theory, replacing unitary evolution of pure states by a more general theory where the generic configuration is a density matrix. Not surprisingly, considerable effort was put into looking for a flaw in the Hawking computation. But the computation proved to be remarkably robust in its basic outline, and the `Black hole information paradox' resisted all attempts at resolution for some thirty years.
One may imagine that since we are using general relativity and quantum theory in the same problem, we must inevitably be led to the details of `quantum gravity', which is a poorly understood subject. So perhaps there are many things that were not done correctly in the Hawking derivation of radiation, and there might be no paradox. But we cannot escape the problem so easily. The radiation is derived from the behavior of vacuum fluctuations at the horizon, where the geometry (\ref{one}) is completely smooth; the curvature length scale here is $\sim M$, the radius of the black hole, and can be made arbitrarily large by considering holes of larger and larger $M$. When does quantum gravity become relevant? With $G,c,\hbar$ we can make a unit of length -- the `planck length'
\begin{equation}
l_p=[{G\hbar\over c^3}]^{1\over 2}\sim 10^{-33} ~ cm
\end{equation}
(we have assumed 3+1 dimensions and put in the usual values of the fundamental constants.) When the curvature length scale becomes of order $l_p$ then the concept of spacetime as a smooth manifold must surely break down in some way.
But for a solar mass black hole the curvature length scale at the horizon is $\sim 3~km$ and for black holes like those at the centers of galaxies it is $\sim 10^8 ~km$, the same order as the curvature here on earth. It thus appears that we need not know anything about quantum gravity, and the simple rules of `quantum fields on curved space' would be sufficient to see how vacuum fluctuations at the horizon evolve to become radiation. It is these rules that Hawking used, and these lead to the information paradox.
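The scales in this argument are easy to make concrete; a rounded-constants estimate (illustrative numbers only):

```python
from math import sqrt

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m s^-1
hbar = 1.055e-34    # J s
M_sun = 1.989e30    # kg

l_p = sqrt(G * hbar / c**3)   # Planck length, ~1.6e-35 m
r_s = 2 * G * M_sun / c**2    # horizon curvature scale, ~3 km

# Even a stellar-mass horizon sits ~38 orders of magnitude away
# from the naive quantum-gravity scale.
ratio = r_s / l_p
```

This is the quantitative sense in which Hawking's computation appears to need only `quantum fields on curved space', and why the paradox cannot be dismissed by appealing to unknown planck-scale physics.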
Despite the above argument suggesting that quantum gravity is not relevant, there have been numerous attempts to find fault with the Hawking computation. Many of these attempts try to use the fact that in the Schwarzschild coordinate system used in (\ref{one}) there is an infinite redshift between the horizon and infinity, so low energy quanta at infinity appear very energetic to a local observer at the horizon (see for example \cite{thooft,ps}). But this infinite redshift just signals a breakdown of the Schwarzschild coordinate system at the horizon, and a different set of coordinates -- the Kruskal coordinates-- cover both the exterior and the interior of the horizon and show no pathology at the horizon. So we need to understand the physics behind such approaches in further detail. My personal opinion on this count is that the Hawking argument can be phrased in the following more precise way. Suppose that we assume
(a)\quad All quantum gravity effects are confined to within a fixed length scale like the planck length or string length.
(b)\quad The vacuum is unique.
\noindent Then Hawking radiation will occur in a way that will lead to the violation of quantum mechanics.
The arguments for this phrasing will be given elsewhere. But accepting for the moment this version of the Hawking argument we can ask which of the assumptions breaks down if quantum mechanics is to be valid. We will describe several computations in string theory that suggest that (a) is incorrect. How can this happen, if $l_p$ is the natural length scale made out of $G,c, \hbar$? If we scatter two gravitons off each other, then quantum gravity effects would indeed become important only when the wavelength of the gravitons dropped to a microscopic length like the string length or planck length. But a large black hole is made of a large number of quanta $N$. Is the length scale of quantum gravity $l_p$, or is it $N^\alpha l_p$ for some $\alpha>0$? We will argue, using some computations in string theory, that the latter is true, and that the length scale $N^\alpha l_p$ is of order the horizon radius, a macroscopic length. This, if true, would alter the picture of the black hole interior completely. It would also remove
the information paradox, since we will have a horizon sized `fuzzball', instead of the metric (\ref{one}) which is `empty space' near the horizon. Radiation leaving from the surface of the `fuzzball' can carry the information contained in the fuzzball just as radiation leaving from the surface of a piece of coal carries information about the state of the coal.
In this review we will go over some of the basic understanding of black holes that has been obtained using string theory.
We will see how to make black holes in string theory, and how to understand their entropy and Hawking radiation. We will construct explicitly the interiors for 2-charge extremal holes. These will give us a `fuzzball' type interior conjectured above.
We will then use qualitative arguments to suggest that all black holes have such a `fuzzball' description for the interior of their horizon.
\bigskip
{\bf Note:} The goal of this review is to initiate the reader to some of the older work on black holes in string theory, and to show how it connects to some of the ideas developing now. We do not seek to actually review in any detail the current advances being made in the area. In particular we do not discuss 3-charge systems, in which considerable progress has been achieved over the past couple of years. We list some of these advances at the end of this article, with the hope that the reader will be inclined to delve further into the literature on the subject.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=4in]{nfig10.eps}
\caption{(a) The conventional picture of a black hole \quad (b) the proposed picture -- state information is distributed throughout the `fuzzball'. }
\label{fig1}
\end{center}
\end{figure}
\section{Making black holes in string theory}
\label{maki}\setcounter{equation}{0}
For concreteness, let us start with type IIA string theory. We must make our black hole from the objects in the theory. These
objects are the following. First we have the massless graviton, present in any theory of gravity. We also have the elementary string, which we call an NS1 brane. Among the higher branes, we have an NS5 brane, and D0,D2,D4,D6,D8 branes.
We can also make Kaluza-Klein monopoles just using the gravitational field; these look like branes extending in the directions transverse to the KK-monopole.
To make a black hole we would like to put a large mass at one location. Let us start by using the elementary string, the NS1. Left to itself it will shrink and collapse to a massless quantum, so we compactify a direction to a circle $S^1$ and wrap the NS1 on this circle. To get a large mass we let the winding number be $n_1\gg 1$. It is important that we take a {\it bound} state of these $n_1$ strings, since otherwise we will end up making many small black holes instead of one big black hole. The bound state in this case is easily pictured: We let the string wrap around the circle $n_1$ times before joining back to form a closed loop; thus we have one long `multiwound' string of total length $2\pi R n_1$ where $R$ is the radius of the $S^1$.
The supergravity solution produced by such a string is
\begin{eqnarray}
ds^2_{string}&=&H_1^{-1}[-dt^2+dy^2]+\sum_{i=1}^8 dx_idx_i\\
e^{2\phi}&=&H_1^{-1}\\
H_1&=&1+{Q_1\over r^6}
\end{eqnarray}
Here $ds^2_{string}$ is the 10-D string metric, $y$ is the coordinate along the $S^1$ and $x_i$ are the 8 spatial directions transverse to the string. At $r\rightarrow 0$ the dilaton $\phi$ goes to $-\infty$ and the length of the $y$ circle is seen to go to zero. The geometry does not have a horizon at any nonzero $r$, and if we say that the horizon occurs at $r=0$ then we find that the area of this horizon (measured in the Einstein metric) is zero. Thus we get $S_{Bek}=0$.
This vanishing of $S_{Bek}$ is actually consistent with the microscopic count. The NS1 brane is in an oscillator ground state, so its only degeneracy comes from the zero modes of the string, which give 128 bosonic and 128 fermionic states. Thus we have $S_{micro}=\ln[256]$ which does not grow with $n_1$. Thus in the macroscopic limit $n_1\rightarrow \infty$ we would write $S_{micro}=0$ to leading order, which agrees with $S_{Bek}$.
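The entropy count in this paragraph amounts to one line of arithmetic; a quick check (the 256 ground states are the standard count of 128 bosonic plus 128 fermionic zero-mode states):

```python
from math import log

bosonic, fermionic = 128, 128
degeneracy = bosonic + fermionic   # 256 ground states, for any n_1

S_micro = log(degeneracy)          # ~5.55, does not grow with n_1
# Per unit winding charge this vanishes as n_1 -> infinity:
entropy_per_charge = [S_micro / n1 for n1 in (10, 10**3, 10**6)]
```

Since $S_{micro}$ is $n_1$-independent, the entropy per unit charge goes to zero in the macroscopic limit, matching $S_{Bek}=0$ to leading order.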
Let us go back and see why we failed to make a black hole with nonzero area. Consider the NS1 brane as an M2 brane of M theory; this M2 brane wraps the directions $x_{11},y$. A brane has tension along its worldvolume directions, so it squeezes the cycles on which it is wrapped. Thus the length of the $x_{11}$ circle goes to zero at the brane location $r=0$, which shows up as $\phi\rightarrow -\infty$ in the IIA description. Similarly, we get a vanishing of the length of the $y$ circle in the M theory description. On the other hand if we had some directions that are compact and transverse to a brane then they would tend to expand; this happens because the flux radiated by the brane has energy and this energy is lower if the flux is spread over a larger volume.
In computing the area of the horizon we can take two equivalent approaches:
(a) We can just look at the D noncompact directions, and find the Einstein metric (after dimensional reduction) for these noncompact directions. We compute the area $A_D$ in this metric and use the Newton constant $G_D$ for $D$ dimensions to get $S_{Bek}=A_D/4G_D$.
(b) We can compute the area of the horizon in the full 11-D metric of M theory, and use the Newton constant for 11-D to get
$S_{Bek}=A_{11}/4 G_{11}$. In the IIA description we can compute the area of the horizon in the 10-D Einstein metric and write $S_{Bek}=A^E_{10}/4 G_{10}$.
It is easy to check that the two computations give the same result. Let us follow (b). Then we can see that the vanishing of the $x_{11}$ and $y$ circles in the above 1-charge solution will make the 11-D horizon area vanish, and give $S_{Bek}=0$.
\subsection{Two charges}
\label{twoc}
To avoid the shrinking of the direction $x_{11}$ we can take M5 branes and place them transverse to the direction $x_{11}$; this gives NS5 branes in the IIA theory. To wrap the five spatial worldvolume directions of the NS5 branes we need more compact directions, so let us compactify a $T^4$ in addition to the $S^1$ and wrap the NS5 branes on this $T^4\times S^1$. We still have the NS1 branes along $y$, but note that with the additional compactifications the power of $r$ occurring in $H_1$ changes. We get
\begin{eqnarray}
ds^2_{string}&=&H_1^{-1}[-dt^2+dy^2]+H_5\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
e^{2\phi}&=&{H_5\over H_1}\nonumber \\
H_1&=&1+{Q_1\over r^2}, ~~~~~\qquad H_5=1+{Q_5\over r^2}
\end{eqnarray}
The $T^4$ is parametrized by $z_a, a=1\dots 4$. $Q_5$ is proportional to $n_5$, the number of NS5 branes. Note that the dilaton stabilizes to a constant as $r\rightarrow 0$; this reflects the stabilization of the $x_{11}$ circle. Note that the $T^4$ also has a finite volume at $r=0$ since the NS5 branes cause it to shrink (their worldvolume is along the $T^4$) while the NS1 branes cause it to expand (they are transverse to the $T^4$). But the horizon area in the Einstein metric is still zero; this can be seen in the M theory description from the fact that the NS1 (M2) and the NS5 (M5) both wrap the $y$ circle and cause it to shrink to zero at $r=0$.
\subsection{Three charges}
\label{thre}
To stabilize the $y$ circle we add {\it momentum} charge P along the $y$ circle. If we have $n_p$ units of momentum along $y$ then the energy of these modes is $n_p/R$, so their energy is {\it lower} for larger $R$. This contribution will therefore counterbalance the effect of the NS1, NS5 branes for which the energies were linearly proportional to $R$. We get
\begin{eqnarray}
ds^2_{string}&=&H_1^{-1}[-dt^2+dy^2+K(dt+dy)^2]+H_5\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
e^{2\phi}&=&{H_5\over H_1}\nonumber \\
H_1&=&1+{Q_1\over r^2}, ~~~~~~~~~~\qquad H_5=1+{Q_5\over r^2}, ~~~~~~~~~~~\qquad K={Q_p\over r^2}
\label{tenp}
\end{eqnarray}
This metric has a horizon at $r=0$. We will compute the area of this horizon in the 10-D string metric, and then convert it to the area in the Einstein metric.
Let us write the metric in the noncompact directions in polar coordinates and examine it near $r=0$
\begin{equation}
H_5 \sum dx_idx_i= H_5(dr^2+r^2d\Omega_3^2)\approx Q_5[{dr^2\over r^2}+d\Omega_3^2]
\end{equation}
Thus the area of the transverse $S^3$ becomes a constant at $r\rightarrow 0$
\begin{equation}
A_{S^3}^{string} = (2\pi^2)Q_5^{3\over 2}
\end{equation}
The length of the $y$ circle at $r\rightarrow 0$ is
\begin{equation}
L_y^{string}=(2\pi R) ({K\over H_1})^{1\over 2}= 2\pi R {Q_p^{1\over 2}\over Q_1^{1\over 2}}
\end{equation}
Let the coordinate volume spanned by the $T^4$ coordinates $z_a$ be $(2\pi)^4 V$. The volume of $T^4$ at $r\rightarrow 0$ is
\begin{equation}
V_{T^4}^{string}=(2\pi)^4 V
\end{equation}
Thus the area of the horizon at $r=0$ is
\begin{equation}
A^{string}= A_{S^3}^{string} L_y^{string}V_{T^4}^{string}=(2\pi^2)(2\pi R)((2\pi)^4 V)Q_1^{-{1\over 2}}Q_5^{3\over 2}Q_p^{1\over 2}
\end{equation}
The 10-D Einstein metric $g^E_{ab}$ is related to the string metric $g^S_{ab}$ by
\begin{equation}
g^E_{ab}=e^{-{\phi\over 2}}g^S_{ab}={H_1^{1\over 4}\over H_5^{1\over 4}}g^S_{ab}
\end{equation}
At $r\rightarrow 0$ we have $e^{2\phi}={Q_5\over Q_1}$, which gives for the area of the horizon in the Einstein metric
\begin{equation}
A^E=({g^E_{ab}\over g^S_{ab}})^4A^{string}={Q_1\over Q_5}A^{string}=(2\pi^2)(2\pi R)((2\pi)^4 V)(Q_1Q_5Q_p)^{1\over 2}
\label{area3charge}
\end{equation}
The Newton constant of the 5-D noncompact space $G_5$ is related to the 10-D Newton constant $G_{10}$ by
\begin{equation}
G_5={G_{10}\over (2\pi R) ((2\pi)^4 V)}
\end{equation}
We can thus write the Bekenstein entropy as
\begin{equation}
S_{Bek}={A^E\over 4G_{10}}={(2\pi^2)(2\pi R)((2\pi)^4 V)(Q_1Q_5Q_p)^{1\over 2}\over 4 G_{10}}={(2\pi^2)(Q_1Q_5Q_p)^{1\over 2}\over 4G_5}
\label{two}
\end{equation}
We next express the $Q_i$ in terms of the integer charges
\begin{eqnarray}
Q_1&=& { g^2 \alpha'^3\over V} n_1\nonumber \\
Q_5&=&\alpha' n_5\nonumber \\
Q_p&=&{g^2\alpha'^4\over VR^2}n_p
\label{sixt}
\end{eqnarray}
We have
\begin{equation}
G_{10}=8\pi^6 g^2\alpha'^4
\end{equation}
Substituting in (\ref{two}) we find
\begin{equation}
S_{Bek}=2\pi(n_1n_5n_p)^{1\over 2}
\label{threeqp}
\end{equation}
Note that the moduli $g, V, R$ have all cancelled out. This fact is crucial to the possibility of reproducing this entropy by some microscopic calculation. In the microscopic description we will have a bound state of the charges $n_1, n_5, n_p$ and we will be counting the degeneracy of this bound state. But since we are looking at BPS states this degeneracy will not depend on the moduli.
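As a quick numerical check of this cancellation, one can substitute (\ref{sixt}) and $G_{10}$ directly into (\ref{two}) for arbitrary moduli. The sketch below is illustrative (the function name and sample values are not part of the text); it verifies that $g, V, R, \alpha'$ all drop out of the final answer:

```python
import math, random

def bekenstein_entropy(n1, n5, np_, g, ap, V, R):
    """S_Bek = A^E/(4 G_10) for the extremal NS1-NS5-P hole, using the
    dictionary between Q_1, Q_5, Q_p and the integer charges (ap = alpha')."""
    Q1 = g**2 * ap**3 * n1 / V
    Q5 = ap * n5
    Qp = g**2 * ap**4 * np_ / (V * R**2)
    A_E = (2 * math.pi**2) * (2 * math.pi * R) * ((2 * math.pi)**4 * V) \
          * math.sqrt(Q1 * Q5 * Qp)
    G10 = 8 * math.pi**6 * g**2 * ap**4
    return A_E / (4 * G10)

# the moduli g, V, R (and alpha') cancel: S depends only on n1, n5, np
random.seed(1)
for _ in range(5):
    n1, n5, np_ = (random.randint(1, 50) for _ in range(3))
    g, ap, V, R = (random.uniform(0.1, 10.0) for _ in range(4))
    S = bekenstein_entropy(n1, n5, np_, g, ap, V, R)
    assert abs(S - 2 * math.pi * math.sqrt(n1 * n5 * np_)) < 1e-9 * S
```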
\subsection{Dualities}
\label{dual}
We have used three charges above: NS1 branes wrapped on $S^1$, NS5 branes wrapped on $T^4\times S^1$, and momentum P along $S^1$. If we do a T-duality in one of the directions of $T^4$ we do not change any of the charges, but reach type IIB string theory. We can now do an S-duality which gives
\begin{equation}
NS1\, NS5\, P~\stackrel{\textstyle S}{\rightarrow} ~ D1\, D5\,P
\end{equation}
Dualities can also be used to permute the three charges among themselves in all possible ways. For example four T-dualities along the four $T^4$ directions will interchange the D1 with the D5, leaving P invariant. Another set of dualities can map NS1-NS5-P to P-NS1-NS5. Since we will make use of this map later, we give it explicitly here (the direction $y$ is called $x^5$ and the $T^4$ directions are called $x^6\dots x^9$)
\begin{eqnarray}
&NS1\, NS5\, P\,(IIB)&~\stackrel{\textstyle T_5}{\rightarrow}~ P\, NS5 \, NS1\, (IIA)\nonumber \\
&&~\stackrel{\textstyle T_6}{\rightarrow}~ P\, NS5 \, NS1\,(IIB)\nonumber \\
&&~\stackrel{\textstyle S}{\rightarrow}~ P\, D5\, D1\,(IIB)\nonumber \\
&&\stackrel{\textstyle T_{6789}}{\rightarrow}~ P\, D1\, D5\,(IIB)\nonumber \\
&&~\stackrel{\textstyle S}{\rightarrow}~ P\, NS1 \, NS5 \,(IIB)
\label{twop}
\end{eqnarray}
If we keep only the first two charges in the above sequence then we see that the NS1-NS5 bound state is dual to the P-NS1 state. This duality will help us understand the geometric structure of the NS1-NS5 system, since P-NS1 is just an elementary string carrying vibrations. This duality was also profitably used in \cite{vw}.
\section{The microscopic count of states}
\label{them}\setcounter{equation}{0}
We have already seen that for the one charge case (where we had just the string NS1 wrapped on a circle $n_1$ times) we get
$S_{micro}=\ln[256]$. This entropy does not grow with the winding number $n_1$ of the string, so from a macroscopic perspective we get $S_{micro}\approx 0$.
Let us now consider two charges, which we take to be NS1 and P; by the dualities described above this is equivalent to taking any two charges from the set NS1-NS5-P or from the set D1-D5-P. The winding number of the NS1 is $n_1$ and the number of units of momentum is $n_p$. It is important that we consider the {\it bound} state of all these charges. If we break up the charges into two separate bound states then we would be describing {\it two} black holes rather than the single black hole that we wish to consider.
For charges NS1-P it is easy to identify what states are bound. First we must join all the windings of the NS1 together to make a single string; this `long string' loops $n_1$ times around $S^1$ before joining back to its starting point. The momentum P must also be bound to this long string. If the momentum was {\it not} bound to the NS1 then it would manifest itself as massless quanta of the IIA theory (graviton, gauge fields etc) rotating around the $S^1$. When the momentum P is {\it bound} to the NS1 then it takes the form of traveling waves on the NS1. Thus the bound state of NS1-P is just a single `multiwound' string wrapped on $S^1$ with waves traveling in one direction along the $S^1$.
But there are several ways to carry the same momentum P on the NS1. We can partition the momentum among different harmonics of vibration, and this gives rise to a large number of states for a given choice of $n_1, n_p$.
Since we are looking at BPS states, we do not change the count of states by taking $R$ to be very large. In this
limit we have small transverse vibrations of the NS1. We can take the DBI action for the NS1, choose the static gauge, and obtain an action for the vibrations that is just quadratic in the amplitude. The vibrations travel at the speed of light along the direction $y$. Different Fourier modes separate, and
each Fourier mode is described by a harmonic oscillator. The total length of the NS1 is
\begin{equation}
L_T=2\pi R n_1
\end{equation}
Each excitation of the Fourier mode $k$ carries momentum and energy
\begin{equation}
p_k={2\pi k\over L_T}, ~~~e_k=|p_k|
\label{component1}
\end{equation}
The total momentum on the string can be written as
\begin{equation}
P={n_p\over R}={2\pi n_1n_p\over L_T}
\label{momentum1}
\end{equation}
For later use we will work with the more general case where we have momentum modes traveling in both directions along the string. The extremal case was first discussed in \cite{sv} and the near extremal system in \cite{callanmalda}. We will follow the notation of \cite{dmcompare} for later convenience. We
have a gas of excitations with given total energy $E$ and total
momentum $P$ along the string direction. Using a canonical ensemble, the energy $E$ is determined
by an inverse temperature $\beta$ and the momentum $P$
by a chemical potential $\alpha$ as follows. Let there be
$m_r$ particles with energy $e_r$ and momentum $p_r$. Define a partition function ${\cal Z}$ by
\begin{equation}
\label{foone}{{\cal Z} = e^h = \sum_{states} {\rm exp}~[ -\beta\sum_r
m_r e_r - \alpha\sum_r m_r p_r]}
\end{equation}
Then $\alpha, \beta$ are determined by requiring
\begin{equation}
\label{fotwo}{ E = -{\partial h \over \partial \beta}~~~~~~~~~
P = -{\partial h \over \partial \alpha}}
\end{equation}
The average number of particles in the state $(e_r, p_r)$ is
then given by
\begin{equation}
\label{fothree}{ \rho (e_r, p_r) = {1 \over e^{\beta e_r + \alpha p_r}
\pm 1}}
\end{equation}
where as usual the plus sign is for fermions and the minus sign is
for bosons. Finally the entropy $S$ is given by the standard thermodynamic
relation
\begin{equation}
\label{fofour}{ S = h + \alpha P + \beta E}
\end{equation}
For the case where the gas of excitations has $f_B$ species of bosons and $f_F$
species of fermions the above quantities may be easily evaluated
\begin{eqnarray}
P &=& {(f_B+{1\over 2} f_F)L_T\pi\over 12}[{1\over (\beta + \alpha)^2}
-{1 \over (\beta - \alpha)^2}]~,~~~~
E = {(f_B+{1\over 2} f_F)L_T\pi\over 12}[{1\over (\beta + \alpha)^2}
+{1 \over (\beta - \alpha)^2}]\nonumber\\
S &=& {(f_B+{1\over 2} f_F)L_T\pi\over 6}[{1\over \beta + \alpha}
+{1 \over \beta - \alpha}]
\label{fofive}
\end{eqnarray}
Since we have massless particles in one spatial dimension, they
can be either right moving, with $e_r = p_r$, or left moving, with
$e_r = -p_r$. The distribution functions then become
\begin{eqnarray}
\rho_R &=& {1 \over e^{(\beta + \alpha)e_r}\pm 1}~~~~
~~~~~~~
{\rm R}\nonumber\\
\rho_L &=& {1 \over e^{(\beta - \alpha)e_r} \pm 1}~~~~~~~~~~~
{\rm L}
\label{fosix}
\end{eqnarray}
Thus the combinations
\begin{equation}
T_R = {1\over(\beta + \alpha)}, ~~~
T_L = {1\over (\beta - \alpha)}
\end{equation}
act as {\it effective} temperatures for the right and left moving
modes respectively. In fact all the thermodynamic quantities
can be split into a left and a right moving piece :
$E = E_R + E_L,~~P = P_R + P_L,~~~S = S_R + S_L$ in an obvious
notation. The various quantities $E_L, E_R, P_L, P_R, S_L, S_R$
may be read off from (\ref{fofive}).
We get
\begin{equation}
{ T_R = {\sqrt{12 E_R \over L_T\pi (f_B+{1\over 2} f_F)}}
~,~~~~~~~~~~~~T_L = {\sqrt{12 E_L \over L_T\pi (f_B+{1\over 2} f_F)}}}
\label{temps}
\end{equation}
The temperature $T_H$ of the whole system (R and L movers) is given through
\begin{equation}
{1\over T_H}=\beta={1\over 2}[{1\over T_R}+{1\over T_L}]
\label{thawking}
\end{equation}
where we have used the notation $T_H$ for this temperature since we will compare it to the Hawking temperature of the black hole. The extremal state corresponds to $P_L = E_L = 0$ so that $E = P$; in this case we get $T_L=0$ and consequently $T_H=0$.
Let us now apply these results to some particular cases.
\subsection{ Extremal NS1-P}
\label{extr}
To keep contact with the black hole problem that we have set up we will continue to compactify spacetime down to 4+1 noncompact dimensions. Take the compactification $M_{9,1}\rightarrow M_{4,1}\times T^4\times S^1$. The string can vibrate in 8 transverse directions: 4 along the $T^4$ and 4 along the noncompact directions of $M_{4,1}$. Thus we have 8 bosonic degrees of freedom, and by supersymmetry, 8 fermionic partners. Thus $f_B=f_F=8$. Since we are looking at the extremal case all the excitations move in the same direction, so we have, say, only R movers and no L movers. This gives $E=P$, which can be achieved by letting $\alpha\rightarrow-\infty, \beta\rightarrow\infty$ with $\alpha+\beta$ finite
\begin{equation}
E=P={(f_B+{1\over 2} f_F)L_T\pi\over 12}[{1\over (\beta + \alpha)^2}]={2\pi n_1n_p\over L_T}
\label{entropythermal}
\end{equation}
which gives
\begin{equation}
\beta+\alpha={L_T\over \sqrt{2}\sqrt{n_1n_p}}
\end{equation}
and
\begin{equation}
S^{T^4}_{micro}=2\pi\sqrt{2}\sqrt{n_1n_p}
\label{smicro2}
\end{equation}
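As a cross-check, one can solve (\ref{entropythermal}) for $\beta+\alpha$ numerically and evaluate the entropy from the right-moving part of (\ref{fofive}). The sketch below uses illustrative values and confirms (\ref{smicro2}), including the fact that $L_T$ drops out:

```python
import math

def extremal_entropy(n1, np_, L_T, f_B=8, f_F=8):
    """Solve E = P = 2*pi*n1*np/L_T for beta+alpha (only R movers
    excited), then evaluate S = feff*L_T*pi/6 * 1/(beta+alpha)."""
    feff = f_B + 0.5 * f_F
    E = 2 * math.pi * n1 * np_ / L_T
    beta_plus_alpha = math.sqrt(feff * L_T * math.pi / (12 * E))
    return feff * L_T * math.pi / (6 * beta_plus_alpha)

for n1, np_ in [(1, 1), (10, 7), (400, 900)]:
    S = extremal_entropy(n1, np_, L_T=7.3)
    assert abs(S - 2 * math.sqrt(2) * math.pi * math.sqrt(n1 * np_)) < 1e-9 * S
# the total length L_T cancels out of the final entropy, as it must
assert abs(extremal_entropy(5, 8, 1.0) - extremal_entropy(5, 8, 1e6)) < 1e-6
```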
For this calculation the compactification on $T^4\times S^1$ is the same as a compactification on just $S^1$, since vibrations in the $T^4$ directions are similar to those in $R^4$. We may also consider the compactification
$M_{9,1}\rightarrow M_{4,1}\times K3\times S^1$. But IIA on K3 is dual to heterotic on $T^4$, so we can look at vibrations of the heterotic string on $T^4$. To get supersymmetric configurations we must keep the left moving supersymmetric sector in the ground state, and excite only the bosonic right movers. There are 24 transverse bosonic oscillations, so we get $f_B=24, f_F=0$. We then find
\begin{equation}
S^{K3}_{micro}=4\pi\sqrt{n_1n_p}
\label{smicro2k3}
\end{equation}
There is another equivalent language in which we can derive these microscopic entropies. Recall that the momentum on the string is written in the form (\ref{momentum1}), where each quantum of harmonic $k$ carries the momentum (\ref{component1}). First focus on only one of the transverse directions of vibration. If there are $m_i$ units of the Fourier harmonic $k_i$ then we need to have
\begin{equation}
\sum_i m_ik_i=n_1n_p
\end{equation}
Thus the degeneracy is given by counting {\it partitions} of the integer $n_1n_p$. The number of partitions of an integer $N$ grows for large $N$ as
\begin{equation}
P[N]~\sim~ e^{2\pi\sqrt{N\over 6}}
\label{partitions}
\end{equation}
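The asymptotics (\ref{partitions}) can be checked against exact partition counts computed from Euler's pentagonal-number recurrence. The sketch below also uses the standard subleading Hardy-Ramanujan prefactor $1/(4N\sqrt{3})$ (a known refinement, not derived here) to tighten the comparison:

```python
import math

def partitions(N):
    """p(0..N) via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k, sign = 0, 1, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            if g1 > n:
                break
            total += sign * p[n - g1]
            g2 = k * (3 * k + 1) // 2
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
            sign = -sign
        p[n] = total
    return p

p = partitions(1000)
assert p[100] == 190569292                      # known exact value
# ln p(N) ~ 2*pi*sqrt(N/6) - ln(4*N*sqrt(3))  (Hardy-Ramanujan)
approx = 2 * math.pi * math.sqrt(1000 / 6) - math.log(4 * 1000 * math.sqrt(3))
assert abs(math.log(p[1000]) - approx) < 0.05
```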
We must however take into account the fact that (for the $T^4$ compactification) the momentum will be partitioned among 8 bosonic vibrations and 8 fermionic ones; the latter turn out to be equivalent to 4 bosons. Thus there are ${n_1n_p\over 12}$ units of momentum for each bosonic mode, and we must finally multiply together the degeneracies of the individual modes. This gives for the degeneracy of states ${\cal N}$
\begin{equation}
{\cal N}=[\exp(2\pi\sqrt{n_1n_p\over 72})]^{12}=\exp(2\pi\sqrt{2}\sqrt{n_1n_p})
\label{newmethod}
\end{equation}
which again gives the entropy (\ref{smicro2}).
The 2-charge extremal entropy was first obtained in \cite{sen}, following suggestions in \cite{susskind}.
\subsection{Extremal NS1-NS5-P}\label{extr2}
Let us now ask what happens if we add in the third charge, which will be NS5 branes if the first two charges are NS1-P.
We will build a hand-waving picture for the 3-charge bound state which will be enough for our present purposes; a more systematic derivation of these properties can however be given by using an `orbifold CFT' to describe the bound state \cite{sw,lm12}.
Suppose we have only one NS5 brane. Since the NS1 brane lies along the NS5 and is bound to the NS5, we can imagine that the NS1 can vibrate inside the plane of the NS5 but not `come out' of that plane. The momentum P will still be carried by traveling waves along the NS1, but now only four directions of vibration are allowed -- the ones inside the NS5 and transverse to the NS1. Thus $f_B$ in (\ref{entropythermal}) is 4 instead of 8. The three charge bound state is supersymmetric, so we should have 4 fermionic excitation modes as well. Then
\begin{equation}
f_B+{1\over 2} f_F=4+2=6
\end{equation}
But the rest of the computation can be done as for the two charge case, and we find
\begin{equation}
S_{micro}=2\pi\sqrt{n_1n_p}
\end{equation}
Since the three charges can be permuted among each other by duality, we expect a permutation symmetric result. Since we have taken $n_5=1$ we can write
\begin{equation}
S_{micro}=2\pi\sqrt{n_1n_5n_p}
\end{equation}
To understand the general case of $n_5>1$ we must get some understanding of why the winding number $n_1$ becomes effectively $n_1n_5$ when we have $n_5$ 5-branes in the system. To do this, recall that by dualities we have the map
\begin{equation}
NS1 (n_1) ~~ P (n_p) ~ \leftrightarrow ~ NS5 (n_1) ~~ NS1 (n_p)
\end{equation}
So let us first look at NS1-P. Suppose the NS1 wraps only {\it once} around the $S^1$. The $n_p$ units of momentum are partitioned among different harmonics, with the momentum of the excitations coming in multiples of $1/R$. Now suppose the NS1 is wound $n_1>1$ times around the $S^1$. The total length of the `multiwound' string is now $2\pi R n_1$ and the momentum now comes in multiples of
\begin{equation}
\Delta p=1/(n_1 R)
\end{equation}
(The total momentum $n_p/R$ must still be an integer multiple of $1/R$, since this quantization must be true for any system living on the $S^1$ of radius $R$ \cite{dmfrac}.) We therefore have $n_1n_p$ units of `fractional' strength $\Delta p$ that we can partition in different ways to get the allowed states of the system.
Now consider the NS5-NS1 system obtained after duality. If there is only one NS5 (i.e. $n_1=1$) then we just have $n_p$ NS1 branes bound to it. Noting how different states were obtained in the NS1-P picture we expect that we can count different states by partitioning this number $n_p$ in different ways. We can picture this by saying that the NS1 strings live in the NS5, but can be joined up to make `multiwound' strings in different ways. Thus we can have $n_p$ separate singly wound loops, or one loop wound $n_p$ times, or any other combination such that the total winding is $n_p$:
\begin{equation}
\sum_i m_i k_i = n_p
\end{equation}
where we have $m_i$ strings with winding number $k_i$.
If on the other hand we have many NS5 branes ($n_1>1$) then duality indicates that the NS1 breaks up into `fractional' NS1 branes, so that there are $n_1n_p$ strands in all. These latter strands can now be grouped together in various ways so that the number of possible states is given by partitions of $n_1n_p$
\begin{equation}
\sum_i m_i k_i=n_1n_p
\label{six}
\end{equation}
In fact we should be able to reproduce the entropy (\ref{smicro2}) by counting such partitions. Let us call each `multiwound' strand in the above sum a `component string'. The only other fact that we need to know about these component strings is that they have 4 fermion zero modes coming from left movers and 4 from right movers; this can be established by a more detailed treatment of the bound states using the `orbifold CFT'. Upon quantization we get two `raising operators' and two `lowering operators' for each of the left and right sides. Starting with a ground state (annihilated by all lowering operators)
we can choose to apply or not apply each of the 4 possible raising operators, so we get $2^4=16$ possible ground states of the component string. Applying an even number of raising operators gives a bosonic state while applying an odd number gives a fermionic state. Each component string (with a given winding number $k$) has therefore 8 bosonic states and 8 fermionic states.
The count of possible states of the NS5-NS1 system is now just like the count for the NS1-P system. If we partition the number $n_1n_p$ as in (\ref{six}) and there are 8 bosonic and 8 fermionic states for each member in a partition, then the total number of states ${\cal N}$ will be given by (following (\ref{newmethod}))
\begin{equation}
\ln[{\cal N}]=2\sqrt{2}\pi\sqrt{n_1n_p}
\label{sixfollow}
\end{equation}
With this understanding, let us return to the 3-charge system we were studying. We have $n_5$ NS5 branes and $n_1$ NS1 branes. The bound state of these two kinds of branes will generate an `effective string' which has total winding number
$n_1n_5$ \cite{maldasuss}. This effective string can give rise to many states where the `component strings' of the state have windings $k_i$ with
\begin{equation}
\sum m_i k_i =n_1n_5
\label{seven}
\end{equation}
We will later use a special subclass of states where all the component strings have the same winding $k$; we will also let each component string have the same fermion zero modes. Then the number of component strings is
\begin{equation}
m={n_1n_5\over k}
\label{teight}
\end{equation}
In the above set, one extreme case is where all component strings are singly wound
\begin{equation}
k=1, ~~~m=n_1n_5
\label{eight}
\end{equation}
The other extreme is where there is only one component string
\begin{equation}
k=n_1n_5, ~~~m=1
\label{nine}
\end{equation}
Let us now add the momentum charge $P$ to the system. We can take the NS1-NS5 bound state to be in any of the configurations (\ref{seven}), and the $n_p$ units of momentum can be distributed on the different component strings
in an arbitrary way. All the states arising in this way will be microstates of the NS1-NS5-P system, and should be counted towards the entropy. But one can see that at least for small values of $n_p$ we get a larger contribution from the case
where we have only a small number of component strings, each having a large $k_i$. To see this let $n_p=1$. First consider the extreme case (\ref{eight}). Since each component string is singly wound ($k=1$) there is no `fractionation', and we just place one unit of momentum on any one of the component strings. Further since all the component strings are alike (we chose all component strings to have the same zero modes) we do not get different states by exciting different component strings. Instead we have a state of the form
\begin{equation}
|\Psi\rangle={1\over \sqrt{n_1n_5}}[({\rm component ~string~ 1 ~excited})~+~\dots ~+~({\rm component ~string~ n_1n_5 ~excited})]
\end{equation}
The momentum mode can be in 4 bosonic states and 4 fermionic states, so we just get 8 states for the system.
Now consider the other extreme (\ref{nine}). There is only one component string, but since it has winding $k=n_1n_5$ the one unit of momentum becomes an excitation at level $n_1n_5$ on the component string. The number of states is then given by partitioning this level among different harmonics, and we get for the number of states
\begin{equation}
{\cal N}\sim e^{2\pi\sqrt{c\over 6}\sqrt{n_1n_5}}=e^{2\pi\sqrt{n_1n_5}}
\end{equation}
where we have used $c=6$ since we have 4 bosons and 4 fermions. This is much larger than the number of states obtained for the case $k_i=1$.
The leading order entropy for NS1-NS5-P can be obtained by letting the NS1-NS5 be in the bound state
(\ref{nine}) and ignoring other possibilities. We put the $n_p$ units of momentum on this single long component string, getting an effective level of excitation $n_1n_5n_p$ and an entropy
\begin{equation}
S_{micro}=\ln [{\cal N}] = 2\pi \sqrt{n_1n_5n_p}
\label{ten}
\end{equation}
We now observe that the microscopic entropy (\ref{ten}) agrees exactly with the Bekenstein entropy (\ref{threeqp})
\begin{equation}
S_{micro}=S_{Bek}
\label{agree1}
\end{equation}
This is a remarkable result, first obtained by Strominger and Vafa \cite{sv} for a slightly different system. They took the compactification $M_{4,1}\times K3\times S^1$ (i.e. the $T^4$ was replaced by K3). The case with $T^4$ was done soon thereafter by Callan and Maldacena \cite{callanmalda}.
\subsection{Non-extremal holes}\label{none}
Extremal holes offer the most rigorous connection between black hole geometries and the corresponding CFT microstates, since the energy of extremal (i.e. BPS) states depends in a known way on the charges and moduli. But since these holes have the minimum possible energy for the given charge, they do not have any `excess' energy that could be radiated away as Hawking radiation. To see this radiation we have to consider non-extremal holes, and get some microscopic picture to describe their properties.
The extremal hole has three (large) charges and no `excess' energy. We will move away from extremality in small steps, first
keeping two large charges and some nonextremality, then one large charge and some non-extremality, and finally no large charges, a case which includes the Schwarzschild hole. While our control over the microscopics weakens the further we go from extremality, we will see that some general relations emerge from these studies which suggest a qualitative description for all holes.
\subsubsection{The nonextremal gravity solution}\label{then}
We continue to use the compactification $M_{9,1}\rightarrow M_{4,1}\times T^4\times S^1$. We have charges NS1, NS5, P as before, but also extra energy that gives nonextremality. The metric and dilaton are \cite{hms}
\begin{equation}
ds^2_{string}=H_1^{-1}[-dt^2+dy^2+{r_0^2\over r^2}(\cosh \sigma dt+\sinh\sigma dy)^2]
+H_5[{dr^2\over (1-{r_0^2\over r^2})}+r^2d\Omega_3^2]+\sum_{a=1}^4 dz_adz_a
\label{fullmetric}
\end{equation}
\begin{equation}
e^{2\phi}={H_5\over H_1}
\end{equation}
where
\begin{equation}
H_1=1+{r_0^2\sinh^2\alpha\over r^2}, ~~~H_5=1+{r_0^2\sinh^2\gamma\over r^2}
\end{equation}
The integer valued charges carried by this hole are
\begin{eqnarray}
\hat n_1&=&{Vr_0^2\sinh 2\alpha\over 2g^2\alpha'^3}\label{n1}\\
\hat n_5&=&{r_0^2\sinh 2\gamma\over 2\alpha'}\label{n5}\\
\hat n_p&=&{R^2Vr_0^2\sinh 2\sigma\over 2g^2\alpha'^4}
\label{np}
\end{eqnarray}
The energy (i.e. the mass of the black hole) is
\begin{equation}
E={RVr_0^2\over 2g^2\alpha'^4}(\cosh 2\alpha+\cosh 2\gamma+\cosh 2\sigma)
\label{energy}
\end{equation}
The horizon is at $r=r_0$. From the area of this horizon we find the Bekenstein entropy
\begin{equation}
S_{Bek}={A_{10}\over 4 G_{10}}={2\pi RV r_0^3\over g^2\alpha'^4}\cosh\alpha\cosh\gamma\cosh\sigma
\label{bek}
\end{equation}
The Hawking temperature is
\begin{equation}
T_H=[({\partial S\over \partial E})_{\hat n_1, \hat n_5, \hat n_p}]^{-1}={1\over 2\pi r_0 \cosh\alpha\cosh\gamma\cosh\sigma}
\end{equation}
\subsubsection{The extremal limit: `Three large charges, no nonextremality'}\label{thee}
The extremal limit is obtained by taking
\begin{equation}
r_0\rightarrow 0, ~~\alpha\rightarrow\infty, ~~\gamma\rightarrow\infty, ~~\sigma\rightarrow\infty
\end{equation}
while holding fixed
\begin{equation}
r_0^2\sinh^2\alpha=Q_1, ~~r_0^2\sinh^2\gamma=Q_5, ~~r_0^2\sinh^2\sigma=Q_p
\end{equation}
This gives the extremal hole we constructed earlier. For this case we have already checked that the microscopic entropy agrees with the Bekenstein entropy (\ref{agree1}). It can be seen that in this limit the Hawking temperature is $T_H=0$.
\subsubsection{Two large charges $+$ nonextremality}\label{twol}
We now wish to move away from the extremal 3-charge system, towards the neutral Schwarzschild hole. For a first step, we keep two of the charges large; let these be NS1, NS5. We will have a small amount of the third charge P, and a small amount of nonextremality. The relevant limits are
\begin{equation}
r_0, ~r_0e^\sigma ~\ll r_0e^\alpha, ~r_0e^\gamma
\label{2chargelim}
\end{equation}
Thus $\sigma$ is finite but $\alpha,\gamma\gg 1$. We are `close' to the extremal NS1-NS5 state, so we can hope that the excitations will be a small correction. The excitations will be a `dilute' gas among the large number of $\hat n_1, \hat n_5$ charges and a simple model for these excitations might give us the entropy and dynamics of the system.
The BPS mass corresponding to the $\hat n_1$ NS1 branes is
\begin{equation}
M_1^{BPS}={R\hat n_1\over \alpha'}={RVr_0^2\over 2g^2\alpha'^4}\sinh 2\alpha={RVr_0^2\over 2g^2\alpha'^4}(\cosh 2\alpha-e^{-2\alpha})\approx {RVr_0^2\over 2g^2\alpha'^4}\cosh 2\alpha
\end{equation}
The BPS mass corresponding to the $\hat n_5$ NS5 branes is
\begin{equation}
M_5^{BPS}={RV\hat n_5\over g^2 \alpha'^3}={RVr_0^2\over 2g^2\alpha'^4}\sinh 2\gamma={RVr_0^2\over 2g^2\alpha'^4}(\cosh 2\gamma-e^{-2\gamma})\approx {RVr_0^2\over 2g^2\alpha'^4}\cosh 2\gamma
\label{bps5}
\end{equation}
Thus the energy (\ref{energy}) can be written as
\begin{equation}
E= M_1^{BPS}+M_5^{BPS}+\Delta E, ~~~\Delta E\approx {RVr_0^2\over 2g^2\alpha'^4}\cosh 2\sigma
\end{equation}
The momentum is
\begin{equation}
P={\hat n_p\over R}={RVr_0^2\over 2g^2\alpha'^4}\sinh 2\sigma
\end{equation}
Note that
\begin{equation}
\Delta E+P\approx {RVr_0^2\over 2g^2\alpha'^4}e^{2\sigma}, ~~~\Delta E-P\approx {RVr_0^2\over 2g^2\alpha'^4}e^{-2\sigma}
\end{equation}
We wish to compute the entropy (\ref{bek}) in this limit. Note that
\begin{eqnarray}
\hat n_1&=&{Vr_0^2\over 2g^2\alpha'^3}\sinh 2\alpha\approx {Vr_0^2\over g^2\alpha'^3}\cosh^2\alpha\label{n1number}\\
\hat n_5&=&{r_0^2\over 2\alpha'}\sinh 2\gamma\approx {r_0^2\over \alpha'}\cosh^2\gamma\label{n5number}
\end{eqnarray}
We then find
\begin{equation}
S_{Bek}\approx 2\pi\sqrt{\hat n_1\hat n_5}~[~\sqrt{{R\over 2}(\Delta E+P)}+\sqrt{{R\over 2}(\Delta E-P)}~]
\label{sbek2}
\end{equation}
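The approximation (\ref{sbek2}) can be tested numerically against the exact entropy (\ref{bek}) deep in the limit (\ref{2chargelim}). In the sketch below the parameter values are arbitrary illustrations; the two expressions agree up to exponentially small corrections in $\alpha,\gamma$:

```python
import math

def near_extremal_entropies(alpha, gamma, sigma, r0=1.3, R=2.0, V=0.7,
                            g=0.9, ap=1.1):
    """Exact S_Bek of the nonextremal solution vs. the two-large-charge
    approximation 2*pi*sqrt(n1*n5)*[sqrt(R/2(dE+P)) + sqrt(R/2(dE-P))]."""
    c = R * V * r0**2 / (2 * g**2 * ap**4)   # common mass prefactor
    S_exact = 2 * math.pi * R * V * r0**3 / (g**2 * ap**4) \
              * math.cosh(alpha) * math.cosh(gamma) * math.cosh(sigma)
    n1 = V * r0**2 * math.sinh(2 * alpha) / (2 * g**2 * ap**3)
    n5 = r0**2 * math.sinh(2 * gamma) / (2 * ap)
    E = c * (math.cosh(2*alpha) + math.cosh(2*gamma) + math.cosh(2*sigma))
    dE = E - c * math.sinh(2*alpha) - c * math.sinh(2*gamma)  # E - M1 - M5
    P = c * math.sinh(2 * sigma)
    S_approx = 2 * math.pi * math.sqrt(n1 * n5) * (
        math.sqrt(R / 2 * (dE + P)) + math.sqrt(R / 2 * (dE - P)))
    return S_exact, S_approx

S_exact, S_approx = near_extremal_entropies(alpha=9.0, gamma=9.0, sigma=0.6)
assert abs(S_exact - S_approx) < 1e-6 * S_exact
```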
Let us now look at the microscopic description of this nonextremal state. The NS1, NS5 branes generate an `effective string' as before. In the extremal case all the excitations were right movers (R) on this effective string, so that we had the maximal possible momentum charge P for the given energy. For the non-extremal case we will have momentum modes moving in both R,L directions. Let the right movers carry $n_p$ units of momentum and the left movers $\bar n_p$ units of (oppositely directed) momentum. Then (ignoring any interaction between the R,L modes) we will have
\begin{equation}
\Delta E={1\over R}(n_p+\bar n_p), ~~~P={1\over R}(n_p-\bar n_p)
\label{npnpbar}
\end{equation}
Since we have ignored any interactions between the R,L modes the entropy $S_{micro}$ of this `gas' of momentum modes will be the sum of the entropies of the R,L excitations. Thus using (\ref{ten}) we write
\begin{equation}
S_{micro}=2\pi\sqrt{\hat n_1\hat n_5n_p}+2\pi\sqrt{\hat n_1\hat n_5\bar n_p}
\label{2chargeentropy}
\end{equation}
But using (\ref{npnpbar}) in (\ref{2chargeentropy}) we find
\begin{equation}
S_{micro}=2\pi\sqrt{\hat n_1\hat n_5}~[~\sqrt{{R\over 2}(\Delta E+P)}+\sqrt{{R\over 2}(\Delta E-P)}~]
\label{sbek2q}
\end{equation}
Comparing to (\ref{sbek2}) we find that
\begin{equation}
S_{micro}\approx S_{Bek}
\end{equation}
We thus see that a simple model of the microscopic brane bound state describes well the entropy of this near extremal system.
\subsubsection{One large charge $+$ nonextremality}\label{onel}
Continuing on our path to reducing the charges carried by the hole, we now let only one charge, NS5, be large. The relevant limit in (\ref{fullmetric}) is
\begin{equation}
r_0, ~ r_0e^\alpha, ~ r_0e^\sigma ~\ll~ r_0e^\gamma
\end{equation}
The BPS mass for the NS5 branes is given by (\ref{bps5}), and we write
\begin{equation}
E=M_5^{BPS}+\Delta E, ~~~\Delta E\approx{RVr_0^2\over 2g^2\alpha'^4}(\cosh 2\alpha+\cosh 2\sigma)
\label{extraonecharge}
\end{equation}
Using (\ref{n5number}) in the Bekenstein entropy (\ref{bek}) gives
\begin{equation}
S_{Bek}\approx {2\pi RVr_0^2\over g^2\alpha'^{7\over 2}}\sqrt{\hat n_5}\cosh\alpha\cosh\sigma
\label{onechargebek}
\end{equation}
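The same kind of numerical check works here: deep in the one-large-charge limit, (\ref{onechargebek}) agrees with the exact entropy (\ref{bek}) up to exponentially small corrections in $\gamma$. The parameter values below are arbitrary illustrations:

```python
import math

def one_charge_entropies(alpha, gamma, sigma, r0=1.3, R=2.0, V=0.7,
                         g=0.9, ap=1.1):
    """Exact S_Bek vs. the approximation with only n5 treated as large."""
    S_exact = 2 * math.pi * R * V * r0**3 / (g**2 * ap**4) \
              * math.cosh(alpha) * math.cosh(gamma) * math.cosh(sigma)
    n5 = r0**2 * math.sinh(2 * gamma) / (2 * ap)   # exact integer charge
    S_approx = 2 * math.pi * R * V * r0**2 / (g**2 * ap**3.5) \
               * math.sqrt(n5) * math.cosh(alpha) * math.cosh(sigma)
    return S_exact, S_approx

S_exact, S_approx = one_charge_entropies(alpha=1.2, gamma=10.0, sigma=0.8)
assert abs(S_exact - S_approx) < 1e-6 * S_exact
```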
Let us now see how this entropy may be obtained from a microscopic model. When the NS1, NS5 charges were large then the low energy excitations were given by $P\bar P$ pairs; i.e. momentum modes and anti-momentum modes running along the effective string formed by the NS1-NS5 bound state. Now that only the NS5 charge is large, we expect that the low energy excitations will have $P\bar P$ as well as $NS1\overline{NS1}$ pairs. (Since all charges can be permuted under duality there is complete symmetry between the P and NS1 charges this time.)
Recall that when an NS1 was bound to NS5 branes then we had postulated that the NS1 could vibrate only inside the plane of the NS5. This gives the NS1 4 transverse bosonic vibrations, and by supersymmetry 4 corresponding fermionic ones, giving a total central charge $c=4+{4\over 2}=6$. Thus the NS1 inside the NS5 branes is not a critical string. This is not a contradiction, since we do not expect it to be a fundamental string; rather it is an `effective string' whose detailed dynamics we would have to know in order to find its low-lying excitations exactly. But if we assume that to leading order the string is a `free string' then we expect that excitations above the low lying ones will be given by a relation like the one for the fundamental string
\begin{equation}
m^2=(2\pi R \hat n_1 T+{\hat n_p\over R})^2+8\pi T N_L=(2\pi R \hat n_1 T-{\hat n_p\over R})^2+8\pi T N_R
\label{massformula}
\end{equation}
(We have ignored the shift due to the vacuum energy since this is a small effect which depends on the details of the `effective string'; it will be subleading in what follows.) Here $\hat n_1$ is the net winding number around the $S^1$ with radius $R$, and $\hat n_p$ is the net number of units of momentum along $S^1$.
What is the tension $T$ of this `effective string'?
We have argued that when an NS1 brane binds to a collection of NS5 branes then the NS1 brane becomes `fractionated' -- the NS1 breaks up into $\hat n_5$ fractional NS1 branes. Thus the tension of the `effective string' must be
\begin{equation}
T={1\over \hat n_5} T_{NS1}={1\over \hat n_5}{1\over 2\pi\alpha'}
\label{tension}
\end{equation}
We are now ready to compute the microscopic entropy. For excitation levels $N_L, N_R \gg 1$ the degeneracy of string states grows like
\begin{equation}
{\cal N}\sim e^{2\pi\sqrt{{c\over 6}N_R}}~e^{2\pi\sqrt{{c\over 6}N_L}}
\end{equation}
Setting $c=6$ we find
\begin{equation}
S_{micro}=\ln {\cal N}=2\pi (\sqrt{N_L}+\sqrt{N_R})
\label{onechargemicro}
\end{equation}
In (\ref{massformula}) the charges $\hat n_1, \hat n_p$ are given by (\ref{n1}),(\ref{np}).
We set the mass $m$ of the `effective string' equal to the excitation energy $\Delta E$ (\ref{extraonecharge}) of the NS5 brane
\begin{equation}
m={RVr_0^2\over 2g^2\alpha'^4}(\cosh 2\alpha+\cosh 2\sigma)
\end{equation}
We then find from (\ref{massformula}) and using (\ref{tension})
\begin{eqnarray}
N_L&=&({RVr_0^2\over 2g^2\alpha'^{7\over 2}})^2\hat n_5\cosh^2(\alpha-\sigma)\nonumber \\
N_R&=&({RVr_0^2\over 2g^2\alpha'^{7\over 2}})^2\hat n_5\cosh^2(\alpha+\sigma)
\end{eqnarray}
Substituting in (\ref{onechargemicro}) we find
\begin{equation}
S_{micro}=2\pi (\sqrt{N_L}+\sqrt{N_R})={2\pi RVr_0^2\over g^2\alpha'^{7\over 2}}\sqrt{\hat n_5}\cosh\alpha\cosh\sigma
\end{equation}
Comparing to (\ref{onechargebek}) we again find \cite{malda5}
\begin{equation}
S_{micro}\approx S_{Bek}
\end{equation}
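The last step of this algebra rests on the identity $\cosh(\alpha-\sigma)+\cosh(\alpha+\sigma)=2\cosh\alpha\cosh\sigma$. A quick numerical sanity check (illustrative parameter values, with the moduli and $\alpha'$ set to 1):

```python
import math

# Illustrative values; ap stands for alpha', and R = V = g = ap = 1
r0, alpha, sigma, n5hat = 0.3, 1.1, 0.7, 50.0
R = V = g = ap = 1.0

A = R*V*r0**2 / (2*g**2*ap**3.5)   # common prefactor R V r0^2 / (2 g^2 alpha'^{7/2})

# Excitation levels found from the mass formula
NL = A**2 * n5hat * math.cosh(alpha - sigma)**2
NR = A**2 * n5hat * math.cosh(alpha + sigma)**2

S_micro = 2*math.pi*(math.sqrt(NL) + math.sqrt(NR))

# Bekenstein entropy, eq. (onechargebek)
S_bek = 2*math.pi*R*V*r0**2/(g**2*ap**3.5) * math.sqrt(n5hat) * math.cosh(alpha)*math.cosh(sigma)

assert abs(S_micro - S_bek) < 1e-9 * S_bek
```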
\subsubsection{No large charges}\label{nola}
This case includes in particular the Schwarzschild hole, where we set all charges to zero and have only `nonextremality'.
The system cannot be considered a near-extremal perturbation of some extremal brane system, so we do not have a simple microscopic model based on the dynamics of the corresponding branes. But we will extract some general principles from the computations done above and then extrapolate them to the general nonextremal case.
First, we have come across the idea of `fractionation': When we put momentum modes on a string of winding number $n_1$ then this momentum comes in units of ${1\over n_1}$ times the `full' momentum unit ${1\over R}$. Similarly, NS1 branes bound to NS5 branes became `fractional'.
Second, we have observed that if we have large charges NS1-NS5, then nonextremal excitations are carried by (fractional) $P\bar P$ pairs. If we have only large NS5 charge then the nonextremality is carried by a (fractional) NS1 living in the NS5 worldvolume. This effective string could have net winding and momentum around the $S^1$, but consider for simplicity the case $\hat n_1=\hat n_p=0$. Then the `effective string' is a `wiggling loop' on the NS5 worldvolume, with no net winding or momentum. The loop can go up and down in the $y$ direction (the direction of the $S^1$). The part which goes up can be called a part of a winding mode, and the part which comes down can be called a part of an antiwinding mode. Similarly, the `wiggles' on the string carry both positive and negative momentum along the $S^1$. So very roughly we can say that the vibrations of the `effective string'
exhibit $P\bar P$ as well as $NS1\overline{NS1}$ pairs.
Thus in the general case of no large charges we expect that we will have $P\bar P$ pairs, $NS1\overline{NS1}$ pairs and $NS5\overline{NS5}$ pairs. Further, each kind of brane will be `fractionated' by the other branes in the system, and the resulting degrees of freedom will give rise to the entropy of the hole.
The geometry is characterized by the integer charges $\hat n_1, \hat n_5, \hat n_p$, the energy $E$, and also three moduli: $V,R,g$ which arise from the volume of the $T^4$, the length of the $S^1$ and the strength of the coupling.
The entropy $S$ is a function of these 7 parameters. Let us assume that the energy of the different branes and antibranes just adds together; i.e., there is no interaction energy. There is certainly no clear basis for this assumption since we are not in a BPS situation, but we make the assumption anyway and see where it leads. If we had just NS1 branes then we can find the energy by taking the extremal limit $r_0\rightarrow 0, \alpha\rightarrow\infty$ in (\ref{n1}),(\ref{energy}) and get
\begin{equation}
E_{NS1}={R\over \alpha'}n_1
\end{equation}
In (\ref{energy}) we write the energy contributed by the NS1 branes and antibranes as the sum of the brane and antibrane contributions
\begin{equation}
{RVr_0^2\over 2g^2\alpha'^4}~\cosh 2\alpha = {R\over \alpha'}(n_1+\bar n_1)
\label{sum}
\end{equation}
The net charge is given by (\ref{n1})
\begin{equation}
\hat n_1={Vr_0^2\over 2g^2\alpha'^3}~\sinh 2\alpha=n_1-\bar n_1
\label{diff}
\end{equation}
The solution to (\ref{sum}),(\ref{diff}) is
\begin{equation}
n_1={Vr_0^2\over 4g^2\alpha'^3}~e^{2\alpha}, ~~~~\bar n_1={Vr_0^2\over 4g^2\alpha'^3}~e^{-2\alpha}
\label{n1qq}
\end{equation}
Similarly, we find
\begin{equation}
n_5={r_0^2\over 4\alpha'}~e^{2\gamma}, ~~~~\bar n_5={r_0^2\over 4\alpha'}~e^{-2\gamma}
\label{n5qq}
\end{equation}
\begin{equation}
n_p={R^2Vr_0^2\over 4g^2\alpha'^4}~e^{2\sigma}, ~~~~\bar n_p={R^2Vr_0^2\over 4g^2\alpha'^4}~e^{-2\sigma}
\label{npqq}
\end{equation}
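It is straightforward to check this factorization numerically: each factor collapses as $\sqrt{n_i}+\sqrt{\bar n_i}=2\sqrt{B_i}\cosh x_i$, where $B_i$ is the prefactor in (\ref{n1qq})--(\ref{npqq}), so the product takes a hyperbolic-cosine form. The sketch below (illustrative values, with the moduli and $\alpha'$ set to 1) also checks that for $e^\gamma\gg 1$ the product form goes over to the one-large-charge entropy (\ref{onechargebek}).

```python
import math

# Illustrative values; the moduli R, V, g and alpha' (ap) are set to 1
R = V = g = ap = 1.0
r0, alpha, gamma, sigma = 0.2, 0.9, 8.0, 1.3   # gamma large: NS5 charge dominates

# Prefactors B_i in n_i = B_i e^{2 x_i}, nbar_i = B_i e^{-2 x_i}, eqs. (n1qq)-(npqq)
B1 = V*r0**2/(4*g**2*ap**3)
B5 = r0**2/(4*ap)
Bp = R**2*V*r0**2/(4*g**2*ap**4)

def factor(B, x):
    """sqrt(n_i) + sqrt(nbar_i) = 2 sqrt(B_i) cosh(x_i)"""
    return math.sqrt(B*math.exp(2*x)) + math.sqrt(B*math.exp(-2*x))

S_fact = 2*math.pi * factor(B1, alpha) * factor(B5, gamma) * factor(Bp, sigma)

# Collapsed hyperbolic-cosine form of the same product
S_cosh = 2*math.pi*R*V*r0**3/(g**2*ap**4) * math.cosh(alpha)*math.cosh(gamma)*math.cosh(sigma)
assert abs(S_fact - S_cosh) < 1e-9 * S_cosh

# For e^gamma >> 1 this goes over to the one-large-charge entropy (onechargebek)
n5hat = r0**2*math.sinh(2*gamma)/(2*ap)
S_one = 2*math.pi*R*V*r0**2/(g**2*ap**3.5) * math.sqrt(n5hat)*math.cosh(alpha)*math.cosh(sigma)
assert abs(S_fact - S_one) < 1e-6 * S_one
```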
We now observe that in terms of the $n_i, \bar n_i$ the entropy (\ref{bek}) takes a remarkably simple and suggestive form \cite{hms}
\begin{equation}
S_{Bek}=2\pi(\sqrt{n_1}+\sqrt{\bar n_1})(\sqrt{n_5}+\sqrt{\bar n_5})(\sqrt{n_p}+\sqrt{\bar n_p})
\label{nonexentropy}
\end{equation}
In the extremal limit (no antibranes) this reduces to (\ref{ten}) and in the near-extremal limit with two large charges
it reduces to (\ref{2chargeentropy}). We have seen that duality permutes the three charges, and we note that
(\ref{nonexentropy}) is invariant under this permutation. It is also interesting that if we fix the energy $E$ (\ref{energy}) and the charges $\hat n_i$ and require that the expression (\ref{nonexentropy}) be maximized, then we get the relations (\ref{n1qq}),(\ref{n5qq}),(\ref{npqq}).
While we have no good justification for ignoring the interactions between branes and antibranes the simple form of (\ref{nonexentropy}) does suggest that for a qualitative understanding of how nonextremal holes behave we must think of the dynamics of fractional brane-antibrane pairs. If we put energy into making a neutral black hole then we cannot think of this energy as the kinetic energy of a `gas of gravitons'; we know that such a gas would have a much smaller entropy than $S_{Bek}$. Rather we must use the energy to create fractional brane-antibrane pairs, and we expect that the vast degeneracy of such states will account for the entropy. Some recent studies have also explored brane-antibrane excitations from slightly different points of view \cite{fractional}.
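The maximization claim can be tested numerically. In the sketch below we take the brane masses to be $\mu_1=R/\alpha'$, $\mu_5=RV/g^2\alpha'^3$, $\mu_p=1/R$ (read off by matching the BPS mass terms in (\ref{energy}); these identifications are our assumption), start from the claimed optimum (\ref{n1qq})--(\ref{npqq}), and verify that shifting energy between pair species at fixed $\hat n_i$ and fixed total energy only lowers (\ref{nonexentropy}).

```python
import math

# Illustrative moduli and parameters; ap stands for alpha'
R = V = g = ap = 1.0
r0, alpha, gamma, sigma = 0.5, 0.8, 1.2, 0.6

# Claimed optimum, eqs. (n1qq)-(npqq): n_i = B_i e^{2 x_i}, nbar_i = B_i e^{-2 x_i}
B = [V*r0**2/(4*g**2*ap**3), r0**2/(4*ap), R**2*V*r0**2/(4*g**2*ap**4)]
x = [alpha, gamma, sigma]
nbar = [b*math.exp(-2*xi) for b, xi in zip(B, x)]
nhat = [b*(math.exp(2*xi) - math.exp(-2*xi)) for b, xi in zip(B, x)]

# Assumed masses of a single brane of each kind, matched to the BPS masses
mu = [R/ap, R*V/(g**2*ap**3), 1.0/R]

def entropy(nb):
    """Product ansatz (nonexentropy) at fixed net charges nhat_i."""
    S = 2*math.pi
    for nh, nbi in zip(nhat, nb):
        S *= math.sqrt(nh + nbi) + math.sqrt(nbi)
    return S

S0 = entropy(nbar)

# Move a little energy from pairs of kind j to pairs of kind i; this keeps
# the nhat_i and the total energy fixed, and should only lower the entropy
eps = 1e-3
for i in range(3):
    for j in range(3):
        if i != j:
            nb = list(nbar)
            nb[i] += eps/(2*mu[i])
            nb[j] -= eps/(2*mu[j])
            assert entropy(nb) < S0
```

At the stationary point one finds $\mu_i\sqrt{n_i\bar n_i}$ equal for all three species, which is exactly what (\ref{n1qq})--(\ref{npqq}) give.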
\subsection{The 4-charge hole}\label{the4}
So far we had compactified spacetime down to 4+1 noncompact dimensions. Let us compactify another circle, getting
$M_{9,1}\rightarrow M_{3,1}\times T^4\times S^1\times \tilde S^1$. As before we wrap NS1 branes on $S^1$, NS5 branes on $T^4\times S^1$ and put momentum along $S^1$. But we can also make KK monopoles which have $\tilde S^1$ as the nontrivially fibred circle and which are uniform along $T^4\times S^1$. With these charges we get a 4-charge black hole in 3+1 dimensions that is very similar to the 3-charge case in 4+1 dimensions. This time it is easier to write the Einstein metric after dimensionally reducing to the 3+1 noncompact directions since the KK-monopoles give topological twisting of the compact circle $\tilde S^1$ over the $S^2$ of the noncompact space. This metric is
\cite{4charge}
\begin{equation}
ds^2_{Einstein}=-f^{-{1\over 2}} (1-{r_0\over r})dt^2+f^{1\over 2} [{dr^2\over (1-{r_0\over r})}+r^2(d\theta^2+\sin^2\theta d\phi^2)]
\label{4chargemetric}
\end{equation}
where
\begin{equation}
f=(1+{r_0\sinh^2\alpha\over r})(1+{r_0\sinh^2\gamma\over r})(1+{r_0\sinh^2\sigma\over r})(1+{r_0\sinh^2\lambda\over r})
\end{equation}
The integer valued charges carried by this hole are
\begin{eqnarray}
\hat n_1&=&{V\tilde R r_0\sinh 2\alpha\over g^2\alpha'^3}\label{n14}\\
\hat n_5&=&{\tilde R r_0\sinh 2\gamma\over \alpha'}\label{n54}\\
\hat n_p&=&{R^2\tilde RVr_0\sinh 2\sigma\over g^2\alpha'^4}\label{np4}\\
\hat n_{KK}&=&{ r_0\sinh 2\lambda\over \tilde R}\label{nkk}
\end{eqnarray}
The energy (i.e. the mass of the black hole) is
\begin{equation}
E={R\tilde RVr_0\over g^2\alpha'^4}(\cosh 2\alpha+\cosh 2\gamma+\cosh 2\sigma+\cosh 2\lambda)
\label{energy4}
\end{equation}
The horizon is at $r=r_0$. From the area of this horizon we find the Bekenstein entropy
\begin{equation}
S_{Bek}={A_{4}\over 4 G_{4}}={8\pi R\tilde RV r_0^2\over g^2\alpha'^4}\cosh\alpha\cosh\gamma\cosh\sigma\cosh\lambda
\label{bek4}
\end{equation}
In the microscopic picture the NS1, NS5, KK bind together to give an `effective string' of length
\begin{equation}
L_T=2\pi R \hat n_1\hat n_5\hat n_{KK}
\end{equation}
The effective number of bosonic degrees of freedom is again $6$, so the near extremal dynamics is very similar to the NS1-NS5-P system.
Suppose we are studying the case with three large charges NS1, NS5, KK and a small nonextremality.
We will then add $n_p, \bar n_p$ units of momentum along the R,L directions and find a microscopic entropy, which turns out to again agree with the Bekenstein entropy found from the corresponding geometry
\begin{equation}
S_{micro}=2\pi\sqrt{\hat n_1\hat n_5\hat n_{KK}n_p}+2\pi\sqrt{\hat n_1\hat n_5\hat n_{KK}\bar n_p}\approx S_{Bek}
\label{4chargeen}
\end{equation}
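One can check numerically that the 4-charge Bekenstein entropy (\ref{bek4}) again takes the factored brane/antibrane form $2\pi\prod_i(\sqrt{n_i}+\sqrt{\bar n_i})$, with $n_i=B_ie^{2x_i}$, $\bar n_i=B_ie^{-2x_i}$ and $B_i$ equal to half the coefficient of $\sinh 2x_i$ in (\ref{n14})--(\ref{nkk}). (This decomposition is the natural analogue of (\ref{n1qq})--(\ref{npqq}); we state it here as an assumption.) A sketch with illustrative values:

```python
import math

# Illustrative values; ap stands for alpha', Rt for the second circle radius
R = Rt = V = g = ap = 1.0
r0 = 0.4
x = [0.9, 1.4, 0.6, 1.1]          # alpha, gamma, sigma, lambda

# Half the coefficient of sinh(2 x_i) in (n14)-(nkk)
B = [V*Rt*r0/(2*g**2*ap**3),      # NS1
     Rt*r0/(2*ap),                # NS5
     R**2*Rt*V*r0/(2*g**2*ap**4), # P
     r0/(2*Rt)]                   # KK

S_fact = 2*math.pi
for b, xi in zip(B, x):
    S_fact *= math.sqrt(b*math.exp(2*xi)) + math.sqrt(b*math.exp(-2*xi))

# Bekenstein entropy, eq. (bek4)
S_bek = 8*math.pi*R*Rt*V*r0**2/(g**2*ap**4)
for xi in x:
    S_bek *= math.cosh(xi)

# The net charges reproduce (n14): nhat_1 = V Rt r0 sinh(2 alpha)/(g^2 ap^3)
n1hat = B[0]*(math.exp(2*x[0]) - math.exp(-2*x[0]))
assert abs(n1hat - V*Rt*r0*math.sinh(2*x[0])/(g**2*ap**3)) < 1e-12

assert abs(S_fact - S_bek) < 1e-9 * S_bek
```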
\subsubsection{Summary}\label{summ2}
We have seen that simple models of extremal and near-extremal holes give good results for their entropy. A naive extrapolation to far from extremal holes works surprisingly well, and suggests that these ideas may be at least qualitatively valid for all holes. A key notion is `fractionation': Branes in a bound state `break up' other branes into fractional units, and the count of the different ways of grouping the resulting bits gives the entropy of the system. Note that if the large charges are NS1-NS5, then we get most entropy by putting the nonextremal energy into the {\it third} kind of charge; i.e., creating $P\bar P$ pairs. When we had only NS5 charge, the entropy went to creating $NS1\overline{NS1}$ and $P\bar P$ pairs. More generally, suppose we add an energy $\Delta E$ to the black hole. Playing with the ansatz (\ref{nonexentropy}) and the expression (\ref{energy}) for the energy we find that the energy $\Delta E_i$ going towards creating pairs of the $i$th kind of charge satisfies $\Delta E_i\propto 1/E_i$. Thus it is entropically advantageous to create the kind of branes that we do {\it not} have.
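The scaling $\Delta E_i\propto 1/E_i$ is easy to verify from (\ref{n1qq})--(\ref{npqq}): with $\Delta E_i=2\mu_i\bar n_i$ and $E_i=\mu_i\hat n_i$ one finds $\Delta E_i E_i=2\mu_i^2\hat n_i\bar n_i\rightarrow 2(\mu_iB_i)^2$, the same constant for each species when all charges are large. A numerical sketch (the single-brane masses $\mu_i$ are read off by matching the BPS mass terms in (\ref{energy}), which is our assumption; the other values are illustrative):

```python
import math

# Illustrative values; ap stands for alpha'
R = V = g = ap = 1.0
r0 = 0.3
x  = [2.5, 3.0, 2.2]              # alpha, gamma, sigma: all charges large
B  = [V*r0**2/(4*g**2*ap**3), r0**2/(4*ap), R**2*V*r0**2/(4*g**2*ap**4)]
mu = [R/ap, R*V/(g**2*ap**3), 1.0/R]   # assumed masses of one NS1, NS5, P unit

products = []
for b, xi, m in zip(B, x, mu):
    n, nbar = b*math.exp(2*xi), b*math.exp(-2*xi)
    E_i  = m*(n - nbar)          # BPS energy of the net charge
    dE_i = 2*m*nbar              # extra energy stored in brane-antibrane pairs
    products.append(dE_i * E_i)

# dE_i * E_i -> 2 (mu_i B_i)^2, the same for each species i
for p in products[1:]:
    assert abs(p - products[0]) < 0.01 * products[0]
```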
\section{Absorption and emission from black holes}
\label{abso}\setcounter{equation}{0}
We have made black holes in string theory, and found that the microscopic physics of branes reproduces the Bekenstein entropy for near extremal holes. What about the dynamics of black holes? Can we compute the probability of the string state to absorb or emit quanta, and then compare this to the probability for the black hole to absorb infalling quanta or emit Hawking radiation?
\subsection{The Hawking computation}\label{theh}
Let us first look at the computation on the gravity side. Consider a spherical black hole with horizon area $A$. Suppose that we have a minimally coupled scalar in the theory
\begin{equation}
\square \phi=0
\label{wave}
\end{equation}
We wish to find the cross section for absorption of such scalars into the black hole. To do this we must solve the wave equation
(\ref{wave}), with the following boundary conditions. We have a plane wave incident from infinity. We put boundary conditions at the horizon which say that quanta are falling in but not coming out. Some part of the incident plane wave will be reflected from the metric around the hole and give rise to an outgoing waveform at infinity. The rest goes to the horizon, and represents the part that is absorbed. From this absorbed part we deduce the absorption cross section $\sigma$.
In general this is a hard calculation to do, but it becomes simple in the limit where the wavelength of the incident wave $\lambda$ becomes much larger than the radius of the hole. In this limit we get a universal answer \cite{unruh, dmcompare, dgm}
\begin{equation}
\sigma=A
\label{area}
\end{equation}
To compute the Hawking emission rate we must again study the wave equation in the black hole geometry. But we may just use the result \cite{hawking,hh} that this radiation is thermal, and thus conclude that the emission rate per unit time $\Gamma$ is related to the absorption cross section $\sigma$ by the usual thermodynamic relation
\begin{equation}
\Gamma_{Hawking}=\sigma {d^dk\over (2\pi)^d}{1\over e^{\omega\over T_H}-1}
\label{radiation}
\end{equation}
Here $d$ is the number of spatial directions in which the radiation is emitted, $T_H$ is the temperature of the hole and $\Gamma$ gives the number of quanta emitted in the given phase space range per unit time.
\subsection{The microscopic computation}\label{them2}
We would now like to see if the microscopic dynamics of branes can reproduce (\ref{radiation}). For the NS1-NS5-P hole we had the following microscopic brane picture. The NS1 branes could move inside the NS5 branes; calling the $T^4$ directions $z_1\dots z_4$ we find that the NS1 has allowed transverse vibrations in the four directions $z_1\dots z_4$. This gives 4 bosonic degrees of freedom. There are also 4 fermionic degrees of freedom, but let us ignore these for now.
If the transverse vibrations are all traveling waves moving in the same direction along the string then we have a BPS state with some momentum P. If we have both right and left moving waves, then we have a nonextremal state, which can decay by emitting energy. The mechanism of this decay is simple: A left moving vibration and a right moving vibration collide and leave the brane system as a massless quantum of the bulk supergravity theory. Conversely, a graviton incident on the brane bound state can convert its energy into left and right traveling waves on the string. This is of course just the same way that a guitar string emits and absorbs sound waves from the surrounding air. Let us make a simple model for the `effective string' and see what emission rates we get. (In the following we work with branes and gravitons moving in a flat background;
the cross section for absorption into these flat space branes will be described in a {\it dual} way by absorption into the black hole geometry \cite{maldacena}.)
In the bulk the 10-D Einstein action is
\begin{equation}
S={1\over 2\kappa^2}\int d^{10} x \sqrt{-g} [R+\dots]
\end{equation}
We have compactified on $T^4\times S^1$. Let us write the metric components in the $T^4$ as
\begin{equation}
g_{z_iz_j}=\delta_{ij}+2\kappa \tilde h_{ij}
\end{equation}
Then the action up to quadratic order gives
\begin{equation}
S\rightarrow \int d^{10} x{1\over 2}\partial_\mu \tilde h_{ij}\partial^\mu \tilde h_{ij}
\end{equation}
where the derivatives $\partial_\mu$ are nonvanishing only in the 4+1 noncompact directions. Thus these components of the metric behave as scalars satisfying (\ref{wave}) in the noncompact space.
Now look at the brane bound state. The `effective string' stretches along the $S^1$ direction $y$. Let us model its dynamics by a Dirac-Born-Infeld type action
\begin{equation}
S_{DBI}=-T\int d^2\xi \sqrt{-\det[G_{ab}]}
\end{equation}
where
\begin{equation}
G_{ab}=g_{\mu\nu}{\partial X^\mu\over \partial\xi^a}{\partial X^\nu\over \partial\xi^b}
\label{dbi}
\end{equation}
is the metric induced on the worldvolume of the effective string. We work in the static gauge
\begin{equation}
X^0=t=\xi^0, ~~~X^1=y=\xi^1
\end{equation}
In this gauge the bosonic dynamical variables are the transverse vibrations $X^i$.
Expanding (\ref{dbi}) we find
\begin{equation}
S_{DBI}\rightarrow T\int d^2\xi {1\over 2} (\delta_{ij}+ 2\kappa \tilde h_{ij})\partial_aX^i\partial^aX^j
\end{equation}
where the index $a=0,1$ runs over the worldsheet coordinates $\xi^a$.
We can write $\sqrt{T}X^i\equiv \tilde X^i$ to get
\begin{equation}
S_{DBI}\rightarrow \int d^2\xi {1\over 2} (\delta_{ij}+ 2\kappa \tilde h_{ij})\partial_a\tilde X^i\partial^a\tilde X^j
\label{inter}
\end{equation}
Thus the total action bulk + brane is, to this order
\begin{equation}
S\rightarrow \int d^{10} x{1\over 2}\partial_\mu \tilde h_{ij}\partial^\mu \tilde h_{ij} + \int d^2\xi {1\over 2} (\delta_{ij}+ 2\kappa \tilde h_{ij})\partial_a\tilde X^i\partial^a\tilde X^j
\label{actiondbi}
\end{equation}
Note that the tension $T$ has disappeared from these terms in the action. This is a good thing, since it means that at this stage we don't have to worry about the physical principles that determine this tension. (For more complicated absorption processes arising from higher order terms in the action we do need the value of $T$ \cite{mathurang}.)
The effective string can only vibrate inside the $T^4$, so we have only 4 possibilities for the indices of $\tilde h_{ij}$. Let us take $\tilde h_{12}$. The kinetic term for the variable $\tilde h_{12}$ is $\int d^{10} x {1\over 2} \partial_\mu (\sqrt{2} \tilde h_{12})\partial^\mu (\sqrt{2} \tilde h_{12})$, where the fact that $\tilde h_{12}$, $\tilde h_{21}$ are the same variable gives an extra factor of $2$. Thus the interaction term in (\ref{actiondbi}) is
\begin{equation}
S_{int}=\int d^2\xi\sqrt{2}\,\kappa\, \partial_a \tilde X^1\partial^a \tilde X^2~ (\sqrt{2}\,\tilde h_{12})
\label{inter1}
\end{equation}
When we quantize this system then we get field operators for the vibrations $\tilde X^1, \tilde X^2$ and the graviton $\tilde h_{12}$,
which we can expand into Fourier modes in the usual way
\begin{eqnarray}
\hat{\tilde X^1}&=&\sum_{ p} {1\over \sqrt{2|p|}\sqrt{L_T}}[\hat a_{1,p} e^{i p x-i|p| t}+{\hat a_{1,p}}^\dagger e^{-i px+i|p| t}]\nonumber \\
\hat{\tilde X^2}&=&\sum_{ p} {1\over \sqrt{2|p|}\sqrt{L_T}}[\hat a_{2,p} e^{i p x-i|p| t}+{\hat a_{2,p}}^\dagger e^{-i p x+i|p| t}]\nonumber \\
\sqrt{2}{\hat{\tilde h}_{12}}&=&\sum_{\vec k} {1\over \sqrt{2\omega}\sqrt{V_9}}[\hat a_{h,\vec k} e^{i\vec k\cdot \vec x-i\omega t}+{\hat a_{h,\vec k}}^\dagger e^{-i\vec k\cdot \vec x+i\omega t}],~~~~~~(\omega=|\vec k|)
\end{eqnarray}
Note that the fields $\hat {\tilde X^1}, \hat {\tilde X^2}$ live on the effective string which gives a 1-dimensional box of length $L_T$. The graviton field $ {\hat {\tilde h}_{12}}$ lives in the bulk. We regulate the volume of the bulk by letting the noncompact directions be in a box of volume $V_{nc}$. The total volume of the 9-D space is then
\begin{equation}
V_9=V_{nc}(2\pi R)((2\pi)^4 V)
\label{volumes}
\end{equation}
Since the fields involved in the interaction live on different box volumes we give the steps in the following field theory computation explicitly, rather than referring the reader to a standard set of rules for computing cross sections.
The interaction (\ref{inter1}) includes the contribution
\begin{equation}
S_{int}\rightarrow \sqrt{2}\,\kappa \sum_{p, \vec k}~({1\over \sqrt{2|p|}\sqrt{L_T}})({1\over \sqrt{2|p|}\sqrt{L_T}})({1\over \sqrt{2\omega}\sqrt{V_9}})L_T(2|p|^2)~\hat a_{1,p}\hat a_{2,-p}\hat a_{h,\vec k}^\dagger~e^{-2i|p|t+i\omega t}
\end{equation}
This gives the process we seek: The vibration modes $\tilde X^1, \tilde X^2$ are annihilated and the graviton $\tilde h_{12}$ created.
We are looking at low energy radiation, with wavelength much larger than the compact directions, so the momentum of the outgoing graviton is purely in the noncompact directions. This sets $p^1_1=-p^1_2=p$.
The factor $(2|p|^2)$ comes from the derivatives in (\ref{inter1})
\begin{equation}
p_{1a}p_2^a=-p_1^0p_2^0+p_1^1p_2^1=-2|p|^2
\end{equation}
Focus on a particular process with a given $p>0$ and a given $\vec k$ for the graviton. (Choosing $p>0$ implies that we have taken the right moving excitation to be $\tilde X^1$.) The amplitude for the decay per unit time is then
\begin{equation}
R_h=\sqrt{2}\kappa |p|{1\over \sqrt{2\omega}\sqrt{V_9}}~e^{-2i|p|t+i\omega t}~=~|R_h|~e^{-2i|p|t+i\omega t}
\end{equation}
We integrate over a large time $\Delta T$ and then take the absolute value squared to get the probability of this emission process to occur. Recall that
\begin{equation}
\Big |\int_0^{\Delta T} dt\,e^{-2i|p|t+i\omega t}~\Big |^2 ~\rightarrow~2\pi \Delta T \delta (\omega-2|p|)
\end{equation}
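This regularization can be checked numerically: writing $\delta=\omega-2|p|$, the quantity $|\int_0^{\Delta T}dt\,e^{i\delta t}|^2=4\sin^2(\delta\,\Delta T/2)/\delta^2$ integrates to $2\pi\Delta T$ over $\delta$ and concentrates near $\delta=0$. A sketch:

```python
import math, cmath

dT = 200.0   # large time interval

def I2(delta):
    """|integral_0^dT exp(i*delta*t) dt|^2, with delta = omega - 2|p|."""
    if abs(delta) < 1e-12:
        return dT**2
    return abs((cmath.exp(1j*delta*dT) - 1)/(1j*delta))**2

# Riemann sum over delta in [-40, 40): should approach 2*pi*dT
h = 0.001
total = sum(I2(-40 + k*h)*h for k in range(int(80/h)))
assert abs(total - 2*math.pi*dT) < 0.01 * 2*math.pi*dT
```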
We must also multiply by the occupation numbers of the initial states. We then find
\begin{equation}
Prob[\tilde X^1(p),\tilde X^2(-p)\rightarrow \tilde h_{12}]=|R_h|^2\rho_R(p)\rho_L(p) 2\pi \Delta T\delta (\omega-2|p|)
\end{equation}
where $\rho_R,\rho_L$ give the occupation numbers of the right moving and left moving excitations on the effective string.
We sum over the value of $p$ in the initial state, and approximate the sum by an integral
\begin{equation}
\sum_{p}~~\rightarrow~~ {L_T\over 2\pi}\int_0^\infty dp
\end{equation}
This gives
\begin{eqnarray}
Prob[\tilde X^1,\tilde X^2\rightarrow \tilde h_{12}]&=&{L_T\over 2\pi}\int_0^\infty dp |R_h|^2 \rho_R(p)\rho_L(p) 2\pi \Delta T\delta (\omega-2|p|)\nonumber \\
&=&{L_T \kappa^2\omega\over 8 V_9} \rho_R({\omega\over 2})\rho_L({\omega\over 2}) \Delta T
\end{eqnarray}
We have
\begin{equation}
\rho_R({\omega\over 2})={1\over e^{{\omega\over 2T_R}}-1}, ~~~\rho_L({\omega\over 2})={1\over e^{{\omega\over 2T_L}}-1}
\end{equation}
where
\begin{equation}
T_R=\sqrt{12 E_R\over L_T \pi (f_B+{1\over 2} f_F)}=\sqrt{2 n_p\over \pi R L_T}, ~~~T_L=\sqrt{2\bar n_p\over \pi R L_T}
\end{equation}
and we have set $E_R={ n_p\over R}, ~E_L={\bar n_p\over R}$, $f_B=f_F=4$.
There are two additional sums to be performed. First we note that the final state graviton $\tilde h_{12}$ can be created by the interaction of a $\tilde X^1$ right mover and a $\tilde X^2$ left mover, or by the interaction of a $\tilde X^2$ right mover and a $\tilde X^1$ left mover. So we multiply the decay rate by $2$. Second, we must sum over final state gravitons. Recall that the momentum of the outgoing graviton is purely in the noncompact directions. So we get
\begin{equation}
\sum_{\vec k}~~\rightarrow~~\int V_{nc}~{d^4k\over (2\pi)^4}
\end{equation}
Putting all this together the rate of decay is \cite{dmcompare}
\begin{eqnarray}
\Gamma_{micro}&=&\int V_{nc}~{d^4k\over (2\pi)^4}2{1\over \Delta T}~Prob[\tilde X^1,\tilde X^2\rightarrow \tilde h_{12}]\nonumber \\
&=&\int {d^4 k\over (2\pi)^4}~(2\pi\omega G_5 L_T)~\rho_R({\omega\over 2})\rho_L({\omega\over 2})
\label{gmicro}
\end{eqnarray}
where we have used $2\kappa^2=16\pi G_{10}=16\pi (2\pi R)((2\pi)^4 V) G_5$ and (\ref{volumes}).
\subsubsection{Three large charges $+$ nonextremality}\label{thre2}
First let us go to the extremal 3-charge NS1-NS5-P hole that we constructed and add a small amount of nonextremality so that the hole radiates. The area of the horizon is given to a first approximation by the area of the extremal hole, so in the Hawking emission rate (\ref{radiation}) we can use $\sigma$ equal to the area (\ref{area3charge}). What does our microscopic calculation give?
The 3-charge extremal hole is obtained for $n_p\ne 0$ but $\bar n_p=0$. To go off extremality by a small amount we take $\bar n_p$ small but nonzero. Thus $T_L\ll T_R$. From (\ref{thawking}) we have
\begin{equation}
{1\over T_H}={1\over 2}[{1\over T_R}+{1\over T_L}]\approx{1\over 2}{1\over T_L}
\end{equation}
Consider radiation quanta with energy $\omega$ comparable to the black hole temperature
\begin{equation}
\omega\sim T_L, ~~~\omega \ll T_R
\label{ineq}
\end{equation}
Then we have
\begin{equation}
\rho_R\approx {2 T_R\over \omega}
\end{equation}
and we find (using $L_T=n_1n_5(2\pi R)$)
\begin{eqnarray}
\Gamma&=&\int {d^4 k\over (2\pi)^4}~\rho_L({\omega\over 2}) 8\pi G_5\sqrt{n_1n_5n_p}\nonumber \\
&=&\int {d^4 k\over (2\pi)^4}{1\over e^{\omega\over T_H}-1}(4G_5) (2\pi\sqrt{n_1n_5n_p})
\end{eqnarray}
Noting that
\begin{equation}
2\pi\sqrt{n_1n_5n_p}\approx S_{micro}={A_5\over 4G_5}
\end{equation}
we see that
\begin{equation}
\Gamma_{micro}=\int {d^4 k\over (2\pi)^4}{1\over e^{\omega\over T_H}-1} A_5
\end{equation}
Using (\ref{area}) in (\ref{radiation}) we find \cite{dmcompare}
\begin{equation}
\Gamma_{micro}~=~\Gamma_{Hawking}
\end{equation}
Thus we see that if we slightly excite the 3-charge extremal state then the radiation from the brane state has the same
gross behavior as the Hawking radiation from the corresponding near-extremal hole.
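The chain of substitutions above can be checked numerically: in the regime (\ref{ineq}) the integrand of (\ref{gmicro}) approaches $A_5/(e^{\omega/T_H}-1)$ with $A_5=4G_5\cdot 2\pi\sqrt{n_1n_5n_p}$. A sketch with illustrative values (units with $G_5=R=1$):

```python
import math

G5, R = 1.0, 1.0
n1, n5 = 100.0, 100.0
np_, nbarp = 1.0e8, 1.0     # near-extremal: nbar_p << n_p

LT = 2*math.pi*R*n1*n5
TR = math.sqrt(2*np_  /(math.pi*R*LT))
TL = math.sqrt(2*nbarp/(math.pi*R*LT))
TH = 2.0/(1.0/TR + 1.0/TL)  # 1/T_H = (1/T_R + 1/T_L)/2

w = TL                      # radiation with omega ~ T_H ~ T_L << T_R

rhoR = 1.0/(math.exp(w/(2*TR)) - 1)
rhoL = 1.0/(math.exp(w/(2*TL)) - 1)

gamma_micro   = 2*math.pi*w*G5*LT*rhoR*rhoL            # integrand of (gmicro)
A5            = 4*G5*2*math.pi*math.sqrt(n1*n5*np_)    # horizon area from S = A/4G
gamma_hawking = A5/(math.exp(w/TH) - 1)                # black body form with sigma = A

assert abs(gamma_micro - gamma_hawking) < 1e-3 * gamma_hawking
```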
\subsubsection{Two large charges $+$ nonextremality}\label{twol2}
In studying the entropy of brane states we started with the case of three large charges and worked towards lesser charges.
We follow the same approach here. Let us now take the case studied in section~\ref{twol}, where we had two large charges NS1-NS5 plus a small amount of nonextremality. Now $n_p$ is not large; rather $n_p, \bar n_p$ should be thought of as being of the same order (and much smaller than $n_1, n_5$). We have $T_H\sim T_L\sim T_R$, so we consider radiation with energy
\begin{equation}
\omega\sim T_L, ~~~\omega \sim T_R
\label{ineqq}
\end{equation}
For the Hawking result (\ref{radiation}) we calculate the absorption cross section $\sigma$ by solving the wave equation (\ref{wave}) in the geometry (\ref{fullmetric}) with parameters in the domain (\ref{2chargelim}), and we find \cite{maldastrom}
\begin{equation}
\sigma=\pi^3 (r_0^2\sinh^2\alpha)(r_0^2\sinh^2\gamma)\omega {e^{\omega\over T_H}-1\over (e^{\omega\over 2 T_R}-1)(e^{\omega\over 2 T_L}-1)}
\end{equation}
Using (\ref{radiation}) the Hawking emission rate will be
\begin{equation}
\Gamma_{Hawking}=\int {d^4 k\over (2\pi)^4}~\pi^3 (r_0^2\sinh^2\alpha)(r_0^2\sinh^2\gamma)\omega {1\over (e^{\omega\over 2 T_R}-1)(e^{\omega\over 2 T_L}-1)}
\label{greybody}
\end{equation}
This does not have the standard black body form as a function of temperature $T_H$, so we say that the radiation is dressed by `greybody factors'. Such greybody factors are always expected to occur when the wavelength of the radiated wave becomes comparable to some length scale like the diameter of the radiating body.
Now look at the microscopic expression (\ref{gmicro}). Substituting the values of $G_5, L_T$ we have
\begin{equation}
\Gamma_{micro}=\int {d^4 k\over (2\pi)^4}~({\pi^3 n_1n_5\alpha'^4 g^2\over V})\omega {1\over (e^{\omega\over 2 T_R}-1)(e^{\omega\over 2 T_L}-1)}
\end{equation}
Using
\begin{equation}
r_0^2\sinh^2\alpha\approx {g^2\alpha'^3\over V}n_1, ~~~r_0^2\sinh^2\gamma\approx {\alpha'}n_5
\end{equation}
we again find \cite{maldastrom}
\begin{equation}
\Gamma_{micro}=\Gamma_{Hawking}
\end{equation}
This is very interesting, because we have reproduced the greybody factors found in the classical computation (\ref{greybody}) by a string theory calculation. The classical result had factors that correspond in the string computation to left moving and right moving particle densities. It is as if the classical geometry `knew' that there was an effective string description of the black hole.
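The only nontrivial input in this comparison is the prefactor identity $2\pi G_5L_T=\pi^3(r_0^2\sinh^2\alpha)(r_0^2\sinh^2\gamma)$. The sketch below checks it numerically, using $G_5=\pi g^2\alpha'^4/4RV$, which follows from the convention $2\kappa^2=(2\pi)^7g^2\alpha'^4$ together with the relation quoted below (\ref{gmicro}); this choice of convention is our assumption.

```python
import math

# Illustrative moduli; ap stands for alpha'
R, V, g, ap = 2.0, 3.0, 0.5, 1.0
n1, n5 = 40.0, 25.0

G5 = math.pi*g**2*ap**4/(4*R*V)  # assumes 2 kappa^2 = (2 pi)^7 g^2 ap^4
LT = 2*math.pi*R*n1*n5

lhs = 2*math.pi*G5*LT            # prefactor of (gmicro)

Q1 = g**2*ap**3*n1/V             # r0^2 sinh^2(alpha) in the 2-charge limit
Q5 = ap*n5                       # r0^2 sinh^2(gamma)
rhs = math.pi**3*Q1*Q5           # prefactor of (greybody)

assert abs(lhs - rhs) < 1e-12 * rhs
```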
\subsubsection{One large charge $+$ nonextremality}\label{onel2}
Let us now consider the case where we have one large charge (NS5) plus nonextremality. At leading order the absorption cross section for minimal scalars is $\sigma=A$ (eq. (\ref{area})), and the emission rate is given by (\ref{radiation}). On the microscopic side we saw that the states are accounted for by a fractional tension string vibrating in the plane of the NS5 branes. We can compute the absorption by such a string, and we find \cite{klebanovmathur}
\begin{equation}
\sigma_{micro}=A
\end{equation}
which again implies $\Gamma_{micro}=\Gamma_{Hawking}$. A similar result holds for the case of 3+1 noncompact dimensions when we have 2 large charges + nonextremality.
\subsubsection{Summary}
We see that the degrees of freedom of the brane bound states that gave an entropy equal to the Bekenstein entropy also give radiation that agrees with the Hawking radiation from the corresponding holes. This is important, because it indicates that we have identified the correct physical degrees of freedom in the state.
Similar computations of absorption have been done for some other brane bound states (see for example
\cite{klebanov}) and agreement with the Hawking rate obtained.
\section{Constructing the microstates}
\label{cons}\setcounter{equation}{0}
We have seen that string theory gives us a count of microstates which agrees with the Bekenstein entropy, and using the dynamics of weakly coupled branes we also correctly reproduce Hawking radiation. But to solve the information problem we need to know what these microstates {\it look} like. We want to understand the structure of states in the coupling domain where we get the black hole. This is in contrast to a count at $g=0$ which can give us the correct {\it number} of states (since BPS states do not shift under changes of $g$) but will not tell us what the inside of a black hole looks like.
At this stage we already notice a puzzling fact. For the three charge case we found $S_{micro}=S_{Bek}=2\pi\sqrt{n_1n_5n_p}$. But suppose we keep only two of the charges, setting say $n_5=0$. Then the Bekenstein entropy $S_{Bek}$ becomes zero; this is why we had to take three charges to get a good black hole. But the microscopic entropy for two charges NS1-P was $S_{micro}=2\pi\sqrt{2}\sqrt{n_1n_p}$, which is nonzero.
One might say that the 2-charge case is just not a system that gives a good black hole, and should be disregarded in our investigation of black holes. But this would be strange, since on the microscopic side the entropy of the 2-charge system arose in a very similar way to that for the three charge system; in each case we partitioned the momentum on a string or `effective string' among harmonics. We would therefore like to take a closer look at the gravity side of the problem for the case of two charges.
We get the metric for NS1-P by setting to zero the $Q_5$ charge in (\ref{tenp}). With a slight change of notation we write the metric as ($u=t+y, ~v=t-y$)
\begin{eqnarray}
ds^2_{string}&=&H[-dudv+Kdv^2]+\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
B_{uv}&=&{1\over 2}[H-1]\nonumber \\
e^{2\phi}&=&H\nonumber \\
H^{-1}&=&1+{Q_1\over r^2}, ~~\qquad K={Q_p\over r^2}
\label{naive}
\end{eqnarray}
We will call this metric the {\it naive} metric for NS1-P. This is because we will later argue that this metric is not produced by any configuration of NS1, P charges. It is a solution of the low energy supergravity equations away from $r=0$, but just because we can write such a solution does not mean that the singularity at $r=0$ will be an allowed one in the full string theory.
What then are the singularities that {\it are} allowed? If we start with flat space, then string theory tells us that excitations around flat space are described by configurations of various fundamental objects of the theory; in particular, the fundamental string. We can wrap this string around a circle like the $S^1$ in our compactification. We have also seen that we can wrap this string $n_1$ times around the $S^1$ forming a bound state. For $n_1$ large this configuration will generate the solution which has only NS1 charge
\begin{eqnarray}
ds^2_{string}&=&H[-dudv]+\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
B_{uv}&=&{1\over 2}[H-1]\nonumber \\
e^{2\phi}&=&H\nonumber \\
H^{-1}&=&1+{Q_1\over r^2}
\label{f1}
\end{eqnarray}
This solution is also singular at $r=0$, but this is a singularity that we must accept since the geometry was generated by a source that exists in the theory. One may first take the limit $g\rightarrow 0$ and get the string wrapped $n_1$ times around $S^1$ in flat space. Then we can increase $g$ to a nonzero value, noting that we can track the state under the change since it is a BPS state. If $n_1$ is large and we are not too close to $r=0$ then (\ref{f1}) will be a good description of the solution corresponding to the bound state of $n_1$ units of NS1 charge.
Now let us ask what happens when we add P charge. We have already seen that in the bound state NS1-P the momentum P will be carried as traveling waves on the `multiwound' NS1. Here we come to the most critical point of our analysis: {\it There are no longitudinal vibration modes of the fundamental string NS1}. Thus all the momentum must be carried by transverse vibrations. But this means that the string must bend away from its central axis in order to carry the momentum, so it will not be confined to the location $r=0$ in the transverse space. We will shortly find the correct solutions for NS1-P, but we can already see that the solution (\ref{naive}) may be incorrect since it requires the NS1-P source to be at a point $r=0$ in the transverse space.
The NS1 string has many strands since it is multiwound. When carrying a generic traveling wave these strands will separate from each other. We have to find the metric created by these strands. Consider the bosonic excitations, and for the moment restrict attention to the 4 that give bending in the noncompact directions $x_i$. The wave carried by the NS1 is then described by a transverse displacement profile $\vec F(v)$, where $v=t-y$. The metric for a single strand of the string carrying such a wave is known \cite{wave}
\begin{eqnarray}
ds^2_{string}&=&H[-dudv+Kdv^2+2A_i dx_i dv]+\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
B_{uv}&=&{1\over 2}[H-1], ~~\qquad B_{vi}=HA_i\nonumber \\
e^{2\phi}&=&H\nonumber \\
H^{-1}(\vec x ,y,t)&=&1+{Q_1\over |\vec x-\vec F(t-y)|^2}\nonumber \\
K(\vec x ,y,t)&=&{Q_1|\dot{\vec F}(t-y)|^2\over |\vec x-\vec F(t-y)|^2}\nonumber \\
A_i(\vec x ,y,t)&=&-{Q_1\dot F_i(t-y)\over |\vec x-\vec F(t-y)|^2}
\label{fpsingle}
\end{eqnarray}
Now suppose that we have many strands of the NS1 string, carrying different vibration profiles $\vec F^{(s)}(t-y)$.
While the vibration profiles are different, the strands all carry momentum in the same direction $y$. In this case the strands are mutually BPS and the metric of all the strands can be obtained by superposing the harmonic functions arising in the solutions for the individual strands. Thus we get
\begin{eqnarray}
ds^2_{string}&=&H[-dudv+Kdv^2+2A_i dx_i dv]+\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
B_{uv}&=&{1\over 2}[H-1], ~~\qquad B_{vi}=HA_i\nonumber \\
e^{2\phi}&=&H\nonumber \\
H^{-1}(\vec x, y,t)&=&1+\sum_s{Q_1^{(s)}\over |\vec x-\vec F^{(s)}(t-y)|^2}\nonumber \\
K(\vec x, y,t)&=&\sum_s{Q_1^{(s)}|\dot{\vec F}^{(s)}(t-y)|^2\over |\vec x-\vec F^{(s)}(t-y)|^2}\nonumber \\
A_i(\vec x,y,t)&=&-\sum_s{Q_1^{(s)}\dot F^{(s)}_i(t-y)\over |\vec x-\vec F^{(s)}(t-y)|^2}
\label{fpmultiple}
\end{eqnarray}
Now consider the string that we actually have in our problem.
We can open up the multiwound string by going to the $n_1$-fold cover of $S^1$. Then the string is described by the profile $\vec F(t-y)$, with $0\le y<2\pi R n_1$. The part of the string in the range $0\le y<2\pi R$ gives one strand in the actual space, the part in the range $2\pi R\le y<4\pi R$ gives another strand, and so on. These different strands do not lie on top of each other in general, so we have a many strand situation as in (\ref{fpmultiple}) above. But note that the end of one strand is at the same position as the start of the next strand, so the strands are not completely independent of each other. In any case all strands are given once we give the profile function $\vec F(v)$.
The above solution has a sum over strands that looks difficult to carry out in practice. But now we note that there is a simplification in the `black hole' limit which is defined by
\begin{equation}
n_1, n_p\rightarrow \infty
\label{fiftqq}
\end{equation}
while the moduli like $g, R, V$ are held fixed. We have called this limit the black hole limit for the following reason.
As we increase the number of quanta $n_i$ in a bound state, the system will in general change its behavior and properties. In the limit $n_i\rightarrow\infty$ we expect that there will be a certain set of properties that will govern the system, and these are the properties that will be the universal ones that characterize large black holes (assuming that the chosen charges do form a black hole).
The total length of the NS1 multiwound string is $2\pi n_1 R$. Consider the gas of excitations discussed in section~(\ref{them}). The energy of the typical excitation is
\begin{equation}
e\sim T\sim {\sqrt{n_1n_p}\over L_T}
\end{equation}
so that the generic quantum is in a harmonic
\begin{equation}
k\sim \sqrt{n_1n_p}
\label{thir}
\end{equation}
on the multiwound NS1 string. So the wavelength of the vibration is
\begin{equation}
\lambda\sim {2\pi Rn_1\over \sqrt{n_1n_p}}\sim {\sqrt{n_1\over n_p}}R
\label{fift}
\end{equation}
The generic state of the string will be a complicated wavefunction arising from excitations of all the Fourier modes of the string, so it will not be well described by a classical geometry. We will first take some limits to get good classical solutions, and use the results to estimate the `size' of the generic `fuzzball'. Let us take a state where the typical wavenumber is much smaller than the value (\ref{thir})
\begin{equation}
{k\over \sqrt{n_1n_p}}\equiv \alpha \ll1
\end{equation}
Then the wavelength of the vibrations is much longer than the length of the compactification circle
\begin{equation}
\lambda={2\pi R n_1\over k}={2\pi R \over \alpha}\sqrt{n_1\over n_p}\gg2\pi R
\end{equation}
where we have assumed that $n_1, n_p$ are of the same order.
When executing its vibration the string will move in the transverse space across a coordinate distance
\begin{equation}
\Delta x\sim |\dot{\vec F}|\lambda
\end{equation}
But the distance between neighboring strands of the string will be
\begin{equation}
\delta x=|\dot{\vec F}|(2\pi R)
\end{equation}
We thus see that
\begin{equation}
{\delta x\over \Delta x}\sim \sqrt{n_p\over n_1}~\alpha\ll1
\end{equation}
We can therefore look at the metric at points that are not too close to any one of the strands, but that are still in the general region occupied
by the vibrating string
\begin{equation}
|\vec x-\vec F(v)|\gg\delta x
\end{equation}
(It turns out that after we dualize to NS1-NS5 the geometry is smooth at the location of the strands; we will see this in an explicit example below, and for a generic discussion see \cite{lmm, fuzz}.) In this case neighboring strands give very similar contributions to the harmonic functions in (\ref{fpmultiple}), and we may replace the sum by an integral
\begin{equation}
\sum_{s=1}^{n_1} \rightarrow \int _{s=0}^{n_1} ds = \int_{y=0}^{2\pi R n_1}{ds\over dy} dy
\end{equation}
Since the length of the compactification circle is $2\pi R$ we have
\begin{equation}
{ds\over dy}={1\over 2\pi R}
\end{equation}
Also, since the vibration profile is a function of $v=t-y$ we can replace the integral over $y$ by an integral over $v$. Thus we have
\begin{equation}
\sum_{s=1}^{n_1}\rightarrow {1\over 2\pi R}\int_{v=0}^{L_T }dv
\end{equation}
where
\begin{equation}
L_T=2\pi R n_1
\label{fseven}
\end{equation}
is the total range of the $y$ coordinate on the multiwound string. Finally, note that
\begin{equation}
Q_1^{(s)}={Q_1\over n_1}
\end{equation}
We can then write the NS1-P solution as
\begin{eqnarray}
ds^2_{string}&=&H[-dudv+Kdv^2+2A_i dx_i dv]+\sum_{i=1}^4 dx_idx_i+\sum_{a=1}^4 dz_adz_a\nonumber \\
B_{uv}&=&{1\over 2}[H-1], ~~\qquad B_{vi}=HA_i\nonumber \\
e^{2\phi}&=&H
\label{ttsix}
\end{eqnarray}
where
\begin{eqnarray}
H^{-1}&=&1+{Q_1\over L_T}\int_0^{L_T}\! {dv\over |\vec x-\vec F(v)|^2}\\
K&=&{Q_1\over
L_T}\int_0^{L_T}\! {dv (\dot
F(v))^2\over |\vec x-\vec F(v)|^2}\\
A_i&=&-{Q_1\over L_T}\int_0^{L_T}\! {dv\dot F_i(v)\over |\vec x-\vec F(v)|^2}
\label{functionsq}
\end{eqnarray}
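As a quick consistency check of this continuum limit, one can compare the discrete strand sum of (\ref{fpmultiple}) with the integral form above for a circular profile, and also against the closed form that will be derived for this profile in section~\ref{aspe}. The sketch below uses illustrative parameter values; the function names are ours, not from the text.

```python
import numpy as np

def strand_positions(n, ahat):
    """n strand positions equally spaced on a circle of radius ahat
    in the (x1, x2) plane (an illustrative circular profile)."""
    xi = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.stack([ahat * np.cos(xi), ahat * np.sin(xi),
                     np.zeros(n), np.zeros(n)], axis=1)

def harmonic_discrete(x, n1, Q1, ahat):
    """Discrete strand sum: each of the n1 strands carries charge Q1/n1."""
    F = strand_positions(n1, ahat)
    return 1.0 + np.sum((Q1 / n1) / np.sum((x - F) ** 2, axis=1))

def harmonic_continuum(x, Q1, ahat, nquad=20000):
    """Continuum limit (Q1/L_T) * int dv / |x - F(v)|^2, midpoint rule."""
    F = strand_positions(nquad, ahat)
    return 1.0 + np.mean(Q1 / np.sum((x - F) ** 2, axis=1))

x = np.array([0.7, 0.3, 0.4, 0.2])      # a point away from the strands
Q1, ahat = 1.0, 1.0
disc = harmonic_discrete(x, 200, Q1, ahat)
cont = harmonic_continuum(x, Q1, ahat)

# closed form for this profile (derived for the special example below)
rt2 = x @ x                              # tilde-r^2
s2t = (x[0]**2 + x[1]**2) / rt2          # sin^2(tilde-theta)
closed = 1.0 + Q1 / np.sqrt((rt2 + ahat**2)**2 - 4*ahat**2*rt2*s2t)
```

Already for a few hundred strands the discrete sum agrees with the continuum integral to high accuracy at points away from the strands, which is the statement used in the text.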
\subsection{Obtaining the NS1-NS5 geometries}\label{obta}
From (\ref{twop}) we see that we can perform S,T dualities to map the above NS1-P solutions to NS1-NS5 solutions.
For a detailed presentation of the steps (for a specific $\vec F(v)$) see \cite{lm3}. The computations are straightforward, except for one step where we need to perform an electric-magnetic duality. Recall that under T-duality a Ramond-Ramond gauge field $C^{(p)}$
can change to a higher form $C^{(p+1)}$ or to a lower form $C^{(p-1)}$. We may therefore find ourselves with $C^{(2)}$ and $C^{(6)}$ at the same time in the solution. The former gives $F^{(3)}$ while the latter gives $F^{(7)}$. We should convert the $F^{(7)}$ to $F^{(3)}$ by taking the dual, so that the solution is completely described using only $C^{(2)}$. Finding $F^{(3)}$ is straightforward, but it takes some inspection to find a $C^{(2)}$ which will give this $F^{(3)}$.
Note that we have chosen to write the classical solutions in a way where $\phi$ goes to zero at infinity, so that the true dilaton $\hat\phi$ is given by
\begin{equation}
e^{\hat\phi}=ge^\phi
\end{equation}
The dualities change the values of the moduli describing the solution. Recall that the $T^4$ directions are $x^6, x^7, x^8, x^9$, while the $S^1$ direction is $y\equiv x^5$. We keep track of (i) the coupling $g$, (ii) the value of the scale $Q_1$ which occurred in the harmonic function for the NS1-P geometry, (iii) the radius $R$ of the $x^5$ circle, (iv) the radius $R_6$ of the $x^6$ circle, and (v) the volume $(2\pi)^4 V$ of $T^4$. We can start with NS1-P and reach NS5-NS1, which gives
(here we set $\alpha'=1$ for compactness)
\begin{equation}\label{DualParam}
\left(\begin{array}{c}
g\\Q_1\\R\\R_6\\V
\end{array}\right)
\stackrel{\textstyle S}{\rightarrow}
\left(\begin{array}{c}
1/g\\Q_1/{g}\\R/\sqrt{g}\\R_6/\sqrt{g}\\V/g^2
\end{array}\right)
\stackrel{\textstyle T6789}{\rightarrow}
\left(\begin{array}{c}
g/V\\Q_1/{g}\\R/\sqrt{g}\\\sqrt{g}/R_6\\g^2/V
\end{array}\right)
\stackrel{\textstyle S}{\rightarrow}
\left(\begin{array}{c}
V/g\\Q_1{V}/g^2\\R\sqrt{V}/g\\\sqrt{V}/R_6\\V
\end{array}\right)
\stackrel{\textstyle T56}{\rightarrow}
\left(\begin{array}{c}
R_6/R\\Q_1{V}/g^2\\g/(R\sqrt{V})\\R_6/\sqrt{V}\\R_6^2
\end{array}\right)
\equiv
\left(\begin{array}{c}
g'\\Q_5'\\R'\\R'_6\\V'
\end{array}\right)
\label{eightt}
\end{equation}
where at the last step we have noted that the $Q_1$ charge in NS1-P becomes the NS5 charge $Q'_5$ in NS5-NS1.
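The chain (\ref{eightt}) is easy to verify symbolically. The sketch below encodes the standard S- and T-duality action on the moduli (with $\alpha'=1$) as used in the table; it is a consistency check of the composition, not an independent derivation.

```python
import sympy as sp

g, Q1, R, R6, V = sp.symbols('g Q1 R R6 V', positive=True)

def S(m):
    # S-duality: g -> 1/g, lengths -> length/sqrt(g), V -> V/g^2, Q -> Q/g
    g_, Q_, R_, R6_, V_ = m
    return (1/g_, Q_/g_, R_/sp.sqrt(g_), R6_/sp.sqrt(g_), V_/g_**2)

def T6789(m):
    # T-duality on all four torus circles: g -> g/V, R6 -> 1/R6, V -> 1/V
    g_, Q_, R_, R6_, V_ = m
    return (g_/V_, Q_, R_, 1/R6_, 1/V_)

def T56(m):
    # T-duality on x5, x6: g -> g/(R R6), R -> 1/R, R6 -> 1/R6, V -> V/R6^2
    g_, Q_, R_, R6_, V_ = m
    return (g_/(R_*R6_), Q_, 1/R_, 1/R6_, V_/R6_**2)

final = T56(S(T6789(S((g, Q1, R, R6, V)))))
expected = (R6/R, Q1*V/g**2, g/(R*sp.sqrt(V)), R6/sp.sqrt(V), R6**2)
checks = [sp.simplify(a - b) == 0 for a, b in zip(final, expected)]
```

Each component of the composed map reproduces the final column of (\ref{eightt}).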
We will also choose coordinates at each stage so that the metric goes to $\eta_{AB}$ at infinity. Since we are writing the string metric, this convention is not affected by T-dualities, but when we perform an S-duality we need to re-scale the coordinates to keep the metric $\eta_{AB}$. In the NS1-P solution the harmonic function generated by the NS1 branes is (for large $r$)
\begin{equation}
H^{-1}\approx 1+{Q_1\over r^2}
\label{ninet}
\end{equation}
After we reach the NS1-NS5 system by dualities the corresponding harmonic function will behave as
\begin{equation}
H^{-1}\approx 1+{Q'_5\over r^2}
\label{fsix}
\end{equation}
where from (\ref{eightt}) we see that
\begin{equation}
Q'_5=\mu^2 Q_1
\end{equation}
with
\begin{equation}
\mu^2={V\over g^2}
\label{fone}
\end{equation}
Note that $Q_1, Q'_5$ have units of $(length)^2$. Thus all lengths get scaled by a factor $\mu$ after the dualities. Note that
\begin{equation}
Q'_5=\mu^2Q_1=\mu^2{g^2n_1\over V}=n_1\equiv n'_5
\label{feight}
\end{equation}
which is the correct parameter to appear in the harmonic function (\ref{fsix}) created by the NS5 branes.
With all this, for NS5-NS1 we get the solutions \cite{lm4}
\begin{equation}
ds^2_{string}={1\over 1+K}[-(dt-A_i dx^i)^2+(dy+B_i dx^i)^2]+{1\over
H}dx_idx_i+dz_adz_a
\label{qsix}
\end{equation}
where the harmonic functions are
\begin{eqnarray}
H^{-1}&=&1+{\mu^2Q_1\over \mu L_T}\int_0^{\mu L_T} {dv\over |\vec x-\mu\vec F(v)|^2}\nonumber \\
K&=&{\mu^2Q_1\over
\mu L_T}\int_0^{\mu L_T} {dv (\mu^2\dot
F(v))^2\over |\vec x-\mu\vec F(v)|^2},\nonumber\\
A_i&=&-{\mu^2Q_1\over \mu L_T}\int_0^{\mu L_T} {dv~\mu\dot F_i(v)\over |\vec x-\mu\vec F(v)|^2}
\label{functionsqq}
\end{eqnarray}
Here $B_i$ is given by
\begin{equation}
dB=-*_4dA
\label{vone}
\end{equation}
and $*_4$ is the duality operation in the 4-d transverse space
$x_1\dots
x_4$ using the flat metric $dx_idx_i$.
By contrast the `naive' geometry which one would write for NS1-NS5 is
\begin{equation}
ds^2_{naive}={1\over (1+{Q'_1\over r^2})}[-dt^2+dy^2]+(1+{Q'_5\over
r^2})dx_idx_i+dz_adz_a
\label{d1d5naive}
\end{equation}
\subsection{A special example}\label{aspe}
The above general solution looks rather complicated. To get a feeling for the nature of these NS1-NS5 solutions let us examine a simple case in detail. Start with the NS1-P solution which has the following vibration profile for the NS1 string
\begin{equation}
F_1=\hat a\cos\omega v,\quad F_2=\hat a\sin\omega v, \quad F_3=F_4=0
\label{yyonePrime}
\end{equation}
where $\hat a$ is a constant.
This makes the NS1 describe a uniform helix, with the transverse displacement rotating in a circle in the $x_1-x_2$ plane. Choose
\begin{equation}
\omega=\frac{1}{n_1R}
\end{equation}
This makes the NS1 have just one turn of the helix in the covering space. Thus all the energy has been put in the lowest harmonic on the string.
We then find
\begin{equation}
H^{-1}=1+{Q_1\over 2\pi}\int_0^{2\pi} {d\xi\over
(x_1-\hat a\cos\xi)^2+(x_2-\hat a\sin\xi)^2+x_3^2+x_4^2}
\end{equation}
To compute the integral we introduce polar coordinates in the $\vec x $ space
\begin{eqnarray}\label{EpolarMap}
x_1&=&{\tilde r} \sin{\tilde \theta} \cos{\tilde\phi}, ~~~\qquad x_2={\tilde r}
\sin{\tilde\theta} \sin{\tilde\phi},\nonumber \\
x_3&=&{\tilde r} \cos{\tilde\theta} \cos{\tilde\psi}, ~~\qquad x_4={\tilde r}
\cos{\tilde\theta} \sin{\tilde\psi}
\label{etwo}
\end{eqnarray}
Then we find
\begin{equation}
H^{-1}=1+{Q_1\over
\sqrt{(\tilde r^2+\hat a^2)^2-4 \hat a^2\tilde r^2\sin^2\tilde\theta}}
\end{equation}
The above expression simplifies if we
change from $\tilde r, \tilde\theta$ to coordinates $r,\theta$:
\begin{eqnarray}
{\tilde r}&=& \sqrt{r^2+\hat a^2\sin^2\theta}, ~~\qquad \cos{\tilde\theta}
={r\cos\theta\over \sqrt{r^2+\hat a^2\sin^2\theta}}
\label{ethree}
\end{eqnarray}
(${\tilde\phi}$ and ${\tilde\psi}$ remain unchanged). Then we get
\begin{equation}
H^{-1}=1+{Q_1\over r^2+\hat a^2\cos^2\theta}
\end{equation}
Similarly we get
\begin{equation}
K={\hat a^2\over n_1^2 R^2}~{Q_1\over (r^2+\hat a^2\cos^2\theta)}
\end{equation}
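Both the closed form of the angular integral and its simplification under the coordinate change (\ref{ethree}) can be verified with a short symbolic computation; the numerical values below are arbitrary illustrative choices.

```python
import sympy as sp

# s stands for sin^2(theta); the map (ethree) gives
# tilde-r^2 = r^2 + a^2 s  and  cos^2(tilde-theta) = r^2 (1-s) / tilde-r^2
r, a, s = sp.symbols('r a s', positive=True)
rt2 = r**2 + a**2*s
sin2_tht = 1 - r**2*(1 - s)/rt2

# identity behind the simplification of H^{-1}:
# (rt^2 + a^2)^2 - 4 a^2 rt^2 sin^2(tilde-theta) = (r^2 + a^2 cos^2 theta)^2
lhs = (rt2 + a**2)**2 - 4*a**2*rt2*sin2_tht
rhs = (r**2 + a**2*(1 - s))**2
identity_ok = sp.simplify(lhs - rhs) == 0

# numeric check of the angular integral against the closed form,
# at a point with tilde-phi = tilde-psi = 0 (illustrative values)
xi = sp.symbols('xi')
vals = {r: sp.Rational(7, 10), a: sp.Rational(13, 10),
        s: sp.sin(sp.Rational(1, 2))**2}
x1 = sp.sqrt((rt2*sin2_tht).subs(vals))          # tilde-r sin(tilde-theta)
x3 = sp.sqrt((rt2*(1 - sin2_tht)).subs(vals))    # tilde-r cos(tilde-theta)
av = vals[a]
den = (x1 - av*sp.cos(xi))**2 + (av*sp.sin(xi))**2 + x3**2
average = (sp.Integral(1/den, (xi, 0, 2*sp.pi))/(2*sp.pi)).evalf()
closed = (1/(r**2 + a**2*(1 - s))).subs(vals).evalf()
```

The angular average of $1/|\vec x-\vec F|^2$ indeed reproduces $1/(r^2+a^2\cos^2\theta)$ at the chosen point.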
With a little algebra we also find
\begin{eqnarray}
A_{x_1}&=&{Q_1\hat a\over 2\pi R n_1}\int_0^{2\pi}{d\xi \sin\xi\over (x_1-\hat a\cos\xi)^2+(x_2-\hat a\sin\xi)^2+x_3^2+x_4^2}\nonumber \\
&=&{Q_1\hat a\over 2\pi R n_1}\int_0^{2\pi}{d\xi \sin\xi\over (\tilde r^2+\hat a^2-2\tilde r \hat a\sin\tilde\theta\cos(\xi-\tilde\phi))}
\nonumber \\
&=&{Q_1\hat a^2\over R n_1}\sin\tilde\phi {\sin\theta\over (r^2+\hat a^2\cos^2\theta)}{1\over \sqrt{r^2+\hat a^2}}
\end{eqnarray}
\begin{eqnarray}
A_{x_2}&=&-{Q_1\hat a^2\over R n_1}\cos\tilde\phi {\sin\theta\over (r^2+\hat a^2\cos^2\theta)}{1\over \sqrt{r^2+\hat a^2}}
\end{eqnarray}
\begin{equation}
A_{x_3}=0, \qquad A_{x_4}=0
\end{equation}
We can write this in polar coordinates
\begin{eqnarray}
A_{\tilde\phi}&=&A_{x_1}{\partial x_1\over \partial\tilde\phi}+A_{x_2}{\partial x_2\over \partial\tilde\phi}\nonumber \\
&=&-{Q_1\hat a^2\over R n_1}{\sin^2\theta\over (r^2+\hat a^2\cos^2\theta)}
\end{eqnarray}
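As a check on the projection to polar components, one can verify symbolically that the expressions for $A_{x_1}, A_{x_2}$ above combine into the quoted $A_{\tilde\phi}$; in the sketch below the common prefactor $Q_1\hat a^2/(R n_1)$ is abbreviated by a single symbol.

```python
import sympy as sp

r, th, phi, a, C = sp.symbols('r theta phi a C', positive=True)

# quoted closed forms, with C standing for Q1*ahat^2/(R*n1)
f = r**2 + a**2*sp.cos(th)**2
Ax1 = C*sp.sin(phi)*sp.sin(th)/(f*sp.sqrt(r**2 + a**2))
Ax2 = -C*sp.cos(phi)*sp.sin(th)/(f*sp.sqrt(r**2 + a**2))

# x1 = rt sin(tht) cos(phi), x2 = rt sin(tht) sin(phi), and the map
# (ethree) gives rt*sin(tht) = sqrt(r^2 + a^2)*sin(theta)
rho = sp.sqrt(r**2 + a**2)*sp.sin(th)
dx1_dphi = -rho*sp.sin(phi)
dx2_dphi = rho*sp.cos(phi)

Aphi = Ax1*dx1_dphi + Ax2*dx2_dphi
ok = sp.simplify(Aphi + C*sp.sin(th)**2/f) == 0
```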
We can now substitute these functions in (\ref{ttsix}) to get the solution for the NS1-P system for the choice of profile (\ref{yyonePrime}).
Let us now get the corresponding NS1-NS5 solution. Recall that all lengths scale up by a factor $\mu$ given through (\ref{fone}).
The transverse displacement profile $\vec F$ has units of length, and so scales up by the factor $\mu$. We define
\begin{equation}
a\equiv \mu \hat a
\end{equation}
so that
\begin{equation}
\mu F_1= a\cos\omega v,\quad \mu F_2= a\sin\omega v, \quad F_3=F_4=0
\label{yyonePrimep}
\end{equation}
Let
\begin{equation}
f=r^2+a^2\cos^2\theta
\end{equation}
The NS1 charge becomes the NS5 charge after dualities, and the corresponding harmonic function becomes
\begin{equation}
H'^{-1}=1+{Q'_5\over f}
\end{equation}
The harmonic function for momentum P was
\begin{equation}
K={Q_1\hat a^2\over n_1^2R^2}{1\over (r^2+\hat a^2 \cos^2\theta)}\equiv {Q_p\over (r^2+\hat a^2 \cos^2\theta)}
\end{equation}
After dualities $K$ will change to the harmonic function generated by NS1 branes. Performing the change of scale (\ref{fone})
we find
\begin{equation}
K'=\mu^2 {Q_p\over f}\equiv{Q'_1\over f}
\end{equation}
Using the value of $Q_1$ from (\ref{sixt}) we observe that
\begin{equation}
a={\sqrt{Q'_1Q'_5}\over R'}
\label{ftwo}
\end{equation}
where $R'$ is the radius of the $y$ circle after dualities (given in (\ref{eightt})).
To finish writing the NS1-NS5 solution we also need the functions $B_i$ defined through (\ref{vone}). In the coordinates
$r, \theta, \tilde\phi\equiv\phi, \tilde\psi\equiv\psi$ we have
\begin{equation}
A_\phi=-{a\sqrt{Q'_1Q'_5}}{\sin^2\theta\over f}
\end{equation}
We can check that the dual form is
\begin{equation}
B_\psi=-{a\sqrt{Q'_1Q'_5}}{\cos^2\theta\over f}
\end{equation}
To check this, note that the flat 4-D metric in our coordinates is
\begin{equation}
dx_idx_i={f\over r^2+a^2}dr^2+fd\theta^2+(r^2+a^2)\sin^2\theta d\phi^2+r^2\cos^2\theta d\psi^2
\end{equation}
We also have
\begin{equation}
\epsilon_{r\theta\phi\psi}=\sqrt{g}=f r\sin\theta\cos\theta
\end{equation}
We then find
\begin{equation}
F_{r\psi}=\partial_r B_\psi={a\sqrt{Q'_1Q'_5}}{2r\cos^2\theta\over f^2}=-\epsilon_{r\psi\theta\phi}g^{\theta\theta}g^{\phi\phi}[\partial_\theta A_\phi]=-(*dA)_{r\psi}
\end{equation}
\begin{equation}
F_{\theta\psi}=\partial_\theta B_\psi={a\sqrt{Q'_1Q'_5}}{r^2\sin(2\theta)\over f^2}=-\epsilon_{\theta\psi r\phi}g^{rr}g^{\phi\phi}[\partial_r A_\phi]=-(*dA)_{\theta\psi}
\end{equation}
verifying (\ref{vone}).
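This duality check is mechanical and can be reproduced symbolically; the sketch below (with $q$ standing for $a\sqrt{Q'_1Q'_5}$) verifies both component equations.

```python
import sympy as sp

r, th, a, q = sp.symbols('r theta a q', positive=True)  # q = a*sqrt(Q1'*Q5')

f = r**2 + a**2*sp.cos(th)**2
A_phi = -q*sp.sin(th)**2/f
B_psi = -q*sp.cos(th)**2/f

# flat transverse metric components and sqrt(g) in (r, theta, phi, psi)
g_rr = f/(r**2 + a**2)
g_thth = f
g_phph = (r**2 + a**2)*sp.sin(th)**2
sqrtg = f*r*sp.sin(th)*sp.cos(th)

# F_{r psi} = d_r B_psi = -(*dA)_{r psi}, with eps_{r psi th ph} = +sqrt(g)
ok1 = sp.simplify(sp.diff(B_psi, r)
                  + sqrtg*sp.diff(A_phi, th)/(g_thth*g_phph)) == 0

# F_{th psi} = d_th B_psi = -(*dA)_{th psi}, with eps_{th psi r ph} = -sqrt(g)
ok2 = sp.simplify(sp.diff(B_psi, th)
                  - sqrtg*sp.diff(A_phi, r)/(g_rr*g_phph)) == 0
```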
Putting all this in (\ref{qsix}) we find the NS1-NS5 (string) metric for the profile (\ref{yyonePrime})
\begin{eqnarray}\label{MaldToCompare}
d{s}^2&=&-H_1^{-1}(d{t}^2-d{ y}^2)+
H_5f\left(d\theta^2+\frac{d{r}^2}{{r}^2+a^2}\right)
-\frac{2a\sqrt{Q'_1 Q'_5}}{H_1f}\left(\cos^2\theta d{ y}d\psi+
\sin^2\theta d{ t}d\phi\right)\nonumber\\
&+&H_5\left[
\left({ r}^2+\frac{a^2Q'_1Q'_5\cos^2\theta}{H_1H_5f^2}\right)
\cos^2\theta d\psi^2+
\left({ r}^2+a^2-\frac{a^2Q'_1Q'_5\sin^2\theta}{H_1H_5f^2}\right)
\sin^2\theta d\phi^2\right]\nonumber \\
&+&~dz_adz_a
\end{eqnarray}
where
\begin{equation}\label{defFHProp}
f={r}^2+a^2\cos^2\theta,\qquad
H_1=1+{Q'_1\over f}, ~~H_5=1+{Q'_5\over f}
\end{equation}
At large $r$ this metric goes over to flat space. Let us consider the opposite limit $r\ll
(Q'_1Q'_5)^{1/4}$ (we write $r'=r/a$):
\begin{eqnarray}
ds^2&=&-({r'}^2+1)\frac{a^2dt^2}{Q'_1}+{r'}^2
\frac{a^2dy^2}{Q'_1}+
Q'_5\frac{d{r'}^2}{{r'}^2+1}\nonumber\\
&+&Q'_5\left[d\theta^2+\cos^2\theta \left(d{\psi}-
\frac{ady}{\sqrt{Q'_1Q'_5}}\right)^2+
\sin^2\theta \left(d{\phi}-\frac{adt}{\sqrt{Q'_1Q'_5}}\right)^2\right]\nonumber \\
&+&dz_adz_a
\label{fthree}
\end{eqnarray}
Let us transform to new angular coordinates
\begin{equation}
\psi'=\psi-{a\over \sqrt{Q'_1Q'_5}}y, ~~\qquad \phi'=\phi-{a\over \sqrt{Q'_1Q'_5}}t
\end{equation}
Since $\psi,y$ are both periodic coordinates, it is not immediately obvious that the first of these changes makes sense.
The identifications on these coordinates are
\begin{equation}
(\psi\rightarrow \psi+2\pi, ~~y\rightarrow y), ~~\qquad (\psi\rightarrow\psi, ~~y\rightarrow y+2\pi R')
\end{equation}
But note that we have the relation (\ref{ftwo}), which implies that the identifications on the new variables are
\begin{equation}
(\psi'\rightarrow \psi'+2\pi, ~~y\rightarrow y), ~~\qquad (\psi'\rightarrow \psi'-{2\pi a R'\over \sqrt{Q'_1Q'_5}}=\psi'-2\pi, ~~y\rightarrow y+2\pi R')
\end{equation}
so that we do have a consistent lattice of identifications on $\psi',y$.
The metric (\ref{fthree}) now becomes
\begin{eqnarray}
\label{esix}
ds^2&=&Q'_5\left[
-({r'}^2+1)\frac{dt^2}{R'^2}+{r'}^2
\frac{dy^2}{R'^2}+
\frac{d{r'}^2}{{r'}^2+1}\right]\nonumber\\
&+&Q'_5\left[d\theta^2+\cos^2\theta d{\psi'}^2+
\sin^2\theta d{\phi'}^2\right]+dz_adz_a
\end{eqnarray}
This is just $AdS_3\times S^3\times T^4$. Thus the full geometry is flat at infinity, has a `throat' type region at smaller $r$
where it approximates the naive geometry (\ref{d1d5naive}), and then instead of a singularity at $r=0$ it ends in a smooth `cap'. This particular geometry, corresponding to the profile (\ref{yyonePrime}), was derived earlier in \cite{bal,mm} by taking limits of general rotating black hole solutions found in \cite{cy}. We have now obtained it by starting with the particular NS1-P profile (\ref{yyonePrime}), and thus we note that it is only one member of the complete family parametrized by $\vec F$. It can be shown \cite{lmm, fuzz} that all the metrics of this family have the same qualitative structure as the particular metric that we studied; in particular they have no horizons, and they end in smooth `caps' near $r=0$. We depict the 2-charge NS1-NS5 microstate geometries in Figure~\ref{fig2}.
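The algebra taking (\ref{fthree}) to (\ref{esix}) can also be checked symbolically, treating the coordinate differentials as formal symbols and using $a=\sqrt{Q'_1Q'_5}/R'$ from (\ref{ftwo}) (here $R'$ is written {\tt Rp}); the inert $T^4$ part is omitted.

```python
import sympy as sp

rp, th, Q1, Q5, Rp = sp.symbols('rp theta Q1 Q5 Rp', positive=True)
dt, dy, dr, dth, dpsi, dphi = sp.symbols('dt dy dr dtheta dpsi dphi')
a = sp.sqrt(Q1*Q5)/Rp                          # relation (ftwo)

# metric (fthree), with differentials as formal symbols
ds2 = (-(rp**2 + 1)*a**2*dt**2/Q1 + rp**2*a**2*dy**2/Q1
       + Q5*dr**2/(rp**2 + 1)
       + Q5*(dth**2
             + sp.cos(th)**2*(dpsi - a*dy/sp.sqrt(Q1*Q5))**2
             + sp.sin(th)**2*(dphi - a*dt/sp.sqrt(Q1*Q5))**2))

# AdS_3 x S^3 form (esix) in the shifted angles
dpsip = dpsi - a*dy/sp.sqrt(Q1*Q5)
dphip = dphi - a*dt/sp.sqrt(Q1*Q5)
ds2_ads = (Q5*(-(rp**2 + 1)*dt**2/Rp**2 + rp**2*dy**2/Rp**2
               + dr**2/(rp**2 + 1))
           + Q5*(dth**2 + sp.cos(th)**2*dpsip**2 + sp.sin(th)**2*dphip**2))

ok = sp.expand(ds2 - ds2_ads) == 0
```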
\subsection{`Size' of the 2-charge bound state}\label{size}
The most important point that we have seen in the above discussion is that in the NS1-P bound state the NS1 undergoes transverse vibrations that cause its strands to spread out over a nonzero range in the transverse $\vec x$ space. Thus the bound state is not `pointlike'. Exactly how big {\it is} the bound state?
We have obtained good classical solutions by looking at solutions where the wavelength of vibrations $\lambda$ was much longer than the wavelength for the generic solution. To get an estimate of the size of the generic state we will now take our classical solutions and extrapolate to the domain where $\lambda$ takes its generic value (\ref{fift}).
The wavelength of vibrations for the generic state is
\begin{equation}
\lambda={L_T\over k}\sim {2\pi R n_1\over \sqrt{n_1n_p}}\sim R\sqrt{n_1\over n_p}
\end{equation}
We wish to ask how much the transverse coordinate $\vec x$ changes in the process of oscillation. Thus we set $\Delta y=\lambda$, and find
\begin{equation}
\Delta x\sim |\dot{\vec F}|\Delta y\sim |\dot{\vec F}|R\sqrt{n_1\over n_p}
\end{equation}
Note that
\begin{equation}
Q_p \sim Q_1 |\dot{\vec F}|^2
\end{equation}
which gives
\begin{equation}
\Delta x\sim \sqrt{Q_p\over Q_1}R\sqrt{n_1\over n_p}\sim \sqrt{\alpha'}
\label{ffone}
\end{equation}
where we have used (\ref{sixt}).
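The scalings behind (\ref{ffone}) can be made explicit. Using the standard expressions for the charge scales (these are the usual forms; cf. (\ref{sixt})), we have

```latex
\begin{equation}
Q_1={g^2\alpha'^3 n_1\over V}, \qquad Q_p={g^2\alpha'^4 n_p\over V R^2}
\quad\Longrightarrow\quad
\sqrt{Q_p\over Q_1}={\sqrt{\alpha'}\over R}\sqrt{n_p\over n_1},
\qquad
\Delta x\sim \sqrt{Q_p\over Q_1}\,R\sqrt{n_1\over n_p}\sim\sqrt{\alpha'}
\end{equation}
```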
For
\begin{equation}
|\vec x|\gg\Delta x
\end{equation}
we have
\begin{equation}
{1\over |\vec x-\vec F|^2}\approx {1\over |\vec x|^2}
\end{equation}
and the solution becomes approximately the naive geometry (\ref{naive}).
We see that the metric settles down to the naive metric outside a certain ball shaped region $|\vec x|>\sqrt{\alpha'}$.
Let us now ask an important question: What is the surface area of this ball?
First we compute the area in the 10-D string metric. Note that the metric will settle down to the naive form (\ref{naive}) quite rapidly as we go outside the region occupied by the vibrating string. The mean wavenumber is $k\sim \sqrt{n_1n_p}$, so there are $\sim \sqrt{n_1n_p}$ oscillations of the string. There is in general no correlation between the directions of oscillation in each wavelength, so the string makes $\sim \sqrt{n_1n_p}$ randomly oriented traverses in the ball that we are investigating. This causes a strong cancellation of the leading moments like the dipole, quadrupole, etc. The surviving moments will be of very high order, an order that will increase with $n_1, n_p$ and which is thus infinite in the classical limit of large charges.
We must therefore compute the area, in the naive metric (\ref{naive}), of the location $|\vec x| =\sqrt{\alpha'}$. Introduce polar coordinates on the 4-D transverse space
\begin{equation}
d\vec x\cdot d\vec x=dr^2+r^2 d\Omega_3^2
\end{equation}
At the location $r=\sqrt{\alpha'}$ we get from the angular $S^3$ an area
\begin{equation}
A_{S^3}\sim \alpha'^{3\over 2}
\label{tsix}
\end{equation}
From the $T^4$ we get an area
\begin{equation}
A_{T^4}\sim V
\end{equation}
From the $S^1$ we get a length
\begin{equation}
L_y\sim \sqrt{HK} R \sim \sqrt{Q_p\over Q_1} R
\end{equation}
Thus the area of the 8-D surface bounding the region occupied by the string is given, in the string metric, by
\begin{equation}
A^{S}\sim \sqrt{Q_p\over Q_1}RV{\alpha'}^{3\over 2}
\end{equation}
The area in Einstein metric will be
\begin{equation}
A^E=A^{S}e^{-2\phi}
\end{equation}
Note that the dilaton $\phi$ becomes very negative at the surface of interest
\begin{equation}
e^{-2\phi}=H^{-1}\approx {Q_1\over r^2}\sim {Q_1\over \alpha'}
\label{tfive}
\end{equation}
We thus find
\begin{equation}
A^E\sim \sqrt{Q_1Q_p}RV\alpha'^{1\over 2}\sim {g^2\alpha'^4}\sqrt{n_1n_p}
\end{equation}
where we have used (\ref{sixt}).
Now we observe that
\begin{equation}
{A^E\over 4G_{10}}\sim \sqrt{n_1n_p}\sim S_{micro}
\label{fftwo}
\end{equation}
This is very interesting, since it shows that the surface area of our `fuzzball' region satisfies a Bekenstein type relation
\cite{lm5}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6in]{nfig8.eps}
\caption{(a) The naive geometry of extremal NS1-NS5 \quad (b) the actual geometries; the area of the surface denoted by the dashed line reproduces the microscopic entropy.}
\label{fig2}
\end{center}
\end{figure}
\subsection{Nontriviality of the `size'}
One of the key ideas we are pursuing is the following. To make a big black hole we will need to put together many elementary quanta. What is the size of the resulting bound state? One possibility is that this size is always of order planck length $l_p$ or string length $l_s$. In this case we will not be able to avoid the traditional picture of the black hole. Since the horizon radius can be made arbitrarily large, the neighborhood of the horizon will be `empty space' and the matter making the black hole will sit in a small neighborhood of the singularity. But a second possibility is that the size of a bound state {\it increases} with the number of quanta in the bound state
\begin{equation}
{\cal R}\sim N^\alpha l_p
\label{ttwo}
\end{equation}
where ${\cal R}$ is the radius of the bound state, $N$ is some count of the number of quanta in the state, and the power $\alpha$ depends on what quanta are being bound together. It would be very interesting if in every case we find that
\begin{equation}
{\cal R}\sim R_H
\label{tthree}
\end{equation}
where $R_H$ is the radius of the horizon that we would find for the classical geometry which has the mass and charge carried by these $N$ quanta. For in that case we would find that we do not get the traditional black hole; rather we get a `fuzzball' which has a size of order the horizon size. Since we do not get a traditional horizon we do not have the usual computation of Hawking radiation which gives information loss. The different configurations of the fuzzball will correspond to the $e^{S_{Bek}}$ states expected from the Bekenstein entropy.
For the 1-charge system we saw that the Bekenstein entropy was $S_{Bek}=0$. We also find no nontrivial size for the bound state, so the size remains order $l_p$ or $l_s$, with the exact scale depending perhaps on the choice of probe. This is consistent with the picture we are seeking, but not a nontrivial illustration of the conjecture. But the situation was much more interesting when we moved to the 2-charge case. The microscopic entropy was $S_{micro}=2\pi\sqrt{2}\sqrt{n_1n_p}$. The size of the bound state was such that the area of the boundary satisfied a Bekenstein type relation. We had verified this relation using the 10-D metric, but we can also write it in terms of quantities in the dimensionally reduced 5-D theory
\begin{equation}
{A_5\over 4 G_5}\sim \sqrt{n_1n_p}
\label{tone}
\end{equation}
We define the 5-D planck length by
\begin{equation}
l_p^{(5)}\equiv G_5^{1\over 3}
\end{equation}
We also define the radius ${\cal R}$ associated with this area
\begin{equation}
{\cal R}=[{A_5\over 2\pi ^2}]^{1\over 3}
\end{equation}
The result (\ref{tone}) then translates to
\begin{equation}
{\cal R}\sim (n_1n_p)^{1\over 6} l^{(5)}_p
\label{tfour}
\end{equation}
Thus for the 2-charge system we find a manifestation of the conjectured relations (\ref{ttwo}), (\ref{tthree}).
While we see from (\ref{tfour}) that the fuzzball size ${\cal R}$ is much larger than the planck length $l_p$, we have not yet compared
${\cal R}$ to the string length $l_s$. From (\ref{tsix}) we see that
\begin{equation}
{\cal R}\sim \sqrt{\alpha'}\sim l_s
\label{tseven}
\end{equation}
One might think that this indicates that the fuzzball is really small in some sense; it has just the natural minimum radius set by string theory. But such is not the case. In the NS1-P system that we are looking at $e^\phi$ becomes very small at the fuzzball surface. Thus the string tension becomes very low in planck units; in other words, the string becomes very long and `floppy'. Thus we should interpret (\ref{tseven}) as telling us that string length is very large, not that ${\cal R}$ is very small.
This may sound like a matter of language rather than physics, but we can make it more precise by looking at the NS1-NS5 system which is obtained from NS1-P by dualities. It is a general result that the area of a surface $r=const$ (measured in planck units) does not change under dualities. To see this, note that the Einstein action in $D$ spacetime dimensions scales with the metric as follows
\begin{equation}
S\sim {1\over G_D}\int d^D x \sqrt{-g}R \sim {[g_{ab}]^{D-2\over 2}\over G_D}
\label{ssone}
\end{equation}
This action must remain unchanged under S,T dualities. The hypersurface at fixed $r$ is $D-2$ dimensional. Under dualities the $D-2$ dimensional area scales as $[g_{ab}]^{D-2\over 2}$. We have $G_D=(l_p^{(D)})^{D-2}$. From the invariance of (\ref{ssone}) we see that under dualities the scalings are such that the area of the $D-2$ dimensional fuzzball boundary measured in D-dimensional planck units remains invariant. This fact remains true whether we use a 10-D description or the dimensionally reduced 5-D one.
For the fuzzball boundary in the NS1-NS5 system we get
\begin{equation}
{A_{10}\over 4 G_{10}}={A_5\over 4 G_5}\sim \sqrt{n_1n_5}
\end{equation}
(We have re-labeled the charges as $n_1\rightarrow n_5, n_p\rightarrow n_1$ to give them their appropriate names in the NS1-NS5 geometry.)
Thus
\begin{equation}
{\cal R}_{\rm NS1-NS5}\sim (n_1n_5)^{1\over 6} l_p^{(5)}
\end{equation}
But this time the dilaton does not become strongly negative near the fuzzball boundary; rather it remains order unity
\begin{equation}
e^{-2\phi}\approx {Q_1\over Q_5}\sim {n_1g^2\alpha'^2\over n_5V}
\end{equation}
To find ${\cal R}$ in terms of string length and other moduli we write the 10-D area-entropy relation
\begin{equation}
{A_{10}\over 4G_{10}}\sim {{\cal R}_{\rm NS1-NS5}^3 V R\over g^2\alpha'^4} \sim \sqrt{n_1n_5}
\end{equation}
to get
\begin{equation}
{\cal R}_{\rm NS1-NS5}\sim [{g^2\alpha'^4\over VR}]^{1\over 3} (n_1n_5)^{1\over 6}
\end{equation}
We thus see that $l_p$ and $l_s$ are of the same order (their ratio does not depend on $n_1, n_5$) while ${\cal R}$ grows much larger than these lengths as the number of quanta in the bound state is increased.
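As a quick consistency check (schematic, dropping all numerical factors), the 5-D planck length follows from dimensional reduction of $G_{10}\sim g^2\alpha'^4$ on the compact $T^4\times S^1$ of volume $\sim VR$:

```latex
(l_p^{(5)})^3 \;\sim\; G_5 \;\sim\; \frac{G_{10}}{VR} \;\sim\; \frac{g^2\alpha'^4}{VR}
\quad\Longrightarrow\quad
\frac{l_p^{(5)}}{\sqrt{\alpha'}} \;\sim\; \Big(\frac{g^2\alpha'^{5/2}}{VR}\Big)^{1/3}
```

so the ratio of the two lengths is set by the moduli alone, with no dependence on $n_1, n_5$; this is also consistent with writing ${\cal R}_{\rm NS1-NS5}\sim (n_1n_5)^{1/6}l_p^{(5)}$ in the form $[g^2\alpha'^4/VR]^{1/3}(n_1n_5)^{1/6}$ found above.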
Thus we see that comparing the bound state size to the string length is not a duality invariant notion, while comparing it to the 5-D planck length is. In planck units the bound state size grows with charges. In string units, it also grows with charges in the NS1-NS5 duality frame, while it remains $l_s$ in the NS1-P duality frame. The latter fact can be traced to the very small value of $e^\phi$ at the fuzzball boundary, which makes the local string length very large in this case.
Sen \cite{sen} looked at the naive geometry of NS1-P, and noticed that the curvature of the string metric became string scale at $r\sim\sqrt{\alpha'}$, the same location that we have found in (\ref{ffone}) as the boundary of the fuzzball. He then argued that one should place a `stretched horizon' at this location, and then observed that the area of this horizon gave the Bekenstein relation (\ref{fftwo}). But if we look at the NS1-NS5 system that we obtain by dualities, then the curvature remains {\it bounded and small} as $r\rightarrow 0$. The naive geometry is locally $AdS_3\times S^3\times T^4$ for small $r$, and the curvature radii for the $AdS_3$ and the $S^3$ are $(Q'_1Q'_5)^{1/4}\gg\sqrt{\alpha'}$. So it does not appear that the criterion used by Sen can be applied in general to locate a `stretched horizon'. What we have seen on the other hand is that the naive geometry is not an accurate description for $r$ smaller than a certain value; the interior of this region is different for different states, and the boundary of this region satisfies a Bekenstein type relation (\ref{fftwo}). Further, we get the same relation (\ref{fftwo}) in all duality frames.
We have considered the compactification $T^4\times S^1$, but we could also have considered $K3\times S^1$. Suppose we further compactify a circle $\tilde S^1$, thus getting a 2-charge geometry in 3+1 noncompact dimensions. In this case if we include higher derivative corrections in the action then the naive geometry develops a horizon, and the area of this horizon correctly reproduces the microscopic entropy \cite{dab}. Order of magnitude estimates suggest that a similar effect will happen for the 4+1 dimensional case that we have been working with.
What are the {\it actual} geometries for, say, NS1-NS5 on $K3\times S^1$? Recall that in the NS1-P system that we started with the NS1 string had 8 possible directions for transverse vibration. We have considered the vibrations in the noncompact directions $x_i$; a similar analysis can be carried out for those in the compact directions $z_a$ \cite{lmm}. But after dualizing to NS1-NS5 we note that the solutions generated by the $x_i$ vibrations are constant on the compact $T^4$, and we can replace the $T^4$ by $K3$ while leaving the rest of the solution unchanged. (The vibrations along compact directions will be different in the $T^4$ and $K3$ cases, but since we are only looking for an estimate of the fuzzball size we ignore these states in the present discussion.) In \cite{gm2} it was shown that the higher derivative terms do not affect the `capped' nature of the `actual' geometries. Thus the K3 case is interesting in that it provides a microcosm of the entire black hole problem: There is a naive geometry that has a horizon, and we have `actual' geometries that have no horizons but that differ from each other inside a region of the size of the horizon. It would be interesting to understand the effect of higher derivative terms on the naive $T^4$ geometry.
\subsubsection{3-charge states}
The 2-charge hole has two length scales. The radius of the horizon scales as $ (n_1n_2)^{1/6}l_p\sim n_i^{1/ 3} l_p$ where
$n_1, n_2$ are the charges in any duality frame and $l_p$ is the 5-d planck length in that frame (this scaling gives the Bekenstein entropy from the horizon area). On the other hand the distortion of the metric due to charges reaches to a distance
$\sim Q_1^{1/ 2}, Q_2^{1/ 2}$ which scale as $\sim n_i^{1/ 2}$. Thus for large $n_i$ (the classical limit) the horizon radius is much larger than the planck length (which suggests that we have the physics of a black hole) but the `charge radius' is even larger. For the 3-charge hole the radius of the horizon and the `charge radius' both scale as
$n_i^{1\over 2}$, so we can take a classical limit where both scales are visible in the classical geometry. It would be satisfying to see the same `fuzzball' ideas emerge for 3-charge microstates, since we would then be describing the kind of black hole that has been long studied by general relativists. We do not know how to make all microstates of the 3-charge hole, but some selected families have been constructed \cite{3charge}. All these geometries agree with the `fuzzball' notion since they have no horizons, but instead are `capped' at small $r$. The constructed states are, however, not generic, in that they have a large amount of rotation. Several other lines of work suggest that similar results will hold for generic states; in particular, the study of supertubes and black rings shows that black objects are not as unique as initially believed, and suggests ways to construct 3-charge states \cite{3chargeother,3chargenew,bk1,benawarner,gimonh}.
\section{`Fractionation' and the size of bound states}
\label{frac}\setcounter{equation}{0}
We have seen above that the 2-charge bound state has a size that grows with the charges; further, the area of the boundary and the degeneracy of the state satisfy a Bekenstein type relation $A/4G\sim S$. What causes the bound state to `swell up' in this fashion, and become much bigger than a fixed length like planck length or string length? For 2-charge extremal states we constructed the states explicitly, and traced the size to the fact that momentum charge could only be carried on an NS1 by transverse vibrations of the NS1. But we would like to have a more general understanding of the underlying physics, so that we can extrapolate the ideas here to more general black holes.
In this section we will use somewhat heuristic arguments to arrive at a picture of bound states in string theory.
The discussion of this section is based on \cite{emission}. The key idea will be `fractionation', which we encountered already when studying momentum waves on a string. If we have a graviton on a circle of radius $R$ then its energy and momentum will be $n/R$ with $n$ an integer. If on the other hand we have a string wound $m$ times on this circle then binding the graviton to the string gives traveling waves, which have energy and momenta occurring in units $n/(mR)$. This `fractionation' of the momentum leads to a large degeneracy of possibilities when we distribute the momentum among the harmonics, so it leads to the large entropy of the system. Does it also help us understand the large size?
At first it may appear that this effect is a rather simple and mundane one; it would happen for any string in any theory, so it does not seem to have much to do with string theory or black holes. But in string theory we have dualities that map fractional momentum modes to `fractional branes', which will have fractional tension $T/m$. Thus for large $m$ they can be very `stretchable, floppy objects', and can thus extend far to give the bound state a large `size'. Since the tension goes down with the number of quanta in the bound state, we may hope that the size of the state might always keep up with the horizon radius, however big a black hole we may try to make. In that case we will not have the usual black holes, but `fuzzballs'.
We will now use the computations done in earlier sections to offer some justification for the above picture of bound states.
We will argue that fractional branes must exist, that they can be expected to stretch out to macroscopic distances, and that this distance will be of order the horizon radius for the 3-charge extremal black hole that we had constructed.
\subsection{Exciting the NS1-P extremal state}\label{exci}
We have seen that the three charges NS5,NS1,P can be permuted by dualities. Consider a 2-charge system, and let these charges be NS1,P. The extremal state is given by a NS1 wound $n_1$ times around the $S^1$ parametrized by $y$, with $n_p$ units of momentum running along the positive $y$ direction. The entropy of this extremal state is
\begin{equation}
S_{ex}=2\pi\sqrt{2}\sqrt{n_1n_p}
\end{equation}
Now consider adding some energy $\Delta E$ to this system so that it becomes slightly non-extremal. Where does this extra energy go? One might think that it goes towards creating extra vibrations of the NS1. Since we add no net momentum we must have
\begin{equation}
n_p\rightarrow n_p+{R \Delta E\over 2}, ~~~\bar n_p={R\Delta E\over 2}
\end{equation}
which implies an entropy
\begin{equation}
S_{NS1-P+P\bar P}=2\pi\sqrt{2}[\sqrt{n_1(n_p+{R \Delta E\over 2})}+\sqrt{n_1{R \Delta E\over 2}}]
\end{equation}
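Schematically, this is just the free-string entropy $S=2\pi\sqrt{2}\,(\sqrt{N_R}+\sqrt{N_L})$ applied to the two sectors, with the excitation levels set by the fractionated momenta on the multiwound string:

```latex
N_R \;=\; n_1\Big(n_p+\frac{R\Delta E}{2}\Big), \qquad
N_L \;=\; n_1\,\frac{R\Delta E}{2}
```

setting $\Delta E=0$ recovers the extremal entropy $S_{ex}=2\pi\sqrt{2}\sqrt{n_1n_p}$.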
The subscript on the entropy tells us that the system is NS1-P and that it has been assumed that the additional energy has gone to creating $P\bar P$ pairs:
\begin{equation}
{\rm NS1~P}~+~\Delta E~~\rightarrow ~~ {\rm NS1~P}~+~ (P~\bar P)
\label{pp}
\end{equation}
Let us also write down the expected emission rate from this near-extremal system. The $P\bar P$ vibrations can collide and leave the string as massless bulk quanta. Let us look at the bulk mode we considered before -- the component $h_{12}$ of the metric, where $z^1, z^2$ are directions in the $T^4$. In section(\ref{twol2}) we had computed the emission from vibrations on the effective string of the NS1-NS5 bound state, but the computation applies equally well to the NS1 string that we have here. The emission rate is given by \cite{emparan}
\begin{eqnarray}
\Gamma_{NS1-P+P\bar P}&=&\int {d^4 k\over (2\pi)^4}~(2\pi\omega G_5 L_T)~\rho_R({\omega\over 2})\rho_L({\omega\over 2}) \nonumber \\
&=&\int {d^4 k\over (2\pi)^4}~(4\pi^2\omega G_5 Rn_1)~{1\over (e^{\omega\over 2 T_R}-1)(e^{\omega\over 2 T_L}-1)}
\label{em1}
\end{eqnarray}
where we have set the length of the string to be $L_T=2\pi R n_1$. The temperatures $T_R, T_L$ are given by (\ref{temps}) but now with $f_B=f_F=8$ since there are 8 possible transverse vibrations of the string
\begin{equation}
T_R=\sqrt{{({n_p\over R}+{\Delta E\over 2})\over 2\pi^2 R n_1}}, ~~~T_L=\sqrt{{({\Delta E\over 2})\over 2\pi^2 R n_1}}
\end{equation}
\subsubsection{A puzzle}
Are these the correct near extremal entropy and emission rate? The emission process we have considered is
\begin{equation}
{\rm NS1+P+nonextremality} ~~\rightarrow ~~ h_{12}
\end{equation}
By S,T dualities we can map the NS1-P to NS5-NS1, while $h_{12}$ remains $h_{12}$, which gives the process
\begin{equation}
{\rm NS5+NS1+nonextremality} ~~\rightarrow ~~ h_{12}
\end{equation}
But this is a case that we have studied before, in section(\ref{twol2}). In that calculation we had a near-extremal NS1-NS5 system, and the energy above extremality went to creating $P\bar P$ pairs. The emission rate from (\ref{gmicro}) is
\begin{eqnarray}
\Gamma_{NS5-NS1+P\bar P}
&=&\int {d^4 k\over (2\pi)^4}~(4\pi^2\omega G_5 R'n'_1n'_5)~{1\over (e^{\omega\over 2 T'_R}-1)(e^{\omega\over 2 T'_L}-1)}
\label{em2}
\end{eqnarray}
where now the length of the effective string is $L_T=2\pi R' n'_1 n'_5$. The temperatures $T'_R, T'_L$ are given by (\ref{temps})
\begin{equation}
T'_R=\sqrt{{({\Delta E\over 2})\over \pi^2 R' n'_1n'_5}}, ~~~T'_L=\sqrt{{({\Delta E\over 2})\over \pi^2 R' n'_1n'_5}}
\end{equation}
We have denoted the radius of the $S^1$ by $R'$ since it will be related by dualities to the initial radius $R$. The charges are
also related by dualities
\begin{equation}
n'_1=n_p, ~~~n'_5=n_1
\end{equation}
But the radiation rate $\Gamma$ should be invariant under the dualities. This, however, is manifestly {\it not } so, if (\ref{em1}) gives the NS1-P result and (\ref{em2}) gives the NS1-NS5 result. In (\ref{em2}) we have equal $T_R, T_L$. In (\ref{em1}) we have {\it unequal} $T_R, T_L$, with the ratio diverging as we go closer to extremality $\Delta E\rightarrow 0$. Thus the functional forms of (\ref{em1}) and (\ref{em2}) are not the same.
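To make the mismatch explicit, we can take the ratio of the temperatures in the two computations (keeping only the dependence on $\Delta E$):

```latex
\frac{T_R}{T_L} \;=\; \sqrt{\frac{\frac{n_p}{R}+\frac{\Delta E}{2}}{\frac{\Delta E}{2}}}
\;\xrightarrow{\;\Delta E\to 0\;}\;\infty,
\qquad\text{while}\qquad
\frac{T'_R}{T'_L} \;=\; 1 \quad \text{for all } \Delta E
```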
In fact there is another more basic difference between the two emission rates. Recall that in the microscopic picture of the NS1-NS5 system the effective string can vibrate only in the plane of the NS5, so at leading order we can absorb a graviton like $h_{12}$ (which is a scalar in 4+1 D) but not a graviton like $h_{\mu\nu}$ where $\mu, \nu$ are two of the {\it noncompact} directions. $h_{\mu\nu}$ {\it is} absorbed at higher order in $\omega$, by exciting fermions in addition to the bosons on the effective string; these fermions carry spin under the rotation group of the noncompact directions. This fact accords well with the gravity picture, where the fact that $h_{\mu\nu}$ is a spin 2 particle in 4+1 D implies that it feels an angular momentum barrier and its cross section is suppressed by powers of $\omega$. Thus in both microscopic and gravity pictures, at leading order in $\omega$ only the $h_{ij}$ with $i,j$ in the $T^4$ are absorbed.
But in the NS1-P case all 8 transverse directions of the string are on the same footing, and thus in the microscopic computation we get the same cross section for all components $h_{MN}$, where $M,N$ run over the 8 spatial directions transverse to the $S^1$. This does {\it not} accord with the gravity computation for a 2-charge system.
So something has gone wrong, (\ref{em1}) and (\ref{em2}) are {\it not} computations of emission in duality related processes.
\subsubsection{Resolving the puzzle}
The reason for the mismatch is not hard to find. In the NS1-NS5 system that we had studied before the energy above extremality went to creating $P\bar P$ excitations
\begin{equation}
{\rm NS1~NS5}~+~\Delta E~~\rightarrow ~~{\rm NS1~NS5} ~+~(P~\bar P)
\label{ns1ns5}
\end{equation}
Permuting charges by duality we get the process
\begin{equation}
{\rm NS1~P}~+~\Delta E~~\rightarrow ~~{\rm NS1~P} ~+~(NS5~\overline{NS5})
\label{ns1p}
\end{equation}
which is {\it not} (\ref{pp}).
The model with excitations (\ref{ns1ns5}) gave us correctly the near extremal entropy and the correct Hawking radiation, so it is a model that we trust. We are then forced to accept the process (\ref{ns1p}) by duality. It may be hard to visualize how pairs of $NS5~\overline {NS5}$ can be created, or how they can interact to give rise to the emitted radiation, but since we have arrived at (\ref{ns1p}) by duality we will investigate the energy scales involved and see what we can learn. Adopting (\ref{ns1p}) certainly removes the puzzle we faced above. The number of $NS5$ and $\overline {NS5}$ will be equal in (\ref{ns1p}) (there is no net NS5 charge), so the left and right temperatures will be equal, as was the case in the near extremal NS1-NS5 case. We do not have a simple model like (\ref{inter}) to give the decay of $NS5~\overline{NS5}$ pairs to gravitons, but duality assures us that the correct spins will be emitted, and we do not have the obvious contradiction that we faced with the excitations (\ref{pp}) where all spins were emitted equally.
What then is the role of (\ref{pp})? It certainly looks like a correct description of excitations of the weakly coupled string. The mass of the lightest possible $P\bar P$ pair is
\begin{equation}
\Delta E^{NS1P}_{P\bar P}~=~2(E_p){1\over n_1}~=~2({1\over R}){1\over n_1}
\label{expp}
\end{equation}
where $E_p=1/R$ is the energy of a momentum mode, the $2$ arises because we must create these modes in pairs so as to add no net P charge, and the factor ${1\over n_1}$ is the `fractionation', which will be crucially important since we are interested in the limit where all charges will be large. Since we have one power of the charge in the denominator we will call this a case of `single fractionation'.
In the process (\ref{ns1ns5}) the energy of the lightest excitation pair is
\begin{equation}
\Delta E^{NS1NS5}_{P\bar P}~=~2(E_p){1\over n'_1n'_5}~=~2({1\over R'}){1\over n'_1n'_5}
\label{fullfrac}
\end{equation}
Since there are two charges in the denominator on the RHS we call this a case of `double fractionation'.
Since (\ref{ns1p}) is dual to (\ref{ns1ns5}) we will have `double fractionation' in (\ref{ns1p}) as well
\begin{equation}
\Delta E^{NS1P}_{NS5\overline{NS5}}~=~2(m_5){1\over n_1n_p}~=~2({VR \over g^2\alpha'^3}){1\over n_1n_p}
\label{exns}
\end{equation}
where $m_5$ is the mass of a NS5 brane.
Which excitation will be preferred, (\ref{expp}) or (\ref{exns})? We may expect that the lighter excitation will generate more entropy for the same total energy and so will be the preferred excitation; we will investigate the issue of entropy in more detail below. (\ref{exns}) has a factor $1/g^2$, so (\ref{expp}) is the lighter excitation for $g\rightarrow 0$, which makes sense because this is the free string limit and the excitations should be just vibrations of the string. But (\ref{exns}) has `double fractionation' while (\ref{expp}) has only single fractionation, so for given $g$ and sufficiently large charges (\ref{exns}) will be the lighter excitation.
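Equating (\ref{expp}) and (\ref{exns}) gives a rough estimate of where this crossover occurs (schematic, dropping numerical factors):

```latex
\frac{2}{R\,n_1} \;\sim\; \frac{2VR}{g^2\alpha'^3}\,\frac{1}{n_1 n_p}
\quad\Longrightarrow\quad
g^2 n_p \;\sim\; \frac{VR^2}{\alpha'^3}
```

so for fixed moduli, once $g^2 n_p$ exceeds $\sim VR^2/\alpha'^3$ the fractional NS5 pairs are the lighter excitation.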
We will now see that {\it whenever we are in the `strong gravity' domain (where we make a black hole) then (\ref{exns}) will be the lighter excitation}.
We are looking at the NS1-P system. The NS1 string has to carry $n_p$ units of momentum. If the momentum is in low harmonics then the amplitude of vibration will be large and the string will spread over a large region; if the momentum is in high harmonics then the string will have a small transverse spread. For the generic vibration mode the transverse spread
$\Delta x$
was found in (\ref{ffone}), and we will use this value for the transverse size of the string state. On the other hand the gravitational field of the string is described by the quantities $Q_1, Q_p$ which occur in the metric as $\approx {Q_1\over r^2}, {Q_p\over r^2}$. We will say that the string strongly feels its own gravity if
\begin{equation}
(\Delta x)^2 ~ \lesssim~Q_1, ~Q_p
\label{strong}
\end{equation}
so that the size of the string is smaller than the reach of the gravitational effects of both kinds of charges. Since $Q_1, Q_p$
may be unequal, we want $\Delta x$ to be smaller than both these length scales, a requirement that we can encode by writing
\begin{equation}
(\Delta x)^2 ~\lesssim~{Q_1 Q_p\over Q_1+Q_p}
\end{equation}
Using (\ref{ffone}), the values (\ref{sixt}) for $Q_1, Q_p$ and noting that the mass $M$ of the string is
\begin{equation}
M={n_1 R\over \alpha'}+{n_p\over R}
\end{equation}
we find that the condition that the string feels its own gravity (rather than be a free string in flat space) is
\begin{equation}
\alpha'~\lesssim~{g^2\alpha'^3\over VR} {n_1n_p\over M}
\label{condition}
\end{equation}
Now consider the ratio of the two kinds of excitations that we wished to compare. In (\ref{expp}) we had excited only the momentum modes, but to be more precise we note that by T-duality the winding modes can be excited as well. The excitation levels of the string are in fact given by the relation (\ref{massformula}) with $T={1\over 2\pi\alpha'}$. This gives
\begin{equation}
2M\delta M~=~{4\over \alpha'}\delta N_L~=~{4\over \alpha'}\delta N_R
\end{equation}
For the lowest excitation we set $\delta N_L=\delta N_R=1$ and find
\begin{equation}
\Delta E^{NS1P}_{\rm vibrations}~=~\delta M~\sim ~ {1\over \alpha' M}
\end{equation}
We then find
\begin{equation}
{\Delta E^{NS1P}_{NS5\overline{NS5}}\over \Delta E^{NS1P}_{\rm vibrations} }~\sim~{RVM\over g^2\alpha'^2 n_1n_p}
\end{equation}
Comparing with (\ref{condition}) we find that whenever we are in the strong gravity regime (\ref{strong}) we have
\begin{equation}
{\Delta E^{NS1P}_{NS5\overline{NS5}}\over \Delta E^{NS1P}_{\rm vibrations} } ~\lesssim 1
\end{equation}
so that the fractional NS5 brane pairs are lighter than vibrations of the string.
\subsubsection{Entropy and phase transitions}
Even though we have found that the fractional NS5 brane pairs may be lighter, this does not mean that they are the preferred excitation; we have to check that exciting them creates more entropy for the same energy. Entropy measures the log of the volume of phase space. So if we require
\begin{equation}
{\Delta S_{NS5\overline{NS5}}\over \Delta S_{vibrations} }~>~1
\end{equation}
then (after the system comes to equilibrium) we will find that the energy of excitation is carried by the fractional 5-brane pairs in preference to vibrations of the string.
To understand the role of entropy in this story consider first the NS1-NS5 bound state discussed in section(\ref{extr2}); the present system NS1-P is of course related to this by duality. In the NS1-NS5 system we had an effective string with total winding number $n_1n_5$. We could partition this effective string into `component strings', where the component strings have winding numbers satisfying (\ref{six}). The different ways to make these partitions, together with the different possible fermion zero modes on the component strings, gives the 2-charge entropy (\ref{sixfollow}).
But the generic component string in a generic 2-charge state has winding number $\sim \sqrt{n_1n_5}$ (we can see this by applying duality to (\ref{thir})), so it does {\it not} give the fractionation (\ref{fullfrac}) for $P\bar P$ excitations. It is true that if we want to get the maximal possible entropy from the
$P\bar P$ excitations then we should have the maximal possible fractionation (\ref{fullfrac}); this partitions each momentum mode into $n_1n_5$ pieces and the count of these partitions gives the entropy. But to get this maximal fractionation we have to join all the component strings into one single long component string with winding number $n_1n_5$, so we {\it lose} the `2-charge entropy' that came from the different possible partitions of the winding number. If the excitation energy is very small, giving very few $P\bar P$ pairs, then we would suffer a net loss of entropy by making the single long string; on the other hand, if there is a sufficient amount of $P\bar P$ excitation then it is {\it more} profitable to lose the `2-charge entropy', make one single long component string, and then {\it gain} the entropy coming from the maximally fractionated $P\bar P$ pairs. Since these pairs are the `third charge' in this problem, we call this latter entropy `3-charge entropy'. To summarize, once we cross a certain threshold of excitation energy the system prefers to rearrange its degrees of freedom so that the entropy is `3-charge' rather than `2-charge': the entropy obtained from different ways of partitioning the NS1-NS5 effective string is `sacrificed', and the single long string which results then generates a higher entropy by maximally partitioning the {\it third} charge P.
Let us now return to our NS1-P system, and apply the above principles. All we need do is permute under duality the charges involved in the above story. Let the excitation energy be $\Delta E$. If we put this into `2-charge' degrees of freedom then we have the entropy of a fundamental string
\begin{eqnarray}
S_{vibrations}&=&2\pi\sqrt{2}\sqrt{N_R+\delta N_R}+2\pi\sqrt{2}\sqrt{\delta N_L}\approx 2\pi\sqrt{2}\sqrt{N_R}+2\pi\sqrt{2}\sqrt{\delta N_L}\nonumber \\
&=&2\pi\sqrt{2}\sqrt{n_1n_p}+2\pi\sqrt{2}\sqrt{{M\alpha'\over 2}\Delta E}=S_{ex}+2\pi\sqrt{2}\sqrt{{M\alpha'\over 2}\Delta E}
\label{en1}
\end{eqnarray}
where $S_{ex}$ is the extremal 2-charge entropy. Here the extra energy $\Delta E$ has gone to just exciting the two charges NS1-P that we started with, so this is `2-charge entropy'.
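For reference, the level $\delta N_L$ appearing in (\ref{en1}) follows from the mass-shell relation $2M\delta M=(4/\alpha')\delta N_L$ used above, with $\delta M=\Delta E$:

```latex
\delta N_L \;=\; \frac{\alpha'}{2}\,M\,\delta M \;=\; \frac{M\alpha'}{2}\,\Delta E
```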
The `3-charge entropy' will arise if we excite the third charge in the story, the fractional NS5 pairs. We assume, using duality,
that we sacrifice the `2-charge entropy', getting the NS1-P charges into a specific state which then manages to fractionate the NS5 charges by the maximal amount (\ref{fullfrac}). This gives
\begin{equation}
S_{NS5\overline{NS5}}=2\pi\sqrt{n_1n_pn_5}+2\pi\sqrt{n_1n_p\bar n_5}=4\pi\sqrt{n_1n_p( {g^2\alpha'^3\over VR})({\Delta E\over 2})}
\label{en2}
\end{equation}
The entropy (\ref{en2}) will become comparable to (\ref{en1}) (in which the dominant contribution is $S_{ex}$) when
\begin{equation}
{g^2\alpha'^3\over VR} \Delta E\sim 1
\label{compare}
\end{equation}
Since the nonextremal energy shows up in the geometry (\ref{fullmetric}) through the parameter $r_0$, it is worth asking what value of $r_0$ the $\Delta E$ in (\ref{compare}) corresponds to. Using (\ref{energy}) with $\cosh 2\alpha\approx\sinh2\alpha, \cosh 2\sigma\approx \sinh 2\sigma, \gamma=0$ we find
\begin{equation}
\Delta E=E-E_{extremal}={RV\over 2g^2\alpha'^4}r_0^2
\end{equation}
Using (\ref{compare}) we find
\begin{equation}
r_0\sim\sqrt{\alpha'}
\end{equation}
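Spelling out the substitution (schematically, dropping factors of order unity):

```latex
\frac{g^2\alpha'^3}{VR}\,\Delta E \;\sim\;
\frac{g^2\alpha'^3}{VR}\cdot\frac{RV\,r_0^2}{2g^2\alpha'^4}
\;=\; \frac{r_0^2}{2\alpha'} \;\sim\; 1
\quad\Longrightarrow\quad r_0\;\sim\;\sqrt{\alpha'}
```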
But this is just the `horizon' radius of the extremal 2-charge `fuzzball' (\ref{ffone}).
\subsubsection{Summary}\label{summ}
Let us summarize the discussion of this subsection. Start with extremal NS1-P and add a small amount of energy. At very weak coupling we just get additional vibrations of the string. But if $g$ is large enough that the string strongly feels its own gravity, then there is an excitation that is lighter than the lowest vibration mode of the string: A `maximally fractionated' $NS5\overline {NS5}$ pair. But to make such a maximally fractionated pair possible the NS1-P string needs to be in a specific state, so we would need to lose `2-charge entropy'. Suppose we make the added energy $\Delta E$ large enough so that the horizon of the geometry (\ref{fullmetric}) becomes larger than the radius of the 2-charge extremal NS1-P `fuzzball'. Then `maximal fractionation' becomes entropically favored, and the excitation energy is carried by
$NS5\overline {NS5}$ pairs. Our computations of near extremal entropy and radiation (sections (\ref{them}) and (\ref{abso})) were compared with the properties of the metric (\ref{fullmetric}). In this metric the classical limit of large charges $n_i\rightarrow \infty$ has been taken. The 2-charge fuzzball radius $\Delta x$ grows as $n_i^{1/3}$. In (\ref{fullmetric}) we have the `charge radii' $\sqrt{Q_i}$ and the nonextremality parameter $r_0$, and the near extremal limit corresponds to $r_0\ll \sqrt{Q_i}$. But the scales $\sqrt{Q_i}$ grow as $n_i^{1/2}$. Even though we may choose $r_0/\sqrt{Q_i}\ll1$ for our near-extremal calculations, we will still have $r_0\gg \Delta x$ in the classical limit.
{\it Thus to understand the dynamics of a black hole like (\ref{fullmetric}) with NS1-P charges we need to think of fractional $NS5\overline {NS5} $ pairs.\footnote{The effect of strong self-gravity on a string was also discussed in \cite{hp} from a slightly different point of view; it was noted that the entropy of a string state agreed with the entropy of a black hole at the `correspondence point' where the horizon radius just equalled the string scale.
We have argued here that such correspondence points are points of phase transition where the microscopic degrees of freedom completely change character.}}
\subsection{Squeezing a black hole}\label{sque}
Consider the 3-charge NS1-NS5-P black hole, with small amount of nonextremality. We assume for convenience that $Q_p$ is smaller than $Q_1, Q_5$ so the extra energy goes mainly towards creating $P\bar P$ pairs
\begin{equation}
{\rm NS1~NS5~P}~+~\Delta E~~\rightarrow ~~{\rm NS1~NS5~P} ~+~(P~\bar P)
\label{ns1ns5ppp}
\end{equation}
The entropy is
\begin{equation}
S^{NS1NS5P}_{P\bar P}~=~2\pi\sqrt{n_1n_5(n_p+\Delta n_p)}+2\pi\sqrt{n_1n_5\bar n_p}\equiv S_{ex}+\Delta S
\label{en21}
\end{equation}
where the extra entropy $\Delta S$ is much smaller than the extremal entropy $S_{ex}=2\pi\sqrt{n_1n_5n_p}$ since we are assuming that we are close to extremality.
Now imagine that one of the noncompact spatial directions (say $x^4$) is compactified, on a circle of radius $\tilde R$. If this circle is large
(i.e. $\tilde R$ is much greater than all other length scales in the problem) then we have the near extremal 3-charge hole as above. But now imagine reducing $\tilde R$. If $\tilde R$ is small then we should be thinking of the {\it four} charge black hole in 3+1 noncompact directions. In section(\ref{the4}) we had seen that the four charge hole has charges NS1, NS5, P, KK where KK stands for Kaluza-Klein monopoles with $x^4$ as the nontrivially fibered circle. If we have just the charges NS1-NS5-KK and a small amount of nonextremality then we excite (fractional) $P\bar P$ pairs (eq. (\ref{4chargeen}))
\begin{equation}
{\rm NS1~NS5~KK}~+~\Delta E~~\rightarrow ~~{\rm NS1~NS5~KK} ~+~(P~\bar P)
\label{ns1ns5kk}
\end{equation}
By duality we can permute the four charges, so if we start with NS1-NS5-P charges and add a little nonextremality then we should excite (fractional) $KK\overline {KK}$ pairs
\begin{equation}
{\rm NS1~NS5~P}~+~\Delta E~~\rightarrow ~~{\rm NS1~NS5~P} ~+~(KK~\overline {KK})
\label{ns1ns5p}
\end{equation}
The mass of a KK monopole is
\begin{equation}
m_{KK}={RV\tilde R^2\over g^2\alpha'^4}
\label{mkk}
\end{equation}
so the entropy of this near extremal system will be
\begin{equation}
S^{NS1NS5P}_{KK\overline{KK}}~=~2\pi\sqrt{n_1n_5n_pn_{KK}}+2\pi\sqrt{n_1n_5n_p\bar n_{KK}}
=4\pi\sqrt{n_1n_5n_p ({\Delta E\over 2 m_{KK}})}
\label{en22}
\end{equation}
Which is the correct description of microscopic excitations, (\ref{ns1ns5ppp}) or (\ref{ns1ns5p})?
This, we expect, depends on which entropy is higher, (\ref{en21}) or (\ref{en22}). Noting that (\ref{en21}) is dominated by the contribution $S_{ex}$ we find that these entropies become comparable when \cite{emission}
\begin{equation}
\Delta E~\sim~ m_{KK}~\sim~ {RV\tilde R^2\over g^2\alpha'^4}
\label{energy2}
\end{equation}
For larger energies above extremality, or equivalently, when $\tilde R$ is smaller than the value given by the above relation, we will have the excitations (\ref{ns1ns5p}) in preference to the excitations (\ref{ns1ns5ppp}).
So again we see a `phase transition' where the microscopic degrees of freedom change character when a parameter ($\tilde R$) is moved past a critical point.
It is interesting to ask what values of the nonextremality parameter $r_0$ the critical energy (\ref{energy2}) corresponds to.
In the metric of the 3+1 dimensional hole (\ref{4chargemetric}) the energy above extremality is
given by (using (\ref{energy4}))
\begin{equation}
\Delta E\approx {RV\tilde R r_0\over g^2\alpha'^4}
\end{equation}
Equating this to (\ref{energy2}) we find
\begin{equation}
r_0\sim \tilde R
\label{length1}
\end{equation}
Note that $\tilde R$ need not be small (i.e. it can be much bigger than planck length or string length) so we see that
our fractional monopole pairs can stretch out to macroscopic distances.
In the 4+1 dimensional metric all charges are nonzero so it takes a little more effort to compute the energy above extremality corresponding to (\ref{energy2}) (we hold the $\hat n_i$ fixed and change $r_0$). We find
\begin{equation}
\Delta E\approx {RVr_0^4\over 8g^2\alpha'^4}\, ( Q_1^{-1}+Q_5^{-1}+Q_p^{-1})
\end{equation}
Equating this to (\ref{energy2}) we find
\begin{equation}
r_0^4\sim {\tilde R^2\over Q_1^{-1}+Q_5^{-1}+Q_p^{-1}}
\label{length2}
\end{equation}
The physics here seems similar to that in the Gregory-Laflamme transition \cite{gl}. In the Gregory-Laflamme transition we find that as we increase the size of a circle in the spacetime we go from a black brane wrapping this circle to a black hole that does not fill the compact circle. In our microscopic computation above we found that if we increase the size of the circle $\tilde S^1$ then we go from an object which has KK monopole-pairs wrapping the circle to an object that does not make use of this circle in distributing its energy. It would be interesting to see if the length scales (\ref{length1}),(\ref{length2}) can be understood as a case of the Gregory-Laflamme type \cite{gl}, by analyzing the microscopic entropy in terms of the energy and `pressures' created by the brane bound state \cite{obers}.
\subsection{Estimating the size of the NS1-NS5-P bound state}\label{esti}
Let us put together all that we have learnt to make an ambitious attempt: We will try to obtain a crude estimate of the `size' of the 3-charge extremal bound state. We have already constructed explicitly the 2-charge states, and found that their radius $R_s$ is not `small', i.e. $R_s$ is not a fixed scale like the Planck length or string length, but a length that grows with the charges in the bound state in such a way that the surface area of a sphere at $r\sim R_s$ gives the Bekenstein entropy of the 2-charge state. We do not know how to make the generic 3-charge state, but we will try to use the concepts of fractionation and the origin of black hole entropy to get some idea of the dynamics that could govern the `size' of the 3-charge state.
We proceed in the following steps:
\medskip
(a) Start with the compactification $M_{9,1}\rightarrow M_{4,1}\times T^4\times S^1$, and construct the NS1-NS5-P extremal bound state as before. Let this bound state be placed with its center at $r=0$. Bring a test quantum to a location $|\vec r|=R_d$ near the bound state. At what value of $R_d$ will the test quantum directly `feel' the bound state? If $R_d$ is very large then the test quantum will only feel the usual long range fields like the gravitational field produced by the bound state. But as we reduce $R_d$ there might be a critical value at which brane-antibrane pairs `stretch out and touch' the test quantum. If such a phenomenon were to occur, we would say that $R_d$ is a measure of the physical size of the 3-charge bound state.
\medskip
(b) If brane-antibrane pairs have to be created to stretch out and affect the test quantum then we need energy to create such pairs. Where can this energy come from? We have deliberately made things as hard for ourselves as possible, by taking an extremal NS1-NS5-P bound state; all the energy of this state is accounted for by the charges it carries, so there is nothing extra that can be used to produce the pairs. But the fact that we bring the test quantum to within a distance $R_d$ means that we have an energy $\Delta E>{1/ R_d}$ above extremality in the combined system of bound state plus test quantum. (For the test quantum $E=\sqrt{m^2+p^2}>p\sim 1/R_d$.) We set
\begin{equation}
\Delta E={1\over R_d}
\label{en3}
\end{equation}
Note that we are hoping to get a macroscopic value for the length $R_d$ at the end of the computation, so this energy is very small.
\medskip
(c) We do not have a good picture of what it means for branes `to stretch out and touch' the test quantum. So let us modify the problem somewhat. We put the bound state in a periodic box of length $R_d$, and add the energy (\ref{en3}) to the system. If the resulting excitations do not feel the size of the box then we would say that the `size' of the bound state is much smaller than the box size $R_d$. On the other hand if the excitation generates branes that wrap around the compact direction of the box then we would say that the bound state size is {\it larger} than $R_d$.
At this stage we have now brought the problem to a form that can be tackled by the tools that we have already developed. We have a compactification $M_{9,1}\rightarrow M_{3,1}\times T^4\times S^1\times \tilde S^1$, with $\tilde S^1$ having radius $R_d$. We have a state with three large charges NS1-NS5-P, and a small amount of nonextremality. From (\ref{ns1ns5p}) we see that we can use the nonextremal energy to create fractional $KK\overline{KK}$ pairs. These monopoles have the direction $\tilde S^1$ as the nontrivially fibered direction, so the produced pairs `wrap' around the circle of size $R_d$ and are thus certainly sensitive to the `box size'. The mass of a monopole grows with $R_d$ as $R_d^2$, so one might think that such monopole pairs are hard to produce with the small energy (\ref{en3}), but fractionation can make these pairs light, as we will soon see.
\medskip
(d) One last step before we start computing. We want not only that the fractional $KK\overline{KK}$ pairs {\it can} be generated, but also that it be likely that they {\it are} generated. Concretely, we demand that using the energy (\ref{en3}) to generate the pairs should lead to an entropy increase $\Delta S\ge 1$. Since entropy measures the volume of phase space, this would imply that
\begin{equation}
{{\rm Volume~ of ~phase ~space~ with } ~KK\overline{KK}~{\rm pairs ~created}\over {\rm Volume~ of ~phase~ space ~{\it without}~ pairs}}~\ge ~e
\end{equation}
so it is more likely than not that we create the pairs. Thus we impose the requirement
\begin{equation}
\Delta S=1
\end{equation}
\medskip
(e) Since the key effect here will be fractionation, let us pause for a moment to consider the nature of fractionation that we expect. We have seen that the extremal 3-charge system has an entropy $S_{ex}=2\pi\sqrt{n_1n_5n_p}$.
This entropy comes because we can distribute the fractionated components of this bound state (fractional momentum modes in the description that we had used) in many ways, getting the entropy $S_{ex}$. On the other hand, if we {\it sacrifice} this entropy to make a special state of the NS1-NS5-P system, then we can get the new excitation -- KK monopoles -- to be `maximally fractionated' in units $1/n_1n_5n_p$. In the computation of section \ref{sque} the `phase transition' occurred when it was advantageous to give up the entropy $S_{ex}$ and to gain instead the entropy of fractional $KK\overline{KK}$ pairs.
In the present case we are making a small perturbation to the 3-charge NS1-NS5-P extremal state, so we do not expect that
there will be a complete rearrangement of the 3-charge state. Rather, we expect that a small fraction $\mu$ of the excitations will `bind together' to make a special configuration, and the rest will exhibit the entropy of the 3-charge state. Specifically
\begin{eqnarray}
S&=&S_{3-charge}~+~S_{4-charge}\nonumber \\
&=&[\,2\pi\sqrt{n_1n_5((1-\mu)n_p)}\,]+[\,2\pi\sqrt{n_1n_5 (\mu n_p)n_{KK}}+2\pi\sqrt{n_1n_5 (\mu n_p)\bar n_{KK}}\,]
\label{fullen}
\end{eqnarray}
where
\begin{equation}
n_{KK}+\bar n_{KK}=2n_{KK}={\Delta E\over m_{KK}}
\label{nkk2}
\end{equation}
When the system comes to equilibrium the parameter $\mu$ will get set to the value that gives the maximum entropy. Extremising (\ref{fullen}) with respect to $\mu$ we find that the optimal value of $\mu$ is given through
\begin{equation}
{\mu\over 1-\mu}=4n_{KK}~\rightarrow~\mu\approx 4n_{KK}
\label{mukk}
\end{equation}
where in writing the approximation we have noted that we expect $\mu\ll 1$ at the end of the computation.
(Note that $n_{KK}$ will be fractional, so it can be much less than unity.) Substituting this value of $\mu$ in (\ref{fullen}) we find
\begin{equation}
S\approx 2\pi\sqrt{n_1n_5n_p}+4\pi\sqrt{n_1n_5n_p}~n_{KK}\equiv S_{ex}+\Delta S
\end{equation}
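For completeness, the extremisation leading to (\ref{mukk}) can be spelled out (writing $N\equiv n_1n_5n_p$ and using $\bar n_{KK}=n_{KK}$):
\begin{eqnarray}
S(\mu)&=&2\pi\sqrt{N}\sqrt{1-\mu}+4\pi\sqrt{Nn_{KK}}\sqrt{\mu}\nonumber\\
{dS\over d\mu}&=&2\pi\sqrt{N}\,\Big[-{1\over 2\sqrt{1-\mu}}+\sqrt{{n_{KK}\over \mu}}\,\Big]=0~~\rightarrow~~{\mu\over 1-\mu}=4n_{KK}\nonumber
\end{eqnarray}
Expanding the first term to first order in $\mu$ gives $S\approx 2\pi\sqrt{N}-\pi\sqrt{N}\,\mu+4\pi\sqrt{Nn_{KK}\mu}$, and substituting $\mu\approx 4n_{KK}$ reproduces $\Delta S\approx 4\pi\sqrt{N}\,n_{KK}$ above.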
Setting $\Delta S=1$ gives
\begin{equation}
\mu={1\over \pi\sqrt{n_1n_5n_p}}
\end{equation}
Relating $\mu$ to $n_{KK}$ (eq. (\ref{mukk})), $n_{KK}$ to $\Delta E/2m_{KK}$ (eq. (\ref{nkk2})), and then using (\ref{en3}),(\ref{mkk}) we find
\begin{equation}
{1\over R_d}{g^2 \alpha'^4\over R_d^2 RV}\sim {1\over \sqrt{n_1n_5n_p}}
\end{equation}
which gives \cite{emission}
\begin{equation}
R_d\sim \left[{g^2\alpha'^4\sqrt{n_1n_5n_p}\over RV}\right]^{1\over 3}
\end{equation}
as the length scale to which the fractional KK monopoles extend.
But the Schwarzschild radius of the 3-charge extremal hole is
\begin{equation}
R_s = \left[{g^2\alpha'^4\sqrt{n_1n_5n_p}\over RV}\right]^{1\over 3}
\end{equation}
(This is easily verified by using $A/4G_5=S=2\pi\sqrt{n_1n_5n_p}$.) The agreement of the numerical
coefficient is just fortuitous, but it is interesting that even in our very crude estimate all the other parameters fall into their correct place and tell us that the size of the 3-charge extremal bound state is not small; i.e. not a fixed scale like the Planck or string length, but something that grows with the charges, and in fact is of order the horizon radius. This tells us that nonperturbative string theory effects can correct the geometry in the entire interior of the hole, and not just within Planck distance of the singularity.
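Explicitly, the horizon of the 4+1 dimensional hole is a round 3-sphere of area $A=2\pi^2R_s^3$, so (ignoring numerical factors throughout)
\begin{equation}
S={A\over 4G_5}={2\pi^2 R_s^3\over 4G_5}=2\pi\sqrt{n_1n_5n_p}~~\rightarrow~~R_s^3\sim G_5\sqrt{n_1n_5n_p}
\end{equation}
and substituting the five-dimensional Newton constant $G_5$ obtained by compactifying on $T^4\times S^1$ reproduces the parametric dependence on $g,\alpha',R,V$ quoted above.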
\subsection{Summary}
The arguments of this section have been rough, unlike the concrete computations that we presented in earlier sections. But note that these arguments used little more than the results that we had found in our more rigorous calculations. In the process we have uncovered phenomena like fractionation, phase transitions, low tension fractional branes and large-sized fuzzballs. Note that even for the 2-charge extremal NS1-P system we can attribute the nontrivial size to `fractionation'. Suppose we had a string with winding number $n_1$. If we put on this string a wave with wavenumber an integral multiple of $1/R$ then all strands of the string move together, and there is no significant transverse size. But if we put the energy in a mode with fractional wavenumber, say $k=1/(n_1R)$, then we have seen that the strands separate and spread out over a large transverse region.
All this suggests that we must use radically new concepts to study bound states of large numbers of quanta in string theory. Thus string theory can offer a perspective on the information paradox that we could not have obtained by using our semiclassical notions of gravity.
\section{Discussion}
\label{disc}\setcounter{equation}{0}
Let us summarize the ideas that we have developed. Gedanken experiments tell us that a black hole must have entropy $S_{bek}=A/4G$. In string theory we can make extremal black holes and count their states from the properties of strings/branes; the BPS nature of these systems enables this count to be made in a weak coupling domain and then continued without change to the coupling where we expect black holes. Interestingly, such computations work also for near extremal holes, presumably because the excitations are sufficiently `dilute' on the large system of branes that the interactions between excitations can be ignored to leading order. From the interaction of these excitations with the bulk modes we also found that the emission from near extremal bound states agreed with the low energy Hawking radiation expected from the corresponding holes.
All this did not resolve the information paradox; for that we need an understanding of the {\it structure} of black hole states, not just their count or dynamics deduced from a weak coupling domain. For 2-charge extremal states we found that the bound states were not small, but were `fuzzballs' of `horizon size'. We wrote the family (\ref{ttsix}) of classical geometries to describe these states, but in doing so we assumed that the string in the NS1-P picture was well described by a classical profile. In general we have an energy eigenstate of the string in a vibration mode, and there will be quantum fluctuations about the mean. These fluctuations do not change the size of the region over which the string spreads; they just make the object `fuzzy' so we term it a `fuzzball'. For more details on quantum fluctuations and the classical limit see \cite{fuzz}. There are additional corrections which in the NS1-NS5 picture come from D-string winding modes in the $y$ direction; in \cite{gm2} it was shown that these produce bounded corrections and so do not change the `fuzzball' picture of the microstates.
We have argued that a basic idea is `fractionation': The property of objects in string theory that when they make supersymmetric bound states they break up into smaller `bits'. It is this breakup that leads to the large entropy
of string states (which agrees with the Bekenstein entropy). We have suggested that this breakup also leads to the large `size' of bound states that gives `fuzzballs' instead of smooth geometries with horizons. We get low tension `fractional branes' which can easily stretch to large distances, and a rough analysis suggests that this distance will be of order the horizon length.
If such is the case then we change the picture of the black hole interior: Different microstates will be different states of the `fuzzball' and radiation leaving from the surface of the fuzzball can see the information encoded in the state just like the photons leaving a burning piece of coal see the state of the coal and so carry its information. This would therefore resolve the information paradox.
What is the significance of the `naive' geometry (like (\ref{one})) that we can write down as a solution of
the classical field equations? It is tempting to think that this could be some kind of an `average' over the microstate
geometries. But as in any quantum mechanical system our fuzzball must be in any {\it one} of its possible states, so what can such an average mean? One situation where {\it all} the states will be involved is where we put the system on a Euclidean time circle of length $\beta$; this gives a thermal partition function in which all microstates run around the time circle.
This partition function may be represented by a Euclidean path integral. Suppose that this path integral is dominated by a saddle point configuration. Based on what we know about Euclidean black holes we expect that this will be the `cigar shaped' Euclidean solution which ends smoothly at the horizon and has no `interior'.
We can now continue this geometry to the Lorentzian section, getting the geometry (\ref{one}). This geometry has a horizon, but it does not have any direct significance as a geometry in our approach; the actual Lorentzian geometries were the `fuzzballs' which had no horizons. One may, however, be able to use the geometry (\ref{one}) to compute Green's functions that give ensemble averages over the black hole states \cite{shenker}.
It is plausible that the ideas arising in black holes extend to other situations where we have a large number of particles interacting strongly, as for example in the early Universe. If we get long distance quantum effects in the cosmological setting then it may affect our understanding of the initial wavefunction of the Universe, the horizon problem and the way we compute the vacuum energy density.
As mentioned in the introduction, we have sought to review a few selected computations in string theory to bring out a certain perspective on black holes. The presented computations are a small fraction of the research results in the area, and we hope that they will encourage the reader to look deeper at the field. In particular recent progress has been quite rapid. There is a general way to construct all BPS 3-charge geometries, though this set includes bound states as well as unbound ones
\cite{benawarner}. For some specific families of bound states dual geometries have been constructed \cite{3charge}. Larger families of such geometries were found in \cite{3chargenew}. 3-charge supertubes offer a microscopic approach to the problem, and geometries for these have been suggested \cite{bk1}. The discovery of black rings has provided another new aspect of the problem of `hair' for black holes; one finds that black hole metrics are not as unique
as originally believed \cite{ring}. There is a way to add monopole charges to BPS states, which gives possible microstates for 4-dimensional systems \cite{monopole}. Work with generic members of the ensembles of microstates shows that they might exhibit black hole like properties \cite{thermal2}. Some nonextremal state geometries have been constructed as well \cite{nonex1}. This progress suggests that there is a vast body of knowledge remaining to be uncovered in this area.
\section*{Acknowledgements}
I would like to thank Sumit Das, Stefano Giusto, Oleg Lunin, Ashish Saxena and Yogesh Srivastava who have been collaborators in various aspects of the work discussed here. I also thank Borun Chowdhury, Stefano Giusto and Yogesh Srivastava for helping to correct errors in the manuscript. This work is
supported in part by DOE grant DE-FG02-91ER-40690.
\section{Euclidean from Lorentzian}\label{introduction}
One often studies a Poincar\'e-invariant quantum field theory defined on Minkowski spacetime \emph{via} Wick rotation to a Euclidean-invariant statistical field theory defined on Euclidean space. Within the path integral formulation, the Wick rotation transforms a Lorentzian path integral, which involves complex probability amplitudes for each Lorentzian field configuration, into a statistical partition function, which involves real probabilities for each Euclidean field configuration. Absent the complications of complex probability amplitudes for Lorentzian field configurations, calculations typically prove considerably more tractable.
Provided that the statistical field theory satisfies the Osterwalder-Schrader axioms, one can recover the Lorentzian theory from the Euclidean theory through the Osterwalder-Schrader reconstruction theorem \cite{KO&RS1,KO&RS2}. One thus defines the Lorentzian theory in terms of the Euclidean theory.
The tempting prospect that a quantum theory of gravity could be similarly defined led to the development of various approaches taking as their starting point the partition function
\begin{equation}\label{formalEpathintegral}
\mathscr{Z}[\mathbf{\gamma}]=\int_{\mathbf{g}|_{\partial\mathcal{M}}=\mathbf{\gamma}}\mathrm{d}\mu(\mathbf{g})\,e^{-S_{\mathrm{cl}}^{(\mathrm{E})}[\mathbf{g}]/\hbar}
\end{equation}
over Euclidean geometries specified by a metric tensor $\mathbf{g}$.
One should, however, be skeptical of these approaches' applicability to gravity: a typical spacetime, even satisfying the Einstein equations, does not permit a global Wick rotation from Lorentzian to Euclidean signature.
Nevertheless, such approaches---collectively called Euclidean quantum gravity---work not only sensibly, but even successfully in sundry circumstances \cite{GWG&SWH2}. We briefly mention two notable examples. First, one can derive the thermodynamic behavior of black holes from the partition function \eqref{formalEpathintegral}. Gibbons and Hawking computed the black hole entropy \cite{GWG&SWH}, and Hartle and Hawking computed the black hole radiance \cite{JBH&SWH1}. Second, Hartle and Hawking developed a quantum theory of gravity in the minisuperspace truncation from the partition function \eqref{formalEpathintegral}, their so-called no-boundary proposal \cite{JBH&SWH}. These authors defined a wave function for the universe having a remarkable property:
Lorentzian geometries dominate the partition function \eqref{formalEpathintegral} owing to
the necessity of deforming an integration contour into the complex plane.
Consequently, there is no need for an Osterwalder-Schrader reconstruction: the partition function \eqref{formalEpathintegral} directly defines a Lorentzian quantum theory of gravity. Initial attempts to construct a complete nonperturbative quantum theory of gravity on the basis of the partition function \eqref{formalEpathintegral} did not fare so well \cite{RL}. Two approaches, quantum Regge calculus and Euclidean dynamical triangulations, both grounded upon lattice regularization of the partition function \eqref{formalEpathintegral}, were extensively studied \cite{RL}. Neither of the quantum theories of gravity so defined exhibited a sufficiently rich phase structure to support a continuum limit.\footnote{See \cite{HWH}, however.} More recently, an approach based on exact renormalization group analysis of the partition function \eqref{formalEpathintegral} has shown promise \cite{MR&FS}.
Causal dynamical triangulations emerged from the failures of quantum Regge calculus and Euclidean dynamical triangulations \cite{JA&RL}. This newer approach takes as its starting point the Lorentzian path integral
\begin{equation}
\mathscr{A}[\gamma]=\int_{\mathbf{g}|_{\partial\mathcal{M}}=\gamma}\mathrm{d}\mu(\mathbf{g})\,e^{iS_{\mathrm{cl}}[\mathbf{g}]/\hbar}
\end{equation}
over Lorentzian geometries specified by a metric tensor $\mathbf{g}$. One chooses to restrict the path integration to appropriately causal Lorentzian geometries, namely, those admitting a global foliation by spacelike hypersurfaces all of fixed topology.
One then introduces a lattice regularization---causal triangulations---of these causal Lorentzian geometries.
As Ambj\o rn, Jurkiewicz, and Loll demonstrated, this restriction allows for a well-defined Wick rotation of any Lorentzian causal triangulation to a corresponding Euclidean causal triangulation \cite{JA&JJ&RL1,JA&JJ&RL2}. This Wick rotation enables the use of Monte Carlo methods to study the resulting partition function.
Having implemented this Wick rotation, one could have wondered if the resulting partition function behaves conventionally, such as that of a
field theory satisfying the Osterwalder-Schrader axioms, or unconventionally, such as that of the Hartle-Hawking no-boundary proposal. On the basis of Monte Carlo simulations of certain transition amplitudes within the causal dynamical triangulations of $(2+1)$-dimensional Einstein gravity, Cooperman and Miller conjectured that its partition function
behaves unconventionally \cite{JHC&JMM}. Specifically, these authors suggested that geometries resembling Lorentzian de Sitter spacetime---not, as previously thought, Euclidean de Sitter space---on sufficiently large scales dominate this partition function. Independently, Ambj\o rn \emph{et al} argued for a signature change transition within the causal dynamical triangulations of $(3+1)$-dimensional Einstein gravity \cite{JA&DNC&JGS&JJ}. We now argue, contrary to the conjecture of Cooperman and Miller, that the partition function of causal dynamical triangulations behaves conventionally. Specifically, by reinterpreting these Monte Carlo simulations, we maintain that geometries resembling Euclidean de Sitter space on sufficiently large scales indeed dominate this partition function. In the process of making this argument, we provide further evidence that the partition function of causal dynamical triangulations behaves correctly in its semiclassical limit.
We introduce the formalism of causal dynamical triangulations, specializing to the case of $2+1$ dimensions for spherical topology with initial and final spacelike boundaries, in section \ref{CDT}. After recalling the relevant results from \cite{JHC&JMM} and presenting new related results, we restate the conjecture of Cooperman and Miller in section \ref{evidenceconjecture}. We present a first analysis of all of these results in section \ref{analysissupport}, which offers evidence in support of their conjecture. We present a more careful analysis in section \ref{argumentrefutation}, which leads to our argument refuting the conjecture of Cooperman and Miller. We conclude in section \ref{conclusion} by echoing Cooperman's call for the proof of an Osterwalder-Schrader-type theorem for causal dynamical triangulations \cite{JHC2}. Four appendices supplement aspects of sections \ref{CDT}, \ref{analysissupport}, and \ref{argumentrefutation}.
\section{Causal dynamical triangulations}\label{CDT}
Within a path integral quantization of a classical metric theory of gravity, one formally defines a transition amplitude as
\begin{equation}\label{gravitypathintegral}
\mathscr{A}[\gamma]=\int_{\mathbf{g}|_{\partial\mathcal{M}}=\gamma}\mathrm{d}\mu(\mathbf{g})\,e^{iS_{\mathrm{cl}}[\mathbf{g}]/\hbar}.
\end{equation}
The right hand side of equation \eqref{gravitypathintegral} encodes the following instructions for computing the transition amplitude $\mathscr{A}[\gamma]$: integrate over all spacetime metric tensors $\mathbf{g}$ that induce the metric tensor $\gamma$ on the boundary $\partial\mathcal{M}$ of the spacetime manifold $\mathcal{M}$,
weighting each metric tensor $\mathbf{g}$ by the product of the measure $\mathrm{d}\mu(\mathbf{g})$ and the exponential $e^{iS_{\mathrm{cl}}[\mathbf{g}]/\hbar}$. $S_{\mathrm{cl}}[\mathbf{g}]$ is the action specifying the classical metric theory of gravity, including boundary terms enforcing the condition $\mathbf{g}|_{\partial\mathcal{M}}=\gamma$.
Within the causal dynamical triangulations approach to such a quantization,\footnote{See \cite{JA&JJ&RL1,JA&JJ&RL2,JA&RL} for the original formulation and \cite{JA&AG&JJ&RL3} for a comprehensive review.} one restricts the path integration in equation \eqref{gravitypathintegral} to so-called causal spacetime metric tensors $\mathbf{g}_{c}$, those admitting a global foliation by spacelike hypersurfaces all of a fixed spatial topology $\Sigma$. The manifold $\mathcal{M}$ therefore has the topology $\Sigma\times\mathsf{I}$, the direct product of $\Sigma$ and a real interval $\mathsf{I}$. By invoking this restriction, one considers transition amplitudes $\mathscr{A}_{\Sigma}[\gamma]$ formally defined as
\begin{equation}\label{causalgravitypathintegral}
\mathscr{A}_{\Sigma}[\gamma]=\int_{\substack{\mathcal{M}\cong\Sigma\times\mathsf{I} \\ \mathbf{g}_{c}|_{\partial\mathcal{M}}=\gamma}}\mathrm{d}\mu(\mathbf{g}_{c})\,e^{iS_{\mathrm{cl}}[\mathbf{g}_{c}]/\hbar}.
\end{equation}
To regularize the transition amplitudes $\mathscr{A}_{\Sigma}[\gamma]$,
one replaces the path integration over all causal metric tensors $\mathbf{g}_{c}$ in equation \eqref{causalgravitypathintegral} with a path summation over all causal triangulations $\mathcal{T}_{c}$. A causal triangulation $\mathcal{T}_{c}$ is a piecewise-Minkowski simplicial manifold possessing a global foliation by spacelike hypersurfaces all of the topology $\Sigma$. One constructs a causal triangulation $\mathcal{T}_{c}$ by appropriately gluing together $N_{D}$ causal $D$-simplices, each a simplicial piece of $D$-dimensional Minkowski spacetime with spacelike edges of squared invariant length $a^{2}$ and timelike edges of squared invariant length $-\alpha a^{2}$ for positive constant $\alpha$. $a$ is the lattice spacing. We depict the three types of causal $3$-simplices (tetrahedra) in figure \ref{3-simplices}.
\begin{figure}
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.25)
\put(0,0){\includegraphics[clip,scale=0.5,trim={0cm 1.7cm 0cm 0cm}]{all_simplices_version_3.png}}
\put(0.2,-0.02){\bf (a)}
\put(0.475,0.01){\bf (b)}
\put(0.7,0.035){\bf (c)}
\put(0.77,0.09){\large $\tau = 0$}
\put(0.79,0.233){\large $\tau = 1$}
\end{picture}
\caption{Causal $3$-simplices employed in $(2+1)$-dimensional causal dynamical triangulations extending from time slice $\tau=0$ to time slice $\tau=1$. (a) $(3,1)$ $3$-simplex, (b) $(2,2)$ $3$-simplex, (c) $(1,3)$ $3$-simplex. We have adapted this figure from \cite{JHC&JMM}.}
\label{3-simplices}
\end{figure}
Causal $D$-simplices necessarily assemble into a manifold of topology $\Sigma\times\mathsf{I}$, and their skeleton distinguishes a foliation of this manifold into spacelike hypersurfaces. We refer to the leaves of this distinguished foliation as a causal triangulation's time slices, and we enumerate a causal triangulation's $T$ time slices with a discrete time coordinate $\tau$.
By invoking this regularization, one considers regularized transition amplitudes $\mathcal{A}_{\Sigma}[\Gamma]$ defined as
\begin{equation}\label{causalpathsum}
\mathcal{A}_{\Sigma}[\Gamma]=\sum_{\substack{\mathcal{T}_{c}\cong\Sigma\times\mathsf{I} \\ \mathcal{T}_{c}|_{\partial\mathcal{T}_{c}}=\Gamma}}\mu(\mathcal{T}_{c})\,e^{i\mathcal{S}_{\mathrm{cl}}[\mathcal{T}_{c}]/\hbar}.
\end{equation}
$\Gamma$ is the triangulation of the boundary $\partial\mathcal{T}_{c}$ of the causal triangulation $\mathcal{T}_{c}$, $\mu(\mathcal{T}_{c})$ is the measure, equal to the inverse of the order of the automorphism group of the causal triangulation $\mathcal{T}_{c}$, and $\mathcal{S}_{\mathrm{cl}}[\mathcal{T}_{c}]$ is the translation of the action $S_{\mathrm{cl}}[\mathbf{g}]$ into the Regge calculus of causal triangulations. In the cases of $D>2$ dimensions, analytic calculations of the transition amplitudes $\mathcal{A}_{\Sigma}[\Gamma]$,
even for the simplest nontrivial cases, are not currently possible. To study the quantum theory of gravity defined by the transition amplitudes $\mathcal{A}_{\Sigma}[\Gamma]$,
one therefore employs numerical techniques, specifically Monte Carlo methods. To enable the application of such methods, one first performs a Wick rotation of each causal triangulation by analytically continuing $\alpha$ to $-\alpha$ through the lower-half complex plane. This Wick rotation transforms the transition amplitude $\mathcal{A}_{\Sigma}[\Gamma]$
into the partition function
\begin{equation}\label{partitionfunction}
\mathcal{Z}_{\Sigma}[\Gamma]=\sum_{\substack{\mathcal{T}_{c}\cong\Sigma\times\mathsf{I} \\ \mathcal{T}_{c}|_{\partial\mathcal{T}_{c}}=\Gamma}}\mu(\mathcal{T}_{c})\,e^{-\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]}
\end{equation}
in which $\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]$ is the resulting real-valued Euclidean action. Since one can only numerically simulate finite causal triangulations, one chooses to consider the partition function \eqref{partitionfunction} for fixed numbers $\bar{T}$ of time slices and $\bar{N}_{D}$ of causal $D$-simplices. Accordingly, Monte Carlo methods produce ensembles of causal triangulations representative of those contributing to the (canonical) partition function
\begin{equation}\label{partitionfunctionfixedTN}
Z_{\Sigma}[\Gamma]=\sum_{\substack{\mathcal{T}_{c}\cong\Sigma\times\mathsf{I} \\ \mathcal{T}_{c}|_{\partial\mathcal{T}_{c}}=\Gamma \\ T(\mathcal{T}_{c})=\bar{T} \\ N_{D}(\mathcal{T}_{c})=\bar{N}_{D}}}\mu(\mathcal{T}_{c})\,e^{-\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]},
\end{equation}
related by Laplace transform to the (grand canonical) partition function \eqref{partitionfunction}.
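Concretely, when the action depends on the causal triangulation only through the numbers $N_{0}$ and $N_{3}$, as in equation (\ref{CDTaction3}) below, this relation becomes transparent: defining the fixed-volume sum $Z_{\bar{N}_{3}}(k_{0})=\sum_{\mathcal{T}_{c}:\,N_{3}(\mathcal{T}_{c})=\bar{N}_{3}}\mu(\mathcal{T}_{c})\,e^{k_{0}N_{0}}$ (schematically, suppressing the fixed number $\bar{T}$ of time slices and the boundary data), one has
\begin{equation}
\mathcal{Z}_{\Sigma}=\sum_{\bar{N}_{3}}e^{-k_{3}\bar{N}_{3}}\,Z_{\bar{N}_{3}}(k_{0}),
\end{equation}
a discrete Laplace transform in which the coupling $k_{3}$ is conjugate to the discrete spacetime volume $\bar{N}_{3}$.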
We take the action $S_{\mathrm{cl}}[\mathbf{g}]$ as that of $(2+1)$-dimensional Einstein gravity:
\begin{equation}\label{completeCaction}
S_{\mathrm{cl}}[\mathbf{g}]=\frac{1}{16\pi G_{0}}\int_{\mathcal{M}}\mathrm{d}^{3}x\,\sqrt{-g}\left(R-2\Lambda_{0}\right)+\frac{1}{8\pi G_{0}}\int_{\partial\mathcal{M}}\mathrm{d}^{2}y\sqrt{|\gamma|}K.
\end{equation}
The first term in the action \eqref{completeCaction}---the bulk term---is the Einstein-Hilbert action in which $G_{0}$ is the bare Newton constant, $R$ is the Ricci scalar of the metric tensor $\mathbf{g}$, and $\Lambda_{0}$ is a positive bare cosmological constant. The second term in the action \eqref{completeCaction}---the boundary term---is the Gibbons-Hawking-York action in which $K$ is the trace of the extrinsic curvature of the metric tensor $\gamma$ \cite{GWG&SWH,JWY}. We choose to consider a spacetime manifold $\mathcal{M}$ isomorphic to the direct product $\mathsf{S}^{2}\times\mathsf{I}$ of a $2$-sphere $\mathsf{S}^{2}$ and a real interval $\mathsf{I}$. In this case the boundary $\partial\mathcal{T}_{c}$ consists of two disconnected components: an initial spacelike $2$-sphere $\mathsf{S}_{\mathrm{i}}^{2}$ and a final spacelike $2$-sphere $\mathsf{S}_{\mathrm{f}}^{2}$. Drawing on previous results of Hartle and Sorkin \cite{JBH&RS}, Ambj\o rn \emph{et al} \cite{JA&JJ&RL2}, and Anderson \emph{et al} \cite{CA&SJC&JHC&PH&RKK&PZ}, Cooperman and Miller derived the form of the action $\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]$ arising from the action \eqref{completeCaction} for this case. We display $\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]$ in equation \eqref{completeEaction} of appendix \ref{completediscreteEaction}. If the initial and final boundary $2$-spheres $\mathsf{S}_{\mathrm{i}}^{2}$ and $\mathsf{S}_{\mathrm{f}}^{2}$ are identified, yielding periodic boundary conditions in the temporal direction, then the action $\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]$ simplifies considerably \cite{JA&JJ&RL2}:
\begin{equation}\label{CDTaction3}
\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]=-k_{0}N_{0}+k_{3}N_{3}.
\end{equation}
$N_{0}$ is the number of $0$-simplices (vertices), $N_{3}$ is the number of $3$-simplices, and the bare couplings $k_{0}$ and $k_{3}$ are the following dimensionless combinations of $G_{0}$, $\Lambda_{0}$, and $a$:
\begin{subequations}\label{k0k3expressions}
\begin{eqnarray}
k_{0}&=&2\pi ak\\
k_{3}&=&\frac{a^{3}\lambda}{4\sqrt{2}}+2\pi ak\left[\frac{3}{\pi}\cos^{-1}{\left(\frac{1}{3}\right)}-1\right]
\end{eqnarray}
\end{subequations}
with
\begin{subequations}
\begin{eqnarray}
k&=&\frac{1}{8\pi G_{0}}\\
\lambda&=&\frac{\Lambda_{0}}{8\pi G_{0}}
\end{eqnarray}
\end{subequations}
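As a concrete illustration of the relations \eqref{k0k3expressions}, the following Python snippet evaluates $k_{0}$ and $k_{3}$ for given $k$, $\lambda$, and $a$. This sketch is purely illustrative (the function name is ours and is not part of any simulation code):

```python
import math

def bare_couplings(k, lam, a):
    """Map (k, lambda, a) to the dimensionless couplings (k0, k3)
    of the action with periodic temporal boundary conditions,
    following the relations quoted above."""
    k0 = 2.0 * math.pi * a * k
    k3 = (a**3 * lam / (4.0 * math.sqrt(2.0))
          + 2.0 * math.pi * a * k
          * ((3.0 / math.pi) * math.acos(1.0 / 3.0) - 1.0))
    return k0, k3
```

For instance, at $k=\lambda=a=1$ one finds $k_{0}=2\pi$ and a positive $k_{3}$, consistent with the signs in equation \eqref{CDTaction3}.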
We set $\alpha=1$ because the value of $\alpha$ (once the Wick rotation has been performed) is irrelevant in $2+1$ dimensions.
When referring to an ensemble of causal triangulations with fixed initial and final boundary $2$-spheres $\mathsf{S}_{\mathrm{i}}^{2}$ and $\mathsf{S}_{\mathrm{f}}^{2}$, we employ the couplings $k_{0}$ and $k_{3}$ instead of the couplings $k$ and $\lambda$ of equation \eqref{completeEaction} to facilitate contact with previous work. By the given values of $k_{0}$ and $k_{3}$, we mean the values dictated by the relations \eqref{k0k3expressions} for the values of $k$ and $\lambda$ actually characterizing the given ensemble. An ensemble of causal triangulations
is therefore characterized by the number $\bar{T}$ of time slices, the number $\bar{N}_{3}$ of $3$-simplices, the value of the coupling $k_{0}$, and the triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ and $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$ of the initial and final boundary $2$-spheres $\mathsf{S}_{\mathrm{i}}^{2}$ and $\mathsf{S}_{\mathrm{f}}^{2}$. As explained, for instance, in \cite{JHC&JMM}, we must tune the coupling $k_{3}$ to its critical value $k_{3}^{c}$ to ensure that the partition function \eqref{partitionfunctionfixedTN} for the action \eqref{completeEaction} is well-defined. The value $k_{3}^{c}$ is therefore not independent of the other quantities characterizing an ensemble of causal triangulations.
The triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ and $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$ completely characterize the geometries of the initial and final boundary $2$-spheres $\mathsf{S}_{\mathrm{i}}^{2}$ and $\mathsf{S}_{\mathrm{f}}^{2}$, constituting a sizeable amount of boundary data on which the partition function \eqref{partitionfunctionfixedTN} depends. Cooperman and Miller restricted attention to only one aspect of the geometries of the triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ and $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$: their discrete spatial $2$-volumes as measured by the numbers $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ of spacelike $2$-simplices (equilateral triangles) comprising the $2$-spheres $\mathsf{S}_{\mathrm{i}}^{2}$ and $\mathsf{S}_{\mathrm{f}}^{2}$. The dependence of the partition function \eqref{partitionfunctionfixedTN} on $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ is not merely the simplest to consider: in the absence of a physically relevant characterization of the geometries of the triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ and $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$, the dependence of the partition function \eqref{partitionfunctionfixedTN} on other aspects of these geometries is difficult to study meaningfully.
To probe only the dependence on the initial and final numbers $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ of spacelike $2$-simplices, Cooperman and Miller proceeded as follows. They generated $\mathsf{N}$ random triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ of the $2$-sphere $\mathsf{S}_{\mathrm{i}}^{2}$ constructed from precisely $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ spacelike $2$-simplices and $\mathsf{N}$ random triangulations $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$ of the $2$-sphere $\mathsf{S}_{\mathrm{f}}^{2}$ constructed from precisely $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ spacelike $2$-simplices; they randomly paired the former $\mathsf{N}$ triangulations with the latter $\mathsf{N}$ triangulations to form $\mathsf{N}$ pairs of initial and final boundary triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ and $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$; they generated an ensemble of causal triangulations for each of these $\mathsf{N}$ pairs; and
they combined these $\mathsf{N}$ ensembles into a single averaged ensemble.
\footnote{Technically, the procedure of Cooperman and Miller assumes a constant measure over all causal triangulations with initial and final boundary triangulations $\Gamma(\mathsf{S}_{\mathrm{i}}^{2})$ and $\Gamma(\mathsf{S}_{\mathrm{f}}^{2})$ constructed respectively from precisely $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ spacelike $2$-simplices (for given values of $\bar{T}$, $\bar{N}_{3}$, and $k_{0}$) \cite{JHC&JMM}.}
By choosing to consider the dependence of the partition function \eqref{partitionfunctionfixedTN} only on $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$, Cooperman and Miller emulated virtually all previous studies
of causal dynamical triangulations in $2+1$ dimensions (and in $3+1$ dimensions) in probing the ground state of the quantum geometry defined by an ensemble of causal triangulations. Prior investigations examined the spacetime manifold structure $\mathsf{S}^{2}\times\mathsf{S}^{1}$ for which
the temporal direction is periodically identified. Such studies probe the ground state of quantum geometry in the sense that there are no boundary conditions to induce excitations of the quantum geometry. Although Cooperman and Miller explored transition amplitudes with the spacetime manifold structure $\mathsf{S}^{2}\times\mathsf{I}$ in \cite{JHC&JMM}, their averaging
over all geometrical degrees of freedom of the boundary $2$-spheres except for their discrete spatial $2$-volumes results in boundary conditions that do not induce excitations of the quantum geometry.
Monte Carlo methods do not give us access to the partition function \eqref{partitionfunctionfixedTN} itself; they yield only a representative sample of causal triangulations contributing to the path summation defining the partition function \eqref{partitionfunctionfixedTN}. This fact poses no problem of principle: we do have access to the expectation values of observables in the quantum state defined by the partition function \eqref{partitionfunctionfixedTN}. One computes the expectation value $\mathbb{E}[\mathcal{O}]$ of an observable $\mathcal{O}$ in this quantum state
as follows:
\begin{equation}
\mathbb{E}[\mathcal{O}]=\frac{1}{Z_{\Sigma}[\Gamma]}\sum_{\substack{\mathcal{T}_{c}\cong\Sigma\times\mathsf{I} \\ \mathcal{T}_{c}|_{\partial\mathcal{T}_{c}}=\Gamma \\ T(\mathcal{T}_{c})=\bar{T} \\ N_{D}(\mathcal{T}_{c})=\bar{N}_{D}}}\mu(\mathcal{T}_{c})\,e^{-\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]}\mathcal{O}[\mathcal{T}_{c}].
\end{equation}
We approximate the expectation value $\mathbb{E}[\mathcal{O}]$ by its average
\begin{equation}
\langle\mathcal{O}\rangle=\frac{1}{N(\mathcal{T}_{c})}\sum_{j=1}^{N(\mathcal{T}_{c})}\mathcal{O}[\mathcal{T}_{c}^{(j)}]
\end{equation}
over an ensemble of $N(\mathcal{T}_{c})$ causal triangulations generated by Monte Carlo methods. The Metropolis algorithm behind these simulations guarantees that
\begin{equation}
\mathbb{E}[\mathcal{O}]=\lim_{N(\mathcal{T}_{c})\rightarrow\infty}\langle\mathcal{O}\rangle.
\end{equation}
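Since the Metropolis algorithm samples causal triangulations with probability proportional to $\mu(\mathcal{T}_{c})\,e^{-\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}[\mathcal{T}_{c}]}$, the expectation value reduces to a plain sample mean over the generated configurations. The following Python sketch illustrates this logic for a generic action; it is a toy sampler with a hypothetical interface, not the actual move set acting on causal triangulations:

```python
import math
import random

def metropolis_average(observable, propose, action, state, n_sweeps, burn_in=1000):
    """Approximate E[O] by the sample mean <O> over Metropolis-generated states.
    `action` returns the Euclidean action S(state); `propose` returns a
    candidate update of `state`. Generic toy illustration only."""
    samples = []
    s_cur = action(state)
    for sweep in range(burn_in + n_sweeps):
        candidate = propose(state)
        s_cand = action(candidate)
        # Accept with probability min(1, exp(S_current - S_candidate)).
        if s_cand <= s_cur or random.random() < math.exp(s_cur - s_cand):
            state, s_cur = candidate, s_cand
        if sweep >= burn_in:
            samples.append(observable(state))
    return sum(samples) / len(samples)
```

Run on the quadratic action $S(x)=x^{2}/2$, for example, the estimator reproduces $\mathbb{E}[x^{2}]\approx1$, illustrating the convergence of $\langle\mathcal{O}\rangle$ to $\mathbb{E}[\mathcal{O}]$ as the number of sampled configurations grows.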
Numerical measurements of certain observables' ensemble averages have revealed that the model defined by the partition function \eqref{partitionfunctionfixedTN} for the action \eqref{CDTaction3} exhibits two phases of quantum geometry separated by a first-order phase transition: the decoupled phase, labeled A, for coupling $k_{0}>k_{0}^{c}$ and the condensate phase, labeled C, for coupling $k_{0}<k_{0}^{c}$ \cite{JA&JJ&RL3,RK}. Cooperman and Miller found that phase C also exists within the model defined by the partition function \eqref{partitionfunctionfixedTN} for the action \eqref{completeEaction} \cite{JHC&JMM}.
We restrict attention to values of the coupling $k_{0}$ that fall within phase C, as only the quantum geometry defined by ensembles of causal triangulations within phase C possesses physically relevant properties. We explore these properties in sections \ref{evidenceconjecture}, \ref{analysissupport}, and \ref{argumentrefutation}.
\section{Evidence and conjecture}\label{evidenceconjecture}
We now review and expand upon the evidence that led Cooperman and Miller to formulate their conjecture. Following several previous authors \cite{JA&JGS&AG&JJ,JA&JGS&AG&JJ2,JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&AG&JJ&RL&JGS&TT,JA&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,CA&SJC&JHC&PH&RKK&PZ,RK}, Cooperman and Miller performed measurements of the number $N_{2}^{\mathrm{SL}}(\tau)$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ labeling the distinguished foliation's time slices
\cite{JHC&JMM}. $N_{2}^{\mathrm{SL}}(\tau)$ quantifies the evolution of discrete spatial $2$-volume in the distinguished foliation.
Cooperman and Miller first considered the following two ensembles of causal triangulations. For $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$, we display the ensemble average $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$\footnote{The minimal piecewise-Euclidean simplicial $2$-sphere is constructed from four $2$-simplices.} in figure \ref{nonminnonminsame1}(a) and for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=100$ in figure \ref{nonminnonminsame1}(b).
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.2)
\put(0.1,0.005){\includegraphics[scale=1]{volprofIB4FB4.pdf}}
\put(0.5,0.005){\includegraphics[scale=1]{volprofIB100FB100.pdf}}
\put(0.28,-0.02){(a)}
\put(0.68,-0.02){(b)}
\end{picture}
\caption{Ensemble average number $\langle N_{2}^{\mathrm{SL}}\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$. (a) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ (b) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=100$. We have taken this data from \cite{JHC&JMM}.}
\label{nonminnonminsame1}
\end{figure}
The plot in figure \ref{nonminnonminsame1}(a) shows the behavior of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ previously understood as characteristic of phase C \cite{JA&AG&JJ&AK&RL,JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&AG&JJ&RL&JGS&TT,JA&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,CA&SJC&JHC&PH&RKK&PZ,DB&JH2,JHC,JHC&JMM,RK}: $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ smoothly increases from its minimal value of $4$ at the initial boundary $2$-sphere $\mathsf{S}_{\mathrm{i}}^{2}$ to its maximal value at the central time slice and symmetrically decreases from its maximal value to its minimal value of $4$ at the final boundary $2$-sphere $\mathsf{S}_{\mathrm{f}}^{2}$. As several authors have previously demonstrated \cite{JA&AG&JJ&AK&RL,JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,CA&SJC&JHC&PH&RKK&PZ,DB&JH2,JHC,JHC&JMM,RK}, and as we demonstrate once more in section \ref{analysissupport}, the ground state solution---Euclidean de Sitter space---of a minisuperspace model based on Euclidean Einstein gravity accurately describes the shape of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$. The plot in figure \ref{nonminnonminsame1}(b) shows that the characteristic behavior of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ continues to be manifest even for boundary $2$-spheres with nonminimal discrete spatial $2$-volumes. Cooperman and Miller demonstrated, moreover, that a portion of Euclidean de Sitter space accurately describes the shape of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ in this case as well \cite{JHC&JMM}.
Cooperman and Miller next further increased the discrete spatial $2$-volumes of the initial and final boundary $2$-spheres. For $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$, we display $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=500$ in figure \ref{nonminnonminsame2}(a), for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=700$ in figure \ref{nonminnonminsame2}(b), and for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=900$ in figure \ref{nonminnonminsame2}(c).\footnote{The ensemble of causal triangulations characterized by $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=300$ is very close to the transition of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ from being concave-down to being concave-up for these values of $\bar{T}$, $\bar{N}_{3}$, and $k_{0}$. We have not yet performed Monte Carlo simulations for sufficiently long computer times to determine on which side of the transition this ensemble falls.}
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.22)
\put(0,0.005){\includegraphics[scale=1]{volprofileIB500FB500.pdf}}
\put(0.34,0.005){\includegraphics[scale=1]{volprofT28V30k01ibfb700.pdf}}
\put(0.67,0.005){\includegraphics[scale=1]{volprofT28V30k01ibfb900.pdf}}
\put(0.18,-0.02){(a)}
\put(0.52,-0.02){(b)}
\put(0.85,-0.02){(c)}
\end{picture}
\caption{Ensemble average number $\langle N_{2}^{\mathrm{SL}}\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$. (a) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=500$ (b) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=700$ (c) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=900$. We have taken this data from \cite{JHC&JMM}.}
\label{nonminnonminsame2}
\end{figure}
We consider two further ensembles of causal triangulations.
For $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$, we display $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ in figure \ref{nonminnonmin2}(a) and for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=800$ in figure \ref{nonminnonmin2}(b).
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.2)
\put(0.1,0.005){\includegraphics[scale=1]{volprofT28V30k01ibfb600.pdf}}
\put(0.5,0.005){\includegraphics[scale=1]{volprofT28V30k01ibfb800.pdf}}
\put(0.28,-0.02){(a)}
\put(0.68,-0.02){(b)}
\end{picture}
\caption{Ensemble average number $\langle N_{2}^{\mathrm{SL}}\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$. (a) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ (b) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=800$.}
\label{nonminnonmin2}
\end{figure}
As Cooperman and Miller remarked, the shape of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for those ensembles represented in figures \ref{nonminnonminsame2}
and \ref{nonminnonmin2} is possibly of a hyperbolic sinusoidal character. They hypothesized accordingly that a portion of Lorentzian de Sitter spacetime might accurately describe the shape of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for these ensembles \cite{JHC&JMM}. We test this hypothesis in section \ref{analysissupport}.
Following Ambj\o rn \emph{et al} \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2} and Cooperman \cite{JHC}, we moreover measured the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}(\tau)$ in the number $N_{2}^{\mathrm{SL}}(\tau)$ of spacelike $2$-simplices from the ensemble average $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ defined as
\begin{equation}
\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle=\frac{1}{N(\mathcal{T}_{c})}\sum_{j=1}^{N(\mathcal{T}_{c})}\left[n_{2}^{\mathrm{SL}}(\tau)\right]_{j}\left[n_{2}^{\mathrm{SL}}(\tau')\right]_{j}
\end{equation}
for
\begin{equation}
\left[n_{2}^{\mathrm{SL}}(\tau)\right]_{j}=\left[N_{2}^{\mathrm{SL}}(\tau)\right]_{j}-\langle N_{2}^{\mathrm{SL}}(\tau)\rangle.
\end{equation}
$\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ is a $\bar{T}\times\bar{T}$ real symmetric matrix, which we diagonalize to obtain its eigenvectors $\eta_{j}(\tau)$ and associated eigenvalues $\lambda_{j}$.
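Concretely, one may compute $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ and its spectral decomposition as in the following Python sketch. The function name and the array layout (one row per sampled causal triangulation, one column per time slice) are illustrative assumptions of ours:

```python
import numpy as np

def connected_two_point(volume_profiles):
    """Connected 2-point function of spatial-volume fluctuations.
    `volume_profiles` is an (N_ens, T_bar) array whose rows are
    measurements of N2SL(tau) for individual causal triangulations."""
    profiles = np.asarray(volume_profiles, dtype=float)
    deviations = profiles - profiles.mean(axis=0)        # n2SL(tau) per configuration
    cov = deviations.T @ deviations / profiles.shape[0]  # T_bar x T_bar symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(cov)               # lambda_j and eta_j(tau)
    order = np.argsort(eigvals)[::-1]                    # sort by decreasing eigenvalue
    return cov, eigvals[order], eigvecs[:, order]
```

The resulting matrix is real symmetric and positive semidefinite, so `np.linalg.eigh` yields real eigenvalues $\lambda_{j}\geq0$ and orthonormal eigenvectors $\eta_{j}(\tau)$, matching the diagonalization described above.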
For the (Euclidean-like) ensemble $\mathcal{E}_{\mathrm{E}}$ of causal triangulations characterized by $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$, we display the first three eigenvectors $\eta_{j}(\tau)$ and the eigenvalues $\lambda_{j}$ of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ in figures \ref{eigenvectors}(a) and \ref{eigenvalues}(a).\footnote{We employ the ensemble $\mathcal{E}_{\mathrm{E}}$ characterized by $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ as a point of comparison for two reasons. First, our analysis of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for the ensemble of causal triangulations characterized by $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ indicates the presence of a stalk, resulting in the first eigenvector $\eta_{1}(\tau)$ possessing three rather than two nodes. See \cite{JA&AG&JJ&RL2} for an explanation. Second, our analysis in section \ref{analysissupport} of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for the ensemble $\mathcal{E}_{\mathrm{E}}$ yields a quality of fit comparable to that for the ensemble $\mathcal{E}_{\mathrm{L}}$ characterized by $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$, which we implicitly introduced with figure \ref{nonminnonmin2}(a) and formally introduce with figures \ref{eigenvectors}(b) and \ref{eigenvalues}(b).}
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.5)
\put(0,0.25){\includegraphics[scale=0.9]{eigenvectorsT21V30IBFB4three.pdf}}
\put(0,-0.02){\includegraphics[scale=0.9]{eigenvectorsT29V30IBFB600three.pdf}}
\put(0.47,-0.035){(b)}
\put(0.47,0.235){(a)}
\end{picture}
\caption{First three eigenvectors $\eta_{j}(\tau)$ of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}$ in the number of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{N}_{3}=30850$ and $k_{0}=1.00$ (a) $\bar{T}=21$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ (ensemble $\mathcal{E}_{\mathrm{E}}$) (b) $\bar{T}=29$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ (ensemble $\mathcal{E}_{\mathrm{L}}$). We do not indicate the scale of the eigenvectors $\eta_{j}(\tau)$ as their normalization is arbitrary.}
\label{eigenvectors}
\end{figure}
\begin{figure}[!ht]
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.3)
\put(0.0005,0.005){\includegraphics[scale=1]{eigenvaluesT21V30IBFB4.pdf}}
\put(0.51,0.00){\includegraphics[scale=1]{eigenvaluesT28V30IBFB600.pdf}}
\put(0.27,-0.03){(a)}
\put(0.78,-0.03){(b)}
\end{picture}
\caption{Eigenvalues $\lambda_{j}$ of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}$ in the number of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{N}_{3}=30850$ and $k_{0}=1.00$ (a) $\bar{T}=21$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ (ensemble $\mathcal{E}_{\mathrm{E}}$) (b) $\bar{T}=29$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ (ensemble $\mathcal{E}_{\mathrm{L}}$).}
\label{eigenvalues}
\end{figure}
The plots in figures \ref{eigenvectors}(a) and \ref{eigenvalues}(a) show the behavior of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ previously understood as characteristic of phase C \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2,JHC}. As Ambj\o rn \emph{et al} \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2} and Cooperman \cite{JHC} have previously demonstrated in the case of $3+1$ dimensions, and as we demonstrate for the first time in $2+1$ dimensions in section \ref{analysissupport}, the connected $2$-point function of linear gravitational perturbations propagating on Euclidean de Sitter space accurately describes the shape of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$, both its eigenvectors $\eta_{j}(\tau)$ and its eigenvalues $\lambda_{j}$.
For the (Lorentzian-like) ensemble $\mathcal{E}_{\mathrm{L}}$ of causal triangulations characterized by $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$, we display the first three eigenvectors $\eta_{j}(\tau)$ and the associated eigenvalues $\lambda_{j}$ of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ in figures \ref{eigenvectors}(b) and \ref{eigenvalues}(b).
The shapes of the eigenvectors $\eta_{j}(\tau)$ for the ensemble $\mathcal{E}_{\mathrm{L}}$
differ subtly yet notably from the shapes of the eigenvectors $\eta_{j}(\tau)$ for the ensemble $\mathcal{E}_{\mathrm{E}}$.
The spectrum of eigenvalues $\lambda_{j}$ for the ensemble $\mathcal{E}_{\mathrm{L}}$ also differs subtly yet notably from the spectrum of eigenvalues $\lambda_{j}$ for the ensemble $\mathcal{E}_{\mathrm{E}}$. We hypothesize accordingly that linear gravitational perturbations propagating on a portion of Lorentzian de Sitter spacetime might accurately describe the shape of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$, both its eigenvectors $\eta_{j}(\tau)$ and its eigenvalues $\lambda_{j}$, for the ensemble $\mathcal{E}_{\mathrm{L}}$.
We test this hypothesis in section \ref{analysissupport}.
These findings led Cooperman and Miller to formulate the following conjecture: geometries resembling Lorentzian de Sitter spacetime, not Euclidean de Sitter space, on sufficiently large scales dominate the partition function \eqref{partitionfunction} for the action \eqref{CDTaction3} defining the ground state of $(2+1)$-dimensional causal dynamical triangulations for spherical spatial topology \cite{JHC&JMM}. Cooperman and Miller also suggested that their conjecture's scenario might arise \emph{via} a mechanism similar to that of the Hartle-Hawking no-boundary proposal in which complex geometries contribute to the partition function \cite{JBH&SWH}.
We subject their conjecture to a first test in section \ref{analysissupport}, obtaining evidence in its favor; however, we argue for a more straightforward explanation of the above findings in section \ref{argumentrefutation}, refuting their conjecture.
\section{Analysis and support}\label{analysissupport}
We now perform a preliminary test of the conjecture of Cooperman and Miller by analyzing the measurements of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ and $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ reported in section \ref{evidenceconjecture} on the basis of their conjecture. To connect their conjecture with these measurements,
we attempt to describe these measurements within a simple yet nontrivial model inspired by their conjecture: a minisuperspace truncation of $(2+1)$-dimensional Einstein gravity having either Lorentzian de Sitter spacetime or Euclidean de Sitter space as its ground state. Several authors have previously employed this model's Euclidean version \cite{JA&DNC&JGS&JJ,JA&JGS&AG&JJ,JA&JGS&AG&JJ2,JA&AG&JJ&AK&RL,JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&AG&JJ&RL3,JA&AG&JJ&RL&JGS&TT,JA&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,CA&SJC&JHC&PH&RKK&PZ,DB&JH,DB&JH2,JHC,JHC&JMM,RK}, which Ambj\o rn, Jurkiewicz, and Loll first suggested \cite{JA&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6}. We specify the model's metric tensor $\mathbf{g}$ by the line element
\begin{equation}\label{minisuperspacemetric}
\mathrm{d}\mathsf{s}^{2}=\pm\omega^{2}\mathrm{d} t^{2}+\rho^{2}( t)\left(\mathrm{d}\theta^{2}+\sin^{2}{\theta}\,\mathrm{d}\phi^{2}\right)
\end{equation}
for positive constant $\omega$ and scale factor $\rho(t)$, with the upper sign ($+$) for Euclidean signature and the lower sign ($-$) for Lorentzian signature. For the line element \eqref{minisuperspacemetric}, expressed in terms of the spatial $2$-volume
\begin{equation}
V_{2}( t)=\int_{0}^{\pi}\mathrm{d}\theta\int_{0}^{2\pi}\mathrm{d}\phi\sqrt{g_{\theta\theta}g_{\phi\phi}}=4\pi \rho^{2}( t),
\end{equation}
the Einstein-Hilbert action, including the Gibbons-Hawking-York action, given in equation \eqref{completeCaction} for Lorentzian signature, becomes
\begin{equation}\label{MSM2action4}
S_{\mathrm{cl}}[V_{2}]=\pm\frac{\omega}{32\pi G}\int_{t_{\mathrm{i}}}^{t_{\mathrm{f}}}\mathrm{d} t\left[\frac{\dot{V}_{2}^{2}(t)}{\omega^{2}V_{2}(t)}\mp4\Lambda V_{2}(t)\right]
\end{equation}
after integration by parts. As in equation \eqref{minisuperspacemetric}, the upper signs correspond to Euclidean signature, and the lower signs correspond to Lorentzian signature.\footnote{Typically, in Euclidean signature the action \eqref{MSM2action4} has an overall negative sign, which is surprisingly absent in the large-scale effective action of causal dynamical triangulations \cite{JA&DNC&JGS&JJ,JA&JGS&AG&JJ,JA&JGS&AG&JJ2,JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&AG&JJ&RL&JGS&TT,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6}. See \cite{JA&AG&JJ&RL3} for a plausible yet tentative explanation.} $G$ and $\Lambda$ are now the renormalized Newton and cosmological constants. The maximally symmetric extremum of the action \eqref{MSM2action4} for Euclidean signature is Euclidean de Sitter space, for which
\begin{equation}\label{dSvolprof}
V_{2}^{(\mathrm{EdS})}( t)=4\pi\ell_{\mathrm{dS}}^{2}\cos^{2}{\left(\frac{\omega t}{\ell_{\mathrm{dS}}}\right)}
\end{equation}
with $ t\in[-\pi\ell_{\mathrm{dS}}/2\omega,+\pi\ell_{\mathrm{dS}}/2\omega]$; the maximally symmetric extremum of the action \eqref{MSM2action4} for Lorentzian signature is Lorentzian de Sitter spacetime, for which
\begin{equation}\label{LdSvolprof}
V_{2}^{(\mathrm{LdS})}( t)=4\pi \ell_{\mathrm{dS}}^{2}\cosh^{2}{\left(\frac{\omega t}{\ell_{\mathrm{dS}}}\right)}
\end{equation}
with $ t\in(-\infty,+\infty)$. $\ell_{\mathrm{dS}}=\sqrt{1/\Lambda}$ is the de Sitter length.
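For reference, the two volume profiles \eqref{dSvolprof} and \eqref{LdSvolprof} can be evaluated numerically as in the following Python sketch (the function names are ours, introduced only for illustration):

```python
import math

def v2_euclidean(t, ell_ds, omega=1.0):
    """Spatial 2-volume of Euclidean de Sitter space: 4 pi l^2 cos^2(omega t / l)."""
    return 4.0 * math.pi * ell_ds**2 * math.cos(omega * t / ell_ds) ** 2

def v2_lorentzian(t, ell_ds, omega=1.0):
    """Spatial 2-volume of Lorentzian de Sitter spacetime: 4 pi l^2 cosh^2(omega t / l)."""
    return 4.0 * math.pi * ell_ds**2 * math.cosh(omega * t / ell_ds) ** 2
```

Both profiles coincide at $t=0$, where they equal $4\pi\ell_{\mathrm{dS}}^{2}$; the Euclidean profile closes up at $t=\pm\pi\ell_{\mathrm{dS}}/2\omega$, whereas the Lorentzian profile grows without bound.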
We first model the ensemble average number $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ on the basis of the spatial $2$-volumes $V_{2}^{(\mathrm{EdS})}( t)$ and $V_{2}^{(\mathrm{LdS})}(t)$ given in equations \eqref{dSvolprof} and \eqref{LdSvolprof}. In particular, we derive a discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ appropriate to causal triangulations of each of the spatial $2$-volumes $V_{2}^{(\mathrm{EdS})}( t)$ and $V_{2}^{(\mathrm{LdS})}(t)$, and we subsequently perform a best fit of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$.
Several authors have previously performed such a derivation in the case of Euclidean de Sitter space \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&AG&JJ&RL3,JA&AG&JJ&RL&JGS&TT,JA&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,CA&SJC&JHC&PH&RKK&PZ,DB&JH2,JHC,JHC&JMM}; we adapt their techniques to the case of a portion of Lorentzian de Sitter spacetime.
We begin by assuming a canonical finite-size scaling \emph{Ansatz} based on the double scaling limit
\begin{equation}\label{FSSansatz}
V_{3}=\lim_{\substack{N_{3}\rightarrow\infty \\ a\rightarrow0}}C_{3}N_{3}a^{3}
\end{equation}
of the spacetime $3$-volume $V_{3}$: in the infinite-volume ($N_{3}\rightarrow\infty$) and continuum ($a\rightarrow0$) limits, the discrete spacetime $3$-volume $C_{3}N_{3}a^{3}$ approaches the constant value $V_{3}$. $C_{3}$ is the effective discrete spacetime $3$-volume of a single $3$-simplex. Evidence for the applicability of this \emph{Ansatz} to the scaling of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ is presented in \cite{JA&AG&JJ&RL3,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,DB&JH2}. The motivation for this \emph{Ansatz} is as follows: $V_{3}$ is the largest-scale physical observable present in our model, so, of all possible discrete observables, we expect the discrete spacetime $3$-volume to scale canonically with $N_{3}$ and $a$. In appendix \ref{derivation1pt} we employ the finite-size scaling \emph{Ansatz} based on equation \eqref{FSSansatz} to derive the discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ for each of the spatial $2$-volumes $V_{2}^{(\mathrm{EdS})}( t)$ and $V_{2}^{(\mathrm{LdS})}( t)$ restricted to the finite global time interval $[ t_{\mathrm{i}}, t_{\mathrm{f}}]$. In the case of Euclidean de Sitter space, we derive that
\begin{equation}\label{variantdiscretedSvolprofileappen}
\mathcal{N}_{2}^{\mathrm{SL}}(\tau)=\frac{\langle N_{3}^{(1,3)}\rangle}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\frac{\cos^{2}{\left(\frac{\tau}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\right)}}{\frac{\tau_{\mathrm{f}}-\tau_{\mathrm{i}}}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}+2\sin{\left(\frac{\tau_{\mathrm{f}}-\tau_{\mathrm{i}}}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\right)}\cos{\left(\frac{\tau_{\mathrm{f}}+\tau_{\mathrm{i}}}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\right)}},
\end{equation}
as previously determined in \cite{JHC&JMM}, and, in the case of Lorentzian de Sitter spacetime, we derive that
\begin{equation}\label{LdSdiscreteanalogue}
\mathcal{N}_{2}^{\mathrm{SL}}(\tau)=\frac{\langle N_{3}^{(1,3)}\rangle}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\frac{\cosh^{2}{\left(\frac{\tau}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\right)}}{\frac{\tau_{\mathrm{f}}-\tau_{\mathrm{i}}}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}+2\sinh{\left(\frac{\tau_{\mathrm{f}}-\tau_{\mathrm{i}}}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\right)}\cosh{\left(\frac{\tau_{\mathrm{f}}+\tau_{\mathrm{i}}}{\bar{s}_{0}\langle N_{3}^{(1,3)}\rangle^{1/3}}\right)}}.
\end{equation}
$N_{3}^{(1,3)}$ is the number of $(1,3)$ $3$-simplices,
\begin{equation}
\bar{s}_{0}=\frac{2^{1/3}(1+\xi)^{1/3}\ell_{\mathrm{dS}}}{\omega V_{3}^{1/3}}
\end{equation}
is a fit parameter, and $\xi$ is the ratio of $\langle N_{3}^{(2,2)}\rangle$ to $\langle N_{3}^{(1,3)}\rangle+\langle N_{3}^{(3,1)}\rangle$. We now perform best fits of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ to the measurements of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$
following the procedure of \cite{JHC&JMM}. We report the value $\chi_{\mathrm{red}}^{2}$ of the $\chi^{2}$ per degree of freedom for each fit.
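A simplified version of such a fit can be sketched in Python as follows. For brevity we fit a two-parameter $\cosh^{2}$ profile rather than the full discrete analogue \eqref{LdSdiscreteanalogue}, so the function names and parametrization are illustrative stand-ins, not the procedure of \cite{JHC&JMM} itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def cosh_profile(tau, amplitude, width):
    """Two-parameter stand-in for the Lorentzian discrete analogue:
    a cosh^2 profile centered on tau = 0."""
    return amplitude * np.cosh(tau / width) ** 2

def fit_volume_profile(tau, mean_n2sl, err_n2sl, p0=(100.0, 5.0)):
    """Weighted least-squares fit of the profile and its reduced chi^2."""
    popt, _ = curve_fit(cosh_profile, tau, mean_n2sl, p0=p0,
                        sigma=err_n2sl, absolute_sigma=True)
    residuals = (mean_n2sl - cosh_profile(tau, *popt)) / err_n2sl
    chi2_red = float(np.sum(residuals ** 2)) / (len(tau) - len(popt))
    return popt, chi2_red
```

Dividing $\chi^{2}$ by the number of data points minus the number of fit parameters yields the reduced $\chi^{2}_{\mathrm{red}}$ reported for each fit below.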
To establish a point of comparison, we first consider the ensemble $\mathcal{E}_{\mathrm{E}}$ characterized by $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$, for which, as depicted in figure \ref{volproffitT21V30K1IBFB4}, $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ exhibits the characteristic behavior of phase C. We display $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ overlain with the best fit form of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$, given in equation \eqref{variantdiscretedSvolprofileappen}, for the ensemble $\mathcal{E}_{\mathrm{E}}$ in figure \ref{volproffitT21V30K1IBFB4}.
\begin{figure}
\centering
\includegraphics[scale=1]{volproffitT21V30k1IBFB4.pdf}
\caption{Ensemble average number $\langle N_{2}^{\mathrm{SL}}\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ (blue circles) for $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ (Euclidean-like ensemble $\mathcal{E}_{\mathrm{E}}$) overlain with the best fit discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ (black line) of the spatial $2$-volume $V_{2}^{(\mathrm{EdS})}( t)$ as a function of the global time coordinate $t$ of Euclidean de Sitter space. $\chi^{2}_{\mathrm{red}}=79.91$.}
\label{volproffitT21V30K1IBFB4}
\end{figure}
This fit of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ is representative of the application of the above Euclidean model to measurements of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2,JA&AG&JJ&RL3,JA&AG&JJ&RL&JGS&TT,JA&JJ&RL4,JA&JJ&RL5,JA&JJ&RL6,CA&SJC&JHC&PH&RKK&PZ,DB&JH2,JHC,JHC&JMM,RK}. Visually, $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ fits $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ quite satisfactorily. As measured by $\chi_{\mathrm{red}}^{2}$, the quality of the fit of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$, given in equation \eqref{variantdiscretedSvolprofileappen}, to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for this Euclidean-like ensemble is comparable to the quality of previous such fits \cite{JHC&JMM}.
We now test the hypothesis that a portion of Lorentzian de Sitter spacetime accurately describes the ensemble average number $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for the Lorentzian-like ensembles represented in figures \ref{nonminnonminsame2} and \ref{nonminnonmin2}.
We consider the five ensembles of causal triangulations represented in figures \ref{nonminnonminsame2} and \ref{nonminnonmin2}, including the ensemble $\mathcal{E}_{\mathrm{L}}$.
For $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$, we display $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ overlain with the best fit form of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$, given in equation \eqref{LdSdiscreteanalogue}, for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=500$ in figure \ref{nonminnonminsame2fit}(a), for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=700$ in figure \ref{nonminnonminsame2fit}(b), for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=900$ in figure \ref{nonminnonminsame2fit}(c),
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.22)
\put(0,0.005){\includegraphics[scale=1]{volproffitT29V30kIBFB500.pdf}}
\put(0.34,0.005){\includegraphics[scale=1]{volproffitT29V30kIBFB700.pdf}}
\put(0.67,0.005){\includegraphics[scale=1]{volproffitT29V30kIBFB900.pdf}}
\put(0.18,-0.02){(a)}
\put(0.52,-0.02){(b)}
\put(0.85,-0.02){(c)}
\end{picture}
\caption{Ensemble average number $\langle N_{2}^{\mathrm{SL}}\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ (blue circles) for $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$ overlain with the best fit discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ (black line) of the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}( t)$ as a function of the global time coordinate $t$ of Lorentzian de Sitter spacetime. (a) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=500$, $\chi^{2}_{\mathrm{red}}=169.86$. (b) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=700$, $\chi^{2}_{\mathrm{red}}=143.44$. (c) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=900$, $\chi^{2}_{\mathrm{red}}=1435.51$.}
\label{nonminnonminsame2fit}
\end{figure}
for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ in figure \ref{nonminnonmin2fit}(a), and for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=800$ in figure \ref{nonminnonmin2fit}(b).
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.2)
\put(0.1,0.005){\includegraphics[scale=1]{volproffitT29V30kIBFB600.pdf}}
\put(0.5,0.005){\includegraphics[scale=1]{volproffitT29V30kIBFB800.pdf}}
\put(0.28,-0.02){(a)}
\put(0.68,-0.02){(b)}
\end{picture}
\caption{Ensemble average number $\langle N_{2}^{\mathrm{SL}}\rangle$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ (blue circles) for $\bar{T}=29$, $\bar{N}_{3}=30850$, and $k_{0}=1.00$ overlain with the best fit discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ (black line) of the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}( t)$ as a function of the global time coordinate $t$ of Lorentzian de Sitter spacetime. (a) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$, $\chi^{2}_{\mathrm{red}}=86.67$. (b) $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=800$, $\chi^{2}_{\mathrm{red}}=452.85$.}
\label{nonminnonmin2fit}
\end{figure}
Visually, $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ again fits $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ quite satisfactorily. As measured by $\chi_{\mathrm{red}}^{2}$, the quality of the fits of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$, given in equation \eqref{LdSdiscreteanalogue}, to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for these five Lorentzian-like ensembles is comparable to the quality of the fit of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$, given in equation \eqref{variantdiscretedSvolprofileappen}, to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ for Euclidean-like ensembles \cite{JHC&JMM}. In particular, these fits for the ensembles $\mathcal{E}_{\mathrm{E}}$ and $\mathcal{E}_{\mathrm{L}}$ have nearly equivalent $\chi_{\mathrm{red}}^{2}$ values, motivating our choice to compare the ensembles $\mathcal{E}_{\mathrm{E}}$ and $\mathcal{E}_{\mathrm{L}}$. There is a systematic trend in the $\chi_{\mathrm{red}}^{2}$ values for these five Lorentzian-like ensembles: $\chi_{\mathrm{red}}^{2}$ is minimal for $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ and increases monotonically for both smaller and larger values of $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$. Cooperman and Miller found the same type of trend for ensembles with different numbers $\bar{T}$ of time slices at fixed number $\bar{N}_{3}$ of $3$-simplices, coupling $k_{0}$, and numbers $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ of initial and final spacelike $2$-simplices \cite{JHC&JMM}. These trends likely stem from either undiagnosed finite-size effects or incomplete modeling. We touch on finite-size scaling analyses of transition amplitudes at the end of this section, and Cooperman and Houthoff perform a first investigation of systematic modeling issues in a forthcoming paper \cite{JHC&WH}.
We now extend our model to include linear gravitational perturbations $v_{2}(t)$ propagating on either Euclidean de Sitter space or Lorentzian de Sitter spacetime. In the path integral formalism one computes the connected $2$-point function $\mathbb{E}_{\mathrm{EdS}}[v_{2}(t)\,v_{2}(t')]$ of perturbations $v_{2}(t)$ about Euclidean de Sitter space as
\begin{equation}\label{EdS2ptdef}
\mathbb{E}_{\mathrm{EdS}}[v_{2}(t)\,v_{2}(t')]=\frac{\int\mathrm{d}\mu(v_{2})\,v_{2}( t)\,v_{2}( t')\,e^{-S_{\mathrm{cl}}[v_{2}]/\hbar}}{\int\mathrm{d}\mu(v_{2})\,e^{-S_{\mathrm{cl}}[v_{2}]/\hbar}},
\end{equation}
in which $S_{\mathrm{cl}}[v_{2}]$ is the action \eqref{MSM2action4} in Euclidean signature for the spatial $2$-volume $V_{2}(t)$ perturbed by $v_{2}(t)$ about $V_{2}^{(\mathrm{EdS})}(t)$, and the connected $2$-point function $\mathbb{E}_{\mathrm{LdS}}[v_{2}(t)\,v_{2}(t')]$ of perturbations $v_{2}(t)$ about Lorentzian de Sitter spacetime as
\begin{equation}\label{LdS2ptdef}
\mathbb{E}_{\mathrm{LdS}}[v_{2}(t)\,v_{2}(t')]=\frac{\int\mathrm{d}\mu(v_{2})\,v_{2}( t)\,v_{2}( t')\,e^{iS_{\mathrm{cl}}[v_{2}]/\hbar}}{\int\mathrm{d}\mu(v_{2})\,e^{iS_{\mathrm{cl}}[v_{2}]/\hbar}},
\end{equation}
in which $S_{\mathrm{cl}}[v_{2}]$ is the action \eqref{MSM2action4} in Lorentzian signature for the spatial $2$-volume $V_{2}(t)$ perturbed by $v_{2}(t)$ about $V_{2}^{(\mathrm{LdS})}(t)$. Expanding the action \eqref{MSM2action4} in Euclidean signature to second order in $v_{2}(t)$, assuming that $V_{2}^{\mathrm{(EdS)}}(t)\gg v_{2}(t)$, we find that
\begin{eqnarray}\label{modelaction2ndorder}
S_{\mathrm{cl}}[v_{2}]&=&S_{\mathrm{cl}}[V_{2}^{(\mathrm{EdS})}]-\frac{1}{64\pi^{2}G\ell_{\mathrm{dS}}^{3}}\int_{\tilde{t}_{\mathrm{i}}}^{\tilde{t}_{\mathrm{f}}}\mathrm{d}\tilde{t}\,v_{2}(\tilde{t})\sec^{2}{\tilde{t}}\left[\frac{\mathrm{d}^{2}}{\mathrm{d}\tilde{t}^{2}}+2\tan{\tilde{t}}\frac{\mathrm{d}}{\mathrm{d}\tilde{t}}+2\sec^{2}{\tilde{t}}\right]v_{2}(\tilde{t})\nonumber\\ &&\qquad+O\left[\left(v_{2}\right)^{3}\right],
\end{eqnarray}
for $\tilde{ t}=\omega t/\ell_{\mathrm{dS}}$. The terms of first order in $v_{2}(t)$ vanish because $V_{2}^{(\mathrm{EdS})}(t)$ is an extremum of the action \eqref{MSM2action4} in Euclidean signature. Expanding the action \eqref{MSM2action4} in Lorentzian signature to second order in $v_{2}(t)$, assuming that $V_{2}^{\mathrm{(LdS)}}(t)\gg v_{2}(t)$, we find that
\begin{eqnarray}
S_{\mathrm{cl}}[v_{2}]&=&S_{\mathrm{cl}}[V_{2}^{(\mathrm{LdS})}]+\frac{1}{64\pi^{2}G\ell_{\mathrm{dS}}^{3}}\int_{\tilde{t}_{\mathrm{i}}}^{\tilde{t}_{\mathrm{f}}}\mathrm{d}\tilde{ t}\,v_{2}(\tilde{t})\sech^{2}{\tilde{ t}}\left[\frac{\mathrm{d}^{2}}{\mathrm{d}\tilde{ t}^{2}}-2\tanh{\tilde{ t}}\frac{\mathrm{d}}{\mathrm{d}\tilde{ t}}-2\sech^{2}{\tilde{ t}}\right]v_{2}(\tilde{t})\nonumber\\ &&\qquad+O\left[\left(v_{2}\right)^{3}\right],
\end{eqnarray}
for $\tilde{ t}=\omega t/\ell_{\mathrm{dS}}$. The terms of first order in $v_{2}(t)$ vanish because $V_{2}^{(\mathrm{LdS})}(t)$ is an extremum of the action \eqref{MSM2action4} in Lorentzian signature. A standard calculation now gives that
\begin{equation}
\mathbb{E}[v_{2}( t)\,v_{2}( t')]=\left[\frac{1}{\hbar}\mathscr{M}( t, t')\right]^{-1},
\end{equation}
in which
\begin{equation}
\mathscr{M}( t, t')=\frac{\delta^{2}S_{\mathrm{cl}}[v_{2}]}{\delta v_{2}( t)\,\delta v_{2}( t')}\bigg|_{\substack{v_{2}( t)=0 \\ v_{2}( t')=0}}
\end{equation}
is the van Vleck-Morette determinant. For perturbations $v_{2}(t)$ about the spatial $2$-volume $V_{2}^{(\mathrm{EdS})}(t)$ of Euclidean de Sitter space,
\begin{equation}\label{vVMdEdS}
\mathscr{M}( t, t')=\frac{1}{64\pi^{2}G\ell_{\mathrm{dS}}^{3}}\sec^{2}{\tilde{t}}\left[\frac{\mathrm{d}^{2}}{\mathrm{d}\tilde{t}^{2}}+2\tan{\tilde{t}}\frac{\mathrm{d}}{\mathrm{d}\tilde{t}}+2\sec^{2}{\tilde{t}}\right],
\end{equation}
and, for perturbations $v_{2}(t)$ about the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}(t)$ of Lorentzian de Sitter spacetime,
\begin{equation}\label{vVMdLdS}
\mathscr{M}( t, t')=\frac{1}{64\pi^{2}G\ell_{\mathrm{dS}}^{3}}\sech^{2}{\tilde{ t}}\left[\frac{\mathrm{d}^{2}}{\mathrm{d}\tilde{ t}^{2}}-2\tanh{\tilde{ t}}\frac{\mathrm{d}}{\mathrm{d}\tilde{ t}}-2\sech^{2}{\tilde{ t}}\right].
\end{equation}
One can show moreover that
\begin{equation}\label{vVMdetmodesum}
\mathscr{M}( t, t')=\sum_{j=1}^{\infty}\mu_{j}\,\nu_{j}( t)\,\nu_{j}( t')
\end{equation}
in which $\nu_{j}( t)$ are the eigenfunctions of the operator $\mathscr{M}( t, t')$ with associated eigenvalues $\mu_{j}$ satisfying the integral constraint
\begin{equation}\label{eigenfunctionconstraint}
\int_{ t_{\mathrm{i}}}^{ t_{\mathrm{f}}}\mathrm{d} t\,\omega\nu_{j}( t)=0
\end{equation}
and the boundary conditions $\nu_{j}( t_{\mathrm{i}})=0$ and $\nu_{j}( t_{\mathrm{f}})=0$. Accordingly,
\begin{equation}\label{2ptmodesum}
\mathbb{E}[v_{2}( t)\,v_{2}( t')]=\sum_{j=1}^{\infty}\frac{\hbar}{\mu_{j}}\,\nu_{j}( t)\,\nu_{j}( t')
\end{equation}
assuming that $\mu_{j}\neq0$ for all $j$, which holds in the cases under consideration.
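The mode-sum construction of equations \eqref{vVMdetmodesum} and \eqref{2ptmodesum} is straightforward to realize numerically. The following sketch (in Python; not the analysis code used here) discretizes the Euclidean operator of equation \eqref{vVMdEdS} in its self-adjoint form $\frac{\mathrm{d}}{\mathrm{d}\tilde{t}}\big(\sec^{2}{\tilde{t}}\,\frac{\mathrm{d}}{\mathrm{d}\tilde{t}}\big)+2\sec^{4}{\tilde{t}}$ on a uniform grid with Dirichlet boundary conditions; the interval, the grid size, and the choice $64\pi^{2}\hbar G\ell_{\mathrm{dS}}^{3}=1$ are illustrative, and the integral constraint \eqref{eigenfunctionconstraint} is omitted for brevity.

```python
import numpy as np

# Discretize M(t,t') = (sec^2 v')' + 2 sec^4 v on interior grid points
# with Dirichlet boundary conditions; the symmetric midpoint stencil
# keeps the matrix self-adjoint, mirroring the continuum operator.
n, a = 60, 1.2                         # interior points, half-width (< pi/2)
grid = np.linspace(-a, a, n + 2)       # includes the two boundary nodes
h = grid[1] - grid[0]
sec2 = 1.0 / np.cos(grid) ** 2
s_half = 0.5 * (sec2[:-1] + sec2[1:])  # sec^2 at the n + 1 midpoints

Mmat = np.zeros((n, n))
for i in range(n):
    Mmat[i, i] = -(s_half[i] + s_half[i + 1]) / h**2 + 2.0 * sec2[1 + i] ** 2
    if i + 1 < n:
        Mmat[i, i + 1] = Mmat[i + 1, i] = s_half[i + 1] / h**2

mu, nu = np.linalg.eigh(Mmat)          # eigenvalues mu_j, orthonormal modes nu_j
E2pt = (nu * (1.0 / mu)) @ nu.T        # mode sum: sum_j (hbar/mu_j) nu_j nu_j'
```

The eigenpairs $(\mu_{j},\nu_{j})$ reproduce the discretized $\mathscr{M}$ through the mode sum, and the resulting $2$-point function is its matrix inverse, in direct analogy with equations \eqref{vVMdetmodesum} and \eqref{2ptmodesum}.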
We next model the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}(\tau)$ in the number $N_{2}^{\mathrm{SL}}(\tau)$ of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ on the basis of the $2$-point functions $\mathbb{E}_{\mathrm{EdS}}[v_{2}(t)\,v_{2}(t')]$ and $\mathbb{E}_{\mathrm{LdS}}[v_{2}(t)\,v_{2}(t')]$ given in equations \eqref{EdS2ptdef} and \eqref{LdS2ptdef}.
In particular, we derive a discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ appropriate to causal triangulations of each of the $2$-point functions
$\mathbb{E}_{\mathrm{EdS}}[v_{2}(t)\,v_{2}(t')]$ and $\mathbb{E}_{\mathrm{LdS}}[v_{2}(t)\,v_{2}(t')]$, and we subsequently perform a fit of $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ to $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$. Ambj\o rn \emph{et al} \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2} and Cooperman \cite{JHC} have previously performed such a derivation in the case of Euclidean de Sitter space in $3+1$ dimensions; we adapt their techniques to the cases of Euclidean de Sitter space in $2+1$ dimensions and a portion of Lorentzian de Sitter spacetime in $2+1$ dimensions. We again assume the finite-size scaling \emph{Ansatz} based on equation \eqref{FSSansatz}.
In appendix \ref{derivation2pt} we employ the \emph{Ansatz} based on equation \eqref{FSSansatz} to derive the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ for each of the $2$-point functions $\mathbb{E}_{\mathrm{EdS}}[v_{2}( t)\, v_{2}( t')]$ and $\mathbb{E}_{\mathrm{LdS}}[v_{2}( t)\, v_{2}( t')]$. Specifically, we derive $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ in the form of equation \eqref{2ptmodesum}, determining the eigenvectors $\nu_{j}(\tau)$ and associated eigenvalues $\mu_{j}$ of $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$. We now perform fits of the eigenvectors $\nu_{j}(\tau)$ to the eigenvectors $\eta_{j}(\tau)$ and of the eigenvalues $\mu_{j}$ to the eigenvalues $\lambda_{j}$. Once the best fit of $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ fixes the fit parameter $\bar{s}_{0}$, there is in fact no fitting to perform aside from a single overall rescaling of the eigenvalues corresponding to the value of $1/64\pi^{2}\hbar G\ell_{\mathrm{dS}}^{3}$. Employing this value of $\bar{s}_{0}$ accords with our treatment of $v_{2}(t)$ as a perturbation.
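Since $\bar{s}_{0}$ is fixed by the volume-profile fit, the only remaining freedom is the overall eigenvalue rescaling encoding the value of $1/64\pi^{2}\hbar G\ell_{\mathrm{dS}}^{3}$. This one-parameter fit is an elementary linear least-squares problem; a minimal sketch (in Python, with invented eigenvalues):

```python
import numpy as np

# Assuming lambda_j ~ c * mu_j for a single constant c, the least-squares
# estimate is c = sum(lambda_j mu_j) / sum(mu_j^2).  The eigenvalues
# below are invented purely for illustration.
mu = np.array([1.0, 3.0, 7.0, 13.0, 21.0])   # model eigenvalues
lam = 2.5 * mu                               # synthetic measured eigenvalues
c = np.dot(lam, mu) / np.dot(mu, mu)         # best fit rescaling
```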
To establish a point of comparison, we first consider the ensemble $\mathcal{E}_{\mathrm{E}}$ characterized by $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$, for which, as depicted in figure \ref{eigenvectorfitE}, $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ exhibits the characteristic behavior of phase C. We display the first six eigenvectors $\eta_{j}(\tau)$ of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ overlain with the corresponding eigenvectors $\nu_{j}(\tau)$ of the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of $\mathbb{E}_{\mathrm{EdS}}[v_{2}(t)\,v_{2}(t')]$ in figure \ref{eigenvectorfitE}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{eigenvectorfitsT21V30k1IBFB4.pdf}
\caption{First six eigenvectors $\eta_{j}(\tau)$ (blue circles) of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}(\tau)$ in the number of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ (Euclidean-like ensemble $\mathcal{E}_{\mathrm{E}}$) overlain with the eigenvectors $\nu_{j}(\tau)$ (black lines) of the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of the connected $2$-point function $\mathbb{E}_{\mathrm{EdS}}[v_{2}( t)\,v_{2}( t')]$ of perturbations $v_{2}(t)$ in the spatial $2$-volume $V_{2}^{(\mathrm{EdS})}( t)$ as a function of the global time coordinate $ t$ of Euclidean de Sitter space.}
\label{eigenvectorfitE}
\end{figure}
We display the eigenvalues $\lambda_{j}$ of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ overlain with the corresponding eigenvalues $\mu_{j}$ of $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ in figure \ref{eigenvaluefitE}.
\begin{figure}
\centering
\includegraphics[scale=1]{eigenvaluesfitT21V30k1IBF4.pdf}
\caption{Eigenvalues $\lambda_{j}$ (blue circles) of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}(\tau)$ in the number of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ (Euclidean-like ensemble $\mathcal{E}_{\mathrm{E}}$) overlain with the eigenvalues $\mu_{j}$ (black lines) of the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of the connected $2$-point function $\mathbb{E}_{\mathrm{EdS}}[v_{2}( t)\,v_{2}( t')]$ of perturbations $v_{2}(t)$ in the spatial $2$-volume $V_{2}^{(\mathrm{EdS})}( t)$ as a function of the global time coordinate $ t$ of Euclidean de Sitter space.}
\label{eigenvaluefitE}
\end{figure}
The fits of $\nu_{j}(\tau)$ to $\eta_{j}(\tau)$ and of $\mu_{j}$ to $\lambda_{j}$ are representative of the application of the above Euclidean model to measurements of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ \cite{JA&AG&JJ&RL1,JA&AG&JJ&RL2,JHC}. Clearly, this model provides an accurate description of the connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ for the ensemble $\mathcal{E}_{\mathrm{E}}$.
We now test the hypothesis that the connected $2$-point function $\mathbb{E}_{\mathrm{LdS}}[v_{2}( t)\,v_{2}( t')]$ of linear gravitational perturbations $v_{2}( t)$ propagating on Lorentzian de Sitter spacetime accurately describes the shape of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ for the Lorentzian-like ensembles represented in figures \ref{nonminnonminsame2} and \ref{nonminnonmin2}.
We consider only the ensemble $\mathcal{E}_{\mathrm{L}}$ characterized by $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$. We display the first six eigenvectors $\eta_{j}(\tau)$ of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ overlain with the corresponding eigenvectors $\nu_{j}(\tau)$ of the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of $\mathbb{E}_{\mathrm{LdS}}[v_{2}(t)\,v_{2}(t')]$ in figure \ref{eigenvectorfitL}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{eigenvectorfitsT29V30k1IBFB600.pdf}
\caption{First six eigenvectors $\eta_{j}(\tau)$ (blue circles) of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}(\tau)$ in the number of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ (Lorentzian-like ensemble $\mathcal{E}_{\mathrm{L}}$) overlain with the eigenvectors $\nu_{j}(\tau)$ (black lines) of the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of the connected $2$-point function $\mathbb{E}_{\mathrm{LdS}}[v_{2}( t)\,v_{2}( t')]$ of perturbations $v_{2}(t)$ in the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}( t)$ as a function of the global time coordinate $ t$ of Lorentzian de Sitter spacetime.}
\label{eigenvectorfitL}
\end{figure}
We display the eigenvalues $\lambda_{j}$ of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ overlain with the corresponding eigenvalues $\mu_{j}$ of $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ in figure \ref{eigenvaluefitL}.
\begin{figure}
\centering
\includegraphics[scale=1]{eigenvaluesfitT29V30k1IBFB600.pdf}
\caption{Eigenvalues $\lambda_{j}$ (blue circles) of the ensemble average connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ of deviations $n_{2}^{\mathrm{SL}}(\tau)$ in the number of spacelike $2$-simplices as a function of the discrete time coordinate $\tau$ for $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$ (Lorentzian-like ensemble $\mathcal{E}_{\mathrm{L}}$) overlain with the eigenvalues $\mu_{j}$ (black lines) of the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of the connected $2$-point function $\mathbb{E}_{\mathrm{LdS}}[v_{2}( t)\,v_{2}( t')]$ of perturbations $v_{2}(t)$ in the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}( t)$ as a function of the global time coordinate $ t$ of Lorentzian de Sitter spacetime.}
\label{eigenvaluefitL}
\end{figure}
Clearly, the above Lorentzian model provides an accurate description of the connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ for the ensemble $\mathcal{E}_{\mathrm{L}}$.
These analyses, straightforwardly interpreted, provide evidence supporting the conjecture of Cooperman and Miller: a portion of Lorentzian de Sitter spacetime accurately describes the shape of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$, and the connected $2$-point function of linear perturbations propagating on Lorentzian de Sitter spacetime accurately describes the shape of $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ for the Lorentzian-like ensembles represented in figures \ref{nonminnonminsame2} and \ref{nonminnonmin2}. Nevertheless, we proffer an even more straightforward explanation of these results in section \ref{argumentrefutation}, casting serious doubt on the conjecture of Cooperman and Miller.
Ideally, we would extend our analysis of the above modeling of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ and $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ in two directions. First, we would perform a finite-size scaling analysis in which we consider ensembles of causal triangulations characterized by increasing numbers $\bar{N}_{3}$ of $3$-simplices---and commensurately increasing numbers $\bar{T}$ of time slices and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ of initial and final boundary spacelike $2$-simplices---to extrapolate the accuracy of our modeling towards the infinite-volume limit. Such a finite-size scaling analysis is more difficult to perform in the context of transition amplitudes:
the manner in which one must commensurately increase $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$, $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$, and $\bar{T}$ with $\bar{N}_{3}$ to consider transition amplitudes related by the finite-size scaling \emph{Ansatz} based on equation \eqref{FSSansatz} is nontrivial.
For this reason we have not yet performed any scaling analyses of the transition amplitudes; rather, we rely on the similarities of our numerical measurements to those of previous studies as justification for our use of the finite-size scaling \emph{Ansatz} based on equation \eqref{FSSansatz}.
Second, we would consider models based on departures from Einstein gravity---for instance, Ho\v{r}ava-Lifshitz or higher-order gravity---to assess our model's accuracy. Cooperman and Houthoff perform such an analysis, though only for Euclidean-like ensembles, in a forthcoming paper \cite{JHC&WH}.
\section{Argument and refutation}\label{argumentrefutation}
Extraordinary claims require extraordinary evidence. The conjecture of Cooperman and Miller constitutes an extraordinary claim, but we now argue that the analyses presented in section \ref{analysissupport} of the measurements presented in section \ref{evidenceconjecture} do not furnish extraordinary evidence. We offer an alternative explanation of these measurements and their analysis, one much more plausible as well as much more mundane.
We based the analyses of section \ref{analysissupport} on a minisuperspace truncation of $(2+1)$-dimensional Einstein gravity with either Euclidean de Sitter space or Lorentzian de Sitter spacetime as its ground state. As we presented this model in section \ref{analysissupport}, we did not incorporate with sufficient care the setting
of our numerical simulations of causal triangulations. Recall from section \ref{CDT} that we run a given simulation at fixed number $\bar{N}_{3}$ of $3$-simplices and at fixed numbers $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})$ of initial and final boundary spacelike $2$-simplices.\footnote{We also fix the number $\bar{T}$ of time slices; however, our model allows for an arbitrary lapse---the constant $\omega$, which propagates into the fit parameter $\bar{s}_{0}$---so we do not impose a constraint associated with fixed $\bar{T}$.} We accounted for these constraints by normalizing $V_{2}^{(\mathrm{dS})}(t)$ to $V_{3}$ and $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ to $\bar{N}_{3}$ in the derivation of appendix \ref{derivation1pt} and by enforcing boundary conditions on $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ in the best fit to $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$. We did not, however, explicitly include constraints implementing a fixed spacetime $3$-volume $V_{3}$ and fixed initial and final spatial $2$-volumes $V_{2}(t_{\mathrm{i}})$ and $V_{2}(t_{\mathrm{f}})$ in the action \eqref{MSM2action4} defining our model. We now augment our model's action with the relevant constraints and carefully extract their consequences.
Explicitly imposing these constraints
in the action \eqref{MSM2action4} for Euclidean signature, we arrive at the augmented action
\begin{eqnarray}\label{MSM2action4constrained}
S_{\mathrm{cl}}[V_{2}]&=&\frac{\omega}{32\pi G}\int_{ t_{\mathrm{i}}}^{ t_{\mathrm{f}}}\mathrm{d} t\left[\frac{\dot{V}_{2}^{2}(t)}{\omega^{2}V_{2}(t)}-4\Lambda V_{2}(t)\right]+\lambda_{V_{3}}\left[\int_{ t_{\mathrm{i}}}^{ t_{\mathrm{f}}}\mathrm{d} t\,\omega V_{2}(t)-V_{3}\right]\nonumber\\ &&\qquad+\lambda_{\mathrm{i}}\left[\int_{ t_{\mathrm{i}}}^{ t_{\mathrm{f}}}\mathrm{d} t\,\omega\,\delta( t- t_{\mathrm{i}})V_{2}(t)-V_{2}( t_{\mathrm{i}})\right]+\lambda_{\mathrm{f}}\left[\int_{ t_{\mathrm{i}}}^{ t_{\mathrm{f}}}\mathrm{d} t\,\omega\,\delta( t- t_{\mathrm{f}})V_{2}(t)-V_{2}( t_{\mathrm{f}})\right]
\end{eqnarray}
in which $\lambda_{V_{3}}$ is the Lagrange multiplier associated with the constraint of fixed spacetime $3$-volume $V_{3}$, and $\lambda_{\mathrm{i}}$ and $\lambda_{\mathrm{f}}$ are the Lagrange multipliers associated with the constraints of fixed initial and final spatial $2$-volumes $V_{2}(t_{\mathrm{i}})$ and $V_{2}(t_{\mathrm{f}})$. The cosmological constant term also acts to constrain the spacetime $3$-volume $V_{3}$ with the cosmological constant itself serving as the associated Lagrange multiplier. We include the additional constraint of fixed $V_{3}$ to make our argument more transparent; in particular, we think of the cosmological constant $\Lambda$ as fixed and the Lagrange multiplier $\lambda_{V_{3}}$ as variable.
Varying the action \eqref{MSM2action4constrained} with respect to $V_{2}(t)$, we obtain the equation of motion
\begin{equation}
2V_{2}(t)\ddot{V}_{2}(t)-\dot{V}_{2}^{2}(t)+4\omega^{2}(\Lambda-8\pi G\lambda_{V_{3}})V_{2}^{2}(t)=0,
\end{equation}
having the general solution
\begin{equation}\label{constrainedminisuperspacesolutions}
V_{2}( t)=\left\{\begin{array}{lcc} A\cos^{2}{[\omega\sqrt{\Lambda-8\pi G\lambda_{V_{3}}}(t-t_{0})]} & \mathrm{if} & \Lambda-8\pi G \lambda_{V_{3}}>0 \\ A\left(t-t_{0}\right)^{2} & \mathrm{if} & \Lambda-8\pi G\lambda_{V_{3}}=0 \\ A\cosh^{2}{[\omega\sqrt{8\pi G\lambda_{V_{3}}-\Lambda}(t-t_{0})]} & \mathrm{if} & \Lambda-8\pi G\lambda_{V_{3}}<0 \end{array}\right.
\end{equation}
for integration constants $A$ and $t_{0}$. Varying the action \eqref{MSM2action4constrained} with respect to $\lambda_{V_{3}}$, we obtain the constraint
\begin{equation}\label{V3constraint}
V_{3}=\int_{ t_{\mathrm{i}}}^{ t_{\mathrm{f}}}\mathrm{d} t\,\omega V_{2}( t),
\end{equation}
and varying the action \eqref{MSM2action4constrained} with respect to $\lambda_{\mathrm{i}}$ and $\lambda_{\mathrm{f}}$ constrains $V_{2}(t)$ to have the initial and final boundary values $V_{2}(t_{\mathrm{i}})$ and $V_{2}(t_{\mathrm{f}})$.
We now focus on the spatial $2$-volume $V_{2}(t)$ for $\Lambda-8\pi G\lambda_{V_{3}}>0$ given in the first line of equation \eqref{constrainedminisuperspacesolutions}. Let $\ell_{\mathrm{eff}}^{-2}=\Lambda-8\pi G\lambda_{V_{3}}$.
Recalling equation \eqref{dSvolprof}, we observe that
the spatial $2$-volume $V_{2}(t)$ for $\ell_{\mathrm{eff}}^{-2}>0$ is that of Euclidean de Sitter space if $A=4\pi\ell_{\mathrm{eff}}^{2}$.
The further assumption that $V_{2}(t_{\mathrm{i}})=V_{2}(t_{\mathrm{f}})$ dictates that $t_{0}=0$. The difference $t_{\mathrm{f}}-t_{\mathrm{i}}$ (and, indeed, the value of $t_{\mathrm{i}}=-t_{\mathrm{f}}$) is then determined in terms of $V_{2}(t_{\mathrm{i}})=V_{2}(t_{\mathrm{f}})$ and $\ell_{\mathrm{eff}}$:
\begin{equation}\label{timeinterval}
\omega(t_{\mathrm{f}}-t_{\mathrm{i}})=2\ell_{\mathrm{eff}}\cos^{-1}{\sqrt{\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}}.
\end{equation}
Substituting $V_{2}(t)$ for $\ell_{\mathrm{eff}}^{-2}>0$, $A=4\pi\ell_{\mathrm{eff}}^{2}$, and $t_{0}=0$ into equation \eqref{V3constraint}, we obtain
\begin{equation}\label{V3expression}
V_{3}=4\pi\ell_{\mathrm{eff}}^{3}\left[\cos^{-1}{\sqrt{\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}}+\sqrt{\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}\sqrt{1-\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}\right].
\end{equation}
Solving equation \eqref{V3expression} for $4\pi\ell_{\mathrm{eff}}^{2}$ and replacing $4\pi\ell_{\mathrm{eff}}^{2}$ in equation \eqref{constrainedminisuperspacesolutions}, we obtain
\begin{equation}\label{EdSvolprofconstrained}
V_{2}(t)=\frac{V_{3}}{\ell_{\mathrm{eff}}}\left[\cos^{-1}{\sqrt{\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}}+\sqrt{\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}\sqrt{1-\frac{V_{2}(t_{\mathrm{f}})}{4\pi\ell_{\mathrm{eff}}^{2}}}\right]^{-1}\cos^{2}{\left(\frac{\omega t}{\ell_{\mathrm{eff}}}\right)}.
\end{equation}
Equation \eqref{EdSvolprofconstrained} gives the spatial $2$-volume as a function of the global time coordinate of a portion of Euclidean de Sitter space constrained to have spacetime $3$-volume $V_{3}$ and initial and final boundary spatial $2$-volumes $V_{2}(t_{\mathrm{i}})=V_{2}(t_{\mathrm{f}})$. For given values of $G$ and $\Lambda$, with either the gauge fixing $\omega=\mathrm{constant}$ or the gauge fixing $t_{\mathrm{f}}=\mathrm{constant}$, we may choose values for $V_{3}$ and $V_{2}(t_{\mathrm{f}})$ and determine (if possible) the value of $\lambda_{V_{3}}$ dictated by the chosen values of $V_{3}$ and $V_{2}(t_{\mathrm{f}})$.
If $V_{2}(t_{\mathrm{f}})$ is not too large in comparison to $V_{3}$, then $\Lambda>8\pi G\lambda_{V_{3}}$, and the solution is a portion of Euclidean de Sitter space; however, if $V_{2}(t_{\mathrm{f}})$ is too large in comparison to $V_{3}$, then $\Lambda<8\pi G\lambda_{V_{3}}$, and the solution is a portion of Lorentzian de Sitter spacetime.
We now give examples of these two cases. Suppose that $G=1/8\pi$ and $\Lambda=1$. Choose first $V_{3}=13500$, $V_{2}( t_{\mathrm{f}})=0$, and $t_{\mathrm{f}}=10$. Equations \eqref{timeinterval} and \eqref{V3expression} then yield $\omega=1.38$ and $\lambda_{V_{3}}=0.99$ for which $\ell_{\mathrm{eff}}^{2}=77.63$. We display the spatial $2$-volume $V_{2}(t)$ for this case in figure \ref{exampleV2}(a). This first example models the circumstances of the Euclidean-like ensemble $\mathcal{E}_{\mathrm{E}}$: in this case $N_{2}^{\mathrm{SL}}(\tau_{\mathrm{f}})$ is not too large in comparison to $N_{3}$, so the discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ of the spatial $2$-volume $V_{2}^{(\mathrm{EdS})}(t)$ of Euclidean de Sitter space accurately describes $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$. Compare figure \ref{exampleV2}(a) to figure \ref{volproffitT21V30K1IBFB4}.
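The quoted numbers for the first example follow directly from the relations above; a quick numerical cross-check (our own sketch, with our own variable names): since $V_{2}(t_{\mathrm{f}})=0$, the profile $V_{2}(t)=4\pi\ell_{\mathrm{eff}}^{2}\cos^{2}(\omega t/\ell_{\mathrm{eff}})$ vanishing at $t_{\mathrm{f}}$ fixes $\omega t_{\mathrm{f}}/\ell_{\mathrm{eff}}=\pi/2$, and equation \eqref{V3expression} reduces to $V_{3}=2\pi^{2}\ell_{\mathrm{eff}}^{3}$.

```python
import math

# Cross-check of the first example (our sketch): with V2(t_f) = 0,
# V2(t) = 4*pi*l_eff^2 * cos^2(omega*t/l_eff) vanishes at t_f,
# so omega*t_f/l_eff = pi/2, and V3 = 2*pi^2*l_eff^3.
V3, tf = 13500.0, 10.0

l_eff = (V3 / (2.0 * math.pi ** 2)) ** (1.0 / 3.0)
omega = (math.pi / 2.0) * l_eff / tf

print(l_eff ** 2, omega)  # approximately 77.63 and 1.38, as quoted
```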
\begin{figure}[!ht]
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.3)
\put(0.0005,0.005){\includegraphics[scale=1]{EuclideanlikeexampleV2.pdf}}
\put(0.51,0.00){\includegraphics[scale=1]{LorentzianlikeexampleV2.pdf}}
\put(0.26,-0.03){(a)}
\put(0.77,-0.03){(b)}
\end{picture}
\caption{Spatial $2$-volume $V_{2}(t)$ as a function of the global time coordinate $t$ for $A=4\pi\ell_{\mathrm{eff}}^{2}$, $t_{0}=0$, $G=1/8\pi$, and $\Lambda=1$: (a) $V_{3}=13500$, $V_{2}(t_{\mathrm{i}})=V_{2}(t_{\mathrm{f}})=0$, $\omega=1.38$, $\lambda_{V_{3}}=0.99$, and $\ell_{\mathrm{eff}}^{2}=77.63$; (b) $V_{3}=3300$, $V_{2}(t_{\mathrm{i}})=V_{2}(t_{\mathrm{f}})=600$, $\omega=0.31$, $\lambda_{V_{3}}=1.04$, and $\ell_{\mathrm{eff}}^{2}=-22.35$.}
\label{exampleV2}
\end{figure}
Choose second $V_{3}=3300$, $V_{2}(t_{\mathrm{f}})=600$, and $t_{\mathrm{f}}=14$. Equations \eqref{timeinterval} and \eqref{V3expression} then yield $\omega=0.31$ and $\lambda_{V_{3}}=1.04$ for which $\ell_{\mathrm{eff}}^{2}=-22.35$. We display the spatial $2$-volume $V_{2}(t)$ for this case in figure \ref{exampleV2}(b). This second example models the circumstances of the Lorentzian-like ensemble $\mathcal{E}_{\mathrm{L}}$: in this case $N_{2}^{\mathrm{SL}}(\tau_{\mathrm{f}})$ is too large in comparison to $N_{3}$, so the discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ of the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}(t)$ of Lorentzian de Sitter spacetime accurately describes $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$. Compare figure \ref{exampleV2}(b) to figure \ref{nonminnonmin2fit}(a). The discrete analogue $\mathcal{N}_{2}^{\mathrm{SL}}(\tau)$ of the spatial $2$-volume $V_{2}^{(\mathrm{LdS})}(t)$ of Lorentzian de Sitter spacetime nevertheless arises from a model based on Euclidean Einstein gravity. Furthermore, the operator $\mathscr{M}(t,t')$ derived from the action \eqref{MSM2action4constrained} for linear perturbations $v_{2}(t)$ about the spatial $2$-volume $V_{2}(t)$ for $\ell_{\mathrm{eff}}^{2}<0$ coincides with the operator \eqref{vVMdLdS}, so the discrete analogue $\mathsf{n}_{2}^{\mathrm{SL}}(\tau)\,\mathsf{n}_{2}^{\mathrm{SL}}(\tau')$ of the connected $2$-point function $\mathbb{E}_{\mathrm{LdS}}[v_{2}(t)\,v_{2}(t')]$ still serves as the correct model for the connected $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$.
The above discussion points towards an explanation of the measurements of $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ and $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ presented in section \ref{evidenceconjecture} and their analysis presented in section \ref{analysissupport} different from that of the conjecture of Cooperman and Miller. The model based on a minisuperspace truncation of Euclidean Einstein gravity also accurately describes $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ and $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ for the Lorentzian-like ensembles: the interaction of the constraints of fixed spacetime $3$-volume and fixed initial and final boundary spatial $2$-volumes forces $\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ and $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$ to be Lorentzian in form.
We conclude accordingly that the geometries of causal triangulations comprising Lorentzian-like ensembles are not Lorentzian but Euclidean in nature.
Our argument does not, however, clinch the case against the conjecture of Cooperman and Miller: had we run our reasoning starting from the action \eqref{MSM2action4} in Lorentzian signature, Euclidean de Sitter space would have arisen from Lorentzian de Sitter spacetime as the Lagrange multiplier $\lambda_{V_{3}}$ forced $\Lambda-8\pi G\lambda_{V_{3}}$ to change sign, and we would have concluded that geometries resembling Lorentzian de Sitter spacetime on sufficiently large scales dominate the ground state of causal dynamical triangulations. We chose to present our argument starting from the action \eqref{MSM2action4} in Euclidean signature because we know that the configurations simulated numerically must be Euclidean in nature: the Metropolis algorithm simply cannot handle complex contributions to the partition function \eqref{partitionfunctionfixedTN}. Still, we would like more definitive evidence for the Euclidean nature of the causal triangulations of Lorentzian-like ensembles represented in figures \ref{nonminnonminsame2} and \ref{nonminnonmin2}.
The two observables that we measured---$\langle N_{2}^{\mathrm{SL}}(\tau)\rangle$ and $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$---probe the quantum geometry defined by an ensemble of causal triangulations only on its largest scales.
Since we do not consider observables that probe this quantum geometry on small scales, we do not assess the nature---Euclidean or Lorentzian---of the quantum geometry on small scales. To test the conjecture of Cooperman and Miller more definitively, we would like to make a statement regarding the nature of the quantum geometry on smaller scales, in particular, regarding the nature of local interactions, which should naively appear quite different if they are in fact Lorentzian.
We should therefore probe the quantum geometry on small scales by measuring appropriate observables.
Accordingly, we consider numerical measurements of the spectral dimension, a scale-dependent measure of the dimensionality of the quantum geometry, which probes the quantum geometry defined by an ensemble of causal triangulations on all scales. In appendix \ref{spectraldimension}, following several previous authors \cite{JA&JJ&RL6,JA&JJ&RL7,CA&SJC&JHC&PH&RKK&PZ,DB&JH,JHC,DNC&JJ,RK}, we define the spectral dimension $\mathcal{D}_{\mathrm{s}}(\sigma)$ as a function of the diffusion time $\sigma$, and we explain its numerical estimation. As in our analysis of the $2$-point function $\langle n_{2}^{\mathrm{SL}}(\tau)\,n_{2}^{\mathrm{SL}}(\tau')\rangle$, we compare the spectral dimension $\mathcal{D}_{\mathrm{s}}(\sigma)$ of the ensemble $\mathcal{E}_{\mathrm{E}}$ characterized by $\bar{T}=21$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$ to the spectral dimension $\mathcal{D}_{\mathrm{s}}(\sigma)$ of the ensemble $\mathcal{E}_{\mathrm{L}}$ characterized by $\bar{T}=29$, $\bar{N}_{3}=30850$, $k_{0}=1.00$, and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$. We display $\mathcal{D}_{\mathrm{s}}(\sigma)$ for the ensemble $\mathcal{E}_{\mathrm{E}}$ in figure \ref{specdimfig}(a) and for the ensemble $\mathcal{E}_{\mathrm{L}}$ in figure \ref{specdimfig}(b).
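The estimator itself can be illustrated on a toy geometry (this sketch is ours, not the measurement code used for the ensembles): diffusing a random walker on a flat $2$-dimensional torus, the estimate $\mathcal{D}_{\mathrm{s}}(\sigma)=-2\,\mathrm{d}\ln P(\sigma)/\mathrm{d}\ln\sigma$ built from the return probability $P(\sigma)$ should plateau near the topological dimension of $2$ at intermediate diffusion times. The lattice size and diffusion time below are arbitrary illustrative choices.

```python
import numpy as np

# Toy spectral-dimension estimator: simple random walk on an N x N
# periodic square lattice. The walk's transition eigenvalues are
# (cos k1 + cos k2)/2, and P(sigma) is the heat-kernel trace per site.
N = 40
k = 2.0 * np.pi * np.arange(N) / N
lam = 0.5 * (np.cos(k)[:, None] + np.cos(k)[None, :])

def return_prob(sigma):
    # use even sigma only: the walk is bipartite
    return np.mean(lam ** sigma)

def spectral_dim(sigma):
    # D_s = -2 dln P / dln sigma, by a centered finite difference
    lo, hi = return_prob(sigma - 2), return_prob(sigma + 2)
    return -2.0 * (np.log(hi) - np.log(lo)) / (np.log(sigma + 2) - np.log(sigma - 2))

print(spectral_dim(50))  # close to the topological dimension of 2
```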
\begin{figure}
\centering
\setlength{\unitlength}{\textwidth}
\begin{picture}(1,0.3)
\put(0.0005,0.005){\includegraphics[scale=1.5]{specdimEuclidean.pdf}}
\put(0.51,0.00){\includegraphics[scale=1.5]{specdimLorentzian.pdf}}
\put(0.26,-0.03){(a)}
\put(0.77,-0.03){(b)}
\end{picture}
\caption{Ensemble average spectral dimension $\langle\mathcal{D}_{\mathrm{s}}\rangle$ as a function of diffusion time $\sigma$ for $\bar{N}_{3}=30850$ and $k_{0}=1.00$: (a) $\bar{T}=21$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=4$; (b) $\bar{T}=29$ and $N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{i}}^{2})=N_{2}^{\mathrm{SL}}(\mathsf{S}_{\mathrm{f}}^{2})=600$.}
\label{specdimfig}
\end{figure}
The plot in figure \ref{specdimfig}(a) shows the behavior of $\mathcal{D}_{\mathrm{s}}(\sigma)$ previously understood as characteristic of phase C \cite{JA&JJ&RL6,JA&JJ&RL7,CA&SJC&JHC&PH&RKK&PZ,DB&JH,JHC,DNC&JJ,RK}. For intermediate diffusion times ($\sigma\sim200$ for $\mathcal{E}_{\mathrm{E}}$, $\sigma\sim150$ for $\mathcal{E}_{\mathrm{L}}$), the spectral dimension peaks at approximately the topological dimension of $3$; for smaller diffusion times ($\sigma\leq200$ for $\mathcal{E}_{\mathrm{E}}$, $\sigma\leq150$ for $\mathcal{E}_{\mathrm{L}}$), the spectral dimension dynamically reduces towards a value near $2$; and for larger diffusion times ($\sigma\geq200$ for $\mathcal{E}_{\mathrm{E}}$, $\sigma\geq150$ for $\mathcal{E}_{\mathrm{L}}$), the spectral dimension decays exponentially in the presence of positive curvature. The two measurements of $\mathcal{D}_{\mathrm{s}}(\sigma)$ displayed in figure \ref{specdimfig} exhibit essentially the same qualitative behavior and similar quantitative behavior.
The maximal value of $\mathcal{D}_{\mathrm{s}}(\sigma)$ ($2.96$ for $\mathcal{E}_{\mathrm{E}}$, $2.72$ for $\mathcal{E}_{\mathrm{L}}$) is the primary difference. As Benedetti and Henson found for Euclidean-like ensembles \cite{DB&JH}, the depression of $\mathcal{D}_{\mathrm{s}}(\sigma)$ below the topological value of $3$ is a finite-size effect. We have verified that this depression
is also a finite-size effect for Lorentzian-like ensembles. Although ensembles $\mathcal{E}_{\mathrm{E}}$ and $\mathcal{E}_{\mathrm{L}}$ are both characterized by $\bar{N}_{3}=30850$, we suspect that the ensemble $\mathcal{E}_{\mathrm{L}}$ exhibits stronger finite-size effects because the random walker can only probe a small portion of a quantum geometry resembling Lorentzian de Sitter spacetime on sufficiently large scales. Since $\mathcal{D}_{\mathrm{s}}(\sigma)$ for the ensemble $\mathcal{E}_{\mathrm{L}}$ behaves so similarly to $\mathcal{D}_{\mathrm{s}}(\sigma)$ for the ensemble $\mathcal{E}_{\mathrm{E}}$, we take these measurements of $\mathcal{D}_{\mathrm{s}}(\sigma)$ as evidence that the geometries of causal triangulations comprising the ensemble $\mathcal{E}_{\mathrm{L}}$ are Euclidean in nature, supporting our above conclusion.
\section{Lorentzian from Euclidean}\label{conclusion}
Studying the causal dynamical triangulations of $(2+1)$-dimensional Einstein gravity in the presence of initial and final spacelike boundaries, Cooperman and Miller identified several ensembles of causal triangulations the quantum geometry of which on sufficiently large scales appears to resemble closely that of Lorentzian de Sitter spacetime \cite{JHC&JMM}. On the basis of these findings, they conjectured that the partition function \eqref{partitionfunction} is dominated by causal triangulations the quantum geometry of which is nearly that of Lorentzian de Sitter spacetime on sufficiently large scales, possibly \emph{via} a mechanism akin to that of the Hartle-Hawking no-boundary proposal. The conjecture of Cooperman and Miller presented an exciting possibility: the definition of a Lorentzian quantum theory of gravity \emph{via} a Euclidean path integral, alleviating the necessity of reversing the Wick rotation of causal dynamical triangulations. We have argued for a much more plausible and mundane explanation of their findings: the implementation and interaction of multiple constraints may result in the partition function \eqref{partitionfunctionfixedTN} being dominated by (Euclidean) causal triangulations that closely resemble Lorentzian de Sitter spacetime on large scales.
While not particularly exciting, our explanation adds one further piece of evidence for the proper behavior of the partition function defined \emph{via} causal dynamical triangulations. Our explanation also serves as a cautionary tale: beware hastily drawing conclusions regarding signs of signature change within the partition function \eqref{partitionfunction} of causal dynamical triangulations.
The issue of reversing the Wick rotation of causal dynamical triangulations thus remains.
The results of modeling the large-scale quantum geometry within phase C on the basis of a minisuperspace truncation of Euclidean Einstein gravity, as exemplified by our modeling of the ensemble $\mathcal{E}_{\mathrm{E}}$ (and, indeed, also the ensemble $\mathcal{E}_{\mathrm{L}}$), suggest a straightforward possibility:
since (Euclidean) causal triangulations resembling Euclidean de Sitter space on sufficiently large scales dominate the partition function \eqref{partitionfunction}, obtained by Wick rotation from the path sum \eqref{causalpathsum}, (Lorentzian) causal triangulations resembling Lorentzian de Sitter spacetime on sufficiently large scales dominate the path sum \eqref{causalpathsum}.
For this interpretation to have force, one must establish a rigorous path from the Euclidean theory to the Lorentzian theory by demonstrating an Osterwalder-Schrader-type theorem for causal dynamical triangulations. Although technically challenging, achieving such a theorem is likely within reach since the action $\mathcal{S}_{\mathrm{cl}}^{(\mathrm{E})}(\mathcal{T}_{c})$ for Einstein gravity is reflection-positive, a key axiom of the Osterwalder-Schrader reconstruction theorem. We maintain that the promising results of causal dynamical triangulations warrant such an effort.
\section*{Acknowledgments}
We thank Christian Anderson, David Kamensky, and especially Rajesh Kommu for allowing us to employ parts of their computer codes. We also thank Joe Henson, Ian Morrison, Erik Schnetter, and especially Steve Carlip and Renate Loll for useful discussions. JHC acknowledges support from the Department of Energy under grant DE-FG02-91ER40674 at the University of California, Davis and from Stichting voor Fundamenteel Onderzoek der Materie, itself supported by Nederlandse Organisatie voor Wetenschappelijk Onderzoek. KL acknowledges support from the SURF program at Chapman University and the hospitality of the Department of Physics at the University of California, Davis. JMM acknowledges support from the National Science Foundation under REU grant PHY-1004848 at the University of California, Davis. This work utilized the Janus supercomputer, which is supported by the National Science Foundation (award number CNS-0821794) and the University of Colorado, Boulder. The Janus supercomputer is a joint effort of the University of Colorado, Boulder, the University of Colorado, Denver, and the National Center for Atmospheric Research. This research was supported in part by the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.
\section{Introduction}
Chiral symmetry is
one of the fundamental symmetries used to classify the topological states
of matter. \cite{wen89, schnyder2008classification, kitaev09}
The symmetry relates positive and negative parts
in the energy spectrum, and a nontrivial topological nature
is linked to a singular property at zero energy.
The chiral symmetry is also called sublattice symmetry,
because the bases are divided into
two sublattices with different eigenvalues of the chiral operator
$\Gamma=+1$ and $-1$,
and the Hamiltonian has no matrix elements inside the same
sublattice group.
The difference between the numbers of
sublattice sites with $\Gamma=+1$ and $-1$
A nontrivial index indicates the existence of
topologically protected zero energy modes.
If the chiral Hamiltonian is defined in a phase space,
on the other hand,
we have another topological invariant defined by a winding number
(Berry phase) for a closed path.
\cite{wen89,ryu_hatsugai_2002,ryu2010topological,teo-kane10,STYY11}
A nonzero winding number is also a source of
topological objects such as
band touching points and zero energy boundary modes.
A chiral symmetric system frequently comes with additional
material-dependent spatial symmetries such as reflection, rotation,
and inversion.
Recent progress in the study of
topological phases has revealed that the existence
of the spatial symmetry enriches the
topological structure of
the system\cite{Fu11,HPB11,TZV10,HLLDBF12,SMJZ13,FHEA12,teo2013existence,ueno2013symmetry,chiu2013classification,zhang2013topological}.
The spatial symmetry often stabilizes the
band touching points which are otherwise unstable.
For example, the reflection symmetry defines topological crystalline
insulators with mirror Chern numbers \cite{HLLDBF12},
where an even number of stable Dirac cones exist on the
surface, \cite{xu2012observation,tanaka2012experimental,dziawa2012topological}
which are generally unstable in
the ordinary topological insulators.\cite{hasan-kane10,qi-zhang-rmp11}
We have a similar situation in Weyl semimetals
in three dimensions.\cite{murakami-semimetal07,Wan-semimetal11,burkov-balents11}
Weyl semimetals have stable gapless low-energy excitations that are
described by a $2\times 2$ Weyl Hamiltonian,
and the spectrum is generally gapped out when
two Weyl nodes with opposite topological charges
merge at the same $k$-point.
In the presence of additional spatial symmetry, however, we may have Dirac
semimetals with gapless low-energy excitations described by a
$4\times 4$ Dirac Hamiltonian, \cite{Young-Dirac-semimetal12,Wang12,Wang13}
and it has been confirmed experimentally
in Cd$_3$As$_2$ and Na$_3$Bi.\cite{Neupane13,Borisenko13,Liu14}
Generally, a zero energy mode or gapless mode in
a band structure is topologically stable when it is realized as an
intersection of constraints given by the secular equation in momentum
space.\cite{herring37, asano2011designing}
Sophisticated topological arguments based on the K-theory
enable us to classify possible intersections systematically as topological
obstructions, predicting gapless modes consistent with the spatial
symmetries.\cite{morimoto2013topological, shiozaki14, kobayashi14}
In this paper, we find a new class of zero energy modes
protected by the coexistence of chiral symmetry and
spatial symmetry.
If a chiral symmetric system has an additional
symmetry such as reflection, inversion and rotation,
the Hamiltonian can be block-diagonalized into
the eigenspaces of the symmetry operation,
and each individual sector is viewed as an independent
chiral symmetric system.
There we can define a topological index
as the difference between the numbers of sublattice sites,
and a nonzero index indicates the existence of chiral zero energy modes
in that sector.
As a result, the total number of zero energy modes
is generally larger than in the absence of the spatial symmetry.
If we apply the argument to Bloch electron
systems, we can detect the existence of
zero-energy band touching at symmetric points in the Brillouin zone.
This argument predicts the existence of gapless points
solely from the symmetry,
without even specifying the detail of the Hamiltonian.
In particular, we show that the Dirac nodes appearing
in the two-dimensional (2D) honeycomb lattice (e.g., graphene) and in
the half-flux square lattice are protected by three-fold ($C_3$)
and two-fold ($C_2$) rotation symmetry, respectively.
We also present examples of Dirac semimetals with isolated
band-touching points in three-dimensional (3D) $k$-space,
which are protected by rotation and reflection symmetry.
The zero-mode protection by spatial symmetry
is distinct from that by the conventional winding number,
and we actually demonstrate in several concrete models
that symmetry-protected band touching points emerge
even though the winding number is zero.
In the last part of the paper, we list and classify
all independent topological invariants associated with
a given Dirac point under chiral and spatial symmetries.
They consist of winding numbers and
topological indices (sublattice number differences)
of the subsectors of the spatial symmetry operator,
with the redundant degrees of freedom removed.
If the spatial symmetry is of order-two
(i.e., applying the operation twice is proportional to the identity,
as for reflection and inversion),
we can use the K-theory with Clifford algebra
to identify how many quantum numbers are needed
to label all topologically distinct phases.
\cite{morimoto2013topological, shiozaki14, kobayashi14}
We explicitly show that
a set of independent winding numbers and topological indices
serves as complete topological charges found in the K-theory.
The paper is organized as follows.
In Sec.\ \ref{sec_gen}, we present a general formulation for
zero modes protected by
the coexistence of chiral symmetry and spatial symmetry.
We then discuss the protection of the Dirac points
in $C_3$ symmetric crystals in Sec.\ \ref{sec_c3}
and in $C_2$ symmetric crystals in Sec.\ \ref{sec_c2}.
We also argue line-node protection by additional reflection symmetry
in Sec.\ \ref{sec_ref}.
Several examples of 3D Dirac semimetal
are studied in Sec.\ \ref{sec_3d_dirac}.
In Sec.~\ref{sec:charges},
we identify
independent topological charges assigned to gapless points,
and clarify the relation to the classification theory using the Clifford
algebra.
Finally, we present a brief conclusion in Sec.\ \ref{sec_conc}.
\section{General arguments}
\label{sec_gen}
We first present a general argument
for zero modes protected by a space symmetry
in a chiral symmetric system.
We consider a Hamiltonian $H$,
satisfying
\begin{eqnarray}
&& [H,A] = 0,
\label{eq_H_A}
\\
&& \{H,\Gamma\} = 0,
\label{eq_H_G}
\end{eqnarray}
where $\Gamma$ is the chiral operator,
and $A$ is the operator describing
the spatial symmetry of the system.
We also assume
\begin{equation}
[\Gamma,A] = 0,
\label{eq_G_A}
\end{equation}
i.e., the sublattices belonging to $\Gamma = 1$ and $-1$
are not interchanged by $A$.
Since $[H,A]=[\Gamma,A]=0$,
the matrices $H$, $\Gamma$ and $A$ are simultaneously block-diagonalized
into subspaces labeled by the eigenvalues of $A$ as
\begin{align}
H &=
\begin{pmatrix}
H_{a_1} & & & \\
& H_{a_2} & & \\
& & H_{a_3}& \\
& & & \ddots
\end{pmatrix},\n
\Gamma &=
\begin{pmatrix}
\Gamma_{a_1} & & & \\
& \Gamma_{a_2} & & \\
& & \Gamma_{a_3}& \\
& & & \ddots
\end{pmatrix},\n
A &=
\begin{pmatrix}
a_1 & & & \\
& a_2 & & \\
& & a_3& \\
& & & \ddots
\end{pmatrix},\
\end{align}
where $a_1,a_2,\cdots$ are the eigenvalues of $A$.
Since
Eq.\ (\ref{eq_H_G}) requires
$\{H_{a_i}, \Gamma_{a_i}\} =0$ for all the sectors,
each eigenspace possesses chiral symmetry independently.
Then we can define the topological index for each sector as
\begin{align}
\nu_{a_i}=\t{tr } \Gamma_{a_i}.
\end{align}
The index $\nu_{a_i}$ is equal to the difference between the numbers of
chiral zero modes
of the Hamiltonian $H_{a_i}$:
\begin{align}
\nu_{a_i}&=N^+_{a_i} - N^-_{a_i},
\label{eq_nu_ai}
\end{align}
where $N^\pm_{a_i}$ are numbers of chiral zero modes satisfying
\begin{align}
H_{a_i} |u^\pm_{a_i} \rangle&=0, \n
\Gamma_{a_i} |u^\pm_{a_i} \rangle&= \pm |u^\pm_{a_i} \rangle.
\end{align}
Eq.\ (\ref{eq_nu_ai}) guarantees that
there are at least $|\nu_{a_i}|$ zero-modes in each sector,
and therefore, at least $\sum_i |\nu_{a_i}|$ zero-modes
in the total system.
On the other hand, the topological index for the total Hamiltonian
is given by the summation over all the sub-indices as
\begin{equation}
\nu_0 = \sum_i \nu_{a_i},
\end{equation}
which in itself guarantees $|\nu_0|$ zero-modes.
Since $\sum_i |\nu_{a_i}| \geq |\sum_i \nu_{a_i}|$,
we can generally have more zero-modes
in the presence of additional symmetry $A$,
than in its absence.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{fig_0dim_model.eps}
\end{center}
\caption{
(a) 4-site model with reflection symmetry
on a diagonal axis.
(b) 6-site model with three-fold rotation symmetry.
}
\label{fig_0dim_model}
\end{figure}
As the simplest example, let us consider a 4-site
square lattice as illustrated in Fig.\ \ref{fig_0dim_model}(a).
We assume that the system is invariant under
the reflection $R$ with respect to
a diagonal line
connecting the site 1 and 3.
We also assume that the lattice is bipartite, i.e.,
matrix elements only exist between white circles
(sites 1 and 3) and black circles (2 and 4).
Then the system is chiral symmetric
under the chiral operator $\Gamma$ defined by
\begin{equation}
\Gamma |i\rangle = \left\{
\begin{array}{ll}
+|i\rangle & (i=1,3)
\\
-|i\rangle & (i=2,4),
\end{array}
\right.
\end{equation}
where $|i\rangle$ is the state localized at the site $i$.
The operators $\Gamma$ and $R$ satisfy $[\Gamma, R] = 0$
since the black and white circles are not interchanged by $R$.
The topological index of the total system,
$\nu_0 = {\rm tr}\, \Gamma$, is obviously zero,
since there are equal numbers of white and black circles.
Since $[H,R] = [\Gamma, R] = 0$,
$H$ and $\Gamma$ are block-diagonalized into subspaces
each labeled by the eigenvalues of $R$.
Each subspace is spanned by
\begin{eqnarray}
&& R={\rm even}: \quad
|1\rangle, |3\rangle, \frac{1}{\sqrt{2}}(|2\rangle + |4\rangle)
\nonumber\\
&& R={\rm odd}: \quad
\frac{1}{\sqrt{2}}(|2\rangle - |4\rangle).
\end{eqnarray}
The topological indices of the subblocks are written as
\begin{eqnarray}
&& \nu_{\rm even} = {\rm tr}\, \Gamma_{\rm even} = +1
\nonumber\\
&& \nu_{\rm odd} = {\rm tr}\, \Gamma_{\rm odd} = -1,
\end{eqnarray}
so that the number of protected zero-modes
is $|\nu_{\rm even}|+|\nu_{\rm odd}|=2$.
By breaking the reflection symmetry,
the number of zero modes is actually reduced to $|\nu_0|=0$,
We may consider another example having $C_3$ rotation symmetry.
Here we introduce a 6-site lattice model shown in
Fig.\ \ref{fig_0dim_model}(b),
where the sites 1, 2 and 3 (white circles)
are located at $z$-axis,
and sites 4, 5 and 6 (black circles) are arranged in a triangle
around the origin.
The system is invariant under
the $C_3$ rotation with respect to $z$-axis,
where the sites 1, 2, and 3 are fixed
while 4, 5, and 6 are circularly permuted.
The lattice is bipartite
so that the Hamiltonian is chiral symmetric under $\Gamma$
defined by,
\begin{equation}
\Gamma |i\rangle = \left\{
\begin{array}{ll}
+|i\rangle & (i=1,2,3)
\\
-|i\rangle & (i=4,5,6).
\end{array}
\right.
\end{equation}
Since $[H,C_3] = [\Gamma, C_3] = 0$,
$H$ and $\Gamma$ are block-diagonalized into subspaces
spanned by
\begin{eqnarray}
&&
C_3=1: \quad
|1\rangle, |2\rangle,|3\rangle,
\frac{1}{\sqrt{3}}(|4\rangle + |5\rangle + |6\rangle),
\nonumber\\
&&
C_3=\omega: \quad
\frac{1}{\sqrt{3}}(|4\rangle + \omega^2 |5\rangle + \omega |6\rangle),
\nonumber\\
&&
C_3=\omega^2: \quad
\frac{1}{\sqrt{3}}(|4\rangle + \omega |5\rangle + \omega^2 |6\rangle),
\end{eqnarray}
where $\omega = \exp(2\pi i /3)$.
The topological indices of the three subspaces become
\begin{eqnarray}
(\nu_{\rm 1}, \nu_{\rm \omega}, \nu_{\rm \omega^2})
= (2, -1, -1).
\end{eqnarray}
The number of protected zero-modes
is $|\nu_1|+|\nu_{\omega}|+|\nu_{\omega^2}|=4$,
while we have only $|\nu_0|=0$ zero modes
in the absence of $C_3$ symmetry.
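A numerical sketch (again ours; the amplitudes $t_i$ are illustrative) confirms this counting: the $C_3$ symmetry forces each on-axis site to couple equally to the three triangle sites, and four zero modes follow.

```python
import numpy as np

# 6-site C3 example: on-axis sites 1,2,3 couple to the triangle sites
# 4,5,6 with a common amplitude t_i, as required by C3 symmetry.
t = [0.5, 1.1, -0.8]
H = np.zeros((6, 6))
for i in range(3):         # white sites 1,2,3
    for j in range(3, 6):  # black sites 4,5,6
        H[i, j] = t[i]
H = H + H.T

Gamma = np.diag([1, 1, 1, -1, -1, -1])
C3 = np.eye(6)[:, [0, 1, 2, 4, 5, 3]]  # fixes 1,2,3; cycles 4 -> 5 -> 6

assert np.allclose(H @ Gamma + Gamma @ H, 0)  # chiral symmetry
assert np.allclose(H @ C3 - C3 @ H, 0)        # three-fold rotation

n_zero = int(np.sum(np.abs(np.linalg.eigvalsh(H)) < 1e-10))
print(n_zero)  # 4 = |nu_1| + |nu_omega| + |nu_omega^2|
```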
\section{Dirac points in honeycomb lattice}
\label{sec_c3}
Now let us extend the argument in the previous section
to Bloch electron systems.
In this section, we discuss the topological protection of the Dirac points in
2D systems in the presence of three-fold rotation symmetry.
For Bloch Hamiltonian
$H(\Vec k) = e^{-i\Vec{k} \cdot \Vec{r}} H e^{i\Vec{k} \cdot \Vec{r}}$,
the chiral symmetry and the three-fold rotation symmetry are given
by unitary operators $\Gamma$ and $C_3$ that satisfy
\begin{align}
\Gamma H(\Vec k) \Gamma^{-1}&=-H(\Vec k), \n
C_3 H(\Vec{k}) C_3^{-1} &=H(R_3(\Vec k)).
\label{eq: symmetry constraints}
\end{align}
$R_3(\Vec k)$ denotes a momentum rotated by
$120^\circ$ around the origin.
We assume the commutation relation of chiral operator and three-fold rotation,
\begin{align}
[\Gamma,C_3]=0.
\end{align}
Let us consider a high-symmetry point in the Brillouin zone
that is invariant
under the action of the three-fold rotation, $R_3(\Vec k^0)=\Vec k^0$.
There Eq.~(\ref{eq: symmetry constraints}) reduces to
\begin{align}
\{H(\Vec k^0), \Gamma\} = 0, \n
[H(\Vec k^0), C_3] = 0,
\end{align}
and we can apply the previous argument to $H = H(\Vec{k}^0)$.
We simultaneously block-diagonalize
$H$, $\Gamma$ and $C_3$
into three sectors each labeled by an eigenvalue of $C_3$,
and define a topological index for each eigenspace as
\begin{align}
\nu_a=\t{tr } \Gamma_a,
\end{align}
with $a=1,\omega,\omega^2$.
If $\sum_a |\nu_a|$ is non-zero,
it requires the existence of chiral zero modes of $H(\Vec k^0)$,
i.e., we have a topologically stable gap closing at $\Vec k^0$
protected by the chiral symmetry and $C_3$ symmetry.
\begin{figure}
\begin{center}
\includegraphics[width=0.55\linewidth]{fig_graphene}
\end{center}
\caption{
(a) Honeycomb lattice with
the nearest neighbor hopping.
Unit cell is indicated by dashed hexagon,
and shading represents the Bloch phase factor
$\exp(i\Vec{K}\cdot \Vec{r})=1,\omega,\omega^2$
at $K$ point.
(b) Brillouin zone for the honeycomb lattice.
Dotted, small hexagon is the reduced Brillouin zone
corresponding to $\sqrt{3}\times\sqrt{3}$ superlattice
in Figs.\ \ref{fig_honeycomb_superlattice}(a) and (b).
}
\label{fig_graphene}
\end{figure}
Graphene is the simplest example of
the band touching protected by $C_3$ symmetry.
Let us consider a tight-binding honeycomb lattice
with the nearest neighbor hopping as shown in Fig.\ \ref{fig_graphene}(a).
The unit cell is composed of non-equivalent A and B sublattices.
The Hamiltonian is chiral symmetric in that A sites are connected only to B sites,
and the system is obviously invariant under three-fold rotation $C_3$.
$C_3$ commutes with $\Gamma$ because the rotation does not
interchange A and B sublattices.
The Brillouin zone corners $K$ and $K'$,
shown in Fig.\ \ref{fig_graphene}(b),
are fixed
in $C_3$ rotation so that we can apply the above argument to
these points.
We define the Bloch wave basis as
\begin{eqnarray}
&& |X\rangle =
\frac{1}{\sqrt{N}}\sum_{\Vec{R}_{X}} e^{i\Vec{k}\cdot\Vec{R}_{X}}
|\Vec{R}_{X}\rangle \quad (X=A,B),
\end{eqnarray}
where $\Vec{k}$ is the Bloch wave vector ($K$ or $K'$),
$|\Vec{R}_{X}\rangle$ is the atomic state at the position $\Vec{R}_{X}$,
and $N$ is the number of unit cells in the whole system.
In the basis of $\{|A\rangle, |B\rangle\}$,
the chiral operator is written as
\begin{align}
\Gamma =
\begin{pmatrix}
1 & \\
& -1 \\
\end{pmatrix}.
\end{align}
If we set the rotation center at $A$ site,
the rotation $C_3$ is written as
\begin{eqnarray}
&&C_3 =
\begin{pmatrix}
1 & \\
& \omega
\end{pmatrix}
\t{for } K,
\quad
\begin{pmatrix}
1 & \\
& \omega^2
\end{pmatrix}
\t{for } K'.
\end{eqnarray}
This is actually derived by
considering the change of the Bloch factor
in the rotation [Fig.\ \ref{fig_graphene}(a) for $K$ point].
Therefore, the topological indices are obtained as
\begin{align}
(\nu_1,\nu_\omega,\nu_{\omega^2}) &=
\begin{cases}
(1,-1,0) & \t{for } K, \\
(1,0,-1) & \t{for } K'.
\end{cases}
\end{align}
This requires
two zero-modes at each of $K$ and $K'$,
which are nothing but the gapless Dirac nodes.
\cite{mcclure1956diamagnetism,JPSJ.74.777}
Note that the band touching is deduced purely from the symmetry
in the Bloch bases,
without specifying detailed Hamiltonian matrix.
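Both statements are easy to verify numerically (a sketch with our own conventions; the projector $P_a=\frac{1}{3}\sum_{n=0}^{2}(a^{*})^{n}C_3^{n}$ extracts each rotation eigenspace, and the nearest-neighbor vectors below assume unit bond length).

```python
import numpy as np

# Indices nu_a = tr(Gamma P_a) at the K point, with P_a the projector
# onto the C3 eigenvalue a (rotation center on an A site).
w = np.exp(2j * np.pi / 3)
Gamma = np.diag([1.0, -1.0])
C3_K = np.diag([1.0 + 0j, w])

nus = []
for a in (1.0 + 0j, w, w ** 2):
    P = sum(np.conj(a) ** n * np.linalg.matrix_power(C3_K, n)
            for n in range(3)) / 3.0
    nus.append(round(float(np.trace(Gamma @ P).real)))
print(nus)  # [1, -1, 0], as stated in the text

# The K point is indeed gapless: the nearest-neighbor Bloch factor
# f(k) = sum_j exp(i k . delta_j) vanishes there.
delta = np.array([[0.0, 1.0],
                  [np.sqrt(3) / 2, -0.5],
                  [-np.sqrt(3) / 2, -0.5]])
K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])
f_K = np.sum(np.exp(1j * delta @ K))
print(abs(f_K))  # numerically zero
```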
In this particular case,
the gaplessness at $K$ and $K'$ can also be concluded from the
non-trivial winding number $\nu_W = \pm 1$
around $K$ and $K'$, respectively.
[For details of the winding number, see Sec.~\ref{sec:charges}A]
However, these two different arguments are not generally equivalent,
and actually $C_3$-protected band touching may occur even though
the winding number is zero, as shown in the following.
Let us consider a tight-binding honeycomb lattice
with $\sqrt{3}\times\sqrt{3}$ superlattice distortion
as shown in Figs.\ \ref{fig_honeycomb_superlattice} (a) and (b),
where the hopping amplitudes for thin and thick bonds
are differentiated.
In accordance with the enlarged unit cell,
the Brillouin zone is folded as
shown in Fig.\ \ref{fig_graphene} (b),
where the original $K$, $K'$ and $\Gamma$
points are folded onto the new $\Gamma$-point.
The winding number $\nu_W$ around the $\Gamma$-point then receives
contributions $+1$ and $-1$ from the original $K$ and $K'$ points, respectively,
so that the total winding number is trivial, $\nu_W=0$.
However, we can show that the Dirac point remains gapless
even in the presence of the superlattice distortion,
as long as the system retains a certain three-fold rotation symmetry.
We consider two different types of rotations.
\begin{align}
& C_3: \, \mbox{120$^\circ$ rotation around $A$ site.}
\n
& C'_3: \, \mbox{120$^\circ$ rotation around the center of hexagon.}
\nonumber
\end{align}
Figs.\ \ref{fig_honeycomb_superlattice}(a) and \ref{fig_honeycomb_superlattice}(b)
are examples of the lattice distortion under
$C_3$ and $C'_3$ symmetry, respectively.
The latter case, Fig.\ \ref{fig_honeycomb_superlattice}(b),
is the so-called Kekul\'{e} distortion.
The unit cell contains six atoms as depicted
in Fig.~\ref{fig_honeycomb_superlattice}.
In the basis of
$\{ |1\rangle,|2\rangle,\cdots ,|6\rangle\}$,
the chiral operator is given by
\begin{align}
\Gamma&=
\begin{pmatrix}
1 &&&&& \\
& -1 &&&& \\
&& 1 &&& \\
&&&-1 && \\
&&&& 1 & \\
&&&&&-1 \\
\end{pmatrix},
\end{align}
and the three-fold rotation at $\Gamma$-point is
represented by
\begin{align}
C_3&=
\begin{pmatrix}
1 &&&&& \\
& &&1&& \\
&& 1 &&& \\
&&& &&1 \\
&&&& 1 & \\
&1&&&& \\
\end{pmatrix}, \n
C'_3&=
\begin{pmatrix}
&&&&1& \\
& &&&&1 \\
1&& &&& \\
&1&& && \\
&&1&& & \\
&&&1&& \\
\end{pmatrix}.
\end{align}
The topological indices of three sectors are given by
\begin{align}
(\nu_1,\nu_\omega,\nu_{\omega^2}) &=
\begin{cases}
(2,-1,-1) & \t{for } C_3, \\
(0,0,0) & \t{for } C'_3.
\label{eq_3nus}
\end{cases}
\end{align}
The non-trivial topological indices in the $C_3$-symmetric case
require four zero-modes, indicating that
the two Dirac points are protected.
Under $C'_3$ symmetry, on the other hand,
the topological indices all vanish, and the energy bands can be gapped out.
The situation of $C_3$ symmetry
closely resembles the 6-site model in the previous section,
where the fixed points in the rotation (the sites 1, 3 and 5)
all contribute to the sector of $C_3=1$,
leading to an imbalance in the topological indices among the three sectors.
In contrast, all the sites are circularly interchanged
in $C'_3$ rotation,
resulting in $\nu_1=\nu_\omega=\nu_{\omega^2}=\nu_0/3=0$.
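These index values can be reproduced mechanically from the explicit $6\times6$ matrices above. The following sketch (our own numerical check) encodes $C_3$ and $C'_3$ as permutation matrices and evaluates $\nu_a = {\rm tr}(\Gamma P_a)$ with the sector projector $P_a = \frac{1}{3}\sum_{n=0}^{2} a^{-n}C^n$.

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)
Gamma = np.diag([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])

def perm(mapping, n=6):
    """Permutation matrix with P|j> = |mapping[j]> (1-based site labels)."""
    P = np.zeros((n, n))
    for j, i in mapping.items():
        P[i - 1, j - 1] = 1.0
    return P

# C3: sites 1, 3, 5 fixed; cycle 2 -> 6 -> 4 -> 2 (rotation about an A site).
C3 = perm({1: 1, 2: 6, 3: 3, 4: 2, 5: 5, 6: 4})
# C3': cyclic shift j -> j + 2 (rotation about a hexagon center).
C3p = perm({j: (j + 1) % 6 + 1 for j in range(1, 7)})

def indices(C):
    """(nu_1, nu_omega, nu_omega^2) = tr(Gamma P_a) for a = omega^m."""
    nus = []
    for m in range(3):
        a = omega ** m
        nu = sum(a ** (-n) * np.trace(Gamma @ np.linalg.matrix_power(C, n))
                 for n in range(3)) / 3
        nus.append(round(nu.real))
    return tuple(nus)

nu_C3 = indices(C3)    # expected (2, -1, -1)
nu_C3p = indices(C3p)  # expected (0, 0, 0)
```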
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{fig_honeycomb_superlattice.eps}
\end{center}
\caption{
(a, b) $\sqrt{3}\times\sqrt{3}$ superlattice unit
cell of graphene,
with possible lattice distortion under
(a) $C_3$ and (b) $C'_3$ symmetry.
The center of the rotation is indicated by a cross mark.
(c) Structure of graphite,
where black and gray layers stack alternately.
}
\label{fig_honeycomb_superlattice}
\end{figure}
We can derive the same conclusion alternatively by starting from
$4\times 4$ low-energy effective Hamiltonian,
\begin{align}
H=k_x \sigma_x \tau_z + k_y \sigma_y,
\label{eq: Dirac H for graphene}
\end{align}
where Pauli matrices $\sigma$ and $\tau$
span the sublattice ($A, B$) and the valley $(K, K')$
degrees of freedom, respectively.
The dimension of the matrix is smaller than in the previous argument
($6\times6$)
because we exclude the two high-energy bases originating from the original
$\Gamma$-point, which do not contribute to the topological indices.
The chiral operator is given by
\begin{align}
\Gamma =\sigma_z =
\begin{pmatrix}
1 &&& \\
& -1 && \\
&& 1 & \\
&&&-1 \\
\end{pmatrix}.
\end{align}
The matrices for $C_3$ and $C'_3$ are
derived by considering the Bloch factor in the original lattice
model as,
\begin{eqnarray}
&&C_3 =
\begin{pmatrix}
1 &&& \\
& \omega && \\
&& 1 & \\
&&& \omega^2 \\
\end{pmatrix}
=\exp\left[-\frac{\pi i}{3}(\sigma_z-1)\tau_z\right].
\\
&&C'_3
=
\begin{pmatrix}
\omega &&& \\
& \omega^2 && \\
&& \omega^2 & \\
&&& \omega \\
\end{pmatrix}
=\exp\left[\frac{2\pi i}{3}\sigma_z\tau_z\right].
\end{eqnarray}
The topological indices are immediately shown to
be equivalent to Eq.\ (\ref{eq_3nus}).
Possible mass terms under the chiral symmetry
which gap out the Hamiltonian of Eq.\ (\ref{eq: Dirac H for graphene})
anti-commute with both $H$ and $\Gamma$.
In the present case we have two such terms,
\begin{align}
\delta H= m_x \sigma_x \tau_x + m_y \sigma_x \tau_y.
\label{eq:massterm}
\end{align}
Since $\delta H$ commutes with $C_3'$ but not with $C_3$,
these terms can exist only in $C'_3$ symmetry.
This exactly corresponds to the fact that
the Dirac point is not protected in $C'_3$ symmetry.
Actually, Kekul\'{e} distortions depicted in Fig.~\ref{fig_honeycomb_superlattice}(b)
give rise to mass terms $\delta H$
and gap out the Dirac point.
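The (anti)commutation properties quoted above can be verified mechanically. The sketch below is our own check (basis ordered as $(\sigma)\otimes(\tau)$ via Kronecker products, mass parameters set to unity): $\delta H$ anticommutes with the kinetic terms and $\Gamma$, commutes with $C'_3$, and fails to commute with $C_3$.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w = np.exp(2j * np.pi / 3)

# Basis ordering (sigma) x (tau) via Kronecker products.
gx = np.kron(sx, sz)                     # sigma_x tau_z
gy = np.kron(sy, I2)                     # sigma_y
Gamma = np.kron(sz, I2)                  # chiral operator sigma_z
dH = np.kron(sx, sx) + np.kron(sx, sy)   # mass terms with m_x = m_y = 1

# Rotations, diagonal in this basis (from the exponential formulas).
C3 = np.diag([1, 1, w, w**2])      # exp[-(pi i/3)(sigma_z - 1) tau_z]
C3p = np.diag([w, w**2, w**2, w])  # exp[(2 pi i/3) sigma_z tau_z]

anti = lambda A, B: A @ B + B @ A
comm = lambda A, B: A @ B - B @ A
```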
The argument can be directly extended to
a 3D crystal with $C_3$ symmetry.
There the band touching point forms a line node
on $C_3$ symmetric axis
in 3D Brillouin zone.
A typical example is bulk graphite,
where black and gray graphene layers are stacked alternately
as in Fig.\ \ref{fig_honeycomb_superlattice}(c).
When we consider the three-fold rotation symmetry
around the center of hexagon of a gray layer
(cross mark in the figure),
the topological indices are obtained as
\begin{align}
(\nu_1,\nu_\omega,\nu_{\omega^2}) &=
\begin{cases}
(1,0,-1) & \t{for } K, \\
(1,-1,0) & \t{for } K',
\end{cases}
\end{align}
which guarantees two line nodes at $K$ and $K'$ parallel to the $k_z$ direction.
Real graphite is not exactly chiral symmetric
because of small hopping amplitudes
between black and black (gray and gray) atoms.
As a result, the line node slightly disperses along the $k_z$ axis,
giving electron and hole pockets at zero energy.\cite{dresselhaus2002intercalation}
\begin{figure}
\begin{center}
\includegraphics[width=0.65\linewidth]{fig_half_flux.eps}
\end{center}
\caption{
(a) Square lattice with a half magnetic flux penetrating each unit cell.
The unit cell is indicated by a dashed diamond,
and the shading represents the Bloch phase factor
for the $K$ point.
(b) The same system with a doubled unit cell.
(c) Brillouin zone for the single unit cell in (a).
The dashed square is the reduced Brillouin zone corresponding to
the doubled unit cell in (b).
}
\label{fig_half_flux}
\end{figure}
\section{Dirac points in half-flux square lattice}
\label{sec_c2}
The square lattice with half magnetic flux penetrating a unit cell
is another well-known example having gapless Dirac nodes.\cite{affleck1988large}
The band touching in this system can also be explained
by a similar argument,
in terms of the chiral symmetry and $C_2$ rotation symmetry.
We consider a lattice Hamiltonian illustrated
in Fig.\ \ref{fig_half_flux}(a).
The unit cell is represented by a dashed diamond
containing sites 1 and 2.
The hopping integrals along the horizontal bonds are all equal to $t_x$,
while the vertical hopping depends on the direction,
and is given by $i t_y$ for hopping along the direction of the arrow.
An electron always acquires a factor $-1$ when moving around
any single plaquette, so that it is equivalent to a half magnetic flux
penetrating each unit cell.
The system is $C_2$-rotation symmetric with respect to
an arbitrary atomic site,
and the rotation commutes with the chiral operator
since it does not interchange the sublattices.
In the reciprocal space [Fig.\ \ref{fig_half_flux}(c)],
the points $K:(\pi/(2a),\pi/(2a))$ and
$K':(-\pi/(2a),\pi/(2a))$ are both invariant in the $C_2$ rotation,
and we apply the general argument to these points.
In the basis of $\{|1\rangle, |2\rangle\}$,
the chiral operator is written as
\begin{align}
\Gamma =
\begin{pmatrix}
1 & \\
& -1 \\
\end{pmatrix},
\end{align}
and the $C_2$ rotation with respect to site 1 is
\begin{eqnarray}
&&C_2 =
\begin{pmatrix}
1 & \\
& -1
\end{pmatrix} \quad \t{for $K$ and $K'$}.
\end{eqnarray}
Thus the topological indices are
\begin{align}
(\nu_{\rm even},\nu_{\rm odd}) = (1,-1) \quad \t{for $K$ and $K'$},
\end{align}
where even and odd specify the eigenvalue of $C_2$ rotation
$+1$ and $-1$, respectively.
As a result, we have two zero-modes at each of $K$ and $K'$,
corresponding to the band touching points.
Similarly to the honeycomb lattice in the previous section,
we may consider the stability of the Dirac points
under possible lattice distortions
for the double unit cell shown in Fig.\ \ref{fig_half_flux}(b).
In the reciprocal space, $K$ and $K'$ merge at the same corner point
and the total winding number becomes zero.
We consider two types of rotations.
\begin{align}
& C_2: \, \mbox{180$^\circ$ rotation around site 1.}
\n
& C'_2: \, \mbox{180$^\circ$ rotation around the center of square.}
\nonumber
\end{align}
In the basis of
$\{ |1\rangle,|2\rangle,|3\rangle,|4\rangle\}$,
the chiral operator is given by
\begin{align}
\Gamma&=
\begin{pmatrix}
1 &&& \\
& -1 && \\
&& 1 & \\
&&&-1 \\
\end{pmatrix},
\end{align}
and the 180$^\circ$ rotation at the merged $k$-point is
represented by
\begin{align}
C_2 &=
\begin{pmatrix}
1 &&& \\
& -1 && \\
&& 1 & \\
&&& -1 \\
\end{pmatrix}, \n
C'_2 &=
\begin{pmatrix}
&&1& \\
& &&1 \\
1&& & \\
&1&& \\
\end{pmatrix}.
\end{align}
The topological indices are given by
\begin{align}
(\nu_{\rm even},\nu_{\rm odd}) &=
\begin{cases}
(2,-2) & \t{for } C_2, \\
(0,0) & \t{for } C'_2,
\label{eq_2nus}
\end{cases}
\end{align}
so that the two Dirac points are protected
under $C_2$ symmetry,
but not under $C'_2$ symmetry.
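The index count $(\nu_{\rm even},\nu_{\rm odd})$ for the doubled unit cell can be evaluated directly from the explicit $4\times4$ matrices given above. A minimal sketch (our own check):

```python
import numpy as np

# Chiral operator and the two rotations at the merged k-point,
# in the basis {|1>, |2>, |3>, |4>} of the doubled unit cell.
Gamma = np.diag([1.0, -1.0, 1.0, -1.0])
C2 = np.diag([1.0, -1.0, 1.0, -1.0])  # rotation about site 1
C2p = np.array([[0, 0, 1, 0],          # rotation about the square center:
                [0, 0, 0, 1],          # interchanges 1<->3 and 2<->4
                [1, 0, 0, 0],
                [0, 1, 0, 0]])

def indices(C):
    """(nu_even, nu_odd): traces of Gamma projected onto C = +/-1 sectors."""
    nu_even = round(np.trace(Gamma @ (np.eye(4) + C) / 2))
    nu_odd = round(np.trace(Gamma @ (np.eye(4) - C) / 2))
    return nu_even, nu_odd

nu_C2 = indices(C2)    # expected (2, -2): two protected Dirac points
nu_C2p = indices(C2p)  # expected (0, 0): a gap is allowed
```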
The same conclusion can be reproduced in terms of the low energy
effective Hamiltonian, in a manner similar to Sec.~\ref{sec_c3}.
The effective Hamiltonian and the chiral symmetry are given by
\begin{eqnarray}
H=k_x\sigma_x\tau_z +k_y\sigma_y,
\quad
\Gamma=\sigma_z
\end{eqnarray}
with $\sigma$ and $\tau$ spanning the sublattice $(|1\rangle,
|2\rangle)$ and the valley $(K,K')$ of the $\pi$-flux lattice.
By taking into account the Bloch factor properly, the two-fold rotations
$C_2$, $C_2'$ are identified as
\begin{eqnarray}
C_2=\sigma_z,
\quad
C_2'=\sigma_z\tau_z.
\end{eqnarray}
Since the effective Hamiltonian and the chiral symmetry take the same
forms as those in the honeycomb lattice case, possible mass terms consistent
with the chiral symmetry are given by the same Eq.(\ref{eq:massterm}).
Those mass terms are apparently inconsistent with the $C_2$ symmetry
above, but consistent with the $C_2'$ symmetry.
Thus, between the two types of rotations, only the $C_2$ symmetry does not
allow these mass terms, keeping the Dirac points gapless.
\section{Line node protected by reflection symmetry}
\label{sec_ref}
As another example,
we consider a 2D lattice
with reflection symmetry.
In this case, the band touching is protected
on the diagonal lines in the 2D Brillouin zone,
and forms line nodes.
We take a lattice model
as illustrated in Fig.\ \ref{fig_square_lattice},
where the unit cell is composed of four sublattices
from 1 to 4, and the structure is reflection symmetric
with respect to the diagonal lines.
In the basis of
$\{ |1\rangle,|2\rangle,|3\rangle,|4\rangle\}$,
the chiral operator is given by
\begin{align}
\Gamma&=
\begin{pmatrix}
1 &&& \\
& -1 && \\
&& 1 & \\
&&&-1 \\
\end{pmatrix}.
\end{align}
We consider the reflection $R$ with respect to
the line connecting the sites 1 and 3.
The fixed $k$-points under $R$ are given by $\Vec{k}_0 = (k,k)$
with arbitrary $k$.
There the matrix for $R$ is written as
\begin{align}
R&=
\begin{pmatrix}
1 &&& \\
& &&1 \\
&& 1 & \\
&1 && \\
\end{pmatrix}.
\end{align}
The situation is exactly the same as the four-site model
in Sec.\ \ref{sec_gen}, and
the topological indices of two sectors become
\begin{align}
(\nu_{\rm even},\nu_{\rm odd}) &= (1,-1).
\end{align}
Since $|\nu_{\rm even}|+|\nu_{\rm odd}|=2$,
two energy bands are touching
along the diagonal axis in the Brillouin zone.
The same argument applies to the reflection for another
diagonal line, giving a line node at $(k,-k)$.
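The sector indices along the reflection axis follow directly from the explicit matrices for $\Gamma$ and $R$. A minimal sketch (our own check):

```python
import numpy as np

# Basis {|1>, |2>, |3>, |4>}; sites 1 and 3 lie on the reflection axis.
Gamma = np.diag([1.0, -1.0, 1.0, -1.0])
R = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0]])  # fixes sites 1 and 3, swaps 2 and 4

I4 = np.eye(4)
nu_even = round(np.trace(Gamma @ (I4 + R) / 2))  # expected +1
nu_odd = round(np.trace(Gamma @ (I4 - R) / 2))   # expected -1
```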
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{fig_square_lattice.eps}
\end{center}
\caption{
Square lattice model with the reflection symmetry.
Dashed square indicates a unit cell and the diagonal line
is a symmetry axis.
}
\label{fig_square_lattice}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{fig_3d_dirac.eps}
\end{center}
\caption{
(a) Stacked honeycomb lattices
with staggered interlayer coupling.
(b) Cubic lattice
with a half magnetic flux penetrating every square plaquette.
}
\label{fig_3d_dirac}
\end{figure}
\section{Dirac points in three dimensions}
\label{sec_3d_dirac}
Here we present some examples
of 3D Dirac system,
where the band touching occurs at isolated $k$-points
in 3D Brillouin zone.
First we consider a stack of
honeycomb lattices with staggered interlayer coupling
as illustrated in Fig.\ \ref{fig_3d_dirac}(a).
Here the honeycomb layers are vertically stacked
at interlayer spacing $c$,
and the vertical hopping between the neighboring layers
is given by $t$ and $-t$ for $A$ and $B$ sublattices,
respectively.
The smallest unit cell of this system is given by
$A$ and $B$ on a single layer,
while we here take a double unit cell
including $A1,B1,A2,B2$,
so that the Hamiltonian becomes chiral symmetric
by grouping $(A1,B2)$ into $\Gamma=+1$,
and $(B1,A2)$ into $-1$.
The effective Hamiltonian is given by
\begin{equation}
H = k_x \sigma_x\tau_z + k_y\sigma_y+ 2 t \cos(k_z c)\sigma_z \rho_x,
\label{eq_H_3d_dirac}
\end{equation}
where Pauli matrices $\sigma$ and $\rho$
span the sublattice ($A, B$) and the layer $(1, 2)$
degrees of freedom, respectively,
and $\tau_z=\pm 1$ is the valley index for $K$ and $K'$,
respectively.
Equation~(\ref{eq_H_3d_dirac}) has a gapless node at
$\Vec{k}^0=(0,0,\pi/(2c))$, and two Dirac cones are degenerate
at this point.
Note that the lattice period in $z$ direction is $2c$,
so that $-\Vec{k}^0$ is equivalent to $\Vec{k}^0$.
The gapless point at $\Vec{k}^0$
can be concluded from the symmetry
argument without the band calculation.
The chiral operator is given by $\Gamma = \rho_z\sigma_z$,
which obviously anticommutes with the Hamiltonian.
We consider $C_3$ rotation with respect to $A1$-$A2$ axis,
and also the reflection $R_z$ with respect to $A1$-$B1$ layer.
The Hamiltonian is invariant
and also the point $\Vec{k}^0$ is fixed
under these operations.
Now we consider a combined operation $C_3 R_z$
at $\Vec{k}^0$.
Since $(C_3 R_z)^6 = 1$, the eigenvalues of $C_3 R_z$
can be either of $\pm 1, \pm\omega, \pm\omega^2$.
For $K$-valley, for example, the matrix of
$C_3 R_z$ in a basis of $\{
|A1\rangle,|B1\rangle,|A2\rangle,|B2\rangle\}$ becomes
\begin{equation}
C_3 R_z =
{\rm diag}(1,\omega,-1,-\omega),
\label{eq: charges C3Rz}
\end{equation}
i.e., the four sublattices are classified into four different sectors.
The number of zero modes is $\sum_a |\nu_a| = 4$,
which guarantees the existence of doubly degenerate Dirac nodes.
The argument equally applies to more general cases
where the vertical hoppings at the $A$ and $B$ sites
are given by $t_A$ and $t_B$ (instead of $t$ and $-t$),
respectively.
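The counting of the four zero modes can be checked from Eq.~(\ref{eq: charges C3Rz}) alone. The sketch below (our own check, assuming the basis ordering $\{A1,B1,A2,B2\}$) computes $\nu_a$ for all sixth roots of unity and confirms $\sum_a|\nu_a|=4$.

```python
import numpy as np

# Basis {|A1>, |B1>, |A2>, |B2>}; (A1, B2) carry Gamma = +1, (B1, A2) carry -1.
Gamma = np.diag([1.0, -1.0, -1.0, 1.0])
w = np.exp(2j * np.pi / 3)
g = np.diag([1, w, -1, -w])  # C3 Rz at the K valley

# nu_a = tr(Gamma P_a), where P_a projects onto the g-eigenvalue-a sector
# and a runs over the sixth roots of unity exp(i pi m / 3), m = 0..5.
nus = {}
for m in range(6):
    a = np.exp(1j * np.pi * m / 3)
    P = sum(a ** (-n) * np.linalg.matrix_power(g, n) for n in range(6)) / 6
    nus[m] = round(np.real(np.trace(Gamma @ P)))

# m = 0, 2, 3, 5 correspond to a = 1, omega, -1, -omega, respectively.
zero_modes = sum(abs(v) for v in nus.values())
```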
We can create another example of 3D Dirac nodes
by stacking 2D $\pi$-flux lattice
in Sec.\ \ref{sec_c2}
with staggered interlayer coupling.
The model is illustrated in Fig.\ \ref{fig_3d_dirac}(b),
where $\pi$-flux lattices are vertically stacked
with the hopping $t_z$ and $-t_z$ for $A$ and $B$ sublattices,
respectively.
The system can be viewed as a cubic lattice
with a half magnetic flux threading every single
square plaquette.
We take a unit cell composed of $A1,B1,A2,B2$,
and group $(A1,B2)$ into $\Gamma=+1$, and $(B1,A2)$ into $-1$
so that the Hamiltonian becomes chiral symmetric.
We have band touching at $K:\pi/(2a)(1,1,1)$ and
$K':\pi/(2a)(-1,1,1)$,
and the effective Hamiltonian near these point nodes is given by
\begin{equation}
H = k_x \sigma_x\tau_z + k_y\sigma_y - k_z \sigma_z \rho_x,
\label{eq_H_3d_pi-flux}
\end{equation}
where Pauli matrices $\sigma$ and $\rho$
span the sublattice ($A, B$) and the layer $(1, 2)$
degrees of freedom, respectively,
and $\tau_z=\pm 1$ is the valley index for $K$ and $K'$,
respectively.
The chiral operator is given by $\Gamma = \rho_z\sigma_z$.
The gapless point in this model is protected by the inversion symmetry
$P = C_2 R_z$.
If we consider the inversion $P$ with respect to $A1$ site,
$K$ and $K'$ are both invariant,
and we can write $P = \rho_z \sigma_z$ at these points.
We then find $(\nu_{\rm even},\nu_{\rm odd})=(2,-2)$,
and thus we have doubly degenerate Dirac nodes at each of $K$ and $K'$.
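The count $(\nu_{\rm even},\nu_{\rm odd})=(2,-2)$ follows from the explicit matrices. A minimal sketch (our own check, assuming the basis ordering $\{A1,B1,A2,B2\}$ as a Kronecker product of layer and sublattice):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
rz = np.diag([1.0, -1.0])
# Basis {|A1>, |B1>, |A2>, |B2>} ordered as kron(layer rho, sublattice sigma);
# the grouping (A1, B2) -> Gamma = +1 and (B1, A2) -> -1 gives rho_z sigma_z.
Gamma = np.kron(rz, sz)
P = np.kron(rz, sz)  # inversion about the A1 site, evaluated at K and K'

I4 = np.eye(4)
nu_even = round(np.trace(Gamma @ (I4 + P) / 2))  # expected +2
nu_odd = round(np.trace(Gamma @ (I4 - P) / 2))   # expected -2
```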
\section{Classification of topological charges \label{sec:charges}}
In this section, we present general arguments to classify the
Dirac points in the
presence of chiral symmetry and spatial symmetry. We identify relevant
topological numbers associated with protection
of the Dirac points.
In Table \ref{table: topological charges},
we summarize our results on topological charges
of the Dirac points obtained in this section.
\begin{table}[tb]
\begin{center}
\caption{\label{table: topological charges}
Topological charges of the Dirac points in the presence of
chiral symmetry and spatial symmetry.
We assume that symmetry operators commute with each other.
}
\begin{tabular}[t]{ccc}
\hline \hline
~Dimensions~ & ~Symmetries~ & ~Charges~ \\
\hline
2D & $\Gamma, C_N$ & $\mathbb{Z}^N$ \\
3D & $\Gamma, C_3 R_z$ & $\mathbb{Z}^3$ \\
3D & $\Gamma, C_2, R_z$ & $\mathbb{Z}^2$ \\
3D & $\Gamma, P$ & $\mathbb{Z}$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Class AIII+$C_N$ in 2D}
First, let us study 2D Dirac points
in class AIII systems (possessing chiral symmetry $\Gamma$)
with additional $N$-fold rotation symmetry $C_N$.
We assume the commutation relation $[\Gamma,C_N]=0$.
The Dirac points in Secs.~\ref{sec_c3} and \ref{sec_c2} are of this class.
In the presence of the chiral symmetry, we can define a winding number
for a circle $S^1$ surrounding
the Dirac point in the Brillouin zone.\cite{wen89}
When the circle $S^1$ is parameterized by $\theta
\in [0 , 2\pi)$, the winding number is given by
\begin{eqnarray}
\nu_W=\frac{1}{4\pi i}\oint_{S^1}d\theta{\rm tr}
\left[
\Gamma H^{-1}(\Vec{k}(\theta))
\partial_\theta H(\Vec{k}(\theta))
\right].
\label{eq: winding number}
\end{eqnarray}
Here the Hamiltonian is gapped on $S^1$ so the inverse $H^{-1}(\Vec{k}(\theta))$ is well-defined.
In a basis where the chiral operator $\Gamma$ is diagonal,
\begin{align}
\Gamma=
\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix},
\end{align}
the Hamiltonian takes an off-diagonal form written as
\begin{align}
H(\Vec k)=
\begin{pmatrix}
0 & D^{\dagger}(\Vec k) \\
D(\Vec k) & 0 \\
\end{pmatrix}.
\end{align}
Here $\t{tr } \Gamma$ must be zero (i.e., $D$ is a square matrix),
since otherwise zero energy states
remain independently of $\Vec{k}$.
The winding number is then recast into
\begin{eqnarray}
\nu_W=\frac{1}{2\pi}{\rm Im}\left[
\oint_{S^1}
d\theta \partial_{\theta}\ln
{\rm det}D(\Vec{k}(\theta))
\right].
\end{eqnarray}
It is evident that $\nu_W$ is quantized
to an integer since the phase change of ${\rm det}\,D(\Vec{k}(\theta))$
around $S^1$ must be a multiple of $2\pi$.
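This quantization is easy to observe numerically. The sketch below is our own illustration for nearest-neighbor graphene (unit bond length), where $\det D$ reduces to the scalar $f(\Vec{k})$; the accumulated phase around each valley gives opposite unit windings. (The overall sign depends on orientation and convention, so we only assert the magnitudes and the relative sign.)

```python
import numpy as np

# Off-diagonal element D(k) = f(k) of nearest-neighbor graphene (bond length 1).
deltas = np.array([[0.0, 1.0],
                   [-np.sqrt(3) / 2, -0.5],
                   [np.sqrt(3) / 2, -0.5]])
f = lambda k: np.sum(np.exp(1j * deltas @ k))

def winding(center, radius=0.3, n=2000):
    """Accumulated phase of det D = f around a circle, in units of 2*pi."""
    thetas = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ks = center + radius * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    vals = np.array([f(k) for k in ks])
    dphi = np.angle(vals / np.roll(vals, 1))  # small phase increments
    return round(np.sum(dphi) / (2 * np.pi))

K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])
wK, wKp = winding(K), winding(-K)  # opposite unit windings at the two valleys
```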
As we have seen in Sec.~\ref{sec_gen},
by making use of rotation symmetry $C_N$,
we can further define topological indices
$\nu_{a_n}\, [a_n = \exp(2\pi n i / N),\, n=0,1,2,\cdots,N-1]$
by Eq.~(\ref{eq_nu_ai}),
for the Dirac points at $C_N$-symmetric $k$-points.
Since we have $\sum_n \nu_{a_n}=\t{tr } \Gamma=0$,
the number of independent indices is $N-1$.
Thus the topological charges assigned to the Dirac point are
\begin{align}
(\nu_W,\nu_{a_1},\ldots,\nu_{a_{N-1}}) \in \Z^N.
\label{eq: topological charge}
\end{align}
The Dirac points with non-trivial topological charges are stable against
perturbations preserving chiral and rotation symmetry.
In Sec.~\ref{sec_c3} and Sec.~\ref{sec_c2},
we show examples of the Dirac points
protected by non-trivial indices $\nu_{a_i}$,
while the winding number $\nu_W$ is trivial.
In this sense, these are canonical examples of gapless points
whose stability is not captured only by local symmetry (chiral symmetry),
but originates from spatial symmetry.
For two-fold rotation $C_2$,
we can also use the K-theory and the Clifford algebra to classify gapless
points:\cite{horava05,kitaev09,teo-kane10,morimoto2013topological,zhao-wang13,shiozaki14,kobayashi14,morimoto-weyl14}
In this case, the symmetry operators $C_2$ and $\Gamma$ can be
considered as elements of a complex Clifford algebra $Cl_n=\{e_1,\dots, e_n\}$
with generators $e_1,\ldots,e_n$ satisfying
the anticommutation relation
\begin{align}
\{e_i,e_j\}=2\delta_{ij}.
\end{align}
Hence, the powerful representation
theory of the Clifford algebra is available in the classification.
Below, we show that the approach with Clifford algebra provides
the same topological charges in Eq.~(\ref{eq: topological charge}).
Consider a general Hamiltonian of 2D Dirac point,
\begin{align}
H=k_x\gamma_x+k_y\gamma_y,
\label{eq:clifford2D}
\end{align}
where $\gamma_i$'s are gamma matrices.
The symmetries $C_2$ and $\Gamma$ imply
\begin{eqnarray}
\{\Gamma, \gamma_{i=x,y}\}=0,
\quad
\{C_2, \gamma_{i=x,y}\}=0,
\quad
[\Gamma,C_2] = 0,\quad
\end{eqnarray}
so they form the complex Clifford algebra
\begin{align}
Cl_3 \tensor Cl_1=
\{\gamma_x,\gamma_y,\Gamma\}
\tensor
\{\gamma_x\gamma_y C_2\},
\label{eq:Clifford1}
\end{align}
as we mentioned above.
Then if the Dirac point is unstable,
there exists a Dirac mass term $m\gamma_0$ consistent with the symmetries,
\begin{eqnarray}
\{\Gamma, \gamma_0\}=0,
\quad
[C_2, \gamma_0]=0,
\quad
\{\gamma_{i=x,y}, \gamma_0\}=0,
\end{eqnarray}
which modifies the Clifford algebra in
Eq.(\ref{eq:Clifford1}) as
\begin{align}
Cl_4 \tensor Cl_1=
\{\gamma_0, \gamma_x,\gamma_y,\Gamma\}
\tensor
\{\gamma_x\gamma_y C_2\}.
\label{eq:Clifford2}
\end{align}
The modified algebra implies that the mass term $\gamma_0$ behaves like
an additional chiral operator $\Gamma'$ that anticommutes with $\Gamma$.
On the other hand, if the Dirac point is stable,
no such an additional chiral operator exists.
Therefore, the stability problem of the Dirac point reduces to the
existence problem of an additional chiral operator.\cite{morimoto2013topological,morimoto-weyl14}
The latter problem is solved as follows.
By imposing chiral symmetry $\Gamma$ on other generators,
we have an extension of Clifford algebra
\begin{align}
Cl_2 \tensor Cl_1&=\{\gamma_x,\gamma_y\}\tensor\{\gamma_x\gamma_y C_2\} \n
\to Cl_3 \tensor Cl_1&=\{\gamma_x,\gamma_y,\Gamma\}\tensor\{\gamma_x\gamma_y C_2\},
\label{eq: classification of Gamma (2D)}
\end{align}
which defines the classifying space ${\cal C}_0\times {\cal C}_0$ in the
K-theory. [${\cal C}_0=\cup_{m,n} U(m+n)/(U(m)\times U(n))$;
for details, see Refs.~\onlinecite{morimoto2013topological,morimoto-weyl14}].
Because the classifying space consists of all possible matrix representations of $\Gamma$ with the other generators fixed,
the zero-th homotopy group of the classifying space
\begin{align}
\pi_0({\cal C}_0 \times {\cal C}_0)=\Z^2,
\label{eq_Z^2}
\end{align}
measures topologically different chiral operators,
specifying possible values for the topological number of $\Gamma$.
Now we can show that
if there is an additional chiral operator
$\Gamma'$, then the topological number of $\Gamma$ must be zero:
Indeed, using $\Gamma'$, one can introduce the chiral operator
$\Gamma(t)=\Gamma\cos t+\Gamma'\sin t$ connecting
$\Gamma=\Gamma(0)$ and $-\Gamma=\Gamma(\pi)$ continuously, which implies
that $\Gamma$ must be topologically trivial
since
topological numbers defined for chiral operators
take opposite values for $\Gamma$ and $-\Gamma$
as we will see in an explicit way later [Eq.~(\ref{eq: index by trace})].
Taking the contrapositive,
we can also say that if the topological number of $\Gamma$ is nontrivial,
then no additional chiral operator exists.
The last statement implies that the Dirac point is stable if the
topological number of $\Gamma$ is nontrivial.
In other words, we can conclude that the topological charge protecting
the Dirac point in Eq.(\ref{eq:clifford2D}) is given by Eq.(\ref{eq_Z^2}),
which coincides with Eq.~(\ref{eq: topological charge}) with $N=2$.
The algebraic argument above can be intuitively
understood by considering a specific Hamiltonian.
Let us take the effective Hamiltonian of 2D half-flux square lattice,
$H=k_x\sigma_x\tau_z +k_y\sigma_y$
(i.e., $\gamma_x = \sigma_x \tau_z$, $\gamma_y = \sigma_y$)
with the two-fold rotation symmetry $C_2 = \sigma_z$,
and consider a possible generator $\Gamma$
to form an algebra
$Cl_3 \tensor Cl_1=\{\gamma_x,\gamma_y,\Gamma\}\tensor\{\gamma_x\gamma_y C_2\}$.
Since $\Gamma$ anticommutes with
$\gamma_x$ and $\gamma_y$ while
commuting with $\gamma_x\gamma_y C_2 = i \tau_z$,
it should be written as
\begin{equation}
\Gamma =
\begin{pmatrix}
s \sigma_z & 0 \\
0 & s' \sigma_z
\end{pmatrix},
\end{equation}
where the first and the second blocks
correspond to $\tau_z= \pm 1$, respectively,
and $s, s' = \pm 1$.
Since $\tau_z= \pm 1$ are decoupled,
the sectors having different $(s,s')$
cannot be connected by a continuous transformation,
and thus they are topologically all distinct.
If we generally consider the matrix $\tau_z$
with larger dimension
such as $\tau_z = {\rm diag}(1,1,\cdots,-1,-1,\cdots)$,
the possible expression for $\Gamma$ is
\begin{eqnarray}
&& \Gamma =
\begin{pmatrix}
\sigma_z \tensor A & 0 \\
0 & \sigma_z \tensor A'
\end{pmatrix},
\end{eqnarray}
where the first and the second blocks in $\Gamma$
correspond to $\tau_z= \pm 1$, respectively.
Since we have $\Gamma^2=1$,
eigenvalues of $A$ and $A'$ are either $+1$ or $-1$.
The topologically distinct phases are labeled by two integers
\begin{align}
(s,s')=\left(\t{tr}A , \t{tr}A' \right),
\label{eq: index by trace}
\end{align}
and this is $\Z^2$ in Eq.\ (\ref{eq_Z^2}).
The winding number is given by $\nu_W= s-s'$,
and the topological index of $C_2 = +1$ sector
(i.e., the difference between the numbers of
the bases belonging to $\Gamma = +1$ and $-1$ in the $C_2 = +1$ sector)
is $\nu_{\rm even} = s + s'$.
So the space spanned by $(s,s')$ is equivalent to that by
$(\nu_W, \nu_{\rm even})$.
\subsection{Class AIII with $C_3R_z$ in 3D}
We study the chiral symmetric Dirac points with $C_3R_z$ symmetry (a
combination of a 3-fold rotation in $xy$-plane
and a reflection along $z$-axis) in 3D Brillouin zone,
for which we have discussed an example in the stacked honeycomb lattice model
in Sec.~\ref{sec_3d_dirac}.
We write $g=C_3R_z$ and
assume the commutation relation $[g,\Gamma]=0$.
We consider a Dirac point located at the $C_3R_z$ symmetric point,
and assume that the energy band is gapped in the vicinity of
the Dirac point, except for the Dirac point itself.
At the Dirac point,
we can define the six topological numbers
$\nu_{\pm 1}, \nu_{\pm\omega}, \nu_{\pm\omega^2}$
as we have seen in Sec.~\ref{sec_3d_dirac},
but they are not completely independent.
Since $g^4 = C_3$ and $g^3 = R_z$,
the $C_3R_z$ symmetry is always accompanied by
the individual symmetries $C_3$ and $R_z$.
All the points on $k_z$ axis are fixed in $C_3$,
and in order to have a band gap at these momenta (except for the Dirac point),
all the indices for sectors $C_3=1,\omega,\omega^2$ should be zero;
\begin{align}
\nu_1+\nu_{-1}=\nu_\omega+\nu_{-\omega}=\nu_{\omega^2}+\nu_{-\omega^2}=0.
\label{eq: condition for indices for C3Rz}
\end{align}
Note that the sectors $g=\pm 1, \pm\omega,\pm\omega^2$
belong to the sectors $C_3=1,\omega, \omega^2$, respectively.
Similarly, since the $k_x$-$k_y$ plane is fixed in $R_z$,
we have the requirement
\begin{align}
\nu_1+\nu_\omega+\nu_{\omega^2}=\nu_{-1}+\nu_{-\omega}+\nu_{-\omega^2}=0,
\label{eq: condition 2 for indices for C3Rz}
\end{align}
in order to avoid a gap closing plane.
Due to these constraints, we are left with only two
independent indices, for example, $\nu_1,\nu_\omega$.
We can also define a winding number on the $R_z$ symmetric plane.
Let us perform the block diagonalization with respect to $R_z=\pm 1$
on the $R_z$ symmetric plane. Then, the $R_z=+1$ sector is viewed
as a 2D system class AIII+$C_3$,
and we can define a winding number $\nu_{W+}$ [Eq.~(\ref{eq: winding number})]
for $S^1$ surrounding the Dirac point.
Similarly, we can also define $\nu_{W-}$ for the $R_z=-1$ sector.
However, the total winding number $\nu_W=\nu_{W+}+\nu_{W-}$
should vanish because a circle $S^1$ defining the total winding number
can be freely deformed in the 3D space so
it is contractible without touching the Dirac point.
Consequently, independent topological charges assigned to the Dirac point
in the present case are a set of a winding number $\nu_{W+}$
and two topological indices $\nu_1, \nu_\omega$:
\begin{align}
(\nu_{W+},\nu_1,\nu_\omega) \in \Z^3.
\end{align}
For the Dirac point at $K$ in the stacked honeycomb
lattice model in Sec.~\ref{sec_3d_dirac}, Eq.~(\ref{eq: charges C3Rz}) leads to
$(\nu_{\pm 1},\nu_{\pm \omega}, \nu_{\pm \omega^2})=(\pm 1,\mp 1,0)$, which
is consistent with the
constraints Eqs.~(\ref{eq: condition for indices for C3Rz})
and (\ref{eq: condition 2 for indices for C3Rz}).
The winding numbers $\nu_{W\pm}$
can be evaluated
using the effective Hamiltonian Eq.(\ref{eq_H_3d_dirac})
as follows.
On the $R_z$-symmetric plane $(k_z=\pi/(2c))$, the Hamiltonian is expressed as,
\begin{eqnarray}
H=k_x\sigma_x\tau_z+k_y\sigma_y,
\end{eqnarray}
with $R_z = \rho_z$ and $\Gamma=\sigma_z\rho_z$.
It takes the same form in both the $R_z =\pm 1$ sectors,
but the chiral operator has an opposite sign,
i.e., $\Gamma=\pm \sigma_z$, leading to $\nu_{W\pm}=\pm 1$
at the $K$-point ($\tau_z=+1$).
Since $\nu_{W\pm}$ is non-zero,
non-trivial indices $\nu_{a_i}$ are not necessary
for the topological protection of the Dirac point in this particular example.
However, if we consider a $C_3R_z$ symmetric superlattice where $K$ and
$K'$-points are folded onto the same $\Gamma$ point,
as in the case of the 2D honeycomb lattice in Sec.~\ref{sec_c3},
the winding number around the Dirac point becomes zero while other indices
$\nu_{a_i}$ are still non-zero.
There, the gaplessness at the Dirac point is solely guaranteed by non-trivial indices $\nu_{a_i}$.
\subsection{Class AIII with $C_2R_z$ in 3D}
Finally, we study the chiral symmetric Dirac points with the
inversion symmetry $P=C_2R_z$ in 3D.
Here we consider two different cases,
(i) where we have $C_2$ and $R_z$ symmetries individually,
and (ii) where we only have $P$ but not $C_2$ or $R_z$.
First we consider the case (i).
The half-flux cubic lattice model argued in Sec.~\ref{sec_3d_dirac}
belongs to this case.
We assume $[C_2,R_z]=[C_2,\Gamma]=[R_z,\Gamma]=0$.
At the inversion symmetric point,
we can define the four topological indices
$\nu_{++},\nu_{+-},\nu_{-+},\nu_{--}$
for the sectors labeled by the eigenvalues of $(C_2,R_z)$.
To avoid the band gap closing on the $C_2$ symmetric axis,
\begin{align}
\nu_{++} + \nu_{+-} = \nu_{-+} + \nu_{--} = 0.
\end{align}
Similarly, to keep the $R_z$ symmetric plane gapped, we require
\begin{align}
\nu_{++} + \nu_{-+} = \nu_{+-} + \nu_{--} = 0.
\end{align}
Therefore $(\nu_{++},\nu_{+-},\nu_{-+},\nu_{--})$
is expressed by a single integer $s$ as $(s,-s,-s,s)$.
In Sec.~\ref{sec_3d_dirac}, we defined the topological indices
$(\nu_{\rm even}, \nu_{\rm odd})$
for the sectors labeled by $P = C_2 R_z$,
and they are related to the present indices by
$\nu_{\rm even} = \nu_{++} + \nu_{--} =2s$
and $\nu_{\rm odd} = \nu_{+-} + \nu_{-+} =-2s$.
Similarly to the $C_3R_z$ case in the previous subsection,
we can define the winding numbers $\nu_{W\pm}$
for $R_z=\pm 1$ sector, respectively.
The total winding number $\nu_W=\nu_{W+}+\nu_{W-}$
again vanishes for the same reason.
Therefore, the independent topological charges assigned to a Dirac point are
\begin{align}
(\nu_{W+},\nu_{++}) \in \Z^2.
\label{eq:Z2inversion}
\end{align}
In the case (ii), we can define the topological indices
$\nu_{\rm even},\nu_{\rm odd}$
for the sectors labeled by the eigenvalues of the inversion $P$
(where $[P,\Gamma]=0$ is assumed).
The summation $\nu_{\rm even}+\nu_{\rm odd}=\t{tr}\,\Gamma$
should vanish; otherwise,
the band gap closes everywhere in $k$-space.
Unlike the case (i), we do not have the winding numbers $\nu_{W\pm}$
since $R_z$ symmetry is absent and thus
we do not have a 2D subspace invariant under the symmetry operation.
As a result, the Dirac point is characterized only by
a single topological number,
\begin{align}
\nu_{\rm even} \in \Z.
\label{eq:Zinversion}
\end{align}
Because $C_2$ and $R_z$ are both order-two operators,
we can also derive the same conclusion from
the analysis using the Clifford algebra.
Let us consider a 3D Dirac point
\begin{align}
H=k_x\gamma_x+k_y\gamma_y+k_z\gamma_z,
\end{align}
and explore whether a mass term $m\gamma_0$ is allowed or not by imposed
symmetries.
In the case (i), we have three symmetries:
the chiral symmetry $\Gamma$,
the two-fold rotation in the $xy$-plane $C_2$, and
the reflection symmetry along the $z$-direction $R_z$.
The symmetry operators satisfy the following algebraic relations
with the gamma matrices,
\begin{align}
\{\gamma_{i=0,x,y,z},\Gamma\}&=0,
\n
[\gamma_{i=0,z},C_2]=\{\gamma_{i=x,y},C_2\}&=0,
\n
[\gamma_{i=0,x,y},R_z]=\{\gamma_{z},R_z\}&=0,
\end{align}
together with the mutual commutation relations
\begin{align}
[R_z,C_2]=[C_2,\Gamma]=[R_z,\Gamma]&=0.
\end{align}
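These algebraic relations can be realized explicitly and verified numerically. The sketch below (illustrative; the $8\times 8$ representation and the overall phases of $C_2$ and $R_z$ are choices, not taken from the text) builds the gamma matrices from Pauli factors and lets one check every relation above:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Six mutually anticommuting 8x8 matrices; five of them serve as
# gamma_x, gamma_y, gamma_z, gamma_0 and the chiral operator Gamma.
g_x = kron3(sx, I2, I2)
g_y = kron3(sy, I2, I2)
g_z = kron3(sz, sx, I2)
g_0 = kron3(sz, sy, I2)            # candidate mass term
Gam = kron3(sz, sz, sx)            # chiral operator Gamma
A6  = kron3(sz, sz, sy)            # sixth generator, proportional to gamma_z Rz
Rz  = 1j * g_z @ A6                # reflection z -> -z (phase so Rz^2 = +1)
C2  = -1j * g_x @ g_y              # two-fold rotation in the xy-plane

comm = lambda a, b: a @ b - b @ a  # commutator
anti = lambda a, b: a @ b + b @ a  # anticommutator
```

In this representation $\gamma_z R_z\propto A_6$ indeed anticommutes with all of $\gamma_0,\gamma_x,\gamma_y,\gamma_z$, and $\Gamma$, so the six matrices generate the $Cl_6$ factor quoted below.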
Then we can construct a Clifford algebra from these relations as
\begin{align}
Cl_{6}\tensor Cl_1&=\{\gamma_0,\gamma_x,\gamma_y,\gamma_z,\gamma_z R_z, \Gamma \}
\tensor \{\gamma_x\gamma_y C_2\}.
\end{align}
In a similar way to Sec.~\ref{sec:charges}A,
the mass term $\gamma_0$ can be considered as an additional chiral
operator $\Gamma'$, so if the topological number of $\Gamma$ is nonzero,
then the Dirac point is stable.
From an extension of Clifford algebra which is obtained by adding $\Gamma$ to
other generators,
\begin{align}
Cl_{4}\tensor Cl_1 &=\{\gamma_x,\gamma_y,\gamma_z,\gamma_z R_z \}
\tensor \{\gamma_x\gamma_y C_2\} \n
\to Cl_{5}\tensor Cl_1 &=\{\gamma_x,\gamma_y,\gamma_z,\gamma_z R_z, \Gamma \}
\tensor \{\gamma_x\gamma_y C_2\},
\end{align}
we identify the relevant classifying space as
${\cal C}_0 \times {\cal C}_0$, then
the relevant topological number is evaluated as the zero-th homotopy,
\begin{align}
\pi_0({\cal C}_0 \times {\cal C}_0)=\mathbb{Z}^2,
\end{align}
which coincides with Eq.(\ref{eq:Z2inversion}).
In the case (ii),
the additional symmetry is only inversion $P=C_2R_z$.
The algebraic relations for $P$ read
\begin{align}
[\gamma_0,P]=\{\gamma_{i=x,y,z},P\} =0, \quad
[P,\Gamma] =0,
\end{align}
which form the Clifford algebra,
\begin{align}
Cl_6 &= \{\gamma_0,\gamma_x,\gamma_y,\gamma_z,
\gamma_x\gamma_y\gamma_z P, \Gamma \}.
\end{align}
The existence condition for the mass term $\gamma_0$
is obtained from the extension problem
\begin{align}
Cl_{4} &=\{\gamma_x,\gamma_y,\gamma_z,
\gamma_x\gamma_y\gamma_z P \}
\n
\to Cl_{5} &=\{\gamma_x,\gamma_y,\gamma_z,
\gamma_x\gamma_y\gamma_z P, \Gamma \},
\end{align}
which gives the classifying space as ${\cal C}_0$,
and thus the topological charge protecting the Dirac point is given by
\begin{align}
\pi_0({\cal C}_0)=\mathbb{Z}.
\end{align}
This result reproduces Eq.(\ref{eq:Zinversion}).
\section{Conclusion}
\label{sec_conc}
In this paper, we show that the coexistence of chiral symmetry and
spatial symmetry can stabilize zero-energy modes, even when the chiral
symmetry alone does not ensure their stability.
We present general arguments for the stability and we identify the
associated topological numbers.
The validity of our arguments is demonstrated for the Dirac points in two
dimensions with a variety of spatial symmetries.
We also illustrate that Dirac semimetals in three dimensions are
possible in the presence of coexisting spatial symmetries.
In the last part, we list and classify
the independent topological invariants
associated with a given Dirac point.
We find that the set of topological numbers found here
gives a complete minimal set of quantum numbers
allowed by the algebraic constraint
in the case of order two symmetries.
\section*{ACKNOWLEDGMENT}
The authors acknowledge
C. Hotta, K. Asano and K. Shiozaki for useful discussions.
This project has been
funded by JSPS Grant-in-Aid for Scientific Research No.
24740193, No. 25107005 (M.K.), No. 24840047 (T.M.), and No.22103005,
No. 25287085 (M.S.).
The nature of the nuclear force under extreme conditions of isospin asymmetry and
baryon density can be understood from the study of neutron star properties
\cite{baym79,latt04,anto13}. Even with recent upgrades of experimental techniques, one can
only create nuclear matter up to a few times the nuclear saturation density~\cite{latt04}.
Due to the lack of experimental facilities to probe the high-density environment, a neutron star is considered the sole natural laboratory that can provide some information about the
nature of the nuclear force at high density. The global properties of a neutron star carry information about the nature of the equation of state,
in other words, the nature of the nuclear interaction. For the last few decades, the limit on
the maximum mass of the neutron star has remained a hot topic for both nuclear physicists and astrophysicists.
The theory of general relativity constrains the maximum mass of a neutron star to about 3$M_\odot$~\cite{clif74},
while the lowest observed neutron star mass is approximately 1.1$M_\odot$ \cite{weis01,mart15}.
A neutron star is considered the densest object of the visible universe, having a
central density 5--10 times the saturation density~\cite{latt04,baym79}. This high density
creates ambiguity about the internal composition of the neutron star. The internal structure
of a neutron star is not composed of only nucleons (protons and neutrons) and leptons, as we
assume in a simple model. From a simple energetic point of view, we can argue that at high
density, when the Fermi energy of the nucleon crosses the rest mass of the hyperon, there is a
possibility of conversion of nucleons to hyperons. Usually, the hyperons are produced
at 2--3 times the saturation density, and a neutron star contains 15--20\%
hyperons inside the core \cite{glen85}. But the
production of the hyperons reduces the maximum mass of a neutron star \cite{amba60}, and
many calculations cannot reproduce the recent observations of neutron star masses of about
2$M_\odot$ \cite{anto13,demo10,frei08}. This problem is known as the hyperon puzzle \cite{amba60}. Primarily, there are three ways to solve this problem: (a) a repulsive hyperon-hyperon interaction through the exchange of vector mesons
\cite{weis12,weis14,oert15}, (b) the addition of a repulsive hyperonic three-body
force~\cite{vida11,yama13}, and (c) the possibility of a phase transition
to deconfined quark matter~\cite{ozel10,bona12}.
Still, the hyperon puzzle is an open problem, which can be solved by
knowing the hyperon-hyperon interaction in detail. The hyperon-hyperon
interaction strength plays a major role in deciding the maximum mass and
other properties of a hyperon star. So it is necessary to properly
investigate the effects of the hyperon-hyperon interaction
strength on the various properties. In the present contribution,
I study the effects of the $\phi_0$-meson on the EOS and mass-radius profile
of the hyperon star with various parameter sets of the relativistic mean field (RMF) model. These
parameters sets are G1 \cite{frun97}, G2 \cite{frun97},
IFSU \cite{fatt10}, IFSU* \cite{fatt10}, FSU \cite{todd05},
FSU2 \cite{weic14}, TM1 \cite{suga94}, TM2 \cite{suga94}, PK1 \cite{long04},
NL3 \cite{lala97}, NL3* \cite{lala09}, NL3-II \cite{lala97},
NL1 \cite{rein86}, NL-RA1 \cite{rash01},
SINPA \cite{mond16}, SINPB \cite{mond16},
GM1 \cite{glen91}, GL97 \cite{glenb97}, GL85 \cite{glen85}, L1 \cite{wale74},
L3 \cite{furn87}, and HS \cite{horo81}. The purpose
of taking so many parameter sets is to show the predictive capacity
of the RMF model to reproduce the maximum mass of the hyperon star.
This proceeding is organised as follows: in Sec. II, I give a short
formalism of the RMF model and the various equations used to calculate the energy
and pressure densities, which constitute the equation of state. The Tolman-Oppenheimer-Volkoff equation is used to calculate the mass and radius of a
hyperon star. Sec. III is devoted to discussing the results. In Sec. IV,
a summary of the results is given.
\vspace{0.5cm}
\begin{figure}
\centering
\includegraphics[width=9.5cm]{g2.eps}
\caption{The upper panel of the graph shows the variation of the baryon density with the energy density. The uppermost curve gives the EOS for nucleonic matter (nucleon+lepton), the middle one indicates hyperonic (hyperon+lepton) matter with the $\phi_0$-meson contribution, and the lower one is the same as the middle one except for the contribution of the $\phi_0$-meson. The lower panel of the graph shows the variation of the energy density with the pressure density. The G2 parameter set is used for these calculations.}
\label{fig1}
\end{figure}
\section{Theoretical formalism}\label{form}
{ The relativistic mean field model provides a smooth road to go
from a finite
nuclear system to a neutron star system, which has an extremely dense and highly
isospin-asymmetric environment. Nowadays the RMF model is used to
study various properties of neutron and hyperon stars. The starting
point of the RMF model is an effective Lagrangian. For the present
calculation, I use an effective Lagrangian which contains non-linear
interactions of the $\sigma$ and $\omega$-mesons and cross-couplings of
various effective mesons~\cite{rein86,mill72,furn87,ring96} },
\begin{eqnarray}
&&{\cal L}=\sum_B\overline{\psi}_B\bigg(
i\gamma^{\mu}\partial_{\mu}-m_B+g_{\sigma B}\sigma -g_{\omega B}\gamma_\mu
\omega^ \mu
-\frac{1}{2}g_{\rho B}\gamma_\mu\tau\rho^\mu-g_{\phi_0 B} \gamma_{\mu} {\phi_0}^{\mu} \bigg)
\psi_B + \frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma \nonumber \\
&&-m_{\sigma}^2\sigma^2
\left(\frac{1}{2}+\frac{\kappa_3}{3!}\frac{g_{\sigma}\sigma}{m_B}
+\frac{\kappa_4}{4!}\frac{g_{\sigma}^2\sigma^2}{m_B^2}\right)
- \frac{1}{4}\Omega_{\mu\nu}\Omega^{\mu\nu}
+\frac{1}{2}m_{\omega}^2
\omega_{\mu}\omega^{\mu}\left(1+\eta_1\frac{g_{\sigma}\sigma}{m_B}
+\frac{\eta_2}{2}\frac{g_{\sigma}^2\sigma^2}{m_B^2}\right)
-\frac{1}{4}R_{\mu\nu}R^{\mu\nu} \nonumber\\
&&+\frac{1}{2}m_{\rho}^2
R_{\mu}R^{\mu}\left(1+\eta_{\rho}
\frac{g_{\sigma}\sigma}{m_B} \right)
+\frac{1}{4!}\zeta_0 \left(g_{\omega}\omega_{\mu}\omega^{\mu}\right)^2
+\frac{1}{2} {m_\phi}^2 {\phi_\mu}{\phi^\mu}+\sum_l\overline{\psi}_l\left(
i\gamma^{\mu}\partial_{\mu}-m_l\right)\psi_l
+\Lambda_v R_{\mu}R^{\mu}
(\omega_{\mu}\omega^{\mu}),
\end{eqnarray}
where $\Omega_{\mu\nu}$ and $R_{\mu\nu}$ are the field tensors
for the $\omega$ and $\rho$ fields, respectively, defined as
$\Omega_{\mu\nu}=\partial_\mu \omega_\nu-\partial_\nu \omega_\mu$ and
$R_{\mu\nu}=\partial_\mu R_\nu-\partial_\nu R_\mu$. The symbols
carry their usual meanings.
{The $\sigma$, $\omega$, and $\rho$-mesons are exchanged between the nucleons,
while the $\phi_0$, being a strange meson, is exchanged between the
hyperons only. The coupling constants of the nucleon-meson
interactions are fitted to reproduce the desired
nuclear matter saturation properties and finite nuclear properties, like
the charge radius, binding energy, and monopole excitation energy of
a set of spherical nuclei.} The nature of the interaction depends on the
quantum numbers and masses of the intermediate mesons. The $\sigma$ (T=0, S=0) is an isoscalar-scalar meson; it gives an intermediate-range attractive interaction.
The $\omega$-meson (T=0, S=1) is an isoscalar-vector meson, which gives a
short-range repulsive interaction. The $\rho$-meson (T=1, S=1) is an isovector-vector meson, whose interaction accounts for the isospin asymmetry. The newly
added $\phi_0$-meson is a vector meson, which gives an interaction similar to that of
the $\omega$-meson \cite{cava08,bipa89,bedn03,spal99,bunt04,scha96,feng92}.
By using the classical Euler-Lagrange equations of motion, we get the
equations of motion for the different mesons.
\begin{flushleft}
\begin{eqnarray}
m_{\sigma}^2 \left(1+\frac{g_{\sigma N}\kappa_3\sigma_0}{2m_B}
+\frac{\kappa_4 g_{\sigma N}^2\sigma_0^2}{6m_B^2} \right) \sigma_0
-\frac{1}{2}m_{\rho}^2\eta_{\rho}\frac{g_{\sigma N}\rho_{03}^2}{m_B}
-\frac{1}{2}m_{\omega}^2\left(\eta_1\frac{g_{\sigma N}}{m_B}
+\eta_2\frac{g_{\sigma N}^2\sigma_0}{m_B^2}\right)\omega_0^2
=\sum g_{\sigma B}\rho^s_B.
\end{eqnarray}
\end{flushleft}
\begin{flushleft}
\begin{eqnarray}
m_{\omega}^2\left(1+\eta_1\frac{g_{\sigma N} \sigma_0}{m_B}
+\frac{\eta_2}{2}\frac{g_{\sigma N}^2\sigma_0^2}{m_B^2}\right)\omega_0
+\frac{1}{6}\zeta_0g_{\omega N}^2\omega_0^3
=\sum g_{\omega B}\rho_B.
\end{eqnarray}
\end{flushleft}
\begin{eqnarray}
m_{\rho}^2\left(1+\eta_{\rho}\frac{g_{\sigma N}\sigma_0}{m_B}\right)R_0
=\frac{1}{2}\sum g_{\rho B}\rho_{B3}.
\end{eqnarray}
\begin{eqnarray}
{m_\phi}^2 \phi_0 =\sum g_{\phi B} \rho_B.
\end{eqnarray}
The equation for the $\phi_0$-meson is similar to that of the $\omega$-meson
except for the coupling constants.
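The $\sigma$ equation must be solved self-consistently because the scalar density on its right-hand side depends on the effective mass $m_B^*=m_B-g_{\sigma}\sigma_0$. As a stripped-down illustration of such a fixed-point iteration (a linear $\sigma$ model in pure neutron matter, with placeholder couplings that are not any of the parameter sets quoted here), one may write:

```python
import numpy as np

# Illustrative Walecka-type numbers (MeV) -- placeholders, not a fitted set.
m_N, m_s = 939.0, 500.0
g_s = 9.0
hbarc = 197.327                          # MeV fm

def scalar_density(kF, m_eff):
    """rho_s = (1/pi^2) int_0^kF m* k^2 / sqrt(k^2 + m*^2) dk  (fm^-3)."""
    k = np.linspace(1e-6, kF, 2000)      # fm^-1
    mbar = m_eff / hbarc                 # effective mass in fm^-1
    f = mbar * k**2 / np.sqrt(k**2 + mbar**2)
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(k)) / np.pi**2

def solve_sigma(rho_B, mix=0.5, tol=1e-10):
    """Fixed-point iteration of m_s^2 sigma = g_s rho_s(m_N - g_s sigma)."""
    kF = (3.0 * np.pi**2 * rho_B) ** (1.0 / 3.0)   # neutron matter, fm^-1
    sigma = 0.0                                    # MeV
    for _ in range(1000):
        m_eff = m_N - g_s * sigma
        new = g_s * scalar_density(kF, m_eff) * hbarc**3 / m_s**2
        if abs(new - sigma) < tol:
            break
        sigma = (1.0 - mix) * sigma + mix * new    # damped update
    return sigma, m_eff
```

Once $\sigma_0$ has converged, the $\omega$, $\rho$, and $\phi_0$ equations are algebraic in the densities and follow directly.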
We use the expression,
\begin{eqnarray}
U_{Y} = {m_n}(\frac{m_n^*}{m_n}-1)x_{\sigma Y}+(\frac{g_\omega}
{m_\omega})^2\rho_0 x_{\omega Y},
\end{eqnarray}
to calculate the hyperon potential depth.
Here Y stands for the different hyperons ($\Lambda, \Sigma, \Xi$),
$x_{\sigma Y}$ and $x_{\omega Y}$ are the coupling constants of the
hyperon-meson interactions, and $\rho_0$ is the saturation density.
We choose ${U_{\Lambda}}^{(N)}=-30$ MeV
\cite{batt97,mill88},
${U_{\Sigma}}^{(N)}= +40$ MeV \cite{frie07},
and ${U_{\Xi}}^{(N)}= -28 $ MeV \cite{glen91}. The hyperon-meson
coupling constants $x_{\sigma Y}$ and $x_{\omega Y}$ are fitted in such
a way that the hyperon potential depths for the various hyperons are reproduced.
We can vary
$x_{\sigma Y}$ and $x_{\omega Y}$ in different combinations to get the
depths of the hyperon potentials. The hyperon interaction strengths with the $\rho$-meson are fitted
according to the SU(6) symmetry \cite{dove84}, $x_{\Lambda\rho}=$ 0, $x_{\Sigma\rho}=$ 2, $x_{\Xi\rho}=$1.
The interaction strengths of the hyperons with $\phi_0$-meson are given by
$x_{\phi\Lambda}=$$-{\sqrt{2}}/{3}$ g$_{\omega N}$, $ x_{\phi\Sigma}=
-{\sqrt{2}}/{3}$ $g_{\omega N}$, $x_{\phi\Xi}=-{2\sqrt{2}}/{3}$ $g_{\omega N}$.
These equations form a set of self-consistent equations, which can be
solved by an iterative method to find the
various meson fields and densities. Using the energy-momentum tensor, the total
energy and pressure densities can be written as
\begin{eqnarray}\label{energy}
&&{\cal E}=\sum_B\frac{Y_B}{(2\pi)^3}\int_0^{k_F^B} d^3k \sqrt{k^2+{m_B^*}^2}
+ m_{\sigma}^2\sigma_0^2\left(\frac{1}{2}+\frac{\kappa_3}{3!}
\frac{g_{\sigma}\sigma_0}{m_B}+\frac{\kappa_4}{4!}
\frac{g_{\sigma}^2\sigma_0^2}{m_B^2}\right)
+ \frac{1}{2}m_{\omega}^2 \omega_0^2\left(1+\eta_1
\frac{g_{\sigma}\sigma_0}{m_B}+\frac{\eta_2}{2}
\frac{g_{\sigma}^2\sigma_0^2}{m_B^2}\right) \nonumber \\
&& + \frac{1}{2}m_{\rho}^2 \rho_{03}^2\left(1+\eta_{\rho}
\frac{g_{\sigma}\sigma_0}{m_B} \right)+\frac{1}{2}{m_\phi}^2 \phi^2
+\frac{1}{8}\zeta_0g_{\omega}^2\omega_0^4
+\frac{1}{\pi^2}\sum_l \int_0^{k_F^l} \sqrt{k^2+{m_l}^2} k^2 dk
+3\Lambda_V \omega_0^2 R_0^2,
\end{eqnarray}
and
\begin{eqnarray}\label{pressure}
&&{\cal P}=\sum_B\frac{Y_B}{3(2\pi)^3}\int_0^{k_F^B}\frac{k^2 d^3k}{\sqrt{k^2+{m_B^*}^2}}
- m_{\sigma}^2\sigma_0^2\left(\frac{1}{2}+\frac{\kappa_3}{3!}
\frac{g_{\sigma}\sigma_0}{m_B}+\frac{\kappa_4}{4!}
\frac{g_{\sigma}^2\sigma_0^2}{m_B^2}\right)
+ \frac{1}{2}m_{\omega}^2 \omega_0^2\left(1+\eta_1
\frac{g_{\sigma}\sigma_0}{m_B}+\frac{\eta_2}{2}
\frac{g_{\sigma}^2\sigma_0^2}{m_B^2}\right) \nonumber \\
&& + \frac{1}{2}m_{\rho}^2 \rho_{03}^2\left(1+\eta_{\rho}
\frac{g_{\sigma}\sigma_0}{m_B} \right)+\Lambda_V R_0^2 \omega_0^2
+\frac{1}{24}\zeta_0g_{\omega}^2\omega_0^4
+\frac{1}{3\pi^2}\sum_l \int_0^{k_F^l} \frac{k^4 dk}{\sqrt{k^2+m_l^2}},
\end{eqnarray}
where $l$ stands for the leptons, i.e., electrons and muons.
The variation of the total energy and pressure densities
with the baryon density is known as the equation of state of the nuclear matter.
Imposing the $\beta$-equilibrium and charge-neutrality conditions, it can be
converted to the star-matter equation of state. These equations of state
are the inputs of the Tolman-Oppenheimer-Volkoff (TOV) \cite{tolm39,oppe39} equations, which are
given by
\begin{eqnarray}
\frac{\partial P}{\partial r}= - \frac{({\cal E}+P)(m(r)+4\pi r^3 P)}{r(r-2m(r))},
\end{eqnarray}
\begin{eqnarray}
\frac{\partial m}{\partial r}=4\pi r^2 {\cal E}(r),
\end{eqnarray}
where $m(r)$ is the enclosed gravitational mass, $P$ is the pressure, ${\cal E}$
is the total energy density, and $r$ is the radial variable.
These two coupled hydrostatic equations are solved to get the mass and radius
of the neutron star at a certain central density. Different central densities
give different combinations of mass and radius, and one particular choice of
central density gives the maximum mass of the neutron star for a given EOS.
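As an illustration of this procedure, the following sketch integrates the TOV equations outward from a chosen central pressure with a simple Euler step, using a toy polytropic EOS in geometrized units ($G=c=1$) in place of the tabulated RMF equation of state; all numbers are illustrative:

```python
import numpy as np

# Toy polytrope P = K * eps^Gamma standing in for the RMF EOS
# (geometrized units, G = c = 1; K and Gamma are illustrative choices).
K, Gamma = 100.0, 2.0

def eps_of_P(P):
    return (P / K) ** (1.0 / Gamma)     # invert the EOS for the energy density

def tov_rhs(r, P, m):
    eps = eps_of_P(P)
    dP = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dm = 4.0 * np.pi * r**2 * eps
    return dP, dm

def integrate_star(P_c, dr=1e-3, r_max=100.0):
    """Euler-step the TOV equations outward until the pressure vanishes."""
    r, P, m = dr, P_c, 0.0              # start just off the center
    while P > 1e-12 * P_c and r < r_max:
        dP, dm = tov_rhs(r, P, m)
        P += dr * dP
        m += dr * dm
        r += dr
    return r, m                         # stellar radius and gravitational mass
```

Scanning the central pressure $P_c$ then traces out the mass-radius curve, and the maximum of $M(P_c)$ is the maximum mass for the chosen EOS.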
\vspace{0.6cm}
\begin{figure}[h]
\vspace{0.6cm}
\centering
\includegraphics[width=9.5cm]{allh.eps}
\caption{ The figure shows the mass-radius curves for the various parameter
sets of the RMF model. Box (a) shows the acceptable range of the radius
of the canonical star (1.4$M_\odot$), while box (b) represents star
masses in the range 1.2--2 $M_\odot$ with radii 10.7--13.5 km. Box
(c) shows the limit on the maximum mass of the neutron star, i.e., 1.93--2.05 $M_\odot$.
}\label{fig2}
\end{figure}
\begin{figure}[h]
\vspace{0.6cm}
\centering
\includegraphics[width=9.5cm]{g2_con.eps}
\caption{ The graph shows the variation of the radius of the canonical star
(1.4$M_\odot$) with the hyperon-meson coupling constant $x_{\sigma\Lambda}$.
}\label{fig3}
\end{figure}
\section{Results and discussions}\label{result}
In the present proceeding, I use 22 parameter sets of the RMF model
to calculate the properties of the hyperon star. These parameter sets
are divided into five groups according to the nature of the nucleon-nucleon
interaction. The parameter sets belonging to the same group
differ from each other only by the values of the coupling constants
and the saturation properties. For example, group I contains the G1 and G2 parameter
sets. Both G1 and G2 have a similar type of nucleon-meson interaction
and contain the same cross-couplings and self-interactions among mesons. These
22 parameter sets are commonly used in RMF calculations. These parameter sets
are differentiated from each other by a wide range of saturation properties,
like the incompressibility (K), symmetry energy (J), saturation binding energy
(E/A), and saturation density ($\rho_0$). These saturation properties cover a wide range of values: the incompressibility of NL1 is 211.4 MeV, while that of L1
is 626.3 MeV. Similarly, the symmetry energy ranges from 21.7 MeV (L1) to
43.7 MeV (NL1). These two quantities of nuclear matter affect the
EOS of the neutron star in a significant way. The purpose of
taking so many parameter sets is to check the predictive capacity of the RMF
model with different parameter sets.
The nuclear matter equation of state is considered one of the most important
ingredients for the calculation of neutron star properties. Before
discussing the effects of the $\phi_0$-meson on the properties
of the hyperon star,
it is wise to investigate how the $\phi_0$-meson affects the EOS.
In Fig.~\ref{fig1} the effects of the $\phi_0$-meson on the EOS of
neutron star matter
are shown. The upper panel shows the variation of the baryon density with
the energy density. The upper curve of the upper panel is for pure neutron-proton matter with no contribution of the hyperons; this curve is the stiffest one.
The lowest curve contains the contribution of the hyperons, while the middle one contains the
contribution of the hyperons along with the $\phi_0$-meson as an
intermediating meson.
The graph clearly shows that the $\phi_0$-meson makes the EOS stiff, so it increases
the maximum mass of the hyperon star. In the upper panel
of Fig.~\ref{fig1},
the contribution of the $\phi_0$-meson comes in around 0.8 fm$^{-3}$, which is shown by
a blue circle; hence the EOS with and without the $\phi_0$-meson deviate from each other around 0.8 fm$^{-3}$.
The lower panel shows the variation of the energy density with the pressure density.
The pressure-energy density graph also follows a similar trend as the
upper panel. The hyperon star matter without the $\phi_0$-meson shows a soft EOS,
while with the $\phi_0$-meson it shows a comparatively stiff EOS.
In Fig.~\ref{fig2}, I show the mass-radius graph for the different parameter sets.
Three boxes are shown in the figure. Box (a) represents the radius of the
canonical star (1.4 $M_\odot$) \cite{hebe10}. From the study of a chiral
effective model, the authors of Ref.~\cite{hebe10} suggested that the radius
of the canonical star lies
in the range 9.7--13.9 km. The figure shows that most of the parameter sets
are unable to reproduce the radius of the canonical star (1.4 $M_\odot$)
in the above range. Only a few parameter sets, like FSU2, PK1, GL85, GL97,
SINPA, SINPB, NL3, NL3*, IFSU*, and G2, can give the radius of the canonical star in the above range.
But the radius of the canonical star depends on the hyperon-meson coupling
constants. For example, if I change the hyperon-meson coupling constants
while keeping the different hyperon potentials fixed, the radius of the
canonical star increases monotonically with the hyperon-meson interaction strength.
Box (b) shows stars with masses 1.2--2 $M_\odot$ and radii in the
range 10.7--13.5 km. Many parameter sets are able to reproduce masses
and radii in this range. These parameter sets are GL97, GL85, FSU, TM1,
PK1, GM1, SINPA, and SINPB. Similarly, box (c) indicates the limit on the
maximum mass of the neutron star from recent observations. The GM1, NL3-II,
NL3, and NL3* parameter sets have maximum masses within this recent
observational limit, which is 1.93--2.05 $M_\odot$.
Fig.~\ref{fig3} shows the variation of the radius of the canonical star (1.4 $M_\odot$) with the hyperon-meson coupling constants.
For a quantitative check, $x_{\sigma\Lambda}$ is changed from 0.5 to 0.7, and as a
result the radius changes from 10.0278 km to 11.275 km. Thus the radius of the
canonical star changes by about 13\% for a change of 0.2 in the hyperon-meson coupling constant.
While changing the $x_{\sigma\Lambda}$ values, we also keep
changing the value of
$x_{\omega\Lambda}$ to fix the value of $U_\Lambda$ at $-30$ MeV.
In a similar way, I take care of the hyperon-meson coupling constants
for the other hyperons: I keep the potentials $U_\Lambda$,
$U_\Sigma$, and $U_\Xi$ fixed while changing $x_{\sigma Y}$ and $x_{\omega Y}$.
\vspace{0.8cm}
\begin{figure}[h]
\vspace{0.7cm}
\centering
\includegraphics[width=9.5cm]{yield_G1.eps}
\caption{Particle fractions of the hyperons with the G2 parameter set. The solid lines
show the particle fractions of the different types of hyperons with the $\phi_0$-meson, and the corresponding dotted lines show the results for the hyperons without the
$\phi_0$-meson.}
\label{fig4}
\end{figure}
Fig.~\ref{fig4} shows the effect of the $\phi_0$-meson on the hyperon
production with the G2 parameter set. The solid lines represent the hyperon
production with the $\phi_0$-meson
contribution, and the dotted lines that without the $\phi_0$-meson. This graph
shows that the $\phi_0$-meson shifts the threshold density (the density at which the
different hyperons are produced) to higher density. For example, without
the contribution of the $\phi_0$-meson the $\Xi$ is produced at a density of
0.3989 fm$^{-3}$, but the addition of the $\phi_0$-meson shifts
the threshold density to 0.461 fm$^{-3}$. Similarly, the $\Sigma^+$ hyperon's
threshold density shifts from 0.914 fm$^{-3}$ to 1.038 fm$^{-3}$.
The threshold density of the $\Lambda$-hyperon does not shift by
a significant amount. In Table~\ref{tab4}, the maximum mass, the corresponding
radius, the compactness,
and the central density at which the maximum mass occurs
are given for the hyperon star with different parameter sets. Results are given for the
neutron star, the hyperon star in the $\sigma$-$\omega$-$\rho$ model, and the
hyperon star in the $\sigma$-$\omega$-$\rho$-$\phi_0$ model. We conclude
from Table~\ref{tab4} that for all the parameter sets the maximum masses follow
a general trend, i.e., $M_{\mathrm{max}}^N$ ($\sigma$-$\omega$-$\rho$) $>$
$M_{\mathrm{max}}^H$ ($\sigma$-$\omega$-$\rho$-$\phi_0$) $>$ $M_{\mathrm{max}}^H$ ($\sigma$-$\omega$-$\rho$), where $M_{\mathrm{max}}^N$ ($\sigma$-$\omega$-$\rho$) is the
maximum mass of the proton-neutron star in the $\sigma$-$\omega$-$\rho$
model and $M_{\mathrm{max}}^H$ ($\sigma$-$\omega$-$\rho$-$\phi_0$) is the
maximum mass of the hyperon star in the $\sigma$-$\omega$-$\rho$-$\phi_0$ model.
But the radius does not follow any common trend across the parameter sets.
The compactness (M/R) also follows a general trend, i.e., $(M/R)^N$ ($\sigma$-$\omega$-$\rho$) $>$ $(M/R)^H$ ($\sigma$-$\omega$-$\rho$-$\phi_0$) $>$ $(M/R)^H$ ($\sigma$-$\omega$-$\rho$), for all the parameter sets. The data show that by adding
the $\phi_0$-meson the compactness of the hyperon star increases; this is
mainly due to the increase of the maximum mass with the inclusion of the $\phi_0$-meson. The central
density (${\cal E}_c$) at which the maximum mass occurs also follows a
common pattern for all the parameter sets, i.e., ${\cal E}_c^N$ ($\sigma$-$\omega$-$\rho$) $<$ ${\cal E}_c^H$ ($\sigma$-$\omega$-$\rho$-$\phi_0$) $<$ ${\cal E}_c^H$ ($\sigma$-$\omega$-$\rho$).
\vspace{1em}
\begin{table*}[h]
\hspace{3.0 cm}
\centering
\caption{Maximum mass (M), corresponding radius at maximum mass (R), compactness (M/R), and central density (${\cal E}_c$) for the neutron and hyperon stars are given with various parameter sets. }
\renewcommand{\tabcolsep}{0.08 cm}
\renewcommand{\arraystretch}{1.}
{\begin{tabular}{|c| c| c| c| c| c| c| c| c| c| c| c| c| c| c|}
\hline
&\multicolumn{4}{|c|}{Neutron star}&\multicolumn{4}{|c|}{Hyperon star
with $\phi_0$}
&\multicolumn{4}{|c|}{Hyperon star without $\phi_0$}\\
\hline
parameter &M &R & M/R & ${\cal E}_c\times 10^{15}$
& M &R & M/R &${\cal E}_c\times 10^{15}$ &M &R & M/R
&${\cal E}_c\times 10^{15}$ \\
sets & ($M_\odot$) &(km)& ($M_\odot$/km) & ( g cm$^{-3}$)
& ($M_\odot$)&(km)& ($M_\odot$/km) & ( g cm$^{-3}$)
& ($M_\odot$)&(km)& ($M_\odot$/km) & (g cm$^{-3}$) \\
\hline
\multicolumn{13}{|c|}{GROUP I}\\
\hline
G2 & 1.938 & 11.126 &0.174&2.317 & 1.576 &9.622&0.163&3.387&1.299&9.054&0.135&3.921 \\
\hline
G1& 2.162 & 12.244 &0.176 & 1.871&1.881&11.712 &0.160&2.139&1.816&12.048&0.150&1.960 \\
\hline
\multicolumn{13}{|c|}{GROUP II}\\
\hline
FSU& 1.722 & 10.654 & 0.161& 2.495 & 1.419 & 9.280&0.152& 3.743&1.100&8.788& 0.125& 3.921 \\
\hline
FSU2& 2.072 & 12.036 & 0.172& 1.960 & 1.779 & 11.399&0.155&2.139&1.377&10.812&0.127&2.674 \\
\hline
IFSU& 1.898 & 12.612 & 0.150& 1.960& 1.793 & 13.450&0.133&1.604&1.750&13.890 & 0.125& 1.426\\
\hline
IFSU*& 1.985 & 11.386 & 0.174 &1.529 & 1.763&10.90&0.161&2.317&1.570&10.820&
0.145&2.317 \\
\hline
SINPA&2.001 &11.350 &0.176&2.139 &1.750 & 10.652 &0.164&2.495 &1.525& 10.334 &0.147&2.674 \\
\hline
SINPB& 1.994 &11.468 &0.173&2.139 &1.719&10.536&0.163& 2.674 & 1.404& 10.258&0.136&2.674 \\
\hline
\multicolumn{13}{|c|}{GROUP III}\\
\hline
TM1&2.176& 12.236 & 0.177&1.871& 1.966&11.986 & 0.164& 1.960& 1.798& 12.228
& 0.1470& 1.782 \\
\hline
TM2& 2.622 & 16.508 & 0.158& 1.069 &2.512 & 17.062 &0.147& 0.944& 2.479&17.358& 0.142& 0.929 \\
\hline
PK1&2.489 & 14.042 & 0.177 &1.604 &2.275& 13.688& 0.166&1.782& 2.128& 13.904&
0.153&1.782 \\
\hline
\multicolumn{13}{|c|}{GROUP IV}\\
\hline
NL3& 2.774 & 13.154 & 0.210&1.604&2.633 & 13.012& 0.193& 1.604& 2.529& 13.012&0.194& 1.6044 \\
\hline
NL3*& 2.760 & 13.102& 0.210&1.604 & 2.605 & 12.938&0.201 & 1.604&2.500& 12.930& 0.193& 1.604 \\
\hline
NL1& 2.844&13.630 & 0.208&1.426 & 2.653 &13.740&0.193&1.604&2.506&13.190&0.189&1.604 \\
\hline
GM1& 2.370&12.012& 0.197& 1.960& 2.280 & 12.14 &0.187&1.871&2.230&12.21&0.182&1.77 \\
\hline
GL85& 2.168& 12.092&0.220&1.960 &2.122& 12.242&0.173& 1.871&2.106&12.223&0.172&1.871\\
\hline
GL97&2.003 &10.790&0.185&2.495&1.919 &10.832&0.177&2.495&1.881&10.894&0.172&2.495 \\
\hline
NL3-II& 2.774 &13.146 &0.211&1.604 &2.594 & 12.94 &0.200&1.604&2.474&12.742&0.194&1.782 \\
\hline
NL-RA1& 2.783 &13.420 &0.207&1.426 &2.631&13.050&0.201 & 1.604&2.516&13.030&
0.193&1.604 \\
\hline
\multicolumn{13}{|c|}{GROUP V}\\
\hline
HS& 2.974 &14.176&0.209&1.2478 &2.853&13.848 & 0.206&1.4261&2.770&13.860&0.199&1.426 \\
\hline
L1& 2.744 & 13.004 &0.211& 1.604&2.056 & 12.176 &0.168 &1.871&2.00&12.372&0.161 &1.786 \\
\hline
L3& 3.186 & 15.224 &0.209 &1.069&2.088&12.374&0.168&1.782&1.692&12.294&0.137
& 1.069 \\
\hline
\end{tabular}
\label{tab4}}
\end{table*}
\section{Summary and Conclusions}\label{conc}
In summary, I study the properties of hyperon stars with various
parameter sets of the RMF model. The predictive capacity of the various
parameter sets to reproduce the canonical mass-radius relationship
is discussed with hyperonic degrees of freedom.
Out of 22 parameter sets, only a few, like FSU2, PK1, GL85, GL97,
SINPA, SINPB, NL3, NL3*, IFSU*, and G2, are able to reproduce the radius of the canonical star (1.4 $M_\odot$) in the range 9.7 km to 13.9 km. But the radius of the 1.4 $M_\odot$ star depends
strongly on the hyperon-meson couplings. The radius of a canonical
star increases monotonically with the hyperon-meson interaction strength in spite of a fixed
hyperon potential depth. The radius of a canonical star changes by 13\% with
a small change of 0.2 in the hyperon-meson coupling constants. This shows that not only the depth of the hyperon potential but also the range of the hyperon-meson
coupling constants is important. More hypernuclear data are required to fix the range of the hyperon-hyperon interaction. As the $\phi_0$ is a
vector meson, it gives a
repulsive interaction among the hyperons and makes the EOS comparatively stiff.
The stiff EOS increases the maximum mass of the hyperon star. The
compactness of the hyperon star increases with the inclusion of the
$\phi_0$-meson. The $\phi_0$-meson pushes the threshold density of
hyperon production to higher densities.
\section{Acknowledgement}
All the calculations were performed at the Institute of Theoretical Physics,
Chinese Academy of Sciences. This work has been supported by the National
Key R\&D Program of China (2018YFA0404402), the NSF of China
(11525524, 11621131001, 11647601, 11747601, and 11711540016),
the CAS Key research Program (QYZDB-SSWSYS013 and XDPB09), and
the IAEA CRP "F41033".
NGC~6791 is a metal-rich and old Galactic open cluster ([Fe/H] = 0.30-0.40, $\tau \sim7$ Gyr) that exhibits two prominent overdensities on the horizontal branch (HB). Approximately 45 stars occupy the red clump (RC) region \citep{bu12}, which is reproduced by standard stellar evolution codes modeling metal-rich clusters. However, 12 cluster stars - with membership confirmed only for some of them - are significantly hotter than their RC counterparts, which is in conflict with such classic modeling. Those stars are designated extreme HB stars (EHB) when associated with star clusters, and hot subdwarf B and potentially O-stars when belonging to the field (sdB/sdO). Their effective temperature T$_{\rm{eff}}$ and gravity span T$_{\rm{eff}}$ = 25,000$-$45,000 K and $\log{g}$ = 4.5$-$6.2 \citep{li94}, respectively. Presumably, these stars are surrounded by a thin hydrogen envelope ($\sim 0.01\ M_{\odot}$). EHB stars are present in a number of Galactic globular clusters \citep{mbd08}. However, first, the NGC 6791 HB is much different from any globular cluster HB, as amply discussed in \citet{li94}; second, the combination of mass, age, and metallicity makes NGC 6791 a unique system, with no overlap with Galactic globular clusters.
The mechanism and stellar evolutionary path that give rise to EHB stars in NGC 6791 have been an active source of debate, particularly since the discovery of a bimodal HB distribution in NGC 6791 \citep{ku92}. One proposal involves invoking extreme mass loss that is tied to the cluster's high metallicity \citep{dc96,yo00,ka07}, and whereby a \citet{re75} stellar wind mass-loss parameter as large as $\eta\sim1.2$ is adopted. Conversely, RC stars exhibit typical masses of 1.03$\pm$0.03 M$_{\odot}$ and lose 0.09$\pm$0.07 M$_{\odot}$ while ascending the red giant branch (RGB) \citep{mi12}, which implies a mass-loss compatible with a significantly smaller Reimers parameter of 0.1$ \leq \eta\ \leq $0.3. Direct observations confirming a sizeable mass-loss rate \citep[e.g., $\sim 10^{-9}$ M$_{\odot}$/yr,][]{yo00} remain outstanding. Asteroseismological studies support a marginal mass-loss rate \citep{mi12}, while direct Spitzer observations did not reveal circumstellar dust production that would accompany enhanced mass-loss during the RGB phase \citep{va08}. Furthermore, there is a lack of consensus on the details of (fine tuned) mass-loss required to yield the T$_{\rm{eff}}$ and envelope size of EHB stars. Lastly, current models reproduce nearly all evolutionary phases present in the CMD of NGC~6791 without anomalous mass-loss \citep{cc95,bu12}.
An alternative mechanism reiterated by \citet{li94} and \citet{ca96} is that EHB stars could emerge from type B or C binary systems, whereby their envelope would be largely removed during a common-envelope (CE) phase \citep[see also][]{me76,ha02,br08}. Complex models were not initially readily available to provide a robust evaluation of the hypothesis, and moreover, there was little evidence for binarity among the sample. Subsequent observations and models in concert suggest that NGC~6791 exhibits a high binary percentage of $\sim$50$\%$ \citep{be08,tw11}, and among the EHB class three systems have been confirmed: B4 \citep{mo03,pa11}, B7 and B8 \citep{mo03,va13}. Indeed, it has been noted that a high-fraction of sdB stars might belong to binary systems \citep[][and references therein]{gr01,ma04}.
In this study, it is demonstrated that an updated prescription of the \citet{bd03} binary evolutionary code successfully predicts the observed and temporal properties of EHB stars in NGC~6791.
\section{Results} \label{sec:Results}
In the following analysis EHBs are thought to arise from binary evolution, which provides a natural mechanism for depleting the hydrogen-rich outer layer of the star without an ad-hoc or simplified prescription of mass-loss. Essentially, mass transfer to the companion is unstable and thus a CE encompasses the stars; subsequently, as the two stellar nuclei approach each other, the envelope expands owing to heating and is lost \citep[][see also the discussion in \citealt{ma04}]{pa76}.
The binary evolution is modelled via an updated version of the \citet{bd03} code, who developed a Henyey-type algorithm to compute stellar evolution in close binary systems, based on a modification of the scheme presented by \citet{kwh67} to solve the set of differential equations of stellar evolution together with the mass transfer rate. This approach was subsequently modified to ameliorate transporting extremely steep chemical profiles outwards (corresponding to stars just prior to undergoing the helium flash). Convection is treated using the canonical mixing length theory with $\alpha_{mlt}= 2.0$, and semi-convection was introduced following \citet{la85} with $\alpha_{sc}= 0.1$.
It is known that diffusion slightly affects horizontal branch evolution \citep{2012MNRAS.427.1245R} and it is certainly necessary to account for surface abundances of EHB stars \citep{2007ApJ...670.1178M}. Here, because of the exploratory nature of this paper, diffusion processes were ignored and will be addressed elsewhere.
Donor stars that evolve to EHB conditions should have initial masses marginally larger than that of the cluster turn-off, as EHB stars are undergoing core helium burning, which is an evolutionary phase appreciably shorter than core hydrogen burning. Binary systems consisting of similar mass stars are considered, whereby one star is 1.3~$M_{\odot}$, above the turn-off ($M_{to} \approx 1.15 \; M_{\odot}$), the companion is slightly below it, and the system features a sufficiently lengthy initial orbital period. The stars are modelled with a metallicity of $Z= 0.04$. Moreover, EHBs should stem from stars that reached the red giant branch in the recent past. Consequently, binaries are considered whereby the primaries fill their Roche lobes when they have extended convective envelopes. Such conditions result in systems that undergo a CE stage in which the primary loses the bulk of its hydrogen rich envelope, while the companion keeps its initial mass and the orbital period falls off appreciably. EHB stars are the objects that evolve after emerging from the CE phase.
Our main interest is not the CE phase but the emerging objects. So, the CE stage is mimicked by assuming a strong mass-loss rate until detachment \citep{it93}. This leaves the deep chemical composition profile essentially unaltered, as expected since the CE phase lasts a short time. The binary pair is assumed to undergo the CE phase when the helium core reaches mass values of 0.3480, 0.3694, and 0.4067~$M_{\odot}$. All of them ignite helium well after the CE phase. Larger helium core masses ignite helium before reaching EHB conditions, and delineate an evolutionary path that is unimportant for the present discussion.
Each model was evolved until reaching a radius of detachment ($R_{d}$) of either 7.5 or 1~$R_{\odot}$. For $R_{d}= 7.5\; R_{\odot}$, the total masses corresponding to each helium core at the end of the CE phase were 0.35048, 0.37367, and 0.41344~$M_{\odot}$, whereas for $R_{d}= 1\; R_{\odot}$ the results were slightly smaller, namely 0.34863, 0.37084, and 0.40973~$M_{\odot}$. The differences correspond to the varying thickness of the outermost hydrogen layer. After the CE phase, the stars are evolved at constant mass, and the computations are terminated at an age of $\tau \sim 9$~Gyr, which is an upper limit for the age of NGC~6791.
The evolutionary tracks of the two most massive models for each $R_{d}$ value are presented in Figure~\ref{fig:HR}. As noted above, two radii were assumed after the emergence from the CE phase. The larger $R_{d}$ value implies a thicker hydrogen rich layer, and thus a lower T$_{\rm{eff}}$ during most of the evolution. At post-CE stages, the star evolves blue-ward and ignites helium off-center owing to strong neutrino emission. The evolutionary track subsequently bends downward almost at constant radius. Thereafter the star depletes the helium core and then progressively the bottom of the helium rich layers, following a cyclical-like trend. The stars exhibit EHB conditions during that stage (notice the blue squares in Figure~\ref{fig:HR}, which represent the EHB stars in NGC~6791). When helium burning becomes weaker, the star contracts, again evolving blue-ward and igniting the outermost hydrogen layers, which gives rise to a few thermonuclear flashes. These flashes burn enough hydrogen to cause the star to finally evolve to the white dwarf stage. Lower mass objects undergo a larger number of cycles during helium burning and hydrogen flashes because nuclear ignition episodes are weaker.
The evolution of stars that emerge from the CE phase with $R_{d}= 7.5\; R_{\odot}$ is shown in Figure~\ref{fig:GTeff75}, together with data corresponding to the EHB stars B4-B7. The model successfully produces T$_{\rm{eff}}$ and surface gravities that match the observations, largely because helium ignition halts the contraction of the star at the right conditions. As expected, stars that emerge from the CE stage featuring $R_{d}= 1\; R_{\odot}$ exhibit a larger surface gravity, since they are more compact (Figure~\ref{fig:GTeff1}).
It can be noticed from Figure~\ref{fig:HR} that the tracks pass several times across the T$_{\rm{eff}}$ interval corresponding to EHBs ($(\Delta T_{\rm{eff}})_{\rm{EHB}}$); see Section~\ref{sec:intro}. Most of the time they spend at $(\Delta T_{\rm{eff}})_{\rm{EHB}}$ corresponds to helium burning dominated cycles; the time spent there during thermonuclear flashes and on the final white dwarf cooling track is much shorter. So, the time the modelled stars spend at $(\Delta T_{\rm{eff}})_{\rm{EHB}}$ is essentially that during which they resemble EHBs. This time is crucial, since the longer it is, the easier it is to find them as EHBs.
Figure~\ref{fig:tTeff75} conveys the temporal evolution as a function of T$_{\rm{eff}}$ for the case of $R_{d}= 7.5\; R_{\odot}$. Temperature intervals indicated by the observations presented in \citet{li94} are likewise included. Remarkably, the modelled stars can be detected as EHBs for several hundred million years. The same is true for models featuring $R_{d}= 1\; R_{\odot}$ (see Figure~\ref{fig:tTeff1}).
\section{Discussion} \label{sec:disc}
The resulting orbital periods of such systems can be estimated via Equation~3 in \citet{iv13}:
\begin{equation}
\frac{G M_1 M_{1,env}}{\lambda R_1}= \alpha_{CE} \bigg(
- \frac{G M_1 M_2}{2 a_i}
+ \frac{G M_{1,c} M_2}{2 a_f}
\bigg)
\end{equation}
where $G$ is the gravitational constant, and $M_1$, $M_{1,env}$, $M_{1,c}$ are the total, envelope, and core masses of the donor star, respectively. $M_2$ is the companion mass, $a_i$ and $a_f$ are the initial and final semi-axes, $\alpha_{CE}$ is the CE efficiency, and $\lambda$ accounts for the density profile of the donor star. The semi-axis at the onset of mass transfer, $a_i$, is computed via the relation between the orbital semi-axis and the equivalent radius of the Roche lobe \citep{eg83}. The final orbital period $P_f$ follows, and is an (increasing) function of the parameter $\xi= \lambda\; \alpha_{CE} / 2$. Here, $\xi= 0.10$ and $M_2= 1\; M_{\odot}$ are adopted as representative values, together with the models corresponding to the case of $R_d= 1\; R_{\odot}$.
If Roche lobe overflow occurs when the donor star develops a helium core of $0.3486\; M_{\odot}$ and exhibits a radius of 69~$R_{\odot}$, then $a_i= 172.1\; R_{\odot}$, $a_f= 1.876\; R_{\odot}$ and $P_f=0.254$~days. If overflow occurs when the helium core is $0.4067\; M_{\odot}$ and features a radius of 137~$R_{\odot}$, then $a_i= 342\; R_{\odot}$, $a_f= 4.61\; R_{\odot}$ and $P_f=0.962$~days.
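As a cross-check, the two numerical examples above can be reproduced with a short script. This is a sketch under stated assumptions: the donor is taken to retain its initial mass of 1.3~$M_{\odot}$ at the onset of Roche lobe overflow, the \citet{eg83} fitting formula gives the Roche-lobe radius, and the final period follows from Kepler's third law applied to the post-CE masses $M_{1,c}$ and $M_2$.

```python
import numpy as np

G_MSUN = 1.32712e20   # G * M_sun  [m^3 s^-2]
R_SUN = 6.957e8       # solar radius [m]

def roche_lobe_fraction(q):
    """Eggleton (1983) fit for R_L/a of the star whose mass ratio is q."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

def post_ce_orbit(m1, mcore, r1, m2=1.0, xi=0.10):
    """Energy-balance estimate of the post-CE orbit.

    Masses in M_sun, radii in R_sun; xi = lambda * alpha_CE / 2.
    Returns (a_i, a_f) in R_sun and the final period P_f in days."""
    a_i = r1 / roche_lobe_fraction(m1 / m2)   # donor fills its Roche lobe
    m_env = m1 - mcore
    # Rearranging the energy-balance equation with xi = lambda*alpha_CE/2:
    #   mcore / a_f = m1*m_env / (xi*r1*m2) + m1 / a_i
    a_f = mcore / (m1 * m_env / (xi * r1 * m2) + m1 / a_i)
    p_f = 2.0 * np.pi * np.sqrt((a_f * R_SUN) ** 3 / (G_MSUN * (mcore + m2)))
    return a_i, a_f, p_f / 86400.0

for mcore, r1 in [(0.3486, 69.0), (0.4067, 137.0)]:
    a_i, a_f, p_f = post_ce_orbit(1.3, mcore, r1)
    print(f"a_i = {a_i:5.1f} R_sun, a_f = {a_f:5.3f} R_sun, P_f = {p_f:5.3f} d")
```

The script recovers the values quoted above to within about 1\%; the small residuals presumably reflect the slightly reduced donor mass at overflow in the detailed models.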
The estimated periods fall in the range of observations \citep[][and references therein]{gr01,ma04}, and indeed, EHB B4 displays an orbital period of $P=0.4$~days \citep{pa11}.
\section{Conclusions} \label{sec:concl}
In this study it is advocated that binary evolution is the source of the EHB population within NGC~6791,
in full similarity with field subdwarfs \citep{ha02}.
An updated form of the \citet{bd03} code is used to demonstrate that EHBs can emerge from the post-CE evolution of binary stars with masses slightly above the cluster's turn-off ($M_{to} \approx 1.15 \; M_{\odot}$). The numerical model employed yields synthetic stars that match the observational and temporal properties of NGC~6791's EHB members. The binary mechanism is not the only means for stars to evolve to EHB conditions, since isolated stars with heavy mass loss might succeed. However, the evolutionary path explored here is preferred since it does not require ad-hoc, anomalous, and observationally unconfirmed mass-loss rates, and given that NGC~6791 and its EHB stars exhibit a high rate of binarity.
One may wonder whether our results can be extended to other stellar systems. Unfortunately, no other open cluster is known to harbour EHB stars, which might be interpreted as a consequence of their being far less massive than NGC 6791, even if they host a comparable fraction of binary stars. This stresses once again the uniqueness of NGC 6791 among open clusters in the Milky Way. On the other hand, EHB stars are more common in globular clusters, but they do not share the same properties as the NGC 6791 EHB population \citep{li94}. First, in NGC 6791 EHBs are not centrally concentrated \citep{bu12}, while in globulars they are \citep{li94}. Second, in globulars they span a much wider range in colours (hence temperature).
This was historically interpreted as evidence for a wide range of envelope sizes, hence for differential mass loss during the RGB ascent.
Nowadays the segmented EHB in globulars is mostly interpreted as evidence of multiple stellar generations,
each segment with a different degree of He enhancement \citep{Ma14}. Other authors consider rapid rotation \citep{Ta15} as well. These scenarios are difficult to invoke for NGC 6791, since there is no accepted evidence of multiple stellar populations in this cluster (see \citet{gei12} and \citet{bra14}).
\acknowledgments
G.C. deeply thanks La Plata Observatory for financial support during a visit where this project was started. The authors deeply thank Daniel Majaess for reading and commenting on the manuscript.
\vspace{5mm}
\section{AdS/CFT Results for $\sN = 4$ Plasma} \label{sec:sym}
We now turn to the calculation of spatial correlators
in a maximally supersymmetric strongly
coupled plasma using gauge-gravity
duality~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998zw}.
We will find that the thermal correlators in the $\sN = 4$
plasma are identical for the operators $F^2$ and $F\tilde{F}$,
as each of these operators is dual to a simple massless scalar field.
It is interesting that the two correlators also coincide at weak
coupling, where they are given by a two-gluon exchange diagram,
and therefore coincide with the pure Yang-Mills result (\ref{eq:free}).
These
correlators have been previously studied in momentum space
in \cite{Hartnoll:2005ju}. See
also~\cite{Kovtun:2006pf, Teaney:2006nc} for discussion of
finite temperature stress tensor and R-current correlators
in momentum space.
An outline of the computation is:
\begin{enumerate}
\item Letting $\sO$ denote either $F^2$ or $F\tilde{F}$,
we note that the field theory operator $\sO$ is dual to
a massless bulk scalar field $\phi$. For $F^2$ this field
$\phi$ is exactly the type IIB dilaton, whereas for
$F\tilde{F}$ it is the Ramond-Ramond axion $C_0$.
\item We then compute the spectral density $\rho(\omega, k)$
for the operator $\sO$ using finite-temperature AdS/CFT.
This involves numerically solving the bulk equations of
motion for $\phi$ in a black brane geometry.
\item Finally, we Fourier transform this spectral density
to obtain the Euclidean correlator in position space.
\end{enumerate}
Each of these steps is explained in more detail below.
Throughout we will expand fields on each constant-radius
slice in Fourier space,
$\phi \sim \phi(r) e^{-i\omega t + i k z}$.
We take the spatial momentum to be in the $z$ direction.
With an eye on numerical evaluation,
we will often work with dimensionless momenta
and positions, which we denote with an overbar:
\begin{equation}
\bar{\omega} = \frac{\omega}{2\pi T} \qquad \bar{x} = 2\pi T x
\end{equation}
The relevant black brane metric for $\sN = 4$ SYM at
finite temperature on $\mathbb{R}^{3,1}$ can be written
\begin{equation}
ds^2 = (\pi R T)^2 r^2\left[-\left(1-\frac{1}{r^4}\right)dt^2
+ d\vec{x}_3^2\right] + \frac{1}{1-\frac{1}{r^4}}
\frac{dr^2 R^2}{r^2}, \label{bhmetric}
\end{equation}
where $R$ is the radius of the bulk AdS space and $T$
the temperature of the black brane, with the horizon
at $r = 1$ and the AdS boundary at $r \to \infty$.
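As a quick consistency check, one can verify that $T$ is indeed the Hawking temperature of the metric~(\ref{bhmetric}): setting $r = 1+\epsilon$ and introducing the proper radial distance $\rho = R\sqrt{\epsilon}$, the Euclidean $(t,r)$ section near the horizon becomes
\begin{equation}
ds^2_{(t_E,\,\rho)} \simeq d\rho^2 + (2\pi T)^2 \rho^2\, dt_E^2\,,
\end{equation}
which is a smooth plane in polar coordinates precisely when the Euclidean time $t_E$ has period $1/T$.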
\subsection{Flow Equation and Numerical Evaluation}
We now let $\sO$ denote either $F^2$ or $F\tilde{F}$.
In both cases the relevant bulk action for the field
dual to $\sO$ is simply that of a massless scalar
\begin{equation}
S = -\frac{1}{2\alpha}\int d^5 x\sqrt{-g}(\nabla\phi)^2.
\end{equation}
For these operators supersymmetry guarantees that the
vacuum two-point function is independent of the
coupling~\cite{Hartnoll:2005ju}, and thus the normalization $\alpha$
can be conveniently determined by demanding that this
correlator as computed from gravity agrees with the
free-field expressions (\ref{eq:ththPT}). We find for both
\begin{equation}
\frac{1}{2\alpha_{F^2}} = \frac{1}{2\alpha_{F\tilde{F}}}
= \frac{N^2}{\pi^2 R^3}
\end{equation}
The spectral density $\rho$ is proportional to the
imaginary part of the finite temperature retarded
correlator, $\rho = -\frac{1}{\pi}\mathop{\rm Im}(G_R)$.
An extensive literature exists on the evaluation
of this quantity from
AdS/CFT~\cite{Son:2002sd, vanRees:2009rw,
Skenderis:2008dg, Skenderis:2008dh}. We will use the
flow formalism developed in \cite{Iqbal:2008by},
which we briefly review here: consider the function
$\chi(r,k)$, defined as
\begin{equation}
\chi(r,k) \equiv \frac{\Pi(r,k)}{i\om\phi}
\qquad \Pi(r,k) = -\frac{1}{\alpha}\sqrt{-g}g^{rr}\partial_r\phi.
\end{equation}
Here $\Pi(r,k)$ is the momentum conjugate to
the bulk field $\phi(r,k)$. The bulk equation
of motion for $\phi$ implies that the $\chi(r,k)$
obeys (on any metric) the first-order flow equation
\begin{equation}
\partial_r \chi = i\omega \sqrt{\frac{g_{rr}}{g_{tt}}}
\left[\frac{\chi^2}{\Sigma_\phi} - \Sigma_\phi
\left(1 - \frac{k^2 g^{zz}}{\omega^2 g^{tt}}\right)\right]
\qquad \Sigma_\phi = \frac{1}{\alpha}\sqrt{\frac{-g}{g_{rr}g_{tt}}}
\end{equation}
Furthermore it follows from real-time AdS/CFT~\cite{Iqbal:2009fd}
that the retarded correlator $G_R$ in the dual field theory
is obtained from the boundary value of $\chi(r,k)$:
\begin{equation}
G_R(k) = -\lim_{r\to\infty}i\omega\chi(r,k) \qquad \to
\qquad \mathrm{Im}[G_R(k)] = -\lim_{r\to\infty}\omega
\mathrm{Re}[\chi(r,k)].
\end{equation}
The initial conditions at the horizon $r = 1$ are
fixed by the infalling wave condition to be
$\chi(r = 1) = \Sigma_\phi(r = 1)$.
We now plug in the metric (\ref{bhmetric}) and
define a dimensionless function $\tilde{\chi}$ by $\tilde{\chi}
= \frac{\alpha\chi}{(\pi R T)^3}$. We obtain the flow equation
\begin{equation}
\partial_r\tilde{\chi} =
\frac{2 i \bar{\omega}}{r^2\left(1 - \frac{1}{r^4}\right)}
\left[{\tilde{\chi}^2 \over r^3} - r^3
\left[1-\frac{\bar{k}^2}{\bar{\omega}^2}
\left(1-\frac{1}{r^4}\right)\right]\right]. \label{floweqn}
\end{equation}
This equation must now be integrated from
$\tilde{\chi}(r = 1)=1$ to the AdS boundary at
$r = \infty$, where it determines the AdS/CFT response.
Some technical details on the numerical integration
are given in Appendix \ref{app:numerics}.
\subsection{Fourier Transform}
To obtain a Euclidean correlator from the spectral
density, we use the identity~\cite{Kapusta:2006pm}:
\begin{equation}
G_E(0;\tau,x) = \int_0^\infty d\omega \int
\frac{d^3 \vec{k}}{(2\pi)^3} \;\rho(\omega,\vec{k})
\frac{\cosh\left(\omega
\left(\tau - \frac{\beta}{2}\right)\right)}
{\sinh(\frac{\omega\beta}{2})}e^{i\vec{k}\cdot\vec{x}}
\end{equation}
Assuming rotational symmetry in the spatial directions
(i.e. $\rho(\omega, \vec{k}) = \rho(\omega, |\vec{k}|)$)
to perform the angular integral and switching to
dimensionless momenta, we obtain
\begin{equation}
G_E(0;\bar{\tau},|\bar{x}|) = 8\pi^2 T^4
\int_0^{\infty}d\bar{\omega}\int_0^{\infty}
d|\bar{k}|\;\rho(\bar{\omega},\bar{k}) \sin(|\bar{k}||\bar{x}|)
\frac{\bar{k}}{\bar{x}}\frac{\cosh\left(\bar{\omega}\left(\bar{\tau}
- \pi\right)\right)}{\sinh(\bar{\omega}\pi)}
\end{equation}
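The double integral above lends itself to a simple grid evaluation. The sketch below mirrors the discretization used later in the text (uniform steps of 0.1 in $\bar{\omega}$ and $\bar{k}$, integration range 0 to 20) and returns the correlator in units of $T^4$; the spectral density passed in is a placeholder standing in for the numerically computed $\rho$.

```python
import numpy as np

def euclidean_correlator(rho, xbar, taubar, wmax=20.0, kmax=20.0, step=0.1):
    """Riemann-sum evaluation of the double integral over the dimensionless
    frequency wbar and momentum kbar; xbar = 2*pi*T*r, taubar = 2*pi*T*tau,
    so the inverse temperature corresponds to taubar = 2*pi.
    Returns the correlator in units of T^4."""
    w = np.arange(step, wmax + step / 2, step)   # start off wbar = 0
    k = np.arange(step, kmax + step / 2, step)
    W, K = np.meshgrid(w, k, indexing="ij")
    kernel = np.cosh(W * (taubar - np.pi)) / np.sinh(W * np.pi)
    integrand = rho(W, K) * np.sin(K * xbar) * K / xbar * kernel
    return 8.0 * np.pi**2 * np.sum(integrand) * step**2

# Placeholder spectral density, odd in wbar, standing in for the AdS/CFT result.
rho_toy = lambda W, K: W * np.exp(-W - K)
print(euclidean_correlator(rho_toy, xbar=3.0, taubar=1.0))
```

A useful sanity check is the reflection symmetry of the thermal kernel about $\bar{\tau} = \pi$, which the discretized transform inherits exactly.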
We note at this point that we are primarily interested
in equal time correlators, i.e. those for which $\tau$
is $0$ in the equation above.
However, at small time separations the nonzero value
of $\tau$ acts like a UV cutoff on high-frequency modes,
suppressing them as $e^{-\omega\tau}$. To achieve arbitrarily
small $\tau$ we would need to know $\rho$ at arbitrarily high
$\omega$, whereas numerically we are necessarily limited to finite
$\omega$\footnote{Note this is an advantage of using
the real-time formalism described here, as there would be
no such exponential suppression of high frequency modes
if we were to compute the position space correlator by
summing the Euclidean momentum space correlator over
Matsubara modes.}.
Since the position-space, equal-time correlator is finite,
one could repeat the calculation with
several $\tau$ values and extrapolate to $\tau=0$.
Here we however restrict ourselves to using a small, finite
`regulator' $\tau\ll r$. Since our goal is to compare the
correlators to those computed on the lattice, where
the region of very small $r$ is in any case
problematic due to discretization errors,
this approach will prove sufficient.
As a check on the numerical Fourier transforms themselves,
we compute the corresponding Fourier transform
in the free theory at finite
temperature starting from the analytic expression for
the spectral density Eq.~(\ref{eq:sfelem}); in this case an
analytic expression also exists directly in position space,
Eq.~(\ref{eq:free}), providing a non-trivial
check on the accuracy of the Fourier transform. In both
cases we subtract the zero-temperature contribution,
which, as mentioned above, is independent of the coupling.
The step sizes in $\bar\omega$ and $\bar k$ are both 0.1,
and the chosen range of integration is 0 to 20.
The result of this numerical integration is shown
in Fig.~\ref{fig:justone}, where we have fixed
$\tau = \frac{1}{2\pi T}$.
With $x$ ranging over values much larger than $\tau$,
the Figure shows the departure of the Fourier-transformed
correlator from the direct evaluation of the $x$-space
expression~(\ref{eq:free}). The achieved accuracy is sufficient
for our purposes. The largest discrepancy occurs at short distances,
where the sensitivity to the discretization step and the
finite range of $\omega$ is greatest.
To illustrate the dependence on the regulator $\tau$,
we compare the correlator $C_{\sO\sO}(r,\tau,T) - C_{\sO\sO}(r,\tau,0) $
for $\tau=1/2\pi T$
with $G_{\sO\sO}(r,T)$ in the free case (Fig.~\ref{fig:tautest}).
We see that for $r> 5/2\pi T$, the correlators coincide to
the accuracy needed for our purposes.
\section{Details of Numerical Integration} \label{app:numerics}
We must numerically integrate the flow equation (\ref{floweqn})
\begin{equation}
\partial_r\tilde{\chi} = \frac{2 i \bar{\omega}}{r^2\left(1 - \frac{1}{r^4}\right)}\left[{\tilde{\chi}^2 \over r^3} - r^3\left[1-\frac{\bar{k}^2}{\bar{\omega}^2}\left(1-\frac{1}{r^4}\right)\right]\right]. \label{floweqn2}
\end{equation}
from the horizon at $r = 1$ up to the AdS boundary at $r \to \infty$, with the initial condition $\tilde{\chi}(r = 1) = 1$. In practice we integrate to $r = 20000$ and verify that further increasing the integration domain does not change the answer. Note that while $\mathop{\rm Im}(\chi)$ contains a divergence as $r \to \infty$, this is a standard UV divergence that contributes only a contact term, and can be removed by holographic renormalization. It will not concern us, as the real part of $\chi$ (and thus the imaginary part of $G_R$) has a finite limit as $r \to \infty$.
We cannot begin our integration at precisely $r = 1$ as the equation is singular there. We thus build a series expansion of $\tilde{\chi}$ about $r = 1$:
\begin{equation}
\tilde{\chi} = 1 + \tilde{\chi}_1 (r-1) + \tilde{\chi}_2(r-1)^2 + ...
\end{equation}
Plugging this into Eq.~(\ref{floweqn2}) we can determine the expansion coefficients $\tilde{\chi}_{n}$ up to any desired order. The expressions are lengthy but straightforward to obtain and so we do not present them here; however, we use the first three terms in this expansion to find the value of $\tilde{\chi}(r = 1+\delta)$ and use this as the initial condition to begin our integration at a small finite value of $\delta$ ($\delta = 0.01$ in practice).
Note also that if we keep $\bar{k}$ finite and take $\bar{\omega} \to 0$, the flow equation appears singular. However we know that at vanishing chemical potential the spectral density of a bosonic operator must be an odd function of $\bar{\omega}$, and thus vanishes as $\bar{\omega} \to 0$, although the precise point $\bar{\omega} = 0$ presents numerical difficulties. Thus our code simply sets $\mathop{\rm Im}(G_R(\bar{\omega} = 0)) = 0$ by hand.
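The procedure can be sketched in a few lines. This is a minimal illustration rather than the production code: only the first-order term of the horizon expansion is kept (the coefficient $\tilde{\chi}_1 = i\bar{\omega}\,(2\bar{k}^2/\bar{\omega}^2 - 3)/(1 - i\bar{\omega})$ used below follows from expanding Eq.~(\ref{floweqn2}) to first order about $r = 1$), and an off-the-shelf adaptive integrator stands in for any tuned scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chi_boundary(w, k, delta=1e-2, r_max=2.0e4):
    """Integrate the dimensionless flow equation from the horizon (r = 1)
    to the AdS boundary; w and k are the dimensionless frequency and
    momentum (omega-bar, k-bar).  Returns chi-tilde at r = r_max."""
    def rhs(r, y):
        chi = y[0] + 1j * y[1]          # evolve (Re, Im) as a real 2-vector
        f = 1.0 - 1.0 / r**4
        d = (2j * w / (r**2 * f)) * (chi**2 / r**3
                                     - r**3 * (1.0 - (k**2 / w**2) * f))
        return [d.real, d.imag]
    # Infalling initial condition, expanded to first order about the horizon
    # (the text uses three terms of the series; one suffices for a sketch).
    chi1 = 1j * w * (2.0 * k**2 / w**2 - 3.0) / (1.0 - 1j * w)
    chi0 = 1.0 + chi1 * delta
    sol = solve_ivp(rhs, (1.0 + delta, r_max), [chi0.real, chi0.imag],
                    rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] + 1j * sol.y[1, -1]

# Membrane-paradigm check: for k = 0 and omega -> 0, Re(chi) at the boundary
# approaches its horizon value, 1, while Im(chi) carries the divergent
# contact term discussed above.
print(chi_boundary(0.05, 0.0).real)
```

The spectral density then follows from $\rho \propto \bar{\omega}\,\mathop{\rm Re}\tilde{\chi}(r_{\rm max})$, evaluated on a grid of $(\bar{\omega},\bar{k})$.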
\section{Introduction}
Heavy ion collisions at RHIC have revealed properties of the
quark gluon plasma that had not been widely anticipated
(see~\cite{Muller:2008zzm} for an introduction).
The ability of the produced matter to flow with little dissipation
and to strongly quench energetic jets seemed to disfavor
a description of the matter in terms of weakly interacting
quarks and gluons. On the other hand,
the constituent quark number scaling
of the measured elliptic flow coefficient
(see for instance \cite{Adare:2006ti})
suggests that it is particles with the quantum numbers of quarks
that are flowing in the expanding fireball.
One of the central questions is thus whether
the quark-gluon plasma at temperatures within reach of
heavy-ion collisions is better described in
a weak coupling expansion or whether a radically different
computational scheme is more appropriate.
The answer to the question could depend on the quantity,
in which case it would be even more difficult
to form a mental picture of the plasma.
The strong elliptic flow and jet quenching observed
in heavy ion collisions point to very strong interactions
among the constituents of the plasma.
Indeed the quantities most sensitive
to interactions appear to be the dynamical ones,
such as the shear viscosity $\eta$ in units of the entropy density $s$,
which varies like $\alpha_s^{-2}$
between order unity and $+\infty$ with the coupling.
Such dynamic properties of the plasma remain a challenge
for lattice calculations (see~\cite{Meyer:2009jp} for a review).
In the shear channel the most accessible transport property
is
$\int_0^\Omega d\omega \rho(\omega)/\omega$ with $\Omega$ of order $T$,
where $\rho$ is the spectral density.
For a weakly coupled system obeying the $f$-sum rule~\cite{Teaney:2006nc},
this provides a measure of the mean square velocity $v_{\bf p}^2$
of the quasiparticles responsible
for the transverse transport of momentum.
A value much below unity would rule out the possibility of
these quasiparticles being light quarks or gluons.
Static quantities on the other hand, while often providing
a less clear-cut test of the importance of interactions,
are directly accessible in the Euclidean formulation of the theory.
The thermodynamic potentials, for instance,
have remained a challenge for perturbative
methods~\cite{Kajantie:2002wa,Hietanen:2008tv},
even though certain resummation schemes lead
to more stable predictions~\cite{Blaizot:2003iq,Blaizot:2003tw}.
The example of strongly coupled ${\cal N}=4 $ super-Yang-Mills (SYM) theory,
where the entropy density is only reduced by a factor 3/4
with respect to the non-interacting case~\cite{Gubser:1996de}, shows
that only a highly accurate agreement of the weak coupling
expansion and non-perturbative
lattice data can warrant the conclusion
that the plasma is dominated by weakly coupled
quark and gluon quasiparticles.
Other static quantities on the other hand appear to be
quite well described by weak coupling techniques.
A convincing example is the spatial string tension, for which
the dimensional reduction program works well~\cite{Laine:2005ai,Alanen:2009ej}.
As another example, the fluctuations of quark numbers~\cite{Cheng:2008zh}
appear to approach the Stefan-Boltzmann limit remarkably early.
Recently the expectation values of other twist-two operators
(besides the energy-momentum tensor)
have been proposed~\cite{Iancu:2009zz} as diagnostic tools for the
effective strength of interactions.
In this paper we calculate non-perturbatively spatial correlators of
two dimension-four operators, the trace anomaly $\theta(x)$
and the topological charge density $q(x)$ in the SU(3) gauge theory.
The range of distances covered by the calculation is
$\frac{1}{2T}<r<\frac{3}{2T}$.
These are also static quantities that are directly calculable
on the lattice. Furthermore, quite a lot is known about
these correlators in the QCD vacuum, going back to the original
QCD sum rules studies~\cite{Novikov:1979va,Novikov:1980uj}.
And thirdly, they are computable
in the large-$N_c$, strongly coupled ${\cal N}=4$ SYM theory
by AdS/CFT methods~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998zw}.
Thus we have the possibility to compare the lattice data to
two `caricatures' of the plasma, one being non-interacting gluons and
the other being a very strongly coupled non-Abelian plasma.
As we shall see, once the vacuum contribution has been subtracted
these two caricatures lead to qualitatively different
predictions for the correlators.
Parallel to the question of the weak- or strong-coupling nature
of the quark-gluon plasma, lies the question of how
similar non-Abelian relativistic plasmas are.
This constitutes a very interesting question in itself.
In addition, the possibility to compute real-time quantities
in strongly coupled theories amenable to AdS/CFT computations
and ``port'' them to QCD provides a strong phenomenological motivation.
The best known examples of this strategy are the shear viscosity
to entropy density ratio~\cite{Kovtun:2004de}
and the jet quenching parameter $\hat q$ calculations~\cite{Liu:2006ug}.
As evidence in favor of the strategy,
the authors of~\cite{Bak:2007fk} conclude that the overall agreement
of the screening spectra of QCD and the ${\cal N}=4$ SYM theory
is rather good, although the low-lying screening masses are overall
a factor 1.9 or so larger in the strongly coupled SYM theory.
They therefore suggest that the QCD plasma around $2T_c$ is
most similar to the ${\cal N}=4$ SYM plasma at an intermediate
value of the 't~Hooft coupling $\lambda$. Finding
the effective coupling which leads to the best match
(defined by a set of physical quantities) between the theories
therefore requires knowing the properties of the ${\cal N}=4$ SYM plasma
at intermediate values of the coupling, presumably
as hard a problem as determining those of the QCD plasma.
However, in the SYM theory one has the advantage of being
able to expand the observables in $\lambda$ and in $1/\lambda$,
opening the possibility to interpolate to
intermediate couplings~\cite{Bak:2007fk}.
The outline of this paper is as follows. In section (\ref{sec:defs}) we
define the relevant operators and their correlators,
and give the basic free-field theoretic predictions.
Section (\ref{sec:sym}) contains the AdS/CFT calculation of the
same correlators in the strongly coupled ${\cal N}=4$ SYM theory.
In section (\ref{sec:lat}) we describe the lattice calculation of these
correlators, including a new way to normalize the
topological charge density for on-shell correlation functions.
The results are compared to weak- and strong-coupling theoretical
predictions.
Section (\ref{sec:coupling}) discusses what values of the 't~Hooft coupling
best match the gluonic and the SYM plasma.
We finish with a summary of the lessons learnt and an outlook in
section (\ref{sec:disc}).
\section{Definitions and theoretical predictions\label{sec:defs}}
In this section and in the following, we use Euclidean conventions,
since the calculation of correlators will be performed on the lattice.
We consider the SU($N_c$) gauge theory without matter fields,
\begin{equation}
S_{\rm E} = \frac{-1}{2g^2}\int d^4x \,{\rm tr}\{F_{\mu\nu}(x)F_{\mu\nu}(x)\}\,.
\end{equation}
We focus on two operators in this paper.
The first is the (anomalous) trace of the energy-momentum tensor,
\begin{eqnarray}
\theta(x)\equiv T_{\mu\mu}(x)
&=& {\textstyle\frac{\beta(g)}{2g}} ~ F_{\rho\sigma}^a F_{\rho\sigma}^a\,,
\qquad
\beta(g) = -bg^3+\dots,\qquad b={\textstyle\frac{11N_c}{3(4\pi)^{2}}}\,.
\end{eqnarray}
The second operator is the topological charge density. It is defined as
\begin{equation}
q(x) = \frac{-1}{32\pi^2}\epsilon_{\mu\nu\rho\sigma}
{\rm tr}\{F_{\mu\nu}(x)F_{\rho\sigma}(x)\}
= \frac{g^2}{32\pi^2} F^a_{\mu\nu}(x) \tilde F^a_{\mu\nu}(x)
\end{equation}
where $F_{\mu\nu}=g\,F_{\mu\nu}^a t^a $, ${\rm tr}\{t^at^b \}=-\frac{1}{2}\delta_{ab}$
and $\tilde F^a_{\mu\nu}(x)\equiv
\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}F^a_{\rho\sigma}(x)$.
The normalization is chosen such that
the value of $Q=\int d^4x\, q(x)$ on a self-dual configuration is an integer.
For later use we also introduce the operator
\begin{equation}
\theta_{00}(x) = \frac{1}{4}F_{ij}^aF_{ij}^a-\frac{1}{2}F_{0i}^aF_{0i}^a\,.
\end{equation}
In the thermodynamic limit, $\<\theta_{00}\>_{T-0}=e+p$ while
$\<\theta\>_{T-0} = e-3p$, where $e$ and $p$ are the energy density
and pressure respectively and the subscript $T-0$ means that the difference
of the thermal expectation value and the vacuum expectation value is taken.
For ${\cal O}= \theta$ or $q$, we will consider the static connected
correlators at finite temperature $T\equiv 1/L_0$,
\begin{equation}
C_{\cal O O}(r,T) \equiv \<{\cal O}(0,{\bf r}) {\cal O}(0)\>_c
= \frac{1}{Z(L_0)} {\rm Tr}\left[e^{-L_0 H}\, \hat {\cal O}(0,{\bf r})\,\hat {\cal O}(0)\right]
- \left(\frac{1}{Z(L_0)} {\rm Tr}\left[e^{-L_0 H}\, \hat {\cal O}(0)\right]\right)^2\,.
\label{eq:statcor}
\end{equation}
Often, to emphasize the thermal effects on the correlator, we will subtract
the zero-temperature correlator,
\begin{equation}
G_{\cal O O}(r,T) \equiv C_{\cal O O}(r,T) - C_{\cal O O}(r,0) \,.
\end{equation}
If one expresses the traces of Eq.~(\ref{eq:statcor}) in a basis of
energy eigenstates, this subtraction has the effect of removing
the vacuum contribution.
\subsection{Short-distance behavior}
In this and the following subsection we review the available
perturbative results for the correlators (\ref{eq:statcor})
at short distances, as well as our knowledge of their long-distance behavior.
\subsubsection{Zero temperature}
The two-point functions of the trace anomaly and the
topological charge density are to leading order
\begin{equation}
(8\pi b\alpha_s)^{-2}\<\theta(x)\theta(0)\>_{\rm 1\,loop} =
-\left(\frac{2\pi}{\alpha_s}\right)^2\, \<q(x)\,q(0)\>_{\rm 1\,loop} =
\frac{3d_A}{\pi^4(x^2)^4}\,,
\label{eq:ththPT}
\end{equation}
where $d_A=N_c^2-1$ is the number of gluons.
The correlators were calculated to two-loop order in~\cite{Kataev:1981gr},
but we will not exploit that result.
\subsubsection{Finite temperature}
The two-point functions of the trace anomaly and the
topological charge density to leading order read~\cite{Meyer:2008dt}
\begin{equation}
(8\pi b\alpha_s)^{-2}\<\theta(x)\theta(0)\>_{\rm 1L} =
-{\textstyle\left(\frac{2\pi}{\alpha_s}\right)^2}\, \<q(x)\,q(0)\>_{\rm 1L} =
\frac{d_A}{\pi^4}\sum_{m,n\in {\bf Z}}
\left(4 \frac{(x_{[m]}\cdot x_{[n]})^2} {x_{[m]}^2x_{[n]}^2} -1\right)
\frac{1}{(x_{[m]}^2 \,x_{[n]}^2)^2 }\,,
\label{eq:free}
\end{equation}
where $x_{[n]} \equiv x+n L_0 \hat e_0$.
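The image sum in Eq.~(\ref{eq:free}) can be checked numerically at equal time, $x=(0,{\bf r})$: as $Tr\to0$ only the $m=n=0$ image survives and the $3/r^8$ behavior of Eq.~(\ref{eq:ththPT}) is recovered. A minimal sketch (the overall factor $d_A/\pi^4$ is stripped off, and the truncation $|m|,|n|\leq N$ is an implementation choice, not part of the formula):

```python
# Dimensionless image sum of the free finite-T correlator at the
# equal-time point x = (0, r), with x_[n] = x + n L0 e0 and L0 = 1/T.
# The overall prefactor d_A/pi^4 is omitted.

def image_sum(r, T, N=100):
    """Sum_{m,n} [4 (x_m.x_n)^2/(x_m^2 x_n^2) - 1] / (x_m^2 x_n^2)^2,
    truncated at |m|, |n| <= N."""
    L0 = 1.0 / T
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            xm2 = (m * L0) ** 2 + r ** 2          # x_[m]^2
            xn2 = (n * L0) ** 2 + r ** 2          # x_[n]^2
            dot = m * n * L0 ** 2 + r ** 2        # x_[m] . x_[n]
            total += (4.0 * dot ** 2 / (xm2 * xn2) - 1.0) / (xm2 * xn2) ** 2
    return total

# T r -> 0: only the m = n = 0 image survives, giving 3/r^8.
print(image_sum(1.0, 0.05))   # close to 3.0
```

The thermal images enter with relative weight $(Tr)^4$ at small $Tr$, which is why the zero-temperature term dominates in this limit.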
Their operator-product expansion (OPE) reads~\cite{Novikov:1979va,Meyer:2008dt}
\begin{eqnarray}
(8\pi b\alpha_s)^{-2}\<\theta(x)\theta(0)\>
= -{\textstyle\left(\frac{2\pi}{\alpha_s}\right)^2}\, \<q(x)\,q(0)\>
+\dots
= \frac{3d_A}{\pi^4 r^8}
- \frac{1}{3\pi^2} \frac{\<\theta_{00}\>}{r^4}
- \frac{1}{2\pi^2} \frac{\<\theta\>}{r^4}
+\dots
\label{eq:OPEth}
\end{eqnarray}
and the dots refer to ${\rm O}({\textstyle\frac{1}{r^2}})$ terms.
In fact, the OPE of the two correlation functions appearing in Eq.~(\ref{eq:OPEth}),
treating the Wilson coefficients to leading order in perturbation theory,
differ only at O(${r^0}$) if one restricts the terms on the
right-hand side to operators whose vacuum expectation value
does not vanish~\cite{Novikov:1980uj}.
Since the $r^{-8}$ term cancels exactly when the difference of two temperatures
is taken, and since $\<\theta_{00}\>_{T-0}\geq0$ and $\<\theta\>_{T-0}\geq0$,
Eq.~(\ref{eq:OPEth}) implies that
at sufficiently short distances the gluon plasma always screens
these fluctuations more strongly than the vacuum of the theory.
\subsubsection{Spectral functions}
The free spectral functions for the trace anomaly and the
topological charge density are~\cite{Meyer:2008gt}
\begin{eqnarray}
(8\pi b\alpha_s)^{-2} \rho_{\theta,\theta}(\omega,q,T) &=&
- {\textstyle\left(\frac{2\pi}{\alpha_s}\right)^2}\, \rho_{qq}(\omega,q,T)
= \frac{d_A}{(8\pi)^2} ~(\omega^2-q^2)^2 ~~ {\cal I}([1], \omega, q, T)\,,
\nonumber \\
{\cal I}([1], \omega, q, T) &=&
-\frac{\omega}{q} \theta(q-\omega)
\,+\, \frac{2T}{q}\,\log\frac{\sinh[(\omega+q)/4T]}{\sinh[|\omega-q|/4T]}\,.
\label{eq:sfelem}
\end{eqnarray}
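As a consistency check, the thermal factor ${\cal I}([1],\omega,q,T)$ reduces to the vacuum step function $\theta(\omega-q)$ as $T\to0$. A minimal numerical sketch (a moderate $T$ is used because for very small $T$ the $\sinh$ arguments overflow in double precision):

```python
import math

def I_free(w, q, T):
    """The bracket I([1], w, q, T) of the free spectral function."""
    step = -(w / q) if q > w else 0.0
    log_ratio = math.log(math.sinh((w + q) / (4 * T))
                         / math.sinh(abs(w - q) / (4 * T)))
    return step + (2 * T / q) * log_ratio

# T -> 0 recovers the vacuum step function theta(w - q):
print(I_free(2.0, 1.0, 0.01))  # ~ 1.0 (above the light cone)
print(I_free(0.5, 1.0, 0.01))  # ~ 0.0 (below the light cone)
```

For large arguments $\log\sinh x \approx x - \log 2$, so the $\log 2$ cancels in the ratio and the bracket tends to $[(\omega+q)-|\omega-q|]/2q$ minus the step term, i.e. $\theta(\omega-q)$.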
The spectral functions are related to the static correlators by
Fourier transformation,
\begin{equation}
\<{\cal O}(0,{\bf r})\, {\cal O}(0)\>
= \lim_{\epsilon\to0}\int \frac{d^3{\bf q}}{(2\pi)^3}e^{i{\bf q\cdot r}}
\int_0^\infty d\omega e^{-\epsilon \omega}
\frac{\rho(\omega,{\bf q},T)}{\tanh(\omega/2T)}\,.
\end{equation}
The parameter $\epsilon$ serves to regulate the integral over
frequencies at large $\omega$, which would otherwise diverge.
We will come back to this regulator in section (\ref{sec:lat}).
\subsection{Long-distance behavior}
At long distances, the vacuum correlators of $\theta$ and $q$ are dominated
by the scalar and pseudoscalar glueballs respectively.
The most recent lattice results for their masses are
$r_0M_{0^{++}} = 3.96(5)$~\cite{Meyer:2008tr}
or 4.16(11)(4)~\cite{Chen:2005mg}
and $r_0M_{0^{-+}} = 5.93(16)$~\cite{Meyer:2004jc}
or 6.25(6)(6)~\cite{Chen:2005mg},
where $r_0\simeq 0.5\,{\rm fm}$ is the Sommer reference scale~\cite{Sommer:1993ce}.
The couplings of these states to the local operators $\theta$ and $q$,
$s\equiv\<{\rm vac}|\theta|0^{++}\>$ and $p\equiv\<{\rm vac}|q|0^{-+}\>$,
have also been calculated recently~\cite{Chen:2005mg,Meyer:2008tr}.
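For orientation, these dimensionless combinations translate into physical units via $r_0\simeq0.5\,$fm and $\hbar c\simeq197.327\,$MeV$\,$fm; the conversion below is only indicative, since $r_0$ itself is uncertain at the few-percent level:

```python
HBARC_MEV_FM = 197.327   # hbar*c in MeV fm
R0_FM = 0.5              # Sommer scale, approximate

def mass_mev(r0_M, r0_fm=R0_FM):
    """Convert a mass quoted as r0*M into MeV."""
    return r0_M * HBARC_MEV_FM / r0_fm

print(round(mass_mev(3.96)))  # scalar glueball, ~1563 MeV
print(round(mass_mev(5.93)))  # pseudoscalar glueball, ~2340 MeV
```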
The screening masses, which determine the asymptotic
exponential fall-off of the finite-$T$ correlators, are also known to some extent.
The operators $\theta$ and $q$ belong to irreducible representations (irreps)
of the SO(3) rotation group, $\times$ parity and $\times$ charge conjugation.
At finite temperature, the symmetry group of a `$z$-slice' (for states
at rest, $p_x=p_y=0$ and given $\omega_n$, the discrete momentum in the
direction of length $1/T$) is reduced to R$\times$SO(2)$\times P_2\times C$, where
R is the Euclidean-time reflection and
$P_2$ is the reflection inside an $xy$ plane ($(x,y)\to(x,-y)$).
In general, an operator forming an irrep of the zero-temperature theory
gets decomposed into several irreps of this reduced symmetry group.
In our case, $\theta$ and $q$ simply become the scalar and pseudoscalar
representations of the $z$-slice symmetry group. The former is invariant
under all the symmetries of the $z$-slice; the latter is odd under
the R and $P_2$ operations and invariant under the rest.
On the lattice, these irreps are generically further
reduced to crystallographic irreps. In our case they
are labelled $A_1^{++}$ and $A_1^{-+}$~\cite{Datta:1998eb}.
A recent result for the mass gap in the scalar sector is $m_{A_1^{++}}(T)/T= 2.62(16)$,
2.83(16) and 2.88(10) respectively at $1.24T_c$,
$1.65T_c$ and $2.20T_c$~\cite{Meyer:2008dt}.
Datta and Gupta find $m_{A_1^{-+}}(T)/T= 6.32(15)$ both at about $1.5T_c$
and $2.0T_c$~\cite{Datta:1998eb}.
So the asymptotic screening is much stronger in the pseudoscalar sector
than in the scalar sector. This ordering persists at all temperatures
according to a recent next-to-leading order perturbative analysis,
in spite of a change in the nature of the lightest scalar state~\cite{Laine:2009dh}.
\input{adscft.tex}
\section{Correlators on the lattice\label{sec:lat}}
In this section we describe the lattice setup and the
numerical results obtained by Monte-Carlo simulations.
We employ the Wilson action~\cite{Wilson:1974sk},
\begin{equation}
S_{\rm g} = \frac{1}{g_0^2} \sum_{x,\mu\neq\nu} {\rm tr}\{1-P_{\mu\nu}(x)\}\,,
\label{eq:Sg}
\end{equation}
where the `plaquette' $P_{\mu\nu}$ is the product of four link
variables $U_\mu(x)$ around an elementary cell in the $(\mu,\nu)$
plane.
As a simulation algorithm, we use the standard combination of heatbath and
over-relaxation~\cite{Creutz:1980zw,Cabibbo:1982zn,Kennedy:1985nu,Fabricius:1984wp}
sweeps for the update in a ratio increasing from
3 to 5 as the lattice spacing is decreased.
The overall number of sweeps between measurements was typically
between 4 and 12.
\subsection{Discretization and normalization of the operators}
A choice has to be made for the discretization of the
operators $\theta(x)$ and $q(x)$.
Here we use the specific discretization
\begin{eqnarray}
\theta_L(x) & \equiv &
-\chi_s(g_0) \frac{dg_0^{-2}}{d\log a}
~\frac{1}{2}\sum_{\mu,\nu} \mathop{\rm Re}{\rm tr}
\Big[\widehat F_{\mu\nu}(x)\widehat F_{\mu\nu}(x)\Big]\,,
\label{eq:Th}
\\
q_L(x) &\equiv &
\frac{-Z_q(g_0)}{32\pi^2}\epsilon_{\mu\nu\rho\sigma}
{\rm tr}\Big[\widehat F_{\mu\nu}(x)\widehat F_{\rho\sigma}(x)\Big]\,,
\label{eq:Q}
\end{eqnarray}
where the (antihermitian)
`clover' discretization of the field-strength tensor
$\widehat F_{\mu\nu}(x)$ is defined in terms of the
link variables in~\cite{Luscher:1996sc}.
In this work we feed in `HYP smeared' link variables~\cite{Hasenfratz:2001hp}
into the definition of $\widehat F_{\mu\nu}(x)$.
The name stems from the fact that the
elementary link variable is replaced by an average
of Wilson lines which remain in the adjacent elementary
hypercubes. We kept the original parameters~\cite{Hasenfratz:2001hp}
and used the projection onto the SU(3) group
of the Wilson-line average described in~\cite{DellaMorte:2005yc}.
At the quantum level,
the normalization of these lattice operators
differs from the naive normalization.
Indeed, even though the anomalous dimension of these
operators vanishes, a finite renormalization of the operators
survives, which has to be compensated for
in order to ensure that their on-shell
matrix elements approach their continuum limit with O($a^2$)
corrections.
In Eq.~(\ref{eq:Th}), $\frac{dg_0^{-2}}{d\log a}$ is the lattice
beta-function which describes by how much the lattice spacing shrinks
when the bare coupling is reduced. While asymptotically it
is governed by the first two universal beta-function coefficients,
we work in the region $g_0^2\sim 1$ and therefore employ
a non-perturbatively determined beta-function.
Specifically we use the quantity $r_0/a$
(the Sommer reference scale) as a function of
$g_0^2$, as computed in~\cite{Necco:2001xg} and
parametrized in the appendix of~\cite{Durr:2006ky}.
By taking one derivative of the parametrization, we obtain
$\frac{dg_0^{-2}}{d\log a}$.
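The step from the parametrization of $r_0/a$ to the lattice beta-function can be sketched as follows; for illustration we use the Necco--Sommer parametrization of $\ln(a/r_0)$ (valid for $5.7\leq\beta\leq6.92$) rather than the parametrization of~\cite{Durr:2006ky} employed in the text, but the procedure is the same:

```python
# Illustrative evaluation of the lattice beta-function dg0^{-2}/dlog(a)
# from a polynomial parametrization of ln(a/r0) in beta = 6/g0^2.
# Coefficients: Necco-Sommer fit, valid for 5.7 <= beta <= 6.92.

def ln_a_over_r0(beta):
    x = beta - 6.0
    return -1.6804 - 1.7331 * x + 0.7849 * x**2 - 0.4428 * x**3

def dg0m2_dlog_a(beta):
    """dg0^{-2}/dlog(a) = (1/6) dbeta/dlog(a), using g0^{-2} = beta/6."""
    x = beta - 6.0
    dln_a_dbeta = -1.7331 + 2 * 0.7849 * x - 3 * 0.4428 * x**2
    return (1.0 / 6.0) / dln_a_dbeta

print(dg0m2_dlog_a(6.0))   # ~ -0.096: the bare coupling grows with a
```

The derivative is negative throughout the fit range, as it must be: decreasing the bare coupling (increasing $g_0^{-2}$) shrinks the lattice spacing.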
The trace anomaly in our chosen discretization
still requires the additional normalization factor $\chi_s(g_0)$.
The latter is fixed by
calibrating against the `canonical' discretization of $\theta(x)$.
This discretization $\theta_L'$
arises from differentiating the lattice action (\ref{eq:Sg})
with respect to the bare coupling,
\begin{equation}
a^4\theta_L'(x)
= \frac{dg_0^{-2}}{d\log a}
\sum_{\mu,\nu} \mathop{\rm Re}{\rm tr} \Big[ 1 - P_{\mu\nu}(x) \Big].
\end{equation}
By requiring that $e-3p$ be independent of the discretization,
i.e. $\<\theta_L(x)\>_T-\<\theta_L(x)\>_0$
be equal to $\<\theta_L'(x)\>_T-\<\theta_L'(x)\>_0$,
we determine $\chi_s(g_0)$.
Here we choose $N_{\tau}\equiv L_0/a=6$ to do this matching.
The results are well parametrized by
\begin{equation}
\chi_s(g_0) = \frac{0.3257 }{ 1 - 0.7659 g_0^2},\quad
5.90\leq \beta\leq 6.72.
\end{equation}
The error varies from 0.004 at $\beta=6.0$ to 0.008 at $\beta=6.72$.
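Evaluating this fit at the couplings used in this work is simple arithmetic:

```python
def chi_s(beta):
    """Normalization factor chi_s(g0) from the fit, 5.90 <= beta <= 6.72."""
    g0sq = 6.0 / beta
    return 0.3257 / (1.0 - 0.7659 * g0sq)

for beta in (5.903, 6.018, 6.2822, 6.408, 6.72):
    print(beta, round(chi_s(beta), 4))   # decreases monotonically with beta
```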
We have not investigated systematically the dependence
of $\chi_s$ on the value of $N_{\tau}$ used for the matching.
This uncertainty is not included in the above error estimates;
quantifying it would require repeating the matching calculation at $N_{\tau}=8$ or $12$.
For our purposes in this work, this uncertainty will not prevent
us from drawing conclusions when comparing the trace anomaly correlator
to theoretical predictions.
\subsubsection{Normalization of the topological charge density}
The procedure we use to normalize $q_L(x)$ is, to our knowledge, new,
and therefore we describe it in some detail.
We again exploit the fact that there exists a discretization
$q_L'(x)$ for which the normalization is known exactly.
This is the definition based on the overlap operator~\cite{Neuberger:1997fp}
which satisfies the Ginsparg-Wilson relation~\cite{Ginsparg:1981bj}.
Indeed, $Q_L'=\sum_x q_L'(x)$ is then guaranteed to be
an integer on every gauge field configuration,
because it counts the difference of the number of right-handed
and left-handed zero-modes of the Dirac operator~\cite{Hasenfratz:1998ri}.
We normalize our discretization of $q(x)$ by matching
the value of its vacuum two-point function
with the same correlator obtained with the overlap-fermion-based
discretization of $q(x)$. The latter correlator was obtained
in~\cite{Horvath:2005cv}, and we use the numerical data of that
article to fix the normalization of our discretization.
More precisely, the quantity matched is $r^8\,C_{qq}(r,0)$;
this removes the largest part of the uncertainty in the lattice spacing
in physical units.
Specifically, we choose the matching distance to be $\bar r/r_0\simeq0.68$,
and use the data of~\cite{Horvath:2005cv} on the finest lattice
(${\cal E}_3$ ensemble at $a=0.082\,{\rm fm}$). The matching distance $\bar r$
was chosen such that on this lattice, $\bar r /a = 3\sqrt{2}$.
The lattice spacing in~\cite{Horvath:2005cv} is specified by the string tension,
and we have used the factor $r_0\sqrt\sigma = 1.1611(95)$
(based on the compilation~\cite{Meyer:2008tr})
to convert physical distances to units of $r_0$.
Given our normalization strategy, it is
convenient to split the normalization factor of the operator
into two parts:
\begin{equation}
Z_q(\beta) = Z_q(\beta_{\rm ref}) \cdot \chi(\beta)\,,
\end{equation}
where we choose $\beta_{\rm ref}=6.2822$.
The advantage of this separation is that only $ Z_q(\beta_{\rm ref})$
depends on the overlap-based correlator.
The latter has been calculated to much lower statistics
than our computationally cheaper correlator. We find
\begin{equation}
Z_q(\beta_{\rm ref}) = 1.55(8)\,,
\end{equation}
where the uncertainty comes almost entirely from the overlap data.
We then obtain, by matching the correlator at other values of $\beta$,
\begin{eqnarray}
\chi(5.903)&=& 1.25(4)\,, \qquad r_{\rm match}/r_0=0.94\,,\\
\chi(6.018)&=& 1.125(24)\,, \qquad r_{\rm match}/r_0=0.77\,,\\
\chi(6.200)&=& 1.030(15)\,, \qquad r_{\rm match}/r_0=0.68\,, \\
\chi(6.408)&=& 0.959(20)\,, \qquad r_{\rm match}/r_0= 0.60\,,\\
\chi(6.720)&=& 0.958(50)\,, \qquad r_{\rm match}/r_0= 0.60\,.
\end{eqnarray}
When necessary,
we use linear interpolations in $r$ of the function $r^8\,C_{qq}(r,0)$
to match different lattice spacings. These $\chi$ factors are well
parametrized as
\begin{equation}
\chi(\beta)= \frac{0.8112-0.7388 g_0^2}{1-0.9364 g_0^2},\qquad \beta=6/g_0^2\,.
\end{equation}
The absolute error is about 0.041 at $\beta=5.9$, 0.014 at $\beta=6.1$,
0.010 at $\beta=6.4$ and 0.026 at $\beta=6.7$.
\subsection{The vacuum correlators ($T=0$)}
The vacuum correlators of the trace anomaly and the
topological charge density (multiplied by $r^8$)
are displayed in Fig.~\ref{fig:edotb-h}.
The overall normalization uncertainty coming from
$\chi_s(g_0)$ and
$Z_q(\beta)$ are not included in the error bars on the picture;
they are given above. Only data points with $r/a\geq 4$
are displayed.
We have measured the correlator along the lattice axes (1,0,0),
(1,1,0) and (1,1,1) and averaged the results over all equivalent directions.
Some of the raw data is given in Tables~(\ref{tab:T=0}) and (\ref{tab:T=0b}).
The plots also show the perturbative prediction~(\ref{eq:ththPT})
at small distances. The three lines give an
indication of the renormalization scale uncertainty: they correspond to
$\mu=\frac{\pi}{2r},\frac{\pi}{r},\frac{2\pi}{r}$. For that purpose we have used the
result $\Lambda^{(N_{\rm f}=0)}_{\overline{\rm MS}}r_0=0.602(48)$~\cite{Capitani:1998mq}.
Based on these figures, it is very plausible that our data agrees
with perturbation theory at short distances, but data at smaller
lattice spacing is needed for a more stringent test.
Such vacuum correlators are of course interesting in their own right.
In particular, they can be used to test models for low-energy QCD, such
as the instanton-liquid model~\cite{Schafer:1995pz}
or the more recent holographic models of hadrons~\cite{Erlich:2005qh}.
See~\cite{Schafer:1994fd,Schafer:2007qy} for model calculations of gluonic
vacuum correlators. A detailed comparison of the latter with lattice data will be carried out
elsewhere; for the time being we simply note that around $r=0.4$fm the lattice data points
lie on a convex curve in the scalar case, and a concave curve in the pseudoscalar case.
This observation is qualitatively consistent with the instanton-model calculations
of~\cite{Schafer:1994fd}, where it is interpreted as an attraction
in the scalar channel and a repulsion in the pseudoscalar channel.
\subsection{Finite-temperature correlators\label{sec:FTC}}
The finite-temperature correlators of the trace anomaly
minus their zero-temperature counterpart are displayed in
Fig.~(\ref{fig:action2}). Partial results for this quantity
were already published in~\cite{Meyer:2008dt}.
A sample of the raw data is presented in Tables (\ref{tab:6.2822})
and (\ref{tab:6.408}).
Starting from the higher temperatures (bottom panel),
we note that the lattice correlators gradually approach the
free-theory prediction (Eq.~\ref{eq:free}) as the temperature is raised, as one expects
on the basis of asymptotic freedom. In order to display Eq.~(\ref{eq:free}) on
the figure, we have set
the renormalization scale to $\mu=\pi(T+\frac{1}{r})$ and used
$\Lambda^{(N_{\rm f}=0)}_{\overline{\rm MS}}r_0=0.602(48)$~\cite{Capitani:1998mq}.
However, at the temperatures where simulation data are available,
the subtracted correlator is negative at all separations $r$
-- unlike the free correlator.
This means that the plasma screens the fluctuations
of the operator $\theta$ more than the vacuum does.
We have checked for discretization errors by calculating
$G_{\theta\theta}(r,1.24T_c)$ at several lattice spacings.
This means that $N_{\tau}=(aT)^{-1}$ is varied and $\beta$ tuned
so that the temperature is kept fixed.
The result is shown in Fig.~(\ref{fig:edotb-1p24}), top panel.
To a good approximation, the data fall on a single curve.
It is not presently our intention to carry out a systematic
continuum extrapolation of $G_{\theta\theta}(r,T)$ in $1/N_{\tau}^2$.
Rather we want to show convincing evidence that
the conclusions we draw from finite lattice spacing data
are not affected by discretization errors.
In the figures, we have not included the statistical uncertainty
on the non-perturbative normalization factors.
However the latter cancel in the ratio
\begin{equation}
\frac{G_{\theta\theta}(r,T)}{ (e-3p)^2}\,,
\end{equation}
which is displayed in Fig.~\ref{fig:Fsq}, top panel.
We have multiplied this quantity by $d_A$
so as to give it a finite limit when $N_c\to\infty$
and by $(Tr)^4$, so that the expected short-distance behavior
is $\sim\alpha_s^2(\mu)$, where $\mu={\rm O}(1/r)$.
The graph shows that $G_{\theta\theta}(r,T)$ falls off
like $1/r^4$, as expected from the OPE, for
$3<2\pi Tr<5$. Beyond this interval, it falls off to zero more rapidly.
The on-shell correlation functions of $\theta$ are renormalization-group
invariant. However, if we want to compare the result with the ${\cal N}=4$ SYM
correlator of $F^2$ described in section (\ref{sec:sym}),
we have to divide out the factor of $\beta(g)/2g$ that multiplies
the Yang-Mills operator $\theta$. Since the beta-function is scheme-dependent,
this means that the renormalized $F^2$ correlator in the Yang-Mills theory
is scheme-dependent, too. We choose the three-loop $\overline{\rm MS}$
scheme, and evaluate the coupling $\alpha_s$ at the scale
$\mu(r,T)=\frac{3}{8}\pi (T+1/r)$. A few remarks may be useful to
motivate this choice. At $r=1/T$, this corresponds to
$\mu=\frac{3\pi}{4r}$; it
makes the one-loop prediction for the $\theta$ correlator approximately
go through the lattice data at $T=0$, see Fig.~(\ref{fig:edotb-h}).
At short distances, we expect $r$ to provide the harder scale
and therefore the appropriate $\mu$ to be dominated by $r$.
For $r>1/T$, we expect the momenta of the two exchanged gluons
to be of order $\pi T$. The chosen expression for $\mu$
is then a simple interpolation between these two regimes.
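The two limits of this interpolating scale are easily made explicit:

```python
import math

def mu(r, T):
    """Renormalization scale mu(r, T) = (3/8) pi (T + 1/r) used in the text."""
    return 0.375 * math.pi * (T + 1.0 / r)

T = 1.0
# at r = 1/T the scale equals 3*pi/(4 r) = 3*pi*T/4:
print(mu(1.0 / T, T) / (0.75 * math.pi * T))   # ~ 1.0
# at short distance the 1/r term dominates, mu ~ 3*pi/(8 r):
print(mu(1e-3, T) * 1e-3 / (0.375 * math.pi))  # ~ 1.0
```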
We thus obtain the lattice data for the $F^2$ correlator
in the $\overline{\rm MS}$ scheme, see the bottom panel of Fig.~\ref{fig:Fsq}.
We are then in a position to carry out a parameter-free comparison with the
one-loop result~Eq.~(\ref{eq:ththPT}) and the AdS/CFT result.
In the range of temperatures $1.2<T/T_c<1.9$,
the lattice correlators are in semi-quantitative agreement with the
corresponding $F^2$ correlator calculated in the strongly
coupled ${\cal N}=4$ SYM theory.
The lattice data are negative at all separations $1/2T<r<1/T$, and this is in
contrast with the free-field theory result, which is positive
in that range. The data at $T>2T_c$ however does suggest that the
non-perturbative correlator gradually moves towards the
one-loop result as the temperature increases, as expected
from asymptotic freedom.
Coming back to Fig.~\ref{fig:action2} (top panel),
the thermal fluctuations become stronger
as the temperature is lowered toward $T_c$,
to the point where the subtracted correlator is positive
over a wide range of distances $r$.
Unsurprisingly this fact is accounted for neither by the
weak coupling predictions,
nor by the conformal ${\cal N}=4$ SYM result.
The correlation between the fluctuations of
$\theta$ is strongest at $1.01T_c$, and drops
again as one moves away from the transition below $T_c$.
The interpretation of the data is helped by the fact that
the overall fluctuations of $\theta$ are
related to thermodynamic properties via the
sum rule~\cite{Ellis:1998kj,Meyer:2007fc}
\begin{equation}
\int d^4x \,\<\theta(x) \theta(0)\>_{{\rm conn},T-0}
= T^5 \frac{\partial}{\partial T}\frac{e-3p}{T^4}.
\label{eq:sr}
\end{equation}
Since we know that $(e-3p)/T^4$ rises very steeply
between $T_c$ and $1.1T_c$~\cite{Boyd:1996bx}, Eq.~(\ref{eq:sr})
indicates that the fluctuations of $\int d^4x\,\theta$ are strongest
in that range of temperatures.
Since the Wilson coefficients in the OPE are negative,
there is a negative contribution to the LHS of Eq.~(\ref{eq:sr})
from the short-distance part of the correlator.
Therefore there has to be an enhancement in the $\theta$ two-point function
at intermediate or long distances in order to
account for the positive sign of the RHS
(note however that here we restrict ourselves to equal-time correlators).
On the other hand, above $1.13T_c$, where the RHS of Eq.~(\ref{eq:sr})
is negative, there is no necessity for $G_{\theta\theta}$
to be positive at any non-vanishing separation.
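The sign structure implied by the sum rule~(\ref{eq:sr}) can be illustrated with a toy profile of $(e-3p)/T^4$ that peaks just above $T_c$; the Gaussian shape below is purely hypothetical and is not the data of~\cite{Boyd:1996bx}:

```python
import math

def delta_over_T4(T):
    """Toy model of (e-3p)/T^4, peaked just above T_c = 1 (hypothetical shape)."""
    return math.exp(-((T - 1.1) / 0.1) ** 2)

def sum_rule_rhs(T, h=1e-5):
    """RHS of the sum rule: T^5 d/dT [(e-3p)/T^4], central finite difference."""
    return T**5 * (delta_over_T4(T + h) - delta_over_T4(T - h)) / (2 * h)

print(sum_rule_rhs(1.02) > 0)  # below the peak: enhanced theta fluctuations
print(sum_rule_rhs(1.3) < 0)   # above the peak: integrated correlator negative
```

Any profile rising steeply to a maximum just above $T_c$ yields a positive right-hand side below the peak and a negative one above it, which is the pattern discussed in the text.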
Our data shows that indeed $G_{\theta\theta}$ at intermediate distances
$r\approx 1/T$ is negative for all available temperatures above $1.2T_c$.
This differs from the correlator $G_{ee}$
of the energy density~\cite{Meyer:2008dt},
which is positive at intermediate separations.
\subsubsection{Topological charge density correlator\label{sec:TCD}}
We now discuss the topological charge density correlator, starting
from the high temperature end.
An example of $G_{qq}$ at high temperatures is displayed in
Fig.~(\ref{fig:edot-6p408}). We see that qualitatively,
the correlator, in the interval where data is available,
resembles its free-theory counterpart:
$-G_{qq}$ is negative at short distances, as predicted
by the OPE, and then positive at intermediate distances.
On the basis of the pseudoscalar screening masses,
we expect $G_{qq}$ to approach zero from below,
and there is a hint at $3.30T_c$ that indeed it does.
If we now lower the temperature, as shown in Fig.~(\ref{fig:qq2}, bottom panel),
we see that the range of distances where $-G_{qq}$ is positive
grows, and that its maximum value also grows.
The OPE tells us that at sufficiently short distances,
$-G_{qq}$ must be negative. Below $1.6T_c$ its maximum is no longer
visible in the data; we conjecture that it is located at too short
a distance $r$ for us to resolve it in the lattice data
(at short distances, we are limited by discretization errors).
As we lower the temperature further (top panel of Fig.~(\ref{fig:qq2})),
$-G_{qq}(r,T)$ at fixed $r$ continues to grow.
It hits a maximum between $1.02$ and $1.06T_c$.
Thus similarly to the trace anomaly, the topological charge density
exhibits strong spatial correlations near $T_c$.
How strong they are is better illustrated by taking the ratio of the
finite-temperature to the zero-temperature data,
Fig.~(\ref{fig:edotb-ratio}).
Here one clearly sees that for $r$ of order $1/T$, the spatial
correlation of topological charge density fluctuations is about twice
as strong near $T_c$ as in the vacuum. A technical advantage
of this ratio is that the overall normalization cancels out, and
secondly that we expect a partial cancellation
of the discretization errors to take place.
To check that these large correlations are not a cutoff effect --
there is after all a large cancellation taking place
at short distance in the subtracted correlator --
we have repeated the $1.24T_c$ calculation at
a finer lattice spacing. The comparison is shown
in Fig.~(\ref{fig:edotb-1p24}). We see that the different data sets
fall on top of each other within errors in the interval
$0.4<Tr<1.2$. We thus conclude that the strong, finite-temperature
induced enhancement of the correlation is a true physical effect.
There are significant differences between the scalar
and the pseudoscalar channels in the lattice data at
distances $1/2T<r<1/T$.
This is unlike the strongly coupled ${\cal N}=4$ SYM theory,
where the $F^2$ and $F\tilde F$ correlators are identical.
It is also unlike the free field prediction. In the
OPE framework, this difference requires operators of dimension 6 or higher
to overwhelm the expectation value of the stress-energy tensor
in Eq.~(\ref{eq:OPEth})\footnote{Another possibility is that
the Wilson coefficient of $\theta$ on the RHS of the OPE
changes sign when $r$ is not asymptotically small. This
would signal a breakdown of the perturbative series for the
Wilson coefficients.}. This would in turn imply the breakdown
of the OPE as an asymptotic expansion.
Our discovery of large spatial correlations
in the topological charge density fluctuations
in the vicinity of $T_c$ is qualitatively in agreement
with the results of~\cite{Lucini:2004yh}.
The latter showed a strong suppression of the topological
susceptibility just above $T_c$, using a method based on the
semi-classical identification of topological charges.
This suppression is particularly dramatic at larger $N_c$ values,
but even for SU(3) it amounts to a factor of about 0.54(4).
This implies that $-\int d^4x \<q(x)q(0)\>_{T-0}\geq 0$,
and therefore there has to be a range of separations $x$
where $-\<q(x)q(0)\>_{T-0}$ is positive; this is what we are seeing
in the data. Note that the short-distance singularity
of $\<q(x)q(0)\>_{T-0}\propto\alpha_s^2/x^4$ gives a finite contribution
when integrated over space-time. This is in contrast
with the topological susceptibility itself,
$\chi_t\equiv \int d^4x \<q(x)q(0)\>_{0}$,
which has to be defined with care~\cite{Luscher:2004fu}
if it is to remain finite when the cutoff is removed.
\section{The effective coupling in the plasma\label{sec:coupling}}
We have found that the strongly coupled SYM theory
has an $F^2$ correlator similar to the pure Yang-Mills theory
in the deconfined phase below $2T_c$.
To summarize the procedure, we have calculated the $\theta$ correlator on the lattice,
which contains a factor $(\beta(g)/2g)^2$ relative to the $F^2$ correlator.
This factor makes it renormalization-group invariant
in the pure Yang-Mills theory.
We used the 3-loop $\overline{\rm MS}$ scheme
for the beta-function to convert the lattice $\theta$ correlator
to the $F^2$ correlator, and found semi-quantitative agreement
between the theories in a range of temperatures.
For $r=1/T$, the values for our chosen running coupling are
\begin{equation}
\alpha_s(T)= 0.33,~~ 0.30,~~ 0.27,~~ 0.25,~~ 0.23,~~ 0.19
\label{eq:alphas}
\end{equation}
for the six temperatures
displayed on the bottom panel of Fig.~(\ref{fig:action2}).
From Fig.~(\ref{fig:Fsq}), we see that at the last two temperatures,
the Yang-Mills $F^2$ correlator no longer agrees with the strongly
coupled SYM result. Therefore we conclude that for $\alpha_s$
smaller than about $0.25$ (i.e. $\lambda$ smaller than about 10),
one should not expect other properties of the Yang-Mills plasma
to coincide with those of the infinite-coupling SYM plasma.
In fact it is somewhat surprising that
for $0.25<\alpha_s<0.30$, where the function $\alpha_{\overline{\rm MS}}(\mu)$
shows a modest dependence on the order in perturbation theory,
the thermal correlator is so much more similar to the strongly coupled
SYM correlator than to the weakly coupled one.
This may be related to the fact that at finite temperature,
due to infrared effects, the perturbative expansion parameter
is $g$ rather than $\alpha_s$.
Thus at an energy scale where the vacuum polarization effects are
still well approximated by the perturbative expansion,
the thermal physics rather has a strong coupling character.
We now discuss whether one may
use the `empirically' observed similarity
to match the couplings of the Yang-Mills and SYM theories.
By `matching', we mean to find a way of
comparing the two theories in such a way that they share
as many properties as possible at a semi-quantitative
level~\footnote{This notion is similar to, but distinct from the
(more precisely defined) matching procedure used in effective field theories.}.
Since the 't~Hooft coupling is the unique parameter of the
SYM theory, this is the only parameter we need to fix
in the comparison.
To what extent several observables can be simultaneously
made similar provides a clue as to how universal the
properties of non-Abelian plasmas are.
The best way to match QCD with a different theory is presumably
to equate a renormalized quantity such as the Debye mass~\cite{Bak:2007fk}
across the two theories.
However this requires knowing the relation between the coupling and the Debye
mass on the SYM side. A technical obstacle to this program is
that the Debye mass is independent of the coupling
in the limit of large coupling. Other observables
typically lead to the same lack of sensitivity to $\lambda$.
One then needs to know $1/\lambda$ corrections to the
selected observable on the SYM side, which leads to
more involved calculations and raises questions
of convergence, etc.
A different way to match the two theories is to
define a running coupling based on a renormalized quantity,
such that the weak-coupling relation holds by definition
for all scales.
One then equates the couplings of the two theories.
For example, one can define a Yang-Mills effective coupling from the
Debye mass, $\lambda(T)\equiv 3\frac{m_D^2}{T^2}$
in the SU($N_c$) gauge theory, and use that value of $\lambda$
in the SYM theory. A priori, when the coupling
is large, its scheme dependence is strong.
For this reason, we expect that matching the observable itself
is the superior procedure.
Nevertheless the non-trivial agreement of the Yang-Mills
and the SYM $F^2$-correlators in a range of temperatures suggests
that simply using the values of $\lambda=12\pi\alpha_s=10\dots12$,
where the temperature-dependent
values of $\alpha_s$ are given in Eq.~(\ref{eq:alphas}),
is a reasonable choice of coupling constant to use on the SYM side.
This is a moderately large coupling constant; for instance,
the O($1/\lambda^{3/2}$) correction to the $\lambda=\infty$
shear viscosity to entropy density ratio $\eta/s$
is about $+50\%$ at this coupling~\cite{Buchel:2004di,Buchel:2008sh,Myers:2008yi}.
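The conversion between $\alpha_s$ and $\lambda$, and the size of the quoted correction to $\eta/s$, follow from simple arithmetic; for the correction we take the coefficient in the form $15\zeta(3)/\lambda^{3/2}$ as quoted in the literature cited above (an assumption of this sketch):

```python
import math

ZETA3 = 1.2020569  # Apery's constant, zeta(3)

def lam(alpha_s, Nc=3):
    """'t Hooft coupling lambda = g^2 Nc = 4 pi alpha_s Nc (= 12 pi alpha_s for Nc=3)."""
    return 4 * math.pi * alpha_s * Nc

def eta_over_s_correction(l):
    """Relative O(1/lambda^{3/2}) correction to eta/s = 1/(4 pi),
    taken here as 15 zeta(3) / lambda^{3/2} (assumed coefficient)."""
    return 15 * ZETA3 / l ** 1.5

for a in (0.25, 0.30):
    l = lam(a)
    # lambda ~ 9.4 -> correction ~ +62%; lambda ~ 11.3 -> ~ +47%
    print(round(l, 1), round(eta_over_s_correction(l), 2))
```

For $\lambda=10\dots12$ the correction indeed comes out in the vicinity of $+50\%$, consistent with the statement above.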
\section{Summary\label{sec:disc}}
We have found that the gluon plasma generically
screens scalar and pseudoscalar
fluctuations more than the vacuum does at
short distance $r\ll 1/T$ and at long distances $r\gg 1/T$.
Near $T_c$ however, there is a significant range of distances
of order $1/T$ over
which the spatial correlations are stronger in the plasma
than in the vacuum. We interpret this fact as there being
stronger fluctuations of wavelength O($1/T$) in the plasma
than in the vacuum. As one increases the temperature above
$T_c$, this effect disappears quickly in the scalar channel,
but extends to about $2T_c$ in the pseudoscalar case.
In the latter channel, the enhancement of these fluctuations
over those of the vacuum is about a factor two.
While the pseudoscalar and scalar channels are expected to
have similar correlation functions at very short distances
and they precisely agree in the SYM theory,
the two channels look rather different
at least up to $2T_c$, according to our lattice data.
The scalar correlator agrees well with the corresponding correlator
in the strongly coupled SYM theory in the range of temperatures $1.2<T/T_c<1.9$,
while the pseudoscalar correlator is notably different due to
the aforementioned strong fluctuations. These observations
constitute our main results. The scalar fluctuations of
wavelength $\sim 1/T$ are suppressed compared to the vacuum,
while the pseudoscalar fluctuations are significantly enhanced.
It would be interesting to see whether a next-to-leading order
perturbative calculation would agree significantly better with
the lattice data than the tree-level calculation does.
We note that studying the vacuum subtracted correlators
of gauge invariant operators at distances short compared to $1/T$
is morally equivalent, via the operator-product expansion,
to investigating the thermal expectation values of
higher-dimensional renormalized operators. Our study thus has goals in common
with the investigation of twist-two operator expectation values~\cite{Iancu:2009zz}.
The semi-quantitative agreement of the scalar correlators
between the pure Yang-Mills and the SYM theories, while
the pseudoscalar channel is markedly different, highlights
the fact that different plasmas can exhibit quite similar properties
in some channels while differing substantially in others.
In spite of having a reduced topological susceptibility~\cite{Lucini:2004yh},
the deconfined phase close to $T_c$ exhibits strong
correlations of $\vec E\cdot \vec B$ over distances
of order $1/T$, which are stronger than in the vacuum by
about a factor two. It would be interesting to see whether
models of QCD can account for this effect.
It would also be worth investigating how much this effect
depends on the weakness of the first-order deconfining phase transition,
and whether the effect persists at larger
values of $N_c$, where the transition is strongly first order~\cite{Lucini:2002ku,Lucini:2005vg}.
A plausible mechanism for the observed strong spatial correlations
is that fluctuations of $\vec E\cdot \vec B$ with a
coherence length of at least $1/T$ occur in the plasma.
Perhaps these fluctuations have been seen in~\cite{Lucini:2004yh},
where the topological lump size was found to be peaked
at $\rho\simeq1.7/T_c$. This large size led the authors
to conclude that this peak lies outside the range of applicability
of their semiclassical methods.
It is worth thinking about possible phenomenological
implications of these large-amplitude, long-wavelength fluctuations,
since the charge-separation
effects of a non-zero $\vec E\cdot \vec B$ field configuration
in the context of heavy ion collisions have recently received a lot of
attention~\cite{Kharzeev:2007jp,Kharzeev:2009mf,Voloshin:2009hr}.
\FIGURE[t]{
\vspace{0.65cm}
\centerline{\includegraphics[width=10.0 cm,angle=-90]{thth-T0.ps}}
\centerline{\includegraphics[width=10.0 cm,angle=-90]{edotb-horvathc.ps}}
\caption{Zero-temperature correlator of the trace anomaly $\theta(x)$ (top)
and of the topological charge density $q(x)$ (bottom).
The overall normalization in the latter case is fixed by the data of
Horvath et al.~\cite{Horvath:2005cv}. The dotted lines at small $r$
correspond to the one-loop result with choices of renormalization scale
(from top to bottom) $\mu=\frac{\pi}{2r},\frac{\pi}{r}$ and $\frac{2\pi}{r}$.}
\label{fig:edotb-h}
}
\noindent
\FIGURE[t]{
\vspace{0.65cm}
\centerline{\includegraphics[width=8.0 cm,angle=-90]{action.ps}}
\centerline{\includegraphics[width=8.0 cm,angle=-90]{action2.ps}}
\caption{Thermal part of the spatial correlator of the trace anomaly $\theta$,
at $N_{\tau}=6$ and several temperatures.
The curve in the lower panel is the one-loop result (Eq.~\ref{eq:ththPT})
with choice of renormalization scale described in section (\ref{sec:FTC}).}
\label{fig:action2}
}
\noindent
\FIGURE[t]{
\vspace{0.65cm}
\centerline{\includegraphics[width=8.0 cm,angle=-90]{act2-e3p-rT4.ps}}
\centerline{\includegraphics[width=8.0 cm,angle=-90]{Fsq.ps}}
\caption{Top: thermal part of the spatial correlator
of the trace anomaly $\theta$, normalized by $(e-3p)^2$
(which cancels the renormalization factor).
Bottom: comparison of the $F^2$ correlator in the 3-loop $\overline{\rm MS}$ scheme
to the one-loop result (upper curve) and the SYM correlator (lower curve).}
\label{fig:Fsq}
}
\noindent
\FIGURE[t]{
\vspace{0.65cm}
\centerline{\includegraphics[width=8.0 cm,angle=-90]{qq.ps}}
\centerline{\includegraphics[width=8.0 cm,angle=-90]{qq2.ps}}
\caption{Thermal part of the spatial correlator of the topological charge density $q$,
at $N_{\tau}=6$ and several temperatures.
The curve in the lower panel is the one-loop result (Eq.~\ref{eq:ththPT})
with the same choice of renormalization scale as in Fig.~(\ref{fig:action2}).
}
\label{fig:qq2}
}
\noindent
\FIGURE[t]{
\vspace{0.65cm}
\centerline{\includegraphics[width=9.0 cm,angle=-90]{act-1p24.ps}}
\centerline{\includegraphics[width=9.0 cm,angle=-90]{edotb-1p24.ps}}
\caption{Thermal part of the spatial correlator
of the trace anomaly (top) and topological charge density (bottom),
at $T=1.24T_c$. The curve in the lower panel is the one-loop result (Eq.~\ref{eq:ththPT})
with the same choice of renormalization scale as in Fig.~(\ref{fig:action2}).}
\label{fig:edotb-1p24}
}
\noindent
\FIGURE[t]{
\centerline{\includegraphics[width=9.0 cm,angle=-90]{edot-6p408.ps}}
\caption{Detail of the thermal part of the spatial correlator
of the topological charge density $q(x)$ at $\beta=6.408$,
compared to the one-loop result (Eq.~\ref{eq:ththPT})
with the same choice of renormalization scale as in Fig.~(\ref{fig:action2}).}
\label{fig:edot-6p408}
}
\noindent
\FIGURE[t]{
\centerline{\includegraphics[width=9.0 cm,angle=-90]{edot-ratio.ps}}
\caption{Ratio of the finite-temperature spatial correlator
of the topological charge density $q(x)$
to the zero-temperature one.
The curve is the one-loop result (Eq.~\ref{eq:ththPT}).}
\label{fig:edotb-ratio}
}
\noindent
\FIGURE[t]{
\centerline{\includegraphics[width=13.0 cm,angle=0]{plotfreestrong.eps}}
\caption{
SYM correlators obtained by Fourier transform of the spectral function.
}
\label{fig:justone}
}
\noindent
\FIGURE[t]{
\centerline{\includegraphics[width=13.0 cm,angle=0]{freecomparison.eps}}
\caption{
The free correlators with $2\pi T \tau= 1$ and 0, Eq.~(\ref{eq:free}).
}
\label{fig:tautest}
}
\noindent
\acknowledgments{
We thank K.~Rajagopal, H.~Liu, J.~Minahan and A.~Vuorinen
for useful discussions.
H.M. thanks K.F.~Liu and I.~Horvath for providing the
raw lattice data of Ref.~\cite{Horvath:2005cv}.
Our simulations were done on a BlueGene L rack at MIT and
the desktop machines of the Laboratory for Nuclear Science.
This work was supported in part by
funds provided by the U.S. Department of Energy
under cooperative research agreement DE-FG02-94ER40818.
}
\section{Computational Studies}\label{sec:computation}
In this section, we illustrate the properties of the stochastic model and demonstrate that it outperforms the deterministic one in all the metrics proposed. We also seek to highlight that stochastic formulations provide benefits that go beyond improvements in social surplus. The optimization problems considered in this section were solved using {\tt CPLEX-12.6.1}. All models can be accessed at \url{http://zavalab.engr.wisc.edu/data}.
\subsection{System I}\label{sec:system1}
We first consider System I sketched in Figure \ref{fig:systemI}. The system has two deterministic suppliers on nodes 1 and 3 and a stochastic supplier on node 2. The stochastic supplier has three possible capacity scenarios $G_2(\omega)=\{25,50,75\}$ MWh of equal probabilities $p(\omega)=\{1/3,1/3,1/3\}$.
For deterministic clearing, the day-ahead capacity limit $\bar{g}$ for the wind supplier will be set to 50 MWh, the expected value forecast. The demand in node 2 is deterministic at a level of 100 MWh. We use $\alpha^d=VOLL=$1000\$/MWh as the bid price and an incremental bid price of $\Delta \alpha^d=0.001$. The bid prices $\alpha^g_i$ for the suppliers are \{10,1,20\} \$/MWh, and the incremental bid prices $\Delta \alpha^g_i$ are \{1.0,0.1,2.0\} \$/MWh. The transmission capacities of lines $1\rightarrow 2$ and $2\rightarrow 3$ are deterministic and set to $\bar{F}_{1\rightarrow 2}=\{25,25,25\}$ MWh and $\bar{F}_{2\rightarrow 3}=\{50,50,50\}$, respectively, with a penalty of $\Delta\alpha^f = 0.001$; the susceptances of both lines are 50. The line capacities have been designed such that the system becomes stressed in the scenario in which the stochastic supplier delivers only 25 MWh: in that scenario, both transmission lines become congested, and real-time prices reach high values. We use the penalty parameter $\Delta\alpha_n^{\theta} = 0.001$.
\begin{figure}
\FIGURE
{\includegraphics[width=4in]{ex3_diag.pdf}}
{Scheme of System I.\label{fig:systemI}}
{}
\end{figure}
We compare the performance of the deterministic, stochastic here-and-now, and stochastic wait-and-see (WS) settings. The results are presented in Table~\ref{tab:sysI_prices}. We compare the expected surplus for the suppliers $\varphi^{g}$ as well as prices and quantities. Because the demand is deterministic, the surplus for the consumers $\varphi^{load}$ is constant; consequently, we show only $\varphi^{g}$. For the deterministic setting, the expected supply surplus $\varphi^{g}$ is \$835, and the day-ahead prices $\pi_n$ are \{10,20,20\} \$/MWh. The price difference between the first two nodes results from the day-ahead flow for line $1\rightarrow 2$ binding at 25 MWh. In the real-time market, the prices for each scenario $\Pi_n(\omega)$ are \{9,1000,22\}, \{9,20,20\}, and \{9,14,14\} \$/MWh with expected value $\mathbb{E}[\Pi_n(\omega)]=$\{9,345,19\}. There is a strong distortion in the prices, indicated by the metrics $\mathcal{M}^{\pi}_{avg}=109$ and $\mathcal{M}^{\pi}_{max}=325$.
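These distortion metrics can be reproduced directly from the prices above; the sketch below assumes $\mathcal{M}^{\pi}_{avg}$ and $\mathcal{M}^{\pi}_{max}$ are the average and maximum absolute gap between day-ahead and expected real-time nodal prices.

```python
import numpy as np

# Day-ahead prices for System I under deterministic clearing (\$/MWh)
pi = np.array([10.0, 20.0, 20.0])
# Real-time prices per scenario, equal probabilities 1/3
Pi = np.array([[9.0, 1000.0, 22.0],
               [9.0,   20.0, 20.0],
               [9.0,   14.0, 14.0]])

E_Pi = Pi.mean(axis=0)     # expected real-time prices: ~[9, 345, 19]
dist = np.abs(pi - E_Pi)   # per-node price distortion
M_avg, M_max = dist.mean(), dist.max()
print(np.round(E_Pi), round(float(M_avg)), round(float(M_max)))  # 109 and 325
```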
\begin{table}
\TABLE
{System I. Comparison of quantities, prices, and social surplus.\label{tab:sysI_prices}}
{
\begin{tabular}{lcccccrr}
\toprule
& $g_{i}$ & $\pi_n$ & $G_{i}(\omega)$ & $\Pi_n(\omega)$ & $\mathbb{E}[\Pi_n(\omega)]$ & $\varphi^{g}$ & $\mathcal{M}_{max}^\pi$ \\
\midrule
& & & $\{25,25,50\}$ & $\{9,1000,22\}$ & & & \\
Deterministic & $\{25,50,25\}$ & $\{10,20,20\}$ & $\{25,50,25\}$ & $\{9,20,20\}$ & $\{9,345,19\}$ & 835 & 325 \\
& & & $\{25,75,0\}$ & $\{9,14,14\}$ & & & \\
\hline
& & & $\{25,25,50\}$ & $\{10,792,428\}$ & & & \\
Stochastic & $\{25,50,25\}$ & $\{10,276,154\}$ & $\{25,50,25\}$ & $\{10,20,20\}$ & $\{10,276,154\}$ & 835 & 0 \\
& & & $\{25,75,0\}$ & $\{10,15,15\}$ & & & \\
\hline
& $\{25,25,50\}$ & $\{10,1000,929\}$ & $\{25,25,50\}$ & $\{10,1000,929\}$ & & & \\
Stochastic-WS & $\{25,50,25\}$ & $\{10,20,20\}$ & $\{25,50,25\}$ & $\{10,20,20\}$ & $\{10,345,321\}$ & 800 & NA \\
& $\{25,75,0\}$ & $\{10,14,14\}$ & $\{25,75,0\}$ & $\{10,14,14\}$ & & & \\
\bottomrule
\end{tabular}
}
{}
\end{table}
We now analyze the clearing of the stochastic formulation. The day-ahead prices are \{10,276,154\} and the real-time prices are \{10,792,428\}, \{10,20,20\}, \{10,15,15\} with expected value \{10,276,154\}. The price distortion metrics $\mathcal{M}^{\pi}_{avg}$ and $\mathcal{M}^{\pi}_{max}$ are both zero. The stochastic WS (wait-and-see) solution is consistent in that it leads to no corrections of quantities in the real-time market and it yields the same day-ahead and real-time prices. Thus, {\em we can guarantee convergence of day-ahead and real-time prices for each scenario only in the presence of perfect information}. We note that the expected surplus as well as the day-ahead and real-time quantities for the stochastic and deterministic formulations are the same. The reason is that {\em the deterministic and stochastic formulations have the same primal solution}. This situation might lead the practitioner to believe that no benefits are obtained from the stochastic formulation. The prices obtained, however, are completely different. Hence, arguments based on social surplus alone do not fully capture the benefits of stochastic formulations.
The different prices obtained with both formulations lead to {\em drastically different payment distributions} among the market participants. As seen in Table \ref{tab:sysI_rev}, for the deterministic setting the suppliers obtain expected payments $\mathbb{E}[P^g_i(\omega)]$ of \$\{250,-7219,569\}. The wind supplier receives negative payments, and requires an uplift to enable cost recovery. In this case, the expected cost $\mathbb{E}[C^g_i(\omega)]$ for the wind supplier is \$52 and thus requires an expected uplift $\mathcal{M}_i^U$ of \$7,271. For the stochastic formulation, the expected payments are \$\{250,7321,7295\}. The wind supplier has positive payments and no uplift is required. This situation illustrates that the stochastic setting allocates resources efficiently. Note that all formulations are revenue adequate in expectation.
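The uplift figures can be checked with a one-line rule; the sketch below assumes the expected uplift $\mathcal{M}^U_i$ makes up the shortfall (if any) between a supplier's expected cost and its expected market payment.

```python
def expected_uplift(E_payment, E_cost):
    # Assumed uplift rule: pay the shortfall between a supplier's
    # expected cost and its expected market payment (zero if none)
    return max(0.0, E_cost - E_payment)

# Wind supplier (supplier 2) in System I, values from Table 3
print(expected_uplift(-7219.0, 52.0))  # 7271.0 under deterministic clearing
print(expected_uplift(7321.0, 52.0))   # 0.0 under stochastic clearing
```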
\begin{table}
\TABLE
{System I. Comparison of suppliers and ISO revenues.\label{tab:sysI_rev}}
{
\begin{tabular}{lccr}
\toprule
& $\mathbb{E}[P^g_i(\omega)]$ & $\mathbb{E}[C^g_i(\omega)]$ & $\mathcal{M}^{ISO}$ \\
\midrule
Deterministic & \{250,{\bf -7219},569\} & \{250,52,533\} & -8400\\
Stochastic & \{250,7321,7295\} & \{250,52,533\} & -12719\\
Stochastic-WS & \{250,9021,15656\} & \{250,50,500\} & -9546 \\
\bottomrule
\end{tabular}
}
{}
\end{table}
\begin{remark}
The optimization problems for System I are highly degenerate, and thus multiple dual solutions (i.e., prices in our context) are available. We highlight that the solutions reported in this section were obtained from the barrier method (without crossover) implemented in {\tt CPLEX-12.6.1}, which provides central-point solutions. We also report the solutions obtained from different linear programming algorithms in the Appendix.
\end{remark}
\subsubsection{Bounds on Day-ahead Quantities}
We now demonstrate that adding bounds on the day-ahead flows and phase angles, as opposed to adding penalty terms, can affect the pricing properties of the stochastic model. The price distortions $\pi-\mathbb{E}[\Pi(\omega)]$ obtained using day-ahead bounds (without the penalty term) are $\left\{-0.4,0,0\right\}$ while those obtained with the penalty term using a penalty of $\Delta \alpha^f=\Delta\alpha^\theta=0.001$ (without the bounds) are $\left\{0,0,0\right\}$. The penalty term achieves the desired pricing property. The day-ahead flows obtained with the penalty terms are $\{25,25\}$; these are the medians of the real-time flows which are $\{25,50\}$ for scenario 1, $\{25,25\}$ for scenario 2, and $\{25,0\}$ for scenario 3. This also implies that the day-ahead flows are bounded and therefore day-ahead bounds are redundant. Similarly, the day-ahead phase angles obtained with the penalty terms are $\{-3.58,-3.08,-3.58\}$; these are the medians of the real-time phase angles $\{-3.52,-3.02,-4.02\}, \{-3.58,-3.08,-3.58\}$, and $\{-3.65,-3.15,-3.15\}$ for scenarios 1, 2 and 3, respectively. This suggests that the day-ahead bounds bias the statistics.
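Because the penalty on day-ahead flows and phase angles is a symmetric absolute-value term, the optimal day-ahead values are scenario-wise medians of the real-time values, which is easy to verify on the numbers reported above:

```python
import numpy as np

# System I real-time flows (rows: scenarios; cols: lines 1->2 and 2->3)
F = np.array([[25.0, 50.0],
              [25.0, 25.0],
              [25.0,  0.0]])
# Real-time phase angles (rows: scenarios; cols: nodes 1..3)
Th = np.array([[-3.52, -3.02, -4.02],
               [-3.58, -3.08, -3.58],
               [-3.65, -3.15, -3.15]])

# A symmetric L1 penalty drives the day-ahead value to the median
print(np.median(F, axis=0))    # [25. 25.]  -- the reported day-ahead flows
print(np.median(Th, axis=0))   # [-3.58 -3.08 -3.58] -- the day-ahead angles
```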
\subsubsection{Effect of Incremental Bid Prices}
We examine the effect that the incremental bid prices have on the price distortion. Consider the case in which the demand in the central node is also stochastic, with scenarios $D(\omega)=\{100,50,25\}$. We set the incremental bid price for the stochastic supplier $\Delta \alpha^g_2$ to $1.0$. For the demand incremental bid prices $\Delta \alpha^d = 0.001,0.01,0.1$, and 1.0, the maximum price distortions $\mathcal{M}_{max}^\pi$ are $0.001, 0.010, 0.069$, and 0.334, respectively. The distortion remains bounded by the incremental bid and can be made arbitrarily small as the incremental bid is decreased. This result is consistent with the properties established.
\subsection{System II}
\begin{figure}
\FIGURE
{\includegraphics[width=5in]{ex2_diag.pdf}}
{Scheme of System II.\label{fig:systemII}}
{}
\end{figure}
We now consider the more complex system presented in Figure \ref{fig:systemII}. This is an adapted version of the system presented in \citet{philpott}. The system has two stochastic suppliers in nodes 2 and 4, three deterministic suppliers in nodes 1, 3, and 5, and one stochastic demand in node 6. The demand is treated as inelastic and follows a normal distribution with mean 250 and standard deviation 50. We use sample average approximation to solve this model and generate 25 scenarios of the demand with equal probabilities. Each stochastic supplier can have 5 possible capacities $\{10,20,60,70,90\}$ MW, and each scenario represents one of the 25 different combinations of the two suppliers' capacities. The bid prices $\alpha_i^g$ for the suppliers are $\{100,1,100,1,200\}$ \$/MWh, and the incremental bid prices $\Delta\alpha_i^g$ are $\{10,0.1,10,0.1,20\}$. We set $\alpha^d=VOLL=1000$ \$/MWh and $\Delta \alpha^d=0.001$. We also set the penalty parameters $\Delta \alpha^f_\ell=\Delta\alpha_n^\theta=0.001$. We assume that all the line susceptances are 50.
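The scenario construction can be sketched in a few lines; the random seed below is an illustrative assumption.

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)   # illustrative seed
caps = [10, 20, 60, 70, 90]      # possible capacities (MW) of each stochastic supplier
# The 25 joint capacity scenarios for the suppliers at nodes 2 and 4
supply_scen = list(itertools.product(caps, caps))
# Matching demand draws (sample average approximation), N(250, 50^2)
demand_scen = rng.normal(250.0, 50.0, size=len(supply_scen))
prob = np.full(len(supply_scen), 1.0 / len(supply_scen))  # equal probabilities
print(len(supply_scen), supply_scen[0], supply_scen[-1])  # 25 (10, 10) (90, 90)
```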
\subsubsection{Price Distortion and Uplift Payments}
\begin{table}
\TABLE
{System II. Comparison of day-ahead prices and surplus with deterministic and stochastic formulations.\label{tab:sysII}}
{
\begin{tabular}{lccc}
\toprule
& $\varphi$ & $\mathcal{M}^{\pi}_n$ & $\pi_n$ \\
\midrule
Deterministic & -139569 & $\{4,-28,-17,-158,-61,36\}$ & $\{100,-800,67,1,501,1000\}$ \\
Stochastic & -139569 & $\{-0.001,0,0,0,0,0\}$ & $\{100,-764,87,159,562,964\}$ \\
Stochastic-WS & -139737 & NA & NA \\
\bottomrule
\end{tabular}
}
{}
\end{table}
The results are presented in Tables \ref{tab:sysII} and \ref{tab:sysII_rev}. We first note that the price distortion for the deterministic setting is large, reaching values as large as 158 \$/MWh. Also note that the distortion (premia) is small and positive in nodes 1 and 6 and large and negative in the other nodes. This is inefficient because it biases incentives towards a subset of players. The system is overly optimistic about performance in the real-time market where multiple scenarios exhibit transmission congestion, but the deterministic setting cannot foresee this. The stochastic formulation has almost the same expected social surplus as the deterministic formulation, but the price distortion is eliminated.
\begin{table}
\TABLE
{System II. Comparison of suppliers and ISO revenues with deterministic and stochastic formulations.\label{tab:sysII_rev}}
{
\begin{tabular}{ccccccc}
\toprule
& $\mathbb{E}[P^{g}_i(\omega)]$ & $\mathbb{E}[C^{g}_i(\omega)]$ & $\mathcal{M}^{ISO}$\\
\midrule
Deterministic & $\{8529,33,627,{\bf -1398},24927\}$ & $\{8529,0,627,20,9640\}$ & -129111\\
Stochastic & $\{8529,33,627,1903,27985\}$ & $\{8529,0,627,20,9640\}$ & -116885\\
Stochastic-WS & $\{8458,37,570,1694,27714\}$ & $\{8458,0,570,20,9600\}$ & -117518\\
\bottomrule
\end{tabular}
}
{}
\end{table}
In Table \ref{tab:sysII_rev} we see that payments for both formulations are similar except for the 4th supplier, which is a stochastic supplier. This supplier receives a negative payment and requires uplift under deterministic clearing. The uplift is eliminated by using the stochastic formulation. The expected payments collected with the stochastic here-and-now solution are close to those of the perfect information solution.
\subsubsection{Convergence of Day-ahead Quantities: Quartiles vs. Means}
In Table~\ref{tab:sysII_quartile} we show that the day-ahead quantities $g_i$ converge to a quantile of the real-time quantities. The day-ahead quantities $g_i$ are compared with the quartiles (i.e., quantiles at $p=0.25, 0.5, 0.75$) of the real-time quantities and with their means $\mathbb{E}[G_i(\omega)]$. Recall that $\mathbb{Q}_{G_i(\omega)}(0.5) = \mathbb{M}[G_i(\omega)]$. We use asymmetric incremental bid prices $\Delta\alpha_i^{g,+}, \Delta\alpha_i^{g,-}$ as set in Table~\ref{tab:sysII_quartile}. As can be seen, convergence is achieved for all suppliers.
\begin{table}
\TABLE
{System II. Day-ahead, quartiles of real-time quantities, and mean of real-time quantities for suppliers.\label{tab:sysII_quartile}}
{
\begin{tabular}{lllrrrrr}
\toprule
$\Delta\alpha_i^{g,+}$ & $\Delta\alpha_i^{g,-}$ & Quantity & Gen 1 & Gen 2 & Gen 3 & Gen 4 & Gen 5 \\
\midrule
$0.5 \Delta\alpha_i^g$ & $1.5\Delta\alpha_i^g$
& $g_i$ & 90 & 0 & 0 & 20 & 50 \\
& & $\mathbb{Q}_{G_i(\omega)}(0.25)$ & 90 & 0 & 0 & 20 & 50 \\
& & $\mathbb{E}[G_i(\omega)]$ & 84.6 & 0.4 & 5.7 & 19.7 & 48 \\
\hline
$1.0\Delta\alpha_i^g$ & $1.0\Delta\alpha_i^g$
& $g_i$ & 91.7 & 0 & 0 & 20.8 & 50 \\
& & $\mathbb{Q}_{G_i(\omega)}(0.5)$ & 91.7 & 0 & 0 & 20.8 & 50 \\
& & $\mathbb{E}[G_i(\omega)]$ & 84.6 & 0.4 & 5.7 & 19.7 & 48 \\
\hline
$1.5\Delta\alpha_i^g$ & $0.5\Delta\alpha_i^g$
& $g_i$ & 91.7 & 0 & 2.5 & 20.8 & 50 \\
& & $\mathbb{Q}_{G_i(\omega)}(0.75)$ & 91.7 & 0 & 2.5 & 20.8 & 50 \\
& & $\mathbb{E}[G_i(\omega)]$ & 84.6 & 0.4 & 5.7 & 19.7 & 48 \\
\bottomrule
\end{tabular}
}
{}
\end{table}
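This convergence is a newsvendor effect: minimizing the expected asymmetric deviation penalty $\mathbb{E}[\Delta\alpha^{g,+}(G-g)_+ + \Delta\alpha^{g,-}(g-G)_+]$ over $g$ yields the quantile of $G$ at level $p=\Delta\alpha^{g,+}/(\Delta\alpha^{g,+}+\Delta\alpha^{g,-})$, which gives $p=0.25, 0.5, 0.75$ for the three penalty settings above. The sketch below verifies this numerically on hypothetical data; the distribution and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(100.0, 20.0, size=5000)  # hypothetical real-time quantities

def best_day_ahead(G, a_plus, a_minus):
    # Minimize the sample average of
    #   a_plus*(G - g)_+ + a_minus*(g - G)_+
    # over a grid of candidate day-ahead quantities g
    grid = np.linspace(G.min(), G.max(), 2001)
    cost = [np.mean(a_plus * np.maximum(G - g, 0.0)
                    + a_minus * np.maximum(g - G, 0.0)) for g in grid]
    return grid[int(np.argmin(cost))]

for a_plus, a_minus in [(0.5, 1.5), (1.0, 1.0), (1.5, 0.5)]:
    p = a_plus / (a_plus + a_minus)     # implied quantile level
    g_star = best_day_ahead(G, a_plus, a_minus)
    print(p, round(g_star, 1), round(float(np.quantile(G, p)), 1))
```

In each case the minimizer $g^*$ tracks the empirical $p$-quantile of $G$ up to grid and sampling resolution.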
\subsubsection{Reliability Constraints}
We now consider the case in which there are random line failures. We consider 25 scenarios and assume that each one of the lines $1\rightarrow2$, $2\rightarrow6$, $3\rightarrow6$, $4\rightarrow6$, and $2\rightarrow3$ fails in at least five scenarios. All scenarios have equal probability. The results are presented in Table \ref{tab:sysII_rev2}. The deterministic setting becomes revenue inadequate in this case, whereas the stochastic setting is revenue adequate and achieves an expected ISO revenue that is close to that of the perfect information setting. An average price distortion of 1,374 \$/MWh and a maximum distortion of 2,355 \$/MWh were obtained for the deterministic setting, indicating a pronounced effect of line failures on prices. In particular, we observed that several demands need to be curtailed in the deterministic case. The stochastic formulation eliminates the distortion and the need for uplift payments. Note also that the fourth wind supplier again faces a negative revenue under deterministic clearing and an uplift payment is needed. This again illustrates that deterministic clearing can affect resource diversification because it consistently biases the payments towards a subset of players.
\begin{table}
\TABLE
{System II. Comparison of suppliers and ISO revenues with deterministic and stochastic formulations under transmission line failures.\label{tab:sysII_rev2}}
{
\begin{tabular}{lccr}
\toprule
& $\mathbb{E}[P^{g}_i(\omega)]$ & $\mathbb{E}[C^{g}_i(\omega)]$ & $\mathcal{M}^{ISO}$ \\
\midrule
Deterministic & $\{88870,5,999,45196,63415\}$ & $\{2447,6,999,6,4240\}$ & {\bf 148311} \\
Stochastic & $\{1870 ,6,999,519 ,10296\}$ & $\{1870,6,999,5,3960\}$ & -39691 \\
Stochastic-WS & $\{1700 ,5,908,516 ,10287\}$ & $\{1700,5,908,4,3600\}$ & -39965 \\
\bottomrule
\end{tabular}
}
{}
\end{table}
\subsection{IEEE-118 System}
We now demonstrate the properties of the stochastic setting in a more complex network. The IEEE-118 system comprises 118 nodes, 186 lines, 91 demand nodes, and 54 suppliers. We assume that three stochastic suppliers are located at buses 10, 65 and 112, and that each supplier has an installed capacity of 300 MWh. This represents 14\% of the total generation capacity. We also assume that a generation level for a given stochastic supplier follows a normal distribution with mean 300 MWh and standard deviation 150 MWh. The total generation capacity is 7,280 MW, and the total load capacity is 3,733 MW. We use sample average approximation and generate 25 scenarios for the stochastic suppliers. We use 10\% of the generation cost for the incremental bid prices $\Delta\alpha_i^g$. The demands are assumed to be deterministic, and we set $\Delta \alpha^d_j=0.001$. We use the penalty parameters $\Delta \alpha^f_{\ell}=\Delta\alpha_n^{\theta}=0.001$.
\begin{table}
\TABLE
{IEEE-118 System. Comparison of suppliers and ISO revenues with deterministic and stochastic formulations.\label{tab:ieee118}}
{
\begin{tabular}{lrrrrr}
\toprule
& $\mathcal{M}^U$ & $\varphi^{g}$ & $\mathcal{M}^{\pi}_{avg}$ & $\mathcal{M}^{\pi}_{max}$ & $\mathcal{M}^{ISO}$ \\
\midrule
Deterministic & -151 & 36531 & 0.270 & 2.331 & -2306 \\
Stochastic & 0 & 36527 & 0.001 & 0.003 & -2009 \\
Stochastic-WS & 0 & 36410 & 0 & 0 & -1982 \\
\bottomrule
\end{tabular}
}
{}
\end{table}
The results are presented in Table \ref{tab:ieee118}. Uplift payments and price distortions exist for the deterministic setting. The stochastic formulation eliminates the uplift payments and the price distortion ($\mathcal{M}^{\pi}_{max}=0.003$). The difference in social surplus between the deterministic and stochastic formulations is marginal. Also note that the penalty parameters for the flows and phase angles can be set to arbitrarily small values because they have no economic interpretation. Consequently, they do not affect the social surplus significantly.
\section{Conclusions and Future Work}\label{sec:conclusions}
We have demonstrated that deterministic market clearing formulations introduce strong and arbitrary distortions between day-ahead and expected real-time prices that bias incentives and block diversification. We have presented a stochastic formulation capable of eliminating these issues. The formulation is based on a social surplus function that accounts for expected costs and penalizes deviations between day-ahead and real-time quantities. We show that the formulation yields day-ahead prices that are close to expected real-time prices. In addition, we show that the day-ahead quantities converge to quantiles of their real-time counterparts.
Future work requires extending the model in multiple directions. First, it is necessary to capture the progressive resolution of uncertainty by using multi-stage models and to incorporate ramping constraints and unit commitment decisions. Second, it is necessary to construct formulations whose day-ahead decisions approach ideal wait-and-see behavior. \citet{morales2013pricing} demonstrate that this might be possible using bi-level formulations, but a more detailed analysis is needed. Third, the proposed stochastic model is computationally more challenging than existing models in the literature because it incorporates the detailed network in the first stage. This leads to problems with much larger first-stage dimensions, which are difficult to decompose and parallelize; consequently, scalable solution strategies are needed. Finally, it is necessary to explore implementation issues of stochastic markets, such as the effects of distributional errors.
\section{Clearing Formulations}\label{sec:models}
In this section, we present {\em energy-only} day-ahead deterministic and stochastic clearing formulations. The term ``energy-only'' indicates that no unit commitment decisions are made. We consider these simplified formulations in order to focus on important concepts related to pricing and payments to suppliers and consumers. Model extensions are left as a topic of future research.
\subsection{Deterministic Formulation}
In a deterministic setting, the day-ahead market is cleared by solving the following optimization problem.
\begin{subequations}\label{eq:detday-ahead}
\begin{align}
\min_{d_j,g_i,f_{\ell},\theta_n} \quad
& \sum_{i \in \mathcal{G}} \alpha_{i}^gg_{i}-\sum_{j \in \mathcal{D}}\alpha_j^dd_{j}\\
\text{s.t.} \quad
& \sum_{\ell \in \mathcal{L}_n^{rec}} f_{\ell} -\sum_{\ell \in \mathcal{L}_n^{snd}} f_{\ell} + \sum_{i \in \mathcal{G}_n} g_{i} - \sum_{j \in \mathcal{D}_n} d_{j}=0,\quad (\pi_n)\quad n\in\mathcal{N}\label{eq:detdaflow}\\
& f_\ell = B_\ell (\theta_{rec(\ell)} - \theta_{snd(\ell)}), \quad \ell\in\mathcal{L} \label{eq:powerflow}\\
& -\bar{f}_{\ell}\leq f_{\ell} \leq \bar{f}_{\ell}, \quad \ell \in \mathcal{L} \label{eq:det:flowbounds}\\
& 0\leq g_{i}\leq \bar{g}_{i},\quad i\in\mathcal{G}\\
& 0\leq d_{j}\leq \bar{d}_{j},\quad j\in\mathcal{D}\\
& \underline\theta_n \leq \theta_n \leq \overline\theta_n, \quad n\in\mathcal{N}.\label{eq:det:anglebounds}
\end{align}
\end{subequations}
The objective function of this problem is the day-ahead {\em negative} social surplus. The solution of this problem gives the day-ahead quantities $g_{i},d_{j}$, flows $f_{\ell}$, phase angles $\theta_n$, and prices $\pi_n$. The deterministic formulation assumes a given value for the capacities $\bar{g}_{i},\bar{d}_{j},\bar{f}_{\ell}, \underline\theta_n$, and $\overline\theta_n$. Because the conditions of the real-time market are uncertain at the time the day-ahead problem \eqref{eq:detday-ahead} is solved, these capacities are typically assumed to be the most probable ones (e.g., the expected value or {\em forecast} for supply and demand capacities) or are set based on the current state of the system (e.g., for line capacities and phase angle ranges). In particular, it is usually assumed that $\bar{g}_{i}=\mathbb{E}[\bar{G}_{i}(\omega)]$, $\bar{d}_{j}=\mathbb{E}[\bar{D}_{j}(\omega)]$, and $\bar{f}_{\ell}$ is the most probable state. One can also assume that $\bar{g}_{i}=Cap^g_i$, $\bar{d}_j=Cap^d_j$, and $\bar{f}_{\ell}=Cap^f_{\ell}$. Such an assumption, however, can yield high economic penalties if the day-ahead dispatched quantities are far from those realized in the real-time market. Similarly, one can also assume conservative capacities (e.g., worst-case). In this sense, the day-ahead capacities $\bar{g}_i,\bar{d}_j,\bar{f}_{\ell}$ can be used as mechanisms to hedge against risk, as experienced ISO operators do to allow for a safety margin. Doing so, however, gives only limited control because the players need to summarize the entire possible range of real-time capacities in one statistic. In Section \ref{sec:metrics} we argue that this limitation can induce a distortion between day-ahead and real-time prices and bias revenues.
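To make the day-ahead clearing problem concrete, the sketch below solves it for a three-node instance patterned after System I (Section \ref{sec:system1}) using SciPy's HiGHS-based {\tt linprog}. The bids, line limits, and wind forecast follow System I; the 100-MWh caps on the deterministic suppliers and the phase-angle box are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: x = [g1, g2, g3, d, f12, f23, t1, t2, t3]
c = np.array([10.0, 1.0, 20.0, -1000.0, 0.0, 0.0, 0.0, 0.0, 0.0])

B = 50.0  # line susceptance
A_eq = np.array([
    [1, 0, 0,  0, -1,  0,  0,  0,  0],  # node 1 balance: g1 - f12 = 0
    [0, 1, 0, -1,  1, -1,  0,  0,  0],  # node 2 balance: g2 - d + f12 - f23 = 0
    [0, 0, 1,  0,  0,  1,  0,  0,  0],  # node 3 balance: g3 + f23 = 0
    [0, 0, 0,  0,  1,  0,  B, -B,  0],  # f12 = B*(t2 - t1), paper's rec-snd sign
    [0, 0, 0,  0,  0,  1,  0,  B, -B],  # f23 = B*(t3 - t2)
], dtype=float)
b_eq = np.zeros(5)
bounds = [(0, 100), (0, 50), (0, 100), (0, 100),  # g1, g2 (wind forecast), g3, d
          (-25, 25), (-50, 50),                   # line capacities
          (-10, 10), (-10, 10), (-10, 10)]        # phase-angle box (assumed)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(np.round(res.x[:3], 2), res.fun)  # dispatch [25, 50, 25]; objective -99200
print(res.eqlin.marginals[:3])          # duals of the balance rows: nodal
                                        # prices, up to the solver's sign convention
```

The recovered dispatch $\{25,50,25\}$ matches the deterministic day-ahead quantities reported for System I.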
When the capacities become known, the ISO uses fixed day-ahead committed quantities $g_{i},d_{j},f_{\ell},\theta_n$, to solve the following real-time clearing problem.
\begin{subequations}\label{eq:det_recourse}
\begin{align}
\min_{D_j(\cdot),G_i(\cdot),F_{\ell}(\cdot),\Theta_n(\cdot)} \quad
& \sum_{i \in \mathcal{G}} \left(\alpha_{i}^{g,+}(G_{i}(\omega)-g_{i})_+ - \alpha_{i}^{g,-}(G_{i}(\omega)-g_{i})_-\right)\\
& \qquad +\sum_{j \in \mathcal{D}} \left(\alpha_{j}^{d,+}(D_{j}(\omega)-d_{j})_--\alpha_{j}^{d,-}(D_{j}(\omega)-d_{j})_+ \right)\\
\text{s.t.} \quad
& \sum_{\ell \in \mathcal{L}_n^{rec}} F_{\ell}(\omega) -\sum_{\ell \in \mathcal{L}_n^{snd}} F_{\ell}(\omega) + \sum_{i \in \mathcal{G}_n} G_{i}(\omega) - \sum_{j \in \mathcal{D}_n} D_{j}(\omega)=0,\quad (\Pi_n(\omega)), \quad n\in\mathcal{N}\label{eq:detrtflow}\\
& F_\ell(\omega) = B_\ell (\Theta_{rec(\ell)}(\omega) - \Theta_{snd(\ell)}(\omega)), \quad \ell\in\mathcal{L}\\
& -\bar{F}_{\ell}(\omega)\leq F_{\ell}(\omega) \leq \bar{F}_{\ell}(\omega), \quad \ell \in \mathcal{L}\\
& 0\leq G_{i}(\omega)\leq \bar{G}_{i}(\omega),\quad i\in\mathcal{G}\\
& 0\leq D_{j}(\omega)\leq \bar{D}_{j}(\omega),\quad j\in\mathcal{D}\\
& \underline\theta_n \leq \Theta_n(\omega) \leq \overline\theta_n, \quad n\in\mathcal{N}.
\end{align}
\end{subequations}
The objective function of this problem is the real-time negative social surplus. The solution of this problem yields different real-time quantities ${G}_{i}(\omega),{D}_{j}(\omega)$, flows $F_{\ell}(\omega)$, phase angles $\Theta_n(\omega)$, and prices $\Pi_n(\omega)$ depending on the scenario $\omega\in\Omega$ realized.
\subsection{Stochastic Formulation}
Motivated by the structure of the day-ahead and real-time market problems, we consider the stochastic market clearing formulation:
\begin{subequations}\label{eq:stoch}
\begin{align}
\min_{d_j,D_j(\cdot),g_i,G_i(\cdot),f_{\ell},F_{\ell}(\cdot),\theta_n,\Theta_n(\cdot)} \;
& \varphi^{sto} := \sum_{i \in \mathcal{G}} \mathbb{E}\left[\alpha_{i}^gg_{i}+\alpha_{i}^{g,+}(G_{i}(\omega)-g_{i})_+ - \alpha_{i}^{g,-}(G_{i}(\omega)-g_{i})_-\right]\nonumber\\
&\qquad + \sum_{j \in \mathcal{D}} \mathbb{E}\left[-\alpha_j^dd_{j}+\alpha_{j}^{d,+}(D_{j}(\omega)-d_{j})_- - \alpha_{j}^{d,-}(D_{j}(\omega)-d_{j})_+\right]\nonumber\\
& \qquad + \sum_{\ell\in\mathcal{L}} \mathbb{E}\left[\Delta\alpha^{f,+}_{\ell} (F_{\ell}(\omega)-f_{\ell})_+ + \Delta\alpha^{f,-}_{\ell} (F_{\ell}(\omega)-f_{\ell})_-\right] \notag\\
& \qquad + \sum_{n\in\mathcal{N}} \mathbb{E}\left[\Delta\alpha^{\theta,+}_n (\Theta_n(\omega) - \theta_n)_+ + \Delta\alpha^{\theta,-}_n (\Theta_n(\omega) - \theta_n)_-\right]\\
\text{s.t.} \;
& \sum_{\ell \in \mathcal{L}_n^{rec}} f_{\ell} -\sum_{\ell \in \mathcal{L}_n^{snd}} f_{\ell} + \sum_{i \in \mathcal{G}_n} g_{i} - \sum_{j \in \mathcal{D}_n} d_{j}=0,\quad(\pi_n)\quad n\in\mathcal{N}\label{eq:networkfwd}\\
& f_\ell = B_\ell (\theta_{rec(\ell)} - \theta_{snd(\ell)}), \quad \ell\in\mathcal{L} \label{eq:DAangle}\\
& \sum_{\ell \in \mathcal{L}_n^{rec}} \left(F_{\ell}(\omega)-f_{\ell}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(F_{\ell}(\omega)-f_{\ell}\right)+ \sum_{i \in \mathcal{G}_n} \left(G_{i}(\omega)-g_{i}\right) \nonumber\\
& \qquad - \sum_{j \in \mathcal{D}_n} \left(D_{j}(\omega)-d_{j}\right)=0,\quad(p(\omega)\Pi_n(\omega))\quad \omega \in \Omega,n\in\mathcal{N}\label{eq:networkrt}\\
& F_\ell(\omega) = B_\ell (\Theta_{rec(\ell)}(\omega) - \Theta_{snd(\ell)}(\omega)), \quad \omega\in\Omega,\ell\in\mathcal{L}\label{eq:RTangle}\\
& -\bar{F}_{\ell}(\omega)\leq F_{\ell}(\omega) \leq \bar{F}_{\ell}(\omega), \;\; \omega\in\Omega,\ell \in \mathcal{L}\label{eq:boundstoch1}\\
& 0\leq G_{i}(\omega)\leq \bar{G}_{i}(\omega),\qquad \omega\in\Omega,i\in\mathcal{G}\\
& 0\leq D_{j}(\omega)\leq \bar{D}_{j}(\omega),\qquad \omega\in\Omega,j\in\mathcal{D}\label{eq:boundstoch2}\\
& \underline\theta_n \leq \Theta_n(\omega) \leq \overline\theta_n, \quad \omega\in\Omega,n\in\mathcal{N}.
\end{align}
\end{subequations}
The stochastic setting provides a natural mechanism to {\em anticipate} the effects of day-ahead decisions on real-time market corrections. This property gives rise to several important pricing and payment properties, as we will see in the following section.
The above formulation is partially based on the one proposed by \citet{philpott}. We highlight the following features of the model:
\begin{itemize}
\item The real-time prices (duals of the network balance \eqref{eq:networkrt}) have been weighted by their corresponding probabilities. This feature will enable us to construct the Lagrange function of the problem in terms of expectations.
\item The network balance in the real-time market is written in terms of the residual quantities $(G_{i}(\omega)-g_{i})$, $(D_{j}(\omega)-d_{j})$, and flows $(F_{\ell}(\omega)-f_{\ell})$. This feature will be key in obtaining consistent prices and it emphasizes the fact that the real-time market is a market of corrections.
\item We assume that the real-time quantity bounds $\bar{G}_{i}(\omega),\bar{D}_{j}(\omega),\bar{F}_{\ell}(\omega)$ are independent of the day-ahead quantities.
\end{itemize}
The differences between the proposed formulation and the one presented by \citet{philpott} are the following.
\begin{itemize}
\item The formulation does not impose bounds on the day-ahead quantities, flows, and phase angles. In Section \ref{sec:properties} we will prove that the penalization terms render such bounds redundant (see Theorem~\ref{th:networkbounds}).
\item The parameters $\Delta\alpha^{f,+}_{\ell},\Delta\alpha^{f,-}_{\ell}, \Delta\alpha^{\theta,+}_n, \Delta\alpha^{\theta,-}_n> 0$ penalize deviations between day-ahead and real-time quantities. In Section \ref{sec:metrics} we will see that these penalties are motivated by the structure of the social surplus, and in Section \ref{sec:properties} we will show that they are key to ensuring desirable pricing properties.
\item We allow for randomness in the transmission line capacities. In Section \ref{sec:properties} we will see that doing so has no effect on the underlying properties of the model.
\item We assume that the stochastic problem has relatively complete recourse; that is, there exists a feasible real-time recourse decision for any day-ahead decision.
\end{itemize}
We refer to the solution of the stochastic formulation \eqref{eq:stoch} as the {\em here-and-now} solution to reflect the fact that a single implementable decision must be made now in anticipation of the uncertain future and that day-ahead quantities and flows are scenario-independent. We also consider the (ideal, non-implementable) {\em wait-and-see} (WS) solution. For details, refer to \citet{birge}. In the WS setting, we assume that the capacities for each scenario are actually known at the moment of decision. In other words, we assume availability of perfect information. In order to obtain the WS solution, the clearing problem \eqref{eq:stoch} is solved by allowing the first-stage decisions $g_i,d_j,f_{\ell}$ to be scenario-dependent. It is not difficult to prove that in this case, each scenario generates day-ahead prices and quantities that are equal to their real-time counterparts because no corrections are necessary. We denote the expected social surplus obtained under perfect information as $\varphi^{sto}_{WS}$.
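The here-and-now versus wait-and-see gap can be illustrated numerically (hypothetical Python on a single-node toy with symmetric deviation penalties; the numbers are illustrative only):

```python
import numpy as np

# Single-node toy: uncertain real-time demand, one supplier.
D = np.array([80.0, 100.0, 120.0])
p = np.array([0.3, 0.4, 0.3])
alpha_g = 30.0   # day-ahead energy bid
dpen = 5.0       # symmetric penalty on day-ahead/real-time deviations

def phi_hn(g):
    """Here-and-now surplus: one scenario-independent g, penalized corrections."""
    return float((p * (alpha_g * D + dpen * np.abs(D - g))).sum())

# The objective is piecewise linear, so the minimizer lies at a scenario point.
phi_here = min(phi_hn(g) for g in D)

# Wait-and-see: g(omega) = D(omega) in every scenario, so no corrections.
phi_ws = float((p * alpha_g * D).sum())
```

The here-and-now surplus is necessarily at least the wait-and-see value $\varphi^{sto}_{WS}$; the gap is the expected cost of not having perfect information.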
\section{Introduction}
Day-ahead markets enable commitment and pricing of resources to hedge against uncertainty in demand, generation, and network capacities that are observed in real time. The day-ahead market is cleared by independent system operators (ISOs) using deterministic unit commitment (UC) formulations that rely on expected capacity forecasts, while uncertainty is handled by allocating reserves that are used to balance the system if real-time capacities deviate from the forecasts. A large number of deterministic clearing formulations have been proposed in the literature. Representative examples include those of \citet{carrion,gribikramp}, and \citet{ucbook}. Pricing issues arising in deterministic clearing formulations have been explored by \citet{gribikelmp,galianasocial}, and \citet{oneill}.
In addition to guaranteeing reliability and maximizing social surplus, several metrics are monitored by ISOs to ensure that the market operates efficiently. For instance, as is discussed in \citet{ott}, the ISO must ensure that market players receive economic incentives that promote participation (give participants the incentive to follow commitment and dispatch signals). It is also desired that day-ahead and real-time prices are sufficiently close or converge. One of the reasons is that price convergence is an indication that capacity forecasts are effective reflections of real-time capacities. Recent evidence provided by \citet{bowden1}, however, has shown that persistent and predictable deviations between day-ahead and real-time prices (premia) exist in certain markets. This can bias the incentives to a subset of players and block the entry of new players and technologies. The introduction of purely financial players was intended to eliminate premia, but recent evidence provided by \citet{birgevirtual} shows that this has not been fully effective. One hypothesis is that virtual players can exploit predictable price differences in the day-ahead market to create artificial congestion and benefit from financial transmission rights \citep{joskow2000transmission}.
Prices are also monitored by ISOs to ensure that they do not run into financial deficit (a situation called revenue inadequacy) when balancing payments to suppliers and from consumers. This is discussed in detail in \citet{philpottftr}. In addition, ISOs might need to use uplift payments and adjust prices to protect suppliers from operating at an economic loss. This is necessary to prevent players from leaving the market. As discussed by \citet{oneill}, \citet{morales2012pricing}, and \citet{gribikelmp}, uplift payments can result from using incomplete characterizations of the system in the clearing model. Such characterizations can arise, for instance, in the presence of nonconvexities and stochasticity.
Achieving efficient market operations under intermittent renewable generation is a challenge for the ISOs because uncertainty follows complex spatiotemporal patterns not faced before \citep{zavalauc}. In addition, the power grid is relying more strongly on natural gas and transportation infrastructures, and it is thus necessary to quantify and mitigate uncertainty in more systematic ways \citep{liugas,zavalagas}.
\subsection{Previous Work}
A wide range of stochastic formulations of day-ahead market clearing and operational UC procedures has been previously proposed. In operational UC models, on/off decisions are made in advance (here-and-now) to ensure that enough running capacity is available at future times to balance the system. The objective of these formulations is to ensure reliability and maximization of social surplus (cost in case of inelastic demands) in intra-day operations. Examples include the works of \citet{takritibirge,wang,zavalauc,ryan,ruiz,bouffardstochastic,papavasiliou2013multiarea}. These studies have demonstrated significant improvements in reliability over deterministic formulations. However, these works do not explore pricing issues.
Stochastic day-ahead clearing formulations have been proposed by \citet{kaye} and \citet{fullerpricing}. \citet{kaye} analyze day-ahead and real-time markets under uncertainty and argue that day-ahead prices should be set to expected values of the real-time prices. This {\em price consistency} ensures that the day-ahead market does not bias real-time market incentives in the long run. Such consistency also avoids arbitrage, as is argued by \citet{oren2013}. \citet{fullerpricing} propose pricing schemes to achieve cost recovery for all suppliers (i.e., payments cover the suppliers' production costs). The pricing schemes, however, rely only partially on dual variables generated by the stochastic clearing model, which are adjusted to achieve cost recovery. Consequently, these procedures do not guarantee dual and model consistency.
\citet{morales2012pricing} propose a stochastic clearing model to price electricity in pools with stochastic producers. Their model co-optimizes energy and reserves and they prove that it leads to revenue adequacy in expectation. In addition, they prove that prices allow for cost recovery in expectation for all players (i.e., no uplifts are needed in expectation) but pricing consistency is not explored.
\citet{philpott} propose a stochastic formulation that captures day-ahead and real-time bidding of both suppliers and consumers. The formulation maximizes the day-ahead social surplus and the expected value of the real-time corrections by considering the possibility of players' bidding in the real-time market. The authors prove that the formulation leads to revenue adequacy in expectation and provide conditions under which adequacy will hold for each scenario. The authors do not explore pricing consistency and economic incentives.
\citet{oren2013} propose a stochastic equilibrium formulation in which players bid parameters of a quadratic supply function to maximize the expected value of their profit function while the ISO uses these parameters to solve the clearing model and generate day-ahead and real-time quantities and prices. It is shown that the equilibrium model generates day-ahead prices that converge to expected value prices and thus achieve consistency. It is also shown that day-ahead quantities converge to expected value quantities and a small case study is presented to demonstrate that the formulation yields higher social surplus and producer profits compared to deterministic clearing. The proposed formulation uses a quadratic supply function and quadratic penalties for deviations between day-ahead and real-time quantities. No network and no capacity constraints are considered.
\citet{morales2013pricing} propose a bilevel stochastic optimization formulation that uses forecast capacities of stochastic suppliers as degrees of freedom. Using computational studies, they demonstrate that their framework provides cost recovery for all suppliers and for each scenario. The authors, however, do not discuss the effects of the modified pricing strategy on consumer payments (the demands are treated as inelastic) and no theoretical guarantees are provided. In particular, it is not guaranteed that a set of day-ahead capacities and prices exist that can achieve cost recovery for both suppliers and consumers in each scenario. While plausible, we believe that this requires further evidence and theoretical justification.
\subsection{Contributions of This Work}
In this work, we argue that deterministic formulations generate day-ahead prices that are distorted representations of expected real-time prices. This pricing inconsistency arises because solving a day-ahead clearing model using summarizing statistics of uncertain capacities (e.g., expected forecasts) does not lead to day-ahead prices that are expected values of the real-time prices. We argue that these price distortions lead to diverse issues such as the need for uplift payments as well as arbitrary and biased incentives that block diversification. We extend and analyze the stochastic clearing formulation of \citet{philpott}, in which linear supply functions for day-ahead and real-time markets are used. The structure of this surplus function has the key property of yielding bounded price distortions. We also prove that when the price distortion is zero, the formulation yields day-ahead quantities that converge to a quantile of their real-time counterparts. In addition, we prove that the formulation yields revenue adequacy and zero uplifts in expectation. We provide several case studies to demonstrate the properties of the stochastic formulation.
The paper is structured as follows. In Section \ref{sec:setting} we describe the market setting. In Section \ref{sec:models} we present deterministic and stochastic formulations of the day-ahead ISO clearing problem. In Section \ref{sec:metrics} we present a set of performance metrics to assess the benefits of the stochastic formulation over its deterministic counterpart. In Section \ref{sec:properties} we present the pricing properties of the formulation. In Section \ref{sec:computation} we present case studies to demonstrate the developments. Concluding remarks and directions of future work are provided in Section \ref{sec:conclusions}.
\section{Market Setting}\label{sec:setting}
We consider a market setting based on the work of \citet{philpott} and \citet{ott}. A set of suppliers (generators) $\mathcal{G}$ and consumers (demands) $\mathcal{D}$ bid into the day-ahead market by providing price bids $\alpha_i^{g}\geq 0$, $i\in\mathcal{G}$ and $\alpha_j^{d}\geq 0$, $j\in\mathcal{D}$, respectively. If a given demand is inelastic, we set the bid price to $\alpha_{j}^{d}=VOLL$ where $VOLL$ denotes the value of lost load, typically 1,000 \$/MWh. Suppliers and consumers also provide estimates of the available capacities $\bar{g}_{i}$ and $\bar{d}_{j}$, respectively. We assume that these capacities satisfy $0\leq \bar{g}_i\leq Cap_i^g$ and $0\leq \bar{d}_j\leq Cap_j^d$, where $0\leq Cap_{i}^g < +\infty$ is the total installed capacity of the supplier (its maximum possible supply) and $0\leq Cap_{j}^d < +\infty$ is the total installed capacity of the consumer (its maximum possible demand). The cleared day-ahead quantities for suppliers and consumers are given by $g_{i}$ and $d_{j}$, respectively. These satisfy $0\leq g_{i}\leq \bar{g}_{i}$ and $0\leq d_{j}\leq \bar{d}_{j}$.
Suppliers and consumers are connected through a network
comprising a set of lines $\mathcal{L}$ and a set
of nodes $\mathcal{N}$. For each line $\ell \in\mathcal{L}$
we define its sending node as $snd(\ell)\in\mathcal{N}$ and
its receiving node as $rec(\ell)\in\mathcal{N}$ (we highlight that this definition of sending node is arbitrary because the flow can go in both directions). For each
node $n\in\mathcal{N}$, we define its set of receiving lines
as $\mathcal{L}_n^{rec}\subseteq \mathcal{L}$ and its set of
sending lines as $\mathcal{L}_{n}^{snd}\subseteq \mathcal{L}$. These sets are given by
\begin{subequations}
\begin{align}
\mathcal{L}_n^{rec}&=\{\ell\in\mathcal{L}\,|\,n=rec(\ell)\},\; n\in\mathcal{N}\\
\mathcal{L}_n^{snd}&=\{\ell\in\mathcal{L}\,|\,n=snd(\ell)\},\; n\in\mathcal{N}.
\end{align}
\end{subequations}
Day-ahead capacities $\bar{f}_{\ell}$ are also typically estimated for the transmission lines. We assume that these satisfy $0\leq \bar{f}_{\ell}\leq Cap^f_{\ell}$. Here, $0\leq Cap^f_\ell< +\infty$ is the installed capacity of the line (its maximum possible value). The cleared day-ahead flows are given by $f_\ell$ such that $-\bar{f}_\ell\leq f_\ell\leq \bar{f}_\ell$.
The flows $f_\ell$ are determined by the line susceptance $B_\ell$ and the phase angle difference between two nodes of the line. Day-ahead capacities $\underline\theta_n,\bar\theta_n$ are estimated for each node $n\in\mathcal{N}$. The cleared day-ahead phase angles are given by $\theta_n$ such that $\underline\theta_n \leq \theta_n \leq \bar\theta_n$ for $n\in\mathcal{N}$. We define the set of all suppliers connected to node $n\in\mathcal{N}$ as $\mathcal{G}_n\subseteq \mathcal{G}$ and the set of demands connected to node $n$ as $\mathcal{D}_n\subseteq \mathcal{D}$. Subindex $n(i)$ indicates the node at which supplier $i\in \mathcal{G}$ is connected, and $n(j)$ indicates the node at which the demand $j\in\mathcal{D}$ is connected. We use subindex $i$ exclusively for suppliers and subindex $j$ exclusively for consumers.
At the moment the day-ahead market is cleared, the
real-time market conditions are uncertain. In particular, we
assume that a subset of generation, demand, and transmission
line capacities are uncertain. We further assume that discrete
distributions comprising a finite set of scenarios
$\Omega$ and $p(\omega)$ denote the probability of
scenario $\omega\in\Omega$. We also require that
$\sum_{\omega\in\Omega}p(\omega)=1$. The expected value of
variable $Y(\cdot)$ is given by
$\mathbb{E}[Y(\omega)]=\sum_{\omega \in \Omega}
p(\omega)Y(\omega)$. If $Y(\omega)$ is scalar-valued, the quantile function $\mathbb{Q}$ is defined as
\begin{align}\label{eq:defQuantile}
\mathbb{Q}_{Y(\omega)}(p) := \inf \left\{ y \in \mathbb{R} \;:\; \mathbb{P}(Y(\omega) \leq y) \geq p\right\}.
\end{align}
Moreover, the median is denoted as $\mathbb{M}[Y(\omega)] = \mathbb{Q}_{Y(\omega)}(0.5)$ and satisfies
\begin{align}
\mathbb{M}[Y(\omega)]&= \mathop{\textrm{argmin}}_{m} \mathbb{E}[|Y(\omega)-m|],\label{eq:median}
\end{align}
where $|\cdot|$ is the absolute value function.
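As a quick numerical check of the quantile definition \eqref{eq:defQuantile} and the median characterization \eqref{eq:median} on a discrete distribution (hypothetical Python; the support and probabilities are illustrative):

```python
import numpy as np

# Discrete distribution: sorted support and probabilities (illustrative).
y = np.array([10.0, 20.0, 30.0, 50.0])
p = np.array([0.2, 0.4, 0.3, 0.1])

def quantile(q):
    """Q_Y(q) = inf{v : P(Y <= v) >= q} for the discrete distribution above."""
    cdf = np.cumsum(p)             # assumes y is sorted ascending
    return y[np.searchsorted(cdf, q)]

median = quantile(0.5)

# The median minimizes E|Y - m|; the objective is piecewise linear in m,
# so the minimizer lies at a support point and scanning y suffices.
best = min(y, key=lambda m: float((p * np.abs(y - m)).sum()))
```

Both computations land on the same point, in line with \eqref{eq:median}.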
In the real-time market, the suppliers can offer to sell additional generation over the agreed day-ahead quantities at a bid price $\alpha_i^{g,+}\geq 0$. The additional generation is given by $(G_{i}(\omega)-g_{i})_+$ where $G_{i}(\omega)$ is the cleared quantity in the real-time market and $0\leq \bar{G}_{i}(\omega)\leq Cap^g_i$ is the realized capacity under scenario $\omega\in{\Omega}$. Real-time generation quantities are bounded as $0\leq G_{i}(\omega) \leq \bar{G}_{i}(\omega)$. Here, $(X-x)_+:=\textrm{max}\{X-x,0\}$. The suppliers also have the option of buying electricity at an offering price $\alpha_i^{g,-}\geq 0$ to account for any uncovered generation $(G_{i}(\omega)-g_{i})_-$ over the agreed day-ahead quantities. Here, $(X-x)_-:=\textrm{max}\{-(X-x),0\}$.
Consumers provide bid prices $\alpha_j^{d,-}\geq 0$ to buy additional demand $(D_{j}(\omega)-d_{j})_+$ in the real-time market, where $D_{j}(\omega)$ is the cleared quantity and $0\leq \bar{D}_{j}(\omega)\leq Cap^d_{j}$ is the available demand capacity realized under scenario $\omega\in\Omega$. We thus have $0\leq D_{j}(\omega)\leq \bar{D}_{j}(\omega)$. Consumers also have the option of selling the demand deficit $(D_{j}(\omega)-d_{j})_-$ at price $\alpha_{j}^{d,+}\geq 0$.
The flows cleared in the real-time market are given by $F_{\ell}(\omega)$ and satisfy $-\bar{F}_{\ell}(\omega)\leq {F}_{\ell}(\omega)\leq \bar{F}_{\ell}(\omega)$. Here, $\bar{F}_{\ell}(\omega)$ is the transmission line capacity realized under scenario $\omega\in\Omega$ and satisfies $0\leq \bar{F}_{\ell}(\omega)\leq Cap^f_{\ell}$. Uncertain line capacities can be used to model $N-x$ contingencies or uncertainties in capacity due to ambient conditions (e.g., ambient temperature affects line capacity). The cleared phase angles in the real-time market are given by $\Theta_n(\omega)$ such that $\underline\theta_n\leq \Theta_n(\omega) \leq \bar\theta_n$ for $n\in\mathcal{N}$.
We also define day-ahead clearing prices (i.e., locational marginal prices) for each node $n\in\mathcal{N}$ as $\pi_n$. The real-time prices are defined as $\Pi_n(\omega), \; \omega\in \Omega$.
\section{ISO Performance Metrics for Market Clearing} \label{sec:metrics}
In this section, we discuss some objectives of the ISOs from a market operations standpoint and use these to motivate a new set of metrics to quantify the performance of deterministic and stochastic formulations. We place special emphasis on the structure of the {\em social surplus function} and on the issue of {\em price consistency}. We provide arguments as to why price consistency is a key property for providing correct incentives. We argue that deterministic formulations do not actually yield price consistency and hence result in a range of undesired effects such as biased payments, revenue inadequacy, and the need for uplifts.
We highlight that we define different metrics based on market behavior in expectation. A practical way of interpreting these {\em expected metrics} is the following: assume that the market conditions of a given day are repeated over a sequence of days and we collect the results over such a period by using each day as a scenario. We then compute a certain metric (like the social welfare) to perform the comparisons between the stochastic and deterministic clearing mechanisms to evaluate performance. In this sense, market behavior in expectation can also be interpreted as long-run market behavior.
\subsection{Social Surplus}
Consider the combination of the day-ahead and real-time costs for suppliers and consumers,
\begin{subequations}\label{costs}
\begin{align}
C^g_{i}(\omega)&=+\alpha_{i}^gg_{i}+\alpha_{i}^{g,+}(G_{i}(\omega)-g_{i})_+ - \alpha_{i}^{g,-}(G_{i}(\omega)-g_{i})_-\\
C^d_{j}(\omega)&=-\alpha_j^dd_{j}+\alpha_{j}^{d,+}(D_{j}(\omega)-d_{j})_- - \alpha_{j}^{d,-}(D_{j}(\omega)-d_{j})_+.
\end{align}
\end{subequations}
We define the {\em incremental bid prices} as $\Delta\alpha_i^{g,+} := \alpha_i^{g,+} - \alpha_i^g$, $\Delta\alpha_i^{g,-} := \alpha_i^{g} - \alpha_i^{g,-}$, $\Delta\alpha_j^{d,+} := \alpha_j^{d,+} - \alpha_j^d$, and $\Delta\alpha_j^{d,-} := \alpha_j^{d} - \alpha_j^{d,-}$. To avoid degeneracy, we require that the incremental bid prices are positive: $\Delta\alpha_i^{g,+}, \Delta\alpha_i^{g,-}, \Delta\alpha_j^{d,+}, \Delta\alpha_j^{d,-} > 0$.
\begin{theorem}
Assume that the incremental bid prices are positive. The cost functions for suppliers and consumers can be expressed as
\begin{subequations}\label{eq:asymcost}
\begin{align}
C_{i}^g(\omega) &= \alpha_i^{g} G_{i}(\omega) + \Delta \alpha_{i}^{g,+} (G_{i}(\omega)-g_{i})_+ + \Delta \alpha_{i}^{g,-} (G_{i}(\omega)-g_{i})_-, \quad i\in\mathcal{G},\omega\in\Omega\\
C_{j}^d(\omega) &= -\alpha_{j}^{d} D_{j}(\omega) + \Delta \alpha_{j}^{d,+} (D_{j}(\omega)-d_{j})_- + \Delta \alpha_{j}^{d,-} (D_{j}(\omega)-d_{j})_+, \quad j\in\mathcal{D},\omega\in\Omega.
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof}
Consider the cost function for suppliers
\begin{align*}
C^g_{i}(\omega) &= \alpha_{i}^g g_{i} + \alpha_{i}^{g,+} (G_{i}(\omega)-g_{i})_+ - \alpha_{i}^{g,-} (G_{i}(\omega)-g_{i})_-\\
&= \alpha_{i}^g g_{i} + (\alpha_{i}^{g} + \Delta \alpha_i^{g,+}) (G_{i}(\omega)-g_{i})_+ - (\alpha_{i}^{g} - \Delta \alpha_i^{g,-}) (G_{i}(\omega)-g_{i})_- \\
&= \alpha_{i}^g g_{i} + \alpha_{i}^{g} (G_{i}(\omega) - g_{i})_+ - \alpha_{i}^{g} (G_{i}(\omega) - g_{i})_- + \Delta \alpha_i^{g,+} (G_{i}(\omega) - g_{i})_+ + \Delta \alpha_i^{g,-} (G_{i}(\omega) - g_{i})_- \\
&= \alpha_{i}^g g_{i} + \alpha_{i}^{g} (G_{i}(\omega) - g_{i}) + \Delta \alpha_i^{g,+} (G_{i}(\omega) - g_{i})_+ + \Delta \alpha_i^{g,-} (G_{i}(\omega) - g_{i})_- \\
&= \alpha_{i}^{g} G_{i}(\omega) + \Delta \alpha_i^{g,+} (G_{i}(\omega) - g_{i})_+ + \Delta \alpha_i^{g,-} (G_{i}(\omega) - g_{i})_-.
\end{align*}
The last two equalities follow from the fact that $X-x=(X-x)_+-(X-x)_-$. The same property applies to $C^d_j(\omega)$ (using the appropriate cost terms).\Halmos
\endproof
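The algebra above can be checked numerically (hypothetical Python; the bid prices below are illustrative and chosen so that $\alpha_i^{g,-}<\alpha_i^{g}<\alpha_i^{g,+}$, i.e., positive incremental bids):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative supplier bids with positive incremental bid prices.
alpha_g, alpha_g_plus, alpha_g_minus = 30.0, 35.0, 26.0
d_plus = alpha_g_plus - alpha_g    # Delta alpha^{g,+} = 5 > 0
d_minus = alpha_g - alpha_g_minus  # Delta alpha^{g,-} = 4 > 0

g = 100.0                                 # day-ahead quantity
G = rng.uniform(50.0, 150.0, size=1000)   # sampled real-time quantities

pos = np.maximum(G - g, 0.0)   # (G - g)_+
neg = np.maximum(g - G, 0.0)   # (G - g)_-

# Original supplier cost and the reformulation of the theorem.
C_orig = alpha_g * g + alpha_g_plus * pos - alpha_g_minus * neg
C_ref = alpha_g * G + d_plus * pos + d_minus * neg
```

The two expressions agree sample by sample; the identities $X-x=(X-x)_+-(X-x)_-$ and $|X-x|=(X-x)_++(X-x)_-$ used in the proofs can be checked on the same samples.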
We say that the incremental bid prices are {\em symmetric} if $\Delta\alpha_i^{g,+}=\Delta\alpha_i^{g,-}$ and $\Delta\alpha_j^{d,+}=\Delta\alpha_j^{d,-}$. Denote the symmetric prices by $\Delta\alpha_i^g := \Delta\alpha_i^{g,+}=\Delta\alpha_i^{g,-}$ and $\Delta\alpha_j^d := \Delta\alpha_j^{d,+}=\Delta\alpha_j^{d,-}$.
\begin{corollary}\label{thm:abscost}
If the incremental bid prices are symmetric, then the cost functions for suppliers and consumers can be expressed as
\begin{subequations}\label{eq:abscost}
\begin{align}
C_{i}^g(\omega) &= \alpha_i^{g} G_{i}(\omega) + \Delta \alpha_{i}^{g} |G_{i}(\omega)-g_{i}|, \quad i\in\mathcal{G},\omega\in\Omega\\
C_{j}^d(\omega) &= -\alpha_{j}^{d} D_{j}(\omega) + \Delta \alpha_{j}^{d} |D_{j}(\omega)-d_{j}|, \quad j\in\mathcal{D},\omega\in\Omega.
\end{align}
\end{subequations}
\end{corollary}
\proof{Proof}
Consider the cost function for suppliers
\begin{align*}
C^g_{i}(\omega) &= \alpha_{i}^{g} G_{i}(\omega) + \Delta \alpha_i^{g} (G_{i}(\omega) - g_{i})_+ + \Delta \alpha_i^{g} (G_{i}(\omega) - g_{i})_- \\
&= \alpha_i^{g} G_{i}(\omega) + \Delta \alpha_{i}^{g} |G_{i}(\omega)-g_{i}|,
\end{align*}
because of the fact that $|X-x|=(X-x)_++(X-x)_-$. The same property applies to $C_j^d(\omega)$ (using the appropriate cost terms).\Halmos
\endproof
\begin{definition}[Social Surplus]
We define the {\em expected negative social surplus} (or {\em social surplus} for short) as
\begin{align}\label{eq:surplus}
\varphi &:= \mathbb{E}\left[\sum_{i \in \mathcal{G}} C^g_{i}(\omega)+\sum_{j \in \mathcal{D}}C_{j}^d(\omega)\right]\nonumber\\
&=\varphi^g+\varphi^d,
\end{align}
where $\varphi^g,\varphi^d$ are the {\em expected supply and consumer costs},
\begin{subequations}\label{eq:expectedcost}
\begin{align}
\varphi^g &:= \mathbb{E}\left[\sum_{i \in \mathcal{G}} C^g_{i}(\omega)\right] \notag\\
&= \sum_{i\in\mathcal{G}} \left( \alpha_i^{g} \mathbb{E}[G_i(\omega)] + \Delta\alpha_i^{g,+} \mathbb{E}[(G_{i}(\omega) - g_i)_+] + \Delta\alpha_i^{g,-} \mathbb{E}[(G_{i}(\omega) - g_i)_-] \right) \\
\varphi^d &:= \mathbb{E}\left[\sum_{j \in \mathcal{D}} C^d_{j}(\omega)\right] \notag\\
&= \sum_{j\in\mathcal{D}} \left( -\alpha_{j}^{d} \mathbb{E}[D_j(\omega)] + \Delta\alpha_j^{d,+} \mathbb{E}\left[ (D_{j}(\omega) - d_j)_-\right] + \Delta\alpha_j^{d,-} \mathbb{E}\left[ (D_{j}(\omega) - d_j)_+\right] \right).
\end{align}
\end{subequations}
\end{definition}
This particular structure of the expected social surplus function was noticed by \citet{philpott} and provides interesting insights. From Equation \eqref{eq:expectedcost}, we note that the expected quantities $\mathbb{E}[G_i(\omega)]$, $\mathbb{E}[D_j(\omega)]$ act as forecasts of the day-ahead quantities and are priced by using the day-ahead bids $\alpha_i^g,\alpha_j^d$ (first term). This immediately suggests that it is the {\em expected cleared quantities} $G_i(\omega),D_j(\omega)$ and {\em not} the capacities $\bar{g}_i,\bar{d}_j$ that are to be used as forecasts, as is done in the day-ahead deterministic formulation \eqref{eq:detday-ahead}. The second and third terms penalize deviations of the real-time quantities from the day-ahead commitments using the incremental bid prices. More interestingly, Corollary~\ref{thm:abscost} suggests that when the incremental bid prices are symmetric (i.e., $\Delta\alpha_i^{g,+}=\Delta\alpha_i^{g,-}$ and $\Delta\alpha_j^{d,+}=\Delta\alpha_j^{d,-}$), day-ahead quantities will tend to converge to the median of the real-time quantities if the expected social surplus function is minimized. A deterministic setting, however, cannot guarantee optimality in this sense because it minimizes the day-ahead and real-time components of the surplus function {\em separately}. In particular, the expected social surplus for the deterministic formulation is obtained by solving the day-ahead problem \eqref{eq:detday-ahead} followed by the solution of the real-time problem \eqref{eq:det_recourse} for all scenarios $\omega \in \Omega$. The day-ahead surplus and the expected value of the real-time surplus are then combined to obtain the expected surplus $\varphi$.
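The median behavior, and its quantile generalization under asymmetric incremental bids, can be verified on a single-node toy (hypothetical Python; illustrative numbers, with the real-time quantity pinned to the uncertain demand so that only the day-ahead quantity is free):

```python
import numpy as np

D = np.array([80.0, 100.0, 120.0, 160.0])   # real-time demand scenarios
p = np.array([0.2, 0.4, 0.3, 0.1])          # scenario probabilities

def best_g(dplus, dminus):
    """Day-ahead quantity minimizing E[dplus*(D-g)_+ + dminus*(D-g)_-]."""
    cost = lambda g: float((p * (dplus * np.maximum(D - g, 0.0)
                                 + dminus * np.maximum(g - D, 0.0))).sum())
    return float(min(D, key=cost))   # piecewise linear: optimum at a scenario

def quantile(q):
    cdf = np.cumsum(p)
    return float(D[np.searchsorted(cdf, q)])

g_sym = best_g(5.0, 5.0)    # symmetric penalties
g_asym = best_g(8.0, 2.0)   # asymmetric penalties
```

With symmetric incremental bid prices the day-ahead quantity lands on the median of the real-time demand, and with asymmetric prices it lands on the newsvendor-style quantile $\Delta\alpha^{+}/(\Delta\alpha^{+}+\Delta\alpha^{-})$.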
A deterministic setting can yield surplus inefficiencies because it cannot properly anticipate the effect of day-ahead decisions on real-time market decisions. For instance, certain suppliers can be inflexible in the sense that they cannot modify their day-ahead supply easily in the real-time market (e.g., coal plants). This results in constraints of the form $g_i=G_i(\omega)$ or $d_j=D_j(\omega),\omega \in \Omega$. This inflexibility can trigger inefficiencies because the operator is forced to use expensive units in the real-time market (e.g., combined-cycle) or because load shedding is needed to prevent infeasibilities. Most studies on stochastic market clearing and unit commitment have focused on showing improvements in social surplus over deterministic formulations. In Section \ref{sec:computation} we demonstrate that even when social surplus differences are negligible, the resulting prices and payments can be drastically different. This situation motivates us to consider alternative metrics for monitoring performance.
We note that the objective function of the stochastic clearing formulation \eqref{eq:stoch} can be written as
\begin{align}
\varphi^{sto} = \varphi
&+ \sum_{\ell\in\mathcal{L}} \mathbb{E}\left[\Delta\alpha^{f,+}_{\ell} (F_{\ell}(\omega)-f_{\ell})_+ + \Delta\alpha^{f,-}_{\ell} (F_{\ell}(\omega)-f_{\ell})_-\right] \notag\\
&+ \sum_{n\in\mathcal{N}} \mathbb{E}\left[\Delta\alpha^{\theta,+}_n (\Theta_n(\omega) - \theta_n)_+ + \Delta\alpha^{\theta,-}_n (\Theta_n(\omega) - \theta_n)_-\right],\label{eq:surplussto}
\end{align}
where $\varphi$ is the expected negative surplus function defined in \eqref{eq:surplus}. Consequently, if $\Delta\alpha^{f,+}_{\ell},\Delta\alpha^{f,-}_{\ell}, \Delta\alpha^{\theta,+}_n, \Delta\alpha^{\theta,-}_n$ are sufficiently small, we have that $\varphi^{sto}\approx \varphi$.
\subsection{Pricing Consistency}\label{sec:priceconsistency}
We seek that the day-ahead prices be consistent representations of the expected real-time prices. In other words, we seek that the {\em expected price distortions} (also known as {\em expected price premia}) $\pi_n-\mathbb{E}[\Pi_n(\omega)],\; n\in\mathcal{N}$, be zero or at least remain in a bounded neighborhood of zero. This is desired for various reasons that we explain next.
\begin{definition}[Price Distortions] We define the {\em expected price distortion} or {\em expected price premia} as
\begin{align}
\mathcal{M}^{\pi}_n:=\pi_n-\mathbb{E}\left[\Pi_n(\omega)\right], \quad n\in\mathcal{N}.
\end{align}
We say that the {\em price is consistent} at node $n\in\mathcal{N}$ if $\mathcal{M}^{\pi}_n=0$. In addition, we define the {\em node average} and {\em maximum absolute distortions},
\begin{subequations}
\begin{align}
\mathcal{M}^{\pi}_{avg}&:=\frac{1}{|\mathcal{N}|}\sum_{n\in\mathcal{N}}|\mathcal{M}^{\pi}_n|\\
\mathcal{M}^{\pi}_{max}&:=\mathop{\textrm{max}}_{n\in\mathcal{N}}|\mathcal{M}^{\pi}_n|.
\end{align}
\end{subequations}
\end{definition}
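For concreteness, these metrics can be computed directly from a day-ahead price vector and scenario real-time prices. The following Python sketch uses hypothetical two-node, three-scenario data (the prices and probabilities are illustrative only, not taken from the examples in this paper):

```python
import numpy as np

# Hypothetical data: 2 nodes, 3 equally likely scenarios.
p = np.array([1/3, 1/3, 1/3])            # scenario probabilities p(omega)
pi_da = np.array([30.0, 32.0])           # day-ahead prices pi_n
Pi_rt = np.array([[28.0, 30.0, 35.0],    # real-time prices Pi_n(omega)
                  [30.0, 31.0, 32.0]])

M_pi = pi_da - Pi_rt @ p                 # expected price distortions M^pi_n
M_avg = np.mean(np.abs(M_pi))            # node-average absolute distortion
M_max = np.max(np.abs(M_pi))             # maximum absolute distortion
print(M_pi, M_avg, M_max)
```

Price consistency at node $n$ corresponds to a zero entry of \texttt{M\_pi}; both summary metrics vanish exactly when all nodal prices are consistent.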
Pricing consistency is related to the desire that day-ahead and real-time prices converge, as is discussed by \citet{ott}. Note, however, that it is unrealistic to expect that day-ahead and real-time prices converge in each scenario. This is possible only in the absence of uncertainty (i.e., when capacity forecasts are perfect, as in the perfect-information setting). Any real-time deviation in capacity from a day-ahead forecast will lead to a deviation between day-ahead and real-time prices. It is possible, however, to ensure that {\em day-ahead and real-time prices converge in expectation}. This situation also implies that any deviation of the real-time price from the day-ahead price is entirely the result of {\em unpredictable random factors}. This is also equivalent to saying that day-ahead prices converge to the expected value of the real-time prices.
Pricing consistency cannot be guaranteed with deterministic formulations because the day-ahead clearing model forecasts real-time capacities, not real-time quantities. Consequently, players are forced to ``summarize'' their possible real-time capacities in single statistics $\bar{d}_j,\bar{g}_i,\bar{f}_{\ell}$. Expected values are typically used. This summarization, however, is inconsistent because it does not effectively average real-time market performance as the structure of the surplus function \eqref{eq:expectedcost} suggests. In fact, as we show in Section \ref{sec:properties}, expected values need not be the right statistic to use in the day-ahead market. This is consistent with the observations made by \citet{morales2013pricing}. In addition, we note that certain random variables might be difficult to summarize (e.g., if they follow multimodal and heavy-tailed distributions). For instance, consider that there is uncertainty about the state of a transmission line in the real-time market (i.e., there is a probability that it will fail). In a deterministic setting it is difficult to come up with a ``forecast'' value for the day-ahead capacity $\bar{f}_\ell$ in such a case.
\subsection{Suppliers and Consumer Payments}
As argued by \citet{kaye}, we can justify the desire of seeking price consistency by analyzing the {\em payments} to the market players. The payment includes the day-ahead settlement plus the correction payment given at real-time prices, as is the standard practice in market operations. For more details, see \citet{ott} and \citet{philpott}.
\begin{definition}[Payments] The payments to suppliers and from consumers in scenario $\omega \in \Omega$ are defined as follows:
\begin{subequations}
\begin{align}
P_{i}^g(\omega)&:=g_{i}\pi_{n(i)}+(G_{i}(\omega)-g_{i})\Pi_{n(i)}(\omega)\nonumber\\
&=g_{i}(\pi_{n(i)}-\Pi_{n(i)}(\omega))+G_{i}(\omega)\Pi_{n(i)}(\omega),\quad i\in\mathcal{G},\omega\in \Omega\\
P_{j}^d(\omega)&:=-d_{j}\pi_{n(j)}-(D_{j}(\omega)-d_{j})\Pi_{n(j)}(\omega)\nonumber\\
&=d_{j}(\Pi_{n(j)}(\omega)-\pi_{n(j)})-D_{j}(\omega)\Pi_{n(j)}(\omega),\quad j\in\mathcal{D},\omega\in \Omega.
\end{align}
\end{subequations}
We say that the {\em expected payments are consistent} if they satisfy
\begin{subequations}\label{eq:payconsistency}
\begin{align}
\mathbb{E}\left[P_{i}^g(\omega)\right]&=+\mathbb{E}\left[G_{i}(\omega)\Pi_{n(i)}(\omega)\right], \quad i\in\mathcal{G}\\
\mathbb{E}\left[P_{j}^d(\omega)\right]&=-\mathbb{E}\left[D_{j}(\omega)\Pi_{n(j)}(\omega)\right], \quad j\in\mathcal{D},
\end{align}
\end{subequations}
where
\begin{subequations}\label{eq:expectedpayments}
\begin{align}
\mathbb{E}\left[P_{i}^g(\omega)\right]&=+g_{i}\mathcal{M}_{n(i)}^{\pi}+\mathbb{E}\left[G_{i}(\omega)\Pi_{n(i)}(\omega)\right], \quad i\in\mathcal{G}\\
\mathbb{E}\left[P_{j}^d(\omega)\right]&=-d_{j}\mathcal{M}_{n(j)}^{\pi}-\mathbb{E}\left[D_{j}(\omega)\Pi_{n(j)}(\omega)\right], \quad j\in\mathcal{D}.
\end{align}
\end{subequations}
\end{definition}
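The decomposition in \eqref{eq:expectedpayments} can be verified numerically. The Python sketch below uses a hypothetical single supplier with three equally likely scenarios and checks that the expected payment equals the day-ahead quantity times the price distortion plus the expected real-time revenue:

```python
import numpy as np

# Hypothetical single supplier at one node, 3 equally likely scenarios.
p = np.array([1/3, 1/3, 1/3])
g_da, pi_da = 100.0, 30.0               # day-ahead quantity and price
G_rt = np.array([90.0, 100.0, 120.0])   # real-time quantities G_i(omega)
Pi_rt = np.array([28.0, 30.0, 35.0])    # real-time prices Pi_{n(i)}(omega)

# Scenario payments: day-ahead settlement plus real-time deviation payment.
P_g = g_da * pi_da + (G_rt - g_da) * Pi_rt
EP = P_g @ p

# Identity from the definition: E[P^g] = g * M^pi + E[G * Pi].
M_pi = pi_da - Pi_rt @ p
assert np.isclose(EP, g_da * M_pi + (G_rt * Pi_rt) @ p)
print(EP)
```

The term $g_i\mathcal{M}^{\pi}_{n(i)}$ is exactly the day-ahead component of the payment; it vanishes when the price at node $n(i)$ is consistent.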
If the prices are consistent at each node $n\in\mathcal{N}$, the expected payments are consistent. This definition of consistency is motivated by the following observations. The price distortion enters the expected payments directly. From \eqref{eq:expectedpayments} we see that price distortions (premia) can bias benefits toward a subset of players. In particular, if the premium at a given node is negative ($\mathcal{M}_n^{\pi}<0$), a supplier will not benefit from the day-ahead market but a consumer will. This situation can prevent suppliers from participating in the day-ahead market. If $\mathcal{M}_n^{\pi}>0$, the opposite holds true. This situation can prevent consumers from providing price-responsive demands. We can thus conclude that price consistency ensures payment consistency with respect to suppliers and consumers. In other words, $\mathcal{M}^{\pi}_n=0$ implies \eqref{eq:payconsistency}.
\citet{kaye} argue that setting the day-ahead prices to the expected real-time prices (price consistency) is desirable because it effectively eliminates the day-ahead component of the market. Consequently, the market operates (in expectation) as a pure real-time market. This situation is desirable because it implies that the day-ahead market does not interfere with the incentives provided by real-time markets. This is particularly important for players that benefit from real-time market variability (such as peaking units and price-responsive demands). This also implies that the ISO does not give any preference to either risk-taking or risk-averse players. We also highlight that price consistency does not imply that premia do not exist; they can exist in each scenario but not in expectation.
Deterministic formulations can yield persistent price premia that benefit a subset of players or that can be used for market manipulation. For instance, consider the case in which a wind farm forecast has the same mean but very different variance (uncertainty) for several consecutive days. If the expected forecast is used, the day-ahead prices will be consistently the same for all days, thus making them more predictable and biased toward a subset of players. While the use of risk-adaptive reserves can help ameliorate this effect, this approach is not guaranteed to achieve price consistency.
\subsection{Uplift Payments}
From \eqref{eq:expectedpayments} we see that if the premium at a given node is negative ($\mathcal{M}_n^{\pi}<0$), negative payments (losses) can be incurred by the suppliers. This issue is analyzed by \citet{fullerpricing} and \citet{morales2012pricing}. For instance, a wind supplier might be cleared at a given forecast capacity and at a low price, but in real time it might need to buy back power at a higher price if the realized capacity is lower than forecast (this is illustrated in Section \ref{sec:computation}). It is thus desirable that suppliers be paid at least their as-bid cost and that consumers pay no more than their willingness to pay. This is formally stated in the following definition.
\begin{definition}[Wholeness]\label{def:wholeness}
We say that suppliers and consumers are {\em whole in expectation} if
\begin{subequations}
\begin{align}
\mathbb{E}\left[P_{i}^g(\omega)\right]&\geq \;\;\;\;\mathbb{E}\left[C_{i}^g(\omega)\right], \quad i\in\mathcal{G}\\
-\mathbb{E}\left[P_{j}^d(\omega)\right]&\leq -\mathbb{E}\left[C_{j}^d(\omega)\right], \quad j\in\mathcal{D}.
\end{align}
\end{subequations}
\end{definition}
If the players are not made whole, they can leave the market and this can hinder diversification. Uplift payments are routinely used by the ISOs to avoid this situation \citep{galianasocial,baldickuplift}. Uplifts can result from inadequate representations of system behavior such as nonconvexities \citep{oneill} or, as we will see in Section \ref{sec:computation}, can result from using inadequate statistical representations of real-time market performance in deterministic settings. Consequently, uplift payments are a useful metric to determine the effectiveness of a given clearing formulation.
\begin{definition}[Uplift Payments]\label{def:uplift}
We define the {\em expected uplift payments} to suppliers and consumers as
\begin{subequations}\label{eq:uplift}
\begin{align}
\mathcal{M}^U_{i} &:= -\min\left\{\mathbb{E}[P^g_i(\omega)]-\mathbb{E}[C^g_i(\omega)],0\right\}, \quad i\in\mathcal{G}\\
\mathcal{M}^U_{j} &:= -\min\left\{\mathbb{E}[P^d_j(\omega)]-\mathbb{E}[C^d_j(\omega)],0\right\}, \quad j\in\mathcal{D}.
\end{align}
\end{subequations}
We also define the total uplift as $\displaystyle \mathcal{M}^U := \sum_{i\in\mathcal{G}} \mathcal{M}^U_{i}+\sum_{j\in\mathcal{D}} \mathcal{M}^U_{j}$.
\end{definition}
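As an illustration, the uplift in Definition~\ref{def:uplift} is simply the negative part of each player's expected profit margin. A minimal Python sketch with hypothetical expected payments and as-bid costs:

```python
import numpy as np

# Hypothetical expected payments E[P^g_i] and expected bid costs E[C^g_i]
# for two suppliers (illustrative values only).
EP = np.array([3140.0, 500.0])
EC = np.array([3000.0, 650.0])

# M^U_i = -min{E[P_i] - E[C_i], 0}: positive only when the player loses money.
M_U = -np.minimum(EP - EC, 0.0)
M_U_total = M_U.sum()
print(M_U, M_U_total)
```

The first supplier earns a positive expected margin and needs no uplift; the second is made whole by an uplift equal to its expected loss.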
We highlight that our setting is convex and we thus only consider uplifts arising from inadequate statistical representations.
\subsection{Revenue Adequacy}
An efficient clearing procedure must ensure that the ISO does not run into financial deficit. In other words, the ISO must have a positive cash flow (payments collected from consumers are greater than the payments given to suppliers). We consider the following expected revenue definition, used by \citet{philpott}, to assess this requirement.
\begin{definition}[Revenue Adequacy] The {\em expected net payment to the ISO} is defined as
\begin{align}\label{eq:revaqmetric}
\mathcal{M}^{ISO}&:=\mathbb{E}\left[\sum_{i \in\mathcal{G}}P_{i}^g(\omega)+\sum_{j \in\mathcal{D}}P_{j}^d(\omega)\right]\nonumber\\
&\;=\sum_{i\in\mathcal{G}}\mathbb{E}[P^g_i(\omega)]+\sum_{j\in\mathcal{D}}\mathbb{E}[P^d_j(\omega)].
\end{align}
We say that the ISO is {\em revenue adequate in expectation} if $\mathcal{M}^{ISO} \leq 0$.
\end{definition}
Revenue adequacy guarantees that, in expectation, the ISO will not run into financial deficit.
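In scenario-based computations, $\mathcal{M}^{ISO}$ reduces to a sum of expected payments. A minimal Python sketch (the payment values are hypothetical):

```python
import numpy as np

# Hypothetical expected payments: P^g_i is paid out by the ISO to suppliers,
# while P^d_j is defined as a payment and is negative for paying consumers.
EP_g = np.array([3140.0, 500.0])      # E[P^g_i]
EP_d = np.array([-2000.0, -1900.0])   # E[P^d_j]

M_ISO = EP_g.sum() + EP_d.sum()
revenue_adequate = M_ISO <= 0.0
print(M_ISO, revenue_adequate)
```

Here the ISO collects more from consumers than it pays suppliers, so $\mathcal{M}^{ISO}\leq 0$ and the ISO is revenue adequate in expectation.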
\section{Properties of Stochastic Clearing}\label{sec:properties}
In this section, we prove that the stochastic clearing formulation yields bounded price distortions and that these distortions can be made arbitrarily small. In addition, we prove that day-ahead quantities are bounded by real-time quantities and that they converge to a quantile of the real-time quantities when the distortions are zero. Further, we prove that the formulation yields revenue adequacy and zero uplifts in expectation.
\subsection{No Network Constraints}\label{sec:prop:withoutnetwork}
We begin our discussion with a single-node formulation (no network constraints) and then generalize the results to the case of network constraints. The single-node formulation has the form:
\begin{subequations}\label{eq:stochnonet}
\begin{align}
\min_{d_j,D_j(\cdot),g_i,G_i(\cdot)}\;
&\; \sum_{i\in\mathcal{G}} \mathbb{E}\left[ \alpha_i^g G_i(\omega) + \Delta\alpha_i^{g,+} (G_i(\omega) - g_i)_+ + \Delta\alpha_i^{g,-} (G_i(\omega) - g_i)_- \right] \notag \\
& + \sum_{j\in\mathcal{D}} \mathbb{E}\left[ -\alpha_j^d D_j(\omega) + \Delta\alpha_j^{d,+} (D_j(\omega) - d_j)_- + \Delta\alpha_j^{d,-} (D_j(\omega) - d_j)_+ \right] \\
\textrm{s.t.} & \sum_{i\in\mathcal{G}} g_i = \sum_{j\in\mathcal{D}}d_j \quad (\pi)\\
& \sum_{i\in\mathcal{G}} (G_i(\omega)-g_i) = \sum_{j\in\mathcal{D}}(D_j(\omega)-d_j) \quad \omega \in \Omega \quad (p(\omega)\Pi(\omega))\\
& 0\leq G_i(\omega)\leq \bar{G}_i(\omega),\quad i\in\mathcal{G},\omega \in \Omega\label{eq:bound1}\\
& 0\leq D_j(\omega)\leq \bar{D}_j(\omega),\quad j\in\mathcal{D},\omega \in \Omega\label{eq:bound2}.
\end{align}
\end{subequations}
This formulation assumes infinite transmission capacity. In this case, the entire network collapses into a single node; consequently, a single day-ahead price $\pi$ and real-time price $\Pi(\omega)$ are used.
The partial Lagrange function of \eqref{eq:stochnonet} is given by
\begin{align*}
&\mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\pi,\Pi(\cdot)) \notag \\
& = \sum_{i\in\mathcal{G}} \mathbb{E}\left[ \alpha_i^g G_i(\omega) + \Delta\alpha_i^{g,+} (G_i(\omega) - g_i)_+ + \Delta\alpha_i^{g,-} (G_i(\omega) - g_i)_- \right] \notag\\
& \quad - \sum_{j\in\mathcal{D}} \mathbb{E}\left[ \alpha_j^d D_j(\omega) - \Delta\alpha_j^{d,+} (D_j(\omega) - d_j)_- - \Delta\alpha_j^{d,-} (D_j(\omega) - d_j)_+ \right] \notag \\
& \quad - \pi\left(\sum_{i\in\mathcal{G}}g_i-\sum_{j\in\mathcal{D}}d_j\right) - \mathbb{E}\left[\Pi(\omega)\left(\sum_{i\in\mathcal{G}}(G_i(\omega)-g_i)-\sum_{j\in\mathcal{D}}(D_j(\omega)-d_j)\right)\right] .
\end{align*}
The contribution of the balance constraints can be written in expected value form if we weight the Lagrange multipliers of the balance equations (prices) by the probabilities $p(\omega)$.
\begin{theorem}\label{th:singledistortion}
Consider the single-node stochastic clearing problem \eqref{eq:stochnonet}, and assume that the incremental bid prices are positive. The price distortion $\mathcal{M}^{\pi}=\pi-\mathbb{E}[\Pi(\omega)]$ is bounded as
\begin{align}
-\Delta\alpha^+ \leq \mathcal{M}^{\pi}\leq \Delta\alpha^-,
\end{align}
where
\begin{subequations}\label{eq:delta}
\begin{align}\label{eq:deltaplus}
\Delta \alpha^+ = \min \left\{\min_{i\in\mathcal{G}} \Delta\alpha_i^{g,+}, \min_{j\in\mathcal{D}} \Delta\alpha_j^{d,+} \right\}
\end{align}
and
\begin{align}\label{eq:deltaminus}
\Delta \alpha^- = \min \left\{\min_{i\in\mathcal{G}} \Delta\alpha_i^{g,-}, \min_{j\in\mathcal{D}} \Delta\alpha_j^{d,-} \right\}.
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof}
Since $(X-x)_- = (X-x)_+ - (X-x)$, we have the partial Lagrange function
\begin{align*}
&\mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\pi,\Pi(\cdot)) \notag \\
& = \sum_{i\in\mathcal{G}} \mathbb{E}\left[ \alpha_i^g G_i(\omega) + \Delta\alpha_i^{g,+} (G_i(\omega) - g_i)_+ + \Delta\alpha_i^{g,-} (G_i(\omega) - g_i)_+ - \Delta\alpha_i^{g,-} (G_i(\omega) - g_i) \right] \notag\\
& \quad - \sum_{j\in\mathcal{D}} \mathbb{E}\left[ \alpha_j^d D_j(\omega) - \Delta\alpha_j^{d,+} (D_j(\omega) - d_j)_+ + \Delta\alpha_j^{d,+} (D_j(\omega) - d_j) - \Delta\alpha_j^{d,-} (D_j(\omega) - d_j)_+ \right] \notag \\
& \quad - \pi\left(\sum_{i\in\mathcal{G}}g_i-\sum_{j\in\mathcal{D}}d_j\right) - \mathbb{E}\left[\Pi(\omega)\left(\sum_{i\in\mathcal{G}}(G_i(\omega)-g_i)-\sum_{j\in\mathcal{D}}(D_j(\omega)-d_j)\right)\right].
\end{align*}
The stationarity conditions of the partial Lagrange function with respect to the day-ahead quantities $d_j,g_i$ are given by
\begin{subequations}
\begin{align}
0 &\in \partial_{d_j}\mathcal{L}
= (\Delta\alpha_j^{d,+} + \Delta\alpha_j^{d,-}) \partial_{d_j} \mathbb{E}[(D_j(\omega) - d_j)_+] + \Delta\alpha_j^{d,+} + \pi - \mathbb{E}[\Pi(\omega)] \quad j\in\mathcal{D} \label{eq:stationd}\\
0 &\in \partial_{g_i}\mathcal{L}
= (\Delta\alpha_i^{g,+} + \Delta\alpha_i^{g,-}) \partial_{g_i} \mathbb{E}[(G_i(\omega) - g_i)_+] + \Delta\alpha_i^{g,-} - \pi + \mathbb{E}[\Pi(\omega)] \quad i\in\mathcal{G}. \label{eq:stationg}
\end{align}
\end{subequations}
Rearranging \eqref{eq:stationd}, we obtain
\begin{subequations}\label{eq:stations}
\begin{align}
\frac{- \Delta\alpha_j^{d,+} - \pi + \mathbb{E}[\Pi(\omega)]}{\Delta\alpha_j^{d,+} + \Delta\alpha_j^{d,-}}
& \in \partial_{d_j} \mathbb{E}[(D_j(\omega) - d_j)_+] \quad j\in\mathcal{D} \label{eq:stations:d}\\
\frac{- \Delta\alpha_i^{g,-} + \pi - \mathbb{E}[\Pi(\omega)]}{\Delta\alpha_i^{g,+} + \Delta\alpha_i^{g,-}}
& \in \partial_{g_i} \mathbb{E}[(G_i(\omega) - g_i)_+] \quad i\in\mathcal{G}.
\end{align}
\end{subequations}
From the property
\begin{align}
\label{eq:subdiff}
\partial_x (X-x)_+ = \begin{cases}
-1 & \text{if } X > x \\
0 & \text{if } X < x \\
[-1, 0] & \text{if } X = x
\end{cases},
\end{align}
we have
\begin{align}
\partial_{x}\mathbb{E}[(X(\omega)-x)_+] \notag
&= \left\{\mathbb{E}\left[-\mathbf{1}_{X(\omega) > x} + a\mathbf{1}_{X(\omega) = x}\right] \;:\; a \in [-1,0] \right\} \notag \\
&= \left\{-\mathbb{P}\left(X(\omega) > x\right) + a\mathbb{P}\left(X(\omega) = x\right) \;:\; a \in [-1,0] \right\}, \notag\\
&= \left\{\eta \;:\; -\mathbb{P}(X(\omega) \geq x) \leq \eta \leq -\mathbb{P}(X(\omega) > x) \right\}. \label{eq:probabilityrange}
\end{align}
Since $-1\leq -\mathbb{P}(X(\omega) \geq x) \leq -\mathbb{P}(X(\omega) > x) \leq 0$, we have
\begin{align}\label{eq:subdifferential}
\partial_{x}\mathbb{E}[(X(\omega)-x)_+] \subseteq [-1,0].
\end{align}
From \eqref{eq:stations} and \eqref{eq:subdifferential}, we have
\begin{subequations}\label{eq:subgradbound}
\begin{align}
-1 \leq \frac{-\Delta\alpha_j^{d,+} - \pi + \mathbb{E}\left[\Pi(\omega)\right]}{\Delta\alpha_j^{d,+} + \Delta\alpha_j^{d,-}} \leq 0 \label{eq:subgradbounddj}\\
-1 \leq \frac{-\Delta\alpha_i^{g,-} + \pi - \mathbb{E}\left[\Pi(\omega)\right]}{\Delta\alpha_i^{g,+} + \Delta\alpha_i^{g,-}} \leq 0.\label{eq:subgradboundgi}
\end{align}
\end{subequations}
The above relationships are equivalent to
\begin{subequations}\label{eq:distbound}
\begin{align}
-\Delta\alpha_j^{d,+} &\leq \pi-\mathbb{E}\left[\Pi(\omega)\right] \leq \Delta\alpha_j^{d,-} \\
-\Delta\alpha_i^{g,+} & \leq \pi-\mathbb{E}\left[\Pi(\omega)\right] \leq \Delta\alpha_i^{g,-}.
\end{align}
\end{subequations}
Since these bounds hold for every $i\in\mathcal{G}$ and $j\in\mathcal{D}$, they are equivalent to $-\Delta\alpha^+ \leq \mathcal{M}^{\pi}\leq \Delta\alpha^-$.\Halmos
\endproof
The price distortion is bounded above by the smallest of all $\Delta\alpha^{g,-}_i$ and $\Delta\alpha^{d,-}_j$ and bounded below by the largest of all $-\Delta\alpha^{g,+}_i$ and $-\Delta\alpha^{d,+}_j$. The bounds are denoted by $\Delta\alpha^-$ and $-\Delta\alpha^+$, respectively. This implies that if we let $\Delta\alpha^+$ and $\Delta\alpha^-$ be sufficiently small, then {\em we can make the price distortion $\mathcal{M}^{\pi}$ arbitrarily small}. Note that the bound is independent of the cleared quantities, which reflects robust behavior. Moreover, the upper bound depends on the incremental bid prices $\Delta\alpha_i^{g,-}$ and $\Delta\alpha_j^{d,-}$ only, while the lower bound depends on $\Delta\alpha_i^{g,+}$ and $\Delta\alpha_j^{d,+}$ only. Boundedness of the price distortion also limits the day-ahead component of the supplier and consumer payments and, when the distortion vanishes, achieves payment consistency. We highlight that Theorem~\ref{th:singledistortion} assumes positive incremental bid prices; otherwise, the solution can be degenerate.
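The subgradient characterization \eqref{eq:probabilityrange}, which underlies this bound, can be checked numerically: for a discrete random variable, the one-sided difference quotients of $h(x)=\mathbb{E}[(X(\omega)-x)_+]$ at an atom $x$ recover the interval endpoints $-\mathbb{P}(X(\omega)\geq x)$ and $-\mathbb{P}(X(\omega)>x)$. A Python sketch with hypothetical scenario data:

```python
import numpy as np

# Discrete random variable X(omega) over equally likely scenarios (hypothetical).
X = np.array([1.0, 2.0, 2.0, 4.0])
p = np.full(4, 0.25)

def h(x):
    """h(x) = E[(X - x)_+], a piecewise-linear convex function of x."""
    return np.maximum(X - x, 0.0) @ p

x, eps = 2.0, 1e-3   # eps smaller than the gap between atoms, so slopes are exact
right = (h(x + eps) - h(x)) / eps   # right slope = -P(X > x)
left  = (h(x) - h(x - eps)) / eps   # left slope  = -P(X >= x)
print(left, right)
```

At the atom $x=2$ with $\mathbb{P}(X>2)=0.25$ and $\mathbb{P}(X\geq 2)=0.75$, the one-sided slopes bracket the subdifferential $[-0.75,-0.25]$, in agreement with \eqref{eq:probabilityrange}.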
We now prove that the day-ahead quantities $d_j,g_i$ obtained from the stochastic clearing model are implicitly bounded by the minimum and maximum real-time quantities.
\begin{theorem}\label{th:singlebound}
Consider the single-node stochastic clearing problem \eqref{eq:stochnonet}, and assume that the incremental bid prices are positive. The day-ahead quantities are bounded by the real-time quantities as
\begin{align*}
\min_{\omega \in \Omega}D_j(\omega)\leq d_j\leq \max_{\omega \in \Omega}\,D_j(\omega), \; j\in\mathcal{D}\\
\min_{\omega \in \Omega}G_i(\omega)\leq g_i\leq \max_{\omega \in \Omega}\,G_i(\omega), \; i\in\mathcal{G}.
\end{align*}
\end{theorem}
\proof{Proof}
Consider the following two cases:
\begin{itemize}
\item Case 1: The price distortion hits the lower bound for demand $j$; we thus have $\pi-\mathbb{E}\left[\Pi(\omega)\right]=-\Delta\alpha^{d,+}_j$. This implies $0 \in \partial_{d_j} \mathbb{E}[(D_j(\omega) - d_j)_+]$ from \eqref{eq:stations:d}, and hence $\mathbb{P}(D_j(\omega) > d_j) = 0$ and $\mathbb{P}(D_j(\omega) \leq d_j) = 1$ from \eqref{eq:subdiff}. This implies that $d_j\geq D_j(\omega),\; \forall \omega \in \Omega$ and $d_j\geq \min_{\omega \in \Omega} {D}_j(\omega)$.
\item Case 2: The price distortion hits the upper bound for demand $j$; we thus have $\pi-\mathbb{E}\left[\Pi(\omega)\right]=\Delta\alpha^{d,-}_j$. This implies $-1 \in \partial_{d_j} \mathbb{E}[(D_j(\omega) - d_j)_+]$ from \eqref{eq:stations:d}, and hence $\mathbb{P}(D_j(\omega) \geq d_j) = 1$ from \eqref{eq:subdiff}. This implies that $d_j\leq D_j(\omega),\; \forall \omega \in \Omega$ and $d_j \leq \max_{\omega \in \Omega}D_j(\omega)$.
\end{itemize}
If the price distortion lies strictly between $-\Delta\alpha^{d,+}_j$ and $\Delta\alpha^{d,-}_j$, the ratio in \eqref{eq:stations:d} lies in $(-1,0)$, which by \eqref{eq:probabilityrange} requires $\mathbb{P}(D_j(\omega)\geq d_j)>0$ and $\mathbb{P}(D_j(\omega)> d_j)<1$; hence the bounds hold in this case as well. We thus conclude that $d_j$ is bounded from below by $\min_{\omega \in \Omega}D_j(\omega)$ and from above by $\max_{\omega \in \Omega}D_j(\omega)$. The same procedure can be followed to prove that $g_i$ is bounded from below by $\min_{\omega \in \Omega}G_i(\omega)$ and from above by $\max_{\omega \in \Omega}G_i(\omega)$.\Halmos
\endproof
The implicit bound on the day-ahead quantities $d_j,g_i$ is a key property of the stochastic model proposed because it implies that we do not have to choose day-ahead capacities $\bar{g}_i,\bar{d}_j$ (e.g., summarization statistics). These are automatically set by the model through the scenario information. This is important because, as we have mentioned, obtaining proper summarizing statistics for complex probability distributions might not be trivial.
We now prove that if the price distortion is zero, the day-ahead quantities converge to {\em quantiles} of the real-time quantities.
\begin{theorem}\label{th:singlemedian}
Consider the stochastic clearing problem \eqref{eq:stochnonet}, and assume that the incremental bid prices are positive. If the price distortion is zero at the solution, then
\begin{subequations}
\begin{align}
d_j &= \mathbb{Q}_{D_j(\omega)}\left( \frac{\Delta\alpha_j^{d,-}}{\Delta\alpha_j^{d,+} + \Delta\alpha_j^{d,-}} \right), \; j\in\mathcal{D} \label{eq:quantiledj}\\
g_i &= \mathbb{Q}_{G_i(\omega)}\left( \frac{\Delta\alpha_i^{g,+}}{\Delta\alpha_i^{g,+} + \Delta\alpha_i^{g,-}} \right), \; i \in\mathcal{G}.\label{eq:quantilegi}
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof}
From \eqref{eq:probabilityrange} and \eqref{eq:subgradbounddj} we have that if $\pi - \mathbb{E}\left[\Pi(\omega)\right]=0$, then $-\mathbb{P}(D_j(\omega) \geq d_j) \leq \frac{-\Delta\alpha_j^{d,+}}{\Delta\alpha_j^{d,+} + \Delta\alpha_j^{d,-}}\leq -\mathbb{P}(D_j(\omega) > d_j)$, and thus $\mathbb{P}(D_j(\omega) < d_j) \leq \frac{\Delta\alpha_j^{d,-}}{\Delta\alpha_j^{d,+} + \Delta\alpha_j^{d,-}}\leq \mathbb{P}(D_j(\omega) \leq d_j)$. This implies \eqref{eq:quantiledj} from \eqref{eq:defQuantile}. The same argument holds for \eqref{eq:quantilegi}.\Halmos
\endproof
\begin{corollary}
If the incremental bid prices are symmetric, then $d_j = \mathbb{M}\left(D_j(\omega)\right),\; j\in\mathcal{D}$ and $g_i = \mathbb{M}\left(G_i(\omega)\right),\; i\in\mathcal{G}$.
\end{corollary}
\proof{Proof}
The proof follows from Corollary~\ref{thm:abscost} and Theorem \ref{th:singlemedian}. \Halmos
\endproof
This result implies that the day-ahead quantities $d_j,g_i$ cannot in general be guaranteed to converge to the expected values of the real-time quantities $\mathbb{E}[D_j(\omega)],\mathbb{E}[G_i(\omega)]$. Such convergence can only be guaranteed when the quantile and mean coincide. These observations thus imply that {\em the expected value is not necessarily the only statistic that can be used for the capacities in the day-ahead market.}
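Theorem~\ref{th:singlemedian} can be illustrated numerically: at zero price distortion, the day-ahead demand quantity minimizes the asymmetric expected deviation cost, and the minimizer is the quantile in \eqref{eq:quantiledj} rather than the mean. A Python sketch with hypothetical scenarios and bid prices:

```python
import numpy as np

# Hypothetical real-time demand scenarios (equally likely) and incremental bid prices.
D = np.array([10.0, 20.0, 30.0, 40.0])
da_plus, da_minus = 2.0, 1.0   # Delta alpha^{d,+}, Delta alpha^{d,-}

def expected_deviation_cost(d):
    """E[ da_plus*(D - d)_- + da_minus*(D - d)_+ ] at zero price distortion."""
    return np.mean(da_plus * np.maximum(d - D, 0.0)
                   + da_minus * np.maximum(D - d, 0.0))

# A minimizer always lies at a scenario point for piecewise-linear costs.
d_star = min(D, key=expected_deviation_cost)

# Quantile Q_D(da_minus / (da_plus + da_minus)) from the empirical CDF.
level = da_minus / (da_plus + da_minus)
Ds = np.sort(D)
cdf = np.arange(1, Ds.size + 1) / Ds.size
q = Ds[np.searchsorted(cdf, level)]
print(d_star, q)
```

With these asymmetric bids, the optimal day-ahead quantity is the $1/3$-quantile of the scenarios, not their mean of $25.0$, illustrating that the expected value need not be the right day-ahead statistic.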
We now prove that the stochastic formulation yields zero uplifts in expectation. Revenue adequacy is not considered because this is a single-node problem. We use the strategy followed by \citet{morales2012pricing}. For this discussion, we denote a minimizer of the partial Lagrange function (subject to the constraints \eqref{eq:bound1} and \eqref{eq:bound2}) as $d_j^*,D_j(\cdot)^*,g_i^*,G_i(\cdot)^*,\pi^*,\Pi^*(\cdot)$. Because the problem is convex, we know that the optimal prices $\pi^*,\Pi^*(\cdot)$ satisfy
\begin{align}\label{eq:singlelag}
(d_j^*,D_j(\cdot)^*,g_i^*,G_i^*(\cdot)) = \mathop{\textrm{argmin}}_{d_j,D_j(\cdot),g_i,G_i(\cdot)} & \quad \mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\pi^*,\Pi^*(\cdot))\quad \textrm{s.t.} \quad \eqref{eq:bound1}-\eqref{eq:bound2}.
\end{align}
Moreover, at fixed $\pi^*,\Pi^*(\cdot)$, the partial Lagrange function can be
separated as
\begin{align}
\mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\pi^*,\Pi^*(\cdot))=\sum_{i \in \mathcal{G}}\mathcal{L}_i^g(g_i,G_i(\cdot),\pi^*,\Pi^*(\cdot))+\sum_{j \in \mathcal{D}}\mathcal{L}_j^d(d_j,D_j(\cdot),\pi^*,\Pi^*(\cdot)),
\end{align}
where
\begin{subequations}\label{eq:contlag}
\begin{align}
\mathcal{L}_i^g(g_i,G_i(\cdot),\pi^*,\Pi^*(\cdot))& := \mathbb{E}[C_i^g(\omega)]-\mathbb{E}[P_i^g(\omega)],\; i\in \mathcal{G}\\
\mathcal{L}_j^d(d_j,D_j(\cdot),\pi^*,\Pi^*(\cdot))& := \mathbb{E}[C_j^d(\omega)]-\mathbb{E}[P_j^d(\omega)],\; j\in\mathcal{D}.
\end{align}
\end{subequations}
Consequently, one can minimize the partial Lagrange function by minimizing \eqref{eq:contlag} independently.
\begin{theorem} \label{thm:zerouplift}
Consider the single-node clearing problem \eqref{eq:stochnonet}, and let the assumptions of Theorem \ref{th:singledistortion} hold. Any minimizer $d_j^*,D_j(\cdot)^*,g_i^*,G_i(\cdot)^*,\pi^*,\Pi^*(\cdot)$ of \eqref{eq:stochnonet} yields zero uplift payments in expectation:
\begin{subequations}
\begin{align}
\mathcal{M}_i^U&=0, \quad i\in \mathcal{G}\\
\mathcal{M}_j^U&=0, \quad j\in \mathcal{D}.
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof}
From Definition~\ref{def:uplift}, it suffices to show that $\mathbb{E}[P_i^g(\omega)] - \mathbb{E}[C_i^g(\omega)] \geq 0$ for all $i\in \mathcal{G}$ and $\mathbb{E}[P_j^d(\omega)] - \mathbb{E}[C_j^d(\omega)]\geq 0$ for all $j\in \mathcal{D}$. For fixed $\pi^*,\Pi^*(\omega)$, the candidate solution $d_j=D_j(\cdot)=g_i=G_i(\cdot)=0$ is feasible for \eqref{eq:singlelag} with values $\mathcal{L}^g_i(g_i,G_i(\cdot),\pi^*,\Pi^*(\cdot))=0,\; i\in\mathcal{G}$ and $\mathcal{L}^d_j(d_j,D_j(\cdot),\pi^*,\Pi^*(\cdot))=0,\; j\in\mathcal{D}$. Because this candidate is feasible but in general suboptimal, we have $\mathcal{L}_i^g(g_i^*,G_i^*(\cdot),\pi^*,\Pi^*(\cdot)) \leq \mathcal{L}_i^g(g_i,G_i(\cdot),\pi^*,\Pi^*(\cdot))=0$ and $\mathcal{L}_j^d(d_j^*,D_j^*(\cdot),\pi^*,\Pi^*(\cdot)) \leq 0$. The result follows from equations \eqref{eq:contlag} and the definition of $\mathcal{M}_i^U$ and $\mathcal{M}_j^U$ in \eqref{eq:uplift}.\Halmos
\endproof
\subsection{Network Constraints}\label{sec:prop:withnetwork}
Having established some insights into the properties of the stochastic model, we now turn our attention to the full stochastic problem with network constraints \eqref{eq:stoch} and generalize our results.
It is well known that stochastic formulations yield a better expected social surplus. This follows from the well-known inequality (see \citet{birge}):
\begin{align}
\varphi^{sto}_{WS}\leq \varphi^{sto}\leq \varphi^{det}.
\end{align}
This follows from the fact that the stochastic formulation will lead to a lower recourse cost (real-time penalty costs) than will the deterministic solution because the deterministic day-ahead problem does not anticipate recourse actions. The wait-and-see setting can perfectly anticipate real-time market conditions, and therefore its real-time penalties are zero. This makes it the optimal but nonimplementable policy.
We now establish boundedness of the price distortions throughout the network. To establish our result, we need the following definitions. We rewrite equations \eqref{eq:DAangle} and \eqref{eq:RTangle} as
\begin{subequations}
\label{eq:angle2}
\begin{align}
& f_\ell = \sum_{n\in\mathcal{N}} B_{\ell n} \theta_n \quad \forall \ell\in\mathcal{L}, \label{eq:DAangle2}\\
& F_\ell(\omega) = \sum_{n\in\mathcal{N}} B_{\ell n} \Theta_n(\omega) \quad \forall \omega\in\Omega, \ell\in\mathcal{L},\label{eq:RTangle2}
\end{align}
\end{subequations}
where
\begin{align*}
B_{\ell n} = \begin{cases}
B_\ell & \text{if } n = rec(\ell), \\
-B_\ell & \text{if } n = snd(\ell), \\
0 & \text{otherwise}.
\end{cases}
\end{align*}
We note that the above definitions imply that $B_{\ell, rec(\ell)}=B_\ell$ and $B_{\ell, snd(\ell)}=-B_\ell$. Moreover, we have
\begin{align*}
\sum_{\ell\in \mathcal{L}_n^{rec}} f_\ell&=\sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell}\theta_{rec(\ell)}-B_\ell\theta_{snd(\ell)}\right)\\
&=\sum_{\ell \in \mathcal{L}_n^{rec}}\left(B_{\ell, rec(\ell)}\theta_{rec(\ell)}+B_{\ell, snd(\ell)}\theta_{snd(\ell)}\right)\\
&=\sum_{\ell \in \mathcal{L}_n^{rec}}\left(B_{\ell n}\theta_{n}+B_{\ell, snd(\ell)}\theta_{snd(\ell)}\right).
\end{align*}
Using similar observations, we have
\begin{align}
\sum_{\ell\in \mathcal{L}_n^{snd}} f_\ell=\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta_{rec(\ell)} + B_{\ell n} \theta_n\right).
\end{align}
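To make the role of the matrix $B_{\ell n}$ concrete, the following Python sketch (for a hypothetical three-node ring network) builds $B_{\ell n}$ and checks that the matrix form $f_\ell=\sum_{n}B_{\ell n}\theta_n$ reproduces the line flows $B_\ell(\theta_{rec(\ell)}-\theta_{snd(\ell)})$:

```python
import numpy as np

# Hypothetical 3-node ring: line l goes snd(l) -> rec(l) with susceptance B_l.
snd = np.array([0, 1, 2])
rec = np.array([1, 2, 0])
B_line = np.array([10.0, 5.0, 8.0])
theta = np.array([0.0, -0.05, 0.02])   # nodal phase angles

# Build B_{ln}: +B_l at the receiving node, -B_l at the sending node.
n_lines, n_nodes = len(B_line), len(theta)
B = np.zeros((n_lines, n_nodes))
B[np.arange(n_lines), rec] = B_line
B[np.arange(n_lines), snd] = -B_line

f_matrix = B @ theta                           # f_l = sum_n B_{ln} theta_n
f_direct = B_line * (theta[rec] - theta[snd])  # f_l = B_l (theta_rec - theta_snd)
assert np.allclose(f_matrix, f_direct)
print(f_matrix)
```

The same matrix applied to $\Theta_n(\omega)-\theta_n$ yields the real-time flow deviations that appear in the reformulated balance constraints.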
Substituting the corresponding phase-angle expressions for the flows $f_\ell, F_\ell(\omega)$ and using the above properties, we have that the stochastic clearing problem \eqref{eq:stoch} can be written as
\newpage
\begin{subequations}
\label{eq:stochangle}
\begin{align}
\min_{d_j,D_j(\cdot),g_i,G_i(\cdot),\theta_n,\Theta_n(\cdot)} \;
& \sum_{i \in \mathcal{G}} \mathbb{E}\left[\alpha_i^{g} G_{i}(\omega) + \Delta\alpha_{i}^{g,+} (G_{i}(\omega)-g_{i})_+ + \Delta\alpha_{i}^{g,-} (G_{i}(\omega)-g_{i})_-\right]\nonumber\\
& + \sum_{j \in \mathcal{D}} \mathbb{E}\left[-\alpha_{j}^{d} D_{j}(\omega) + \Delta\alpha_{j}^{d,+} (D_{j}(\omega) - d_{j})_- + \Delta\alpha_{j}^{d,-} (D_{j}(\omega) - d_{j})_+\right]\nonumber\\
& + \sum_{\ell\in\mathcal{L}} \mathbb{E}\left[\Delta\alpha^{f,+}_{\ell} \left(\sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega) - \theta_n) \right)_+ + \Delta\alpha^{f,-}_{\ell} \left( \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega) - \theta_n) \right)_-\right]\notag\\
& + \sum_{n\in\mathcal{N}} \mathbb{E}\left[\Delta\alpha^{\theta,+}_n (\Theta_n(\omega) - \theta_n)_+ + \Delta\alpha^{\theta,-}_n (\Theta_n(\omega) - \theta_n)_-\right]\\
\text{s.t.}
& \sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell n} \theta_n + B_{\ell,snd(\ell)} \theta_{snd(\ell)}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta_{rec(\ell)} + B_{\ell n} \theta_n\right) \notag \\
& \quad + \sum_{i \in \mathcal{G}_n} g_{i} - \sum_{j \in \mathcal{D}_n} d_{j}=0,\quad (\pi_n) \quad \forall n\in\mathcal{N} \label{eq:ec:stochconst1} \\
& \sum_{\ell \in \mathcal{L}_n^{rec}} \left[ B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) + B_{\ell,snd(\ell)} \left( \Theta_{snd(\ell)}(\omega) - \theta_{snd(\ell)} \right) \right] \notag \\
& \quad -\sum_{\ell \in \mathcal{L}_n^{snd}} \left[ B_{\ell,rec(\ell)} \left( \Theta_{rec(\ell)}(\omega) - \theta_{rec(\ell)} \right) + B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) \right] \notag \\
& \quad + \sum_{i \in \mathcal{G}_n} \left(G_{i}(\omega)-g_{i}\right) - \sum_{j \in \mathcal{D}_n} \left(D_{j}(\omega)-d_{j}\right)=0, \quad (p(\omega)\Pi_n(\omega)),\quad \forall\omega \in \Omega,n\in\mathcal{N} \label{eq:ec:stochconst2}\\
& -\bar{F}_{\ell}(\omega)\leq \sum_{n\in\mathcal{N}} B_{\ell n} \Theta_n(\omega) \leq \bar{F}_{\ell}(\omega), \quad \forall\omega\in\Omega,\ell \in \mathcal{L} \label{eq:ec:stochconst3}\\
& 0\leq G_{i}(\omega)\leq \bar{G}_{i}(\omega),\quad \forall\omega\in\Omega,i\in\mathcal{G} \\
& 0\leq D_{j}(\omega)\leq \bar{D}_{j}(\omega),\quad \forall\omega\in\Omega,j\in\mathcal{D} \label{eq:ec:stochconst6}\\
&\underline\theta_n \leq \Theta_n(\omega) \leq \overline\theta_n, \quad \forall\omega\in\Omega,n\in\mathcal{N}.
\end{align}
\end{subequations}
We consider the partial Lagrange function of \eqref{eq:stochangle} as
\begin{align}\label{eq:lagstoch}
&\mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\theta_n,\Theta_n(\cdot),\pi_n,\Pi_n(\cdot)) \notag\\
&= \sum_{i\in\mathcal{G}} \mathbb{E}\left[\alpha_i^g G_i(\omega) + \left(\Delta\alpha_i^{g,+} + \Delta\alpha_i^{g,-}\right)(G_{i}(\omega)-g_i)_+ - \Delta\alpha_i^{g,-}(G_{i}(\omega)-g_i)\right] \notag\\
&\quad -\sum_{j\in\mathcal{D}} \mathbb{E}\left[\alpha^d_jD_j(\omega) - \left(\Delta\alpha^{d,+}_j+\Delta\alpha^{d,-}_j\right)(D_j(\omega)-d_j)_+ + \Delta\alpha^{d,+}_j(D_j(\omega)-d_j)\right] \notag\\
&\quad + \sum_{\ell\in\mathcal{L}} \mathbb{E}\left[\left(\Delta\alpha^{f,+}_{\ell}+\Delta\alpha^{f,-}_{\ell}\right) \left( \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega) - \theta_n) \right)_+ - \Delta\alpha^{f,-}_{\ell} \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega) - \theta_n) \right] \notag\\
&\quad + \sum_{n\in\mathcal{N}} \mathbb{E}\left[ \left(\Delta\alpha^{\theta,+}_n + \Delta\alpha^{\theta,-}_n \right) \left(\Theta_n(\omega) - \theta_n \right)_+ - \Delta\alpha^{\theta,-}_n \left(\Theta_n(\omega) - \theta_n\right)\right]\notag\\
&\quad -\sum_{n\in\mathcal{N}}\pi_n \left[ \sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell n} \theta_n + B_{\ell,snd(\ell)} \theta_{snd(\ell)}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta_{rec(\ell)} + B_{\ell n} \theta_n\right) + \sum_{i \in \mathcal{G}_n} g_{i} - \sum_{j \in \mathcal{D}_n} d_j \right] \notag\\
&\quad -\mathbb{E} \left[ \sum_{n\in\mathcal{N}} \Pi_n(\omega) \left(\sum_{\ell \in \mathcal{L}_n^{rec}} \left[ B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) + B_{\ell,snd(\ell)} \left( \Theta_{snd(\ell)}(\omega) - \theta_{snd(\ell)} \right) \right] \right.\right. \notag \\
& \qquad \qquad \qquad \qquad -\sum_{\ell \in \mathcal{L}_n^{snd}} \left[ B_{\ell,rec(\ell)} \left( \Theta_{rec(\ell)}(\omega) - \theta_{rec(\ell)} \right) + B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) \right] \notag \\
& \qquad \qquad \qquad \qquad \left.\left. + \sum_{i \in \mathcal{G}_n} \left(G_{i}(\omega)-g_{i}\right) - \sum_{j \in \mathcal{D}_n} \left(D_{j}(\omega)-d_{j}\right) \right)\right].
\end{align}
We define the subset $\bar{\mathcal{N}}\subseteq \mathcal{N}$ containing all nodes at which at least one supplier or consumer is connected. We also define the set $\mathcal{L}_n := \mathcal{L}_n^{rec} \cup \mathcal{L}_n^{snd}$ of lines incident to node $n$.
\begin{theorem}\label{th:networkdistortion}
Consider the stochastic clearing model \eqref{eq:stochangle} and assume that the incremental bid prices are positive and that $\Delta\alpha^{f,+}_{\ell}, \Delta\alpha^{f,-}_{\ell},\Delta\alpha^{\theta,+}_n,\Delta\alpha^{\theta,-}_n>0,\; \ell\in \mathcal{L}, \; n\in\mathcal{N}$. The price distortions $\mathcal{M}_n^{\pi},\; n\in\mathcal{N}$ are bounded as
\begin{align*}
-\Delta\bar\alpha_n^+ & \leq \mathcal{M}_n^{\pi} \leq \Delta\bar\alpha_n^-,\quad n\in\bar{\mathcal{N}}, \\
-\Delta\alpha_n^+ & \leq \mathcal{M}_n^\pi \leq \Delta\alpha_n^-,\quad n\in\mathcal{N}\backslash\bar{\mathcal{N}},
\end{align*}
where
\begin{align*}
\Delta\bar\alpha_n^+ &= \min\left\{\min_{i\in\mathcal{G}_n}\Delta\alpha_i^{g,+}, \min_{j\in\mathcal{D}_n}\Delta\alpha_j^{d,+}, \Delta\alpha_n^{+} \right\},\; n\in\bar{\mathcal{N}}, \\
\Delta\bar\alpha_n^- &= \min\left\{\min_{i\in\mathcal{G}_n}\Delta\alpha_i^{g,-}, \min_{j\in\mathcal{D}_n} \Delta\alpha_j^{d,-}, \Delta\alpha_n^{-}\right\},\; n\in\bar{\mathcal{N}}
\end{align*}
and
\begin{align*}
\Delta\alpha_n^{+} &= \displaystyle\frac{\sum_{\ell\in\mathcal{L}_n} \left( \Delta\alpha^{f,+}_{\ell} + (1 - B_{\ell n}) \Delta\alpha^{f,-}_\ell \right) + \Delta\alpha_n^{\theta,+}}{\sum_{\ell\in\mathcal{L}_n} B_{\ell}}, \; n\in\mathcal{N}, \\
\Delta\alpha_n^{-} &=\displaystyle \frac{\sum_{\ell\in\mathcal{L}_n} B_{\ell n} \Delta\alpha^{f,-}_{\ell} + \Delta\alpha_n^{\theta,-}}{\sum_{\ell\in\mathcal{L}_n} B_{\ell}}, \; n\in\mathcal{N}.
\end{align*}
\end{theorem}
\proof{Proof}
The stationarity conditions of the partial Lagrange function with respect to the day-ahead quantities $g_i,d_j$ and phase angles $\theta_n$ are given by
\begin{subequations}\label{eq:stationd2}
\begin{align}
0 &\in \partial_{d_j}\mathcal{L}
= (\Delta\alpha^{d,+}_j + \Delta\alpha^{d,-}_j) \partial_{d_j} \mathbb{E}[(D_j(\omega) - d_j)_+] + \Delta\alpha^{d,+}_j + \pi_{n(j)} - \mathbb{E}\left[\Pi_{n(j)}(\omega)\right] \quad j\in\mathcal{D}\\
0 &\in \partial_{g_i}\mathcal{L}
= (\Delta\alpha^{g,+}_i + \Delta\alpha^{g,-}_i) \partial_{g_i} \mathbb{E}[(G_i(\omega) - g_i)_+] + \Delta\alpha^{g,-}_i - \pi_{n(i)} + \mathbb{E}\left[\Pi_{n(i)}(\omega)\right] \quad i\in\mathcal{G}\\
0 &\in \partial_{\theta_n}\mathcal{L}
= \sum_{\ell\in\mathcal{L}_n} \left(\Delta\alpha^{f,+}_\ell + \Delta\alpha^{f,-}_\ell\right) \partial_{\theta_n} \mathbb{E}\left[\left(\sum_{m\in\mathcal{N}} B_{\ell m}(\Theta_m(\omega) - \theta_m) \right)_+\right] + \sum_{\ell\in\mathcal{L}_n} B_{\ell n} \Delta\alpha^{f,-}_\ell \notag \\
& \qquad \qquad + \left( \Delta\alpha_n^{\theta,+} + \Delta\alpha_n^{\theta,-} \right) \partial_{\theta_n} \mathbb{E} \left[ \left( \Theta_n(\omega) - \theta_n \right)_+ \right] + \Delta\alpha_n^{\theta,-} \notag \\
& \qquad \qquad - \left( \sum_{\ell\in\mathcal{L}_n^{rec}} B_{\ell n} - \sum_{\ell\in\mathcal{L}_n^{snd}} B_{\ell n} \right) \left( \pi_n - \mathbb{E}\left[\Pi_n(\omega)\right] \right) \quad n\in\mathcal{N}, \label{eq:stationd:angle}
\end{align}
\end{subequations}
where we recall that $n(i)$ is the node at which supplier $i$ is connected and $n(j)$ is the node at which demand $j$ is connected. Following the same bounding procedure used in the proof of Theorem \ref{th:singledistortion}, we obtain
\begin{align*}
-\Delta\alpha_j^{d,+}\leq \mathcal{M}^{\pi}_{n(j)}\leq \Delta \alpha_j^{d,-},\;j\in\mathcal{D}\\
-\Delta\alpha_i^{g,+}\leq \mathcal{M}^{\pi}_{n(i)}\leq \Delta \alpha_i^{g,-},\;i\in\mathcal{G}.
\end{align*}
Rearranging \eqref{eq:stationd:angle}, we obtain
\begin{align}\label{eq:subdiff:angle}
-\sum_{\ell\in\mathcal{L}_n} B_{\ell n} \Delta\alpha^{f,-}_{\ell} - \Delta\alpha_n^{\theta,-} + \left( \sum_{\ell\in\mathcal{L}_n^{rec}} B_{\ell n} - \sum_{\ell\in\mathcal{L}_n^{snd}} B_{\ell n} \right) \mathcal{M}^{\pi}_n \in \mathcal{S}_n,
\end{align}
where
\begin{align}
\mathcal{S}_n :=
& \sum_{\ell\in\mathcal{L}_n} \left( \Delta\alpha^{f,+}_{\ell} + \Delta\alpha^{f,-}_{\ell} \right)\partial_{\theta_n} \mathbb{E}\left[\left(\sum_{m\in\mathcal{N}} B_{\ell m}(\Theta_m(\omega) - \theta_m) \right)_+\right] \notag\\
& + \left( \Delta\alpha_n^{\theta,+} + \Delta\alpha_n^{\theta,-} \right) \partial_{\theta_n} \mathbb{E} \left[ \left( \Theta_n(\omega) - \theta_n \right)_+ \right]. \label{eq:sumofsubdiffs}
\end{align}
Because $\partial_{\theta_n} \mathbb{E}\left[\left(\sum_{m\in\mathcal{N}} B_{\ell m}(\Theta_m(\omega) - \theta_m) \right)_+\right] \subseteq [-1,0]$ and $\partial_{\theta_n} \mathbb{E} \left[ \left( \Theta_n(\omega) - \theta_n \right)_+ \right] \subseteq \left[ -1, 0 \right]$, we have
\begin{align}
\mathcal{S}_n
\subseteq \left[ -\sum_{\ell\in\mathcal{L}_n} \left( \Delta\alpha^{f,+}_{\ell} + \Delta\alpha^{f,-}_{\ell} \right) - \Delta\alpha_n^{\theta,+} - \Delta\alpha_n^{\theta,-}, 0 \right], \label{eq:anglesubdiffrange}
\end{align}
and therefore, from \eqref{eq:subdiff:angle} and \eqref{eq:anglesubdiffrange},
\begin{align}
& -\sum_{\ell\in\mathcal{L}_n} \left( \Delta\alpha^{f,+}_{\ell} + \Delta\alpha^{f,-}_{\ell} \right) - \Delta\alpha_n^{\theta,+} - \Delta\alpha_n^{\theta,-} \notag\\
& \qquad \leq -\sum_{\ell\in\mathcal{L}_n} B_{\ell n} \Delta\alpha^{f,-}_{\ell} - \Delta\alpha_n^{\theta,-} + \left( \sum_{\ell\in\mathcal{L}_n^{rec}} B_{\ell n} - \sum_{\ell\in\mathcal{L}_n^{snd}} B_{\ell n} \right) \mathcal{M}^{\pi}_n \notag \\
& \qquad = -\sum_{\ell\in\mathcal{L}_n} B_{\ell n} \Delta\alpha^{f,-}_{\ell} - \Delta\alpha_n^{\theta,-} + \sum_{\ell\in\mathcal{L}_n} B_\ell \mathcal{M}^{\pi}_n
\leq 0. \label{eq:subdiff:angle2}
\end{align}
Hence, we have $-\Delta\alpha_n^{+} \leq \mathcal{M}^\pi_n \leq \Delta\alpha_n^{-}$. Because $\Delta\bar\alpha_n^+$ and $\Delta\bar\alpha_n^-$ are the smallest incremental bid prices at node $n\in\bar{\mathcal{N}}$, we obtain the bound $-\Delta\bar\alpha_n^+ \leq \mathcal{M}_n^{\pi} \leq \Delta\bar\alpha_n^-,\; n\in\bar{\mathcal{N}}$. \Halmos
\endproof
The price distortion is bounded for every node $n\in\mathcal{N}$. Moreover, if the penalty parameters $\Delta\alpha_{\ell}^{f,+},\Delta\alpha_{\ell}^{f,-},\Delta\alpha_n^{\theta,+},\Delta\alpha_n^{\theta,-}$ are made arbitrarily small, then the price distortion at every node becomes arbitrarily small. We now state results that are natural extensions of Theorems \ref{th:singlebound} and \ref{th:singlemedian}.
\begin{theorem}\label{th:networkbounds}
Consider the stochastic clearing model \eqref{eq:stochangle}, and let the assumptions of Theorem \ref{th:networkdistortion} hold. The day-ahead quantities and phase angles are bounded by the real-time quantities and phase angles as
\begin{align*}
\min_{\omega \in \Omega}D_j(\omega) \leq d_j\leq \max_{\omega \in \Omega}D_j(\omega),& \quad j\in\mathcal{D}\\
\min_{\omega \in \Omega}G_i(\omega) \leq g_i\leq \max_{\omega \in \Omega}G_i(\omega),& \quad i\in\mathcal{G}\\
\min_{\omega \in \Omega}F_\ell(\omega) \leq f_\ell \leq \max_{\omega \in \Omega}F_\ell(\omega),& \quad \ell\in\mathcal{L}\\
\min_{\omega \in \Omega}\Theta_n(\omega) \leq \theta_n \leq \max_{\omega \in \Omega}\,\Theta_n(\omega),& \quad n\in\mathcal{N}.
\end{align*}
\end{theorem}
\proof{Proof}
For the suppliers and demands, we can use the same procedure used in the proof of Theorem \ref{th:singlebound}. The bounds on the day-ahead flows and phase angles follow the same argument as well. We use the definition \eqref{eq:sumofsubdiffs} for simplicity. Consider the following two cases:
\begin{itemize}
\item Case 1: The price distortion hits the lower bound for node $n$; we thus have $\mathcal{M}^{\pi}_n = -\Delta\alpha_n^{+}$. This implies that $-\sum_{\ell\in\mathcal{L}_n} \left( \Delta\alpha_\ell^{f,+} + \Delta\alpha_\ell^{f,-} \right) - \Delta\alpha_n^{\theta,+} - \Delta\alpha_n^{\theta,-} \in \mathcal{S}_n$ from \eqref{eq:subdiff:angle}, and hence we have
\begin{subequations}
\begin{align}
& -1 \in \partial_{\theta_n}\mathbb{E}\left[\left( \Theta_n(\omega)-\theta_n \right)_+\right] \label{eq:angleub}\\
& -1 \in \partial_{\theta_n}\mathbb{E}\left[\left(\sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega)-\theta_n) \right)_+\right], \quad \ell\in\mathcal{L}_n. \label{eq:flowub}
\end{align}
\end{subequations}
From \eqref{eq:subdiff}, equation~\eqref{eq:angleub} implies that $\mathbb{P}\left(\Theta_n(\omega) \geq \theta_n\right) = 1$, and equation~\eqref{eq:flowub} implies that $\mathbb{P}(\sum_{n\in\mathcal{N}} B_{\ell n} \Theta_n(\omega) \geq \sum_{n\in\mathcal{N}} B_{\ell n} \theta_n) = \mathbb{P}(F_\ell(\omega)\geq f_\ell) = 1$ for $\ell\in\mathcal{L}_n$. Therefore, we have $\theta_n \leq \Theta_n(\omega),\;\omega\in\Omega$ and $\theta_n \leq \max_{\omega\in\Omega} \Theta_n(\omega)$. Similarly, $f_\ell \leq F_\ell(\omega),\;\forall\omega\in\Omega$ and $f_\ell \leq \max_{\omega\in\Omega} F_\ell(\omega)$ for $\ell\in\mathcal{L}_n$.
\item Case 2: The price distortion hits the upper bound for node $n$; we thus have $\mathcal{M}^{\pi}_n = \Delta\alpha_n^{-}$. This implies $0 \in \mathcal{S}_n$ from \eqref{eq:subdiff:angle}, and hence we have
\begin{subequations}
\begin{align}
& 0 \in \partial_{\theta_n}\mathbb{E}\left[\left( \Theta_n(\omega)-\theta_n \right)_+\right] \label{eq:anglelb}\\
& 0 \in \partial_{\theta_n}\mathbb{E}\left[\left(\sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega)-\theta_n) \right)_+\right], \quad \ell\in\mathcal{L}_n. \label{eq:flowlb}
\end{align}
\end{subequations}
From \eqref{eq:subdiff}, equation~\eqref{eq:anglelb} implies that $\mathbb{P}\left(\Theta_n(\omega) \leq \theta_n\right) = 1$, and equation~\eqref{eq:flowlb} implies that $\mathbb{P}(\sum_{n\in\mathcal{N}} B_{\ell n} \Theta_n(\omega) \leq \sum_{n\in\mathcal{N}} B_{\ell n} \theta_n) = \mathbb{P}(F_\ell(\omega)\leq f_\ell) = 1$ for $\ell\in\mathcal{L}_n$. Therefore, we have $\theta_n \geq \Theta_n(\omega),\;\omega\in\Omega$ and $\theta_n \geq \min_{\omega\in\Omega} \Theta_n(\omega)$. Similarly, $f_\ell \geq F_\ell(\omega),\;\forall\omega\in\Omega$ and $f_\ell \geq \min_{\omega\in\Omega} F_\ell(\omega)$ for $\ell\in\mathcal{L}_n$.
\end{itemize}
\Halmos
\endproof
\begin{theorem}
Consider the stochastic clearing problem \eqref{eq:stochangle}, and let the assumptions of Theorem \ref{th:networkdistortion} hold. If the price distortions $\mathcal{M}_n^{\pi},\;n\in\mathcal{N}$ are zero at the solution, then
\begin{subequations}
\begin{align}
d_j &= \mathbb{Q}_{D_j(\omega)}\left( \frac{\Delta\alpha_j^{d,-}}{\Delta\alpha_j^{d,+}+\Delta\alpha_j^{d,-}} \right),\; j\in\mathcal{D} \label{eq:convergencedj}\\
g_i &= \mathbb{Q}_{G_i(\omega)}\left( \frac{\Delta\alpha_i^{g,+}}{\Delta\alpha_i^{g,+}+\Delta\alpha_i^{g,-}} \right), \; i \in\mathcal{G} \label{eq:convergencegi}.
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof}
For \eqref{eq:convergencedj} and \eqref{eq:convergencegi}, we can use the same procedure used in the proof of Theorem \ref{th:singlemedian}.\Halmos
\endproof
\begin{corollary}
If the incremental bid prices are symmetric from Corollary~\ref{thm:abscost}, then $d_j = \mathbb{M}\left(D_j(\omega)\right),\; j\in\mathcal{D}$, and $g_i = \mathbb{M}\left(G_i(\omega)\right),\; i\in\mathcal{G}$.
\end{corollary}
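As an illustrative numerical check of the quantile characterization above, note that \eqref{eq:convergencegi} is the familiar fact that the minimizer of an asymmetric (pinball-type) penalty is a quantile. The following sketch verifies this with made-up scenario data and incremental bid prices; the values are not from the model.

```python
import numpy as np

# Hypothetical scenario data for a real-time quantity G(w), equal
# probabilities; the incremental bid prices below are illustrative.
rng = np.random.default_rng(0)
G = rng.normal(loc=50.0, scale=10.0, size=10_000)
da_plus, da_minus = 3.0, 1.0   # Delta alpha^{g,+}, Delta alpha^{g,-}

def expected_penalty(g):
    # E[ da_plus*(G - g)_+ + da_minus*(g - G)_+ ]
    d = G - g
    return np.mean(da_plus * np.maximum(d, 0.0) + da_minus * np.maximum(-d, 0.0))

# Brute-force minimizer over a grid of candidate day-ahead quantities
grid = np.linspace(G.min(), G.max(), 4001)
g_star = grid[np.argmin([expected_penalty(g) for g in grid])]

# Theorem: the minimizer is the da_plus/(da_plus + da_minus) quantile of G
g_quantile = np.quantile(G, da_plus / (da_plus + da_minus))
print(g_star, g_quantile)  # the two agree up to grid resolution
```

Repeating the computation with symmetric prices (`da_plus == da_minus`) recovers the median, matching the corollary.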
We treat the penalty terms purely as a means to constrain the day-ahead flows and phase angles and induce the desired pricing properties. Our results indicate that this can be done with no harm by allowing $\Delta\alpha^{f,+}_{\ell},\Delta\alpha^{f,-}_{\ell},\Delta\alpha^{\theta,+}_n,\Delta\alpha^{\theta,-}_n$ to be sufficiently small. Moreover, making these arbitrarily small guarantees that the expected social surplus of the stochastic problem \eqref{eq:surplussto} satisfies $\varphi^{sto}\approx \varphi$. The alternative is to simply impose day-ahead bounds of the forms \eqref{eq:det:flowbounds} and \eqref{eq:det:anglebounds} and to eliminate the penalty terms on the flows and phase angles. In this case, however, we cannot guarantee that the price distortions are bounded, as we illustrate in the next section. In addition, similar to the case of day-ahead quantities, imposing day-ahead bounds on flows would require us to choose a proper statistic for the bounds of flows and phase angles, which might not be trivial to do.
We now prove revenue adequacy and zero uplift payments in expectation for the network-constrained formulation. We denote a minimizer of the partial Lagrange function \eqref{eq:lagstoch} (subject to the constraints \eqref{eq:boundstoch1}-\eqref{eq:boundstoch2}) as $d_j^*,D_j^*(\cdot),g_i^*,G_i^*(\cdot),\theta^*_n,\Theta_n^*(\cdot),\pi^*_n,\Pi^*_n(\cdot)$. Because the problem is convex, we know that the prices $\pi^*_n,\Pi^*_n(\cdot)$ satisfy
\begin{align*}
(d_j^*,D_j^*(\cdot),g_i^*,G_i^*(\cdot),\theta_n^*,\Theta^*_n(\cdot)) = \mathop{\textrm{argmin}}_{d_j,D_j(\cdot),g_i,G_i(\cdot),\theta_n,\Theta_n(\cdot)} \quad & \mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\theta_n,\Theta_n(\cdot),\pi^*_n,\Pi^*_n(\cdot))\nonumber\\
\textrm{s.t.} \quad & \eqref{eq:boundstoch1}-\eqref{eq:boundstoch2}.
\end{align*}
Moreover, at $\pi^*_n,\Pi^*_n(\cdot)$, the partial Lagrange function can be separated as
\begin{align}\label{eq:lag2}
&\mathcal{L}(d_j,D_j(\cdot),g_i,G_i(\cdot),\theta_n,\Theta_n(\cdot),\pi^*_n,\Pi^*_n(\cdot))=\nonumber\\
& \qquad \sum_{i \in \mathcal{G}}\mathcal{L}_i^g(g_i,G_i(\cdot),\pi^*_n,\Pi^*_n(\cdot))+\sum_{j \in \mathcal{D}}\mathcal{L}_j^d(d_j,D_j(\cdot),\pi^*_n,\Pi^*_n(\cdot)) + \mathcal{L}^\theta(\theta_n,\Theta_n(\cdot),\pi^*_n,\Pi^*_n(\cdot)),
\end{align}
where the first two terms are defined in \eqref{eq:contlag} and
\begin{align}\label{eq:revlag}
&\mathcal{L}^\theta(\theta_{n},\Theta_{n}(\cdot),\pi^*_n,\Pi^*_n(\cdot))=\nonumber\\
&\sum_{\ell\in\mathcal{L}} \mathbb{E}\left[\Delta\alpha^{f,+}_{\ell} \left( \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega)-\theta_n) \right)_+ + \Delta\alpha^{f,-}_{\ell} \left( \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega)-\theta_n) \right)_- \right] \notag \\
&+\sum_{n\in\mathcal{N}} \mathbb{E}\left[\Delta\alpha^{\theta,+}_n \left( \Theta_n(\omega)-\theta_n \right)_+ + \Delta\alpha^{\theta,-}_n \left( \Theta_n(\omega)-\theta_n \right)_- \right] \notag \\
& -\sum_{n\in\mathcal{N}}\pi^*_n \left[ \sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell n} \theta_n + B_{\ell,snd(\ell)} \theta_{snd(\ell)}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta_{rec(\ell)} + B_{\ell n} \theta_n\right) \right] \notag\\
& -\mathbb{E} \left[ \sum_{n\in\mathcal{N}} \Pi^*_n(\omega) \left(\sum_{\ell \in \mathcal{L}_n^{rec}} \left[ B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) + B_{\ell,snd(\ell)} \left( \Theta_{snd(\ell)}(\omega) - \theta_{snd(\ell)} \right) \right] \right.\right. \notag \\
& \qquad \qquad \qquad \qquad \left.\left.-\sum_{\ell \in \mathcal{L}_n^{snd}} \left[ B_{\ell,rec(\ell)} \left( \Theta_{rec(\ell)}(\omega) - \theta_{rec(\ell)} \right) + B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) \right] \right)\right].
\end{align}
Consequently, one can minimize the partial Lagrange function by minimizing \eqref{eq:contlag} and \eqref{eq:revlag} independently.
\begin{theorem}\label{th:networkadequacy}
Consider the stochastic clearing problem \eqref{eq:stochangle}, and let the assumptions of Theorem \ref{th:networkdistortion} hold. Any minimizer $d_j^*,D_j^*(\cdot),g_i^*,G_i^*(\cdot),\theta^*_n,\Theta^*_n(\cdot),\pi^*_n,\Pi^*_n(\cdot)$ of \eqref{eq:stochangle} yields zero uplift payments for all players and revenue adequacy in expectation:
\begin{subequations}
\begin{align}
\mathcal{M}_i^U&=0, \quad i\in \mathcal{G}, \\
\mathcal{M}_j^U&=0, \quad j\in \mathcal{D}, \\
\mathcal{M}^{ISO}&\leq 0.
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof}
For fixed $\pi^*_n,\Pi^*_n(\cdot)$, by the separation of the partial Lagrange function, the zero uplift payments directly result from Theorem~\ref{thm:zerouplift}. At fixed $\pi^*_n,\Pi^*_n(\cdot)$ we also note that $\theta_n=\Theta_n(\cdot)=0$ is a feasible candidate solution for the minimization of $\mathcal{L}^\theta(\theta_n,\Theta_n(\cdot),\pi^*_n,\Pi^*_n(\cdot))$ and that, at this suboptimal point, this term is also zero.
If the flow balances \eqref{eq:networkfwd} and \eqref{eq:networkrt} hold, we have
\begin{align}
0 =
& -\sum_{n\in\mathcal{N}}\pi_n \left[ \sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell n} \theta_n + B_{\ell,snd(\ell)} \theta_{snd(\ell)}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta_{rec(\ell)} + B_{\ell n} \theta_n\right) + \sum_{i \in \mathcal{G}_n} g_{i} - \sum_{j \in \mathcal{D}_n} d_j \right] \notag\\
& -\mathbb{E} \left[ \sum_{n\in\mathcal{N}} \Pi_n(\omega) \left(\sum_{\ell \in \mathcal{L}_n^{rec}} \left[ B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) + B_{\ell,snd(\ell)} \left( \Theta_{snd(\ell)}(\omega) - \theta_{snd(\ell)} \right) \right] \right.\right. \notag \\
& \qquad \qquad \qquad \qquad -\sum_{\ell \in \mathcal{L}_n^{snd}} \left[ B_{\ell,rec(\ell)} \left( \Theta_{rec(\ell)}(\omega) - \theta_{rec(\ell)} \right) + B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) \right] \notag \\
& \qquad \qquad \qquad \qquad \left.\left. + \sum_{i \in \mathcal{G}_n} \left(G_{i}(\omega)-g_{i}\right) - \sum_{j \in \mathcal{D}_n} \left(D_{j}(\omega)-d_{j}\right) \right)\right].
\end{align}
Consequently, for any arbitrary set of prices $\pi_n,\Pi_n(\cdot)$, we have
\begin{align*}
&-\sum_{n\in\mathcal{N}}\pi_n \left[ \sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell n} \theta_n + B_{\ell,snd(\ell)} \theta_{snd(\ell)}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta_{rec(\ell)} + B_{\ell n} \theta_n\right) \right] \notag\\
& -\mathbb{E} \left[ \sum_{n\in\mathcal{N}} \Pi_n(\omega) \left(\sum_{\ell \in \mathcal{L}_n^{rec}} \left[ B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) + B_{\ell,snd(\ell)} \left( \Theta_{snd(\ell)}(\omega) - \theta_{snd(\ell)} \right) \right] \right.\right. \notag \\
& \qquad \qquad \qquad \qquad \left.\left.-\sum_{\ell \in \mathcal{L}_n^{snd}} \left[ B_{\ell,rec(\ell)} \left( \Theta_{rec(\ell)}(\omega) - \theta_{rec(\ell)} \right) + B_{\ell n} \left( \Theta_n(\omega) - \theta_n \right) \right] \right)\right] \\
&=\sum_{n\in\mathcal{N}}\pi_n\left(\sum_{i \in \mathcal{G}_n} g_{i} - \sum_{j \in \mathcal{D}_n} d_{j}\right)+\mathbb{E}\left[\sum_{n\in\mathcal{N}}\Pi_n(\omega)\left(\sum_{i \in \mathcal{G}_n} \left(G_{i}(\omega)-g_{i}\right) - \sum_{j \in \mathcal{D}_n} \left(D_{j}(\omega)-d_{j}\right)\right)\right] \\
&=\sum_{i \in \mathcal{G}} \pi_{n(i)}g_{i} +\mathbb{E}\left[\sum_{i \in \mathcal{G}} \Pi_{n(i)}(\omega)\left(G_{i}(\omega)-g_{i}\right)\right] -\sum_{j \in \mathcal{D}} \pi_{n(j)}d_{j} - \mathbb{E}\left[\sum_{j \in \mathcal{D}} \Pi_{n(j)}(\omega)\left(D_{j}(\omega)-d_{j}\right)\right] \\
&=\mathcal{M}^{ISO}.
\end{align*}
Therefore, we have
\begin{align*}
0 \geq
&\mathcal{L}^\theta(\theta_n^*,\Theta_n^*(\cdot),\pi^*_n,\Pi^*_n(\cdot)) \\
\geq
&-\sum_{n\in\mathcal{N}}\pi^*_n \left[ \sum_{\ell \in \mathcal{L}_n^{rec}} \left(B_{\ell n} \theta^*_n + B_{\ell,snd(\ell)} \theta^*_{snd(\ell)}\right) -\sum_{\ell \in \mathcal{L}_n^{snd}} \left(B_{\ell,rec(\ell)} \theta^*_{rec(\ell)} + B_{\ell n} \theta^*_n\right) \right] \notag\\
& -\mathbb{E} \left[ \sum_{n\in\mathcal{N}} \Pi^*_n(\omega) \left(\sum_{\ell \in \mathcal{L}_n^{rec}} \left[ B_{\ell n} \left( \Theta^*_n(\omega) - \theta^*_n \right) + B_{\ell,snd(\ell)} \left( \Theta^*_{snd(\ell)}(\omega) - \theta^*_{snd(\ell)} \right) \right] \right.\right. \notag \\
& \qquad \qquad \qquad \qquad \left.\left.-\sum_{\ell \in \mathcal{L}_n^{snd}} \left[ B_{\ell,rec(\ell)} \left( \Theta^*_{rec(\ell)}(\omega) - \theta^*_{rec(\ell)} \right) + B_{\ell n} \left( \Theta^*_n(\omega) - \theta^*_n \right) \right] \right)\right] \\
&= \mathcal{M}^{ISO},
\end{align*}
where the second inequality holds because
\begin{align*}
&\sum_{\ell\in\mathcal{L}} \mathbb{E} \left[\Delta\alpha^{f,+}_{\ell} \left( \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega)-\theta_n) \right)_+ + \Delta\alpha^{f,-}_{\ell} \left( \sum_{n\in\mathcal{N}} B_{\ell n} (\Theta_n(\omega)-\theta_n) \right)_- \right]\\
&\qquad + \sum_{n\in\mathcal{N}} \mathbb{E}\left[\Delta\alpha^{\theta,+}_n \left( \Theta_n(\omega)-\theta_n \right)_+ + \Delta\alpha^{\theta,-}_n \left( \Theta_n(\omega)-\theta_n \right)_- \right]\geq 0.\Halmos
\end{align*}
\endproof
We highlight that the introduction of the penalty terms for flows does not affect revenue adequacy and cost recovery because the partial Lagrange function remains separable for fixed prices.
\section{The kinetic Vlasov model and the corresponding macroscopic systems}
In this section, we introduce the Vlasov model and the corresponding macroscopic systems.
We consider the dimensionless VP system
\begin{equation}
\frac{\partial f}{\partial t}
+ {\bf{v}} \cdot \nabla_{\bf{x}} f
+ {\bf{E}} ({\bf{x}},t) \cdot \nabla_{\bf{v}} f = 0,
\label{vlasov1}
\end{equation}
\begin{equation}
{\bf E}( {\bf x},t) = - \nabla_{\bf x} \phi({\bf x},t), \quad -\triangle_{\bf x} \phi ({\bf x},t) = \rho ({\bf x},t) - \rho_0,
\label{poisson}
\end{equation}
which describes the dynamics of the probability distribution function $f({\bf x}, {\bf v},t)$ of electrons in a collisionless plasma.
Here ${\bf E}$ is the electric field and $\phi$ is the self-consistent electrostatic potential determined by Poisson's equation. The distribution $f$ couples to the long-range fields via the charge density $\rho({\bf x},t) = \int_{\Omega_{{\bf v}}} f({\bf x}, {\bf v},t) d {\bf v}$, and $\rho_0$ is the constant charge density of a uniform background of infinitely massive ions.
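For intuition on the field solve \eqref{poisson}, the following is a generic 1D periodic spectral Poisson solve; the grid size and density profile are illustrative choices, not the discretization used in this paper.

```python
import numpy as np

# Solve -phi'' = rho - rho_0 with E = -phi' on a periodic 1D grid.
L = 4.0 * np.pi
N = 64
x = np.arange(N) * L / N
rho = 1.0 + 0.1 * np.cos(0.5 * x)   # illustrative charge density
rho0 = np.mean(rho)                 # neutralizing uniform ion background

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
rhs_hat = np.fft.fft(rho - rho0)
phi_hat = np.zeros_like(rhs_hat)
phi_hat[1:] = rhs_hat[1:] / k[1:]**2           # k^2 phi_hat = rhs_hat; zero-mean gauge
E = -np.real(np.fft.ifft(1j * k * phi_hat))    # E = -phi'

# For rho - rho0 = 0.1 cos(x/2), the exact field is E = 0.2 sin(x/2)
E_exact = 0.2 * np.sin(0.5 * x)
print(np.max(np.abs(E - E_exact)))  # spectrally small
```

The zero mode of $\hat\phi$ is set to zero, fixing the additive gauge freedom of the potential; the neutralizing background guarantees the right-hand side has zero mean.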
The Vlasov dynamics are well-known to conserve several physical invariants. In particular, let
\begin{eqnarray}
\label{eq: mass_d}
\mbox{charge density:}&& \rho ({\bf x}, t) = \int_{\Omega_{{\bf v}}} f({\bf x}, {\bf v},t) d {\bf v}, \\
\label{eq: current_d}
\mbox{current density:} &&{\bf J} ({\bf x}, t) = \int_{\Omega_{{\bf v}}}f({\bf x}, {\bf v},t) {\bf v} d {\bf v},\\
\label{eq: kenergy_d}
\mbox{kinetic energy density:} && \kappa({\bf x},t) = \frac{1}{2} \int_{\Omega_{{\bf v}}} |{\bf v}|^{2} f({\bf x}, {\bf v},t) d {\bf v},\\
\label{eq: energy_d}
\mbox{energy density:} && e({\bf x},t)=\kappa({\bf x},t)+\frac{1}{2} |{\bf E}({\bf x},t)|^{2}.
\end{eqnarray}
Then, by taking the first few moments of the Vlasov equation,
the following conservation laws of mass, momentum and energy can be derived
\begin{align}
\partial_{t} \rho + \nabla_{\bf x} \cdot {\bf J} &= 0\label{eq:mass}\\
\partial_{t} {\bf J} +\nabla_{{\bf x}} \cdot \mathbf{\sigma}&= \rho{\bf E} \label{eq:mom}\\
\partial_{t} e +\nabla_{{\bf x}} \cdot \mathbf{Q}& =0,\label{eq:ener}
\end{align}
where $\sigma({\bf x}, t)=\int_{\Omega_{{\bf v}}}({\bf v} \otimes {\bf v}) f({\bf x}, {\bf v},t) d {\bf v}$ and $\mathbf{Q}({\bf x},t) =\frac12\int_{\Omega_{{\bf v}}}{\bf v}|{\bf v}|^2 f({\bf x}, {\bf v},t) d {\bf v}$.
It is well known that the local conservation property is essential for capturing the correct entropy solutions of hyperbolic systems such as \eqref{eq:mass}-\eqref{eq:ener}.
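On a discrete phase-space grid, the moment integrals \eqref{eq: mass_d}-\eqref{eq: kenergy_d} reduce to quadrature sums in $v$ at each $x$. A minimal sketch, where the perturbed Maxwellian and grid parameters are illustrative:

```python
import numpy as np

# Discrete moments of f(x,v): rho, J, kappa as v-quadratures at each x.
Nx, Nv, vmax = 32, 256, 8.0
x = np.linspace(0.0, 2.0 * np.pi, Nx, endpoint=False)
v = np.linspace(-vmax, vmax, Nv)
dv = v[1] - v[0]

# Illustrative perturbed Maxwellian: density 1 + 0.1 cos(x), temperature 1
X, V = np.meshgrid(x, v, indexing="ij")
f = (1.0 + 0.1 * np.cos(X)) * np.exp(-V**2 / 2.0) / np.sqrt(2.0 * np.pi)

rho = f.sum(axis=1) * dv                    # charge density
J = (f * V).sum(axis=1) * dv                # current density
kappa = 0.5 * (f * V**2).sum(axis=1) * dv   # kinetic energy density

# Sanity checks against the exact moments of this Maxwellian
print(np.max(np.abs(rho - (1.0 + 0.1 * np.cos(x)))))  # ~0
print(np.max(np.abs(J)))                              # zero mean velocity
print(np.max(np.abs(kappa - 0.5 * rho)))              # kT/2 per dof with T = 1
```

Because the Maxwellian decays rapidly, the plain Riemann sum on the truncated velocity domain is accurate to near machine precision here.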
\section{A LoMaC low rank tensor approach with DG discretizations for the Vlasov dynamics}
For simplicity of illustrating the basic idea, we only discuss a 1D1V example in this section. The low rank tensor approach \cite{guo2021lowrank} is designed based on the assumption that our solution at time $t$ has a low rank representation in the form of
\begin{equation}
\label{eq: fn1}
f(x, v, t) = \sum_{l=1}^{r} \left(C_l(t) \ U_l^{(1)}(x, t) U_l^{(2)}(v, t)\right),
\end{equation}
where $\left\{U_l^{(1)}(x, t)\right\}_{l=1}^{r}$ and $\left\{U_l^{(2)}(v, t)\right\}_{l=1}^{r}$ are sets of time-dependent orthonormal basis functions in the $x$ and $v$ directions, respectively, $C_l$ is the coefficient of the basis function $U_l^{(1)}(x, t)U_l^{(2)}(v, t)$, and $r$ is the representation rank. Equation \eqref{eq: fn1} can be viewed as a Schmidt decomposition of a function of $(x, v)$, truncated to rank $r$ by discarding small singular values.
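On a tensor grid, the representation \eqref{eq: fn1} amounts to a rank-$r$ truncated SVD of the matrix of nodal values $F_{mn}\approx f(x_m, v_n)$. A minimal sketch with an artificial, exactly rank-2 test function (unrelated to the paper's examples):

```python
import numpy as np

# Rank-r truncation of a gridded function f(x,v) via the SVD.
x = np.linspace(0.0, 2.0 * np.pi, 128)
v = np.linspace(-6.0, 6.0, 160)
X, V = np.meshgrid(x, v, indexing="ij")
# Exactly rank 2: a sum of two separable terms
F = (1.0 + 0.5 * np.cos(X)) * np.exp(-V**2 / 2.0) \
    + 0.1 * np.sin(X) * V * np.exp(-V**2 / 2.0)

U, s, Wt = np.linalg.svd(F, full_matrices=False)
r = 2
F_r = U[:, :r] * s[:r] @ Wt[:r, :]   # rank-r reconstruction

print(s[:4])                                         # two dominant singular values
print(np.linalg.norm(F - F_r) / np.linalg.norm(F))   # ~machine precision
```

For genuinely full-rank Vlasov solutions, the same truncation discards the singular values below a tolerance, which is the mechanism behind the adaptive-rank evolution.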
\subsection{DG discretization with nodal Lagrangian basis functions.}
We perform a DG discretization with a piecewise $Q^k$ polynomial space for $f$ on a truncated 1D1V domain of $\Omega = [x_{\min}, x_{\max}] \times [-v_{\max}, v_{\max}]$.
We start with a tensor product Cartesian partition of $\Omega$ denoted by $\Omega_h$ with
$$x_{\min}=x_{\frac12}<x_{\frac{3}{2}}<\cdots <x_{N_x+\frac12}=x_{\max},$$
$$-v_{\max}=v_{\frac12}<v_{\frac{3}{2}}<\cdots <v_{N_v+\frac12}=v_{\max}.$$
Denote an element as $I_{ij}=[{x_{i-\frac{1}{2}}}, {x_{i+\frac{1}{2}}}]\times[{v_{j-\frac{1}{2}}}, {v_{j+\frac{1}{2}}}]\in \Omega_h$ with element size $h_{x, i}\, h_{v, j}$ and centers $x_{i} = \frac12(x_{i-\frac12}+x_{i+\frac12})$ and $v_{j} = \frac12(v_{j-\frac12}+v_{j+\frac12})$. Let $h_x=\max_{i=1}^{N_x}h_{x, i}$ and $h_v=\max_{j=1}^{N_v}h_{v, j}$. Given any non-negative integer $k$, we define a finite dimensional discrete space of piecewise $Q^k$ polynomials,
\begin{equation}
Q_h^k=\left\{p(x, v)\in L^2(\Omega): p|_{I_{ij}}\in Q^k(I_{ij}),\, \forall I_{ij}\in \Omega_h \right\}.
\label{eq:DiscreteSpace}
\end{equation}
The local space $Q^k(I)$ consists of polynomials with terms in the form of $x^m v^n$ with $\max(m, n)\le k$ on $I\in\Omega_h$.
To distinguish the left and right limits of a function $p\in Q_h^k$ at $(x_{i+\frac{1}{2}}, v)$, we let
$p_{i+\frac{1}{2}, v}^\pm=\lim_{\delta \rightarrow 0^\pm}p(x_{i+\frac{1}{2}}+\delta, v)$.
A semi-discrete DG method for the Vlasov equation \eqref{vlasov1} is: find $f_h(\cdot, \cdot, t)\in Q_h^k$, such that
$\forall \phi \in Q_h^k$ and $\forall I_{ij}\in\Omega_h$,
\begin{align}
\label{eq:DG}
\int_{I_{ij}} \partial_t f_h \phi dxdv &= \int_{I_{ij}} v f_h \phi_x dxdv - \int_{v_{j-\frac{1}{2}}}^{{v_{j+\frac{1}{2}}}} v(\hat{f}_{i+\frac{1}{2}, v} \phi^-_{i+\frac{1}{2}, v}- \hat{f}_{i-\frac{1}{2}, v} \phi^+_{i-\frac{1}{2}, v}) dv \\
& + \int_{I_{ij}} E f_h \phi_v dxdv - \int_{x_{i-\frac{1}{2}}}^{{x_{i+\frac{1}{2}}}} E(x) (\hat{f}_{x, j+\frac{1}{2}} \phi^-_{x, j+\frac{1}{2}}- \hat{f}_{x, j-\frac{1}{2}} \phi^+_{x, j-\frac{1}{2}}) dx.\notag
\end{align}
To implement the DG scheme under the low rank framework, we use the nodal basis to represent functions in the discrete space $Q_h^k$,
in conjunction with rewriting and/or approximating the integrals in the schemes by numerical quadratures.
We consider a reference cell $I=[-\frac12, \frac12]\times[-\frac12, \frac12]$ and the tensor product of Gaussian quadrature points in each direction $\{\xi_{ig},\eta_{jg}\}_{ig, jg=0}^{k}$. We further let $\{\omega_l\}^k_{l=0}$ denote the corresponding quadrature weights on the reference element.
The local nodal Lagrangian basis on the reference cell is $\{L_{ig, jg}(\xi, \eta)\}_{ig, jg=0}^{k}$ in $Q^k(I)$ with
\begin{equation}
L_{ig, jg} (\xi_{ig'},\eta_{jg'})=\delta_{ig, ig'}\delta_{jg, jg'},\quad ig, ig', jg, jg'=0,\cdots, k.
\end{equation}
Here $\delta_{\cdot, \cdot'}$ is the Kronecker delta function. In fact,
$
L_{ig, jg} (\xi, \eta) = L_{ig} (\xi) L_{jg} (\eta),
$
where $L_{ig}$ and $L_{jg}$ are the 1D Lagrangian nodal basis functions associated with the corresponding Gaussian nodes. For a computational cell $I_{ij}$, we can perform a linear transformation to the reference cell, with $\xi = \frac{x-x_i}{h_{x, i}}, \eta = \frac{v-v_j}{h_{v,j}}$, and denote by the shifted Gaussian nodes $x_{i,ig} = x_i + h_{x,i}\xi_{ig}$, $v_{j,jg} = v_j + h_{v,j}\eta_{jg}$.
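A short sketch of the reference-cell ingredients above, for $k=2$: the Gaussian quadrature nodes and weights rescaled to $[-\frac12,\frac12]$, and the 1D Lagrange nodal basis with its Kronecker property (a generic construction, not code from the paper):

```python
import numpy as np

# Gauss-Legendre nodes/weights on [-1/2, 1/2] for k = 2 (3 nodes)
k = 2
xi, w = np.polynomial.legendre.leggauss(k + 1)
xi, w = 0.5 * xi, 0.5 * w            # rescale from [-1, 1]

def lagrange(ig, t):
    """Evaluate L_ig(t), the Lagrange basis through the Gauss nodes."""
    val = 1.0
    for m in range(k + 1):
        if m != ig:
            val *= (t - xi[m]) / (xi[ig] - xi[m])
    return val

# Kronecker property: L_ig(xi_{ig'}) = delta_{ig, ig'}
Bmat = np.array([[lagrange(ig, xi[igp]) for igp in range(k + 1)]
                 for ig in range(k + 1)])
print(np.max(np.abs(Bmat - np.eye(k + 1))))   # ~0

# The rescaled weights integrate exactly: int_{-1/2}^{1/2} t^2 dt = 1/12
print(np.dot(w, xi**2))
```

The $(k+1)$-point Gauss rule is exact for polynomials of degree up to $2k+1$, which is why the nodal values at the Gauss points double as quadrature data in the scheme.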
With the nodal basis functions, the DG scheme \eqref{eq:DG} on a computational cell $I_{ij}$ can be equivalently written with the test functions being taken as $L_{ig', jg'}(\xi, \eta)$, $ig', jg'=0,\cdots,k$. We look for the DG solution expressed in the form of $f_{h, i, j}(x, v, t) = \sum_{ig, jg=0}^{k} f^{ig, jg}_{h, i,j}(t) L_{ig, jg}(\xi(x), \eta(v))$, with its nodal values satisfying the following equations:
\begin{align}
\label{eq:DG_0}
&h_{x, i}h_{v, j} \omega_{ig}\omega_{jg} \left(\frac{d}{dt}f^{ig, jg}_{h,i,j}(t)\right) \notag\\
=& h_{x, i}h_{v, j} \omega_{jg} v_{j,jg} \sum_{ig''} \omega_{ig''}\left(\frac{d}{dx} L_{ig}(\xi_{ig''})f^{ig'', jg}_{h,i,j}(t) \right)
- h_{v, j} \omega_{jg} v_{j,jg} \left(\hat{f}_{i+\frac12, jg} L_{ig}(\frac12)- \hat{f}_{i-\frac12, jg} L_{ig}(-\frac12)\right) \notag\\
+& h_{x, i}h_{v, j} \omega_{ig} E_{i,ig} \sum_{jg''} \omega_{jg''} \left( \frac{d}{dv} L_{jg}(\eta_{jg''}) f^{ig, jg''}_{h,i,j}(t) \right) - h_{x, i} \omega_{ig} E_{i,ig} \left(\hat{f}_{ig, j+\frac12} L_{jg}(\frac12)- \hat{f}_{ig, j-\frac12} L_{jg}(-\frac12)\right).
\end{align}
Dividing by $h_{x, i}h_{v, j} \omega_{ig}\omega_{jg}$, the above equation becomes
\begin{align}
\label{eq:DG_nodal}
\frac{d}{dt}f^{ig, jg}_{h, ij}(t)
=& \frac{1}{\omega_{ig}}\left(v_{j,jg} \sum_{ig''} \omega_{ig''}\left(\frac{d}{dx} L_{ig}(\xi_{ig''})f^{ig'', jg}_{h, i,j}(t) \right)
- \frac{v_{j,jg}}{h_{x, i}}\left(\hat{f}_{i+\frac12, jg} L_{ig}(\frac12)- \hat{f}_{i-\frac12, jg} L_{ig}(-\frac12)\right)\right) \notag\\
+& \frac{1}{\omega_{jg}}\left(E_{i,ig} \sum_{jg''} \omega_{jg''} \left( \frac{d}{dv} L_{jg}(\eta_{jg''}) f^{ig, jg''}_{h, i,j}(t) \right) - \frac{E_{i,ig}}{h_{v, j}} \left(\hat{f}_{ig, j+\frac12} L_{jg}(\frac12)- \hat{f}_{ig, j-\frac12} L_{jg}(-\frac12)\right)\right),
\end{align}
where $\hat{f}_{i\pm\frac12, jg}$ and $\hat{f}_{ig, j\pm\frac12}$ are taken as monotone upwind fluxes and $E_{i,ig}$ denotes the electric field at $x_{i,ig}$. In particular, let $v^+ = \max(v, 0)$, $v^- = \min(v, 0)$, $E^+ = \max(E, 0)$, $E^- = \min(E, 0)$; with a simple upwind flux, \eqref{eq:DG_nodal} becomes
\begin{align}
& \partial_t f^{ig, jg}_{h, i,j}(t) \notag\\
=& \frac{v^+_{j,jg}}{\omega_{ig}h_{x,i}} \sum_{ig''}\left(\omega_{ig''}\frac{dL_{ig}}{d\xi}(\xi_{ig''}) f^{ig'', jg}_{h, i,j}
- f_{h,i, j}^{ig'', jg} L_{ig''}(\tfrac12) L_{ig}(\tfrac12) + f_{h, i-1, j}^{ig'', jg} L_{ig''}(\tfrac12) L_{ig}(-\tfrac12)\right)\notag \\
+& \frac{v^-_{j,jg}}{\omega_{ig}h_{x,i}} \sum_{ig''}\left(\omega_{ig''}\frac{d L_{ig}}{d\xi}(\xi_{ig''})f^{ig'', jg}_{h, i,j}
- f_{h,i+1, j}^{ig'', jg} L_{ig''}(-\tfrac12) L_{ig}(\tfrac12) + f_{h, i, j}^{ig'', jg} L_{ig''}(-\tfrac12) L_{ig}(-\tfrac12)\right)\notag\\
+& \frac{E^+_{i,ig}}{\omega_{jg}h_{v,j}} \sum_{jg''}\left(\omega_{jg''} \frac{dL_{jg}}{d\eta} (\eta_{jg''}) f^{ig, jg''}_{h, i,j}
- f_{h,i, j}^{ig, jg''} L_{jg''}(\tfrac12) L_{jg}(\tfrac12)+ f_{h,i, j-1}^{ig, jg''} L_{jg''}(\tfrac12) L_{jg}(-\tfrac12)\right) \notag\\
+& \frac{E^-_{i,ig}}{\omega_{jg}h_{v,j}} \sum_{jg''}\left(\omega_{jg''} \frac{dL_{jg}}{d\eta} (\eta_{jg''}) f^{ig, jg''}_{h, i,j}
- f_{h,i, j+1}^{ig, jg''} L_{jg''}(-\tfrac12) L_{jg}(\tfrac12)+ f_{h,i, j}^{ig, jg''} L_{jg''}(-\tfrac12) L_{jg}(-\tfrac12)\right).
\label{eq:DG_1}
\end{align}
We denote the first two terms on the RHS of \eqref{eq:DG_1} as
\begin{equation}
\label{eq: x-der-dg}
v^+_{j,jg}\cdot D^{+}_{x, i,ig} {\bf f}^{+,:, jg}_{h, i, j}, \quad v^-_{j,jg} \cdot D^-_{x, i,ig} {\bf f}^{-,:, jg}_{h, i, j},
\end{equation}
which are standard 1D upwind DG discretizations of the $x$-derivative at the $ig$-th Gaussian node of the $i$-th cell for positive/negative velocities, respectively. Here the $v$-variable is fixed at the $jg$-th Gaussian node of the $j$-th cell, and
\begin{align*}
{\bf f}^{+,:, jg}_{h, i, j}&=(f_{h,i-1,j}^{0,jg},\ldots,f_{h,i-1,j}^{k,jg},f_{h,i,j}^{0,jg},\ldots,f_{h,i,j}^{k,jg}),\\
{\bf f}^{-,:, jg}_{h, i, j}&=(f_{h,i,j}^{0,jg},\ldots,f_{h,i,j}^{k,jg},f_{h,i+1,j}^{0,jg},\ldots,f_{h,i+1,j}^{k,jg}).
\end{align*}
Similarly, the other two terms are denoted as
\begin{equation}
\label{eq: v-der-dg}
E^+_{i,ig}\cdot D^{+}_{v, j,jg} {\bf f}^{+,ig, :}_{h, i, j}, \quad E^-_{i,ig}\cdot D^-_{v, j,jg} {\bf f}^{-,ig,:}_{h, i,j},
\end{equation}
where
\begin{align*}
{\bf f}^{+,ig, :}_{h, i, j}&=(f_{h,i,j-1}^{ig,0},\ldots,f_{h,i,j-1}^{ig,k},f_{h,i,j}^{ig,0},\ldots,f_{h,i,j}^{ig,k}),\\
{\bf f}^{-,ig, :}_{h, i, j}&=(f_{h,i,j}^{ig,0},\ldots,f_{h,i,j}^{ig,k},f_{h,i,j+1}^{ig,0},\ldots,f_{h,i,j+1}^{ig,k}).
\end{align*}
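To make the action of these operators concrete, the following Python sketch (our illustration, not part of a reference implementation; the function names are ours) assembles the local upwind operator $D^{+}_{x,i,ig}$ of \eqref{eq: x-der-dg} for $k=1$ on a single cell: a volume term from the quadrature of $dL_{ig}/d\xi$, plus upwind interface traces. Note that, as written in \eqref{eq:DG_1}, $v^{+}_{j,jg}D^{+}_{x,i,ig}{\bf f}$ is the right-hand-side contribution of the transport term, so for $v=1$ the operator returns an approximation of $-\partial_x f$ at the cell's Gauss nodes.

```python
import numpy as np

# Reference cell [-1/2, 1/2] with k = 1: two shifted Gauss-Legendre nodes.
k = 1
xi = np.array([-1.0, 1.0]) / (2.0 * np.sqrt(3.0))  # nodes xi_0, xi_1
w = np.array([0.5, 0.5])                           # reference weights (sum to 1)

def L(ig, s):
    """Lagrange basis L_ig through the Gauss nodes, evaluated at s."""
    jg = 1 - ig
    return (s - xi[jg]) / (xi[ig] - xi[jg])

def dL(ig):
    """Derivative of the (linear, since k = 1) Lagrange basis."""
    jg = 1 - ig
    return 1.0 / (xi[ig] - xi[jg])

def D_plus(f_im1, f_i, h):
    """Upwind (v > 0) nodal DG operator on cell i, acting on the stencil
    (f_{i-1}, f_i) of nodal values; returns the RHS contribution, i.e. an
    approximation of -df/dx at the two Gauss nodes of cell i."""
    f_right  = sum(f_i[jg]   * L(jg, 0.5) for jg in range(k + 1))  # trace at x_{i+1/2}
    f_upwind = sum(f_im1[jg] * L(jg, 0.5) for jg in range(k + 1))  # upwind trace at x_{i-1/2}
    out = np.empty(k + 1)
    for ig in range(k + 1):
        vol = sum(w[jg] * dL(ig) * f_i[jg] for jg in range(k + 1))
        out[ig] = (vol - f_right * L(ig, 0.5) + f_upwind * L(ig, -0.5)) / (w[ig] * h)
    return out
```

Applied to the nodal values of the globally linear function $f(x)=x$ on two unit cells $[-1,0]$ and $[0,1]$, the operator returns $-1$ at both Gauss nodes of the right cell, the exact value of $-\partial_x f$.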
\begin{rem}
One observation in the above formulation is that, although the DG method formulates the scheme in an element-by-element fashion, the evaluation of solution derivatives in the $x$- and $v$-directions, at the Gaussian nodal points of each cell, actually occurs in a dimension-by-dimension manner. In other words, we can formulate a DG differentiation operator $D^\pm_{x}$ by concatenating the local operators $D^\pm_{x,i,ig}$.
Similar comments can be applied to $D^\pm_{v}$ as the DG differentiation operator for the $v$-derivative.
\end{rem}
\subsection{Nodal DG solutions on grid points and weighted SVD}
In this subsection, we first set up the nodal DG solutions at the Gaussian grid points of each computational cell, which come from a tensor product of the $x$ and $v$ discretizations. Then we introduce several basic tools for performing the LoMaC DG low rank tensor approach in the next subsection. These tools include the weights and definitions of the discrete inner product spaces, the orthogonal projection for conservation of macroscopic observables in the weighted inner product space, and the weighted singular value truncation.
The nodal grid points for the DG discretization, as tensor product of $(k+1)N_x \times (k+1)N_v$ points from $N_x \times N_v$ computational cells, are
\begin{equation}
\label{eq: x_grid}
x_{\text{grid}}: \quad x_{\min}< \cdots < (x_{i, 0} < \cdots<x_{i, k}) \cdots < x_{\max},
\end{equation}
\begin{equation}
\label{eq: v_grid}
v_{\text{grid}}: \quad -v_{\max}< \cdots < (v_{j, 0} < \cdots<v_{j, k}) \cdots < v_{\max}.
\end{equation}
Here $\{x_{i, ig}\}_{ig=0}^{k}$ and $\{v_{j, jg}\}_{jg=0}^{k}$ are the shifted Gaussian points on the cell $[{x_{i-\frac{1}{2}}}, {x_{i+\frac{1}{2}}}]$ and $[{v_{j-\frac{1}{2}}}, {v_{j+\frac{1}{2}}}]$ respectively.
DG nodal solutions on the tensor product of grids \eqref{eq: x_grid}-\eqref{eq: v_grid} are organized as ${\bf f} \in \mathbb{R}^{(k+1)N_x \times (k+1) N_v}$, with each component $f^{ig, jg}_{h, i,j}(t)$ approximating the point value of the solution at the corresponding grid point. It has a corresponding low rank decomposition, similar to \eqref{eq: fn1}, as
\begin{equation}
\label{eq: fn2}
{\bf f} = \sum_{l=1}^{r} \left(C_l \ {\bf U}_l^{(1)} \otimes {\bf U}_l^{(2)}\right), \quad
(\mbox{or element-wise:} \quad
\ijindex{f} = \sum_{l=1}^{r} C_l \ {U}_{l, i, ig}^{(1)} {U}_{l, j, jg}^{(2)}),
\end{equation}
where ${\bf U}_l^{(1)} \in \mathbb{R}^{(k+1)N_x}$ and ${\bf U}_l^{(2)} \in \mathbb{R}^{(k+1)N_v}$ can be viewed as approximations to the corresponding grid point values of the basis functions in \eqref{eq: fn1}. Equation \eqref{eq: fn2} can also be viewed as a weighted SVD of the matrix ${\bf f} \in \mathbb{R}^{(k+1)N_x \times (k+1)N_v}$, where the weight
\begin{equation}
\bm{ \omega} = \bm{ \omega}_x \otimes \bm{ \omega}_v
\label{eq: weight}
\end{equation}
with
\[
\bm{ \omega}_{x}\in \mathbb{R}^{(k+1)N_x}, \quad { \omega}_{x, i, ig} = h_{x, i} \omega_{ig}, \quad i = 1, \cdots, N_x, \quad ig = 0, \cdots, k,
\]
\[
\bm{ \omega}_{v}\in \mathbb{R}^{(k+1)N_v}, \quad { \omega}_{v, j, jg} = h_{v, j} \omega_{jg}, \quad j = 1, \cdots, N_v, \quad jg = 0, \cdots, k.
\]
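As a small illustration (the function name and the uniform-cell choice are ours), the nodal grids \eqref{eq: x_grid}-\eqref{eq: v_grid} and the weights $\omega_{x,i,ig}=h_{x,i}\omega_{ig}$ can be assembled as follows; note that the reference weights $\omega_{ig}$ on $[-\frac12,\frac12]$ are half the standard Gauss-Legendre weights on $[-1,1]$.

```python
import numpy as np

def dg_grid(a, b, N, k):
    """Nodal DG grid on [a, b]: (k+1) shifted Gauss-Legendre points per cell,
    together with the global quadrature weights omega_{i,ig} = h_i * omega_ig."""
    xi, wref = np.polynomial.legendre.leggauss(k + 1)  # nodes/weights on [-1, 1]
    edges = np.linspace(a, b, N + 1)                   # uniform cells for simplicity
    h = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pts = (centers[:, None] + 0.5 * h[:, None] * xi).ravel()  # x_{i,ig}
    wts = (h[:, None] * 0.5 * wref).ravel()                   # h_{x,i} * omega_ig
    return pts, wts
```

The resulting rule integrates polynomials up to degree $2k+1$ exactly on each cell, which is what makes the discrete inner products below exact for the nodal DG polynomial space.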
Next, we introduce three basic operations for the discrete weighted inner product spaces: (1) the computation of macroscopic observables; (2) the orthogonal projection of ${\bf f}$ for conservation of macroscopic observables; (3) a weighted singular value truncation.
\begin{itemize}
\item {\bf Macroscopic quantities of ${\bf f}$.}
In order to perform the projection, we first compute the macroscopic quantities of ${\bf f}$, i.e. the discrete macroscopic charge, current and kinetic energy densities ${\boldsymbol \rho}$, ${\bf J}$, ${\boldsymbol \kappa} \in \mathbb{R}^{(k+1)N_x}$, by quadrature:
\begin{align}
\left(\begin{array}{l}
{\boldsymbol \rho}\\
{\bf J}\\
{\boldsymbol \kappa}
\end{array}
\right )
= \sum_{l=1}^{r} C_l
\left
\langle {\bf U}^{(2)}_{l},
\left(\begin{array}{l}
{\bf 1}_v \\
{\bf v}\\
\frac12{\bf v}^2
\end{array}
\right )
\right \rangle_v
\ {\bf U}^{(1)}_l.
\label{eq:rho_j_kappa}
\end{align}
Here the inner product $\langle \cdot, \cdot \rangle_v$ is defined as
\begin{equation}
\label{eq: inner}
\langle {\bf f}, {\bf g} \rangle_v \doteq \sum_{j, jg} f_{j, jg} g_{j, jg} \omega_{v, j,jg}, \quad {\bf f}, {\bf g} \in \mathbb{R}^{(k+1)N_v},
\end{equation}
in analogy with the continuous inner product $\int_{\Omega_v} f(v)g(v)dv$.
\item {\bf An orthogonal projection with preservation of macroscopic densities.}
Following the conservative projection idea in \cite{guo2022lowrank}, we propose to project the kinetic solution ${\bf f}$ onto the subspace
\begin{equation}
\mathcal{N}\doteq \text{span}\{{\bf 1}_v, {\bf v}, {\bf v}^2\},
\end{equation}
where ${\bf 1}_v\in \mathbb{R}^{(k+1)N_v}$ is the vector of all ones, ${\bf v}$ is the $v$-grid \eqref{eq: v_grid}, and ${\bf v}^2 \in \mathbb{R}^{(k+1)N_v}$ is the element-wise square of ${\bf v}$. We use a weight function $w_M(v)= \exp(-v^2/2)$ with exponential decay to ensure proper decay of the projected function as $|v| \to \infty$.
We introduce the weighted inner product and the associated norm as
\begin{equation}
\label{eq: inner_prod_d}
\langle {\bf f}, {\bf g} \rangle_{{\bf w}_M} = \sum_{j, jg} f_{j, jg} g_{j, jg} w_{M, j, jg} \omega_{v, j,jg}, \quad \|{\bf f}\|_{{\bf w}_M} =\sqrt{\langle {\bf f}, {\bf f} \rangle_{{\bf w}_M}},
\end{equation}
where ${\bf w}_M \in \mathbb{R}^{(k+1)N_v}$ with $w_{M, j, jg} = w_M(v_{j, jg})$ and $\omega_{v,j, jg}$ is the quadrature weights for $v$-integration.
Correspondingly, we let
$
l^2_{{\bf w}_M} = \{{\bf f}\in\mathbb{R}^{(k+1)N_v}: \|{\bf f}\|_{{\bf w}_M} < \infty\}.
$
With the weight function, we first scale ${\bf f}$ as
\begin{equation}
\label{eq: rescale}
\tilde{\bf f} = \frac{1}{{\bf w}_M} \star {\bf f} = \sum_{l=1}^{r} \left(C_l \ \ {\bf U}_l^{(1)} \otimes \left(\frac{1}{{\bf w}_M} \star {\bf U}_l^{(2)}\right)\right),
\end{equation}
where $\star$ is the element-wise product in the $v$-dimension.
We perform an orthogonal projection of $\tilde{\bf f}$ with respect to the inner product \eqref{eq: inner_prod_d} onto subspace $\mathcal{N}$, i.e.
\begin{equation}
\label{eq: proj}
\langle P_{\mathcal{N}}(\tilde{\bf f}), {\bf g} \rangle_{{\bf w}_M}
= \langle \tilde{\bf f}, {\bf g} \rangle_{{\bf w}_M},
\quad \forall {\bf g}\in \mathcal{N}.
\end{equation}
It can be shown that ${\bf w}_M\star P_{\mathcal{N}}(\tilde{\bf f})$ preserves the mass, momentum and kinetic energy densities of ${\bf f}$ in the discrete sense.
With the orthogonal projection, a conservative decomposition of ${\bf f}$ \cite{guo2022lowrank} can be performed as
\begin{equation}
\label{eq: f_decom_d}
{\bf f} = {{\bf w}_M} \star (P_{\mathcal{N}}(\tilde{{\bf f}}) + (I-P_{\mathcal{N}})(\tilde{{\bf f}}))
\doteq {{\bf w}_M} \star (\tilde{{\bf f}}_1 + \tilde{{\bf f}}_2)
\doteq {\bf f}_1 + {\bf f}_2,
\end{equation}
where $\mathbf{f}_1$ can be represented as a rank three tensor
\begin{align}\label{eq:f1}
{\bf f}_1 ({\boldsymbol \rho}, {\bf J}, {\boldsymbol \kappa})= & \frac{\boldsymbol \rho}{\|{\bf 1}_v\|_{{\bf w}_M}^2} \otimes ({{\bf w}_M} \star {\bf 1}_v)
+ \frac{\bf J}{\|{\bf v}\|_{{\bf w}_M}^2} \otimes ({{\bf w}_M} \star {\bf v}) + \frac{2 {\boldsymbol \kappa}-c{\boldsymbol \rho}}{\|{\bf v}^2- c {\bf 1}_v\|_{{\bf w}_M}^2} \otimes ({{\bf w}_M} \star ({\bf v}^2- c {\bf 1}_v) ),
\end{align}
where $c=\frac{\langle \bm{1}_v, {\bf v}^2\rangle_{{\bf w}_M}}{\|{\bf 1}_v\|_{{\bf w}_M}^2}$ is computed so that $\{ {\bf 1}_v, {\bf v}, {\bf v}^2- c {\bf 1}_v\}$ forms an orthogonal set of basis vectors, and
${\boldsymbol \rho}$, ${\bf J}$ and ${\boldsymbol \kappa}$ are the discrete mass, momentum and kinetic energy densities of ${\bf f}$ from \eqref{eq:rho_j_kappa}. ${\bf f}_1$ preserves the discrete mass, momentum and kinetic energy densities of ${\bf f}$, while the remainder ${\bf f}_2 = {\bf f} -{\bf f}_1$ has zero discrete mass, momentum and kinetic energy densities.
\item {\bf Weighted SVD procedure with preservation of macroscopic observables.} The remainder part ${\bf f}_2$ in the orthogonal decomposition can be shown to have zero macroscopic mass, momentum and kinetic energy. In order to perform a singular value truncation that removes redundancy in the basis representation while maintaining the zero macroscopic observables, we perform a weighted SVD truncation, where the weights come from the quadrature weights associated with the quadrature nodes, as well as from the weight function $w_M$ evaluated at the quadrature nodes.
A weighted SVD procedure assumes a weighted inner product space $\langle \cdot, \cdot \rangle$ in the following sense:
\begin{equation}
\label{eq: inner_xv}
\langle {\bf f}, {\bf g} \rangle \doteq \sum_{i, ig; j, jg} \ijindex{f} \ijindex{g}\omega_{x, i, ig} \omega_{v, j, jg} w_{M, j, jg}, \quad {\bf f}, {\bf g} \in \mathbb{R}^{(k+1)N_x \times (k+1)N_v}.
\end{equation}
The weighted SVD procedure consists of three steps: first a scaling step with element-wise multiplication by $\frac{1}{\sqrt{\bm{ \omega} \star {\bf w}_M}}$, with $\bm{ \omega}$ in \eqref{eq: weight} and ${\bf w}_M$ as in \eqref{eq: inner_prod_d}, followed by a traditional SVD truncation, and finally a rescaling step with element-wise multiplication by ${\sqrt{\bm{ \omega}\star {\bf w}_M}}$. The associated storage cost is $\mathcal{O}(r N)$, where $N:= \max\{(k+1)N_x, (k+1)N_v\}$; the scaling and rescaling can be performed on the bases in the $x$ and $v$ directions with a cost of $\mathcal{O}(r N)$. We denote this weighted SVD truncation procedure by $\mathcal{T}_{\varepsilon, \bm{ \omega} \star {\bf w}_M}$. In the algorithm, it will be applied to the remainder ${\bf f}_2$ in \eqref{eq: f_decom_d}, i.e. $\mathcal{T}_{\varepsilon, \bm{ \omega} \star {\bf w}_M}({\bf f}_2)$, to realize data sparsity. In summary, we have the following weighted SVD truncation procedure for ${\bf f}_2 \in \mathbb{R}^{(k+1)N_x \times (k+1)N_v}$:
\begin{equation}
\label{eq: weighted_SVD}
\boxed{{\bf f}_2}\stackrel{scaling}{ \Longrightarrow} \boxed{\tilde{\bf f}_2 \doteq \frac{{\bf f}_2}{\sqrt{\bm{ \omega}\star {\bf w}_M}}} \stackrel{truncation}{\Longrightarrow} \boxed{\mathcal{T}_{\varepsilon}(\tilde{\bf f}_2)} \stackrel{rescaling}{ \Longrightarrow} \boxed{\sqrt{\bm{ \omega}\star {\bf w}_M}\star \mathcal{T}_{\varepsilon}(\tilde{\bf f}_2)}
\end{equation}
with the output being
\begin{equation}
\label{eq: weighted_T}
\mathcal{T}_{\varepsilon, \bm{ \omega} \star {\bf w}_M}({\bf f}_2)\doteq \sqrt{\bm{ \omega}\star {\bf w}_M}\star \mathcal{T}_{\varepsilon}(\tilde{\bf f}_2).
\end{equation}
\end{itemize}
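The three operations above can be sketched end to end in a few lines of Python (our illustration; the midpoint stand-in grids, the synthetic test function and the truncation rank are arbitrary choices, not the paper's nodal DG data). The sketch builds the decomposition \eqref{eq: f_decom_d} with ${\bf f}_1$ from \eqref{eq:f1} and applies a weighted truncation in the spirit of \eqref{eq: weighted_T} to ${\bf f}_2$; since every row of the scaled remainder is Euclidean-orthogonal to the scaled moment vectors, truncation at any rank leaves the discrete mass, momentum and kinetic energy densities of ${\bf f}$ unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nv = 24, 32
x = (np.arange(nx) + 0.5) * 2.0 * np.pi / nx   # simple midpoint grids as stand-ins
v = -6.0 + (np.arange(nv) + 0.5) * 12.0 / nv   # for the nodal DG grids/weights
wx = np.full(nx, 2.0 * np.pi / nx)             # quadrature weights in x
wv = np.full(nv, 12.0 / nv)                    # quadrature weights in v
wM = np.exp(-v**2 / 2.0)                       # weight function w_M

# a full-rank test "solution": smooth Maxwellian part plus noise
F = np.outer(1.0 + 0.5 * np.cos(x), np.exp(-v**2 / 2.0)) \
    + 1e-3 * rng.standard_normal((nx, nv))

def moments(F):
    """Discrete mass, momentum and kinetic energy densities via quadrature."""
    return F @ wv, F @ (wv * v), F @ (wv * 0.5 * v**2)

ip = lambda a, b: np.sum(a * b * wM * wv)      # weighted inner product <.,.>_{w_M}
one = np.ones(nv)
c = ip(one, v**2) / ip(one, one)               # orthogonalizes v^2 against 1_v
rho, J, kap = moments(F)
F1 = (np.outer(rho, wM * one) / ip(one, one)
      + np.outer(J, wM * v) / ip(v, v)
      + np.outer(2.0 * kap - c * rho, wM * (v**2 - c)) / ip(v**2 - c, v**2 - c))
F2 = F - F1                                    # remainder: zero discrete moments

# weighted SVD truncation: scale -> truncate -> rescale
sx, sv = np.sqrt(wx), np.sqrt(wv * wM)
U, S, Vt = np.linalg.svd(F2 * sv * sx[:, None])
r = 5                                          # aggressive truncation for illustration
F2t = ((U[:, :r] * S[:r]) @ Vt[:r]) / sv / sx[:, None]
F_new = F1 + F2t
```

Even with this aggressive rank-5 truncation of the remainder, the macroscopic densities of `F_new` match those of `F` to machine precision, which is the discrete analogue of the conservation statement above.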
\begin{rem}
We now summarize by recognizing that there are three different discrete inner product spaces introduced in this subsection: the first, defined by \eqref{eq: inner}, is a discrete analog of the standard $L^2$ inner product in the $v$ direction only, used for computing macroscopic observables; the second, defined by \eqref{eq: inner_prod_d}, is a discrete analog of a weighted inner product space in the $v$ direction, used for the projection; and the third, defined by \eqref{eq: inner_xv}, is a discrete analog of a weighted inner product in the $x$-$v$ directions, used in the weighted SVD truncation of the remainder ${\bf f}_2$ in \eqref{eq: f_decom_d} to realize data sparsity by removing redundancy in the basis representation in each dimension.
\end{rem}
\subsection{LoMaC low rank approach with DG discretization}
In this subsection, we introduce the proposed LoMaC low rank approach with DG discretization. The flow chart of the algorithm is in a similar spirit to the one introduced in \cite{guo2022local}. We outline the scheme flow chart, with special discussion of the nodal DG spatial discretization and the corresponding weighted orthogonal decomposition and weighted SVD truncation.
Below, we assume the solution in the form of \eqref{eq: fn2} with superscript $n$ for the solution at $t^n$.
\begin{enumerate}
\item [Step 0.] {\em Initialization.} We assume that the analytic initial condition can be written as, or approximated by, a linear combination of separable functions; the DG solutions can then be constructed directly from those separable functions at the Gaussian nodal points.
\item [Step 1.] {\em Add basis and obtain an intermediate solution ${\bf f}^{n+1, *}$.}
A second order multi-step discretization of the time derivative in \eqref{vlasov1} gives
\begin{equation}
\label{eq: fn3}
{f}^{n+1, *} = \frac14{f}^{n-2}+\frac34 {f}^{n}- \frac32\Delta t \left(v \partial_x ({f}^n) + E^n \partial_v ({f}^n)\right).
\end{equation}
Here the electric field $E^n$ is solved for by a Poisson solver. Thanks to the tensor-friendly form of the Vlasov equation, and assuming the low rank format of the solutions at $t^{n-2}$ and $t^n$, ${\bf f}^{n+1, *}$ can be represented in the following low rank format:
\begin{align}
\label{eq:lowrankmethod}
{\bf f}^{n+1, *} =& \frac14 \sum_{l=1}^{r^{n-2}} C_l^{n-2} \left( {\bf U}_l^{(1), n-2} \otimes {\bf U}_l^{(2), n-2}\right)
+\frac34 \sum_{l=1}^{r^n} C_l^n \left( {\bf U}_l^{(1), n} \otimes {\bf U}_l^{(2), n}\right) \\
& -\frac32 \Delta t \sum_{l=1}^{r^n}
\left( D_x {\bf U}_l^{(1), n} \otimes {\bf v} \star {\bf U}_l^{(2), n} + {\bf E}^n \star {\bf U}_l^{(1), n} \otimes D_v {\bf U}_l^{(2), n}
\right).
\end{align}
Here, with a slight abuse of notation, ${\bf v} \in\mathbb{R}^{(k+1)N_v}$ denotes the coordinates of $v_{\text{grid}}$ introduced in \eqref{eq: v_grid}. $D_x$ and $D_v$ represent high order spatial differentiations, and $\star$ denotes an element-wise multiplication operation.
For example the discretization of $D_x {\bf U}_l^{(1), n} \otimes {\bf v} \star {\bf U}_l^{(2), n}$ follows
\begin{equation}
D^+_x {\bf U}_l^{(1), n} \otimes {\bf v}^+ \star {\bf U}_l^{(2), n} + D^-_x {\bf U}_l^{(1), n} \otimes {\bf v}^- \star {\bf U}_l^{(2), n},
\end{equation}
where $D^+_x$ and $D^-_x$ are $(k+1)^{th}$ order conservative upwind DG discretizations for positive and negative velocities, respectively, with ${\bf v}^+ = \max({\bf v}, 0)$ and ${\bf v}^-=\min({\bf v}, 0)$. For example, see \eqref{eq: x-der-dg} for the derivative at the $(i, ig)$-th nodal point. Similar comments apply to the $D_v$ operator in ${\bf E}^n \star {\bf U}_l^{(1), n} \otimes D_v {\bf U}_l^{(2), n}$.
\item [Step 2.] {\em Perform a macroscopic conservative decomposition} as in \eqref{eq: f_decom_d}
\begin{equation}
\label{eq: decomp}
{\bf f}^{n+1, *} = {\bf f}_1 + {\bf f}_2.
\end{equation}
Here ${\bf f}_1$ is computed from \eqref{eq:f1} with the macroscopic observables computed as in \eqref{eq:rho_j_kappa};
${\bf f}_2 = {\bf f}^{n+1, *}-{\bf f}_1$ is the remainder term, to which we apply the weighted SVD truncation $\mathcal{T}_{\varepsilon, \bm{\omega} \star {\bf w}_M}({\bf f}_2)$ as in the previous subsection.
\item [Step 3.]{\em Conservative update of macroscopic variables.}
Let $U \doteq (\rho, {J}, {e})^\top$, $F \doteq ({J}, \sigma, {\bf Q})^\top$ and $S = (0, \rho E,0)^\top$, then the macroscopic system \eqref{eq:mass}-\eqref{eq:ener} becomes
\begin{equation}
\label{eq:U}
U_t + F_x = S.
\end{equation}
The numerical solutions for $U$ are denoted as $\boldsymbol\rho^{M}$, ${\bf J}^{M}$, $\boldsymbol\kappa^{M}$, where the superscript $M$ stands for ``Macroscopic variables''.
In the DG setting, they are nodal values of DG solutions of size $(k+1)N_x$, and are computed with a high order nodal DG spatial discretization coupled with the second order SSP multi-step time integrator for system \eqref{eq:U}:
\begin{align}
\label{eq:Uupdate_0}
U_{i, ig}^{n+1} & = \frac14U^{n-2}_{i, ig} + \frac34U^{n}_{i, ig} + \frac32\Delta t (D^{+}_{x, i, ig} {\bf F}^{n, +}_{i,:} + D^{-}_{x, i, ig} {\bf F}^{n, -}_{i,:} + {S}^n_{i, ig})
\end{align}
where $U^{n}_{i, ig} = (\rho_{i, ig}^n, J^{n}_{i, ig} , e^{n}_{i, ig})^\top$ and $S_{i, ig}^n = (0, \rho_{i, ig}^nE_{i, ig}^n,0)^\top$, $i=1,\ldots,N_x$, $ig = 0, \cdots, k$.
The ${\bf F}^{n, \pm} \in \mathbb{R}^{(k+1)N_x}$ are given by the kinetic flux vector splitting scheme \cite{guo2022local} with
\begin{align}
{\bf F}^{n, +}
& = \sum_{l=1}^{r^n} C^{n}_l
\left
\langle {\bf U}^{(2), n}_{l},
\left(\begin{array}{c}
{\bf v}^+\\
({\bf v}^+)^2\\
\frac12 ({\bf v}^+)^3
\end{array}
\right )
\right \rangle_v
\ {\bf U}^{(1), n}_l \\
{\bf F}^{n, -}
& = \sum_{l=1}^{r^n} C^n_l
\left
\langle {\bf U}^{(2), n}_{l},
\left(\begin{array}{c}
{\bf v}^-\\
({\bf v}^-)^2\\
\frac12 ({\bf v}^-)^3
\end{array}
\right )
\right \rangle_v
\ {\bf U}^{(1), n}_l,
\label{eq:Fpm}
\end{align}
where ${\bf v}^+ = \max({\bf v}, 0)$, ${\bf v}^- = \min({\bf v}, 0)$ and the inner product $\langle \cdot, \cdot \rangle_v$ is defined in \eqref{eq: inner}. $D^{\pm}_{x, i, ig}$ are defined in a similar fashion as in \eqref{eq: x-der-dg}, and
$${\bf F}^{n, +}_{i,:} =(F_{i-1,0}^{n,+},\ldots,F_{i-1,k}^{n,+},F_{i,0}^{n,+},\ldots,F_{i,k}^{n,+}),$$
$${\bf F}^{n, -}_{i,:} =(F_{i,0}^{n,-},\ldots,F_{i,k}^{n,-},F_{i+1,0}^{n,-},\ldots,F_{i+1,k}^{n,-}).$$
From the updated $U^{n+1}_{i, ig}$, we can compute
\begin{equation}
\label{eq:kinetic_update}
{\kappa}_{i, ig}^{n+1, M} = {e}_{i, ig}^{n+1, M} - \frac12|E^{n+1, M}_{i, ig}|^2,
\end{equation}
where ${\bf E}^{n+1, M}$ is computed directly from $\boldsymbol \rho^{n+1,M}$ via Poisson's equation using the local DG method \cite{arnold2002unified}. Finally, we construct ${\bf f}^M_1$ according to \eqref{eq:f1}, with the macroscopic observables from this macroscopic update step.
\item [Step 4.] We update the low rank solution as
\begin{equation}
\label{eq: f_update}
{\bf f}^{n+1} = {\bf f}^M_1 + \mathcal{T}_{\varepsilon, {\bm{\omega}} \star {\bf w}_M}({\bf f}_2),
\end{equation}
where ${\bf f}^M_1$ is computed from Step 3 and the weighted SVD truncation operator $\mathcal{T}_{\varepsilon, {\bm{\omega}} \star {\bf w}_M}$ is defined in \eqref{eq: weighted_T}. Here ${\bf f}^M_1$ is used as a correction to ${\bf f}_1$ to ensure local conservation of the mass, momentum and energy densities.
\end{enumerate}
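As a small illustration of the kinetic flux computation \eqref{eq:Fpm} (our sketch; the rank, the factors and the stand-in grid are synthetic), the split fluxes are obtained from the low rank factors by taking $v^{\pm}$-moments of the ${\bf U}^{(2)}_l$ factors only, at $\mathcal{O}(rN)$ cost. Since $({\bf v}^{+})^{m}+({\bf v}^{-})^{m}={\bf v}^{m}$ pointwise for $m\ge 1$, the two split fluxes sum to the unsplit kinetic flux, which gives a simple consistency check.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nv, r = 16, 20, 3
v = -6.0 + (np.arange(nv) + 0.5) * 12.0 / nv   # stand-in nodal v-grid
wv = np.full(nv, 12.0 / nv)                    # v-quadrature weights
C = rng.random(r)                              # low rank data f = sum_l C_l U1_l (x) U2_l
U1 = rng.standard_normal((nx, r))
U2 = rng.standard_normal((nv, r))

def kfvs_fluxes(C, U1, U2):
    """Split macroscopic fluxes F^{n,+/-} (each of shape (nx, 3)) from the
    low rank factors: only v-moments of the U2 factors are needed."""
    vp, vm = np.maximum(v, 0.0), np.minimum(v, 0.0)
    mp = np.stack([vp, vp**2, 0.5 * vp**3], axis=1)  # (nv, 3) split moment weights
    mm = np.stack([vm, vm**2, 0.5 * vm**3], axis=1)
    Fp = U1 @ (C[:, None] * (U2.T @ (wv[:, None] * mp)))
    Fm = U1 @ (C[:, None] * (U2.T @ (wv[:, None] * mm)))
    return Fp, Fm
```

The factored evaluation never forms the full $(k+1)N_x\times(k+1)N_v$ matrix, which is where the low rank cost saving comes from.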
In summary, the proposed LoMaC low rank DG scheme updates the VP solution by first adding basis functions through traditional high order nodal DG discretizations of the spatial/velocity derivatives and an SSP multi-step time integrator. Then we perform an orthogonal decomposition, with respect to a weighted inner product space, for preservation of macroscopic observables. Meanwhile, we update the macroscopic conservation laws using KFVS fluxes for local conservation of the macroscopic mass, momentum and energy densities. Finally, we correct the solution via \eqref{eq: f_update}, so that its macroscopic densities agree with those from the macroscopic update, and apply a weighted SVD truncation to the remainder term to realize optimal data sparsity. Note that Steps 2 and 3 above can be implemented in parallel, i.e. they need not be performed sequentially. We have the following proposition on the local and global macroscopic conservation properties of the proposed scheme.
\begin{prop} (Local mass, momentum and energy conservation.) The proposed LoMaC low rank DG algorithm locally conserves the macroscopic mass, momentum and energy.
\end{prop}
\begin{proof} The proof follows directly from the construction of the algorithm, and from the fact that the DG algorithm for the macroscopic system locally conserves the mass, momentum and energy.
\end{proof}
Finally, we comment on the extension of the proposed LoMaC low rank DG algorithm to more general settings. In a high dimensional setting (e.g. 2D2V), the above DG algorithm can be generalized using the hierarchical Tucker (HT) format as in \cite{guo2022local}. If the mesh for the spatial discretization comes from a tensor product of 1D discretizations, then the algorithm can be directly generalized following the steps in \cite{guo2022local}, but with DG discretizations of the spatial/velocity derivatives and a weighted inner product space on the DG nodal solutions. We will not repeat the details, but refer to \cite{guo2022local}. Alternatively, one could consider nodal DG solutions on an unstructured mesh for the spatial dimensions, for flexibility in geometry and boundary conditions, and use an HT dimension tree with full rank in the spatial dimensions but low rank between the spatial and velocity dimensions and within the velocity dimensions. Further, it is possible to use DG for the spatial discretization for compact boundary treatment and spectral methods for high order accuracy in the velocity directions. A similar LoMaC property can be achieved for the corresponding high dimensional algorithms.
\section{Conclusion}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this paper, we proposed a LoMaC low rank tensor approach with nodal DG discretization for performing high dimensional deterministic Vlasov simulations.
The introduction of DG and nodal DG discretizations opens up the potential for low rank tensor algorithms to use general nonsmooth, nonuniform or unstructured meshes and to handle complex boundary conditions. The local macroscopic conservation property, realized by a macroscopic conservative projection and correction of the kinetic solution, also preserves the total mass, momentum and energy globally at the discrete level with an explicit scheme.
The algorithm is extended to the 2D2V VP system by a hierarchical Tucker structure with full rank (no reduction) in the physical space and low rank reduction in the phase space, as well as in the linkage between the phase and physical spaces. Future work includes the extension to unstructured meshes and the resolution of complex boundary conditions arising from applications.
\section{Introduction}
Numerical simulation of the Vlasov-Poisson (VP) system plays a fundamental role in understanding complex dynamics of plasma and has a wide range of applications in science and engineering, such as fusion energy. The well-known challenges for VP simulations include the high dimensionality of the phase space, resolution of multiple scales in time and in phase space, preservation of physical invariants, among many others. In this paper, we develop a novel Local Macroscopic Conservative (LoMaC) low rank tensor method with discontinuous Galerkin (DG) discretization. The LoMaC property means that the algorithm can conserve locally densities of macroscopic observables at the discrete level.
This paper is a generalization of the LoMaC low rank tensor method with finite difference discretization in \cite{guo2022local}. In the introduction of \cite{guo2022local}, we discussed the application background and existing work on low rank approaches for time-dependent dynamics. Below we only highlight several key ingredients that realize the accuracy, robustness, computational efficiency and local conservation of macroscopic observables of the newly proposed algorithm.
\begin{enumerate}
\item {\em Low rank representation of solutions and high order discretizations \cite{guo2022low}.} In this low rank approach, the solution is written in the form of a Schmidt decomposition, where the bases in each dimension are dynamically updated from a high order discretization of the PDE, together with a singular-value-type truncation for sparsity in the function representation and efficiency in computational complexity. The original idea is presented in \cite{guo2022low}. In this paper, we generalize the algorithm to nodal DG type spatial discretizations on tensor products of computational meshes. The nodal DG differentiation operator, as well as the weights in the discrete inner product space, will depend on the mesh spacing and the associated Gaussian quadrature nodes in each computational cell. The new method allows flexibility in mesh spacing, e.g. using a nonsmooth, nonuniform mesh, yet achieves high order spatial accuracy. Meanwhile, the method takes advantage of the compactness of the DG discretization in boundary treatment. With the weighted inner product space, we perform a scaling procedure, followed by a standard SVD truncation, and finally a rescaling procedure to remove redundancy for data sparsity. For time discretization, we apply the strong-stability-preserving (SSP) multi-step time discretizations \cite{gottlieb2011strong}.
\item {\em Simultaneous update of macroscopic mass, momentum and energy in a locally conservative manner.} This step is the key novelty in \cite{guo2022local} for locally preserving mass, momentum and even energy in an explicit scheme. In this paper, we use a nodal DG scheme for the macroscopic conservation laws, with the numerical fluxes obtained by taking moment integrations of the kinetic probability density function via the kinetic flux vector splitting (KFVS) fluxes \cite{mandal1994kinetic, xu1995gas}. Meanwhile, the updated macroscopic mass, momentum and energy are used to correct the kinetic solution via a macroscopic conservative projection. Figure~\ref{f1} from \cite{guo2022local} shows the interplay between the numerical solutions of the kinetic model and the corresponding macroscopic system: the kinetic solution $f$ provides the kinetic fluxes to advance the macroscopic system, while the updated macroscopic mass, momentum and energy are used to perform a conservative correction of the kinetic solution $f$ via a macroscopic conservative projection.
\begin{figure}[h!]
\centering
{\includegraphics[height=20mm]{kineticmacro.eps}}
\caption{Illustration of LoMaC scheme.}
\label{f1}
\end{figure}
The newly developed low rank DG algorithm is theoretically proved and numerically verified to be locally mass, momentum and energy conservative.
\item {\em Hierarchical Tucker (HT) representation of high dimensional tensors.} We further generalize the algorithm to high-dimensional problems with the HT decomposition, which attains a storage complexity that scales linearly with the dimension and polynomially with the rank, mitigating the curse of dimensionality.
The HT format \cite{hackbusch2009new,grasedyck2010hierarchical} is motivated by the classical Tucker format \cite{tucker1966some,de2000multilinear}, but considers a dimension tree and takes advantage of the hierarchy of nested subspaces. A hierarchical high order singular value decomposition (HOSVD) \cite{hackbusch2009new,grasedyck2010hierarchical} can be performed to strike a balance between data complexity and numerical feasibility. In this paper, we use the same dimension tree as in our earlier work \cite{guo2022local} for the 2D2V Vlasov system, with full rank in the physical space, low rank in the velocity space, and low rank between the physical and velocity spaces.
\end{enumerate}
As far as we are aware, this is the first paper coupling the DG discretization with the low rank tensor framework for kinetic simulations. It combines the merits of the DG discretization with those of the low rank tensor approach: the DG method offers flexibility and robustness in using nonuniform or unstructured meshes, in treating complex boundary conditions, and in realizing superconvergence properties in long time simulations, while the low rank tensor approach reduces the computational complexity. Although we have not extended the algorithm to unstructured triangular meshes or to complex boundary conditions here, this paper serves as a first step in this direction and provides a proof of concept of the potential of the algorithm for complex and high dimensional problems.
This paper is organized as follows. In Section 2, we introduce the kinetic Vlasov model and the corresponding macroscopic conservation laws. Section 3 is the main section introducing the proposed algorithm: we introduce the DG and nodal DG discretizations in Section 3.1; we discuss the low rank framework on tensor products of nodal DG meshes, the weighted inner product spaces, and the corresponding macroscopic conservative projection and weighted SVD truncation in Section 3.2; we propose the LoMaC low rank DG algorithm in Section 3.3, with remarks on further generalizations of the algorithm to high dimensional problems with the HT format and to unstructured meshes. In Section 4, we present numerical results on an extensive set of 1D1V and 2D2V problems to demonstrate the efficacy of the proposed algorithm. We conclude in Section 5.
\section{Numerical results}\label{sec:numerical}
In this section, we present a collection of numerical examples to demonstrate the efficacy of the proposed LoMaC low rank tensor DG methods.
The second order SSP multi-step method is employed for time integration. We numerically verify the LoMaC property by tracking the time evolution of total mass, total momentum and total energy.
\subsection{Linear advection: convergence and superconvergence}
\begin{exa}\label{ex:linear} We solve the following simple 2D linear advection problem
$$
u_t + u_{x_1} + u_{x_2} = 0,\quad x_1,\, x_2 \in [0,2\pi],
$$
with periodic boundary conditions. We choose the initial condition $u(x_1,x_2,t=0) = \sin(x_1+x_2)$, and the exact solution is known as $$u(x_1,x_2,t) = \sin(x_1+x_2-2t),$$
which is smooth and stays very low rank over time. We make use of this example to investigate the convergence and superconvergence of the proposed low rank DG method. It is well known that the full grid DG solution is superconvergent in the negative-order norm with order $2k+1$, based on which the DG solution over a translation invariant grid can be post-processed so that the convergence order is enhanced from $k+1$ to $2k+1$ in the $L^2$ norm \cite{cockburn2003enhanced}. In the simulation, we let $k=1$ and employ a set of uniform meshes with $N_{x_1}=N_{x_2}$. The time step is chosen as $\Delta t= \left(\frac{h_x}{3} \right)^{1.5}$ to minimize the effect of temporal errors. The truncation threshold is set to be $\varepsilon=10^{-4}$. We solve the problem up to $t=1$. At the end of the computation, we post-process the low rank DG solutions by convolving the bases ${\bf U}^{(1)}$ and ${\bf U}^{(2)}$ with the kernel given in \cite{cockburn2003enhanced}. The numerical results are summarized in Table \ref{tb:linear}. It is observed that the low rank solution before post-processing is second order accurate ($k+1$); after post-processing the low rank DG solution, the accuracy is enhanced to third order ($2k+1$). The CPU time approximately scales by a factor of $2^{1.5}$ with each mesh refinement, indicating that the curse of dimensionality is avoided for this problem. In Figure \ref{fig:linear_error}, we plot the errors before and after post-processing. It is observed that the errors of the low rank DG solutions are highly oscillatory before post-processing, implying that the solution is superconvergent in the negative-order norm. After post-processing, the error plots become much smoother, and the magnitude is reduced significantly. Lastly, we plot the time histories of the ranks of the DG solution in Figure \ref{fig:linear_rank}, and we can see that the representation rank of the solution stays at two during the time evolution for all sets of meshes used.
The numerical evidence indicates that the proposed low rank DG method, with its procedure for adding and removing basis functions, preserves the superconvergence property of the standard DG method under the low rank truncation setting, provided the solution stays low rank.
\begin{table}[!hbp]
\centering
\caption{Example \ref{ex:linear}. $t=1$. $k=1$. Convergence study. }
\label{tb:linear}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$N_{x_1}\times N_{x_2}$} & \multicolumn{4}{|c|} {Before post-processing} & \multicolumn{4}{|c|} {After post-processing} & \multirow{2}{*}{CPU} \\\cline{2-9}
& $L^2$ error & order & $L^\infty$ error & order & $L^2$ error & order & $L^\infty$ error & order &\\\hline
$16\times16$ & 1.59E-01 & & 5.55E-02 & & 3.24E-02 & & 7.33E-03 & &0.28s \\\hline
$32\times32$ & 3.73E-02 & 2.09 & 1.34E-02 & 2.05 & 4.20E-03 & 2.95 & 9.48E-04 & 2.95& 0.45s \\\hline
$64\times64$ & 9.03E-03 & 2.05 & 3.29E-03 & 2.03 & 5.35E-04 & 2.97 & 1.21E-04 & 2.97& 1.22s \\\hline
$128\times128$ & 2.22E-03 & 2.02 & 8.13E-04 & 2.02 & 6.75E-05 & 2.99 & 1.52E-05 & 2.99&3.06s \\\hline
\end{tabular}
\end{table}
\begin{figure}[h!]
\centering
\subfigure[$N_{x_1}\times N_{x_2}=16\times16$]{\includegraphics[height=40mm]{linear_error_16.eps}}
\subfigure[$N_{x_1}\times N_{x_2}=32\times32$]{\includegraphics[height=40mm]{linear_error_32.eps}}
\subfigure[$N_{x_1}\times N_{x_2}=64\times64$]{\includegraphics[height=40mm]{linear_error_64.eps}}
\subfigure[$N_{x_1}\times N_{x_2}=16\times16$]{\includegraphics[height=40mm]{linear_error_16_pp.eps}}
\subfigure[$N_{x_1}\times N_{x_2}=32\times32$]{\includegraphics[height=40mm]{linear_error_32_pp.eps}}
\subfigure[$N_{x_1}\times N_{x_2}=64\times64$]{\includegraphics[height=40mm]{linear_error_64_pp.eps}}
\caption{Example \ref{ex:linear}. Error plots before and after post-processing at $t=1$. $k=1$. $\varepsilon=10^{-4}$.}
\label{fig:linear_error}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[height=50mm]{linear_rank.eps}
\caption{Example \ref{ex:linear}. The time evolution of ranks of the DG solutions. $k=1$. $\varepsilon=10^{-4}$.}
\label{fig:linear_rank}
\end{figure}
\end{exa}
\subsection{1D1V Vlasov-Poisson system}
\begin{exa}\label{ex:forced}(A forced VP system \cite{de2012high}.) We simulate the following forced VP system
\begin{align*}
\frac{\partial f}{\partial t} + vf_x + Ef_v &= \psi(x,v,t),\\
E_x(x,t) &= \rho(x,t) - \sqrt{\pi},
\end{align*}
where the source $\psi$ is defined as
$$
\psi(x, v, t)=\left(\left(\left(4 \sqrt{\pi}+2\right) v-\left(2 \pi+\sqrt{\pi}\right)\right) \sin (2 x-2 \pi t)
+\sqrt{\pi}(\frac14-v) \sin (4 x-4 \pi t)\right)\exp\left(-\frac{(4 v-1)^{2}}{4}\right)
$$
so that the system has the exact solution
$$
\begin{aligned}
f(x, v, t) &=\left(2-\cos (2 x-2 \pi t)\right) \exp\left(-\frac{(4 v-1)^{2}}{4}\right), \\
E(x, t) &=-\frac{\sqrt{\pi}}{4} \sin (2 x-2 \pi t).
\end{aligned}
$$
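One can spot-check that this pair indeed solves the forced Vlasov equation; the sketch below (an illustrative verification, not part of the method) evaluates the residual $f_t + v f_x + E f_v - \psi$ at random phase-space points, using central finite differences for the derivatives:

```python
import numpy as np

sp = np.sqrt(np.pi)

def f(x, v, t):
    return (2 - np.cos(2*x - 2*np.pi*t)) * np.exp(-(4*v - 1)**2 / 4)

def E(x, t):
    return -sp/4 * np.sin(2*x - 2*np.pi*t)

def psi(x, v, t):
    u = 2*x - 2*np.pi*t          # note sin(4x - 4*pi*t) = sin(2u)
    return (((4*sp + 2)*v - (2*np.pi + sp)) * np.sin(u)
            + sp*(0.25 - v) * np.sin(2*u)) * np.exp(-(4*v - 1)**2 / 4)

rng = np.random.default_rng(0)
x, v, t = rng.uniform(-2, 2, size=(3, 200))
h = 1e-5                          # central differences, O(h^2) accurate
f_t = (f(x, v, t + h) - f(x, v, t - h)) / (2*h)
f_x = (f(x + h, v, t) - f(x - h, v, t)) / (2*h)
f_v = (f(x, v + h, t) - f(x, v - h, t)) / (2*h)
res = f_t + v*f_x + E(x, t)*f_v - psi(x, v, t)
assert np.max(np.abs(res)) < 1e-6
```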
Periodic boundary conditions are imposed. Note that the forced system satisfies the following macroscopic system
\begin{align*}
\partial_{t} \rho + {\bf J}_x &= \frac{\sqrt{\pi}}{4}(1-4\pi)\sin(2x-2\pi t)\\
\partial_{t} {\bf J} +\bm{\sigma}_x&= \rho E + \frac{\sqrt{\pi}}{16}(3+ 4\sqrt{\pi} -4 \pi)\sin(2x-2\pi t) -\frac{\pi}{16}\sin(4x-4\pi t)\\
\partial_{t} e + \mathbf{Q}_x& = \frac{\sqrt{\pi}}{128}(7 + 8\sqrt{\pi}-12\pi)\sin(2x-2\pi t)-\frac{\pi}{64}\sin(4x-4\pi t) \\
& + \frac{\sqrt{\pi}}{8}\left( 2 - (1-4\pi)\cos(2x-2\pi t) \right)E.
\end{align*}
It is easily verified that the total mass, total momentum, and total energy of the system are conserved. For this example, we test the accuracy of the proposed low rank DG method and verify its ability to conserve the physical invariants. In the simulation, we set the truncation threshold $\varepsilon=10^{-3}$ and the computational domain $[-\pi,\pi]\times[-L_v,L_v]$ with $L_v=4$. The convergence study is summarized in Table \ref{tb:forced}, and $(k+1)$-th order of convergence is observed for both $L^2$ and $L^\infty$ errors. To showcase the flexibility of DG meshes, we perturb the uniform mesh randomly by up to 10\%. In Figures \ref{fig:forced1d_invar_k1}-\ref{fig:forced1d_invar_k2}, we report the time histories of the numerical ranks of the low rank DG solutions together with the relative deviations of the total mass, total momentum, and total energy for $k=1$ and $k=2$, respectively. It is observed that for $k=1$, the ranks of the numerical solution over a coarser mesh ($N_x\times N_v=16\times32$) are higher than those over a finer mesh and also increase over time, which is attributed to the large DG discretization error. For $k=2$, the ranks of the numerical solutions remain four during the time evolution. Hence, it is advantageous to employ a higher order DG discretization for this problem. Here, rank four consists of a rank-one contribution from the exact solution and a rank-three contribution from the conservative projection for mass, momentum, and energy. We observe that the total mass, total momentum, and total energy are conserved up to machine precision for both $k=1$ and $k=2$ with all mesh sizes, indicating that the LoMaC property of the proposed method is independent of the degree $k$ and mesh size used.
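The conservation claim for the mass and momentum equations can be spot-checked numerically: with $\rho = \int f\,dv = \frac{\sqrt{\pi}}{2}\left(2-\cos(2x-2\pi t)\right)$, the Poisson relation holds pointwise and the right-hand-side source terms integrate to zero over the periodic domain (the energy balance additionally involves the field energy, and is omitted here). A quadrature sketch, for illustration only:

```python
import numpy as np

sp, t = np.sqrt(np.pi), 0.37                  # any time instant
x = np.linspace(-np.pi, np.pi, 2049)[:-1]     # periodic trapezoidal nodes
dx = x[1] - x[0]
u = 2*x - 2*np.pi*t
rho = sp/2 * (2 - np.cos(u))                  # rho = int f dv
E = -sp/4 * np.sin(u)
E_x = -sp/2 * np.cos(u)
assert np.allclose(E_x, rho - sp)             # Poisson: E_x = rho - sqrt(pi)

# source terms of the mass and momentum equations integrate to zero
mass_src = sp/4 * (1 - 4*np.pi) * np.sin(u)
mom_src = rho*E + sp/16*(3 + 4*sp - 4*np.pi)*np.sin(u) - np.pi/16*np.sin(2*u)
assert abs(np.sum(mass_src) * dx) < 1e-10
assert abs(np.sum(mom_src) * dx) < 1e-10
```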
\begin{table}[!hbp]
\centering
\caption{Example \ref{ex:forced}. $t=1$. Convergence study. The non-uniform meshes are obtained by randomly perturbing the element boundaries of uniform meshes up to 10\%.}
\label{tb:forced}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$N_x\times N_v$} & \multicolumn{4}{|c|} {$k=1$} & \multicolumn{4}{|c|} {$k=2$} \\\cline{2-9}
& $L^2$ error & order & $L^\infty$ error & order & $L^2$ error & order & $L^\infty$ error & order\\\hline
$16\times32$ & 1.37E-01 & & 1.33E-01 & & 6.07E-03 & & 9.73E-03 & \\\hline
$32\times64$ & 3.83E-02 & 1.83 & 3.32E-02 & 2.01 & 9.15E-04 & 2.73 & 1.52E-03 & 2.68 \\\hline
$64\times128$ & 4.33E-03 & 3.15 & 6.31E-03 & 2.39 & 1.07E-04 & 3.10 & 1.91E-04 & 2.99 \\\hline
$128\times256$ & 1.12E-03 & 1.95 & 1.57E-03 & 2.01 & 1.23E-05 & 3.11 & 2.26E-05 & 3.08 \\\hline
\end{tabular}
\end{table}
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG2_rank_1e3.eps}}
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG2_mass_1e3.eps}}
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG2_mom_1e3.eps}}
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG2_ener_1e3.eps}}
\caption{Example \ref{ex:forced}. The time evolution of ranks of the numerical solutions (a), relative deviation of total mass (b), total momentum (c), and total energy (d). $k=1$. $\varepsilon=10^{-3}$.}
\label{fig:forced1d_invar_k1}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG3_rank_1e3.eps}}
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG3_mass_1e3.eps}}
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG3_mom_1e3.eps}}
\subfigure[]{\includegraphics[height=50mm]{forced1d_fullconser_DG3_ener_1e3.eps}}
\caption{Example \ref{ex:forced}. The time evolution of ranks of the numerical solutions (a), relative deviation of total mass (b), total momentum (c), and total energy (d). $k=2$. $\varepsilon=10^{-3}$.}
\label{fig:forced1d_invar_k2}
\end{figure}
\end{exa}
\begin{exa}
\label{ex:weak1d}(Weak Landau damping.) We simulate the weak Landau damping test with initial condition
\begin{equation}
\label{eq:landau1d}
f(x,v,t=0) =\frac{1}{\sqrt{2 \pi}} \left(1+\alpha \cos \left(k x\right)\right)\exp\left(-\frac{v^2}{2}\right),
\end{equation}
where $\alpha=0.01$ and $k=0.5$. The computational domain is set to be $[0,L_x]\times[-L_v,L_v]$ with $L_x=2\pi/k$ and $L_v=6$. We set $\varepsilon=10^{-5}$ for truncation. In the simulation, we employ a set of non-uniform meshes obtained by randomly perturbing uniform meshes up to 10\%. In Figure \ref{fig:weak1d_elec}, we report the time histories of the electric energy and the numerical ranks of the low rank DG solutions for $k=1$ and $k=2$. It is observed that the low rank method is able to predict the correct damping rate of the electric energy. In addition, the method with larger $k$ over a finer mesh can better track the damping phenomenon with lower numerical ranks, justifying the computational advantages of a higher order DG discretization. In Figure \ref{fig:weak1d_invar}, we further report the time histories of the relative deviations of the total mass and total energy, together with the absolute deviation of the total momentum. We can see that the method conserves the total mass, momentum, and energy up to machine precision.
\begin{figure}[h!]
\centering
\subfigure[$k=1$]{\includegraphics[height=60mm]{weak1d_DG2_elec.eps}}
\subfigure[$k=2$]{\includegraphics[height=60mm]{weak1d_DG3_elec.eps}}
\subfigure[$k=1$]{\includegraphics[height=60mm]{weak1d_DG2_rank.eps}}
\subfigure[$k=2$]{\includegraphics[height=60mm]{weak1d_DG3_rank.eps}}
\caption{Example \ref{ex:weak1d}. The time evolution of electric energy (a, b) and ranks of the low rank DG solutions (c, d). $\varepsilon=10^{-5}$.}
\label{fig:weak1d_elec}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[$k=1$]{\includegraphics[height=40mm]{weak1d_DG2_mass.eps}}
\subfigure[$k=1$]{\includegraphics[height=40mm]{weak1d_DG2_mom.eps}}
\subfigure[$k=1$]{\includegraphics[height=40mm]{weak1d_DG2_ener.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{weak1d_DG3_mass.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{weak1d_DG3_mom.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{weak1d_DG3_ener.eps}}
\caption{Example \ref{ex:weak1d}. The time evolution of relative deviation of total mass (a, d), absolute deviation of total momentum (b, e), and relative deviation of total energy (c, f). $\varepsilon=10^{-5}$.}
\label{fig:weak1d_invar}
\end{figure}
\end{exa}
\begin{exa}
\label{ex:strong1d}(Strong Landau damping.) For this example, we simulate another benchmark problem, namely the strong Landau damping test. The initial condition is the same as \eqref{eq:landau1d} but with parameters
$\alpha=0.5$ and $k=0.5$. Unlike the previous example, due to the large perturbation the electric energy decays at first and then increases until reaching saturation. The computational domain is set to be the same as in the previous example. In the simulation, the truncation threshold is set to $\varepsilon=10^{-3}$, and we employ non-uniform meshes obtained by randomly perturbing uniform meshes up to 10\%. We summarize the simulation results in Figures \ref{fig:strong1d_elec}-\ref{fig:strong1d_invar}.
It is observed that the proposed low rank DG method can adapt the numerical ranks to efficiently capture the Vlasov dynamics. Furthermore, the method conserves the physical invariants up to machine precision, as expected.
\begin{figure}[h!]
\centering
\subfigure[$k=1$]{\includegraphics[height=60mm]{strong1d_DG2_elec.eps}}
\subfigure[$k=2$]{\includegraphics[height=60mm]{strong1d_DG3_elec.eps}}
\subfigure[$k=1$]{\includegraphics[height=60mm]{strong1d_DG2_rank.eps}}
\subfigure[$k=2$]{\includegraphics[height=60mm]{strong1d_DG3_rank.eps}}
\caption{Example \ref{ex:strong1d}. The time evolution of electric energy (a, b) and ranks of the low rank DG solutions (c, d). $\varepsilon=10^{-3}$.}
\label{fig:strong1d_elec}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[$k=1$]{\includegraphics[height=40mm]{strong1d_DG2_mass.eps}}
\subfigure[$k=1$]{\includegraphics[height=40mm]{strong1d_DG2_mom.eps}}
\subfigure[$k=1$]{\includegraphics[height=40mm]{strong1d_DG2_ener.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{strong1d_DG3_mass.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{strong1d_DG3_mom.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{strong1d_DG3_ener.eps}}
\caption{Example \ref{ex:strong1d}. The time evolution of relative deviation of total mass (a, d), absolute deviation of total momentum (b, e), and relative deviation of total energy (c, f). $\varepsilon=10^{-3}$.}
\label{fig:strong1d_invar}
\end{figure}
\end{exa}
\begin{exa}\label{ex:bumpontail} (Bump on tail.) As the last 1D1V test, we simulate the bump-on-tail problem with the initial condition
\begin{equation}
\label{eq:bump1d}
f(x,v,t=0) = \left(1+\alpha \cos \left(k x\right)\right)\left(n_p\exp\left(-\frac{v^2}{2}\right) +n_b\exp\left(-\frac{(v-u)^2}{2v_{t}}\right) \right),
\end{equation}
where $\alpha=0.04$, $k=0.3$, $n_{p}=\frac{9}{10 \sqrt{2 \pi}}$, $n_{b}=\frac{2}{10 \sqrt{2 \pi}}$, $u=4.5$, and $v_{t}=0.5$. The weight function $w(v) = \exp(-\frac{v^2}{7})$ is chosen. The domain is set to be $[0,L_x]\times[-L_v,L_v]$ with $L_x=2\pi/k$ and $L_v=13$, and the truncation threshold is chosen as $\varepsilon=10^{-5}$. We simulate the problem up to $t=30$ and plot in Figure \ref{fig:bumpontail_contour} the contours of the low rank DG solutions with a set of non-uniform meshes obtained by perturbing uniform meshes by up to 10\%. The results are consistent with those reported in the literature, and a method with larger $k$ over a finer mesh provides better resolution, as expected. In Figures \ref{fig:bump1d_elec}-\ref{fig:bump1d_invar}, we report the time histories of the electric energy and the numerical ranks, together with the relative deviations of the total mass, total momentum, and total energy. The observation is similar to the strong Landau damping test: the filamentation structures are well captured by the proposed method with rank adaptivity, and the physical invariants are conserved up to machine precision.
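The initial condition \eqref{eq:bump1d} is separable in $x$ and $v$, so the starting representation rank is one; sampling it on a grid and inspecting the singular values confirms this (an illustrative check, with the grid sizes chosen arbitrarily):

```python
import numpy as np

alpha, k, u, vt = 0.04, 0.3, 4.5, 0.5
n_p, n_b = 9/(10*np.sqrt(2*np.pi)), 2/(10*np.sqrt(2*np.pi))
x = np.linspace(0, 2*np.pi/k, 128, endpoint=False)
v = np.linspace(-13, 13, 257)
X = 1 + alpha*np.cos(k*x)                                    # spatial factor
V = n_p*np.exp(-v**2/2) + n_b*np.exp(-(v - u)**2/(2*vt))     # velocity factor
f0 = np.outer(X, V)                                          # f(x,v,0) on the grid
s = np.linalg.svd(f0, compute_uv=False)
assert s[1] / s[0] < 1e-12                                   # numerically rank one
```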
\begin{figure}[h!]
\centering
\subfigure[$k=1$, $N_x\times N_v=16\times32$]{\includegraphics[height=40mm]{bumpDG2_16_1E5.eps}}
\subfigure[$k=1$, $N_x\times N_v=32\times64$]{\includegraphics[height=40mm]{bumpDG2_32_1E5.eps}}
\subfigure[$k=1$, $N_x\times N_v=64\times128$]{\includegraphics[height=40mm]{bumpDG2_64_1E5.eps}}
\subfigure[$k=2$, $N_x\times N_v=16\times32$]{\includegraphics[height=40mm]{bumpDG3_16_1E5.eps}}
\subfigure[$k=2$, $N_x\times N_v=32\times64$]{\includegraphics[height=40mm]{bumpDG3_32_1E5.eps}}
\subfigure[$k=2$, $N_x\times N_v=64\times128$]{\includegraphics[height=40mm]{bumpDG3_64_1E5.eps}}
\caption{Example \ref{ex:bumpontail}. Contour plots of the low rank DG solutions at $t=30$. $\varepsilon=10^{-5}$.}
\label{fig:bumpontail_contour}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[$k=1$]{\includegraphics[height=60mm]{bump1d_DG2_elec.eps}}
\subfigure[$k=2$]{\includegraphics[height=60mm]{bump1d_DG3_elec.eps}}
\subfigure[$k=1$]{\includegraphics[height=60mm]{bump1d_DG2_rank.eps}}
\subfigure[$k=2$]{\includegraphics[height=60mm]{bump1d_DG3_rank.eps}}
\caption{Example \ref{ex:bumpontail}. The time evolution of electric energy (a, b) and ranks of the low rank DG solutions (c, d). $\varepsilon=10^{-5}$.}
\label{fig:bump1d_elec}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[$k=1$]{\includegraphics[height=40mm]{bump1d_DG2_mass.eps}}
\subfigure[$k=1$]{\includegraphics[height=40mm]{bump1d_DG2_mom.eps}}
\subfigure[$k=1$]{\includegraphics[height=40mm]{bump1d_DG2_ener.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{bump1d_DG3_mass.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{bump1d_DG3_mom.eps}}
\subfigure[$k=2$]{\includegraphics[height=40mm]{bump1d_DG3_ener.eps}}
\caption{Example \ref{ex:bumpontail}. The time evolution of relative deviation of total mass (a, d), total momentum (b, e), and total energy (c, f). $\varepsilon=10^{-5}$.}
\label{fig:bump1d_invar}
\end{figure}
\end{exa}
\subsection{2D2V Vlasov-Poisson system}
\begin{exa} \label{ex:weak2d} (Weak Landau damping.) We consider the 2D2V version of weak Landau damping. The initial
condition is
\begin{equation}
\label{eq:weak}
f({\bf x},{\bf v},t=0) =\frac{1}{(2 \pi)^{d / 2}} \left(1+\alpha \sum_{m=1}^{d} \cos \left(k x_{m}\right)\right)\exp\left(-\frac{|{\bf v}|^2}{2}\right),
\end{equation}
where $d=2$, $\alpha=0.01$, and $k=0.5$. We set the computational domain as $[0,L_x]^2\times[-L_v,L_v]^2$, where $L_x=\frac{2\pi}{k}$ and $L_v=6$, and the truncation threshold $\varepsilon=10^{-4}$. As with the 1D1V case, the electric energy will decay exponentially fast over time. To mitigate the curse of dimensionality, we represent the four dimensional solution in the third order HT tensor format, without further decomposition in the spatial directions.
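Note that the initial condition \eqref{eq:weak} itself admits an exact rank-three separable representation, $f_0 = c\left(1\cdot 1 + \alpha\cos(kx_1)\cdot 1 + 1\cdot\alpha\cos(kx_2)\right)g(v_1)g(v_2)$ with $g(v)=e^{-v^2/2}$ and $c=(2\pi)^{-d/2}$. The sketch below verifies this on a coarse grid (illustrative only; grid sizes are arbitrary):

```python
import numpy as np

alpha, k, d = 0.01, 0.5, 2
c = 1/(2*np.pi)**(d/2)
x = np.linspace(0, 2*np.pi/k, 16, endpoint=False)
v = np.linspace(-6, 6, 33)
g, one, cx = np.exp(-v**2/2), np.ones_like(x), np.cos(k*x)

# rank-3 sum of outer products of 1D factors
terms = [(one, one), (alpha*cx, one), (one, alpha*cx)]
f_cp = c*sum(np.einsum('a,b,c,d->abcd', t1, t2, g, g) for t1, t2 in terms)

# direct evaluation on the tensor-product grid
X1, X2, V1, V2 = np.meshgrid(x, x, v, v, indexing='ij')
f_direct = c*(1 + alpha*(np.cos(k*X1) + np.cos(k*X2)))*np.exp(-(V1**2 + V2**2)/2)
assert np.allclose(f_cp, f_direct)
```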
In Figures \ref{fig:weak2d_elec_con_k1}-\ref{fig:weak2d_elec_con_k2}, we report the time evolution of the electric energy, the hierarchical ranks of the numerical solution, and the relative deviations of the total mass and energy, together with the absolute deviations of the total momentum components $J_1$ and $J_2$. It is known that the solution possesses low rank structures in phase space, and hence we expect the proposed low rank DG method to efficiently avoid the curse of dimensionality. The CPU cost of the simulation with meshes $16^2\times32^2$, $32^2\times64^2$, and $64^2\times128^2$ is 550s, 1092s, and 3047s for $k=1$, and 556s, 1143s, and 4435s for $k=2$, with a serial implementation. Furthermore, the LoMaC low rank DG method conserves the physical invariants up to machine precision.
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG2_elec.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG2_rank.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG2_mass.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG2_mom1.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG2_mom2.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG2_ener.eps}}
\caption{Example \ref{ex:weak2d}. The time evolution of electric energy (a), hierarchical ranks of the numerical solution of mesh size $N^2_x\times N^2_v=64^2\times128^2$ (b), relative deviation of total mass (c), absolute deviation of total momentum $J_1$ (d) and total momentum $J_2$ (e), and relative deviation of total energy (f). $\varepsilon=10^{-4}$. $k=1$. }
\label{fig:weak2d_elec_con_k1}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG3_elec.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG3_rank.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG3_mass.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG3_mom1.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG3_mom2.eps}}
\subfigure[]{\includegraphics[height=40mm]{weak2d_DG3_ener.eps}}
\caption{Example \ref{ex:weak2d}. The time evolution of electric energy (a), hierarchical ranks of the numerical solution of mesh size $N^2_x\times N^2_v=64^2\times128^2$ (b), relative deviation of total mass (c), absolute deviation of total momentum $J_1$ (d) and total momentum $J_2$ (e), and relative deviation of total energy (f). $\varepsilon=10^{-4}$. $k=2$. }
\label{fig:weak2d_elec_con_k2}
\end{figure}
\end{exa}
\begin{exa} \label{ex:two2d}(Two-stream instability.) The last example is the 2D2V two-stream instability with initial condition
\begin{equation}
\label{eq:two2d}
f({\bf x},{\bf v},t=0) =\frac{1}{2^d(2 \pi)^{d / 2}} \left(1+\alpha \sum_{m=1}^{d} \cos \left(k x_{m}\right)\right)\prod_{m=1}^d\left(\exp\left(-\frac{(v_m-v_0)^2}{2}\right) + \exp\left( -\frac{(v_m+v_0)^2}{2}\right)\right),
\end{equation}
where $d=2$, $\alpha=10^{-3}$, $v_0=2.4$, and $k=0.2$. The computational domain is set as $[0,L_x]^2\times[-L_v,L_v]^2$, where $L_x=\frac{2\pi}{k}$ and $L_v=8$, and the truncation threshold is $\varepsilon=10^{-4}$. In Figures \ref{fig:two2d_elec_con_k1}-\ref{fig:two2d_elec_con_k2}, we report the time evolution of the electric energy, the hierarchical ranks of the numerical solution of mesh size $N_x^2\times N^2_v=128^2\times256^2$, and the relative deviations of the total mass and energy, together with the absolute deviations of the total momentum components $J_1$ and $J_2$. The results of the electric energy evolution agree with those reported in the literature. In addition, the dynamics is efficiently captured by the low rank DG method: the hierarchical ranks of the solution remain very low until approximately $t=17$ and then start to increase as the instability develops. Lastly, the proposed method conserves the total mass, momentum, and energy.
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[height=40mm]{two2d_DG2_elec.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG2_rank.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG2_mass.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG2_mom1.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG2_mom2.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG2_ener.eps}}
\caption{Example \ref{ex:two2d}. The time evolution of electric energy (a), hierarchical ranks of the numerical solution of mesh size $N^2_x\times N^2_v=64^2\times128^2$ (b), relative deviation of total mass (c), absolute deviation of total momentum $J_1$ (d) and total momentum $J_2$ (e), and relative deviation of total energy (f). $\varepsilon=10^{-5}$. $k=1$. }
\label{fig:two2d_elec_con_k1}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[height=40mm]{two2d_DG3_elec.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG3_rank.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG3_mass.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG3_mom1.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG3_mom2.eps}}
\subfigure[]{\includegraphics[height=40mm]{two2d_DG3_ener.eps}}
\caption{Example \ref{ex:two2d}. The time evolution of electric energy (a), hierarchical ranks of the numerical solution of mesh size $N^2_x\times N^2_v=64^2\times128^2$ (b), relative deviation of total mass (c), absolute deviation of total momentum $J_1$ (d) and total momentum $J_2$ (e), and relative deviation of total energy (f). $\varepsilon=10^{-5}$. $k=2$. }
\label{fig:two2d_elec_con_k2}
\end{figure}
\end{exa}
\section{Introduction}
\acresetall
\IEEEPARstart{R}{ecent} advances in wireless technology provide massive connectivity for machines and objects, resulting in the \ac{iot} \cite{IoTint}.
The demand for the \ac{iot} is expected to grow drastically in the near future with numerous applications in health care systems, education, businesses and governmental services \cite{IoTInd, iot1, MassiveMachineType}.
As the demand for connectivity in \ac{iot} systems is growing rapidly, it is crucial to improve the spectrum efficiency \cite{iotcapacity}.
Hence, the \ac{noma} has been introduced \cite{noma}.
To address the main challenges of \ac{iot}, including access collisions and massive connectivity, \ac{noma} allows devices to access the channel non-orthogonally by either power-domain \cite{PowerNoma} or code-domain \cite{SCMANOMA} multiplexing.
Meanwhile, this massive connectivity is highly affected by the conventional grant-based \ac{noma} transmission scheme, where the exchange of control signaling between the \ac{bs} and \ac{iot} devices is needed for channel access.
The excessive signaling overhead causes spectral deficiency and large transmission latency.
Grant-free \ac{noma} has been introduced to provide a flexible transmission mechanism for the devices and to save time and bandwidth by removing the need for the exchange of control signaling between the \ac{bs} and the devices.
Hence, devices can transmit data randomly at any time slot without any request-grant procedure.
In many \ac{iot} applications, a few devices become active for a short period of time to communicate with the \ac{bs} while others are inactive \cite{IoTMag}.
In \ac{iot} networks with a large number of nodes each with a small probability of activity, \ac{mud} methods heavily rely on \ac{ad} prior to detection and decoding \cite{MassiveMachineType,verdu1998multiuser, zhang2018block, GiaSparseActivityMUD, CDMAMUD}.
For uplink transmission in \ac{iot} systems with grant-free \ac{noma} transmission scheme, where the performance of \ac{mud} can be severely affected by the multi-access interference, the reliable detection of both activity and transmitted signal is very challenging and can be computationally expensive \cite{verdu1998multiuser,GiaSparseActivityMUD}.
There have been many studies in the literature suggesting \ac{cs} methods for joint activity and data detection \cite{GiaSparseActivityMUD,CDMAMUD,SparseActivityDetection,SparseCDMA, wang2020compressive}.
Although \ac{cs} methods can achieve a reliable \ac{mud}, they only work in networks with sporadic traffic patterns, and are expensive in terms of computational complexity \cite{GiaSparseActivityMUD}.
Recently, \ac{dl} models have observed a lot of interests in communication systems and more specifically in signal detection \cite{deeplearning, DLSphere, NOMAActivity}.
A study in \cite{NOMAActivity} suggests using \ac{dl} for activity and data detection; however, it considers a deterministic traffic pattern for the activity, which is not valid in all environments.
In this work, we first formulate the problem of \ac{iot} activity detection as a threshold-comparison problem. We then analyze the probability of error of this activity detection method. Observing that this probability of error is a convex function of the decision threshold, we raise the question of finding the optimal threshold that minimizes the activity detection error. To achieve this goal, we propose a \ac{cnn}-based \ac{ad} algorithm for grant-free code-domain uplink \ac{noma}. Unlike existing \ac{cs}-based \ac{ad} algorithms, our solution does not need to know the exact number of active devices or even the activity rate of the \ac{iot} devices. In fact, in our system model we assume a time-varying and unknown activity rate and a heterogeneous network. Simulation results verify the success of the proposed algorithm.
The rest of this paper is organized as follows.
We present the system model in Section \ref{Sec:system}.
In Section \ref{Sec:DetectorAlg} we formulate the device \ac{ad} problem and derive its probability of error.
Section \ref{Sec:method} introduces our \ac{cnn}-based solution for device \ac{ad}.
The simulation results are presented in Section \ref{Sec:simulations}.
Finally, the paper is concluded in Section \ref{Sec:Conclusions}.
\subsection{Notations}
Throughout this paper, $(\cdot)^*$ represents the complex conjugate.
Matrix transpose and Hermitian operators are shown by $(\cdot)^T$ and $(\cdot)^H$, respectively.
The operator $\text{diag}(\mathbf{b})$ returns a square diagonal matrix with the elements of vector $\mathbf{b}$ on the main diagonal.
Furthermore, $\mathbb{E}[\cdot]$ is the statistical expectation, $\hat{\mathbf{a}}$ denotes an estimated value for $\mathbf{a}$, and size of set $\mathcal{S}$ is shown by $|\mathcal{S}|$.
The constellation and $m$-dimensional complex spaces are denoted by $\mathbb{D}$ and $\mathbb{C}^m$, respectively.
Finally, the circularly symmetric complex Gaussian distribution with mean vector $\mathbf{\mu}$ and covariance matrix $\mathbf{\Sigma}$ is denoted by $\mathcal{CN}(\mathbf{\mu},\mathbf{\Sigma})$.
\section{System Model}
\label{Sec:system}
We consider a \ac{cdma} uplink transmission, where $K$ \ac{iot} devices communicate with a single \ac{iot} \ac{bs} equipped with $M$ receive antennas.
Following this commonly used model \cite{iot1,noma,NOMAActivity}, we consider a frame structure for uplink transmission composed of a channel estimation phase followed by CDMA slotted ALOHA data transmissions, as shown in Fig.~\ref{fig:transmissinoFrame}.
Each frame contains $N_{\rm f}$ short packets of length $T_{\rm t}=N_{\rm s}T_{\rm s}$, where $N_{\rm s}$ is the number of symbols per \ac{iot} packet and $T_{\rm s}$ is the symbol duration.
It is assumed that the channel is fixed during each frame, but it varies from one frame to another.
The \ac{csi} is acquired at the \ac{bs} during the channel estimation phase.
As is common in \ac{mmtc}, we assume that
the \ac{iot} devices are only active on occasion and transmit short data packets during each frame.
The activity rate of the \ac{iot} devices is denoted by
$P_{\rm a}\in[0,P_{\rm max}]$, which is assumed to be unknown and time-varying from one packet transmission to another.
\begin{figure}[t]
\vspace{-.2cm}
\centering
\includegraphics[width=0.5\textwidth]{Figures/ActivityFrame.pdf}
\vspace{-.6cm}
\caption{CDMA slotted ALOHA transmission frame}
\vspace{-.5cm}
\label{fig:transmissinoFrame}
\end{figure}
Let $b_k\in\cal A$ be the transmitted symbol of the $k$-th device and chosen from a finite alphabet $\cal A$, when the $k$-th device is active; otherwise, $b_k=0$.
Consequently, $b_k$ can take values from an augmented alphabet $\bar{{\cal A}}={\cal A}\cup\{0\}$.
We also denote the set of all devices and the set of active devices by ${\cal S}_{\rm t}=\{1,2,\dots,K\}$ and
${\cal S}_{\rm a}$, respectively, where ${\cal S}_{\rm a} \subset {\cal S}_{\rm t}$.\footnote{For the simplicity of notation, we remove the index of frame and packet.}
A unique spreading code is dedicated to each \ac{iot} device, which is simultaneously used for spreading and for device identification. This removes the need for the control signaling associated with \ac{iot} device identification, which is inefficient for short-packet \ac{mmtc}.
The spreading sequence for the $k$-th \ac{iot} device is denoted by $\mathbf{c}_k=[c_{1,k} ~~ c_{2,k} ~~\cdots ~~c_{N_c,k}]^T$ where $c_{i,k}\in\{-1,+1\}$ and $N_{\rm c}$ is the spreading factor.
To support a large number of devices, non-orthogonal spreading sequences are employed; resulting in \ac{noma} transmission.
For a single frame, the complex channel coefficient between the $k$-th \ac{iot} device and the $m$-th receive antenna at the \ac{bs} is denoted as $g_{m,k}$.
The active \ac{iot} device $k$, $k\in{\cal S}_{\rm a}$ transmits $N_{\rm s}$ symbols denoted by $\mathbf{b}_k = [b_{k,1},\cdots,b_{k,N_{\rm s}}]^T$ during a packet.
The received baseband signal over Rayleigh flat fading channel in a single slot of the slotted ALOHA frame at the $m$-th receive antenna of the \ac{bs} is expressed as
\begin{equation}
\mathbf{Y}_m=\sum_{k=1}^{K}g_{m,k}\mathbf{c}_k\mathbf{b}_k^T+\mathbf{W}_{m},
\end{equation}
where $\mathbf{W}_{m}\in \mathbb{C}^{N_{\rm c}\times N_{\rm s}}$ with $w_{i,j}\sim {\cal CN}(0,\sigma_{\rm w}^2)$ and $\mathbb{E}[w_{i,j}w_{u,v}^*]=\sigma_{\rm w}^2 \delta[i-u]\delta[j-v]$ is the \ac{awgn} matrix at the $m$-th receive antenna.
The equivalent channel matrix between all IoT devices and the $m$-th receive antenna can be expressed as $\mathbf{\Phi}_m=[g_{m,1}\textbf{c}_1, \cdots, g_{m,K}\textbf{c}_{K}]\in\mathbb{C}^{N_{\rm c}\times K}$.
Thus, the received packet at the $m$-th ($m=1,2,\cdots,M$) receive antenna is given by
\begin{equation}
\mathbf{Y}_m = \mathbf{\Phi}_m\textbf{B} + \mathbf{W}_{m},
\end{equation}
where $\textbf{B} = [\mathbf{b}_1, \cdots, \mathbf{b}_{K}]^T\in\mathbb{D}^{K\times N_{\rm s}}$.
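The received-signal model above can be simulated directly. The sketch below (with hypothetical parameter values; it is an illustration of the model, not the paper's simulation setup) draws random $\pm1$ spreading codes, Rayleigh channel gains, and a sparse activity pattern, then forms $\mathbf{Y}_m=\mathbf{\Phi}_m\mathbf{B}+\mathbf{W}_{m}$ for each antenna:

```python
import numpy as np

rng = np.random.default_rng(7)
K, N_c, N_s, M = 40, 16, 8, 4        # devices, spreading factor, symbols, antennas
P_a, sigma_w = 0.1, 0.1              # activity rate and noise level (assumed values)

C = rng.choice([-1.0, 1.0], size=(N_c, K))   # non-orthogonal spreading codes
active = rng.random(K) < P_a                  # random device activity
B = np.where(active[:, None],
             rng.choice([-1.0, 1.0], size=(K, N_s)), 0.0)  # BPSK symbols or 0

Y = np.empty((M, N_c, N_s), dtype=complex)
for m in range(M):
    g = (rng.standard_normal(K) + 1j*rng.standard_normal(K))/np.sqrt(2)  # Rayleigh
    Phi = C * g                               # equivalent channel matrix Phi_m
    W = sigma_w*(rng.standard_normal((N_c, N_s))
                 + 1j*rng.standard_normal((N_c, N_s)))/np.sqrt(2)
    Y[m] = Phi @ B + W
```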
Let the total set of all \ac{iot} devices be decomposed into a finite number of disjoint groups $\mathcal{G}_1,\mathcal{G}_2,\cdots,\mathcal{G}_S$.
Within group $\mathcal{G}_j$, the
power of every \ac{iot} device is given by $P_j$.
The powers of the devices are equal within each group, but differ from group to group.
The fraction of devices in group $\mathcal{G}_j$ is therefore $|\mathcal{G}_j|/K$.
It is assumed that $P_j$ is known at the \ac{bs}.
This configuration captures heterogeneous \ac{iot} networks, where groups of \ac{iot} devices capture different phenomena in a given geographical area.
A single group of \ac{iot} devices with equal power transmission, resulting in a homogeneous network, is also studied in this paper.
\section{Problem Formulation} \label{Sec:DetectorAlg}
In this section, we present the problem of \ac{iot} device \ac{ad} in the cases of known \ac{csi} at the receiver and in the presence of sparse or non-sparse transmission.
In order to detect the active devices, it is assumed that the \ac{bs} is equipped with a matched filter and that the precoding matrix and \ac{csi} $\mathbf{\Phi}_m$ are available.
Before \ac{ad}, the observation matrix at the $m$-th receive antenna $\mathbf{Y}_m$ is passed through the decorrelator to obtain
\begin{equation}
\mathbf{\overline{Y}}_{m} = \mathbf{\Phi}_m^H\mathbf{Y}_m \in \mathbb{C}^{K\times N_{\rm s}}.
\end{equation}
In the following, we investigate the details of the \ac{ad} problem based on Gaussian detection to show how a threshold can be computed to distinguish active \ac{iot} devices from inactive ones.
The output of the decorrelator receiver for the $m$-th receive antenna is expressed as
\begin{align}\nonumber
& \mathbf{\overline{Y}}_{m} = \mathbf{\Phi}_m^H\mathbf{\Phi}_m\mathbf{B}+\mathbf{\Phi}_m^H\mathbf{W}_m, \\
&= \begin{bmatrix}
\sum_{k=1}^{K}g_{m,1}^*g_{m,k}\mathbf{c}_1^T\mathbf{c}_k\mathbf{b}_k^T+g_{m,1}^*\mathbf{c}_1^T\mathbf{W}_m \\
\sum_{k=1}^{K}g_{m,2}^*g_{m,k}\mathbf{c}_2^T\mathbf{c}_k\mathbf{b}_k^T + g_{m,2}^*\mathbf{c}_2^T\mathbf{W}_m \\
\vdots \\
\sum_{k=1}^{K}g_{m,K}^*g_{m,k}\mathbf{c}_{K}^T\mathbf{c}_k\mathbf{b}_k^T + g_{m,K}^*\mathbf{c}_{K}^T\mathbf{W}_m
\end{bmatrix}.
\end{align}
Consequently, the received signal from the $k$-th user at the $m$-th receive antenna is
\begin{equation}
\mathbf{r}_{k}^{m} = ||g_{m,k}\mathbf{c}_k||_2^2 \mathbf{b}_k^T + \sum_{i=1 (i\ne k)}^{K} g_{m,k}^*g_{m,i} \mathbf{c}_{k}^T\mathbf{c}_i\mathbf{b}_i^T +g_{m,k}^*\mathbf{c}_k^T\mathbf{W}_m,
\end{equation}
where the second and third terms are the multi-user interference and the additive noise, respectively.
Since an \ac{iot} device is either active or inactive for the entire packet transmission, we determine the activity status of a device based on each received symbol and then, following the decision-combining approach for spectrum sensing in \cite{hardcombine}, combine the results obtained from all $N_s$ symbols.
Device \ac{ad} for single-symbol transmission is studied in \cite{GiaSparseActivityMUD}; we follow that approach to determine the status of each device based on each received symbol before combining the results.
The $j$-th received symbol from device $k$ at receive antenna $m$, denoted as $r_{k,j}^m$, is
\begin{align}\nonumber
r_{k,j}^m =& ||g_{m,k}\mathbf{c}_k||_2^2 b_{k,j} + \\ &\sum_{i=1 (i\ne k)}^{K} g_{m,k}^*g_{m,i}\mathbf{c}_{k}^T\mathbf{c}_ib_{i,j} + g_{m,k}^*\mathbf{c}_k^T\mathbf{w}_j,
\end{align}
where the first term is the desired signal, the second term is the multi-user interference from the other devices, and the third term is the additive noise.
For the sake of simplicity, we assume that BPSK modulation is used, i.e., the transmitted symbols are drawn from ${\cal A}=\{-1,+1\}$ with $p(b_{k,j} = +1)=p(b_{k,j} = -1)=1/2$.
The multi-user interference plus noise in $r_{k,j}^m$ has variance
\begin{align}\nonumber
\sigma^2_{k,j} = & \text{~var}\Big{\{}\sum_{i=1 (i\ne k)}^{K} g_{m,k}^*g_{m,i}\mathbf{c}_{k}^T\mathbf{c}_ib_{i,j} + g_{m,k}^*\mathbf{c}_k^T\mathbf{w}_j\Big{\}} \\
= & \sum_{i=1 (i\ne k)}^{K}|g_{m,k}^*g_{m,i}\mathbf{c}_{k}^T\mathbf{c}_i|^2 P_a + ||g_{m,k}^*\mathbf{c}_k^T||_2^2.
\end{align}
We can now approximate $r_{k,j}^m$, conditioned on device $k$ being active, by a Gaussian distribution ${\cal N}(||g_{m,k}\mathbf{c}_k||_2^2, \sigma^2_{k,j})$ \cite{hardcombine}.
In order to identify the activity of device $k$, our goal is to propose an algorithm that defines a threshold $\tau$ and declares device $k$ active if $|r_{k,j}^m|>\tau$.
Then the probability of error, $P_e$, is computed as
\begin{align}\label{eq:error}\nonumber
P_e^{k,j} =& P_a p(|r_{k,j}^{m}|<\tau|b_{k,j} \ne 0) \\&+ 2(1-P_a)p(|r_{k,j}^{m}|>\tau|b_{k,j} = 0),
\end{align}
where we have $p(r_{k,j}^{m}|b_{k,j} \ne 0)\sim{\cal N}(||g_{m,k}\mathbf{c}_k||_2^2,\sigma^2_{k,j})$ and $p(r_{k,j}^{m}|b_{k,j} = 0)\sim{\cal N}(0,\sigma^2_{k,j})$.
We can rewrite \eqref{eq:error} as
\begin{align}\label{eq:errorQ}
P_e^{k,j} = 2(1-P_a) Q(\frac{\tau}{\sigma_{k,j}}) + P_a Q(\frac{||g_{m,k}\mathbf{c}_k||_2^2-\tau}{\sigma_{k,j}}),
\end{align}
where $Q(x)=(1/\sqrt{2\pi})\int_{x}^{\infty}\exp(-t^2/2)dt$ denotes the Gaussian tail function.
The probability of error in \eqref{eq:errorQ} is a convex function of $\tau$; hence, a fine-tuned neural network is capable of finding the optimum $\tau$ and thereby detecting the active devices.
In the following section, we present our \ac{dl}-based algorithm that finds the optimum $\tau$ and minimizes the probability of error.
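As a numerical sanity check of \eqref{eq:errorQ}, the optimum threshold can also be found by a brute-force grid search; the constants $P_a$, $\mu$ (standing in for $\|g_{m,k}\mathbf{c}_k\|_2^2$), and $\sigma$ below are illustrative:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2.0))

P_a, mu, sigma = 0.1, 1.0, 0.25   # activity rate, signal level, noise std (illustrative)

def P_e(tau):
    # Probability of error of Eq. (errorQ): false alarm + missed detection
    return 2.0 * (1.0 - P_a) * Q(tau / sigma) + P_a * Q((mu - tau) / sigma)

taus = np.linspace(0.0, mu, 1001)
errs = np.array([P_e(t) for t in taus])
tau_opt = float(taus[errs.argmin()])
```

A learned detector, as proposed next, replaces this per-device search with a single trained network.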
\iffalse
The activity status of device $k$ based on the $j$-th received symbol at antenna $m$ is denoted as $\beta_{k,j}^{(m)}$ and we have
\begin{align}
\beta_{k,j}^{m}=
\begin{cases}
1 \text{~~~for~~~} |r_{k,j}^m|>\theta\\
0 \text{~~~for~~~} |r_{k,j}^m|<\theta
\end{cases}.
\end{align}
As discussed in \cite{hardcombine}, the final decision on the activity status of device $k$ after receiving $N_s$ symbols is as
\begin{align}
\beta_{k}^{m}=
\begin{cases}
1 \text{~~~for~~~} \sum_{j=0}^{N_s-1}\beta_{k,j}^{m}>n\\
0 \text{~~~for~~~} \sum_{j=0}^{N_s-1}\beta_{k,j}^{m}<n
\end{cases},
\end{align}
where $n$ is chosen based on the following discussion.
It was shown in \cite{hardcombine}, that the probability of false alarm for $\beta_{k}^{m}$, i.e. $p(\sum_{j=0}^{N_s-1}\beta_{k,j}^{m}>n|\beta_{k}^{m}=0)$, denoted ${\cal Q}_f$ is computed as
\begin{align}
{\cal Q}_f = \sum_{l=n}^{N_s} {N_s \choose l} P_f^l (1-P_f)^{N_s-l},
\end{align}
where $P_f$ is the probability of false alarm based on single received symbol.
Similarly, the missed detection probability, i.e. $p(\sum_{j=0}^{N_s-1}\beta_{k,j}^{m}<n|\beta_{k}^{m}=1)$, denoted ${\cal Q}_m$ is computed as
\begin{align}
{\cal Q}_m = 1 - \sum_{l=n}^{N_s} {N_s \choose l} P_d^l (1-P_d)^{N_s-l},
\end{align}
where, $P_d$ is the probability of true detection based on single received symbol.
It is proven in \cite{hardcombine}, the optimal $n$ is chosen by minimizing ${\cal Q}_f + {\cal Q}_m$ and it is
\begin{align}
n_{\rm opt} = \min\Big(N_s,\Big\lceil\frac{N_s}{1+\alpha}\Big\rceil\Big),
\end{align}
where $\alpha = \frac{\ln\frac{P_f}{P_d}}{\ln\frac{1-P_d}{1-P_f}}$.
Finally, by obtaining the activity status from all $M$ receive antennas, we repeat the presented decision combining to achieve the final activity status of each \ac{iot} device and report it as $\beta_{k}$.
\fi
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{Figures/CNN-AD.pdf}
\vspace{-.4cm}
\caption{Model structure of the proposed CNN-AD algorithm}
\vspace{-.4cm}
\label{fig:modelStructure}
\end{figure*}
\section{DL-Based AD}\label{Sec:method}
Device \ac{ad} is the first step toward effective \ac{mud} in grant-free uplink multiple access.
Recent studies on \ac{ad} suggest using \ac{cs} methods to identify the set of active devices \cite{SparseActivityDetection,SparseCDMA}.
However, these methods fail in practical scenarios where the activity rate is time-varying and/or unknown.
Moreover, they are mainly effective for low device activity rates, i.e., when the sparsity level is high \cite{SparseActivityDetection}.
In this section, we propose our \ac{ad} algorithm, called CNN-AD, which employs a \ac{cnn} for heterogeneous IoT networks.
With a suitably designed \ac{cnn}, the underlying pattern in device activity can be learned directly from data.
\subsection{\ac{cnn}-\ac{ad} Algorithm}
Fig.~\ref{fig:modelStructure} illustrates the structure of the proposed CNN-AD algorithm. As seen, it is composed of three blocks: 1) preprocessing, 2) CNN processing, and 3) hypothesis testing.
In the preprocessing step after sequence matched filtering,
we first sort the observation matrix from all $M$ receive antennas in a 3D Tensor as
\begin{align}\label{eqtttu}
{\mathbf{\overline{R}}} = \left[ \begin{array}{l}
\left[ {{\bf{P}}{{{\bf{\bar Y}}}_1}} \right]\\
\left[ {{\bf{P}}{{{\bf{\bar Y}}}_2}} \right]\\
\,\,\,\,\, \vdots \\
\left[ {{\bf{P}}{{{\bf{\bar Y}}}_M}} \right]
\end{array} \right]
\end{align}
where $\mathbf{P\overline{Y}}_m\in \mathbb{C}^{K\times N_{\rm s}}$,
$\mathbf{\overline{Y}}_{m} = \mathbf{\Phi}_m^H\mathbf{Y}_m \in \mathbb{C}^{K\times N_{\rm s}}$ for $m=1,2,\cdots, M$, and
$\textbf{P} \triangleq {\rm diag}(p_1,\cdots,p_K)$, $p_k\in\{1/P_1,\cdots,1/P_S\}$ for $k=1,2,\cdots,K$.
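A minimal sketch of this preprocessing step follows; the shapes, the random placeholders for $\mathbf{\Phi}_m$ and $\mathbf{Y}_m$, and the two-group power split are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, N_s, N_c = 4, 8, 6, 16      # antennas, devices, symbols, spreading length

# Two power groups and the normalization matrix P = diag(p_1, ..., p_K)
powers = rng.choice([1.0, 2.0], size=K)
P = np.diag(1.0 / powers)

slices = []
for m in range(M):
    Phi_m = rng.standard_normal((N_c, K)) + 1j * rng.standard_normal((N_c, K))
    Y_m = rng.standard_normal((N_c, N_s)) + 1j * rng.standard_normal((N_c, N_s))
    Ybar_m = Phi_m.conj().T @ Y_m          # decorrelator output, K x N_s
    slices.append(P @ Ybar_m)              # power-normalized slice P Ybar_m

R = np.stack(slices)                       # 3D tensor of shape M x K x N_s
```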
In the CNN processing block, the 3D tensor $\mathbf{\overline{R}}$ is used as the input of a suitably designed \ac{cnn}. \ac{cnn} models benefit from convolutional layers, which perform convolution operations between matrices instead of full matrix multiplications. This
leads to dimension reduction for feature extraction and provides the subsequent network layers with an input that retains only the useful features of the original high-dimensional input.
The IoT device \ac{ad} can be formulated as a binary classification or regression problem. Formulating device \ac{ad} as a classification problem is straightforward, but it requires the accurate design of the \ac{cnn}'s structure and parameters.
In the hypothesis testing block, the $K$
outputs of the \ac{cnn}'s Sigmoid layer are compared with a predefined threshold to determine the activity status of the \ac{iot} devices in the network. If the $k$-th output of the Sigmoid layer exceeds the threshold, the $k$-th \ac{iot} device is identified as active.
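The hypothesis-testing block reduces to an element-wise comparison; for example (the Sigmoid outputs and the 0.5 threshold below are illustrative):

```python
import numpy as np

sigmoid_out = np.array([0.92, 0.07, 0.55, 0.31])   # CNN outputs for K = 4 devices
threshold = 0.5

active = sigmoid_out > threshold                    # activity decision per device
active_set = set(np.flatnonzero(active).tolist())   # indices of detected devices
```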
\subsection{Training Phase}
In order to train the designed \ac{cnn}, we define the activity vector $\mathbf{a}$ as
\begin{equation}
\mathbf{a} = [a_1 ~~ a_2 ~~ \cdots ~~ a_{K}]^T,
\end{equation}
where $a_k$ is 1 when the $k$-th \ac{iot} device is active and 0 otherwise.
We train our model with $N$ independent training samples ($\mathbf{\overline{R}}^{(j)}$,$\mathbf{a}^{(j)}$), where $j=1,2,\cdots,N$ and $\mathbf{a}^{(j)}$ and $\mathbf{\overline{R}}^{(j)}$ are the activity vector and observation matrix of the $j$-th training sample, respectively.
Our objective is to train the designed \ac{cnn} to generate the desired output vector $\mathbf{a}^{(j)}$ for input matrix $\mathbf{\overline{R}}^{(j)}$.
The model learns a non-linear transformation $\Psi$ such that
\begin{equation}\label{eq:cnn}
\mathbf{\hat{a}}^{(j)} = \Psi(\mathbf{\overline{R}}^{(j)};\bf{\Theta}),
\end{equation}
where $\bf{\Theta}$ is the set of parameters learned during the training by minimizing the loss function.
The output of the model, $\mathbf{\hat{a}}$, gives the activity probabilities of the \ac{iot} devices.
Since there are two classes (active or inactive) for each \ac{iot} device, the loss function is chosen as the binary cross-entropy.
For each training sample $j$, the binary cross-entropy loss function compares the probability that the \ac{iot} devices are active ($\mathbf{\hat{a}}^{(j)}$) with the true activity vector $\mathbf{a}^{(j)}$ as
\begin{equation}\label{eq:lossFunction}
\text{Loss}{(\bf{\Theta})}=\frac{1}{N}\sum_{j=1}^{N}
-\Big(\mathbf{a}^{(j)}\log(\mathbf{\hat{a}}^{(j)}) + ({\bf{1}}-\mathbf{a}^{(j)})\log({\bf 1}-\mathbf{\hat{a}}^{(j)})\Big),
\end{equation}
where $\log(\cdot)$ performs an element-wise $\log$ operation on $\mathbf{\hat{a}}^{(j)}$, and the vector multiplication is also element-wise.
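Summing the element-wise terms over the $K$ devices gives a scalar loss per sample; a NumPy sketch of this loss (the clipping constant and the example vectors are illustrative):

```python
import numpy as np

def bce_loss(a_true, a_hat, eps=1e-12):
    # Binary cross-entropy of Eq. (lossFunction), summed over devices
    # and averaged over the training samples; eps guards the logs.
    a_hat = np.clip(a_hat, eps, 1.0 - eps)
    per_sample = -(a_true * np.log(a_hat)
                   + (1.0 - a_true) * np.log(1.0 - a_hat)).sum(axis=-1)
    return per_sample.mean()

a_true = np.array([[1.0, 0.0, 0.0, 1.0]])   # one sample, K = 4 devices
a_hat  = np.array([[0.9, 0.1, 0.2, 0.8]])   # predicted activity probabilities
loss = float(bce_loss(a_true, a_hat))
```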
\iffalse
\section{CNN-AD Assisted FL}
In this section, we study the application of CNN-AD in \ac{air}-based \ac{fl} systems \cite{yang2020federated}, where a grant-free \ac{noma} is deployed for uplink transmission between the \ac{iot} devices and the \ac{fl} model aggregator (\ac{bs}).
We develop a new optimization paradigm for fast and reliable model aggregation that can only incorporate the active \ac{iot} devices in the training process.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{Figures/FLIoTNetwork.pdf}
\caption{FL over an IoT network with $K$ edge (\ac{iot}) devices which can be either active or inactive}
\label{fig:FLIoT}
\end{figure}
The \ac{air}-based \ac{fl} system, similar to the system model in Sec. \ref{Sec:system}, consists of one \ac{bs} with $M$ receive antennas serving as an edge server and $K$ single-antenna edge (\ac{iot}) devices as shown in Fig. \ref{fig:FLIoT}.
It is assumed that the \ac{bs} (edge server) broadcasts the updated global model parameter vector to the
\ac{iot} (edge) devices at each iteration of the \ac{fl} learning process. The training process is performed in $N_{\rm f}$ iterations.
We consider that only a fraction of \ac{iot} devices in the network participate in the training process at each iteration due to battery limitation and downlink packet loss.
Thus,
the set of active \ac{iot} devices varies from one iteration to another. We also assume that the active \ac{iot} devices participating in the \ac{fl} model aggregation at each iteration are unknown at the \ac{bs}.
The active \ac{iot} device $k \in\mathcal{S}_{\rm a}^{[t]}$ at the iteration $t$ with the training dataset $\mathcal{D}_k$ including $D_k = |\mathcal{D}_k|$ labeled training samples $\{(\textbf{u}_i, v_i)\}_{i=1}^{D_k}\in \mathcal{D}_k$ trains the global model by its local training dataset and obtain the model parameter vector ${\bf g}_k^{(t)}$. The training data sample
$(\textbf{u}_i, v_i)$ denotes the input-output data consisting of the $i$-th training input vector $\mathbf{u}_i$ and the corresponding ground-truth label $v_i$ for $i\in\{1,\cdots,D_k\}$.
For a given $d$-dimensional model parameter vector $\mathbf{\bf g }_k\in\mathbb{R}^d$, the local loss function for device $k \in\mathcal{S}_{\rm a}^{[t]}$ is defined as
\begin{equation}
L_k(\mathbf{g}_k;\mathcal{D}_k) = \frac{1}{D_k} \sum_{(\textbf{u}_i, v_i)\in\mathcal{D}_k} l(\mathbf{g}_k;\textbf{u}_i,v_i),
\end{equation}
where $l(\mathbf{g}_k;\textbf{u}_i,v_i)$ denotes the sample-wise loss function.
When all \ac{iot} devices are active,
the global loss function with model parameter $\mathbf{g}$ can
be represented as
\begin{equation}
L(\mathbf{g})= \frac{1}{K} \sum_{k=1}^K L_k(\mathbf{g};\mathcal{D}_k).
\end{equation}
The learning process aims to optimize the model parameter $\mathbf{g}$
that minimizes the global loss function, i.e.,
\begin{equation}
\mathbf{g}^* = {\rm arg}\min_{\mathbf{g}\in \mathbb{R}^d} L(\mathbf{g}).
\end{equation}
\ac{fl} is able to collaboratively train a global model by coordinating the distributed devices across the \ac{iot} network to update the local model parameters according to the locally owned training data.
Without the need of uploading the local data to the \ac{bs}, this
distributed learning method possesses the advantages of low latency, higher spectral efficiency, low power consumption, and high data privacy.
In this work, we consider the \ac{fedavg} in \cite{mcmahan2017communication} in the presence of inactive
\ac{iot} devices in each iteration of the \ac{fl} process as follows
\begin{itemize}
\item The \ac{bs} broadcasts the global model parameter vector at iteration $t$, i.e., $\mathbf{g}^{[t-1]}$, to all \ac{iot} devices.
\item Based on the received global parameter vector $\mathbf{g}^{[t-1]}$,
the active \ac{iot} device $k \in\mathcal{S}_{\rm a}^{[t]}$ at the $t$-th iteration ($t \in \{1,2\ldots,N_{\rm f}\}$)
performs a local model update by utilizing its local dataset $\mathcal{D}_k$ to obtain an updated local parameter vector $\mathbf{g}^{[t]}_k$ through the \ac{sgd} as
\begin{equation}
\mathbf{g}_k^{[t]} = \mathbf{g}_k^{[t-1]} - \eta\nabla L_k\Big{(}\mathbf{g}_k^{[t-1]}; \mathcal{D}_k\Big{)}.
\end{equation}
\item To obtain the updated global parameter vector at the iteration $t$, i.e., $\mathbf{g}^{[t]}$,
model aggregation through averaging is performed at the \ac{bs} as
\begin{equation}\label{eq:agg}
\mathbf{g}^{[t]} = \frac{1}{|\mathcal{S}_{\rm a}^{(t)}|}\sum_{k\in \mathcal{S}_{\rm a}^{(t)}} \mathbf{g}^{[t]}_k,
\end{equation}
\end{itemize}
where $\mathcal{S}_{\rm a}^{[t]}$ is the set of active \ac{iot} devices at the $t$-th iteration. The \ac{iot} devices employ grant-free code-domain \ac{noma} in the form of slotted ALOHA (Fig. 1) for uplink communication with the \ac{bs}.
We assume that the \ac{iot} device $k \in \mathcal{S}_{\rm t}$ participates $m_k$ times in the learning process at iterations
$\mathcal{T}_{\rm k} \triangleq \{t_{k,1},t_{k,2},\cdots,t_{k,m_k}\}\subset \{1,2,\ldots,N_{\rm f}\}$, where $0 \le |\mathcal{T}_{\rm k}| \le N_{\rm f}$.
The set $\mathcal{T}_{\rm k}$, $k=1,2,\ldots,K$, is unknown at the~\ac{bs}.
Model aggregation without \ac{iot} device \ac{ad} can reduce the overall performance of the \ac{fl}. There exist many works in the literature on the convergence and accuracy of \ac{fl} taking into account data heterogeneity, communication and computation limitations, and partial device participation.
However, these works assume unbiased device participation and consider that all the selected devices participate in the model aggregation. To the best of the authors' knowledge, the effect of \ac{ad} error on the model aggregation in the \ac{fl} has not been studied yet~\cite{nguyen2021federated}.
We propose a novel optimization paradigm for the \ac{fl} by using the CNN-AD algorithm to take into account the activity state of the \ac{iot} devices for the model aggregation.
Excluding the inactive \ac{iot} devices in the model aggregation, i.e. equation \eqref{eq:agg}, can significantly improve the accuracy level and convergence speed of the \ac{fl}. The pseudocode of the proposed CNN-AD assisted FL is given in Algorithm \ref{alg:cnn-adFL}.
\begin{figure}[t]
\begin{algorithm}[H]
\caption{CNN-AD Assisted \ac{fl}}\label{alg:cnn-adFL}
\textbf{Input:} $[\mathbf{\Phi}_1, \cdots, \mathbf{\Phi}_{M}]$ \\
\textbf{Output:} $\mathbf{g}^{*}$
\begin{algorithmic}
\State \ac{bs} initialises ${\bf{g}}^{[0]}$ and broadcasts it
\State $t \gets 1$
\While{$t \le N_{\rm f}$}
\For{$k\in\mathcal{S}_{\rm a}^{[t]}$}
\State ${\bf{g}}_k^{[t-1]}={\bf{g}}^{[t-1]}$
\State $\mathbf{g}_k^{[t]} \gets \mathbf{g}_k^{[t-1]} - \eta\nabla L_k(\mathbf{g}_k^{[t-1]}; \mathcal{D}_k)$ \EndFor
\State Active \ac{iot} device $k \in \mathcal{S}_{\rm a}^{[t]}$ sends
$\mathbf{g}_k^{[t]}$ to the \ac{bs}.
$[\mathbf{Y}_1^{[t]}, \cdots, \mathbf{Y}_{M}^{[t]}]$ is received at the \ac{bs}.
\State \ac{bs} implements preprocessing step in \eqref{eqtttu} to obtain the 3D tensor $\mathbf{\overline{R}}^{[t]}$
\State \ac{bs} finds the active \ac{iot} devices as $\hat{\mathcal{S}}_{\rm a}^{[t]}=\{k \text{~~if~~} \hat{a}_k^{[t]}\ge 0.5\}$, where $\hat{\mathbf{a}}^{[t]}=\Psi(\mathbf{\overline{R}}^{[t]};{\bf \Theta})=[\hat{a}_1^{[t]},\cdots,\hat{a}_{K}^{[t]}]$ given in \eqref{eq:cnn}.
\State \ac{bs} implements MUD on $[\mathbf{Y}_1^{[t]}, \cdots, \mathbf{Y}_{M}^{[t]}]$ to obtain $\mathbf{g}_k^{[t]}$, $k \in \hat{\mathcal{S}}_{\rm a}^{[t]}$.
\State BS implements the global model aggregation as $\mathbf{g}^{[t]} \gets \frac{1}{|\mathcal{S}_{\rm a}^{[t]}|}\sum_{k\in\mathcal{S}_{\rm a}^{[t]}} \mathbf{g}_k^{[t]}$.
\State \ac{bs} broadcasts $\mathbf{g}^{[t]}$
\State $t \gets t+1$
\EndWhile
\State $\mathbf{g}^{*} \gets \mathbf{g}^{[N_{\rm f}]}$
\end{algorithmic}
\end{algorithm}
\vspace{-.9cm}
\end{figure}
\fi
\section{Experiments}
\label{Sec:simulations}
In this section, we evaluate the performance of the proposed CNN-AD algorithm through various simulation experiments and compare it with some of the existing methods.
\subsection{Simulation Setup}
We consider an \ac{iot} network with $K$ devices where $K>N_{\rm c}$ and pseudo-random codes are used as the spreading sequences for \ac{iot} devices.
The probability of activity $P_{\rm a}$ is considered to be unknown and time-varying from one packet to another in the range of $P_{\rm a}\in[0,P_{\rm max}]$, where $P_{\rm max}=0.1$.
BPSK modulation is used for uplink transmission. Without loss of generality, the channel coefficients between the \ac{iot} devices and the \ac{bs} are modeled as independent zero-mean complex Gaussian random variables with variance $\sigma_{k,m}^2=1$, $k\in {\cal S}_{\rm t}$ and $m\in\{1,\cdots,M\}$.
The additive white noise is modeled as zero-mean complex Gaussian random variables with variance $\sigma_{\rm w}^2$, and the \ac{snr} in dB is defined as $\gamma \triangleq 10\log(\sigma_{\rm{s}}^2/\sigma_{\rm{w}}^2)$, where $\sigma_{\rm{s}}^2=P_{\rm a}P_{\rm t}$ is the average transmit power with $P_{\rm t}=\sum_{k=1}^Kp_k$ as the total transmission power.
Unless otherwise mentioned, we consider spreading sequences with spreading factor $N_{\rm c}=32$.
In order to train CNN-AD, we generate $10^5$ independent samples and use 80\% for training and the rest for validation and test.
Adam optimizer \cite{adam} with learning rate of $10^{-3}$ is used to minimize cross-entropy loss function in \eqref{eq:lossFunction}.
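Label generation for this training set can be sketched as follows; a smaller $N$ than the $10^5$ samples used in the paper keeps the sketch fast, while $K$, $P_{\rm max}$, and the 80\% split follow the setup above:

```python
import numpy as np

rng = np.random.default_rng(3)
K, P_max, N = 40, 0.1, 1000     # devices, max activity rate, samples (reduced)

# Unknown, time-varying activity rate: draw P_a per packet, then Bernoulli labels
P_a = rng.uniform(0.0, P_max, size=N)
A = (rng.random((N, K)) < P_a[:, None]).astype(np.float64)   # N x K activity matrix

n_train = int(0.8 * N)                  # 80% training, 20% validation/test
A_train, A_rest = A[:n_train], A[n_train:]
```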
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/AER.pdf}
\vspace{-.2cm}
\caption{Achieved BER with MMSE with a priori AD using OMP, AMP, and CNN-AD, without knowledge of the number of active devices.} \label{fig:BER1}
\vspace{-.4cm}
\end{figure}
\begin{figure}[t]
\centering
\vspace{.1cm}
\includegraphics[width=0.45\textwidth]{Figures/Fig2_prob.pdf}
\vspace{-.2cm}
\caption{Impact of $P_{\rm a}$ on the performance of different methods as the a priori AD for MMSE, in terms of achieved BER.}
\label{fig:3}
\vspace{-.25cm}
\end{figure}
\subsection{Simulation Results}
\subsubsection{Performance Evaluation of CNN-AD}
We assess CNN-AD through various simulations and compare it with the existing \ac{cs}-based methods including \ac{omp} \cite{OMP} and \ac{amp}~\cite{AMP}.
The impact of \ac{snr} on the \ac{aer} achieved by different \ac{ad} algorithms in both homogeneous and heterogeneous \ac{iot} networks with uniform and non-uniform power allocation is shown in Fig.~\ref{fig:BER1}.
The \ac{aer} of the different methods is compared over a wide range of \ac{snr}s in an \ac{iot} system with a total of $K=40$ \ac{iot} devices and a single \ac{bs} with $M=100$ receive antennas.
As expected, the \ac{aer} of all \ac{ad} algorithms decreases with increasing \ac{snr}.
However, CNN-AD achieves the best performance since, unlike the non-Bayesian greedy algorithms \ac{omp} and \ac{amp}, our method relies on the statistical distributions of device activities and channels and exploits them in the training process.
Fig.~\ref{fig:3} illustrates the effect of activity rate on the \ac{ber}
for \ac{mmse}-\ac{mud} with different \ac{ad} algorithms at $\gamma=10$ dB \ac{snr}.
As seen, as the activity rate increases, the number of active devices grows accordingly, making it more difficult to detect all the active devices; this results in a higher \ac{ber}.
We use $P_{\rm max}=0.1$ to train CNN-AD; thus, \ac{mmse}-\ac{mud} with \ac{cnn}-\ac{ad} shows performance degradation for activity rates larger than $P_{\rm max}=0.1$. However, it still outperforms \ac{mmse}-\ac{mud} with the \ac{omp} and \ac{amp} \ac{ad} algorithms.
It should be mentioned that this performance improves when CNN-AD is trained with a larger value of $P_{\rm max}$.
We further investigate the \ac{ad} algorithms in terms of other metrics for two typical \ac{iot} devices with $P_{\rm max}=0.1$ at $\gamma=10$ dB \ac{snr}, as presented in Table \ref{tab:metrics}.
In this table, we compare the precision, recall, and F1-score, defined in \cite{goutte2005probabilistic}, achieved by CNN-AD with those of the \ac{omp} and \ac{amp} \ac{ad} algorithms.
As seen, all metrics are improved by using CNN-AD.
\begin{table}[]
\centering
\vspace{.2cm}
\begin{tabular}{*{2}c | *{3}c}
\hline
IoT Device & Model & Precision & Recall & F1-score\\ \hline
& OMP & 28\% & 32\% & 30\%\\
Device A & AMP & 31\% & 35\% & 33\%\\
& \textbf{CNN-AD} & \textbf{73\%} & \textbf{92\%} & \textbf{81\%} \\ \hline
& OMP & 33\% & 32\% & 32\%\\
Device B & AMP & 38\% & 35\% & 36\%\\
& \textbf{CNN-AD} & \textbf{100\%} & \textbf{83\%} & \textbf{91\%}\\ \hline
\end{tabular}
\caption{Performance analysis of different algorithms for two typical IoT devices for $P_{\rm max}=0.1$ at $\gamma=10$ dB.}
\vspace{-.6cm}
\label{tab:metrics}
\end{table}
\vspace{-.15cm}
\section{Conclusions}\vspace{-.15cm}
\label{Sec:Conclusions}
In this paper, we considered the problem of \ac{ad} in \ac{iot} networks with grant-free \ac{noma}.
Depending on the application, \ac{iot} devices can be inactive for long periods of time and become active only when transmitting to the \ac{bs}.
Hence, identifying the active devices is required for accurate data detection.
Several studies propose \ac{cs}-based methods for \ac{ad}; however, these methods require a high level of message sparsity.
In order to remove this requirement and to exploit the statistical properties of the channels, we proposed a \ac{cnn}-based method called CNN-AD to detect active \ac{iot} devices.
Comparison with existing methods shows the strength of our algorithm.
\section*{Acknowledgment}
The study presented in this paper is supported by Alberta Innovates and Natural Sciences and Engineering Research Council of Canada (NSERC).
\bibliographystyle{IEEEtran}
\section{Introduction}
Recently, the community has seen a strong revival
of neural networks, which is mainly stimulated by the great success
of deep neural network models, specifically Deep Convolutional Neural
Networks (DCNN), in various vision tasks. However, majority of the
recent works related to deep neural networks have devoted to detection
or classification of object categories \cite{GirshickDDM14,KrizhevskySH12}.
In this paper, we are concerned with a classic problem in computer
vision: image-based sequence recognition. In real world, a stable
of visual objects, such as scene text, handwriting and musical score,
tend to occur in the form of sequence, not in isolation. Unlike general
object recognition, recognizing such sequence-like objects often requires
the system to predict a series of object labels, instead of a single
label. Therefore, recognition of such objects can be naturally cast
as a sequence recognition problem. Another unique property of sequence-like
objects is that their lengths may vary drastically. For instance,
English words can either consist of 2 characters such as ``OK''
or 15 characters such as ``congratulations''. Consequently, the
most popular deep models like DCNN \cite{KrizhevskySH12,LecunLYP1998}
cannot be directly applied to sequence prediction, since DCNN models
often operate on inputs and outputs with fixed dimensions, and thus
are incapable of producing a variable-length label sequence.
Some attempts have been made to address this problem for a specific
sequence-like object (\emph{e.g.} scene text). For example, the algorithms
in \cite{WangWCN12,BissaccoCNN13} firstly detect individual characters
and then recognize these detected characters with DCNN models, which
are trained using labeled character images. Such methods often require
training a strong character detector for accurately detecting and
cropping each character out from the original word image. Some other
approaches (such as \cite{JaderbergSVZ14a}) treat scene text recognition
as an image classification problem, and assign a class label to each
English word (90K words in total). This results in a large trained model
with a huge number of classes, which is difficult to generalize
to other types of sequence-like objects, such as Chinese texts, musical
scores, \emph{etc.}, because the numbers of basic combinations of
such kind of sequences can be greater than 1 million. In summary,
current systems based on DCNN can not be directly used for image-based
sequence recognition.
Recurrent neural networks (RNN) models, another important branch of
the deep neural networks family, were mainly designed for handling
sequences. One of the advantages of RNN is that it does not need the
position of each element in a sequence object image in both training
and testing. However, a preprocessing step that converts an input
object image into a sequence of image features, is usually essential.
For example, Graves \emph{et al.} \cite{GravesLFBBS09} extract a
set of geometrical or image features from handwritten texts, while
Su and Lu \cite{SuL14} convert word images into sequential HOG features.
The preprocessing step is independent of the subsequent components
in the pipeline, thus the existing systems based on RNN can not be
trained and optimized in an end-to-end fashion.
Several conventional scene text recognition methods that are not based
on neural networks also brought insightful ideas and novel representations
into this field. For example, Almaz\`an \emph{et al.~}\cite{AlmazanGFV14}
and Rodriguez-Serrano \emph{et al.}~\cite{Rodriguez-Serrano15} proposed
to embed word images and text strings in a common vectorial subspace,
and word recognition is converted into a retrieval problem. Yao \emph{et
al.}~\cite{YaoBSL14} and Gordo \emph{et al.}~\cite{Gordo14} used
mid-level features for scene text recognition. Though achieved promising
performance on standard benchmarks, these methods are generally outperformed
by previous algorithms based on neural networks~\cite{BissaccoCNN13,JaderbergSVZ14a},
as well as the approach proposed in this paper.
The main contribution of this paper is a novel neural network model,
whose network architecture is specifically designed for recognizing
sequence-like objects in images. The proposed neural network model
is named as Convolutional Recurrent Neural Network (CRNN), since it
is a combination of DCNN and RNN. For sequence-like objects, CRNN
possesses several distinctive advantages over conventional neural
network models: 1) It can be directly learned from sequence labels
(for instance, words), requiring no detailed annotations (for instance,
characters); 2) It has the same property of DCNN on learning informative
representations directly from image data, requiring neither hand-craft
features nor preprocessing steps, including binarization/segmentation,
component localization, \emph{etc.}; 3) It has the same property of
RNN, being able to produce a sequence of labels; 4) It is unconstrained
to the lengths of sequence-like objects, requiring only height normalization
in both training and testing phases; 5) It achieves better or highly
competitive performance on scene texts (word recognition) than the
prior arts \cite{JaderbergVZ14,BissaccoCNN13}; 6) It contains much
less parameters than a standard DCNN model, consuming less storage
space.
\section{The Proposed Network Architecture}
The network architecture of CRNN, as shown in Fig.~\ref{fig:netarch},
consists of three components, including the convolutional layers,
the recurrent layers, and a transcription layer, from bottom to top.
At the bottom of CRNN, the convolutional layers automatically extract
a feature sequence from each input image. On top of the convolutional
network, a recurrent network is built for making prediction for each
frame of the feature sequence, outputted by the convolutional layers.
The transcription layer at the top of CRNN is adopted to translate
the per-frame predictions by the recurrent layers into a label sequence.
Though CRNN is composed of different kinds of network architectures
(e.g., CNN and RNN), it can be jointly trained with one loss function.
\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.9\linewidth]{net-arch.pdf}
\par\end{centering}
\caption{The network architecture. The architecture consists of three parts:
1) convolutional layers, which extract a feature sequence from the
input image; 2) recurrent layers, which predict a label distribution
for each frame; 3) transcription layer, which translates the per-frame
predictions into the final label sequence.}
\label{fig:netarch}
\end{figure}
\subsection{Feature Sequence Extraction}
In CRNN model, the component of convolutional layers is constructed
by taking the convolutional and max-pooling layers from a standard
CNN model (fully-connected layers are removed). Such component is
used to extract a sequential feature representation from an input
image. Before being fed into the network, all the images need to be
scaled to the same height. Then a sequence of feature vectors is extracted
from the feature maps produced by the component of convolutional layers,
which is the input for the recurrent layers. Specifically, each feature
vector of a feature sequence is generated from left to right on the
feature maps by column. This means the $i$-th feature vector is the
concatenation of the $i$-th columns of all the maps. The width of
each column in our settings is fixed to a single pixel.
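This column-wise conversion can be sketched directly; the map sizes below are illustrative, not the actual CRNN configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 512, 1, 26            # channels, height, width of the last conv maps

feature_maps = rng.standard_normal((C, H, W))

# The i-th feature vector concatenates the i-th (single-pixel-wide) columns
# of all C feature maps, read from left to right.
sequence = [feature_maps[:, :, i].reshape(-1) for i in range(W)]
```

The result is a length-$W$ sequence of $C \cdot H$-dimensional descriptors fed to the recurrent layers.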
As the layers of convolution, max-pooling, and element-wise activation
function operate on local regions, they are translation invariant.
Therefore, each column of the feature maps corresponds to a rectangle
region of the original image (termed the \emph{receptive field}),
and such rectangle regions are in the same order to their corresponding
columns on the feature maps from left to right. As illustrated in
Fig.~\ref{fig:recfield}, each vector in the feature sequence is
associated with a receptive field, and can be considered as the image
descriptor for that region.
\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.45\linewidth]{rec-field.pdf}
\par\end{centering}
\caption{The receptive field. Each vector in the extracted feature sequence
is associated with a receptive field on the input image, and can be
considered as the feature vector of that field.}
\label{fig:recfield}
\end{figure}
Being robust, rich and trainable, deep convolutional features have
been widely adopted for different kinds of visual recognition tasks~\cite{KrizhevskySH12,GirshickDDM14}.
Some previous approaches have employed CNN to learn a robust representation
for sequence-like objects such as scene text \cite{JaderbergSVZ14a}.
However, these approaches usually extract a holistic representation
of the whole image with CNN, and then collect local deep features
for recognizing each component of a sequence-like object. Since CNN
requires input images to be scaled to a fixed size in order to
satisfy its fixed input dimension, it is not appropriate for
sequence-like objects due to their large length variation. In CRNN,
we convey deep features into sequential representations in order to
be invariant to the length variation of sequence-like objects.
\subsection{Sequence Labeling}
\begin{figure}
\begin{centering}
\includegraphics[width=1\linewidth]{lstm-unit.pdf}
\par\end{centering}
\caption{(a) The structure of a basic LSTM unit. An LSTM consists of a cell
module and three gates, namely the input gate, the output gate and
the forget gate. (b) The structure of deep bidirectional LSTM we use
in our paper. Combining a forward (left to right) and a backward (right
to left) LSTM results in a bidirectional LSTM. Stacking multiple
bidirectional LSTMs results in a deep bidirectional LSTM.}
\label{fig:lstm}
\end{figure}
A deep bidirectional Recurrent Neural Network is built on top
of the convolutional layers, as the recurrent layers. The recurrent
layers predict a label distribution $y_{t}$ for each frame $x_{t}$
in the feature sequence $\mathbf{x}=x_{1},\dots,x_{T}$. The advantages
of the recurrent layers are three-fold. Firstly, RNN has a strong
capability of capturing contextual information within a sequence.
Using contextual cues for image-based sequence recognition is more
stable and helpful than treating each symbol independently. Taking
scene text recognition as an example, wide characters may require
several successive frames to fully describe (refer to Fig.~\ref{fig:recfield}).
Besides, some ambiguous characters are easier to distinguish when
observing their contexts, \emph{e.g.} it is easier to recognize ``il''
by contrasting the character heights than by recognizing each of them
separately. Secondly, RNN can back-propagate error differentials
to its input, \emph{i.e.} the convolutional layer, allowing us to
jointly train the recurrent layers and the convolutional layers in
a unified network. Thirdly, RNN is able to operate on sequences of
arbitrary lengths, traversing them from start to end.
A traditional RNN unit has a self-connected hidden layer between its
input and output layers. Each time it receives a frame $x_{t}$ in
the sequence, it updates its internal state $h_{t}$ with a non-linear
function that takes both current input $x_{t}$ and past state $h_{t-1}$
as its inputs: $h_{t}=g(x_{t},h_{t-1})$. Then the prediction $y_{t}$
is made based on $h_{t}$. In this way, past contexts $\{x_{t'}\}_{t'<t}$
are captured and utilized for prediction. The traditional RNN unit, however,
suffers from the vanishing gradient problem~\cite{BengioSF94}, which
limits the range of context it can store, and adds burden to the training
process. Long-Short Term Memory~\cite{HochreiterS97,GersSS02} (LSTM)
is a type of RNN unit that is specially designed to address this problem.
An LSTM (illustrated in Fig.~\ref{fig:lstm}) consists of a memory
cell and three multiplicative gates, namely the input, output and
forget gates. Conceptually, the memory cell stores the past contexts,
and the input and output gates allow the cell to store contexts for
a long period of time. Meanwhile, the memory in the cell can be cleared
by the forget gate. The special design of LSTM allows it to capture
long-range dependencies, which often occur in image-based sequences.
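For concreteness, one LSTM step can be written out as follows (a schematic NumPy version of the standard LSTM formulation; parameter names, shapes and initialization are illustrative, and this is not our Torch7/CUDA implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    W, U, b hold the stacked parameters for the input (i), forget (f)
    and output (o) gates and the cell candidate (g), each of size n."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # pre-activations, shape (4n,)
    i = sigmoid(z[0*n:1*n])              # input gate
    f = sigmoid(z[1*n:2*n])              # forget gate
    o = sigmoid(z[2*n:3*n])              # output gate
    g = np.tanh(z[3*n:4*n])              # candidate cell update
    c = f * c_prev + i * g               # forget old memory, write new
    h = o * np.tanh(c)                   # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
d, n = 512, 256                          # input dim, hidden units (as in Table)
x = rng.standard_normal(d)
h, c = np.zeros(n), np.zeros(n)
W = rng.standard_normal((4*n, d)) * 0.01
U = rng.standard_normal((4*n, n)) * 0.01
b = np.zeros(4*n)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)  # (256,) (256,)
```

A bidirectional layer runs one such recurrence left-to-right and another right-to-left over the frame sequence, concatenating their outputs per frame.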
LSTM is directional: it uses only past contexts. However, in image-based
sequences, contexts from both directions are useful and complementary
to each other. Therefore, we follow \cite{GravesMH13} and combine
two LSTMs, one forward and one backward, into a bidirectional LSTM.
Furthermore, multiple bidirectional LSTMs can be stacked, resulting
in a deep bidirectional LSTM as illustrated in Fig.~\ref{fig:lstm}.b.
The deep structure allows higher level of abstractions than a shallow
one, and has achieved significant performance improvements in the
task of speech recognition \cite{GravesMH13}.
In recurrent layers, error differentials are propagated in the opposite
directions of the arrows shown in Fig.~\ref{fig:lstm}.b, \emph{i.e.}
Back-Propagation Through Time (BPTT). At the bottom of the recurrent
layers, the sequence of propagated differentials is concatenated
into maps, inverting the operation of converting feature maps into
feature sequences, and fed back to the convolutional layers. In practice,
we create a custom network layer, called ``Map-to-Sequence'', as
the bridge between convolutional layers and recurrent layers.
\subsection{Transcription}
Transcription is the process of converting the per-frame predictions
made by RNN into a label sequence. Mathematically, transcription is
to find the label sequence with the highest probability conditioned
on the per-frame predictions. In practice, there exist two modes
of transcription, namely the lexicon-free and lexicon-based transcriptions.
A lexicon is a set of label sequences that predictions are constrained
to, \emph{e.g.} a spell checking dictionary. In lexicon-free mode,
predictions are made without any lexicon. In lexicon-based mode, predictions
are made by choosing the label sequence in the lexicon that has the highest probability.
\subsubsection{Probability of label sequence\label{sec:stringprob}}
We adopt the conditional probability defined in the Connectionist
Temporal Classification (CTC) layer proposed by Graves \emph{et al.}
\cite{GravesFGS06}. The probability is defined for label sequence
$\mathbf{l}$ conditioned on the per-frame predictions $\mathbf{y}=y_{1},\dots,y_{T}$,
and it ignores the position where each label in $\mathbf{l}$ is located.
Consequently, when we use the negative log-likelihood of this probability
as the objective to train the network, we only need images and their
corresponding label sequences, avoiding the labor of labeling positions
of individual characters.
The formulation of the conditional probability is briefly described
as follows: The input is a sequence $\mathbf{y}=y_{1},\dots,y_{T}$
where $T$ is the sequence length. Here, each $y_{t}\in\Re^{|{\cal L}'|}$
is a probability distribution over the set ${\cal L}'={\cal L}\cup\{\textrm{blank}\}$,
where ${\cal L}$ contains all labels in the task (\emph{e.g.} all
English characters) and $\textrm{blank}$ denotes an extra 'blank' label.
A sequence-to-sequence mapping function ${\cal B}$ is defined on
sequence $\boldsymbol{\pi}\in{\cal L}'^{T}$, where $T$ is the length.
${\cal B}$ maps $\boldsymbol{\pi}$ onto $\mathbf{l}$ by firstly
removing the repeated labels, then removing the 'blank's. For example,
${\cal B}$ maps ``\texttt{-{}-hh-e-l-ll-oo-{}-}'' ('\texttt{-}'
represents 'blank') onto ``\texttt{hello}''. Then, the conditional
probability is defined as the sum of probabilities of all $\boldsymbol{\pi}$
that are mapped by ${\cal B}$ onto $\mathbf{l}$:
\begin{equation}
p(\mathbf{l}|\mathbf{y})=\sum_{\boldsymbol{\pi}:{\cal B}(\boldsymbol{\pi})=\mathbf{l}}p(\boldsymbol{\pi}|\mathbf{y}),\label{eq:stringprob}
\end{equation}
where the probability of $\boldsymbol{\pi}$ is defined as $p(\boldsymbol{\pi}|\mathbf{y})=\prod_{t=1}^{T}y_{\pi_{t}}^{t}$,
$y_{\pi_{t}}^{t}$ is the probability of having label $\pi_{t}$ at
time stamp $t$. Directly computing Eq.~\ref{eq:stringprob} would
be computationally infeasible due to the exponentially large number
of summation items. However, Eq.~\ref{eq:stringprob} can be efficiently
computed using the forward-backward algorithm described in~\cite{GravesFGS06}.
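The mapping ${\cal B}$ and Eq.~\ref{eq:stringprob} can be checked directly on tiny inputs by brute-force enumeration (a didactic Python sketch; training uses the efficient forward-backward algorithm of \cite{GravesFGS06}, and the label set here is a toy example):

```python
from itertools import product

BLANK = '-'  # the 'blank' label, written '-' as in the text

def B(pi):
    """Collapse repeated labels, then drop blanks."""
    out = []
    for s in pi:
        if out and out[-1] == s:
            continue          # remove repeated labels
        out.append(s)
    return ''.join(c for c in out if c != BLANK)

def prob(l, y, labels):
    """p(l|y): sum over all paths pi with B(pi) = l of prod_t y_t[pi_t].

    y is a list of dicts mapping each label to its per-frame probability.
    Brute force: only feasible for tiny T."""
    p = 0.0
    for pi in product(labels, repeat=len(y)):
        if B(pi) == l:
            q = 1.0
            for t, s in enumerate(pi):
                q *= y[t][s]
            p += q
    return p

print(B('--hh-e-l-ll-oo--'))            # hello
y = [{BLANK: 0.5, 'a': 0.5}] * 2
print(prob('a', y, [BLANK, 'a']))        # 0.75: paths 'aa', 'a-', '-a'
```

Note how multiple paths ('aa', 'a-', '-a') contribute to the same label sequence, which is exactly the summation in Eq.~\ref{eq:stringprob}.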
\subsubsection{Lexicon-free transcription\label{sec:lexfree}}
In this mode, the sequence $\mathbf{l}^{*}$ that has the highest
probability as defined in Eq.~\ref{eq:stringprob} is taken as the
prediction. Since there exists no tractable algorithm to precisely
find the solution, we use the strategy adopted in \cite{GravesFGS06}.
The sequence $\mathbf{l}^{*}$ is approximately found by $\mathbf{l}^{*}\approx{\cal B}(\arg\max_{\boldsymbol{\pi}}p(\boldsymbol{\pi}|\mathbf{y}))$,
\emph{i.e.} taking the most probable label $\pi_{t}$ at each time
stamp $t$, and mapping the resulting sequence onto $\mathbf{l}^{*}$.
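This best-path strategy amounts to a per-frame argmax followed by the collapse-and-remove-blanks mapping ${\cal B}$ (a minimal Python illustration; the label set and probabilities are toy values):

```python
BLANK = '-'  # the 'blank' label

def best_path_decode(y):
    """Lexicon-free transcription: take the most probable label at each
    frame, then collapse repeats and remove blanks (an approximation
    to the most probable label sequence)."""
    pi = [max(frame, key=frame.get) for frame in y]  # per-frame argmax
    out, prev = [], None
    for s in pi:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return ''.join(out)

# Toy per-frame distributions over {'-', 'a', 'b'}.
y = [{'-': 0.1, 'a': 0.8, 'b': 0.1},
     {'-': 0.1, 'a': 0.7, 'b': 0.2},
     {'-': 0.8, 'a': 0.1, 'b': 0.1},
     {'-': 0.1, 'a': 0.1, 'b': 0.8}]
print(best_path_decode(y))  # ab
```

The blank frame between the two characters is what allows repeated letters (\emph{e.g.} ``ll'') to survive the collapse step.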
\subsubsection{Lexicon-based transcription\label{sec:lexbased}}
In lexicon-based mode, each test sample is associated with a lexicon
${\cal D}$. Basically, the label sequence is recognized by choosing
the sequence in the lexicon that has highest conditional probability
defined in Eq.~\ref{eq:stringprob}, \emph{i.e.} $\mathbf{l}^{*}=\arg\max_{\mathbf{l}\in{\cal D}}p(\mathbf{l}|\mathbf{y})$.
However, for large lexicons, \emph{e.g.} the 50k-word Hunspell spell-checking
dictionary~\cite{Hunspell}, it would be very time-consuming to perform
an exhaustive search over the lexicon, \emph{i.e.} to compute Equation~\ref{eq:stringprob}
for all sequences in the lexicon and choose the one with the highest
probability. To solve this problem, we observe that the label sequences
predicted via lexicon-free transcription, described in \ref{sec:lexfree},
are often close to the ground-truth under the edit distance metric.
This indicates that we can limit our search to the nearest-neighbor
candidates ${\cal N}_{\delta}(\mathbf{l}')$, where $\delta$ is the
maximal edit distance and $\mathbf{l}'$ is the sequence transcribed
from $\mathbf{y}$ in lexicon-free mode:
\begin{equation}
\mathbf{l}^{*}=\arg\max_{\mathbf{l}\in{\cal N}_{\delta}(\mathbf{l}')}p(\mathbf{l}|\mathbf{y}).\label{eq:largelex}
\end{equation}
The candidates ${\cal N}_{\delta}(\mathbf{l}')$ can be found efficiently
with the BK-tree data structure~\cite{BurkhardK73}, which is a metric
tree specifically adapted to discrete metric spaces. The search time
complexity of BK-tree is $O(\log|{\cal D}|)$, where $|{\cal D}|$
is the lexicon size. Therefore this scheme readily extends to very
large lexicons. In our approach, a BK-tree is constructed offline
for a lexicon. Then we perform fast online search with the tree, by
finding sequences that have edit distance less than or equal to $\delta$
to the query sequence.
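The BK-tree construction and the radius-$\delta$ search can be sketched as follows (a minimal Python illustration using Levenshtein edit distance; the lexicon shown is a toy example, not the Hunspell dictionary):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

class BKTree:
    """Metric tree over a discrete metric (here: edit distance)."""
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})        # node = (word, {distance: child})
        for w in it:
            self._add(w)

    def _add(self, w):
        node = self.root
        while True:
            d = edit_distance(w, node[0])
            if d == 0:
                return
            if d in node[1]:
                node = node[1][d]
            else:
                node[1][d] = (w, {})
                return

    def query(self, w, delta):
        """All stored words within edit distance delta of w."""
        found, stack = [], [self.root]
        while stack:
            word, children = stack.pop()
            d = edit_distance(w, word)
            if d <= delta:
                found.append(word)
            # Triangle inequality: only children whose edge label lies in
            # [d - delta, d + delta] can contain matches.
            for k, child in children.items():
                if d - delta <= k <= d + delta:
                    stack.append(child)
        return found

lexicon = ['hello', 'hallo', 'help', 'world', 'hollow']
tree = BKTree(lexicon)
print(sorted(tree.query('helo', 2)))  # candidate set within edit distance 2
```

The triangle-inequality pruning in \texttt{query} is what avoids the exhaustive scan over the whole lexicon.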
\subsection{Network Training\label{sec:nettrain}}
Denote the training dataset by ${\cal X}=\{I_{i},\mathbf{l}_{i}\}_{i}$,
where $I_{i}$ is the training image and $\mathbf{l}_{i}$ is the
ground truth label sequence. The objective is to minimize the negative
log-likelihood of conditional probability of ground truth:
\begin{equation}
{\cal O}=-\sum_{I_{i},\mathbf{l}_{i}\in{\cal X}}\log p(\mathbf{l}_{i}|\mathbf{y}_{i}),\label{eq:objective}
\end{equation}
where $\mathbf{y}_{i}$ is the sequence produced by the recurrent
and convolutional layers from $I_{i}$. This objective function calculates
a cost value directly from an image and its ground truth label sequence.
Therefore, the network can be end-to-end trained on pairs of images
and sequences, eliminating the procedure of manually labeling all
individual components in training images.
The network is trained with stochastic gradient descent (SGD). Gradients
are calculated by the back-propagation algorithm. In particular, in
the transcription layer, error differentials are back-propagated with
the forward-backward algorithm, as described in~\cite{GravesFGS06}.
In the recurrent layers, the Back-Propagation Through Time (BPTT)
is applied to calculate the error differentials.
For optimization, we use ADADELTA~\cite{Matthew12ADADELTA} to
automatically calculate per-dimension learning rates. Compared with
the conventional momentum \cite{LearningRepresentations} method,
ADADELTA requires no manual setting of a learning rate. More importantly,
we find that optimization using ADADELTA converges faster than the
momentum method.
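The ADADELTA update rule can be sketched as follows (a NumPy toy on a simple quadratic objective; only $\rho=0.9$ matches our setting, and the objective, dimensions and step count are illustrative):

```python
import numpy as np

def adadelta_minimize(grad_fn, x0, rho=0.9, eps=1e-6, steps=2000):
    """ADADELTA: per-dimension learning rates derived from running
    averages of squared gradients and squared updates, with no global
    learning rate to tune."""
    x = x0.astype(float).copy()
    Eg2 = np.zeros_like(x)   # running average of squared gradients
    Edx2 = np.zeros_like(x)  # running average of squared updates
    for _ in range(steps):
        g = grad_fn(x)
        Eg2 = rho * Eg2 + (1 - rho) * g * g
        dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * g
        Edx2 = rho * Edx2 + (1 - rho) * dx * dx
        x += dx
    return x

# Minimize f(x) = ||x - target||^2; the gradient is 2 (x - target).
target = np.array([3.0, -1.0])
x = adadelta_minimize(lambda v: 2 * (v - target), np.zeros(2))
print(np.round(x, 2))
```

Note that no learning rate appears anywhere: the ratio of the two running averages sets the per-dimension step size automatically.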
\section{Experiments}
To evaluate the effectiveness of the proposed CRNN model, we conducted
experiments on standard benchmarks for scene text recognition and
musical score recognition, which are both challenging vision tasks.
The datasets and setting for training and testing are given in Sec.~\ref{sec:datasets},
the detailed settings of CRNN for scene text images are provided
in Sec.~\ref{sec:impldetails}, and the results with comprehensive
comparisons are reported in Sec.~\ref{sec:evaluation}. To further
demonstrate the generality of CRNN, we verify the proposed algorithm
on a music score recognition task in Sec.~\ref{sec:musicalscore}.
\subsection{Datasets\label{sec:datasets}}
For all the experiments for scene text recognition, we use the synthetic
dataset (Synth) released by Jaderberg \emph{et al.} \cite{JaderbergSVZ14}
as the training data. The dataset contains 8 million training images
and their corresponding ground truth words. Such images are generated
by a synthetic text engine and are highly realistic. Our network is
trained on the synthetic data once, and tested on all other real-world
test datasets without any fine-tuning on their training data. Even
though the CRNN model is purely trained with synthetic text data,
it works well on real images from standard text recognition benchmarks.
Four popular benchmarks for scene text recognition are used for performance
evaluation, namely ICDAR~2003 (IC03), ICDAR~2013 (IC13), IIIT~5k-word
(IIIT5k), and Street View Text (SVT).
\textbf{IC03} \cite{LucasPSTWYANOYMZOWJTWL05} test dataset contains
251 scene images with labeled text bounding boxes. Following Wang
\emph{et al.} \cite{WangBB11}, we ignore images that either contain
non-alphanumeric characters or have less than three characters, and
get a test set with 860 cropped text images. Each test image is associated
with a 50-word lexicon defined by Wang \emph{et al.} \cite{WangBB11}.
A full lexicon is built by combining all the per-image lexicons. In
addition, we use a 50k-word lexicon consisting of the words in the
Hunspell spell-checking dictionary \cite{Hunspell}.
\textbf{IC13} \cite{KaratzasSUIBMMMAH13} test dataset inherits most
of its data from IC03. It contains 1,015 cropped word images with
ground truths.
\textbf{IIIT5k \cite{MishraAJ12}} contains 3,000 cropped word test
images collected from the Internet. Each image is associated
with a 50-word lexicon and a 1k-word lexicon.
\textbf{SVT \cite{WangBB11}} test dataset consists of 249 street
view images collected from Google Street View, from which 647 word
images are cropped. Each word image has a 50-word lexicon defined
by Wang \emph{et al.} \cite{WangBB11}.
\begin{table}
\caption{Network configuration summary. The first row is the top layer.
`k', `s' and `p' stand for kernel size, stride and padding size, respectively.}
\footnotesize
\begin{centering}
\begin{tabular}{|c|c|}
\hline
\textbf{Type} & \textbf{Configurations}\tabularnewline
\hline
\hline
Transcription & -\tabularnewline
\hline
Bidirectional-LSTM & \#hidden units:256\tabularnewline
\hline
Bidirectional-LSTM & \#hidden units:256\tabularnewline
\hline
Map-to-Sequence & -\tabularnewline
\hline
Convolution & \#maps:512, k:$2\times2$, s:1, p:0\tabularnewline
\hline
MaxPooling & Window:$1\times2$, s:2\tabularnewline
\hline
BatchNormalization & -\tabularnewline
\hline
Convolution & \#maps:512, k:$3\times3$, s:1, p:1\tabularnewline
\hline
BatchNormalization & -\tabularnewline
\hline
Convolution & \#maps:512, k:$3\times3$, s:1, p:1\tabularnewline
\hline
MaxPooling & Window:$1\times2$, s:2\tabularnewline
\hline
Convolution & \#maps:256, k:$3\times3$, s:1, p:1\tabularnewline
\hline
Convolution & \#maps:256, k:$3\times3$, s:1, p:1\tabularnewline
\hline
MaxPooling & Window:$2\times2$, s:2\tabularnewline
\hline
Convolution & \#maps:128, k:$3\times3$, s:1, p:1\tabularnewline
\hline
MaxPooling & Window:$2\times2$, s:2\tabularnewline
\hline
Convolution & \#maps:64, k:$3\times3$, s:1, p:1\tabularnewline
\hline
Input & $W\times32$ gray-scale image\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\label{tbl:netconfig}
\end{table}
\subsection{Implementation Details\label{sec:impldetails}}
The network configuration we use in our experiments is summarized
in Table~\ref{tbl:netconfig}. The architecture of the convolutional
layers is based on the VGG-VeryDeep architectures~\cite{SimonyanZ14a}.
A tweak is made in order to make it suitable for recognizing English
texts. In the 3rd and the 4th max-pooling layers, we adopt $1\times2$
sized rectangular pooling windows instead of the conventional squared
ones. This tweak yields feature maps with larger width, hence longer
feature sequence. For example, an image containing 10 characters is
typically of size $100\times32$, from which a feature sequence of 25
frames can be generated. This length exceeds the lengths of most English
words. On top of that, the rectangular pooling windows yield rectangular
receptive fields (illustrated in Fig.~\ref{fig:recfield}), which
are beneficial for recognizing some characters that have narrow shapes,
such as 'i' and 'l'.
The network not only has deep convolutional layers, but also has recurrent
layers. Both are known to be hard to train. We find that the batch
normalization \cite{IoffeS15} technique is extremely useful for training
network of such depth. Two batch normalization layers are inserted
after the 5th and 6th convolutional layers respectively. With the
batch normalization layers, the training process is greatly accelerated.
We implement the network within the Torch7~\cite{Collobert11} framework,
with custom implementations for the LSTM units (in Torch7/CUDA), the
transcription layer (in C++) and the BK-tree data structure (in C++).
Experiments are carried out on a workstation with a 2.50 GHz Intel(R)
Xeon(R) E5-2609 CPU, 64GB RAM and an NVIDIA(R) Tesla(TM) K40 GPU.
Networks are trained with ADADELTA, setting the parameter $\rho$
to 0.9. During training, all images are scaled to $100\times32$ in
order to accelerate the training process. The training process takes
about 50 hours to reach convergence. Testing images are scaled to
have height 32. Widths are proportionally scaled with heights, but
at least 100 pixels. The average testing time is 0.16s/sample, as
measured on IC03 without a lexicon. The approximate lexicon search
is applied to the 50k lexicon of IC03, with the parameter $\delta$
set to 3. Testing each sample takes 0.53s on average.
\subsection{Comparative Evaluation\label{sec:evaluation}}
All the recognition accuracies on the above four public datasets,
obtained by the proposed CRNN model and recent state-of-the-art
techniques, including the approaches based on deep models \cite{JaderbergVZ14,JaderbergSVZ14a,JaderbergSVZ14b},
are shown in Table~\ref{tbl:results}.
\begin{table*}[t]
\caption{Recognition accuracies (\%) on four datasets. In the second row, ``50'',
``1k'', ``50k'' and ``Full'' denote the lexicon used, and ``None''
denotes recognition without a lexicon. ({*}\cite{JaderbergSVZ14a}
is not lexicon-free in the strict sense, as its outputs are constrained
to a 90k dictionary.)}
\vspace{0.2cm}
\footnotesize
\begin{centering}
\begin{tabular}{llccccccccccccc}
\hline
\noalign{\vskip\doublerulesep}
& & \multicolumn{3}{c}{\textbf{IIIT5k}} & & \multicolumn{2}{c}{\textbf{SVT}} & & \multicolumn{4}{c}{\textbf{IC03}} & & \textbf{IC13}\tabularnewline
\cline{3-5} \cline{7-8} \cline{10-13} \cline{15-15}
\noalign{\vskip\doublerulesep}
& & \textbf{50} & \textbf{1k} & \textbf{None} & & \textbf{50} & \textbf{None} & & \textbf{50} & \textbf{Full} & \textbf{50k} & \textbf{None} & & \textbf{None}\tabularnewline
\hline
\noalign{\vskip\doublerulesep}
ABBYY \cite{WangBB11} & & 24.3 & - & - & & 35.0 & - & & 56.0 & 55.0 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Wang \emph{et al.} \cite{WangBB11} & & - & - & - & & 57.0 & - & & 76.0 & 62.0 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Mishra \emph{et al.} \cite{MishraAJ12} & & 64.1 & 57.5 & - & & 73.2 & - & & 81.8 & 67.8 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Wang \emph{et al.} \cite{WangWCN12} & & - & - & - & & 70.0 & - & & 90.0 & 84.0 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Goel \emph{et al.} \cite{GoelMAJ13} & & - & - & - & & 77.3 & - & & 89.7 & - & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Bissacco \emph{et al.} \cite{BissaccoCNN13} & & - & - & - & & 90.4 & 78.0 & & - & - & - & - & & 87.6\tabularnewline
\noalign{\vskip\doublerulesep}
Alsharif and Pineau \cite{AlsharifP13} & & - & - & - & & 74.3 & - & & 93.1 & 88.6 & 85.1 & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Almaz\'an \emph{et al.} \cite{AlmazanGFV14} & & 91.2 & 82.1 & - & & 89.2 & - & & - & - & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Yao \emph{et al.} \cite{YaoBSL14} & & 80.2 & 69.3 & - & & 75.9 & - & & 88.5 & 80.3 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Rodr\'iguez-Serrano \emph{et al.} \cite{Rodriguez-Serrano15} & & 76.1 & 57.4 & - & & 70.0 & - & & - & - & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Jaderberg \emph{et al.} \cite{JaderbergVZ14} & & - & - & - & & 86.1 & - & & 96.2 & 91.5 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Su and Lu \cite{SuL14} & & - & - & - & & 83.0 & - & & 92.0 & 82.0 & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Gordo \cite{Gordo14} & & 93.3 & 86.6 & - & & 91.8 & - & & - & - & - & - & & -\tabularnewline
\noalign{\vskip\doublerulesep}
Jaderberg \emph{et al.} \cite{JaderbergSVZ14a} & & 97.1 & 92.7 & - & & 95.4 & 80.7{*} & & \textbf{98.7} & \textbf{98.6} & 93.3 & \textbf{93.1}{*} & & \textbf{90.8}{*}\tabularnewline
\noalign{\vskip\doublerulesep}
Jaderberg \emph{et al.} \cite{JaderbergSVZ14b} & & 95.5 & 89.6 & - & & 93.2 & 71.7 & & 97.8 & 97.0 & 93.4 & 89.6 & & 81.8\tabularnewline[\doublerulesep]
\hline
\noalign{\vskip\doublerulesep}
CRNN & & \textbf{97.6} & \textbf{94.4} & \textbf{78.2} & & \textbf{96.4} & \textbf{80.8} & & \textbf{98.7} & 97.6 & \textbf{95.5} & 89.4 & & 86.7\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\label{tbl:results}
\end{table*}
In the constrained lexicon cases, our method consistently outperforms
most state-of-the-art approaches, and on average beats the best text
reader proposed in \cite{JaderbergSVZ14a}. Specifically, we obtain
superior performance on IIIT5k and SVT compared to \cite{JaderbergSVZ14a},
achieving lower performance only on IC03 with the ``Full'' lexicon.
Note that the model in \cite{JaderbergSVZ14a} is trained on a specific
dictionary, namely that each word is associated with a class label.
Unlike \cite{JaderbergSVZ14a}, CRNN is not limited to recognizing a
word in a known dictionary, and is able to handle random strings (\emph{e.g.}
telephone numbers), sentences or other scripts such as Chinese words.
Therefore, the results of CRNN are competitive on all the testing
datasets.
In the unconstrained lexicon cases, our method achieves the best performance
on SVT, yet is still behind some approaches \cite{BissaccoCNN13,JaderbergSVZ14a}
on IC03 and IC13. Note that the blanks in the ``none'' columns of
Table~\ref{tbl:results} denote that such approaches cannot be
applied to recognition without a lexicon, or did not report recognition
accuracies in the unconstrained cases. Our method uses only synthetic
text with word-level labels as the training data, very different from
PhotoOCR~\cite{BissaccoCNN13}, which used 7.9 million real word
images with character-level annotations for training. The best performance
in the unconstrained lexicon cases is reported by \cite{JaderbergSVZ14a},
benefiting from its large dictionary; however, as mentioned before, it is
not a model strictly unconstrained to a lexicon. In
this sense, our results in the unconstrained lexicon case are still
promising.
\begin{table}[t]
\caption{Comparison among various methods. Attributes for comparison include:
1) being end-to-end trainable (E2E Train); 2) using convolutional
features that are directly learned from images rather than using hand-crafted
ones (Conv Ftrs); 3) requiring no ground truth bounding boxes for
characters during training (CharGT-Free); 4) not confined to a pre-defined
dictionary (Unconstrained); 5) the model size (if an end-to-end trainable
model is used), measured by the number of model parameters (Model
Size, M stands for millions).}
\footnotesize
\begin{centering}
\begin{tabular}{lccccc}
\noalign{\vskip\doublerulesep}
& \begin{turn}{90}
E2E Train
\end{turn} & \begin{turn}{90}
Conv Ftrs
\end{turn} & \begin{turn}{90}
CharGT-Free
\end{turn} & \begin{turn}{90}
Unconstrained
\end{turn} & \begin{turn}{90}
Model Size
\end{turn}\tabularnewline
\hline
\noalign{\vskip\doublerulesep}
Wang \emph{et al.} \cite{WangBB11} & \ding{55} & \ding{55} & \ding{55} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Mishra \emph{et al.} \cite{MishraAJ12} & \ding{55} & \ding{55} & \ding{55} & \ding{55} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Wang \emph{et al.} \cite{WangWCN12} & \ding{55} & \ding{52} & \ding{55} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Goel \emph{et al.} \cite{GoelMAJ13} & \ding{55} & \ding{55} & \ding{52} & \ding{55} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Bissacco \emph{et al.} \cite{BissaccoCNN13} & \ding{55} & \ding{55} & \ding{55} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Alsharif and Pineau \cite{AlsharifP13} & \ding{55} & \ding{52} & \ding{55} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Almaz\'an \emph{et al.} \cite{AlmazanGFV14} & \ding{55} & \ding{55} & \ding{52} & \ding{55} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Yao \emph{et al.} \cite{YaoBSL14} & \ding{55} & \ding{55} & \ding{55} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Rodr\'iguez-Serrano \emph{et al.} \cite{Rodriguez-Serrano15} & \ding{55} & \ding{55} & \ding{52} & \ding{55} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Jaderberg \emph{et al.} \cite{JaderbergVZ14} & \ding{55} & \ding{52} & \ding{55} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Su and Lu \cite{SuL14} & \ding{55} & \ding{55} & \ding{52} & \ding{52} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Gordo \cite{Gordo14} & \ding{55} & \ding{55} & \ding{55} & \ding{55} & -\tabularnewline
\noalign{\vskip\doublerulesep}
Jaderberg \emph{et al.} \cite{JaderbergSVZ14a} & \ding{52} & \ding{52} & \ding{52} & \ding{55} & 490M\tabularnewline
\noalign{\vskip\doublerulesep}
Jaderberg \emph{et al.} \cite{JaderbergSVZ14b} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & 304M\tabularnewline[\doublerulesep]
\hline
\noalign{\vskip\doublerulesep}
CRNN & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \textbf{8.3M}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\label{tbl:methodcomp}
\end{table}
To further understand the advantages of the proposed algorithm
over other text recognition approaches, we provide a comprehensive
comparison on several properties named E2E Train, Conv Ftrs, CharGT-Free,
Unconstrained, and Model Size, as summarized in Table~\ref{tbl:methodcomp}.
\textbf{E2E Train}: This column shows whether a text
reading model is end-to-end trainable, without any preprocessing or
several separate steps, which makes such approaches elegant
and clean to train. As can be observed from Table~\ref{tbl:methodcomp},
only the models based on deep neural networks, including~\cite{JaderbergSVZ14a,JaderbergSVZ14b}
as well as CRNN, have this property.
\textbf{Conv Ftrs}: This column indicates whether an approach
uses convolutional features learned directly from training images,
or hand-crafted features, as the basic representation.
\textbf{CharGT-Free}: This column indicates whether character-level
annotations are essential for training the model. As the input and
output labels of CRNN can be sequences, character-level annotations
are not necessary.
\textbf{Unconstrained}: This column indicates whether the trained
model is constrained to a specific dictionary, unable to handle
out-of-dictionary words or random sequences. Notice that though the
recent models learned by label embedding \cite{AlmazanGFV14,Gordo14}
and incremental learning \cite{JaderbergSVZ14a} achieved highly competitive
performance, they are constrained to a specific dictionary.
\textbf{Model Size}: This column reports the storage space of
the learned model. In CRNN, all layers have weight-sharing connections,
and the fully-connected layers are not needed. Consequently, the number
of parameters of CRNN is much less than the models learned on the
variants of CNN \cite{JaderbergSVZ14a,JaderbergSVZ14b}, resulting
in a much smaller model compared with \cite{JaderbergSVZ14a,JaderbergSVZ14b}.
Our model has 8.3 million parameters, taking only 33MB of RAM (using
a 4-byte single-precision float for each parameter), so it can be
easily ported to mobile devices.
Table~\ref{tbl:methodcomp} clearly shows the differences among the
approaches in detail, and fully demonstrates the advantages of CRNN
over other competing methods.
In addition, to test the impact of the parameter $\delta$, we experiment
with different values of $\delta$ in Eq.~\ref{eq:largelex}. In Fig.~\ref{fig:impact-of-delta}
we plot the recognition accuracy as a function of $\delta$. Larger
$\delta$ results in more candidates, thus more accurate lexicon-based
transcription. On the other hand, the computational cost grows with
larger $\delta$, due to longer BK-tree search time, as well as larger
number of candidate sequences for testing. In practice, we choose
$\delta=3$ as a tradeoff between accuracy and speed.
\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.7\linewidth]{impact-of-delta.pdf}
\par\end{centering}
\caption{Blue line graph: recognition accuracy as a function of the parameter $\delta$.
Red bars: lexicon search time per sample. Tested on the IC03 dataset
with the 50k lexicon.}
\label{fig:impact-of-delta}
\end{figure}
\subsection{Musical Score Recognition\label{sec:musicalscore}}
A musical score typically consists of sequences of musical notes arranged
on staff lines. Recognizing musical scores in images is known as the
Optical Music Recognition (OMR) problem. Previous methods often require
image preprocessing (mostly binarization), staff line detection and
individual note recognition~\cite{RebeloFPMGC12}. We cast OMR
as a sequence recognition problem, and predict a sequence of musical
notes directly from the image with CRNN. For simplicity, we recognize
pitches only, ignoring all chords and assuming the same major scale (C
major) for all scores.
To the best of our knowledge, there exist no public datasets for
evaluating algorithms on pitch recognition. To prepare the training
data needed by CRNN, we collect 2650 images from \cite{Musescore}.
Each image contains a fragment of score containing 3 to 20 notes.
We manually label the ground truth label sequences (sequences of note
pitches) for all the images. The collected images are augmented to
265k training samples by being rotated, scaled and corrupted with
noise, and by replacing their backgrounds with natural images. For
testing, we create three datasets: 1) ``Clean'', which contains
260 images collected from \cite{Musescore}. Examples are shown in
Fig.~\ref{fig:musicsamples}.a; 2) ``Synthesized'', which is created
from ``Clean'', using the augmentation strategy mentioned above.
It contains 200 samples, some of which are shown in Fig.~\ref{fig:musicsamples}.b;
3) ``Real-World'', which contains 200 images of score fragments
taken from music books with a phone camera. Examples are shown in
Fig.~\ref{fig:musicsamples}.c.\footnote{We will release the dataset for academic use.}
\begin{figure}[th]
\begin{centering}
\includegraphics[width=0.9\linewidth]{music-samples.pdf}
\par\end{centering}
\caption{(a) Clean musical score images collected from \cite{Musescore}. (b)
Synthesized musical score images. (c) Real-world score images taken
with a mobile phone camera.}
\label{fig:musicsamples}
\end{figure}
Since we have limited training data, we use a simplified CRNN configuration
in order to reduce model capacity. Different from the configuration
specified in Tab.~\ref{tbl:netconfig}, the 4th and 6th convolution
layers are removed, and the 2-layer bidirectional LSTM is replaced
by a 2-layer single directional LSTM. The network is trained on the
pairs of images and corresponding label sequences. Two measures are
used for evaluating the recognition performance: 1) fragment accuracy,
\emph{i.e.} the percentage of score fragments correctly recognized;
2) average edit distance, \emph{i.e.} the average edit distance between
predicted pitch sequences and the ground truths. For comparison,
we evaluate two commercial OMR engines, namely the Capella~Scan~\cite{CapellaScan}
and the PhotoScore~\cite{PhotoScore}.
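The two evaluation measures can be computed as follows; a minimal sketch (the function names are ours), assuming pitch sequences are represented as lists of pitch labels.

```python
def edit_distance(pred, truth):
    # Levenshtein distance between two pitch sequences.
    m, n = len(pred), len(truth)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (pred[i - 1] != truth[j - 1]))
        prev = cur
    return prev[n]

def evaluate(predictions, ground_truths):
    # Returns (fragment accuracy, average edit distance).
    pairs = list(zip(predictions, ground_truths))
    correct = sum(p == t for p, t in pairs)
    avg_ed = sum(edit_distance(p, t) for p, t in pairs) / len(pairs)
    return correct / len(pairs), avg_ed
```

A fragment counts toward the accuracy only if the whole predicted sequence matches the ground truth, while the edit distance credits partially correct transcriptions.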
\begin{table}[h]
\caption{Comparison of pitch recognition accuracies, among CRNN and two commercial
OMR systems, on the three datasets we have collected. Performances
are evaluated by fragment accuracies and average edit distance (``fragment
accuracy/average edit distance'').}
\footnotesize
\begin{centering}
\begin{tabular}{lccc}
\noalign{\vskip\doublerulesep}
& \textbf{Clean} & \textbf{Synthesized} & \textbf{Real-World}\tabularnewline
\hline
\noalign{\vskip\doublerulesep}
Capella~Scan \cite{CapellaScan} & 51.9\%/1.75 & 20.0\%/2.31 & 43.5\%/3.05\tabularnewline
\noalign{\vskip\doublerulesep}
PhotoScore \cite{PhotoScore} & 55.0\%/2.34 & 28.0\%/1.85 & 20.4\%/3.00\tabularnewline
\noalign{\vskip\doublerulesep}
CRNN & \textbf{74.6\%/0.37} & \textbf{81.5\%/0.30} & \textbf{84.0\%/0.30}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\label{tbl:musicaccuracy}
\end{table}
Tab.~\ref{tbl:musicaccuracy} summarizes the results. The CRNN outperforms
the two commercial systems by a large margin. The Capella~Scan and
PhotoScore systems perform reasonably well on the Clean dataset, but
their performances drop significantly on synthesized and real-world
data. The main reason is that they rely on robust binarization to
detect staff lines and notes, but the binarization step often fails
on synthesized and real-world data due to bad lighting conditions,
noise corruption and cluttered backgrounds. The CRNN, on the other
hand, uses convolutional features that are highly robust to noise
and distortions. Besides, the recurrent layers in CRNN can utilize contextual
information in the score: each note is recognized not only by itself,
but also with reference to the nearby notes, \emph{e.g.} by contrasting
their vertical positions.
The results have shown the generality of CRNN, in that it can be readily
applied to other image-based sequence recognition problems, requiring
minimal domain knowledge. Compared with Capella~Scan and PhotoScore,
our CRNN-based system is still preliminary and misses many functionalities.
But it provides a new scheme for OMR, and has shown promising capabilities
in pitch recognition.
\section{Conclusion}
In this paper, we have presented a novel neural network architecture,
called Convolutional Recurrent Neural Network (CRNN), which integrates
the advantages of both Convolutional Neural Networks (CNN) and Recurrent
Neural Networks (RNN). CRNN is able to take input images of varying
dimensions and produce predictions of different lengths. It directly
runs on coarse level labels (\emph{e.g.} words), requiring no detailed
annotations for each individual element (\emph{e.g.} characters) in
the training phase. Moreover, as CRNN abandons fully connected layers
used in conventional neural networks, it results in a much more compact
and efficient model. All these properties make CRNN an excellent approach
for image-based sequence recognition.
The experiments on the scene text recognition benchmarks demonstrate
that CRNN achieves superior or highly competitive performance, compared
with conventional methods as well as other CNN and RNN based algorithms.
This confirms the advantages of the proposed algorithm. In addition,
CRNN significantly outperforms other competitors on a benchmark for
Optical Music Recognition (OMR), which verifies the generality of
CRNN.
CRNN is, in fact, a general framework that can be applied to other
domains and problems involving sequence prediction in images (such
as Chinese character recognition). Further speeding up CRNN and making
it more practical in real-world applications is another direction
that is worth exploring in the future.
\section*{Acknowledgement}
This work was primarily supported by National Natural Science Foundation
of China (NSFC) (No. 61222308).
{\small
\bibliographystyle{ieee}
}
\section{Introduction}
The object of our study is the massless field on $D_N = D \cap \tfrac{1}{N} \mathbf{Z}^2$, where $D \subseteq \mathbf{C}$ is a bounded domain with smooth boundary, with Hamiltonian given by $\mathcal {H}(h) = \sum_{b \in D_N^*} \mathcal {V}(\nabla h(b))$. The sum is over all edges in the induced subgraph of $\tfrac{1}{N} \mathbf{Z}^2$ with vertices in $D_N$ and $\nabla h(b) = h(y) - h(x)$ denotes the discrete gradient of $h$. We assume that $h(x) = f(x)$ when $x \in \partial D_N$ and $f \colon D \to \mathbf{R}$ is a given continuous function. We consider a general interaction $\mathcal {V} \in C^2(\mathbf{R})$ which is assumed only to satisfy:
\begin{enumerate}
\item $\mathcal {V}(x) = \mathcal {V}(-x)$ (symmetry),
\item $a \leq \mathcal {V}''(x) \leq A$ (strict convexity),
\item $\mathcal {V}''$ is $L$-Lipschitz.
\end{enumerate}
The purpose of the first condition is to simplify the notation since the symmetrization of a non-symmetric potential does not change its behavior. The second and third conditions are technical assumptions. Note that we can assume without loss of generality that $\mathcal {V}(0) = 0$. This is the so-called Ginzburg-Landau $\nabla \phi$ effective interface (GL) model, also known as the anharmonic crystal. The variables $h(x)$ represent the heights of a random surface. The GL model has been the subject of considerable recent study. Funaki and Spohn in \cite{FS97} prove the existence and uniqueness of ergodic gradient Gibbs states. Deuschel, Giacomin, and Ioffe in \cite{DGI00} establish a large deviations principle for the surface shape with zero boundary conditions and Funaki and Sakagawa \cite{FS04} extend this result to the case of non-zero boundary conditions. The central limit theorem for the infinite gradient Gibbs states was proved first by \cite{NS97} and Giacomin, Olla, and Spohn handle the dynamical case in \cite{GOS01}. Lastly, the behavior of the maximum was studied by Deuschel and Giacomin in \cite{DG00} in the static case and in the dynamic case by Deuschel and Nishikawa in \cite{DN07}.
The special case where $\mathcal {V}(x) = \tfrac{1}{2} x^2$ is quadratic corresponds to the so-called discrete Gaussian free field (DGFF) or harmonic crystal, a special case which was thoroughly studied earlier than the more general case of convex interaction since the Gaussian structure makes its analysis more tractable while still exhibiting an extremely rich and interesting behavior. Large deviations principles for the surface shape as well as a central limit theorem for the height variable were proved by Ben Arous and Deuschel in \cite{BAD96}, and the behavior of the maximum was studied by Bolthausen, Deuschel, and Giacomin in \cite{BDG01} and by Daviaud in \cite{DAV06}. In a particularly impressive and difficult work, Schramm and Sheffield in \cite{SS06} show that the macroscopic level sets are described by a family of conformally invariant random curves which are variants of $SLE(4)$.
Beyond having a Gaussian distribution, the main feature that makes the analysis of the DGFF tractable is that its mean and covariance structure can be completely described in terms of the harmonic measure and Green's function associated with a simple random walk. These objects are very well understood in the case of a planar random walk.
The mean and covariance structure in the setting of the more general GL model also admit a representation in terms of a random walk (Helffer-Sj\"ostrand representation, HS random walk). The situation, however, is far more difficult because in addition to being non-Gaussian, the corresponding random walk representations involve a \emph{random walk in a dynamic random environment} whose behavior depends non-trivially on the boundary conditions. The hypothesis that $\mathcal {V}$ is strictly convex is a technical condition whose main application is that the jump rates of the HS random walk are uniformly bounded from above and below. This gives that the Green's function of the HS random walk is comparable to that of a simple random walk, which allows for rough estimates for variances and, more generally, centered moments in terms of the corresponding moments for the DGFF (Brascamp-Lieb inequalities). Furthermore, the boundedness of the jump rates gives that the Nash-Aronson and Nash continuity estimates apply, which yields some rough control of the covariance structure.
These heat kernel and Green's function estimates hold for any random walk with bounded jump rates. Thus they do not provide any fine control on the first exit distribution of the HS random walk, which is necessary in order to give a precise estimate for the mean. Indeed, there exist random walks with bounded rates such that the Radon-Nikodym derivative of the first exit distribution with respect to the harmonic measure of a usual random walk is very ill-behaved. It is not difficult to construct examples in the continuum setting where the two measures are mutually singular and where the support of the former even has a fractal-like structure. The situation is further complicated in the setting of the HS walk since the jump rates depend on the boundary conditions and are dynamic, hence it is impossible a priori to rule out pathological behavior whenever the walk gets close to the boundary.
\subsection{Main Results}
In this article, we introduce a new technique to estimate the mean which gives us fine control even in the presence of ill-behaved boundary conditions and a complicated boundary structure of the underlying domain.
\begin{theorem}
The mean height is harmonic.
\end{theorem}
In quadratic case, one actually has the following Markovian structure: the law of a DGFF on $D_N$ with boundary condition $f$ is equal in law to that of a \emph{zero-boundary} DGFF on $D_N$ plus the harmonic extension of $f$ to $D_N$. This is a particularly useful property which is much stronger than merely having the mean harmonic. Our method of proof implies that one has an approximate version of this property in the GL setting.
\begin{theorem}
The coupling
\end{theorem}
The main step in the proof of \cite{GOS01} is to show that the macroscopic covariance structure of the gradient Gibbs state for the GL model is the same as that of a continuum Gaussian free field. This is proved by showing that the HS random walk converges in the limit to a Brownian motion. The homogenization of the HS random walk is proved using the Kipnis-Varadhan method, which is to represent the random walk as an additive functional of the environment as viewed from the particle. In this case, the environment is a functional of a dynamical version of the surface and is therefore an ergodic Markov process. The Kipnis-Varadhan theorem then implies that it converges to a Brownian motion.
The difference between the case of infinite gradient Gibbs states considered in \cite{GOS01} and our case is that the environment as viewed from the particle is no longer ergodic. This puts it outside of the scope of the Kipnis-Varadhan theorem, so that a new approach is needed in this case. Combining our method with the results of \cite{GOS01}, we are able to get the central limit theorem more or less for free.
\begin{theorem}
The Central Limit Theorem on Bounded Domains
\end{theorem}
Recalling the relationship between the Green's function of the HS random walk and the covariance structure of the underlying field, Theorem X implies that the Green's function of the HS random walk is the same as that of a Brownian motion. Using this observation, we are able to prove the homogenization of the HS random walk.
\begin{theorem}
The HS random walk converges to Brownian motion.
\end{theorem}
\subsection{Sequel} This article is actually the first in a series of two. In the second article, we will employ many of the estimates developed here in order to resolve a conjecture made by Sheffield that the macroscopic level lines of the GL model converge in the limit to variants of $SLE(4)$.
\section{Gaussian Free Fields}
In this section we will introduce the discrete and continuum Gaussian free fields (DGFF and GFF). The reason that we include a separate discussion of the former is that the DGFF serves as a toy model for the GL model. In particular, the orthogonal decomposition of the DGFF with given boundary conditions into a sum of a zero boundary DGFF plus a harmonic function is the motivation behind the technique of proof in Section \ref{sec::harm}.
\subsection{Discrete Gaussian Free Field}
\label{subsec::dgff_construction}
Suppose that $G = (V \cup \partial V,E)$ is a finite, undirected, connected graph with distinguished subset $\partial V \neq \emptyset$. The zero-boundary discrete Gaussian free field (DGFF) is the measure on functions $h \colon V \cup \partial V \to \mathbf{R}$ vanishing on $\partial V$ with density
\[ \frac{1}{\mathcal {Z}_G} \exp\left(-\tfrac{1}{2} \sum_{b \in E} (h(x) - h(y))^2\right)\]
with respect to Lebesgue measure. Here, $\mathcal {Z}_G$ is a normalizing constant so that the above has unit mass. Equivalently, the DGFF is the standard Gaussian associated with the Hilbert space $H_0^1$ of real-valued functions $h$ on $V$ vanishing on $\partial V$ with Dirichlet inner product
\[ (f,g)_\nabla = \sum_{b \in E} (f(x) - f(y))(g(x) - g(y)).\]
This means that the DGFF $h$ can be thought of as a family of Gaussian random variables $(h,f)_\nabla$ indexed by elements $f \in H_0^1(V)$ with mean zero and covariance structure
\[ {\rm Cov}((h,f)_\nabla,(h,g)_\nabla) = (f,g)_\nabla,\ f,g \in H_0^1(V).\]
This representation of the DGFF is convenient since it allows for a simple derivation of the mean and autocovariance structure of $h$. Let $\Delta \colon V \to \mathbf{R}$ denote the discrete Laplacian on $V$, i.e.
\[ \Delta f(x) = \sum_{b \ni x} \nabla f(b)\]
and let $G(x,y) = \Delta^{-1} 1_{\{x\}}(y)$ be the discrete Green's function on $V$. Summation by parts gives that
\[ (f,g)_\nabla = -\sum_{x \in V} f(x) \Delta g(x) = -\sum_{x \in V} \Delta f(x) g(x).\]
Thus
\[h(x) = (h,1_{\{x\}}(\cdot))_{L^2} = -(h,G(x,\cdot))_\nabla,\]
hence
\[ {\rm Cov}(h(x),h(y)) = (G(x,\cdot),G(y,\cdot))_\nabla = G(x,y).\]
Suppose that $W \subseteq V$. Then $H_0^1(V)$ admits the orthogonal decomposition $H_0^1(V) = \mathcal {M}_I \oplus \mathcal {M}_B \oplus \mathcal {M}_O$ where $\mathcal {M}_I,\mathcal {M}_B,\mathcal {M}_O$ are the subspaces of $H_0^1(V)$ consisting of those functions that vanish on $V \setminus W$, are harmonic off of $\partial W$, and vanish on $W$, respectively. It follows that we can write $h = h_I + h_B + h_O$ with $h_I \in \mathcal {M}_I, h_B \in \mathcal {M}_B, h_O \in \mathcal {M}_O$, respectively, where $h_I,h_B,h_O$ are independent. This implies that the DGFF possesses the following Markov property: the law of $h$ on $W$ conditional on $h$ on $V \setminus W$ is that of a zero boundary DGFF on $W$ plus the harmonic extension of $h$ from $\partial W$ to $W$. In particular, the conditional mean of $h$ on $W$ given $h$ on $V \setminus W$ is the harmonic extension of $h|\partial W$ to $W$.
More generally, if $h_\partial \colon \partial V \to \mathbf{R}$ is a given function, the DGFF with boundary condition $h_\partial$ is the measure on functions $h \colon V \to \mathbf{R}$ with $h|\partial V = h_\partial$ with density
\[ \frac{1}{\mathcal {Z}_G} \exp\left( -\tfrac{1}{2} \sum_{b \in E} (h(x)-h(y))^2 \right).\]
That is, $h$ has the law of a zero boundary DGFF on $V$ plus the harmonic extension of $h_\partial$ from $\partial V$ to $V$.
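These identities can be checked numerically on a small graph. The sketch below (the setup and numbers are ours) uses the path $\{0,\ldots,n\}$ with boundary $\{0,n\}$: there the interior precision matrix of the zero-boundary DGFF is the tridiagonal graph Laplacian $L$, so the covariance is $L^{-1} = G$, and the mean with boundary condition $h_\partial$ is the harmonic (here linear) extension.

```python
import numpy as np

# Zero-boundary DGFF on the path 0,...,n with boundary {0, n}: the
# Hamiltonian (1/2) sum_b (h(x)-h(y))^2 makes the interior precision
# matrix equal to the tridiagonal graph Laplacian L, so Cov = L^{-1} = G.
n = 10
L = 2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
G = np.linalg.inv(L)   # discrete Green's function; G(i,j) = i(n-j)/n, i <= j

# Mean with boundary values psi(0), psi(n): solve L m = b, where b carries
# the boundary contributions; the solution is the harmonic extension.
psi0, psin = 1.0, 3.0
b = np.zeros(n - 1)
b[0], b[-1] = psi0, psin
mean = np.linalg.solve(L, b)
```

The same computation on any finite graph reproduces the Markov property stated above: conditioning on the field outside $W$ only changes the harmonic part of the decomposition.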
\subsection{The Continuum Gaussian Free Field}
Suppose that $D \subseteq \mathbf{C}$ is a bounded domain with smooth boundary. Let $C_0^\infty(D)$ denote the set of $C^\infty$ functions compactly supported in $D$. The space $H_0^1(D)$ is the Hilbert space closure of $C_0^\infty(D)$ under the Dirichlet inner product
\[ (f,g)_\nabla = \int \nabla f \cdot \nabla g.\]
The (zero-boundary) GFF $h$ on $D$ is the standard Gaussian on the Hilbert space $H_0^1(D)$. Formally,
\begin{equation}
\label{gff::eqn::hilbert_space}
h = \sum_{n=1}^\infty \alpha_n f_n
\end{equation}
where $(\alpha_n)$ is an iid sequence of $N(0,1)$ random variables and $(f_n)$ is an orthonormal basis for $H_0^1(D)$. Although the sum \eqref{gff::eqn::hilbert_space} does not converge in $H_0^1(D)$, it does converge almost surely in the space of distributions. Hence the GFF can be viewed as a continuous linear functional on $C_0^\infty(D)$. The GFF can be thought of as a two-dimensional-time analog of Brownian motion. Just as the Brownian motion arises as the scaling limit of many random curve ensembles, the GFF arises as the scaling limit of many random surface ensembles.
\section{The Ginzburg-Landau Model}
The Ginzburg-Landau $\nabla \phi$-interface (GL) model is a non-Gaussian generalization of the DGFF introduced by Funaki and Spohn in \cite{FS97}. Suppose that $G = (V \cup \partial V,E)$ is a finite, undirected graph with a distinguished set of vertices $\partial V$. Let $\mathcal {V} \colon \mathbf{R} \to [0,\infty)$ be a $C^2$ function with Lipschitz second derivative satisfying:
\begin{enumerate}
\item (Symmetry) $\mathcal {V}(-x) = \mathcal {V}(x)$ for every $x \in \mathbf{R}$,
\item (Strict convexity) There exists $a,A > 0$ such that $a \leq \mathcal {V}''(x) \leq A$ for all $x \in \mathbf{R}$.
\end{enumerate}
The law of the GL model on $V$ with potential function $\mathcal {V}$ and boundary condition $h_\partial \colon \partial V \to \mathbf{R}$ is the measure on functions $h \colon V \to \mathbf{R}$ satisfying $h|\partial V = h_\partial$ described by the density
\[ \frac{1}{\mathcal {Z}_\mathcal {V}} \exp\left( - \sum_{b \in E} \mathcal {V}(h \vee h_\partial (x) - h \vee h_\partial(y)) \right) \]
with respect to Lebesgue measure. The DGFF corresponds to the special case that $\mathcal {V}(x) = \tfrac{1}{2} x^2$.
\subsection{Langevin Dynamics}
Suppose that $D$ is a bounded domain in the lattice $\mathbf{L}$, $\psi \colon \partial D \to \mathbf{R}$ is a given boundary condition, and suppose that $h_t$ solves the stochastic differential system (SDS)
\begin{equation}
\label{gl::eqn::dynam}
dh_t(x) = \sum_{b \ni x} \mathcal {V}'(\nabla (h_t \vee \psi)(b))dt + \sqrt{2}dW_t(x) \text{ for } x \in D
\end{equation}
where $W_t(x)$ is a family of independent standard Brownian motions. Here, we make use of the notation
\[ \phi \vee \varphi(x) =
\begin{cases}
\phi(x) \text{ if } x \in D,\\
\varphi(x) \text{ if } x \in \partial D.
\end{cases}
\]
We further assume that the initial distribution $h_0$ is that of the GL model on $D$. The generator for \eqref{gl::eqn::dynam} is given by
\begin{align*}
\mathcal {L} \varphi(h)
&= \sum_{x \in D} \left(\sum_{b \ni x} \mathcal {V}'(\nabla h \vee \psi(b)) \partial_{h(x)}\varphi(h) + \partial_{h(x)}^2 \varphi(h)\right)\\
&= -\sum_{x \in D} e^{\mathcal {H}_\mathcal {V}^\psi(h)} \frac{\partial}{\partial h(x)} e^{-\mathcal {H}_\mathcal {V}^\psi(h)} \frac{\partial}{\partial h(x)} \varphi(h),
\end{align*}
where
\[ \mathcal {H}_\mathcal {V}^\psi(h) = \sum_{b \in D^*} \mathcal {V}( \nabla h \vee \psi(b))\]
is the Hamiltonian for the GL model on $D$ with boundary conditions $\psi$.
Thus it is easy to see that $\mathcal {L}$ is self-adjoint in the space $L^2(e^{-\mathcal {H}_\mathcal {V}^\psi(h)}\,dh)$, hence the dynamics \eqref{gl::eqn::dynam} are reversible with respect to the law of the GL model. These are the \emph{Langevin dynamics}.
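A discretized sketch of these dynamics: an Euler--Maruyama step for \eqref{gl::eqn::dynam} on a path graph, with a particular strictly convex potential and step size of our choosing. This is an illustration only, not the exact continuous-time dynamics.

```python
import math
import random

random.seed(1)

# Euler--Maruyama sketch of the (unconditioned) Langevin dynamics on a
# path graph 0,...,n with zero boundary values; dt and steps are ours.
n, dt, steps = 8, 0.01, 2000

def Vp(x):
    # V'(x) for V(x) = x^2/2 + 0.5*log(cosh(x)); here V'' lies in [1, 3/2],
    # so this potential satisfies the strict convexity assumption.
    return x + 0.5 * math.tanh(x)

h = [0.0] * (n + 1)          # h[0] and h[n] are the (zero) boundary values
for _ in range(steps):
    new = h[:]
    for x in range(1, n):
        # Drift at x is the sum over edges b containing x of V'(grad h(b)),
        # which by symmetry of V equals V'(h(x+1)-h(x)) - V'(h(x)-h(x-1)).
        drift = Vp(h[x + 1] - h[x]) - Vp(h[x] - h[x - 1])
        new[x] = h[x] + drift * dt + math.sqrt(2 * dt) * random.gauss(0, 1)
    h = new
```

With the quadratic choice $\mathcal{V}(x) = \tfrac{1}{2}x^2$ the drift reduces to the discrete Laplacian of $h$, recovering the Langevin dynamics of the DGFF.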
More generally, we will be interested in the Langevin dynamics associated with the GL model conditioned to take values at a given collection of vertices between a given range. That is, suppose we are given functions $a,b \colon D \to \mathbf{R}$ satisfying $-\infty \leq a < b \leq \infty$. Then the Langevin dynamics associated with the GL model on $D$ conditioned on the event $\{a(x) \leq h(x) \leq b(x) : x \in D\}$ are described by the SDS
\begin{equation}
\label{gl::eqn::cond_dynam}
dh_t(x) = \sum_{b \ni x} \mathcal {V}'(\nabla (h_t \vee \psi)(b))dt + d[\ell_t^a - \ell_t^b](x) + \sqrt{2}dW_t(x) \text{ for } x \in D.
\end{equation}
As before, $W$ is a family of independent standard Brownian motions. Additionally, the processes $\ell^a,\ell^b$ are of bounded variation, non-decreasing, and non-zero only when $h_t(x) = a(x)$ or $h_t(x) = b(x)$, respectively. The cases that will be of most interest to us are when (i) $a(x) = -\infty, b(x) = \infty$, (ii) $a(x) = -\infty, b(x) = 0$, and (iii) $a(x) = 0, b(x) = \infty$. These correspond to no conditioning, negative conditioning, and positive conditioning, respectively.
As an immediate consequence of the stationarity of the dynamics we obtain:
\begin{lemma}
\label{gl::lem::expectation_sym}
If $x \in D$ and $a(x) = - \infty$ and $b(x) = \infty$ then
\[ \sum_{b \ni x} \mathbf{E} \mathcal {V}'(\nabla h(b)) = 0.\]
\end{lemma}
Note that this gives another proof of the harmonicity of $\mathbf{E} h(x)$ if $\mathcal {V}(x) = \tfrac{1}{2}x^2$ is quadratic so that $h$ is a DGFF. This exact equality will be useful for us in the proof of the approximate harmonicity of the mean for the GL model.
\subsection{The Helffer-Sj\"ostrand Representation}
\label{subsec::hs_representation}
We showed, both in the previous subsection and in subsection \ref{subsec::dgff_construction}, that if $\mathcal {V}(x) = \tfrac{1}{2} x^2$ then the mean height is the harmonic extension of the boundary values, and in subsection \ref{subsec::dgff_construction} that the covariance of heights is described by the discrete Green's function. Both of these quantities admit simple probabilistic representations. Indeed, suppose that $X_t$ is a random walk on the lattice $\mathbf{L}$ that jumps at the uniform rate $1$ equally to its neighbors. Let $\tau$ be the time of first exit of $X_t$ from $D$. Then
\[ \mathbf{E} h(x) = \mathbf{E}_x h(X_\tau) \text{ and } {\rm Cov}(h(x),h(y)) = \mathbf{E}_x \int_0^\tau 1_{\{X(s) = y\}} ds.\]
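On the path graph this representation can be checked by direct simulation; a toy sketch (the boundary values and sample size are ours), where the harmonic extension of the boundary condition is linear.

```python
import random

random.seed(0)

# Monte Carlo check of E h(x) = E_x[h(X_tau)] for the simple random walk
# on the path 0,...,n with boundary values psi; a toy illustration only.
n = 8
psi = {0: -2.0, n: 4.0}

def exit_value(x):
    # Walk until hitting the boundary {0, n}, then report the boundary value.
    while 0 < x < n:
        x += random.choice((-1, 1))
    return psi[x]

x0 = 3
trials = 200000
est = sum(exit_value(x0) for _ in range(trials)) / trials
exact = psi[0] + x0 * (psi[n] - psi[0]) / n   # harmonic extension is linear
```

The estimate agrees with the gambler's-ruin computation: the walk started at $x_0$ hits $n$ before $0$ with probability $x_0/n$.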
The idea of the Helffer-Sj\"ostrand (HS) representation, originally developed in \cite{HS94} and reworked probabilistically in \cite{DGI00}, \cite{GOS01}, is to give an expression for the mean height and covariance of heights in the GL model in terms of the first exit distribution and occupation time of a random walk. In contrast to the case that $\mathcal {V}$ is quadratic, the random walk is rather complicated when $\mathcal {V}$ is non-quadratic as its jump rates are not only \emph{random}, but additionally are \emph{time varying} and \emph{depend} on both the boundary condition and any other conditioning present in the model. Nevertheless, the HS representation is a rather useful analytical tool due to the presence of comparison inequalities (Brascamp-Lieb and Nash-Aronson).
More precisely, let $h_t^\psi$ denote a time-varying instance of the GL model with boundary conditions $\psi$ and without any conditioning. Conditional on the realization of the trajectory of the time-varying gradient field $(\nabla h_t^\psi(b) : b \in D^*)$, we let $X_t^\psi$ be the Markov process on $D$ with time-varying jump rates $\mathcal {V}''(\nabla h_t^\psi(b))$. Let $\tau$ denote the first exit of $X_t^\psi$ from $D$.
\begin{lemma}
\label{gl::lem::hs_mean_cov}
The mean and covariances of $h^\psi$ admit the representation
\begin{align}
{\rm Cov}(h^\psi(x),h^\psi(y)) &= \mathbf{E}_x^\psi \int_0^\tau 1_{\{X_s^\psi = y\}} ds\\
\mathbf{E} h^\psi(x) &= \int_0^1 \mathbf{E}_x^{r\psi} \psi(X_\tau^{r\psi}) dr.
\end{align}
\end{lemma}
\begin{proof}
See \cite{DGI00} or \cite{GOS01}.
\end{proof}
The HS representation is applicable in more generality. One particularly useful case is developed in Remark 2.3 of \cite{DGI00}, which we summarize here. Suppose that $\mathcal {U}_x$ is a family of $C^2$ functions indexed by $x \in D$ satisfying
\[ 0 \leq \mathcal {U}_x'' \leq \alpha.\]
The GL model with potential $\mathcal {V}$ and self-potentials $\mathcal {U}_x$ is the probability measure with density
\begin{align*}
\frac{1}{\mathcal {Z}_{\mathcal {V},\mathcal {U}}} \exp\left( -\sum_{b \in E} \mathcal {V}( \nabla h \vee \psi(b)) - \sum_{x \in V} \mathcal {U}_x(h(x)) \right).
\end{align*}
The Langevin dynamics associated with this more general model are described by the SDS
\begin{align*}
d h_t^\psi(x) = \sum_{b \ni x} [\mathcal {V}'(\nabla h \vee \psi(b)) + \mathcal {U}_x'(h(x))]dt + \sqrt{2} dW_t(x).
\end{align*}
Letting $X_t$ be the random walk with time-dependent jump rates $\mathcal {V}''(\nabla h_t^\psi(b))$, we have the following representation of covariances for the generalized case:
\begin{equation}
\label{gl::eqn::hs_self_potential}
{\rm Cov}(h^\psi(x), h^\psi(y))
= \mathbf{E}_x \int_0^\tau \exp\left( -\int_0^s \mathcal {U}_{X_t}''(h_t^\psi(X_t))\, dt \right) 1_{y}(X_s) ds.
\end{equation}
\subsection{Brascamp-Lieb and FKG Inequalities}
\label{subsec::bl}
If $\nu \in \mathbf{R}^{|D|}$ is a given vector then we let
\[ \langle \nu, h \rangle = \sum_{x \in D} \nu(x) h(x).\]
We have the following inequalities relating the centered moments of linear functionals of $h$ with those of the corresponding DGFF $h^*$.
\begin{lemma}[Brascamp-Lieb inequalities]
\label{bl::lem::bl_inequalities}
There exists a constant $C > 0$ depending only on $a,A$ such that the following inequalities hold:
\begin{align}
& {\rm Var}(\langle \nu, h \rangle) \leq C {\rm Var}( \langle \nu, h^* \rangle ) \label{gl::eqn::bl_var}, \\
& \mathbf{E} \exp(\langle \nu, h \rangle - \mathbf{E} \langle \nu,h \rangle) \leq \mathbf{E} \exp( C(\langle \nu, h^* \rangle - \mathbf{E} \langle \nu,h^* \rangle)) \label{gl::eqn::bl_exp}
\end{align}
for all $\nu \in \mathbf{R}^{|D|}$.
\end{lemma}
\begin{proof}
For each $-\infty \leq \alpha < \beta \leq \infty$, fix a $C^\infty$ function $f_{\alpha,\beta}$ such that $f_{\alpha,\beta}(x) = 0$ for all $x \in (\alpha,\beta)$ and $f_{\alpha,\beta}(x) > 0$ for all $x \notin (\alpha,\beta)$ with $0 \leq f_{\alpha,\beta}''(x) \leq 1$ for all $x$. Let $\mathcal {U}_x^n = n f_{a(x),b(x)}$. If $h_n,h_n^*$ have the law of the GL model and DGFF with self-potential $\mathcal {U}^n$, respectively, it follows from \eqref{gl::eqn::hs_self_potential} that for some $C > 0$ depending only on $\mathcal {V}$ we have
\[ {\rm Var}(\langle \nu, h_n\rangle) \leq C {\rm Var}( \langle \nu, h_n^* \rangle).\]
As $n \to \infty$, $h_n,h_n^*$ tend to the law of the GL model and DGFF conditioned on $a(x) \leq h(x) \leq b(x)$ and $a(x) \leq h^*(x) \leq b(x)$, respectively. This proves \eqref{gl::eqn::bl_var}. One proves \eqref{gl::eqn::bl_exp} using a similar method.
\end{proof}
The other useful inequality that we record in this section gives that monotonic functionals of the field are non-negatively correlated:
\begin{lemma}[FKG inequality]
\label{gl::lem::fkg}
Suppose that $F,G \colon \mathbf{R}^{|D|} \to \mathbf{R}$ are monotonic functionals, i.e. if $\varphi_1,\varphi_2 \in \mathbf{R}^{|D|}$ with $\varphi_1(x) \leq \varphi_2(x)$ for every $x \in D$ then $F(\varphi_1) \leq F(\varphi_2)$ and $G(\varphi_1) \leq G(\varphi_2)$. Then
\[ \mathbf{E} F(h) G(h) \geq \mathbf{E} F(h) \mathbf{E} G(h).\]
\begin{proof}
This can be proved using the same method as the previous lemma to deal with the conditioning combined with the usual proof of the FKG inequality from the HS representation for covariances of functionals of the field \cite{DGI00}.
\end{proof}
\end{lemma}
\section{Dynamics}
\label{sec::dyn}
The Langevin dynamics are extremely useful for constructing couplings of instances of the GL model with either different boundary conditions, defined on different (though overlapping) domains, or both. Suppose that $h^\psi, h^{\tilde{\psi}}$ are solutions of \eqref{gl::eqn::cond_dynam} driven by the same Brownian motions with $a(x) = \tilde{a}(x)$ and $b(x) = \tilde{b}(x)$ for all $x \in D$, but with boundary conditions $\psi,\tilde{\psi}$, respectively. Let $\overline{\psi} = \psi - \tilde{\psi}$, $\overline{h} = h^\psi - h^{\tilde{\psi}}$, $\overline{\ell}^b = \ell^b - \tilde{\ell}^b$, and $\overline{\ell}^a = \ell^a - \tilde{\ell}^a$. Observe that
\begin{equation}
\label{gl::eqn::diff_dynam}
d \overline{h}_t(x) = \sum_{b \ni x} [\mathcal {V}'(\nabla(h_t \vee \psi)(b)) - \mathcal {V}'(\nabla (h^{\tilde{\psi}} \vee \tilde{\psi})(b))] dt + d[\overline{\ell}_t^a - \overline{\ell}_t^b](x).
\end{equation}
Let
\[ a_t(b) = \int_0^1 \mathcal {V}''(\nabla (h_t^{\tilde{\psi}} + s \overline{h}_t)(b)) ds\]
and
\[ \mathcal {L}_t f(x) = \sum_{b \ni x} a_t(b) \nabla f(b).\]
Then we can rewrite \eqref{gl::eqn::diff_dynam} more concisely as
\begin{equation}
\label{gl::eqn::diff_dynam_ell}
d \overline{h}_t(x) = \mathcal {L}_t \overline{h}_t(x) dt + d[\overline{\ell}_t^a - \overline{\ell}_t^b](x).
\end{equation}
\begin{lemma}[Energy Inequality]
For every $T > 0$ we have
\begin{equation}
\label{gl::eqn::ee}
\sum_{x \in D} |\overline{h}_T(x)|^2 + \int_0^T \sum_{b \in D^*} |\nabla \overline{h}_t(b)|^2 dt \leq C \left(\sum_{x \in D} |\overline{h}_0(x)|^2 + \int_0^T \sum_{b \in \partial D^*} |\overline{\psi}(x_b)||\nabla \overline{h}_t(b)|dt\right)
\end{equation}
for a constant $C > 0$ depending only on $\mathcal {V}$.
\end{lemma}
\begin{proof}
It is an immediate consequence of \eqref{gl::eqn::diff_dynam_ell} that
\[ d (\overline{h}_t(x))^2
= 2 \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x) dt + 2\overline{h}_t(x) d[\overline{\ell}_t^a - \overline{\ell}_t^b](x).
\]
Suppose $a(x) > -\infty$. If $t$ is such that $h_t^\psi(x) \geq h_t^{\tilde{\psi}}(x) = a(x)$ or $h_t^{\tilde{\psi}}(x) \geq h_t^{\psi}(x) = a(x)$ then $2\overline{h}_t(x) d \overline{\ell}_t^a(x) \leq 0$. The same is also true at $x$ such that $b(x) < \infty$ and therefore
\[ \frac{d}{dt} (\overline{h}_t(x))^2
\leq 2 \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x).
\]
The result now follows by summing by parts.
\end{proof}
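For concreteness, the summation by parts in the last step reads schematically as follows (signs depend on the edge orientations; each interior edge is counted once from each endpoint, while edges meeting $\partial D$ contribute the boundary term through $\overline{\psi}$):

```latex
\begin{align*}
\sum_{x \in D} \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x)
&= -\sum_{b \in D^* \cup \partial D^*} a_t(b) |\nabla \overline{h}_t(b)|^2
 - \sum_{b \in \partial D^*} a_t(b)\, \overline{\psi}(x_b)\, \nabla \overline{h}_t(b)\\
&\leq -a \sum_{b \in D^*} |\nabla \overline{h}_t(b)|^2
 + A \sum_{b \in \partial D^*} |\overline{\psi}(x_b)|\, |\nabla \overline{h}_t(b)|,
\end{align*}
```

using $a \leq a_t(b) \leq A$; integrating over $[0,T]$ and rearranging yields \eqref{gl::eqn::ee}.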
The proof of this result depends on $\overline{h}$ only through the fact that $\ell_t^{\cdot}$ can be non-zero only when $h_t^{\cdot}$ hits the corresponding barrier, and the same for $\tilde{\ell}_t^{\cdot}$ and $\tilde{h}_t^{\cdot}$. More generally, if $f_t$ solves $\partial_t f_t = \mathcal {L}_t f_t$ then $f_t$ also satisfies \eqref{gl::eqn::ee}.
\subsection{Energy inequality bounds}
Suppose that $(h_\infty^\psi,h_\infty^{\tilde{\psi}})$ is a subsequential limit of $(h_t^\psi,h_t^{\tilde{\psi}})$ as $t \to \infty$. By dividing both sides of \eqref{gl::eqn::ee} by $T$ and sending $T \to \infty$ we see that $\overline{h}_\infty$ satisfies
\begin{equation}
\label{gl::eqn::ee_limit}
\sum_{b \in D^*} \mathbf{E} |\nabla \overline{h}_\infty(b)|^2 \leq C\sum_{b \in \partial D^*} \mathbf{E} |\overline{\psi}(x_b)||\nabla \overline{h}_\infty(b)|.
\end{equation}
The following lemma is immediate from the discussion thus far.
\begin{lemma}$\ $
\label{gl::lem::ergodic}
\begin{enumerate}
\item The SDS \eqref{gl::eqn::cond_dynam} is ergodic.
\item More generally, any finite collection $h^1,\ldots,h^n$ satisfying the SDS \eqref{gl::eqn::cond_dynam} each with the same conditioning and driven by the same family of Brownian motions is ergodic.
\item If $h^1,\ldots,h^n$ are distributed according to the unique stationary distribution from above, then $\overline{h}^{ij} = h^i - h^j$ satisfies \eqref{gl::eqn::ee_limit}.
\end{enumerate}
\end{lemma}
Suppose that $R = {\rm diam}(D)$. Then the Brascamp-Lieb inequality (Lemma \ref{bl::lem::bl_inequalities}) implies
\[ \sup_{x \in D} {\rm Var}(h(x)) \leq C \log R\]
for $C = C(\mathcal {V}) > 0$. Let $\overline{\Lambda} > 0$ be a given constant and assume that $\| \psi \|_\infty + \| \tilde{\psi}\|_\infty \leq \overline{\Lambda}( \log R)^{\overline{\Lambda}}$. Fix $\epsilon > 0$. It follows from Lemma \ref{gl::lem::hs_mean_cov}, the HS representation of the mean, that
\[ \sup_{x \in D} \big[ \mathbf{E} (h^\psi)^2(x) + \mathbf{E} (h^{\tilde{\psi}})^2(x) \big] = O_{\overline{\Lambda}}(R^\epsilon).\]
Assume that $(h_t^\psi,h_t^{\tilde{\psi}})$ is stationary. Plugging these \emph{a priori} estimates into \eqref{gl::eqn::ee_limit} implies that for any subdomain $D' \subseteq D$ we have
\[ \sum_{b \in (D')^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(|\partial D'| R^\epsilon).\]
Let $D(r) = \{x \in D : {\rm dist}(x, \partial D) \geq r\}$. Since $\sum_{0 \leq r \leq R^{1-\epsilon}} |\partial D(r)| \leq |D| = O(R^2)$, there exists $0 \leq r_1 \leq R^{1-\epsilon}$ such that $|\partial D(r_1)| = O(R^{1+\epsilon})$, so that
\[ \sum_{b \in D(r_1)^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{1+2\epsilon}).\]
This implies that there exists $R^{1-\epsilon} \leq r_2 \leq 2 R^{1-\epsilon}$ such that
\[ \sum_{b \in \partial D(r_2)^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{3\epsilon})\]
and therefore combining \eqref{gl::eqn::ee_limit} with the Cauchy-Schwarz inequality we have
\begin{align*}
\sum_{b \in D(r_2)^*} \mathbf{E} |\nabla \overline{h}(b)|^2
&\leq C \left(\sum_{b \in \partial D(r_2)^*} \mathbf{E} |\overline{h}(x_b)|^2\right)^{1/2} \left(\sum_{b \in \partial D(r_2)^*} \mathbf{E} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
&= O_{\overline{\Lambda}}(\sqrt{R^{1+\epsilon} \cdot R^\epsilon}) O_{\overline{\Lambda}}(\sqrt{R^{3\epsilon}}) = O_{\overline{\Lambda}}(R^{1/2+5/2\epsilon}).
\end{align*}
Iterating this procedure a second time, we can find $2R^{1-\epsilon} \leq r_3 \leq 3R^{1-\epsilon}$ such that
\[ \sum_{b \in \partial D(r_3)^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{-1/2 + 7/2\epsilon})\]
so that
\begin{align*}
\sum_{b \in D(r_3)^*} \mathbf{E} |\nabla \overline{h}(b)|^2
&\leq C \left(\sum_{b \in \partial D(r_3)^*} \mathbf{E} |\overline{h}(x_b)|^2\right)^{1/2} \left(\sum_{b \in \partial D(r_3)^*} \mathbf{E} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
&= O_{\overline{\Lambda}}(\sqrt{R^{1+\epsilon} \cdot R^\epsilon}) O_{\overline{\Lambda}}(\sqrt{R^{-1/2 + 7/2\epsilon}}) = O_{\overline{\Lambda}}(R^{1/4+11/4\epsilon}).
\end{align*}
Iterating this $n$ times yields that there exists $(n-1) R^{1-\epsilon} \leq r_n \leq n R^{1-\epsilon}$ such that
\begin{align*}
\sum_{b \in D(r_n)^*} \mathbf{E} |\nabla \overline{h}(b)|^2
&\leq C \left(\sum_{b \in \partial D(r_n)^*} \mathbf{E} |\overline{h}(x_b)|^2\right)^{1/2} \left(\sum_{b \in \partial D(r_n)^*} \mathbf{E} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
&= O_{\overline{\Lambda}}(R^{2^{-n} + \tilde{\lambda}(n) \epsilon}),
\end{align*}
where $\tilde{\lambda}(n) \leq 3$. Hence taking $n$ large enough we get that
\[ \sum_{b \in D(r_n)^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{3\epsilon}).\]
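For the reader's convenience we record the exponent bookkeeping behind this iteration; the notation $a_n, c_n$ is introduced only here, and the values below are read off from the displays above. Writing the bulk bound at stage $n$ as $O_{\overline{\Lambda}}(R^{a_n + c_n \epsilon})$, with $a_1 = 1$, $c_1 = 2$, the pigeonhole over the $\approx R^{1-\epsilon}$ boundary slices yields a ring bound of $O_{\overline{\Lambda}}(R^{a_n - 1 + (c_n+1)\epsilon})$, and the Cauchy-Schwarz step with the boundary-height factor $R^{(1+2\epsilon)/2}$ gives:

```latex
% Exponent recursion for the iteration (our notation):
% bulk bound at stage n is R^{a_n + c_n \epsilon}, with a_1 = 1, c_1 = 2.
\begin{align*}
a_{n+1} &= \frac{a_n}{2}, & c_{n+1} &= 1 + \frac{c_n + 1}{2},
\end{align*}
% so a_n = 2^{1-n} -> 0 and c_n increases to 3,
% consistent with \tilde{\lambda}(n) <= 3 above.
```

One checks directly that $a_2 = 1/2$, $c_2 = 5/2$ and $a_3 = 1/4$, $c_3 = 11/4$, matching the two displayed iterations.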
By exactly the same argument, for each $\beta > 0$ we can find $\alpha(\epsilon)R^{1-\epsilon} \leq r \leq (\alpha(\epsilon)+1)R^{1-\epsilon}$ such that
\[ \sum_{b \in D(r-R^{\beta})^* \setminus D(r)^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{3\epsilon+\beta-1}).\]
Furthermore, this can be done for any finite number of $\beta$s simultaneously.
We have obtained:
\begin{lemma}
\label{gl::lem::grad_error}
Suppose that $(h,\tilde{h})$ is a stationary coupling of two solutions of the SDS \eqref{gl::eqn::cond_dynam} with the same conditioning and driven by the same families of Brownian motions. For every $\epsilon > 0$ and $\beta_1,\ldots,\beta_n \geq 0$ there exists a constant $\alpha = \alpha(\epsilon)$ and $\alpha(\epsilon)R^{1-\epsilon} \leq r \leq (\alpha(\epsilon)+1) R^{1-\epsilon}$ such that
\begin{align*}
\sum_{b \in D(r-R^{\beta_i})^* \setminus D(r)^*} \mathbf{E} |\nabla \overline{h}(b)|^2 &= O_{\overline{\Lambda}}(R^{3\epsilon+\beta_i-1}), \text{ and } \\
\sum_{b \in D(r-R^{\beta_i})^*} \mathbf{E} |\nabla \overline{h}(b)|^2 &= O_{\overline{\Lambda}}(R^{3\epsilon}) \text{ for all } 1 \leq i \leq n.
\end{align*}
\end{lemma}
\subsection{Random Walk Representation and Stochastic Domination}
\label{subsec::rw_difference}
The energy method of the previous subsection allows one to deduce macroscopic regularity and ergodicity properties of the dynamic coupling. In this subsection, we will develop the random-walk representation of $\overline{h}_t(x)$. This will allow us to get pointwise estimates on $\nabla \overline{h}_t(x)$ in addition to a simple proof of a stochastic domination result.
Recall from the proof of the energy inequality that
\[ \frac{d}{dt} (\overline{h}_t(x))^2
\leq 2 \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x).
\]
Fix $T > 0$ and let $X_t^T$ be the random walk in $D$ with time-dependent generator
\[ f(x) \mapsto \mathcal {L}_{T-t}f (x) = \sum_{b \ni x} a_{T-t}(b) \nabla f(b).\]
In the special case that $a(x) = -\infty$ and $b(x) = \infty$ for every $x$, so that we do not condition the field, we have
\begin{equation}
\label{gl::eqn::rand_walk_diff}
\overline{h}_T(x) = \mathbf{E}_x \overline{h}_{T-t}(X_{t \wedge \tau}^T).
\end{equation}
This implies that if $\overline{h}|\partial D \geq 0$ then $\lim_{t \to \infty} \overline{h}_t(x) \geq 0$. In other words, the stationary coupling of $(h_t^\psi,h_t^{\tilde{\psi}})$ satisfies $h_t^\psi \geq h_t^{\tilde{\psi}}$ if the inequality is satisfied uniformly on the boundary. The purpose of the following lemma is to establish the same result in the presence of conditioning.
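In the unconditioned case, \eqref{gl::eqn::rand_walk_diff} follows from the standard duality between the forward equation for $\overline{h}$ and the time-reversed walk; we sketch the martingale computation (the notation $M_s$ is ours):

```latex
% Since \partial_u \overline{h}_u = \mathcal{L}_u \overline{h}_u (there are
% no local time terms when a \equiv -\infty, b \equiv \infty) and X_s^T has
% generator \mathcal{L}_{T-s}, consider
\[ M_s = \overline{h}_{T-s}(X_{s \wedge \tau}^T), \qquad 0 \leq s \leq t. \]
% On \{s < \tau\} the two time derivatives cancel:
\[ \frac{d}{ds}\, \mathbf{E}_x[M_s]
   = \mathbf{E}_x\Big[\big( -(\partial_u \overline{h}_u)\big|_{u = T-s}
   + \mathcal{L}_{T-s} \overline{h}_{T-s} \big)(X_s^T)\Big] = 0, \]
% so M is a martingale and optional stopping at s = t gives the
% representation \overline{h}_T(x) = \mathbf{E}_x[M_t].
```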
\begin{lemma}
\label{gl::lem::stoch_dom}
If $D \in \mathbf{L}$ is a lattice domain and $\psi,\tilde{\psi}$ are given boundary conditions such that $\psi(x) \geq \tilde{\psi}(x)$ for every $x \in \partial D$, then the stationary coupling $(h^\psi,h^{\tilde{\psi}})$ of the GL models $h^\psi,h^{\tilde{\psi}}$ on $D$ with boundary conditions, $\psi,\tilde{\psi}$, respectively, satisfies $h^\psi(x) \geq h^{\tilde{\psi}}(x)$ for every $x \in D$.
\end{lemma}
\begin{proof}
Similar to the proof of Lemma 2.4 in \cite{DN07}, we let $\overline{h}_t^- = \min(\overline{h}_t,0)$. We have that
\begin{align*}
d(\overline{h}_t^-(x))^2
&= 2 \overline{h}_t^-(x) \mathcal {L}_t \overline{h}_t(x) dt + 2\overline{h}_t^-(x) d[\overline{\ell}_t^a - \overline{\ell}_t^b](x)
\leq 2\overline{h}_t^-(x) \mathcal {L}_t \overline{h}_t(x) dt,
\end{align*}
where we used that $\overline{h}_t^-(x) d[\overline{\ell}_t^a - \overline{\ell}_t^b](x) \leq 0$, similar to the proof of the energy inequality. Thus
\begin{align*}
\sum_{x \in D} d(\overline{h}_t^-(x))^2
&\leq 2\sum_{x \in D} \overline{h}_t^-(x) \mathcal {L}_t \overline{h}_t(x) dt
= -2\sum_{b \in D^*} a_t(b) \nabla \overline{h}_t^-(b) \nabla \overline{h}_t(b) dt
\leq -2c \sum_{b \in D^*} |\nabla \overline{h}_t^-(b)|^2 dt,
\end{align*}
where in the second step we used summation by parts, that $\overline{h}_t^- = 0$ on $\partial D$, and wrote $a_t(b)$ for the conductances of $\mathcal {L}_t$; in the third step we used that $\nabla \overline{h}_t^-(b) \nabla \overline{h}_t(b) \geq |\nabla \overline{h}_t^-(b)|^2$ and that $a_t(b) \geq c > 0$. It immediately follows that the stationary dynamics satisfy $\overline{h}_t^-(x) = 0$ for all $x \in D$ and $t \geq 0$. That is, $h_t^\psi \geq h_t^{\tilde{\psi}}$, as desired.
\end{proof}
Combining \eqref{gl::eqn::rand_walk_diff} with Jensen's inequality yields in the unconditioned case that
\[ \overline{h}_T^2(x) \leq \mathbf{E}_x \overline{h}_{T-t}^2(X_{t \wedge \tau}^T).\]
The same result also holds in the conditioned case, though we have to work slightly harder to prove it.
\begin{lemma}
\label{gl::lem::square_bound}
Let $\tau$ be the time of first exit of $X_t^T$ from $D$. Then
\begin{equation}
\label{gl::eqn::rand_walk_diff_square}
\overline{h}_T^2(x) \leq \mathbf{E}_x \overline{h}_{T-t}^2(X_{t \wedge \tau}^T).
\end{equation}
for all $t \leq T$.
\end{lemma}
\begin{proof}
The proof follows that of Proposition 2.8 of \cite{DN07}. Let $p(s,t;x,y)$ denote the transition kernel of the walk killed upon exiting $D$. A direct computation, using \eqref{gl::eqn::diff_dynam_ell} and the sign of the local time terms, shows that $t \mapsto \sum_{y} p(t,T;x,y) \overline{h}_t(y)$ is non-increasing, so that
\[ 0 \leq \overline{h}_T(x) \leq \sum_{y} p(t,T;x,y) \overline{h}_t(y),\]
the left inequality following from Lemma \ref{gl::lem::stoch_dom}. Applying Jensen's inequality then gives the result.
\begin{comment}
Following the proof of Proposition 2.8 of \cite{DN07}, let $p(s,t;x,y)$ be the solution of the equation
\[ \partial_s p(s,t;x,y) + \mathcal {L}_s p(s,t;x,y) = 0,\ p(t,t;x,y) = 1_{\{x=y\}}\]
and $p(s,t;x,y) = 0$ for $x,y \notin D$. Let $g_0 = h_0^\psi$, $\tilde{g}_0 = h_0^{\tilde{\psi}}$ in $D$ but with $g_0 = \tilde{g}_0 = \tilde{\psi}$ on $\partial D$. Let $g_t, \tilde{g}_t$ evolve according to the conditioned dynamics and let $\overline{g}_t$
We compute
\begin{align*}
d[\overline{h}_t(y) p(t,T;x,y)]
&= \big[\mathcal {L}_t \overline{h}_t(y) dt + d[\overline{\ell}_t^a - \overline{\ell}_t^b](y)\big] p(t,T;x,y) + \overline{h}_t(y) [\partial_t p(t,T;x,\cdot)](y)\\
&\leq \mathcal {L}_t \overline{h}_t(y) p(t,T;x,y) dt - \overline{h}_t(x) [\mathcal {L}_t p(t,T;x,\cdot)](y).
\end{align*}
Hence, summing by parts,
\begin{align*}
\sum_{y \in \mathbf{L}} d[\overline{h}_t(y) p(t,T;x,y)]
&\leq \sum_{y \in \mathbf{L}} \mathcal {L}_t \overline{h}_t(y) p(t,T;x,y) dt - \overline{h}_t(x) [\mathcal {L}_t p(t,T;x,\cdot)](y)\\
&= \sum_{y \in \mathbf{L}} \overline{h}_t(y) \mathcal {L}_t [p(t,T;x,\cdot)](y) dt - \overline{h}_t(x) [\mathcal {L}_t p(t,T;x,\cdot)](y)\\
&= 0.
\end{align*}
Therefore, for any $0 \leq t \leq T$,
\[ 0 \leq \overline{h}_T(x) \leq \sum_{y} p(t,T;x,y) \overline{h}_t(y),\]
where the left hand inequality came from the previous lemma. Applying Jensen's inequality gives the result.
\end{comment}
\end{proof}
\subsection{Infinite Shift Invariant Dynamics}
\label{subsec::shift_invariant}
By the reverse Brascamp-Lieb inequality (Lemma 2.8 of \cite{DGI00}) it follows that if $D_n$ is any sequence of domains tending locally to the infinite lattice $\mathbf{L}$ and, for each $n$, $h^n$ is an instance of the GL model on $D_n$ then, regardless of the choice of boundary conditions, ${\rm Var}(h^n(x)) \to \infty$ as $n \to \infty$. Therefore it is not possible to take an infinite volume limit of the height field itself. However, the Brascamp-Lieb inequality (Lemma \ref{bl::lem::bl_inequalities}) gives that ${\rm Var}(\nabla h^n(b))$ remains uniformly bounded as $n \to \infty$, indicating that it should be possible to take an infinite volume limit of the \emph{gradient field}.
In order to make this construction precise, we need to specify in which space the limit should take its values. Let $\mathcal {X}$ be the set of functions $\eta \colon \mathbf{L}^* \to \mathbf{R}$ such that there exists $f \colon \mathbf{L} \to \mathbf{R}$ with $\nabla f = \eta$, and let
\[ \ell_{r,*}^2 = \left\{ \eta \in \mathcal {X} : \sum_{b \in \mathbf{L}^*} \eta^2(b) e^{-2r|x_b|} < \infty \right\}.\]
Let $\mathcal {F} = \sigma(\eta(b) : b \in \mathbf{L}^*)$ be the $\sigma$-algebra on $\mathcal {X}$ generated by the evaluation maps and, for each $D^* \subseteq \mathbf{L}^*$, let $\mathcal {F}_{D^*} = \sigma(\eta(b) : b \in D^*)$ be the $\sigma$-algebra generated by the evaluation maps in $D^*$. Suppose that $D \subseteq \mathbf{L}$ is a bounded domain, $\eta \in \mathcal {X}$, and $\nabla \phi = \eta$. Let $\mathbf{P}_D^\phi$ denote the law of the GL model on $D$ with boundary conditions $\phi$ and suppose that $h$ is distributed according to $\mathbf{P}_D^\phi$. Then the gradient field $\nabla h$ induces a measure $\mathbf{P}_{D^*}^\eta$ on functions $D^* \to \mathbf{R}$. We call $\mathbf{P}_{D^*}^\eta$ the law of the GL model on $D^*$ with \emph{Neumann boundary} conditions $\eta$. Let $\mu$ be a measure on $\ell_{r,*}^2$ and suppose that $\eta$ has the law $\mu$. We say that $\mu$ is a \emph{gradient Gibbs state} associated with the potential $\mathcal {V}$ if for every domain $D^* \subseteq \mathbf{L}^*$,
\[ \mu(\cdot | \mathcal {F}_{(D^c)^*}) =\mathbf{P}_{D^*}^{\eta|\partial D^*}\]
almost surely.
Fix a vector $u \in \mathbf{R}^2$. A Gibbs state $\mu$ is said to have \emph{tilt} $u$ if $\mathbf{E}^\mu \eta(x+ b_i) = u \cdot e_i$ where $b_i = (0,e_i)$ and $x \in \mathbf{L}$ is arbitrary. Next, $\mu$ is said to be shift invariant if $\mu \circ \tau_x^{-1} = \mu$ for every $x \in \mathbf{L}$, where $\tau_x \colon \mathbf{L} \to \mathbf{L}$ denotes translation by $x$. Finally, a shift-invariant $\mu$ is said to be shift-ergodic if every shift-invariant function is $\mu$-almost surely constant.
Funaki and Spohn in \cite{FS97} proved the existence and uniqueness of shift-ergodic Gibbs states in the special case that $\mathbf{L} = \mathbf{Z}^2$. The natural construction is to take an infinite volume limit of gradient measures $\mathbf{P}_{D^*}^\eta$ as $D^*$ tends locally to $\mathbf{L}$ with boundary conditions $\eta(b) = u \cdot (y_b - x_b)$. The difficulty with this approach is that $\mathbf{P}_{D^*}^\eta$ is not shift invariant. Funaki and Spohn circumvent this issue by instead considering the finite volume measure
\[ \mu_n = \frac{1}{\mathcal {Z}_n} \exp\left( -\sum_{b \in (\mathbf{Z}_n^d)^*} \mathcal {V}(\eta(b) - (y_b - x_b) \cdot u) \right) d\nu_n\]
on gradient fields on the torus, where $\nu_n$ is the uniform measure on $(\mathbf{Z}_n^d)^*$. By construction, $\mu_n$ is shift invariant and has tilt $u$, and both of these properties are preserved in the limit as $n \to \infty$. Their proof can be replicated on other lattices.
Just as in the finite volume case, an important tool in the analysis of the infinite volume Gibbs states is their associated Langevin dynamics $(\eta_t)$. The SDS describing $(\eta_t)$ is
\begin{equation}
\label{gl::eqn::grad_sds}
d\eta_t(b) = \left(\sum_{b_1 \ni y_b} \mathcal {V}'(\eta_t(b_1)) - \sum_{b_2 \ni x_b} \mathcal {V}'(\eta_t(b_2))\right) dt + \sqrt{2} d[ W_t(y_b) - W_t(x_b)]
\end{equation}
where $W_t(x)$, $x \in \mathbf{L}$, is an infinite collection of iid Brownian motions. Note that this is just the gradient of the natural infinite volume limit of the SDS \eqref{gl::eqn::dynam}. It is possible to show that for each gradient Gibbs state $\mu$ there exists a unique solution to \eqref{gl::eqn::grad_sds} with initial distribution $\mu$. As the dynamics are stationary for $\mu$, $\eta_t \stackrel{d}{=} \mu$ for all $t$.
In the special case of \emph{zero tilt}, i.e. $u = 0$, it is immediate from the method of dynamic coupling that an infinite volume limit of gradient measures $\mathbf{P}_{D^*}^{0}$, with zero boundary conditions, converges to the zero-tilt Funaki-Spohn state. We will include a statement of this result here as well as a short sketch of the proof since it seems that this has not yet appeared in the literature and, while not necessary for us here, it will simplify some of our proofs later.
The Brascamp-Lieb inequalities for the gradient field carry over to the infinite volume limit and serve to give us some control on the variance of the limiting gradients: we have that
\[ {\rm Var}^\mu( h(x) - h(y)) \leq C \log(1+|x-y|)\]
where $h(x) - h(y)$ is interpreted as $\sum_{i=1}^n \nabla h(b_i)$ where $b_1,\ldots,b_n$ is any sequence of bonds connecting $x$ to $y$. Now suppose that $\eta_t$ is the dynamics of the zero-tilt shift ergodic Gibbs measure. We can set $h_t^S(0) = 0$ for all $t$ and then make sense of the heights of $\eta_t$ by letting $h_t^S$ solve $\nabla h_t^S(b) = \eta_t(b)$. Thus if $|x| = R$ then ${\rm Var}(h_t^S(x)) \leq C \log R$. The form of the dynamics of $h_t^S(x)$ is exactly the same as those of the usual GL model when $x \neq 0$. Thus if $D$ is a lattice domain in $\mathbf{L}$ not containing zero and of diameter $R$ and $h_t$ follows the dynamics of a GL model on $D$ with boundary conditions $\psi$ satisfying $\|\psi\|_\infty \leq \overline{\Lambda} (\log R)^{\overline{\Lambda}}$, we can use the random walk representation of the difference developed in the previous section to construct a coupling of $(h_t^S,h_t)$ such that with $\overline{h}_t = h_t^S - h_t$ we have $\nabla \overline{h}_t(b) = O_{\overline{\Lambda}}(R^{-\xi_1 \xi})$ when ${\rm dist}(b,\partial D) \geq R^{\xi}$. Here, $\xi_1$ is the constant from the Nash continuity estimate. Of course, $\nabla h_t^S = \eta_t$ has nothing to do with our particular choice of $h_t^S(0) = 0$. We have obtained:
\begin{proposition}
Suppose that $h_t$ is an instance of a dynamical GL model on a domain $D$ of diameter $R$ and with boundary condition $\psi$ satisfying $\| \psi\|_\infty \leq \overline{\Lambda} (\log R)^{\overline{\Lambda}}$. Then there exists a coupling of $\nabla h_t$ with $\eta_t$, $\eta_t$ the dynamics of the shift ergodic zero tilt infinite gradient Gibbs state, such that if $f$ is $L$-Lipschitz then
\[ \mathbf{E}|f(\nabla h_t(b)) - f(\eta_t(b))| = L [O_{\overline{\Lambda}}(R^{-\xi_1 \xi})]\]
when ${\rm dist}(b,\partial D) \geq R^{\xi}$. In particular, there exists a constant $c_f$ depending only on $f$ such that
\[ |\mathbf{E}[f(\nabla h_t(b))] - c_f| = L [O_{\overline{\Lambda}}(R^{-\xi_1 \xi})]\]
when ${\rm dist}(b,\partial D) \geq R^{\xi}$.
\end{proposition}
\section{Harmonic Decomposition}
\label{sec::harm}
Throughout this section we will make use of the following notation. For a given lattice domain $F \in \mathbf{L}$ and function $\phi \colon \partial F \to \mathbf{R}$, we let $\mathbf{P}_F^\phi$ denote the law of the GL model on $F$ with boundary condition $\phi$. We will denote by $h^\phi$ a random variable distributed according to $\mathbf{P}_F^\phi$, where $F$ is understood through the domain of definition of $\phi$. If $\tilde{\phi} \colon \partial F \to \mathbf{R}$ is another given boundary condition, then we denote by $(h_t^{\phi,\mathcal {D}},h_t^{\tilde{\phi},\mathcal {D}})$ the stationary Langevin dynamics for the coupling of $\mathbf{P}_F^\phi, \mathbf{P}_F^{\tilde{\phi}}$ and write $\mathbf{E}_\mathcal {D}^{\phi,\tilde{\phi}}$ for the expectation under this coupling. Omitting the subscript $t$ and writing $(h^{\phi,\mathcal {D}},h^{\tilde{\phi},\mathcal {D}})$ corresponds to a single time-slice of $(h_t^{\phi,\mathcal {D}},h_t^{\tilde{\phi},\mathcal {D}})$. Finally, if $g \colon F \to \mathbf{R}$ is a given function, then we let $\mathbf{Q}_F^{\phi,g}$ denote the law of $(h^\phi - g)$.
We begin by fixing $D \in \mathbf{L}$ with diameter $R$. Let $\psi, \tilde{\psi} \colon \partial D \to \mathbf{R}$ be given functions satisfying $\|\psi\|_\infty + \| \tilde{\psi} \|_\infty \leq \overline{\Lambda} (\log R)^{\overline{\Lambda}}$. The main purpose of this section is to prove the following theorem.
\begin{theorem}
\label{harm::thm::coupling}
There exist $C,\epsilon,\delta > 0$ such that if $r \geq CR^{1-\epsilon}$ then the following holds. There exists a coupling $(h^\psi, h^{\tilde{\psi}})$ of $\mathbf{P}_D^\psi, \mathbf{P}_D^{\tilde{\psi}}$ such that if $\widehat{h} \colon D(r) \to \mathbf{R}$ is the harmonic extension of $\overline{h} = h^\psi - h^{\tilde{\psi}}$ from $\partial D(r)$ to $D(r)$ then
\[ \mathbf{P}[ \overline{h} \neq \widehat{h} \text{ in } D(r)] = O_{\overline{\Lambda}}(R^{-\delta}).\]
\end{theorem}
Taking $\tilde{\psi} = 0$ immediately leads to the following corollary, which is sufficiently important that we state it as a separate theorem.
\begin{theorem}
\label{harm::thm::mean_harmonic}
There exist $C, \epsilon,\delta > 0$ such that if $r \geq CR^{1-\epsilon}$ then the following holds. If $\widehat{h} \colon D(r) \to \mathbf{R}$ is the harmonic extension of $\mathbf{E}^\psi h$ from $\partial D(r)$ to $D(r)$ then
\[ \sup_{x \in D(r)} |\mathbf{E}^\psi h(x) - \widehat{h}(x)| = O_{\overline{\Lambda}}(R^{-\delta}).\]
\end{theorem}
Suppose that $\mu,\nu$ are measures on a common measure space such that $\mu$ is absolutely continuous with respect to $\nu$. Recall that the relative entropy of $\mu$ with respect to $\nu$ is the quantity
\[ \mathbf{H}(\mu|\nu) = \mathbf{E}_\mu \left[ \log \frac{d\mu}{d\nu} \right].\]
Morally, the idea of our proof is to get an explicit upper bound on the rate of decay of
\[ \mathbf{H}(\mathbf{P}^{\tilde{\psi}}|\mathbf{Q}^{\psi,\widehat{h}}) + \mathbf{H}(\mathbf{Q}^{\psi,\widehat{h}}|\mathbf{P}^{\tilde{\psi}})\]
as $R \to \infty$ then invoke the well-known bound that the total variation distance of measures is bounded from above by the square-root of their relative entropy \cite{DZ98}. We will show shortly that the relative entropy takes the form
\begin{equation}
\label{harm::eqn::entropy_form}
\sum_{b \in D^*} \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} a(b) \nabla \widehat{h}(b) (\nabla \widehat{h}(b) - \nabla \overline{h}(b))
\end{equation}
where $\overline{h} = h^\psi - h^{\tilde{\psi}}$ and $a(b)$ is a collection of conductances which are \emph{random} but uniformly bounded from above and below. In the Gaussian case it turns out that $a(b) \equiv a$ is constant, hence one can sum by parts and then use the harmonicity of $\widehat{h}$ to see that the entropy vanishes. The idea of our proof is to show that this constancy holds approximately in expectation:
\begin{equation}
\label{harm::eqn::entropy_approx_constant}
\sum_{b \in D^*} \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} a(b) \nabla \widehat{h}(b) \nabla(\widehat{h}(b)-\overline{h}(b)) = \sum_{b \in D^*} \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} a_\mathcal {V} \nabla \widehat{h}(b) \nabla(\widehat{h}(b)-\overline{h}(b)) + O_{\overline{\Lambda}}(R^{-\delta})
\end{equation}
for some $\delta > 0$, where $a_{\mathcal {V}} > 0$ is a constant depending only on $\mathcal {V}$.
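For reference, the bound from \cite{DZ98} invoked above is Pinsker's inequality, which in our normalization reads:

```latex
% Pinsker's inequality: total variation is controlled by relative entropy.
\[ \| \mu - \nu \|_{TV} \leq \sqrt{\tfrac{1}{2} \mathbf{H}(\mu | \nu)}. \]
```

In particular, a bound of $O_{\overline{\Lambda}}(R^{-2\delta})$ on the relative entropy yields a total variation bound of $O_{\overline{\Lambda}}(R^{-\delta})$.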
Precisely two random variables appear in each of the summands of \eqref{harm::eqn::entropy_form}: $a(b)$ and $\nabla \overline{h}(b)$. We will show that $a(b) \approx \mathcal {V}''(\nabla h^{\psi,\mathcal {D}}(b))$ up to a negligible error term. The discussion in Section \ref{subsec::shift_invariant} implies that $\mathbf{E}^{\psi} \mathcal {V}''(\nabla h^{\psi}(b)) \approx a_\mathcal {V}$ up to another negligible error term for a constant $a_\mathcal {V}$ depending only on $\mathcal {V}$ if the distance of $b$ to $\partial D$ is $\Omega(R^\xi)$, $\xi > 0$. The desired equality \eqref{harm::eqn::entropy_approx_constant} would follow \emph{provided} we could show that this holds \emph{conditional} on $\nabla \overline{h}(b)$; the guiding intuition that this should be true is that $\overline{h}$ is harmonic hence deterministic when one couples using Langevin dynamics in the Gaussian case.
Two main difficulties arise when applying this idea.
\begin{enumerate}
\item \label{harm::shift_issue} As $\overline{h}$ is random in the case of the general GL model, the joint distribution of $(h^{\psi},\nabla \overline{h})$ under any reasonable coupling is very complicated and seems intractable to deal with directly. In particular, even though it is possible to arrange that $\overline{h}$ possesses a certain amount of regularity and even that $\nabla \overline{h}$ concentrates around a deterministic value, it is still random and even small fluctuations in $\nabla \overline{h}$ could imply vastly different behavior in $h^{\psi}$.
\item \label{harm::reg_issue} Even in the unconditioned case, we only have enough shift invariance if the distance of $b$ is $\Omega(R^\xi)$ from the boundary. This will force us to deal with a boundary term, the magnitude of which will in turn depend on the regularity of both $\nabla \overline{h}$ and $\widehat{h}$ near $\partial D$. Since we make no hypotheses on $\psi, \tilde{\psi}$ other than being pointwise bounded it may very well be that neither $\overline{h}$ nor $\widehat{h}$ possess any regularity near $\partial D$.
\end{enumerate}
The discussion in Section \ref{subsec::shift_invariant} suggests that it would be possible to circumvent \eqref{harm::shift_issue} if it were true that the gradient field $\nabla \overline{h}$ had enough continuity in $b$. Indeed, if we could write
\[ \mathcal {V}''(\nabla h^{\psi,\mathcal {D}}(b)) \nabla \widehat{h}(b) ( \nabla \widehat{h}(b) - \nabla \overline{h}(b)) \approx \mathcal {V}''(\nabla h^{\psi,\mathcal {D}}(b)) \nabla \widehat{h}(b) (\nabla \widehat{h}(b) - \nabla \overline{h}(b'))\]
where the distance between $b,b'$ is $\Omega(R^\xi)$ then we would get the desired result because there is no difficulty in coupling $h^{\psi,\mathcal {D}},h^{\tilde{\psi},\mathcal {D}}$ near $b$ to the zero tilt shift ergodic model conditional on $\nabla h^{\psi,\mathcal {D}}(b'), \nabla h^{\tilde{\psi},\mathcal {D}}(b')$. Unfortunately, $\nabla \overline{h}$ does not possess sufficient continuity to make such an argument possible. One can see this as $\overline{h}_t$ solves the non-linear equation
\begin{equation}
\label{harm::eqn::langevin_diff}
\partial_t \overline{h}_t(x) = \sum_{b \ni x} [\mathcal {V}'(\nabla h_t^{\psi,\mathcal {D}}(b)) - \mathcal {V}'(\nabla h_t^{\tilde{\psi},\mathcal {D}}(b))],
\end{equation}
the linearization of which has conductances that are highly irregular in space.
Another approach, which is very similar to the one we take, is to argue that
\[ \mathcal {V}''(\nabla h_T^{\psi,\mathcal {D}}(b)) \nabla \widehat{h}(b) (\nabla \widehat{h}(b) - \nabla \overline{h}_T(b)) \approx \mathcal {V}''(\nabla h_T^{\psi,\mathcal {D}}(b)) \nabla \widehat{h}(b) (\nabla \widehat{h}(b) - \nabla \overline{h}_{T - R^\xi}(b)).\]
This suffices since conditional on $h_{T-R^\xi}^{\psi,\mathcal {D}}, h_{T- R^\xi}^{\tilde{\psi},\mathcal {D}}$ there is no difficulty in coupling $h_T^{\psi,\mathcal {D}}, h_T^{\tilde{\psi},\mathcal {D}}$ with the shift invariant dynamics with negligible error. Such a bound is non-trivial since the best bound that one can get for $\partial_t \nabla \overline{h}_t(b)$ is $O_{\overline{\Lambda}}(R^{-1})$ and we need a bound of $O_{\overline{\Lambda}}(R^{-1-\epsilon})$, some $\epsilon > 0$. Note that when the model is Gaussian, \eqref{harm::eqn::langevin_diff} is a heat equation, with conductances which are constant both spatially and in time, which is continuous in time and discrete spatially, hence given any reasonable initial conditions $\partial_t \overline{h}_t = O_{\overline{\Lambda}}(R^{\epsilon-2})$ provided $t \sim R^{2-\epsilon}$. The main difficulty with the more general non-Gaussian model is that the conductances in the linearization of \eqref{harm::eqn::langevin_diff} vary in time much more quickly than $\nabla \overline{h}_t$ in addition to being highly irregular in space.
The second issue is less serious and resolved by invoking the length-area comparison technique developed in Section \ref{sec::dyn_couping}. This gives us that, for $\epsilon > 0$ fixed, there exists $\alpha(\epsilon)R^{1-2\epsilon} \leq R_1 \leq (\alpha(\epsilon)+1)R^{1-2\epsilon}$, some $\alpha(\epsilon) > 0$, such that
\[ \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} \sum_{b \in A^*} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{-\epsilon})\]
where $A = D(2R_1) \setminus D(R_1/2)$.
This implies that if $g$ is the harmonic extension of $\overline{h}$ from $\partial A$ to $A$ then $\mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} \sum_{b \in A^*} |\nabla g(b)|^2 = O_{\overline{\Lambda}}(R^{-\epsilon})$ by the variational characterization of $g$. Going back to \eqref{harm::eqn::entropy_form}, this implies that we can construct our initial coupling so that $\overline{h}$ is harmonic with probability $1-O_{\overline{\Lambda}}(R^{-\epsilon/2})$. On the event $\mathcal {H}$ of the harmonic coupling at the boundary, we have that $\nabla \overline{h}(b) = O_{\overline{\Lambda}}(R^{3\epsilon-1})$ uniformly in $b \in \partial D(R_1)$. Thus with high probability $\overline{h}$ has plenty of regularity a bit away from $\partial D$ while $\psi - \tilde{\psi}$ need not have any.
Since we will be dealing with subdomains of $D(R_1)$ that involve moving further into $D(R_1)$ from $\partial D(R_1)$ at scales different from $R^{1-\epsilon}$ it will be useful for us to introduce two pieces of notation: from now on, we set $E = D(R_1)$ and $E'(r) = E \setminus E(r)$.
\begin{figure}[h!]
\includegraphics[width=120mm]{division_diagram.pdf}
\caption{On the right hand side, $D$ is the domain surrounded by the outer dashed line, $A$ is the annular region surrounded by the inner and outer dashed lines, and $E$ is the region shaded in gray. On the left hand side, $E$ is the region bounded by the outer curve, the inner curve with long dashes represents the inner boundary of $A$, and the lightly dashed curve in the middle is the boundary of $E(R^{\epsilon_2})$, which is also colored in light gray.}
\end{figure}
For the rest of this section we will assume that $R_1 \approx R^{1-2\epsilon_1}$ has been chosen such that:
\begin{align}
\label{harm::eqn::h_reg_assump}
\mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} \sum_{b \in A^*} |\nabla \overline{h}(b)|^2 &= O_{\overline{\Lambda}}(R^{-\epsilon_1}), \\
\mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} \sum_{b \in D(R_1)^*} |\nabla \overline{h}(b)|^2 &= O_{\overline{\Lambda}}(R^{\epsilon_1}), \nonumber
\end{align}
where $A = D(2R_1) \setminus D(R_1/2)$ as before. That such a choice is possible is ensured by Lemma \ref{gl::lem::grad_error}. We will fix $\epsilon_1 > 0$ later.
\begin{lemma}
\label{harm::lem::entropy_form}
Suppose that $F \in \mathbf{L}$ is a lattice domain and $\zeta,\tilde{\zeta} \colon \partial F \to \mathbf{R}$ are given boundary conditions. Let $g \colon F \to \mathbf{R}$ be any function such that $g|\partial F = \zeta - \tilde{\zeta}$. Then
\begin{align*}
& \mathbf{H}(\mathbf{P}_F^{\tilde{\zeta}}| \mathbf{Q}_F^{\zeta,g}) + \mathbf{H}(\mathbf{Q}_F^{\zeta,g}|\mathbf{P}_F^{\tilde{\zeta}})\\
=& \sum_{b \in F^*} \mathbf{E}^{\zeta,\tilde{\zeta}} \bigg[ \mathcal {V}''(\nabla h^{\zeta}(b)) \nabla g(b) \nabla(g-\overline{h})(b) + O( (|\nabla \overline{h}(b)|^2 + |\nabla g(b)|^2)|\nabla g(b)|) \bigg].
\end{align*}
where $\mathbf{E}^{\zeta,\tilde{\zeta}}$ denotes the expectation under any coupling of $\mathbf{P}_F^{\zeta},\mathbf{P}_F^{\tilde{\zeta}}$ and $\overline{h} = h^\zeta - h^{\tilde{\zeta}}$.
\end{lemma}
\begin{proof}
The densities $p,q=q_g$ of $\mathbf{P}_F^{\tilde{\zeta}}$ and $\mathbf{Q}_F^{\zeta,g}$ with respect to Lebesgue measure are given by
\[ p(h) = \frac{1}{\mathcal {Z}_p} \exp\left(-\sum_{b \in F^*} \mathcal {V}( \nabla (h \vee \tilde{\zeta})(b))\right) \text{ and }
q(h) = \frac{1}{\mathcal {Z}_q} \exp\left(-\sum_{b \in F^*} \mathcal {V}( \nabla [(h + g) \vee \zeta](b))\right).
\]
Here,
\[ (h \vee \tilde{\zeta})(x) =
\begin{cases}
h(x) \text{ if } x \in F\\
\tilde{\zeta}(x) \text{ if } x \in \partial F
\end{cases}
\]
and likewise for $(h + g) \vee \zeta$.
Assuming that $g = \zeta - \tilde{\zeta}$ on $\partial F$, then with $\mathbf{E}^{\zeta,g}$ denoting the expectation under $\mathbf{Q}_F^{\zeta,g}$ we have
\begin{align*}
\mathbf{H}(\mathbf{Q}_F^{\zeta,g}|\mathbf{P}_F^{\tilde{\zeta}}) - \log \frac{\mathcal {Z}_p}{\mathcal {Z}_q}
&= \sum_{b \in F^*} \mathbf{E}^{\zeta,g} \bigg[\mathcal {V}( \nabla h \vee \tilde{\zeta}(b)) - \mathcal {V}( \nabla ((h + g) \vee \zeta)(b)) \bigg]\\
&= -\sum_{b \in F^*} \left( \mathbf{E}^{\zeta} \int_0^1 \mathcal {V}'(\nabla [(h + (s-1)g) ](b))ds \right) \nabla g(b)\\
\end{align*}
Similarly, with $\mathbf{E}^{\tilde{\zeta}}$ denoting the expectation under $\mathbf{P}_F^{\tilde{\zeta}}$,
\begin{align*}
\mathbf{H}(\mathbf{P}_F^{\tilde{\zeta}}|\mathbf{Q}_F^{\zeta,g}) - \log \frac{\mathcal {Z}_q}{\mathcal {Z}_p}
&= \sum_{b \in F^*} \mathbf{E}^{\tilde{\zeta}} \bigg[\mathcal {V}(\nabla ((h + g) \vee \zeta)(b)) - \mathcal {V}( (\nabla h \vee \tilde{\zeta})(b)) \bigg]\\
&= \sum_{b \in F^*} \left( \mathbf{E}^{\tilde{\zeta}} \int_0^1 \mathcal {V}'(\nabla (h + s g)(b)) ds \right) \nabla g(b)\\
\end{align*}
Thus we have that
\begin{align*}
& \mathbf{H}(\mathbf{P}_F^{\tilde{\zeta}}|\mathbf{Q}_F^{\zeta,g}) + \mathbf{H}(\mathbf{Q}_F^{\zeta,g}|\mathbf{P}_F^{\tilde{\zeta}})\\
&=\sum_{b \in F^*} \left( \mathbf{E}^{\tilde{\zeta}} \int_0^1 \mathcal {V}'(\nabla (h + s g)(b)) ds - \mathbf{E}^{\zeta} \int_0^1 \mathcal {V}'(\nabla [(h + (s-1)g) ](b))ds\right) \nabla g(b)\\
&= \sum_{b \in F^*} \left( \mathbf{E}^{\zeta,\tilde{\zeta}} \int_0^1 \int_0^1 \mathcal {V}''(\nabla (h^{\zeta} + (s-1) g)(b) + r \nabla (g-\overline{h})(b)) dr ds\right) \nabla g(b) \nabla (g-\overline{h})(b).
\end{align*}
The result follows as
\[ \int_0^1 \int_0^1 \mathcal {V}''(\nabla (h^{\zeta} + (s-1) g)(b) + r \nabla (g-\overline{h})(b)) dr ds = \mathcal {V}''(\nabla h^{\zeta}(b)) + O(\nabla \overline{h}(b)) + O(\nabla g(b)).\]
\end{proof}
\begin{lemma}[Harmonic Coupling at the Boundary]
\label{harm::lem::harmonic_coupling} There exists a coupling $(h^{\psi},h^{\tilde{\psi}})$ of $\mathbf{P}_D^\psi, \mathbf{P}_D^{\tilde{\psi}}$ such that
$\overline{h} = h^{\psi} - h^{\tilde{\psi}}$ is harmonic in $A$ with probability $1- O_{\overline{\Lambda}}(R^{-\epsilon_1/2})$ where $A = D(2R_1) \setminus D(R_1/2)$.
\end{lemma}
\begin{proof}
Let $\overline{h} = h^{\psi,\mathcal {D}} - h^{\tilde{\psi},\mathcal {D}}$. Then by our choice of $R_1$ we have that
\[ \sum_{b \in A^*} \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} (\nabla \overline{h}(b))^2 = O_{\overline{\Lambda}}(R^{-\epsilon_1}).\]
Let $g \colon A \to \mathbf{R}$ be the function on $A$ which is harmonic in $A$ and has boundary values given by $\overline{h}$ on $\partial A$. Let $(\zeta, \tilde{\zeta}) = (h^{\psi,\mathcal {D}},h^{\tilde{\psi},\mathcal {D}})| \partial A \times \partial A$. Conditional on $(\zeta,\tilde{\zeta})$, let $\mathbf{P}_A^\zeta, \mathbf{P}_A^{\tilde{\zeta}}$ have the laws of the GL model on $A$ with boundary conditions $\zeta,\tilde{\zeta}$, respectively, and let $\mathbf{Q}_A^{\zeta,g}$ have the law of the GL model on $A$ with boundary condition $\zeta$ less $g$. As $\mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}} \sum_{b \in A^*} |\nabla g(b)|^2 = O_{\overline{\Lambda}}(R^{-\epsilon_1})$ it follows immediately from the Cauchy-Schwarz inequality and the previous lemma that
\[ \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}}[ \mathbf{H}(\mathbf{P}_A^{\tilde{\zeta}}|\mathbf{Q}_A^{\zeta,g}) + \mathbf{H}(\mathbf{Q}_A^{\zeta,g}|\mathbf{P}_A^{\tilde{\zeta}})] = O_{\overline{\Lambda}}(R^{-\epsilon_1}).\]
This implies the result.
\end{proof}
From now on, we will use the notation $(h^{\psi},h^{\tilde{\psi}})$ to indicate the coupling of $\mathbf{P}_D^\psi,\mathbf{P}_D^{\tilde{\psi}}$ given in the previous lemma. Let $\mathcal {H}$ denote the event that $\overline{h} = h^{\psi} - h^{\tilde{\psi}}$ is harmonic in $A$; for the most part, we will be working on this event. Let $(\zeta,\tilde{\zeta}) = (h^\psi,h^{\tilde{\psi}}) | \partial E \times \partial E$ and, using our usual convention, let $\mathbf{P}_E^{\tilde{\zeta}}$ denote the law of the GL model on $E$ with boundary condition $\tilde{\zeta}$. For a given function $g$ on $E$ let $\mathbf{Q}_E^{\zeta,g}$ have the law of the GL model on $E$ with boundary condition $\zeta$ less $g$.
\begin{lemma}
\label{harm::lem::regularity}
Suppose that $g \colon E \to \mathbf{R}$ is the harmonic extension of $\overline{h} = h^{\psi} - h^{\tilde{\psi}}$ from $\partial E$ to $E$. We have that $\mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}}[|\nabla g(b)| \big| \mathcal {H}] = O_{\overline{\Lambda}}(R^{3\epsilon_1-1})$ uniformly in $b \in E^*$.
\end{lemma}
\begin{proof}
Let $b = (x,y) \in E^*$ be arbitrary. Let $X_t, Y_t$ be standard random walks started at $x,y$, respectively, coupled together so that they move in the same direction at each time step. Let $\sigma_X,\sigma_Y$ denote their first exit times from $E$ and set $\sigma = \sigma_X \wedge \sigma_Y$. Then $(X_\sigma,Y_\sigma) = b'$ for some $b' \in \partial E^*$, hence it suffices to show that
\[ \mathbf{E}_\mathcal {D}^{\psi,\tilde{\psi}}\big[\max_{b' \in \partial E^*} |\nabla g(b')|\big] = O_{\overline{\Lambda}}(R^{3\epsilon_1-1}).\]
Assume now that $b = (x,y) \in \partial E^*$. Let $p_k = \mathbf{P}[{\rm dist}(Y_{\sigma_Y},y) \geq k]$. By Lemma \ref{dhf::lem::beurling_thick} we know that $p_k \leq C (k \wedge R^{1-2\epsilon_1})^{-1}$ for some $C > 0$ independent of $D,E$. Let $K = [R^{1-2\epsilon_1}]$. Now,
\begin{align*}
|\nabla g(b)|
&\leq O_{\overline{\Lambda}}(R^{2\epsilon_1-1}) + \sum_{k=1}^K (p_{k}-p_{k+1}) \sup_{|z-x| = k} |g(z) - g(x)|\\
&\leq O_{\overline{\Lambda}}(R^{2\epsilon_1-1}) + \sum_{k=1}^K p_k \big| \sup_{|z-x| = k} |g(z) - g(x)| - \sup_{|z-x| = k+1} |g(z) - g(x)|\big|\\
&\leq O_{\overline{\Lambda}}(R^{2\epsilon_1-1}) + \sum_{k=1}^K \frac{C}{k} O_{\overline{\Lambda}}(R^{2\epsilon_1-1})\\
&= O_{\overline{\Lambda}}(R^{3\epsilon_1-1}),
\end{align*}
as desired. The extra $O_{\overline{\Lambda}}(R^{2\epsilon_1-1})$ that appeared in the second step came from the boundary terms in the summation by parts.
\end{proof}
Conditional on $(\zeta,\tilde{\zeta})$, assume that $(h_t^{\zeta,\mathcal {D}},h_t^{\tilde{\zeta},\mathcal {D}})$ is the stationary coupling of the dynamics of $h^\zeta$ and $h^{\tilde{\zeta}}$. We will write $\overline{h}_t = h_t^{\zeta,\mathcal {D}} - h_t^{\tilde{\zeta},\mathcal {D}}$ from now on. Assume that $R_2 \approx R^{1-\epsilon_2}$, with $\epsilon_2 > 0$ to be fixed later, has been chosen so that
\[ \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{b \in \partial E(R_2)^*} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{\epsilon_2-1}) \text{ and } \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}}\sum_{b \in E(R_2)^*} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(R^{\epsilon_2}).\]
Set
\[ a_t(b) = \int_0^1 \mathcal {V}''(\nabla h_t^{\tilde{\zeta},\mathcal {D}}(b) + s \nabla \overline{h}_t(b))ds\]
and let $\mathcal {L}_t$ denote the operator
\[ [\mathcal {L}_t f](x) = \sum_{b \ni x} a_t(b) \nabla f(b),\ t \geq 0.\]
For each fixed $u \geq 0$ and $x \in E$ let $p(u,t;x,y)$ be the solution in $E$ of the equation
\begin{equation}
\label{harm::eqn::fund_sol}
\partial_t p(u,t;x,y) = [\mathcal {L}_t p(u,t;\cdot,y)](x),\ p(u,u;x,y) = 1_{\{x = y\}},\ t \geq u
\end{equation}
with $p(u,t;x,y) = 0$ if either $x \notin E$ or $y \notin E$. By \eqref{harm::eqn::langevin_diff} we know that
\[ \overline{h}_t(x) = \sum_{y \in E} p(u,t;x,y) \overline{h}_u(y)\]
whenever $t \geq u$.
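This representation can be checked directly: by the fundamental theorem of calculus,
\[ \mathcal {V}'(\nabla h_t^{\zeta,\mathcal {D}}(b)) - \mathcal {V}'(\nabla h_t^{\tilde{\zeta},\mathcal {D}}(b)) = \left(\int_0^1 \mathcal {V}''(\nabla h_t^{\tilde{\zeta},\mathcal {D}}(b) + s \nabla \overline{h}_t(b)) ds\right) \nabla \overline{h}_t(b) = a_t(b) \nabla \overline{h}_t(b),\]
so subtracting the coupled Langevin equations (the common Brownian motions cancel in the stationary coupling) gives $\partial_t \overline{h}_t = \mathcal {L}_t \overline{h}_t$ in $E$, of which $p$ is exactly the fundamental solution.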
\begin{lemma}
Suppose that $u \geq 0$, $t_1 \geq R^{2-3\epsilon_2} + u$ and $t_2 = t_1 + R^{2-3\epsilon_2}$. We have
\[ \frac{1}{R^{2-3\epsilon_2}} \sum_{b,b' \in E(R_2/4)^*} \int_{t_1}^{t_2} |\nabla \nabla p(u,r;b,b')| dr = O(R^{19\epsilon_2}).\]
\end{lemma}
\begin{proof}
As $p$ is just the transition kernel of a symmetric random walk with bounded rates in $E$, we know that $p$ satisfies the Nash-Aronson estimates (Lemma \ref{symm_rw::lem::nash_aronson_bounded})
\[ p(s,t;x,y) \leq \frac{C}{1 \vee (t-s)} \exp\left( - \frac{|x-y|}{1 \vee (t-s)^{1/2}} \right) + O(R^{-100})\]
for all $x,y \in E(R_2/4)$ with $|t-s| \leq R^{2-3\epsilon_2}$. Equation \eqref{harm::eqn::fund_sol} implies that $p$ satisfies the energy inequality
\begin{align*}
& \sum_{x \in F} [p(u,s;x,y)]^2 + \int_s^t \sum_{b \in F^*} (\nabla p(u,r;b,y))^2 dr\\
\leq& \sum_{x \in F} [p(u,t;x,y)]^2 + \int_s^t \sum_{b \in \partial F^*} |\nabla p(u,r;b,y)||p(u,r;x_b,y)|dr
\end{align*}
for $u \leq s < t$, $F \subseteq E$, and $y \in E$. The same holds if we switch the gradient from the third to the fourth coordinate, since the semigroup property
\[ p(u,t;x,y) = \sum_{z \in E} p(u,s;x,z) p(s,t;z,y)\]
implies that $\partial_t p(u,t;x,y) = [\mathcal {L}_t p(u,t;x,\cdot)](y)$.
Let $u \geq 0$ and take $s_1 = u+ \tfrac{1}{4} R^{2-3\epsilon_2}$ and $s_2 = s_1 + \tfrac{1}{4} R^{2-3\epsilon_2}$. The Nash-Aronson estimates (Lemma \ref{symm_rw::lem::nash_aronson_bounded}) imply
\[ p(u,r;x,z) \leq \frac{C}{R^{2-7\epsilon_2}} \text{ for all } x \in E(R_2/4), z \in E,\ s_1 \leq r \leq s_2\]
for some $C > 0$ depending only on $\mathcal {V}$. Fix $\xi > 0$ and $x_0 \in E(R_2/4)$. The energy inequality thus implies that
\begin{align*}
& \frac{1}{\tfrac{1}{4}R^{2-3\epsilon_2}} \int_{s_1}^{s_2} \sum_{b \in B(x_0,R^\xi)^*} (\nabla p(u,r;b,z))^2 dr\\
\leq& \frac{1}{\tfrac{1}{4} R^{2-3\epsilon_2}} \sum_{y \in B(x_0,R^\xi)} (p(u,s_2;y,z))^2 + \frac{1}{\tfrac{1}{4} R^{2-3\epsilon_2}} \int_{s_1}^{s_2} \sum_{b \in \partial B(x_0,R^{\xi})^*} |\nabla p(u,r;b,z)|| p(u,r;x_b,z)| dr\\
\leq& O(R^{3\epsilon_2-2}) \cdot O(R^{2\xi + 14\epsilon_2 - 4}) + O(R^{\xi + 14\epsilon_2 -4})
= O(R^{14\epsilon_2 - 4 + \xi}).
\end{align*}
If we now apply the iterative scheme developed in the previous section we see that we can find $\overline{R} = \overline{R}(\xi) = \alpha(\xi,\epsilon_2) R^{\xi}$ so that
\begin{equation}
\label{harm::eqn::kernel_coord3_bound}
\frac{1}{\tfrac{1}{4}R^{2-3\epsilon_2}} \int_{s_1}^{s_2} \sum_{b \in B(x_0,\overline{R})^*} (\nabla p(u,r;b,z))^2 dr = O(R^{15\epsilon_2 - 4})
\end{equation}
where $\overline{R}$ is non-random and does not depend on $x_0 \in E, z \in E$, rather just on $\mathcal {V}$. The constant in the estimate is \emph{uniform} in the choice of $u$ and $x_0$. Set $t_1 = u + R^{2-3\epsilon_2}$ and $t_2 = t_1 + R^{2-3\epsilon_2}$ so that $u \leq s_1 < s_2 < s_2 + \tfrac{1}{4}R^{2-3\epsilon_2} < t_1$. Using exactly the same proof, we have
\begin{equation}
\label{harm::eqn::kernel_coord4_bound}
\frac{1}{R^{2-3\epsilon_2}} \int_{t_1}^{t_2} \sum_{b \in B(x_0,\overline{R})^*} (\nabla p(v,r;z,b))^2 dr = O(R^{15\epsilon_2 - 4}) \text{ for all } z \in E
\end{equation}
whenever $u \leq v \leq s_2$. Here, it is important that $t_1 - s_2 = \Theta(R^{2-3\epsilon_2})$ so that the Nash-Aronson estimate applies as we used it before.
By the semigroup property we can factor the mixed derivative $\nabla \nabla p(u,r;b,b')$ as
\begin{align*}
\nabla \nabla p(u,r;b,b')
&= \sum_{z \in E} \nabla p(u,v;b,z) \nabla p(v,r;z,b')\\
&\leq \sum_{z \in E} \big[ (\nabla p(u,v;b,z))^2 + (\nabla p(v,r;z,b'))^2 \big]
\end{align*}
for any $u < v < r$. Taking $\xi = 1-2\epsilon_2$ and using \eqref{harm::eqn::kernel_coord3_bound}, \eqref{harm::eqn::kernel_coord4_bound} we see that
\begin{align*}
&\sum_{b,b' \in E(R_2/4)^*} \sum_{z \in E} \frac{1}{\tfrac{1}{4}(R^{2-3\epsilon_2})^2} \int_{t_1}^{t_2} \int_{s_1}^{s_2} (\nabla p(u,v;b,z))^2 + (\nabla p(v,r;z,b'))^2 dv dr = \\
& \sum_{z \in E} \frac{1}{\tfrac{1}{4}(R^{2-3\epsilon_2})^2} \int_{t_1}^{t_2} \int_{s_1}^{s_2}
\left(\sum_{b' \in E(R_2/4)^*} \sum_{b \in E(R_2/4)^*} (\nabla p(u,v;b,z))^2 + \sum_{b \in E(R_2/4)^*} \sum_{b' \in E(R_2/4)^*} (\nabla p(v,r;z,b'))^2\right) dv dr\\
=& O(R^4) O(R^2 / R^{2-4\epsilon_2}) O(R^{15\epsilon_2-4}) = O(R^{19\epsilon_2}).
\end{align*}
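For the record, the exponents in the final product combine as
\[ 4 + \big[2 - (2-4\epsilon_2)\big] + \big[15\epsilon_2 - 4\big] = 19\epsilon_2,\]
in agreement with the statement of the lemma.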
\end{proof}
Now we assume that $\epsilon_2 > 0$ is so small that $\xi_1 > 50000 \epsilon_2$ and $\epsilon_1 > 0$ is so small that $\xi_1 \epsilon_2 > 100 \epsilon_1 > 0$, where $\xi_1$ is the exponent from the Nash continuity estimate (Lemma \ref{symm_rw::lem::nash_continuity_bounded}).
From now on we take $100\epsilon_2 < \alpha < \xi_1/100$ and $0 < \beta < \alpha/100$, and $\overline{R} = \overline{R}(\alpha)$. Let $N(y) = |B(y,\overline{R}) \cap E|$,
\[ \breve{p}(s,t;x,y) = \frac{1}{N(y)} \sum_{z \in B(y,\overline{R})} p(s,t;x,z)\] and set
\[ \breve{h}_t(x) = \sum_{y \in E} \breve{p}(0,t;x,y) \overline{h}_0(y).\]
The purpose of the next lemma is to get an $H^1$ estimate on the difference between $\breve{h}$ and $\overline{h}$ in addition to some control on the integrated time derivative of $\breve{h}$.
\begin{lemma}
There exists non-random $R^{2-3\epsilon_2} \leq r_1 < r_2 \leq 2R^{2-3\epsilon_2}$ with $r_2 - r_1 = R^\beta$ such that
\begin{align*}
\sum_{b \in E(R_2)^*} \mathbf{E}| \nabla \breve{h}_{r_i}(b) - \nabla \overline{h}_{r_i}(b)|^2 &= O_{\overline{\Lambda}}(R^{-2\epsilon_2}) \text{ and }
\mathbf{E} \sup_{x \in E(R_2)} \left( \int_{r_1}^{r_2} |\partial_r \breve{h}_r(x)| dr\right)^2 = O_{\overline{\Lambda}}(R^{-2-2\epsilon_2}).
\end{align*}
\end{lemma}
\begin{proof}
Let $t_1 = R^{2-3\epsilon_2}$ and $t_2 = 2R^{2-3\epsilon_2}$. For $b \in E(R_2)^*$ fixed and $t_1 \leq r \leq t_2$ we have
\begin{align*}
& \sum_{y \in E(R_2/2)} |\nabla \breve{p}(0,r;b,y) - \nabla p(0,r;b,y)|\\
\leq& \sum_{y \in E(R_2/2)}\frac{1}{N(y)}\sum_{z \in B(y,\overline{R})} |\nabla p(0,r;b,z) - \nabla p(0,r;b,y)|\\
\leq& \sum_{b' \in E(R_2/4)^*} \frac{O(R^{4\alpha})}{\Theta(R^{2\alpha})} |\nabla \nabla p(0,r;b,b')|
= O(R^{2\alpha}) \sum_{b' \in E(R_2/4)^*} |\nabla \nabla p(0,r;b,b')|.
\end{align*}
The last step followed by using that for $y \in E(R_2/2)$ we have $N(y) = \Theta(R^{2\alpha})$ and that $|\nabla p(0,r;b,z) - \nabla p(0,r;b,y)| \leq \sum_{k} |\nabla \nabla p(0,r;b,b_k)|$ where $b_1,\ldots,b_m$, $m = O(R^{\alpha})$, is any sequence of bonds connecting $y$ to $z$. If we make any reasonable choice of connecting path for each pair $z,y$ (say, a shortest path), then the number of times any particular bond is used in the entire sum is $O(R^{4\alpha})$.
By the Nash continuity estimate (Lemma \ref{symm_rw::lem::nash_continuity_bounded}), we also have the crude bound
\[ \sum_{y \in E(R_2/2)} |\nabla \breve{p}(0,r;b,y) - \nabla p(0,r;b,y)| = O(R^2) \cdot O(R^{7\epsilon_2-2-\xi_1}) = O(R^{7\epsilon_2 - \xi_1}).\]
Thus
\[ \left( \sum_{y \in E(R_2/2)} |\nabla \breve{p}(0,r;b,y) - \nabla p(0,r;b,y)| \right)^2 \leq O(R^{2\alpha + 7\epsilon_2-\xi_1}) \sum_{b' \in E(R_2/4)^*} |\nabla \nabla p(0,r;b,b')|.\]
Combining this estimate with the previous lemma yields
\begin{align*}
& \frac{1}{R^{2-3\epsilon_2}} \mathbf{E} \int_{t_1}^{t_2} \sum_{b \in E(R_2)^*} \left( \sum_{y \in E(R_2/2)} |\nabla \breve{p}(0,r;b,y) - \nabla p(0,r;b,y)| \right)^2 dr\\
\leq& O(R^{2\alpha + 7\epsilon_2-\xi_1}) \cdot O(R^{19\epsilon_2})
= O(R^{-\gamma})
\end{align*}
where
\[ \gamma = \xi_1 - 26\epsilon_2 - 2\alpha > 100\epsilon_2 > 0.\]
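To see that $\gamma$ dominates $\epsilon_2$, recall that $\xi_1 > 50000\epsilon_2$ and $\alpha < \xi_1/100$, so that
\[ \gamma = \xi_1 - 26\epsilon_2 - 2\alpha > \tfrac{98}{100}\xi_1 - 26\epsilon_2 > 49000\epsilon_2 - 26\epsilon_2 > 100\epsilon_2.\]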
This implies that we can find $t_1 \leq r_1 < r_2 \leq t_2$ non-random with $r_2 - r_1 = R^{\beta}$ such that
\[ \mathbf{E} \sum_{b \in E(R_2)^*} \left(\sum_{y \in E(R_2/2)} |\nabla \breve{p}(0,r_i;b,y) - \nabla p(0,r_i;b,y)|\right)^2 = O(R^{-\gamma}).\]
Then we have that
\[ \sum_{b \in E(R_2)^*} \mathbf{E}| \nabla \breve{h}_{r_i}(b) - \nabla \overline{h}_{r_i}(b)|^2 = O_{\overline{\Lambda}}(R^{\epsilon_2-\gamma})\]
for $i=1,2$ as the Nash-Aronson estimate implies
\[ \sum_{b \in E(R_2)^*} \sum_{y \in E'(R_2/2)} |\nabla \breve{p}(0,r_i;b,y) - \nabla p(0,r_i;b,y)| = O(R^{-100}).\]
This gives us the first part of the lemma.
The Nash continuity estimate (Lemma \ref{symm_rw::lem::nash_continuity_bounded}) implies that
\[ \nabla p(0,r;x,b) = O(R^{3\epsilon_2-2}) \cdot O(R^{-\xi_1}) = O(R^{3\epsilon_2-2-\xi_1}) \text{ for } t_1 \leq r \leq t_2\]
where $\xi_1 > 0$ depends only on $\mathcal {V}$. Note that $\mathbf{P}[\sup_{x \in E} |\overline{h}(x)|^2 \geq R^{\epsilon_2}] = O_{\overline{\Lambda}}(R^{-200})$. Hence for $x \in E(R_2)$ we compute
\begin{align*}
& \mathbf{E} \left(\int_{r_1}^{r_2} |\partial_r \breve{h}_r(x)| dr\right)^2
\leq (R^{\epsilon_2}) \mathbf{E} \left(\sum_{y \in E(R_2/2)} \int_{r_1}^{r_2} \left| \frac{1}{N(y)} \sum_{z \in B(y,\overline{R})} \partial_r p(0,r;x,z) \right| dr\right)^2 + O_{\overline{\Lambda}}(R^{-100})\\
=& (R^{\epsilon_2}) \mathbf{E} \left(\sum_{y \in E(R_2/2)} \int_{r_1}^{r_2} \left| \frac{1}{N(y)} \sum_{z \in B(y,\overline{R})} [\mathcal {L}_r p(0,r;x,\cdot)](z) \right| dr\right)^2 + O_{\overline{\Lambda}}(R^{-100})\\
=& (R^{\epsilon_2}) \mathbf{E} \left(\sum_{y \in E(R_2/2)} \int_{r_1}^{r_2} \left| \frac{1}{N(y)} \sum_{b \in \partial B(y,\overline{R})^*} a_r(b) \nabla p(0,r;x,b) \right| dr\right)^2 + O_{\overline{\Lambda}}(R^{-100})\\
\leq& \frac{(R^{\epsilon_2})}{N} O(R^2) \cdot O(R^{\beta}) \cdot O(R^{-\alpha}) \mathbf{E} \sum_{b \in E(R_2/2)^*} \int_{r_1}^{r_2} |\nabla p(0,r;x,b)|^2 dr + O_{\overline{\Lambda}}(R^{-100})\\
=& O_{\overline{\Lambda}}(R^{\epsilon_2}) \cdot O(R^{2+2\beta-\alpha}) \cdot O(R^{15\epsilon_2-4})
= O_{\overline{\Lambda}}(R^{16\epsilon_2+2\beta - \alpha -2}).
\end{align*}
The second part of the lemma now follows based on our choice of $\alpha,\beta$.
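Indeed, since $\alpha > 100\epsilon_2$ and $\beta < \alpha/100$ we have
\[ 16\epsilon_2 + 2\beta - \alpha - 2 \leq 16\epsilon_2 - \tfrac{49}{50}\alpha - 2 < 16\epsilon_2 - 98\epsilon_2 - 2 < -2 - 2\epsilon_2,\]
which gives the claimed bound of $O_{\overline{\Lambda}}(R^{-2-2\epsilon_2})$.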
\end{proof}
Combining all of our estimates we can now prove the main theorem of the section.
\begin{proof}[Proof of Theorem \ref{harm::thm::coupling}]
Recall that our expression for $\mathbf{H}(\mathbf{P}^{\tilde{\zeta}}|\mathbf{Q}^{\zeta,g}) + \mathbf{H}(\mathbf{Q}^{\zeta,g}|\mathbf{P}^{\tilde{\zeta}})$ is
\[ \sum_{b \in E^*} \bigg[ \mathcal {V}''(\nabla h_{r_2}^{\zeta,\mathcal {D}}(b)) \nabla g(b) (\nabla g(b) - \nabla \overline{h}_{r_2}(b)) + O( (|\nabla g(b)|^2 + |\nabla \overline{h}_{r_2}(b)|^2)|\nabla g(b)|) \bigg].\]
We have already pointed out that the second term is negligible, hence we need only deal with the first. Note that
\begin{align*}
& \mathbf{E} \sum_{b \in E(R_2)^*} |\nabla \overline{h}_{r_2}(b) - \nabla \overline{h}_{r_1}(b)|^2\\
\leq& O(1) \mathbf{E} \sum_{b \in E(R_2)^*} \left(\left| \nabla \overline{h}_{r_2}(b) - \nabla \breve{h}_{r_2}(b)\right|^2 +
\left(\int_{r_1}^{r_2} |\nabla \partial_r \breve{h}_r(b)| dr \right)^2 +
\left| \nabla \breve{h}_{r_1}(b) - \nabla \overline{h}_{r_1}(b)\right|^2\right)\\
\leq& O_{\overline{\Lambda}}(R^{-2\epsilon_2}) + O(R^2) \cdot O_{\overline{\Lambda}}(R^{-2-2\epsilon_2})
= O_{\overline{\Lambda}}(R^{-2\epsilon_2}).
\end{align*}
Combining the estimate $\mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{b \in E(R_2)^*} |\nabla g(b)|^2 = O_{\overline{\Lambda}}(R^{\epsilon_1})$ with the Cauchy-Schwarz inequality and using $\epsilon_1 < \epsilon_2$ yields
\[ \sum_{b \in E(R_2)^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \left| \mathcal {V}''(\nabla h_{r_2}^{\zeta,\mathcal {D}}(b)) \big[ \nabla g(b) (\nabla g(b) - \nabla \overline{h}_{r_2}(b)) - \nabla g(b) (\nabla g(b) - \nabla \overline{h}_{r_1}(b)) \big] \right| = O_{\overline{\Lambda}}(R^{-\epsilon_2/2}).\]
Thus we are left to bound
\begin{align*}
&\sum_{b \in E(R_2)^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \mathcal {V}''(\nabla h_{r_2}^{\zeta,\mathcal {D}}(b)) \nabla g(b) (\nabla g(b) - \nabla \overline{h}_{r_1}(b)),\\
&\sum_{b \in E'(R_2)^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \mathcal {V}''(\nabla h_{r_2}^{\zeta,\mathcal {D}}(b)) \nabla g(b) (\nabla g(b) - \nabla \overline{h}_{r_2}(b)).
\end{align*}
We shall begin with the second term. On the event $\mathcal {H}$, the harmonicity of $\overline{h}$ on $A$ implies that for $x,y \in \partial E$ we have $|\overline{h}(x) - \overline{h}(y)| = O_{\overline{\Lambda}}(R^{\epsilon_1-1})|x-y|$ for $|x-y| \leq R^{1-\epsilon_1}$ and $|\overline{h}(x) - \overline{h}(y)| = O(R^{\epsilon_1})$ otherwise. It is a consequence of Lemma \ref{symm_rw::lem::beurling} that the probability that a random walk with bounded rates started at $x$ adjacent to $\partial E$ makes it to distance $r$ from $x$ before exiting $E$ is $O(r^{-\rho})$ for some $\rho > 0$. Hence if $b \in \partial E^*$ then by the random walk representation of the difference (recall subsection \ref{subsec::rw_difference}) we have
\[ \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \overline{h}_t(x_b) = [1-O(R^{-\rho/2})]\sup_{|x-y| \leq R^{1/2}} O_{\overline{\Lambda}}(R^{\epsilon_1-1/2}) + O_{\overline{\Lambda}}(R^{\epsilon_1-\rho/2}) = O_{\overline{\Lambda}}(R^{\epsilon_1-\xi_1}),\]
the last equality coming by shrinking $\xi_1$ if necessary so that $\xi_1 < \rho/2$.
The energy inequality now gives
\begin{align*}
\sum_{b \in E^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} (\nabla \overline{h}(b))^2
&\leq \sum_{b \in \partial E^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}}|\nabla \overline{h}(b)| |\overline{h}(x_b)|
= O(R) \cdot O_{\overline{\Lambda}}(R^{\epsilon_1-\xi_1})
= O_{\overline{\Lambda}}(R^{1+\epsilon_1-\xi_1})
\end{align*}
on the event $\mathcal {H}$. Second, using the iterative technique from the previous section, we have for $a > 0$ that
\begin{align*}
\sum_{b \in E(R^a)^*} \mathbf{E}_{\mathcal {D}}^{\zeta,\tilde{\zeta}} (\nabla \overline{h}(b))^2
&\leq \sum_{b \in \partial E(R^a)^*} \mathbf{E}_{\mathcal {D}}^{\zeta,\tilde{\zeta}} |\nabla \overline{h}(b)| |\overline{h}(x_b)|
= O_{\overline{\Lambda}}(R^{1+2\epsilon_1-a}).
\end{align*}
Thus for $a < a'$ on $\mathcal {H}$ we have the bound
\begin{align*}
\sum_{b \in E(R^a)^* \setminus E(R^{a'})^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} |\nabla g(b)||\nabla \overline{h}(b)|
&\leq \left(\sum_{b \in E(R^{a})^* \setminus E(R^{a'})^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} |\nabla g (b)|^2 \right)^{1/2} \left( \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{b \in E(R^a)^*} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
&\leq [ O_{\overline{\Lambda}}(R^{8\epsilon_1-2+1+(a'-a)})]^{1/2} [ O(R^{1+\epsilon_1-a \vee \xi_1})]^{1/2}\\
&= O_{\overline{\Lambda}}(R^{5\epsilon_1 + a'/2 - (a + a \vee \xi_1)/2})
\end{align*}
Thus taking $a_1 = 0$, $a_2 = \xi_1/2$, and $a_k = \tfrac{4}{3} a_{k-1}$ for $k \geq 3$ we see that on $\mathcal {H}$ we have
\[ \sum_{b \in E'(R_2)^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} |\nabla g(b)||\nabla \overline{h}(b)| \leq \sum_{k=1}^{K} \sum_{b \in E(R^{a_k})^* \setminus E(R^{a_{k+1}})^*} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} |\nabla g(b)||\nabla \overline{h}(b)|
\leq O_{\overline{\Lambda}}(R^{-\xi_1/4})
\]
where $K$ is the smallest integer so that $R^{a_K} \geq R_2$.
This leaves the first term.
Fix $b \in E(R_2)^*$. Let $h_t^S$ denote an instance of the time-varying zero tilt infinite gradient Gibbs state. Assume that the dynamics of $h_t^S, h_t^\zeta, h_t^{\tilde{\zeta}}$ are coupled together so that the Brownian motions of $h_t^S$ and $(h_t^\zeta,h_t^{\tilde{\zeta}})$ are independent up until time $r_1$ after which they are the same. Then conditional on $(h_{r_1}^\zeta,h_{r_1}^{\tilde{\zeta}})$ and in particular on $\overline{h}_{r_1}$, the law of $h_t^S$ for $t \geq r_1$ is still that of a time-varying infinite gradient Gibbs state with zero tilt. On the other hand, we know that $\nabla h_{r_2}^S(b) - \nabla h_{r_2}^{\tilde{\zeta}}(b) = O_{\overline{\Lambda}}(R^{-\xi_1})$. Therefore
\begin{align*}
& \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}}\bigg[ \mathcal {V}''(\nabla h_{r_2}^{\tilde{\zeta}}(b)) \nabla g(b) (\nabla g(b)-\nabla \overline{h}_{r_1}(b))\bigg]\\
=& \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}}\bigg[\mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}}[\mathcal {V}''(\nabla h_{r_2}^S(b))\big| \nabla \overline{h}_{r_1}(b)] \nabla g(b) (\nabla g(b)-\nabla \overline{h}_{r_1}(b)) + O_{\overline{\Lambda}}(R^{-\xi_1 }) \nabla g(b) (\nabla g(b)-\nabla \overline{h}_{r_1}(b))\bigg].
\end{align*}
The second term is negligible and the first is equal to $a_\mathcal {V} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \nabla g(b) (\nabla g(b) - \nabla \overline{h}_{r_1}(b))$ where $a_\mathcal {V} = \mathbf{E} \mathcal {V}''(\nabla h^S(b))$ does not depend on the choice of bond but rather only on $\mathcal {V}$. Summing by parts, on $\mathcal {H}$ we have that
\begin{align*}
& \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{b \in E(R_2)^*} a_\mathcal {V} \nabla g(b)(\nabla g(b) - \nabla \overline{h}_{r_1}(b) )\\
=& -\mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{x \in E(R_2)}(g(x) - \overline{h}_{r_1}(x)) a_\mathcal {V} \Delta g(x) + a_\mathcal {V} \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{b \in \partial E(R_2)^*} (g(x_b) - \overline{h}_{r_1}(x_b)) \nabla g(b)\\
=& \mathbf{E}_\mathcal {D}^{\zeta,\tilde{\zeta}} \sum_{b \in \partial E(R_2)^*} (g(x_b) - \overline{h}_{r_1}(x_b)) \nabla g(b).
\end{align*}
The bulk term vanished since $g$ is harmonic in $E$. Now, we know that $\nabla g(b) = O_{\overline{\Lambda}}(R^{3\epsilon_1-1})$ uniformly in $b \in \partial E(R_2)^*$ on $\mathcal {H}$. It follows from Lemmas \ref{symm_rw::lem::beurling} and \ref{dhf::lem::beurling_thick} that $g(x_b) - \overline{h}_{r_1}(x_b) = O_{\overline{\Lambda}}(R^{-\xi_1 \epsilon_2/2})$. Since $\xi_1 \epsilon_2 > 100\epsilon_1$, the desired result follows.
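Explicitly, the boundary sum consists of $O(R^{1-\epsilon_2})$ terms, each of size $O_{\overline{\Lambda}}(R^{3\epsilon_1-1}) \cdot O_{\overline{\Lambda}}(R^{-\xi_1\epsilon_2/2})$, so that in total it is
\[ O_{\overline{\Lambda}}(R^{3\epsilon_1 - \epsilon_2 - \xi_1\epsilon_2/2}) = o_{\overline{\Lambda}}(1),\]
as $\xi_1\epsilon_2/2 > 50\epsilon_1 > 3\epsilon_1$.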
\end{proof}
\section{Proof of the CLT}
Let $D \subseteq \mathbf{C}$ be a connected, smooth domain. For each $\epsilon > 0$, we let $D_\epsilon = D \cap \epsilon \mathbf{Z}^2$ be a fine lattice approximation of $D$. Let $\eta$ be the random gradient field having the law of the shift-ergodic, zero-tilt infinite gradient Gibbs state associated with the GL model. Let $h_\epsilon$ have the law of the zero boundary GL model on $D_\epsilon$. Fix a base point $x \in \partial D$ and let $x_\epsilon \in \partial D_\epsilon$ be its lattice approximation. Set $h_\epsilon^0(x_\epsilon) = 0$ and, using the gradient field $\eta$, let $h_\epsilon^0$ be the function satisfying $\nabla h_\epsilon^0 = \eta$.
Let $\eta^{\epsilon,D} = \nabla h_\epsilon$ denote the gradient field associated with $h_\epsilon$. Define
\begin{align*}
\xi^{\epsilon,D}(\nabla f) = \epsilon \sum_{b \in (\mathbf{Z}^2)^*} \nabla f(\epsilon b)\, \eta^{\epsilon,D}(b)
\end{align*}
for $f \in C_0^\infty(\mathbf{C})$. Let $\xi^{\epsilon}$ be as in the first section. Theorem \ref{thm::coupling} implies that there exists a coupling of $h_\epsilon, h_\epsilon^0$ such that, with $\overline{h}_\epsilon = h_\epsilon - h_\epsilon^0$ and $\widehat{h}_\epsilon$ the harmonic extension of $\overline{h}_\epsilon$ from $\partial D_\epsilon$ to $D_\epsilon$, we have for some $\zeta,\delta > 0$,
\[ \mathbf{P}[ \overline{h}_\epsilon \neq \widehat{h}_\epsilon \text{ in } D_\epsilon(\epsilon^{\zeta-1})] = O(\epsilon^{\delta}).\]
Fix $f \in C_0^\infty(D)$ and assume that $\epsilon >0$ is sufficiently small so that ${\rm supp }(f) \subseteq D_\epsilon(\epsilon^{\zeta-1})$. Then we have that
\[ \xi^{\epsilon,D}(\nabla f) = \xi^{\epsilon}(\nabla f) + \epsilon \sum_{b \in (\mathbf{Z}^2)^*} \nabla f(\epsilon b) \nabla \overline{h}_\epsilon(b) = \xi^{\epsilon}(\nabla f)\]
on an event $\mathcal {H}$ occurring with probability $1-O(\epsilon^{\delta})$. The second equality follows from summation by parts and the harmonicity of $\widehat{h}_\epsilon$, which equals $\overline{h}_\epsilon$ on ${\rm supp}(f)$ on $\mathcal {H}$. Observe
\begin{align*}
\mathbf{E} (\xi^{\epsilon,D}(\nabla f))^2
&= \epsilon^2 \sum_{b,b' \in D_\epsilon^*} \mathbf{E} [\nabla f(\epsilon b) \nabla h_\epsilon(b)] [\nabla f(\epsilon b') \nabla h_\epsilon(b')]\\
&= \epsilon^2 \sum_{b,b' \in D_\epsilon^*} O(1/({\rm dist}(b,b')^2+1))\\
&= \epsilon^2 \cdot \frac{1}{\epsilon^2} \cdot O(|\log \epsilon|)\\
&= O(|\log \epsilon|).
\end{align*}
A similar estimate also holds for $\mathbf{E} (\xi^\epsilon(\nabla f))^2$. Thus,
\begin{align*}
\mathbf{E} |\xi^{\epsilon,D}(\nabla f) - \xi^{\epsilon}(\nabla f)|
&\leq O(\epsilon^{\delta/2}) \cdot O(|\log \epsilon|^{1/2}),
\end{align*}
where we used that the difference vanishes on $\mathcal {H}$ and applied Cauchy-Schwarz on its complement. Therefore $\xi^{\epsilon,D}(\nabla f) \to \xi(\nabla f)$, where $\xi = h$, for every $f \in C_0^\infty(D)$. Since the restriction to $D$ of the standard Gaussian on $H_0^1(\mathbf{C})$ is the same as the standard Gaussian on $H_0^1(D)$, the result follows.
\section{Homogenization of the HS Random Walk}
Suppose that $D$ is a smooth domain in $\mathbf{C}$ and that $D_N$ is a lattice approximation of $D$. Suppose that $h_t$ is an instance of the GL model on $D_N$ with boundary conditions given by $f$. The purpose of this section is to show that the corresponding HS random walk $X_t$ converges to a Brownian motion on $D$, killed on its first exit. The strategy of the proof is first to use the Nash-Aronson estimates to establish tightness and to prove that the limiting process is Markovian, and then to deduce from the central limit theorem that its Green's function is the same as that of a Brownian motion killed on first exiting $D$.
For $f \colon \mathbf{R}^2 \to \mathbf{R}$ a Lipschitz function, let $\| f\|_{L}$ denote the Lipschitz norm of $f$. Let $\| f \|_{BL} = \| f \|_{\infty} + \| f \|_{L}$ denote the bounded Lipschitz norm. Let $X$ denote the set of Borel measures on $\mathbf{R}^2$ equipped with the norm
\[ \| \mu \| = \sup_{\| f\|_{BL} \leq 1} \left| \int f d\mu \right|.\]
Let $\mathcal {X}$ denote the space of maps $\mu_{t,x} \colon [0,\infty) \times \mathbf{R}^2 \to X$ equipped with the uniform norm:
\[ \| \mu_{t,x}\|_\infty = \sup_{t,x} \| \mu_{t,x} \|.\]
Suppose that for each $t \geq 0$, $\omega_t \colon [0,\infty) \to [0,\infty)$ is a homeomorphism with $\omega_t(0) = 0$. Let $\mathcal {Y} \subseteq \mathcal {X}$ be the set of those $\mu$ with the following properties:
\begin{enumerate}
\item $\mu_{t,x}(A) = \int_{\mathbf{R}^2} \mu_{t-s,y}(A) \mu_{s,x}(dy)$ for $0 \leq s \leq t$ (semigroup property),
\item $\sup_{\| f \|_{BL} \leq 1} \left| \int_{\mathbf{R}^2} f(y) \mu_{t,x_1}(dy) - \int_{\mathbf{R}^2} f(y) \mu_{t,x_2}(dy)\right| \leq \omega_t(|x_1-x_2|)$ (uniform continuity).
\end{enumerate}
Observe that $\mathcal {Y}$ is a closed subset of $\mathcal {X}$. Indeed, suppose that $(\mu_{t,x}^N)$ is a sequence in $\mathcal {Y}$ converging in $\mathcal {X}$ to some $\mu_{t,x}$. Property (2) is preserved since it holds for every $N$. As for property (1), fix $f$ with $\| f \|_{BL} \leq 1$. By definition,
\begin{align*}
\lim_{N \to \infty} \int_{\mathbf{R}^2} f(y) \mu_{t,x}^N(dy) = \int_{\mathbf{R}^2} f(y) \mu_{t,x}(dy).
\end{align*}
Moreover,
\begin{align*}
& \int_{\mathbf{R}^2} f(y) \mu_{t,x}^N(dy)
= \int_{\mathbf{R}^2} \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}^N(dz) \mu_{s,x}^N(dy)\\
=& \int_{\mathbf{R}^2} \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}(dz) \mu_{s,x}^N(dy) + \int_{\mathbf{R}^2} \left[ \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}^N(dz) - \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}(dz) \right]\mu_{s,x}^N(dy).
\end{align*}
The first term converges to
\[ \int_{\mathbf{R}^2} \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}(dz) \mu_{s,x}(dy)\]
by construction and the second term satisfies
\[ \left|\int_{\mathbf{R}^2} \left[ \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}^N(dz) - \int_{\mathbf{R}^2} f(z) \mu_{t-s,y}(dz) \right]\mu_{s,x}^N(dy)\right| \leq \| \mu^N - \mu \|,\]
hence vanishes in the limit.
Let $\mu^N \in \mathcal {X}$ denote the element of $\mathcal {X}$ induced by the HS random walk $X_{tN^2}$ on $D_N$, with time rescaled by a factor of $N^2$.
\begin{lemma}
We have $\mu^N \in \mathcal {Y}$.
\end{lemma}
\begin{proof}
Let $p(t;x,y)$ be the transition kernel of the HS random walk on $N D_N$. If $t > 0$ then by the Nash-Aronson estimates we have
\[ \sup_{x,y} p(t/2N^2;x,y) \leq \frac{C_t}{N^2}.\]
Hence
\begin{align*}
& \left|\int_{\mathbf{R}^2} f(y) \mu_{t,x_1}^N(dy) - \int_{\mathbf{R}^2} f(y) \mu_{t,x_2}^N(dy)\right|
\leq \sum_{y \in D_N} |f(y)| |p(tN^2;Nx_1,y) - p(tN^2;Nx_2,y)|\\
\leq& \| f\|_\infty |D_N| \| p^* \| \left( \frac{|x_1 - x_2|}{t^{1/2}} \right)^{\xi_1}
\leq C_t' \left( \frac{|x_1 - x_2|}{t^{1/2}} \right)^{\xi_1},
\end{align*}
as desired.
\end{proof}
The next step is to prove that $\mu^N$ has subsequential limits. In order to show this, we need to consider a slightly different measure and consider a different topology.
\begin{lemma}[Tightness]
Let $T_1,T_2,\ldots$ be the jump times of $X_t$ and let $Y_t$ be the process which is given by $Y_{T_k} = X_{T_k}$ for all $k$ and linearly interpolated in between. Let $\mathbf{P}$ be the law of $Y_t$. Then $\mathbf{P}$ is supported on
\end{lemma}
\begin{proof}
We compute,
\begin{align*}
\mathbf{E}_x(X_{s+h} - X_s)^2
&= \sum_{y,z} (y-z)^2 p(s;x,y) p(h;y,z)
= c_1 \sum_{y,z} (y-z)^2 p^*(c_2 s;x,y) p^*( c_2 h;y,z)\\
&\leq c_1 \mathbf{E}_x^*[X_{c_2 (s+h)} - X_{c_2 s}]^2.
\end{align*}
\end{proof}
The previous theorem gives that the limiting Green's function of our Markov process is the same as that of Brownian motion.
\begin{lemma}[Identification of the limit]
asdf asdf
\end{lemma}
\section{Introduction}
The idea of statistical mechanics is to model physical systems by describing them probabilistically at microscopic scales and then studying their macroscopic behavior. Many lattice-based planar models at criticality are believed to have scaling limits which are invariant under conformal symmetries, a reflection of the heuristic that the asymptotic behavior at criticality should be independent of the choice of the underlying lattice. The realizations of these models tend to organize themselves into large clusters separated from each other by thin interfaces which, in turn, have proven to be interesting objects to study in the scaling limit. The last decade has brought a number of rather exciting developments in this direction, primarily due to the introduction of SLE \cite{S01}, a one-parameter family of conformally invariant random curves which are conjectured to describe the limiting interfaces in many models. This has now been proved rigorously in several special cases: loop-erased random walk and the uniform spanning tree \cite{LSW04}, chordal level lines of the discrete Gaussian free field \cite{SS09}, the harmonic explorer \cite{SS05}, the Ising model on the square lattice \cite{S07} and on isoradial graphs \cite{CS10U}, and percolation on the triangular lattice \cite{S01, CN06}.
One of the core principles of statistical mechanics is that of \emph{universality}: the exact microscopic specification of a model should not affect its macroscopic behavior. There are two ways in which universality can arise in this context: stability of the limit with respect to \emph{changes to the lattice} and, the stronger notion, with respect to \emph{changes to the Hamiltonian}. The results of \cite{LSW04, SS09, SS05, CS10U} fall into the first category. Roughly, this follows in \cite{LSW04, SS09, SS05} since underlying the conformal invariance of the models described in these works is the convergence of simple random walk to Brownian motion, a classical result which is lattice independent and is in fact true in much greater generality. Extending Smirnov's results on the Ising model \cite{S07} beyond the square lattice is much more challenging \cite{CS10DCI, CS10U} and it is a well-known open problem to extend the results of \cite{S01, CN06} to other lattices.
The purpose of this work is to prove the conformal invariance of limiting interfaces for a large class of random surface models that is \emph{stable with respect to non-perturbative changes to the Hamiltonian}, of which there is no prior example.
\subsection{Main Results}
Specifically, we study the massless field on $D_n = D \cap \tfrac{1}{n} \mathbf{Z}^2$ with Hamiltonian $\mathcal {H}(h^n) = \sum_{b \in D_n^*} \mathcal {V}(\nabla h^n(b))$. Here, $D \subseteq \mathbf{C}$ is a bounded, simply connected Jordan domain with smooth boundary. The sum is over the set $D_n^*$ of edges in the induced subgraph of $\tfrac{1}{n} \mathbf{Z}^2$ with vertices in $D$ and $\nabla h^n(b) = h^n(y) - h^n(x)$ denotes the discrete gradient of $h^n$ across the oriented bond $b=(x,y)$. We assume that $h^n(x) = \phi^n(x)$ when $x \in \partial D_n$ and $\phi^n \colon \partial D_n \to \mathbf{R}$ is a given bounded function. We consider a general interaction $\mathcal {V} \in C^2(\mathbf{R})$ which is assumed only to satisfy:
\begin{enumerate}
\item $\mathcal {V}(x) = \mathcal {V}(-x)$ (symmetry),
\item $0 < a_\mathcal {V} \leq \mathcal {V}''(x) \leq A_\mathcal {V} < \infty$ (uniform convexity), and
\item $\mathcal {V}''$ is $L$-Lipschitz.
\end{enumerate}
This is the so-called \emph{Ginzburg-Landau $\nabla \phi$ effective interface (GL) model}, also known as the \emph{anharmonic crystal}. The first condition is a reflection of the hypothesis that our bonds are undirected. The role played by the second and third conditions is technical. Note that we can assume without loss of generality that $\mathcal {V}(0) = 0$. The variables $h^n(x)$ represent the heights of a random surface which serves as a model of an interface separating two pure phases. The simplest case is $\mathcal {V}(x) = \tfrac{1}{2} x^2$, which corresponds to the so-called discrete Gaussian free field, but our hypotheses allow for much more exotic choices such as $\mathcal {V}(x) = 4x^2 + \cos(x) + e^{-x^2}$.
The purpose of this work is to determine the limiting law of the \emph{chordal zero-height contours} of $h^n$. To keep the article from being unnecessarily complicated, we will select our boundary conditions in such a way that there is only a single such curve. That is, we fix $x,y \in \partial D$ distinct and let $x_n, y_n$ be points in $\partial D_n$ with minimal distance to $x,y$, respectively. Denote by $\partial_+^n$ the part of $\partial D_n$ connecting $x_n$ to $y_n$ in the clockwise direction and $\partial_-^n = \partial D_n \setminus \partial_+^n$. Let $x_n^*,y_n^*$ be the edges containing $x_n,y_n$, respectively, which connect $\partial_+^n$ to $\partial_-^n$. Suppose that $h^n$ has the law of the GL model on $D_n$ with boundary conditions $h^n|_{\partial D_n} \equiv \phi^n$ where $\phi^n|_{\partial_+^n} \in (0,\infty)$ and $\phi^n|_{\partial_-^n} \in (-\infty,0)$. Let $\gamma^n$ be the unique path in $D_n^*$ connecting $x_n^*$ to $y_n^*$ which has the property that for each $t$, $\gamma^n(t)$ is the first edge $\{u,v\}$ on the square adjacent to $\gamma^n(t-1)$ in the clockwise direction such that $h^n(u) > 0$ and $h^n(v) < 0$ if $\gamma^n(t-1)$ is oriented horizontally and in the counterclockwise direction if $\gamma^n(t-1)$ is oriented vertically (see Figure \ref{fig::turning_rule}).
\begin{theorem}
\label{intro::thm::sle_convergence}
There exists $\lambda \in (0,\infty)$ depending only on $\mathcal {V}$ such that the following is true. If $\phi^n|_{\partial_+^n} = \lambda$ and $\phi^n|_{\partial_-^n} = -\lambda$, then up to reparameterization, the piecewise linear interpolation of $\gamma^n$ converges in distribution with respect to the uniform topology to an ${\rm SLE}(4)$ curve connecting $x$ to $y$ in $D$.
\end{theorem}
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/orient.pdf}
\caption{We must fix a convention which dictates the direction in which $\gamma^n$ turns, as ambiguities may arise. The white (resp. gray) disks at the boundary of a square indicate sites at which the field is positive (resp. negative). In each of the situations depicted above, there are two possible directions in which $\gamma^n$ can turn and still preserve the constraint that the field is positive (resp. negative) on the left (resp. right) side of $\gamma^n$. Our convention is that on horizontal (resp. vertical) dual edges, $\gamma^n$ goes to the first dual edge in the clockwise direction (resp. counterclockwise) where there is a sign change. \label{fig::turning_rule}}
\end{figure}
This resolves, in the large and important special case of the GL model, the following conjecture due to Sheffield:\bigskip
\noindent{\bf Problem 10.1.3 \cite{SHE_RS06}:}
\emph{``If a height function $\phi$ on $\mathbf{Z}^2$ is interpolated to a function $\overline{\phi}$ which is continuous and piecewise linear on simplices, then the level sets $C_a$, given by $\overline{\phi}^{-1}(a)$, for $a \in \mathbf{R}$ are unions of disjoint cycles. What do the typical ``large'' cycles look like when $\Phi$ is simply attractive and sampled from a rough gradient phase? The answer is given in \cite{SS09} in the simplest case of quadratic nearest neighbor potentials - in this case, ``the scaling limit'' of the loops as the mesh size gets finer is well defined, and the limiting loops look locally like a variant of the Schramm-Loewner evolution with parameter $\kappa = 4$. We conjecture that this limit is universal - i.e., that the level sets have the same limiting law for all simply attractive potentials in a rough phase.''}\bigskip
In Sheffield's terminology, $\Phi$ is said to be a \emph{simply attractive potential}, i.e. a convex, nearest-neighbor, difference potential, if $\Phi$ takes the form $\sum_{b \in D_n^*} \mathcal {V}_b(\nabla h(b))$ where for each $b \in D_n^*$, $\mathcal {V}_b$ is a convex function. $\Phi$ is said to be isotropic if $\mathcal {V}_b = \mathcal {V}$, i.e. does not depend on $b$. Thus the Hamiltonian for the GL model is an \emph{isotropic simply attractive potential} which is \emph{uniformly convex}.
The GL model has been the subject of much recent work. Gibbs states were classified by Funaki and Spohn in \cite{FS97}, where they also studied macroscopic dynamics. A large deviations principle for the surface shape with zero boundary conditions but in the presence of a chemical potential was established by Deuschel, Giacomin, and Ioffe in \cite{DGI00}, and Funaki and Sakagawa \cite{FS04} extended this result to the case of non-zero boundary conditions using the contraction principle. The behavior of the maximum is studied by Deuschel and Giacomin in \cite{DG00} and by Deuschel and Nishikawa in \cite{DN07} in the case of Langevin dynamics. Central limit theorems for Gibbs states were proved by Naddaf and Spencer in \cite{NS97} for zero tilt and later by Giacomin, Olla, and Spohn for general tilt and dynamics in \cite{GOS01}. The CLT on finite domains as well as an explicit representation for the limiting covariance was obtained in \cite{M10}.
We remark that it is possible to weaken significantly the restrictions on the boundary conditions. As shown in a forthcoming work, the limit is ${\rm SLE}(4;\rho)$ in the piecewise constant case and a ``continuum version'' of ${\rm SLE}(4;\rho)$ for $C^1$ boundary conditions. We also remark that the reason for the convention dictating the direction in which $\gamma^n$ turns in Theorem \ref{intro::thm::sle_convergence} is that with this choice the law of $\gamma^n$ is invariant with respect to the transformation given by exchanging the signs of the boundary conditions. Moreover, this scheme yields a path which is equivalent to that which arises from the triangulation method described in \cite[Section 1.5]{SS09}, in particular by adding to $\mathbf{Z}^2$ the edges of the form $\{(x,y),(x+1,y-1)\}$. There are a number of other natural local rules, for example for the curve to move to the first edge in the clockwise direction where the height field has a sign change. This leads to a law which is not invariant with respect to this transformation, and the limit is no longer ${\rm SLE}(4)$ but rather some ${\rm SLE}(4;\rho)$.
\subsection{Overview of ${\rm SLE}$}
The Schramm-Loewner evolution (${\rm SLE}$) is a one-parameter family of conformally invariant random curves, introduced by Oded Schramm in \cite{S01} as a candidate for, and later proved to be, the scaling limit of loop erased random walk \cite{LSW04} and the interfaces in critical percolation \cite{S01, CN06}. ${\rm SLE}$ comes in two different flavors: radial and chordal. The former describes a curve connecting a point on the boundary of a domain to its interior and the latter a curve connecting two points on the boundary. We will restrict our discussion to the latter case since it is the one relevant for this article. We remark that there are many excellent surveys on ${\rm SLE}$, for example \cite{LAW05, W03}, to which we direct the reader interested in a detailed introduction to the subject.
Chordal ${\rm SLE}(\kappa)$ on the upper half-plane $\mathbf{H}=\{ z \in \mathbf{C} : {\rm Im}(z) > 0\}$ connecting $0$ to $\infty$ is easiest to describe first in terms of a family of \emph{random conformal maps} which are given as the solution to the Loewner ODE
\[ \partial_t g_t(z) = \frac{2}{g_t(z) - W(t)},\ \ g_0(z) = z.\]
Here, $W = \sqrt{\kappa} B$ where $B$ is a standard Brownian motion. The domain of $g_t$ is $\mathbf{H}_t = \{z \in \mathbf{H} : \tau(z) > t\}$ where $\tau(z) = \inf\{ t \geq 0 : {\rm Im}(g_t(z)) = 0\}$. By work of Rohde and Schramm \cite{RS05}, $\mathbf{H}_t$ arises as the unbounded connected component of $\mathbf{H} \setminus \gamma([0,t])$ for a random curve $\gamma$, the ${\rm SLE}$ \emph{trace}. This is what allows us to refer to ${\rm SLE}$ as a curve. ${\rm SLE}(\kappa)$ connecting boundary points $x$ and $y$ of a simply connected Jordan domain $D$ is defined by applying a conformal transformation $\varphi \colon \mathbf{H} \to D$ sending $0$ to $x$ and $\infty$ to $y$ to ${\rm SLE}(\kappa)$ on $\mathbf{H}$. Of course, this leaves one degree of freedom in the choice of $\varphi$, so this only defines ${\rm SLE}(\kappa)$ on $D$ up to reparameterization.
The following two properties characterize chordal ${\rm SLE}(\kappa)$:
\begin{enumerate}
\item {\it conformal invariance}: If $D,D'$ are simply connected Jordan domains with marked boundary points $x,y \in \partial D$ and $x',y' \in \partial D'$ and $\varphi \colon D \to D'$ is a conformal map taking $x,y$ to $x',y'$, respectively, then the image of a chordal ${\rm SLE}(\kappa)$ connecting $x$ to $y$ in $D$ under $\varphi$ is a chordal ${\rm SLE}(\kappa)$ connecting $x'$ to $y'$ in $D'$.
\item {\it domain Markov property}: if $\gamma$ is the trace of a chordal ${\rm SLE}(\kappa)$ from $x$ to $y$ in $D$, then conditional on $\gamma[0,s]$, $\gamma$ has the law of a chordal ${\rm SLE}(\kappa)$ from $\gamma(s)$ to $y$ in the connected component of $D \setminus \gamma[0,s]$ containing $y$.
\end{enumerate}
Many families of random curves arising from interfaces of two-dimensional lattice models are believed to satisfy these two properties in the scaling limit, hence converge to some ${\rm SLE}(\kappa)$. While there are many conjectures, proving such convergence is extremely challenging and, as we mentioned earlier, rigorous proofs are available only in a few isolated cases. Establishing a strong form of universality has proved to be particularly difficult since the arguments in these works are rather delicate and depend critically on the microscopic specification of the model. For example, while the interfaces of percolation on the \emph{triangular lattice} have been shown to converge to ${\rm SLE}(6)$ \cite{CN06, S01} the combinatorial argument of \cite{S01} is not applicable for any other lattice. Even the seemingly simple extension of the results of \cite{S01} to percolation on the square lattice has been open for much of the past decade.
\subsection{Strategy of Proof}
The strategy for proving convergence to ${\rm SLE}$ involves several steps, the most difficult and important of which is to find an observable of the underlying model and prove it has a conformally invariant limit which is also a martingale. We now describe the observable used in this article. Suppose that $\gamma$ is the trace of an ${\rm SLE}(4)$ curve in $\mathbf{H}$ from $0$ to $\infty$ and $(g_t)$ is the corresponding family of conformal maps. Let $f_t \colon \mathbf{H}_t \to \mathbf{R}$ be the function harmonic on $\mathbf{H}_t$ with boundary values $0$ on the right side of $\gamma$ and $(0,\infty)$ and $1$ on the left side of $\gamma$ and $(-\infty,0)$. We can express $f_t$ explicitly in terms of $g_t$ as follows:
\[ f_t(z) = \frac{1}{\pi} {\rm Im}(\log(g_t(z) - W(t))).\]
A calculation of the It\^{o} derivative of the right-hand side shows that $f_t(z)$ evolves as a martingale in time for $z$ fixed precisely because $\kappa = 4$. This property characterizes ${\rm SLE}(4)$ among random simple curves \cite{SS05, SS09}.
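Indeed, setting $Z_t = g_t(z) - W(t)$, so that the harmonic function above equals $\tfrac{1}{\pi} {\rm Im}(\log Z_t)$, the Loewner equation gives $dZ_t = \tfrac{2}{Z_t}\, dt - \sqrt{\kappa}\, dB_t$, and hence by It\^{o}'s formula
\[ d \log Z_t = \frac{dZ_t}{Z_t} - \frac{\kappa}{2 Z_t^2}\, dt = \frac{4 - \kappa}{2 Z_t^2}\, dt - \frac{\sqrt{\kappa}}{Z_t}\, dB_t.\]
The drift term vanishes if and only if $\kappa = 4$, in which case taking imaginary parts shows that $f_t(z)$ is a local martingale for each fixed $z$.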
The key step in the proof of Theorem \ref{intro::thm::sle_convergence} is to show that this property approximately holds for the corresponding interface of the GL model, provided $\lambda > 0$ is chosen appropriately. Specifically, suppose that $D, D_n, x_n, y_n, \gamma^n$ are as in the statement of Theorem \ref{intro::thm::sle_convergence} and that $\mathcal {F}_t^n = \sigma( \gamma_s^n : s \leq t)$. Let $D_n(\gamma,t,\epsilon) = \{ x \in D_n : {\rm dist}(x, \partial D_n \cup \gamma^n([0,t])) \geq \epsilon\}$.
\begin{theorem}
\label{intro::thm::approximate_martingale}
There exists $\lambda \in (0,\infty)$ depending only on $\mathcal {V}$ such that the following is true. Let $f_t^n$ be the function on $D_n \setminus \gamma^n([0,t])$ which is discrete harmonic in the interior and has boundary values $\lambda$ on $\partial_+^n$ and the left side of $\gamma^n([0,t])$ and $-\lambda$ on $\partial_-^n$ and the right side of $\gamma^n([0,t])$. Also let $M_t^n(x) = \mathbf{E}[ h^n(x) | \mathcal {F}_t^n]$ and $\mathcal {E}_t^n(\epsilon) = \max\{|f_t^n(x) - M_t^n(x)| : x \in D_n(\gamma,t,\epsilon)\}$. For every $\epsilon, \delta > 0$ there exists $n$ sufficiently large such that for every $\mathcal {F}_t^n$ stopping time $\tau$ we have
\[ \mathbf{P}[ \mathcal {E}_\tau^n(\epsilon) \geq \delta] \leq \delta.\]
\end{theorem}
\noindent This, in particular, implies that $f_t^n(x)$ is an approximate martingale.
\subsubsection*{Main Steps}
Theorem \ref{intro::thm::approximate_martingale} should be thought of as a law of large numbers for the conditional mean of the height given the realization of the path up to any stopping time. Its proof consists of several important steps. First, Theorem \ref{harm::thm::mean_harmonic} of \cite{M10} implies that $M_\tau^n(x)$ is with high probability uniformly close to the discrete harmonic extension of its boundary values from $\partial D_n(\gamma,t,n^{-\epsilon})$ to $D_n(\gamma,t,n^{-\epsilon})$ provided $\epsilon = \epsilon(\mathcal {V}) > 0$ is sufficiently small. In particular, $M_\tau^n(x)$ is approximately discrete harmonic mesoscopically close to $\gamma^n[0,\tau]$ relative to the Euclidean metric in $\mathbf{R}^2$. With respect to the graph metric, in which the distance is given by the number of edges in the shortest path, the distance at which this \emph{a priori} estimate holds from the path is unbounded in $n$. Using the results of Sections \ref{sec::expectation} and \ref{sec::hic} we will prove that this estimate can be boosted further to get the approximate harmonicity of $M_t^n(x)$ up to finite distances from $\gamma^n[0,\tau]$ in the graph metric:
\begin{theorem}
\label{intro::thm::harmonic_up_to_boundary}
Fix $\overline{\Lambda} > 0$ and suppose that $h^n$ has the law of the GL model on $D_n$ with boundary conditions $\phi$ satisfying $\phi|_{\partial_+^n} \in (0,\overline{\Lambda})$ and $\phi|_{\partial_-^n} \in (-\overline{\Lambda},0)$. For every $\delta > 0$ there exists $r_0 = r_0(\overline{\Lambda},\delta) > 0$ such that the following is true. Let $r > r_0$ and $\psi_t^n$ be the function on $D_n(\gamma,t,r n^{-1})$ which satisfies the boundary value problem
\[ (\Delta \psi_t^n)|_{D_n(\gamma,t,rn^{-1})} \equiv 0,\ \ \psi_t^n|_{\partial D_n(\gamma,t,rn^{-1})} \equiv M_t^n\]
where $\Delta$ denotes the discrete Laplacian.
Let $\mathcal {D}_t^n(r) = \max\{|\psi_t^n(x) - M_t^n(x)| : x \in D_n(\gamma,t,r n^{-1})\}$. For every $\mathcal {F}_t^n$ stopping time $\tau$, we have $\mathbf{E}[ \mathcal {D}_\tau^n(r)] \leq \delta.$
\end{theorem}
This reduces the proof of Theorem \ref{intro::thm::approximate_martingale} to showing that the boundary values of $M_t^n$ very close to $\gamma^n[0,t]$ averaged according to harmonic measure are approximately constant. Specifically, the latter task requires two estimates:
\begin{enumerate}
\item Correlation decay of the boundary values of $M_t^n$ at points which are far away from each other.
\item The law of $M_t^n$ at a point on $\gamma^n$ sampled from harmonic measure has a scaling limit as $n \to \infty$.
\end{enumerate}
Step (1) is a consequence of Proposition \ref{hic::prop::hic}, which we will not restate here, and step (2) comes from the following theorem:
\begin{figure}
\includegraphics[width=90mm]{figures/scalinng_r.pdf}
\caption{We show that for each $r \geq 0$, the interface $\gamma^n$ has a scaling limit when viewed from the perspective of $X_{\tau(r)}$, where $X$ is a simple random walk and $\tau(r)$ is the first time $X$ comes within distance $rn^{-1}$ of $\gamma^n$. This expands on the Schramm-Sheffield approach, where it is only necessary to construct the scaling limit for $r = 0$.}
\end{figure}
\begin{theorem}
\label{intro::thm::scaling_limit}
For each $r \geq 0$ there exists a unique measure $\nu_r$ on bi-infinite simple paths in $(\mathbf{Z}^2)^*$ which come exactly within distance $r$ of $0$ such that the following is true. Suppose that $\tau$ is an $\mathcal {F}_t^n$ stopping time, $X$ is a simple random walk on $\tfrac{1}{n} \mathbf{Z}^2$ initialized in $D_n(\gamma,\tau,\epsilon)$ independent of $h^n$, and $\tau(r)$ is the first time that $X$ gets within distance $rn^{-1}$ of $\partial (D_n \setminus \gamma^n[0,\tau])$. Let $\gamma_+^n$ denote the positive side of $\gamma^n$ and ${\rm dist}(\cdot,A)$ denote the distance in the internal metric of $D_n \setminus \gamma^n[0,\tau]$ to $A$. Conditional on both
\begin{enumerate}
\item ${\rm dist}(X_{\tau(r)}, \gamma_+^n) = r n^{-1}$ and
\item ${\rm dist}(X_{\tau(r)},\partial D_n \cup \{\gamma^n(\tau)\}) \geq S n^{-1}$,
\end{enumerate}
let $\nu_{n,r,R,S}$ be the probability measure on simple paths in $(\mathbf{Z}^2)^*$ induced by the law of $B(0,R) \cap n(\gamma^n[0,\tau] - X_{\tau(r)})$. For every $\delta, r, R > 0$, there exists $S_0$ such that $S \geq S_0$ implies
\[ \| \nu_{n,r,R,S} - \nu_r|_{B(0,R)} \|_{TV} \leq \delta\]
for all $n$ large enough, where $\nu_r|_{B(0,R)}$ denotes the law of $\gamma \cap B(0,R)$ for $\gamma \sim \nu_r$.
\end{theorem}
Theorem \ref{intro::thm::scaling_limit} is a mesoscopic version of \cite[Theorem 3.21]{SS09} applicable for the GL model. Its proof is based on the idea that the geometry of $\gamma^n$ is spatially mixing and has two main steps which, roughly, are:
\begin{enumerate}
\item The geometry of zero height interfaces of $h^n$ near a point $x_0$ is approximately independent of the geometry of $\gamma^n$ away from $x_0$ (see Section \ref{sec::ni}),
\item With high probability, $\gamma^n$ will hook-up with a large zero-height interface passing through a point $x_0$ conditional upon $\gamma^n$ passing near $x_0$.
\end{enumerate}
Step (1) is model specific and requires a challenging argument in the general GL setting. On the other hand, we are able to reuse many of the high level ideas behind step (2) from \cite{SS09} in our setting thanks to their generality.
We now explain how to prove Theorem \ref{intro::thm::approximate_martingale} from Theorems \ref{intro::thm::harmonic_up_to_boundary} and \ref{intro::thm::scaling_limit}. Let $\lambda_r \in (0,\infty)$ be the constant given by the following procedure.
\begin{enumerate}
\item Sample $\gamma_r \sim \nu_r$, let $V_+(\gamma_r)$ be the sites adjacent to $\gamma_r$ which are in the same connected component of $\mathbf{Z}^2 \setminus \gamma_r$ as $0$, and $V_-(\gamma_r)$ the set of all other sites adjacent to $\gamma_r$.
\item Conditional on $\gamma_r$, we let $h_r$ have the law of the GL model on $\mathbf{Z}^2$ conditional on $\{ h_r(x) > 0 : x \in V_+(\gamma_r)\}$ and $\{ h_r(x) < 0 : x \in V_-(\gamma_r)\}$.
\item Set $\lambda_r = \mathbf{E}[h_r(0)]$.
\end{enumerate}
For each $\delta > 0$, we can choose $r$ sufficiently large so that $M_\tau^n(x)$ is with high probability uniformly close to the harmonic function in $D_n(\gamma,\tau,r n^{-1})$ with boundary values $\lambda_r$ (resp. $-\lambda_r$) on the left (resp. right) side of $\gamma^n$. Theorem \ref{intro::thm::approximate_martingale} follows by showing $\lambda = \lim_{r \to \infty} \lambda_r$ exists and $\lambda \in (0,\infty)$.
Deducing convergence to ${\rm SLE}$ in the Carath\'eodory topology from an estimate such as Theorem \ref{intro::thm::approximate_martingale} follows a procedure which by now is standard, see \cite{LSW04, SS05, SS09}. The model specific arguments of \cite{SS09} used to promote the convergence to the uniform topology would work verbatim in our setting; they are, however, unnecessary thanks to the time symmetry of our problem and recent results of Sheffield and Sun \cite{SS10}.
We remark that the existence and positivity of the limit of $(\lambda_r)$ is one of the crucial points of the proof. Indeed, it follows from the work of Kenyon \cite{K01, K00} that the mean of the height function of the double dimer model also converges to the harmonic extension of its boundary values. The method of proof employed here, however, appears to break down when applied to the chordal interfaces of the double dimer model \cite{SP10}. The technical difficulty in that setting is that the estimate of harmonicity of the mean from \cite{K00} requires the boundary to satisfy certain geometric conditions which need not hold for the zero-height interfaces. Thus, just as in our case, one does not have an estimate of harmonicity of the mean which holds all of the way up to the interfaces. In particular, it appears to be very difficult to show that the mean height remains uniformly positive (resp. negative) on the positive (resp. negative) sides of the interfaces.
\begin{comment}
We emphasize that if we had chosen $\gamma^n$ according to a different local rule so that its law is not invariant under a change of sign of the boundary conditions, then the law of the local geometry of $\gamma^n$ as seen from $X_{\tau(r)}$ would depend on whether or not $X_{\tau(r)}$ hits on the positive or negative side. In this case, there exists $\lambda_r^+, \lambda_r^- > 0$ distinct such that $M_\tau^n(x)$ is close to the harmonic function with boundary values $\pm \lambda_r^{\pm}$ on the positive and negative sides of $\gamma^n$. Both sequences $(\lambda_r^\pm)$ have limits $\lambda^{\pm}$ and if we let $h^n|_{\partial_n^\pm} = \lambda^\pm$, then the limit is still ${\rm SLE}(4)$.
\end{comment}
\subsection{Outline} The rest of the article is structured as follows. In Section \ref{sec::setup_notation}, we will fix some notation which will be used repeatedly throughout. The purpose of Section \ref{sec::cond_dynam} is to develop the theory of dynamic coupling for the GL model in the presence of conditioning, in addition to collecting some useful results on stochastic domination. In Section \ref{sec::expectation} we will prove a few technical estimates which allow us to control the moments of the conditioned field near the interface. The main result of Section \ref{sec::hic} is that the law of the field near a particular point $x_0$ on the interface does not depend strongly on the exact geometry of the interface far away from $x_0$. This will allow us to deduce that the mean height is strictly negative near the negative side of the interface and vice versa on the positive side, which in turn implies that $\lambda_r$ is uniformly positive in $r$. We will also prove Theorem \ref{intro::thm::harmonic_up_to_boundary} and deduce from it that $\lambda_r$ is uniformly bounded away from both $0$ and $\infty$ in $r$. In Section \ref{sec::ni}, we will show that the geometry of the interface near a point $x_0$ is approximately independent of its precise geometry far away from $x_0$. This is the key part of Theorem \ref{intro::thm::scaling_limit}. We will explain the proofs of Theorems \ref{intro::thm::scaling_limit} and \ref{intro::thm::approximate_martingale} in Section \ref{sec::bvi}.
This article is the second in a series of two. The first is a prerequisite for this one and we will cite it heavily throughout.
\section{Setup and Notation}
\label{sec::setup_notation}
Throughout the rest of this article, we will frequently make use of the following two assumptions:
\begin{enumerate}
\item[($\partial$)] \label{assump::boundary} Suppose that $D \subseteq \mathbf{Z}^2$ with ${\rm diam}(D) = R$. Assume that $\overline{\Lambda} > 0$ and $\psi \in \mathbf{B}_{\overline{\Lambda}}(D) \equiv \{ \phi \colon \partial D \to \mathbf{R} : \| \phi \|_\infty \leq \overline{\Lambda}\}$.
\item[($C$)] \label{assump::conditioning} Let $V,V_+, V_-$ be non-empty disjoint subsets of $D$ and let $U = \partial D \cup V_- \cup V_+ \cup V$. Suppose that for every $x \in V_+$ there exists $y \in V_- \cup V \cup \partial D$ with $|x-y| \leq 2$ and vice-versa. Finally, assume that $a,b \colon D \to \mathbf{R}$ satisfy
\begin{enumerate}
\item[(1)] $a(x) = -\infty$, $b(x) = \infty$ for $x \notin U$,
\item[(2)] $a(x) \geq - \overline{\Lambda}$, $b(x) = \infty$ for $x \in V_+$,
\item[(3)] $a(x) = -\infty$, $b(x) \leq \overline{\Lambda}$ for $x \in V_-$, and
\item[(4)] $a(x) \geq - \overline{\Lambda}$, $b(x) \leq \overline{\Lambda}$ for $x \in V$.
\end{enumerate}
\end{enumerate}
We will also occasionally make the assumption
\begin{enumerate}
\item[($\pm$)] The conditions of $(C)$ hold in the special case that $V = \emptyset$, $a \equiv 0$ in $V_+$ and $b \equiv 0$ in $V_-$.
\end{enumerate}
We will also make use of the following notation. For $D \subseteq \mathbf{Z}^2$ bounded and $\psi \colon \partial D \to \mathbf{R}$ a given boundary condition, we let $\mathbf{P}_D^\psi$ denote the law of the GL model on $D$ with boundary condition $\psi$. Explicitly, this is the measure on functions $h \colon D \to \mathbf{R}$ with density
\begin{equation}
\label{gl::eqn::density}
\frac{1}{\mathcal {Z}} \exp\left( - \sum_{b \in D^*} \mathcal {V}( \nabla (h \vee \psi) (b)) \right)
\end{equation}
with respect to Lebesgue measure on $\mathbf{R}^{|D|}$, where
\[ h \vee \psi(x) = \begin{cases} h(x) \text{ if } x \in D,\\ \psi(x) \text{ if } x \in \partial D.\end{cases}\]
If $g \colon D \to \mathbf{R}$, then $\mathbf{Q}_D^{\psi,g}$ is the law of $(h-g)$ where $h$ is distributed according to $\mathbf{P}_D^\psi$. The expectations under $\mathbf{P}_D^\psi$ and $\mathbf{Q}_D^{\psi,g}$ will be denoted by $\mathbf{E}^\psi$ and $\mathbf{E}_\mathbf{Q}^{\psi,g}$, respectively, and we will add an extra subscript if we wish to emphasize the domain. We will omit the superscript $\psi$ if the boundary conditions are clear from the context. Often we will be taking expectations over complicated couplings of multiple instances of the GL model, in which case we will typically just write $\mathbf{E}$ since the explicit construction of the coupling will be clear from the context.
We will use $h$ to refer to a generic instance of the GL model and $h_t$ its Langevin dynamics, where the domain and boundary conditions will be clear from the context. If we wish to emphasize the boundary condition, we will write $h^\psi$ and $h_t^\psi$ for $h,h_t$, respectively, and to emphasize $D$ we will write $h^D$ and $h_t^D$. Finally, if we wish to emphasize both then we will write $h^{\psi,D}$ and $h_t^{\psi,D}$. We will often condition on events of the form $\mathcal {K} = \cap_{x \in D} \{ a(x) \leq h(x) \leq b(x)\}$ where $a,b$ arise as in $(C)$. Notationally such conditioning will be expressed in two different ways. The first possibility is that we will indicate in advance that an instance of the GL model $h$ will always be conditioned on $\mathcal {K}$ and then make no further indication of it, in which case $h_t$ refers to the conditioned dynamics. If either we need to emphasize the conditioning or $h$ refers to an unconditioned model, we will write $h|\mathcal {K}$ for the conditioned model, $(h|\mathcal {K})_t$ for its dynamics, $h^\psi|\mathcal {K}$ to emphasize the boundary condition, and $h^D|\mathcal {K}$ to emphasize the domain.
The proofs in this article will involve many complicated estimates involving numerous constants. In order to keep the arguments succinct, we will make rather frequent use of $O$-notation. Specifically, we say that $f = O(g)$ if there exist constants $c_1,c_2 > 0$ such that $|f(x)| \leq c_1 + c_2|g(x)|$ for all $x$. If we write $f = O_\alpha(g)$ for a parameter or possibly family of parameters $\alpha$, then $c_1, c_2$ depend only on $\alpha$. Finally, if $X$ and $Y$ are random variables, then $X = O(Y)$ means that $|X| \leq c_1 + c_2|Y|$ for \emph{non-random} constants $c_1, c_2$.
\section{Conditioned Dynamics}
\label{sec::cond_dynam}
Suppose that $D \subseteq \mathbf{Z}^2$ with ${\rm diam}(D) < \infty$, $\psi \colon \partial D \to \mathbf{R}$, and $a,b \colon D \to [-\infty,\infty]$ satisfy $a\leq b$. The Langevin dynamics associated with $h \sim \mathbf{P}_D^\psi[ \cdot | \mathcal {K}]$ where $\mathcal {K} = \cap_{x \in D} \{ a(x) \leq h(x) \leq b(x)\}$ are described by the SDS
\begin{align}
\label{gl::eqn::cond_dynam}
dh_t(x) = \sum_{b \ni x} \mathcal {V}'(\nabla (h_t \vee \psi)(b))dt& + d[\ell_t^a - \ell_t^b](x) + \sqrt{2}dW_t(x),\\
&x \in D, t \in \mathbf{R} \notag.
\end{align}
Here, $W$ is a family of independent, standard, two-sided Brownian motions and the processes $\ell^a,\ell^b$ are of bounded variation, non-decreasing, and non-zero only when $h_t(x) = a(x)$ or $h_t(x) = b(x)$, respectively. If $a(x) = -\infty$, then $\ell^a(x) \equiv 0$ and if $b(x) = \infty$, then $\ell^b(x) \equiv 0$. In particular, if $a \equiv -\infty$ and $b \equiv \infty$, then we just recover the Langevin dynamics of $\mathbf{P}_D^\psi$; see \eqref{gl::eqn::dynam} of \cite{M10}.
\subsection{Brascamp-Lieb and FKG inequalities}
In subsection \ref{subsec::hs_representation} of \cite{M10} we collected a few of the basic properties of the HS representation for the GL model without conditioning. The HS representation is in fact applicable in much greater generality. We will summarize the part developed in Remark 2.3 of \cite{DGI00} that is relevant for our purposes. Suppose that $\mathcal {U}_x$ is a family of $C^2$ functions indexed by $x \in D$ satisfying $0 \leq \mathcal {U}_x'' \leq \alpha$. The law of the GL model with potential $\mathcal {V}$ and self-potentials $\mathcal {U}_x$ is given by the density
\begin{align*}
\frac{1}{\mathcal {Z}_{\mathcal {V},\mathcal {U}}} \exp\left( -\sum_{b \in D^*} \mathcal {V}( \nabla (h \vee \psi)(b)) - \sum_{x \in D} \mathcal {U}_x(h(x)) \right)
\end{align*}
with respect to Lebesgue measure. The associated Langevin dynamics are described by the SDS:
\begin{align*}
d h_t^{\mathcal {U}}(x) = \left[ \sum_{b \ni x} \mathcal {V}'(\nabla h_t^{\mathcal {U}} \vee \psi(b)) + \mathcal {U}_x'(h_t^{\mathcal {U}}(x))\right]dt + \sqrt{2} dW_t(x).
\end{align*}
Letting $X_t^\mathcal {U}$ be the random walk with time-dependent jump rates $\mathcal {V}''(\nabla h_t^\mathcal {U}(b))$, the covariance is given by:
\begin{align}
\label{gl::eqn::hs_self_potential}
&{\rm Cov}(h^\mathcal {U}(x), h^\mathcal {U}(y))\\
=& \mathbf{E}_x^\mathcal {U} \left[ \int_0^\tau \exp\left( -\int_0^s \mathcal {U}_{X_u^\mathcal {U}}''(h_u^\mathcal {U}(X_u^\mathcal {U})) du \right) \mathbf{1}_{\{ X_s^\mathcal {U} = y\}} ds \right], \notag
\end{align}
where the subscript $x$ indicates that $X_0^\mathcal {U} = x$.
Recall that the DGFF $h^*$ on $D$ is the random field with density as in \eqref{gl::eqn::density} in the special case $\mathcal {V}(x) = \tfrac{1}{2} x^2$. From \eqref{gl::eqn::hs_self_potential}, we immediately obtain the following comparison inequality which bounds from above centered moments of linear functionals of the \emph{conditioned} GL model by the corresponding moments of the \emph{unconditioned} DGFF. Specifically, for $\nu,\mu \in \mathbf{R}^{|D|}$, letting
\[ \langle \mu, \nu \rangle = \sum_{x \in D} \mu_x \nu_x,\] we have:
\begin{lemma}[Brascamp-Lieb inequalities]
\label{bl::lem::bl_inequalities}
Suppose that $h^*$ is a zero-boundary DGFF on $D$ and $h \sim \mathbf{P}_D^\psi[\cdot|\mathcal {K}]$. There exists $C > 0$ depending only on $a_\mathcal {V},A_\mathcal {V}$ such that the following inequalities hold:
\begin{align}
& {\rm Var}(\langle \nu, h \rangle) \leq C {\rm Var}( \langle \nu, h^* \rangle ) \label{gl::eqn::bl_var}, \\
& \mathbf{E}[\exp(\langle \nu, h \rangle - \mathbf{E}[\langle \nu,h \rangle])] \leq \mathbf{E}[\exp( C\langle \nu, h^* \rangle )] \label{gl::eqn::bl_exp}
\end{align}
for all $\nu \in \mathbf{R}^{|D|}$.
\end{lemma}
\begin{proof}
For each $-\infty \leq \alpha < \beta \leq \infty$, fix a $C^\infty(\mathbf{R})$ function $f_{\alpha,\beta}$ such that $f_{\alpha,\beta}|_{[\alpha,\beta]} \equiv 0$, $f_{\alpha,\beta}|_{[\alpha,\beta]^c} > 0$, and $0 \leq f_{\alpha,\beta}''(x) \leq 1$ for all $x \in \mathbf{R}$. Let $\mathcal {U}_x^n = n f_{a(x),b(x)}$. If $h_n$ has the law of the GL model with self-potentials $\mathcal {U}_x^n$ it follows from \eqref{gl::eqn::hs_self_potential} that
\[ {\rm Var}(\langle \nu, h_n\rangle) \leq C {\rm Var}( \langle \nu, h^* \rangle)\]
for some $C > 0$ depending only on $a_\mathcal {V},A_\mathcal {V}$.
As $n \to \infty$, $h_n \stackrel{d}{\to} h$, which proves \eqref{gl::eqn::bl_var}. One proves \eqref{gl::eqn::bl_exp} using a similar method; see also Corollary 2.7 from \cite{DGI00}.
\end{proof}
More generally, if $F, G \colon \mathbf{R}^{|D|} \to \mathbf{R}$ are smooth, then \eqref{gl::eqn::hs_self_potential} becomes
\begin{align}
\label{gl::eqn::hs_self_potential_local_cov}
&{\rm Cov}(F(h),G(h)) = \\
\sum_{x \in D} \mathbf{E}_x\bigg[ \partial F(X_0^\mathcal {U}&, h_0^\mathcal {U}) \int_0^\tau \exp\bigg( - \int_0^s \mathcal {U}_{X_u^\mathcal {U}}''(h_u^\mathcal {U}(X_u^\mathcal {U})) du \bigg) \partial G(X_s^\mathcal {U}, h_s^\mathcal {U}) ds \bigg] \notag
\end{align}
where $\partial F(x,h) = \frac{\partial F}{\partial h(x)}(h)$. This leads to a simple proof of the FKG inequality, which gives that monotonic functionals of the field are non-negatively correlated:
\begin{lemma}[FKG inequality]
\label{gl::lem::fkg}
Suppose that $F,G \colon \mathbf{R}^{|D|} \to \mathbf{R}$ are smooth monotonic functionals, i.e. if $\varphi_1,\varphi_2 \in \mathbf{R}^{|D|}$ with $\varphi_1(x) \leq \varphi_2(x)$ for every $x \in D$ then $F(\varphi_1) \leq F(\varphi_2)$ and $G(\varphi_1) \leq G(\varphi_2)$. For $h \sim \mathbf{P}_D^\psi[\cdot|\mathcal {K}]$, we have
\[ \mathbf{E}[F(h) G(h)] \geq \mathbf{E}[F(h)] \mathbf{E}[G(h)].\]
\end{lemma}
\begin{proof}
Since $F$ and $G$ are monotonic, $\partial F \geq 0$ and $\partial G \geq 0$, so the integrand in \eqref{gl::eqn::hs_self_potential_local_cov} is non-negative. The conditioning can be dealt with by the same approximation argument as in the previous lemma; see also Remark 2.4 of \cite{DGI00}.
\end{proof}
\subsection{Dynamic Coupling}
The method of dynamic coupling, which was introduced in \cite{FS97} and played a critical role in \cite{M10}, also generalizes in the presence of conditioning. Specifically, suppose that $h_t^{\psi}, h_t^{\widetilde{\psi}}$ both solve \eqref{gl::eqn::cond_dynam} with the same Brownian motions but possibly different boundary conditions $\psi, \widetilde{\psi}$. Then $\overline{h}_t(x) = h_t^\psi(x) - h_t^{\widetilde{\psi}}(x)$ solves the SDE
\begin{equation}
\label{gl::eqn::cond_dynam_diff_long} d\overline{h}_t(x) = \sum_{b \ni x} [\mathcal {V}'(\nabla h_t^{\psi}(b)) - \mathcal {V}'(\nabla h_t^{\widetilde{\psi}}(b))]dt + d(\overline{\ell}_t^a - \overline{\ell}_t^b)(x)
\end{equation}
where $\overline{\ell}^a = \ell^{a,\psi} - \ell^{a,\widetilde{\psi}}$ and $\overline{\ell}^b = \ell^{b,\psi} - \ell^{b,\widetilde{\psi}}$. Letting
\begin{equation}
\label{gl::eqn::c_l_def}
c_t(b) = \int_0^1 \mathcal {V}''( \nabla h_t^{\widetilde{\psi}}(b) + s \nabla \overline{h}_t(b)) ds \text{ and }
\mathcal {L}_t f(x) = \sum_{b \ni x} c_t(b) \nabla f(b),
\end{equation}
we can rewrite \eqref{gl::eqn::cond_dynam_diff_long} more concisely as
\begin{equation}
\label{gl::eqn::cond_dynam_diff}
d \overline{h}_t(x) = \mathcal {L}_t \overline{h}_t(x) dt + d(\overline{\ell}_t^a - \overline{\ell}_t^b)(x).
\end{equation}
By a small computational miracle, the following energy inequality holds in the setting of conditioning:
\begin{lemma}[Energy Inequality]
\label{dynam::lem::ee}
Suppose that $(h_t^\psi,h_t^{\widetilde{\psi}})$ satisfy \eqref{gl::eqn::cond_dynam} with the same driving Brownian motions and $\overline{h} = h^\psi - h^{\widetilde{\psi}}$. There exists $C > 0$ depending only on $\mathcal {V}$ such that for every $T > S$ we have
\begin{align}
\label{gl::eqn::ee}
&\sum_{x \in D} |\overline{h}_T(x)|^2 + \int_S^T \sum_{b \in D^*} |\nabla \overline{h}_t(b)|^2 dt \notag\\
\leq& C \left(\sum_{x \in D} |\overline{h}_S(x)|^2 + \int_S^T \sum_{b \in \partial D^*} |\overline{\psi}(x_b)||\nabla \overline{h}_t(b)|dt\right).
\end{align}
\end{lemma}
\begin{proof}
This is a generalization of Lemma 2.3 of \cite{FS97}.
From \eqref{gl::eqn::cond_dynam_diff} we have
\[ d (\overline{h}_t(x))^2
= 2 \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x) dt + 2\overline{h}_t(x) d[\overline{\ell}_t^a - \overline{\ell}_t^b](x).
\]
We are now going to prove
\[ d (\overline{h}_t(x))^2
\leq 2 \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x) dt,
\]
from which the result follows by summing by parts and then integrating from $S$ to $T$.
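The summation-by-parts identity behind this step reads as follows (a sketch: here $\overline{\psi} = \psi - \widetilde{\psi}$, $x_b$ denotes the endpoint of the bond $b$ lying in $\partial D$, and the sign of the boundary term depends on the orientation convention for $b \in \partial D^*$):
\begin{align*}
2\sum_{x \in D} \overline{h}_t(x) \mathcal {L}_t \overline{h}_t(x)
= -2\sum_{b \in D^*} c_t(b) |\nabla \overline{h}_t(b)|^2 + 2\sum_{b \in \partial D^*} c_t(b) \overline{\psi}(x_b) \nabla \overline{h}_t(b).
\end{align*}
Combined with the uniform bounds $a_{\mathcal {V}} \leq c_t(b) \leq A_{\mathcal {V}}$, integrating from $S$ to $T$ and absorbing the constants gives \eqref{gl::eqn::ee}.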
Suppose $a(x) > -\infty$. The measure $d\overline{\ell}_t^a(x)$ can only charge times $t$ at which $h_t^\psi(x) = a(x)$ or $h_t^{\widetilde{\psi}}(x) = a(x)$. If $h_t^\psi(x) = h_t^{\widetilde{\psi}}(x) = a(x)$, then obviously $\overline{h}_t(x) d \overline{\ell}_t^a(x) = 0$. If $h_t^{\widetilde{\psi}}(x) = a(x) < h_t^\psi(x)$, then $d\ell_t^{a,\psi}(x) = 0$ while $d \ell_t^{a,\widetilde{\psi}}(x) \geq 0$, hence $\overline{h}_t(x) > 0$ and $d \overline{\ell}_t^a(x) \leq 0$, so that $\overline{h}_t(x) d\overline{\ell}_t^a(x) \leq 0$; the symmetric argument applies when $h_t^{\psi}(x) = a(x) < h_t^{\widetilde{\psi}}(x)$. Therefore $\overline{h}_t(x) d\overline{\ell}_t^a(x) \leq 0$. We can play exactly the same game to prove $-\overline{h}_t(x) d \overline{\ell}_t^b(x) \leq 0$ if $b(x) < \infty$, which proves our claim.
\end{proof}
Suppose that $(h_\infty^\psi,h_\infty^{\widetilde{\psi}})$ is a subsequential limit of $(h_t^\psi,h_t^{\widetilde{\psi}})$ as $t \to \infty$. By dividing both sides of \eqref{gl::eqn::ee} by $T$ and sending $T \to \infty$ we see that $\overline{h}_\infty$ satisfies
\begin{equation}
\label{gl::eqn::ee_limit}
\sum_{b \in D^*} \mathbf{E} |\nabla \overline{h}_\infty(b)|^2 \leq C\sum_{b \in \partial D^*} \mathbf{E} |\overline{\psi}(x_b)||\nabla \overline{h}_\infty(b)|.
\end{equation}
\begin{lemma}$\ $
\label{gl::lem::ergodic}
\begin{enumerate}
\item The SDS \eqref{gl::eqn::cond_dynam} is ergodic.
\item \label{gl::lem::ergodic::stationary} More generally, any finite collection $h^1,\ldots,h^n$ satisfying the SDS \eqref{gl::eqn::cond_dynam} each with the same conditioning and driven by the same family of Brownian motions is ergodic.
\item If $(h^1,\ldots,h^n)$ is distributed according to the unique stationary distribution from part \eqref{gl::lem::ergodic::stationary}, then $\overline{h}^{ij} = h^i - h^j$ satisfies \eqref{gl::eqn::ee_limit}.
\end{enumerate}
\end{lemma}
\begin{proof}
Lemma \ref{gl::lem::ergodic} of \cite{M10} contains the same statement but for the unconditioned dynamics. The proof, however, relies only on the energy inequality, hence is also valid here.
\end{proof}
\noindent We shall refer to the coupling $(h^1,\ldots,h^n)$ provided by part \eqref{gl::lem::ergodic::stationary} of the previous lemma as the \emph{stationary coupling} of the laws of $h^1,\ldots,h^n$.
We now need an analog of Lemma \ref{gl::lem::grad_error} from \cite{M10}. In the setting of that article, this followed by combining the Caccioppoli inequality with the Brascamp-Lieb inequalities. While we do have the latter even in the presence of conditioning, we do not have the former. Luckily, we are able to deduce the same result using only the energy inequality and an iterative technique. For $E \subseteq D$ we let $E(s) = \{ x \in E : {\rm dist}(x, \partial E) \geq s\}$.
\begin{lemma}
\label{gl::lem::grad_error}
Suppose $D \subseteq \mathbf{Z}^2$ with $R = {\rm diam}(D) < \infty$, let $\psi,\widetilde{\psi} \colon \partial D \to \mathbf{R}$, and let $E \subseteq D$ with $r = {\rm diam}(E)$. Assume that $(h_t^\psi,h_t^{\widetilde{\psi}})$ is a stationary coupling of two solutions of the SDS \eqref{gl::eqn::cond_dynam} with the same conditioning. Let
\[ M = \max_{x \in E} \big[ \mathbf{E}[(h^\psi)^2(x)] + \mathbf{E}[(h^{\widetilde{\psi}})^2(x)] \big].\]
For every $\epsilon > 0$ there exist a constant $k = k(\epsilon)$ and $kr^{1-\epsilon} \leq r_\epsilon \leq (k+1) r^{1-\epsilon}$ such that
\begin{equation}
\label{gl::eqn::dirichlet_bound}
\sum_{b \in E^*(r_\epsilon)} \mathbf{E} |\nabla \overline{h}(b)|^2 = O(r^{3\epsilon} M).
\end{equation}
\end{lemma}
\begin{proof}
Equation \eqref{gl::eqn::ee_limit} used in conjunction with the Cauchy-Schwarz inequality implies that for any subdomain $E_1 \subseteq D$,
\[ \sum_{b \in E_1^*} \mathbf{E} |\nabla \overline{h}(b)|^2 = O(|\partial E_1| M).\]
Since
\[ \sum_{s=1}^{r^{1-\epsilon}} |\partial E(s)| \leq |E| = O(r^2) \text{ and }
\sum_{s=1}^{r^{1-\epsilon}} \sum_{b \in \partial E^*(s)} \mathbf{E} | \nabla \overline{h}(b)|^2 = O(r^{2} M),\]
it follows that there exists $0 \leq r_1 \leq r^{1-\epsilon}$ with $|\partial E(r_1)| = O(r^{1+\epsilon})$ such that
\[ \sum_{b \in \partial E^*(r_1)} \mathbf{E} |\nabla \overline{h}(b)|^2 = O(r^{1+\epsilon} M).\]
Inserting these bounds back into \eqref{gl::eqn::ee_limit} and applying the Cauchy-Schwarz inequality, we see that
\begin{align*}
\sum_{b \in E^*(r_1)} \mathbf{E}| \nabla \overline{h}(b)|^2 \leq&
C\left(|\partial E^*(r_1)| \max_{b \in \partial E^*(r_1)} \mathbf{E} |\overline{h}(x_b)|^2 \right)^{1/2}\left( \sum_{b \in \partial E^*(r_1)} \mathbf{E} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
=& O(\sqrt{ r^{1+\epsilon} M}) O(\sqrt{r^{1+\epsilon} M}) =
O(r^{1+\epsilon} M).
\end{align*}
By the same averaging argument, this in turn implies there exists $r^{1-\epsilon} \leq r_2 \leq 2 r^{1-\epsilon}$ such that
\[ \sum_{b \in \partial E^*(r_2)} \mathbf{E} |\nabla \overline{h}(b)|^2 = O(r^{2\epsilon} M)\]
and $|\partial E^*(r_2)| = O(r^{1+\epsilon})$. Combining \eqref{gl::eqn::ee_limit} with the Cauchy-Schwarz inequality again yields
\begin{align*}
\sum_{b \in E^*(r_2)} \mathbf{E} |\nabla \overline{h}(b)|^2
&\leq C \left(|\partial E^*(r_2)| \max_{b \in \partial E^*(r_2)} \mathbf{E} |\overline{h}(x_b)|^2\right)^{1/2} \left(\sum_{b \in \partial E^*(r_2)} \mathbf{E} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
&= O(\sqrt{r^{1+\epsilon} M}) O(\sqrt{r^{2 \epsilon} M}) = O(r^{1/2+3\epsilon/2} M).
\end{align*}
Iterating this $k$ times yields the existence of $(k-1) r^{1-\epsilon} \leq r_k \leq k r^{1-\epsilon}$ such that $|\partial E^*(r_k)| = O(r^{1+\epsilon})$ and
\begin{align*}
\sum_{b \in E^*(r_k)} \mathbf{E} |\nabla \overline{h}(b)|^2
&\leq C \left(|\partial E^*(r_k)| \max_{b \in \partial E^*(r_k)} \mathbf{E} |\overline{h}(x_b)|^2\right)^{1/2} \left(\sum_{b \in \partial E^*(r_k)} \mathbf{E} |\nabla \overline{h}(b)|^2\right)^{1/2}\\
&= O(r^{2^{-k} + \alpha_k \epsilon} M),
\end{align*}
where $\alpha_k = \sum_{j=0}^k 2^{-j} \leq 2$. Taking $k$ large enough gives \eqref{gl::eqn::dirichlet_bound}.
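To make the exponent bookkeeping explicit (a sketch, ignoring constant factors): write the bulk bound after the $k$th iteration as $O(r^{e_k} M)$, so that $e_1 = 1+\epsilon$. Averaging over the $r^{1-\epsilon}$ shells lowers the exponent of the boundary sum to $e_k - (1-\epsilon)$, and the Cauchy-Schwarz step then gives
\[ e_{k+1} = \frac{1+\epsilon}{2} + \frac{e_k - (1-\epsilon)}{2} = \frac{e_k}{2} + \epsilon, \quad \text{hence} \quad e_k = 2\epsilon + (1-\epsilon)2^{1-k}.\]
In particular, any $k = k(\epsilon)$ with $2^{1-k} \leq \epsilon$ gives a bound of $O(r^{3\epsilon} M)$, as in \eqref{gl::eqn::dirichlet_bound}.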
\end{proof}
Lemma \ref{gl::lem::grad_error} will be used in conjunction with Lemma \ref{te::lem::expectation}, which provides bounds for $M$.
\subsection{The Random Walk Representation and Stochastic Domination}
\label{subsec::rw_difference}
The energy method of the previous subsection allowed us to deduce macroscopic regularity and ergodicity properties of the dynamic coupling. In this subsection, we will develop the random-walk representation of $\overline{h}_t(x)$, which allows for pointwise estimates.
Fix $T > 0$ and let $X_t^T$ be the random walk in $D$ with time-dependent generator $t \mapsto \mathcal {L}_{T-t}$ with $\mathcal {L}_t$ as in \eqref{gl::eqn::c_l_def}. Note that $c_t(b)$ makes sense for $t < 0$, hence $\mathcal {L}_{T-t}$ makes sense for $t > T$, since \eqref{gl::eqn::cond_dynam} is defined for all $t \in \mathbf{R}$. Let $\tau = \inf\{t \geq 0 : X_t^T \notin D\}$.
\begin{remark}
\label{gl::rem::stoc_rep}
In the special case $a(x) = -\infty$ and $b(x) = \infty$, so that the fields are unconditioned, the stationary coupling $(h_t^\psi, h_t^{\widetilde{\psi}})$ satisfies
\begin{equation}
\label{gl::eqn::rand_walk_diff}
\overline{h}_T(x) = \mathbf{E}_x[\overline{h}_{T-\tau}(X_{\tau}^T)],
\end{equation}
where the expectation is taken only over the randomness of $X^T$. Consequently, if $\overline{h}|_{\partial D} \geq 0$, then $\overline{h}_T \geq 0$. In other words, the stationary coupling $(h_t^\psi,h_t^{\widetilde{\psi}})$ satisfies $h_t^\psi \geq h_t^{\widetilde{\psi}}$ provided the inequality holds uniformly on the boundary.
\end{remark}
The purpose of the following lemma is to establish the same result in the presence of conditioning.
\begin{lemma}
\label{gl::lem::stoch_dom}
If $D \subseteq \mathbf{Z}^2$ is bounded and $\psi,\widetilde{\psi} \colon \partial D \to \mathbf{R}$ satisfy $\psi(x) \geq \widetilde{\psi}(x)$ for every $x \in \partial D$, then the stationary coupling $(h_t^\psi,h_t^{\widetilde{\psi}})$ of $\mathbf{P}_D^\psi[\cdot|\mathcal {K}], \mathbf{P}_D^{\widetilde{\psi}}[\cdot|\mathcal {K}]$ satisfies $h_t^\psi(x) \geq h_t^{\widetilde{\psi}}(x)$ for every $x \in D$.
\end{lemma}
\begin{proof}
The proof is similar to that of \cite[Lemma 2.4]{DN07}, which gives a stochastic domination result in a slightly different context. For $\alpha \in \mathbf{R}$, we let $\alpha^- = \min(\alpha,0)$. Note that
\begin{align*}
d((\overline{h}_t)^-(x))^2
=& 2 (\overline{h}_t)^-(x) \mathcal {L}_t \overline{h}_t(x) dt + 2(\overline{h}_t)^-(x) d[\overline{\ell}_t^a - \overline{\ell}_t^b](x)\\
\leq& 2(\overline{h}_t)^-(x) \mathcal {L}_t \overline{h}_t(x) dt.
\end{align*}
The last inequality used that $(\overline{h}_t)^-(x) d[\overline{\ell}_t^a - \overline{\ell}_t^b](x) \leq 0$, as in the proof of the energy inequality. Thus,
\begin{align}
& d\left(\sum_{x \in D} ((\overline{h}_t)^-(x))^2 \right)
\leq 2\sum_{x \in D} (\overline{h}_t)^-(x) \mathcal {L}_t \overline{h}_t(x) dt \notag\\
=& -2\sum_{b \in D^*} c_t(b) \nabla (\overline{h}_t)^-(b) \nabla \overline{h}_t(b) dt, \label{gl::eqn::square_neg}
\end{align}
where in the last step we used summation by parts and that $(\overline{h}_t)^-|_{\partial D} \equiv 0$. Now using $(\alpha^- - \beta^-)(\alpha-\beta) \geq (\alpha^- - \beta^-)^2$, we see that the previous expression is bounded from above by
\begin{equation}
\label{gl::eqn::square_neg2}
-2 \sum_{b \in D^*} a_\mathcal {V} [\nabla (\overline{h}_t)^-(b)]^2 dt.
\end{equation}
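The elementary inequality $(\alpha^- - \beta^-)(\alpha-\beta) \geq (\alpha^- - \beta^-)^2$ used above can be verified directly: writing $\alpha^+ = \max(\alpha,0)$, so that $\alpha = \alpha^+ + \alpha^-$, we have
\[ (\alpha^- - \beta^-)(\alpha-\beta) - (\alpha^- - \beta^-)^2 = (\alpha^- - \beta^-)(\alpha^+ - \beta^+) \geq 0,\]
since $\alpha \mapsto \alpha^-$ and $\alpha \mapsto \alpha^+$ are both non-decreasing, so the two factors always have the same sign.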
This implies that $t \mapsto \sum_{x \in D} ((\overline{h}_t)^-(x))^2$ is non-increasing, hence $S = \lim_{t \to \infty} \sum_{x \in D} ((\overline{h}_t)^-(x))^2$ exists. By the Poincar\'e inequality,
\[ \liminf_{t \to \infty} \sum_{b \in D^*} [\nabla (\overline{h}_t)^-(b)]^2 \geq c_D S,\]
for $c_D > 0$ depending only on $D$. Combining this with \eqref{gl::eqn::square_neg} and \eqref{gl::eqn::square_neg2} implies $S = 0$.
\end{proof}
\begin{remark}
In the setting of Remark \ref{gl::rem::stoc_rep}, combining \eqref{gl::eqn::rand_walk_diff} with Jensen's inequality yields in the unconditioned case that
\[ \overline{h}_T^2(x) \leq \mathbf{E}_x [\overline{h}_{T-\tau}^2(X_{\tau}^T)],\]
where the expectation is just over the randomness in $X^T$.
\end{remark}
The same result also holds in the conditioned case, though we have to work ever so slightly harder to prove it.
\begin{lemma}
\label{gl::lem::square_bound}
Assume the same setup as in Lemma \ref{gl::lem::stoch_dom}. Then
\begin{equation}
\label{gl::eqn::rand_walk_diff_square}
\overline{h}_T^2(x) \leq \mathbf{E}_x[\overline{h}_{T-\tau}^2(X_{\tau}^T)].
\end{equation}
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{dynam::lem::ee},
\[ d \overline{h}_{T-t}^2(x) = 2 \overline{h}_{T-t}(x) d (\overline{h}_{T-t}(x)) \geq -2\overline{h}_{T-t}(x) \mathcal {L}_{T-t} \overline{h}_{T-t}(x) dt.\]
Thus as
\[ \mathcal {L}_{T-t} \overline{h}_{T-t}^2(x) = 2 \overline{h}_{T-t}(x) (\mathcal {L}_{T-t} \overline{h}_{T-t})(x) + \sum_{b \ni x} c_{T-t}(b) (\nabla \overline{h}_{T-t}(b))^2\]
and, in particular,
\[ \mathcal {L}_{T-t} \overline{h}_{T-t}^2(x) \geq 2 \overline{h}_{T-t}(x) (\mathcal {L}_{T-t} \overline{h}_{T-t})(x)\]
we consequently have
\begin{equation}
\label{gl::eqn::h_square_bound}
d \overline{h}_{T-t}^2(x) + \mathcal {L}_{T-t} \overline{h}_{T-t}^2(x) dt \geq 0.
\end{equation}
If $g \colon [0,T] \times \mathbf{Z}^2 \to \mathbf{R}$ is $C^1$ in its first variable with $\| \partial_s g\|_\infty < \infty$, then with
\[ M_t(g) = g(t,X_{t}^T) - g(0,x) - \int_0^t \left[ (\partial_s g)(s,X_{s}^T) + \mathcal {L}_{T-s} g(s,X_{s}^T) \right] ds,\ \ t \in [0,T],\]
we see that $M_{t \wedge \tau}(g)$ is a bounded martingale with respect to the filtration $(\mathcal {F}_t)$, $\mathcal {F}_t = \sigma( X_s^T : s \leq t)$. Let $(\tau_k)$ be the jump times of $X^T$. Then $M_t(g)$ can also be expressed as
\begin{align*}
M_t(g) =& g(t,X_{t}^T) - g(0,x) - \sum_{k} [g(\tau_{k+1} \wedge t,X_{\tau_k \wedge t}^T) - g(\tau_k \wedge t,X_{\tau_k \wedge t}^T)]\\
&- \int_0^t \mathcal {L}_{T-s} g(s,X_{s}^T) ds.
\end{align*}
This representation allows us to make sense of $g \mapsto M_t(g)$ for $g$ which are not necessarily differentiable in time. Letting $N(T) = \sup\{ k : \tau_k \leq T\}$, we have
\[ \|M_{\cdot \wedge \tau}(g)\|_\infty \leq C(T+1+N(T \wedge \tau))\| g \|_\infty\]
for some $C > 0$, where the supremum on the left hand side is taken over $[0,T]$ and on the right over $[0,T] \times D$. Observe that $N(T \wedge \tau)$ has finite moments of all orders, uniformly bounded in $T$, since the jump rates of $X^T$ are bounded from below and $D$ is bounded. Consequently, taking a sequence $(g_n)$ with $g_n \colon [0,T] \times \mathbf{Z}^2 \to \mathbf{R}$ which is $C^1$ in the first variable and such that $g_n(\cdot,x) \to \overline{h}_{T-\cdot}^2(x)$ uniformly in $t$, we see that $M_{t \wedge \tau}(\overline{h}_{T-\cdot}^2)$ is also an $(\mathcal {F}_t)$-martingale. Note that
\[ M_t(\overline{h}_{T-\cdot}^2) = \overline{h}_{T-t}^2(X_t^T) - \overline{h}_T^2(x) - \int_0^t d \overline{h}_{T-s}^2(X_s^T) - \int_0^t \mathcal {L}_{T-s} \overline{h}_{T-s}^2(X_{s}^T) ds.\]
Combining this with \eqref{gl::eqn::h_square_bound} implies
\[ \overline{h}_{T}^2(x) \leq \overline{h}_{T- \tau}^2(X_{\tau}^T) - M_{\tau}(\overline{h}_{T-\cdot}^2).\]
Taking expectations of both sides, using the uniform integrability of the martingale $M_{t \wedge \tau}$, and invoking the optional stopping theorem proves the lemma.
\end{proof}
\section{Moment Estimates}
\label{sec::expectation}
It will be rather important for us to have control on the exponential moments of $h$ conditional on $\mathcal {K} = \cap_{x \in D} \{ a(x) \leq h(x) \leq b(x)\}$ near $U$. Such an estimate does not follow from the exponential Brascamp-Lieb inequality, since the latter only bounds the \emph{centered exponential moment} in terms of the corresponding moment for the \emph{unconditioned DGFF}, which is of polynomial order in $R = {\rm diam}(D)$ in the bulk. It will also be important for us to know that $h(x) - a(x)$ for $x \in V_+ \cup V$ and $b(x) - h(x)$ for $x \in V_- \cup V$ are uniformly positive in expectation conditional on $\mathcal {K}$.
\begin{lemma}
\label{te::lem::expectation}
Assume $(\partial)$, $(C)$, and fix $\eta \in (0,1/2)$. There exist constants $c_1 = c_1(\eta)$ and $c_2 = c_2(\overline{\Lambda}, \eta)$ such that the following holds. If $v \in D$ and $r > (\log R)^{c_1}$ are such that $B(v,r^{1+3\eta} \wedge R) \cap U$ contains a connected subgraph $U_0$ of $U$ with $U_0 \cap \partial B(v,r^{1+3\eta} \wedge R) \neq \emptyset$ and ${\rm dist}(v,U_0) \leq r$, then
\begin{equation}
\label{te::eqn::exponential_moment}
\mathbf{E}[ \exp(|h(v)|) | \mathcal {K}] \leq c_2 r^{c_2}.
\end{equation}
\end{lemma}
The reason for the hypothesis that there is a large, connected subgraph $U_0$ of $U$ near $v$ is to ensure that symmetric random walk with bounded rates initialized at $v$ is much more likely to hit $U_0$ before exiting a ball of logarithmic size around $v$. Note in particular that this hypothesis trivially holds when $U$ consists of a path in $D$ connected to and along with $\partial D$.
The idea of the proof is to use repeatedly the stochastic domination results of the previous section along with an iterative argument to reduce the problem to a GL model on a domain whose size is polynomial in $r$. Specifically, we assume without loss of generality that $v \in D_W \equiv D \setminus W$, where $W = U \setminus V_+$. By stochastic domination, it suffices to control $\mathbf{E}[\exp(h^{D_W}(v))|\mathcal {K}^{D_W}]$, where $h^{D_W}$ has the law of the GL model on $D_W$ with the same boundary conditions as $h$ on $\partial D$, constant boundary conditions $\overline{\Lambda}$ on $W$, and $\mathcal {K}^{D_W} = \cap_{x \in V_+} \{a(x) \leq h^{D_W}(x)\}$. By hypothesis, there exists $u \in W$ with $|u-v| \leq r+2$, hence the exponential Brascamp-Lieb inequality applied to $h^{D_W}|\mathcal {K}^{D_W}$ implies that the centered exponential moments of $(h^{D_W}|\mathcal {K}^{D_W})(v)$ are polynomial in $r$. This reduces the problem to estimating $\mathbf{E}[h^{D_W}(v) | \mathcal {K}^{D_W}]$. The idea now is to prove an \emph{a priori} estimate of $\mathbf{E}[h^{D_W}(v)|\mathcal {K}^{D_W}]$ using the FKG inequality, then use the method of dynamic coupling repeatedly to construct a comparison between $\mathbf{E}[h^{D_W}(v)|\mathcal {K}^{D_W}]$ and the expected height of a GL model on a ball whose diameter is polynomial in $r$. This completes the proof since our \emph{a priori} estimate implies the latter is $O_{\overline{\Lambda}}(\log r)$.
\begin{proof}
We begin with the observation
\[ \mathbf{E}[ \exp(|h(v)|) | \mathcal {K}] \leq \mathbf{E}[ \exp(h(v)) | \mathcal {K}] + \mathbf{E}[ \exp(-h(v)) | \mathcal {K}].\]
Let $W, D_W, h^{D_W}, \mathcal {K}^{D_W}$ be as in the paragraph after the statement of the lemma. By Lemma \ref{gl::lem::stoch_dom}, there exists a coupling of $h| \mathcal {K}$, $h^{D_W} | \mathcal {K}^{D_W}$ such that $h^{D_W} | \mathcal {K}^{D_W} \geq h | \mathcal {K}$, hence to bound $\mathbf{E}[\exp(h(v)) | \mathcal {K}]$ it suffices to bound $\mathbf{E}[ \exp(h^{D_W}(v))| \mathcal {K}^{D_W}]$.
By the exponential Brascamp-Lieb inequality (Lemma \ref{bl::lem::bl_inequalities}), we have
\begin{align*}
& \mathbf{E}[\exp( h^{D_W}(v)) | \mathcal {K}^{D_W}]\\
\leq& \exp(\mathbf{E}[ h^{D_W}(v) \big| \mathcal {K}^{D_W}])\mathbf{E}[\exp(h^{D_W}(v) - \mathbf{E}[h^{D_W}(v)|\mathcal {K}^{D_W}]) | \mathcal {K}^{D_W}]\\
\leq& \exp(\mathbf{E}[h^{D_W}(v) \big| \mathcal {K}^{D_W}]) \mathbf{E}[\exp( C (h^{D_W})^*(v))]
\end{align*}
where $(h^{D_W})^*$ has the law of a zero-boundary DGFF on $D_W$ and $C = C(\mathcal {V}) > 0$ is a constant depending only on $\mathcal {V}$. Since ${\rm dist}(v,V_+) \leq r$, there exists $w \in \partial D_W$ such that $|v - w| \leq r+2$ by (C), hence ${\rm Var}((h^{D_W})^*(v)) = O(\log r)$. The reason for this is that a random walk initialized at $v$ has probability $\Omega((\log r)^{-1})$ of hitting $w$, hence $W$, before returning to $v$, after each successive visit to $v$. This, in turn, implies $\mathbf{E}[\exp( C (h^{D_W})^*(v))] \leq C' r^{C'}$ for some $C' > 0$. Consequently, to prove the lemma we just need to bound $\mathbf{E}[| h^{D_W}(v)| \big| \mathcal {K}^{D_W}]$. We will break the proof up into three main steps. The first is to get an \emph{a priori} estimate on the behavior of the maximum, the second is to use a coupling argument to improve the estimate by comparison to a model on a smaller domain, and the third is to show how this coupling argument may be iterated in order to get the final bound.
{\it Step 1.} Let $A = \cup_{y \in D_W} \{h^{D_W}(y) \geq \alpha (\log R)\}.$
The goal of this step is to prove that $\mathbf{P}[ A | \mathcal {K}^{D_W}] = O_{\overline{\Lambda}}(R^{-100})$
provided $\alpha = \alpha(\overline{\Lambda})$ is chosen sufficiently large.
Let $h^{D_W,\gamma}$ have the law of the GL model on $D_W$ with constant boundary conditions $\gamma C^{-1} (\log R)$ where $\gamma = \gamma(\overline{\Lambda})$ is to be chosen later. Let $\mathcal {K}^{D_W,\gamma} = \cap_{x \in D_W} \{ a(x) \leq h^{D_W,\gamma}(x)\}$. Finally, let $A^\gamma$ be the event analogous to $A$ but with $h^{D_W}$ replaced with $h^{D_W,\gamma}$. It suffices to show that $\mathbf{P}[ A^\gamma | \mathcal {K}^{D_W,\gamma}] = O_{\overline{\Lambda}}(R^{-100})$
since by Lemma \ref{gl::lem::stoch_dom} we can couple the laws of $h^{D_W,\gamma}|\mathcal {K}^{D_W,\gamma}$ and $h^{D_W}|\mathcal {K}^{D_W}$ such that $h^{D_W,\gamma}|\mathcal {K}^{D_W,\gamma} \geq h^{D_W}|\mathcal {K}^{D_W}$ almost surely. We will first prove that we can pick $\alpha = \alpha(\overline{\Lambda})$ large enough so our claim holds without conditioning:
\begin{equation}
\label{te::eqn::abound} \mathbf{P}[A^\gamma] \leq O_{\overline{\Lambda}}(R^{-100}),
\end{equation}
then show that $\mathbf{P}[\mathcal {K}^{D_W,\gamma}] = 1-o(1)$ for $\gamma$ large enough.
By the exponential Brascamp-Lieb and Chebyshev inequalities, for some $C > 0$ we have
\begin{align*}
\mathbf{P}[h^{D_W,\gamma}(y) > \beta C^{-1} (\log R)]
\leq& \exp( -\beta (\log R)) \mathbf{E}[\exp(Ch^{D_W,\gamma}(y))]\\
\leq& \exp((O_{\overline{\Lambda}}(1)-\beta)(\log R)).
\end{align*}
Here, we are using that ${\rm Var}(h^{D_W,\gamma}(y)) = O(\log R)$ and $\mathbf{E}[h^{D_W,\gamma}(y)] = O_{\overline{\Lambda}}(\log R)$. The latter can be seen, for example, using Lemma \ref{gl::lem::hs_mean_cov} of \cite{M10}, the HS representation of the mean. Choosing $\beta = \beta(\gamma) > 0$ large enough along with a union bound now gives \eqref{te::eqn::abound}.
With $h^{D_W,0}$ the zero-boundary GL model on $D_W$, a similar argument with the Brascamp-Lieb and Chebyshev inequalities yields
\begin{align*}
\mathbf{P}[ h^{D_W,0}(y) \leq \gamma C^{-1} (\log R) - \overline{\Lambda}] = 1-O_{\overline{\Lambda}}(R^{-5})
\end{align*}
provided we choose $\gamma = \gamma(\overline{\Lambda})$ large enough.
Note that for $y \in D_W$, the symmetry of the law of $h^{D_W,0}$ about zero gives us
\begin{align*}
&\mathbf{P}[h^{D_W,\gamma}(y) > a(y)]
\geq \mathbf{P}[ h^{D_W,0}(y) \geq \overline{\Lambda} - \gamma C^{-1} (\log R)]\\
=& \mathbf{P}[ h^{D_W,0}(y) \leq \gamma C^{-1} (\log R) - \overline{\Lambda}]
\geq 1 - O_{\overline{\Lambda}}(R^{-5}).
\end{align*}
Invoking the FKG inequality yields
\begin{align*}
\mathbf{P}[ \mathcal {K}^{D_W,\gamma}]
&\geq \prod_{y \in V_+} \mathbf{P}[ h^{D_W,\gamma}(y) \geq a(y)]
\geq (1 - O_{\overline{\Lambda}}(R^{-5}))^{R^2}
\geq 1 - O_{\overline{\Lambda}}(R^{-1}).
\end{align*}
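The last estimate is an instance of Bernoulli's inequality, $(1-x)^n \geq 1 - nx$ for $x \in [0,1]$ and $n \geq 1$, applied with $x = O_{\overline{\Lambda}}(R^{-5})$ and $n = |V_+| = O(R^2)$; it in fact gives the stronger bound $1 - O_{\overline{\Lambda}}(R^{-3})$, of which $1 - O_{\overline{\Lambda}}(R^{-1})$ is a weaker consequence that suffices here.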
Therefore $\mathbf{P}[ A^\gamma | \mathcal {K}^{D_W,\gamma}] \leq \mathbf{P}[A^\gamma] / \mathbf{P}[\mathcal {K}^{D_W,\gamma}] = O_{\overline{\Lambda}}(R^{-100})$, as desired.\newline
{\it Step 2.} We next claim that
\[ |\mathbf{E}[h^{D_W}(v) | \mathcal {K}^{D_W}]| = O_{\overline{\Lambda}}(\log r + \log \log R).\]
If $r \geq R^{1/3}$, then this is immediate from the previous part, so assume that $r < R^{1/3}$.
By the definition of $A$,
\begin{align*}
&\mathbf{E}[\max_{y \in D_W} \big| h^{D_W}(y)|^p \big| \mathcal {K}^{D_W}]
\leq O_{\overline{\Lambda}}((\log R)^p) + \sum_{y \in D_W} \mathbf{E}[|h^{D_W}(y)|^p \mathbf{1}_{A}| \mathcal {K}^{D_W}].
\end{align*}
Using that ${\rm Var}[ h^{D_W}(y) | \mathcal {K}^{D_W}] = O(\log R)$, the Brascamp-Lieb and Cauchy-Schwarz inequalities yield
\begin{align*}
&\mathbf{E}[| h^{D_W}(y)|^p \mathbf{1}_{A} \big|\mathcal {K}^{D_W}]\\
\leq &\bigg( \mathbf{E}[ (h^{D_W}(y) - \mathbf{E}[|h^{D_W}(y)| \big| \mathcal {K}^{D_W}])^{2p} | \mathcal {K}^{D_W}] + (\mathbf{E}[|h^{D_W}(y)| \big| \mathcal {K}^{D_W}])^{2p} \bigg)^{1/2} O_{\overline{\Lambda}}(R^{-50})\\
\leq &O_{\overline{\Lambda}}(R^{-20}) + \mathbf{E}[\max_{y \in D_W}|h^{D_W}(y)|^p \big| \mathcal {K}^{D_W}] O_{\overline{\Lambda}}(R^{-20}).
\end{align*}
Inserting this into the previous equation and rearranging leads to the bound
\begin{align}
\label{te::eqn::max_bound}
\mathbf{E}[\max_{y \in D_W} |h^{D_W}(y)|^p \big|\mathcal {K}^{D_W}]
= O_{\overline{\Lambda}}((\log R)^p).
\end{align}
Let $\delta > 1$; we will determine its precise value shortly. Let $B_\delta^W = B(v,r^\delta) \cap D_W$, $\zeta = h^{D_W} |_{\partial B_\delta^W}$, and let $h^{\zeta}$ have the law of the GL model on $B_\delta^W$ with boundary condition $\zeta$ and with conditioning $a(x) \leq h^{\zeta}(x) \leq b(x)$. Let $h^{\zeta,0}$ have the law of the GL model on $B_\delta^W$ with the same boundary conditions as $h^{\zeta}$ on $(\partial B_\delta^W) \cap W$ and with zero boundary conditions on $(\partial B_\delta^W) \setminus W$. Finally, let $(h_t^{\zeta}, h_t^{\zeta,0})$ be the stationary coupling of the corresponding dynamic models. With $\overline{h}_t = h_t^{\zeta} - h_t^{\zeta,0}$, by Lemma \ref{gl::lem::square_bound} we have $\overline{h}_0^2(z) \leq \mathbf{E}_z[\overline{h}_{-\tau}^2(X_{\tau})]$,
where the expectation is taken only over the randomness of the Markov process $X = X^0$ as in subsection \ref{subsec::rw_difference} initialized at $z$, and $\tau$ is its time of first exit from $B_\delta^W$. It follows from Lemma \ref{symm_rw::lem::beurling} of \cite{M10} that if $z \in B(v,r^{1+\eta \delta})$, then the probability that $X_t$ reaches $\partial B(v,r^{\delta})$ before hitting $(\partial B_\delta^W) \cap W$ is $O(r^{-\rho_{\rm B}(\delta(1-\eta) - 1)})$ for some $\rho_{\rm B} > 0$ depending only on $\mathcal {V}$. Combining this with \eqref{te::eqn::max_bound} implies $\mathbf{E}[|\overline{h}_0(z)|^2] = O_{\overline{\Lambda}}(r^{-\rho_{\rm B}(\delta(1-\eta) - 1)}(\log R)^2).$
Hence taking
\[ \gamma_0 = \frac{4}{ \rho_{\rm B} (1-\eta)} \text{ and } \delta = \frac{1}{1-\eta} + \gamma_0 \frac{\log \log R}{\log r},\]
we get that
\begin{equation}
\label{te::eqn::step2_bound}
\big( \mathbf{E}[|\overline{h}_0(z)|^2] \big)^{1/2} = O_{\overline{\Lambda}}((\log R)^{-1}).
\end{equation}
Note that with this choice of $\delta$ we have that $r^{\delta} \leq r^{1+3\eta}$ provided we take $c_1 = c_1(\eta)$ large enough. This implies our claim as Step 1 gives
\[ \mathbf{E}[(h^{\zeta,0})^2(v)] = O_{\overline{\Lambda}}(\log r^\delta) = O_{\overline{\Lambda}}(\log r + \log \log R).\]
{\it Step 3.} In the previous step, we took our initial estimate of $O_{\overline{\Lambda}}(\log R)$ and improved it to $O_{\overline{\Lambda}}(\log r + \log \log R)$ using a coupling argument to reduce the problem to one on a domain of size $r^{\delta} = r^{1/(1-\eta)} (\log R)^{\gamma_0}$. Assume that $(\log R)^{\gamma_0} \geq r^{1/(1-\eta)}$, for otherwise we are already done. Then $r^{\delta} \leq (\log R)^{\gamma}$ for $\gamma = 2\gamma_0$. That is, the new domain produced by one application of Step 2 has diameter which is poly-logarithmic in the diameter of the initial domain. Suppose that $n_0$ is the smallest positive integer such that $\log^{(n_0)}(R) < 100 r$, where $\log^{(n_0)}$ indicates the $\log$ function applied $n_0$ times. It is not difficult to see that if we run the argument of Step 2 successively $n_0$ times, we are left with a domain whose size is polynomial in $r$. Equation \eqref{te::eqn::step2_bound} implies that the total $L^2$ error that we accrue from iterating this procedure is bounded from above by
\[ O_{\overline{\Lambda}} \left( \sum_{m=1}^{n_0} \frac{1}{\exp^{(m)}(c_0)} \right) \leq O_{\overline{\Lambda}} \left( \sum_{m=1}^\infty \frac{1}{\exp^{(m)}(c_0)} \right) < \infty,\]
where $\exp^{(m)}$ denotes the exponential function applied $m$ times and $c_0 = c_0(\eta, \overline{\Lambda}) > 0$ is some fixed constant.
Since the final domain has size polynomial in $r$, the desired result follows by another application of Step 1.
\end{proof}
We are now going to show that the conditional expectation of the height along the interface is uniformly larger than $a(v)$ for $v \in V_+$ and less than $b(v)$ for $v \in V_-$.
\begin{lemma}
\label{te::lem::expectation_pos}
Assume $(\partial), (C)$, and that $r > (\log R)^{2 c_1}$ with $c_1 = c_1(1/4)$ as in Lemma \ref{te::lem::expectation}. Suppose $v \in U$ is such that the connected component of $U \cap B(v,r)$ containing $v$ has non-empty intersection with $\partial B(v,r)$. Then
\begin{align}
\label{te::eqn::abs_upper_lower_bound}
\mathbf{E}[h(v) - a(v) |\mathcal {K}] &\geq \tfrac{1}{c_3} \text{ for } v \in V_+ \cup V,\\
\mathbf{E}[b(v) - h(v)|\mathcal {K}] &\geq \tfrac{1}{c_3} \text{ for } v \in V_- \cup V
\end{align}
for $c_3 > 0$ a universal constant.
\end{lemma}
\begin{proof}
Fix $v \in V_+$ which satisfies the hypotheses of the lemma and let $v_1,\ldots,v_m$ be the neighbors of $v$ in $D$. Let $M > 0$ be some fixed positive constant. By the explicit form of the law of $h$ conditional on $h(v_1),\ldots,h(v_m), \mathcal {K}$, we obviously have that
\[ \mathbf{E}[h(v) - a(v)\big| |h(v_1)|,\ldots,|h(v_m)| \leq M, \mathcal {K}] \geq c(M) > 0.\]
This yields \eqref{te::eqn::abs_upper_lower_bound} since \eqref{te::eqn::exponential_moment} implies
\[ \mathbf{P}[|h(v_1)|,\ldots,|h(v_m)| \leq M | \mathcal {K}] \geq \epsilon_1 > 0.\]
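If, as its label suggests, \eqref{te::eqn::exponential_moment} furnishes a uniform conditional exponential moment bound of the form $\sup_x \mathbf{E}[e^{\lambda |h(x)|} \,|\, \mathcal {K}] \leq C_\lambda < \infty$ for some $\lambda > 0$, then the last display follows from Markov's inequality and a union bound over the $m \leq 4$ neighbors:
\[ \mathbf{P}\big[ \max_{1 \leq i \leq m} |h(v_i)| > M \,\big|\, \mathcal {K} \big] \leq \sum_{i=1}^m e^{-\lambda M} \mathbf{E}[e^{\lambda |h(v_i)|} \,|\, \mathcal {K}] \leq 4 C_\lambda e^{-\lambda M} \leq \tfrac{1}{2}\]
for $M$ large enough depending only on $\lambda$ and $C_\lambda$, so that one may take $\epsilon_1 = \tfrac{1}{2}$.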
\end{proof}
We are now going to prove that the mean height of the field at a point $v$ in $U$ remains uniformly bounded conditional on the boundary data of the field in a large ball around $v$ provided that it is of at most logarithmic height. The proof follows by reusing the coupling and stochastic domination procedure from the proof of Lemma \ref{te::lem::expectation}.
\begin{lemma}
\label{te::lem::local_bound}
Assume ($\partial$) and $(C)$. For each $\alpha > 0$ there exist $p_0 > 0$ and $c_4 = c_4(\overline{\Lambda},\alpha) > 0$ such that the following holds. For each $r \geq 0$ and $v \in U$ such that the connected component of $U \cap B(v,r)$ containing $v$ has non-empty intersection with $\partial B(v,r)$, we have
\begin{equation}
\label{te::eqn::boundary_conditioned_mean}
\mathbf{E}[ |h(v)| \big| \mathcal {K}, h|_{\partial B(v,\widetilde{r})}] \mathbf{1}_{A^c} \leq c_4
\end{equation}
for every $(\log r)^{p_0} \leq \widetilde{r} \leq r$, and $A = \{ \max_{x \in B(v,r)} |h(x)| > \alpha \log r\}$.
\end{lemma}
\begin{proof}
Without loss of generality, it suffices to consider $v \in V_+$ since the argument is symmetric for $v \in V_-$ and is trivial if $v \in V$. We re-apply the idea of Step 2 from the proof of Lemma \ref{te::lem::expectation}. With $W, D_W$ as in the proof of Lemma \ref{te::lem::expectation}, let $\zeta = h|_{\partial B_W}$ where $B_W = B(v,\widetilde{r}) \cap D_W$ and let $h^{\zeta}$ have the law of the GL model on $B_W$ with boundary condition $\zeta$ conditional on $\cap_{x \in B_W} \{a(x) \leq h^{\zeta}(x) \leq b(x)\}$. Let $\partial_1$ denote the part of $\partial B_W$ which does not intersect $W$ and $\partial_2$ the part which is contained in $W$. Assume that $h^{\zeta,0}$ has the law of the GL model on $B_W$ with $h^{\zeta,0}|_{\partial_1} \equiv 0$, $h^{\zeta,0}|_{\partial_2} \equiv \zeta$, and the same conditioning as $h^{\zeta}$. With $(h_t^{\zeta}, h_t^{\zeta,0})$ the stationary coupling of $h^\zeta,h^{\zeta,0}$, Lemma \ref{gl::lem::square_bound} implies $\overline{h}_0^2(v) \leq \mathbf{E}_v[\overline{h}_{-\tau}^2(X_{\tau})]$ where $\tau$ is the first exit time of $X = X^0$ from $B_W$. Using that $\overline{h} = h^\zeta - h^{\zeta,0} \equiv 0$ on $\partial_2$, Lemma \ref{symm_rw::lem::beurling} of \cite{M10} thus implies $\overline{h}_0^2(v) \leq O_{\overline{\Lambda}}( \widetilde{r}^{-\rho_{\rm B}} \max_{x \in \partial_1} |\zeta(x)|^2)$. Therefore
\begin{equation}
\label{te::eqn::part2_bound}
\mathbf{E}[ \overline{h}_0^2(v) | \zeta] \mathbf{1}_{A^c} = O_{\alpha}( \widetilde{r}^{-\rho_{\rm B}} (\log r)^2).
\end{equation}
Assume now $p_0 > 2 / \rho_{\rm B}$ so that $\widetilde{r}^{-\rho_{\rm B}} (\log r)^2 = O(1)$. Then the right hand side of \eqref{te::eqn::part2_bound} is $O_{\alpha}(1)$. Therefore it suffices to show that $\mathbf{E}[ |h^{\zeta,0}(v)| \big| \zeta] \mathbf{1}_{A^c} = O_{\overline{\Lambda}}(1)$. Since $h^{\zeta,0}(v) \geq -\overline{\Lambda}$, we actually just need to prove $\mathbf{E}[ h^{\zeta,0}(v) \big| \zeta] \mathbf{1}_{A^c} = O_{\overline{\Lambda}}(1)$. Let $h^{\overline{\Lambda}}$ have the law of the GL model on $B_W$ with $h^{\overline{\Lambda}} |_{\partial_1} \equiv 0$, $h^{\overline{\Lambda}} |_{\partial_2} \equiv \overline{\Lambda}$, and the same conditioning as $h^\zeta$. As $\zeta|_{\partial_2} \leq \overline{\Lambda}$, Lemma \ref{gl::lem::stoch_dom} implies that the stationary coupling $(h_t^{\zeta,0}, h_t^{\overline{\Lambda}})$ satisfies $h_t^{\overline{\Lambda}} \geq h_t^{\zeta,0}$ almost surely. As $h_t^{\overline{\Lambda}}$ satisfies the hypotheses of Lemma \ref{te::lem::expectation}, we consequently have $\mathbf{E}[ h^{\overline{\Lambda}}(v)] = O_{\overline{\Lambda}}(1)$, hence $\mathbf{E}[ h^{\zeta,0}(v) | \zeta] \mathbf{1}_{A^c} = O_{\overline{\Lambda}}(1)$.
\end{proof}
\section{Coupling Near the Interface}
\label{sec::hic}
Throughout this section, we shall assume $(\partial)$ and $(C)$. Suppose that $h_t$ solves \eqref{gl::eqn::cond_dynam} initialized at stationarity and conditioned so that $a(x) \leq h_t(x) \leq b(x)$ for every $x \in D$. Our first goal will be to show that the law of $h_t(z)$ near some $x_0 \in U$ does not depend too strongly on the precise geometry of $U$ or of $D$ far away from $x_0$ (Proposition \ref{hic::prop::hic}). The next objective is to boost the estimate of approximate harmonicity of the mean given by Theorem \ref{harm::thm::coupling} of \cite{M10} very close to $\partial D$ and $U$. This will, in particular, prove Theorem \ref{intro::thm::harmonic_up_to_boundary}. We end the section by combining Proposition \ref{hic::prop::hic} with Lemma \ref{hic::lem::harmonic_boundary} to show that, under the additional hypothesis of $(\pm)$, the mean height remains uniformly negative close to $V_-$ and uniformly positive near $V_+$. We remark that the latter is one of the crucial points of the proof.
\subsection{Continuity of the Law Near $U$}
We now work towards establishing Proposition \ref{hic::prop::hic}. Before we proceed, it will be helpful to give an overview of the proof. We will first argue (Lemma \ref{hic::lem::min}) that along $U$ there are many points $y$ where $h_t(y)$ is very close to either $a(y)$ or $b(y)$. The reason that one should expect this to be true is that, for any fixed $y$, this holds with positive probability and, using the Markovian structure, we are able to argue a certain amount of approximate independence between different $y$. Then we will fix another instance $\widetilde{h}_t$ of the GL model, though on possibly a different domain $\widetilde{D}$ and region of conditioning $\widetilde{U}$ which agrees with $U$ near $x_0$, and take the stationary coupling of $h_t$ and $\widetilde{h}_t$. By the energy inequality, we can find large, connected, deterministic sets of bonds $b$ in $U$ near $x_0$ where $\mathbf{E} | \nabla \overline{h}_t(b)|$ is small, with $\overline{h} = h - \widetilde{h}$ as usual. This implies $\overline{h}_t$ is nearly constant throughout each region. We will then combine this with Lemma \ref{hic::lem::min} to argue that this constant must be very close to zero, for otherwise either $h$ or $\widetilde{h}$ would violate the constraint $(C)$. The result then follows by recoupling $h,\widetilde{h}$ near $x_0$ with boundary values fixed in the ``good'' regions and then applying the random walk representation.
\begin{lemma}
\label{hic::lem::min}
Fix $\delta > 0$, $r > 0$, and $n = [r^\delta]$. Suppose that $x_1,\ldots,x_n \in V_+ \cup V$ are distinct. Assume that $|x_i-x_j| \geq 2r_0 \equiv 2(\log r)^{p_0}$, where $p_0$ is as in Lemma \ref{te::lem::local_bound}, and that for each $i$, the connected component $U_i$ of $U$ containing $x_i$ satisfies $U_i \cap \partial B(x_i,r_1) \neq \emptyset$ for $r_1 = (\log R)^{2c_1}$, $c_1$ as in Lemma \ref{te::lem::expectation}. For each $\epsilon > 0$, we have
\begin{equation}
\label{hic::eqn::min}
\mathbf{P}[ \cap_{k=1}^n\{|h(x_k) - a(x_k)| \geq n^{\epsilon-1}\}] = O_{\overline{\Lambda}, \epsilon}(r^{-50}).
\end{equation}
Similarly, if $x_1,\ldots,x_n$ are distinct in $V_- \cup V$, then
\begin{equation}
\label{hic::eqn::max}
\mathbf{P}[ \cap_{k=1}^n\{|h(x_k) - b(x_k)| \geq n^{\epsilon-1}\}] = O_{\overline{\Lambda},\epsilon}(r^{-50}).
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality, it suffices to prove \eqref{hic::eqn::min}. To this end, for each $k$, let $x_{k1},\ldots,x_{km_k}$, $m_k \leq 4$, be the neighbors of $x_k$ in $D$. By Lemma \ref{te::lem::expectation}, we know that if $\alpha > 0$ is large enough, then the event
\[ A = \cup_{k} \{ \max_{x \in B(x_k,r)} |h(x)| > \alpha (\log r)\}\]
satisfies $\mathbf{P}[A] = O_{\overline{\Lambda}}(r^{-50})$ and, by Lemma \ref{te::lem::local_bound},
\[ \mathbf{E}\big[ |h(x_{ki})|\ \big|\ h|_{\partial B(x_k,r_0)} \big] \mathbf{1}_{A^c} = O_{\overline{\Lambda}}(1).\]
Combining this with Markov's inequality implies the existence of $M = M(\overline{\Lambda}) > 0$ sufficiently large such that for each $k$ we have
\begin{equation}
\label{te::eqn::neighbor_bound}
\mathbf{P}[ E_k\ \big|\ h|_{\partial B(x_k,r_0)} ] \mathbf{1}_{A^c} \geq \frac{1}{2} \mathbf{1}_{A^c} \text{ where }
E_k = \cap_{\ell=1}^{m_k} \{ |h(x_{k\ell})| \leq M\}.
\end{equation}
From the explicit form of the density of the law of $h(x_k)$ conditional on $h(x_{k1}),\ldots,h(x_{km_k})$ with respect to Lebesgue measure, it is clear that
\begin{equation}
\label{hic::eqn::prob_lower_bound}
\mathbf{P}[ h(x_k) - a(x_k) \leq \beta \big| E_k] \geq a_1 \beta
\end{equation}
for some $a_1 = a_1(M) > 0$ and all $\beta \in [0,\beta_0]$ for some $0 < \beta_0 = \beta_0(M)$. Let
\[ B_M = \{ 1 \leq k \leq n : |h(x_{k1})| \leq M, \ldots, |h(x_{km_k})| \leq M\}.\]
It is immediate from \eqref{te::eqn::neighbor_bound} that there exists a random variable $Z$ which, conditional on $A^c$, is binomial with parameters $(n,\tfrac{1}{2})$ such that $|B_M| \mathbf{1}_{A^c} \geq Z \mathbf{1}_{A^c}$. Consequently,
\begin{align*}
\mathbf{P}[ |B_M| \geq \tfrac{1}{4} n \big| A^c]
&\geq \mathbf{P}[ Z \geq \tfrac{1}{4} n \big| A^c]
\geq 1-O(e^{-a_2 n}),
\end{align*}
some $a_2 > 0$. The lemma now follows as by \eqref{hic::eqn::prob_lower_bound},
\begin{align*}
\mathbf{P}[ \cap_{k=1}^n \{h(x_k) - a(x_k) \geq n^{\epsilon-1}\} \big| |B_M| \geq \tfrac{1}{4} n]
&\leq (1-a_1 n^{\epsilon-1})^{n/4}.
\end{align*}
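The last display is indeed $O_{\overline{\Lambda},\epsilon}(r^{-50})$: using $1 - x \leq e^{-x}$,
\[ (1-a_1 n^{\epsilon-1})^{n/4} \leq \exp\Big( -\frac{a_1}{4} n^{\epsilon} \Big) = \exp\Big( -\frac{a_1}{4} [r^{\delta}]^{\epsilon} \Big),\]
which decays faster than any power of $r$; the events $A$ and $\{|B_M| < \tfrac{1}{4} n\}$ contribute the additional errors $O_{\overline{\Lambda}}(r^{-50})$ and $O(e^{-a_2 n})$, respectively.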
\end{proof}
We assume that $\widetilde{D} \subseteq \mathbf{Z}^2$ is another bounded domain with distinguished subsets of vertices $\widetilde{V}_-,\widetilde{V}_+, \widetilde{V}$ and with functions $\widetilde{a} \leq \widetilde{b}$ satisfying the hypotheses of $(C)$. Let $\widetilde{h}_t$ solve \eqref{gl::eqn::cond_dynam} with stationary initial conditions, conditioned to satisfy $\widetilde{a}(x) \leq \widetilde{h}(x) \leq \widetilde{b}(x)$ for all $x \in \widetilde{D}$, and boundary condition satisfying $(\partial)$. Further, we suppose there exists $x_0 \in D \cap \widetilde{D}$ and $r \geq 5(\log R)^{2c_1}$, $c_1 > 0$ as in Lemma \ref{te::lem::expectation}, such that
\begin{enumerate}
\item \label{hic::assump::ball_contained} $B(x_0,2r) \subseteq D \cap \widetilde{D}$,
\item \label{hic::assump::agree_geom} $B(x_0,2r) \cap U = B(x_0,2r) \cap \widetilde{U}$,
\item \label{hic::assump::agree_cond} $a = \widetilde{a}, b = \widetilde{b}$ in $B(x_0,2r)$, and
\item \label{hic::assump::curve} the connected component of $U_0 \equiv U \cap B(x_0,2r)$ containing $x_0$ has non-empty intersection with $\partial B(x_0,2r)$.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{figures/heights_interface_continuity.pdf}
\caption{The setup for constructing the net $Y$ in the first step of the proof of Proposition \ref{hic::prop::hic}. The large circles indicate the balls $B_k$ associated with the initial $r^{99/100}$-net $(x_n : n \leq N_r)$ and the smaller disks are the balls $B_{kj}$ of the corresponding nets of the $B_k$. The collection of disks with dashed boundary indicates one of the groups $\cup_k B_{kj}$ used to construct $Y$.}
\end{figure}
\begin{proposition}
\label{hic::prop::hic}
For every $\epsilon > 0$, there exists $\delta > 0$ independent of $r,h,\widetilde{h}$ such that there is a coupling of the laws of $h$, $\widetilde{h}$ satisfying
\[ \mathbf{E}\bigg[ \max_{x \in B(x_0,r^{1-\epsilon})} |h(x) - \widetilde{h}(x)| \bigg] = O_{\overline{\Lambda}}(r^{-\delta}).\]
\end{proposition}
We remark that the coupling constructed in Proposition \ref{hic::prop::hic} will not be the same as the stationary coupling.
\begin{proof}
{\it Step 1.} Construction of the net. First, Lemma \ref{te::lem::expectation} implies
\[ \max_{z \in B(x_0,2r)} \mathbf{E}[ h_t^2(z) + \widetilde{h}_t^2(z)] = O_{\overline{\Lambda}}( (\log r)^2).\]
Combining this with Lemma \ref{gl::lem::grad_error} and assumptions \eqref{hic::assump::ball_contained}-\eqref{hic::assump::curve}, we see that the stationary coupling of $(h_t,\widetilde{h}_t)$ satisfies
\begin{equation}
\label{hic::eqn::grad_bound}
\sum_{b \in B^*(x_0,r)} \mathbf{E} |\nabla \overline{h}(b)|^2 = O_{\overline{\Lambda}}(r^{\epsilon}).
\end{equation}
Let $(x_n : n \leq N_r)$ be an $r^{99/100}$-net of $U_0$ contained in $V_+ \cup V$ and let $n_0 = r^{1/4}$. Assumption \eqref{hic::assump::curve} implies $|U \cap B(x_k,r^{1/2})| \geq r^{1/2}$. Hence, we can find an $r^{1/4}$-net $(x_{kj})$ of $U \cap B(x_k, r^{1/2})$ of cardinality at least $n_0$. Let $B_{kj} = B(x_{kj}, r^{1/20})$ and $U_{kj} = U \cap B_{kj}$. Trivially, $|U_{kj}| \leq |B_{kj}| \leq 10 r^{1/10}$. Since the balls $B_{kj}$ are pairwise disjoint, \eqref{hic::eqn::grad_bound} implies
\[ \mathbf{E} \left[ \sum_{j=1}^{n_0} \left( \sum_{k=1}^{N_r} \sum_{b \in B_{kj}^*} |\nabla \overline{h}(b)|^2 \right)\right] = O_{\overline{\Lambda}}(r^{\epsilon}).\]
Therefore there exists $1 \leq j_0 \leq n_0$ such that
\[ \mathbf{E} \left[ \sum_{k=1}^{N_r} \sum_{b \in B_{kj_0}^*} |\nabla \overline{h}(b)|^2 \right] = O_{\overline{\Lambda}}( r^{\epsilon-1/4}).\]
Noting that $N_r = O(r^2 / r^{198/100}) = O(r^{1/50})$ and $|B_{kj_0}^*| = O(r^{1/10})$, the Cauchy-Schwarz inequality implies
\[ \mathbf{E} \left[ \sum_{k=1}^{N_r} \sum_{b \in B_{kj_0}^*} |\nabla \overline{h}(b)| \right] = \big( O(r^{1/50}) O(r^{1/10}) O_{\overline{\Lambda}}( r^{\epsilon-1/4}) \big)^{1/2} = O_{\overline{\Lambda}}(r^{-1/20}),\]
assuming we have chosen $\epsilon > 0$ small enough.
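Explicitly, the exponent produced by the Cauchy-Schwarz step is
\[ \frac{1}{2} \Big( \frac{1}{50} + \frac{1}{10} + \epsilon - \frac{1}{4} \Big) = \frac{\epsilon}{2} - \frac{13}{200} \leq -\frac{1}{20} \quad \text{provided that } \epsilon \leq \frac{3}{100}.\]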
As each of the sets $B_{kj_0}$ is connected, we consequently have that for each $1 \leq k \leq N_r$ there exists (random) $e_{k}$ with
\begin{equation}
\label{hic::eqn::e_bound}
\mathbf{E} \left[ \sum_{k=1}^{N_r} M_k \right] = O_{\overline{\Lambda}}(r^{-1/20}) \text{ where } M_k = \max_{y \in B_{kj_0}} |\overline{h}(y) - e_k|.
\end{equation}
We next claim that $\mathbf{E} |e_k| = O_{\overline{\Lambda}}(r^{-1/20})$ uniformly in $k$. To see this, fix $y \in B_{kj_0} \cap (V_+ \cup V)$. By rearranging the inequality $e_k - \overline{h}(y) \leq M_k$ and using $\widetilde{h}(y) \geq a(y)$, we see that $h(y) - a(y) + M_k \geq e_k$. By a symmetric argument except starting with the inequality $\overline{h}(y) - e_k \leq M_k$, we also have $\widetilde{h}(y) - a(y) + M_k \geq -e_k$. Combining the two inequalities yields
\[ -\widetilde{X}_k - M_k \leq e_k \leq X_k + M_k\]
where
\[ \widetilde{X}_k = \min_{y \in U_{kj_0}} |\widetilde{h}(y) - a(y)| \text{ and } X_k = \min_{y \in U_{kj_0}} |h(y) - a(y)|.\]
Let $E_k = \{ X_k \geq r^{-1/20}\}$ and $\widetilde{E}_k = \{ \widetilde{X}_k \geq r^{-1/20}\}$. From Lemma \ref{hic::lem::min}, we have both $\mathbf{P}[E_k] = O_{\overline{\Lambda}}(r^{-50})$ and $\mathbf{P}[\widetilde{E}_k] = O_{\overline{\Lambda}}(r^{-50})$. Note that
\[ \mathbf{E}[X_k] = O_{\overline{\Lambda}}(r^{-1/20}) + \mathbf{E}[X_k \mathbf{1}_{E_k}] = O_{\overline{\Lambda}}(r^{-1/20}) + \sqrt{ \mathbf{E}[X_k^2] O_{\overline{\Lambda}}(r^{-50})}.\]
By Lemma \ref{te::lem::expectation}, $\mathbf{E}[ X_k^2] = O_{\overline{\Lambda}}( (\log r)^2)$, hence $\mathbf{E}[X_k] = O_{\overline{\Lambda}}(r^{-1/20})$. Similarly, $\mathbf{E}[\widetilde{X}_k] = O_{\overline{\Lambda}}(r^{-1/20})$. Combining this with \eqref{hic::eqn::e_bound} implies
\[ \sum_{k=1}^{N_r} \mathbf{E}[|e_k|] \leq \sum_{k=1}^{N_r} \mathbf{E}[|M_k|] + \sum_{k=1}^{N_r} \mathbf{E} \big[X_k + \widetilde{X}_k \big] = O_{\overline{\Lambda}}(r^{-1/50}).\]
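A quick check of the exponent in the last display: the first sum is $O_{\overline{\Lambda}}(r^{-1/20})$ by \eqref{hic::eqn::e_bound}, while the second consists of $N_r = O(r^{1/50})$ terms, each of expected size $O_{\overline{\Lambda}}(r^{-1/20})$, so that
\[ \sum_{k=1}^{N_r} \mathbf{E}\big[ X_k + \widetilde{X}_k \big] = O_{\overline{\Lambda}}(r^{1/50 - 1/20}) = O_{\overline{\Lambda}}(r^{-3/100}) = O_{\overline{\Lambda}}(r^{-1/50}).\]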
By yet another application of \eqref{hic::eqn::e_bound}, for each $1 \leq k \leq N_r$ we can pick $y_k \in U_{kj_0}$ such that $Y = (y_k : 1 \leq k \leq N_r)$ is an $r^{99/100}$-net of $U_0$ satisfying
\[ \sum_{k=1}^{N_r} \mathbf{E}[|\overline{h}(y_k)|] = O_{\overline{\Lambda}}(r^{-1/50}).\]
{\it Step 2.} Coupling at the interface. Let $B_Y = B(x_0,r) \setminus Y$. Conditional on $\zeta = h|_{\partial B_Y}$, let $h_t^{B_Y}$ be a dynamic version of the GL model on $B_Y$ with $h^{B_Y}|_{\partial B_Y} = \zeta$ and the same conditioning as $h$ off of $Y$. Define $\widetilde{h}_t^{B_Y}$ analogously, let $(h_t^{B_Y}, \widetilde{h}_t^{B_Y})$ be the corresponding stationary coupling, and let $\overline{h}^{B_Y} = h^{B_Y} - \widetilde{h}^{B_Y}$. With $X_t = X_t^0$ defined as in subsection \ref{subsec::rw_difference}, Lemma \ref{gl::lem::square_bound} implies that
\begin{equation}
\label{hic::eqn::cai_bound}(\overline{h}_0^{B_Y})^2(x) \leq \mathbf{E}_x [ (\overline{h}_{- \tau}^{B_Y})^2(X_{\tau})]
\end{equation}
where $\tau = \tau_Y \wedge \tau_r$,
\[ \tau_Y = \inf\{t > 0 : X_t \in Y\} \text{ and } \tau_r = \inf\{t > 0 : X_t \in \partial B(x_0,r)\},\]
and the expectation is taken only over the randomness of $X$ initialized at $x$. We claim that there exists $\rho = \rho(\mathcal {V},\epsilon) > 0$ such that $\mathbf{P}_x[ \tau_Y \leq \tau_r] \geq 1-O(r^{-\rho})$ for $x \in B(x_0,r^{1-\epsilon})$. The reason for this is that after hitting the center ring of the annulus $A_k = A(x_0, 2^k r^{1-\epsilon}, 2^{k+1} r^{1-\epsilon})$, with positive probability $X$ runs a full circle around $A_k$ and hence hits $U_0$ before exiting $A_k$ (see the proof of Lemma \ref{symm_rw::lem::beurling} of \cite{M10}). On this event, upon hitting $U_0$, there exists $y \in Y$ within distance $r^{99/100}$ of $X$, hence $X$ has positive probability of hitting $y$ before exiting $A_k$. The claim now follows as there are at least $c(\epsilon) \log r$ chances for this to occur.
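To quantify the claim: suppose each scale offers success probability at least $q = q(\mathcal {V},\epsilon) > 0$ for hitting $Y$, with the attempts at distinct scales decoupled by the strong Markov property. Then
\[ \mathbf{P}_x[ \tau_Y > \tau_r] \leq (1-q)^{c(\epsilon) \log r} = r^{-\rho} \quad \text{with } \rho = c(\epsilon) \log \tfrac{1}{1-q} > 0.\]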
\[ (\overline{h}_0^{B_Y})^2(x) \leq \max_{y \in Y} |\overline{h}(y)| + O(r^{-\rho}) \max_{y \in \partial B(x_0,r)} |\overline{h}(y)| \equiv A_1 + O(r^{-\rho}) A_2.\]
The estimate from Step 1 implies $\mathbf{E}[A_1] = O_{\overline{\Lambda}}(r^{-1/50})$ and Lemma \ref{te::lem::expectation} implies $\mathbf{E}[A_2] = O_{\overline{\Lambda}}((\log r)^2)$. Taking $\delta = (\rho/2) \wedge \tfrac{1}{50}$ proves the proposition.
\end{proof}
\subsection{Harmonicity of the Mean Near the Boundary}
In view of Proposition \ref{hic::prop::hic}, we now boost the estimate of harmonicity of the mean coming from Theorem \ref{harm::thm::mean_harmonic} of \cite{M10} all the way up to $\partial D$ and $U$. This result is \emph{only} applicable for the mean; it does not imply that we can \emph{couple harmonically} up to the boundary. Recall that $E(r) = \{x \in E : {\rm dist}(x, \partial E) \geq r\}$ for $E \subseteq D$. Let $D_U = D \setminus U$.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{figures/harmonic_boost.pdf}
\caption{A typical step in the localization procedure used in the proof of Lemma \ref{hic::lem::harmonic_boundary}. The dark gray region on the right side indicates $B_n$.}
\end{figure}
\begin{lemma}
\label{hic::lem::harmonic_boundary}
Assume that for every connected component $U_0$ of $U$ there exists a connected component $U_1$ of $U$ with ${\rm dist}(U_0, U_1) \leq ({\rm diam}(U_0))^2$ and ${\rm diam}(U_1) \geq (\log R)^{2c_1}$ with $c_1$ as in Lemma \ref{te::lem::expectation}. There exists $\delta = \delta(\mathcal {V}) > 0$ such that if $\widehat{g}$ is the harmonic extension of $\mathbf{E}[h(x)]$ from $\partial D_U(r)$ to $D_U(r)$, then
\[ \max_{x \in D_U(r)} |\mathbf{E}[h(x)] - \widehat{g}(x)| = O_{\overline{\Lambda}}(r^{-\delta}).\]
\end{lemma}
\begin{proof}
We are going to provide a proof for the lemma under the stronger hypothesis that $U$ is connected (as is the case corresponding to the exploration path $\gamma$ of our main theorem), since moving to the more general case is straightforward though notationally more complicated. Let $d(x) = |\mathbf{E}[h(x)] - \widehat{g}(x)|$.
Fix $\epsilon,\delta > 0$ so that Theorem \ref{harm::thm::mean_harmonic} of \cite{M10} holds and let $\gamma_n = (1-\epsilon)^n$. Let $x_1$ be a point in $D_U(R^{\gamma_1})$ which maximizes $d|_{D_U(R^{\gamma_1})}$. For each $n \geq 2$, let $x_n$ be a point in $D_U(R^{\gamma_{n}}) \setminus D_U(R^{\gamma_{n-1}})$ which maximizes $d|_{D_U(R^{\gamma_{n}}) \setminus D_U(R^{\gamma_{n-1}})}$, and let $\Delta_n = d(x_n)$. We are going to prove that
\begin{equation}
\label{hic::eqn::delta_bound}
\Delta_n \leq O_{\overline{\Lambda}}(R^{-c \delta \gamma_{n+1}}) + \Delta_{n+1},
\end{equation}
for some $c > 0$ which depends only on $\mathcal {V}$.
The constant will be uniform in $n$, so that the result follows by summation.
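To sketch the bookkeeping behind the summation: if $n_0$ denotes the last scale with $R^{\gamma_{n_0}} \geq r$, then $n_0 = O_\epsilon(\log \log R)$ and iterating \eqref{hic::eqn::delta_bound} yields
\[ \Delta_1 \leq \sum_{n=1}^{n_0} O_{\overline{\Lambda}}(R^{-c \delta \gamma_{n+1}}) + \Delta_{n_0+1}.\]
Since $R^{\gamma_{n_0+1}} \geq r^{1-\epsilon}$, each summand is $O_{\overline{\Lambda}}(r^{-c\delta(1-\epsilon)})$, and the logarithmically many terms can be absorbed by decreasing the exponent.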
We will first prove \eqref{hic::eqn::delta_bound} for $n=1$. Let $\widehat{g}_1$ be the harmonic extension of $\mathbf{E}[h(x)]$ from $\partial D_U(R^{\gamma_1})$ to $D_U(R^{\gamma_1})$. Lemma \ref{te::lem::expectation} implies that with
\[ A = \{ \max_{x \in \partial D_U} |h(x)| > \alpha (\log R)\},\]
we have $\mathbf{P}[A] = O_{\overline{\Lambda}}(R^{-100})$ provided $\alpha > 0$ is chosen large enough. Applying Lemma \ref{te::lem::expectation} a second time along with the Cauchy-Schwarz inequality implies
\begin{equation}
\label{hic::eqn::harmonic_good_bc}
\big|\mathbf{E}[ \mathbf{E}[ h(x) \big| h|_{\partial D_U}] \mathbf{1}_{A^c}] - \mathbf{E}[h(x)] \big| \leq (\mathbf{E}[ h^2(x)] \mathbf{P}[A])^{1/2} = O_{\overline{\Lambda}}(R^{-10})
\end{equation}
for all $x \in D$ since $A$ is $\sigma(h|_{\partial D_U})$-measurable. Theorem \ref{harm::thm::mean_harmonic} of \cite{M10} is applicable to $h|_{D_U}$ on $D_U$ conditional on $h|_{\partial D_U}$ and $A$, which combined with \eqref{hic::eqn::harmonic_good_bc} implies
\[ \Delta_1 \leq O_{\overline{\Lambda}}(R^{-\delta})+ |\widehat{g}_1(x_1) - \widehat{g}(x_1)|.\]
By the maximum principle for discrete harmonic functions, we know that there exists $\widetilde{x}_1 \in \partial D_U(R^{\gamma_1}) \subseteq D_U(R^{\gamma_2}) \setminus D_U(R^{\gamma_1})$ such that $|\widehat{g}_1(x_1) - \widehat{g}(x_1)| \leq |\widehat{g}_1(\widetilde{x}_1) - \widehat{g}(\widetilde{x}_1)|$. Applying Theorem \ref{harm::thm::mean_harmonic} of \cite{M10} a second time yields
\[ |\widehat{g}_1(\widetilde{x}_1) - \widehat{g}(\widetilde{x}_1)| \leq O_{\overline{\Lambda}}(R^{-\delta}) + d(\widetilde{x}_1) \leq O_{\overline{\Lambda}}(R^{-\delta}) + \Delta_2,\]
which gives \eqref{hic::eqn::delta_bound} for $n=1$, as desired.
We are now going to prove \eqref{hic::eqn::delta_bound} for $n \geq 2$. Let $\widetilde{\gamma}_n = (1-\epsilon/10)\gamma_{n-1}$, $\gamma_n' = (1-\epsilon/3)\gamma_{n-1}$, $\widetilde{B}_n = B(x_n,R^{\widetilde{\gamma}_{n-1}}) \cap D$, and $B_n = B(x_n, R^{\gamma_{n-1}'}) \cap D_U(R^{\gamma_{n+1}})$. Let $\widetilde{\partial}_n^1$ be the part of $\partial \widetilde{B}_n$ which is contained in $\partial D$ and $\widetilde{\partial}_n^2 = \partial \widetilde{B}_n \setminus \partial D$. Let $h_n$ have the law of the GL model on $\widetilde{B}_n$ with $h_n|_{\widetilde{\partial}_n^1} \equiv h|_{\widetilde{\partial}_n^1}$, $h_n|_{\widetilde{\partial}_n^2} \equiv 0$, and the same conditioning as $h$ otherwise. By decreasing $\delta > 0$ if necessary, Proposition \ref{hic::prop::hic} implies that we can couple $h,h_n$ such that $\max_{x \in B_n} \mathbf{E}[| h(x) - h_n(x)|] = O(R^{-\widetilde{\gamma}_n \delta})$. Let $\widehat{g}_n$ be the harmonic extension of $\mathbf{E}[h_n(x)]$ from $\partial B_n$ to $B_n$. Since $x \in B_n$ implies that ${\rm dist}(x, \partial \widetilde{B}_n \cup U) \geq R^{\gamma_{n+1}}$, Lemma \ref{te::lem::expectation} and Theorem \ref{harm::thm::mean_harmonic} imply that $\max_{x \in B_n}|\mathbf{E}[h_n(x)] - \widehat{g}_n(x)| = O_{\overline{\Lambda}}(R^{-\gamma_{n+1} \delta})$, hence
\begin{equation}
\label{hic::eqn::harm_diff_bound_n}
\max_{x \in B_n}|\mathbf{E}[h(x)] - \widehat{g}_n(x)| = O_{\overline{\Lambda}}(R^{-\gamma_{n+1} \delta}).
\end{equation}
Therefore
\begin{equation}
\label{hic::eqn::delta_n_prebound}
\Delta_n \leq O_{\overline{\Lambda}}(R^{-\gamma_{n+1} \delta}) + |\widehat{g}(x_n) - \widehat{g}_n(x_n)|.
\end{equation}
We can divide the boundary of $B_{n}$ into the part $\partial_{n}^1$ which intersects $\partial D_U(R^{\gamma_{n+1}})$ and $\partial_{n}^2 = \partial B_{n} \setminus \partial_{n}^1$. We claim that the harmonic measure of $\partial_n^1$ from $x_n$ in $B_n$ is $1-O(R^{-\rho_{\rm B} \delta \gamma_{n-1}})$ provided we take $\delta < \epsilon/100$. To see this, let $x_{n,U}$ be a point in $U$ with minimal distance to $x_n$. Note that ${\rm dist}(x_n,x_{n,U}) \leq R^{\gamma_{n-1}}$. Since $U$ is connected, there exists a connected subgraph $U_n$ of $U$ contained in $B(x_n, R^{\gamma_{n-1}'}) \cap D$ which itself contains $x_{n,U}$ and has non-empty intersection with $\partial ( B(x_n, R^{\gamma_{n-1}'}) \cap D)$. Consequently, Lemma \ref{symm_rw::lem::beurling} of \cite{M10} implies that the probability that a random walk started at $x_n$ exits $B(x_n,R^{\gamma_{n-1}'}) \cap D$ before hitting $U_n$ is at most $O((R^{\gamma_{n-1}} / R^{\gamma_{n-1}'})^{\rho_{\rm B}}) = O(R^{-\delta \rho_{\rm B} \gamma_{n-1}})$ since $\delta < \epsilon/100$. This proves our claim.
Letting $M_n^i = \max_{x \in \partial_{n}^i} |\widehat{g}(x) - \widehat{g}_n(x)|$, we thus see that
\[ |\widehat{g}(x_n) - \widehat{g}_n(x_n)| \leq M_n^1 + O(R^{- \rho_{\rm B} \delta \gamma_{n-1}}) M_n^2.\]
Equation \eqref{hic::eqn::harm_diff_bound_n} and the definition of $\Delta_{n+1}$ implies that
\[ M_n^1 \leq \Delta_{n+1} + O_{\overline{\Lambda}}(R^{-\delta \gamma_{n+1}}),\]
hence
\[ \Delta_n \leq \Delta_{n+1} + O(R^{- \rho_{\rm B} \delta \gamma_{n-1}}) M_{n}^2 + O_{\overline{\Lambda}}(R^{-\delta \gamma_{n+1}}).\]
Lemma \ref{te::lem::expectation} implies $M_n^2 = O_{\overline{\Lambda}}(\log R^{\gamma_n})$ hence $O(R^{-\rho_{\rm B} \delta \gamma_{n-1}}) M_n^2 = O_{\overline{\Lambda}}(R^{-\rho_{\rm B}\delta \gamma_{n}})$, which gives exactly \eqref{hic::eqn::delta_bound} and proves the lemma.
\end{proof}
\subsection{Sign of the Mean Near $U$}
We will next show that the mean height is uniformly bounded in $D$, uniformly positive near $V_+$, and uniformly negative near $V_-$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{figures/strictly_positive.pdf}
\caption{The setup for Proposition \ref{hic::prop::exp_bounds}. The regions shaded black and dark gray correspond to $V_-$ and $V_+$, respectively. The light gray region is $F_0$ and the subset of $F_0$ surrounded by the disk with dashed boundary arc is $F$.}
\end{figure}
\begin{proposition}
\label{hic::prop::exp_bounds}
We assume $(\pm)$ and that $U$ is connected in addition to $(C)$ and $(\partial)$. Suppose $r > 0$ and $x_0 \in V_+$ are such that the boundary of every connected component of $B(x_0,r) \setminus U$ does not contain vertices from both $V_-$ and $V_+$. Let $\mathcal {C}_+$ be the set of connected components of $B(x_0,r) \setminus U$ whose boundary has non-empty intersection with $V_+$ and let $F_0 = \cup_{C \in \mathcal {C}_+} C$. Fix $\epsilon > 0$ and let $F = F_0 \cap B(x_0, r^{1-\epsilon})$. There exists $\lambda_0 = \lambda_0(\epsilon,\overline{\Lambda}) > 0$ such that
\[ \frac{1}{\lambda_0} \leq \mathbf{E}[h(x)] \leq \lambda_0 \text{ for all } x \in F.\]
\end{proposition}
The easy part is the upper bound: this follows by using Lemma \ref{hic::lem::harmonic_boundary} and the maximum principle to reduce it to bounding $\mathbf{E}[h(x)]$ for $x$ with ${\rm dist}(x,\partial D)$ uniformly bounded, then applying Proposition \ref{hic::prop::hic}. The lower bound is more challenging.
\begin{lemma}
\label{hic::lem::exp_upper_bound}
Suppose that we have the same assumptions as Lemma \ref{hic::lem::harmonic_boundary}. There exists $\lambda_0 = \lambda_0(\epsilon, \overline{\Lambda}) > 0$ such that $\mathbf{E}[ h(x)] \leq \lambda_0$ for every $x \in D$.
\end{lemma}
\begin{proof}
Fix $s \geq 1$ sufficiently large that Lemma \ref{hic::lem::harmonic_boundary} applies. Let $D_U = D \setminus U$. If ${\rm dist}(x,\partial D_U) \leq s$, then Lemma \ref{te::lem::expectation} implies $\mathbf{E}[h(x)] = O_{\overline{\Lambda}}(1)$. It thus suffices to prove the bound on $D_U(s)$. Applying Lemma \ref{hic::lem::harmonic_boundary}, we see that if $\widehat{g}$ denotes the harmonic extension of $\mathbf{E}[h(x)]$ from $\partial D_U(s)$ to $D_U(s)$, then $|\mathbf{E}[h(x)] - \widehat{g}(x)| = O_{\overline{\Lambda}}(1)$ uniformly in $x \in D_U(s)$. By the maximum principle for discrete harmonic functions, the maximum of $\widehat{g}$ in $D_U(s)$ is attained at some point $y_0 \in \partial D_U(s)$. Consequently,
\[ \mathbf{E}[ h(x)] \leq O_{\overline{\Lambda}}(1) + |\widehat{g}(x)| \leq O_{\overline{\Lambda}}(1) + |\widehat{g}(y_0)| \leq O_{\overline{\Lambda}}(1) + |\mathbf{E}[h(y_0)]|.\]
Lemma \ref{te::lem::expectation} implies that the right hand side is $O_{\overline{\Lambda}}(1)$, which proves the lemma.
\end{proof}
The proof of the lower bound will also use Lemma \ref{hic::lem::harmonic_boundary} to reduce the problem to a boundary computation: we will show that $\mathbf{E}[h(x)]$ is uniformly positive very near $V_+$. This strategy is a bit more difficult to implement in this case, however, since we need to show that this uniform positivity is enough to dominate the error associated with approximating $\mathbf{E}[h(x)]$ by the harmonic extension of its boundary values. We will deduce this by arguing that along, say, the positive side of the interface, points at which the height is larger than a given threshold are typically not too far from each other. Then, we will invoke the HS representation of the mean combined with a uniform lower bound on the probability that the HS walk hits any one of these points.
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{figures/mean_decay.pdf}
\caption{The idea of the proof of Lemma \ref{hic::lem::exp_lower_bound} is to show that with high probability we can find a $\sqrt{\log r}$-net $Y$ of $V_+$, indicated by the light gray disks, such that $h|_{Y} \geq r^{-a}$. Thus the position of the HS random walk $X_\sigma$ upon first becoming adjacent to $V_+$ is within $\sqrt{\log r}$ jumps of a point of $Y$. Since the jump rates of $X$ are bounded, $X$ first enters $V_+$ in $Y$ with probability at least $\rho^{\sqrt{\log r}}$, for some $\rho = \rho(\mathcal {V}) > 0$.}
\end{figure}
\begin{lemma}
\label{hic::lem::exp_lower_bound}
Suppose we have the same setup as in Proposition \ref{hic::prop::exp_bounds}. For every $\epsilon > 0$ and $a > 0$ there exists $c(a,\epsilon)$ such that with $F = F_0 \cap B(x_0, r^{1-\epsilon})$ we have
\begin{equation}
\label{hic::eqn::min_decay_lem}
\mathbf{E}[h(x)] \geq c(a,\epsilon) r^{-a} \text{ for all } x \in F.
\end{equation}
\end{lemma}
\begin{proof}
Notationally, it will be easier for us to establish \eqref{hic::eqn::min_decay_lem} with $a$ replaced by $2a$: there exists $c = c(a,\epsilon)$ such that
\begin{equation}
\label{hic::eqn::min_decay}
\min_{y \in F} \mathbf{E}[ h(y)] \geq c(a,\epsilon) r^{-2 a}.
\end{equation}
Let $B = B(x_0,r)$, $V_\pm^B = V_\pm \cap B$. For $x,y \in B(x_0,r)$ let $d_P(x,y)$ denote the length of the shortest path in $B \setminus (V_+ \cup V_-)$ which connects $x$ to $y$ and set $d_P(x,y) = \infty$ if there is no such path. Fix $z_0 \in F_0$ and assume that the connected component $C_0$ of $F_0$ containing $z_0$ has diameter $s > 0$ with respect to $d_P$. Let $y_1,\ldots,y_n$ be a $\sqrt{\log s}$ net of the subset of $V_+^B$ which is adjacent to $C_0$ with respect to $d_P$. Fix $M > 0$ and, for each $i$, let $Y_i = \{y_{i1},\ldots,y_{im_i}\}$ be an $M$ net of $B_{d_P}(y_i,\sqrt{\log s}) \cap V_+^B$. Obviously, $\sqrt{\log s} \leq m_i \leq \log s$ for each $i$. Fix an arbitrary $a > 0$, let $E_{ij}^a = \{ h(y_{ij}) \geq s^{-a}\}$, and $G_{ij}^a = \cap_{k \neq j} (E_{ik}^a)^c$. By Proposition \ref{hic::prop::hic}, it follows that if $y \sim y_{ij}$ then $\mathbf{E}[|h(y)| \big| G_{ij}^a] - \mathbf{E}[|h(y)|] = O_{\overline{\Lambda}}(1)$, hence $\mathbf{E}[|h(y)| \big| G_{ij}^a] = O_{\overline{\Lambda}}(1)$.
\begin{equation}
\label{hic::eqn::approx_ind}
\mathbf{P}[ (E_{ij}^a)^c \big| G_{ij}^a] \leq a_1 s^{-a}
\end{equation}
for some constant $a_1 > 0$. Indeed, as we are able to bound the mean heights of $h(y)$ for $y \sim y_{ij}$ conditional on $G_{ij}^a$, we can use Markov's inequality to show that $h$ is uniformly bounded at the neighbors of $y_{ij}$ with uniformly positive probability. Conditioning further on this event, the desired result is clear from the explicit form of the conditional density of $h(y_{ij})$. With $\widetilde{G}_{ij}^a$ the intersection of any combination of $E_{ik}^a$ or $(E_{ik}^a)^c$ over $k \neq j$, we see that we can couple together $h|\widetilde{G}_{ij}^a$ and $h|G_{ij}^a$ such that $h|\widetilde{G}_{ij}^a \geq h|G_{ij}^a$ by Lemma \ref{gl::lem::stoch_dom}. By \eqref{hic::eqn::approx_ind} we therefore have
\begin{equation}
\mathbf{P}[ (E_{ij}^a)^c \big| \widetilde{G}_{ij}^a] \leq \mathbf{P}[ (E_{ij}^a)^c \big| G_{ij}^a] \leq a_1 s^{-a}.
\end{equation}
Consequently,
\begin{align}
\log \mathbf{P}[ \cap_{j=1}^{m_i} (E_{ij}^a)^c]
&= \log \mathbf{P}[ (E_{i1}^a)^c \big| \cap_{j=2}^{m_i} (E_{ij}^a)^c] + \log \mathbf{P}[ \cap_{j=2}^{m_i} (E_{ij}^a)^c] \notag\\
&\leq \log a_1 - a(\log s) + \log \mathbf{P}[ \cap_{j=2}^{m_i} (E_{ij}^a)^c] \label{hic::eqn::e_bound1}.
\end{align}
Using that $\sqrt{\log s} \leq m_i \leq \log s$ and iterating \eqref{hic::eqn::e_bound1}, we thus see that
\begin{align}
\log \mathbf{P}[ \cap_{j=1}^{m_i} (E_{ij}^a)^c]
&\leq c_1(\log s) - a(\log s)^{3/2}
\leq -\frac{a}{2} (\log s)^{3/2} \label{hic::eqn::e_int_bound}
\end{align}
for $s$ sufficiently large. Therefore
\begin{equation}
\log \mathbf{P}[ E] \leq -\frac{a}{4}(\log s)^{3/2} \label{hic::eqn::log_e_bound}
\end{equation}
where $E = \cup_i \cap_{j=1}^{m_i} (E_{ij}^a)^c$, again for $s > 0$ sufficiently large. The reason for this is that the number of elements in the outer union in the definition of $E$ is clearly polynomial in $s$, so \eqref{hic::eqn::log_e_bound} follows from \eqref{hic::eqn::e_int_bound} by a union bound. Thus
\[ |\mathbf{E}[h(x) \mathbf{1}_{E}]| \leq (\mathbf{E}[ |h(x)|^2 ])^{1/2} [\mathbf{P}[E]]^{1/2} = O_{\overline{\Lambda}}(s^{-100}),\]
hence to prove the lemma it suffices to show that $|\mathbf{E}[h(x) \mathbf{1}_{E^c}]| \geq c s^{-a}$ for $s$ sufficiently large. Let $B_U = B \setminus U$ and $\psi = h |_{\partial B_U}$. By Lemma \ref{gl::lem::hs_mean_cov} of \cite{M10}, we have the HS representation for the conditional mean:
\begin{equation}
\label{hic::eqn::hs_mean}
\mathbf{E}[h(x)|\psi] = \int_0^1 \mathbf{E}_x^{t\psi}[\psi(X_\tau)] dt,
\end{equation}
where under $\mathbf{P}_x^{t\psi}$, $X$ is the HS random walk on $B_U$ started at $x$ associated with the GL model on $B_U$ with boundary condition $t\psi$ and $\tau = \inf\{t : X_t \notin B_U\}$. Our hypotheses imply
\begin{equation}
\label{hic::eqn::hit_vplus_bound}
\mathbf{P}_{z_0}[X_\tau \notin V_+^B] = O(r^{-\epsilon \rho_{\rm B}})
\end{equation}
for $\rho_{\rm B} > 0$ as in Lemma \ref{symm_rw::lem::beurling} of \cite{M10}. Let $\sigma = \inf\{ t : {\rm dist}(X_t, V_+^B) = 1\}$. On $E^c$, we know that $X_\sigma$ is at most $\sqrt{\log s}$ jumps from a site $y \in V_+^B$ such that $\psi(y) \geq s^{-a}$. Therefore the probability that $X$ started at $X_\sigma$ exits at such $y$ is at least $\rho^{\sqrt{\log s}} \geq c_1(a)s^{-a}$, some $\rho > 0$ depending only on $\mathcal {V}$ and $c_1(a)$ depending only on $a$. Combining this with \eqref{hic::eqn::hs_mean} and \eqref{hic::eqn::hit_vplus_bound}, we have
\begin{align*}
&\mathbf{E}[h(x)|\psi]
\geq \frac{c_1(a)}{2} s^{-2a} + O(r^{-\epsilon \rho_{\rm B}}) \| \psi\|_\infty,
\end{align*}
provided we take $r$ sufficiently large. Lemma \ref{te::lem::expectation} implies $\mathbf{E}[\| \psi\|_\infty] = O_{\overline{\Lambda}}(\log r)$, hence integrating both sides over $\psi$ yields \eqref{hic::eqn::min_decay} as $s = O(r^2)$.
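To spell out the final bookkeeping (a sketch; we use that $a$ may be taken small enough that $4a < \epsilon \rho_{\rm B}$): integrating over $\psi$ gives
\[ \mathbf{E}[h(x)] \geq \frac{c_1(a)}{2} s^{-2a} - O_{\overline{\Lambda}}\big( r^{-\epsilon \rho_{\rm B}} \log r \big), \]
and since $s = O(r^2)$ the main term is at least a constant times $r^{-4a}$, which dominates the error term for $r$ large; relabeling $a$, which was arbitrary, then yields \eqref{hic::eqn::min_decay}.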
\end{proof}
\begin{proof}[Proof of Proposition \ref{hic::prop::exp_bounds}]
Let $\widetilde{h}$ have the law of the GL model on $B = B(x_0,r)$ with the same conditioning as $h$ and $\widetilde{h}|_{\partial B} \equiv 0$. By Proposition \ref{hic::prop::hic}, we can find a coupling of $\widetilde{h}$ and $h$ such that $\max_{x \in B(x_0, r^{1-\epsilon})} \mathbf{E}|\widetilde{h}(x) - h(x)| = \epsilon_1 \equiv O_{\overline{\Lambda}}(r^{-\delta})$. Let $B_U = B \setminus U$ and let $\widehat{g}$ be the harmonic extension of $\mathbf{E}[\widetilde{h}(x)]$ from $\partial B_U(s)$ to $B_U(s)$. Lemma \ref{hic::lem::harmonic_boundary} implies that $|\mathbf{E}[\widetilde{h}(x)] - \widehat{g}(x)| = \epsilon_2 \equiv O_{\overline{\Lambda}}(s^{-\delta})$. For $x \in F$, the harmonic measure of the part of $\partial B_U(s)$ which is not in $B(x_0, r^{1-\epsilon})$ is $\epsilon_3 \equiv O(r^{-\rho_{\rm B}\epsilon})$. Assume $s > 0$ is chosen sufficiently large so that, with $a > 0$ chosen much smaller than $\delta, \epsilon$, the uniform lower bound of $c(a,\epsilon) s^{-a}$ dominates $\epsilon_1+\epsilon_2+\epsilon_3$. Putting everything together, increasing $\lambda_0 > 0$ from Lemma \ref{hic::lem::exp_upper_bound} if necessary implies
\begin{align*}
\mathbf{E}[h(x)] &\geq \mathbf{E}[\widetilde{h}(x)] - \epsilon_1 \geq \widehat{g}(x) - \epsilon_1 - \epsilon_2\\
&\geq c(a,\epsilon) s^{-a} - (\epsilon_1 + \epsilon_2 + \epsilon_3) \geq \frac{1}{\lambda_0}.
\end{align*}
\end{proof}
\section{Independence of Interfaces}
\label{sec::ni}
We show in this section that the geometry of zero height interfaces near a particular point $x_0$ is approximately independent of the exact geometry of those which are far from $x_0$, that is:
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{figures/near_independence.pdf}
\caption{The setup for Proposition \ref{ni::prop::ni}. We emphasize that $U$ consists of both the black arcs emanating from $\partial D$ along with $\partial D$ itself and likewise for $\widetilde{U}$. It is important that $U$, $\widetilde{U}$ have non-empty intersection with $B_2$ since their presence moderates the fluctuations of the fields in $B_\gamma$.}
\end{figure}
\begin{proposition}[Independence of Interfaces]
\label{ni::prop::ni}
Suppose $D, \widetilde{D} \subseteq \mathbf{Z}^2$ are bounded, $r > 0$, and $x_0 \in D \cap \widetilde{D}$. For each $\alpha > 0$, let $B_\alpha \equiv B(x_0, \alpha r)$, $B \equiv B_1$, and assume $B_3 \subseteq D \cap \widetilde{D}$. Suppose $\phi \colon \partial D \to \mathbf{R}, \widetilde{\phi} \colon \partial \widetilde{D} \to \mathbf{R}$ satisfy $(\partial)$ and that $U \subseteq D, \widetilde{U} \subseteq \widetilde{D}$ correspond to systems of conditioning $(a,b)$, $(\widetilde{a},\widetilde{b})$, respectively, both of which satisfy (C), are connected, and intersect $B_2, \partial B_3$ but not $B$. Let $\mathcal {K} = \cap_{x \in D} \{ a(x) \leq h(x) \leq b(x)\}$ and $\widetilde{\mathcal {K}} = \cap_{x \in \widetilde{D}}\{\widetilde{a}(x) \leq \widetilde{h}(x) \leq \widetilde{b}(x)\}$. Fix $0 < \gamma < 1$, suppose $U_\gamma \subseteq B_\gamma$ corresponds to a system of conditioning $(a_\gamma,b_\gamma)$ satisfying (C), is connected, and has non-empty intersection with $B_{\gamma/2}$ and $\partial B_\gamma$, and let $\mathcal {K}_\gamma = \cap_{x \in B_\gamma}\{ a_\gamma(x) \leq h(x) \leq b_\gamma(x)\}$ and $\widetilde{\mathcal {K}}_\gamma = \cap_{x \in B_\gamma}\{ a_\gamma(x) \leq \widetilde{h}(x) \leq b_\gamma(x)\}$. There exists $c = c(\gamma, \overline{\Lambda})$ such that
\[ \frac{1}{c} \mathbf{P}_D^\phi[ \mathcal {K}_\gamma|\mathcal {K}] \leq \mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[\widetilde{\mathcal {K}}_\gamma | \widetilde{\mathcal {K}}] \leq c \mathbf{P}_D^{\phi}[ \mathcal {K}_\gamma | \mathcal {K}].\]
\end{proposition}
We will now give an overview of the main steps. We begin by fixing $0 < \alpha < \alpha'$ small and then couple $h|\mathcal {K},\widetilde{h}|\widetilde{\mathcal {K}}$ in $H = B \setminus B_\gamma$ using Theorem \ref{harm::thm::coupling} of \cite{M10} so that $\overline{h} = h|\mathcal {K}-\widetilde{h}|\widetilde{\mathcal {K}}$ is with high probability harmonic in $H(r^{1-\epsilon})$, $\epsilon > 0$ small. Recall that $H(r) = \{ x \in H : {\rm dist}(x, \partial H) \geq r\}$. We show in Lemma \ref{ni::lem::bounded_coupling} for $H^\alpha = H(\alpha r)$ that $\mathbf{E}[ \max_{x \in H^\alpha} |\overline{h}(x)|^p] = O_{\alpha,\overline{\Lambda},p}(1)$. This allows us to conclude for $H^{\alpha'} = H(\alpha' r)$ that $\max_{b \in (H^{\alpha'})^*} |\nabla \overline{h}(b)| \leq C r^{-1}$ with high probability when $\overline{h}$ is harmonic provided $C = C(\alpha,\alpha',\overline{\Lambda}) > 0$ is taken sufficiently large. Fix $\beta > 0$ so that $\partial B_\beta \subseteq H^{\alpha'}$ and let $(\xi,\widetilde{\xi}) = (h,\widetilde{h})|_{\partial B_\beta \times \partial B_\beta}$. We next study the effect that changing the boundary conditions from $\xi$ to $\widetilde{\xi}$ on $\partial B_\beta$ has on the probability that $\mathcal {K}_\gamma$ occurs. To this end, we let $\varphi \colon B_\beta \to \mathbf{R}$ solve the boundary value problem
\[ \varphi|_{\partial B_\beta} = \overline{\xi},\ \ \varphi|_{B_\gamma} \equiv 0,\ \ (\Delta \varphi)|_{B_\beta \setminus B_\gamma} \equiv 0,\]
where $\overline{\xi} = \xi - \widetilde{\xi}$, then control the Radon-Nikodym derivative of $\mathbf{P}_{B_\beta}^\xi$ with respect to $\mathbf{Q}_{B_\beta}^{\widetilde{\xi},\varphi}$ integrated over $\mathcal {K}_\gamma$, where we recall that $\mathbf{Q}_{B_\beta}^{\widetilde{\xi},\varphi}$ is the law of $h^{\widetilde{\xi}} - \varphi$ and $h^{\widetilde{\xi}} \sim \mathbf{P}_{B_{\beta}}^{\widetilde{\xi}}$. Repeated applications of Jensen's inequality (Lemma \ref{ni::lem::prob_ratio_bound}) shows that this quantity is bounded from below by:
\[ \mathbf{E}^{\widetilde{\xi}} \left[ \exp\left( \sum_{b \in B_\beta^*} \mathbf{E}_{\mathcal {K}_\gamma}^{\xi,\widetilde{\xi}}\bigg[ c(b) \nabla \overline{h}^{\xi,\widetilde{\xi}}(b) \nabla \varphi(b) + O(\mathcal {E}(b)) \bigg] \right) \mathbf{1}_{\mathcal {K}_\gamma} \right],\]
where $\mathbf{E}_{\mathcal {K}_\gamma}^{\xi,\widetilde{\xi}}$ is the stationary coupling of $\mathbf{P}_{B_\beta}^\xi$ and $\mathbf{P}_{B_\beta}^{\widetilde{\xi}}[ \cdot | \mathcal {K}_\gamma]$. We will then show (Lemma \ref{ni::lem::rn_bound}) that the expectation in the exponential is bounded on $\mathcal {A}_C = \{ \max_{x \in \partial B_\beta} |\overline{\xi}(x)| \leq C,\ \max_{b \in \partial B_\beta^*} |\nabla \overline{\xi}(b)| \leq C/r\}$, though the estimate deteriorates as we increase $C$. Integrating the result over $(\xi,\widetilde{\xi})$ leaves us with an inequality of the form
\[ \mathbf{P}_D^{\phi}[ \widetilde{\mathcal {K}}_\gamma | \mathcal {K}] \geq c_1\mathbf{E}[ \mathbf{P}_{B_\beta}^{\widetilde{\xi}}[ \widetilde{\mathcal {K}}_\gamma] \mathbf{1}_{\mathcal {A}_C} | \mathcal {K}, \widetilde{\mathcal {K}}].\]
We end the proof (Lemma \ref{ni::lem::density_ratio_bound}) by showing that there exists another event $\mathcal {B}$, whose probability is uniformly bounded from $0$, such that we have
\[ \mathbf{P}_{B_\beta}^{\widetilde{\xi}}[\widetilde{\mathcal {K}}_\gamma] \mathbf{1}_{\mathcal {B}} \geq c_2 \mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[ \widetilde{\mathcal {K}}_\gamma | \widetilde{\mathcal {K}}] \mathbf{1}_{\mathcal {B}}.\]
This completes the proof since we can make $\mathbf{P}[\mathcal {A}_C|\mathcal {K}, \widetilde{\mathcal {K}}]$ as close to $1$ as we like by choosing $C$ large enough, hence we can ensure that $\mathbf{P}[ \mathcal {A}_C \cap \mathcal {B} | \mathcal {K}, \widetilde{\mathcal {K}}]$ is uniformly bounded from zero.
\begin{lemma}[Bounded Coupling]
\label{ni::lem::bounded_coupling}
Assume the hypotheses of Proposition \ref{ni::prop::ni} except we replace the restrictions on the geometry of $U, \widetilde{U}$ with the following. Suppose that $U \setminus B, \widetilde{U} \setminus B$ are connected and intersect $B_2, \partial B_3$ and $U, \widetilde{U}$ do not intersect $H = B \setminus B_\gamma$, and $U \cap B_\gamma, \widetilde{U} \cap B_\gamma$ are either empty or connected and have non-empty intersection with $B_{\gamma/2}$ and $\partial B_{\gamma}$. Fix $\epsilon > 0$ so that Theorem \ref{harm::thm::coupling} of \cite{M10} holds. Consider the coupling of $(h|\mathcal {K}, \widetilde{h} | \widetilde{\mathcal {K}})$ given by:
\begin{enumerate}
\item Sampling $(\zeta,\widetilde{\zeta}) \equiv (h|\mathcal {K},\widetilde{h}|\widetilde{\mathcal {K}})|_{\partial H \times \partial H}$ according to any given coupling,
\item Conditional on $\{\| \zeta\|_\infty + \| \widetilde{\zeta} \|_\infty \leq (\log r)^2\}$, resample $(h|\mathcal {K},\widetilde{h}|\widetilde{\mathcal {K}})$ in $H_\epsilon = H(r^{1-\epsilon})$ according to the coupling of Theorem \ref{harm::thm::coupling} of \cite{M10} \label{ni::item::boundary_good}.
\end{enumerate}
Then $\mathbf{E}[\max_{x \in H^\alpha} |\overline{h}(x)|^p] = O_{\alpha,\overline{\Lambda},p}(1)$ for every $p \geq 1$ and $\alpha > 0$ where $H^\alpha = H(\alpha r)$.
\end{lemma}
\begin{proof}
Our hypotheses on the geometry of $U \setminus B$ imply that Lemma \ref{te::lem::expectation} applies for $h|\mathcal {K}$ on all of $B$. This similarly holds for $\widetilde{h}|\widetilde{\mathcal {K}}$ on $B$, so we consequently have $\mathbf{P}[\mathcal {E}] = O_{\overline{\Lambda}}(r^{-100})$ for $\mathcal {E} = \{\| \zeta\|_\infty + \| \widetilde{\zeta}\|_\infty > (\log r)^2\}$. From Theorem \ref{harm::thm::coupling} of \cite{M10}, we know that on the event $\mathcal {E}^c$ the harmonic coupling of $h|\mathcal {K} ,\widetilde{h}| \widetilde{\mathcal {K}}$ in $H_\epsilon$ is such that with $\widehat{g}$ the harmonic extension of $\overline{h} = h|\mathcal {K}-\widetilde{h}|\widetilde{\mathcal {K}}$ from $\partial H_\epsilon$ to $H_\epsilon$ and $\mathcal {H} = \{\overline{h} = \widehat{g} \text{ in } H_\epsilon\}$ we have $\mathbf{P}[\mathcal {H}^c | \mathcal {E}^c] = O_{\overline{\Lambda}}(r^{-\delta})$, some $\delta > 0.$ It suffices to prove
\[ \mathbf{E}[\max_{x \in H^\alpha} |\overline{h}(x)|^p | \mathcal {H},\mathcal {E}^c] = O_{\alpha,\overline{\Lambda},p}(1).\]
Indeed, by Lemma \ref{te::lem::expectation} we know that $\mathbf{E}[\max_{x \in H^\alpha} |h(x)|^{2p} | \mathcal {K}] = O( (\log r)^{2p})$ and likewise for $\widetilde{h}$. Hence, by the Cauchy-Schwarz inequality,
\[ \mathbf{E}[\max_{x \in H^\alpha} |\overline{h}(x)|^p (\mathbf{1}_{\mathcal {H}^c} + \mathbf{1}_{\mathcal {E}})]\]
is negligible in comparison to the bound we seek to establish.
Let $g,\widetilde{g}$ be the harmonic extensions of $h,\widetilde{h}$ from $\partial H_\epsilon$ to $H_\epsilon$. Then it in turn suffices to show that $\mathbf{E}[ \max_{x \in H^\alpha} |g(x)|^p | \mathcal {K}] = O_{\alpha, \overline{\Lambda},p}(1)$ and likewise with $\widetilde{g}$ in place of $g$. Let $W = V \cup V_-$ and $D_{W} = D \setminus W$. Let $\partial_1,\partial_2$ be the parts of $\partial D_W$ which do and do not intersect $\partial D$, respectively. Let $h^{D_W}$ have the law of the GL model on $D_W$ with $h^{D_W}|_{\partial_1} \equiv h|_{\partial_1}$, $h^{D_W}|_{\partial_2} \equiv \overline{\Lambda}$, and the same conditioning as $h|\mathcal {K}$ otherwise. Lemma \ref{gl::lem::stoch_dom} implies that we can find a coupling of $h^{D_W}, h|\mathcal {K}$ such that $h^{D_W} \geq h|\mathcal {K}$ almost surely. Hence letting $g^W$ be the harmonic extension of $h^{D_W}$ from $\partial H_\epsilon$ to $H_\epsilon$, we have that $g^W \geq g|\mathcal {K}$ almost surely. Of course, we can do exactly the same thing except removing $W' = V \cup V_+$ rather than $W$ and setting the corresponding boundary condition to $-\overline{\Lambda}$. This leaves us with the lower bound $h|\mathcal {K} \geq h^{D_{W'}}$ and, with $g^{W'}$ the corresponding harmonic function, we have $g|\mathcal {K} \geq g^{W'}$. Thus since $|g|^p|\mathcal {K} \leq (2^p)(|g^W|^p + |g^{W'}|^p)$, it suffices to show that
\begin{equation}
\label{ni::eqn::bounded_reduction}
\mathbf{E}[ \max_{x \in H^\alpha} |g^W(x)|^p] = O_{\alpha, \overline{\Lambda},p}(1),
\end{equation}
and likewise for $g^{W'}$.
Applying the maximum principle to the harmonic function $\mathbf{E}[g^W(x)]$ along with Lemma \ref{hic::lem::exp_upper_bound} implies $\max_{x \in H_\epsilon} |\mathbf{E}[g^W(x)]| = O_{\overline{\Lambda}}(1)$. Hence to prove \eqref{ni::eqn::bounded_reduction}, we need to prove
\begin{equation}
\label{ni::eqn::bounded_reduction_centered}
\mathbf{E}[ \max_{x \in H^\alpha} |g^W(x) - \mathbf{E}[g^W(x)]|^p] = O_{\alpha,\overline{\Lambda},p}(1).
\end{equation}
Fix $p \geq 1$ and let $g_p^W(x)$ be the harmonic extension of $|g^W(x) - \mathbf{E}[g^W(x)]|^p$ from $\partial H_\epsilon$ to $H_\epsilon$. For $x \in H_\epsilon$ and $y \in \partial H_\epsilon$, let $p(x,y)$ be the probability that a simple random walk initialized at $x$ first exits $H_\epsilon$ at $y$. Since
\[ |g^W(x) - \mathbf{E}[g^W(x)]|^p = \left|\sum_{y \in \partial H_\epsilon} p(x,y)(g^W(y) - \mathbf{E}[g^W(y)]) \right|^p,\]
Jensen's inequality implies $|g^W(x) - \mathbf{E}[g^W(x)]|^p \leq g_p^W(x)$. Fix $y_0 \in H^\alpha$. By the Harnack inequality, there exists $C_1 = C_1(\alpha)$ such that
\[ \max_{x \in H^\alpha} |g_p^W(x)| \leq C_1 g_p^W(y_0)\]
since $g_p^W \geq 0$. Hence we need to bound $\mathbf{E}[ g_p^W(y_0)]$ which, by the maximum principle, is bounded by $\max_{x \in \partial H^\alpha} \mathbf{E}[ |g^W(x) - \mathbf{E}[g^W(x)]|^p]$.
We can bound this moment using the Brascamp-Lieb inequality (Lemma \ref{bl::lem::bl_inequalities}). To this end, let $G_{D_W}(x,y)$ be the Green's function for simple random walk on $D_W$. For $x \in H_\epsilon$, note that
\begin{align*}
\sum_{y \in \partial H_\epsilon} G_{D_W}(x,y) = O(r).
\end{align*}
The reason for this is that the expected amount of time a random walk started at $x$ spends in $\partial H_\epsilon$ before exiting $B_3$ is $O(r)$ and the expected number of times a random walk reenters $B$ after exiting $B_3$ before hitting $U$, hence $W$, is stochastically dominated by a geometric random variable with parameter $\rho_0 > 0$ by Lemma \ref{symm_rw::lem::beurling} of \cite{M10}. We also have that $p(x,z) = O_{\alpha}(r^{-1})$ uniformly in $x \in H^{\alpha}$ and $z \in \partial H_\epsilon$. Hence
\begin{align}
\label{ni::eqn::var_bound}
\sum_{z_1,z_2 \in \partial H_\epsilon} p(x,z_1) p(x,z_2) G_{D_W}(z_1,z_2) = O_{\alpha}(1).
\end{align}
Combining with the Brascamp-Lieb inequality (Lemma \ref{bl::lem::bl_inequalities}) implies the result since the expression on the left side of \eqref{ni::eqn::var_bound} is exactly the variance of the corresponding boundary average of the DGFF on $D_W$.
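To make this last step explicit (a sketch): $g^W(x) - \mathbf{E}[g^W(x)]$ is the linear functional $\sum_{z \in \partial H_\epsilon} p(x,z)(h^{D_W}(z) - \mathbf{E}[h^{D_W}(z)])$ of the field, and the Brascamp-Lieb inequality bounds its variance by a constant times the variance of the same functional of the DGFF on $D_W$:
\[ \operatorname{Var}\left( \sum_{z \in \partial H_\epsilon} p(x,z)\, h^{D_W}(z) \right) \leq c \sum_{z_1,z_2 \in \partial H_\epsilon} p(x,z_1)\, p(x,z_2)\, G_{D_W}(z_1,z_2) = O_{\alpha}(1). \]
The bounds on the higher centered moments follow similarly, since the Brascamp-Lieb inequality also yields uniform Gaussian tails for such linear functionals.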
\end{proof}
\begin{lemma}
\label{ni::lem::prob_ratio_bound}
Suppose $F \subseteq \mathbf{Z}^2$ is bounded, $E \subseteq F$, $\mathcal {A} \in \mathcal {F}_E = \sigma(h(x) : x \in E)$, $\psi, \widetilde{\psi} \colon \partial F \to \mathbf{R}$, and $\varphi \colon F \to \mathbf{R}$ satisfies $\varphi|_E \equiv 0$, $\varphi|_{\partial F} = \psi - \widetilde{\psi}$. Then we have that
\begin{align*}
\mathbf{P}_{F}^\psi[\mathcal {A}]
\geq \mathbf{E}^{\widetilde{\psi}} \left[ \exp\left( \sum_{b \in F^*} \mathbf{E}_{\mathcal {A}}^{\psi,\widetilde{\psi}}\bigg[ c(b) \nabla \overline{h}(b) \nabla \varphi(b) + O(\mathcal {E}(b)) \bigg] \right) \mathbf{1}_{\mathcal {A}} \right]
\end{align*}
where $\mathbf{E}_\mathcal {A}^{\psi,\widetilde{\psi}}$ is the expectation under any coupling of $\mathbf{P}_F^{\psi}$ and $\mathbf{P}_F^{\widetilde{\psi}}[\cdot | \mathcal {A}]$,
\[ c(b) = \mathcal {V}''(\nabla h^{\widetilde{\psi}}(b)) \text{ and } \mathcal {E}(b) = |\nabla \overline{h}(b)|^2|\nabla \varphi(b)| + (\nabla \varphi(b))^2.\]
\end{lemma}
\begin{proof}
Let $\mathcal {Z}, \widetilde{\mathcal {Z}}$ be the normalization constants that appear in the densities of $\mathbf{P}_F^\psi, \mathbf{P}_F^{\widetilde{\psi}}$ with respect to Lebesgue measure. Recall that $\mathbf{Q}_F^{\widetilde{\psi},-\varphi}$ denotes the law of $(h^{\widetilde{\psi}} + \varphi)$ for $h^{\widetilde{\psi}} \sim \mathbf{P}_F^{\widetilde{\psi}}$ and $\mathbf{E}_\mathbf{Q}^{\widetilde{\psi},-\varphi}$ is the corresponding expectation. Note that the normalization constant of $\mathbf{Q}_F^{\widetilde{\psi},-\varphi}$ is also $\widetilde{\mathcal {Z}}$. We compute,
\begin{align}
\mathbf{P}_{F}^\psi[\mathcal {A}]
=& \frac{\widetilde{\mathcal {Z}}}{\mathcal {Z}} \mathbf{E}_{\mathbf{Q}}^{\widetilde{\psi},-\varphi}\left[ \exp\left(\sum_{b \in F^*} [ \mathcal {V}(\nabla (h - \varphi) \vee \widetilde{\psi}(b)) - \mathcal {V}( \nabla h \vee \psi(b))] \right) \mathbf{1}_{\mathcal {A}} \right] \notag\\
=&\frac{\widetilde{\mathcal {Z}}}{\mathcal {Z}} \mathbf{E}^{\widetilde{\psi}}\left[ \exp\left(\sum_{b \in F^*} [ \mathcal {V}(\nabla h(b)) - \mathcal {V}( \nabla (h +\varphi)(b))] \right) \mathbf{1}_{\mathcal {A}} \right] \label{ni::eqn::rn}
\end{align}
Since $\varphi \equiv 0$ on $E$, the part of the summation over $b \in E^*$ is identically zero. Let $A = F \setminus E$. By Jensen's inequality, the expression in \eqref{ni::eqn::rn} is bounded from below by
\begin{align}
\frac{\widetilde{\mathcal {Z}}}{\mathcal {Z}} \mathbf{E}^{\widetilde{\psi}}\left[ \exp\left(\sum_{b \in A^*} \mathbf{E}^{\widetilde{\psi}}[ \mathcal {V}(\nabla h(b)) - \mathcal {V}( \nabla (h +\varphi)(b)) \big| \mathcal {A} ] \right) \mathbf{1}_{\mathcal {A}} \right].
\end{align}
Applying a first order Taylor expansion to $\mathcal {V}$ about $\nabla h(b)$ and using that $\mathcal {V}''$ is uniformly bounded, we can rewrite our formula for $\mathbf{P}_F^\psi[\mathcal {A}]$ as
\begin{align}
\frac{\widetilde{\mathcal {Z}}}{\mathcal {Z}} \mathbf{E}^{\widetilde{\psi}}\left[ \exp\left(\sum_{b \in A^*} \bigg[ \mathbf{E}^{\widetilde{\psi}}[ -\mathcal {V}'(\nabla h(b)) | \mathcal {A}] \nabla \varphi(b) + O( (\nabla \varphi(b))^2) \bigg] \right) \mathbf{1}_{\mathcal {A}} \right] \label{ni::eqn::rn_lb}.
\end{align}
Applying exactly the same procedure but with $\mathbf{P}_F^{\widetilde{\psi}}, \mathbf{Q}_F^{\widetilde{\psi},-\varphi}$ replaced by $\mathbf{P}_F^{\psi}, \mathbf{Q}_F^{\psi,\varphi}$, respectively, and $\mathcal {A}$ by the whole sample space, we also have
\begin{align}
\frac{\widetilde{\mathcal {Z}}}{\mathcal {Z}} \geq \exp\left( \sum_{b \in A^*} \bigg[ \mathbf{E}^\psi[\mathcal {V}'(\nabla h(b))] \nabla \varphi(b) + O( (\nabla \varphi(b))^2) \bigg] \right). \label{ni::eqn::constant_lb}
\end{align}
Combining \eqref{ni::eqn::rn_lb} and \eqref{ni::eqn::constant_lb} with \eqref{ni::eqn::rn} yields that $\mathbf{P}_F^\psi[\mathcal {A}]$ is bounded from below by
\begin{align*}
\mathbf{E}^{\widetilde{\psi}} \left[ \exp\left( \sum_{b \in A^*} \bigg[ \big(\mathbf{E}^{\psi}[\mathcal {V}'(\nabla h(b))] - \mathbf{E}^{\widetilde{\psi}}[\mathcal {V}'(\nabla h(b)) | \mathcal {A}] \big)\nabla \varphi(b) + O((\nabla \varphi(b))^2)\bigg] \right) \mathbf{1}_{\mathcal {A}} \right].
\end{align*}
Fixing a coupling $(h^\psi, h^{\widetilde{\psi}})$ of $\mathbf{P}_F^\psi$, $\mathbf{P}_F^{\widetilde{\psi}}[ \cdot | \mathcal {A}]$ and setting $\overline{h} = h^\psi - h^{\widetilde{\psi}}$, with $\mathbf{E}_\mathcal {A}^{\psi,\widetilde{\psi}}$ denoting the corresponding expectation, another application of Taylor's formula implies
\[ (\mathbf{E}^{\psi}[\mathcal {V}'(\nabla h(b))] - \mathbf{E}^{\widetilde{\psi}}[\mathcal {V}'(\nabla h(b)) | \mathcal {A}]) \nabla \varphi(b) = \mathbf{E}_{\mathcal {A}}^{\psi,\widetilde{\psi}}[ c(b) \nabla \overline{h}(b) \nabla \varphi(b) + O(\mathcal {E}(b))],\]
which, when combined with the previous expression, proves the lemma.
\end{proof}
We say that $F \subseteq \mathbf{Z}^2$ with ${\rm diam}(F) < \infty$ is $C$-stochastically regular if
\[ \mathbf{P}_x[ |X_\tau - x| \geq s] \leq \frac{Cs}{{\rm diam}(F)}\]
for every $x \in F$ with ${\rm dist}(x, \partial F) = 1$ where $X$ is a simple random walk and $\tau$ is its time of first exit from $F$. We also define the norm
\[ \| \psi \|_{F}^{\nabla} = \max_{x \in \partial F} |\psi(x)| + {\rm diam}(F) \left(\max_{\stackrel{x,y \in \partial F}{x \neq y}} \frac{|\psi(x) - \psi(y)|}{|x-y|} \right)\]
on the space of functions $\{ \psi \colon \partial F \to \mathbf{R}\}$.
\begin{lemma}
\label{ni::lem::rn_bound}
Suppose that $F \subseteq \mathbf{Z}^2$ with $r = {\rm diam}(F) < \infty$ is $C$-stochastically regular. Assume that $E \subseteq \mathbf{Z}^2$, $E_\alpha = \cup_{x \in E} B(x,\alpha r)$, $E_{2C^{-1}} \subseteq F$, $A=F \setminus E'$ with $E' = E_{C^{-1}}$ is also $C$-stochastically regular and, for each $k,\delta > 0$, the number of balls of radius $r^{k\delta}$ required to cover $A(r^{k\delta}, r^{(k+1)\delta})$ is $O(r^{1-(k-1)\delta})$. Suppose that $\widetilde{U}_E \subseteq E$ corresponds to a system of conditioning $(\widetilde{a}_E,\widetilde{b}_E)$ satisfying $(C)$ and let $\widetilde{\mathcal {K}}_E = \cap_{x \in E} \{ \widetilde{a}_E(x) \leq h^{\widetilde{\psi}}(x) \leq \widetilde{b}_E(x) \}$. Let $\psi, \widetilde{\psi} \colon \partial F \to \mathbf{R}$ satisfy $\| \psi - \widetilde{\psi}\|_{F}^\nabla \leq C$. Suppose $\varphi \colon F \to \mathbf{R}$ is harmonic off of $E'$, $\varphi|_{E'} \equiv 0$, and $\| \varphi \|_{F}^\nabla \leq C$. Let $(h^\psi, h^{\widetilde{\psi}}|\widetilde{\mathcal {K}}_E)$ denote the stationary coupling of $\mathbf{P}_F^{\psi}, \mathbf{P}_F^{\widetilde{\psi}}[\cdot|\widetilde{\mathcal {K}}_E]$ and $\mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}$ the corresponding expectation. Using the notation $c(b)$ and $\mathcal {E}(b)$ from the previous lemma, we have that
\[ \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}} \left[ \sum_{b \in F^*} [ c(b) \nabla \overline{h}(b) \nabla \varphi(b) + O(\mathcal {E}(b)) ]\right] = O_C(1).\]
\end{lemma}
\begin{proof}
With $\overline{\psi} = \psi - \widetilde{\psi}$, let
\[ \alpha_j = \max \{| \overline{\psi}(z_1) - \overline{\psi}(z_2)| : z_1, z_2 \in \partial F, |z_1-z_2| \leq j\}\]
and note that $|\alpha_{j+1} - \alpha_j| \leq C r^{-1}$ since $\|\overline{\psi}\|_F^\nabla \leq C$. Fix $b = (x,y) \in \partial F^*$. Letting $X$ be the random walk of Subsection \ref{subsec::rw_difference}, $\tau$ its time of first exit from $A$, $p_j = \mathbf{P}_y[ |X_\tau-x| \geq j]$, $J= r/C$, and $M = \max_{x \in F}\big( |h^\psi(x)| + |(h^{\widetilde{\psi}} |\widetilde{\mathcal {K}}_E)(x)| \big)$, an application of summation by parts implies
\begin{equation}
\label{ni::eqn::sbp}
|\nabla \overline{h}(b)| \leq \sum_{j=0}^{J-1} (p_{j+1} - p_j) \alpha_j + p_J M \leq \sum_{j=0}^{J-1} |\alpha_{j}-\alpha_{j-1}| p_j + 2p_J M.
\end{equation}
Lemma \ref{symm_rw::lem::beurling} of \cite{M10} implies $p_j = O(j^{-\rho_{\rm B}})$ for $\rho_{\rm B} = \rho_{\rm B}(\mathcal {V}) \in (0,1)$, hence the right side of \eqref{ni::eqn::sbp} is bounded by
\[ \sum_{j=1}^{J-1} O_C( j^{-\rho_{\rm B}} r^{-1}) + O_C(r^{-\rho_{\rm B}}M ) = O_C(r^{-\rho_{\rm B}}(1+M)).\]
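The first sum here is evaluated by comparison with an integral (recall $J = r/C$ and $\rho_{\rm B} \in (0,1)$):
\[ \sum_{j=1}^{J-1} j^{-\rho_{\rm B}} r^{-1} = O\big( r^{-1} J^{1-\rho_{\rm B}} \big) = O\big( r^{-1} (r/C)^{1-\rho_{\rm B}} \big) = O_C\big( r^{-\rho_{\rm B}} \big). \]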
Lemma \ref{te::lem::expectation} implies that $\mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}[M^p] = O_{\overline{\Lambda}}( (\log r)^{p})$. The Nash continuity estimate (Lemma \ref{symm_rw::lem::nash_continuity_bounded} of \cite{M10}) implies that $\nabla h(b) = O_{C}( M r^{-\rho_{\rm NC}})$ uniformly in $b \in \partial (E')^*$. Consequently, with $\rho = \rho_{\rm NC} \wedge \rho_{\rm B}$, the energy inequality \eqref{gl::eqn::ee_limit} along with Cauchy-Schwarz implies
\begin{equation}
\label{ni::eqn::ee_bound}
\sum_{b \in A^*} \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}[ |\nabla \overline{h}(b)|^2]
\leq c_1 \sum_{b \in \partial A^*} \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}[ | \nabla \overline{h}(b)| |\overline{h}(x_b)|] = O_{C}(r^{1+\epsilon-\rho}).
\end{equation}
The hypotheses of the lemma imply $|\nabla \varphi(b)| = O_{C}(r^{-1})$ uniformly in $b \in F^*$, hence
\[ \sum_{b \in F^*} \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}[ \mathcal {E}(b)] = O_{C}(1).\]
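Indeed, $\mathcal {E}(b)$ vanishes on the edges interior to $E'$ since $\varphi|_{E'} \equiv 0$, so (a sketch, assuming $\epsilon < \rho$)
\[ \sum_{b \in F^*} \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}[ \mathcal {E}(b)] \leq O_C(r^{-1}) \sum_{b \in A^*} \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}[ |\nabla \overline{h}(b)|^2] + |A^*| \, O_C(r^{-2}) = O_C(r^{\epsilon - \rho}) + O_C(1) = O_C(1), \]
by \eqref{ni::eqn::ee_bound} and $|A^*| = O(r^2)$.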
Thus to prove the lemma we need to control
\[ \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}\left[ \sum_{b \in A^*} c(b) \nabla \overline{h}(b) \nabla \varphi(b) \right].\]
We will first argue that the contribution coming from the terms near $\partial A$ is negligible. Using \eqref{ni::eqn::ee_bound} along with Cauchy-Schwarz and that $|A^* \setminus A^*(r^{\rho/2})| = O(r^{1+\rho/2})$, we have
\begin{align}
&\mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}\left[ \sum_{b \in A^* \setminus A^*(r^{\rho/2})} |\nabla \overline{h}(b) \nabla \varphi(b)| \right] \notag\\
=& \sqrt{O_C(r^{1+\epsilon-\rho}) O_C(r^{-1+\rho/2})} = O_{C}(r^{\epsilon/2-\rho/4}) \label{ni::eqn::boundary_error}.
\end{align}
We now handle the interior term. Let $(h_t^\psi,(h^{\widetilde{\psi}}|\widetilde{\mathcal {K}}_E)_t)$ denote the dynamics of the stationary coupling. Fixing $\delta > 0$, by hypothesis each of the annuli $A(r^{k\delta},r^{(k+1)\delta})$ can be covered by $O(r^{1-(k-1)\delta})$ balls of radius $r^{k\delta}$. On such a ball $Q$, Theorem \ref{cd::thm::cd} of \cite{M10} implies that
\begin{align}
&\mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}} \left[ \sum_{b \in Q^*} \mathcal {V}''(\nabla h^{\widetilde{\psi}}(b))\nabla \overline{h}(b) \nabla \varphi(b) \right] \notag\\
=& \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}} \left[\sum_{b \in Q^*} c_{\mathcal {V}} \nabla \overline{h}(b) \nabla \varphi(b) \right] + O_{C}(r^{\epsilon+k\delta(1-\rho_{\rm CD})-1}). \label{ni::eqn::interior_error}
\end{align}
Thus summing over a covering of $A(r^{k\delta},r^{(k+1)\delta})$ by such balls yields an error of $O_{C}(r^{\epsilon+\delta-k \delta \rho_{\rm CD}})$. The exponent is negative for the relevant values of $k$ since the boundary term includes those annuli with $k \delta < \rho/2$. That is, we may assume $k \delta \geq \rho/2$ and, since we are free to choose $\epsilon, \delta > 0$ as small as we like, we also assume that $\rho > \rho_{\rm CD}^{-1} 10^{10}(\epsilon+\delta)$. Combining \eqref{ni::eqn::boundary_error} with \eqref{ni::eqn::interior_error} implies that there exists non-random $c_\mathcal {V},\overline{\rho} > 0$ depending only on $\mathcal {V}$ such that
\begin{align*}
\mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}\left[ \sum_{b \in A^*} \mathcal {V}''(\nabla h^{\widetilde{\psi}}(b)) \nabla \overline{h}(b) \nabla \varphi(b) \right]
= \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}}\left[ \sum_{b \in A^*} c_\mathcal {V} \nabla \overline{h}(b) \nabla \varphi(b) \right] + O_C(r^{-\overline{\rho}}).
\end{align*}
Summing by parts and using the harmonicity of $\varphi$, we see that the expectation on the right hand side is bounded from above by
\[ c_{\mathcal {V}} \mathbf{E}_{\widetilde{\mathcal {K}}_E}^{\psi,\widetilde{\psi}} \left[ \sum_{b \in \partial A^*} |\nabla \varphi(b)|| \overline{h}_0(x_b)| \right] = O_C(1).\]
\end{proof}
\begin{lemma}
\label{ni::lem::density_ratio_bound}
Suppose that we have the same setup as Lemma \ref{ni::lem::bounded_coupling} and fix $\beta \in (\gamma,1)$. Let $f$ and $g$ be the densities of $\xi = (h|\mathcal {K})|_{\partial B_\beta}$ and $\widetilde{\xi} = (\widetilde{h} | \widetilde{\mathcal {K}})|_{\partial B_\beta}$ with respect to Lebesgue measure on $\mathbf{R}^{|\partial B_\beta|}$, respectively. There exist $\delta_i = \delta_i(\beta, \gamma, \overline{\Lambda}) > 0$, $i = 1,2$, such that
\[ \mathbf{P}_{D}^{\phi}\left[ \frac{g(\xi)}{f(\xi)} \geq \delta_1 \bigg| \mathcal {K} \right] \geq \delta_2.\]
\end{lemma}
\begin{proof}
Since $f$ and $g$ are probability densities, we have the trivial bound
\[ \int_{\mathbf{R}^{|\partial B_\beta|}} \left| \frac{f(z)}{g(z)} - 1 \right| g(z) dz \leq 2.\]
Hence applying Markov's inequality for $\mathcal {B}^c$ where
\[ \mathcal {B} = \left\{ \xi : \frac{f(\xi)}{g(\xi)} \leq 200\right\} =
\left\{ \xi : \frac{g(\xi)}{f(\xi)} \geq \frac{1}{200}\right\},\]
we have that $\mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[\mathcal {B}| \widetilde{\mathcal {K}}] \geq \frac{49}{50}$. Our goal now is to convert this into a lower bound on $\mathbf{P}_D^{\phi}[\mathcal {B} | \mathcal {K}]$.
Assume that $0 < \alpha < \alpha'$ are chosen sufficiently small so that $\partial B_\beta \subseteq H^{\alpha'}$. Assume that $h,\widetilde{h}$ are coupled together in $H_\epsilon$ as in the setup of Lemma \ref{ni::lem::bounded_coupling}. Letting $(\zeta_{\alpha'},\widetilde{\zeta}_{\alpha'}) = (h, \widetilde{h})|_{\partial H^{\alpha'} \times \partial H^{\alpha'}}$ and $\overline{\zeta}_{\alpha'} = \zeta_{\alpha'} - \widetilde{\zeta}_{\alpha'}$, Lemma \ref{ni::lem::bounded_coupling} implies that with $\mathcal {A}_C = \{ \| \overline{\zeta}_{\alpha'}\|_{H^{\alpha'}}^{\nabla} \leq C\}$,
we can make $\mathbf{P}[\mathcal {A}_C]$ as close to $1$ as we like by choosing $C,r$ sufficiently large. Let $\varphi \colon H^{\alpha'} \to \mathbf{R}$ be the solution of the boundary value problem
\[ \varphi|_{\partial H^{\alpha'}} \equiv \overline{\zeta}_{\alpha'},\ \ \varphi|_{\partial B_\beta} \equiv 0,\ \ \Delta \varphi|_{H^{\alpha'} \setminus \partial B_\beta} \equiv 0.\]
By Lemma \ref{harm::lem::entropy_form} of \cite{M10} and with $\mathbf{H}(\cdot|\cdot)$ denoting relative entropy, we know that
\begin{align}
&\mathbf{H}(\mathbf{P}_{H^{\alpha'}}^{\zeta_{\alpha'}} | \mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}) + \mathbf{H}(\mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}|\mathbf{P}_{H^{\alpha'}}^{\zeta_{\alpha'}}) \notag\\
=& \sum_{b \in (H^{\alpha'})^*} \mathbf{E}^{\zeta_{\alpha'},\widetilde{\zeta}_{\alpha'}}\big[ c(b) \nabla \overline{h}(b) \nabla \varphi(b) + O(\mathcal {E}(b)) \big] \label{ni::eqn::ni_ent_bound}
\end{align}
with $c(b), \mathcal {E}(b)$ as in Lemma \ref{ni::lem::prob_ratio_bound}. On $\mathcal {A}_C$, Lemma \ref{ni::lem::rn_bound} implies that \eqref{ni::eqn::ni_ent_bound} is of order $O_C(1)$. By the non-negativity of the relative entropy, this implies $\mathbf{H}(\mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}|\mathbf{P}_{H^{\alpha'}}^{\zeta_{\alpha'}}) \mathbf{1}_{\mathcal {A}_C} =O_C(1) \mathbf{1}_{\mathcal {A}_C}$. Hence invoking the elementary entropy inequality (see the proof of \cite[Lemma 5.4.21]{DS89}), which holds for any event $Q$,
\[ \mathbf{P}_{H^{\alpha'}}^{\zeta_{\alpha'}}[Q]
\geq \exp\left( - \frac{\mathbf{H}(\mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}|\mathbf{P}_{H^{\alpha'}}^{\zeta_{\alpha'}}) + e^{-1}}{\mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}[Q]} \right) \mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}[Q]\]
we have the lower bound
\begin{align}
\mathbf{P}_{H^{\alpha'}}^{\zeta_{\alpha'}}[\mathcal {B}]
\geq \exp\left( - \frac{O_C(1)}{\mathbf{P}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'}}[\mathcal {B}]} \right) \mathbf{P}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'}}[\mathcal {B}] \mathbf{1}_{\mathcal {A}_C} \label{ni::eqn::ni_ent_ineq_bound}.
\end{align}
Note that we used $\varphi|_{\partial B_\beta} \equiv 0$ to conclude $\mathbf{P}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'}}[\mathcal {B}] = \mathbf{Q}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'},\varphi}[\mathcal {B}]$.
As $\mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[ \mathcal {B} | \widetilde{\mathcal {K}}] = \mathbf{E}^{\widetilde{\phi}}[\mathbf{P}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'}}[\mathcal {B}] | \widetilde{\mathcal {K}}] \geq 49/50$, we have
\[ \mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[ \{\mathbf{P}_{H^{\alpha'}}^{\widetilde{\zeta}_{\alpha'}}[\mathcal {B}] \geq 1/2\} | \widetilde{\mathcal {K}}] \geq c_2(\beta,\gamma,\overline{\Lambda}) > 0.\]
Consequently, taking expectations of both sides of \eqref{ni::eqn::ni_ent_ineq_bound} over $(\zeta_{\alpha'},\widetilde{\zeta}_{\alpha'})$ conditional on $\mathcal {K}, \widetilde{\mathcal {K}}$, we see that $\mathbf{P}_D^\phi[\mathcal {B} | \mathcal {K}] \geq \delta_2(\beta,\gamma,\overline{\Lambda}) > 0$, as desired.
\end{proof}
\subsection{Proof of Proposition \ref{ni::prop::ni}}
Assume that $h|\mathcal {K},\widetilde{h} | \widetilde{\mathcal {K}}$ are coupled together as in the setup of Lemma \ref{ni::lem::bounded_coupling}. Let $\beta = (1+\gamma)/2$, $A = B_\beta \setminus B_\gamma$, and $(\xi, \widetilde{\xi}) = (h|\mathcal {K},\widetilde{h}|\widetilde{\mathcal {K}})|_{\partial B_\beta \times \partial B_\beta}$. Let $\mathbf{E}_{\widetilde{\mathcal {K}}_\gamma}^{\xi,\widetilde{\xi}}$ denote the expectation under the stationary coupling of $\mathbf{P}_{B_\beta}^{\xi}$ and $\mathbf{P}_{B_\beta}^{\widetilde{\xi}}[\cdot | \widetilde{\mathcal {K}}_\gamma]$, set $\overline{\xi} = \xi - \widetilde{\xi}$, and let $\mathcal {A}_C = \{ \| \overline{\xi}\|_{B_\beta}^{\nabla} \leq C\}.$
Let $\varphi \colon B_\beta \to \mathbf{R}$ be the solution of the boundary value problem
\[ \varphi |_{\partial B_\beta} \equiv \overline{\xi},\ \ \varphi|_{B_\gamma} \equiv 0,\ \ (\Delta \varphi)|_{A} \equiv 0.\]
By the definition of $\mathcal {A}_C$ and the harmonicity of $\varphi$, we have that
\begin{equation}
\label{ni::eqn::phi_grad_bound}
\max_{b \in A^*} | \nabla \varphi(b)| \mathbf{1}_{\mathcal {A}_C} = O_{C}(r^{-1}) \mathbf{1}_{\mathcal {A}_C}.
\end{equation}
Taking $F = B_\beta$, $E = B_\gamma$, $\mathcal {A} = \widetilde{\mathcal {K}}_\gamma$ in Lemma \ref{ni::lem::prob_ratio_bound} combined with Lemma \ref{ni::lem::rn_bound} implies that
\begin{equation}
\label{ni::eqn::initial_bound}
\mathbf{P}_{B_\beta}^\xi[\mathcal {K}_\gamma] \geq \exp\left( -O_{\gamma,\overline{\Lambda},C}(1) \right) \mathbf{P}_{B_\beta}^{\widetilde{\xi}}[\widetilde{\mathcal {K}}_\gamma] \mathbf{1}_{\mathcal {A}_C}.
\end{equation}
To finish the proof of Proposition \ref{ni::prop::ni} it suffices to prove the existence of non-random $c > 0$ so that
\[ \mathbf{P}_{B_\beta}^{\widetilde{\xi}}[\widetilde{\mathcal {K}}_\gamma] \mathbf{1}_{\mathcal {A}_C} \geq c\mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[\widetilde{\mathcal {K}}_\gamma] \mathbf{1}_{\mathcal {A}_C}.\]
Let $f$ denote the density of $\widetilde{\xi} = (\widetilde{h}|\widetilde{\mathcal {K}})|_{\partial B_\beta}$ and $g$ be the density of $(\widetilde{h}| \widetilde{\mathcal {K}}_\gamma \cap \widetilde{\mathcal {K}}) |_{\partial B_\beta}$, both with respect to Lebesgue measure on $\mathbf{R}^{|\partial B_\beta|}$. The Markovian structure of the field implies that the events $\widetilde{\mathcal {K}}, \widetilde{\mathcal {K}}_\gamma$ are independent conditional on $\widetilde{\xi}$. Consequently, by Bayes' rule we have
\[ \frac{g(\widetilde{\xi})}{f(\widetilde{\xi})} = \frac{\mathbf{P}_{B_\beta}^{\widetilde{\xi}}[\widetilde{\mathcal {K}}_\gamma]}{\mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[ \widetilde{\mathcal {K}}_\gamma | \widetilde{\mathcal {K}}]}, \ \ \ \text{hence}\ \ \ \mathbf{P}_{B_\beta}^{\widetilde{\xi}}[\widetilde{\mathcal {K}}_\gamma] \mathbf{1}_{\mathcal {A}_C} = \frac{g(\widetilde{\xi})}{f(\widetilde{\xi})} \mathbf{P}_{\widetilde{D}}^{\widetilde{\phi}}[\widetilde{\mathcal {K}}_\gamma | \widetilde{\mathcal {K}}] \mathbf{1}_{\mathcal {A}_C}.\]
Since we can make $\mathbf{P}[\mathcal {A}_C]$ as close to $1$ as we like by increasing $C, r$, it thus suffices to show that $g(\widetilde{\xi}) / f(\widetilde{\xi})$ is uniformly bounded away from zero with uniformly positive probability. This is exactly the statement of Lemma \ref{ni::lem::density_ratio_bound}.
\qed
\section{Completing the Proof}
\label{sec::bvi}
We will now explain how the estimates of Sections \ref{sec::gl}-\ref{sec::ni} can be put together to prove Theorems \ref{intro::thm::approximate_martingale} and \ref{intro::thm::scaling_limit}. Both proofs follow the strategy of \cite[Subsections 3.5-3.7]{SS09}, so we will only give an overview of how everything fits together in our setting and refer the reader to \cite{SS09} for more details.
\subsection{Scaling Limits}
\begin{figure}[h]
\centering
\subfigure[$Y_1$ and $Y_2$ are barriers and $\widetilde{\gamma}^n$ is part of the interface in the setup of Theorem \ref{intro::thm::sle_convergence}. If $\gamma^n$ does not cross $Y_1, Y_2$, then the strands of $\widetilde{\gamma}^n$ are forced to connect in the red region.]{
\includegraphics[height=.55\textwidth]{figures/barriers.pdf}
\label{bvi::fig::barrier}}
\hspace{0.05in}
\subfigure[Using barriers, it is possible to show that internal and external configurations in which the strands are well-separated connect with probability proportional to $(\log R)^{-1}$.]{
\includegraphics[height=.55\textwidth]{figures/hookup.pdf} \label{bvi::fig::hookup}}
\caption{Typical applications of the Barriers Theorem.}
\end{figure}
The proof of Theorem \ref{intro::thm::scaling_limit} has two main inputs: Proposition \ref{ni::prop::ni} and the notion of a \emph{barrier}, developed in \cite[Subsection 3.4]{SS09}. Roughly, the latter is a deterministic curve $Y$ which, with uniformly positive probability (u.p.p.), $\gamma^n$ does not cross. Barriers can be used in conjunction with each other to prove that, with u.p.p., $\gamma^n$ must pass through certain regions. A typical usage is illustrated in Figure \ref{bvi::fig::barrier}. The black line labeled $\widetilde{\gamma}^n$ indicates the two strands of $\gamma^n$ from the setup of Theorem \ref{intro::thm::sle_convergence} emanating from $x,y$ and the thin lines $Y_1,Y_2$ indicate barriers. The Barriers Theorem \cite[Theorem 3.11]{SS09} implies that, conditional on $\widetilde{\gamma}^n$, with u.p.p. $\gamma^n$ does not pass through $Y_1, Y_2$. On this event, the two strands of $\widetilde{\gamma}^n$ are forced to connect since $\gamma^n$ is connected. The proof of \cite[Theorem 3.11]{SS09} has some dependencies on the specific structure of the DGFF. The modifications necessary to transfer the result to our setting are deferred to Subsection \ref{subsec::nob}.
Barriers can be used in combination with Proposition \ref{ni::prop::ni} to prove Theorem \ref{intro::thm::scaling_limit}. Assume that we are in the setting of Theorem \ref{intro::thm::scaling_limit}, fix $v_0 \in D_n(\gamma,t,\epsilon)$, $r > 0$, let $X$ be a simple random walk on $\tfrac{1}{n} \mathbf{Z}^2$ initialized at $v_0$ independent of $h^n$, and let $\tau(r)$ be the first time $X$ gets within distance $r n^{-1}$ of $\gamma^n$ with respect to the internal metric of $D_n \setminus \gamma[0,t]$. Fix $R > r$, which we assume does not vary with $n$ and is much smaller than $n$. We define the \emph{internal configuration} $\Theta_{r,R}$ of $\gamma^n$ and $X$ as seen from $X_{\tau(r)}$ to be the pair $({\rm int}_{r,R}(\gamma^n), {\rm int}_{r,R}(\breve{X}))$ where ${\rm int}_{r,R}(\gamma^n)$ is the connected component of $\gamma^n \cap B(X_{\tau(r)},Rn^{-1})$ with ${\rm dist}({\rm int}_{r,R}(\gamma^n), X_{\tau(r)}) = rn^{-1}$ re-centered at $X_{\tau(r)}$; ties are broken according to some fixed but unspecified convention. We remark that by \cite[Lemma 3.17]{SS09} with high probability there is only one such component. Here, ${\rm int}_{r,R}(\breve{X})$ is the time reversal of $X$ starting at $X_{\tau(r)}$ up until its first exit from $B(X_{\tau(r)},Rn^{-1})$, then translated by $-X_{\tau(r)}$. The \emph{external configuration} $\Phi_{r,R}$ of $\gamma^n$ and $X$ as seen from $X_{\tau(r)}$ is the pair $({\rm ext}_{r,R}(\gamma^n), {\rm ext}_{r,R}(X))$ along with data associated with $D_n$ and the boundary conditions of $h^n$. Here, ${\rm ext}_{r,R}(\gamma^n)$ consists of the two connected components of $\gamma^n \setminus B(X_{\tau(r)},Rn^{-1})$ containing $x_n$ and $y_n$, re-centered at $X_{\tau(r)}$, and ${\rm ext}_{r,R}(X)$ is $X$ stopped at its first hitting time of $B(X_{\tau(r)},Rn^{-1})$, re-centered at $X_{\tau(r)}$.
Fix $w \in D_n$ with ${\rm dist}(w,\partial D_n)$ much larger than $R n^{-1}$ and let $\mathcal {Z}_w = \{ X_{\tau(r)} = w\}$. Let $\zeta = (\beta, \breve{Y}_w)$ be a configuration consisting of an oriented curve $\beta$ in $D_n^*$ coming exactly within distance $rn^{-1}$ to $w$ whose endpoints are contained in $\partial B(w,R n^{-1})$ and $\breve{Y}_w$ a path in $D_n$ connecting $w$ to $\partial B(w,R n^{-1})$. Let $\mathcal {K}_\beta$ be the event that $\beta$ is an oriented zero-height interface of $h^n$. Let $\Phi_{r,R}(w)$ be the external configuration at $w$. That is, $\Phi_{r,R}(w) = ({\rm ext}_{r,R}(\gamma^n;w), {\rm ext}_{r,R}(X;w))$, along with the data associated with $D_n$ and the boundary conditions of $h^n$, where ${\rm ext}_{r,R}(\gamma^n;w)$ consists of the two connected components of $\gamma^n$ emanating from $x_n,y_n$ until first hitting $B(w,R n^{-1})$ and ${\rm ext}_{r,R}(X;w)$ is the initial segment of $X$ up until it first hits $B(w,R n^{-1})$. Let $\breve{X}_w$ denote the time reversal of $X$ starting from when it first hits $w$ to its first exit from $B(w,R n^{-1})$. Using $+w$ to denote translation by $w$, we can write
\begin{align}
& \mathbf{P}[ \Theta_{r,R} = \zeta-w | \mathcal {Z}_w, \Phi_{r, 2R}] \notag\\
=& \mathbf{P}[ \mathcal {K}_\beta, \breve{X}_w = \breve{Y}_w | \mathcal {Z}_w, \Phi_{r, 2R}(w)]
= \frac{\mathbf{P}[ \mathcal {K}_\beta, \breve{X}_w = \breve{Y}_w, \mathcal {Z}_w | \Phi_{r, 2R}(w)]}{\mathbf{P}[\mathcal {Z}_w | \Phi_{r, 2R}(w)]} \notag\\
=& \frac{\mathbf{P}[ \mathcal {Z}_w | \mathcal {K}_\beta, \breve{X}_w = \breve{Y}_w, \Phi_{r, 2R}(w)]}{\mathbf{P}[\mathcal {Z}_w | \Phi_{r, 2R}(w)]} \mathbf{P}[\mathcal {K}_\beta, \breve{X}_w = \breve{Y}_w | \Phi_{r,2R}(w)]. \label{scaling::decomp}
\end{align}
It is an immediate consequence of Proposition \ref{ni::prop::ni} that
\begin{equation}
\label{scaling::ni}
\mathbf{P}[ \mathcal {K}_\beta, \breve{X}_w = \breve{Y}_w | \Phi_{r, 2R}(w)] \asymp q(\zeta)
\end{equation}
for some function $q$, where $a \asymp b$ for $a,b > 0$ means that there exists a universal constant $C > 0$ such that $C^{-1} b \leq a \leq C b$; see \cite[Lemma 3.13]{SS09}.
We are now going to explain how the numerator in \eqref{scaling::decomp} can be estimated when the strands of $\beta, \breve{X}_w$, and $\Phi_{r,2R}(w)$ are \emph{well-separated}. This means that the three points where ${\rm ext}_{r,2R}(\gamma^n;w)$ and ${\rm ext}_{r,2R}(X;w)$ enter $B(w,2Rn^{-1})$ are at distance at least $\epsilon R n^{-1}$, for some fixed $\epsilon > 0$, from each other and likewise for the exit points of $\beta$ and $\breve{X}_w$ from $B(w, Rn^{-1})$. Such configurations are said to be of \emph{high-quality}. The hypothesis that our configurations are of high-quality allows for the application of barriers to show that the strands of ${\rm ext}_{r,2R}(\gamma^n;w)$ and $\beta$ connect with each other with u.p.p. and, using standard random walk estimates, ${\rm ext}_{r,2R}(X;w)$ hooks up with $\breve{X}_w$ without touching $\gamma^n$ with probability proportional to $(\log R)^{-1}$. That is,
\begin{equation}
\label{scaling::hq_hookup}
\mathbf{P}[\mathcal {Z}_w | \mathcal {K}_\beta, \breve{X}_w = \breve{Y}_w, \Phi_{r,2R}(w)] \asymp \frac{1}{\log R}.
\end{equation}
Indeed, the reason for the latter is that the law of $X$ conditional on ${\rm ext}_{r,2R}(X;w)$ and $\breve{X}_w = \breve{Y}_w$ is that of the concatenation of ${\rm ext}_{r,2R}(X;w)$, a random walk $\widehat{X}$ initialized at the first entrance point of ${\rm ext}_{r,2R}(X;w)$ to $\partial B(w,2R n^{-1})$ and stopped when it first hits $z_0$, the first exit point of $\breve{X}_w$ from $B(w, R n^{-1})$, some number $N$ of random walk excursions $\widehat{X}^i$ from $z_0$ back to itself, and $\breve{X}_w$. It is easy to see that with u.p.p., $\widehat{X}$ gets within distance $\tfrac{\epsilon}{100} R n^{-1}$ of $z_0$ without hitting the barriers or $\gamma^n$ and, conditional on this, the probability that $\widehat{X}$ hits $z_0$ before hitting $\gamma^n$ is proportional to $(\log R)^{-1}$. It is also not difficult to see that $N$ is geometric with parameter proportional to $(\log R)^{-1}$ and the probability that a given $\widehat{X}^i$ hits $\gamma^n$ before $z_0$ is again proportional to $(\log R)^{-1}$, so the two factors exactly cancel. See Figure \ref{bvi::fig::hookup} for an illustration of this event and \cite[Lemma 3.14]{SS09} for a precise statement of this result in the case $r=0$.
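To spell out the cancellation in the last step, here is a heuristic computation, assuming for simplicity that the excursions are independent of one another (in the actual argument this holds only approximately): if $N$ is geometric with parameter $p \asymp (\log R)^{-1}$, so that $\mathbf{P}[N = k] = p(1-p)^k$ for $k \geq 0$, and each excursion $\widehat{X}^i$ avoids $\gamma^n$ with probability $1-q$ where $q \asymp (\log R)^{-1}$, then
\[ \mathbf{E}\big[(1-q)^N\big] = \sum_{k \geq 0} p(1-p)^k (1-q)^k = \frac{p}{1-(1-p)(1-q)} = \frac{p}{p+q-pq} \asymp 1,\]
since $p \asymp q$. That is, the probability that none of the excursions hits $\gamma^n$ is uniformly positive in $R$.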
\begin{figure}
\centering
\includegraphics[height=.4\textwidth]{figures/separation.pdf}
\caption{An illustration of the separation lemma, that poorly separated strands with positive probability become well-separated, in the special case of the external configuration.}
\label{bvi::fig::separation}
\end{figure}
In order for \eqref{scaling::hq_hookup} to be useful and also to estimate the denominator of \eqref{scaling::decomp}, we need high-quality configurations to occur with u.p.p. This is the purpose of the so-called ``separation lemma,'' the second important ingredient in the proof of Theorem \ref{intro::thm::scaling_limit}, which states that $\Theta_{r,2R}$ and $\Phi_{r,3R}$ are of high-quality with u.p.p. conditional on $\Theta_{r,R}, \Phi_{r,4R}$ as well as $\mathcal {Z}_w$, regardless of their quality. See Figure \ref{bvi::fig::separation} for an illustration as well as \cite[Lemma 3.15]{SS09} for the precise statement when $r = 0$. The idea of the proof is to invoke the Barriers Theorem iteratively along with some random walk estimates to show that the strands tend to spread apart. Exactly the same proof works for $r > 0$.
\begin{figure}[h]
\centering
\subfigure[External configurations $\Phi_{r,2R}$ (black) and $\Phi_{r,2R}'$ (blue) and one internal configuration $\Theta_{r,R}$ (red). The separation lemma combined with \eqref{scaling::decomp} implies that $\mathbf{P}\text{[}\Theta_{r,R} = \zeta | \mathcal {Z}_w, \Phi_{r,2R}\text{]}$ is comparable to $\mathbf{P}\text{[}\Theta_{r,R} = \zeta | \mathcal {Z}_w, \Phi_{r,2R}'\text{]}$.]{
\includegraphics[width=.45\textwidth]{figures/coupling.pdf}}
\hspace{0.05in}
\subfigure[If a step of the coupling argument is successful, then the external configurations in the next step agree in a large annulus, hence future steps in the coupling are more likely to be successful.]{
\includegraphics[width=.45\textwidth]{figures/coupling2.pdf}}
\caption{A typical step in the coupling argument in the proof of Theorem \ref{intro::thm::scaling_limit}.}
\label{bvi::fig::coupling}
\end{figure}
Combining the separation lemma with \eqref{scaling::hq_hookup} implies that the estimate analogous to \eqref{scaling::hq_hookup} holds with arbitrary internal and external configurations. This can also be used to give an estimate of the denominator of \eqref{scaling::decomp}. Putting everything together thus implies the approximate independence of $\Theta_{r,R}$ from $\Phi_{r,2R}$:
\[ \mathbf{P}[ \Theta_{r,R} = \zeta | \mathcal {Z}_w, \Phi_{r, 2R}] \asymp \mathbf{P}[ \Theta_{r,R} = \zeta | \mathcal {Z}_w, \Phi_{r, 2R}']\]
uniformly in $\Phi_{r,2R}, \Phi_{r,2R}'$. See \cite[Corollary 3.16]{SS09}, where we again emphasize that Proposition \ref{ni::prop::ni} takes the role of \cite[Proposition 3.7]{SS09}.
Theorem \ref{intro::thm::scaling_limit} can now be proved using the following iterative coupling argument. Fix $R_1$ very large and let $(R_k : k \geq 1)$ be a sequence decreasing appropriately quickly so that the previous lemmas always apply for the pairs $(r,R_k)$ and $(r,R_{k+1})$. We shall assume that we are always conditioning on $\mathcal {Z}_w$ where $w$ satisfies ${\rm dist}(w, \partial D) \geq 100 R_1 n^{-1}$. Suppose we start with two arbitrary external configurations $\Phi_{r,R_1},\Phi_{r,R_1}'$, which we emphasize could come from different domains, boundary conditions, starting points of $X$, or all three. We couple $\Theta_{r,R_2}|\Phi_{r,R_1}$, the conditional law of $\Theta_{r,R_2}$ given $\Phi_{r,R_1}$, and $\Theta_{r,R_2}'|\Phi_{r,R_1}'$ to maximize the probability of success, that is, $\Theta_{r,R_2} = \Theta_{r,R_2}'$. In subsequent steps, we couple $\Theta_{r,R_{\ell+1}}|\Phi_{r,R_\ell}$ and $\Theta_{r,R_{\ell+1}}'|\Phi_{r,R_\ell}'$ to maximize the probability of success. This probability is always uniformly positive regardless of whether or not previous stages of coupling have succeeded. Thus we can make the probability that there is at least one successful coupling as close to $1$ as desired by choosing $R_1$ large enough to allow for sufficiently many steps of this procedure. Conditional on the event that there is at least one success, the probability that the terminal internal configurations agree is also very close to $1$, depending on the rate at which the $R_\ell$ are decreasing, since then the external configurations at some step agree in a large annulus. After rescaling by $n$, the theorem follows. A typical step of this procedure is illustrated in Figure \ref{bvi::fig::coupling}. See Lemmas 3.19, 3.20, as well as Theorem 3.21 of \cite{SS09} for a proof when $r=0$.
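To quantify the effect of taking $R_1$ large, suppose (as guaranteed by the discussion above) that each coupling step succeeds with probability at least $p > 0$, uniformly over the outcomes of all previous steps. If the sequence $(R_k)$ allows for $m$ coupling steps, then
\[ \mathbf{P}[\text{no step succeeds}] \leq (1-p)^m,\]
which can be made as small as desired by choosing $R_1$, and hence $m$, sufficiently large.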
Note that ${\rm int}_{r,R}(\gamma^n) \cap B(0,\widetilde{r} n^{-1})$, $\widetilde{r} > r$ much smaller than $R$, is not necessarily the same as $\gamma^n \cap B(X_{\tau(r)}, \widetilde{r} n^{-1}) - X_{\tau(r)}$ since it could be that $\gamma^n$ makes multiple excursions from $B(X_{\tau(r)}, R n^{-1})$ to $B(X_{\tau(r)}, \widetilde{r} n^{-1})$. This possibility is ruled out with high probability by \cite[Lemma 3.17]{SS09}.
The argument we have described thus far gives Theorem \ref{intro::thm::scaling_limit} in the special case that the $\mathcal {F}_t^n$-stopping time $\tau$ is when $\gamma^n$ hits $y_n$. We can repeat the same procedure for general $\tau$ provided we make the following modifications. We let $\tau(r)$ be the first time that $X$ gets within distance $rn^{-1}$ of $\gamma^n|_{[0,\tau]}$ and change the definitions of $\Theta_{r,R}$ and $\Phi_{r,R}$ accordingly. Exactly the same coupling procedure goes through provided $w$ is far from both $\partial D_n$ as well as the tip $\gamma^n(\tau)$; we just need to check that the result does not depend on $\tau$. This is indeed the case since it is not difficult to see that the coupling works even if the initial external configurations $\Phi_{r,R}, \Phi_{r,R}'$ arise from stopping $\gamma^n$ at different times $\tau,\tau'$.
\subsection{Boundary Values}
\begin{figure}
\centering
\includegraphics[width=.40\textwidth]{figures/harmonic_measure.pdf}
\caption{The local geometry of $\gamma^n$ as seen from $X_{\tau(r)}$ looks like $\gamma_r$, the bi-infinite path sampled from $\nu_r$ of Theorem \ref{intro::thm::scaling_limit}, provided $X_{\tau(r)}$ is sufficiently far from the tip of $\gamma^n$ and $\partial D_n$. Thus it is important in the proof of Theorem \ref{intro::thm::approximate_martingale} to show that such regions have small harmonic measure.}
\end{figure}
We will now explain how Theorem \ref{intro::thm::approximate_martingale} can be proved by combining Theorems \ref{intro::thm::harmonic_up_to_boundary} and \ref{intro::thm::scaling_limit}. Fix $r \geq 0$ and let $\gamma_r$ be a bi-infinite path in $(\mathbf{Z}^2)^*$ sampled according to the probability measure $\nu_r$ of Theorem \ref{intro::thm::scaling_limit}. Let $V_+(\gamma_r)$ be the set of vertices in $\mathbf{Z}^2$ which are adjacent to $\gamma_r$ and are contained in the same connected component of $\mathbf{Z}^2 \setminus \gamma_r$ as $0$ and $V_-(\gamma_r)$ the set of all other vertices adjacent to $\gamma_r$. From Proposition \ref{hic::prop::hic}, it is clear that we can construct a random field $h_r \colon \mathbf{Z}^2 \to \mathbf{R}$ that, given $\gamma_r$, has the law of the GL model on $\mathbf{Z}^2$ conditioned on the events
\[ \bigcap_{x \in V_+(\gamma_r)} \{ h_r(x) > 0\}\ \text{ and } \bigcap_{x \in V_-(\gamma_r)} \{ h_r(x) < 0\}.\]
Let
\begin{equation}
\label{bvi::eqn::lambda}
\lambda_r = \mathbf{E}[h_r(0)].
\end{equation}
Let $\mathcal {F}_t^n = \sigma(\gamma^n(s) : s \leq t)$ and let $\tau$ be any $\mathcal {F}_t^n$-stopping time. Fix a point $v_0 \in D_n(\gamma,\tau,\epsilon)$, let $X$ be a random walk initialized at $v_0$, and let $\tau(r)$ be the first time it gets within distance $r n^{-1}$ of $\partial D_n$ or $\gamma^n[0,\tau]$. Theorem \ref{intro::thm::scaling_limit} implies that the local geometry of $\gamma^n$ near $X_{\tau(r)}$ looks like $\gamma_r$ provided $X_{\tau(r)}$ lands on the positive side of $\gamma^n$ and is neither close to the tip of $\gamma^n[0,\tau]$ nor to $\partial D_n$; in view of \cite[Lemmas 3.23 and 3.26]{SS09}, the complementary event happens with low probability. Letting $V_+^r(\gamma^n,\tau)$ (resp. $V_-^r(\gamma^n,\tau)$) be the set of vertices in $D_n$ with distance exactly $rn^{-1}$ from the positive (resp. negative) side of $\gamma^n[0,\tau]$, it thus follows from Proposition \ref{hic::prop::hic} that
\begin{equation}
\label{bvi::eqn::sample_mean}
\mathbf{E}\big[ (h^n(X_{\tau(r)}) - \lambda_r) \mathbf{1}_{\{X_{\tau(r)} \in V_+^r(\gamma^n,\tau)\}}\big]
\end{equation}
is small and similarly when $+$ and $-$ are swapped. It is then possible to show that
\begin{equation}
\label{bvi::eqn::sample_mean_cond}
\mathbf{E}\bigg[ \left( \mathbf{E}\big[ (h^n(X_{\tau(r)}) - \lambda_r) \mathbf{1}_{\{X_{\tau(r)} \in V_+^r(\gamma^n,\tau)\}} \big| \gamma^n[0,\tau] \big] \right)^2 \bigg]
\end{equation}
is small by considering independent copies, arguing that the corresponding random walks are unlikely to hit $\gamma^n[0,\tau]$ close to each other, and then invoking the approximate independence of internal and external configurations. It is important here that the internal configuration of one random walk is contained in the external configuration of the other, but this happens with high probability by \cite[Lemma 3.17]{SS09}. Theorem \ref{intro::thm::harmonic_up_to_boundary} then implies that, with $\psi_\tau^{n,r}$ the discrete harmonic function in $D_n(\gamma,\tau,rn^{-1})$ with boundary values $\pm \lambda_r$ on $V_\pm^r(\gamma^n,\tau)$ and the same boundary values as $h^n$ otherwise, and $M_\tau^n(x) = \mathbf{E}[ h^n(x) | \mathcal {F}_{\tau}^n]$, we have that
\begin{equation}
\label{bvi::eqn::harmonic_error}
\mathbf{E}\big[ \max_{x \in D_n(\gamma,\tau,\epsilon)} |M_\tau^n(x) - \psi_\tau^{n,r}(x)|\big] \leq \delta
\end{equation}
for $\delta = \delta(r)$ with $\lim_{r \to \infty} \delta(r) = 0$.
To finish proving Theorem \ref{intro::thm::approximate_martingale}, it remains to show that the sequence $(\lambda_r)$ has a positive and finite limit $\lambda$:
\begin{lemma}
There exists $\lambda = \lambda(\mathcal {V}) \in (0,\infty)$ such that
\[ \lim_{r \to \infty} \lambda_r = \lambda.\]
\end{lemma}
\begin{proof}
Suppose that $h_r, \gamma_r$ are as in the paragraph just above \eqref{bvi::eqn::lambda}. By construction, $0$ is separated from $V_-(\gamma_r)$ by $V_+(\gamma_r)$. Proposition \ref{hic::prop::exp_bounds} thus implies the existence of non-random $\lambda_0 = \lambda_0(\mathcal {V}) > 0$ such that $\lambda_0^{-1} \leq \mathbf{E}[h_r(0) | \gamma_r] \leq \lambda_0$, hence also $\lambda_0^{-1} \leq \lambda_r \leq \lambda_0$. Thus we just need to show that $(\lambda_r)$ is Cauchy, which in turn is an immediate consequence of \eqref{bvi::eqn::harmonic_error}. Indeed, we first fix $r_1,r_2 > 0$ then $n$ very large so that \eqref{bvi::eqn::harmonic_error} holds for both $\psi^{n,r_1}$ and $\psi^{n,r_2}$ simultaneously. We omit the subscript to indicate the functions corresponding to the entire path $\gamma^n$. Applying the triangle inequality, we have
\begin{equation}
\label{bvi::eqn::harmonic}
\mathbf{E}\big[ \max_{x \in D_n(\gamma,\epsilon)} |\psi^{n,r_1}(x) - \psi^{n,r_2}(x)| \big] \leq 2(\delta(r_1) \vee \delta(r_2)).
\end{equation}
Fix $x \in D_n(\gamma,\epsilon)$ very close to the positive side of $\gamma^n$ and away from $\partial D_n$. We can always make such a choice so that $|\psi^{n,r_i}(x) - \lambda_{r_i}| \leq \delta(r_1) \vee \delta(r_2)$ for $i = 1,2$, which combined with \eqref{bvi::eqn::harmonic} and the triangle inequality implies $|\lambda_{r_1} - \lambda_{r_2}| \leq 4(\delta(r_1) \vee \delta(r_2))$.
\end{proof}
\subsection{The Barriers Theorem for the GL Model}
\label{subsec::nob}
We conclude by explaining how to redevelop the relevant parts of \cite[Subsections 3.3-3.4]{SS09} so that the proof of \cite[Theorem 3.11]{SS09} is applicable in the GL setting. To keep the exposition compact, we will just indicate the necessary changes without repeating statements and proofs.
We begin with \cite[Lemma 3.8]{SS09}, ``Narrows.'' By the Brascamp-Lieb inequalities (Lemma \ref{bl::lem::bl_inequalities}), the conditional variance upper bound \cite[Equation (3.25)]{SS09} also holds for the GL model. The only claim that needs to be reproved is that $b \geq 0$ if $\delta > 0$ is sufficiently small since, while the inequality $\mathbf{E}[ h(u) | \mathcal {K}] \geq c^{-1}$ for $u \in V_+$ does hold, we do not have the linearity of the mean in its boundary values. Nevertheless, we get the desired result by invoking Proposition \ref{hic::prop::exp_bounds}. The rest of the proof is exactly the same. The ``Domain boundary narrows,'' \cite[Lemma 3.9]{SS09}, goes through with the same modifications.
We now turn to \cite[Lemma 3.10]{SS09}, ``Obstacle.'' One of the ingredients of the proof is equation (3.5) from \cite[Lemma 3.1]{SS09}. This can be established in the GL setting by writing the second moment as the sum of the variance and the square of the mean, bounding the former using the Brascamp-Lieb inequality and the corresponding bound in \cite{SS09}, then controlling the mean using Lemma \ref{hic::lem::exp_upper_bound}. Note that while our setting does not satisfy the hypotheses of Lemma \ref{hic::lem::exp_upper_bound}, the proof is in fact very general and still applies here. The proof in \cite{SS09} breaks down at (3.29) since we do not have the exact harmonicity of the mean. However, using Lemma \ref{hic::lem::harmonic_boundary} we can replace \cite[Equation (3.29)]{SS09} with
\[ \mathbf{E}\big[ h(x) | \mathcal {K}, \mathcal {Q}, \beta \big] \leq O_{\overline{\Lambda}}(1) + \frac{\|g\|_\infty}{100} - \sum_{u \in U'} p_u g(u).\]
Since we may assume without loss of generality that $\| g\|_\infty$ is larger than any fixed constant, we can replace the above with
\[ \mathbf{E}\big[ h(x) | \mathcal {K}, \mathcal {Q}, \beta \big] \leq \frac{\|g\|_\infty}{50} - \sum_{u \in U'} p_u g(u),\]
from which the rest of the proof goes through without any changes.
The remaining part of the proof of \cite[Theorem 3.11]{SS09} that needs modification is the application of \cite[Lemma 3.6]{SS09}, for which we offer the following substitute. Note that this result is in fact different from \cite[Lemma 3.6]{SS09} since we work with entropies as there is no analog of the Cameron-Martin formula for the GL model. Recall that $\mathbf{Q}_D^{\psi,g}$ is the law of $h^\psi - g$ for $h^\psi \sim \mathbf{P}_D^\psi$.
\begin{lemma}
\label{nob::lem::distortion}
Let $D \subseteq \mathbf{Z}^2$ be bounded and let $g \colon D \to \mathbf{R}$ satisfy $g = 0$ on $\partial D$. Then
\[ \mathbf{H}(\mathbf{P}_D^\psi|\mathbf{Q}_D^{\psi,g}) + \mathbf{H}(\mathbf{Q}_D^{\psi,g}|\mathbf{P}_D^\psi) \leq C \sum_{b \in D^*} |\nabla g(b)|^2\]
for $C = C(\mathcal {V})$. In particular, if $A$ is any event then
\begin{align*}
\exp\left(-\frac{C \sum_{b \in D^*} |\nabla g(b)|^2 + e^{-1}}{\mathbf{Q}^{\psi,g}[A]}\right) \leq
\frac{\mathbf{P}^\psi[A]}{\mathbf{Q}^{\psi,g}[A]} \leq
\exp\left(\frac{C \sum_{b \in D^*} |\nabla g(b)|^2 + e^{-1}}{\mathbf{P}^{\psi}[A]}\right).
\end{align*}
\end{lemma}
\begin{proof}
The latter claim is an immediate consequence of the first part of the lemma, the non-negativity of the entropy, and the entropy inequality (see the proof of \cite[Lemma 5.4.21]{DS89})
\[ \log\left( \frac{\mu(A)}{\nu(A)} \right) \geq - \frac{\mathbf{H}(\nu|\mu) + e^{-1}}{\nu(A)}.\]
Arguing as in the proof of Lemma \ref{harm::lem::entropy_form} of \cite{M10}, we have
\begin{align*}
& \mathbf{H}(\mathbf{Q}_D^{\psi,g}| \mathbf{P}_D^\psi) + \mathbf{H}(\mathbf{P}_D^{\psi}|\mathbf{Q}_D^{\psi,g})\\
=& \sum_{b \in D^*} \mathbf{E}^{\psi} \left( \int_0^1 \mathcal {V}'(\nabla (h + s g)(b)) ds - \int_0^1 \mathcal {V}'(\nabla [(h + (s-1)g) ](b))ds\right) \nabla g(b)\\
=& \sum_{b \in D^*} \left( \mathbf{E}^{\psi} \left( \int_0^1 \mathcal {V}'(\nabla h(b)) ds - \int_0^1 \mathcal {V}'(\nabla h(b))ds\right) \nabla g(b) + O( (\nabla g(b))^2) \right)\\
=& \sum_{b \in D^*} O( (\nabla g(b))^2).
\end{align*}
\end{proof}
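The entropy inequality can be sanity-checked numerically in the discrete setting. Note that the lemma only uses it with the sum of both relative entropies controlled, so it suffices to verify the weaker form $\log(\mu(A)/\nu(A)) \geq -(\mathbf{H}(\mu|\nu)+\mathbf{H}(\nu|\mu)+e^{-1})/\nu(A)$. A minimal sketch (plain Python; the distributions and events are arbitrary sample choices, not taken from the model):

```python
from math import log, e

def rel_entropy(mu, nu):
    # H(mu|nu) = sum_i mu_i log(mu_i / nu_i), for strictly positive weights
    return sum(m * log(m / n) for m, n in zip(mu, nu))

def check(mu, nu, A):
    # A is a set of indices; check the (symmetrised) entropy inequality on A
    mu_A = sum(mu[i] for i in A)
    nu_A = sum(nu[i] for i in A)
    lhs = log(mu_A / nu_A)
    rhs = -(rel_entropy(mu, nu) + rel_entropy(nu, mu) + 1 / e) / nu_A
    return lhs >= rhs

cases = [
    ([0.01, 0.99], [0.5, 0.5], {0}),
    ([0.3, 0.2, 0.5], [0.1, 0.6, 0.3], {0, 2}),
    ([0.9, 0.1], [0.2, 0.8], {1}),
]
results = [check(mu, nu, A) for mu, nu, A in cases]
```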
Using the same notation as in the proof in \cite{SS09}, in the paragraph after \cite[Equation (3.32)]{SS09}, we get the same bound
\[ \mathbf{P}\bigg[ \mathbf{P}[ \widehat{\gamma}_g \cap (\cup Y) = \emptyset | \mathcal {K}, h_U] > 1/10 \big| \mathcal {K} \bigg] > 1/10.\]
Suppose that $h_U$ is chosen so that the inner inequality holds. Invoking Lemma \ref{nob::lem::distortion} then implies
\begin{align*}
&\mathbf{P}[ \widehat{\gamma} \cap (\cup Y) = \emptyset| \mathcal {K}, h_U] =
\mathbf{Q}^{g}[\widehat{\gamma}_g \cap (\cup Y) = \emptyset | \mathcal {K}, h_U]\\
\geq& \frac{1}{10} \exp\left( - 10 C \sum_{b \in D^*} |\nabla g(b)|^2 - 10e^{-1} \right)
\geq \rho > 0
\end{align*}
where $\rho$ depends only on $\epsilon, \overline{\Lambda},m$. This is equivalent to the statement in \cite{SS09} that
\[ O_{\epsilon,\overline{\Lambda},m}(1) \mathbf{P}[ \widehat{\gamma} \cap (\cup Y) = \emptyset | \mathcal {K}, h_U] \geq 1.\]
\begin{comment}
\begin{lemma}
Let $U = V_+ \cup V_-$. Let $B \subseteq D$ be a disk whose radius $r$ is smaller than its distance to $U$. Assume that $B \cap V_D \neq \emptyset$. If $\epsilon > 0$ and $U$ has a connected component whose distance from $B$ is $R$ and whose diameter is at least $\epsilon R$, then
\begin{equation}
\label{nob::eqn::avg_bound}\mathbf{E} \left[ \left(|V_D \cap B|^{-1} \sum_{x \in V_D \cap B} h(x) \right)^2 \big| \mathcal {K} \right] \leq c + c' \log \frac{R}{r},
\end{equation}
where $c = c'(\epsilon)$.
\end{lemma}
\begin{proof}
Observe that
\begin{align*}
& \mathbf{E} \left[ \left(|V_D \cap B|^{-1} \sum_{x \in V_D \cap B} h(x) \right)^2 \big| \mathcal {K} \right]\\
=& |V_D \cap B|^{-2} \left( {\rm Var}\left( \sum_{x \in V_D \cap B} h(x) \right) + \left[\mathbf{E} \left( \sum_{x \in V_D \cap B} h(x) \right) \right]^2 \right)\\
\leq& C |V_D \cap B|^{-2} {\rm Var}^*\left( \sum_{x \in V_D \cap B} h(x) \right) + O_{\overline{\Lambda}}(1)
\end{align*}
by the Brascamp-Lieb inequalities.
The variance estimate for the Gaussian model is proved in Lemma 2.3 of \cite{SS09}.
\end{proof}
\end{comment}
\begin{comment}
\begin{proof}
Assume that $g(x_0) > 0$. Let $q = \| g \|_\infty / \|\nabla g \|_\infty$ and $r_1 = q/10$. Since between any two vertices $x,x'$ in $\mathbf{Z}^2$ there is a path in $\mathbf{Z}^2$ whose length is at most $2|x-x'|$,
\[ \min \{g(x) : x \in B(x_0,r_1)\} \geq g(x_0) - 2r_1 \| \nabla g \|_\infty = g(x_0) - \|g\|_\infty/5 \geq \|g\|_\infty/4.\]
Since $g = 0$ on $U$ and $g(x_0) \geq \|g \|_\infty/2$ it follows that $\epsilon^{-1} d \geq r_1$. Since we assume that $q > cr$, and we may assume that $c$ is a large constant which may depend on $\epsilon$, it follows that $d/r > 100$, say. Thus, we also assume, with no loss of generality that $\|g\|_\infty \geq \sqrt{c}$, since the required inequality is trivial otherwise.
Let $X$ denote the average value of $\varphi$ on the vertices in $B(x_0,r)$. The inequality in (3.5) of \cite{SS09} and $d/r > 100$ imply
\begin{align}
\label{nob::eqn::log_bound}
\mathbf{E}[ X^2 | \mathcal {K}] \leq O_{\epsilon,\overline{\Lambda}}(1) \log(d/r)
\end{align}
If $\gamma_1$ is not a closed path, we start exploring the interface $h+g$ containing $\gamma_1$ starting from one of the endpoints of $\gamma_1$ until that interface is completed or $B(x_0,r)$ is hit, whichever occurs first. If that interface is completed before we hit $B(x_0,r)$, we continue and explore the interface of $h+g$ containing $\gamma_2$, and so forth, until finally either all of $\hat{\gamma}_g$ is explored or $B(x_0,r)$ is hit. Let $\mathcal {Q}$ denote the event that $B(x_0,r)$ is hit, and let $\beta$ be the interfaces explored up to the time when the exploration terminates.
Lemma \ref{te::lem::expectation} implies that for $x \in B(x_0, r_1)$ with distance at most $\delta r_1$ from $\widehat{\gamma}_g$ we have
\[ \mathbf{E}[ h(x) | \mathcal {K}, \mathcal {Q}, \beta] \leq \lambda_0 - \frac{\| g \|_\infty}{4}.\]
Let $E = \{ x \in B(x_0,r_1) : {\rm dist}(x, \widehat{\gamma}_g \cup \partial B(x_0,r_1) \geq r^{1-\epsilon}\}$. Let $\widehat{h}$ be the harmonic extension of $\mathbf{E}[ h(x) | \mathcal {K},\mathcal {Q},\beta]$ from $\partial E$ to $E$. By Theorem \ref{harm::thm::mean} we know that $|\mathbf{E}[ h(x) | \mathcal {K}, \mathcal {Q}, \beta] - \widehat{h}(x)| \leq r_1^{-\delta}$ for all $x \in E$. By Lemma \ref{te::lem::expectation} we also have that $\mathbf{E}[ h(x) | \mathcal {K}, \mathcal {Q}, \beta] \leq C \overline{\Lambda}$ for some $C > 0$. Fix $x \in B(x_0,r) \cap E$. By Lemma \ref{symm_rw::lem::beurling} of \cite{M10} we know that the probability that a random walk started at $x$ makes it to $\partial B(x_0,r_1)$ before hitting $\widehat{\gamma}_g \cap B(x_0,r)$ is of order $O( \sqrt{r/r_1})$. Consequently, for such $x$ it is clear that $\widehat{h}(x) \leq -\| g \|_\infty/4$. If $x \in B(x_0,r) \setminus E$ then we get the same estimate from Lemma \ref{hic::lem::exp_bounds}, and therefore it holds for all $x \in E$. Therefore
\[ \mathbf{E}[ X^2 | \mathcal {K}, \mathcal {Q}] \geq \frac{\|g \|_\infty^2}{100}.\]
Since
\[ \mathbf{P}[\mathcal {Q}| \mathcal {K}] \leq \frac{\mathbf{E}[X^2 | \mathcal {K}]}{\mathbf{E}[ X^2 | \mathcal {Q}, \mathcal {K}]}\]
the lemma now follows from \eqref{nob::eqn::log_bound}.
\end{proof}
\end{comment}
\section*{Acknowledgements} I thank Amir Dembo and Scott Sheffield for very helpful comments on an earlier version of this manuscript, which led to many significant improvements in the exposition.
\bibliographystyle{acmtrans-ims.bst}
\section{Introduction}
The goal of this article is to review results in \cite{Pa2} and to obtain a complete characterisation of extreme points, similar to the one in \cite{Br}. There were two obstacles preventing us from obtaining such a characterisation there.
Firstly, amplifications were a big issue. We managed to prove a link between extreme points and ergodicity of the commutant largely due to Proposition 2.8, which proved that ergodicity is preserved under taking amplifications. In order to get an ``if and only if'' statement, one would need a converse to Proposition 2.8. Sadly, these kinds of questions proved rather hard to settle. Also, in \cite{Br}, there was no need to ask these questions, due to the diffuse nature of the hyperfinite factor, as opposed to our finite-dimensional object $P_n$ (permutation matrices). This problem will be solved by considering ``type $II_1$'' permutations, that is, the full group of an amenable type $II_1$ equivalence relation.
Secondly, it was hard to construct elements in the commutant of a sofic representation. Much to my surprise, there are sofic representations that act ergodically on the Loeb space (the sofic representation itself is ergodic, not its commutant). Such sofic representations are necessarily extreme points, and it seems that there are no tools to construct enough elements in the commutant to get ergodicity. This problem can only be solved by restricting the Loeb space to the commutant of the sofic representation.
Note however that the result in \cite{Pa2}, Theorem 2.10 is still useful in that form. Ergodicity of the commutant of a sofic representation is a question that appears in the study of sofic entropy, see \cite{Ke-Li}.
Throughout the article $\omega$ denotes, as usual, a free ultrafilter on $\mathbb{N}$. We work with $M_n(\mathbb{C})$, the algebra of matrices in dimension $n$, and its special subsets $P_n$, the subgroup of permutation matrices, and $D_n$, the subalgebra of diagonal matrices. This time we also need $R$, the unique hyperfinite type $II_1$ factor. We assume familiarity with ultraproducts of finite von Neumann algebras; the construction can be checked in many places in the literature.
\subsection{The convex structure on $Hom(N,R^\omega)$}
Let us recall the construction from \cite{Br}. Let $N$ be a separable type $II_1$ factor. The following theorem is a fruitful result in the theory of type $II_1$ factors:
\begin{te}\label{mcduff-jung}
The factor $N$ is the hyperfinite factor if and only if any two unital homomorphisms $\pi,\rho:N\to R^\omega$ are unitarily conjugate, i.e.
there exists $u\in\mathcal{U}(R^\omega)$ such that $\pi(a)=u\rho(a)u^*$ for all $a\in N$.
\end{te}
The direct implication is classic, while the converse is a more recent result due to Jung, \cite{Ju}. The question is now what happens outside of the hyperfinite world?
One can always consider the set:
\[Hom(N,R^\omega)=\{\pi:N\to R^\omega:\mbox{unital homomorphism}\}/\sim,\]
where $\sim$ is unitary conjugacy defined as in the statement of Theorem \ref{mcduff-jung}.
This space has a natural topology given by pointwise convergence in the weak topology of $R^\omega$. Due to the separability of $N$ this turns out to be a metrizable topology.
Ozawa showed in the appendix of \cite{Br} that for non-hyperfinite $N$ the space $Hom(N,R^\omega)$ is non-separable (given that $N$ satisfies Connes' Embedding Conjecture,
i.e. $Hom(N,R^\omega)$ is non-empty). This is quite an unpleasant fact, being just another example where the hyperfinite case is completely separated from the rest of the world.
Still, if you want to study the set $Hom(N,R^\omega)$, the first observation is that the direct sum operation constructs new elements out of old ones. Let $\pi,\rho:N\to R^\omega$
and consider the direct sum:
\[\pi\oplus\rho:N\to R^\omega\oplus R^\omega=(R\oplus R)^\omega.\]
Choose a unital embedding $\theta:R\oplus R\to R$ so that $\theta^\omega:(R\oplus R)^\omega\to R^\omega$. Then:
\[\theta^\omega\circ(\pi\oplus\rho):N\to R^\omega\]
is a unital embedding whose class in $Hom(N,R^\omega)$ can be different than the classes of $\pi$ and $\rho$.
The map $\theta:R\oplus R\to R$ is unital so $\theta(1\oplus 1)=1$. Notice that $\theta(1\oplus 0)$ is a projection in $R$. Let $\lambda=Tr(\theta(1\oplus 0))$. Then $Tr(\theta(0\oplus 1))$ must equal $1-\lambda$. Denote by $[\xi]$ the class in $Hom(N,R^\omega)$ of a unital morphism $\xi:N\to R^\omega$. We set by definition:
\[[\theta^\omega\circ(\pi\oplus\rho)]=\lambda[\pi]+(1-\lambda)[\rho].\]
The term $\lambda[\pi]+(1-\lambda)[\rho]$ is just formal notation for the element $[\theta^\omega\circ(\pi\oplus\rho)]$ that we constructed. Of course, there are some well-definedness issues to be settled, but nothing more than routine work. The last observation to be made is that one can construct a map $\theta$ for any prescribed $\lambda\in [0,1]$.
It can be checked that if $[\pi]\neq[\rho]$ and $\lambda\neq 1$ then $\lambda[\pi]+(1-\lambda)[\rho]\neq[\pi]$, as expected. Furthermore, there are some axioms, involving also a metric for the topology on $Hom(N,R^\omega)$, that have to be settled. Once these axioms are checked, one can deduce, due to a result by Capraro and Fritz (\cite{Ca-Fr}), that $Hom(N,R^\omega)$
together with its metric and convex structures can be regarded as an honest closed convex subset of a Banach space.
Now we can state the main result of Brown's theory.
\begin{te}
The class $[\pi]\in Hom(N,R^\omega)$ is an extreme point if and only if the relative commutant $N'\cap R^\omega$ is a factor.
\end{te}
This is a nice result describing the extreme points of this convex structure, but their existence is still an open problem.
\subsection{The space $Sof(G,P^\omega)$}
In \cite{Pa2}, we replaced the separable factor $N$ and Connes' Embedding Conjecture by a countable group $G$ and by the sofic property respectively. We review here the construction of $Sof(G,P^\omega)$ from that article; though this paper is pretty much self-contained, I assume some familiarity with notations and results from \cite{Pa2}.
We want to study embeddings of the group $G$ into the universal sofic group $\Pi_{k\to\omega}P_{n_k}$ that are ``trace preserving'', i.e. the trace of each nontrivial element is $0$. We call such morphisms \emph{sofic representations} of $G$. We first note that we have a similar result to Theorem \ref{mcduff-jung}, due to Elek and Szabo, \cite{El-Sz2}:
\begin{te}\label{elek-szabo}
The group $G$ is amenable if and only if for any two group morphisms $\Theta_1,\Theta_2:G\to \Pi_{k\to\omega}P_{n_k}$ such that $Tr(\Theta_i(g))=0$ for any $g\neq e$ and any $i=1,2$,
there exists $p\in\Pi_{k\to\omega}P_{n_k}$ such that $\Theta_2(g)=p\Theta_1(g)p^*$ for any $g\in G$.
\end{te}
In order to be able to construct a convex structure on the set of sofic representations, we have to be flexible, considering universal sofic groups over any sequence of dimensions $\{n_k\}_k$ such that $n_k\to\infty$. This brings in certain complications, as we want to compare sofic representations over different universal sofic groups $\Pi_{k\to\omega}P_{n_k}$ and $\Pi_{k\to\omega}P_{m_k}$.
We notice that for a sequence $\{r_k\}_k\subset\mathbb{N}^*$ the universal sofic group $\Pi_{k\to\omega}P_{n_k}$ canonically embeds in $\Pi_{k\to\omega}P_{n_kr_k}$ by tensoring by identity:
\[\Pi_{k\to\omega}P_{n_k}\ni\Pi_{k\to\omega}p_k\to\Pi_{k\to\omega}(p_k\otimes 1_{r_k})\in\Pi_{k\to\omega}P_{n_kr_k}.\]
Let $\Theta:G\to\Pi_{k\to\omega}P_{n_k}$ be a sofic approximation, $\Theta=\Pi_{k\to\omega}\theta_k$. We call an \emph{amplification} of $\Theta$ the composition of $\Theta$ with the above canonical map:
\[\Theta\otimes 1_{r_k}:G\to\Pi_{k\to\omega}P_{n_kr_k},\ \ (\Theta\otimes 1_{r_k})(g)=\Pi_{k\to\omega}\theta_k(g)\otimes 1_{r_k}.\]
The space of sofic representations is now:
\[Sof(G,P^\omega)=\{\Theta:G\to\Pi_{k\to\omega}P_{n_k}:\mbox{sofic representation } (n_k)_k\subset\mathbb{N},n_k\to\infty\}/\sim,\]
where $\sim$ is amplifications and conjugacy as in Theorem \ref{elek-szabo}. So two sofic representations are equivalent if they have amplifications that are conjugate. For $\Theta:G\to\Pi_{k\to\omega}P_{n_k}$ we denote by $[\Theta]_P$ its class in $Sof(G,P^\omega)$, to distinguish it from $[\Theta]_\mathcal{E}$ that we shall construct.
\section{Type $II_1$ permutations}
\subsection{The $I_n$ case} One way of discussing sofic objects is by starting with a probability measure preserving equivalence relation. Usually one takes the full equivalence relation on a space with $n$ elements, endowed with the normalised cardinal measure. The Feldman-Moore construction of this equivalence relation is the Cartan pair $D_n\subset M_n(\mathbb{C})$, where $D_n$ is the subalgebra of diagonal matrices. The full group of this type $I_n$ equivalence relation is $Sym(n)$, the symmetric group. It embeds in $M_n$ as $P_n$, the subgroup of permutation matrices. If $p\in Sym(n)$ then the corresponding element in $P_n$ is $\tilde p(i,j)=\delta_i^{p(j)}$; in other words, $\tilde p$ is the characteristic function of the graph of $p^{-1}$.
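These finite-dimensional objects are easy to experiment with. The sketch below (plain Python; the permutation and the diagonal element are arbitrary sample data) builds $\tilde p$ from $p$ and illustrates the fact, used in the next paragraph, that conjugation by a permutation matrix maps $D_n$ into $D_n$:

```python
# A permutation p of {0,...,n-1} is encoded as a list: p[i] is the image of i.
def perm_matrix(p):
    # the matrix (delta_i^{p(j)})_{i,j}, i.e. the characteristic
    # function of the graph of p^{-1}
    n = len(p)
    return [[1 if i == p[j] else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(a):
    n = len(a)
    return [[a[j][i] for j in range(n)] for i in range(n)]

p = [2, 0, 1]                               # the 3-cycle 0 -> 2 -> 1 -> 0
tp = perm_matrix(p)                         # the element of P_3 attached to p
d = [[7, 0, 0], [0, 3, 0], [0, 0, 5]]       # an element of D_3
conj = matmul(matmul(tp, d), transpose(tp)) # p d p^* is again diagonal
```

Here `conj` is again diagonal, with the entries of `d` permuted, illustrating that $P_n$ normalises $D_n$.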
The group $P_n$ is in the normaliser of $D_n$, i.e. if $p\in P_n$ and $a\in D_n$ then $pap^*\in D_n$. What we have here is just the symmetric group $P_n$ acting on a set with $n$ elements in the obvious way. Passing to ultraproducts, things become more interesting:
\[\Pi_{k\to\omega}P_{n_k}\curvearrowright\Pi_{k\to\omega}D_{n_k}.\]
The group $\Pi_{k\to\omega}P_{n_k}$ was introduced by Elek and Szabo (\cite{El-Sz1}) and it is called \emph{the universal sofic group}. A countable group is \emph{sofic} if and only if it is a subgroup of this group.
The algebra $\Pi_{k\to\omega}D_{n_k}$ is an abelian von Neumann algebra, isomorphic to $L^\infty(X_\omega,\mu_\omega)$, where $(X_\omega,\mu_\omega)$ is a Loeb measure space, i.e. an ultraproduct of probability spaces (for its construction see \cite{Lo} or \cite{El-Sze}).
I call the action itself $\Pi_{k\to\omega}P_{n_k}\curvearrowright\Pi_{k\to\omega}D_{n_k}$ \emph{the universal sofic action}. By definition, a standard action is \emph{sofic} if it can be embedded into a universal sofic action.
\subsection{The hyperfinite case} Instead of the space $\{1,\ldots,n\}$ with the normalised cardinal measure, we consider the unit interval $[0,1]$ endowed with the Lebesgue measure $\mu$. On this space consider the equivalence relation:
\[E=\{(x,y):x-y\in\mathbb{Q}\}\]
It is a standard fact that $E$ is a measurable, countable, $\mu$-preserving, ergodic, amenable equivalence relation. A consequence of the famous Connes-Feldman-Weiss theorem (\cite{CFW}) is the uniqueness of such an object: there is a unique ergodic, amenable type $II_1$ equivalence relation, corresponding to the unique hyperfinite type $II_1$ factor.
We denote by $[E]$ \emph{the full group} of $E$:
\[[E]=\{u:[0,1]\to[0,1]:u\mbox{ bijection }, (x,u(x))\in E\mbox{ for $\mu$-almost every $x$}\}.\]
We still have a Hamming distance on $[E]$ defined by $d_H(u,v)=\mu(\{x:u(x)\neq v(x)\})$. The Feldman-Moore construction of $E$, by definition, consists of some functions from $E$ to $\mathbb{C}$ (we do not go into details here, as they are not so important):
\[M(E)=\{f:E\to\mathbb{C}: f\mbox{ is a multiplier}\}.\]
Operations on this algebra are defined as follows:
\begin{align*}
(f\cdot g)(x,z)=&\sum_{yEx}f(x,y)g(y,z);\\
f^*(x,y)=&\overline{f(y,x)};\\
Tr(f)=&\int_Xf(x,x)d\mu(x).
\end{align*}
The Cartan subalgebra of $M(E)$ is composed of those functions with support on the diagonal:
\[A=\{f:E\to\mathbb{C}: f\in M(E)\mbox{ and }f(x,y)=0 \mbox{ if } x\neq y\}.\]
It is a standard fact that $A$ is isomorphic to $L^\infty(X,\mu)$ and $M(E)$ is isomorphic to $R$, the hyperfinite type $II_1$ factor.
The full group $[E]$ can be embedded in $M(E)$ and we denote its image by $\mathcal{E}$ (in the type $I_n$ case, the full group was $Sym(n)$ and its image in the Feldman-Moore construction was $P_n$). The group $\mathcal{E}$ is composed of those functions $f:E\to\mathbb{C}$ that have exactly one entry of $1$ on each row and column:
\[\mathcal{E}=\{f\in M(E):f(E)=\{0,1\}\mbox{ and }\forall x\exists! y f(x,y)=1\mbox{ and }\forall y\exists! x f(x,y)=1\}.\]
If $u\in [E]$ denote by $\tilde u=\chi_{graph(u^{-1})}\in\mathcal{E}$. The formula $d_H(u,v)=1-Tr(\tilde u\tilde v^*)$ can be easily checked. Now we have $\mathcal{E}$ acting on $A$ inside $M(E)$, replacing the old picture of $P_n$ acting on $D_n$ inside $M_n$. Passing to ultraproducts we get the diffuse universal sofic picture:
\[\mathcal{E}^\omega\curvearrowright A^\omega.\]
This is still a universal sofic action and $\mathcal{E}^\omega$ is a universal sofic group. The benefit of these objects is that we no longer need amplifications in order to compare two sofic representations.
\subsection{The limit of symmetric groups}
As we said, the group $\mathcal{E}$ is the type $II_1$ analogue of $P_n$ and the next result will strengthen this idea.
Let $n,r$ be two natural numbers. Then $f_{n,r}:M_n\to M_{nr}$ defined by $f_{n,r}(x)=x\otimes 1_r$ is a trace preserving embedding. Moreover, $f_{n,r}(P_n)\subset P_{nr}$ and $f_{n,r}(D_n)\subset D_{nr}$.
As $d_H(p,q)=1-Tr(pq^*)$, the map $f_{n,r}$ restricted to $P_n$ preserves the Hamming distance. We construct the direct limit of metric groups $\varinjlim (P_n,d_H)$ by taking the metric closure in the Hamming distance of the algebraic limit of the directed system $(P_n,f_{n,r})$. The same construction is available for $(D_n,Tr)$.
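Both facts can be verified directly on examples. A minimal sketch (plain Python, exact arithmetic; the permutations are arbitrary sample data, and $p\otimes 1_r$ is identified with the permutation $ar+b\mapsto p(a)r+b$ of $\{0,\ldots,nr-1\}$):

```python
from fractions import Fraction

def hamming(p, q):
    # normalised Hamming distance between two permutations of the same size
    n = len(p)
    return Fraction(sum(1 for i in range(n) if p[i] != q[i]), n)

def tr_pq_star(p, q):
    # normalised trace of p q^* for the permutation matrices; it equals
    # the fraction of points on which p and q agree
    n = len(p)
    return Fraction(sum(1 for i in range(n) if p[i] == q[i]), n)

def amplify(p, r):
    # the permutation of {0,...,nr-1} corresponding to p tensor 1_r
    n = len(p)
    return [p[i // r] * r + i % r for i in range(n * r)]

p, q, r = [2, 0, 1], [0, 2, 1], 4   # sample permutations in Sym(3), rank r = 4
```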
\begin{te}\label{directlimit}
The direct limit $\varinjlim (P_n,d_H)$ is isomorphic to $[E]$.
\end{te}
\begin{proof}
For $n\in\mathbb{N}$ embed $P_n$ into $[E]$ by dividing the interval $[0,1]$ in $n$ equal parts and permuting these small intervals. The formula will look something like:
\[\Phi_n:P_n\to[E],\ \ \Phi_n(p)(x)=\frac{p([nx])+\{nx\}}{n},\]
where $[y],\{y\}$ are the integer and fractional parts of $y$, and permutations $p\in P_n$ act on the set $\{0,1,\ldots,n-1\}$. It is easy to check that the maps $\Phi_n$ preserve the metrics and that they are compatible with the directed system. It follows that we have an embedding $\Phi:\varinjlim P_n\to[E]$. To show that this map is surjective we need to check that $\bigcup_n\Phi_n(P_n)$ is dense in $[E]$.
For this, let $\phi\in[E]$ and $\varepsilon>0$. For $q\in\mathbb{Q}\cap[0,1)$ let
\[A_q=\{x\in[0,1]:\phi(x)=x+q\ (mod\ 1)\}.\]
Then $\{A_q:q\in\mathbb{Q}\cap[0,1)\}$ is a partition of $[0,1]$ and we can choose $\{q_1,\ldots,q_k\}$ a finite set such that $\sum_{i=1}^k\mu(A_{q_i})>1-\varepsilon$. To simplify notation, denote $A_{q_i}$ by $A_i$.
Let $\mathcal{B}_n$ be the $\sigma$-algebra generated by the sets $[j/n,(j+1)/n)$, where $j=0,\ldots,n-1$, so that elements of $\Phi_n(P_n)$ are measurable as functions from $([0,1],\mathcal{B}_n)$ to $([0,1],\mathcal{B}_n)$.
Using the regularity properties of the Lebesgue measure, we can find a sufficiently large $n$ and sets $B_i'\in\mathcal{B}_n$ such that $\mu(A_i\Delta B_i')<\varepsilon/k^2$. As the sets $A_i$ are disjoint, it follows that $\mu(B_i'\cap B_j')<2\varepsilon/k^2$ for $i\neq j$. Removing the overlapping intervals, we get disjoint sets $B_i\in\mathcal{B}_n$ such that $\mu(A_i\Delta B_i)<2\varepsilon/k$.
Increasing $n$ we can assume that each $q_i$ is an integer multiple of $1/n$. Then there exists an element $\psi\in\Phi_n(P_n)$ such that $\psi(x)=x+q_i$ if $x\in B_i$. Then $\psi=\phi$ on $C=\bigcup_{i=1}^k(A_i\cap B_i)$. But $\mu(C)>\sum_{i=1}^k(\mu(A_i)-\varepsilon/k)>1-2\varepsilon$.
It follows that the distance between $\phi$ and $\psi$ is smaller than $2\varepsilon$ and we are done.
\end{proof}
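The maps $\Phi_n$ and their compatibility with the directed system, $\Phi_{nr}(p\otimes 1_r)=\Phi_n(p)$, can be tested numerically on a rational grid. A minimal sketch (plain Python, exact arithmetic; the permutation is arbitrary, and $p\otimes 1_r$ is identified with the permutation $ar+b\mapsto p(a)r+b$ of $\{0,\ldots,nr-1\}$):

```python
from fractions import Fraction

def phi(p, x):
    # Phi_n(p) applied to x in [0,1): divide [0,1) into n equal intervals
    # and permute them according to p
    n = len(p)
    a = int(n * x)                 # integer part of nx (x is a Fraction in [0,1))
    return (p[a] + (n * x - a)) / n

def amplify(p, r):
    # the permutation of {0,...,nr-1} corresponding to p tensor 1_r
    n = len(p)
    return [p[i // r] * r + i % r for i in range(n * r)]

p = [2, 0, 1]                                  # a sample element of P_3
grid = [Fraction(k, 60) for k in range(60)]    # a rational grid in [0,1)
images = sorted(phi(p, x) for x in grid)       # Phi_3(p) permutes the grid
```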
Actually much more is true. By dividing the interval $[0,1]$ in $n$ equal parts we can construct a trace preserving embedding $\Psi_n:M_n(\mathbb{C})\to M(E)$. Then $\Psi_n(D_n)\subset A$ and $\Psi_n(P_n)\subset\mathcal{E}$, the latter being actually just $\Phi_n$ from the proof above composed with the canonical isomorphism between $[E]$ and $\mathcal{E}$. The maps $(\Psi_n)_n$ also construct isomorphisms $\varinjlim D_n\simeq A$ and $\varinjlim M_n\simeq M(E)$.
In this article we construct sofic representations in $\mathcal{E}^\omega$, but, due to the last theorem, most of the time we still deal just with $\Pi_{k\to\omega}P_{n_k}$. This is convenient for some definitions and it is also intuitive. The next notation and theorem are thus quite important for the technical part of the article.
\begin{nt}
For a fixed sequence $\{n_k\}$ construct $\Psi:\Pi_{k\to\omega}M_{n_k}\to M(E)^\omega$ defined by $\Psi(\Pi_{k\to\omega}x_k)=\Pi_{k\to\omega}\Psi_{n_k}(x_k)$.
\end{nt}
By construction $\Psi(\Pi_{k\to\omega}D_{n_k})\subset A^\omega$ and $\Psi(\Pi_{k\to\omega}P_{n_k})\subset\mathcal{E}^\omega$. Also note that $\Psi$ is trace preserving, in particular it is injective for a fixed sequence $\{n_k\}$.
\begin{te}\label{dense permutation}
Let $\{u_i\}_{i\in\mathbb{N}}$ be a countable set of elements in $\mathcal{E}^\omega$ and $\{a_i\}_{i\in\mathbb{N}}$ be a countable set of elements in $A^\omega$. Then there exists a sequence $\{n_k\}_k$ and elements $v_i\in\Pi_{k\to\omega}P_{n_k}$ and $b_i\in\Pi_{k\to\omega}D_{n_k}$ such that $\Psi(v_i)=u_i$ and $\Psi(b_i)=a_i$ for any $i\in\mathbb{N}$.
\end{te}
\begin{proof}
The proof is just a consequence of Theorem \ref{directlimit} and of the analogous result $\varinjlim D_n\simeq A$, by a diagonal argument. For simplicity in writing (not in the argument) we only consider the family $\{u_i\}_i$. Let $u_i=\Pi_{k\to\omega}u_i^k$, where $u_i^k\in\mathcal{E}$. Choose strictly positive numbers $\{\varepsilon_k\}_k$ such that $\varepsilon_k\to 0$.
By Theorem \ref{directlimit}, $\bigcup_n\Psi_n(P_n)$ is dense in $\mathcal{E}$. Then, for each $k$, there exists $n_k\in\mathbb{N}$ and $v_1^k,\ldots, v_k^k\in P_{n_k}$ such that $d_H(u_i^k,\Psi_{n_k}(v_i^k))<\varepsilon_k$, for any $i\leqslant k$.
Define $v_i=\Pi_{k\to\omega}v_i^k$. It is clear by construction that $\Psi(v_i)=u_i$.
\end{proof}
\section{The space of diffuse sofic representations}
\begin{de}
A \emph{diffuse sofic representation} is a group morphism $\Theta:G\to\mathcal{E}^\omega$ such that $Tr(\Theta(g))=0$ for any $g\neq e$.
\end{de}
\begin{de}
For a countable group $G$ denote by $Sof(G,\mathcal{E}^\omega)$ the space of diffuse sofic representations factored by conjugacy: $\Theta_1\sim \Theta_2$ iff there exists $u\in\mathcal{E}^\omega$ such that $\Theta_2={\rm Ad}\,u\circ\Theta_1$.
\end{de}
\begin{nt}
For a diffuse sofic representation $\Theta$ we denote by $[\Theta]_\mathcal{E}$ its class in $Sof(G,\mathcal{E}^\omega)$.
\end{nt}
We prove now that there is a bijection between $Sof(G,P^\omega)$ and $Sof(G,\mathcal{E}^\omega)$.
\begin{p}
Let $\Theta_1,\Theta_2$ be two sofic representations such that $[\Theta_1]_P=[\Theta_2]_P$. Then $[\Psi\circ\Theta_1]_\mathcal{E}=[\Psi\circ\Theta_2]_\mathcal{E}$.
\end{p}
\begin{proof}
Let $\Theta:G\to\Pi_{k\to\omega}P_{n_k}$ be a sofic representation and $\{r_k\}_k\subset\mathbb{N}^*$. Inspecting the definition of $\Phi_n$ from the proof of Theorem \ref{directlimit}, we see that $\Phi_{nr}(s\otimes 1_r)=\Phi_n(s)$, for any $s\in P_n$ and $r\in\mathbb{N}^*$. It follows that $\Psi(\Theta)=\Psi(\Theta\otimes 1_{r_k})$.
Assume now that $\Theta_1$ and $\Theta_2$ are conjugate. So there exists $u\in\Pi_{k\to\omega}P_{n_k}$ such that $u\Theta_1u^*=\Theta_2$. Then $\Psi(u)\Psi(\Theta_1)\Psi(u)^*=\Psi(\Theta_2)$, implying that $[\Psi(\Theta_1)]_\mathcal{E}=[\Psi(\Theta_2)]_\mathcal{E}$.
\end{proof}
\begin{te}
The map $A:Sof(G,P^\omega)\to Sof(G,\mathcal{E}^\omega)$ defined by $A([\Theta]_P)=[\Psi\circ\Theta]_\mathcal{E}$ is a bijection.
\end{te}
\begin{proof}
If $\Theta$ is a diffuse sofic representation then use Theorem \ref{dense permutation} to construct a sofic representation $\Gamma$ with $\Psi(\Gamma)=\Theta$. This shows that $A$ is surjective.
Let now $\Theta_1$ and $\Theta_2$ be sofic representations such that $A([\Theta_1]_P)=A([\Theta_2]_P)$. Then there is $u\in\mathcal{E}^\omega$ so that $u\Psi(\Theta_1)u^*=\Psi(\Theta_2)$. Again by Theorem \ref{dense permutation} there is $v$ in some $\Pi_{k\to\omega}P_{n_k}$ with $\Psi(v)=u$. Now, amplifying $\Theta_1,\Theta_2$ and $v$ to a common sequence of dimensions and using the injectivity of $\Psi$, we get $(v\otimes 1)(\Theta_1\otimes 1)(v\otimes 1)^*=\Theta_2\otimes 1$. It follows that $[\Theta_1]_P=[\Theta_2]_P$.
\end{proof}
A convex-like structure is defined on a metric space. We transport, via the bijection $A$, the metric defined in \cite{Pa2} Section 1.4.
\begin{de}
Let $G=\{g_0,g_1,\ldots\}$ where $g_0=e$. For $[\Theta_1]_\mathcal{E},[\Theta_2]_\mathcal{E}\in Sof(G,\mathcal{E}^\omega)$ define:
\[d([\Theta_1],[\Theta_2])=\inf\{\big(\sum_{i=1}^\infty\frac1{4^i}||\Theta_1(g_i)-u\Theta_2(g_i)u^*||_2^2\big)^{1/2}:u\in\mathcal{E}^\omega\}.\]
\end{de}
It follows that $Sof(G,P^\omega)$ and $Sof(G,\mathcal{E}^\omega)$ are isomorphic as metric spaces, via the map $A$.
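For intuition, the infimum in this definition can be computed by brute force when the data are honest permutation tuples (truncating $G$ to finitely many elements). The sketch below (plain Python, arbitrary sample tuples) works with the squared distance and uses the identity $\|p-q\|_2^2=2d_H(p,q)$ for permutation matrices with the normalised trace:

```python
from itertools import permutations
from fractions import Fraction

def hamming(p, q):
    # normalised Hamming distance between two permutations
    n = len(p)
    return Fraction(sum(1 for i in range(n) if p[i] != q[i]), n)

def conj(u, p):
    # u p u^{-1} as a permutation list
    n = len(p)
    uinv = [0] * n
    for i, ui in enumerate(u):
        uinv[ui] = i
    return [u[p[uinv[i]]] for i in range(n)]

def dist_sq(theta1, theta2):
    # squared analogue of d([Theta1],[Theta2]): minimise over all
    # conjugators u, using ||p - q||_2^2 = 2 d_H(p, q)
    n = len(theta1[0])
    return min(
        sum(Fraction(2, 4 ** (i + 1)) * hamming(a, conj(list(u), b))
            for i, (a, b) in enumerate(zip(theta1, theta2)))
        for u in permutations(range(n))
    )

theta1 = [[1, 2, 0], [2, 0, 1]]         # images of g_1, g_2 under a sample tuple
w = [2, 1, 0]
theta2 = [conj(w, p) for p in theta1]   # a conjugate tuple, so distance 0
```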
\subsection{The direct sum of the universal sofic group}
Let $\lambda\in[0,1]$. We construct a morphism $\Phi_\lambda:\mathcal{E}^\omega\oplus\mathcal{E}^\omega\to\mathcal{E}^\omega$ to be used in the definition of the convex structure.
Let $u,v\in\mathcal{E}^\omega$. Use Theorem \ref{dense permutation} to get $u_1,v_1\in\Pi_{k\to\omega}P_{n_k}$ so that $\Psi(u_1)=u$ and $\Psi(v_1)=v$. Choose two sequences of natural numbers $\{r_k\}_k$ and $\{s_k\}_k$ such that $\lim_{k\to\omega}r_k/(r_k+s_k)=\lambda$. Construct $(u_1\otimes 1_{r_k})\oplus(v_1\otimes 1_{s_k})\in\Pi_{k\to\omega}P_{(r_k+s_k)n_k}$. Define:
\[\Phi_\lambda(u\oplus v)=\Psi[(u_1\otimes 1_{r_k})\oplus(v_1\otimes 1_{s_k})].\]
Note that $\Phi_\lambda(u\oplus v)$ does not depend on the particular choice of the sequences $\{r_k\}_k$ and $\{s_k\}_k$, as long as $\lim_{k\to\omega}r_k/(r_k+s_k)=\lambda$. The equality $\Psi(x)=\Psi(x\otimes 1)$ for any $x\in M_n(\mathbb{C})$ is important here. Also, the ultraproduct construction factors out small dependencies.
If $a_\lambda=\chi_{[0,\lambda]}\in A$ (characteristic function) then $\Phi_\lambda(u\oplus v)$ commutes with $(a_\lambda)^\omega\in A^\omega$ for any $u,v\in\mathcal{E}^\omega$. This is usual geometry in type $II_1$ factors. It is also a central observation for these convex structures, so we record it as a theorem.
\begin{te}\label{image}
The image of $\Phi_\lambda$ is composed of those elements that commute with $(a_\lambda)^\omega$, where $a_\lambda$ is the characteristic function of $[0,\lambda]$:
\[\Phi_\lambda(\mathcal{E}^\omega\oplus\mathcal{E}^\omega)=\{u\in\mathcal{E}^\omega:ua_\lambda=a_\lambda u\}.\]
\end{te}
\begin{ob}
The definition of $\Phi_\lambda$ can be extended to $M(E)^\omega\oplus M(E)^\omega$. A nice application is the formula $(a_\lambda)^\omega=\Phi_\lambda(1\oplus 0)$.
\end{ob}
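A finite-dimensional shadow of this observation (and of Theorem \ref{image}): the block permutation $(u\otimes 1_r)\oplus(v\otimes 1_s)$ commutes with the diagonal projection onto the first block, whose normalised trace is $r/(r+s)$. A sketch with arbitrary sample data:

```python
from fractions import Fraction

def amplify(p, r):
    # the permutation of {0,...,nr-1} corresponding to p tensor 1_r
    n = len(p)
    return [p[i // r] * r + i % r for i in range(n * r)]

def direct_sum(p, q):
    # block-diagonal permutation p (+) q
    return p + [len(p) + j for j in q]

u, v = [2, 0, 1], [1, 0, 2]        # two sample elements of P_3
r, s = 2, 3                        # amplification ranks, so lambda = 2/5
w = direct_sum(amplify(u, r), amplify(v, s))
support = set(range(len(u) * r))   # support of the finite analogue of a_lambda
lam = Fraction(len(support), len(w))
```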
\subsection{Cutting representations}
Cutting a diffuse sofic representation by a projection in $A^\omega$ is the inverse operation of the direct sum. By Theorem \ref{image}, one needs a projection commuting with $\Theta$ (as $a_\lambda$ plays the role of the projection cutting a corner of the sofic representation).
\begin{nt}
Denote by $T_1:\mathcal{E}^\omega\oplus\mathcal{E}^\omega\to\mathcal{E}^\omega$ the projection on the first summand, i.e. $T_1(u\oplus v)=u$. Similarly $T_2(u\oplus v)=v$.
\end{nt}
\begin{de}
Let $p$ be a projection in $A^\omega$ commuting with $\Theta$. Let $\lambda=Tr(p)$. Choose an element $u\in\mathcal{E}^\omega$ such that $upu^*=a_\lambda$. Define $\Theta_p^u=T_1(\Phi_\lambda^{-1}(u\Theta u^*))$.
\end{de}
\begin{p}
The class of $\Theta_p^u$ does not depend on the choice of $u$.
\end{p}
\begin{proof}
Let $u,v\in\mathcal{E}^\omega$ be so that $upu^*=a_\lambda=vpv^*$. Then $uv^*$ commutes with $a_\lambda$. We have:
\begin{align*}
\Theta_p^u=&T_1(\Phi_\lambda^{-1}(u\Theta u^*))=T_1(\Phi_\lambda^{-1}(uv^*v\Theta v^*vu^*))=T_1[\Phi_\lambda^{-1}(uv^*)\Phi_\lambda^{-1}(v\Theta v^*)\Phi_\lambda^{-1}(vu^*)]\\
=&T_1(\Phi_\lambda^{-1}(uv^*))\cdot\Theta_p^v\cdot T_1(\Phi_\lambda^{-1}(uv^*))^*.
\end{align*}
As $T_1(\Phi_\lambda^{-1}(uv^*))$ is an element of $\mathcal{E}^\omega$, it follows that $[\Theta_p^u]_\mathcal{E}=[\Theta_p^v]_\mathcal{E}$.
\end{proof}
\begin{de}
For a projection $p\in A^\omega$ commuting with $\Theta$ define $[\Theta_p]_\mathcal{E}$ to be the class of $\Theta_p^u$ for a $u\in\mathcal{E}^\omega$ so that $upu^*=a_{Tr(p)}$.
\end{de}
The following results are useful, both for the proof of the main result and also as an exercise to get the intuition of direct sums and amplifications of diffuse sofic representations.
\begin{lemma}
Let $p$ be a projection in $\Theta'\cap A^\omega$ with $Tr(p)=\lambda$. Choose an element $u\in\mathcal{E}^\omega$ such that $upu^*=a_\lambda$. Then $[T_2(\Phi_\lambda^{-1}(u\Theta u^*))]_\mathcal{E}=[\Theta_{1-p}]_\mathcal{E}$.
\end{lemma}
\begin{proof}
The main observation is that there exists $v\in\mathcal{E}^\omega$ such that $va_{1-\lambda}v^*=1-a_\lambda$ and $v\Phi_{1-\lambda}(x\oplus y)v^*=\Phi_\lambda(y\oplus x)$ for any $x,y\in\mathcal{E}^\omega$.
Let $\Theta_1,\Theta_2:G\to\mathcal{E}^\omega$ be such that $\Theta_1\oplus\Theta_2=\Phi_\lambda^{-1}(u\Theta u^*)$. Then $u\Theta u^*=\Phi_\lambda(\Theta_1\oplus\Theta_2)=v\Phi_{1-\lambda}(\Theta_2\oplus\Theta_1)v^*$. It follows that $\Theta_2=T_1(\Phi_{1-\lambda}^{-1}(v^*u\Theta u^*v))$. As $v^*u(1-p)u^*v=v^*(1-a_\lambda)v=a_{1-\lambda}$, by definition we have $\Theta_2=\Theta_{1-p}^{v^*u}$. As $\Theta_2=T_2(\Phi_\lambda^{-1}(u\Theta u^*))$ it follows that $[T_2(\Phi_\lambda^{-1}(u\Theta u^*))]_\mathcal{E}=[\Theta_{1-p}]_\mathcal{E}$.
\end{proof}
In a way the following Proposition is an anti-amplification. This feature is unique to diffuse sofic representations.
\begin{p}\label{anti amplification}
For any diffuse sofic representation $\Theta$ and any $\lambda\in(0,1)$ there exists a projection $p\in A^\omega$, commuting with $\Theta$, such that $Tr(p)=\lambda$ and $[\Theta]_\mathcal{E}=[\Theta_p]_\mathcal{E}$.
\end{p}
\begin{proof}
Let $\Gamma:G\to\Pi_{k\to\omega}P_{n_k}$ be a sofic representation such that $\Theta=\Psi\circ\Gamma$. Let $\{r_k\}$ be a strictly increasing sequence of natural numbers. Let also $q\in\Pi_{k\to\omega}D_{r_k}$ be a projection such that $Tr(q)=\lambda$. Then $1_{n_k}\otimes q$ commutes with $\Gamma\otimes 1_{r_k}$. Moreover $\Gamma_{1_{n_k}\otimes q}$ is still an amplification of $\Gamma$, so $[\Gamma]_P=[\Gamma_{1_{n_k}\otimes q}]_P$.
Let $p=\Psi(1_{n_k}\otimes q)$. Then $p$ commutes with $\Theta=\Psi(\Gamma)$ and $Tr(p)=\lambda$. Also the equality $[\Gamma]_P=[\Gamma_{1_{n_k}\otimes q}]_P$, transported via $\Psi$, becomes $[\Theta]_\mathcal{E}=[\Theta_p]_\mathcal{E}$.
\end{proof}
One problem in proving that the old action $\alpha(\Theta)$ is ergodic is constructing elements in the commutant $\Theta'$. The last proposition solved this problem, by an easy amplification. In the following lemma we note that there are plenty of projections inside $\Theta'\cap A^\omega$.
\begin{lemma}\label{diffuse abelian}
The algebra $\Theta'\cap A^\omega$ is diffuse, i.e. it has no minimal projection.
\end{lemma}
\begin{proof}
Let $p\in\Theta'\cap A^\omega$ be a projection. Choose a sequence $\{n_k\}_k$ so that there exists $\Gamma:G\to\Pi_{k\to\omega}P_{n_k}$ a sofic representation and $q\in\Pi_{k\to\omega}D_{n_k}$ a projection such that $\Theta=\Psi\circ\Gamma$ and $p=\Psi(q)$. Because $\Psi$ is injective on $\Pi_{k\to\omega}M_{n_k}$, $q$ commutes with $\Gamma$.
Let $a\in D_2$ be a projection of trace $1/2$. Construct $q\otimes a\in\Pi_{k\to\omega}D_{2n_k}$. This is a projection with $Tr(q\otimes a)=\frac12Tr(q)$ that commutes with $\Gamma\otimes 1_2$. Then $\Psi(q\otimes a)$ commutes with $\Psi(\Gamma\otimes 1_2)=\Theta$ and $\Psi(q\otimes a)$ is a sub-projection of $p$.
\end{proof}
\section{The convex structure}
\begin{de}
For $\Theta_1,\Theta_2$ diffuse sofic representations and $\lambda\in[0,1]$ define:
\[\lambda[\Theta_1]_\mathcal{E}+(1-\lambda)[\Theta_2]_\mathcal{E}=[\Phi_\lambda(\Theta_1\oplus\Theta_2)]_\mathcal{E}.\]
\end{de}
At this stage we can consider $\lambda[\Theta_1]_\mathcal{E}+(1-\lambda)[\Theta_2]_\mathcal{E}$ to be just a formal notation for the element in $Sof(G,\mathcal{E}^\omega)$ that we constructed. After the axioms of convex-like structures are checked, we can use the Capraro-Fritz theorem to deduce that $Sof(G,\mathcal{E}^\omega)$ endowed with the metric and this convex structure is a bounded closed convex subset of a Banach space. Then $\lambda[\Theta_1]_\mathcal{E}+(1-\lambda)[\Theta_2]_\mathcal{E}$ is a convex combination in this Banach space.
As an observation, this definition is just the old convex structure on $Sof(G,P^\omega)$ transported to $Sof(G,\mathcal{E}^\omega)$ via the map $A$. This is enough to deduce that the axioms of convex-like structures (see Section 2 of \cite{Br}) are satisfied by $[Sof(G,\mathcal{E}^\omega),d]$. However, it is easy to check them directly from the definitions presented in this paper.
\begin{p}\label{isolating summand}
If $[\Theta]_\mathcal{E}=\lambda[\Theta_1]_\mathcal{E}+(1-\lambda)[\Theta_2]_\mathcal{E}$ then there exists a projection $p\in A^\omega$ commuting with $\Theta$, with $Tr(p)=\lambda$, such that $[\Theta_p]_\mathcal{E}=[\Theta_1]_\mathcal{E}$.
\end{p}
\begin{proof}
We can assume that $\Theta=\Phi_\lambda(\Theta_1\oplus\Theta_2)$. Then $p=a_\lambda$. By definition $\Theta_p^{Id}=T_1(\Phi_\lambda^{-1}(\Theta))=\Theta_1$. It follows that $[\Theta_p]_\mathcal{E}=[\Theta_1]_\mathcal{E}$.
\end{proof}
\begin{p}\label{P3.3.4}(Analog of Proposition 3.3.4 of \cite{Br})
Let $p,q\in \Theta'\cap A^\omega$ be such that $Tr(p)=Tr(q)$. Then $[\Theta_p]=[\Theta_q]$ if and only if there exists an element $u\in\Theta'\cap\mathcal{E}^\omega$ such that $upu^*=q$.
\end{p}
\begin{proof}
Let $\lambda=Tr(p)=Tr(q)$ and let $v_p,v_q\in\mathcal{E}^\omega$ be such that $v_ppv_p^*=a_\lambda=v_qqv_q^*$.
Let $u\in\Theta'\cap\mathcal{E}^\omega$ be such that $upu^*=q$. Then:
\[(v_quv_p^*)a_\lambda(v_quv_p^*)^*=(v_qu)p(v_qu)^*=v_qqv_q^*=a_\lambda,\]
so $(v_quv_p^*)$ commutes with $a_\lambda$. Let $u_1=T_1(\Phi_\lambda^{-1}(v_quv_p^*))$. Then:
\begin{align*}
u_1\Theta_p^{v_p}u_1^*=&T_1(\Phi_\lambda^{-1}(v_quv_p^*))T_1(\Phi_\lambda^{-1}(v_p\Theta v_p^*))T_1(\Phi_\lambda^{-1}(v_quv_p^*))^*\\
=&T_1[\Phi_\lambda^{-1}(v_quv_p^*v_p\Theta v_p^*(v_quv_p^*)^*)]=T_1[\Phi_\lambda^{-1}(v_q\Theta v_q^*)]=\Theta_q^{v_q}.
\end{align*}
It follows that $[\Theta_p]=[\Theta_q]$.
Assume now that $[\Theta_p]=[\Theta_q]$. By the axioms of the convex-like structures (metric compatibility, see also the proof of Corollary 6 from \cite{Ca-Fr}) it follows that also $[\Theta_{1-p}]=[\Theta_{1-q}]$. Recall that $[\Theta_p]=[T_1(\Phi_\lambda^{-1}(v_p\Theta v_p^*))]$ and $[\Theta_{1-p}]=[T_2(\Phi_\lambda^{-1}(v_p\Theta v_p^*))]$. So there exist $u_1,u_2\in\mathcal{E}^\omega$ such that:
\[u_1T_1(\Phi_\lambda^{-1}(v_q\Theta v_q^*))u_1^*=T_1(\Phi_\lambda^{-1}(v_p\Theta v_p^*))\mbox{ and }u_2T_2(\Phi_\lambda^{-1}(v_q\Theta v_q^*))u_2^*=T_2(\Phi_\lambda^{-1}(v_p\Theta v_p^*)).\]
Let $u=\Phi_\lambda(u_1\oplus u_2)$. Then $u_1=T_1(\Phi_\lambda^{-1}(u))$ and $u_2=T_2(\Phi_\lambda^{-1}(u))$. We have:
\begin{align*}
v_p\Theta v_p^*=&\Phi_\lambda(\Phi_\lambda^{-1}(v_p\Theta v_p^*))=\Phi_\lambda[T_1(\Phi_\lambda^{-1}(v_p\Theta v_p^*))\oplus T_2(\Phi_\lambda^{-1}(v_p\Theta v_p^*))]\\
=&\Phi_\lambda[u_1T_1(\Phi_\lambda^{-1}(v_q\Theta v_q^*))u_1^*\oplus u_2T_2(\Phi_\lambda^{-1}(v_q\Theta v_q^*))u_2^*]\\
=&\Phi_\lambda[T_1(\Phi_\lambda^{-1}(uv_q\Theta v_q^*u^*))\oplus T_2(\Phi_\lambda^{-1}(uv_q\Theta v_q^*u^*))]\\
=&\Phi_\lambda[\Phi_\lambda^{-1}(uv_q\Theta v_q^*u^*)]=uv_q\Theta v_q^*u^*
\end{align*}
We proved that $v_q^*u^*v_p$ commutes with $\Theta$. As $u$ is in the image of $\Phi_\lambda$ it commutes with $a_\lambda$. It follows that:
\[(v_q^*u^*v_p)p(v_q^*u^*v_p)^*=(v_q^*u^*)a_\lambda(v_q^*u^*)^*=v_q^*a_\lambda v_q=q.\]
\end{proof}
\subsection{Actions on the Loeb space}
In $Sof(G,\mathcal{E}^\omega)$ there is no need for amplifications. Another difference is that we consider only those elements of the Loeb space that commute with the diffuse sofic representation.
\begin{nt}
For a diffuse sofic representation $\Theta:G\to\mathcal{E}^\omega$ denote by $\gamma(\Theta)$ the action of $\Theta'\cap\mathcal{E}^\omega$ on $\Theta'\cap A^\omega$, defined by $\gamma(u)(a)=uau^*$.
\end{nt}
The following lemma is easy, but it is one of the few tools that allow us to construct permutations. This is why it is so important. It was used in \cite{Pa1} and \cite{Pa2} (Lemma 1.6 in both articles, by a curious coincidence). Here we need the diffuse version of this lemma. The proof is still the same, using Theorem \ref{dense permutation}.
\begin{lemma}\label{permutations}
Let $\{p_i:i\in\mathbb{N}\}$ be projections in $A^\omega$ such that $\sum_ip_i=1$. Let $\{u_i:i\in\mathbb{N}\}$ be unitary elements in $\mathcal{E}^\omega$ such that $\sum_iu_ip_iu_i^*=1$. Then
$v=\sum_iu_ip_i$ is an element in $\mathcal{E}^\omega$.
\end{lemma}
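At the finite level, before passing to the ultraproduct, the lemma reduces to the elementary fact that gluing the restrictions of permutations along a partition again yields a permutation, provided the images of the blocks also partition the space. The following Python sketch illustrates this finite analog (the partition, the permutations and their construction are invented for the example):

```python
import random

random.seed(0)
n = 12
# a random partition of {0,...,n-1} into blocks B_i (the projections p_i)
points = list(range(n)); random.shuffle(points)
blocks = [points[0:4], points[4:9], points[9:12]]

# permutations u_i whose restrictions to the blocks have disjoint images:
# build them by extending the restrictions of a single random permutation sigma
sigma = list(range(n)); random.shuffle(sigma)

def extend(block):
    # any permutation of {0,...,n-1} agreeing with sigma on `block`
    rest_src = [x for x in range(n) if x not in block]
    rest_dst = [y for y in range(n) if y not in [sigma[x] for x in block]]
    u = {x: sigma[x] for x in block}
    u.update(dict(zip(rest_src, rest_dst)))
    return u

us = [extend(b) for b in blocks]

# the images u_i(B_i) partition {0,...,n-1} by construction
images = sorted(y for u, b in zip(us, blocks) for y in [u[x] for x in b])
assert images == list(range(n))

# v = sum_i u_i p_i : apply u_i on the block B_i
v = {x: us[i][x] for i, b in enumerate(blocks) for x in b}
assert sorted(v.values()) == list(range(n))  # v is again a permutation
```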
\begin{p}\label{full group}
Let $\Theta$ be a diffuse sofic representation such that $\gamma(\Theta)$ is ergodic. If $p,q$ are projections in $\Theta'\cap A^\omega$ such that $Tr(p)=Tr(q)$, then there exists $u\in\Theta'\cap\mathcal{E}^\omega$ such that $q=upu^*$.
\end{p}
\begin{proof}
Assume first that $pq=0$ (the underlying sets, on which $p$ and $q$ are projecting, are disjoint). We want to construct a partial isometry $v$ such that $vpv^*=q$. As $\gamma(\Theta)$ is ergodic there exists $u\in\Theta'\cap\mathcal{E}^\omega$ such that $upu^*\cdot q\neq 0$. By a maximality argument we can construct projections $\{p_i\}_i$ and $\{q_i\}_i$ in $\Theta'\cap A^\omega$ and unitaries $\{u_i\}_i$ in $\Theta'\cap\mathcal{E}^\omega$ such that $\sum_ip_i=p$, $\sum_iq_i=q$ and $u_ip_iu_i^*=q_i$ for any $i$.
Define $v=\sum_iu_ip_i$. It is easy to check that $vpv^*=q$, $vv^*=q$ and $v^*v=p$. Then $u=(1-p-q)+v+v^*$ is a unitary commuting with $\Theta$ such that $upu^*=q$. The proof is algebraic, but there is a lot of geometry behind the scenes. The unitary $u$ sends the underlying set of $p$ onto the underlying set of $q$ and vice-versa, while acting as the identity on the rest of the space.
In order to prove that $u\in\mathcal{E}^\omega$, use the previous lemma with $\{p_i\}_i\cup\{q_i\}_i\cup\{1-p-q\}$ as the set of projections and $\{u_i\}_i\cup\{u_i^*\}_i\cup\{Id\}$ as the set of unitaries.
If $pq\neq 0$, replace $p$ and $q$ by $p_1=p-pq$ and $q_1=q-pq$.
\end{proof}
\subsection{The main result}
\begin{p}
Let $\Theta:G\to\mathcal{E}^\omega$ be a sofic representation. Then $[\Theta]$ is an extreme point in $Sof(G,\mathcal{E}^\omega)$ if and only if $[\Theta]=[\Theta_p]$ for any projection $p\in \Theta(G)'\cap A^\omega$.
\end{p}
\begin{proof}
This is just a consequence of Proposition \ref{isolating summand}.
\end{proof}
\begin{te}(Analog of Proposition 5.2 of \cite{Br})
Let $\Theta:G\to\mathcal{E}^\omega$ be a sofic representation. Then $[\Theta]$ is an extreme point in $Sof(G,\mathcal{E}^\omega)$ if and only if the action $\gamma(\Theta)$ is ergodic.
\end{te}
\begin{proof}
Assume that $[\Theta]_\mathcal{E}=\lambda[\Theta_1]_\mathcal{E}+(1-\lambda)[\Theta_2]_\mathcal{E}$. Then by Proposition \ref{isolating summand} there exists $p\in\Theta'\cap A^\omega$ a projection with $Tr(p)=\lambda$ such that $[\Theta_p]_\mathcal{E}=[\Theta_1]_\mathcal{E}$. Also by Proposition \ref{anti amplification} there exists $q\in\Theta'\cap A^\omega$ a projection with $Tr(q)=\lambda$ such that $[\Theta_q]_\mathcal{E}=[\Theta]_\mathcal{E}$. If $\gamma(\Theta)$ is ergodic then by Proposition \ref{full group} there exists $u\in\Theta'\cap\mathcal{E}^\omega$ such that $upu^*=q$. Use now Proposition \ref{P3.3.4} to deduce that $[\Theta_p]_\mathcal{E}=[\Theta_q]_\mathcal{E}$. It follows that $[\Theta]_\mathcal{E}=[\Theta_1]_\mathcal{E}$ proving that $[\Theta]_\mathcal{E}$ is an extreme point.
For the converse, let $p,q\in\Theta'\cap A^\omega$ be two projections such that $Tr(p)=Tr(q)$. By the previous proposition $[\Theta_p]_\mathcal{E}=[\Theta_q]_\mathcal{E}$. Then, by Proposition \ref{P3.3.4} there exists $u\in\Theta'\cap\mathcal{E}^\omega$ such that $q=upu^*$. This is enough to deduce the ergodicity of $\gamma(\Theta)$ as $\Theta'\cap A^\omega$ is diffuse (Lemma \ref{diffuse abelian}).
\end{proof}
\begin{ob}
The convex-like structures $Sof(G,P^\omega)$ and $Sof(G,\mathcal{E}^\omega)$ are isomorphic. This means that the extreme points constructed in Section 2.6 of \cite{Pa2} are still valid for $Sof(G,\mathcal{E}^\omega)$. The existence of extreme points for any sofic group remains however an open question.
\end{ob}
\section{Sofic representations that cannot be extended}
Let $G=\mathbb{Z}*\mathbb{Z}_2=<a,b:b^2=e>$ and $c=bab$. Then $\mathbb{F}_2=<a,c>$ is a copy of the free group inside $G$. Let $R:Sof(G,P^\omega)\to Sof(\mathbb{F}_2,P^\omega)$ be the restriction map $R([\Theta])=[\Theta|_{\mathbb{F}_2}]$. In this section we show that $R$ is not surjective.
It is quite easy to show that most of the sofic representations $\Theta:\mathbb{F}_2\to\Pi_{k\to\omega}P_{n_k}$ cannot be extended to a sofic representation $\tilde\Theta:G\to\Pi_{k\to\omega}P_{n_k}$ (same sequence of dimensions). A sofic representation of $\mathbb{F}_2$ is obtained by choosing two sequences of $n_k$-cycles. A sofic representation of $G$ is obtained by choosing two sequences of $n_k$-cycles that are (almost) conjugate by an element of order $2$. As a relatively low number of pairs of cycles are conjugate by an element of order two, it follows that most $\Theta:\mathbb{F}_2\to\Pi_{k\to\omega}P_{n_k}$ cannot be extended to $G$.
However, when studying the function $R$, we must take amplifications into consideration. Indeed, there are sofic representations $\Theta$ that are not extendable, but that have amplifications that are. For example, assume that there exists an element $y\in\Pi_{k\to\omega}P_{n_k}$ such that $y\Theta(a)y^{-1}=\Theta(c)$ and $y^2\Theta(a)=\Theta(a)y^2$. Then, one can check that, if $\tilde y=
\left[ {\begin{array}{cc}
0 & y \\
y^{-1} & 0 \\
\end{array} } \right]$, then $\tilde y^2=Id$ and $\tilde y(\Theta(a)\otimes 1_2)\tilde y^{-1}=\Theta(c)\otimes 1_2$.
I'm quite sure that the existence of such an element $y\in\Pi_{k\to\omega}P_{n_k}$ is equivalent to the fact that $\Theta\otimes 1_2$ is extendable to $G$. We don't need this result. We shall prove that when $\Theta$ is an expander, which is known to happen most of the time, this is the only phenomenon that may make an amplification of $\Theta$ extendable.
\subsection{Hamming distance on matrices}
\begin{de}
For $x,y\in M_n(\mathbb{C})$ define the \emph{Hamming distance on matrices}:
\[d_H(x,y)=\frac1n|\{i:\exists j\ x(i,j)\neq y(i,j)\}|.\]
\end{de}
The formula counts the proportion of rows on which $x$ and $y$ differ. Note that if $x,y\in P_n$ then this distance is the usual (normalized) Hamming distance on the symmetric group.
\begin{de}
We call a matrix $q\in M_n$ a \emph{piece of permutation} if $q$ has only $0$ and $1$ entries and at most one entry of $1$ in each row and each column. Alternatively, $q=pa$, where $p\in P_n$ and $a$ is a projection in $D_n$.
\end{de}
\begin{p}
Let $x,y\in M_n$ and $p\in P_n$. Then $d_H(x,y)=d_H(px,py)=d_H(xp,yp)$. If instead $p$ is a piece of permutation, then:
\[d_H(px,py)\leqslant d_H(x,y)\mbox{ and }d_H(xp,yp)\leqslant d_H(x,y).\]
\end{p}
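These invariance and contraction properties are straightforward to confirm numerically. The following Python sketch checks them on randomly generated small matrices (an illustration only):

```python
import random
random.seed(1)
n = 4

def d_H_rows(x, y):
    # number of rows on which x and y differ (that is, n * d_H(x, y))
    return sum(1 for i in range(n) if x[i] != y[i])

def perm_mat(p):
    # P[i][j] = 1 iff i = p(j), so that P sends e_j to e_{p(j)}
    return tuple(tuple(1 if i == p[j] else 0 for j in range(n)) for i in range(n))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

x = tuple(tuple(random.randint(0, 2) for _ in range(n)) for _ in range(n))
y = tuple(tuple(random.randint(0, 2) for _ in range(n)) for _ in range(n))
p = list(range(n)); random.shuffle(p)
P = perm_mat(p)

# multiplying by a full permutation preserves d_H ...
assert d_H_rows(matmul(P, x), matmul(P, y)) == d_H_rows(x, y)
assert d_H_rows(matmul(x, P), matmul(y, P)) == d_H_rows(x, y)

# ... while a piece of permutation (q = Pa, with a a diagonal projection)
# can only contract it
a = tuple(tuple(1 if (i == j and i < 2) else 0 for j in range(n)) for i in range(n))
q = matmul(P, a)
assert d_H_rows(matmul(q, x), matmul(q, y)) <= d_H_rows(x, y)
assert d_H_rows(matmul(x, q), matmul(y, q)) <= d_H_rows(x, y)
```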
The following lemma is the key of the proof. From the existence of an element $y\in P_{nr}$ with some properties, we infer the existence of an element $w\in P_n$ with similar properties. This is the type of result we are looking for.
\begin{lemma}\label{main lemma}
Let $x,z\in P_n$ and $y\in P_{nr}$ be such that $y^2=Id_{nr}$ and $d_H(y(x\otimes 1_r),(z\otimes 1_r)y)<\varepsilon$. Assume that for any projection $p\in D_n$, $Tr(p)<1/2$ implies $\lambda Tr(p)<d_H(p,xpx^*)+d_H(p,zpz^*)$. Then there exists $w\in P_n$ such that $d_H(wx,zw)<72\varepsilon/\lambda$ and $d_H(xw,wz)<72\varepsilon/\lambda$.
\end{lemma}
\begin{proof}
As $M_{nr}\simeq M_r\otimes M_n$, elements in $M_{nr}$ can be viewed as functions from $\{1,\ldots,r\}^2$ to $M_n$. Then $(x\otimes 1_r)(i,j)=\delta_i^jx$ and $[y(x\otimes 1_r)](i,j)=\sum_ky(i,k)(x\otimes 1_r)(k,j)=y(i,j)x$. Similarly $[(z\otimes 1_r)y](i,j)=z\cdot y(i,j)$.
Let $A,B\in P_{nr}$. We want to compare $d_H(A,B)$ to $\sum_{i,j=1}^rd_H(A(i,j),B(i,j))$. If $A$ and $B$ differ on a row, we may count this error twice in $\sum_{i,j=1}^rd_H(A(i,j),B(i,j))$. It follows that:
\[2d_H(A,B)\geqslant\frac1r\sum_{i,j=1}^rd_H(A(i,j),B(i,j)).\]
By hypothesis $d_H(y(x\otimes 1_r),(z\otimes 1_r)y)<\varepsilon$ and we can also deduce $d_H((x\otimes 1_r)y,y(z\otimes 1_r))<\varepsilon$. Let $d_H(y(i,j)x,zy(i,j))=\varepsilon_{i,j}^1$ and $d_H(xy(i,j),y(i,j)z)=\varepsilon_{i,j}^2$. Then:
\begin{align*}
\frac1r\sum_{i,j=1}^r\varepsilon_{i,j}^1&\leqslant 2d_H(y(x\otimes 1_r),(z\otimes 1_r)y)<2\varepsilon;\\
\frac1r\sum_{i,j=1}^r\varepsilon_{i,j}^2&\leqslant 2d_H((x\otimes 1_r)y,y(z\otimes 1_r))<2\varepsilon.\\
\end{align*}
From these inequalities we can deduce the existence of an $i\in\{1,\ldots,r\}$ for which:
\[\sum_{j=1}^r\varepsilon_{i,j}^1<8\varepsilon\mbox{ , }\sum_{j=1}^r\varepsilon_{i,j}^2<8\varepsilon\mbox{ , }\sum_{j=1}^r\varepsilon_{j,i}^1<8\varepsilon\mbox{ and }\sum_{j=1}^r\varepsilon_{j,i}^2<8\varepsilon.\]
From now on $i$ is fixed with this property. Noting that $y(i,j)$ is a piece of permutation, we get:
\begin{align*}
d_H(y(i,j)y(j,i)x,xy(i,j)y(j,i))&\leqslant d_H(y(i,j)y(j,i)x,y(i,j)zy(j,i))+d_H(y(i,j)zy(j,i),xy(i,j)y(j,i))\\ &\leqslant d_H(y(j,i)x,zy(j,i))+d_H(y(i,j)z,xy(i,j))=\varepsilon_{j,i}^1+\varepsilon_{i,j}^2.
\end{align*}
Denote $p_j=y(i,j)y(j,i)$ and note that $y^2=Id_{nr}$ implies $\sum_jp_j=Id_n$. The above inequality reads $d_H(p_j,xp_jx^*)=d_H(p_jx,xp_j)\leqslant\varepsilon_{j,i}^1+\varepsilon_{i,j}^2$. Analogously, $d_H(p_j,zp_jz^*)\leqslant\varepsilon_{i,j}^1+\varepsilon_{j,i}^2$.
For $S\subset\{1,\ldots,r\}$ define $p_S=\sum_{j\in S}p_j$. Both $p_j$ and $xp_jx^*$ are elements in $D_n$ and this implies that $d_H(p_S,xp_Sx^*)\leqslant\sum_{j\in S}d_H(p_j,xp_jx^*)$. Using the above inequalities we get that for any subset $S$:
\[d_H(p_S,xp_Sx^*)\leqslant\sum_{j\in S}\varepsilon_{j,i}^1+\varepsilon_{i,j}^2<16\varepsilon.\]
The same statement is true for $d_H(p_S,zp_Sz^*)$.
Assume that $Tr(p_S)<1/2$. Then, by hypothesis, $\lambda Tr(p_S)<d_H(p_S,xp_Sx^*)+d_H(p_S,zp_Sz^*)$. Hence $Tr(p_S)<32\varepsilon/\lambda$ in this case. As $\sum_{j=1}^rp_j=Id_n$ it follows that there exists $j$ such that $Tr(p_j)>1-32\varepsilon/\lambda$.
Let $w\in P_n$ be such that $d_H(w,y(i,j))<32\varepsilon/\lambda$. It is easy to see that $d_H(wx,zw)\leqslant 32\varepsilon/\lambda+8\varepsilon+32\varepsilon/\lambda<72\varepsilon/\lambda$. The same is true for $d_H(xw,wz)$.
\end{proof}
\begin{p}\label{main theorem}
Let $\Theta:\mathbb{F}_2\to\Pi_{k\to\omega}P_{n_k}$ be a sofic representation of $\mathbb{F}_2$. Choose $a_k,c_k\in P_{n_k}$ such that $\Theta(a)=\Pi_{k\to\omega}a_k$ and $\Theta(c)=\Pi_{k\to\omega}c_k$. Assume that:
\begin{enumerate}
\item $\{a_k,c_k\}_k$ is an expander, i.e. there exists $\lambda>0$ such that for any $k$ and any projection $p\in D_{n_k}$ with $Tr(p)<1/2$ we have $\lambda Tr(p)<d_H(p,a_kpa_k^*)+d_H(p,c_kpc_k^*)$;
\item there is no $w\in\Pi_{k\to\omega}P_{n_k}$ such that $w\Theta(a)w^{-1}=\Theta(c)$ and $w^2\Theta(a)=\Theta(a)w^2$.
\end{enumerate}
Then there is no sofic representation $\Psi$ of $G$ such that $R([\Psi])=[\Theta]$.
\end{p}
\begin{proof}
Assume that there exists a sofic representation $\Psi:G\to\Pi_{k\to\omega}P_{n_kr_k}$ such that $\Psi|_{\mathbb{F}_2}=\Theta\otimes 1_{r_k}$. Let $y=\Psi(b)$. Then $y^2=Id$ and $y\cdot[\Theta\otimes 1_{r_k}](a)=[\Theta\otimes 1_{r_k}](c)\cdot y$.
Find $y_k\in P_{n_kr_k}$ such that $y_k^2=Id_{n_kr_k}$ and $y=\Pi_{k\to\omega}y_k$. Then $d_H(y_k(a_k\otimes 1_{r_k}),(c_k\otimes 1_{r_k})y_k)\to0$ when ${k\to\omega}$. Use Lemma \ref{main lemma} to construct $w_k\in P_{n_k}$ such that $d_H(w_ka_k,c_kw_k)\to_{k\to\omega}0$ and $d_H(a_kw_k,w_kc_k)\to_{k\to\omega}0$. Let $w=\Pi_{k\to\omega}w_k$. Then $w\Theta(a)=\Theta(c)w$ and $\Theta(a)w=w\Theta(c)$. This is in contradiction with condition $(2)$.
\end{proof}
\subsection{Construction}
We show that there exist sofic representations of $\mathbb{F}_2$ satisfying conditions $(1)$ and $(2)$ from Proposition \ref{main theorem}. Fix a sequence $\{n_k\}_k$ increasing to infinity. For each $k$, arbitrarily choose two $n_k$-cycles $(a_k,c_k)$ from the $[(n_k-1)!]^2$ pairs available. It is known that the sequence $(a_k,c_k)_k$ generates a sofic representation of the free group with probability $1$. The theory of expander graphs tells us that the first condition required in Proposition \ref{main theorem} is also attained with probability $1$ (for small enough $\lambda$). Some estimates will show that condition $(2)$ is also satisfied with probability $1$.
\subsubsection{First condition}We review here some basic facts about expanders.
\begin{nt}
For any two cycles $a,c\in P_n$ denote by $G_{a,c}$ the 4-regular graph $(V,E)$, where $V=\{1,\ldots, n\}$ and $E=\{(i,a(i));(i,a^{-1}(i));(i,c(i));(i,c^{-1}(i)):i\in V\}$. These graphs may have multiple edges.
\end{nt}
\begin{de}
For a graph $G=(V,E)$ the \emph{Cheeger constant} $h(G)$ is defined as:
\[h(G)=\min_{0<|S|\leqslant\frac n2}\frac{|\partial S|}{|S|},\]
where $S\subset V$ and $\partial S$ is the set of edges in $E$ with exactly one vertex in $S$.
\end{de}
The link between the Cheeger constant and condition $(1)$ from Proposition \ref{main theorem} is clear. Choose $a,c\in P_n$ and a projection $p\in D_n$. Construct $G_{a,c}=(V,E)$. Let $S$ be the subset of $V$ corresponding to $p$, so that $Tr(p)=(1/n)|S|$. We can see that $d_H(p,apa^*)+d_H(p,cpc^*)=(1/n)|\partial S|$. It follows that condition $(1)$ is satisfied if and only if $\{h(G_{a_k,c_k})\}_k$ is bounded away from $0$.
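This identity can be checked directly on small random examples. In the Python sketch below, $E$ is taken with one undirected $a$-edge $(i,a(i))$ and one $c$-edge $(i,c(i))$ per vertex, which is the normalization matching the identity above; the cycles and subsets are randomly generated and purely illustrative:

```python
import random
random.seed(2)
n = 10

def rand_cycle(n):
    pts = list(range(n)); random.shuffle(pts)
    return {pts[i]: pts[(i + 1) % n] for i in range(n)}

a, c = rand_cycle(n), rand_cycle(n)

for _ in range(50):
    S = set(random.sample(range(n), random.randint(1, n // 2)))
    # n * [d_H(p, apa*) + d_H(p, cpc*)] for the diagonal projection p onto S:
    # conjugating a diagonal projection by a permutation matrix permutes the
    # diagonal, so n * d_H(p, apa*) = |S symmetric-difference a(S)|
    n_dH = len(S ^ {a[i] for i in S}) + len(S ^ {c[i] for i in S})
    # |boundary of S|: one a-edge (i, a(i)) and one c-edge (i, c(i)) per vertex
    boundary = sum((i in S) != (a[i] in S) for i in range(n)) \
             + sum((i in S) != (c[i] in S) for i in range(n))
    assert n_dH == boundary
```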
The Cheeger constant is strongly connected to the spectral gap. For a graph $G$ we shall denote by $\lambda_1(G)\geqslant\lambda_2(G)\geqslant\ldots\geqslant\lambda_n(G)$ the eigenvalues of the adjacency matrix. If $G$ is a 4-regular graph, as is always the case in this paper, then $\lambda_1(G)=4$. The second eigenvalue is of interest to us.
\begin{p}(Cheeger inequality)
For any $d$-regular graph the following holds:
\[\frac12(d-\lambda_2(G))\leqslant h(G).\]
\end{p}
The following theorem is the missing piece of the puzzle.
\begin{te}(Theorem 1.2 of \cite{Fr})
For any $\varepsilon>0$ there exists a constant $\mu_\varepsilon$ such that for at least $(1-\mu_\varepsilon/n)[(n-1)!]^2$ pairs of $n$-cycles $\{a,c\}$ we have for all $i>1$:
\[|\lambda_i(G_{a,c})|\leqslant 2\sqrt3+\varepsilon.\]
\end{te}
From now on we fix $a\in P_n$ to be the cycle $(1,2,\ldots,n)$. As any two $n$-cycles are conjugate, we can deduce the following from the above theorem.
\begin{te}
There exists a constant $\mu_1$ such that for at least $(1-\mu_1/n)[(n-1)!]$ $n$-cycles $c$ we have for all $i>1$:
\[|\lambda_i(G_{a,c})|\leqslant 3.6\]
\end{te}
Altogether, setting $\lambda=0.2$, we have the following result:
\begin{p}\label{first condition}
There exists a constant $\mu_1$ such that for at least $(1-\mu_1/n)[(n-1)!]$ $n$-cycles $c$ the following holds: for any projection $p\in D_n$ with $Tr(p)<1/2$ we have $\lambda Tr(p)<d_H(p,apa^*)+d_H(p,cpc^*)$.
\end{p}
\subsubsection{Second condition}
Now we try to estimate the number of elements $w\in P_n$ so that $d_H(w^2a,aw^2)<\varepsilon$. We stick to our choice $a=(1,\ldots, n)$.
\begin{p}\label{maximal number commuting}
Let $\varepsilon>0$. Then the number of permutations $y\in P_n$ such that $d_H(ay,ya)<\varepsilon$ is less than $n^{[\varepsilon n]+1}$.
\end{p}
\begin{proof}
We construct elements almost commuting with $a$ as follows: divide $\{1,\ldots,n\}$ into $k$ subsets composed of consecutive numbers, then permute these subsets.
Formally, choose $1=i_1<i_2<\ldots<i_k<i_{k+1}=n+1$. Define $l_j=i_{j+1}-i_j$, $j=1,\ldots k$ (the length of the $j^{th}$ segment). Define $s:\{1,\ldots,n\}\to\{1,\ldots,k\}$, $s(v)$ is the unique number such that $i_{s(v)}\leqslant v<i_{s(v)+1}$.
Choose $r\in Sym(k)$ and let $t_{r(w)}=1+\sum_{r(j)<r(w)}l_{j}$ (these are the new starting points of the segments, replacing the numbers $i_j$). Define:
\[y(v)=t_{r(s(v))}+(v-i_{s(v)}) .\]
If $v\in\{1,\ldots,n\}\setminus\{i_1,\ldots,i_k\}$ then $s(v-1)=s(v)$. Then $y(v-1)=t_{r(s(v))}+((v-1)-i_{s(v)})=y(v)-1$. This can be rewritten as $ya(v-1)=ay(v-1)$, so $d_H(ay,ya)\leqslant k/n$.
All permutations $y$ for which $d_H(ay,ya)\leqslant (k-1)/n$ can be constructed this way. We need to consider $k-1=[\varepsilon n]$. The number of permutations is less than $C_n^{k-1}\cdot k!=\frac{n!\cdot k}{(n-k+1)!}<n^k$.
\end{proof}
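Both the segment construction from the proof and the bound of the proposition are easy to test for small $n$. In the Python sketch below the particular segments, the ordering $r$ and the value of $\varepsilon$ are chosen only for illustration:

```python
from itertools import permutations

n = 6
a = tuple((i + 1) % n for i in range(n))   # the n-cycle (1,2,...,n), 0-indexed

def compose(p, q):                          # (pq)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def ndiff(p, q):                            # n * d_H(p, q)
    return sum(1 for i in range(n) if p[i] != q[i])

# the construction from the proof: cut {1,...,n} into k = 3 segments of
# consecutive numbers and permute the segments
segments = [(0, 1), (1, 4), (4, 6)]         # half-open intervals [lo, hi)
order = (1, 0, 2)                           # r in Sym(3): new order of the segments
y, pos = [0] * n, 0
for s in order:
    lo, hi = segments[s]
    for v in range(lo, hi):
        y[v] = pos
        pos += 1
y = tuple(y)
assert ndiff(compose(a, y), compose(y, a)) <= len(segments)   # d_H(ay, ya) <= k/n

# the bound of the proposition for eps = 1/n: d_H(ay, ya) < 1/n forces ay = ya,
# and the centralizer of an n-cycle has exactly n elements, so the count is
# n < n^2 = n^{[eps*n] + 1}
count = sum(1 for z in permutations(range(n))
            if ndiff(compose(a, z), compose(z, a)) < 1)
assert count == n < n ** 2
```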
\begin{nt}
For $y\in Sym(n)$ denote by $S_2(y)$ the number of solutions in $Sym(n)$ of the equation $x^2=y$.
\end{nt}
We now compute $S_2(y)$ for some $y\in Sym(n)$.
\begin{nt}
For $x\in Sym(n)$ denote by $c_x(i)$ the number of $x$-cycles of length $i$.
\end{nt}
From this definition we see that $c_x(1)$ is the number of fixed points of $x$ and $\sum_iic_x(i)=n$.
Let $x(1)=t$. Then $x(t)=x^2(1)=y(1)$ and $x(y(1))=x^2(t)=y(t)$. Inductively, we get:
\[x(y^k(t))=y^{k+1}(1)\mbox{ and }x(y^k(1))=y^k(t)\mbox{ for any }k\geqslant 0.\]
There are two cases. Assume that $1$ and $t$ are in the same $y$-cycle, i.e. there exists $a\in\mathbb{N}$ so that $t=y^a(1)$. It follows that $x(y^{k+a}(1))=y^{k+1}(1)$ and $x(y^k(1))=y^{k+a}(1)$. Combining the two equations we get $y^{2a-1}(1)=1$. It is easy to get a contradiction if $y^k(1)=1$ for $k<2a-1$, so $1$ must be in a $y$-cycle of length $2a-1$. All the values of $x$ on the elements composing this $y$-cycle are determined ($x(1)=y^a(1)$ and the rest will follow).
Assume now that $1$ and $t$ are in two distinct $y$-cycles. Then the two cycles must be of equal length. The values of $x$ on the elements composing the two cycles are determined once we chose the value of $x(1)$.
Let's determine the number of solutions of the equation $x^2=y$ when $y$ has only cycles of length $i$. If $i$ is even then $c_y(i)$ must be even, otherwise we have no solution. We have to group these cycles in pairs of two and there are $(c_y(i))!/[(c_y(i)/2)!2^{c_y(i)/2}]$ possibilities to do so. For each coupling we have $i^{c_y(i)/2}$ associated solutions. All in all the number of solutions in this case is:
\[S_2(y)=\frac{(c_y(i))!(i/2)^{c_y(i)/2}}{(c_y(i)/2)!}.\]
If $i$ is odd then we can group $2k$ cycles in $k$ pairs for $k=0,\ldots,[c_y(i)/2]$ (here $[t]$ is the largest integer not exceeding $t$). The cycles left unpaired do not add to the number of solutions, as the permutation $x$ is completely determined on the elements of those cycles. We reach the formula:
\[S_2(y)=\sum_{k=0}^{[c_y(i)/2]}\frac{(c_y(i))!i^k}{(c_y(i)-2k)!k!2^k}.\]
If $y$ is an arbitrary element of $Sym(n)$ then:
\[S_2(y)=\left[\Pi_i\frac{(c_y(2i))!(i)^{c_y(2i)/2}}{(c_y(2i)/2)!}\right]\left[\Pi_i\left(\sum_{k=0}^{[c_y(2i+1)/2]}\frac{(c_y(2i+1))!(2i+1)^k}{(c_y(2i+1)-2k)!k!2^k}\right)\right]\]
provided $c_y(2i)$ is even for each $i$; otherwise $S_2(y)=0$.
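The case analysis above can be verified exhaustively for small $n$. The following Python sketch compares the closed formula for $S_2(y)$ with a brute-force count of square roots over the whole of $Sym(6)$ (an illustration, not part of the argument):

```python
from itertools import permutations
from math import factorial

n = 6

# brute force: tally x -> x^2 over Sym(n)
squares = {}
for x in permutations(range(n)):
    y = tuple(x[x[i]] for i in range(n))
    squares[y] = squares.get(y, 0) + 1

def cycle_counts(y):
    # c[length] = number of y-cycles of that length
    c, seen = {}, set()
    for i in range(n):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j); j = y[j]; length += 1
            c[length] = c.get(length, 0) + 1
    return c

def S2(y):
    total = 1
    for length, c in cycle_counts(y).items():
        if length % 2 == 0:
            if c % 2 == 1:
                return 0                      # even cycles must come in pairs
            total *= factorial(c) * (length // 2) ** (c // 2) // factorial(c // 2)
        else:
            total *= sum(factorial(c) * length ** k
                         // (factorial(c - 2 * k) * factorial(k) * 2 ** k)
                         for k in range(c // 2 + 1))
    return total

for y in permutations(range(n)):
    assert S2(y) == squares.get(y, 0)
```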
\begin{p}\label{maximal number y^2=x}
The maximal number of solutions of the equation $x^2=y$ is attained when $y$ is the identity. In other words: \[S_2(Id)=\max\{S_2(y):y\in Sym(n)\}.\]
\end{p}
\begin{proof}
Assume first that $y$ is composed only of cycles of length $i$. So $n=ic_y(i)$. Then:
\[S_2(y)\leqslant\sum_{k=0}^{[c_y(i)/2]}\frac{(c_y(i))!i^k}{(c_y(i)-2k)!k!2^k}\]
As $S_2(Id)=\sum_{k=0}^{[n/2]}\frac{n!}{(n-2k)!k!2^k}$ it is enough to prove that:
\[\frac{(c_y(i))!i^k}{(c_y(i)-2k)!k!2^k}\leqslant\frac{n!}{(n-2k)!k!2^k}\mbox{ for }k=0,\ldots,[c_y(i)/2].\]
This inequality is equivalent to:
\[c_y(i)(c_y(i)-1)\ldots (c_y(i)-2k+1)i^k\leqslant n(n-1)\ldots (n-2k+1).\]
As $n=ic_y(i)$ we see that $n(n-i)(n-2i)\ldots (n-(2k-1)i)$ is an intermediate value in the inequality above.
Let now $y$ be an arbitrary element in $Sym(n)$. By the first part of the proof $S_2(y)\leqslant\Pi_iS_2(Id_{ic_y(i)})$. The inequality $\Pi_iS_2(Id_{ic_y(i)})\leqslant S_2(Id_n)$ is clear as $\Pi_iS_2(Id_{ic_y(i)})$ counts only some of the solutions of the equation $x^2=Id_n$.
\end{proof}
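For small $n$ the maximality is quickly confirmed by exhaustive search (an illustration, not a proof):

```python
from itertools import permutations

n = 5
# tally x -> x^2 over the whole of Sym(5)
squares = {}
for x in permutations(range(n)):
    y = tuple(x[x[i]] for i in range(n))
    squares[y] = squares.get(y, 0) + 1

identity = tuple(range(n))
# the identity has the most square roots: the 26 involutions of Sym(5)
assert squares[identity] == 26
assert max(squares.values()) == squares[identity]
```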
\begin{nt}
Denote by $Bcyc(n,\varepsilon)=\{c\in Sym(n):\exists w\in Sym(n), waw^{-1}=c, d_H(w^2a,aw^2)<\varepsilon\}$.
\end{nt}
\begin{p}\label{second condition}
For small enough $\varepsilon$ and large enough $n$ we have $|Bcyc(n,\varepsilon)|<\frac1n\cdot(n-1)!$.
\end{p}
\begin{proof}
Combining Propositions \ref{maximal number commuting} and \ref{maximal number y^2=x}, we get that $|Bcyc(n,\varepsilon)|< n^{[\varepsilon n]+1}S_2(Id_n)$.
Clearly $(n-2k)!\cdot k!>[\frac n3]!$ for any $k=0,\ldots,[\frac n2]$. It follows that:
\[S_2(Id_n)=\sum_{k=0}^{[n/2]}\frac{n!}{(n-2k)!k!2^k}<\left[\frac n2\right]\cdot(n!)\cdot\left(\left[\frac n3\right]!\right)^{-1}.\]
It is easy to see that there exists a constant $t>0$ so that $[\frac n3]!>n^{tn}$ for large enough $n$ (one can use Stirling's formula to deduce that $t$ can be chosen arbitrarily close to $1/3$, but we don't need this). Altogether:
\[|Bcyc(n,\varepsilon)|<n^{[\varepsilon n]+1}\cdot n\cdot n^{-tn}\cdot (n!)=n^{[\varepsilon n]+4-tn}\cdot\frac1n(n-1)!.\]
The conclusion can now be deduced.
\end{proof}
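As a quick numerical sanity check of the estimate $S_2(Id_n)<[\frac n2]\cdot(n!)\cdot([\frac n3]!)^{-1}$ used above (again an illustration, not a proof):

```python
from math import factorial

def S2_id(n):
    # number of involutions of Sym(n), i.e. solutions of x^2 = Id_n
    return sum(factorial(n) // (factorial(n - 2 * k) * factorial(k) * 2 ** k)
               for k in range(n // 2 + 1))

for n in range(3, 60):
    # (n-2k)! k! > [n/3]!  gives  S2(Id_n) < [n/2] * n! / [n/3]!
    assert S2_id(n) < (n // 2) * factorial(n) // factorial(n // 3)
```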
\begin{p}
Let $\{n_k\}_k$ be a sequence, $n_k\to\infty$ and $a_k=(1,\ldots,n_k)$. Choose $c_k\in P_{n_k}$ a random cycle from the $(n_k-1)!$ possibilities. Then $\Theta:\mathbb{F}_2\to\Pi_{k\to\omega}P_{n_k}$ defined by $\Theta(a)=\Pi_{k\to\omega}a_k$ and $\Theta(c)=\Pi_{k\to\omega}c_k$ is a sofic representation satisfying the conditions in Proposition \ref{main theorem} with probability $1$.
\end{p}
\begin{proof}
Combine Propositions \ref{first condition} and \ref{second condition}.
\end{proof}
\section*{Acknowledgements}
Special thanks to Lewis Bowen and Florin R\u adulescu for important discussions and references that I used for this paper.
\label{introduction}
The Gamow Shell Model (GSM) is an extension of the traditional nuclear shell model (SM) in the complex-energy plane to describe weakly bound and resonant nuclei (see Ref.\cite{JPG} for a review and Refs.\cite{Mg,tetraneutron,FHT_Yannen} regarding recent developments). The fundamental idea in GSM is to use a one-body basis generated by a finite range potential, called the Berggren basis, which contains bound, resonant and scattering states, so that the outgoing character of the asymptotics of weakly bound and resonant nuclei can be imposed. This allows the calculation of halo states, which extend far away from the nuclear zone, and resonant states, which are unbound.
Since GSM is a nuclear configuration interaction (CI) model, dimensions of the Hamiltonian matrix, and therefore the computational and storage costs, increase exponentially with the number of particles in a calculation. It is obviously necessary to develop a distributed memory parallel implementation of GSM, which was done in an earlier work\,\cite{JPG}. The initial version of the parallel GSM code utilized a one dimensional (1D) partitioning scheme for the Hamiltonian matrix, and simple hybrid parallelization techniques using MPI/OpenMP libraries. This initial version was convenient to implement and is well-balanced in terms of the storage and computations associated with the Hamiltonian matrix elements. It performs relatively well when the number of computational nodes utilized is small, but it requires expensive inter-node communications for large scale computations due to the 1D partitioning scheme, and it does not make effective use of the thread parallelism available on each node. Also, basis vectors of the eigensolver used in finding the nuclear states of interest were replicated redundantly on each node. These limitations significantly hamper the efficiency of the GSM code when performing large-scale calculations.
In this paper, we describe an implementation of the GSM code which significantly improves its performance and storage requirements for large-scale calculations. As mentioned above, GSM is essentially a shell model code; therefore our implementation benefits greatly from techniques used in another shell model code called Many-body Fermion Dynamics nuclei (MFDn), which has been optimized to run efficiently on leadership class supercomputers\,\cite{SM_JPCS,SM_JPCS2,SM_JPCS3,SM_SPMM,SM_LOBPCG}. In particular, we adopt the two dimensional (2D) checkerboard partitioning scheme of MFDn, which is a powerful technique allowing to take advantage of the symmetry of the Hamiltonian matrix while reducing the MPI communication overheads\,\cite{SM_2D}.
However, there are important differences between the GSM code and MFDn. First, GSM uses a continuous basis, i.e.~the Berggren basis, as opposed to the discrete harmonic oscillator basis used in MFDn. This has important consequences for the construction of the Hamiltonian matrix in a 2D partitioned context, as the matrix is less trivially sparse. Second, a fundamental feature of SM is the separation of the large proton-neutron full model space into smaller proton and neutron subspaces, whose treatment poses no numerical problem. On the contrary, the weakly bound nuclei studied within GSM often present a large asymmetry between the number of neutrons and the number of protons. This asymmetry can lead to very large one-body spaces and demands additional memory storage optimizations in GSM, whereas these subspaces are minute compared to the total space in the regular SM. Consequently, a memory optimization absent from SM had to be devised in GSM. This optimization deals with the storage of matrix elements between Slater determinants of the same type (only protons or only neutrons) and with the storage of uncoupled two-body matrix elements. Finally, the computation of many-body resonant states in GSM has led to additional problems absent from SM. Indeed, contrary to the bound states targeted with SM, many-body resonant states targeted with GSM are situated in the middle of scattering states, their energies therefore being interior eigenvalues. Hence, they cannot be calculated with the Lanczos method, which is well suited to calculating extremal eigenvalues, as is the case for well-bound states. As a result, we have to use an eigensolver different from the Lanczos method used in MFDn. We use the Jacobi-Davidson (JD) method, which can directly target the interior eigenvalues and eigenvectors. We use preconditioning and angular momentum projection techniques to ensure rapid convergence of this eigensolver.
We begin by giving an overview of SM and GSM in Sect.\,\ref{SM_generalities}. Techniques used in the construction of the Hamiltonian matrix are described in Sect.\,\ref{Data_storage}. The description of the JD eigensolver, the implementation of a suitable preconditioner, and the use of angular momentum projection to ensure rotational invariance of the basis vectors are given in Sect.\,\ref{GSM_diagonalization}. The parallelization of the Hamiltonian construction, the JD eigensolver and the GSM basis orthogonalization are described in Sect.\,\ref{H_parallelization}. Examples of MPI memory storage and computation times for two nuclear systems are given in Sect.\,\ref{GSM_MPI_computation_examples}.
\section{Background on Shell Model Approaches}
\label{SM_generalities}
\subsection{One-body states} \label{one_body_states}
The basic idea of configuration interaction (CI) is to use a basis of independent-particle many-body states to expand correlated many-body states.
In order to build independent-particle basis states, one starts from the one-body Schr{\"o}dinger equation:
\begin{equation}
\left( \frac{\hat{p}^2}{2 \mu} + \hat{V} \right) \ket{\phi} = e \ket{\phi} \label{one_body_Schrodinger_equation}
\end{equation}
where $\hat{p}^2/(2 \mu)$ is the kinetic operator, proportional to a Laplacian, $\hat{V}$ is the one-body potential, $\mu$ is the effective mass of the particle, $\ket{\phi}$ is the one-body state solution of the one-body Schr{\"o}dinger equation and $e$ is its energy.
In most CI models, spherical potential operators $\hat{V}$ are employed \cite{SM_2D,SM_CPC}, as they allow one to take the rotational invariance of solutions into account exactly.
This is also well suited for the present approach where only quasi-spherical nuclei are considered.
As $\hat{V}$ is spherical, one can decompose $\ket{\phi}$ into radial and angular parts \cite{Cohen_Tannoudji}:
\begin{equation}
\phi(\vec{r}) = \frac{u(r)}{r} \mathcal{Y}^{\ell j}_m(\theta,\varphi) \label{Phi_one_body}
\end{equation}
where $u(r)$ is the radial wave function and $\mathcal{Y}^{\ell j}_m(\theta,\varphi)$ is a spherical harmonic of orbital angular momentum $\ell$ coupled to spin degrees of freedom, so that its total angular momentum is $j$ and the projection of $j$ on the $z$ axis is $m$ \cite{Cohen_Tannoudji}.
We will use the standard orbital notation in the following, where the orbital quantum number $\ell= 0$, 1, 2, 3, 4, $\dots$ is denoted as $s$, $p$, $d$, $f$, $g$, $\dots$.
Hence, for example, the angular part of a wave function bearing $\ell = 1$ and $j=3/2$ is denoted as $p_{3/2}$.
The radial wave function $u(r)$ obeys the following Schr{\"o}dinger equation:
\begin{equation}
u''(r) = \frac{2 \mu}{\hbar^2} \left( \left( \frac{\ell(\ell + 1)}{r^2} + V_l(r) - e \right) u(r) + \int V_{nl}(r,r')~u(r')~dr' \right) \label{Eq_ukr}
\end{equation}
where $V_l(r)$ and $V_{nl}(r,r')$ are the local and non-local radial potentials, respectively, derived from $\hat{V}$. Equation (\ref{Eq_ukr}) is solved numerically using the method of Ref.~\cite{V_non_local}.
In the traditional SM, the $\ket{\phi}$ states are harmonic oscillator states \cite{SM_2D,SM_CPC}, which are well suited for well-bound nuclear states, but not for loosely bound or resonant states.
Hence, we use Berggren basis states instead \cite{Berggren}, which are generated by a finite depth potential, and contain bound, resonant and scattering states (see Fig.(\ref{Berggren_basis})). The Berggren completeness relation reads:
\begin{equation}
\sum_{n} \ket{\phi_n} \bra{\phi_n} + \int_{L_+} \ket{\phi(k)} \bra{\phi(k)} \, dk = I,
\label{Berggren_comp}
\end{equation}
where the $\ket{\phi_n}$ states are the bound states and the resonant states inside the $L^+$ contour of Fig.(\ref{Berggren_basis}). These states are usually called pole states as they are poles of the $S$-matrix \cite{JPG}. The $\ket{\phi(k)}$ states stand for scattering states and follow the $L^+$ contour of Fig.(\ref{Berggren_basis}), and $I$ is the identity operator.
Scattering states, which initially form a continuum, are discretized using a Gauss-Legendre quadrature with about 30 points to ensure convergence \cite{Gauss_Legendre}. Once discretized, the Berggren basis plays the same role in CI as a basis of harmonic oscillator states.
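As an illustration, a piecewise-linear $L^+$ contour can be discretized with Gauss-Legendre quadrature as sketched below (the vertices, point counts and function name are illustrative; they are not taken from the GSM code):

```python
import numpy as np

def discretize_contour(vertices, n_per_segment=10):
    """Discretize a piecewise-linear contour in the complex k-plane
    with Gauss-Legendre quadrature: returns nodes k_i and weights w_i."""
    x, w = np.polynomial.legendre.leggauss(n_per_segment)  # nodes on [-1, 1]
    ks, ws = [], []
    for a, b in zip(vertices[:-1], vertices[1:]):
        # linear map of [-1, 1] onto the segment [a, b] of the contour
        ks.append(0.5 * (b - a) * x + 0.5 * (a + b))
        ws.append(0.5 * (b - a) * w)
    return np.concatenate(ks), np.concatenate(ws)

# typical L+ shape: dips below the real axis to enclose resonance poles
vertices = [0.0, 0.2 - 0.1j, 0.5, 3.0]
k, w = discretize_contour(vertices, n_per_segment=10)
```

The sums $\sum_i w_i$ and $\sum_i w_i k_i$ reproduce the exact contour integrals $\int_{L_+} dk$ and $\int_{L_+} k \, dk$, as the quadrature is exact for low-degree polynomials on each segment.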
\begin{figure}
\centering
\includegraphics[scale=0.6]{Berggren.pdf}
\caption{Berggren basis for a given partial wave. Different types of states, including bound, decaying, scattering, as well as anti-bound and capturing states, are depicted.}
\label{Berggren_basis}
\end{figure}
As $\ket{\phi}$ has fixed quantum numbers $j$ and $m$, angular momentum algebra can be used therein with ladder operators $j^\pm$. Let us write $\ket{\phi} = \ket{u ~ \ell ~ j ~ m}$ so as to make apparent the dependence on quantum numbers:
\begin{equation}
j^\pm \ket{u ~ \ell ~ j ~ m} = \hbar ~ \sqrt{j (j + 1) - m (m \pm 1)} \ket{u ~ \ell ~ j ~ m \pm 1} \label{jpm}
\end{equation}
which has the property to raise or lower the value of $m$ by one unit keeping all other values unchanged.
Another operator based on one-body states properties is the $m$-reversal operator (also improperly called time-reversal symmetry):
\begin{equation}
T \ket{j ~ m} = (-1)^{j - m} \ket{j ~ -m} \label{TRS_one_body}
\end{equation}
The use of this operator allows one to save memory and computation in CI, as we will see in the following sections.
\subsection{Slater determinants} \label{SD_section}
One can build independent-particle many-body states of all nucleons from antisymmetrized tensor products of $\ket{\phi}$ states of coordinates $\vec{r}$, or Slater determinants:
\begin{equation}
SD(\vec{r}_1, \cdots, \vec{r}_A) = \sqrt{\frac{1}{A!}}
\begin{vmatrix}
{\phi_1} (\vec{r}_1) & {\phi_2} (\vec{r}_1) & \cdots & {\phi_A} (\vec{r}_1) \\
{\phi_1} (\vec{r}_2) & {\phi_2} (\vec{r}_2) & \cdots & {\phi_A} (\vec{r}_2) \\
\vdots & \vdots & \vdots & \vdots \\
{\phi_1} (\vec{r}_A) & {\phi_2} (\vec{r}_A) & \cdots & {\phi_A} (\vec{r}_A)
\end{vmatrix}
\label{SD_det}
\end{equation}
where $A$ is the number of nucleons of the considered nuclear state. A complete basis can then be generated by considering all combinations of the $\ket{\phi}$ states provided by Eq.(\ref{Eq_ukr}). This basis is of infinite dimension and has to be truncated in practice. For this, one imposes maximal values of $\ell$ and $e$ on the $\ket{\phi}$ states used, and one also limits the number of occupied scattering $\ket{\phi}$ states in Eq.(\ref{SD}) (see below), typically to at most 2 to 4.
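A minimal sketch of the basis construction under these truncations is given below (the toy one-body states and the function name are illustrative; a fixed total projection $M$ is imposed, as discussed in Sect.\,\ref{J2_projection}):

```python
from itertools import combinations

# one-body states: (index, m, is_scattering) -- an illustrative toy basis
states = [(0, -0.5, False), (1, 0.5, False),
          (2, -0.5, True), (3, 0.5, True), (4, -1.5, True), (5, 1.5, True)]

def build_basis(states, n_particles, M_total, max_scattering):
    """All Slater determinants with fixed total projection M and a capped
    number of occupied scattering states (continuum truncation)."""
    basis = []
    for sd in combinations(states, n_particles):  # antisymmetry: each state at most once
        M = sum(s[1] for s in sd)
        n_scat = sum(s[2] for s in sd)
        if M == M_total and n_scat <= max_scattering:
            basis.append(tuple(s[0] for s in sd))
    return basis

basis = build_basis(states, n_particles=2, M_total=0.0, max_scattering=1)
```

With at most one particle in the continuum, only the determinants $(0,1)$, $(0,3)$ and $(1,2)$ survive; the fully-scattering pairs $(2,3)$ and $(4,5)$ are cut by the truncation.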
Slater determinants involving pole states only are of fundamental importance, as they form the most important many-body basis states of a nuclear state decomposition. In particular, a diagonalization of the Hamiltonian in a model space generated by pole Slater determinants is called the pole approximation and is used to initialize the JD method (see Sect.\,\ref{GSM_diagonalization}). Other Slater determinants are called \emph{scattering states}.
The extension of the $m$-reversal operator of Eq.(\ref{TRS_one_body}) to Slater determinants is also straightforward:
\begin{equation}
T \ket{SD} = \prod_{i=1}^A T_i \ket{\phi_i} \label{TRS_SD}
\end{equation}
\subsection{Occupation formalism} \label{occupation_formalism}
In the following, we will use occupation formalism \cite{Cohen_Tannoudji}. It is based on the use of creation and annihilation operators of one-body states, denoted by $a^\dagger_{\alpha}$ and $a_{\alpha}$ respectively for the creation and annihilation of the state $\ket{\alpha}$.
Consequently, a Slater determinant can be written in a more concise form:
\begin{equation}
\ket{SD} = a^\dagger_{\phi_A} ~ \cdots ~ a^\dagger_{\phi_1} \ket{~} = \ket{{\phi_1} ~ {\phi_2} ~ \cdots ~ {\phi_A}} \label{SD}
\end{equation}
where $\ket{~}$ is the vacuum state, which contains no particles by definition.
Occupation formalism algebra is closed and fulfills antisymmetry requirements if the operators satisfy the following equations:
\begin{equation}
\{ a^\dagger_{\alpha} , a^\dagger_{\beta} \} = 0 \mbox{ , } \{ a_{\alpha} , a_{\beta} \} = 0 \mbox{ , } \{ a^\dagger_{\alpha} , a_{\beta} \} = \delta_{\alpha \beta} \label{a_dagger_a}
\end{equation}
where $\ket{\alpha}$ and $\ket{\beta}$ are one-body states and curly brackets denote the anticommutator:
\begin{equation}
\{ O_1 , O_2 \} = O_1 O_2 + O_2 O_1 \label{anticommutation}
\end{equation}
where $O_1$ and $O_2$ are two operator functions of $a^\dagger$ and $a$.
It is then convenient to write the Hamiltonian in occupation formalism:
\begin{equation}
H = \sum_{\alpha \beta} \braket{\alpha | h | \beta} {a^\dagger_\alpha} {a_\beta} + \sum_{\alpha < \beta , \gamma < \delta} \braket{\alpha \beta | V | \gamma \delta} {a^\dagger_\alpha} {a^\dagger_\beta} {a_\delta} {a_\gamma} \label{H_occupation_formalism}
\end{equation}
where $h$ is the one-body part of $H$, containing its kinetic and mean-field part, while $V$ is its two-body part, embodying inter-nucleon correlations, and $\alpha$, $\beta$, $\gamma$, $\delta$ are one-body states.
In the following, as the $h$ matrix elements are used with one creation operator and one annihilation operator, and as $V$ is used with two creation and two annihilation operators, they will be referred to as the one particle-one hole (1p-1h) part and the two particle-two hole (2p-2h) part, respectively.
In particular, with $N_s$ being the number of states used in the one-body basis, one can see that the number of 1p-1h matrix elements scales as $N_s^2$ and that of 2p-2h matrix elements scales as $N_s^4$. To give an order of magnitude for $N_s$, let us consider a $p_{3/2}$ contour for the Berggren basis. As the Gauss-Legendre quadrature applied to the Berggren basis contour provides about 30 states (see Sect.\,\ref{one_body_states}), and as one has 4 possible $m$-projections for a $p_{3/2}$ shell, one has $N_s = 120$. It is thus clear that the number of 1p-1h calculations in $H$ (see Eq.(\ref{H_occupation_formalism})) is at least four orders of magnitude smaller than the number of 2p-2h calculations. Consequently, the 1p-1h part can be neglected from a performance point of view.
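The counting above can be made concrete with a few lines (a sketch; the counts ignore symmetry restrictions on the matrix elements):

```python
n_contour = 30            # Gauss-Legendre points on the p3/2 contour
n_m = 4                   # m-projections of a j = 3/2 shell
N_s = n_contour * n_m     # 120 one-body states

n_1p1h = N_s ** 2         # number of 1p-1h matrix elements
n_2p2h = N_s ** 4         # number of 2p-2h matrix elements
ratio = n_2p2h / n_1p1h   # = N_s^2, i.e. more than 4 orders of magnitude
```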
A matrix element $\braket{SD_f | H | SD_i}$, with $\ket{SD_i}$ and $\ket{SD_f}$ the initial and final Slater determinants, respectively, can be written as a function of one-body and two-body matrix elements, as well as expectation values of creation and annihilation matrix elements between $\ket{SD_i}$ and $\ket{SD_f}$ (see Eq.(\ref{H_occupation_formalism})). The latter are in particular equal to 0 or $\pm 1$ from Eq.(\ref{a_dagger_a}).
\subsection{Rotational invariance} \label{J2_projection}
The basis of Slater determinants provided by Eq.(\ref{SD}) is complete and fully antisymmetric. However, it is not rotationally invariant, as the total angular momentum is not defined therein. Its projection on the $z$ axis, however, is conserved, since its value $M$ is equal to the sum of one-body total angular momentum projections $m_i$, $i \in [1:A]$ (see Eq.(\ref{Phi_one_body})).
Consequently, given that $M$ is a good quantum number in nuclear states, we only have to consider the Slater determinants of fixed total angular momentum projection $M$ in a shell model calculation, which is called the $M$-scheme approach in CI \cite{SM_ANP}.
This approach is preferred to $J$-scheme \cite{SM_ANP}, where Slater determinants are replaced by independent-particle many-body states coupled to $J$. Due to the conservation of the many-body total angular momentum $J$ at basis level in $J$-scheme, CI dimensions are typically smaller by a factor of 5-20 in light nuclei.
However, the $J$-scheme formulation is also more difficult to implement due to antisymmetry requirements and generally leads to denser Hamiltonian matrices (more non-zero elements)\,\cite{aktulga_jscheme}.
The $J$ quantum number can be imposed in $M$-scheme by using appropriate linear combinations of Slater determinants. This can be effected because the $\hat{J}^2$ operator is closed in $M$-scheme, as it connects Slater determinants whose one-body states differ through their $m$ quantum number only, i.e.~belonging to the same configuration (also called partition) \cite{SM_2D,SM_CPC}.
A configuration enumerates its occupied shells without consideration of the $m$ quantum numbers of the occupied one-body states. For example, if $\ket{0s_{1/2}(-1/2) ~ 0s_{1/2}(1/2) ~ 0p_{3/2}(-1/2)}$ is a Slater determinant, its associated configuration is $[0s_{1/2}^2 ~ 0p_{3/2}^1]$.
Hence, it is always possible to build $J$ coupled states in a configuration from linear combinations of its Slater determinants. This is done using the L{\"o}wdin operator \cite{Lowdin}:
\begin{equation}
P_J = \prod_{J' \neq J} \frac{\hat{J}^2 - J(J+1)~I}{J'(J' + 1) - J(J+1)} \label{PJ_Lowdin}
\end{equation}
which projects out all the angular momenta $J' \neq J$. It has been checked numerically that the $J$ quantum number conservation is precise up to $10^{-10}$ or less in our applications.
In order to apply $\hat{J}^2$, one considers its standard formulation:
\begin{eqnarray}
&&\hat{J}^2 = J^- J^+ + M(M + 1)~I \label{J2} \\
&&J^\pm = \sum_{i=1}^A j^\pm_i \label{Jpm}
\end{eqnarray}
where Eq.(\ref{jpm}) has been used. As $J^\pm$ connects Slater determinants of the same configuration whose $M$ quantum numbers differ by one unit, it is a very sparse one-body operator, so that the computation of $\hat{J}^2$ is fast as well.
Consequently, even though Slater determinants do not have $J$ as a good quantum number, the use of $P_J$ of Eq.(\ref{PJ_Lowdin}) allows one to efficiently build $J$-coupled linear combinations of Slater determinants.
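For illustration, the ladder-operator matrix elements of Eq.(\ref{jpm}) and the operator identity underlying Eq.(\ref{J2}), $\hat{J}^2 = J^- J^+ + J_z(J_z + I)$, which reduces to Eq.(\ref{J2}) on states of fixed projection $M$, can be checked numerically on a single $j = 3/2$ shell (a toy sketch with $\hbar = 1$; the function name is ours, not that of the GSM code):

```python
import numpy as np

def ladder_matrices(j):
    """Matrices of j+, j- and jz in the |j m> basis, m = -j, ..., j (hbar = 1)."""
    dim = int(round(2 * j)) + 1
    m = np.arange(-j, j + 1)
    jp = np.zeros((dim, dim))
    for i in range(dim - 1):
        # <j, m+1| j+ |j, m> = sqrt(j(j+1) - m(m+1)), cf. Eq. (jpm)
        jp[i + 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
    return jp, jp.T, np.diag(m)

jp, jm, jz = ladder_matrices(1.5)
j2 = jm @ jp + jz @ (jz + np.eye(4))   # J^2 = J- J+ + Jz(Jz + 1)
```

The resulting matrix is $j(j+1) I = \tfrac{15}{4} I$, as it must be for a single shell.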
The $M$-reversal symmetry (see Eq.(\ref{TRS_one_body}) and Eq.(\ref{TRS_SD})) is a direct consequence of rotational invariance. Indeed, rotational invariance implies that the physical properties of a shell model eigenvector are independent of $M$, while the $M$-reversal symmetry implies that they are invariant through the symmetry $M \leftrightarrow -M$.
In particular, if $M=0$, the $M$-reversal symmetry allows one to calculate only half of the output shell model vector when multiplying $H$ by an input shell model vector. Indeed, the components along the Slater determinants $\ket{SD}$ and $T \ket{SD}$ differ only by a phase equal to $\pm 1$, which is straightforward to calculate.
Consequently, as it is always possible to calculate shell model eigenvectors of even nuclei with $M=0$, even nuclei are twice as fast to calculate as one would originally expect.
Theoretically, the use of $P_J$ of Eq.(\ref{PJ_Lowdin}) to impose $J$ as a good quantum number is not necessary. Indeed, as $[H,\vec{J}] = 0$, a linear combination of Slater determinants coupled to $J$ yields another vector coupled to $J$ when acted on by $H$. However, due to numerical inaccuracies, $H$ and $\vec{J}$ do not exactly commute, so that the $J$ quantum number can be lost after several matrix-vector multiplications. The obvious remedy is to suppress the components with $J' \neq J$ of a shell model vector when it is no longer an eigenstate of $\hat{J}^2$, which is done using Eq.(\ref{PJ_Lowdin}).
In order to know whether $P_J$ has to be applied or not, one has to test whether a shell model vector is an eigenstate of $\hat{J}^2$, which is rather fast. Indeed, if $M = J$, as is usually the case, one checks whether a shell model vector is an eigenstate of $\hat{J}^2$ by verifying that the action of $J^+$ on it yields zero, as the angular momentum projection of a vector coupled to $J$ cannot be larger than $J$. The only exception to this rule occurs when one uses the $M$-reversal symmetry with $J > 0$, as one then always has $M = 0$. In this case, one has to check that the action of $\hat{J}^2 - J(J+1)~I$ on the considered shell model vector is equal to zero.
Hence, as one applies $J^\pm$ only once or twice after the matrix-vector operation borne by the application of $H$, this test is very fast compared to the application of $H$ or $P_J$.
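A minimal numerical sketch of the L{\"o}wdin projection of Eq.(\ref{PJ_Lowdin}) is given below, here for the $M=0$ space of two spin-1/2 particles; the $\hat{J}^2$ matrix is written down directly for brevity, whereas in GSM it would be built from Eqs.(\ref{J2})-(\ref{Jpm}):

```python
import numpy as np

def lowdin_projector(J2, J, J_others):
    """P_J = prod_{J' != J} (J^2 - J'(J'+1) I) / (J(J+1) - J'(J'+1))."""
    dim = J2.shape[0]
    P = np.eye(dim)
    for Jp in J_others:
        P = P @ (J2 - Jp * (Jp + 1) * np.eye(dim)) \
              / (J * (J + 1) - Jp * (Jp + 1))
    return P

# M = 0 subspace of two spin-1/2 particles, basis {|ud>, |du>}:
# J^2 has eigenvalues 0 (singlet) and 2 (triplet)
J2 = np.array([[1.0, 1.0], [1.0, 1.0]])
P1 = lowdin_projector(J2, J=1, J_others=[0])
v = P1 @ np.array([0.3, -0.8])     # project an arbitrary vector onto J = 1
```

The projected vector satisfies $\hat{J}^2 v = J(J+1) v = 2v$, and $P_J$ is idempotent, as expected for a projector.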
\section{Data Storage in GSM}
\label{Data_storage}
In the SM and GSM approaches, memory utilization is of critical importance, as the sizes of the matrices involved grow rapidly with the problem size. To draw a comparison between SM and GSM in this regard, let us consider $N_v$ valence nucleons in a one-body space of $N_s$ states. Based on the discussion in the previous section, one can neglect antisymmetry, parity and $M$ projection in such a comparison, as the overall impact of these factors is minimal. Consequently, we will consider only 2p-2h matrix elements (see Sect.\,\ref{occupation_formalism}), of the form $\braket{SD_{f} | H | SD_{i}} = \pm \braket{\alpha_{f} ~ \beta_{f} | V | \alpha_{i} ~ \beta_{i}}$. One then obtains that the dimension of the Hamiltonian in the GSM approach, denoted as $d$, is proportional to ${N_s}^{N_v}$ and that the probability of a non-zero matrix element is proportional to $(1/N_s)^{N_v - 2}$, as all states must be equal in $\ket{SD_{i}}$ and $\ket{SD_{f}}$, except for $\ket{\alpha_{i} ~ \beta_{i}} \neq \ket{\alpha_{f} ~ \beta_{f}}$. This implies that the total number of non-zeros is close to $d^{1 + 2/N_v}$, which corresponds to $d^{1.67}$ and $d^{1.5}$ for 3p-3h and 4p-4h truncations in the continuum, respectively. For instance, for $d \sim 10^9$, compared to the typical $d^{1.4}$ number of non-zeros in SM at this dimension \cite{SM_JPCS}, the GSM Hamiltonian matrices are typically one to two orders of magnitude larger than those of the regular SM. To ensure fast construction of the GSM Hamiltonian and tackle the data storage issue of the resulting sparse matrices, we have developed the techniques presented below.
\subsection{Bit algebra in SM vs state storage in GSM}
We saw that configurations and Slater determinants are necessary to build the many-body basis states of GSM. There are far fewer configurations than Slater determinants, which leads to many advantages. Due to valence space truncations, configurations are generated sequentially; this is computationally inexpensive, as their number is small. Since Slater determinants of different configurations are independent, it is straightforward to parallelize at this level: configurations are distributed over the available processing cores, and all Slater determinants are generated independently by varying the $m$ quantum numbers of the occupied shells of a fixed configuration. The obtained Slater determinants are then distributed to all compute nodes, as the full basis of Slater determinants must be present on each node.
When building the Hamiltonian matrix $H$, we first consider configurations and then their Slater determinants. This way, searches for Slater determinant indices are very fast, as a binary search is performed at the level of configurations first and at the level of Slater determinants afterwards. This also allows us to save memory, as all indices related to shells can be stored in arrays involving configurations only, while those involving the $m$ quantum numbers are stored in arrays involving Slater determinants.
In GSM, configurations and Slater determinants involve few valence particles, many shells and many one-body states, whereas in SM one has many valence particles, few shells and few one-body states. Thus, our storage implementation differs from the bit-based storage of traditional SM \cite{SM_CPC}.
As a state can be occupied by at most one nucleon due to antisymmetry, and given the SM features mentioned previously, it is convenient in SM to associate a Slater determinant with a binary number. For example, $\ket{1011000000}$ is a Slater determinant where states 1, 3 and 4 are occupied, while states 2 and 5, ..., 10 are unoccupied. The strength of this method is that Slater determinant algebra is handled by bit operations, which are very fast. However, this method becomes inefficient if one has many unoccupied states: one integer occupies 4 bytes, or 32 bits, each equal to 0 or 1, so that it can encode at most 32 states. It is customary in GSM to have contours for the $s_{1/2}$, $p_{1/2}$ and $p_{3/2}$ partial waves, each typically discretized with 30 points (see Sect.\,\ref{one_body_states}). Conversely, it is sufficient to use 5 to 10 harmonic oscillator states for the $d_{3/2}$ and $d_{5/2}$ partial waves to obtain convergence. Let us consider as an example that we discretize contours with 30 points and take 10 harmonic oscillator states for the $d$ partial waves, which are in fact typical values in practice. This generates 340 one-body states, so that 11 integers, i.e.~44 bytes as a whole, would be necessary to store a single Slater determinant.
This would be inefficient memory-wise because one would have to store many zeros, and maybe also inefficient performance-wise due to the larger arrays to consider.
Hence, we store configurations and Slater determinants as regular arrays, in which case the former example would be stored as $\{1,3,4\}$, requiring only 6 bytes, as each state index is represented by a short integer.
The bit scheme and regular scheme become equivalent if one has 22 valence nucleons, which is well beyond the current capacities of our code, where the maximal number of particles in the continuum is about 4.
Calculations are fast with this storage scheme, and they can easily be loop-parallelized (unlike bit operations acting on individual bits).
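The memory comparison between the two schemes can be sketched as follows (state counts taken from the example above; variable names are illustrative):

```python
import numpy as np

N_STATES = 340        # three 30-point contours plus d-shell HO states (m included)
occupied = (1, 3, 4)  # state indices occupied in the Slater determinant

# bit scheme (traditional SM): one bit per one-body state, packed in 32-bit words
n_words = -(-N_STATES // 32)    # ceil(340 / 32) = 11 integers
bit_bytes = 4 * n_words         # 44 bytes, most of them storing zeros

# array scheme (GSM): one 2-byte index per occupied state
arr = np.array(occupied, dtype=np.int16)
arr_bytes = arr.nbytes          # 6 bytes for {1, 3, 4}

# break-even point: 2 bytes x N_v = 44 bytes at N_v = 22 valence nucleons
break_even = bit_bytes // 2
```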
\subsection{Calculation and storage of phases in GSM}
We will deal here with the numerical problems generated by the large asymmetry between proton and neutron spaces.
As the problem is symmetric under the exchange of the roles of protons and neutrons,
and as we will study examples with valence neutrons only, we will only consider the case where the neutron space is dominant.
Proton-rich nuclei, for which the proton space is the largest, bear similar properties.
As already mentioned, one cannot store all data related to the neutron space, as the space required to do so is much larger than that needed in SM.
The fundamental reason for this is the use of the Berggren basis. Indeed, one typically has 100-200 neutron valence shells, arising from the discretization of the scattering contours (see Fig.(\ref{Berggren_basis})), as the contour of each partial wave has to be discretized with about 30 states, and one has 5 partial waves for $\ell \leq 2$ ($s_{1/2}$, $p_{1/2}$, $p_{3/2}$, $d_{3/2}$, $d_{5/2}$). Conversely, this number is 30 for an $8 \hbar \omega$ space in SM, as one considers harmonic oscillator shells of the form $\ket{n ~ \ell ~ j}$, which then have to satisfy $2n + \ell \leq 8$.
Additionally, the neutron-to-proton ratio in typical GSM applications is usually very large (for instance, in the study of nuclei along the neutron drip-line) and one may often have only valence neutrons in a calculation. It would be too costly to recalculate neutron matrix elements each time. Thus, memory optimization schemes had to be devised to store matrix elements between neutron Slater determinants. They deal with what we call phases, i.e.~matrix elements involving only creation or annihilation operators, of the form $\braket{SD_f | a_\alpha^\dagger | SD_i}$, $\braket{SD_f | a_\alpha^\dagger ~ a_\beta | SD_i}$, ..., which are equal to $\pm 1$ and must be stored along with the indices of the one-body shells and states, configurations and Slater determinants involved.
Let us define the initial and final spaces as being the spaces to which the initial $\ket{\Psi_i}$ and final $\ket{\Psi_f}$ many-body states belong, where one has $H \ket{\Psi_i} = \ket{\Psi_f}$.
In the 2D parallelization scheme (which will be discussed in more detail in Sect.\,\ref{H_parallelization}), one can use the fact that the initial and final spaces assigned to each processor core, which correspond to the columns and rows of the square Hamiltonian matrix, respectively, are only a fraction of the full space \cite{SM_2D}.
Hence, we can store matrix elements of the form $\braket{SD_{int} | a_\alpha | SD_i}$ and $\braket{SD_f | a_\alpha^\dagger | SD_{int}}$, $\braket{SD_{int} | a_\alpha ~ a_\beta | SD_i}$ and $\braket{SD_f | a_\alpha^\dagger ~ a_\beta^\dagger | SD_{int}}$,
where $\ket{SD_{int}}$ is an intermediate Slater determinant, chosen so that the data to store are minimal.
It is clear that any one-body or two-body observable can be calculated with this scheme.
Indeed, one has:
\begin{eqnarray}
\braket{SD_f | a_\alpha^\dagger ~ a_\beta | SD_i} &=& \braket{SD_f | a_\alpha^\dagger | SD_{int}} \braket{SD_{int} | a_\beta | SD_i} \label{one_body_phase_2D} \\
\braket{SD_f | a_\alpha^\dagger ~ a_\beta^\dagger ~ a_\delta ~ a_\gamma | SD_i} &=& \braket{SD_f | a_\alpha^\dagger ~ a_\beta^\dagger | SD_{int}} \braket{SD_{int} | a_\delta ~ a_\gamma | SD_i} \label{two_body_phase_2D}
\end{eqnarray}
The phase matrix, containing these matrix elements, is stored in a sparse form:
One fixes $\ket{SD_i}$, and all $\ket{SD_f}$ differing by one or two states are generated.
Thus, the obtained phase, as well as the indices of $\ket{SD_f}$ and of the associated one-body states ($\alpha$, $\beta$, $\gamma$ and $\delta$), must be stored.
In order to save memory, one loops first over configurations (see Sect.\,\ref{J2_projection}), so that only the configuration index of $\ket{SD_f}$ and the shell indices (functions of $n$, $\ell$, $j$,
but not $m$) associated with the one-body states ($\alpha$, $\beta$, $\gamma$ and $\delta$) are stored; one then loops over the Slater determinants of each fixed configuration, where only the $m$-dependent values are stored.
From this information, one can recover the full phase matrix.
One can see that the number of phases involving $\ket{SD_{int}}$ is proportional to $2 N_v$ for one-body phases and to $N_v (N_v - 1)$ for two-body phases, $N_v$ being the number of valence nucleons. Indeed, one has two arrays of one-body phases and two arrays of two-body phases. The storage of phase matrices is further optimized by requiring that $\alpha < \beta$ and $\gamma < \delta$ and by leveraging the $M$-reversal symmetry. Indeed, if one applies the $M$-reversal operator of Eq.(\ref{TRS_SD}) to the Slater determinants entering $\braket{SD_{int} | a_\alpha | SD_i}$ and $\braket{SD_{int} | a_\alpha ~ a_\beta | SD_i}$, the obtained phase and associated indices can be easily recovered from the phase matrix element of the initial Slater determinants. This yields an additional memory gain of about a factor of 2 for the storage of phases, at the cost of negligible additional numerical operations.
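The phase factorization of Eq.(\ref{one_body_phase_2D}) can be sketched on index tuples (a toy model of the actual storage; the sign convention counts the occupied states preceding the state acted upon, with indices kept in increasing order, and global conventions may differ by an overall sign):

```python
def annihilate(sd, alpha):
    """<SD'| a_alpha |SD>: remove alpha; sign = (-1)^(occupied states before alpha)."""
    if alpha not in sd:
        return None, 0
    pos = sd.index(alpha)
    return sd[:pos] + sd[pos + 1:], (-1) ** pos

def create(sd, alpha):
    """<SD'| a+_alpha |SD>: insert alpha keeping indices ordered; same sign rule."""
    if alpha in sd:
        return None, 0
    pos = sum(s < alpha for s in sd)
    return sd[:pos] + (alpha,) + sd[pos:], (-1) ** pos

# <SD_f| a+_2 a_3 |SD_i> factorized through an intermediate Slater determinant
sd_i = (1, 3, 4)
sd_int, ph1 = annihilate(sd_i, 3)   # intermediate SD (1, 4), phase -1
sd_f, ph2 = create(sd_int, 2)       # final SD (1, 2, 4), phase -1
phase = ph1 * ph2                   # total phase +1
```

Only the two partial phases and the associated indices need to be stored; their product reconstructs the full matrix element.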
We note that the memory optimization of phases described above is not utilized for the angular momentum operator $\hat{J}^2$. Since $\hat{J}^2$ is a function of $J^{\pm}$ (see Sect.\,\ref{J2_projection}), which are very sparse operators, the number of phases used therein is much smaller than for the Hamiltonian matrix. The decomposition of Eq.(\ref{one_body_phase_2D}) involving an intermediate Slater determinant is therefore not necessary.
\subsection{Construction and Storage of the Hamiltonian} \label{H_MEs}
The 1p-1h part of $H$ is negligible for performance purposes and hence is not considered in this section (see Sect.\,\ref{occupation_formalism}). We therefore concentrate first on the neutron 2p-2h part of $H$. From Eq.(\ref{H_occupation_formalism}), one has:
\begin{equation}
\braket{SD_f | H | SD_i} = \braket{SD_f | a_\alpha^\dagger ~ a_\beta^\dagger ~ a_\delta ~ a_\gamma | SD_i} \braket{\alpha \beta | V | \gamma \delta} \label{H_ME_2p_2h_neutron}.
\end{equation}
One then has to generate all the Slater determinants $\ket{SD_f}$ for a fixed Slater determinant $\ket{SD_i}$.
For this, one loops over all intermediate configurations $C_{int}$ and Slater determinants $\ket{SD_{int}}$ (see above). One then obtains the phase $\braket{SD_{int} | a_\delta ~ a_\gamma | SD_i}$ and its associated one-body states. Applying the same procedure to $\ket{SD_{int}}$, one generates the phase $\braket{SD_f | a_\alpha^\dagger ~ a_\beta^\dagger | SD_{int}}$ and its associated one-body states, so that the two-body phase $\braket{SD_f | a_\alpha^\dagger ~ a_\beta^\dagger ~ a_\delta ~ a_\gamma | SD_i}$ is obtained along with its one-body states.
The two-body matrix element $\braket{\alpha \beta | V | \gamma \delta}$ then follows from the knowledge of the one-body states, so that the Hamiltonian matrix element $\braket{SD_f | H | SD_i}$ is obtained.
The treatment of the proton 2p-2h part of $H$ is mutatis mutandis the same as that of its neutron 2p-2h part, and the proton-neutron 2p-2h part of $H$ is very similar:
\begin{eqnarray}
\braket{SD_f | H | SD_i} &=& \braket{\alpha_p ~ \beta_n | V | \gamma_p ~ \delta_n} \nonumber \\
&\times& \braket{{SD_p}_{(f)} | a_{\alpha_p}^\dagger | {SD_p}_{(int)}} \braket{{SD_p}_{(int)} | a_{\gamma_p} | {SD_p}_{(i)}} \nonumber \\
&\times& \braket{{SD_n}_{(f)} | a_{\beta_n}^\dagger | {SD_n}_{(int)}} \braket{{SD_n}_{(int)} | a_{\delta_n} | {SD_n}_{(i)}}
\end{eqnarray}
where the proton and neutron character of states have been explicitly written. The computational method is otherwise the same as in the neutron 2p-2h part of $H$.
\subsection{On-the-fly calculation and partial storage of the Hamiltonian matrix} \label{on_the_fly_partial}
The discussion so far has focused on the \emph{full storage scheme} of the GSM code, where all elements of the Hamiltonian matrix are stored in memory. As the Hamiltonian matrix is always sparse, one stores only the non-zero matrix elements and their associated indices. While this is ideal from a computational cost point of view, as no recalculation of matrix elements is needed, memory requirements in GSM grow rapidly with the number of nucleons.
The Hamiltonian storage is the main factor limiting the problem size, as work vectors, even though kept in fast memory, require much less space than the Hamiltonian: only a few tens, or at worst hundreds, of them are necessary (see Sect.\,\ref{H_parallelization}).
Despite the memory optimizations described above, the \emph{full storage scheme} is essentially \emph{total memory bound}; in other words, calculations possible with this scheme are limited by the aggregate memory space available on the compute nodes. Therefore, in addition to the full storage scheme, we have developed on-the-fly and partial matrix storage schemes for GSM.
In the \emph{on-the-fly scheme}, the sparse matrix-vector multiplications (SpMVs) needed during the eigensolver are performed on-the-fly as the Hamiltonian is being recalculated from scratch at each eigensolver iteration. While memory requirements for the Hamiltonian matrix are virtually non-existent in this case, computation time is maximal due to repeated constructions of the Hamiltonian.
In the \emph{partial storage} scheme, we do not store the final Hamiltonian matrix elements, each of which is a two-body interaction coefficient multiplied by a phase; these make up essentially the whole Hamiltonian matrix, up to a negligible part (see the discussion in Sect.\,\ref{H_parallelization}). Indeed, the number of two-body interaction coefficients is much smaller than the number of Hamiltonian matrix elements (see Sect.\,\ref{occupation_formalism}). Therefore, it is more efficient from the storage point of view to store the index and phase of each two-body matrix element, so that the final matrix element is obtained by a lookup into the two-body interaction coefficient array followed by a multiplication by the corresponding phase. Here, the most time-consuming part is the search in the two-body interaction coefficient array, as the latter is very large and the two-body matrix elements considered in subsequent searches are not necessarily contiguous in the interaction array. The memory gain of the partial storage method compared to the full storage method is about a factor of 2.5: on the one hand, integers are stored instead of complex two-body matrix elements; on the other hand, it is still necessary to store the indices of the Slater determinants $\ket{SD_f}$ (see Sect.\,\ref{H_MEs}). Consequently, the partial storage scheme lies in between the full storage and on-the-fly schemes: its memory requirements, while still significant, are not as large as those of the full storage scheme, and it is faster than the on-the-fly scheme as it does not require reconstructing the Hamiltonian from scratch.
\section{Diagonalization of the GSM matrix}
\label{GSM_diagonalization}
In SM, as one is only interested in the low-lying spectrum of a real symmetric Hamiltonian, the Lanczos method is the tool of choice. It is a Krylov subspace method which makes use of matrix times vector operations only and the low-lying extremal eigenvalues are the ones that converge first. While the same method could be used for searching the bound states in GSM, it leads to poor or no convergence at all for resonant states. This is due to the presence of numerous scattering states lying below a given resonant state in GSM, which would have to converge before the desired resonant state can converge in the Lanczos method. Consequently, we turn to a diagonalization method that can directly target the interior eigenvalues, i.e.~the JD method. Indeed, JD only involves sparse matrix times vector operations like the Lanczos method, but it also includes an approximated shift-and-invert method which allows it to directly target interior eigenvalue and eigenvector pairs.
\subsection{Complex symmetric character of the GSM Hamiltonian}
As GSM Hamiltonian matrices are complex symmetric,
a standard concern in this case is the numerical breakdown of the
JD method \cite{complex_symmetric_householder}. Indeed, the Berggren metric is not the Hermitian metric, but the analytic continuation of the real symmetric scalar product. Consequently, the Berggren norm of a GSM vector can in principle be equal to zero. Such a vector cannot be normalized and the
JD iterations then fail. However, for this to occur, one would need matrix elements whose imaginary parts are of the same order of magnitude as their real parts, on the one hand, and rather large off-diagonal matrix elements, on the other hand. As neither of these conditions is met in practice, breakdown does not occur. Moreover, as the number of basis vectors is typically a few tens or hundreds, the additional memory storage and diagonalization time is negligible in our case. Consequently, the use of complex symmetric matrices does not lead to any convergence issues in GSM.
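The difference between the Hermitian metric and the Berggren metric, and the zero-norm vectors that could in principle cause breakdown, can be illustrated with toy two-component vectors (the numerical values are arbitrary):

```python
import numpy as np

def berggren_product(u, v):
    """Berggren (c-)product: analytic continuation of the real scalar
    product -- components are NOT complex conjugated."""
    return np.sum(u * v)

u = np.array([1.0 + 0.5j, 0.2 - 0.1j])
hermitian_norm2 = np.vdot(u, u).real     # standard Hermitian norm squared
berggren_norm2 = berggren_product(u, u)  # complex number in general

# A vector with exactly zero Berggren norm, which could not be normalized:
w = np.array([1.0, 1.0j])
```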
Let us note that the Lanczos method can be directly applied to the complex symmetric case if one replaces the Hermitian metric by the Berggren metric \cite{Berggren}.
However, the Lanczos method performs poorly compared to the JD method: it requires hundreds of iterations to calculate a loosely bound nuclear eigenstate, and does not even converge for resonant many-body eigenstates unless a full diagonalization is utilized. Nevertheless, the JD method requires an initial set of eigenpairs to start its iterations, and the Lanczos method can determine the GSM eigenpairs at the pole approximation level. We note that one can generally perform a full diagonalization of the Hamiltonian matrix at the pole approximation level instead.
\subsection{Preconditioning} \label{Hprec}
Even though the use of an approximated shift-and-invert scheme ensures convergence of the JD method to the desired interior eigenvalues, this convergence can be very slow. Hence, a crucial need in GSM is to find a good preconditioner $H_{prec}$, which transforms the eigenvalue problem into a form that is more favorable for a numerical solver, essentially accelerating convergence. The simple diagonal approximation for $H_{prec}$ is not sufficient here, as off-diagonal matrix elements are large in the pole space. However, $H$ is diagonally dominant in the scattering space. Consequently, to build an effective preconditioner, one can take $H_{prec}$ equal to $H$ itself in the pole space and equal to the diagonal of $H$ in the scattering space (the diagonal matrix elements $\braket{SD | H | SD}$ involving the Slater determinants of a fixed configuration are in fact replaced by their average in order to ensure that $[H_{prec},\vec{J}] = 0$). Using $H_{prec}$ as a preconditioner in JD requires computing its inverse. Since the dimension of the pole space is very small (a few hundred at most), and the rest of $H_{prec}$ is only a diagonal matrix, computing the inverse of $H_{prec}$ is inexpensive. Consequently, the chosen preconditioner has a minimal computational overhead, and by providing a reasonable approximation to $H$, it facilitates quick convergence. As the coupling to the continuum is small, with typically 70-80\% of the GSM eigenpairs residing in the pole space, the preconditioned JD method converges in typically 30 iterations per eigenpair; \emph{e.g.}, one needs 30 JD iterations to calculate the ground state only, and 60 iterations to calculate the ground state plus the first excited state.
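A sketch of this block preconditioner on a toy complex symmetric matrix follows; the dimensions are illustrative, and the configuration-averaging of diagonal elements mentioned above is omitted for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pole, n_scatt = 4, 12
n = n_pole + n_scatt

# Toy complex symmetric (non-Hermitian) Hamiltonian: A + A^T is symmetric.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = A + A.T

# Preconditioner: exact H in the pole space, diagonal of H elsewhere.
H_prec = np.diag(np.diag(H))
H_prec[:n_pole, :n_pole] = H[:n_pole, :n_pole]

def solve_prec(E, r):
    """Solve (H_prec - E I) x = r cheaply: one small dense block solve
    in the pole space plus a diagonal division in the scattering space."""
    x = np.empty(n, dtype=complex)
    block = H_prec[:n_pole, :n_pole] - E * np.eye(n_pole)
    x[:n_pole] = np.linalg.solve(block, r[:n_pole])
    x[n_pole:] = r[n_pole:] / (np.diag(H_prec)[n_pole:] - E)
    return x
```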
\subsection{Implementation}
In light of the above discussion, the preconditioned JD method as implemented in the GSM code is summarized below (see Ref.\cite{Jacobi_Davidson} for a more detailed description of the JD method):
\begin{itemize}
\item Start from an approximation to the desired GSM eigenpair, $E_i$ and $\ket{\Psi_i}$. The first eigenpair $E_0$ and $\ket{\Psi_0}$ is obtained from the pole approximation (see Sect.\,\ref{SD_section}), where the Lanczos method with full orthogonalization (possible due to the small size of the Hamiltonian) can be used. After the first eigenpair, each set of converged pairs provides an approximation for the next pair.
\item Calculate the residual $\ket{R_i} = H \ket{\Psi_i} - E_i \ket{\Psi_i}$.
\item Update the approximate eigenvector by solving the linear system $(H_{prec} - E_i I) \ket{\Phi_{i+1}} = \ket{R_i}$, where $H_{prec}$ is the preconditioner for $H$ (see Sect.\,\ref{Hprec}).
\item Orthonormalize $\ket{\Phi_{i+1}}$ with respect to all previous JD vectors $\ket{\Phi_j}$, $0 \leq j \leq i$, and project it onto $J$ if necessary (see Sect.\,\ref{J2_projection}).
\item Project $H$ onto the extended basis set $\ket{\Phi_j}$, $0 \leq j \leq i+1$, so that one obtains a very small complex symmetric matrix which is diagonalized exactly using standard methods. This provides a new approximation to the eigenpair $E_{i+1}$ and $\ket{\Psi_{i+1}}$.
$\ket{\Psi_{i+1}}$ is identified in the spectrum of the diagonalized small complex symmetric matrix using the overlap method \cite{JPG}. For this, one looks for the eigenstate whose overlap with the pole approximation (see Sect.\,\ref{SM_generalities}) is maximal.
As pole Slater determinant components are always more important than scattering Slater determinant components (see Sect.\,\ref{SD_section}), this guarantees that the eigenstate obtained with the overlap method corresponds to the sought physical nuclear state.
\item Repeat the above steps until convergence, as indicated by the norm of the residual $\ket{R_i}$.
\end{itemize}
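The steps above can be condensed into the following minimal sketch; it is a simplified single-eigenpair loop on a dense toy matrix, not the GSM implementation (in particular, the JD correction equation is reduced to a plain preconditioner solve, and the overlap method uses a simple conjugated overlap):

```python
import numpy as np

def c_dot(u, v):
    """Complex symmetric (Berggren-like) product: no conjugation."""
    return np.sum(u * v)

def jd_eigenpair(H, solve_prec, psi0, ref, tol=1e-10, maxit=50):
    """Minimal JD-like loop for a complex symmetric H.
    solve_prec(E, r) approximates (H_prec - E I)^{-1} r; `ref` is the
    pole-approximation vector used to select the physical Ritz state."""
    basis = [psi0 / np.sqrt(c_dot(psi0, psi0))]
    for _ in range(maxit):
        V = np.column_stack(basis)
        Hs = V.T @ (H @ V)                       # small projected matrix
        evals, evecs = np.linalg.eig(Hs)
        ritz = V @ evecs
        k = np.argmax(np.abs(ref.conj() @ ritz))  # overlap method
        E, psi = evals[k], ritz[:, k]
        r = H @ psi - E * psi                    # residual
        if np.linalg.norm(r) < tol:
            break
        t = solve_prec(E, r)
        for b in basis:                          # c-orthogonalize the correction
            t = t - c_dot(b, t) * b
        basis.append(t / np.sqrt(c_dot(t, t)))
    return E, psi
```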
We note that this procedure must be performed separately for each Hamiltonian eigenpair. However, one typically needs to calculate fewer than 5 eigenpairs per nucleus, the most common situations being calculation of the ground state only, or the ground state and the first excited state. Hence, it is not prohibitive to use a new JD procedure per eigenpair.
\section{Parallelization of the GSM code}
\label{H_parallelization}
The first version of the GSM code followed a 1D partitioning scheme, where the Hamiltonian $H$ was partitioned along its rows. The $H$ matrix, despite being symmetric, was stored as a full (but still sparse) matrix, and each block of rows was assigned to a different process. MPI and OpenMP versions, as well as the hybrid scheme combining the two, were implemented. This scheme is indeed convenient to implement, is well balanced for the storage of $H$ matrix elements, and is reasonably efficient when the number of processes is small, \emph{e.g.} fewer than 100, as is the case in a small-scale execution. Its main drawback is that to perform the sparse Hamiltonian times vector operation during JD iterations, the entire input vector must be replicated on each node; this clearly generates expensive MPI communications. Consequently, the 1D partitioning approach is not scalable to the thousands of cores which would be needed for accurate calculations of heavy nuclei.
In the current version of GSM, we implemented a 2D partitioning scheme that improves both memory storage and MPI communication costs of the 1D partitioning scheme.
In this scheme, the symmetry of $H$ is exploited and $H$ is divided into $N = n_d(n_d+1)/2$ squares, each of which is assigned to a different process. A simple example of the matrix distribution in the 2D scheme, where $n_d=5$ and $N=15$, is depicted in Fig.(\ref{matrix_2D}). Consequently, each process receives an (almost) equal number of $\ket{SD_i}$ Slater determinants (corresponding to the \emph{rows} of the small squares assigned to it) and $\ket{SD_f}$ Slater determinants (corresponding to the \emph{columns} of the small squares assigned to it), numbers that scale roughly as $1 / \sqrt{N}$. As the phases needed in one node are already stored therein during the construction of the Hamiltonian, each process works independently of the others and there are no MPI communications at this stage.
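One possible block-to-process mapping for this triangular layout is sketched below; the row-by-row numbering convention is an assumption for illustration and may differ from the one used in Fig.(\ref{matrix_2D}):

```python
def owner(i, j, n_d):
    """Process owning block (i, j) of the lower triangle (i >= j),
    numbering blocks row by row -- one possible convention."""
    assert 0 <= j <= i < n_d
    return i * (i + 1) // 2 + j

n_d = 5
N = n_d * (n_d + 1) // 2   # 15 processes, as in the n_d = 5 example
```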
Compared to the 1D scheme, each process accesses a much smaller portion of the input and output vectors during the Hamiltonian times vector operations, and as a result MPI communications create a significantly smaller overhead with increasing problem dimensions. Moreover, as will be discussed in more detail below, in 2D partitioning with a hybrid MPI/OpenMP parallelization scheme, a single (or a small group of) thread(s) takes care of MPI data transfers, while all other threads are dedicated to matrix-vector multiplications. As such, MPI communications can be overlapped with matrix-vector multiplication calculations, providing even greater scalability.
\begin{figure}
\centering
\includegraphics[scale=0.6]{matrix_2D.pdf}
\vspace{-.8in}
\caption{Example of the 2D partitioning scheme for the Hamiltonian matrix. Occupied squares are denoted by numbers and unoccupied squares by dashed lines.}
\label{matrix_2D}
\end{figure}
\subsection{Eigensolver Iterations}
Two main operations in eigensolver iterations are i) sparse matrix times vector operations,
and ii) orthonormalization of the new JD vector with respect to the previous JD vectors.
For the $H$ times vector operation, note that as a result of exploiting the symmetry of the Hamiltonian, each process must perform a regular sparse matrix-vector multiplication (SpMV) with the submatrix it owns, and a second SpMV with the transpose of its submatrix. To perform these operations, one uses row and column communicators, which group processes along each row and column. Hence, each process is part of two groups corresponding to its own row and column, each group having about $n_d/2$ processes in it.
\subsubsection{Distributed SpMV}
Let us now describe the matrix times vector algorithm when one applies the Hamiltonian $H$ to a GSM vector of dimension $d$ within the 2D partitioning scheme:
\begin{itemize}
\item All submatrices (i.e.~occupied squares in Fig.(\ref{matrix_2D})) are processed in parallel by their owner processes.
\item For submatrices on the diagonal, their associated process (i.e.~the process where this submatrix is stored) is the master node of the row and column communication groups. The master node is responsible for distributing the corresponding part of the input GSM vector to all nodes on the same row and column via the row and column communicators. For example, in Fig.(\ref{matrix_2D}), nodes 2, 4 and 15 form a row communication group, while nodes 4, 5 and 6 form a column communication group. For both communication groups, node 4 is the master node.
\item Before any computation is performed, the master nodes broadcast their part of the input GSM vector to their column communication groups.
\item Upon receiving the input vector through the column communicator, each process starts effecting the SpMV for the $H$ submatrix that it owns by utilizing multiple threads. In parallel to the SpMV, one thread takes part in the collective communication of the GSM input vectors, this time through the row communication groups in preparation for the transpose SpMV operation to follow. This way, the SpMV operation and MPI communications are overlapped.
\item The next step is to effect the transpose SpMV operations using the input vectors communicated through the row groups in the preceding step. While the transpose SpMV operations are being effected using multiple threads, this time the communication thread takes part in reduction of the partial outputs of the SpMV operation above to the master nodes through the row communicators, again overlapping SpMV communications with computations.
\item Note that while performing the transpose SpMV, race conditions would occur because input and output indices are exchanged when one leverages the symmetry of $H$. A block of $n_t$ vectors is therefore used for the output, where $n_t$ is the number of threads per node, to avoid race conditions (see Ref.\cite{SM_2D}). These blocks of output vectors are first aggregated locally, and then reduced at the master nodes through column communicators, thereby completing the distributed memory SpMV operation.
\end{itemize}
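The per-process computation (regular SpMV plus transpose SpMV with thread-private output blocks) can be sketched in serial form as follows; dense blocks stand in for the sparse submatrices, the MPI broadcast/reduce steps are omitted, and the OpenMP threads are emulated by a loop over row slices:

```python
import numpy as np

def symmetric_block_spmv(A, x_row, x_col, on_diagonal, n_t=4):
    """Local contribution of one stored block of the lower triangle of H to
    y = H x. Returns (y_row_part, y_col_part); the latter comes from the
    transpose pass and is None for blocks on the diagonal. The n_t
    thread-private partial outputs emulate the race-free scheme of the text."""
    y_row = A @ x_col                       # regular SpMV with the stored block
    if on_diagonal:
        return y_row, None                  # diagonal block: no transpose pass
    partial = np.zeros((n_t, A.shape[1]), dtype=complex)
    for t in range(n_t):                    # each "thread" handles a row slice
        for r in range(t, A.shape[0], n_t):
            partial[t] += A[r] * x_row[r]   # transpose SpMV: y_col += A^T x_row
    y_col = partial.sum(axis=0)             # local aggregation before MPI reduce
    return y_row, y_col
```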
The cost of MPI communications is then that of $4~d/n_d$ complex numbers to be transferred collectively, with two transfers out of four being overlapped with multiplications. Consequently, this scheme is as efficient as in SM.
Moreover, given that the matrices in GSM are relatively denser than in SM (see Sect.\,\ref{Data_storage}), inter-node communications are expected to incur less overhead in GSM than in SM.
\subsubsection{GSM vectors storage and orthogonalisation}
The JD method requires the storage of tens or hundreds of GSM vectors, which amount to about 160~MB each already for a dimension of $10^7$ (GSM vectors are complex). To reduce the memory overhead and computational load imbalance, JD/GSM vectors are distributed among all nodes using a method similar to the 1D hierarchical partitioning scheme, first devised in Ref.\cite{SM_2D}. In this scheme, at the end of the distributed SpMV, each master node separates its portion of the GSM vector output into (almost) equal parts and scatters them to a few processes. Consequently, the additional operations needed on vectors, in particular the reorthogonalization with respect to all previously stored JD vectors, do not pose any problem in our implementation.
\subsection{Parallelization of $\hat{J}^2$ in GSM}
We saw in Sect.\,\ref{J2_projection} that rotational invariance has to be checked through the action of $\hat{J}^2$ or $J^{+}$, and imposed if necessary with the $P_J$ operator of Eq.(\ref{PJ_Lowdin}) (see Eqs.(\ref{J2},\ref{Jpm})).
However, 2D partitioning would be rather inefficient here due to the block structure of the $J^\pm$ matrix. Indeed, $J^\pm$ cannot connect two different configurations, which implies that it is in practice entirely contained in the diagonal squares of its matrix, except for side effects. Consequently, 2D partitioning would imply that the diagonal nodes of the $J^\pm$ matrix would contain virtually all of $J^\pm$, while all the other nodes would be spectators. Thus, we choose to parallelize $J^\pm$ with a hybrid 1D/2D method, where the columns of the $J^\pm$ matrix are distributed over the nodes. GSM vectors are divided into $N$ parts, similarly to the 2D scheme, so that the output GSM vector must be reduced on each node at the end of the calculation. As each node contains a part of the diagonal squares of the $J^\pm$ matrix, the $J^\pm$ matrix memory distribution is well balanced. Symmetry requirements are absent here as $J^\pm$ is not symmetric, since it connects Slater determinants of total angular momentum projection $M$ and $M \pm 1$. As a matter of fact, the main problem here is MPI data transfer. Indeed, the number of $J'$ angular momenta to suppress (see Eq.(\ref{PJ_Lowdin})) is of the order of $50$, so that, naively, one would have two MPI transfers of a full GSM vector per $J^\pm$ application, leading to $200$ MPI transfers of a full GSM vector per $P_J$ application, which is prohibitive. A solution to this problem lies in the block structure of the $J^\pm$ matrix. Indeed, the only MPI transfers to be done are those involving neighboring nodes in GSM vectors, so that their total volume is that of the non-zero components of a configuration shared between two nodes, and not $d$, which is tractable. Moreover, $P_J$ applications occur only a few times at most, as it is rare for the $J$ quantum number not to be numerically conserved after an application of $H$, so that the $P_J$ operator does not slow down the calculation of eigenvectors significantly.
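The action of the Löwdin projector through repeated $\hat{J}^2$ applications can be sketched as follows, with $\hat{J}^2$ supplied as a black-box matrix-vector product; the factorized form assumed here is the standard Löwdin expression and may differ in detail from Eq.(\ref{PJ_Lowdin}):

```python
import numpy as np

def apply_PJ(J2_apply, psi, J, J_others):
    """Loewdin-type projector onto total angular momentum J,
    P_J = prod_{J' != J} (J^2 - J'(J'+1)) / (J(J+1) - J'(J'+1)),
    realized through repeated actions of the J^2 operator (supplied as
    a function, e.g. a distributed matrix-vector product)."""
    target = J * (J + 1)
    for Jp in J_others:
        lam = Jp * (Jp + 1)
        # Each factor annihilates the J' component and rescales the J one.
        psi = (J2_apply(psi) - lam * psi) / (target - lam)
    return psi
```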
\section{Numerical Evaluations}
\label{GSM_MPI_computation_examples}
In order to test the performance of the new version of the GSM code, we consider the $^8$He and $^8$Be nuclei. Their model space consists of a $^4$He core with valence nucleons: four valence neutrons for $^8$He, and two valence protons and two valence neutrons for $^8$Be. The core is mimicked by a Woods-Saxon potential and the interaction used is of the Furutani-Horiuchi-Tamagaki type, which has recently been used for the description of light nuclei in GSM \cite{FHT_Yannen}. The model space comprises partial waves bearing $\ell \leq 3$, where the $s_{1/2}$, $p_{3/2}$, $p_{1/2}$, and $d_{5/2}$ states are given by the Berggren basis with a discretization of 21 points per contour, whereas the $d_{3/2}$, $f_{7/2}$ and $f_{5/2}$ states are of harmonic oscillator type, as in SM, with 6 states taken into account for each of these partial waves. The model space is truncated so that no more than two nucleons can occupy scattering states and harmonic oscillator states. Proton and neutron spaces are treated symmetrically for $^8$Be. This model is used only for computational purposes, so that energies have not been fitted to their experimental values. The model spaces used nevertheless follow the usual requirements demanded for a physical calculation.
The $^8$He and $^8$Be nuclei are complementary for our numerical study. Indeed, nuclei with a large asymmetry between the number of neutrons versus the number of protons are more difficult to treat than those possessing more or less the same number of valence protons and neutrons. This is the typical case for SM, so that the on-the-fly method, based on the recalculation of matrix elements of the Hamiltonian, is usually very effective in this case. Consequently, this study also tests the ability of the code to calculate efficiently Hamiltonian matrix elements involving only neutrons, which often occurs in drip-line nuclei.
The GSM dimensions for the $^8$He and $^8$Be nuclei are respectively $d_{He}=939,033$ and $d_{Be}=3,371,395$, so that calculations remain fast with a relatively small number of nodes while remaining significant. Note that GSM matrices are much denser than in SM, which is due to the drastic truncation used. The number of non-zero matrix elements is about $d_{He}^{1.65}$ and $d_{Be}^{1.59}$, which is about 7 and 4 times larger, respectively, than the typical number of non-zero matrix elements in SM, roughly equal to $d^{3/2}$. Additionally, multiplications of complex numbers are in practice twice as slow as multiplications of real numbers. Hence, even though the aforementioned dimensions would lead to very fast calculations in SM, they take a sufficiently long time in our study.
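The quoted density ratios follow directly from the scaling exponents:

```python
d_He, d_Be = 939_033, 3_371_395

# GSM non-zero counts scale as d**1.65 (8He) and d**1.59 (8Be), against a
# typical SM scaling of d**1.5, so the density ratios are:
ratio_He = d_He ** (1.65 - 1.5)   # close to the "about 7" quoted in the text
ratio_Be = d_Be ** (1.59 - 1.5)   # close to the "about 4" quoted in the text
```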
All parallel calculations are hybrid, as we use the MPI/OpenMP parallelization scheme. The number of nodes is of the form $n_d(n_d+1)/2$ (see Sect.\,\ref{H_parallelization}), and takes its possible values from 15 to 45 MPI ranks for $^8$He, and from 21 to 45 MPI ranks for $^8$Be.
Indeed, it was impossible to store the Hamiltonian matrix of $^8$Be with the full storage method on 15 nodes due to the memory limitations of a single node. 8 OpenMP threads per MPI rank are used.
All shown calculations were performed at the National Energy Research Scientific Computing Center (NERSC) of Lawrence Berkeley National Laboratory in Berkeley, CA.
Code debugging, optimization and testing were partly done using computer clusters available at the Oak Ridge Leadership Computing Facility, as well as MSU's High Performance Computing Center (HPCC).
\begin{figure}[htbp]
\begin{tabular}{cc}
\includegraphics[width=8cm]{8Be_memory_scaling.pdf} & \includegraphics[width=8cm]{8He_memory_scaling.pdf}
\end{tabular}
\caption{Reduction of storage memory of Hamiltonian matrix elements per node for $^8$Be (left panel) and $^8$He (right panel) as a function of the number of nodes (color online). All data have been divided by the value obtained with the smallest number of nodes (21 for $^8$Be and 15 for $^8$He), taken as a reference point. Full storage results are represented with solid lines, with stars for maximal memory stored in a node and crosses for average memory stored in a node. Partial storage results are represented with dashed lines, with squares for maximal memory stored in a node and circles for average memory stored in a node. The strong scaling line is also depicted on the picture as a solid line, but it cannot be discerned from the depiction of average memory stored in a node obtained with the full storage method.}
\label{memory_results}
\end{figure}
Results for memory storage reduction are shown in Fig.(\ref{memory_results}),
where we show the maximum memory space required for the storage of the Hamiltonian by any one node, as well as the average space required across all nodes.
Firstly, it is clear that the average memory distribution scales very well with the number of nodes for both $^8$He and $^8$Be.
As no data of large size are stored besides the Hamiltonian in the full storage method, it is not surprising that an exact scaling is obtained if one averages over all nodes used. However, results issued from the partial storage method slightly depart from perfect scaling. This comes from the necessary additional storage of two-body matrix elements, which provide the Hamiltonian matrix elements from stored array indices and Hamiltonian phases (see Sect.\,\ref{on_the_fly_partial}). However, by considering the maximal memory stored in a node, one can see that scaling performance is very different for $^8$He and $^8$Be. While the maximal memory scales very satisfactorily for $^8$He, where about 90\% of perfect scaling is attained, that of $^8$Be grows unevenly and its scaling efficiency is about 75\%.
The slow increase from 28 to 36 nodes indicates a sizable load imbalance in the memory distribution of the Hamiltonian matrix elements among the utilized nodes.
One can assume that it comes from a less homogeneous distribution of non-zero matrix elements in $^8$Be, generated by the presence of both valence protons and neutrons, as $^8$He only bears valence neutrons. This issue is possibly caused by a few large many-body groups ending up in the same row/column group in our relatively simple distribution scheme. It may be fixed by sorting the many-body groups based on their size before doing a round-robin distribution, or by breaking large groups into smaller ones.
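The suggested fix, sorting groups by size before distribution, amounts to a longest-processing-time heuristic; the sketch below is illustrative and not the scheme actually implemented in GSM:

```python
def distribute_groups(sizes, n_bins):
    """Longest-processing-time heuristic: sort group sizes in descending
    order and always place the next group in the currently lightest bin --
    one way to reduce the load imbalance discussed above."""
    loads = [0] * n_bins
    bins = [[] for _ in range(n_bins)]
    for g, s in sorted(enumerate(sizes), key=lambda p: -p[1]):
        k = min(range(n_bins), key=loads.__getitem__)
        bins[k].append(g)
        loads[k] += s
    return bins, loads
```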
\begin{figure}[htbp]
\begin{tabular}{cc}
\includegraphics[width=8cm]{8Be_time_scaling.pdf} & \includegraphics[width=8cm]{8He_time_scaling.pdf}
\end{tabular}
\caption{Speed-up of the total time taken by a matrix-vector operation for $^8$Be (left panel) and $^8$He (right panel) as a function of the number of nodes (color online). All data have been divided by the value obtained with the smallest number of nodes (21 for $^8$Be and 15 for $^8$He), taken as a reference point. Full storage results are represented by a solid line with crosses, partial storage results by dashed lines with squares, and on-the-fly results by a solid line with stars. The strong scaling line is depicted on the picture as a solid line.}
\label{total_time_results}
\end{figure}
Results for the speed-up of total calculation time are shown in Fig.(\ref{total_time_results}).
Similarly to Fig.(\ref{memory_results}),
total calculation time scales very well for $^8$He, where the storage and on-the-fly methods show typical ratios of 90\% and 75\% with respect to perfect scaling, respectively, while this is not the case for $^8$Be.
While these performances may seem low at first sight for $^8$Be, they follow the uneven pattern of memory distribution among nodes exhibited by the full storage calculation (see the left panel of Fig.(\ref{memory_results})). Indeed, efficiency varies from 60\% to 80\% for the storage methods when the number of nodes goes from 36 to 45. However, the speed-up of the on-the-fly method stagnates, as it reaches only 65\% with 45 nodes, which is significantly smaller than the values obtained with the storage methods.
Nevertheless, absolute calculation times are not excessive with the on-the-fly method compared to the partial and full storage methods,
as they are slower by a factor of about 5 and 10 for $^8$Be, respectively.
Consequently, the on-the-fly method is of high interest when the partial and full storage methods cannot be used due to the impossibility to store the Hamiltonian matrix.
Nevertheless, its poor scaling properties clearly demand further investigation in the future.
\begin{figure}[htbp]
\begin{tabular}{cc}
\includegraphics[width=8cm]{8Be_MPI_time.pdf} & \includegraphics[width=8cm]{8He_MPI_time.pdf}
\end{tabular}
\caption{Ratio of MPI communication time to the full time spent during a matrix-vector operation for $^8$Be (left panel) and $^8$He (right panel) as a function of the number of nodes (color online).
Full storage results are represented by solid lines, with pluses for average MPI time and crosses for maximal MPI time in a node.
Partial storage results are represented by dashed lines, with stars for average MPI time and squares for maximal MPI time in a node.
On-the-fly results are represented by solid lines, with circles for average MPI time and triangles for maximal MPI time in a node.}
\label{MPI_time_results}
\end{figure}
Results for MPI communication times are shown in Fig.(\ref{MPI_time_results}) as the ratio of MPI communication time to the full time spent during a matrix-vector operation.
As the GSM vectors transferred with MPI are the same in the full, partial and on-the-fly methods,
MPI communication times should be identical if scaling with the number of nodes were perfect. However, the situation is very different in practice.
In fact, one obtains a fairly large value for the ratio of average and maximal MPI communication times to total times, of 20\%-50\% for $^8$He and 30\%-70\% for $^8$Be (see Fig.(\ref{MPI_time_results})).
Moreover, obtained values for average and maximal MPI times are comparable for full, partial and on-the-fly methods (see Fig.(\ref{MPI_time_results})).
Consequently, the MPI communication times combine two intertwined effects: the time taken to do an SpMV, and the uneven distribution of Hamiltonian matrix elements among nodes.
Indeed, processes do not finish their calculations at the same time. Consequently, they start communicating with each other at different times, and they experience additional MPI delays due
to load imbalance effects. Therefore, what we report as ``MPI time'' must be understood as load imbalance
plus MPI time. Further investigation is then necessary to understand how to mitigate load imbalance, as this will surely decrease MPI communication times.
\section{Conclusion}
\label{GSM_conclusion}
GSM has been parallelized with the most powerful computing method developed for SM, based on a 2D partitioning of the Hamiltonian matrix. It significantly reduces MPI inter-node communications while allowing one to take advantage of the symmetry of the Hamiltonian. Time-reversal symmetry has also been included in our approach, making calculations twice as fast for even nuclei. As GSM vectors are scattered among all nodes, memory requirements are very small for the vectors entering the JD method and for the reorthogonalization of vectors. Moreover, an effective on-the-fly method has been implemented within the 2D partitioning approach, which has been found to be slower than the initially developed scheme based on full Hamiltonian storage by a factor of about 5-10, which is still reasonable. The 2D partitioning has been tested with the $^8$Be and $^8$He nuclei, bearing medium-size dimensions, whose calculations resemble those effected in the context of GSM. They have shown the efficiency of our method, despite a load imbalance issue which will be addressed as part of our future work.
Consequently, the implementation of the 2D partitioning method has expanded the matrix dimensions that GSM can treat, and it is now possible to deal with dimensions of the order of tens or hundreds of millions. Such calculations will certainly demand the use of very powerful machines, bearing thousands of cores. As the feasibility of these calculations has been demonstrated, it will be possible in the near future to consider very large systems in GSM.
\section{Acknowledgments}
\label{Acknowledgments}
We thank Dr.~K{\'e}vin Fossez and Prof.~Witek Nazarewicz for useful discussions and comments. N.~Michel wishes to thank Prof.~Furong Xu for a CUSTIPEN visit in Peking University,
as well as Dr.~S.M.~Wang, for letting us use his figure of the Berggren completeness relation.
This work was supported by the US Department of Energy, Office of Science, Office of Nuclear Physics under Awards No.~DE-SC0013365 (Michigan State University), No.~DE-SC0008511 (NUCLEI SciDAC-3 Collaboration), No.~DE-SC0018083 (NUCLEI SciDAC-4 Collaboration); the National Natural Science Foundation of China under Grant No.~11435014;
and No.~DE-SC0009971 (CUSTIPEN: China-U.S. Theory Institute for Physics with Exotic Nuclei), and also supported in part by Michigan State University through computational resources provided by the Institute for Cyber-Enabled Research.
This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725,
and of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. \\
This work is licensed under a license Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) (https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en\_us).
\section{Introduction}
In 1917, van Maanen discovered the first white dwarf (WD) that showed Ca II absorption lines in its spectrum: van Maanen 2 \citep{van1917}. Since then, 25\,$\%$ to 50\,$\%$ of WDs have been found with metallic lines (typically Mg, Si, Fe and other rock--forming elements) in their ultraviolet--optical spectra \citep{koester2014,harrison2018,wilson2019}. Metallic lines are not expected to be present in the spectra of cool WDs. The gravitational settling time, over which these elements sink out of the observable atmosphere (a few days to years for DA WDs\footnote{WDs are classified according to the absorption lines present in their spectra, as DA, which are hydrogen dominated, and DB, helium dominated.} and from a few years to millions of years in non--DA WDs; \citealt{fontaine1979,wyatt2014}), is orders of magnitude shorter than the cooling time for these WDs ($t_\mathrm{cool}>100$ Myr for WDs with $T_\mathrm{eff}<$20000 K, \citealt{schreiber2003}). Thus, it was proposed that those cool metal--polluted WDs should accrete material from the exterior \citep{alcock86,farihi2009,farihi2010,koester2014}.
In the past, the processes invoked to explain the atmospheric pollution of WDs came in two flavours: either accretion from interstellar matter, or accretion of asteroidal material. In the interstellar matter scenario \citep{dupuis1992,dupuis1993}, as the WD travels through the gravitational potential of the Galaxy, it passes through denser cloud regions and accretes metallic elements into its atmosphere. Nowadays, the commonly--accepted mechanism involves the presence of rocky bodies that originally orbit the WD, are then dynamically delivered to the proximity of the WD, and are finally subject to tidal forces that destroy them, allowing the accretion of material into the WD atmosphere. Observational evidence that supports this mechanism includes: i) the near--infrared excesses that imply the presence of a dust disc located at a few solar radii from the WD surface \citep[e.g.][]{kilic2007}; ii) the double--peaked lines arising from keplerian rotation of gaseous material \citep[vaporised rock;][]{gansicke2006,melis2012,wilson2014,guo2015} observed in a few WDs; and iii) the fact that the chemical abundances found in these WDs resemble rocky material with bulk Earth composition \citep{jura2014,xu2014,harrison2018}. All this evidence is further confirmed by the direct detection of asteroidal material in the form of variable transits around the star WD~1145+017, explained as the disintegration of planetesimals that have reached the WD's Roche limit \citep{vanderburg2015}, in addition to the recently--discovered transit of ZTF J013906.17+524536.89 \citep{vanderbosh2019}. The planetesimal found orbiting the WD SDSS J122859.93+104032.9 with a 123.4--minute period \citep{manser2019} and the putative evaporating planet proposed to explain the accretion onto the WD J091405.30+191412.25 \citep{gansicke2019} are further proofs of the existence of asteroidal and planetary bodies close to WDs.
WDs represent the final stage in the lives of stars with masses between 1 and 8 $\mathrm{\,M}_\odot$. Stellar evolution (the large increase in stellar radius as a giant) and tidal forces should prevent the survival of primordial planetary material out to a few\,au from the WD surface \citep[see e.g.][]{villaver2009,mustill2012,villaver2014}. Material therefore needs to be delivered close to the WD, most likely by the scattering of small bodies from larger orbits when planetary systems are destabilised following the evolution of the star. Pioneering work on the problem was done by \cite{duncan1998}, who showed that the planets of the Solar System would remain stable when the Sun becomes a WD, and by \citet{debes2002}, who explored the dynamical evolution of two-- and three--planet systems of identical mass on circular orbits before and after adiabatic mass loss. The fundamental idea here is that after the star loses mass at the AGB tip (around 75\,\% for a star of initially 3 $\mathrm{\,M}_\odot$), the planet:star mass ratio increases by a factor of a few compared to the original main--sequence (MS) configuration. Therefore, asteroids, planets or whole systems that survived the star's MS lifetime can be destabilised once the star becomes a WD.
The work of \cite{debes2002} has recently been extended to a broader range of system architectures. \citet{veras2013} and \citet{veras2013b} studied the stability of two--planet systems over the range of stellar masses that evolve from the MS to the WD (1 -- 8 $\mathrm{\,M}_\odot$), with Jupiter-- and Earth--mass planets at five eccentricity values (0, 0.1, 0.2, 0.3, 0.5).
The interaction of one planet with a belt of particles was explored by \citet{bonsor2011}, who concluded that a single planet does not seem to be enough to deliver material efficiently onto WDs unless a very massive belt is invoked. \cite{frewen2014} found single small planets on eccentric orbits to be more efficient, suggesting a successful mechanism to explain the pollution of WDs; the question then becomes one of the origin of the planetary eccentricity. \cite{mustill2014} studied systems of three Jupiter--mass planets, finding that the percentage of instabilities in the WD phase was insufficient to explain the observed pollution for host stars with masses between 3 and 8 $\mathrm{\,M}_\odot$. Furthermore, the planet mass has an effect on the instability outcome: \citet{veras2016b} showed (using Jupiter, Saturn, Uranus and Neptune masses and circular orbits) that giant planets usually eject each other, while smaller planets preferentially collide with each other. Recently, \citet{mustill2018} carried out simulations of three--planet systems with unequal--mass planets (from super--Earth to super--Jupiter masses), including test particles to mimic planetesimals; they too showed that the lower--mass planets in the simulations deliver material towards the WD more efficiently. Most importantly, with low--mass planets \cite{mustill2018} found both higher rates and a longer duration of delivery, in line with the broad range of cooling ages at which metal pollution is observed.
Building on previous works on the stability of multiple planetary systems using dynamical simulations, the objective of this paper is twofold. On the one hand, we aim to constrain the parameter space in a way that previous studies could not, given the vast number of parameters available if one builds the problem ad hoc: we study the evolution to the WD phase of scaled versions of the MS planetary systems that have actually been detected, instead of using artificial planetary systems built for the problem. On the other hand, we explore a larger parameter space than previous works, since we have configurations of planets with different masses, orbits, multiple eccentricities and different semimajor axis ratios. Exploring this full range of parameters would otherwise be unfeasible were it not restricted by the system set--ups built from the observed configurations.
In this work, we use the orbital parameters of the hundreds of multiple planetary systems with well--determined parameters found around MS stars, and explore their dynamical evolution to the WD phase.
This is the first time dynamical simulations restricted by the observed parameters have been done to study the instabilities that could bring material to the surface of the WD, thus producing the observed pollution. In this paper, we focus on the two--planet systems; systems with three and more planets will be analyzed in a future study. In \S2 we describe how we have built the planetary systems to study, in \S3 we explain the scaling up of the planet mass and radius and the simulation set--up, and in \S4 and \S5 we present the results and discussion of this work. Finally, in \S6 we summarize our conclusions.
\section{Simulations}
\label{setup}
In this work we take a novel approach to setting up the orbits of the planets in the systems we simulate. When studying the planetary systems that might be responsible for WD pollution, we are hindered by our lack of knowledge about wide--orbit planets orbiting intermediate--mass stars that are distant enough ($>$ few \,au) to survive the evolution of their host. Previous works have therefore constructed artificial systems of equal--mass planets \citep{debes2002,veras2013,mustill2014}, used the Solar System as a template \citep{veras2016b}, or constructed systems artificially from a prescribed distribution of planet masses and orbital spacings \citep{mustill2018}. Here we take a different approach: we use the large population of known multi--planet systems on closer orbits around lower--mass stars as templates for wider--orbit systems around intermediate--mass stars, scaling them up to maintain their dynamical properties. We describe this process in Section~\ref{sec:scaling}. In so doing, we are not asserting that wide--orbit planets around WD progenitors \emph{must} look like the better--studied population of close--in multiple--planet systems. Rather, we are constructing an artificial population that is somewhat grounded in reality, rather than prescribing masses and orbital separations as done previously.
To solve the dynamics of the systems, we use the \textsc{Mercury} package \citep{chambers1999} in its modified version
\citep{veras2013b,mustill2018}, which takes into account the change of the stellar mass and radius along the different evolutionary phases. We used the RADAU integrator with a tolerance parameter of $10^{-11}$, as implemented in \citet{mustill2018}. We consider a planet ejected when it reaches an orbit beyond $1\times 10^6$\,au from the central star; planets can also be removed through planet--planet collisions or when they come within the stellar radius.
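The removal conditions above can be summarised in a short sketch. This is purely illustrative bookkeeping (the function name is ours and the checks are in practice performed internally by the modified \textsc{Mercury} code):

```python
R_EJECT = 1.0e6  # au, ejection radius quoted in the text

def removal_outcome(r_helio_au, r_star_au, collided_with_planet=False):
    """Classify why a planet would be removed from the integration.

    r_helio_au : current distance of the planet from the star [au]
    r_star_au  : current stellar radius [au] (grows on the RGB/AGB)
    """
    if collided_with_planet:
        return "planet-planet collision"
    if r_helio_au >= R_EJECT:
        return "ejected"
    if r_helio_au <= r_star_au:
        return "planet-star collision"
    return "retained"
```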
In order to build the architectures of the planetary systems we shall evolve, we have selected all the two--planet systems from the NASA Exoplanet Archive\footnote{https://exoplanetarchive.ipac.caltech.edu/} and the Exoplanet Encyclopaedia\footnote{http://exoplanet.eu/} reported as discovered up to June 2018, although we have updated (as of January 2020) the orbital parameters of some of the systems (see Section \ref{msexp}).
Those catalogues contain 29 multiple planetary systems in which the single host star has evolved beyond the MS, according to the luminosity class as it appears in the {\it SIMBAD} database, the exoplanet catalogues or the discovery paper. We have excluded from the simulations 13 giant stars, four Horizontal Branch stars and two pulsars, as the required treatment would differ from the rest of the simulations presented. Although listed in the catalogues as evolved stars, we have included the 10 subgiants after verifying that they have not yet ascended the giant branch in the HR diagram. One Herbig Ae/Be star has also been excluded from our sample. When the planetary system orbits one of the components of a binary star (39 stellar binary systems were identified) we excluded cataclysmic binaries and eclipsing binaries (14 systems). We also excluded 11 systems in which gravitational effects from a wide binary companion may affect the evolution of the planet orbits \citep[{\it Kepler}--108, HD 142, HD 89744, HD 133131A, HD 65216, HD 190360, HD 20781, XO--2S, HD 11964, HD 87646, GJ 229;][]{moutou2017,otto2017,leggett2002}. We have kept in the simulations the systems orbiting HD 164922, HD 177830, HD 187123, HD 217107 and HD 143761, since \citet{wittrock2017} reported in their Differential Speckle Survey that they do not have a low--mass stellar companion. We have also kept {\it Kepler}--383, {\it Kepler}--397, {\it Kepler}--400, {\it Kepler}--411, {\it Kepler}--449, {\it Kepler}--487, K2--36, HD 169830 and HD 147873 in our sample, since we did not find any evidence in the literature that these are physically related double systems.
The final sample we build for our study consists of 373 stars with two planets each. For those we select from the observations the stellar and planet masses, and all the orbital and planet parameters available.
To build the simulations we have chosen an initial stellar mass of 3 $\mathrm{\,M}_\odot$. The host star masses of the observed systems range between 0.164 and 1.965 $\mathrm{\,M}_\odot$; thus, we have to re--scale the observed systems to a 3 $\mathrm{\,M}_\odot$ MS star while keeping them dynamically analogous in order to evolve them. The choice of 3 $\mathrm{\,M}_\odot$ is motivated by two facts, one observational and one computational. Polluted WDs have been shown to have a mean mass of $\sim$0.7 $\mathrm{\,M}_\odot$ \citep{koester2014}, which corresponds to a progenitor of 3 $\mathrm{\,M}_\odot$ on the MS following any standard initial--final mass relation (e.g. \citealt{kalirai2008}). Computationally, running a large number of dynamical simulations of low--mass stars evolving off the MS becomes unfeasible given the long time the star spends in each evolutionary phase. A 3 $\mathrm{\,M}_\odot$ star lives 377 Myr on the MS and enters the WD phase at 477 Myr [times derived using the SSE code of \citealt{hurley2000}, which considers the isotropic mass loss during the Red Giant Branch (RGB) and Asymptotic Giant Branch (AGB) phases]. Note in contrast that a 1 $\mathrm{\,M}_\odot$ star requires $\sim$10 Gyr to leave the MS. The relatively rapid evolution of a 3 $\mathrm{\,M}_\odot$ star allows us to run a huge number of simulations in reasonable computational times (measured on a PhD timescale). We note that orbital integrations speed up considerably once the star loses mass, as the orbits expand and the central mass is lower. Therefore, it is much quicker to run a system around a $3\mathrm{\,M}_\odot$ star for 10\,Gyr than to run one around a $1$ or $2\mathrm{\,M}_\odot$ star.
We cannot be certain that the observed planet distribution matches that of the scaled $3\mathrm{\,M}_\odot$ stars we have simulated. The population of planets around WDs remains completely unknown: no planet has so far been confidently detected orbiting a WD despite systematic searches in the infrared and by transits \citep{burleigh2002,hogan2009,steele2011,mullally2008,debes2011,faedi2011,fulton14,vansluijs2018}. Planets around A stars cannot be detected using the same techniques as planets around G stars on the MS, and therefore the limited statistics available for comparison among the different samples are subject to strong selection biases. Planet searches beyond the MS can give us some insight into the planet frequency around stars more massive than the Sun, exploiting the fact that RV searches can be attempted once the star leaves the MS and its rotational velocity decreases. Early claims of an increase of planet mass with the mass of the host \citep[i.e.][]{lovis2007,johnson2007} are not supported by the results of more recent surveys, which convincingly argue that this result might originate from limited RV precision and additional noise introduced by stellar p--mode oscillations \citep[see e.g.][]{niedzielski2016}.
On the other hand, planet formation scenarios that investigated the frequency of giant planet formation as a function of stellar mass find that the probability that a given star hosts at least one gas giant increases linearly with stellar mass from $0.4$ to $3\mathrm{\,M}_\odot$ \citep{kennedy2008}, but planet multiplicity cannot be extracted from these models. One could attempt to compare the radius distribution of debris discs around A--type stars to that around G stars, but the radial extent of debris discs with well--resolved observations does not show any obvious trend with stellar spectral type \citep[see e.g.][]{hughes2018}. Note, however, that even if differences were found between the sizes of debris discs as a function of spectral type, it would be very hard to attribute them to planet formation \citep[e.g.][]{mustill2009}, given that the size could equally be related to the location of the ice line \citep{morales2011} or to time effects in the production/destruction of dust \citep{kennedy2010}.
\section{The simulated architecture: Dynamically scaling the observed sample}
\label{sec:scaling}
To preserve the Hill stability properties (see Section~\ref{equa} below) in the simulations with the adopted $3\mathrm{\,M}_\odot$ star, we multiply the mass of each planet by a scale factor defined as $f=3\mathrm{\,M}_\odot/M_*$, where $M_*$ is the observed mass of the host star in the system.
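As a minimal illustration (with hypothetical numbers; the function names are ours), the following sketch shows why this scaling keeps a system dynamically analogous: multiplying the planet masses by $f$ while replacing the host with a $3\mathrm{\,M}_\odot$ star leaves the orbital spacing, measured in mutual Hill radii, unchanged.

```python
def scale_factor(m_star_obs_msun):
    """Mass scale factor f = 3 Msun / M_* applied to each planet."""
    return 3.0 / m_star_obs_msun

def hill_spacing(m1, m2, m_star, a_ratio):
    """Orbital spacing in mutual Hill radii for a planet pair with
    semimajor-axis ratio a_ratio = a2/a1 (masses in solar units)."""
    return 2.0 * (a_ratio - 1.0) / (a_ratio + 1.0) \
        / ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

# Hypothetical system: 0.75 Msun host, planets of 1e-3 and 5e-4 Msun
f = scale_factor(0.75)            # f = 4
before = hill_spacing(1e-3, 5e-4, 0.75, 1.6)
after = hill_spacing(f * 1e-3, f * 5e-4, 3.0, 1.6)
# before == after: the spacing in mutual Hill radii is invariant,
# because (f*(m1+m2)) / (3 * f*M_*) equals (m1+m2) / (3*M_*)
```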
\subsection{Planet Masses and Radii }
The detection method determines which physical parameters of a planetary system are available in the literature. For instance, if a planet has been discovered by the transit method and has no radial velocity (RV) measurement, then its radius is at hand but not its mass. Conversely, for systems detected via RV that are not transiting, we have (minimum) masses and eccentricities, but not radii. Thus, in order to complete the parameter space needed for the simulations, we need to use a planet mass--radius relation. We explored different prescriptions available in the literature and finally adopted that of \citet{chen2017}, in which the planet mass--radius relation is defined by a probabilistic model of power laws in different mass regimes: for Earth--like worlds the function goes as $R\sim M^{0.28}$ (where $R$ and $M$ are the radius and mass of the planetary body, respectively), for Neptunian worlds $R\sim M^{0.59}$, for Jovian exoplanets $R\sim M^{-0.04}$, and for stellar bodies $R\sim M^{0.88}$. We used the \textsc{Python} package \textsc{Forecaster} by the same authors and assumed a standard deviation of 5\,$\%$ in the input mass or radius. Since \textsc{Forecaster} draws from probability distributions, we adopted the median of 100 runs.
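A deterministic sketch of such a broken power--law relation is given below. The slopes are those quoted above, but the transition masses ($\sim$2\,$\mathrm{M}_\oplus$, $\sim$0.41\,$\mathrm{M_J}$, $\sim$0.08\,$\mathrm{M}_\odot$) are rounded values and the normalisation is anchored at $R(1\,\mathrm{M}_\oplus)=1\,\mathrm{R}_\oplus$ for illustration only; the actual \textsc{Forecaster} package samples the full probabilistic model of \citet{chen2017}.

```python
M_JUP = 317.8      # Earth masses per Jupiter mass
M_SUN = 333000.0   # Earth masses per solar mass

# Approximate regime boundaries (Earth masses) and power-law slopes
BREAKS = [2.0, 0.41 * M_JUP, 0.08 * M_SUN]
SLOPES = [0.28, 0.59, -0.04, 0.88]   # Terran, Neptunian, Jovian, stellar

def forecast_radius(mass_mearth):
    """Deterministic broken power-law sketch; returns R in Earth radii.

    Anchored at R(1 M_Earth) = 1 R_Earth, with continuity enforced
    at each regime boundary.
    """
    r, m_prev = 1.0, 1.0
    for m_break, slope in zip(BREAKS, SLOPES):
        if mass_mearth <= m_break:
            return r * (mass_mearth / m_prev) ** slope
        r *= (m_break / m_prev) ** slope   # carry normalisation forward
        m_prev = m_break
    return r * (mass_mearth / m_prev) ** SLOPES[-1]
```

Note the sign flip of the Jovian slope: in this regime more massive planets are slightly *smaller*, reflecting the degeneracy-pressure-dominated interiors of gas giants.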
We note that in a few cases (for some of the massive brown dwarfs which are listed in the exoplanet catalogues) the scaled--up mass of the ``planet'' is $>0.08\mathrm{\,M}_\odot$, making them large enough to burn hydrogen and become stars. We opted to keep these systems in the simulations in order to homogeneously treat our input catalogue.
\subsection{Initial Orbits}
\label{equa}
For two--planet systems, there exists an analytical criterion for whether the orbits of the planets may intersect and the planets collide: the \emph{Hill stability limit} \citep{gladman1993}. In the following, we use the two--body approximation for the energy, as given by \citet[and see also \citealt{veras2013b}]{donnison2011}.
In multiple--planet systems, instability can be classified in two ways: Hill and Lagrange instability. In Hill--unstable systems, the planets are close enough for their orbits to cross and for the planets to collide with each other. In Lagrange--unstable systems, at least one planet is lost from the system, either via collision with the star or via ejection. The Hill stability limit provides an analytic constraint on the conditions a planetary system must satisfy for its planets to collide or cross orbits; the Lagrange stability limit, however, has no analytic formulation and can only be found through dynamical simulations. We are interested in exploring Hill-- and Lagrange--unstable systems to explain the atmospheric metal pollution observed in WDs.
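For reference, the simplified circular--orbit form of the Hill criterion \citep{gladman1993} requires the two orbits to be separated by more than $\sim 2\sqrt{3}$ mutual Hill radii. The sketch below implements only this simplified circular case; the simulations themselves use the full energy/angular--momentum formulation of \citet{donnison2011}.

```python
import math

def mutual_hill_radius(m1, m2, a1, a2, m_star):
    """Mutual Hill radius of a planet pair (masses in the same units,
    semimajor axes in au)."""
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

def hill_stable_circular(m1, m2, a1, a2, m_star):
    """Gladman (1993) criterion for circular, coplanar orbits:
    stable if the separation exceeds ~2*sqrt(3) mutual Hill radii."""
    delta = (a2 - a1) / mutual_hill_radius(m1, m2, a1, a2, m_star)
    return delta > 2.0 * math.sqrt(3.0)
```

For example, two Jupiter--mass planets ($\sim 9.5\times10^{-4}\,\mathrm{M}_\odot$) at 10 and 15\,au around a 3\,$\mathrm{M}_\odot$ star satisfy this criterion, while the same pair at 10 and 11\,au does not.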
Since \textsc{Mercury} does not take into account the stellar tidal forces that may act directly on the planets, we must ensure that these forces are negligible during the MS, RGB and AGB phases. For this reason we place the innermost planet at a semimajor axis $a_0 = 10$\,au from the star, since Jovian and terrestrial planets beyond this distance survive tidal engulfment around a 3 $\mathrm{\,M}_\odot$ star \citep{villaver2009,mustill2012}. Setting this limit at 10\,au also allows an easier comparison with previous works.
Because the Hill stability limit depends directly on the semimajor axis ratio between the planets, we keep the ratios of the observed planetary systems in our simulations. Thus, the second planet is placed at a distance $(a_2/a_1)\,a_0$ (where $a_2$ and $a_1$ are the observed semimajor axes of the outer and inner planets, respectively). For those planets whose semimajor axis was not available in the catalogues, we calculated it from the orbital period and the stellar mass (i.e. HD~114386, K2--141, {\it Kepler}--462 and {\it Kepler}--88).
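The placement of the planets, together with the fall--back computation of a missing semimajor axis from Kepler's third law, can be sketched as follows (the function names are ours):

```python
A0 = 10.0  # au, initial semimajor axis of the inner planet

def semimajor_axis_from_period(period_days, m_star_msun):
    """Kepler's third law: a [au] = (M_* P^2)^(1/3) with P in years
    and M_* in solar masses."""
    period_yr = period_days / 365.25
    return (m_star_msun * period_yr ** 2) ** (1.0 / 3.0)

def scaled_semimajor_axes(a1_obs, a2_obs):
    """Place planet-1 at 10 au and planet-2 so that the observed
    semimajor-axis ratio (and hence the Hill spacing) is preserved."""
    return A0, (a2_obs / a1_obs) * A0
```

For instance, an observed pair at 0.05 and 0.1\,au maps to 10 and 20\,au in the simulations, preserving the ratio $a_2/a_1 = 2$.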
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=9.0cm, height=7.5cm]{mass_2pl.eps}
\includegraphics[width=9.0cm, height=7.5cm]{rad_rat_2pl.eps}
\end{tabular}
\caption{Scaled parameters of the planets considered in this work. Left: histogram of the scaled mass, showing in blue the transiting planets with masses calculated using the mass--radius relation of \citet{chen2017}, and in red the RV systems with parameters reported in the catalogues. Black inverted triangles depict the masses used in the \citet{veras2013} simulations, black circles the planet masses used in \citet{veras2013b}, and the black horizontal line the planet mass range simulated in \citet{debes2002}. The right panel shows the distribution of the scaled planet radius ratio (outer/inner) of the transiting and RV systems, with the same colours as the left panel.}
\label{4fig}
\end{center}
\end{figure*}
In Figure \ref{4fig} we present the distributions in mass (left) and radius ratio (right) of the planets used in the simulations. The left panel shows the histogram of the scaled planet mass distribution: the transiting systems show a single peak at around the mass of Neptune, while the RV systems (which include some {\it Kepler} ones) have a bimodal distribution, with one peak close to the mass of Neptune and the other in the Jovian mass regime.
In the top part of Fig. \ref{4fig} we have marked the planet masses used in previous simulations: the triangles represent the masses used in \citet{veras2013}, the circles those of \citet{veras2013b}, and the horizontal line the planet mass range covered by \citet{debes2002} in their two--planet simulations. This figure clearly illustrates that we are exploring a new and extended parameter space compared to previous works, especially for planet masses in between the ad hoc masses chosen in previous studies.
The right panel of Figure \ref{4fig} shows the distribution of the scaled radius ratios (outer/inner) of the systems we are simulating, where a clear peak around 1 is present for both samples: an indication that planets in the same system have very similar sizes. Essentially the same result was found by \citet{weiss2018} in their analysis of the distribution of size ratios for adjacent planet pairs within the same system observed by \emph{Kepler}.
For the calculation of the dynamics of the systems, \textsc{Mercury} requires, in addition to the planet mass, radius and semimajor axis, the orbital eccentricity and inclination, the argument of the perihelion, the mean anomaly and the longitude of the ascending node of each planet. The latter three angular parameters, since they are not available from the observations in most cases, are drawn randomly from a uniform distribution of angles between 0 and 360$^\circ$. The eccentricities are taken directly from the catalogues or, when unavailable, drawn from a Rayleigh distribution with parameter $\sigma=0.02$ \citep{pu2015}, consistent with the results of \citet{vaneylen2015} and \cite{moorhead2011}.
Regarding the inclination of the planet orbits, we have also randomly selected them from a Rayleigh distribution with $\sigma = 1.12^\circ$ \citep{xie2016}.
The choice of using small inclination angles is justified in this work since \citet{veras2018} concluded that near co--planar angles are adequate for global stability studies.
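The random draws described above can be sketched as follows (the seed and function name are ours; when an observed eccentricity exists it is passed through unchanged):

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for the sketch

def draw_orbital_elements(ecc_obs=None):
    """Draw the orbital elements not constrained by observations.

    Angles (argument of pericentre, longitude of ascending node,
    mean anomaly) are uniform in [0, 360) deg; unknown eccentricities
    follow a Rayleigh distribution with sigma = 0.02 (Pu & Wu 2015)
    and inclinations a Rayleigh distribution with sigma = 1.12 deg
    (Xie et al. 2016).
    """
    return {
        "omega": rng.uniform(0.0, 360.0),   # argument of pericentre
        "Omega": rng.uniform(0.0, 360.0),   # longitude of ascending node
        "M": rng.uniform(0.0, 360.0),       # mean anomaly
        "i": rng.rayleigh(scale=1.12),      # inclination [deg]
        "e": ecc_obs if ecc_obs is not None else rng.rayleigh(scale=0.02),
    }
```

Calling this function once per planet and per run reproduces the set--up of the 10 realisations per system described below, with measured eccentricities held fixed across runs.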
In Figure \ref{esemi}, we show the histograms of the initial eccentricities of our two--planet simulated sample, with the same colours as in previous figures. Note that the numbers in this histogram are for simulated systems (as we explain later, we perform 10 simulations per observed planetary system configuration). The two samples peak at different eccentricities: the blue histogram simply reflects the Rayleigh distribution used for those planets without measured eccentricities (mostly transiting systems), while the red histogram mimics the observed distribution from RV measurements. At the top of the figure, the triangles indicate the eccentricities used in \citet[$e=0, 0.1, 0.2, 0.3$]{veras2013}, and the circles show the eccentricities used in the simulations of \citet[$e=0.1, 0.5$]{veras2013b}. Note that we have not simulated systems at zero eccentricity, because a reported zero often reflects a lack of information, and it is more realistic to draw a small eccentricity from a Rayleigh distribution with $\sigma=0.02$. Note as well that our eccentricities cover a parameter space not studied in previous works.
\begin{figure}
\begin{center}
\includegraphics[width=9cm, height=7.0cm]{ecc_2pl.eps}
\caption{Histogram distribution of eccentricities of our simulated planets from transit detections (blue) and RV (red). The triangles and circles in the top of the panel are the eccentricities used in \citet{veras2013} and \citet{veras2013b} respectively.}
\label{esemi}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=14cm, height=12.5cm]{mass_semi_rat_2pl.eps}
\caption{The center panel is a scatter plot of the semimajor axis ratio $a_2/a_1$
vs. the planet mass ratio $m_{2}/m_{1}$ of our simulated planetary systems. The black vertical line shows the equal mass planet ratio and its length represents the semimajor axis ratio range explored in \protect\cite{veras2013,veras2013b} for a 3$\mathrm{\,M}_\odot$ host star and for Jupiter--mass planets. In the top panel histogram we display the mass ratio distribution of our two--planet sample, where the black dashed line marks the location of two--planet systems with equal mass planets. The histogram in the right panel is the distribution of the semimajor axis ratio of our sample. The dashed vertical line corresponds to the semimajor axis ratio at which \protect\citet{veras2013b} found the Lagrange stability limit for their two--planet simulations, using equal mass Jupiter planets and eccentricity 0.1.}
\label{mrati}
\end{center}
\end{figure*}
In Figure \ref{mrati} we display a scatter plot of the semimajor axis ratio versus the planet mass ratio. The black vertical line indicates the semimajor axis ratio range explored in \citet{veras2013,veras2013b}, where two--planet systems with equal planet masses were simulated. The upper panel shows the histogram of the planet mass ratio distribution, and to the right of the scatter plot we show the histogram of the semimajor axis ratio. From this plot we see that the parameter space explored in this work, both in semimajor axis ratio and in planet mass ratio, is broader than in previous works, and that most systems have an outer planet (planet--2) of comparable, but slightly larger, mass than the inner planet (planet--1).
To finalize the parameter set--up, we run 10 simulations per system configuration, randomly redrawing for each run the orbital angles and, from the Rayleigh distributions mentioned above, the inclinations and (when unknown) the eccentricities of the planet orbits. If the eccentricity is known, it is kept constant in the 10 simulations of the system. In Figure \ref{initea} we show the initial semimajor axes and eccentricities of the scaled two--planet systems simulated in this work. Orange and green plus symbols refer to planet--1 and planet--2, respectively. By construction, all planets--1 are located at 10\,au, covering the eccentricity range from 0 to 0.8. Planets--2, on the other hand, are widely dispersed, and some of them have high eccentricities and large semimajor axes. We point out that these planets with high eccentricity and large semimajor axis may belong to systems that likely already underwent an instability well before the onset of our simulations.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=9cm, height=7.5cm]{a_e_2pl.eps} \\
\end{tabular}
\caption{Initial eccentricities as a function of initial semimajor axis for the two--planet systems simulated in this work. Orange and green plus symbols refer to planet--1 and planet--2, respectively. Note that both planets cover a wide range in eccentricities. Our set--up always places planet--1 at an initial semimajor axis of 10\,au, while planet--2 has the semimajor axis corresponding to the semimajor axis ratio of each observed system.}
\label{initea}
\end{center}
\end{figure}
\section{Results}
We performed 3730 simulations of two--planet systems in which we
evolved a $3\mathrm{\,M}_\odot$ star from the MS to the WD. Before discussing the results in detail, we first show some examples of the types of dynamical evolution seen in our simulations (Figure~\ref{semiaxis}). Starting from the top left, we have a system in which planet--1 is lost when it collides with the WD; in the top right, a system that experiences orbit crossing and scattering followed by the ejection of one planet; in the bottom left, a system that underwent orbit crossing and a final planet--planet collision; and in the bottom right, a fully stable system.
The upper left panel of Figure \ref{semiaxis} shows one of the ten simulations of the scaled system HD~113538. Planet--1 collides with the star at 6.2 Gyr, while planet--2 remains on a stable orbit at $a=300$\,au after the instability. Since this instability happens when the star is well into the WD domain, this is a system with a clear mechanism capable of polluting the host star's atmosphere, be it from the accreted planet itself or from asteroids scattered after the planets' eccentricities were excited. The upper right panel of Figure \ref{semiaxis} shows the evolution of the scaled system GJ~180, with orbital scattering capable of producing pollution were the ejected planet--2 to traverse a planetesimal belt during the scattering. In this system, the orbital scattering starts when the planets' orbits cross in the first few Myr of the WD phase, and the ejection of planet--2 happens at 7.8 Gyr. The bottom left panel of Figure \ref{semiaxis} presents a collision between the planets in a simulation of the scaled system {\it Kepler}--200. The planet--planet collision happens at $t=8.6$ Gyr, when the star has long been a WD, again capable of sending material into the WD atmosphere. This simulation also shows some orbit crossing of the planets, followed by scattering in their semimajor axes until the planets collide. Note that in these examples the planets survive the MS phase and become unstable in the WD phase. Finally, the lower right panel shows a fully stable evolution of the scaled system {\it Kepler}--146. In this case the planets are widely enough separated to be both Hill-- and Lagrange--stable during the complete simulated time of 10 Gyr, ending at 40 and 73\,au, respectively, from the central star. The input parameters used in the simulations shown in Figure \ref{semiaxis} correspond to simulation numbers 293, 105, 1831 and 1465 listed in the machine--readable table, a fraction of which is displayed in Table \ref{mms}.
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=18cm, height=14cm]{examp_2plan.eps}
\end{tabular}
\caption{Evolution of the semimajor axis, in \,au, during the 10 Gyr of the simulations. The orange solid line shows the evolution of planet--1 and the green solid line that of planet--2. The time when the star becomes a WD is shown as a red vertical dashed line. The four panels are representative examples of the outcomes of the simulations (\#293, 105, 1831, and 1465, left to right and top to bottom in Table \ref{mms}): Lagrange instability (planet--star collision) in the top left panel; Hill and Lagrange instabilities (orbit crossing and ejection of a planet) in the upper right panel; a Hill--unstable example (orbit crossing and collision between the planets) in the bottom left panel; and a completely Lagrange-- and Hill--stable system in the bottom right panel.}
\label{semiaxis}
\end{center}
\end{figure*}
\begin{table*}
\begin{center}
\small\addtolength{\tabcolsep}{-2pt}
\caption{Fraction of a machine--readable table with the input parameters of the scaled systems of the 3730 simulations performed in this work. Column 1: simulation number; 2: name of the planetary system; 3: scaled mass $m$; 4: planet density $\rho$; 5: scaled semimajor axis $a$; 6: eccentricity $e$; 7: orbital inclination $i$; 8: argument of the pericentre $\omega$; 9: longitude of the ascending node $\Omega$; 10: mean anomaly $M$. Columns 3--10 refer to planet--1 (inner); columns 11--18 list the same quantities for planet--2 (outer).}
\label{mms}
\begin{tabular}{c c c c c c c c c c c c c c c c c c }
\noalign{\smallskip} \hline \noalign{\smallskip}
$\#$ & name & $m_1$ & $\rho_1$ & $a_1$ & $e_1$ & $i_1$ & {$\omega_1$} & $\Omega_1$ & $M_1$ & $m_2$ & $\rho_2$ & $a_2$ & $e_2$ & $i_2$ & {$\omega_2$} & $\Omega_2$ & $M_2$ \\
& & $[\mathrm{M_J}]$ & $[g/cm^3]$ & $[\,au]$ & & $[^o]$ & $[^o]$ & $[^o]$ & $[^o]$ & $[\mathrm{M_J}]$ & $[g/cm^3]$ & $[\,au]$ & & $[^o]$ & $[^o]$ & $[^o]$ & $[^o]$ \\
\noalign{\smallskip} \hline \noalign{\smallskip}
1 & 24Sex & 3.88 & 3.17 & 10.0 & 0.09 & 0.29 & 14.42 & 184.86 & 287.96 & 1.68 & 1.22 & 15.60 & 0.29 & 1.24 & 217.37 & 353.30 & 302.54 \\
2 & 24Sex & 3.88 & 3.17 & 10.0 & 0.09 & 0.57 & 188.29 & 37.46 & 204.71 & 1.68 & 1.22 & 15.60 & 0.29 & 1.43 & 357.32 & 318.77 & 317.99 \\
3 & 24Sex & 3.88 & 3.17 & 10.0 & 0.09 & 0.79 & 302.46 & 200.24 & 138.43 & 1.68 & 1.22 & 15.60 & 0.29 & 1.45 & 241.91 & 236.67 & 69.66 \\
4 & 24Sex & 3.88 & 3.17 & 10.0 & 0.09 & 1.25 & 45.52 & 52.84 & 264.22 & 1.68 & 1.22 & 15.60 & 0.29 & 3.07 & 76.19 & 254.02 & 51.88 \\
5 & 24Sex & 3.88 & 3.17 & 10.0 & 0.09 & 1.37 & 120.95 & 252.23 & 297.85 & 1.68 & 1.22 & 15.60 & 0.29 & 1.36 & 196.07 & 151.61 & 65.64 \\
... & ... & ... & ... & .... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Unstable Systems on the MS}
\label{msexp}
A number of our two--planet systems (33) lost a planet on the MS before any stellar mass loss, frequently on a time--scale of just a few Myr. Because the template systems are observed at ages of typically a few Gyr, this means that the initial configurations for these systems were unphysical. We therefore investigated these more closely in order to identify any problems with the set--up.
23 of these systems lie close to strong first-- or second--order mean motion commensurabilities, most commonly the 2:1. In these cases a more careful set--up is required to place the system in a stable resonant configuration. We find that 185 simulations drawn from these 23 systems undergo dynamical instabilities on the MS: in 159 of them a planet is lost by ejection, planet--planet collision or planet--star collision, and 26 experience orbit crossing. We defer the treatment of resonant two--planet systems experiencing stellar mass loss to a future work, but note that the stability properties of resonances are indeed known to change as the star loses mass and resonances broaden \citep{bonsor2011,mustillwyatt2012,debes2012,caiazzo2017}.
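Proximity to a commensurability can be read off directly from Kepler's third law. As a minimal sketch (the helper name is ours; the input values are the 24~Sex entries of Table \ref{mms}):

```python
# Period ratio of an outer/inner planet pair from their semimajor axes
# (Kepler's third law, same host star): P2/P1 = (a2/a1)^(3/2).
def period_ratio(a_inner_au, a_outer_au):
    return (a_outer_au / a_inner_au) ** 1.5

# 24 Sex scaled pair from Table 1: a1 = 10.0 au, a2 = 15.60 au
print(f"P2/P1 = {period_ratio(10.0, 15.6):.3f}")  # ~1.95, just inside the 2:1
```

A period ratio within a few per cent of a small--integer ratio such as 2:1 is what flags a system for the more careful resonant set--up discussed above.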
Besides the resonant systems, we found ten other systems experiencing instability on the MS:
\begin{itemize}
\item \emph{HIP~57050:} This had a mass incorrectly listed as $68\mathrm{\,M_J}$ in the exoplanet.eu catalogue as of June 2018. This has since been corrected to $68\mathrm{\,M}_\oplus$ and we re--ran the simulations with the correct mass.
\item \emph{HD~183263:} The orbital solution for planet--2 proposed by \citet{wright2009} was based on an RV curve that did not cover a complete period, and is very close to being unstable. Nevertheless, \citet{feng2015} re--analyzed the RV curve, updating the orbital parameters of planet--2 with a higher mass and longer period. We re--ran the simulations with the updated orbital solution.
\item \emph{HD~202206:} This was listed as a two--planet system, but the innermost companion is in fact an M--dwarf. We have therefore removed it from consideration along with other binary stars.
\item \emph{HD~67087:} Planet--2 has a high but poorly--constrained eccentricity ($0.76^{+0.17}_{-0.24}$) owing to the lack of observations at pericentre passage \citep{harakawa2015}. \cite{petrovich2015} identified it as unstable according to his stability criterion. Lacking a good orbital solution, we removed this system from further consideration.
\item \emph{HD~106315:} \cite{crossfield2017} identified an RV trend indicating the presence of a third, outer, planet, and so we removed this system as it is probably not a two--planet system.
\item \emph{HD~30177:} exoplanet.eu reported the unstable best fit from \cite{wittenmyer2017}; however, these authors identified a second, more stable, solution family with planet--2 on a wider orbit. We re--ran this system with the more stable configuration.
\item \emph{Kepler--145:} Eccentricities are from photometry only and have large errors \citep{vaneylen2015}. We re--ran the system using instead the Rayleigh distribution of eccentricity which we used when the true eccentricity was unknown.
\item \emph{Kepler--210:} \cite{ioannidis2014} provided a two--planet TTV fit which was unstable, and favoured instead a three--planet fit with the third planet not detected in transit. We therefore removed the system as it is probably not a two--planet system.
\item \emph{K2--18:} exoplanet.eu reported high eccentricity values, but these were very poorly constrained by radial velocity measurements \citep{cloutier2017}. More recent work by \cite{sarkis2018} concluded that K2--18c was likely an artefact of stellar activity and therefore we removed this system from consideration.
\item \emph{Kepler--462:} \cite{ahlers2015} provide a high lower limit ($>0.5$) on the eccentricity of planet--2 based on transit photometry, and noted that their solutions were unstable. Lacking a good fit, we remove this system from consideration.
\end{itemize}
\subsection{Global results}
We now consider a total of 3485 simulations for the following statistics, since we have removed 6 two--planet systems (60 simulations) following the previous analysis, plus 185 additional simulations that experience dynamical instabilities on the MS. We keep the simulations of resonant systems that were stable on the MS, since their orbital configurations have led them to avoid any destabilising effects of the resonances.
We find that 101 of the 3485 simulations (2.9\,$\%$) lose one planet, whether by planet--planet collision, planet--star collision or ejection, while 3384 simulations (97.1\,$\%$) keep both planets for the entire simulated time (10 Gyr).
In Table \ref{tab} we show the number of planets lost, the type of dynamical instability by which each is removed from the system, and two relative percentages, the first with respect to the total number of simulations and the second with respect to the total number of planets simulated (6970), at different evolutionary stages of the host star: the MS phase ($t \leq 377.65$\,Myr); the pre--WD phase, which takes into account the RGB and AGB phases ($377.65\mathrm{\,Myr} < t \leq 477.57$\,Myr); and the WD phase ($t > 477.57$\,Myr).
\begin{table*}
\begin{center}
\caption{Number of planet instabilities (collision between the planets, planet collision with the star, ejection) appearing at different evolutionary stages. The first percentage is the fraction of simulations in which the given outcome occurred; the second, the fraction of planets experiencing said outcome. ``Pre--WD'' means subgiant through to AGB tip.}
\label{tab}
\begin{tabular}{l c c c c}
\noalign{\smallskip} \hline \noalign{\smallskip}
& MS & pre--WD & WD & Total \\
\noalign{\smallskip} \hline \noalign{\smallskip}
{\bf Ejections} & -- & 2 (0.06\,$\%$, 0.03\,$\%$) & 85 (2.44\,$\%$, 1.22\,$\%$) & 87 (2.5\,$\%$, 1.25\,$\%$) \\
{\bf Planet--star collisions} & -- & -- & 5 (0.14\,$\%$, 0.07\,$\%$) & 5 (0.14\,$\%$, 0.07\,$\%$) \\
{\bf Planet--planet collisions} & -- & 2 (0.06\,$\%$, 0.03\,$\%$) & 7 (0.2\,$\%$, 0.1\,$\%$) & 9 (0.26\,$\%$, 0.13\,$\%$)\\
{\bf Total} & -- & 4 (0.12\,$\%$, 0.06\,$\%$) & 97 (2.78\,$\%$, 1.39\,$\%$) & 101 (2.9\,$\%$, 1.45\,$\%$) \\
\hline
\end{tabular}
\end{center}
\end{table*}
In general, we see that the most prominent type of instability in our simulations is the ejection of a planet (2.5\,$\%$; 87/3485), which occurs mainly in the WD phase. Planet--planet collisions happen at a rate of 0.26\,$\%$ (9/3485).
Finally, 0.14\,$\%$ (5/3485) of the systems experience a collision between a planet and the star.
In the pre--WD phase (377.65 -- 477.57 Myr), our simulations resulted in
4 (0.12\,$\%$) dynamical instabilities: two planet--planet collisions and two planet ejections. It is worth mentioning that in 15 simulations both planets are stable on the MS but undergo orbit crossing just at the AGB tip of the host star, changing the order of the planets at the beginning of the WD phase.
During the WD phase, 97 (2.78\,$\%$) of the simulations resulted in the loss of a planet: most of them, 85 (2.44\,$\%$), correspond to ejections, 7 (0.2\,$\%$) to collisions between the planets, and just 5 (0.14\,$\%$) to a direct collision of the planet with the WD.
To end this sub--section, it is worth mentioning that \citet{veras2013b} tested whether the two--planet systems discovered up to November 2012 remain
stable, using their Hill stability criterion and assuming planets with minimum mass and coplanar orbits.
They found four unstable planet pairs:
24~Sex, HD~128311 and HD~200964, which we also found to be unstable during the MS phase because they require stabilisation by the 2:1 and 4:3 mean motion resonances; the fourth system, BD+20~2457, is not analyzed in this work since it is an evolved star. Additionally, they identified three more pairs of planets that are expected to be Hill stable but Lagrange unstable at late MS times: HD~183263, HD~108874 and HD~4732. In our 10 Gyr simulations, HD~183263 indeed becomes Lagrange unstable on the MS with the original parameters, but with the updated parameters we find it stable. HD~108874 remained stable for the entire simulated time, and HD~4732 was not considered in this work since its host is a giant star.
\subsection{The planet semimajor axis ratio}
\label{semitime}
In the following we perform a deeper analysis of the two--planet system parameters and examine how they relate to the instability times obtained in our simulations. We begin with the semimajor axis ratio. In Figure \ref{instabili} we show the instability time vs the semimajor axis ratio of our two--planet systems, together with the locations of some first-- and second--order mean motion commensurabilities. The instabilities on the MS occur near these commensurabilities, most of them around the 2:1, confirming the reduced survival rate that \citet{pu2015} found for {\it Kepler} multiple--planet systems with planets in first-- and second--order commensurabilities. On the other hand, \citet{veras2013b} found that their simulations of two--planet systems, using planets with identical masses and eccentricities $e_1=e_2 = 0.1$, become completely stable beyond the 2:1, $a_2/a_1 = 1.58$, for a $3\mathrm{\,M}_\odot$ host star. In our simulations, we find that the most widely--separated systems that lose a planet on the MS lie near the 3:1 mean motion commensurability, at a semimajor axis ratio $a_2/a_1 \approx 2$.
\begin{figure}
\begin{center}
\includegraphics[width=9.3cm, height=8cm]{2pl_aratin2.eps}
\caption{Instability time as a function of the semimajor axis ratio of our two--planet systems. Ejections are shown in dark blue, planet--planet collisions in pink and planet--star collisions in light green. The black vertical ticks mark the semimajor axis ratios of the two--planet systems used in this study. First-- and second--order mean motion commensurabilities (6:5, 4:3, 3:2, 5:3, 2:1, 3:1 from left to right) are shown with Y--shaped black symbols at the top of the graph. The red horizontal dashed line marks the time when the star becomes a WD. Instabilities on the MS are shown with small empty circles. The x--shaped symbols mark planet ejections produced when the non--adiabatic mass loss regime is reached. }
\label{instabili}
\end{center}
\end{figure}
We find four planetary systems (WASP--53, HD~187123, HD~219828 and PR0211) for which 17 simulations resulted in the ejection of planet--2 within the first 6 Myr after the beginning of the WD phase. The main characteristic of these systems is that they have the largest semimajor axis ratios: due to our scaling set--up, their planet--2 is initially located at 909, 1147, 1324 and 1821\,au, with eccentricities of 0.84, 0.252, 0.812 and 0.7 respectively. \citet{veras2011} found that a planet located at distances of $\sim$1000\,au, on such a wide, eccentric orbit, may enter the non--adiabatic mass loss regime of the host star (meaning that the mass loss time--scale is comparable to the planetary orbital time--scale); the planet then enters a run--away phase, in which its eccentricity and semimajor axis increase drastically, resulting in its ejection by the time the star has lost at least 70\,$\%$ of its mass (the fraction of mass lost by a $3\mathrm{\,M}_\odot$ star when it becomes a WD). Alternatively, it can be protected from ejection, depending on its true anomaly evolution. With this in mind, the planet ejections found in the four scaled systems mentioned above are due to non--adiabatic mass loss and not to Lagrange instability. We show these planet ejections in Figure \ref{instabili} with x--shaped symbols.
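The onset of the non--adiabatic regime can be illustrated with a back--of--the--envelope comparison of the orbital period with the mass loss time--scale $M/\dot{M}$. This is only a sketch: the mass loss rate below is an assumed, illustrative peak AGB value, and the precise adiabaticity index of \citet{veras2011} differs by factors of order unity.

```python
import math

def orbital_period_yr(a_au, mstar_msun):
    # Kepler's third law in solar units: P[yr]^2 = a[au]^3 / M[Msun]
    return math.sqrt(a_au ** 3 / mstar_msun)

def adiabaticity(a_au, mstar_msun, mdot_msun_yr):
    # Ratio of the orbital period to the mass loss time-scale M/Mdot;
    # values approaching unity signal non-adiabatic mass loss.
    return orbital_period_yr(a_au, mstar_msun) * mdot_msun_yr / mstar_msun

MDOT = 1e-4  # Msun/yr, ASSUMED illustrative peak AGB mass loss rate
print(adiabaticity(40.0, 3.0, MDOT))    # ~5e-3: safely adiabatic
print(adiabaticity(1000.0, 3.0, MDOT))  # ~0.6: time-scales become comparable
```

A planet at tens of au barely notices each orbit's mass loss, whereas at $\sim$1000\,au the two time--scales are comparable, which is the regime in which the run--away eccentricity growth described above can operate.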
The theoretical Hill stability limit in the WD phase, in terms of the semimajor axis ratio, is calculated using the procedure of \citet{veras2013b} and can be compared to the parameters of the simulated systems. We calculate the ratio between the observed semimajor axis ratio and the WD Hill stability limit. Of our 3485 two--planet simulations, 106 have an observed semimajor axis ratio lower than the ratio required for Hill stability in the WD phase; we can therefore expect these systems to be unstable in the MS or WD phases. After performing the simulations, we found that 51 of these 106 systems suffer dynamical instabilities in which a planet is lost by ejection, planet--planet collision or planet--star collision in the pre--WD or WD phases; 17 simulations undergo orbital scattering and/or orbit crossing between the planets without losing a planet; and the other 38 simulations, of the 106 predicted unstable, remained stable for the entire simulated time. Conversely, among the simulations expected to be stable for the entire simulated time, we obtained 33 that were Hill--stable during the WD phase but Lagrange--unstable, losing a planet in this phase. The latter cases are clear examples of Hill--stable planetary systems that become Lagrange--unstable due to the mass loss of the star, confirming that the boundaries of stability change as the star becomes a WD \citep{debes2002}. We also have 15 simulations expected to be Hill--stable that show orbital scattering during the WD phase without any orbit crossing or loss of a planet. Note that we do not count amongst the latter the 17 simulations with planet ejections produced by the non--adiabatic mass loss regime.
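As a rough cross--check of how the Hill limit widens with mass loss, one can use the simpler circular, coplanar approximation $a_2/a_1 \gtrsim 1 + 2.40\,\mu^{1/3}$ (Gladman 1993) rather than the full \citet{veras2013b} prescription used in this work. The $0.75\mathrm{\,M}_\odot$ WD mass below is an assumed value for illustration:

```python
MJ_IN_MSUN = 9.546e-4  # Jupiter mass in solar masses

def critical_ratio_circular(m1_mj, m2_mj, mstar_msun):
    # Approximate Hill-stability limit for two circular, coplanar planets:
    # a2/a1 > 1 + 2.40 * mu^(1/3), with mu = (m1 + m2) / M*  (Gladman 1993).
    mu = (m1_mj + m2_mj) * MJ_IN_MSUN / mstar_msun
    return 1.0 + 2.40 * mu ** (1.0 / 3.0)

# 24 Sex pair from Table 1 (observed a2/a1 = 1.56), before and after mass loss
print(critical_ratio_circular(3.88, 1.68, 3.0))   # ~1.29 on the MS
print(critical_ratio_circular(3.88, 1.68, 0.75))  # ~1.46 around the WD
```

The same observed separation thus sits much closer to the critical ratio once the star has become a WD, which is the effect quantified with the full prescription in the text.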
\begin{figure}
\begin{center}
\includegraphics[width=9cm, height=7cm]{2pl_wdhlim2.eps}
\caption{Ratio of the simulated semimajor axis ratio of the two--planet systems to the theoretical Hill limit, calculated following the prescription in \citet{veras2013b} for the WD mass, as a function of the simulated semimajor axis ratio. The black dashed line depicts where the simulated semimajor axis ratio and the Hill--limit semimajor axis ratio at the WD phase are equal. Light blue dots show systems in which both planets have eccentricities $\leq$ 0.1; orange dots, systems in which one or both planets have eccentricities $>$ 0.1. Gray dots mark the systems that are unstable in the simulations, and black dots those whose instabilities happen in the WD phase. The x--shaped symbols depict simulations in which planets are ejected after entering the non--adiabatic mass loss regime.}
\label{mudelarat}
\end{center}
\end{figure}
In Figure \ref{mudelarat} we show the ratio between the observed semimajor axis ratio of the planetary systems and the critical semimajor axis limit at which the planets may become Hill unstable, using the theoretical prescription given by \citet{veras2013b} with the WD mass, plotted as a function of observed semimajor axis ratio.
We observe that most of the two--planet systems follow a trend that indicates the larger the semimajor axis ratio, the larger the difference of the observed ratio with respect to the theoretical one. We note that the light blue dots, representing low--eccentricity systems, follow a straight line with positive slope. Nevertheless, for planets that have eccentricities higher than 0.1, their semimajor axis difference deviates from this trend.
\subsection{The planet:star mass ratio}
We analyze the effects of the planet:star mass ratio, defined as $\mu=\frac{m_1+m_2}{M_*}$, where $m_1$ and $m_2$ are the planet masses and $M_*$ is the mass of the central star. We use this to calculate the separation of the planets in units of the mutual Hill radius, defined as
\begin{equation}
R_\mathrm{H,mutual}=\frac{a_1+a_2}{2}\left(\frac{m_1+m_2}{3M_*}\right)^{1/3}
\end{equation}
where $a_1$ and $a_2$ are, as stated before, the semimajor axes of planets 1 and 2 respectively. Note that here we use a host star mass of $3\mathrm{\,M}_\odot$ in the calculation of the mutual Hill radii.
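The separation in mutual Hill radii can be sketched in a few lines (the function names are ours; the example values are the 24~Sex entries of Table \ref{mms}):

```python
MJ_IN_MSUN = 9.546e-4  # Jupiter mass in solar masses

def mutual_hill_radius_au(a1_au, a2_au, m1_mj, m2_mj, mstar_msun=3.0):
    # R_H,mutual = (a1 + a2)/2 * ((m1 + m2) / (3 M*))^(1/3), as in the text
    return 0.5 * (a1_au + a2_au) * (
        (m1_mj + m2_mj) * MJ_IN_MSUN / (3.0 * mstar_msun)) ** (1.0 / 3.0)

def delta(a1_au, a2_au, m1_mj, m2_mj, mstar_msun=3.0):
    # Separation of the pair in units of the mutual Hill radius
    return (a2_au - a1_au) / mutual_hill_radius_au(
        a1_au, a2_au, m1_mj, m2_mj, mstar_msun)

# 24 Sex scaled pair from Table 1
print(delta(10.0, 15.6, 3.88, 1.68))  # ~5.2, inside the unstable region Delta <= 9.74
```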
Since the distribution of separations of the planets $\Delta$ in terms of mutual Hill radius is also a function of $\mu$, in Figure \ref{mudel} we show $\Delta$ as a function of $\mu$ for the two--planet systems analyzed in this work.
In general, we see that lower--mass planets can be more widely spaced in mutual Hill radii. The envelope in the $\mu-\Delta$ plane simply reflects the fact that $\Delta$ is bounded above at fixed $\mu$, with $\Delta_{\mathrm{max}}=2(\frac{m_1+m_2}{3M_*})^{-1/3}$ \citep[cf.][]{mustill2014}. To give a sense of the planet masses in terms of $\mu$, pairs of Earth--, Neptune-- and Jupiter--mass planets have values of the order of $10^{-6}$, $10^{-5}$ and $10^{-3}$ respectively. We highlight that the unstable systems that lose a planet by Hill or Lagrange instabilities are located in the lower part of the graph ($\Delta\leq 9.74$) and in the $\mu$ range from $1.8\times10^{-5}$ to $0.03$.
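The envelope value $\Delta_{\mathrm{max}}$ and the quoted orders of magnitude of $\mu$ can be checked directly (standard planet masses in solar units; equal--mass pairs are assumed for simplicity):

```python
# Planet masses in solar masses (standard values)
M_PLANET = {"Earth": 3.003e-6, "Neptune": 5.15e-5, "Jupiter": 9.546e-4}

def delta_max(m1_msun, m2_msun, mstar_msun=3.0):
    # Upper envelope of Delta, approached as a2/a1 -> infinity:
    # Delta_max = 2 * ((m1 + m2) / (3 M*))^(-1/3)
    return 2.0 * ((m1_msun + m2_msun) / (3.0 * mstar_msun)) ** (-1.0 / 3.0)

for name, m in M_PLANET.items():
    mu = 2.0 * m / 3.0  # equal-mass pair around a 3 Msun star
    print(f"{name}: mu = {mu:.1e}, Delta_max = {delta_max(m, m):.0f}")
```

An Earth--mass pair can thus in principle be spaced by over two hundred mutual Hill radii, while a Jupiter--mass pair is capped near $\Delta\sim34$, which produces the envelope seen in Figure \ref{mudel}.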
\begin{figure}
\begin{center}
\includegraphics[width=9cm, height=7.5cm]{2pl_mudel2.eps}
\caption{Distribution of planet separation in mutual Hill radii ($\Delta$) as a function of $\mu = (m_1+m_2)/M_*$, where $M_*$ is the mass of the host star on the MS. Red points mark systems discovered by RV, blue those by transit. Systems where at least one run was unstable during the WD phase are represented as black plus symbols, and those experiencing ejections by non--adiabatic mass loss as x--shaped symbols. }
\label{mudel}
\end{center}
\end{figure}
It is clear that the planetary systems discovered by the transit method differ in their distribution in the $\mu$--$\Delta$ space from those detected by RV (see Figure \ref{mudel}).
The majority of the planets detected by transits are less massive than those detected by RV, and therefore they have smaller Hill radii. This means that they can be very widely dynamically spaced (in terms of Hill radii) even when they are rather closely spaced physically (in terms of au or semimajor axis ratio), making it easier for the transiting systems to remain stable than for the RV systems. We demonstrate this in the left panel of Figure \ref{hilmut}, where we plot the instability time (Myr) vs $\mu$. Most of the instabilities occurring during the MS phase are due to planets with masses higher than that of Jupiter. In fact, the instabilities happening at very early times (between 0.1 and 10 Myr) are those produced by the high--mass planets ($m_\mathrm{planet} > 1 \mathrm{\,M_J}$). Then, as $\mu$ decreases, the instabilities move to later times, until at some point ($\mu\leq 1.84\times10^{-5}$) they cease to happen. In the right panel of Figure \ref{hilmut} we see that most of the MS and WD instabilities that lead to the loss of a planet happen at $\Delta \leq 10$; beyond that, the planets are separated widely enough to remain stable for the entire simulated time.
\begin{figure*}
\begin{center}
\includegraphics[width=17.5cm, height=7.7cm]{2pl_mu_del2.eps}
\caption{Left: Instability times vs $\mu$. Colors and symbols are as in Fig. \ref{instabili}. Right: Same as the left panel but showing the separation $\Delta$ in units of the mutual Hill radius.}
\label{hilmut}
\end{center}
\end{figure*}
\subsection{The planet mass and eccentricity ratio}
In the following we explore how the mass and eccentricity of the planets relate to the instability times. In the left panel of Figure \ref{massecc}, we plot the instability times (Myr) vs the planet mass ratio. We have also shown the cases explored in the simulations performed by \citet{veras2013b,veras2013}.
We can see that most of the systems that have an instability in the WD phase have planet mass ratios between 0 and 9.2. The most extreme is Kepler--487, which has a planet mass ratio of $\sim 2\times 10^{-3}$. For this system, the mass--radius calculation gives planet--1 a mass of 2595\,$\mathrm{M_\oplus}$ (8.16\,$\mathrm{M_J}$) for a radius of 10.9\,$R_\oplus$, while for planet--2 the calculated mass is 5.25\,$\mathrm{M_\oplus}$ for a radius of 2.07\,$R_\oplus$. We clearly see that in most of our simulations planet--2 is more massive than planet--1, as verified by the large number of black lines to the right of the dotted line (and see Figure~\ref{mrati}). The number of instabilities in the WD phase for a planet mass ratio $<$ 1 is slightly larger than for a mass ratio $\geq$ 1 (44 and 36 respectively).
In the right panel of Figure \ref{massecc} we show the same as in the left panel but as a function of the eccentricity ratio, where black upside--down triangles depict the eccentricity ratios studied in \citet{veras2013}. The instabilities leading to the loss of a planet in the WD phase in our simulations span a wider range of eccentricity ratios than those explored previously in the literature, from 0 up to 4.86. We note that, in contrast to the planet mass ratio, the eccentricity ratio values explored in our simulations are quite symmetric about the ratio at which the eccentricities of both planets are equal. We also note that the planet ejections due to non--adiabatic mass loss have eccentricity ratios between 10 and 100, with one case even reaching a value around 200, which means that those systems harbour a planet--2 on a much more eccentric orbit than its companion. Nevertheless, we find more planet losses in the WD phase at eccentricity ratios $\geq$ 1 than $<$ 1 (43 simulations vs 37).
\begin{figure*}
\begin{center}
\includegraphics[width=17.5cm, height=7.7cm]{2pl_mr_er2.eps}
\caption{Left panel: instability times vs the planet mass ratio. Colors and symbols are as in Fig. \ref{instabili}. Right panel: instability times as a function of the eccentricity ratio. Black circles and upside--down triangles at the top of the panels mark the planet mass ratios (left) and eccentricity ratios (right) of the two--planet simulations used in \citet{veras2013b,veras2013} respectively. The black dotted line depicts the ratio at which the mass or eccentricity of both planets is equal.}
\label{massecc}
\end{center}
\end{figure*}
\section{DISCUSSION}
\label{planetesi}
\subsection{Instability without loss of a planet}
Hitherto we have treated systems as ``unstable'' when they lose a planet due to ejection or collision. However, systems where planets experience some degree of scattering without being lost are also of relevance for polluting white dwarfs, as their changing orbits and increasing eccentricities can lead to scattering of asteroids. Indeed, \cite{mustill2018} found that such systems can be among the most efficient at delivering material to the WD.
Among these systems, there are three groups to identify. The first group consists of 26 simulations in which the orbits intersect and this produces orbital scattering in the semimajor axis until one of the planets is lost, either by ejection, planet--star collision or planet--planet collision in the WD phase (13 of them have orbit crossing before the WD phase). In the second group we include the 14 simulations where orbit crossing and orbital scattering in the semimajor axis are present, but no planet is lost in the 10 Gyr of simulated time (2 simulations have orbit crossing before the WD phase). In the third group of 18 simulations the planet orbits do not cross and no planet is lost either, but they show orbital scattering in the semimajor axis. We define orbital scattering as occurring in those systems where the semimajor axis of planet--1 and/or planet--2 differs by more than 5\,$\%$ from its value at the beginning of the WD phase. In total, 58 simulations out of 3485 (1.66\,$\%$) fall within the groups defined above. Note that 54 of the 80 simulations in which a planet is lost by Hill or Lagrange instabilities during the WD stage do not experience any previous orbit crossing.
In Figure \ref{scat} we display four examples of the groups defined previously, where the evolution of the semimajor axis is shown as a function of time, together with the evolution of the apocentre and pericentre. In the upper panels we show the orbital evolution of the scaled systems {\it Kepler}--200 (left) and HIP~65407 (right), which lose a planet by planet--planet collision and ejection respectively; {\it Kepler}--200 also has the orbits of its planets crossing several times before the planet--planet collision. We highlight, for example, that the top left simulation exhibits a collision between the planets at 8.6 Gyr, which would produce debris that can be launched toward the WD by the remaining planet. The system in the top right experiences the ejection of planet--2 at 0.95 Gyr; we highlight, however, that the pericentre of the surviving planet--1 reaches the Roche radius of the WD several times, the first time at 751 Myr and the last at 859 Myr (see section~\ref{sec:roche} for further discussion). In the lower panels we present another run of the scaled system {\it Kepler}--200 (left) and the system {\it Kepler}--29 (right). In these cases both planets remain bound to the system for the entire simulated time; however, {\it Kepler}--200 shows several instances of orbit crossing during the 10 Gyr, while {\it Kepler}--29 does not have any orbit crossing but its orbital scattering is quite large, increasing as the system evolves until the pericentre and apocentre of the planets cover a range of $\sim$ 80\,au in the last Gyr of the simulation. Note that the scattering in the simulation in the bottom left panel begins during the MS, and following the mass loss this orbital behavior causes the planets to migrate from the inner to the outer regions of the planetary system.
In the case shown in the bottom right there is neither a planet loss nor an orbit crossing between the planets, but there is quite large orbital scattering in the pericentre and apocentre of planets 1 and 2 for the entire 10 Gyr of the simulation.
\begin{figure*}
\begin{center}
\includegraphics[width=8.5cm, height=6.9cm]{Kepler-200_0_2pl.eps}
\includegraphics[width=8.5cm, height=6.9cm]{HIP65407_5_2pl.eps}\\
\includegraphics[width=8.5cm, height=6.9cm]{Kepler-200_5_2pl.eps}
\includegraphics[width=8.5cm, height=6.9cm]{Kepler-29_1_2pl.eps}
\caption{Four representative examples of orbital evolution where the orange and green lines show the semimajor axis of planet--1 and planet--2 respectively. The lighter versions of these lines show the evolution of the apocenter and pericenter of the planets. As usual the red dashed vertical line represents the time when the star becomes a WD. From top to bottom and left to right, the input parameters of these simulations can be found in Table \ref{mms} under \#1831, 866, 1836, and 2352. }
\label{scat}
\end{center}
\end{figure*}
\subsection{Reaching the Roche limit of the WD}
\label{sec:roche}
The simulation of the scaled system HIP~65407 shown in the upper right panel of Figure \ref{scat} serves as an example of a very important behaviour in the simulations: clear scattering is present, especially in the pericentre of planet--1, but most importantly there are periods of time when the pericentre of planet--1 comes very close to the WD radius (0.02 $R_\odot$). During these instances, the planet could experience tidal destruction, an effect not included in the simulations. Therefore, we post--process the simulation results in order to compare the pericentres of all the planets with the Roche radius of the WD,
\begin{equation}
a_\mathrm{Roche}=\left(\frac{3\rho_\mathrm{WD}}{\rho_\mathrm{pl}}\right)^{1/3}R_\mathrm{WD}
\end{equation}
where $\rho_\mathrm{WD}$, $\rho_\mathrm{pl}$ are the densities of the WD and the planet respectively and $R_\mathrm{WD}$ is the radius of the WD \citep{mustill2014}.
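This post--processing step can be sketched numerically. The $0.75\mathrm{\,M}_\odot$ WD mass below is an assumed value for illustration; the $0.02\,R_\odot$ radius is the value quoted in the text and the planet density of $1.22\mathrm{\,g\,cm^{-3}}$ is taken from Table \ref{mms}:

```python
import math

MSUN_G = 1.989e33    # solar mass in g
RSUN_CM = 6.957e10   # solar radius in cm
AU_CM = 1.496e13     # astronomical unit in cm

def roche_radius_au(m_wd_msun, r_wd_rsun, rho_planet_gcc):
    # a_Roche = (3 rho_WD / rho_pl)^(1/3) * R_WD, as in the equation above,
    # with rho_WD computed from the WD mass and radius
    r_wd_cm = r_wd_rsun * RSUN_CM
    rho_wd = m_wd_msun * MSUN_G / (4.0 / 3.0 * math.pi * r_wd_cm ** 3)
    return (3.0 * rho_wd / rho_planet_gcc) ** (1.0 / 3.0) * r_wd_cm / AU_CM

# ASSUMED 0.75 Msun WD with the 0.02 Rsun radius quoted in the text;
# planet density 1.22 g/cm^3 (planet-2 of 24 Sex in Table 1)
print(roche_radius_au(0.75, 0.02, 1.22))  # ~0.006 au
```

A planet whose pericentre $q = a(1-e)$ dips below this value in any output snapshot is flagged as a candidate for tidal disruption.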
Seven of our simulations have the pericentre of planet--1 at smaller distances than the calculated Roche radius: two of the system HIP~65407 (one of them shown in the upper right panel of Figure \ref{scat}), and five of the system HD~113538. These occur at cooling ages of $\sim100$\,Myr to several Gyr. The five planet--star collisions found in the WD phase are among the 7 simulations where planet--1 crosses the Roche radius. This means that these planets could be tidally disrupted by the WD, hence producing a circumstellar disc and polluting its atmosphere through accreted material.
\citet{gansicke2019} recently interpreted the gas disc in WD~J0914+1914 as the photo--evaporated atmosphere of an icy giant planet. They reached this conclusion from the inconsistency of accretion from the wind of a low--mass stellar companion to WD~J0914+1914, the depletion of rock--forming elements with respect to bulk Earth composition, and the larger size of the circumstellar disc compared with the canonical disc that a planetesimal would form. They also argued that Neptune-- to Jupiter--mass planets need to be at distances smaller than 14--16 $R_\odot$ for their atmospheres to begin to photo--evaporate and form a gas disc similar to the WD~J0914 disc. With this less restrictive condition in mind, we check whether we find more systems with pericentres reaching this $16R_\odot$ limit. We find 2 more simulations: one of the scaled system HD~113538, where one planet is ejected at 988 Myr, and one of the system HD~30177, where the planet is lost by ejection at 547 Myr. Note, however, that evaporation requires high irradiation sustained over extended periods of time (see e.g. \citealt{Villaver2007}), a condition hardly fulfilled merely by reaching the pericentre distance mentioned above. Nevertheless, some of these close pericentre passages in our simulations occur at cooling ages of just a few tens of Myr, when the WD is still hot and bright.
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=9cm, height=7.5cm]{2pl_wdin_ae2.eps}
\includegraphics[width=9cm, height=7.5cm]{2pl_wdfn_ae2.eps}
\end{tabular}
\caption{Eccentricity vs semimajor axis of the scaled two--planet systems simulated in this work, in the left panel at the time immediately after the central star becomes a WD (478 Myr) and in the right panel
at the end of the simulations (10 Gyr). The orange and green colours are for planet--1 and planet--2 respectively, and the plus symbols are for planets in systems that never lose a planet. Brown dots indicate planets that survive the loss of their companion, while purple ones depict planets that will be lost during the WD times. Light blue and black circles around the plus symbols depict, respectively, planets that undergo an orbit crossing before the WD time and planets that have orbital scattering without any planet loss or orbit crossing. The 10 planets that become unstable at or around that time (given the time resolution of the outputs of the simulations) are not shown in the graph, nor are the 17 planets ejected due to non--adiabatic mass loss. Right panel: we highlight the planets whose companion was lost after the system suffered a dynamical instability in the WD phase. Dark blue dots show the planets that survived the ejection of the other planet, pink dots the planets surviving a planet--planet collision, and light green dots the planets surviving a planet--star collision of their companion. The light blue and black circles show the same as in the left panel, with the difference that they now indicate orbit crossing or orbital scattering only in the WD phase.}
\label{surveccar}
\end{center}
\end{figure*}
\subsection{Eccentricity and Planet mass}
In the two snapshots of Figure \ref{surveccar} we show the distribution of the final eccentricity vs final semimajor axis of our two--planet simulations: on the left at the beginning of the WD phase (478 Myr), and on the right at the end of the simulations (10 Gyr).
This is to be compared with the initial $a-e$ distribution shown in Figure~\ref{initea}.
Planet--1 is mostly found at $\sim40$\,au, as expected from adiabatic orbit expansion, with a secondary tail of planets that have experienced scattering extending inwards to 20\,au at a range of eccentricities. This tail is typical of orbital elements after scattering \citep[e.g.,][]{chatterjee2008,mustill2014}.
The only planets--1 beyond 50\,au when the WD phase begins are those that underwent orbit crossing at pre--WD times. Planet--2 fills a much wider region of the eccentricity -- semimajor axis space, with planets reaching distances $\geq$ 10\,000\,au and eccentricities $\geq 0.95$. Note that many of these are the planets that will be ejected later on. The locations of stable planets in Figure \ref{surveccar} are as expected given adiabatic mass loss and neglecting tidal forces \citep{Villaver2007}, $a_\mathrm{WD}=a_\mathrm{MS} (M_\mathrm{MS}/M_\mathrm{WD})$, where $a_\mathrm{MS}$ is the initial orbital radius and $M_\mathrm{MS}$, $M_\mathrm{WD}$ are the masses of the star in the MS and WD phases respectively. Thus, given the initial configuration, stable planets--1 are expected to end at 40\,au after mass loss, with planets--2 further out, the bulk of them located at distances from 45 to 350\,au and others beyond $10^3$\,au.
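A quick sanity check of the expected pile--up of stable planets--1 (the $0.75\mathrm{\,M}_\odot$ WD mass is an assumption chosen here to match the $\sim40$\,au pile--up; the paper's stellar models set the exact value):

```python
def adiabatic_final_a(a_ms_au, m_ms_msun, m_wd_msun):
    # Adiabatic orbit expansion under slow mass loss: a_WD = a_MS * (M_MS / M_WD)
    return a_ms_au * m_ms_msun / m_wd_msun

# Planet-1 starts at 10 au around a 3 Msun star in the scaled set-up;
# an ASSUMED 0.75 Msun remnant reproduces the ~40 au pile-up described above
print(adiabatic_final_a(10.0, 3.0, 0.75))  # 40.0 au
```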
Planets--2 that have so far survived scattering are found in a second tail that extends to high semimajor axis and eccentricity. Planets are not found in the region between the two tails, since planets there are prone to dynamical instabilities \citep{chatterjee2008}.
In the right panel of Figure \ref{surveccar} at 10 Gyr we have added the information of the surviving planets: specifically, what type of dynamical instability leads to the loss of their companions at the WD phase.
We can see that all the surviving planets that lose their companions by ejection ended between 17 and 41\,au, and between $\sim 0$ and $0.92$ in eccentricity, while those surviving a planet--planet collision ended between 39 and 44\,au, with eccentricities $\leq$ 0.07, and the survivors of a planet--star collision are wide planets with semimajor axes $> 150$\,au and eccentricities $> 0.49$.
The distribution of eccentricities of the surviving planet of the initial pair at the end of our simulations is shown in Figure \ref{eccentric}.
We see a bi--modal distribution with one peak at eccentricities around $0.0$ and a second one around $0.6$.
Note that this distribution in the second peak is very different from the initial one (see Figure \ref{esemi}), meaning that highly eccentric planets are produced by interactions of multiple planets. This is important given that simulations that include the interaction of planets with a planetesimal belt conclude that highly eccentric planets may efficiently deliver material toward the WD \citep[see, e.g.,][]{bonsor2011, frewen2014,mustill2018}. Here we provide a mechanism for a planet to acquire an eccentricity large enough to be an efficient deliverer of material: planet--planet scattering in a multiple planetary system.
We have 80 simulations in which a planet is lost either by Hill or Lagrange instability during the WD phase. We now look at the distribution of eccentricities of the remaining planet and compare them with the results of previous works.
We find that in 52 of these 80 (1.49\,$\%$ of the total 3485) simulations, one of the planets ends up with an eccentricity larger than $0.4$. These could be ideal candidates for sending large numbers of asteroids towards the WD; however, 51 of these 52 planets have masses $>1\mathrm{\,M_J}$ and would therefore have low efficiency at delivering material to the WD, preferentially ejecting asteroids instead.
\begin{figure}
\begin{center}
\includegraphics[width=9cm, height=6.5cm]{2pl_fwd_e2.eps}
\caption{Distribution of eccentricities of the surviving planets after the other planet becomes unstable and is lost from the system. The eccentricities are taken once the remaining planet becomes stable in the WD phase.}
\label{eccentric}
\end{center}
\end{figure}
However, we do find lower--mass planets if we include systems which, during the WD phase, experience either i) the loss of a planet, ii) orbit crossing without the loss of a planet, or iii) orbital scattering in the semimajor axis of both planets.
112 out of 3485 simulations fulfill this condition (3.21\,$\%$).
If we now look into their mass distribution, we have 25 simulations (0.72\,$\%$) where the planet--1 mass is in the range $1-30\mathrm{\,M}_\oplus$ and 35 (1.004\,$\%$) where the planet--2 mass is in the same range.
These numbers are 39 (1.36\,$\%$) and 49 (1.41\,$\%$) of simulations for planets--1 and --2 respectively, with masses between 10 and $100\mathrm{\,M}_\oplus$. These mass ranges, especially the lower Earth--Neptune mass end, are those identified by \cite{mustill2018} as being most efficient at delivering asteroids to the WD during and after an instability.
\subsection{Pollution and cooling times}
We now briefly discuss the cooling times at which our
instabilities occur, and relate this to observations.
Observationally, polluted WDs with detected IR excesses may
have a peak distribution in cooling ages around 400--500 Myr,
while the polluted WDs without IR excesses have a distribution
in cooling times that extends up to 1 Gyr\footnote{Using data
from \citet{farihi2009,debes2011,girven2012,rocchetto2015},
complemented by the Montreal White Dwarf Database \citep{dufour2017b},
http://www.montrealwhitedwarfdatabase.org.}. While we do not
perform a quantitative comparison with our simulations, which
would require addressing the biases of different surveys,
these data do indicate that planetary/asteroidal material
is being delivered to WDs at a large range of cooling ages.
In particular, regardless of the true time dependence of these phenomena,
any dynamical delivery mechanism must be capable of providing
\emph{some} material at late times.
Indeed, we do find that the number of instabilities where a planet is lost in two--planet systems decreases as the cooling time increases, in common with previous dynamical simulations of planetary systems \citep[e.g.,][]{veras2018,mustill2018}. Compared to \cite{veras2013b}, we have fewer planet--planet collisions (0.14\,\% of instabilities overall compared to their $\sim50\,\%$): we can attribute this to the fact that \cite{veras2013b} simulated perfectly coplanar systems, which significantly increases the likelihood of a physical collision when orbits cross. In our simulations most of the instabilities where a planet is lost occur in the first 100 Myr of the cooling time (55), with 14 simulations losing a planet between 100 and 1000 Myr and 11 of them occurring at times $t \geq 1$ Gyr. In contrast, note that \citet{hollands2018} found that the metal abundances in a sample of polluted WDs decay exponentially with an e--folding time of 0.95 Gyr, suggesting that the potentially polluting material is being depleted at post-MS times.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=9cm, height=8cm]{2pl_cool_hist_scat2.eps} \\
\end{tabular}
\caption{Distribution of WD cooling times of the orbital scattering for simulations in which orbit crossing appears. The colours depict different cases. The pink dotted line shows the simulations in which orbit crossing begins and the orbital scattering lasts for the whole simulated time, since no planets are lost in those simulations. The gray dashed line indicates the simulations from the onset of orbit crossing to the loss of a planet, either by ejection, planet--star collision or planet--planet collision in the WD phase. The purple dash--dotted line displays the simulations where only orbital scattering is present, without any orbit crossing or loss of a planet. The black solid line displays the sum of the pink, gray and purple. We highlight that some systems have the orbit crossing before the WD phase; thus they start to count from the beginning of the WD cooling time. }
\label{cross_cool}
\end{center}
\end{figure}
It is interesting to analyze the two--planet simulations that are dynamically active for extended periods of time. Such a period may begin when an orbit crossing is present in the orbital evolution of both planets, since the orbit crossing can trigger long--term orbital scattering of both planets that may launch rocky bodies onto star-grazing orbits, hence producing pollution in the WD atmosphere. This period can last until one of the planets in the system is lost, relaxing the system dynamically; however, it can also last for the entire simulated time.
In Figure \ref{cross_cool}, we display the percentage of systems currently undergoing orbit crossing and orbital scattering at different times during the WD phase, there being 58 such systems in total. We transform our simulated time to WD cooling time, setting our zero point at the time when the WD forms. We show separately systems which either do or do not ultimately lose a planet as a result of the orbit crossing and systems where only orbital scattering is present, as well as the sum of these. Some important results can be outlined from this Figure. First, orbit crossing and orbital scattering are typically of long duration, and of the 58 systems at least two thirds are dynamically ``active'', i.e., are between the onset of orbit crossing and the loss (if any) of a planet, at any one time. We see that the peak of simulations where there is a loss of a planet is in the first 10 Myr of cooling time, after which the number decreases slowly towards the end of the simulation time. This implies that, even if those systems had an infinite reservoir of planetesimals, the delivery rate would decrease naturally, since there would be almost no events that can launch material toward the WD at Gyr times; the number of expected Gyr-old WDs with metal pollution would then be very low. On the other hand, from the simulations which do not eventually lose a planet, we may expect an increase in the number of WDs showing pollution events from hundreds of Myr to Gyr times. In reality, we expect that the reservoir of planetesimals is being depleted in time; thus, the effect of observing fewer WDs with pollution events at older times is enhanced. When we look at the sum of the three lines, representing all 58 systems, the predominant effect is an increased number of pollution events, reaching a peak around 1 Gyr of cooling time.
\section{Conclusions}
We have explored the stability of multiple planetary systems with the goal of understanding the observed pollution rates detected in WDs. We use dynamical simulations, restricting the otherwise infinite parameter space to the evolution to the WD phase of scaled versions of the MS planetary systems that have been detected. In this way we explore a larger parameter space than previous works, using configurations of two-planet systems with different masses, orbits, eccentricities and semimajor axis ratios. This is the first time that dynamical simulations constrained by the observed parameters have been performed to study the instabilities that could bring material to the surface of the WD. Of course, the only reliable constraint on the planet distribution around massive stars has to come from observations. This will most likely take the form of future microlensing surveys, as might be conducted with the {\it Nancy Grace Roman Space Telescope} \citep{spergel2015,penny2019}, which could potentially provide the statistical picture of planetary system architectures on the MS that we need to evolve to the WD to test pollution scenarios. Thus, for the time being, we have based our simulations on a simple, well-informed scenario built on the observed planet distribution; the scaling of these observations, and thus their validity at higher masses, does not necessarily reflect reality, but it is the closest we can explore at the moment with the information available.
We performed 3730 dynamical simulations of 373 planetary systems (we ran 10 simulations of each) orbiting a putative 3 $\mathrm{\,M}_\odot$ parent star. After disregarding for the analysis 245 simulations where a dynamical instability (loss of a planet, orbit crossing) occurs on the MS due to having planets in (or close to) mean motion commensurabilities, we ended up with 3485 simulations that we followed for 10 Gyr, well into the WD phase. We find that 80 (2.3\,$\%$) simulations result in the loss of a planet by Lagrange or Hill instability after the formation of the WD, with only 5 of them sending the planet into the WD. The small number of planet--star collisions we find confirms that these events are rare in the WD phase, especially if we consider also the simulations of \citet{veras2013, veras2013b}, which did not encounter any planetary system that resulted in a planet--star collision in the WD phase. It has been estimated that at most 1--5\,\% of WDs have high ongoing accretion rates due to dust discs \citep{debes2011, farihi2012}. The formation of dust disks around WDs is not followed in our simulations, but if it is a consequence of instabilities that send the planets themselves into the WD to be tidally disrupted, we find that our two-planet system simulation rates are too low to account for this phenomenon. On the other hand, if disks are formed after asteroids are scattered and disrupted, following orbit crossing/scattering of the planets, then our overall rate ($\sim$3\,\%) of such two-body dynamics could account for this level of pollution and, if 100\,\% effective, for most or all of the large levels of IR excess due to dust.
The rate we obtain is so small that it implies that other mechanisms have to be invoked to explain the prevalence of atmospheric pollution in at least $25-50\,\%$ of WDs.
One possibility is that higher planet multiplicity plays a role; to explore this option, simulations of the dynamical evolution of observed systems with three or more planets are ongoing. As has been shown before, the incidence of instabilities increases for systems with three or more planets \citep{mustill2014,veras2016b,mustill2018}. Another important aspect to consider is the fact that planets can be just the mechanism for the delivery of planetesimals, and we can increase the delivery rate to 3.21\,$\%$ of our simulations if we assume that orbit crossing and orbital scattering contribute to the planetesimal delivery toward the WD. Note that, although low, this provides a mechanism to continuously send destabilized material towards the WD.
\section*{Acknowledgements}
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. E.V., R.M. and A.J.M. acknowledge support from the `On the rocks II project' funded by the Spanish Ministerio de Ciencia, Innovaci\'on y Universidades under grant PGC2018-101950-B-I00. MC, RM and EB thank CONACyT for financial support through grant CB-2015-256961. A.J.M. acknowledges support from the project grant 2014.0017 `IMPACT' from the Knut \& Alice Wallenberg Foundation, and from the starting grant 2017-04945 `A unified picture of white dwarf planetary systems' from the Swedish Research Council. We are grateful to Rafael Gerardo Weisz and Francisco Prada for their critical help with the automation of the processes and the use of the cluster.
\section*{Data availability}
The content of Table \ref{mms} is fully available as a machine-readable table in the online supplementary material.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:intro}
\myparagraph{The Needs of Irregular Data-parallel Algorithms}
Many interesting data-parallel algorithms are \emph{irregular}: the
amount of work to be processed is unknown ahead of time and may change dynamically in a workload-dependent manner.
There is growing interest in
accelerating such algorithms on
GPUs~\cite{owens-persistent,DBLP:conf/ipps/KaleemVPHP16,DBLP:conf/ipps/DavidsonBGO14,DBLP:conf/hipc/HarishN07,DBLP:journals/topc/MerrillGG15,DBLP:conf/egh/VineetHPN09,DBLP:conf/ppopp/NobariCKB12,DBLP:conf/hpcc/SolomonTT10a,DBLP:conf/popl/PrabhuRMH11,DBLP:conf/ppopp/Mendez-LojoBP12,DBLP:conf/oopsla/PaiP16,DBLP:conf/oopsla/SorensenDBGR16,DBLP:conf/egh/CedermanT08,TPO10,BNP12,Pannotia}.
Irregular algorithms usually require \emph{blocking synchronization}
between workgroups, e.g.\ many graph algorithms use a level-by-level
strategy, with a global barrier between levels; work
stealing algorithms require each workgroup to maintain a queue,
typically mutex-protected, to enable stealing by other
workgroups.
To avoid starvation, a blocking concurrent algorithm requires
\emph{fair} scheduling of workgroups. For
example, if one workgroup holds a mutex, an unfair scheduler may cause
another workgroup to spin-wait forever for the mutex to be
released. Similarly, an unfair scheduler can cause a workgroup to spin-wait
indefinitely at a global barrier so that other workgroups do not reach the barrier.
\myparagraph{A Degree of Fairness: Occupancy-bound Execution} The current GPU programming
models---OpenCL~\cite{opencl2Spec}, CUDA~\cite{cuda-75} and
HSA~\cite{HSAprogramming11}---specify almost no guarantees regarding
scheduling of workgroups, and current GPU schedulers are unfair in
practice. Roughly speaking, each workgroup executing a GPU kernel is
mapped to a hardware \emph{compute unit}.\footnote{In practice,
depending on the kernel, multiple workgroups might map to the same
compute unit; we ignore this in our current discussion.}
The simplest way for a GPU driver to handle more workgroups being
launched than there are compute units is via an \emph{occupancy-bound}
execution
model~\cite{owens-persistent,DBLP:conf/oopsla/SorensenDBGR16} where,
once a workgroup has commenced execution on a compute unit (it has
become \emph{occupant}), the workgroup has exclusive access to the
compute unit until it finishes execution.
Experiments suggest that this model
is widely employed by today's
GPUs~\cite{owens-persistent,DBLP:conf/oopsla/SorensenDBGR16,DBLP:conf/oopsla/PaiP16,BNP12}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{overview.pdf}
\caption{Cooperative kernels can flexibly resize to let other tasks,
e.g.\ graphics, run concurrently}
\label{fig:overview}
\end{figure}
The occupancy-bound execution model does not guarantee fair scheduling
between workgroups: if all compute units are occupied then a
not-yet-occupant workgroup will not be scheduled until some occupant
workgroup completes execution. Yet the execution model \emph{does}
provide fair scheduling between \emph{occupant} workgroups, which are
bound to separate compute units that operate in parallel. Current GPU
implementations of blocking algorithms assume the occupancy-bound
execution model, which they exploit by launching no more workgroups
than there are available compute units~\cite{owens-persistent}.
\myparagraph{Resistance to Occupancy-bound Execution}
Despite its practical prevalence, none of the current GPU programming
models actually mandate occupancy-bound execution. Further, there are
reasons why this model is undesirable.
First, the execution model does not enable
multitasking, since a workgroup effectively \emph{owns} a compute
unit until the workgroup has completed execution. The GPU cannot be used meanwhile for other
tasks (e.g.\ rendering).
Second, \emph{energy throttling} is an important concern for
battery-powered
devices~\cite{DBLP:journals/comsur/Vallina-RodriguezC13}. In the
future, it will be desirable for a mobile GPU driver to power down
some compute units, suspending execution of associated occupant
workgroups, if the battery level is low.
Our assessment, informed by discussions with a number of industrial
practitioners who have been involved in the OpenCL and/or HSA
standardisation efforts
(including~\cite{PersonalCommunicationRichards,PersonalCommunicationHowes}),
is that GPU vendors (1) will not commit to the occupancy-bound
execution model they currently implement, for the above reasons, yet
(2) will not guarantee fair scheduling \TSAdded{using preemption. This
is due to the high runtime cost of preempting workgroups, which
requires managing thread local state (e.g. registers, program
location) for all workgroup threads (up to 1024 on Nvidia
GPUs), as well as \emph{shared memory}, the workgroup local cache
(up to 64 KB on Nvidia GPUs)}. Vendors instead wish to retain the
essence of the simple occupancy-bound model, supporting preemption
only in key special cases.
For example, preemption is supported by Nvidia's Pascal architecture~\cite{PascalWhitepaper}, but on a GTX Titan X (Pascal)
we still observe starvation: a global barrier
executes successfully with 56 workgroups, but deadlocks with 57
workgroups, indicating unfair scheduling.
\myparagraph{Our Proposal: Cooperative Kernels}
To summarise: blocking algorithms
demand fair scheduling,
but for good reasons
GPU vendors will not commit to the guarantees of the
occupancy-bound execution model.
We propose \emph{cooperative kernels}, an extension to the GPU
programming model that aims to resolve this impasse.
A kernel
that requires fair scheduling is identified as \emph{cooperative}, and written using two additional
language primitives, $\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$, placed by the programmer.
Where the cooperative kernel could proceed with fewer
workgroups, a workgroup can execute $\mathsf{offer\_kill}$, offering to
sacrifice itself to the scheduler. This indicates that the workgroup
would ideally continue executing, but that the
scheduler may preempt the workgroup; the cooperative kernel
must be prepared to deal with either scenario.
Where the cooperative kernel could use additional resources, a workgroup can execute
$\mathsf{request\_fork}$ to indicate that the
kernel is prepared to proceed with the existing set of workgroups, but
is able to benefit from one or more additional workgroups
commencing execution directly after the $\mathsf{request\_fork}$ program point.
The use of $\mathsf{request\_fork}$ and $\mathsf{offer\_kill}$ creates a contract between
the scheduler and the cooperative kernel. Functionally, the scheduler
must guarantee that the workgroups executing a cooperative kernel are
fairly scheduled, while the cooperative kernel must be robust to
workgroups leaving and joining the computation in response to
$\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$. Non-functionally, a cooperative kernel
must ensure that $\mathsf{offer\_kill}$ is executed frequently enough such that
the scheduler can accommodate soft-real time constraints,
e.g.\ allowing a smooth frame-rate for graphics.
In return, the scheduler should allow the cooperative kernel to
utilise hardware resources where possible, killing workgroups only
when demanded by other tasks, and forking additional workgroups when
possible.
\TSAdded{Cooperative kernels allow for \emph{cooperative multitasking}
(see Sec.~{\ref{sec:relatedwork}}), used
historically when preemption was not available or too costly. Our approach avoids the cost of arbitrary
preemption as the state of a workgroup killed via $\mathsf{offer\_kill}$ does
not have to be saved. Previous cooperative multitasking systems have
provided \emph{yield} semantics, where a processing unit would
temporarily give up its hardware resource. We deviate from this
design as, in the case of a global barrier, adopting yield would
force the cooperative kernel to block \emph{completely} when a single workgroup yields, stalling the kernel until the given
workgroup resumes. Instead, our $\mathsf{offer\_kill}$ allows a kernel to
make progress with a smaller number of workgroups, with workgroups
potentially joining again later via $\mathsf{request\_fork}$.}
Figure~{\ref{fig:overview}} illustrates sharing of GPU compute units between a cooperative kernel and a
graphics task. Workgroups 2 and 3 of the cooperative kernel
are killed at an $\mathsf{offer\_kill}$ to make room for a graphics
task. The workgroups are subsequently restored to the cooperative kernel when workgroup 0 calls $\mathsf{request\_fork}$. The \emph{gather time} is the time
between resources being requested and the application surrendering them via $\mathsf{offer\_kill}$. To satisfy soft-real time
constraints, this time should be low; our experimental study
(Sec.~{\ref{sec:responsiveness}}) shows that, in practice, the
gather-time for our applications is acceptable for a range of graphics
workloads.
The cooperative kernels model has several appealing
properties:
\begin{enumerate}[leftmargin=*]
\item By providing fair scheduling between workgroups, cooperative
kernels meet the needs of blocking algorithms, including irregular
data-parallel algorithms.
\item The model has no impact on the development of regular
(non-cooperative) compute and graphics kernels.
\item The model is backwards-compatible: $\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$ may be ignored, and a cooperative kernel will behave
exactly as a regular kernel does on current GPUs.
\item Cooperative kernels can be implemented over the occupancy\-/bound
execution model provided by current GPUs: our prototype implementation uses no special hardware/driver support.
\item If hardware support for preemption \emph{is} available, it can be leveraged to implement cooperative kernels efficiently, and cooperative kernels can avoid unnecessary preemptions by allowing the programmer to communicate ``smart'' preemption points.
\end{enumerate}
Placing the primitives manually is straightforward for the representative set of
GPU-accelerated irregular algorithms we have ported so far. Our experiments show that the model can enable efficient multitasking of cooperative and non-cooperative tasks.
In summary, our main contributions are:
\emph{cooperative kernels}, an extended GPU programming model that supports the scheduling requirements of blocking algorithms (Sec.~\ref{sec:cooperativekernels}); a \emph{prototype implementation} of cooperative
kernels on top of OpenCL 2.0
(Sec.~\ref{sec:implementation}); and \emph{experiments} assessing the overhead and responsiveness of the cooperative kernels approach over a set of irregular algorithms \cutthree{across three GPUs} (Sec.~\ref{sec:experiments}), including a best-effort comparison with the efficiency afforded by hardware-supported preemption available on Nvidia GPUs.
We begin by providing background on OpenCL via two motivating examples (Sec.~\ref{sec:background}). At the end we discuss related work (Sec.~\ref{sec:relatedwork}) and avenues for future work (Sec.~\ref{sec:conclusion}).
\section{Background and Examples}\label{sec:background}
We outline the OpenCL programming model on which we
base cooperative kernels (Sec.~\ref{sec:opencl}), and illustrate
OpenCL and the scheduling requirements of irregular algorithms using two examples: a work stealing queue and frontier-based graph traversal
(Sec.~\ref{sec:openclexamples}).
\subsection{OpenCL Background}\label{sec:opencl}
An OpenCL program is divided into \emph{host} and \emph{device}
components. A host application runs on the CPU and launches one or
more \emph{kernels} that run on accelerator devices---GPUs in the
context of this paper. A kernel is written in OpenCL C, based on C99.
All threads executing a kernel start at the same entry function with
identical arguments. A thread can call $\mathsf{get\_global\_id}$
to obtain a unique id, to access distinct data or follow different control flow paths.
The threads of a kernel are divided into \emph{workgroups}.
Functions
$\mathsf{get\_local\_id}$ and $\mathsf{get\_group\_id}$ return a thread's local id within
its workgroup and the workgroup id.
The number
of threads per workgroup and number of workgroups are obtained via
$\mathsf{get\_local\_size}$ and $\mathsf{get\_num\_groups}$.
Execution of the threads in a workgroup can be synchronised via a
workgroup barrier.
A \emph{global} barrier (synchronising all
threads of a kernel) is \emph{not} provided as a primitive.
\myparagraph{Memory Spaces and Memory Model} A kernel has access to
four memory spaces. \emph{Shared virtual memory} (SVM) is accessible
to all device threads and the host concurrently. \emph{Global} memory is
shared among all device threads. Each workgroup has a
portion of \emph{local} memory for fast intra-workgroup communication.
Every thread has a portion of very fast \emph{private} memory for
function-local variables.
Fine-grained
communication within a workgroup, as well as inter-workgroup
communication and communication with the host while the kernel is
running, is enabled by a set of atomic data types and operations. In
particular, fine-grained host/device communication is via atomic
operations on SVM.
\myparagraph{Execution Model}
OpenCL~\cite[p.\ 31]{opencl2Spec} and CUDA~\cite{cuda-75} specifically make no guarantees about fair scheduling between
workgroups executing the same kernel.
HSA provides limited, one-way guarantees,
stating~\cite[p. 46]{HSAprogramming11}: \emph{``Work-group A can wait
for values written by work-group B without deadlock provided ... (if) A
comes after B in work-group flattened ID order''}. This is not sufficient to support blocking algorithms that use
mutexes and inter-workgroup barriers, both of which require \emph{symmetric} communication between
threads.
\subsection{Motivating Examples}\label{sec:openclexamples}
\begin{figure}[t]
\begin{lstlisting}
(*@\label{line:wksteal:kernelfunc}@*)kernel work_stealing(global Task * queues) {
(*@\label{line:wksteal:getgroupid}@*) int queue_id = get_group_id();
(*@\label{line:wksteal:mainloop}@*) while (more_work(queues)) {
(*@\label{line:wksteal:poporsteal}@*) Task * t = pop_or_steal(queues, queue_id);
if (t)
(*@\label{line:wksteal:processtask}@*) process_task(t, queues, queue_id);
}
}
\end{lstlisting}
\caption{An excerpt of a work stealing algorithm in OpenCL}\label{fig:workstealing}
\end{figure}
\myparagraph{Work Stealing}
Work stealing enables dynamic balancing of tasks across
processing units. It is useful when the number of tasks to be
processed is dynamic, due to one task creating an arbitrary number of
new tasks. Work stealing has been explored in the context of
GPUs~\cite{DBLP:conf/egh/CedermanT08,TPO10}. Each workgroup has a
queue from which it obtains tasks to process, and to which
it stores new tasks. If its queue is empty, a workgroup
tries to \emph{steal} a task from another queue.
Figure~\ref{fig:workstealing} illustrates a work stealing
kernel. Each thread receives a pointer to the task queues, in global
memory, initialized by the host to contain initial tasks. A thread
uses its workgroup id (line~\ref{line:wksteal:getgroupid}) as a queue
id to access the relevant task queue. The $\mathsf{pop\_or\_steal}$
function (line~\ref{line:wksteal:poporsteal}) pops a task from the
workgroup's queue or tries to steal a task from other queues. Although
not depicted here, concurrent accesses to queues inside
$\mathsf{more\_work}$ and $\mathsf{pop\_or\_steal}$ are guarded by a
mutex per queue, implemented using atomic compare and swap operations
on global memory.
If a task is obtained, then the workgroup processes it
(line~\ref{line:wksteal:processtask}), which may lead to new tasks
being created and pushed to the workgroup's queue. The kernel presents
two opportunities for spin-waiting: spinning to obtain a mutex, and
spinning in the main kernel loop to obtain a task. \TSAdded{Without fair
scheduling, threads waiting for the mutex might spin indefinitely,
causing the application to hang.}
\begin{figure}[t]
\begin{lstlisting}
kernel graph_app(global graph * g,
global nodes * n0, global nodes * n1) {
int level = 0;
global nodes * in_nodes = n0;
global nodes * out_nodes = n1;
int tid = get_global_id();
int stride = get_global_size();
(*@\label{line:graph:iterate}@*) while(in_nodes.size > 0) {
for (int i = tid; i < in_nodes.size; i += stride)
process_node(g, in_nodes[i], out_nodes, level);
(*@\label{line:graph:swap}@*) swap(&in_nodes, &out_nodes);
(*@\label{line:graph:gb1}@*) global_barrier();
(*@\label{line:graph:reset}@*) reset(out_nodes);
level++;
(*@\label{line:graph:gb2}@*) global_barrier();
}
}
\end{lstlisting}
\caption{An OpenCL graph traversal algorithm}\label{fig:graphsearch}
\end{figure}
\myparagraph{Graph Traversal} Figure~\ref{fig:graphsearch} illustrates a frontier-based graph traversal algorithm; such algorithms have
been shown to execute efficiently on GPUs~\cite{BNP12,DBLP:conf/oopsla/PaiP16}.
The kernel is
given three arguments in global memory: a graph structure, and two
arrays of graph nodes. Initially, $\keyword{n0}$ contains the
starting nodes to process. Private variable $\keyword{level}$ records the current frontier level, and $\keyword{in\_nodes}$ and $\keyword{out\_nodes}$ point to
distinct arrays recording the nodes to be processed during the current and next frontier, respectively.
The application iterates as long as the current frontier contains
nodes to process (line~\ref{line:graph:iterate}). At each frontier,
the nodes to be processed are evenly distributed between
threads through \emph{stride} based processing.
In this case, the stride is the total number of threads, obtained via
$\mathsf{get\_global\_size}$.
A thread calls $\keyword{process\_node}$ to process a node given the current level, with nodes to be processed during the next frontier being pushed to $\keyword{out\_nodes}$. After processing the frontier, the threads swap their
node array pointers (line~\ref{line:graph:swap}).
At this point, the GPU threads must wait for all other threads to
finish processing the frontier. To achieve
this, we use a global barrier construct
(line~\ref{line:graph:gb1}). After all threads reach this point, the
output node array is reset (line~\ref{line:graph:reset}) and the level
is incremented. The threads use another global barrier to wait until the output node array is
reset (line~\ref{line:graph:gb2}), after which they continue to the next frontier.
The global barrier used in this application is not provided as a GPU
primitive, though previous works have shown that such a global barrier
can be implemented~\cite{XF10,DBLP:conf/oopsla/SorensenDBGR16}, based
on CPU barrier designs~\cite[ch. 17]{HS08}. These barriers employ
spinning to ensure each thread waits at the barrier until all threads have
arrived; thus, fair scheduling between workgroups is required for the
barrier to operate correctly. \TSAdded{Without fair scheduling, the
barrier threads may wait indefinitely at the barrier, causing the
application to hang.}
The mutexes and barriers used by these two examples appear to run
reliably on current GPUs for kernels that are executed with no more
workgroups than there are compute units. This is due to the fairness
of the occupancy-bound execution model that current GPUs have been
shown, experimentally, to provide. But, as discussed in
Sec.~\ref{sec:intro}, this model is not endorsed by language
standards or vendor implementations, and may not be respected
in the future.
In Sec.~\ref{sec:programmingguidelines} we show how the work stealing
and graph traversal examples of Figs.~\ref{fig:workstealing} and~\ref{fig:graphsearch} can be
updated to use our cooperative kernels programming model to resolve
the scheduling issue.
\section{Cooperative Kernels}\label{sec:cooperativekernels}
We present our cooperative kernels programming model as an extension
to OpenCL.
We describe the semantics of the model
(Sec.~\ref{sec:semantics}), presenting a more formal operational semantics in Appendix~\ref{appendix:semantics}
and discussing possible alternative semantic choices in Appendix~\ref{appendix:semanticalternatives}, use our motivating examples to discuss
programmability (Sec.~\ref{sec:programmingguidelines}), and outline
important non-functional properties that the model requires to work
successfully (Sec.~\ref{sec:nonfunctional}).
\subsection{Semantics of Cooperative Kernels}\label{sec:semantics}
As with a regular OpenCL kernel, a cooperative kernel is launched by
the host application, passing parameters to the kernel and specifying
a desired number of threads and workgroups. Unlike in a regular
kernel, the parameters to a cooperative kernel are immutable (though
pointer parameters can refer to mutable data).
Cooperative kernels are written using the following
extensions: $\mathsf{transmit}$, a qualifier on the variables of a
thread; $\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$, the key functions that enable
cooperative scheduling; and $\mathsf{global\_barrier}$ and $\mathsf{resizing\_global\_barrier}$
primitives for inter-workgroup synchronisation.
\myparagraph{Transmitted Variables}
A variable declared in the root scope of the cooperative kernel can
optionally be annotated with a new $\mathsf{transmit}$ qualifier. Annotating
a variable $v$ with $\mathsf{transmit}$ means that when a workgroup
uses $\mathsf{request\_fork}$ to spawn new
workgroups, the workgroup should transmit its
current value for $v$ to the threads of the new workgroups.
We detail the semantics for this when we
describe $\mathsf{request\_fork}$ below.
\myparagraph{Active Workgroups}
If the host application launches a cooperative kernel requesting $N$
workgroups, this indicates that the kernel should be executed with a
\emph{maximum} of $N$ workgroups, and that as many workgroups as possible, up
to this limit, are desired. However, the scheduler may initially
schedule fewer than $N$ workgroups, and as explained below the number
of workgroups that execute the cooperative kernel can change during
the lifetime of the kernel.
The number of \emph{active workgroups}---workgroups executing the
kernel---is denoted $M$. Active workgroups have consecutive ids in
the range $[0, M-1]$. Initially, at least one workgroup is active; if
necessary the scheduler must postpone the kernel until some compute
unit becomes available. For example, in Fig.~\ref{fig:overview}: at the
beginning of the execution $M = 4$; while the graphics task is
executing $M = 2$; after the fork $M = 4$ again.
When executed by a cooperative
kernel, $\mathsf{get\_num\_groups}$ returns $M$, the \emph{current} number of
active workgroups. This is in contrast to $\mathsf{get\_num\_groups}$ for
regular kernels, which returns the fixed number of workgroups that execute the kernel (see Sec.~\ref{sec:opencl}).
Fair scheduling \emph{is} guaranteed between active workgroups;
i.e.\ if some thread in an active workgroup is enabled, then
eventually this thread is guaranteed to execute an instruction.
\myparagraph{Semantics for $\mathsf{offer\_kill}$}
The $\mathsf{offer\_kill}$ primitive allows the cooperative kernel to return
compute units to the scheduler by offering to sacrifice workgroups.
The idea is as follows: allowing the scheduler to arbitrarily and abruptly terminate execution
of workgroups might be drastic, yet the kernel
may contain specific program points at which a workgroup could
\emph{gracefully} leave the computation.
Similar to the OpenCL workgroup $\mathsf{barrier}$ primitive,
$\mathsf{offer\_kill}$ is a workgroup-level function---it must be encountered
uniformly by all threads in a workgroup.
Suppose a workgroup with id $m$ executes $\mathsf{offer\_kill}$. If the
workgroup has the largest id among active workgroups then it
can be killed by the scheduler, except that workgroup 0 can never be
killed (to avoid early termination of the kernel). More formally, if $m < M-1$ or $M=1$ then $\mathsf{offer\_kill}$ is a
no-op. If instead $M > 1$ and $m = M-1$, the scheduler can choose to
ignore the offer, so that $\mathsf{offer\_kill}$ executes as a no-op, or accept
the offer, so that execution of the workgroup ceases and the number of
active workgroups $M$ is atomically decremented by one.
Figure~{\ref{fig:overview}} illustrates this, showing that
workgroup $3$ is killed before workgroup $2$.
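The scheduler's choice at an $\mathsf{offer\_kill}$ call can be sketched in pseudocode as follows; $\keyword{scheduler\_accepts}$ and $\keyword{terminate\_workgroup}$ are illustrative names abstracting the scheduler's nondeterministic decision and the graceful exit, and are not part of the model:
\begin{lstlisting}
// Sketch of offer_kill for calling workgroup m;
// M is the active workgroup count.
void offer_kill(int m) {
  if (m < M - 1 || M == 1)
    return;                  // not the largest id, or sole workgroup: no-op
  if (scheduler_accepts()) { // scheduler may also decline (no-op)
    atomic_fetch_sub(&M, 1); // active count becomes M - 1
    terminate_workgroup();   // execution of workgroup m ceases
  }
}
\end{lstlisting}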
\myparagraph{Semantics for $\mathsf{request\_fork}$}
Recall that a desired limit of $N$ workgroups was specified when the
cooperative kernel was launched, but that the number of active
workgroups, $M$, may be smaller than $N$, either because (due to
competing workloads) the scheduler did not provide $N$ workgroups
initially, or because the kernel has given up some workgroups via
$\mathsf{offer\_kill}$ calls. Through the $\mathsf{request\_fork}$ primitive (also a
workgroup-level function), the kernel and scheduler can collaborate to
allow new workgroups to join the computation at an appropriate point
and with appropriate state.
Suppose a workgroup with id $m < M$ executes $\mathsf{request\_fork}$. Then the
following occurs: an integer $k \in [0, N-M]$ is chosen by the
scheduler; $k$ new workgroups are spawned with consecutive ids in the
range $[M, M+k-1]$; the active workgroup count $M$ is atomically
incremented by $k$.
The $k$ new workgroups commence execution at the program point
immediately following the $\mathsf{request\_fork}$ call. The variables that
describe the state of a thread are all uninitialised for the threads
in the new workgroups; reading from these variables without first
initialising them is an undefined behaviour. There are two exceptions
to this: (1) because the parameters to a cooperative kernel are
immutable, the new threads have access to these parameters as part of
their local state and can safely read from them; (2) for each variable
$v$ annotated with $\mathsf{transmit}$, every new thread's copy of $v$ is
initialised to the value that thread 0 in workgroup $m$ held for $v$
at the point of the $\mathsf{request\_fork}$ call.
In effect, thread 0 of the forking workgroup \emph{transmits} the relevant
portion of its local state to the threads of the forked workgroups.
Figure~{\ref{fig:overview}} illustrates the behaviour of
$\mathsf{request\_fork}$. After the graphics task finishes executing, workgroup
$0$ calls $\mathsf{request\_fork}$, spawning the two new workgroups with ids $2$
and $3$. Workgroups $2$ and $3$ join the computation where workgroup
$0$ called $\mathsf{request\_fork}$.
Notice that $k=0$ is always a valid choice for the number of
workgroups to be spawned by $\mathsf{request\_fork}$, and is guaranteed if $M$ is
equal to the workgroup limit $N$.
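In pseudocode, the effect of a $\mathsf{request\_fork}$ issued by workgroup $m$ can be sketched as follows; $\keyword{scheduler\_choose\_k}$ and the other helpers are illustrative abstractions, not part of the model:
\begin{lstlisting}
// Sketch of request_fork for calling workgroup m
void request_fork(int m) {
  int k = scheduler_choose_k();  // k in [0, N-M]; k = 0 forced if M == N
  // New workgroups get ids [M, M+k-1] and start just after this call;
  // each transmit-annotated variable is copied from thread 0 of
  // workgroup m to every thread of the new workgroups.
  spawn_workgroups(M, M + k - 1, transmit_state_of(m));
  atomic_fetch_add(&M, k);
}
\end{lstlisting}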
\myparagraph{Global Barriers}
Because workgroups of a cooperative kernel are fairly scheduled, a
global barrier primitive can be provided. We specify two variants: $\mathsf{global\_barrier}$
and $\mathsf{resizing\_global\_barrier}$.
Our $\mathsf{global\_barrier}$ primitive is a kernel-level function: if it
appears in conditional code then it must be reached by \emph{all}
threads executing the cooperative kernel. On reaching a
$\mathsf{global\_barrier}$, a thread waits until all threads have arrived at
the barrier. Once all threads have arrived, the threads may proceed
past the barrier with the guarantee that all global memory accesses
issued before the barrier have completed. The $\mathsf{global\_barrier}$
primitive can be implemented by adapting an inter-workgroup barrier
design, e.g.~\cite{XF10}, to take account of a growing and shrinking number of workgroups, and the atomic operations provided by
the OpenCL 2.0 memory model enable a memory-safe
implementation~\cite{DBLP:conf/oopsla/SorensenDBGR16}.
The $\mathsf{resizing\_global\_barrier}$ primitive is also a kernel-level
function. It is identical to $\mathsf{global\_barrier}$, except that it caters
for cooperation with the scheduler: by issuing a
$\mathsf{resizing\_global\_barrier}$ the programmer indicates that the cooperative
kernel is prepared to proceed after the barrier with more or fewer workgroups.
When all threads have reached $\mathsf{resizing\_global\_barrier}$,
the number of active workgroups, $M$, is atomically set to a new value, $M'$ say, with $0 < M' \leq N$.
If $M' = M$ then the active workgroups remain unchanged. If $M' < M$, workgroups $[M', M-1]$ are
killed. If $M' > M$ then $M'-M$ new workgroups join the computation after the barrier,
as if they were forked from workgroup 0. In particular, the
$\mathsf{transmit}$-annotated local state of thread 0 in workgroup 0 is
transmitted to the threads of the new workgroups.
The semantics of $\mathsf{resizing\_global\_barrier}$ can be modelled via calling $\mathsf{request\_fork}$ and $\mathsf{offer\_kill}$,
surrounded and separated by calls to a $\mathsf{global\_barrier}$.
The enclosing $\mathsf{global\_barrier}$ calls ensure that the change in number
of active workgroups from $M$ to $M'$ occurs entirely within the
resizing barrier, so that $M$ changes atomically from a programmer's perspective. The middle $\mathsf{global\_barrier}$ ensures that forking occurs
before killing, so that workgroups $[0, \textrm{min}(M, M') - 1]$ are
left intact.
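This modelling can be sketched directly in terms of the primitives, with $M'$ denoting the active workgroup count after the barrier:
\begin{lstlisting}
// resizing_global_barrier expressed via the other primitives
void resizing_global_barrier() {
  global_barrier();  // all workgroups quiesce
  request_fork();    // M may grow towards N
  global_barrier();  // forking completes before any killing
  offer_kill();      // highest-id workgroups may be killed,
                     // depending on arrival order
  global_barrier();  // survivors proceed with the new count M'
}
\end{lstlisting}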
Because $\mathsf{resizing\_global\_barrier}$ can be implemented as above, we do
not regard it \emph{conceptually} as a primitive of our model.
However, in Sec.~\ref{sec:resizingbarrier} we show how a resizing
barrier can be implemented more efficiently through direct interaction
with the scheduler.
\subsection{Programming with Cooperative Kernels}\label{sec:programmingguidelines}
\myparagraph{A Changing Workgroup Count} Unlike in regular OpenCL, the
value returned by $\mathsf{get\_num\_groups}$ is not fixed during the lifetime of
a cooperative kernel: it corresponds to the active group count $M$,
which changes as workgroups execute $\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$.
The value returned by $\mathsf{get\_global\_size}$ is similarly subject to change.
A cooperative kernel must thus be written in a manner that is robust
to changes in the values returned by these functions.
In general, their volatility means that use of these functions should
be avoided. However, the situation is more stable if a cooperative
kernel does not call $\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$ directly, so that
only $\mathsf{resizing\_global\_barrier}$ can affect the number of active
workgroups. Then, at any point during execution, the threads of a
kernel are executing between some pair of resizing barrier calls,
which we call a \emph{resizing barrier interval} (considering the
kernel entry and exit points conceptually to be special cases of
resizing barriers). The active workgroup count is constant within
each resizing barrier interval, so that $\mathsf{get\_num\_groups}$ and
$\mathsf{get\_global\_size}$ return stable values during such intervals.
As we illustrate below for graph traversal, this can be exploited by algorithms that perform strided
data processing.
\myparagraph{Adapting Work Stealing}
In this example there is no state to transmit since a computation is
entirely parameterised by a task, which is retrieved from a queue
located in global memory. With respect to Fig.~\ref{fig:workstealing},
we add $\mathsf{request\_fork}$ and $\mathsf{offer\_kill}$ calls at the start of the main loop
(below line~\ref{line:wksteal:mainloop}) to let a workgroup offer itself
to be killed or forked, respectively, before it processes a task. Note
that a workgroup may be killed even if its associated task queue is not
empty, since remaining tasks will be stolen by other workgroups. In
addition, since $\mathsf{request\_fork}$ may be the entry point of a workgroup, the
queue id must now be computed after it, so we move
line~\ref{line:wksteal:getgroupid} to just before
line~\ref{line:wksteal:poporsteal}. In particular, the queue id cannot
be transmitted, since we want a newly spawned workgroup to read its own
queue rather than that of the forking workgroup.
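The resulting main loop can be sketched as follows; apart from the new primitives, the names are placeholders for the corresponding code of the work stealing kernel:
\begin{lstlisting}
while (!done) {        // main loop of the work stealing kernel
  offer_kill();        // a workgroup may gracefully leave here...
  request_fork();      // ...and new workgroups join here
  int queue_id = get_group_id(); // computed after request_fork, so a
                                 // forked workgroup reads its own queue
  task t = pop_or_steal(queue_id);
  if (valid(t)) process_task(t); // may push new tasks to the queue
}
\end{lstlisting}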
\myparagraph{Adapting Graph Traversal}
Figure~\ref{fig:cgraphsearch} shows a cooperative version of the
graph traversal kernel of Fig.~\ref{fig:graphsearch} from
Sec.~\ref{sec:openclexamples}. On lines~\ref{line:cgraph:resizing1}
and~\ref{line:cgraph:resizing2}, we change the original global
barriers into resizing barriers. Several variables are marked to be
transmitted in the case of workgroups joining at the resizing barriers
(lines~\ref{line:cgraph:transmit1}, \ref{line:cgraph:transmit2} and
\ref{line:cgraph:transmit3}): $\keyword{level}$ must be restored so
that new workgroups know which frontier they are processing;
$\keyword{in\_nodes}$ and $\keyword{out\_nodes}$ must be restored so
that new workgroups know which of the node arrays to use for input and
output. Lastly, the static work distribution of the original kernel is
no longer valid in a cooperative kernel. This is because the stride
(which is based on $M$) may change after each resizing barrier
call. To fix this, we re-distribute the work after each resizing
barrier call by recomputing the thread id and stride
(lines~\ref{line:cgraph:rechunking1} and
\ref{line:cgraph:rechunking2}). This example exploits the fact that
the cooperative kernel issues neither $\mathsf{offer\_kill}$ nor $\mathsf{request\_fork}$
directly: the value of $\keyword{stride}$ obtained from
$\mathsf{get\_global\_size}$ at line~\ref{line:cgraph:rechunking2} is stable
until the next resizing barrier at line~\ref{line:cgraph:resizing1}.
\begin{figure}
\begin{lstlisting}
kernel graph_app(global graph *g,
global nodes *n0, global nodes *n1) {
(*@\label{line:cgraph:transmit1}@*) transmit int level = 0;
(*@\label{line:cgraph:transmit2}@*) transmit global nodes *in_nodes = n0;
(*@\label{line:cgraph:transmit3}@*) transmit global nodes *out_nodes = n1;
while(in_nodes.size > 0) {
(*@\label{line:cgraph:rechunking1}@*) int tid = get_global_id();
(*@\label{line:cgraph:rechunking2}@*) int stride = get_global_size();
for (int i = tid; i < in_nodes.size; i += stride)
process_node(g, in_nodes[i], out_nodes, level);
swap(&in_nodes, &out_nodes);
(*@\label{line:cgraph:resizing1}@*) resizing_global_barrier();
reset(out_nodes);
level++;
(*@\label{line:cgraph:resizing2}@*) resizing_global_barrier();
}
}
\end{lstlisting}
\caption{Cooperative version of the graph traversal kernel of Fig.~\ref{fig:graphsearch}, using a resizing barrier and $\mathsf{transmit}$ annotations}\label{fig:cgraphsearch}
\end{figure}
\myparagraph{Patterns for Irregular Algorithms}
In Sec.~\ref{sec:portingalgorithms} we describe the set of irregular GPU algorithms used
in our experiments, which largely captures the irregular blocking
algorithms that are available as open source GPU kernels. These all
employ either work stealing or operate on graph data structures, and placing our new constructs follows a common, easy-to-follow pattern in each case.
The work stealing algorithms have a transactional flavour
and require little or no state to be carried between transactions. The point at which a workgroup is ready to process a new task is a natural place for $\mathsf{offer\_kill}$ and $\mathsf{request\_fork}$, and few or no $\mathsf{transmit}$ annotations are required.
Figure~\ref{fig:cgraphsearch} is representative of
most level-by-level graph algorithms.
It is typically the case that on completing a level of
the graph algorithm, the next level could be processed by more or
fewer workgroups, which $\mathsf{resizing\_global\_barrier}$
facilitates. Some level-specific state must be transmitted to new workgroups.
\subsection{Non-Functional Requirements}\label{sec:nonfunctional}
The semantics presented in Sec.~\ref{sec:semantics} describe the
behaviours that a developer of a cooperative kernel should be prepared
for.
However, the aim of cooperative kernels is to find a balance that
allows \emph{efficient} execution of algorithms that require fair scheduling, and
\emph{responsive} multitasking, so that the GPU can be shared between
cooperative kernels and other shorter tasks with soft real-time constraints.
To achieve this balance, an implementation of the cooperative
kernels model, and the programmer of a cooperative kernel, must strive
to meet the following non-functional requirements.
The purpose of $\mathsf{offer\_kill}$ is to let the scheduler destroy a workgroup
in order to schedule higher-priority tasks. The scheduler relies on the
cooperative kernel to execute $\mathsf{offer\_kill}$ sufficiently frequently that
soft real-time constraints of other workloads can be met.
Using our work stealing example: a workgroup offers itself to
the scheduler after processing each task. If tasks are sufficiently
fast to process then the scheduler will have ample opportunities to
de-schedule workgroups. But if tasks are very time-consuming to
process then it might be necessary to rewrite the algorithm so that
tasks are shorter and more numerous, to achieve a higher rate of calls
to $\mathsf{offer\_kill}$.
Getting this non-functional requirement right is GPU- and
application-dependent. In Sec.~\ref{sec:sizingnoncoop} we conduct
experiments to understand the response rate that would be required to
co-schedule graphics rendering with a cooperative kernel, maintaining
a smooth frame rate.
Recall that, on launch, the cooperative kernel requests $N$ workgroups.
The scheduler should thus aim to provide $N$ workgroups if other constraints allow it,
by accepting an $\mathsf{offer\_kill}$ only if a compute unit is required for another
task, and responding positively to $\mathsf{request\_fork}$ calls if compute units are available.
\section{Prototype Implementation}\label{sec:implementation}
Our vision is that cooperative kernel support will be integrated
in the runtimes of future GPU implementations of OpenCL, with driver
support for our new primitives. To experiment with our ideas on
current GPUs, we have developed a prototype that mocks up the required
runtime support via a \emph{megakernel}, and exploits the
occupancy-bound execution model that these GPUs provide to ensure fair
scheduling between workgroups. We emphasise that an aim of
cooperative kernels is to \emph{avoid} depending on the
occupancy-bound model. Our prototype exploits this model simply to
allow us to experiment with current GPUs whose proprietary drivers we
cannot change. We describe the megakernel approach
(Sec.~\ref{sec:megakernel}) and detail various aspects of the
scheduler component of our implementation
(Sec.~\ref{sec:schedulerimpl}).
\subsection{The Megakernel Mock Up}\label{sec:megakernel}
Instead of multitasking multiple separate kernels, we merge a set of
kernels into a megakernel---a single, monolithic kernel. The
megakernel is launched with as many workgroups as can be occupant
concurrently. One workgroup takes the role of the
scheduler,\footnote{\TSAdded{We note that the scheduler requirements
given in Sec.~{\ref{sec:cooperativekernels}} are agnostic to
whether the scheduling logic takes place on the CPU or GPU. To
avoid expensive communication between GPU and host, we choose to
implement the scheduler on the GPU.} } and the scheduling logic
is embedded as part of the megakernel. The remaining workgroups act
as a pool of workers. A worker repeatedly queries the scheduler to be
assigned a task. A task corresponds to executing a cooperative or
non-cooperative kernel. In the non-cooperative case, the workgroup
executes the relevant kernel function uninterrupted, then awaits
further work. In the cooperative case, the workgroup either starts
from the kernel entry point or immediately jumps to a designated point
within the kernel, depending on whether the workgroup is an initial
workgroup of the kernel, or a forked workgroup. In the latter case,
the new workgroup also receives a struct containing the values of all
relevant $\mathsf{transmit}$-annotated variables.
\myparagraph{Simplifying Assumptions}
For ease of implementation, our prototype supports multitasking a
single cooperative kernel with a single non-cooperative kernel (though
the non-cooperative kernel can be invoked many times).
We require that $\mathsf{offer\_kill}$, $\mathsf{request\_fork}$ and
$\mathsf{resizing\_global\_barrier}$ are called from the entry function of a
cooperative kernel. This allows us to use $\keyword{goto}$ and
$\keyword{return}$ to direct threads into and out of the kernel. With
these restrictions we can experiment with interesting irregular
algorithms (see Sec.~\ref{sec:experiments}). A non-mock
implementation of cooperative kernels would not use the megakernel
approach, so we did not deem the engineering effort associated with
lifting these restrictions in our prototype to be worthwhile.
\subsection{Scheduler Design}\label{sec:resizingbarrier}\label{sec:schedulerimpl}
To enable multitasking through cooperative kernels, the runtime (in
our case, the megakernel) must track the state of workgroups,
i.e.\ whether a workgroup is waiting or computing a kernel; maintain
consistent context states for each kernel, e.g.\ tracking the number
of active workgroups; and provide a safe way for these states to be
modified in response to $\mathsf{request\_fork}$/$\mathsf{offer\_kill}$. We discuss these
issues, and describe the implementation of an efficient resizing
barrier. We describe how the scheduler would handle arbitrary
combinations of kernels, though as noted above our current
implementation is restricted to the case of two kernels.
\myparagraph{Scheduler Contexts}
To dynamically manage workgroups executing cooperative kernels, our
framework must track the state of each workgroup and provide a channel
of communication from the scheduler workgroup to workgroups executing
$\mathsf{request\_fork}$ and $\mathsf{offer\_kill}$. To achieve this, we use a
\emph{scheduler context} structure, mapping each primitive workgroup id to
the workgroup's status, which is either \emph{available} or the id of
the kernel that the workgroup is currently executing. The scheduler
can then send cooperative kernels a \emph{resource message},
commanding workgroups to exit at $\mathsf{offer\_kill}$, or spawn additional
workgroups at $\mathsf{request\_fork}$. Thus, the scheduler context needs a
communication channel for each cooperative kernel. We implement the
communication channels using atomic variables in global memory.
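A minimal sketch of such a context is as follows; the field names are illustrative, not those of our implementation:
\begin{lstlisting}
// Illustrative scheduler context: one status slot per workgroup and
// one resource channel per cooperative kernel, all in global memory.
typedef struct {
  atomic_int status[MAX_GROUPS];   // AVAILABLE, or id of executing kernel
  atomic_int channel[MAX_KERNELS]; // resource message: e.g. number of
                                   // workgroups to give up or to fork
} scheduler_context;
\end{lstlisting}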
\myparagraph{Launching Kernels and Managing Workgroups}
To launch a kernel, the host sends a data packet to the GPU scheduler
consisting of a kernel to execute, kernel inputs, and a flag
indicating whether the kernel is cooperative. In our prototype,
this host-device communication channel is built using fine-grained SVM
atomics.
On receiving a data packet describing a kernel launch $K$, the GPU
scheduler must decide how to schedule $K$. Suppose $K$ requests $N$
workgroups. The scheduler queries the scheduler context. If there are
at least $N$ available workgroups, $K$ can be scheduled
immediately. Suppose instead that there are only $N_a < N$ available
workgroups, but a cooperative kernel $K_c$ is executing. The scheduler
can use $K_c$'s channel in the scheduler context to command $K_c$ to
provide $N - N_a$ workgroups via $\mathsf{offer\_kill}$. Once $N$ workgroups
are available,
the scheduler then sends $N$ workgroups from the available workgroups
to execute kernel $K$.
If the new kernel $K$ is itself a cooperative kernel, the scheduler
would be free to provide $K$ with fewer than $N$ active workgroups
initially.
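The scheduling decision for a launch request can be sketched as follows, with illustrative helper names:
\begin{lstlisting}
// Sketch: GPU scheduler handling a launch of K requesting N workgroups
int avail = count_available(ctx);
if (avail < N && running_cooperative(ctx, K_c))
  send_resource_message(ctx, K_c, N - avail); // K_c sheds workgroups
                                              // via offer_kill
wait_until_available(ctx, N); // satisfied once enough offers accepted
dispatch(ctx, K, N);          // mark N workgroups as executing K
// (if K is itself cooperative, dispatching fewer than N
//  workgroups would also be acceptable)
\end{lstlisting}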
If a cooperative kernel $K_c$ is executing with fewer workgroups than
it initially requested, the scheduler may decide to make extra workgroups
available to $K_c$, to be obtained next time $K_c$ calls $\mathsf{request\_fork}$.
To do this, the scheduler asynchronously signals $K_c$ through $K_c$'s
channel to indicate the number of workgroups that should join at the
next $\mathsf{request\_fork}$ command. When a workgroup $w$ of $K_c$ subsequently
executes $\mathsf{request\_fork}$, thread 0 of $w$ updates the kernel and
scheduler contexts so that the given number of new workgroups are
directed to the program point after the $\mathsf{request\_fork}$ call. This
involves selecting workgroups whose status is \emph{available}, as
well as copying the values of $\mathsf{transmit}$-annotated variables to the
new workgroups.
\myparagraph{An Efficient Resizing Barrier}
In Sec.~\ref{sec:semantics}, we defined the semantics of a resizing
barrier in terms of calls to other primitives. It is possible,
however, to implement the resizing barrier with only one call to a
global barrier with $\mathsf{request\_fork}$ and $\mathsf{offer\_kill}$ inside.
We consider barriers that use the master/slave model~\cite{XF10}: one
workgroup (master) collects signals from the other workgroups (slaves)
indicating that they have arrived at the barrier and are waiting for a
reply indicating that they may leave the barrier. Once the master has
received a signal from all slaves, it replies with a signal saying that
they may leave.
Incorporating $\mathsf{request\_fork}$ and $\mathsf{offer\_kill}$ into such a barrier is
straightforward. Upon entering the barrier, the slaves first execute
$\mathsf{offer\_kill}$, possibly exiting. The master then waits for $M$ slaves
(the number of active workgroups), which may decrease due to
$\mathsf{offer\_kill}$ calls by the slaves, but will not increase. Once the
master observes that $M$ slaves have arrived, it knows that all other
workgroups are waiting to be released. The master executes
$\mathsf{request\_fork}$, and the statement immediately following this
$\mathsf{request\_fork}$ is a conditional that forces newly spawned workgroups to
join the slaves in waiting to be released. Finally, the master
releases all the slaves: the original slaves and the new slaves that
joined at $\mathsf{request\_fork}$.
This barrier implementation is sub-optimal because workgroups only
execute $\mathsf{offer\_kill}$ once per barrier call and, depending on order of
arrival, it is possible that only one workgroup is killed per barrier
call, preventing the scheduler from gathering workgroups quickly.
We can reduce the gather time by providing a new
$\keyword{query}$ function for cooperative kernels, which returns the
number of workgroups that the scheduler needs to obtain from the
cooperative kernel.
A resizing barrier can now be implemented as follows: (1) the master
waits for all slaves to arrive; (2) the master calls $\mathsf{request\_fork}$ and
commands the new workgroups to be slaves; (3) the master calls
$\keyword{query}$, obtaining a value $W$; (4) the master releases the
slaves, broadcasting the value $W$ to them; (5) the $W$ workgroups with the
largest ids (i.e.\ ids of at least $M-W$) spin, calling $\mathsf{offer\_kill}$ repeatedly until the
scheduler claims them---we know from $\keyword{query}$ that the
scheduler will eventually do so.
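From the master's perspective, the query-based barrier can be sketched as follows (helper names are illustrative):
\begin{lstlisting}
// Master side of the resizing barrier with query (sketch)
wait_for_slaves(M - 1);   // (1) all active slaves have arrived
request_fork();           // (2) newly spawned workgroups become slaves
int W = query();          // (3) workgroups the scheduler wants back
release_slaves(W);        // (4) broadcast W with the release signal
// (5) the W slaves with the largest ids then spin, calling
//     offer_kill until the scheduler claims them
\end{lstlisting}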
We show in
Sec.~\ref{sec:responsiveness} that the barrier using $\keyword{query}$ greatly
reduces the gather time in practice.
\section{Applications and Experiments}\label{sec:experiments}
We discuss our experience porting irregular algorithms to cooperative
kernels and describe the GPUs on which we evaluate these applications
(Sec.~\ref{sec:portingalgorithms}). For these GPUs, we report on
experiments to determine non-cooperative workloads that model the
requirements of various graphics rendering tasks
(Sec.~\ref{sec:sizingnoncoop}). We then examine the overhead
associated with moving to cooperative kernels when multitasking is
\emph{not} required (Sec.~\ref{sec:overhead}), as well as the
responsiveness and throughput observed when a cooperative kernel is
multi-tasked with non-cooperative workloads
(Sec.~\ref{sec:responsiveness}). Finally, we compare
against a performance model of \emph{kernel-level} preemption, which
we understand to be what current Nvidia GPUs provide (Sec.~\ref{sec:nvidiacomparison}).
\subsection{Applications and GPUs}\label{sec:portingalgorithms}
\begin{table}[t]
\normalsize
\caption{Blocking GPU applications investigated}
\centering
\begin{tabular}{ l r r r r r r}
App. & barriers & kill & fork & transmit & LoC & inputs\\
\hline
\rowcolor{Gray1}
color & 2 / 2 & 0 & 0 & 4 & 55 & 2\\
\rowcolor{Gray1}
mis & 3 / 3 & 0 & 0 & 0 & 71 & 2\\
\rowcolor{Gray1}
p-sssp & 3 / 3 & 0 & 0 & 0 & 42 & 1\\
\rowcolor{Gray2}
bfs & 2 / 2 & 0 & 0 & 4 & 185 & 2\\
\rowcolor{Gray2}
l-sssp & 2 / 2 & 0 & 0 & 4 & 196 & 2\\
\rowcolor{Gray3}
octree & 0 / 0 & 1 & 1 & 0 & 213 & 1 \\
\rowcolor{Gray3}
game & 0 / 0 & 1 & 1 & 0 & 308 & 1 \\
\end{tabular} \\
\crule[Gray1]{.2cm}{.2cm} Pannotia \hspace{.4cm} \crule[Gray2]{.2cm}{.2cm} Lonestar GPU \hspace{.4cm} \crule[Gray3]{.2cm}{.2cm} work stealing
\label{tab:applications}
\end{table}
Table~\ref{tab:applications} gives an overview of the 7 irregular
algorithms that we ported to cooperative kernels. Among them, 5 are
graph algorithms, based on the Pannotia~\cite{Pannotia} and
Lonestar~\cite{BNP12} GPU application suites, using global barriers.
We indicate how many of the original number of barriers are changed to
resizing barriers (all of them), and how many variables need to be
transmitted. The remaining two algorithms are work stealing
applications: each required the addition of $\mathsf{request\_fork}$ and
$\mathsf{offer\_kill}$ at the start of the main loop, and no variables needed to
be transmitted (similar to the example discussed in Sec.~\ref{sec:programmingguidelines}).
Most graph applications come with 2 different data sets as input,
leading to 11 application/input pairs in total.
\TSAdded{Our prototype implementation (Sec.~\ref{sec:implementation}) requires
two optional features of OpenCL 2.0: SVM fine-grained buffers and SVM
atomics. Out of the GPUs available to us, from ARM, AMD, Nvidia\xspace, and
Intel, only Intel GPUs provided robust support of these features.}
\TSAdded{We thus ran our experiments on three Intel GPUs: HD 520, HD 5500
and Iris 6100. The results were similar across the GPUs, so for
conciseness, we report only on the Iris 6100 GPU (driver
20.19.15.4463) with a host CPU i3-5157U. The Iris has a reported 47
compute units. Results for the other Intel GPUs are presented in Appendix~\ref{appendix:extragraphs}.}
\subsection{Sizing Non-cooperative Kernels}\label{sec:sizingnoncoop}
Enabling rendering of smooth graphics in parallel with irregular
algorithms is an important use case for our approach. Because our
prototype implementation is based on a megakernel that takes over the
entire GPU (see Sec.~\ref{sec:implementation}), we cannot assess this
directly.
We devised the following method to determine OpenCL workloads that simulate
the computational intensity of various graphics rendering workloads.
We designed a synthetic kernel that occupies all workgroups of a GPU
for a parameterised time period $t$, invoked in an infinite loop by a
host application. We then searched for a maximum value for $t$ that
allowed the synthetic kernel to execute without having an observable
impact on graphics rendering. Using the computed value, we ran the
host application for $X$ seconds, measuring the time $Y < X$ dedicated
to GPU execution during this period and the number of kernel launches
$n$ that were issued. We used $X \geq 10$ in all experiments. The
values $(X-Y)/n$ and $X/n$ estimate the average time spent using the
GPU to render the display between kernel calls (call this $E$) and the
period at which the OS requires the GPU for display rendering (call
this $P$), respectively.
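The estimates $E=(X-Y)/n$ and $P=X/n$ amount to simple arithmetic over the measured quantities; the following sketch illustrates the computation with hypothetical measurement values (not taken from our runs), chosen to be consistent with the heavy-workload figures.

```python
# Estimate the average rendering slice E and rendering period P from a
# measurement run, following the method described above:
#   X : total wall-clock duration of the host application (seconds)
#   Y : time within X spent executing the synthetic kernel (seconds)
#   n : number of kernel launches issued during X
def estimate_rendering_demand(X, Y, n):
    assert X >= 10 and 0 < Y < X and n > 0  # X >= 10 s in all experiments
    E = (X - Y) / n  # average time spent rendering between kernel calls
    P = X / n        # period at which the OS claims the GPU for rendering
    return E, P

# Hypothetical run consistent with the heavy workload (P = 40 ms, E = 10 ms):
# 250 launches over 10 s, of which 7.5 s were spent in the synthetic kernel.
E, P = estimate_rendering_demand(X=10.0, Y=7.5, n=250)
print(E, P)  # -> 0.01 0.04
```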
We used this approach to measure the GPU availability required for three
types of rendering: \emph{light}, whereby desktop icons are smoothly
emphasised under the mouse pointer; \emph{medium}, whereby window
dragging over the desktop is smoothly animated; and \emph{heavy}, which
requires smooth animation of a WebGL shader in a browser. For
\emph{heavy} we used WebGL demos from the Chrome
experiments~\cite{chrome-experiments}.
Our results are the following: $P=70\mathit{ms}$ and $E=3\mathit{ms}$ for light;
$P=40\mathit{ms}$, $E=3\mathit{ms}$ for medium; and $P=40\mathit{ms}$, $E=10\mathit{ms}$ for heavy. For
medium and heavy, the $40\mathit{ms}$ period coincides with the human persistence
of vision. The $3\mathit{ms}$ execution duration of both light and medium
configurations indicates that GPU computation is cheaper for basic
display rendering compared with more complex rendering.
\subsection{The Overhead of Cooperative Kernels}\label{sec:overhead}
\myparagraph{Experimental Setup}
Invoking the cooperative scheduling primitives incurs some overhead
even if no killing, forking or resizing actually occurs, because the cooperative kernel still needs to interact with the scheduler to determine this.
We assess this overhead by measuring the
slowdown in execution time between the original and cooperative versions of a kernel, forcing the scheduler to never modify the number of
active workgroups in the cooperative case.
Recall that our megakernel-based implementation merges the code of a
cooperative and a non-cooperative kernel.
This can reduce the occupancy for the merged kernel, e.g.\ due to
higher register pressure. This is an artifact of our prototype
implementation, and would not be a problem if our approach were
implemented inside the GPU driver. We thus launch both the original
and cooperative versions of a kernel with the reduced occupancy bound
in order to meaningfully compare execution times.
\begin{figure*}
\includegraphics[width=.67\columnwidth]{iris_octree_NA.pdf}
\includegraphics[width=.67\columnwidth]{iris_bfs_usa.pdf}
\includegraphics[width=.67\columnwidth]{iris_color_G3_circuit.pdf}
\caption{Example gather time and non-cooperative timing results}\label{fig:fine-grained-timing}
\end{figure*}
\begin{table}
\normalsize
\caption{Cooperative kernel slowdown w/o multitasking}
\centering
\begin{tabular}{ l l | l l | l l }
\multicolumn{2}{c|}{overall} & \multicolumn{2}{c|}{barrier} & \multicolumn{2}{c}{wk.steal.} \\
mean & max & mean & max & mean & max \\
\hline
$1.07$ & $1.23^{\ddagger}$ & $1.06$ & $1.20^{\diamond}$ & $1.12$ & $1.23^{\ddagger}$ \\
\end{tabular}\\
{\small
$^{\ddagger}$octree, $^{\diamond}$color G3\_circuit
}
\label{tab:overhead}
\end{table}
\myparagraph{Results}
Tab.~\ref{tab:overhead} shows the geometric mean and
maximum slowdown across all applications and inputs, with averages and
maxima computed over 10 runs per benchmark. For the maximum slowdowns,
we indicate which application and input was responsible. The slowdown is
below 1.25 even in the worst case, and closer to 1 on average. We consider
these results encouraging, especially since the performance of our
prototype could clearly be improved upon in a native implementation.
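For concreteness, the mean slowdowns in Tab.~\ref{tab:overhead} are geometric means over per-benchmark slowdown ratios; a minimal sketch of the computation (with illustrative numbers, not our measured data):

```python
import math

def geometric_mean(xs):
    # Geometric mean of positive slowdown ratios: exp of the mean log.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Illustrative per-benchmark slowdowns (cooperative / original time).
slowdowns = [1.05, 1.10, 1.20]
g = geometric_mean(slowdowns)
print(round(g, 3))  # -> 1.115
```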
\subsection{Multitasking via Cooperative Scheduling}\label{sec:responsiveness}
We now assess the responsiveness of multitasking between a
long-running cooperative kernel and a series of short, non-cooperative
kernel launches, and the performance impact of multitasking on the
cooperative kernel.
\myparagraph{Experimental Setup} For a given cooperative kernel and
its input, we launch the kernel and then repeatedly schedule a
non-cooperative kernel that aims to simulate the intensity of one of
the three classes of graphics rendering workload discussed in
Sec.~\ref{sec:sizingnoncoop}. In practice, we use matrix
multiplication as the non-cooperative workload, with matrix
dimensions tailored to reach the appropriate execution duration. We
conduct separate runs where we vary the number of workgroups requested
by the non-cooperative kernel, considering the cases where one, a
quarter, a half, and all-but-one, of the total number of workgroups
are requested. For the graph algorithms we try both
regular and query barrier implementations.
Our experiments span 11 pairs of cooperative kernels and inputs, 3
classes of non-cooperative kernel workloads, 4 quantities of
workgroups claimed for the non-cooperative kernel and 2 variations of
resizing barriers for graph algorithms, leading to 240 configurations.
We run each configuration 10 times, in order to report averaged
performance numbers. For each run, we record the execution time of the
cooperative kernel. For each scheduling of the non-cooperative kernel
during the run, we also record the \emph{gather time} needed by the
scheduler to collect workgroups to launch the non-cooperative kernel,
and the non-cooperative kernel execution time.
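The configuration count above can be tallied directly from the per-application input counts in Table~\ref{tab:applications}; a short sketch of the tally:

```python
# Tally of experimental configurations: the 5 graph applications run with
# both resizing-barrier variants, the 2 work stealing applications do not.
graph_pairs = 2 + 2 + 1 + 2 + 2   # color, mis, p-sssp, bfs, l-sssp inputs
ws_pairs = 1 + 1                  # octree, game (one input each)
workloads = 3                     # light, medium, heavy
workgroup_quantities = 4          # one, quarter, half, all-but-one
barrier_variants = 2              # regular and query resizing barriers

total = (graph_pairs * workloads * workgroup_quantities * barrier_variants
         + ws_pairs * workloads * workgroup_quantities)
print(total)  # -> 240
```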
\myparagraph{Responsiveness}
Figure~\ref{fig:fine-grained-timing} reports, on three
configurations, the average gather and execution times for the
non-cooperative kernel with respect to the quantity of workgroups allocated to
it. A logarithmic scale is used for time since gather times tend to
be much smaller than execution times. The horizontal grey lines
indicate the desired period for non-cooperative kernels. These
graphs show a representative sample of our results; the full set of
graphs for all configurations is provided in Appendix~\ref{appendix:extragraphs}.
The left-most graph illustrates a work
stealing example. When the non-cooperative kernel is given only one
workgroup, its execution is so long that it cannot complete within the
period required for a screen refresh. The gather time is very good
though, since the scheduler needs to collect only one workgroup. The
more workgroups are allocated to the non-cooperative kernel, the
faster it can compute: here the non-cooperative kernel becomes fast
enough with a quarter (resp.\ half) of the available workgroups for a light
(resp.\ heavy) graphics workload. Conversely, the gather time increases
since the scheduler must collect more and more workgroups.
The middle and right graphs show results for graph algorithms. These
algorithms use barriers, and we experimented with the regular and
query barrier implementations described in
Sec.~\ref{sec:resizingbarrier}. The execution times for the
non-cooperative task are averaged across all runs, including with both
types of barrier. We show separately the average gather time
associated with each type of barrier. The graphs show a similar trend
to the left-most graph: as the number of non-cooperative workgroups
grows, the execution time decreases and the gather time
increases.
The gather time is higher in the rightmost figure as the G3 circuit input
graph is wide rather than deep, so the graph algorithm reaches
resizing barriers less often than for the USA road input of the middle
figure, for instance. The scheduler thus has fewer opportunities to
collect workgroups and gather time increases. Nonetheless, scheduling
responsiveness can benefit from the query barrier: when used, this
barrier lets the scheduler collect all needed workgroups as soon as
they hit a resizing barrier.
As we can see, the gather time of the
query barrier is almost stable with respect to the number of workgroups that
need to be collected.
\begin{figure}
\includegraphics[width=\columnwidth]{heavy.pdf}
\caption{Performance impact of multitasking cooperative and non-cooperative workloads, and the period with which non-cooperative kernels execute}\label{fig:performance}
\end{figure}
\myparagraph{Performance} Figure~\ref{fig:performance} reports the
overhead brought by the scheduling of non-cooperative kernels over the
cooperative kernel execution time. This is the slowdown associated
with running the cooperative kernel in the presence of multitasking,
vs.\ running the cooperative kernel in isolation (median over all
applications and inputs). We also show the period at which
non-cooperative kernels can be scheduled (median over all applications
and inputs). Our data included some outliers that occur with
benchmarks in which the resizing barriers are not called very
frequently and the graphics task requires half or more workgroups. For
example, a medium graphics workload for bfs on the rmat input has over
an 8$\times$ overhead when asking for all but one of the
workgroups. As Figure~\ref{fig:performance} shows, most of our
benchmarks are much better behaved than this. Future work is
required to examine the problematic benchmarks in more detail,
possibly inserting more resizing calls.
We show results for the three workloads listed in
Sec.~\ref{sec:sizingnoncoop}. The horizontal lines in the period
graph correspond to the goals of the workloads: the higher
(resp. lower) line corresponds to a period of $70\mathit{ms}$ (resp.\ $40\mathit{ms}$) for
the light (resp. medium and heavy) workload.
Co-scheduling non-cooperative kernels that request a single workgroup
leads to almost no overhead, but the period is far too high to meet
the needs of any of our three workloads; e.g.\ a heavy workload
averages a period of $939\mathit{ms}$. As more workgroups are dedicated to
non-cooperative kernels, they execute quickly enough to be scheduled
at the expected period. For the light and medium workloads, a quarter
of the workgroups executing the non-cooperative kernel are able to
meet their goal period (70 and $40\mathit{ms}$ resp.). However, this is not
sufficient to meet the goal for the heavy workload (giving a median
period of $104\mathit{ms}$). If half of the workgroups are allocated to the
non-cooperative kernel, the heavy workload achieves its goal period
(median of $40\mathit{ms}$).
Yet, as expected, allocating more non-cooperative workgroups increases
the overhead of the cooperative kernel.
Still, heavy workloads meet their period by allocating half
of the workgroups, incurring a slow down of less than
1.5$\times$ (median). Light and medium workloads meet their period
with only a small overhead; 1.04$\times$ and 1.08$\times$ median
slowdown respectively.
\subsection{Comparison with Kernel-Level Preemption}\label{sec:nvidiacomparison}
\begin{table}[t]
\normalsize
\caption{Overhead of kernel level preemption vs cooperative kernels for three
graphics workloads}
\centering
\begin{tabular}{ l r r r}
g. workload & kernel-level & cooperative & resources \\
\hline
light & 1.04 & 1.04 & $N/4$\\
medium & 1.08 & 1.08 & $N/4$\\
heavy & 1.33 & 1.47 & $N/2$\\
\end{tabular} \\
\label{tab:preemption}
\end{table}
Nvidia's recent Pascal architecture provides hardware support for
instruction-level preemption~\cite{PascalWhitepaper,anandtech};
however, only preemption of entire kernels, not of individual
workgroups, is supported. Intel GPUs do not provide this feature, and
our OpenCL prototype of cooperative kernels cannot run on Nvidia GPUs,
making a direct comparison impossible. We present here a theoretical
analysis of the overheads associated with sharing the GPU between
graphics and compute tasks via kernel-level preemption.
Suppose a graphics workload is required to be scheduled with period
$P$ and duration $D$, and that a compute kernel requires time $C$ to
execute without interruption. If we assume the cost of preemption is
negligible (e.g.\ Nvidia have reported preemption times of 0.1
$\mathit{ms}$ for Pascal~\cite{anandtech}, because of
special hardware support), then the overhead associated with switching
between compute and graphics every $P$ time steps is $P/(P-D)$.
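For illustration, instantiating this model with the rendering demands measured in Sec.~\ref{sec:sizingnoncoop}, taking the graphics duration $D$ to be the measured rendering time $E$, reproduces the kernel-level overheads reported in Tab.~\ref{tab:preemption}: $70/(70-3) \approx 1.04$ for light, $40/(40-3) \approx 1.08$ for medium, and $40/(40-10) \approx 1.33$ for heavy (all times in $\mathit{ms}$).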
We compare this task-level preemption overhead model with our
experimental results per graphics workload in
Tab.~{\ref{tab:preemption}}. We report the overhead of the
configuration that allowed us to meet the deadline of the graphics
task.
Based on the above assumptions, our approach provides similar overhead
for light and medium graphics workloads, but has a higher overhead for
the heavy workload.
Our low performance for heavy workloads is because the graphics task
requires half of the workgroups, crippling the cooperative kernel
enough that $\mathsf{request\_fork}$ calls are not issued as frequently. Future
work may examine how to insert more resizing calls in these
applications to address this.
These results suggest that a hybrid preemption scheme may work
well. That is, the cooperative approach works well for light and
medium tasks; on the other hand, heavy graphics tasks benefit from the
coarser grained, kernel-level preemption strategy. However, the
preemption strategy requires specialised hardware support
in order to be efficient.
\section{Related Work}\label{sec:relatedwork}
\myparagraph{Irregular Algorithms and Persistent kernels}
There has been a lot of work on accelerating blocking irregular
algorithms using GPUs, and on the \emph{persistent threads}
programming style for long-running
kernels~\cite{owens-persistent,DBLP:conf/ipps/KaleemVPHP16,DBLP:conf/ipps/DavidsonBGO14,DBLP:conf/hipc/HarishN07,DBLP:journals/topc/MerrillGG15,DBLP:conf/egh/VineetHPN09,DBLP:conf/ppopp/NobariCKB12,DBLP:conf/hpcc/SolomonTT10a,DBLP:conf/popl/PrabhuRMH11,DBLP:conf/ppopp/Mendez-LojoBP12,DBLP:conf/oopsla/PaiP16,DBLP:conf/oopsla/SorensenDBGR16,DBLP:conf/egh/CedermanT08,TPO10,BNP12,Pannotia}.
These approaches rely on the occupancy-bound execution model, flooding
available compute units with work, so that the GPU is unavailable for
other tasks, and assuming fair scheduling between occupant workgroups,
which is unlikely to be guaranteed on future GPU platforms.
As our experiments demonstrate, our cooperative kernels model allows blocking algorithms
to be upgraded to run in a manner that facilitates responsive multitasking.
\myparagraph{GPU Multitasking and Scheduling}
Hardware support for preemption has been proposed for Nvidia\xspace GPUs, as
well as \emph{SM-draining} whereby workgroups occupying a symmetric
multiprocessor (SM; a compute unit using our terminology) are allowed
to complete until the SM becomes free for other
tasks~\cite{DBLP:conf/isca/TanasicGCRNV14}. SM draining is limited in
the presence of blocking constructs, since it may not be possible to
drain a blocked workgroup.
A follow-up work adds the notion of SM \emph{flushing}, where a
workgroup can be re-scheduled from scratch if it has not yet committed
side-effects~\cite{DBLP:conf/asplos/ParkPM15}. Both approaches have
been evaluated using simulators, over sets of regular GPU kernels.
Very recent Nvidia\xspace GPUs (i.e. the Pascal architecture) support
preemption, though, as discussed in Sec.~{\ref{sec:intro}} and
Sec.~{\ref{sec:nvidiacomparison}}, it is not clear whether they guarantee
fairness or allow tasks to share GPU resources at the workgroup
level~\cite{PascalWhitepaper}.
CUDA and OpenCL provide the facility for a kernel to spawn further
kernels~\cite{cuda-75}. This \emph{dynamic parallelism} can be used
to implement a GPU-based scheduler, by having an initial scheduler
kernel repeatedly spawn further kernels as required, according to some
scheduling policy~\cite{DBLP:conf/ppopp/Muyan-OzcelikO16}. However,
kernels that use dynamic parallelism are still prone to unfair
scheduling of workgroups, and thus this mechanism does not help in
deploying blocking algorithms on GPUs.
\myparagraph{Cooperative Multitasking}
Cooperative multitasking was offered in older operating systems
(e.g. pre 1995 Windows) and is still used by some operating systems,
such as RISC OS~\cite{risc-os-multitasking}. \TSAdded{Additionally,
cooperative multitasking can be efficiently implemented in today's
high-level languages for domains in which preemptive multitasking is
either too costly or not supported on legacy
systems~\cite{Tarpenning:1991:CMC:136810.136820}}.
\section{Conclusions and Future Work}\label{sec:conclusion}
We have proposed \emph{cooperative kernels}, a small set of GPU
programming extensions that allow long-running, blocking kernels to be
fairly scheduled and to share GPU resources with other workloads.
Experimental results using our megakernel-based prototype show that
the model is a good fit for current GPU-accelerated irregular
algorithms. The performance that could be gained through a native
implementation with driver support would be even better.
Avenues for future work include seeking additional classes of
irregular algorithms to which the model might apply, or be extended to
apply; implementing native support in open source
drivers; and integrating cooperative kernels into template- and
compiler-based programming models for graph algorithms on
GPUs~\cite{DBLP:conf/ppopp/WangDPWRO16,DBLP:conf/oopsla/PaiP16}.
\section*{Acknowledgments}
We are grateful to Lee Howes, Bernhard Kainz, Paul Kelly, Christopher
Lidbury, Steven McDonagh, Sreepathi Pai, and Andrew Richards for
insightful comments throughout the work. We thank the FSE reviewers
for their thorough evaluations and feedback. This work is supported in
part by EPSRC Fellowship EP/N026314, and a gift from Intel Corporation.
\clearpage
\section{Introduction}
Ion trap precision spectroscopy has paved the way for implementing quantum algorithms~\cite{Blatt:08}, testing fundamental symmetries of nature~\cite{Lean:11}, trace element analysis~\cite{Trac:12}, isotope separation~\cite{list:03}, \textit{etc.} In each of these experiments an essential component has been a stable laser to probe an atomic or molecular transition. The stability of a laser is judged by its emission bandwidth as well as by the slow drift of its wavelength. The emission bandwidth is narrowed by locking to a high-finesse optical cavity, which can currently achieve sub-Hz linewidth on short time scales~\cite{clock:13}. However, the emission wavelength of a laser locked to a cavity can drift due to ambient temperature fluctuations, low-frequency mechanical vibrations, \textit{etc.} There has been tremendous development in building ultra-stable reference cavities which can restrict these drifts to below kHz/day~\cite{clock:13}. An alternative method to restrict the drift is to actively lock the laser to a known frequency reference of an atom or molecule. Particularly for experiments which require extended periods of data acquisition, active locking to an atomic or molecular reference is preferred owing to its robustness. Furthermore, to avoid complexity in the experimental setup it is preferable to use the same atomic/molecular reference cell for all the transitions involved. However, this is not always possible due to the lack of suitable transitions in a single atom or molecule. Mostly iodine dimers and tellurium dimers are used as the reference of choice, apart from hollow cathode lamps for different elements. The latter requires an opto-galvanic detection setup, while the former relies on Doppler-free optical spectroscopy.
In the following we have implemented modulation transfer spectroscopy~(MTS) which, unlike frequency modulation spectroscopy~(FMS), produces a zero-crossing signal at the resonance frequency, thereby allowing direct frequency locking of the laser to the molecular transition frequency, similar to a Pound-Drever-Hall signal~\cite{pdh:83}. \\
The tellurium dimer has a rich spectrum covering parts of the ultraviolet~(UV) and visible wavelengths. A comprehensive study of its broad spectrum has been performed by Cariou and Luc in what is now known as the Te$_2$ atlas~\cite{Te2atlas:80}. However, in order to frequency lock a laser it is important to detect transition lines close to the targeted transition line of the atomic species under investigation. Russell J. De Young performed absorption spectroscopy above $500~$nm to extract the absorption cross-section to the first electronic excited state~\cite{You:94}. T.~J.~Scholl \textit{et al.} measured $39$ lines between $420~$nm and $460~$nm to cover the Stilbene-420 dye tuning curve employing saturation spectroscopy~\cite{Sch:05}. Tellurium, in addition to the thorium and uranium emission atlases of Los Alamos, provides suitable references below $500$~nm, which include Gillaspy and Sansonetti's measurement between $471-502~$nm for the Coumarin dye at $480~$nm~\cite{Gil:91} and Courteille \textit{et al.}'s measurement close to $476~$nm for diode laser locking for Yb$^+$ ion spectroscopy~\cite{Ma:93}. In the region of interest for hydrogenic atoms like deuterium, hydrogen and positronium, ranging from $486~$nm to $488~$nm, a number of experiments have been performed~\cite{Mct:90}. C.~Raab~\textit{et al.} performed precision measurements on the Te$_2$ spectrum close to the Ba$^+$ ion Doppler cooling transition at $493~$nm~\cite{Raa:98}. More recently, J.~Cooker~\textit{et al.} demonstrated a commercial blue laser diode stably locked to a Te$_2$ line at $444.4~$nm as a diode laser reference for transfer cavities~\cite{Coo:11}. In the meanwhile, I.~S.~Burns~\textit{et al.} extended the Te$_2$ spectral reference close to $410~$nm, where commercial blue diodes are now available~\cite{Bur:06}. \\
In this work we extend the available Ba$^+$ spectroscopic toolbox further by adding new Te$_2$ spectral lines close to the $S_{1/2}-P_{3/2}$ transition in the barium ion at $455.4~$nm. In order to drive this transition, we have developed an extended cavity diode laser employing a commercially available violet laser diode at $455~$nm with a mode-hop-free tuning range of more than $100~$GHz. In order to determine the absolute wavelength, a simultaneous opto-galvanic measurement of a barium hollow cathode lamp~(HCL) was recorded. We find two new Te$_2$ lines within the $1.2~$GHz wide HCL spectrum, with the closest one being only $79~$MHz away from the needed barium line. The closeness of this transition makes it a suitable frequency reference, as the gap can easily be bridged by an acousto-optic modulator~(AOM). In the following, we provide a description of our setup and measurement procedure and present our results before concluding in the last section.
\section{Experimental setup}
\label{sec:setup}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{figure1.eps}}
\caption{Layout of the experimental setup. ECDL: external cavity diode
laser; FI: Faraday isolator; PBS: polarising beam splitter; BS: beam
splitter; HWP: half-wave plate; M: mirror; L: lens; EOM: electro-optic
modulator.} \label{fig:1}
\end{figure}
In this section we briefly describe our laser design and the electronics used to generate the $455.4~$nm laser light, along with an overview of the setup to measure the MTS spectra of Te$_2$ in a hot vapor cell. A schematic of the experimental setup is shown in fig.~\ref{fig:1}, which includes both the MTS setup for tellurium and the HCL spectroscopy of barium using the $455~$nm diode laser. The laser is an extended cavity diode laser similar to the NIST design~\cite{Wie:91}, where a special pivot point is selected to have minimal cavity length change as the ECDL frequency is scanned. The laser diode is a \textit{Nichia NDB4216E} with anti-reflection coating on the front facet in order to suppress the free-running diode laser modes. The diode is driven by an in-house CQT-designed current driver, very similar to the original J.~Hall design~\cite{Lib:93}, while mode-hop-free operation over a wide frequency range is ensured by a feed-forward added to the diode operating current. The feed-forward current and the scan voltages are generated by direct digital synthesis~(DDS) implemented on a low-cost Arduino Uno board, taking into account the ECDL cavity length change resulting from the angular~($\alpha$) tuning of the piezo. The optical output power of the diode with and without the ECDL is shown in fig.~\ref{fig:2}. It is clear that the laser diode under the ECDL condition shows more regular mode-hops as the current is increased beyond $60~$mA. At the operational wavelength, a total power of $50~$mW is available at a diode current of $100~$mA for the experiment. Of this, about $10~$mW is used for the implementation of the MTS setup, about $3~$mW for the HCL spectrometry, and about $15~$mW for the wavemeter measurement; the rest is available for the ion trap experiment. All these paths are fiber coupled with an efficiency of about $40\%$.
The unusually high power requirement in the wavemeter path is mainly due to the low efficiency of the wavemeter switch, which is located about $50~$m away from our laboratory.\\
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{figure2.eps}}
\caption{ECDL output: the output power of the ECDL as a function of the input current. The small jumps in the output power observed above $50~$mA of drive current indicate mode-hops in the ECDL cavity. Similar mode-hops due to the grating angle are suppressed by the feed-forward applied to the current.}
\label{fig:2}
\end{figure}
A Faraday isolator~(FI) is placed in front of the ECDL, minimizing optical feedback. As shown in fig.~\ref{fig:1}, the laser beam from the ECDL is divided into three components by a pair of beam splitters~(BS), BS1 and BS2, for the MTS setup, the HCL setup and the $30~$MHz-resolution wavemeter setup, respectively. The optical setup for MTS spectroscopy utilizes two polarising beam splitters~(PBS): PBS1 is used to split the beam into pump and probe, while PBS2 is used for re-combining them so that both arms overlap inside the Te$_2$ cell. The intensity ratio of the two beams is controlled by a zero-order half-wave plate~(HWP), HWP1. The other two half-wave plates, HWP2 and HWP3, are used to control the polarization of the individual beams. The pump beam is phase modulated by an electro-optic modulator~(EOM) (crystal: MgO-doped LiNbO$_3$) which is driven at a modulation frequency of $5.8~$MHz. The probe beam is aligned collinearly with the counter-propagating modulated pump beam through a $10~$cm long Te$_2$ cell which is placed inside an oven heated to $530~$K. The temperature of the cell is maintained to within $0.5~$K by thermal isolation. Two photodiodes of responsivity $0.64~$A/W detect the MTS signal or the saturation absorption signal after PBS2. The photodiode~(PD) signal is then amplified by a low-noise amplifier and fed into a CQT-built MTS locking board comprising a frequency mixer with a low-pass
filter at $30~$kHz cut-off and a PID controller for frequency locking purposes. This board generates an error signal which is split into two parts: one feeds back to the current of the ECDL for fast frequency corrections and the other feeds back to the piezo driver board for slow drift corrections. The light reflected from BS2 is used for HCL spectroscopy. This part is sent to the barium hollow cathode lamp after being chopped at a frequency of $1~$kHz to avoid low-frequency electronic noise in the lock-in amplifier detection setup. The opto-galvanic signal from the HCL is separated out using a high-pass filter and the voltage drop across a $15~$k$\Omega$ resistor is detected by a \textit{Stanford Research SRS380} lock-in amplifier. Simultaneous data from the lock-in amplifier and the wavemeter are logged on a computer using a Python script.
\section{Measurement procedure and results}
\label{sec:measpro}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{figure3.eps}}
\caption{Spectra: the Te$_2$ MTS spectra shown as a function of laser frequency. The barium hollow cathode lamp spectrum is also shown here as a reference for the frequency axis. Line no.~1 corresponds to the only previously known line from the Te$_2$ atlas. The line of interest is line no.~$5$, which is about $79~$MHz from the barium line center.}
\label{fig:3}
\end{figure}
A symmetric triangular voltage of $0-10~$V is applied to the piezo of the ECDL in order to scan the range of frequencies close to the barium resonance, as shown in Fig.~\ref{fig:3}. In order to obtain this full range of frequencies, spanning over $7~$GHz, appropriate feed-forward has been applied. Both the tellurium resonances and the HCL spectrum of the $S_{1/2}-P_{3/2}$ transition of the barium ion are observed within this scan range. The frequency axis is calibrated by locking the laser to individual Te$_2$ resonances and measuring the wavelength with the wavemeter at a resolution of $30~$MHz. The absolute wavelength value has been determined from a Gaussian fit to the HCL spectrum, which matches well with the NIST data. The linearity of the frequency axis is determined from the fit, with reduced $\chi^2\approx0.98$, leading to a $1\%$ uncertainty. As is evident from the HCL spectrum containing the Gaussian fit, the barium resonance wavelength can be determined to within a $20~$MHz uncertainty, thereby limiting the overall uncertainty of our absolute scale to the same value. In order to ensure that the individual Te$_2$ lines are within the uncertainty set by the HCL spectrum, we performed a line-shape fit to a Te$_2$ line as shown in fig.~\ref{fig:4}. The line shape of an MTS resonance is devoid of any background slope, unlike FMS. This shape can be described, following \cite{line:82}, by
\begin{eqnarray}
S(\Delta)&=& Re\Bigg[ \sum_{j=a,b}\frac{\mu_{ab}^2}{\gamma_j+i\delta}\Bigg( \frac{1}{\gamma_{ab}+i(\Delta+\delta/2)} - \frac{1}{\gamma_{ab}+i(\Delta+\delta)} +\nonumber\\
&& \frac{1}{\gamma_{ab}-i(\Delta-\delta)} - \frac{1}{\gamma_{ab}-i(\Delta-\delta/2)} \Bigg) e^{-i\theta} \Bigg],
\label{eq:1}
\end{eqnarray}
where $a$ and $b$ denote the electronic levels of Te$_2$, $\mu_{ab}$ is the electric dipole matrix element between them, $\Delta$ is the laser detuning, $\gamma_{j}$ are the decay rates of the levels and $\gamma_{ab}$ is the optical relaxation rate between $a$ and $b$. The modulation frequency and phase are given by $\delta$ and $\theta$, respectively. As an example, one of the resonance line shapes obtained in the experiment is shown in fig.~\ref{fig:4}. The line is fitted with an overall scaling factor equivalent to $\sum_{j=a,b}\frac{\mu_{ab}^2}{\gamma_j+i\delta}$, the relaxation rate $\gamma_{ab}$ for the involved transition and the unknown phase $\theta$. The relaxation rate $\gamma_{ab}$ obtained from the best fit gives a linewidth of $20.9(4)~$MHz with a reduced $\chi^2\approx 0.98$. The zero-crossing of the resonance along with electronic suppression (about 100) allows the laser to be locked with a bandwidth of a few hundred kHz. The model fits well for most of the resonances, except a few where an additional etalon effect modifies the base level of the signal. One particular point to note is that the widths of these resonances are larger compared with the Te$_2$ resonances obtained near the barium $S_{1/2}-P_{1/2}$ transition at $493~$nm, which is attributed to the higher vibrational level densities at shorter wavelengths.\\
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{figure4.eps}}
\caption{Spectra: Te$_2$ MTS spectrum together with the line-shape fit. The $*$ symbols denote experimental data with error bars, while the solid line shows the fit with a reduced $\chi^2$ of $0.98$. The linewidth obtained from the fit is $20.9(4)~$MHz.}
\label{fig:4}
\end{figure}
\begin{table*}[htbp]
\centering
\caption{\bf Tellurium reference lines relative to the barium resonance line: Line no.~$1$ corresponds to line no.~$1677$ in the tellurium atlas~\cite{Te2atlas:80}. All other lines are observed for the first time. Line no.~$5$ is closest to the barium transition where the spectroscopy laser is frequency locked. The first two lines are unresolved due to the fast scan rate and their relative strengths are given as ``sat'', meaning saturated with respect to line no.~$5$.}
\begin{tabular}{cccc}
\hline
Line number & Relative frequency/GHz & Wavenumber/cm$^{-1}$ & Relative strength \\
\hline
1 & 3.423 & 21952.29007 & sat \\
2 & 3.341 & 21952.29281 & sat \\
3 & 1.542 & 21952.35282 & 1.6 \\
4 & 0.749 & 21952.37927 & 8.8 \\
5 & 0.079 & 21952.40162 & 10 \\
6 & - 0.149 & 21952.40922 & 2.3 \\
7 & - 1.548 & 21952.45589 & 0.8 \\
\hline
\end{tabular}
\label{tab1}
\end{table*}
The results summarised in Fig.~\ref{fig:3} contain seven resonances which have been observed for the first time. The lines previously reported in the Te$_2$ atlas are about $3.5~$GHz away from the barium line and therefore cannot be used for laser locking. However, for the linearity check of our frequency scan we have also used those resonances and found them to match well within the uncertainties. The frequencies are tabulated in Table~\ref{tab1} along with their relative strengths. Lines no.~$1$ and $2$ appear close to each other due to our fast scan; we have also observed these lines with a higher-resolution scan. Notably, despite the high relative strength of the second line, it was not reported in the Te$_2$ atlas, possibly due to poorer resolution.
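The internal consistency of Table~\ref{tab1} can be checked directly, since the relative-frequency and wavenumber columns must be related by $1~$GHz $= 10^9/c~\mathrm{cm}^{-1} \approx 0.0333564~\mathrm{cm}^{-1}$. A short sketch in Python:

```python
# Consistency check: the wavenumber column of Table 1 must track the
# relative-frequency column via 1 GHz = 1e9/c cm^-1 (relative frequencies
# are counted downward from the lock point, as inferred from the table).
C_CM_PER_S = 2.99792458e10            # speed of light in cm/s
GHZ_TO_INV_CM = 1e9 / C_CM_PER_S      # ~0.0333564 cm^-1 per GHz

rel_ghz = [3.423, 3.341, 1.542, 0.749, 0.079, -0.149, -1.548]
wavenum = [21952.29007, 21952.29281, 21952.35282, 21952.37927,
           21952.40162, 21952.40922, 21952.45589]

ref_nu, ref_k = rel_ghz[4], wavenum[4]   # line 5, closest to the resonance
for nu, k in zip(rel_ghz, wavenum):
    predicted = ref_k + (ref_nu - nu) * GHZ_TO_INV_CM
    assert abs(predicted - k) < 5e-4, (nu, predicted, k)
print("Table 1 columns mutually consistent to better than 5e-4 cm^-1")
```

All seven lines agree to within a few $10^{-5}~\mathrm{cm}^{-1}$, i.e. well inside the quoted wavemeter resolution.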
\section{Conclusion}
\label{sec:con}
We have performed modulation transfer spectroscopy on a hot tellurium cell using an in-house ECDL at $455.4~$nm constructed from a commercially available Nichia diode. The laser is built for a trapped barium ion experiment. For the first time we have observed seven resonance lines in the neighbourhood of the barium ion dipole transition $S_{1/2}-P_{3/2}$, the closest being only $79~$MHz away. All the observed transitions have a signal-to-noise ratio of more than $10$, except for line no.~$7$ where the ratio is around $5$. The resonance line no.~$5$, closest to the barium transition, has a S/N of more than $80$, leading to a robust frequency lock. These measurements provide new references for precision barium ion experiments, ranging from fundamental physics to quantum information processing. Moreover, the barium ion $S_{1/2}-P_{3/2}$ transition is also used for Raman pumping into the dark $D_{5/2}$ state, where an absolute locking reference for the $455.4~$nm laser will make the experiments more stable and robust against frequency drifts. In addition, Te$_2$ is already an established reference for the other barium transition, namely the $S_{1/2}-P_{1/2}$. Therefore, we believe that our newly measured references will further advance the toolbox of barium ion precision experiments.
\section{Acknowledgement}
DDM would like to acknowledge the contribution of Dr. Riadh Rebhi in developing a part of the setup used for performing this experiment. TD would like to acknowledge the contribution of Noah Van Horne in building some of the electronics.
\section{Funding Information}
This research is supported by the National Research Foundation Singapore under its Competitive Research Programme (CRP Award No. NRF-CRP14-2014-02).
\bigskip
\section*{Introduction}
Transportation--cost inequalities can be seen as a functional approach to the concentration of measure phenomenon (see e.g. Ledoux's work \cite{Led01} for a survey on this topic). Let $(E,d)$ be a metric space and let $P(E)$ denote the set of probability measures on the Borel sets of $E$. We say that the $p$-transportation--cost inequality holds for a measure $\mu \in P(E)$ if there is a constant $C$ such that
\begin{align}\label{eqn:transp_cost_metric_space}
\mathcal{W}_p(\nu,\mu) \leq \sqrt{C H(\nu\,|\,\mu)}
\end{align}
holds for all $\nu \in P(E)$. Here $\mathcal{W}_p(\nu,\mu)$ denotes the Wasserstein $p$-distance
\begin{align*}
\mathcal{W}_p(\nu,\mu) = \inf_{\pi \in \Pi(\nu,\mu)}\left( \int_{E \times E} d(x,y)^p\, d\pi(x,y) \right)^{\frac{1}{p}}
\end{align*}
where $\Pi(\nu,\mu)$ is the set of all probability measures on the product space $E \times E$ with marginals $\nu$ resp. $\mu$, and $H(\nu\,|\,\mu)$ is the relative entropy (or Kullback--Leibler divergence) of $\nu$ with respect to $\mu$, i.e.
\begin{align*}
H(\nu\,|\,\mu) =
\begin{cases}
\int \log\left(\frac{d\nu}{d\mu}\right)\,d\nu &\text{if }\nu \ll \mu \\
+ \infty &\text{otherwise.}
\end{cases}
\end{align*}
If \eqref{eqn:transp_cost_metric_space} holds, we will say that $T_p(C)$ holds for the measure $\mu$.
Inequalities of type \eqref{eqn:transp_cost_metric_space} were first considered by Marton (cf. \cite{Mar86}, \cite{Mar96}).
The cases ``$p = 1$'' and ``$p = 2$'' are of special interest. The $1$-transportation--cost inequality, i.e. the weakest form of \eqref{eqn:transp_cost_metric_space}, is actually equivalent to Gaussian concentration as it was shown by Djellout, Guillin and Wu in \cite{DGW04} (using preliminary results by Bobkov and G\"otze obtained in \cite{BG99}). The $2$-transportation--cost inequality was first proved by Talagrand for the Gaussian measure on $\R^d$ in \cite{Tal96} with the sharp constant $C = 2$ (for this reason it is also called \emph{Talagrand's transportation--cost inequality}). $T_2(C)$ is particularly interesting since it has the \emph{dimension--free tensorization property}: If $T_2(C)$ holds for two measures $\mu_1$ and $\mu_2$, it also holds for the product measure $\mu_1 \otimes \mu_2$ \emph{for the same
constant $C$} (see also \cite{GL07} for a general account of tensorization properties for transportation--cost inequalities), and this property yields dimension--free concentration of measure for $\mu$. Gozlan realized in \cite{Goz09} that the converse is also true: if $\mu$ possesses the dimension--free concentration of measure property, then $T_2(C)$ holds for $\mu$. We also remark that the $2$-transportation--cost inequality has gained much attention because it is intimately linked to other famous concentration inequalities, notably the logarithmic Sobolev inequality: in their celebrated paper \cite{OV00}, Otto and Villani showed that in a smooth Riemannian setting, the logarithmic Sobolev inequality implies the $2$-transportation--cost inequality. Since then, this result has been generalized in several directions, see e.g. the recent work of Gigli and Ledoux \cite{GL13} and the references therein.
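In dimension one, both sides of Talagrand's inequality are completely explicit for Gaussian measures, which makes $T_2(2)$ easy to verify numerically. The following sketch (Python) uses the closed forms $\mathcal{W}_2(N(m_1,\sigma_1^2),N(m_2,\sigma_2^2))^2 = (m_1-m_2)^2 + (\sigma_1-\sigma_2)^2$ and the Gaussian relative entropy:

```python
import math

def w2_gauss(m1, s1, m2, s2):
    """Closed-form W_2 distance between N(m1, s1^2) and N(m2, s2^2) in 1-d."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def kl_gauss(m1, s1, m2, s2):
    """Relative entropy H(N(m1, s1^2) | N(m2, s2^2))."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

# T_2(2) for gamma = N(0,1): W_2(nu, gamma)^2 <= 2 H(nu | gamma)
for m, s in [(0.5, 1.0), (0.0, 2.0), (1.3, 0.4), (-2.0, 3.0)]:
    assert w2_gauss(m, s, 0.0, 1.0) ** 2 <= 2.0 * kl_gauss(m, s, 0.0, 1.0) + 1e-12

# pure translations saturate the inequality (sharpness of the constant 2)
assert abs(w2_gauss(0.5, 1.0, 0.0, 1.0) ** 2 - 2.0 * kl_gauss(0.5, 1.0, 0.0, 1.0)) < 1e-12
print("T_2(2) verified on the test cases")
```

The equality case for translations reflects that the constant $C = 2$ in Talagrand's inequality is sharp.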
In this work, we will mainly study transportation--cost inequalities for the law of a continuous diffusion $Y$ in a multidimensional setting, i.e. solutions to
\begin{align}\label{eqn:SDE_intro}
dY_t = f_0(Y_t)\, dt + \sum_{i = 1}^d f_i(Y_t)\circ dB_t^i; \qquad Y_0 = \xi \in \R^m,\quad t\in [0,T]
\end{align}
where $B$ is a $d$-dimensional Brownian motion, assuming that the vector fields $f = (f_i)_{i = 0,1,\ldots,d}$ are sufficiently smooth. In this context, $T_1(C)$ was first established with respect to the uniform metric by Djellout, Guillin and Wu in \cite{DGW04}. Assuming that the solution $Y$ is contracting in the $L^2$ sense (which implies the existence of a unique invariant probability measure), Wu and Zhang proved in \cite{WZ04} that $T_2(C)$ also holds for the uniform metric. $T_2(C)$ is seen to hold for the weaker $L^2$-metric also under milder assumptions on the vector fields, cf. \cite{DGW04} (see also \cite{Wan02} in the context of Riemannian manifolds and \cite{Sau12} where the Brownian motion $B$ is replaced by a fractional Brownian motion in the smooth setting with Hurst parameter $H > \frac{1}{2}$).
A standard argument to establish transportation-cost inequalities, following \cite{FU04} and \cite{DGW04}, is to use the Girsanov transformation. In the present paper, we introduce a new approach, where the key idea is to use Lyons' \textit{rough paths theory}. In the following, we will explain our strategy. The $2$--transportation-cost inequality for Gaussian measures on Banach spaces reads as follows: Let $(E,\mathcal{H},\gamma)$ be a Gaussian Banach space, i.e. $E$ is a Banach space, $\gamma$ is a Gaussian measure and $\mathcal{H}$ denotes the associated Cameron--Martin space. We set
\begin{align}\label{eqn:def_CM_metric}
d_{\mathcal{H}}(x,y) = \begin{cases}
|x - y|_{\mathcal{H}} &\text{if } x - y \in \mathcal{H} \\
+ \infty &\text{otherwise.}
\end{cases}
\end{align}
The following theorem was shown by Feyel and \"Ust\"unel using the Girsanov transformation (cf. \cite[Theorem 3.1]{FU04}).
\begin{theorem}\label{thm:transp_cost_gaussian_space_intro}
If $(E,\mathcal{H},\gamma)$ is a Gaussian Banach space and $\mathcal{H}$ is densely embedded in $E$,
\begin{align}\label{eqn:transp_cost_gaussian_space_intro}
\inf_{\pi \in \Pi(\nu,\gamma)}\left( \int_{E \times E} d_{\mathcal{H}}(x,y)^2\, d\pi(x,y) \right)^{\frac{1}{2}} \leq \sqrt{2 H(\nu\,|\,\gamma)}
\end{align}
holds for all $\nu \in P(E)$.
\end{theorem}
Note that the inequality \eqref{eqn:transp_cost_gaussian_space_intro} does not quite fit into the framework discussed above since $d_{\mathcal{H}}$ does not, in general, induce the topology of the space $E$. The statement of Theorem \ref{thm:transp_cost_gaussian_space_intro} is even more surprising since in the case $\operatorname{dim}\mathcal{H} = \infty$, we have $\gamma(\mathcal{H}) = 0$; in other words, the function $d_{\mathcal{H}}$ equals $+\infty$ ``very often''. The first contribution of the present work is to give a proof of Theorem \ref{thm:transp_cost_gaussian_space_intro} using the ideas of Gozlan \cite{Goz09}; that is, we combine the dimension--free concentration property of the Gaussian measure $\gamma$ (in the form of the Borell--Sudakov--Tsirelson inequality) with a large deviation argument. Theorem \ref{thm:transp_cost_gaussian_space_intro}, applied to the Wiener measure (or more general Gaussian measures) on the space of continuous functions (or on a suitable subspace), will be our starting point.
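To make this concrete for the Wiener measure on $C_0([0,1];\R)$, where $|h|_{\mathcal{H}} = \|\dot h\|_{L^2}$: the following sketch (Python, piecewise-linear discretization on a uniform grid) shows the discretized Cameron--Martin norm staying at $1$ for the smooth path $h(t) = t$ while growing like $n$ along typical Brownian sample paths, illustrating why $d_{\mathcal{H}}$ equals $+\infty$ so often.

```python
import math, random

def cm_norm_sq(path, dt):
    """Discretized Cameron-Martin norm |h|_H^2 = int_0^1 h'(t)^2 dt for the
    piecewise-linear interpolation of `path` on a uniform grid of mesh dt."""
    return sum((path[k + 1] - path[k]) ** 2 for k in range(len(path) - 1)) / dt

random.seed(1)
for n in (10, 100, 1000):
    dt = 1.0 / n
    smooth = [k * dt for k in range(n + 1)]                    # h(t) = t, |h|_H = 1
    bm = [0.0]
    for _ in range(n):
        bm.append(bm[-1] + random.gauss(0.0, math.sqrt(dt)))   # Brownian increments
    # the smooth path keeps norm 1; the Brownian one grows like n
    print(n, round(cm_norm_sq(smooth, dt), 6), round(cm_norm_sq(bm, dt)))
```

As the mesh is refined, the discretized norm of the Brownian path diverges, consistent with $\gamma(\mathcal{H}) = 0$.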
A fundamental observation made by Djellout, Guillin and Wu in \cite[Lemma 2.1]{DGW04} is that transportation--cost inequalities are stable under a push--forward by Lipschitz maps. If we could show that the map $I_{f}(\cdot,\xi)$, assigning to each Gaussian trajectory $\omega$ the solution trajectory $Y(\omega)$, is Lipschitz with respect to the metric $d_{\mathcal{H}}$, it would be immediate that $T_2(C)$ also holds for the law of $Y$. In the additive noise case, this is not hard to show and is discussed in Section \ref{sec:sde_add_noise} for general Gaussian driving signals and different metrics. The multiplicative noise case is considerably more involved. It is well known that in this case, the map $I_f(\cdot,\xi)$ will in general not be continuous w.r.t. the uniform metric. However, the key result of Lyons' rough paths theory in this context (cf. \cite{Lyo98}, \cite{LQ98}, \cite{LCL07}) is that there is a metric space $\mathcal{D}^{0,p}_g$ and a measurable map $S$ such that
\begin{align*}
\begin{tikzpicture}[node distance=2cm, auto]
\node (C) {$\mathcal{D}^{0,p}_g$};
\node (P) [below of=C] {$C_0$};
\node (Ai) [right of=P] {$C_{\xi}$};
\draw[->] (C) to node {$\mathbf{I}_f(\cdot,\xi)$} (Ai);
\draw[->] (P) to node [swap] {$S$} (C);
\draw[->] (P) to node [swap] {$I_f(\cdot,\xi)$} (Ai);
\end{tikzpicture}
\end{align*}
commutes, where $\mathbf{I}_f(\cdot,\xi)$ is now a locally Lipschitz continuous function. Using this factorization, we can show local Lipschitz continuity w.r.t. the metric $d_{\mathcal{H}}$. Recall the definition of the $p$-variation (pseudo-)metric $d_{p\text{-var}}$.
\begin{proposition}
Let $f$ be sufficiently smooth and choose $p \in (2,3)$. Then there is a measurable function $L$ such that
\begin{align*}
d_{p\text{-var}}(y^1,y^2) \leq L(x^1)\, d_{\mathcal{H}}(x^1,x^2)
\end{align*}
holds $\gamma$-almost surely for all $x^1, x^2 \in C_0([0,T];\R^d)$ where $\gamma$ denotes the Wiener measure and $y^i = I_{f}(x^i,\xi)$, $i = 1,2$.
\end{proposition}
See Section \ref{sec:sde_mult_noise} for a proof. The function $L$ is very explicit, and it turns out that its $L^q(\gamma)$-moments are finite for every $q\in [1,\infty)$. A straightforward generalization of \cite[Lemma 2.1]{DGW04} leads to our main result in the case of multiplicative noise.
\begin{theorem}\label{thm:transp_cost_mult_intro}
Let $\mu$ denote the law of $Y$ on the space $C_{\xi}([0,T],\R^m)$. Then for every $p \in (2,3)$ and $\varepsilon > 0$ there is a constant $C$ such that
\begin{align*}
\inf_{\pi \in \Pi(\nu,\mu)}\left( \int_{C_{\xi} \times C_{\xi}} d_{p\text{-var}}(x,y)^{2 - \varepsilon}\, d\pi(x,y) \right)^{\frac{1}{2- \varepsilon}} \leq \sqrt{C H(\nu\,|\,\mu)}
\end{align*}
for every $\nu \in P(C_{\xi})$.
\end{theorem}
We make several remarks.
\begin{itemize}
\item If $x_0 = y_0$, then $\| x - y \|_{\infty} \leq d_{p\text{-var}}(x,y)$, and therefore Theorem \ref{thm:transp_cost_mult_intro} also holds for the uniform metric.
\item Rough paths theory allows to go beyond the usual semimartingale setting and we can consider more general Gaussian driving signals in \eqref{eqn:SDE_intro} than Brownian motion.
Saying this, Theorem \ref{thm:transp_cost_mult_intro} actually holds for much more general diffusions, namely those driven by Gaussian rough paths in the sense of Friz--Victoir (cf. \cite{FV10-2}) for which the Cameron--Martin space is continuously embedded in the space of paths of bounded variation. This is in particular the case for fractional Brownian motion with Hurst parameter $H \geq 1/2$, and we extend results of Saussereau \cite{Sau12} in the multidimensional setting by considering more general diffusion coefficients. See Section \ref{sec:sde_mult_noise} for more examples.
\item Already in the Brownian motion case, our results are interesting since we \emph{almost} obtain the $2$--transportation--cost inequality without the (rather strong) assumption that $Y$ is contracting in $L^2$ (cf. \cite{WZ04}). Even though we slightly fail to conclude the dimension--free concentration property, the forthcoming Theorem \ref{theorem:tail_est_intro} shows why it is still desirable to obtain $p$--transportation--cost inequalities for $p \in (1,2)$.
\end{itemize}
We finally discuss tail estimates for functionals on spaces on which a $p$-transportation-cost inequality holds. We cite a short form of Theorem \ref{theorem:tail_estimates_general_form} here.
\begin{theorem}\label{theorem:tail_est_intro}
Let $V$ be a linear Polish space, $\mathcal{B} \subseteq V$ be a normed subspace and set
\begin{align*}
d_{\mathcal{B}}(x,y) = \begin{cases}
\| x - y \|_{\mathcal{B}} &\text{ if } x - y \in \mathcal{B} \\
+ \infty &\text{ otherwise.}
\end{cases}
\end{align*}
Let $\mu \in P(V)$ and assume
\begin{align*}
\inf_{\pi \in \Pi(\nu,\mu)} \left(\int_{V \times V} d_{\mathcal{B}}(x,y)^{p}\, d\pi(x,y)\right)^{\frac{1}{p}} \leq \sqrt{H(\nu\,|\,\mu)} \quad \forall\,\nu \in P(V).
\end{align*}
Let $(E,d)$ be a metric space and let $f \colon V \to (E,d)$ be such that $\mu$-a.s. for all $h \in \mathcal{B}$,
\begin{align*}
d(f(x + h),e) \leq c(x) \left(g(x) + \|h\|_{\mathcal{B}} \right)
\end{align*}
for some $e \in E$ with $c \in L^q(\mu)$, $\frac{1}{p} + \frac{1}{q} = 1$ and $\int c g \,d\mu < \infty$.
Then the map $x \mapsto d(f(x),e)$ has Gaussian tails.
\end{theorem}
Applied to Gaussian Banach spaces, Theorem \ref{theorem:tail_est_intro} can be seen as an even more general form of the \textit{generalized Fernique theorem} proved by Friz and Oberhauser in \cite{FO10} (see also \cite{DOR14}), where $c$ had to be assumed bounded almost surely. Theorem \ref{theorem:tail_est_intro} can be applied to many interesting examples arising from rough paths theory, which are discussed more closely in Section \ref{sec:tail_estimates_for_functionals}.
The structure of the paper is as follows. Section \ref{sec:transp_cost_gaussian_space} consists of a proof of Theorem \ref{thm:transp_cost_gaussian_space_intro}. In Section \ref{sec:appl_diff}, we establish transportation--cost inequalities for the law of diffusions; the additive noise case is treated in Section \ref{sec:sde_add_noise}, the multiplicative noise case in Section \ref{sec:sde_mult_noise}. In Section \ref{sec:tail_estimates_for_functionals} we consider tail estimates for functionals on spaces in which transportation--cost inequalities hold and prove Theorem \ref{theorem:tail_est_intro}. In the appendix we collect some useful lemmata.
\subsection*{Notation}
If $(X,\mathcal{F})$ is a measurable space, $P(X)$ denotes the set of all probability measures defined on $\mathcal{F}$. If $X$ is a topological space, $\mathcal{F}$ will usually be the Borel $\sigma$-algebra $\mathcal{B}(X)$. If $X$ and $Y$ are measurable spaces and $\nu \in P(X)$, $\mu \in P(Y)$, then $\Pi(\nu,\mu)$ denotes the set of all probability measures on $X \times Y$ with marginals $\nu$ and $\mu$, respectively. If $c \colon X \times X \to \R_+ \cup \{+ \infty\}$ is measurable and $p>0$, we set
\begin{align*}
\mathcal{W}^c_p(\nu,\mu) = \inf_{\pi \in \Pi(\nu,\mu)} \left( \int_{X \times X} c(x,y)^p\, d\pi(x,y) \right)^{\frac{1}{p}}
\end{align*}
for $\nu, \mu \in P(X)$.
If $[S,T]$ is any interval in $\R$, we write $\mathcal{P}([S,T])$ for the set of all finite partitions of $[S,T]$ of the form $S = t_0 < t_1 < \ldots < t_M = T$, $M \in \N$. If $x,y \colon [S,T] \to (B,\| \cdot \|)$ are paths with values in a normed space and $p\geq 1$, we define $p$-variation seminorm and pseudometric as
\begin{align*}
\|x \|_{p\text{-var};[S,T]} := \sup_{D \in \mathcal{P}([S,T])} \left( \sum_{t_i \in D} \| x_{t_{i+1}} - x_{t_i} \|^p \right)^{\frac{1}{p}}; \qquad d_{p\text{-var};[S,T]}(x,y) := \|x - y \|_{p\text{-var};[S,T]}.
\end{align*}
If the time horizon is clear from the context, we sometimes omit the subindex $[S,T]$ in the notation. The set of all continuous paths $x \colon [S,T] \to B$ with $\| x\|_{p\text{-var};[S,T]} < \infty$ is denoted by $C^{p\text{-var}}([S,T];B)$ and we also define $C^{p\text{-var}}_{\xi}([S,T];B) := \{ x \in C^{p\text{-var}}([S,T];B)\,:\, x_S = \xi \}$ for some $\xi \in B$. If $B$ is a Banach space, $C^{p\text{-var}}_{0}([S,T];B)$ is also a Banach space with the norm $\| \cdot \|_{p\text{-var}}$.
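For a path sampled at finitely many points, the supremum over partitions in the definition of the $p$-variation reduces to a maximum over subsequences of the sample points (for $p \geq 1$, inserting points inside a linear segment never increases the sum), which can be computed by dynamic programming. A short sketch (Python, quadratic in the number of samples, for illustration only):

```python
def p_variation(samples, p):
    """p-variation of a discretely sampled path: the supremum over
    partitions is a maximum over subsequences of the sample points."""
    n = len(samples)
    # best[j] = largest sum of |increment|^p over subsequences ending at j
    best = [0.0] * n
    for j in range(1, n):
        best[j] = max(best[i] + abs(samples[j] - samples[i]) ** p
                      for i in range(j))
    return max(best) ** (1.0 / p)

zigzag = [0.0, 1.0, 0.0, 1.0, 0.0]
assert abs(p_variation(zigzag, 1) - 4.0) < 1e-12   # total variation = 4
assert abs(p_variation(zigzag, 2) - 2.0) < 1e-12   # (4 * 1^2)^{1/2} = 2
```

For the zig-zag path above, every partition point contributes a unit increment, so the $1$- and $2$-variations are $4$ and $2$, respectively.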
\section{Transportation inequality on a Gaussian space}\label{sec:transp_cost_gaussian_space}
Let $(F, \mathcal{H},\gamma)$ be a Gaussian space where $F$ is a separable Fr\'echet space, $\gamma$ is a centered Gaussian measure on the Borel sets $\mathcal{B}(F)$ and $\mathcal{H}$ denotes the corresponding Cameron--Martin space (cf. \cite{Bog98} for the precise definitions). Recall that probability measures on Borel sets of Polish spaces are Radon measures, thus $\gamma$ is Radon and $\mathcal{H}$ is also separable (cf. \cite[3.2.7. Theorem]{Bog98}).
In the next proposition, we prove a general form of $T_2(C)$ on Gaussian Fr\'echet spaces for continuous pseudometrics. The proof adapts ideas from \cite[Theorem 1.3]{Goz09}, using the dimension--free Gaussian concentration property given by the Borell--Sudakov--Tsirelson inequality.
\begin{proposition}\label{thm:main_frechet}
Let $(F,\mathcal{H},\gamma)$ be a separable Gaussian Fr\'echet space and let $d_F$ be a metric which induces the topology on $F$. Let $d$ be a pseudometric with the following properties:
\begin{itemize}
\item[(i)] There is a constant $L > 0$ such that $d(x,y) \leq L d_F(x,y)$ holds for all $x,y \in F$.
\item[(ii)] There exists a constant $C$ such that $d(x + h,x) \leq C |h|_{\mathcal{H}}$ for all $x\in F$ and $h \in\mathcal{H}$.
\end{itemize}
Then
\begin{align*}
\inf_{\pi \in \Pi(\nu,\gamma)} \int_{F \times F} d(x,y)^2\,d\pi(x,y) \leq 2 C^2\, H(\nu\,|\,\gamma)
\end{align*}
holds for every $\nu \in P(F)$.
\end{proposition}
\begin{remark}
Note the crucial fact that the constant $L$ does not appear in the conclusion.
\end{remark}
\begin{proof}
We may assume w.l.o.g. that $d$ is bounded (otherwise we prove the estimate for $d_n := d \wedge n$ instead and send $n\to\infty$ at the end, using Lemma \ref{lemma:approx_cost_functions}). Thus, we may also assume that $d_F$ is bounded. Set
\begin{align*}
\mathcal{W}_2(\nu,\mu) := \inf_{\pi \in \Pi(\nu,\mu)} \left( \int_{F \times F} d(x,y)^2\,d\pi(x,y) \right)^{\frac{1}{2}}.
\end{align*}
For $x = (x_1,\ldots,x_n) \in F^n$, define
\begin{align*}
L_n^x := \frac{1}{n} \sum_{i=1}^n \delta_{x_i}.
\end{align*}
First, we claim that for every $x = (x_1,\ldots,x_n) \in F^n$ and $h = (h_1,\ldots,h_n) \in \mathcal{H}^n$,
\begin{align}\label{eqn:claim_one_main_frechet}
\left|\mathcal{W}_2(L_n^{x + h},\gamma) - \mathcal{W}_2(L_n^{x},\gamma) \right| \leq C \frac{|h|_{\mathcal{H}^n}}{\sqrt{n}}.
\end{align}
Indeed: Since $\mathcal{W}_2$ is a pseudometric (cf. Lemma \ref{lemma:wasserstein_pseudo_metric}),
\begin{align*}
\left|\mathcal{W}_2(L_n^{x + h},\gamma) - \mathcal{W}_2(L_n^{x},\gamma) \right| \leq \mathcal{W}_2(L_n^{x + h},L_n^{x}).
\end{align*}
By the convexity property of $\mathcal{T}_2 := (\mathcal{W}_2)^2$ (cf. \cite[Theorem 4.8]{Vil09}) and assumption (ii),
\begin{align*}
\mathcal{T}_2(L_n^{x + h},L_n^{x}) \leq \frac{1}{n} \sum_{i = 1}^{n} \mathcal{T}_2(\delta_{x_i + h_i},\delta_{x_i}) = \frac{1}{n} \sum_{i = 1}^n d(x_i + h_i,x_i)^2 \leq C^2 \frac{|h|_{\mathcal{H}^n}^2}{n}
\end{align*}
which shows \eqref{eqn:claim_one_main_frechet}. Now let $(X_i)_{i\in \N}$ be an i.i.d. sequence in $F$ with law $\gamma$ and let $L_n$ be its empirical measure. Let $m_n$ be the median of $\mathcal{W}_2(L_n,\gamma)$, i.e.
\begin{align*}
\P (\mathcal{W}_2(L_n,\gamma) \leq m_n) = \gamma^n \left\{ x \in F^n\ |\ \mathcal{W}_2(L_n^x,\gamma) \leq m_n \right\} \geq \frac{1}{2}
\end{align*}
and the same holds for the reversed inequalities. Define
\begin{align*}
A := \left\{ x \in F^n\ |\ \mathcal{W}_2(L_n^x,\gamma) \leq m_n \right\}
\end{align*}
and set
\begin{align*}
A^r := \left\{ x + rh\ |\ x\in A,\ h\in \mathcal{K}^n \right\}
\end{align*}
where $\mathcal{K}^n$ denotes the unit ball in $\mathcal{H}^n$. If $x + h \in A^r$, \eqref{eqn:claim_one_main_frechet} shows that
\begin{align*}
\mathcal{W}_2(L_n^{x+h},\gamma) \leq \mathcal{W}_2(L_n^{x},\gamma) + |\mathcal{W}_2(L_n^{x+h},\gamma) - \mathcal{W}_2(L_n^{x},\gamma)| \leq m_n + C \frac{r}{\sqrt{n}},
\end{align*}
thus
\begin{align*}
A^r &\subset \left\{ x + h \ |\ \mathcal{W}_2(L_n^{x+h},\gamma) \leq m_n + C \frac{r}{\sqrt{n}} \right\}\\
&\subset \left\{x \in F^n\ |\ \mathcal{W}_2(L_n^{x},\gamma) \leq m_n + C \frac{r}{\sqrt{n}} \right\}.
\end{align*}
Using the Borell--Sudakov--Tsirelson inequality (cf. \cite[Theorem 3.1]{Bor75}), we obtain
\begin{align*}
\P \left( \mathcal{W}_2(L_n,\gamma) > m_n + C \frac{r}{\sqrt{n}} \right) &= \gamma^n \left\{x \in F^n\ |\ \mathcal{W}_2(L_n^{x},\gamma) > m_n + C \frac{r}{\sqrt{n}} \right\} \\
&\leq \gamma^{n}_*((A^r)^c) \leq \bar{\Phi}(r)
\end{align*}
for all $r > 0$, where $\Phi$ denotes the cumulative distribution function of a standard normal random variable and $\bar{\Phi} = 1 - \Phi$. Equivalently,
\begin{align*}
\P \left( \mathcal{W}_2(L_n,\gamma) > u \right) \leq \bar{\Phi}\left( \frac{u - m_n}{C} \sqrt{n}\right)
\end{align*}
for all $u > m_n$ and so
\begin{align*}
\frac{1}{n} \log \P \left(\mathcal{W}_2(L_n,\gamma) > u \right) \leq -\frac{\log(2)}{n} - \frac{1}{2}\left(\frac{u - m_n}{C} \right)^2
\end{align*}
\end{align*}
where we used the standard estimate $\bar{\Phi}(r) \leq \frac{1}{2}\exp(-r^2/2)$. By Varadarajan's Theorem (cf. \cite[Theorem 11.4.1]{Dud89}), with probability one, $L_n \to \gamma$ weakly in $F$ as $n\to\infty$. Using \cite[Theorem 7.12]{Vil03}, we see that $\mathcal{W}_2^{d_F}(L_n,\gamma) \to 0$ almost surely as $n \to \infty$. By assumption (i), also $\mathcal{W}_2(L_n,\gamma) \to 0$,
hence $m_n \to 0$ as $n \to \infty$ and thus
\begin{align}\label{eqn:limsup_part_main_frechet}
\limsup_{n\to\infty} \frac{1}{n} \log \P \left( \mathcal{W}_2(L_n,\gamma) > u \right) \leq -\frac{u^2}{2C^2}
\end{align}
for all $u>0$.
Note that for every $u>0$, the set
\begin{align*}
\mathcal{O}_u = \left\{ \nu \in P(F)\ |\ \mathcal{W}_2(\nu,\gamma) > u \right\}
\end{align*}
is open in the weak topology. Indeed, assumption (i) implies that $\mathcal{W}_2 \leq L \mathcal{W}_2^{d_F}$, and since $\mathcal{W}_2^{d_F}$ metrizes weak convergence, $\nu \mapsto \mathcal{W}_2(\nu,\gamma)$ is continuous in the weak topology.
Hence we may apply Sanov's Theorem (cf. e.g. \cite[Theorem 6.2.10]{DZ98}) which shows that
\begin{align}\label{eqn:liminf_part_main_frechet}
\liminf_{n\to \infty} \frac{1}{n} \log \P \left( \mathcal{W}_2(L_n,\gamma) > u \right) \geq - \inf \{ H(\nu\,|\,\gamma)\ |\ \nu \in P(F)\text{ such that } \mathcal{W}_2(\nu,\gamma) > u \}
\end{align}
and combining \eqref{eqn:limsup_part_main_frechet} and \eqref{eqn:liminf_part_main_frechet} we obtain
\begin{align*}
\inf \{ H(\nu\,|\,\gamma)\ |\ \nu \in P(F)\text{ such that } \mathcal{W}_2(\nu,\gamma) > u \} \geq \frac{u^2}{2C^2}
\end{align*}
which is the same as saying that
\begin{align*}
\frac{1}{2 C^2} \mathcal{W}_2(\nu,\gamma)^2 \leq H(\nu\,|\,\gamma).
\end{align*}
\end{proof}
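The convergence $m_n \to 0$ used in the proof can be illustrated numerically in dimension one, where the optimal coupling is the quantile coupling, so that $\mathcal{W}_2(L_n,\gamma)^2 = \int_0^1 (F_n^{-1}(u) - \Phi^{-1}(u))^2\,du$. A Monte-Carlo sketch (Python, midpoint rule on the quantile integral; the sample sizes are arbitrary):

```python
import random, statistics

def w2_to_std_gauss(xs, m=4000):
    """1-d Wasserstein-2 distance between the empirical measure of `xs` and
    N(0,1) via the quantile-coupling formula, midpoint rule with m nodes."""
    xs, nd, n = sorted(xs), statistics.NormalDist(), len(xs)
    acc = 0.0
    for k in range(m):
        u = (k + 0.5) / m
        acc += (xs[min(int(u * n), n - 1)] - nd.inv_cdf(u)) ** 2
    return (acc / m) ** 0.5

random.seed(0)
dists = [w2_to_std_gauss([random.gauss(0.0, 1.0) for _ in range(n)])
         for n in (10, 100, 5000)]
print([round(d, 3) for d in dists])
assert dists[-1] < dists[0]   # W_2(L_n, gamma) shrinks as n grows
```

The distances decay with $n$, in line with $\mathcal{W}_2(L_n,\gamma) \to 0$ almost surely.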
\subsection{Banach spaces}
In this section, we assume that $(B,\|\cdot\|)$ is a Gaussian Banach space and we set $d_B(x,y) := \|x - y\|$.
As an immediate corollary of Proposition \ref{thm:main_frechet} we obtain:
\begin{corollary}\label{corollary:weak_conclusion}
For any $\nu \in P(B)$,
\begin{align*}
\inf_{\pi \in \Pi(\nu,\gamma)} \int_{B \times B} {d_B}(x,y)^2\,d\pi(x,y) \leq 2 \sigma^2\, H(\nu\,|\,\gamma)
\end{align*}
where
\begin{align*}
\sigma^2 = \sup_{l \in B^*, \|l\| \leq 1} \int l(x)^2\,d\gamma(x) < \infty.
\end{align*}
\end{corollary}
\begin{proof}
It is well known that $\sigma < \infty$ and that for every $h\in \mathcal{H}$ one has $\| h \| \leq \sigma |h|_{\mathcal{H}}$, cf. \cite[Chapter 4]{Led96}, which gives the claim.
\end{proof}
Note that\footnote{This even holds more generally for Gaussian Radon measures on locally convex spaces, cf. \cite[3.6.1. Theorem]{Bog98}.} the closure $\bar{\mathcal{H}}$ of $\mathcal{H}$ in $B$ coincides with the support of $\gamma$. Therefore, we may (and will) assume from now on that $\mathcal{H}$ is densely embedded in $B$. Recall the definition of $d_{\mathcal{H}}$ given in \eqref{eqn:def_CM_metric}. The key to the proof of our main theorem is the following lemma, which shows that the metric $d_{\mathcal{H}}$ can be approximated by pseudometrics fulfilling the conditions of Proposition \ref{thm:main_frechet}.
\begin{lemma}\label{lemma:ex_ps_metrics}
There is a sequence of pseudometrics $(d_n)_{n\in \N}$ on $B$ with the following properties:
\begin{itemize}
\item[(i)] $(d_n)$ is nondecreasing and $d_n \nearrow d_{\mathcal{H}}$ pointwise for $n \to \infty$.
\item[(ii)] For every $n\in \N$ there is a constant $L_n$ such that $d_n(x,y) \leq L_n \|x-y\|$ for all $x,y \in B$.
\item[(iii)] $d_n(x + h,x) \leq |h|_{\mathcal{H}}$ for all $x\in B$, $h \in \mathcal{H}$ and $n\in\N$.
\item[(iv)] All $d_n$ are bounded.
\end{itemize}
\end{lemma}
\begin{proof}
Recall the following diagram:
\begin{align*}
B^* \hookrightarrow \mathcal{H}^* \leftrightarrow \mathcal{H} \hookrightarrow B.
\end{align*}
Since the inclusion $i \colon \mathcal{H} \hookrightarrow B$ is dense, also $i^* \colon B^* \hookrightarrow \mathcal{H}^*$ is injective and has a dense image. This implies that we can find a complete orthonormal system $(e^*_n)_{n\in \N}$ in $\mathcal{H}^*$ lying also in $i^*(B^*)$. If $R \colon \mathcal{H}^* \to \mathcal{H}$ denotes the Riesz identification map, the system $(e_n)_{n\in\N}$ defined as $e_n = R(e^*_n)$ is the dual system, i.e. $\langle e^*_m,e_n \rangle = \delta_{n,m}$. For such a system, we define maps $\pi_n \colon B \to \mathcal{H}$ as
\begin{align*}
\pi_n(x) = \sum_{k=1}^n \langle e^*_k,x \rangle\, e_k.
\end{align*}
Note that $\pi_n$ extends the orthonormal projection from $\mathcal{H}$ onto the $n$-dimensional subspace $\mathcal{H}^n = \operatorname{span}\langle e_1,\ldots, e_n \rangle$, i.e. for $h \in \mathcal{H}$ we have
\begin{align*}
\pi_n(h) = \sum_{k=1}^n \langle e_k,h \rangle_{\mathcal{H}}\, e_k.
\end{align*}
It follows that for $h\in \mathcal{H}$,
\begin{align*}
\pi_n(h) \to h
\end{align*}
in $\mathcal{H}$ for $n\to \infty$.
Set
\begin{align*}
\tilde{d}_n(x,y) &:= |\pi_n(x) - \pi_n(y)|_{\mathcal{H}} \quad \text{and}\\
d_n(x,y) &:= \tilde{d}_n(x,y) \wedge n.
\end{align*}
Clearly, all $d_n$ are bounded pseudometrics. If $x,y \in B$,
\begin{align*}
\tilde{d}_n(x,y)^2 =
\sum_{i = 1}^n |\langle e^*_i, x - y \rangle|^2
\leq \sum_{i = 1}^{n+1} |\langle e^*_i, x - y \rangle|^2
= \tilde{d}_{n+1}(x,y)^2
\end{align*}
which shows that $(d_n)$ is nondecreasing. Furthermore, if $x-y \in \mathcal{H}$,
\begin{align*}
\lim_{n\to\infty} d_n(x,y)^2 = \sum_{i = 1}^\infty |\langle e_i, x - y \rangle_{\mathcal{H}}|^2 = \left|x - y \right|^2_{\mathcal{H}} = d_{\mathcal{H}}(x,y)^2
\end{align*}
by Parseval's identity. Conversely, if $\lim_{n \to \infty} d_n(x,y) < \infty$ for some $x,y \in B$, the partial sums $\sum_{i=1}^n \langle e_i^*, x - y \rangle e_i$ form a Cauchy sequence in $\mathcal{H}$ and we may define
\begin{align*}
\sum_{i = 1}^\infty \langle e_i^*, x - y \rangle e_i =: z \in \mathcal{H}.
\end{align*}
This implies that
\begin{align*}
\sum_{i = 1}^\infty \langle e_i^*, x - y \rangle e_i = \sum_{i = 1}^\infty \langle e_i^*, z \rangle e_i
\end{align*}
and applying $e_k^*$ on both sides shows that $\langle e_k^*, x - y \rangle = \langle e_k^*, z \rangle$ holds for every $k\in \N$. Hence $x - y = z \in \mathcal{H}$ and we have shown property (i). If $x,y \in B$,
\begin{align*}
\tilde{d}_n(x,y)^2 = \sum_{i = 1}^n |\langle e_i^*,x-y \rangle |^2 \leq \| x-y\|^2 \sum_{i = 1}^n \|e_i^*\|^2
\end{align*}
which shows property (ii). For $x \in B$ and $h \in \mathcal{H}$,
\begin{align*}
\tilde{d}_n(x+h,x)^2 \leq \sum_{i = 1}^\infty |\langle e_i, h \rangle_{\mathcal{H}}|^2 = |h|_{\mathcal{H}}^2
\end{align*}
which finally gives property (iii).
\end{proof}
Now we can prove our main theorem.
\begin{theorem}\label{thm:talagrand_strong_form_linear}
For any $\nu \in P(B)$,
\begin{align*}
\inf_{\pi \in \Pi(\nu,\gamma)} \int_{B \times B} d_{\mathcal{H}}(x,y)^2\, d\pi(x,y) \leq 2 \, H(\nu\,|\,\gamma).
\end{align*}
\end{theorem}
\begin{proof}
Take $(d_n)$ as in Lemma \ref{lemma:ex_ps_metrics}. From Proposition \ref{thm:main_frechet},
\begin{align*}
\inf_{\pi \in \Pi(\nu,\gamma)} \int_{B \times B} d_{n}(x,y)^2\, d\pi(x,y) \leq 2 \, H(\nu\,|\,\gamma)
\end{align*}
holds for every $\nu \in P(B)$ and $n \in \N$. Sending $n \to \infty$ and using Lemma \ref{lemma:approx_cost_functions} shows the claim.
\end{proof}
\subsection{Rough paths spaces}
In the case of $B = C_0([0,T],\R^d)$, Theorem \ref{thm:talagrand_strong_form_linear} immediately generalizes to rough paths spaces. Let $\gamma$ be a Gaussian measure on $B$ with corresponding Cameron--Martin space $\mathcal{H}$. For the sake of simplicity, we will assume that $\mathcal{H}$ is continuously embedded in $C_0$, otherwise we could have used a smaller space lying in $C_0$ instead. Let $\mathcal{D}$ be a rough paths space (which could either be geometric or non-geometric, a $p$-variation or an $\alpha$-H\"older rough paths space, cf. \cite{LCL07}, \cite{FV10} or \cite{FH13} for a precise definition) and assume that there is a measurable map $S \colon C_0 \to \mathcal{D}$ such that $\pi_1 \circ S = \operatorname{Id}_{C_0}$ holds where $\pi_1 \colon \mathcal{D} \to C_0$ is the projection map. The map $S$ is called a \textit{lift map}. Set $\boldsymbol{\gamma} = \gamma\circ S^{-1}$. Abusing notation, we define $d_{\mathcal{H}} \colon \mathcal{D} \times \mathcal{D} \to \R\cup\{+\infty\}$ as
\begin{align*}
d_{\mathcal{H}}(\mathbf{x},\mathbf{y}) = \begin{cases}
|\pi_1(\mathbf{x}) - \pi_1(\mathbf{y})|_{\mathcal{H}} &\text{if } \pi_1(\mathbf{x}) - \pi_1(\mathbf{y}) \in \mathcal{H} \\
+ \infty &\text{otherwise.}
\end{cases}
\end{align*}
\begin{corollary}\label{corollary:talagrand_gaussian_rp_space}
For any $\boldsymbol{\nu} \in P(\mathcal{D})$,
\begin{align*}
\inf_{\boldsymbol{\pi} \in \Pi(\boldsymbol{\nu},\boldsymbol{\gamma})} \int_{\mathcal{D} \times \mathcal{D}} d_{\mathcal{H}}(\mathbf{x},\mathbf{y})^2 \, d\boldsymbol{\pi}(\mathbf{x},\mathbf{y}) \leq 2 \, H(\boldsymbol{\nu}\,|\,\boldsymbol{\gamma}).
\end{align*}
\end{corollary}
\begin{proof}
By definition, $d_{\mathcal{H}}(S(x),S(y)) = d_{\mathcal{H}}(x,y)$, hence $S$ is (in particular) 1-Lipschitz and the result follows from Theorem \ref{thm:talagrand_strong_form_linear} and Lemma \ref{lemma:transp_ineq_trans_under_loc_lip_maps}.
\end{proof}
\section{Applications to diffusions}\label{sec:appl_diff}
\subsection{SDEs with additive noise}\label{sec:sde_add_noise}
In this section we will consider SDEs with additive noise. Let $Y \colon [0,T] \to \R^d$ be the solution to
\begin{align}\label{eq:sde_additive_noise}
dY_t = dX_t + b(Y_t)\,dt,\quad Y_0 = \xi \in \R^d
\end{align}
where $X \colon [0,T] \to \R^d$ is a $d$-dimensional Gaussian process with continuous sample paths and $b \in C^1(\R^d,\R^d)$. It is well known that \eqref{eq:sde_additive_noise} has a unique solution given by $Y = I_b(X,\xi)$ where $I_b(\cdot,\xi) \colon C_0([0,T];\R^d) \to C_{\xi}([0,T];\R^d)$ and $I_b(x,\xi) = y$ is defined as the unique solution to
\begin{align}
y(t) = \xi + x(t) + \int_0^t b(y(s))\, ds.
\end{align}
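Existence and uniqueness of $I_b(x,\xi)$ follow from a standard fixed point argument. For instance, uniqueness can be sketched as follows under the (stronger) assumption that $b$ is globally Lipschitz with constant $L$: if $y$ and $\tilde{y}$ both solve the equation, then
\begin{align*}
|y(t) - \tilde{y}(t)| \leq L \int_0^t |y(s) - \tilde{y}(s)|\, ds \quad \text{for all } t \in [0,T],
\end{align*}
and Gronwall's Lemma forces $y = \tilde{y}$.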
\begin{lemma}\label{lemma:add_noise_cont_p_var}
Assume that $b$ is Lipschitz continuous with Lipschitz constant $L>0$. Then for every $q \geq 1$,
\begin{align*}
\| I_b(x + h,\xi) - I_b(x,\xi) \|_{q-\text{var}} \leq e^{LT} \| h \|_{q-\text{var}}
\end{align*}
for every $x \in C_0 ([0,T],\R^d)$ and $h \in C_0^{q-\text{var}} ([0,T],\R^d)$.
\end{lemma}
\begin{proof}
Set $y = I_b(x,\xi)$ and $y_h = I_b(x + h,\xi)$. Let $t_j \leq t_{j+1}$. Then,
\begin{align*}
\left| y_h(t_{j+1}) - y_h(t_{j}) - (y(t_{j+1}) - y(t_{j})) \right| &\leq |h(t_{j+1}) - h(t_{j})| + \left| \int_{t_j}^{t_{j+1}} b(y_h(s)) - b(y(s))\, ds \right|\\
&\leq |h(t_{j+1}) - h(t_{j})| + L \int_{t_j}^{t_{j+1}} |y_h(s) - y(s)|\, ds.
\end{align*}
Hence
\begin{align*}
\| y_h - y\|_{q-\text{var};[0,t]} &\leq \| h\|_{q-\text{var};[0,t]} + L \int_0^t |y_h(s) - y(s)|\, ds \\
&\leq \| h\|_{q-\text{var};[0,t]} + L \int_0^t \|y_h - y\|_{q-\text{var};[0,s]}\, ds
\end{align*}
for every $t\in [0,T]$. The first estimate shows in particular that the left-hand side is finite; hence the right-hand side of the second estimate is finite as well. Applying Gronwall's Lemma shows that
\begin{align*}
\| y_h - y\|_{q-\text{var}} \leq \| h\|_{q-\text{var}} e^{LT}.
\end{align*}
\end{proof}
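\begin{remark}
For the reader's convenience, we record the form of Gronwall's Lemma used in the preceding proof (a standard fact, stated here in our notation): if $u \colon [0,T] \to [0,\infty)$ is bounded, measurable and satisfies
\begin{align*}
u(t) \leq a + L \int_0^t u(s)\, ds \quad \text{for all } t \in [0,T]
\end{align*}
for some constants $a, L \geq 0$, then $u(t) \leq a\, e^{Lt}$ for all $t \in [0,T]$. Above, it is applied with $u(t) = \| y_h - y\|_{q-\text{var};[0,t]}$ and $a = \|h\|_{q-\text{var}}$, using the monotonicity $\|h\|_{q-\text{var};[0,t]} \leq \|h\|_{q-\text{var}}$.
\end{remark}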
Recall the definition of (fractional) Sobolev spaces: If $h \colon [0,T] \to \R^d$, set
\begin{align*}
|h|_{W^{\delta,p};[s,t]} := \left( \iint_{[s,t]^2} \frac{|h(v) - h(u)|^p}{|v - u|^{1 + \delta p}} \, du \, dv \right)^{\frac{1}{p}}
\end{align*}
for $\delta \in (0,1)$, $p \in (1, \infty)$ and
\begin{align*}
|h|_{W^{1,p};[s,t]} := \left( \int_s^t |\dot{h}(s)|^p \, ds \right)^{\frac{1}{p}}
\end{align*}
for $p \in (1,\infty)$ where $\dot{h}$ denotes the weak derivative of $h$. Set $\| h\|_{W^{\delta,p}} := \| h\|_{L^p;[0,T]} + |h|_{W^{\delta,p};[0,T]}$ and $d_{W^{\delta,p}}(x,y) := \|x - y\|_{W^{\delta,p}}$. The space $W^{\delta,p}$ consists of all paths $h$ for which $\| h\|_{W^{\delta,p}} < \infty$ and $W^{\delta,p}_{0} = \{h \in W^{\delta,p}\,:\, h(0) = 0 \}$. Both spaces are Banach spaces.
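As an aside (not needed in the sequel, but explaining the restriction $p > 1/\delta$ in what follows), a standard consequence of the Garsia--Rodemich--Rumsey inequality is that for $\delta - 1/p > 0$, every $h \in W^{\delta,p}$ has a continuous modification satisfying
\begin{align*}
|h(t) - h(s)| \leq C_{\delta,p} \, |h|_{W^{\delta,p};[0,T]} \, |t - s|^{\delta - \frac{1}{p}}, \quad s,t \in [0,T],
\end{align*}
i.e. $W^{\delta,p}$ embeds into the space of $(\delta - 1/p)$-H\"older continuous paths.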
\begin{lemma}\label{lemma:add_noise_cont_besov}
Assume that $b$ is Lipschitz continuous with Lipschitz constant $L>0$. Then for every $\delta \in (0,1]$ and $p \in (1/\delta, \infty)$, there is a constant $C = C(\delta, p,L)$ such that
\begin{align*}
\| I_b(x + h,\xi) - I_b(x,\xi) \|_{W^{\delta,p}} \leq C \| h \|_{W^{\delta,p}}
\end{align*}
for every $x \in C_0 ([0,T],\R^d)$ and $h \in W^{\delta,p}$.
\end{lemma}
\begin{proof}
As before, set $y = I_b(x,\xi)$ and $y_h = I_b(x + h,\xi)$. It is easy to see that
\begin{align}\label{eqn:lem_frac_sob_1}
\| y_h - y \|_{L^p;[0,t]}^p \leq 2^{p-1} \|h\|_{L^p;[0,t]}^p + 2^{p-1} L^p \int_0^t s^{p-1}\| y_h - y \|_{L^p;[0,s]}^p\,ds.
\end{align}
Assume $\delta = 1$. Then
\begin{align}
| y_h - y |_{W^{1,p};[0,t]}^p &\leq 2^{p-1} | h |_{W^{1,p};[0,t]}^p + 2^{p-1} L^p \int_0^t |y_h(s) - y(s)|^p\, ds \nonumber\\
&\leq 2^{p-1} | h |_{W^{1,p};[0,t]}^p + 2^{p-1} L^p \int_0^t \|y_h - y\|_{1-\text{var};[0,s]}^p\, ds \nonumber\\
&\leq 2^{p-1} | h |_{W^{1,p};[0,t]}^p + 2^{p-1} L^p \int_0^t s^{p - 1} |y_h - y|_{W^{1,p};[0,s]}^p \, ds \label{eqn:lem_frac_sob_2}
\end{align}
where we used the embedding from \cite[Theorem 1]{FV06} in the last estimate. For $\delta \in (0,1)$,
\begin{align*}
| y_h - y |_{W^{\delta,p};[0,t]}^p &\leq 2^{p-1} |h|_{W^{\delta,p};[0,t]}^p + 2^{p-1} L^p \iint_{[0,t]^2} \frac{ \left( \int_u^v |y_h(s) - y(s)|\, ds \right)^p}{|v-u|^{1 + \delta p}}\, du\, dv \\
&\leq 2^{p-1} |h|_{W^{\delta,p};[0,t]}^p + 2^{p-1} L^p \iint_{[0,t]^2} \frac{1}{|v - u|^{1 + \delta p - p/q'}}\, du\, dv \left(\int_0^t |y_h(s) - y(s)|^q\, ds \right)^{\frac{p}{q}}
\end{align*}
where we used H\"older's inequality with $\frac{1}{q} + \frac{1}{q'} = 1$. Choosing $q$ large enough such that $\delta < 1 / q'$ ensures that the double integral is finite and we obtain
\begin{align*}
| y_h - y |_{W^{\delta,p};[0,t]}^p &\leq 2^{p-1} |h|_{W^{\delta,p};[0,t]}^p + C \left(\int_0^t \|y_h - y\|_{1/\delta - \text{var};[0,s]}^q\, ds \right)^{\frac{p}{q}} \\
&\leq 2^{p-1} |h|_{W^{\delta,p};[0,t]}^p + C \left( \int_0^t s^{\delta q - q/p} |y_h - y|_{W^{\delta,p};[0,s]}^q\, ds \right)^{\frac{p}{q}}
\end{align*}
using \cite[Theorem 2]{FV06} in the second estimate, thus
\begin{align}\label{eqn:lem_frac_sob_3}
| y_h - y |_{W^{\delta,p};[0,t]}^q &\leq C |h|_{W^{\delta,p};[0,t]}^q + C \int_0^t s^{\delta q - q/p} |y_h - y|_{W^{\delta,p};[0,s]}^q\, ds
\end{align}
by making the constant $C$ larger if necessary. Combining the estimates \eqref{eqn:lem_frac_sob_1}, \eqref{eqn:lem_frac_sob_2} and \eqref{eqn:lem_frac_sob_3} with Gronwall's Lemma gives the claim.
\end{proof}
Now let $\gamma$ be the law of $X$ on $C_0 = C_0([0,T],\R^d)$. As usual, $\mathcal{H}$ denotes the corresponding Cameron--Martin space and we assume that $\mathcal{H}$ is continuously embedded in $C_0$. Set $C_{\xi} = C_{\xi}([0,T];\R^d)$.
\begin{theorem}\label{thm:transp_cost_ineq_add_noise_case}
Let $Y$ be the solution to the SDE \eqref{eq:sde_additive_noise} and let $\mu$ be the law of $Y$. Assume that $b$ is Lipschitz continuous with Lipschitz constant $L$.
\begin{itemize}
\item[(i)] If there is a continuous embedding
\begin{align}\label{eqn:CM_embedding_besov_add_noise}
\iota \colon \mathcal{H} \hookrightarrow W^{\delta,p}
\end{align}
for some $\delta \in (0,1]$ and $p \in (1/\delta,\infty)$, then there is a constant $C = C(\delta,p,L)$ such that for every $\nu \in P(C_{\xi})$
\begin{align}\label{eqn:conc_ineq_besov_add_noise}
\inf_{\pi \in \Pi(\nu,\mu)} \int_{C_{\xi} \times C_{\xi}} d_{W^{\delta,p}}(x,y)^2\, d\pi(x,y) \leq C \|\iota\|_{\mathcal{H} \hookrightarrow W^{\delta,p}}^2 \, H(\nu\,|\,\mu).
\end{align}
\item[(ii)] If there is a continuous embedding
\begin{align}\label{eqn:CM_embedding_q-var_add_noise}
\iota \colon \mathcal{H} \hookrightarrow C^{q-\text{var}}
\end{align}
for some $q \in [1,\infty)$, then for every $\nu \in P(C_{\xi})$
\begin{align}\label{eqn:conc_ineq_q-var_add_noise}
\inf_{\pi \in \Pi(\nu,\mu)} \int_{C_{\xi} \times C_{\xi}} d_{q-\text{var}}(x,y)^2\, d\pi(x,y) \leq 2 e^{2LT} \|\iota\|_{\mathcal{H} \hookrightarrow C^{q-\text{var}}}^2 \, H(\nu\,|\,\mu).
\end{align}
\end{itemize}
\end{theorem}
\begin{proof}
Follows from Theorem \ref{thm:talagrand_strong_form_linear} and Lemma \ref{lemma:transp_ineq_trans_under_loc_lip_maps}, combined with Lemma \ref{lemma:add_noise_cont_besov} for part (i) and Lemma \ref{lemma:add_noise_cont_p_var} for part (ii).
\end{proof}
\begin{remark}
Embeddings of the form \eqref{eqn:CM_embedding_q-var_add_noise} play a crucial role in Gaussian rough paths theory and we will revisit them also in the next section. Sufficient conditions for such embeddings are given in \cite{FGGR13}.
\end{remark}
\begin{example}
We consider several Gaussian processes as driving signals for \eqref{eq:sde_additive_noise}:
\begin{enumerate}
\item In the case of a Brownian motion, $\mathcal{H} = W^{1,2}_0$ and \eqref{eqn:conc_ineq_besov_add_noise} holds. This was already shown in \cite[Proposition 5.4]{DGW04}.
\item Let $X =B^H$ be a fractional Brownian motion with Hurst parameter $H \in (0,1/2)$. Then it is known (cf. \cite[Theorem 3]{FV06}) that $\mathcal{H}$ is compactly embedded in $W^{\delta,2}$ for every $\delta < H + 1/2$, thus \eqref{eqn:conc_ineq_besov_add_noise} holds for every $\delta \in (1/2,H + 1/2)$. In \cite[Theorem 1 and Example 2.7]{FGGR13} it is shown that \eqref{eqn:CM_embedding_q-var_add_noise} holds with the choice $q = \frac{1}{H + 1/2}$, hence we may conclude \eqref{eqn:conc_ineq_q-var_add_noise}.
\item Many more examples for \eqref{eqn:conc_ineq_q-var_add_noise} (driving signals might be bifractional Brownian motion, Volterra processes, random Fourier series...) may be derived from \cite[Theorem 1 and Examples 2.3 -- 2.16]{FGGR13}.
\end{enumerate}
Note that in all these cases, the (fractional) Sobolev norms of the solution paths to \eqref{eq:sde_additive_noise} are in general \emph{not} finite, and likewise the solution paths will in general not have finite $q$-variation almost surely.
\end{example}
\subsection{SDEs with multiplicative noise}\label{sec:sde_mult_noise}
Next we will consider SDEs with multiplicative noise. Let $Y \colon [0,T] \to \R^m$ be the solution to
\begin{align}\label{eqn:Strat_SDE_mult_noise_case}
dY_t = f_0(Y_t)\,dt + \sum_{i = 1}^d f_i(Y_t)\,\circ dW^i_t,\quad Y_0 = \xi \in \R^m
\end{align}
where $W = (W^1,\ldots,W^d)$ is a $d$-dimensional Brownian motion and $f_0,f_1,\ldots,f_d \colon \R^m \to \R^m$. In contrast to the additive noise case, the solution map $I_f(\cdot,\xi) \colon C_0([0,T],\R^d) \to C_{\xi}([0,T],\R^m)$ which assigns to each Brownian path the solution path to the SDE \eqref{eqn:Strat_SDE_mult_noise_case} will in general \emph{not} be \mbox{(Lipschitz-) continuous}. This issue can be overcome using Lyons' rough paths theory. Indeed, rough paths theory shows that there is a Polish space $\mathcal{D}^{0,p}_g$ such that the diagram
\begin{align}\label{eqn:comm_diag}
\begin{tikzpicture}[node distance=2cm, auto]
\node (C) {$\mathcal{D}^{0,p}_g$};
\node (P) [below of=C] {$C_0$};
\node (Ai) [right of=P] {$C_{\xi}$};
\draw[->] (C) to node {$\mathbf{I}_f(\cdot,\xi)$} (Ai);
\draw[->] (P) to node [swap] {$S$} (C);
\draw[->] (P) to node [swap] {$I_f(\cdot,\xi)$} (Ai);
\end{tikzpicture}
\end{align}
commutes almost surely and the map $\mathbf{I}_f(\cdot,\xi) \colon \mathcal{D}^{0,p}_g \to C_{\xi}$ is locally Lipschitz continuous. The map $S \colon C_0 \to \mathcal{D}^{0,p}_g$ is constructed w.r.t. the Wiener measure on the path space $C_0$. Using a pathwise approach, one is not restricted to Wiener measure and it is indeed possible to construct lift maps $S_{\gamma}$ w.r.t. more general Gaussian measures $\gamma$ (cf. \cite{CQ02}, \cite{FV10-2}). In this case, one \emph{defines} $I_f(\cdot,\xi) := \mathbf{I}_f(S(\cdot),\xi)$ which gives rise to solutions of SDEs of the form
\begin{align}\label{eqn:mult_case_gaussian_sde}
dY_t = \sum_{i = 1}^d f_i(Y_t)\,\circ dX^i_t,\quad Y_0 = \xi \in \R^m
\end{align}
where $X = (X^1,\ldots,X^d)$ is a Gaussian process (for simplicity, we drop the drift term here; it causes no additional difficulties). Our key result will be that for \textit{Brownian-like} Gaussian processes (we will be more precise later), we have an estimate of the form
\begin{align*}
\| I_f(x,\xi) - I_f(y,\xi) \|_{p-\text{var}} \leq L(y) d_{\mathcal{H}}(x,y)
\end{align*}
almost surely for every $x,y \in C_0$ where $L$ is a random variable which possesses every moment w.r.t. the Gaussian measure $\gamma$. Together with Lemma \ref{lemma:transp_ineq_trans_under_loc_lip_maps}, this immediately yields our main result which is stated in Theorem \ref{thm:main_result_multiplicative_sdes}.
We will not attempt to give an overview of rough paths theory since we use it merely as a tool; instead, we refer to the monographs \cite{LQ02}, \cite{LCL07}, \cite{FV10} and \cite{FH13}. The terms and notation we use coincide with those of \cite{FV10}, with the addition that we use the symbol $\mathcal{D}_g^{0,p}$ for the geometric $p$-variation rough paths space $C^{0,p-\text{var}}_0([0,T];G^{[p]}(\R^d))$.
We start with some deterministic estimates for rough paths. If $\omega$ is a control function and $\alpha > 0$, recall the definition of $N_{\alpha}(\omega;[s,t])$ resp. of $N_{\alpha}(\mathbf{x};[s,t])$ for geometric rough paths $\mathbf{x}$ (\cite{CLL13}, \cite{FR12b}). The next proposition is a version of \cite[Theorem 4]{BFRS13} for the $p$-variation metric.
\begin{proposition}
\label{prop:loc_lip_integrability_p-var_top} Let $\mathbf{x}^1$ and $\mathbf{x}^2$ be weakly geometric $p$-rough paths for some $p \geq 1$. Consider the rough differential equations (RDEs)
\begin{equation*}
dy_{t}^{j}=f^{j}(y_{t}^{j})\,d\mathbf{x}_{t}^{j};\quad y_{S}^{j}\in \R^{m}
\end{equation*}
for $j=1,2$ on some interval $[S,T]$ where $f^{1} = (f^1_i)_{i = 1,\ldots,m}$ and $f^{2} = (f^2_i)_{i=1,\ldots,m}$ are two families of vector fields, $\theta >p$ and $\beta$ is a bound on\footnote{We mean Lipschitz in the sense of Stein, cf. \cite[Chapter 10]{FV10}.} $|f^{1}|_{\text{Lip}^{\theta }}$ and $|f^{2}|_{\text{Lip}^{\theta }}$. Then for every $\alpha >0$ there is a constant $C=C(\theta ,p,\beta ,\alpha )$ such that
\begin{align*}
d_{p-\text{var};[S,T]}( y^{1},y^{2}) \leq\ &C\left[ |y_{S}^{1}-y_{S}^{2}|+\left\vert f^{1}-f^{2}\right\vert _{\text{Lip}^{\theta -1}}+\rho _{p-\text{var};[S,T]}(\mathbf{x}^{1},\mathbf{x}^{2})\right] \\
&\times \left( \|\mathbf{x}^1 \|_{p-\text{var};[S,T]} + \|\mathbf{x}^2 \|_{p-\text{var};[S,T]} + 1 \right) \\
&\times \exp \left\{ C\left( N_{\alpha}(\mathbf{x}^{1};[S,T])+N_{\alpha}(\mathbf{x}^{2};[S,T])+1\right) \right\}
\end{align*}
holds.
\end{proposition}
\begin{proof}
The proof follows \cite[Lemma 8 and Theorem 4]{BFRS13}. Let $\omega$ be a control function such that $\sup_{s<t} \frac{\|\mathbf{x}^j_{s,t} \|}{\omega(s,t)^{1/p}} \leq 1$ for $j = 1,2$. Set $\bar{y} := y^1 - y^2$ and
\begin{align*}
\kappa :=\frac{\left\vert f^{1}-f^{2}\right\vert _{\text{Lip}^{\theta -1}}}{\beta }+\rho _{p-\omega ;[S,T]}(\mathbf{x}^{1},\mathbf{x}^{2}).
\end{align*}
We claim that there is a constant $C = C(\theta,p)$ such that for every $s<t$,
\begin{align}\label{eqn:claim_1_prop_loc_lip_p-var}
\|\bar{y}\|_{p-\text{var};[s,t]} \leq C \beta \omega(s,t)^{\frac{1}{p}} \left(\|\bar{y}\|_{\infty;[s,t]} + \kappa \right) \exp \left\{ C \beta^p(N_{\alpha}(\omega;[s,t]) + 1) \right\}.
\end{align}
Indeed, as it was shown in the proof of \cite[Lemma 8]{BFRS13},
\begin{align*}
2^{1-p} |\bar{y}_{s,v}|^p \leq C \beta^p \omega(u,v) (|\bar{y}_u| + \kappa)^p \exp \left\{ C \beta^p \omega(u,v) \right\} + |\bar{y}_{s,u}|^p
\end{align*}
for every $[u,v] \subseteq [s,t]$. Thus if $s = \tau_0 < \ldots < \tau_M < \tau_{M+1} = v$,
\begin{align*}
|\bar{y}_{s,v}|^p \leq 2^{(M+1)(p-1)} C \beta^p \omega(s,v) (\|\bar{y}\|_{\infty;[s,v]} + \kappa)^p \exp \left\{ C \beta^p \sum_{i = 0}^M \omega(\tau_i,\tau_{i+1}) \right\}
\end{align*}
for every $s \leq v \leq t$. Choosing $\tau_0 = s$, $\tau_{i+1} = \inf\{t \geq \tau_i \,:\, \omega(\tau_i,t) \geq \alpha \} \wedge v$ gives
\begin{align*}
|\bar{y}_{s,v}|^p \leq C \beta^p \omega(s,v) (\|\bar{y}\|_{\infty;[s,v]} + \kappa)^p \exp \left\{ C \beta^p (N_{\alpha}(\omega;[s,v]) + 1) \right\}
\end{align*}
for every $s \leq v \leq t$ and \eqref{eqn:claim_1_prop_loc_lip_p-var} follows. Now we can use the conclusion from \cite[Lemma 8]{BFRS13} to see that
\begin{align*}
\|\bar{y}\|_{p-\text{var};[s,t]} \leq C \beta \omega(s,t)^{\frac{1}{p}} \left(|\bar{y}_s| + \kappa \right) \exp \left\{ C \beta^p(N_{\alpha}(\omega;[s,t]) + 1) \right\}
\end{align*}
holds for every $s<t$. We conclude as in \cite[Theorem 4]{BFRS13}.
\end{proof}
\begin{lemma}\label{lemma:estimates_diff_young_shift_inhom_rp_metric}
Let $\mathbf{x}$ be a weakly geometric $p$-rough path with $p \in [2,3)$ and $h$ a path of finite $q$-variation with $1 \leq q \leq p$ and $\frac{1}{p} + \frac{1}{q} > 1$. Then there is a constant $C = C(p,q)$ such that
\begin{align*}
\rho_{p-\text{var};[S,T]}(T_h(\mathbf{x}),\mathbf{x}) \leq C(1 \vee \|x\|_{p-\text{var};[S,T]})(\| h\|_{q-\text{var};[S,T]} + \| h\|_{q-\text{var};[S,T]}^2).
\end{align*}
\end{lemma}
\begin{proof}
Recall that
\begin{align*}
\rho_{p-\text{var};[S,T]}(\mathbf{x},\mathbf{y}) = \sup_{D \in \mathcal{P}([S,T])} \left( \sum_{t_i \in D} |x_{t_i,t_{i+1}} - y_{t_i,t_{i+1}}|^p \right)^{\frac{1}{p}} + \sup_{D \in \mathcal{P}([S,T])} \left( \sum_{t_i \in D} |\mathbf{x}^2_{t_i,t_{i+1}} - \mathbf{y}^2_{t_i,t_{i+1}}|^{p/2} \right)^{\frac{2}{p}}.
\end{align*}
Therefore, we immediately obtain
\begin{align*}
\rho_{p-\text{var};[S,T]}(T_h(\mathbf{x}),\mathbf{x}) \leq \| h\|_{q-\text{var};[S,T]} + \sup_{D \in \mathcal{P}([S,T])} \left( \sum_{t_i \in D} |T_h(\mathbf{x})^2_{t_i,t_{i+1}} - \mathbf{x}^2_{t_i,t_{i+1}}|^{p/2} \right)^{\frac{2}{p}}.
\end{align*}
Concerning the second term, fix some $D \in \mathcal{P}([S,T])$. We have
\begin{align*}
\sum_{t_i \in D} |T_h(\mathbf{x})^2_{t_i,t_{i+1}} - \mathbf{x}_{t_i,t_{i+1}}^2|^{p/2} = \sum_{t_i \in D} \left|\int_{\Delta_{{t_i},{t_{i+1}}}^2} d(x+h) \otimes d(x+h) - \int_{\Delta_{{t_i},{t_{i+1}}}^2} dx \otimes dx \right|^{p/2}
\end{align*}
and
\begin{align*}
&\left|\int_{\Delta_{{t_i},{t_{i+1}}}^2} d(x+h) \otimes d(x+h) - \int_{\Delta_{{t_i},{t_{i+1}}}^2} dx \otimes dx \right|\\
\leq\ &\left| \int_{\Delta_{t_i,t_{i+1}}^2} dh \otimes d(x+h) \right| + \left| \int_{\Delta_{t_i,t_{i+1}}^2} dx \otimes dh \right| \\
\leq\ &C_{p,q} \|h\|_{q-\text{var};[t_i,t_{i+1}]} \left( \|x+h\|_{p-\text{var};[t_i,t_{i+1}]} + \|x\|_{p-\text{var};[t_i,t_{i+1}]} \right)
\end{align*}
by the estimates for the Young integral. From H\"older's inequality,
\begin{align*}
\sum_{t_i \in D} |T_h(\mathbf{x})^2_{t_i,t_{i+1}} - \mathbf{x}_{t_i,t_{i+1}}^2|^{p/2} &\leq C_{p,q} \left( \sum_{t_i} \|h\|_{q-\text{var};[t_i,t_{i+1}]}^q \right)^{\frac{p}{2q}} \left( \sum_{t_i} \|x+h\|_{p-\text{var};[t_i,t_{i+1}]}^p + \|x\|_{p-\text{var};[t_i,t_{i+1}]}^p \right)^{\frac{1}{2}} \\
&\leq C_{p,q} \|h\|_{q-\text{var};[S,T]}^{p/2}(\|x+h\|_{p-\text{var};[S,T]}^{p/2} + \|x\|_{p-\text{var};[S,T]}^{p/2} )
\end{align*}
and the result follows from the triangle inequality for the $p$-variation seminorm and standard estimates.
\end{proof}
\begin{lemma}\label{lemma:dist_ito_lyons_sup}
Let $\mathbf{x}^1 := \mathbf{x}$ and $\mathbf{x}^2 := T_h(\mathbf{x})$ where $\mathbf{x}$ is a weakly geometric $p$-rough path for some $p \in [1,3)$ and $h$ is a path of finite $q$-variation with $\frac{1}{p} + \frac{1}{q} > 1$. Consider the solutions $y^1$ and $y^2$ to the RDEs as in Proposition \ref{prop:loc_lip_integrability_p-var_top} with $f^1 = f^2$ and $y^1_S = y^2_S$.
Then
\begin{align*}
d_{p-\text{var};[S,T]}(y^1,y^2) \leq C \exp \{ C (N_1(\mathbf{x};[S,T]) + 1) \} (\|h\|_{q-\text{var};[S,T]} \vee \|h\|_{q-\text{var};[S,T]}^q)
\end{align*}
where $C$ is a constant depending on $p,q,\theta$ and $\beta$.
\end{lemma}
\begin{proof}
We will only consider the case $p \in [2,3)$; the case $p \in [1,2)$ is similar (and easier). Assume first that $\|h\|_{q-\text{var};[S,T]} \leq 1$. We claim that
\begin{align}\label{eqn:claim_1_small_h}
d_{p-\text{var};[S,T]}(y^1,y^2) \leq C \exp\left\{C(N_{1}(\mathbf{x};[S,T]) + 1)\right\} \|h\|_{q-\text{var};[S,T]}
\end{align}
holds for some constant $C$. Indeed, from Proposition \ref{prop:loc_lip_integrability_p-var_top} we know that for every $\alpha > 0$,
\begin{align*}
d_{p-\text{var};[S,T]}(y^1,y^2) \leq &C ( \|\mathbf{x} \|_{p-\text{var};[S,T]} + \| T_h(\mathbf{x}) \|_{p-\text{var};[S,T]} + 1 )\\
&\times \exp\left\{C(N_{\alpha}(\mathbf{x};[S,T]) + N_{\alpha}(T_h(\mathbf{x});[S,T]) + 1)\right\} \rho_{p-\text{var};[S,T]}(\mathbf{x},T_h(\mathbf{x})).
\end{align*}
Using Lemma \ref{lemma:growth_random_meas_cm_shift}, \cite[Theorem 9.33]{FV10} and the assumption $\|h\|_{q-\text{var};[S,T]} \leq 1$ shows that
\begin{align*}
d_{p-\text{var};[S,T]}(y^1,y^2) \leq C ( \|\mathbf{x} \|_{p-\text{var};[S,T]} + 1 )\exp\left\{C(N_{1}(\mathbf{x};[S,T]) + 1)\right\} \rho_{p-\text{var};[S,T]}(\mathbf{x},T_h(\mathbf{x}))
\end{align*}
for a larger constant $C$ and $\alpha$ chosen appropriately. Applying Lemma \ref{lemma:estimates_diff_young_shift_inhom_rp_metric} shows \eqref{eqn:claim_1_small_h}, using the estimate $\|\mathbf{x}\|_{p-\text{var};[S,T]} \leq N_1(\mathbf{x};[S,T]) + 1$ which was proven in \cite[Lemma 4]{FR12b}.
Now let $\|h\|_{q-\text{var};[S,T]} \geq 1$. In this case,
\begin{align*}
d_{p-\text{var};[S,T]}(y^1,y^2) &\leq \|y^1\|_{p-\text{var};[S,T]} + \|y^2\|_{p-\text{var};[S,T]} \\
&\leq C(N_{\alpha}(\mathbf{x};[S,T]) + N_{\alpha}(T_h(\mathbf{x});[S,T]) + 1)
\end{align*}
using the deterministic estimates for the It\=o--Lyons map proven in \cite{FR12b}. With Lemma \ref{lemma:growth_random_meas_cm_shift}, we conclude that
\begin{align*}
d_{p-\text{var};[S,T]}(y^1,y^2) \leq C(N_{1}(\mathbf{x};[S,T])+1) \|h\|_{q-\text{var};[S,T]}^q
\end{align*}
with $\alpha$ chosen appropriately and a larger constant $C$.
\end{proof}
We will now make further assumptions on our lift map $S$ and on the Cameron--Martin space $\mathcal{H}$. Suppose that
\begin{itemize}
\item[(i)] There is a continuous embedding
\begin{align*}
\iota \colon \mathcal{H} \hookrightarrow C^{q-\text{var}}([0,T],\R^d);\quad 1\leq q \leq p
\end{align*}
with $\frac{1}{p} + \frac{1}{q} > 1$ (note that this implies $1 \leq q < 2$ when $p \geq 2$).
\item[(ii)] The set
\begin{align*}
\left\{ x \in C_0 \ |\ S(x + h) = T_h(S(x)) \text{ for all } h \in \mathcal{H} \right\} =: \tilde{C}_0
\end{align*}
has full measure.
\end{itemize}
\begin{remark}
Assumptions (i) and (ii) are trivially fulfilled when $p \in [1,2)$. Both assumptions are fulfilled for lifts in the sense of Friz--Victoir (cf. \cite{FV10-2}, \cite[Proposition 15.7 and Lemma 15.58]{FV10} and \cite[Theorem 1]{FGGR13}). In particular, they hold for the Stratonovich lift of Brownian motion with $q = 1$.
\end{remark}
Under these two assumptions, the following proposition is an immediate consequence of Lemma \ref{lemma:dist_ito_lyons_sup}.
\begin{proposition}\label{cor:loc_lip_mult_noise}
Consider the RDEs as in Proposition \ref{prop:loc_lip_integrability_p-var_top} with $p \in [1,3)$, $f^1 = f^2$, $y^1_0 = y^2_0$ and assume that $S \colon C_0 \to \mathcal{D}^{0,p}_g$ is a lift map such that the diagram \eqref{eqn:comm_diag} commutes. Then
\begin{align*}
d_{p-\text{var}}(y^1,y^2) \leq L(x^1) (\|\iota\|_{\mathcal{H} \hookrightarrow C^{q-\text{var}}} \vee \|\iota\|_{\mathcal{H} \hookrightarrow C^{q-\text{var}}}^q) (d_{\mathcal{H}}(x^1,x^2) \vee d_{\mathcal{H}}(x^1,x^2)^q)
\end{align*}
for all $x^1,x^2 \in C_0$ where
\begin{align*}
L(x) = C \exp \big\{ C (N_1(S(x);[0,T]) + 1) \big\}
\end{align*}
and $C$ is a constant depending on $p,q,\theta$ and $\beta$.
\end{proposition}
The next theorem is our main result for the multiplicative case.
\begin{theorem}\label{thm:main_result_multiplicative_sdes}
Let $Y$ be the solution to the SDE \eqref{eqn:mult_case_gaussian_sde} defined pathwise via the diagram \eqref{eqn:comm_diag} and let $\mu$ be the law of $Y$. Assume that $q = 1$. Then for every $\varepsilon > 0$ there is a constant $C$ depending on $\varepsilon$, $p$, $\theta$ and $\beta$ such that for every $\nu \in P(C_{\xi})$,
\begin{align*}
\inf_{\pi \in \Pi(\nu,\mu)} \left(\int_{C_{\xi} \times C_{\xi}} d_{p-\text{var}}(x,y)^{2 - \varepsilon}\, d\pi(x,y)\right)^{\frac{1}{2 - \varepsilon}} \leq C \|\iota\|_{\mathcal{H} \hookrightarrow C^{1-\text{var}}} \sqrt{H(\nu\,|\,\mu)}.
\end{align*}
\end{theorem}
\begin{proof}
From \cite[Theorem 6.3]{CLL13} we know that $N_1(S(\cdot);[0,T])$ has Gaussian tails w.r.t. $\gamma$, hence $\| \exp \{ C (N_1(S(\cdot);[0,T])+1) \}\|_{L^q(\gamma)} < \infty$ for every $q \in [1,\infty)$. The assertion follows from Theorem \ref{thm:talagrand_strong_form_linear}, Proposition \ref{cor:loc_lip_mult_noise} and Lemma \ref{lemma:transp_ineq_trans_under_loc_lip_maps}.
\end{proof}
\begin{remark}
\begin{itemize}
\item[(i)] The transportation--cost inequality in Theorem \ref{thm:main_result_multiplicative_sdes} holds in particular for the Stratonovich solution to a multiplicative SDE driven by a Brownian motion. As already mentioned in the introduction, this extends the known results in two ways. First, it is seen to hold for the (stronger) $p$-variation metric. Second, it holds for the parameter $2 -\varepsilon$, any $\varepsilon > 0$, without the assumption that $Y$ is contracting in the $L^2$-sense as in \cite{WZ04}.
\item[(ii)] Many more Gaussian processes fulfill assumption (i) and (ii) for $q = 1$ and therefore may be considered as driving signals in Theorem \ref{thm:main_result_multiplicative_sdes}, e.g. fractional Brownian motions with Hurst parameter $H \geq \frac{1}{2}$ (in which case we extend results from \cite{Sau12}), Brownian bridges, Ornstein--Uhlenbeck processes, bifractional Brownian motions and random Fourier series; see \cite{FGGR13} for a detailed account and even more examples.
\end{itemize}
\end{remark}
\section{Tail estimates for functionals}\label{sec:tail_estimates_for_functionals}
It is well known that transportation--cost inequalities imply Gaussian measure concentration. This was first discovered by Marton (\cite{Mar86}, \cite{Mar96}). In the following, we show how to modify her argument in order to deduce tail estimates in a more general context.
\begin{theorem}\label{theorem:tail_estimates_general_form}
Let $V$ be a linear Polish space and let $\mu$ be a probability measure defined on the Borel $\sigma$--algebra of $V$. Assume that there is a normed space $\mathcal{B} \subseteq V$. Let $d_{\mathcal{B}} \colon V \times V \to \R \cup \{ + \infty \}$ be defined as
\begin{align*}
d_{\mathcal{B}}(x,y) = \begin{cases}
\| x - y \|_{\mathcal{B}} &\text{ if } x - y \in \mathcal{B} \\
+ \infty &\text{ otherwise}
\end{cases}
\end{align*}
and assume that there is a $p \in [1, \infty)$ and a constant $C$ such that for every $\nu \in P(V)$,
\begin{align*}
\inf_{\pi \in \Pi(\nu,\mu)} \left(\int_{V \times V} d_{\mathcal{B}}(x,y)^{p}\, d\pi(x,y)\right)^{\frac{1}{p}} \leq C \sqrt{H(\nu\,|\,\mu)}.
\end{align*}
Let $(E,d)$ be some metric space, $f \colon V \to E$ be measurable w.r.t. the Borel $\sigma$--algebra and assume that there is an $r_0 \geq 0$ and some element $e \in E$ such that
\begin{align*}
\mu\left\{ x \in V\, : \, d(f(x),e) \leq r_0 \right\} =: a > 0.
\end{align*}
\begin{itemize}
\item[(i)] Assume that there is a nullset $\mathcal{N}$ such that
\begin{align*}
d(f(x + h),f(x)) \leq c(x) \|h\|_{\mathcal{B}}
\end{align*}
holds for every $x$ outside $\mathcal{N}$ and $h \in \mathcal{B}$ where $c \in L^q(\mu)$ with $q \in (1,\infty]$ such that $\frac{1}{p} + \frac{1}{q} = 1$. Then
\begin{align*}
\mu \left\{ x \in V\, :\, d(f(x),e) > r \right\} \leq \exp \left\{ - \left( \frac{r - r_1}{C \|c\|_{L^q(\mu)}} \right)^2 \right\}
\end{align*}
for all $r \geq r_1$ where $r_1 = r_0 + \|c\|_{L^q(\mu)} \sqrt{2 \log(a^{-1})}$.
\item[(ii)] Assume that there is a nullset $\mathcal{N}$ such that
\begin{align*}
d(f(x + h),e) \leq c(x) \left(g(x) + \|h\|_{\mathcal{B}} \right)
\end{align*}
holds for every $x$ outside $\mathcal{N}$ and $h \in \mathcal{B}$ with $c$ as in (i) and $\langle c,g\rangle = \int c g \,d\mu < \infty$. Then
\begin{align*}
\mu \left\{ x \in V\, :\, d(f(x),e) > r \right\} \leq \exp \left\{ - \left( \frac{r - r_2}{C \|c\|_{L^q(\mu)}} \right)^2 \right\}
\end{align*}
for all $r \geq r_2$ where $r_2 = r_0 + 4\langle c,g\rangle + \|c\|_{L^q(\mu)} \sqrt{2 \log(a^{-1})}$.
\end{itemize}
In particular, in both cases, $d(f(\cdot),e) \colon V \to \R$ has Gaussian tails.
\end{theorem}
\begin{proof}
For $x, y \in V$ set
\begin{align*}
d_f(x,y) = d(f(x),f(y)).
\end{align*}
For any measurable set $A \subseteq V$ and $r \geq 0$ we define
\begin{align*}
A^r := \left\{ x \in V\, :\, \text{there is an }\bar{x} \in A\text{ such that } d_f(x,\bar{x}) \leq r \right\}.
\end{align*}
Fix some $r\geq 0$ and set $B := (A^r)^c$. Assume first that $A$ and $B$ have positive measure. On $V$, we define the measures
\begin{align*}
d\mu_A := \frac{\mathbbm{1}_A}{\mu(A)}d\mu \quad \text{and} \quad d\mu_B := \frac{\mathbbm{1}_B}{\mu(B)}d\mu.
\end{align*}
Since $d_f(x,y) > r$ whenever $x \in A$ and $y \in B$, every coupling of $\mu_A$ and $\mu_B$ has transport cost at least $r$, hence
\begin{align*}
r &\leq \inf_{\pi \in \Pi(\mu_A,\mu_B)} \int_{V \times V} d_f(x,y) \, d\pi(x,y) \\
&\leq \inf_{\pi \in \Pi(\mu_A,\mu)} \int_{V \times V} d_f(x,y) \, d\pi(x,y) + \inf_{\pi \in \Pi(\mu_B,\mu)} \int_{V \times V} d_f(x,y) \, d\pi(x,y)
\end{align*}
where we used symmetry and the triangle inequality for the Wasserstein distance, both of which can be deduced from Lemma \ref{lemma:wasserstein_pseudo_metric}. Set $\tilde{V} := V \setminus \mathcal{N}$ and let $x,y \in \tilde{V}$. If $x - y = h \in \mathcal{B}$, we have
\begin{align}\label{eqn:df_H_lipschitz}
d_f(x,y) = d(f(y+h), f(y)) \leq c(y)\|h\|_{\mathcal{B}}
\end{align}
in case (i) and
\begin{align*}
d_f(x,y) \leq 2 c(y) g(y) + c(y)\|h\|_{\mathcal{B}}
\end{align*}
in case (ii). In the following we only prove the conclusion stated in (i); part (ii) is similar. From \eqref{eqn:df_H_lipschitz} we deduce that for all $x,y \in \tilde{V}$,
\begin{align*}
d_f(x,y) \leq c(y) d_{\mathcal{B}}(x,y).
\end{align*}
Since $\tilde{V}$ has full measure for $\mu$, this inequality holds $\pi$-almost surely for every $\pi \in \Pi(\mu_A,\mu)$ resp. $\pi \in \Pi(\mu_B,\mu)$, hence
\begin{align*}
r &\leq \inf_{\pi \in \Pi(\mu_A,\mu)} \int_{V \times V} c(y) d_{\mathcal{B}}(x,y) \, d\pi(x,y) + \inf_{\pi \in \Pi(\mu_B,\mu)} \int_{V \times V} c(y) d_{\mathcal{B}}(x,y) \, d\pi(x,y) \\
&\leq \|c\|_{L^q} \inf_{\pi \in \Pi(\mu_A,\mu)} \left( \int_{V \times V} d_{\mathcal{B}}(x,y)^p \, d\pi(x,y) \right)^{\frac{1}{p}} + \|c\|_{L^q} \inf_{\pi \in \Pi(\mu_B,\mu)} \left( \int_{V \times V} d_{\mathcal{B}}(x,y)^p \, d\pi(x,y) \right)^{\frac{1}{p}} \\
&\leq \|c\|_{L^q} C \sqrt{H(\mu_A\,|\,\mu)} + \|c\|_{L^q} C \sqrt{H(\mu_B\,|\,\mu)}\\
&= \|c\|_{L^q} C \sqrt{\log(\mu(A)^{-1})} + \|c\|_{L^q} C \sqrt{\log(\mu(B)^{-1})}.
\end{align*}
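To make the next step explicit, set $\tilde{r} := C \|c\|_{L^q}\sqrt{\log(\mu(A)^{-1})}$ and note that $\mu(B) = 1 - \mu(A^r)$; for $r \geq \tilde{r}$, the chain of inequalities above gives
\begin{align*}
r - \tilde{r} \leq C \|c\|_{L^q} \sqrt{\log(\mu(B)^{-1})}, \quad \text{hence} \quad \mu(B) \leq \exp\left\{ -\left(\frac{r - \tilde{r}}{C \|c\|_{L^q}} \right)^2 \right\}.
\end{align*}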
Rearranging terms, we see that
\begin{align*}
1 - \exp\left\{ -\left(\frac{r - \tilde{r}}{C \|c\|_{L^q}} \right)^2 \right\} \leq \mu(A^r)
\end{align*}
for every $r \geq \tilde{r}$ where $\tilde{r} := C \|c\|_{L^q}\sqrt{\log(\mu(A)^{-1})}$. Now set
\begin{align*}
A := \{ x\in V\, :\, d(f(x),e) \leq r_0 \}.
\end{align*}
Then we have for every $r \geq 0$,
\begin{align*}
A^r \subseteq \{ x \in V\, :\, d(f(x),e) \leq r_0 + r\}.
\end{align*}
If $\mu(B) = 0$, it follows that $\{ x \in V\, :\, d(f(x),e) \leq r_0 + r\}$ has full measure. In other words, $d(f(\cdot),e)$ is bounded almost surely and the claimed estimate is trivial. If $\mu(B) > 0$, we can use our calculations above to conclude that
\begin{align*}
1 - \exp\left\{ -\left(\frac{r - \tilde{r}}{C \|c\|_{L^q}} \right)^2 \right\} \leq \mu\{ x \in V\, :\, d(f(x),e) \leq r_0 + r\}
\end{align*}
holds for every $r \geq \tilde{r}$ and the claim follows.
\end{proof}
\begin{remark}
In the Gaussian case, the assumptions from Theorem \ref{theorem:tail_estimates_general_form} hold with $V$ being a Banach space, $\mathcal{B} = \mathcal{H}$ being continuously embedded in $V$ and $p = q = 2$. It is well known (cf. \cite[4.5.6. Theorem]{Bog98}) that $\mathcal{H}$-Lipschitzian functions have Gaussian tails. Part (i) in Theorem \ref{theorem:tail_estimates_general_form} shows that this assumption can be relaxed by assuming that we can control the Lipschitz constant as in \eqref{eqn:df_H_lipschitz}.
\end{remark}
\begin{example}
\begin{enumerate}
\item In case of a Gaussian Banach space $(E,\mathcal{H},\gamma)$, choosing $f = \| \cdot \|_E$ gives the usual Fernique theorem.
\item If a lift map $S \colon C_0 \to \mathcal{D}_g^{0,p}$ to a rough paths space exists as in Section \ref{sec:sde_mult_noise}, we obtain the Fernique estimate for Gaussian rough paths (see \cite{FO10}) by setting $f(x) = d_{p-\text{var}}(S(x),e)$.
\item Choosing $f(x) = \| I_f(x,\xi) \|_{p-\text{var}}$ and using Proposition \ref{cor:loc_lip_mult_noise} together with the tail estimates for the counting process $N_1(S(x);[0,T])$ (cf. \cite{CLL13} and the forthcoming Example \ref{example:cll_count_process}), we see that the $p$-variation norm of solutions to rough differential equations with sufficiently smooth driving signals (i.e. Gaussian processes with Cameron--Martin paths of bounded variation) has Gaussian tails (this result is not new; it was already obtained in \cite{FR12b}).
\end{enumerate}
\end{example}
We now show how to deduce tail estimates for the counting process $N_1(S(X);[0,T])$ where $X$ is a stochastic process whose law satisfies a transportation--cost inequality on a path space. In the Gaussian case, these estimates can be seen as one of the key results in \cite{CLL13}, and we have already used them several times in this work.
\begin{lemma}\label{lemma:growth_random_meas_cm_shift}
Let $\mathbf{x}$ be a weakly geometric $p$-rough path and $h$ be a path of bounded $q$-variation where $1 \leq q \leq p$ and $\frac{1}{p} + \frac{1}{q} > 1$. Then there is an $\alpha = \alpha(p,q)$ such that
\begin{align*}
N_{\alpha}(T_h(\mathbf{x});[0,T]) \leq \left( \|\mathbf{x} \|_{p-\text{var}}^p \vee (2N_1(\mathbf{x};[0,T]) + 1)\right) + \|h\|_{q-\text{var}}^q.
\end{align*}
\end{lemma}
\begin{proof}
We have
\begin{align*}
N_{\alpha}(\| T_h\mathbf{x}\|_{p-\text{var}}^p;[0,T]) &\leq N_{\alpha}(C_{p,q}( \| \mathbf{x}\|_{p-\text{var}}^p + \| h\|_{q-\text{var}}^p);[0,T]) \\
&= N_{1}(\| \mathbf{x}\|_{p-\text{var}}^p + \| h\|_{q-\text{var}}^p ; [0,T])
\end{align*}
with the choice $\alpha = C_{p,q}$, using \cite[Theorem 9.33]{FV10}. By definition,
\begin{align*}
N_{1}(\| \mathbf{x}\|_{p-\text{var}}^p + \| h\|_{q-\text{var}}^p ; [0,T]) \leq \sum_{\tau_i} \| \mathbf{x}\|_{p-\text{var};[\tau_i,\tau_{i+1}]}^p + \| h\|_{q-\text{var};[\tau_i,\tau_{i+1}]}^p
\end{align*}
where $(\tau_i)$ is a finite partition of $[0,T]$ for which $\| \mathbf{x}\|_{p-\text{var};[\tau_i,\tau_{i+1}]}^p + \| h\|_{q-\text{var};[\tau_i,\tau_{i+1}]}^p \leq 1$ for every $\tau_i$, and in particular $\| h\|_{q-\text{var};[\tau_i,\tau_{i+1}]}^p \leq \| h\|_{q-\text{var};[\tau_i,\tau_{i+1}]}^q$ since $q \leq p$. Hence
\begin{align*}
N_{1}(\| \mathbf{x}\|_{p-\text{var}}^p + \| h\|_{q-\text{var}}^p ; [0,T]) &\leq \sum_{\tau_i} \| \mathbf{x}\|_{p-\text{var};[\tau_i,\tau_{i+1}]}^p + \| h\|_{q-\text{var};[\tau_i,\tau_{i+1}]}^q \\
&\leq \sup_{\stackrel{(\tau_i) \in \mathcal{P}([0,T])}{\|\mathbf{x}\|_{p-\text{var};[\tau_i,\tau_{i+1}]} \leq 1}} \sum_{\tau_i} \| \mathbf{x}\|_{p-\text{var};[\tau_i,\tau_{i+1}]}^p + \| h\|_{q-\text{var};[0,T]}^q.
\end{align*}
The claim follows from \cite[Proposition 4.11]{CLL13}.
\end{proof}
\begin{example}\label{example:cll_count_process}
Let $X$ be a Gaussian process and consider the counting process
\begin{align*}
t\mapsto N_{\alpha}(\| S(X)\|_{p-\text{var}}^p;[0,t]) = N_{\alpha}( S(X) ;[0,t])
\end{align*}
where $S$ is a lift map as in Section \ref{sec:sde_mult_noise}. Lemma \ref{lemma:growth_random_meas_cm_shift} shows that
\begin{align*}
N_{\alpha}(S(x+h);[0,t]) \leq \|S(x)\|_{p-\text{var};[0,t]}^p + \| \iota \|_{\mathcal{H} \hookrightarrow C^{q-\text{var}}}^q \| h \|_{\mathcal{H}}^q
\end{align*}
almost surely, which implies that $N_{\alpha}( S(X);[0,t])^{\frac{1}{q}}$ has Gaussian tails for every $t \geq 0$; this is one of the main results obtained in \cite{CLL13}. In particular, the random variable $L$ in Proposition \ref{cor:loc_lip_mult_noise} has moments of any order.
\end{example}
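The last implication is the standard fact that Gaussian tails imply finiteness of all moments; a short sketch: if a nonnegative random variable $Z$ satisfies $\mathbb{P}(Z > r) \leq \exp\{-((r - r_1)/\sigma)^2\}$ for all $r \geq r_1$, then for every $k \geq 1$,
\begin{align*}
\mathbb{E}[Z^k] = k \int_0^\infty r^{k-1}\, \mathbb{P}(Z > r)\, dr \leq r_1^k + k \int_{r_1}^\infty r^{k-1} \exp\left\{ - \left( \frac{r - r_1}{\sigma} \right)^2 \right\} dr < \infty.
\end{align*}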
\section{Appendix}
\subsection{Optimal transport and the Wasserstein metric}
We start with an approximation result which will be used for the Wasserstein metric. The proof is taken from \cite[part 3 in the proof of Theorem 1.3]{Vil03}.
\begin{lemma}\label{lemma:approx_cost_functions}
Let $X$ and $Y$ be Polish spaces and let $\mu$ and $\nu$ be probability measures on $X$ resp. $Y$. Let $c_n \colon X \times Y \to \R_+$ be a nondecreasing sequence of bounded, continuous functions with $c_n \nearrow c$ pointwise, where $c \colon X \times Y \to \R_+ \cup \{ + \infty \}$. Then
\begin{align*}
\lim_{n \to \infty} \inf_{\pi \in \Pi(\mu,\nu)} \int_{X \times Y} c_n(x,y)\,d\pi(x,y) = \inf_{\pi \in \Pi(\mu,\nu)} \int_{X \times Y} c(x,y)\,d\pi(x,y).
\end{align*}
\end{lemma}
\begin{proof}
First, it can be shown (cf. \cite[p. 32]{Vil03}) that $\Pi(\mu,\nu)$ is compact in the weak topology. For $\pi \in \Pi(\mu,\nu)$, set
\begin{align*}
I_n(\pi) := \int_{X \times Y} c_n(x,y)\,d\pi(x,y) \quad \text{and} \quad I(\pi) := \int_{X \times Y} c(x,y)\,d\pi(x,y).
\end{align*}
Since the $c_n$ are nondecreasing, $\inf_{\pi \in \Pi(\mu,\nu)} I_n(\pi)$ is nondecreasing and
\begin{align*}
\inf_{\pi \in \Pi(\mu,\nu)} I_n(\pi) \leq \inf_{\pi \in \Pi(\mu,\nu)} I(\pi).
\end{align*}
Therefore, the limit exists and it is enough to show that
\begin{align*}
\lim_{n \to \infty} \inf_{\pi \in \Pi(\mu,\nu)} I_n(\pi) \geq \inf_{\pi \in \Pi(\mu,\nu)} I(\pi).
\end{align*}
From continuity of the $c_n$, we know that the infima are attained (cf. \cite[Theorem 1.3]{Vil03}), thus there are $\pi_n \in \Pi(\mu,\nu)$ such that
\begin{align*}
I_n(\pi_n) = \inf_{\pi \in \Pi(\mu,\nu)} I_n(\pi).
\end{align*}
From compactness, there is a subsequence $(\pi_{n_k})$ and a $\pi_* \in \Pi(\mu,\nu)$ such that $\pi_{n_k} \to \pi_*$ weakly as $k \to \infty$. Now, whenever $n\geq m$, $I_n(\pi_n) \geq I_m(\pi_n)$ and
\begin{align*}
\lim_{n \to \infty} I_n(\pi_n) \geq \limsup_{n \to \infty} I_m(\pi_n) \geq I_m(\pi_*).
\end{align*}
Monotone convergence gives
\begin{align*}
\lim_{m \to \infty} I_m(\pi_*) = I(\pi_*)
\end{align*}
and thus
\begin{align*}
\lim_{n \to \infty} I_n(\pi_n) \geq \lim_{m \to \infty} I_m(\pi_*) = I(\pi_*) \geq \inf_{\pi \in \Pi(\mu,\nu)} I(\pi).
\end{align*}
\end{proof}
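When $c$ is lower semicontinuous, such an approximating sequence always exists; a standard choice (a sketch, with $d_X$ and $d_Y$ denoting complete metrics inducing the Polish topologies) is the truncated Moreau--Yosida regularization
\begin{align*}
c_n(x,y) := \min\Big\{ n,\, \inf_{(x',y') \in X \times Y} \big[ c(x',y') + n\big( d_X(x,x') + d_Y(y,y') \big) \big] \Big\},
\end{align*}
which is bounded, Lipschitz continuous and increases pointwise to $c$ as $n \to \infty$.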
It is well known that the Wasserstein metric $\mathcal{W}^d_p$ is indeed a metric when $d$ is a metric. The next lemma examines the individual properties more carefully.
\begin{lemma}\label{lemma:wasserstein_pseudo_metric}
Let $X$ be Polish, $c \colon X \times X \to \R_+ \cup \{+ \infty \}$ be measurable and $p \geq 1$.
\begin{itemize}
\item[(i)] If $c(x,x) = 0$ for all $x \in X$, we have $\mathcal{W}_p^c(\nu,\nu) = 0$ for all $\nu \in P(X)$.
\item[(ii)] Assume in addition that $c$ is lower semicontinuous. If $c(x,y) = 0$ implies $x = y$, then $\mathcal{W}_p^c(\nu,\mu) = 0$ implies $\nu = \mu$.
\item[(iii)] If $c$ is symmetric, also $\mathcal{W}_p^c$ is symmetric.
\item[(iv)] If the triangle inequality holds for $c$, it also holds for $\mathcal{W}_p^c$.
\end{itemize}
In particular, if $c = d$ is a (pseudo-)metric, $\mathcal{W}_p^d$ defines a (possibly infinite) (pseudo-)metric on the space $P(X)$ for all $p \geq 1$.
\end{lemma}
\begin{proof}
(i), (ii) and (iii) are easy and can be shown as in \cite[Theorem 7.3]{Vil03}. It remains to prove (iv). Let $\nu_1,\nu_2,\nu_3 \in P(X)$ and let $\varepsilon > 0$. Choose $\pi_{12} \in \Pi(\nu_1,\nu_2)$ and $\pi_{23} \in \Pi(\nu_2,\nu_3)$ such that
\begin{align*}
\left(\int c(x,y)^p\, d\pi_{12}(x,y)\right)^{\frac{1}{p}} \leq \mathcal{W}_p^c(\nu_1,\nu_2) + \varepsilon
\end{align*}
and the same for $\pi_{23}$ and $\mathcal{W}_p^c(\nu_2,\nu_3)$. From the Gluing Lemma (cf. \cite[Lemma 7.6]{Vil03}) there is a probability measure $\pi \in P(X \times X \times X)$ such that $\pi_{12} = \pi(\cdot,\cdot,X)$ and $\pi_{23} = \pi(X,\cdot,\cdot)$. Set $\pi_{13} = \pi(\cdot,X,\cdot)$. Then, using the triangle inequality for $c$,
\begin{align*}
\mathcal{W}_p^c(\nu_1,\nu_3) &\leq \left(\int c(x,y)^p\, d\pi_{13}(x,y)\right)^{\frac{1}{p}} \\
&\leq \left(\int c(x,y)^p\, d\pi_{12}(x,y)\right)^{\frac{1}{p}} + \left(\int c(x,y)^p\, d\pi_{23}(x,y)\right)^{\frac{1}{p}} \\
&\leq \mathcal{W}_p^c(\nu_1,\nu_2) + \mathcal{W}_p^c(\nu_2,\nu_3) + 2\varepsilon.
\end{align*}
Since $\varepsilon > 0$ was arbitrary, this shows the claim.
\end{proof}
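As a minimal illustration: for Dirac measures $\delta_x, \delta_y$ with $x,y \in X$, the only coupling is the product measure, i.e. $\Pi(\delta_x,\delta_y) = \{\delta_{(x,y)}\}$, hence
\begin{align*}
\mathcal{W}_p^c(\delta_x,\delta_y) = \left( \int c(u,v)^p \, d\delta_{(x,y)}(u,v) \right)^{\frac{1}{p}} = c(x,y),
\end{align*}
so on Dirac measures $\mathcal{W}_p^c$ reproduces the cost $c$ itself, consistent with properties (i)--(iv).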
The next lemma is a generalization of \cite[Lemma 2.1]{DGW04}.
\begin{lemma}\label{lemma:transp_ineq_trans_under_loc_lip_maps}
Let $(X,\mathcal{F})$ be a measurable space on which regular conditional distributions exist and let $c \colon X \times X \to \R_+ \cup\{+ \infty\}$ be a measurable function. Assume that there is a measure $\mu \in P(X)$ such that
\begin{align*}
\inf_{\pi \in \Pi(\nu,\mu)} \left( \int_{X \times X} c(x,y)^p\, d\pi(x,y) \right)^{\frac{1}{p}} \leq \sqrt{C H(\nu\,|\,\mu)}
\end{align*}
holds for every $\nu \in P(X)$ where $C$ is some constant and $p \in [1,\infty)$. Let $(Y,\mathcal{G})$ be another measurable space, $\tilde{c} \colon Y \times Y \to \R_+ \cup\{+ \infty\}$ be a measurable function and assume that there is a measurable function $\Psi \colon X \to Y$ for which
\begin{align*}
\tilde{c}(\Psi(x),\Psi(y)) \leq L(y) c(x,y)
\end{align*}
holds for every $x,y \in X_0$ where $X_0 \subseteq X$ has full measure w.r.t. $\mu$ and $L \colon X \to \R \cup\{+ \infty\}$ is another measurable function. Set $\tilde{\mu} := \mu \circ \Psi^{-1}$. Then for every $\tilde{p} \in [1,p]$,
\begin{align*}
\inf_{\tilde{\pi} \in \Pi(\tilde{\nu},\tilde{\mu})} \left( \int_{Y \times Y} \tilde{c}(x,y)^{\tilde{p}} \, d\tilde{\pi}(x,y) \right)^{\frac{1}{\tilde{p}}} \leq \|L\|_{L^q(\mu)} \sqrt{C H(\tilde{\nu}\,|\,\tilde{\mu})}
\end{align*}
holds for every $\tilde{\nu} \in P(Y)$ where $q \in (1, \infty]$ is chosen such that $\frac{1}{q} + \frac{1}{p} = \frac{1}{\tilde{p}}$.
\end{lemma}
\begin{proof}
W.l.o.g. we may assume $C = 1$. Let $\tilde{\nu} \in P(Y)$ and assume that $H(\tilde{\nu}\,|\,\tilde{\mu}) < \infty$. Choose $\nu \in P(X)$ such that $\tilde{\nu} = \nu \circ \Psi^{-1}$ and $\nu \ll \mu$ (at least one such $\nu$ exists, e.g. $\nu_0(dx) := \frac{d\tilde{\nu}}{d\tilde{\mu}}(\Psi(x))\, \mu(dx)$). Then
\begin{align*}
\inf_{\tilde{\pi} \in \Pi(\tilde{\nu},\tilde{\mu})} \int \tilde{c}(x,y)^{\tilde{p}} \, d\tilde{\pi}(x,y)
&\leq \inf_{\pi \in \Pi(\nu,\mu)} \int_{Y \times Y} \tilde{c}(x,y)^{\tilde{p}} \, d(\pi \circ (\Psi\times \Psi)^{-1})(x,y) \\
&= \inf_{\pi \in \Pi(\nu,\mu)} \int_{X \times X} \tilde{c}(\Psi(x),\Psi(y))^{\tilde{p}} \, d \pi(x,y).
\end{align*}
Since $\nu \ll \mu$, $X_0 \times X_0$ has full measure for every $\pi \in \Pi(\nu,\mu)$, therefore
\begin{align*}
\inf_{\pi \in \Pi(\nu,\mu)} \int_{X \times X} \tilde{c}(\Psi(x),\Psi(y))^{\tilde{p}} \, d \pi(x,y)
&\leq \inf_{\pi \in \Pi(\nu,\mu)} \int_{X \times X} (L(y) c(x,y))^{\tilde{p}} \, d \pi(x,y) \\
&\leq \|L\|_{L^q(\mu)}^{\tilde{p}} \inf_{\pi \in \Pi(\nu,\mu)} \left( \int_{X \times X} c(x,y)^p \, d \pi(x,y) \right)^{\frac{\tilde{p}}{p}}
\end{align*}
by H\"older's inequality. The assertion follows from the identity
\begin{align}
H(\tilde{\nu}\,|\,\tilde{\mu}) = \inf \{ H(\nu\,|\,\mu)\ |\ \nu \in P(X)\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ s.t. } \nu \circ \Psi^{-1} = \tilde{\nu} \}
\end{align}
which holds under the assumption that regular conditional distributions exist on $(X, \mathcal{F})$, see \cite[Lemma 2.1]{DGW04}.
\end{proof}
\bibliographystyle{alpha}
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\RIfM@\expandafter\text@\else\expandafter\mbox\fi{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\@ifundefined{Column}{\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}}{}%
\@ifundefined{qed}{\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}%
}}{}%
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap c/}}}{}%
\@ifundefined{tciLaplace}{\def\tciLaplace{L}}{}%
\@ifundefined{tciFourier}{\def\tciFourier{F}}{}%
\@ifundefined{textcurrency}{\def\textcurrency{\hbox{\rm\rlap xo}}}{}%
\@ifundefined{texteuro}{\def\texteuro{\hbox{\rm\rlap C=}}}{}%
\@ifundefined{textfranc}{\def\textfranc{\hbox{\rm\rlap-F}}}{}%
\@ifundefined{textlira}{\def\textlira{\hbox{\rm\rlap L=}}}{}%
\@ifundefined{textpeseta}{\def\textpeseta{\hbox{\rm P\negthinspace s}}}{}%
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}%
\@ifundefined{vvert}{\def\vvert{\Vert}}{
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}%
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{
\@ifundefined{note}{\def\note{$^{\dag}}}{}%
\defLaTeX2e{LaTeX2e}
\ifx\fmtnameLaTeX2e
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def{\Greekmath 0272}{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\arabic{equation}{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}}
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\mathop{\textstyle \int}}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\mathop{\textstyle \oint}}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\mathop{\displaystyle \int}}%
\def\diint{\mathop{\displaystyle \iint}}%
\def\diiint{\mathop{\displaystyle \iiint}}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\mathop{\displaystyle \oint}}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\if@compatibility\else
\RequirePackage{amsmath}
\makeatother
\endinput
\fi
\typeout{TCILATEX defining AMS-like constructs in LaTeX 2.09 COMPATIBILITY MODE}
\def\makeatother\endinput{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\makeatother\endinput
\else
\@ifpackageloaded{amsmath}%
{\message{amsmath already loaded}\aftergroup\makeatother\endinput}
{}
\@ifpackageloaded{amstex}%
{\message{amstex already loaded}\aftergroup\makeatother\endinput}
{}
\@ifpackageloaded{amsgen}%
{\message{amsgen already loaded}\aftergroup\makeatother\endinput}
{}
\fi
\egroup
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi{\RIfM@\expandafter\RIfM@\expandafter\text@\else\expandafter\mbox\fi@\else\expandafter\mbox\fi}
\let\nfss@text\RIfM@\expandafter\text@\else\expandafter\mbox\fi
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\[email protected]
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\section{Introduction}
The origin of cosmic neutrinos observed in IceCube is a major enigma~\cite{Aartsen:2013bka,Aartsen:2013jdh}, which has been deepened by the latest results on high- and medium-energy starting events and shower events~\cite{Aartsen:2014muf,Aartsen:2015ita,Aartsen:2017mau}.
The atmospheric background of high-energy electron neutrinos is much lower than that of muon neutrinos, allowing us to investigate the data below 100~TeV~\cite{Beacom:2004jb,Laha:2013lka}.
The comparison with the extragalactic gamma-ray background (EGB) measured by {\it Fermi} indicates that the extragalactic neutrino background (ENB) at $10-100$~TeV energies originates from hidden sources preventing the escape of GeV-TeV gamma rays~\cite{Murase:2015xka}.
Active galactic nuclei (AGNs) are major contributors to the energetics of high-energy cosmic radiations~\cite{Murase:2018utn}; radio quiet (RQ) AGNs are dominant in the extragalactic x-ray sky~\cite{1992ARA&A..30..429F,ued+03,Hasinger:2005sb,Ajello:2008xb,Ueda:2014tma}, and radio loud (RL) AGNs including blazars give dominant contributions to the EGB~\cite{Costamante:2013sva,Inoue:2014ona,Fornasa:2015qua}.
AGNs may also explain the MeV gamma-ray background whose origin has been under debate (e.g.,~\cite{Inoue:2007tn,Ajello:2009ip,Lien:2012gz}).
High-energy neutrino production in the vicinity of supermassive black holes (SMBHs) was discussed early on~\cite{Eichler:1979yy,1981MNRAS.194....3B,brs90,Stecker:1991vm}, in particular to explain x-ray emission by cosmic-ray (CR) induced cascades assuming the existence of high Mach number accretion shocks at the inner edge of the disk~\cite{1986ApJ...304..178K,1986ApJ...305...45Z,1987ApJ...320L..81S,Stecker:1991vm}. However, cutoff features evident in the x-ray spectra of Seyfert galaxies and the absence of electron-positron annihilation lines ruled out the simple cascade scenario for the x-ray origin (e.g.,~\cite{DiMatteo:1999mw,2018MNRAS.480.1819R}). In the standard scenario, the observed x rays are attributed to thermal Comptonization of disk photons~\cite{Shapiro:1976fr,1980A&A....86..121S,Zdziarski:1996wq,1996ApJ...470..249P,Haardt:1996hn}, and electrons are presumably heated in the coronal region~\cite{1977ApJ...218..247L,Galeev:1979td}. There has been significant progress in our understanding of accretion disks with the identification of the magnetorotational instability (MRI)~\cite{bh91,BH98a}, which can result in the formation of a corona above the disk as a direct consequence of the accretion dynamics and magnetic dissipation (e.g.,~\cite{Miller:1999ix,Merloni:2000gs,Liu:2002ts,Blackman:2009fi,Io:2013gja,SI14a,Jiang:2014wga}).
Turbulence is also important for particle acceleration~\cite{Lazarian:2012nd}. The roles of nonthermal particles have been studied in the context of radiatively inefficient accretion flows (RIAFs;~\cite{ny94,YN14a}), in which the plasma is often collisionless because Coulomb collisions are negligible for protons (e.g.,~\cite{tk85,Mahadevan:1997qz,Mahadevan:1997zq,ktt14,lyn+14,Ball:2017bpa}). Recent studies based on numerical simulations of the MRI~\cite{Kimura:2016fjx,Kimura:2018clk} support the idea that high-energy ions might be stochastically accelerated by the ensuing magnetohydrodynamic (MHD) turbulence.
The vicinity of SMBHs is often optically thick to GeV-TeV gamma rays, where CR acceleration cannot be directly probed by these photons, but high-energy neutrinos can be used as a unique probe of the physics of AGN cores. In this work, we present a concrete model for their high-energy emissions (see Fig.~\ref{fig:model}), in which spectral energy distributions (SEDs) are constructed from the data and from empirical relations. We compute neutrino and gamma-ray spectra, by solving both CR transport equations with the relevant energy losses and the resulting electromagnetic cascades of secondaries. We demonstrate the importance of future MeV gamma-ray observations for revealing the origin of IceCube neutrinos.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{AGNcorona2.eps}
\caption{Schematic picture of the AGN disk-corona scenario. Protons are accelerated by turbulence generated by the MRI in coronae, and produce high-energy neutrinos and cascaded gamma rays via interactions with matter and radiation.}
\label{fig:model}
\end{center}
\end{figure}
\section{Phenomenological Prescription of AGN Disk Coronae}
We construct a phenomenological disk-corona model based on the existing data. SEDs of Seyfert galaxies have been extensively studied, which consist of several components; radio emission (see Ref.~\cite{2019arXiv190205917P}), infrared emission from a dust torus~\cite{Netzer:2015jna}, optical and ultraviolet components from an accretion disk~\cite{1999PASP..111....1K}, and x rays from a corona~\cite{1980A&A....86..121S}.
The averaged SEDs are provided in Ref.~\cite{ho08} as a function of the Eddington ratio, $\lambda_{\rm Edd}=L_{\rm bol}/L_{\rm Edd}$, where $L_{\rm bol}$ and $L_{\rm Edd}$ are bolometric and Eddington luminosities, respectively. The well-known ``blue'' bump is attributed to multicolor blackbody emission from the geometrically thin, optically thick disk~\cite{Shakura:1972te}.
The spectrum is expected to have an exponential cutoff at $\varepsilon_{\rm disk, cut}\approx2.8k_BT_{\rm disk}$, where $T_{\rm disk}\approx0.49(GM\dot{M}/16\pi\sigma_{\rm SB}R_{S}^3)^{1/4}$ is the maximum effective temperature of the disk (e.g.,~\cite{pri81}). Here, $M$ is the SMBH mass, $\dot M$ is the mass accretion rate, and $R_{S}=2GM/c^2$ is the Schwarzschild radius.
Assuming a standard disk, we use $\dot M\approx L_{\rm bol}/(\eta_{\rm rad}c^2)$ with a radiative efficiency of $\eta_{\rm rad}=0.1$.
Although the spectra calculated by Ref.~\cite{ho08} extend to low energies, we only consider photons with $\varepsilon_{\rm disk}>2$ eV because infrared photons would come from a dust torus.
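As a rough numerical cross-check (a sketch, not the authors' code), the maximum effective disk temperature and the blue-bump cutoff implied by these relations can be evaluated for the $L_X=10^{44}\rm~erg~s^{-1}$ case; the mass and bolometric luminosity ($\log M=8.00$, $\log L_{\rm bol}=45.4$) are taken from Table~\ref{tab:quantities}.

```python
# Sketch (not the authors' code): T_disk and the blue-bump cutoff for the
# L_X = 1e44 erg/s case; M and L_bol are taken from Table I.
import math

G = 6.674e-8         # cm^3 g^-1 s^-2
c = 2.998e10         # cm s^-1
sigma_SB = 5.670e-5  # erg cm^-2 s^-1 K^-4
kB_eV = 8.617e-5     # eV K^-1
Msun = 1.989e33      # g

M = 10**8.00 * Msun
L_bol = 10**45.4             # erg s^-1
Mdot = L_bol / (0.1 * c**2)  # accretion rate for eta_rad = 0.1
R_S = 2 * G * M / c**2       # Schwarzschild radius

# T_disk ~ 0.49 (G M Mdot / 16 pi sigma_SB R_S^3)^(1/4)
T_disk = 0.49 * (G * M * Mdot / (16 * math.pi * sigma_SB * R_S**3))**0.25
eps_disk_cut = 2.8 * kB_eV * T_disk  # exponential cutoff in eV

print(f"T_disk ~ {T_disk:.2e} K, disk cutoff ~ {eps_disk_cut:.0f} eV")
```

The resulting cutoff of a few tens of eV is consistent with the $\varepsilon_{\rm disk}\sim10$~eV scale used for the disk-photon normalizations in the text.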
X rays are produced via Compton upscattering by thermal electrons with $T_e\sim10^9$~K.
The spectrum can be modeled by a power law with an exponential cutoff. The photon index, $\Gamma_X$, is correlated with the Eddington ratio as $\Gamma_X\approx0.167\times\log(\lambda_{\rm Edd})+2.0$~\cite{2017MNRAS.470..800T}.
The cutoff energy is also given by $\varepsilon_{X,\rm cut}\sim[-74\log(\lambda_{\rm Edd})+1.5\times10^2]$~keV~\cite{2018MNRAS.480.1819R}.
The electron temperature is written as $k_BT_e\approx\varepsilon_{X,\rm cut}/2$ for an optically thin corona. Then, assuming a slab geometry, the Thomson optical depth is given by $\tau_T\approx10^{(2.16-\Gamma_X)/1.06}{(k_BT_e/{\rm keV})}^{-0.3}$~\cite{2018MNRAS.480.1819R}.
The x-ray luminosity $L_X$ is converted into $L_{\rm bol}$ following Ref.~\cite{2007ApJ...654..731H}, and the SMBH mass can be estimated by $M\approx2.0\times10^7~M_\odot~(L_X/1.16\times10^{43}~{\rm erg}~{\rm s}^{-1})^{0.746}$~\cite{2018arXiv180306891M}.
The thus constructed SEDs are shown in Fig.~\ref{fig:sed}.
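For concreteness, the empirical relations above can be combined to reproduce one row of Table~\ref{tab:quantities}. This is an illustrative sketch (not the authors' code), with $L_{\rm bol}$ taken directly from the table rather than recomputed from the bolometric correction.

```python
import math

L_X = 1e44                 # erg s^-1
L_bol = 10**45.4           # erg s^-1, from Table I (Hopkins et al. correction)
M = 2.0e7 * (L_X / 1.16e43)**0.746   # SMBH mass in Msun
lam_Edd = L_bol / (1.26e38 * M)      # Eddington ratio

Gamma_X = 0.167 * math.log10(lam_Edd) + 2.0            # photon index
eps_X_cut = -74 * math.log10(lam_Edd) + 1.5e2          # cutoff energy in keV
kT_e = eps_X_cut / 2.0                                 # keV, optically thin corona
theta_e = kT_e / 511.0                                 # k_B T_e / (m_e c^2)
tau_T = 10**((2.16 - Gamma_X) / 1.06) * kT_e**(-0.3)   # Thomson optical depth

print(f"Gamma_X={Gamma_X:.2f}, theta_e={theta_e:.2f}, "
      f"tau_T={tau_T:.2f}, logM={math.log10(M):.2f}")
```

The output matches the $\log L_X=44$ row of Table~\ref{tab:quantities} to the quoted precision.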
We expect the disk coronae to be characterized by two temperatures, i.e., $T_p\gg T_e$~\cite{DiMatteo:1997cy,Cao:2008by} (see Appendix).
We assume that the thermal protons are at the virial temperature, $T_p\approx{GMm_p/(3Rk_B)}$, where $R={r}R_S$ is the coronal size and $r$ is the normalized radius.
The normalized proton temperature is $\theta_p = k_BT_p/(m_pc^2)\approx5.3\times10^{-3}r_{1.5}^{-1}$. With the sound speed $C_s^2\approx k_BT_p/m_p$ and Keplerian velocity $V_K=\sqrt{GM/R}$, the scale height is written as $H\approx(C_s/V_K)R$, leading to a nucleon target density, $n_p\approx\tau_T/(\sigma_TH)$. The magnetic field is estimated by $B\approx\sqrt{8\pi n_pk_BT_p/\beta}$, where $\beta$ is the plasma beta.
We summarize our model parameters in Table~\ref{tab:quantities}. Note that most of the physical quantities can be estimated from the observational correlations. Thus, for a given $L_X$, $\beta$ and $r$ are the only remaining parameters. They are also constrained in a certain range by observations~\cite{2012MNRAS.420.1825J,Morgan:2012df} and numerical simulations~\cite{Io:2013gja,Jiang:2014wga}. For example, recent MHD simulations show that $\beta$ in the coronae can be as low as $0.1-10$ (e.g.,~\cite{Miller:1999ix,SI14a}). We assume $\beta\sim1$, and adopt ${r}=30$ throughout this work.
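The coronal plasma quantities then follow from the above. A minimal numerical sketch, assuming the Table~\ref{tab:quantities} values $\log M=8.00$ and $\tau_T=0.46$ for $L_X=10^{44}\rm~erg~s^{-1}$:

```python
import math

G, c = 6.674e-8, 2.998e10
m_p, sigma_T = 1.673e-24, 6.652e-25
Msun = 1.989e33

M = 10**8.00 * Msun   # Table I, L_X = 1e44 erg/s
tau_T = 0.46          # Table I
r, beta = 30.0, 1.0

R_S = 2 * G * M / c**2
R = r * R_S
theta_p = 1.0 / (6.0 * r)        # k_B T_p/(m_p c^2) at the virial temperature
kT_p = theta_p * m_p * c**2      # erg
H = R / math.sqrt(3.0)           # (C_s/V_K) R, since C_s^2/V_K^2 = 1/3 here
n_p = tau_T / (sigma_T * H)      # nucleon target density
B = math.sqrt(8 * math.pi * n_p * kT_p / beta)  # plasma-beta estimate of B

print(f"log R = {math.log10(R):.2f}, log n_p = {math.log10(n_p):.2f}, B ~ {B:.0f} G")
```

This recovers $\log R\approx15.0$ and $\log n_p\approx9.1$ as in Table~\ref{tab:quantities}, with $B$ of a few hundred gauss for $\beta=1$.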
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.9\linewidth]{sed_proton2.eps}
\caption{Disk-corona SEDs and CR proton differential luminosities for $L_X=10^{42}\rm~erg~s^{-1}$, $10^{43}\rm~erg~s^{-1}$, $10^{44}\rm~erg~s^{-1}$, $10^{45}\rm~erg~s^{-1}$, $10^{46}\rm~erg~s^{-1}$ (from bottom to top).}
\label{fig:sed}
\end{center}
\end{figure}
\begin{table}[tb]
\caption{Parameters used in this work. Units are [erg $\rm s^{-1}$] for $L_X$ and $L_{\rm bol}$, [$M_{\odot}$] for $M$, [cm] for $R$, [$\rm cm^{-3}$] for $n_p$, and [\%] for the ratio of the CR pressure to the thermal pressure.
\label{tab:quantities}}
\begin{ruledtabular}
\begin{tabular}{cccccccccc}
$\log L_X$ & $\log L_{\rm bol}$ & $\log M$ & $\Gamma_X$ & $\theta_e$ & $\tau_T$ & $\log R$ & $\log n_p$ & $P_{\rm CR}/P_{\rm th}$\\
42.0 & 43.0 & 6.51 & 1.72 & 0.27 & 0.59 & 13.5 & 10.73 & 0.27\\
43.0 & 44.2 & 7.25 & 1.80 & 0.23 & 0.52 & 14.2 & 9.93 & 0.54\\
44.0 & 45.4 & 8.00 & 1.88 & 0.20 & 0.46 & 15.0 & 9.13 & 0.94\\
45.0 & 46.6 & 8.75 & 1.96 & 0.16 & 0.41 & 15.7 & 8.33 & 1.54\\
46.0 & 47.9 & 9.49 & 2.06 & 0.12 & 0.36 & 16.4 & 7.53 & 2.34\\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Stochastic Acceleration and Secondary Production in Coronae}
For the disk coronae considered here, the infall and dissipation time scales are estimated to be $t_{\rm fall}\simeq2.5\times{10}^{6}~{\rm s}~R_{15}{(\alpha V_K/4000~{\rm km}~{\rm s}^{-1})}^{-1}$ and $t_{\rm diss}\simeq1.8\times{10}^{5}~{\rm s}~R_{15}{(V_K/40000~{\rm km}~{\rm s}^{-1})}^{-1}\beta^{1/2}$, where $\alpha$ is the viscosity parameter~\cite{Shakura:1972te}.
The electron relaxation time via Coulomb collisions, $t_{ee,\rm rlx}\sim1.6\times{10}^3~{\rm s}~\theta_{e,-0.6}^{3/2}\tau_T^{-1}R_{15}$, is always shorter than $t_{\rm diss}$. The proton relaxation time is much longer, which can ensure two temperature coronae (see Appendix).
These collisionality arguments imply that turbulent acceleration is promising for protons but not for electrons (although fast acceleration by small-scale reconnections might occur~\cite{Hoshino:2015cka,Li:2015bga}).
The situation is somewhat analogous to that in RIAFs, for which nonthermal signatures have been studied~(e.g.,~\cite{Ozel:2000wm, Kimura:2014jba,Ball:2016mjx}).
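The timescale hierarchy quoted above can be checked numerically for the $L_X=10^{44}\rm~erg~s^{-1}$ case. This is a sketch (not the authors' code) assuming $\alpha=0.1$, $\beta=1$, $r=30$, and the Table~\ref{tab:quantities} values of $M$, $\theta_e$, and $\tau_T$.

```python
import math

G, c = 6.674e-8, 2.998e10
M = 10**8.00 * 1.989e33       # g, Table I
r, alpha, beta = 30.0, 0.1, 1.0
R = r * 2 * G * M / c**2      # cm

V_K = math.sqrt(G * M / R)                # Keplerian velocity
t_fall = R / (alpha * V_K)                # infall time
C_s = math.sqrt(c**2 / (6 * r))           # sound speed at the virial T_p
V_A = math.sqrt(2.0 / beta) * C_s         # Alfven speed, V_A^2 = 2 C_s^2 / beta
t_diss = (R / math.sqrt(3.0)) / V_A       # ~ H / V_A dissipation estimate

# electron relaxation time from the quoted scaling (theta_e = 0.20, tau_T = 0.46)
t_ee = 1.6e3 * (0.20 / 10**-0.6)**1.5 / 0.46 * (R / 1e15)

print(f"t_ee ~ {t_ee:.1e} s < t_diss ~ {t_diss:.1e} s < t_fall ~ {t_fall:.1e} s")
```

The ordering $t_{ee,\rm rlx}\ll t_{\rm diss}\ll t_{\rm fall}$ is what permits thermalized electrons alongside a collisionless proton population.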
We expect that protons are accelerated in the MHD turbulence. We compute steady state CR spectra by solving the following Fokker-Planck equation (e.g.,~\cite{Becker:2006nz,Stawarz:2008sp,cc70,1996ApJS..103..255P}),
\begin{equation}
\frac{\partial F_p}{\partial t} = \frac{1}{\varepsilon_p^2}\frac{\partial}{\partial \varepsilon_p}\left(\varepsilon_p^2D_{\varepsilon_p}\frac{\partial F_p}{\partial \varepsilon_p} + \frac{\varepsilon_p^3}{t_{p-\rm cool}}F_p\right) -\frac{F_p}{t_{\rm esc}}+\dot F_{p,\rm inj},\label{eq:FP}
\end{equation}
where $F_p$ is the CR distribution function, $D_{\varepsilon_p}\approx \varepsilon_p^2/t_{\rm acc}$ is the diffusion coefficient in energy space, $t_{p-\rm cool}^{-1}=t_{pp}^{-1}+t_{p\gamma}^{-1}+t_{\rm BH}^{-1}+t_{p-\rm syn}^{-1}$ is the total cooling rate, $t_{\rm esc}^{-1}=t_{\rm fall}^{-1}+t_{\rm diff}^{-1}$ is the escape rate,
and $\dot F_{p,\rm inj}$ is the injection function (see Appendix~\footnote{We consider meson production processes by $pp$ ($t_{pp}$) and $p\gamma$ ($t_{p\gamma}$) interactions, as well as the Bethe-Heitler pair production ($t_{\rm BH}$), proton synchrotron radiation ($t_{\rm p-syn}$), diffusive escape ($t_{\rm diff}$), and infall losses ($t_{\rm fall}$).}).
The stochastic acceleration time is given by $t_{\rm acc}\approx \eta{(c/V_A)}^2(R/c){(\varepsilon_p/eBR)}^{2-q}$, where $V_A$ is the Alfv\'en velocity and $\eta$ is the inverse of the turbulence strength~\cite{Dermer:1995ju,Dermer:2014vaa}.
We adopt $q=5/3$, which is consistent with the recent MHD simulations~\cite{Kimura:2018clk}, together with $\eta=10$.
Because the dissipation rate in the coronae is expected to be proportional to $L_X$, we assume that the injection function linearly scales as $L_X$.
To explain the ENB, the CR pressure required for $L_X=10^{44}~{\rm erg}~{\rm s}^{-1}$ turns out to be $\sim1$\% of the thermal pressure, which is reasonable.
We plot $\varepsilon_p L_{\varepsilon_p}\equiv4\pi (\varepsilon_p^4/c^3) F_p {\mathcal V}(t_{\rm esc}^{-1}+t_{p-\rm cool}^{-1})$ in Fig.~\ref{fig:sed}, where $\mathcal V$ is the volume.
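As an illustration of the acceleration timescale alone (the full balance against cooling requires solving Eq.~(\ref{eq:FP})), $t_{\rm acc}$ can be compared with the infall time at the CR energies behind $10-100$~TeV neutrinos. The values $B\sim530$~G and $V_A\sim3.2\times10^{9}\rm~cm~s^{-1}$ below are assumptions taken from the coronal estimates for $L_X=10^{44}\rm~erg~s^{-1}$.

```python
c = 2.998e10
e_esu = 4.803e-10
R, B, V_A = 8.86e14, 530.0, 3.16e9   # assumed coronal values (see text)
eta, q = 10.0, 5.0 / 3.0             # turbulence parameters adopted in the text

def t_acc(eps_p):
    """Stochastic acceleration time; eps_p in erg."""
    return eta * (c / V_A)**2 * (R / c) * (eps_p / (e_esu * B * R))**(2 - q)

PeV = 1.602e3                        # erg
t_half = t_acc(0.5 * PeV)            # at 0.5 PeV
t_fall = 2.3e6                       # s, infall estimate from the text
print(f"t_acc(0.5 PeV) ~ {t_half:.1e} s vs t_fall ~ {t_fall:.1e} s")
```

Acceleration to $\sim0.5$~PeV is fast compared with infall, so the cutoff of the CR spectrum is set by the cooling terms in Eq.~(\ref{eq:FP}) rather than by escape.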
While the CRs are accelerated, they interact with matter and radiation modeled in the previous section, and produce secondary particles. Following Ref.~\cite{Murase:2017pfe,Murase:2018okz}, we solve the kinetic equations taking into account electromagnetic cascades. In this work, secondary injections by the Bethe-Heitler and $p\gamma$ processes are approximately treated as $\varepsilon_{e}^2 (d\dot{N}_e^{\rm BH}/d\varepsilon_e)|_{\varepsilon_{e}=(m_e/m_p)\varepsilon_p} \approx t_{\rm BH}^{-1}\varepsilon_p^2(dN_{\rm CR}/d\varepsilon_p)$, $\varepsilon_{e}^2(d\dot{N}_e^{p\gamma}/d\varepsilon_e)|_{\varepsilon_{e}=0.05\varepsilon_p} \approx (1/3)\varepsilon_{\nu}^2(d\dot{N}_\nu^{p\gamma}/d\varepsilon_\nu)|_{\varepsilon_{\nu}=0.05\varepsilon_p} \approx(1/8)t_{p\gamma}^{-1} \varepsilon_p^2(dN_{\rm CR}/d\varepsilon_p)$,
and $\varepsilon_{\gamma}^2(d\dot{N}_\gamma^{p\gamma}/d\varepsilon_\gamma)|_{\varepsilon_{\gamma}=0.1\varepsilon_p} \approx(1/2)t_{p\gamma}^{-1} \varepsilon_p^2(dN_{\rm CR}/d\varepsilon_p)$.
The resulting cascade spectra are broad, being determined by synchrotron and inverse Compton emission.
In general, stochastic acceleration models naturally predict reacceleration of secondary pairs populated by cascades~\cite{Murase:2011cx}.
The critical energy of the pairs, $\varepsilon_{e,\rm cl}$, is consistently determined by the balance between the acceleration time $t_{\rm acc}$ and the electron cooling time $t_{\rm e-cool}$.
We find that whether the secondary reacceleration occurs or not is rather sensitive to $B$ and $t_{\rm acc}$. For example, with $\beta=3$ and $q=1.5$, the reaccelerated pairs can upscatter x-ray photons up to $\sim{(\varepsilon_{e,\rm cl}/m_ec^2)}^2\varepsilon_{X}\simeq3.4~{\rm MeV}~{(\varepsilon_{e,\rm cl}/30~{\rm MeV})}^2(\varepsilon_{X}/1~{\rm keV})$, which may form a gamma-ray tail. However, if $\varepsilon_{e,\rm cl}\lesssim1$~MeV (for $\beta=1$ and $q=5/3$), reacceleration is negligible, and small-scale turbulence is more likely to be dissipated at high $T_p$~\cite{2011PhRvL.107c5004H}.
\section{Neutrino background and MeV gamma-ray connection}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\linewidth]{SeyfertNBGDF.eps}
\caption{EGB and ENB spectra in our RQ AGN core model.
The data are taken from {\it Swift-BAT}~\cite{aje+14} (green), Nagoya balloon~\cite{1975Natur.254..398F} (blue), SMM~\cite{1997AIPC..410.1223W} (purple), COMPTEL~\cite{2000AIPC..510..467W} (gray), {\it Fermi-LAT}~\cite{Ackermann:2014usa} (orange), and IceCube~\cite{Aartsen:2017mau} for shower (black) and upgoing muon track (blue shaded) events.
A possible contribution of reaccelerated pairs is indicated (thin solid).
}
\label{fig:diffuse}
\end{center}
\end{figure}
We calculate neutrino and gamma-ray spectra for different source luminosities, and obtain the EGB and ENB through Eq.~(31) of Ref.~\cite{Murase:2014foa}.
We use the x-ray luminosity function $d\rho_X/dL_X$, given by Ref.~\cite{Ueda:2014tma}, taking into account a factor of 2 enhancement by Compton thick AGNs.
Results are shown in Fig.~\ref{fig:diffuse}.
Our RQ AGN core model can explain the ENB at $\sim30$~TeV energies if the CR pressure is $\sim1$\% of the thermal pressure.
In the vicinity of SMBHs, high-energy neutrinos are produced by {\it both} $pp$ and $p\gamma$ interactions.
The disk-corona model indicates $\tau_T\sim1$ (see Table~\ref{tab:quantities}), which leads to the effective $pp$ optical depth $f_{pp}\approx t_{\rm esc}/t_{pp}\approx n_p (\kappa_{pp}\sigma_{pp})R(c/V_{\rm fall})\sim2\tau_T{(\alpha V_{K}/4000~{\rm km}~{\rm s}^{-1})}^{-1}$. Note that $V_K$ is a function of $M$ (and $L_X$).
X-ray photons from coronae provide target photons for the photomeson production, whose effective optical depth~\cite{Murase:2008mr,Murase:2015xka} is $f_{p \gamma}[\varepsilon_p]\approx t_{\rm esc}/t_{p\gamma}\approx\eta_{p\gamma}\hat{\sigma}_{p\gamma}R (c/V_{\rm fall}) n_X{(\varepsilon_p/\tilde{\varepsilon}_{p\gamma-X})}^{\Gamma_X-1}\sim0.9L_{X,44}R_{15}^{-1}{(\alpha V_{K}/4000~{\rm km}~{\rm s}^{-1})}^{-1}{(1~{\rm keV}/\varepsilon_X)}\eta_{p\gamma}{(\varepsilon_p/\tilde{\varepsilon}_{p\gamma-X})}^{\Gamma_X-1}$,
where $\eta_{p\gamma}\approx2/(1+\Gamma_X)$, $\hat{\sigma}_{p\gamma}\sim0.7\times{10}^{-28}~{\rm cm}^2$ is the attenuation cross section, $\bar{\varepsilon}_\Delta\sim0.3$~GeV, $\tilde{\varepsilon}_{p\gamma-X}=0.5m_pc^2\bar{\varepsilon}_{\Delta}/\varepsilon_X\simeq0.14~{\rm PeV}~{(\varepsilon_X/1~{\rm keV})}^{-1}$, and $n_X\sim L_X/(4\pi R^2 c \varepsilon_X)$ is used.
The total meson production optical depth is given by $f_{\rm mes}=f_{p\gamma}+f_{pp}$, which always exceeds unity in our model.
Importantly, $\sim10-100$~TeV neutrinos originate from CRs with $\sim0.2-2$~PeV. Different from previous studies explaining the IceCube data~\cite{Stecker:2013fxa,Kalashev:2015cma}, disk photons are irrelevant for the photomeson production because its threshold energy is $\tilde{\varepsilon}_{p\gamma-\rm th}\simeq3.4~{\rm PeV}~{(\varepsilon_{\rm disk}/10~{\rm eV})}^{-1}$.
However, CRs in the 0.1-1~PeV range should efficiently interact with disk photons via the Bethe-Heitler process because the characteristic energy is $\tilde{\varepsilon}_{\rm BH-disk}=0.5m_pc^2\bar{\varepsilon}_{\rm BH}/\varepsilon_{\rm disk}\simeq0.47~{\rm PeV}~{(\varepsilon_{\rm disk}/10~{\rm eV})}^{-1}$, where $\bar{\varepsilon}_{\rm BH}\sim10(2m_ec^2)\sim10$~MeV~\citep{1992ApJ...400..181C,SG83a}.
Approximating the number of disk photons by $n_{\rm disk}\sim L_{\rm bol}/(4\pi R^2 c \varepsilon_{\rm disk})$, the Bethe-Heitler effective optical depth~\cite{Murase:2010va} is estimated to be $f_{\rm BH}\approx n_{\rm disk}\hat{\sigma}_{\rm BH}R(c/V_{\rm fall})\sim20L_{\rm bol,45.3}R_{15}^{-1}{(\alpha V_{K}/4000~{\rm km}~{\rm s}^{-1})}^{-1}{(10~{\rm eV}/\varepsilon_{\rm disk})}$, where $\hat{\sigma}_{\rm BH}\sim0.8\times{10}^{-30}~{\rm cm}^2$. The dominance of the Bethe-Heitler process is a direct consequence of the observed disk-corona SEDs, implying that the medium-energy neutrino flux is suppressed by $\sim f_{\rm mes}/f_{\rm BH}$.
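Putting the quoted scalings together for $L_X=10^{44}\rm~erg~s^{-1}$ (a sketch using the normalizations in the text; $\alpha V_K\simeq3900\rm~km~s^{-1}$ and $R_{15}\simeq0.89$ are assumed values):

```python
R15 = 0.89                 # R / 1e15 cm (assumed)
aVK = 3900.0 / 4000.0      # alpha V_K in units of 4000 km/s (assumed)
tau_T = 0.46               # Table I
Lbol_453 = 10**(45.4 - 45.3)

f_pp = 2.0 * tau_T / aVK                    # pp effective optical depth
f_pg = 0.9 / (R15 * aVK)                    # p-gamma, near eps_p ~ 0.14 PeV
f_mes = f_pp + f_pg                         # total meson production depth
f_BH = 20.0 * Lbol_453 / (R15 * aVK)        # Bethe-Heitler on disk photons
suppression = f_mes / (1.0 + f_BH + f_mes)  # medium-energy neutrino suppression
print(f"f_mes ~ {f_mes:.1f}, f_BH ~ {f_BH:.0f}, suppression ~ {suppression:.2f}")
```

The meson production depth exceeds unity while $f_{\rm BH}$ is an order of magnitude larger, so only $\sim5-10$\% of the CR energy at these energies goes into neutrinos rather than Bethe-Heitler pairs.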
The ENB flux is analytically estimated to be
\begin{eqnarray}
E_\nu^2\Phi_\nu&\sim&{10}^{-7}~{\rm GeV}~{\rm cm}^{-2}~{\rm s}^{-1}~{\rm sr}^{-1}~\left(\frac{2K}{1+K}\right)\xi_{\rm CR}\mathcal{R}_{p,0.5}^{-1}\nonumber\\
&\times&\left(\frac{20f_{\rm mes}}{1+f_{\rm BH}+f_{\rm mes}}\right)L_{X,44}{\left(\frac{\xi_z\rho_X}{{10}^{-5}~{\rm Mpc}^{-3}}\right)}.\,\,\,\,\,\,\,\,\,\,\,
\label{eq:diffuse}
\end{eqnarray}
where $K=1$ and $K=2$ for $p\gamma$ and $pp$ interactions, respectively, and $\xi_z\sim3$ represents the redshift evolution of RQ AGNs.
Eq.~(\ref{eq:diffuse}) is consistent with the numerical results presented in Fig.~\ref{fig:diffuse}.
Here ${\mathcal R}_p$ is the conversion factor from bolometric to differential luminosities, $\xi_{\rm CR}$ is the CR loading parameter defined against the x-ray luminosity, and $P_{\rm CR}/P_{\rm th}\sim0.01$ corresponds to $\xi_{\rm CR}\sim1$ in our model.
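With the fiducial numbers, the factors in Eq.~(\ref{eq:diffuse}) multiply out as follows. This sketch assumes $pp$-like $K=2$, $\xi_{\rm CR}=1$, ${\mathcal R}_p=10^{0.5}$, and $f_{\rm mes}\sim2$, $f_{\rm BH}\sim29$ from the optical-depth estimates in the text.

```python
K = 2                        # pp-like inelasticity partition
xi_CR = 1.0                  # corresponds to P_CR/P_th ~ 1%
f_mes, f_BH = 2.0, 29.0      # assumed effective optical depths
LX_44 = 1.0
xi_z, rho_X = 3.0, 3.0e-6    # redshift evolution factor, local density [Mpc^-3]

E2Phi = (1e-7 * (2.0 * K / (1.0 + K)) * xi_CR
         * (20.0 * f_mes / (1.0 + f_BH + f_mes))
         * LX_44 * (xi_z * rho_X / 1e-5))      # GeV cm^-2 s^-1 sr^-1
print(f"E^2 Phi ~ {E2Phi:.1e} GeV cm^-2 s^-1 sr^-1")
```

The result is of order $10^{-7}~{\rm GeV}~{\rm cm}^{-2}~{\rm s}^{-1}~{\rm sr}^{-1}$, the level of the observed medium-energy ENB.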
We find that the ENB and EGB are dominated by AGNs with $L_{X}\sim{10}^{44}~{\rm erg}~{\rm s}^{-1}$, at which the local number density is $\rho_X\sim3\times{10}^{-6}~{\rm Mpc}^{-3}$~\cite{Murase:2016gly}.
The $pp$, $p\gamma$ and Bethe-Heitler processes all initiate electromagnetic cascades, whose emission appears in the MeV range.
Thanks to the dominance of the Bethe-Heitler process, RQ AGNs responsible for the medium-energy ENB should contribute to $\gtrsim10-30$\% of the MeV EGB.
Possible reacceleration can enhance the MeV gamma-ray flux, and the MeV EGB could be explained if $\sim0.1$\% of the pairs is injected into the reacceleration process.
For comparison, models for RL AGNs (\cite{Inoue:2011bm,Ajello:2015mfa} for the EGB and \cite{Fang:2017zjf} for the ENB) are also shown in Fig.~\ref{fig:diffuse}.
This demonstrates that in principle the dominant portions of the EGB and ENB from MeV to PeV energies can be explained by the combination of RQ AGNs and RL AGNs. However, we also caution that other possibilities such as starburst galaxies are still viable~\cite{Murase:2013rfa}.
\section{Multimessenger Tests}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\linewidth]{SeyfertDF.eps}
\caption{Point source fluxes of all flavor neutrinos and gamma rays from a nearby RQ AGN. A possible effect of secondary reacceleration is indicated (thin solid). For eASTROGAM~\cite{DeAngelis:2016slk} and AMEGO~\cite{Moiseev:2017mxg} sensitivities, the observation time of $10^6$~s is assumed. The IceCube eight-year sensitivity~\cite{Aartsen:2018ywr} and the 5 times better case~\cite{Aartsen:2014njl} are shown.
}
\label{fig:psflux}
\end{center}
\end{figure}
Detecting MeV signals from individual Seyferts would be crucial for testing the model, which is challenging for existing gamma-ray telescopes. However, this would be feasible with future telescopes like eASTROGAM~\cite{DeAngelis:2016slk}, GRAMS~\cite{Aramaki:2019bpi}, and AMEGO~\cite{Moiseev:2017mxg} (see Fig.~\ref{fig:psflux}).
For luminous Seyfert galaxies, the fact that x rays come from thermal Comptonization suggests that the photon energy density is larger than the magnetic field energy density. In the scenario to explain 10-100 TeV neutrinos, secondary pairs are injected in the 100-300~GeV range and form a fast cooling $\varepsilon_e^{-2}$ spectrum down to MeV energies in the steady state. Thus, in the simple inverse Compton cascade scenario, the cascade spectrum is extended up to the break energy due to $\gamma\gamma\rightarrow e^+e^-$.
In reality, both synchrotron and inverse Compton processes can be important. The characteristic frequency of synchrotron emission by Bethe-Heitler pairs is given by $\varepsilon_{\rm syn}^{\rm BH}\sim0.8~{\rm MeV}~{B}_{2.5}{(\varepsilon_p/0.5~{\rm PeV})}^2$~\cite{Murase:2018okz}. Because disk photons lie in the $\sim1-10$~eV range, the Klein-Nishina effect is moderately important at the injection energies. The synchrotron cascade is dominant if the photon energy density is smaller than $\sim10B^2/(8\pi)$, i.e., $B\gtrsim200~{\rm G}~L_{\rm bol,45.3}^{1/2}R_{15}^{-1}$. In either synchrotron or inverse Compton cascades, MeV gamma rays are expected.
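The MeV scale of the Bethe-Heitler synchrotron component can be recovered from first principles. This is a sketch; the numerical prefactor convention used here differs from the quoted $0.8$~MeV by a factor of order unity.

```python
import math

h = 6.626e-27
e_esu = 4.803e-10
m_e, c = 9.109e-28, 2.998e10
B = 10**2.5                       # G, the B_2.5 normalization in the text
eps_p = 0.5 * 1.602e3             # 0.5 PeV in erg

gamma_e = (eps_p / 1836.0) / (m_e * c**2)   # pair Lorentz factor, eps_e = (m_e/m_p) eps_p
nu_c = (3.0 / (4.0 * math.pi)) * gamma_e**2 * e_esu * B / (m_e * c)
E_syn_MeV = h * nu_c / 1.602e-6
print(f"E_syn ~ {E_syn_MeV:.1f} MeV")
```

Either way, the Bethe-Heitler pairs radiate their synchrotron photons squarely in the MeV band.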
The ENB and EGB are dominated by AGNs with $L_X\sim10^{44}~{\rm erg}~{\rm s}^{-1}$. AMEGO's differential sensitivity suggests that point sources with $d\lesssim150-400$~Mpc are detectable, and the number of the sources within this horizon is ${\mathcal N}_{\rm AGN}\sim10-100$. Detections or nondetections of the MeV gamma-ray counterparts will support or falsify the AGN core model as the origin of $\sim30$~TeV neutrinos. Note that the predicted neutrino flux shown in Fig.~\ref{fig:psflux} is below the current IceCube sensitivity.
Nearby Seyferts may be seen as point sources with IceCube-Gen2, but stacking analyses are more promising.
\section{Summary and discussion}\label{sec:summary}
We presented the results of a concrete model for RQ AGNs which can explain the medium-energy neutrino data.
The disk-corona SEDs have been well studied, and known empirical relations enabled us to estimate model parameters, with which we solved the relevant transport equations and computed subsequent cascades consistently.
The model is not only motivated by observations and theory, but it also provides clear predictions.
In particular, the dominance of the Bethe-Heitler process is a direct consequence of the observed SEDs, leading to a robust MeV gamma-ray connection. Nearby Seyferts will be promising targets for future MeV gamma-ray telescopes such as eASTROGAM and AMEGO. A good fraction of the MeV EGB may come from RQ AGNs especially in the presence of secondary reacceleration, in which gamma-ray anisotropy searches should be powerful tools~\cite{Inoue:2013vza}. Neutrino multiplet and stacking searches with IceCube-Gen2 are also promising~\cite{Murase:2016gly}.
The suggested tests are crucial for unveiling nonthermal phenomena in the vicinity of SMBHs.
For low-luminosity AGNs, where the plasma density is low, direct acceleration may occur~\cite{Levinson:2000nx} and TeV gamma rays can escape~\cite{Aleksic:2014xsg}.
However, in Seyferts, the plasma density is so high that a gap is not expected, and GeV-TeV gamma rays are blocked.
Only MeV gamma rays can escape from the core region, and neutrinos serve as a smoking gun.
Our results strengthen the importance of further theoretical studies of disk-corona systems. Simulations on turbulent acceleration in coronae and particle-in-cell computations of acceleration via magnetic reconnections are encouraged in order to understand the CR acceleration in the disk-corona system. Global MHD simulations will also be relevant to examine other postulates such as accretion shocks~\cite{1986ApJ...304..178K,Stecker:1991vm,Szabo:1994qx,Stecker:1995th} or colliding blobs~\cite{AlvarezMuniz:2004uz} and to reveal the origin of low-frequency emission that could come from the outer region of coronae~\cite{Inoue:2014bwa,Inoue:2018kbv}.
\acknowledgments
This work is supported by Alfred P. Sloan Foundation and NSF Grant No. PHY-1620777 (K.M.), JSPS Oversea Research Fellowship and the IGC post-doctoral fellowship program (S.S.K.), and NASA NNX13AH50G as well as the Eberly Foundation (P.M.).
While we were finalizing this project, we became aware of a related work by Inoue et al. (arXiv:1904.00554).
We thank Yoshiyuki Inoue for discussions.
Both works are independent and complementary, and there are notable differences.
First, we consider stochastic acceleration by turbulence based on the disk-corona model rather than acceleration at accretion shocks. Second, we focus on the origin of 10-100 TeV neutrinos, for which the Bethe-Heitler suppression is critical.
Third, we calculate CR-induced electromagnetic cascades, which is essential for testing the scenario for IceCube neutrinos.
K.M. is also grateful for the invitation to the AMEGO Splinter meeting held in January 2019, at which preliminary results of the cascade emission were presented.
\section{Introduction}
Over the years,
a number of important facts about the quantum sine-Gordon model have been discovered.
Among them are elegant relations between the zero-point energy
and the Painlev\'e III transcendent.
To describe them explicitly, let us recall some elementary facts about the structure of the Hilbert space
of the model,
\begin{eqnarray}\label{sg}
\mathcal{L}={ \frac{1}{\beta^2_{\rm sg}}}\ \Big(\, \ { \frac{1}{2}}\ (\partial_\mu \phi)^2+\Lambda\
\cos(\phi)\, \Big)\ ,
\end{eqnarray}
in finite-size geometry with the
spatial coordinate $x$ compactified on
a circle of a circumference $R$, with the periodic boundary conditions
\begin{eqnarray}\label{nasbash}
\phi(x+R,t)=\phi(x,t)\ .
\end{eqnarray}
Due to the periodicity of the potential term $\Lambda\, \cos(\phi)$ in \eqref{sg} under the shift
$\phi\to\phi+2\pi$, the
space of states $\mathcal{H}$ splits into orthogonal subspaces
$\mathcal{H}_k$, characterized by the ``quasi-momentum'' $k$,
\begin{eqnarray}\label{quasi}
\phi \to \phi +2\pi \,: \qquad \mid \Psi_k\,\rangle \
\to\ \mbox{e}^{2\pi {\rm i}\,k}\,\mid \Psi_k \,\rangle
\end{eqnarray}
for $\mid \Psi_k\,\rangle \in \mathcal{H}_k$. We call the ground state of the finite-size
system \eqref{sg} in the sector $\mathcal{H}_k$ the $k$-vacuum
and denote it by $|\,\Psi^{{\rm (vac)}}_k\,\rangle$. The corresponding energy will be denoted by $E_k$.
In general, the coupling constant in \eqref{sg}
should be restricted by the condition $\beta^2_{\rm sg}<8\pi$ \cite{Coleman:1974bu}, and
it is convenient to trade $\beta^2_{\rm sg}$ for the ``renormalized coupling'':
\begin{eqnarray}\label{ososao}
\xi=\frac{\beta^2_{\rm sg}}{8\pi -\beta^2_{\rm sg}}\ .
\end{eqnarray}
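As a quick numerical sanity check (an illustration, not part of the original text; plain Python assumed), the map \eqref{ososao} and its inverse $\beta^2_{\rm sg}=8\pi\xi/(1+\xi)$ can be verified directly:

```python
# Check of the map (ososao) between beta^2_sg and the renormalized
# coupling xi, together with its inverse beta^2_sg = 8*pi*xi/(1+xi).
from math import pi

def xi_of_beta2(b2):
    # xi = beta^2 / (8*pi - beta^2), valid for beta^2 < 8*pi
    return b2 / (8.0 * pi - b2)

def beta2_of_xi(xi):
    # inverse map: beta^2 = 8*pi*xi/(1+xi)
    return 8.0 * pi * xi / (1.0 + xi)

print(xi_of_beta2(beta2_of_xi(2.0)))   # round trip recovers xi = 2
print(beta2_of_xi(2.0) / pi)           # xi = 2 corresponds to beta^2 = 16*pi/3
```

Note that $\beta^2_{\rm sg}\to 8\pi$ corresponds to $\xi\to\infty$.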
The value
$\xi=2$
is special. For this coupling, the theory possesses ${\cal N}=2$ supersymmetry
which is spontaneously broken, except in the subspaces
$\mathcal{H}_k$ corresponding to
$k=\pm\frac{1}{4}$\ \cite{Saleur:1991hk}.
In the sectors with unbroken supersymmetry the ground state energy is of course identically zero.
In Ref.\cite{Fendley:1992jy} Fendley and Saleur (see also related Ref.\cite{Cecotti:1992qh}) applied the general
construction \cite{Cecotti:1991me}
to derive the remarkable relation
\begin{eqnarray}\label{sssiasai}
\frac{ R}{\pi}\ \Big(\frac{\partial E_{k}}{\partial k} \Big)_{\xi=2\atop k=\pm 1/4}=\mp 4 r\,
\frac{\mbox{d} U(r)}{\mbox{d} r}\ .
\end{eqnarray}
Here the
variable $r$ stands for the size of the system measured in the units of the correlation length
(inverse soliton mass $M$),
\begin{eqnarray}\label{sssat}
r=MR\ ,
\end{eqnarray}
and
$U=U(t)$ is a particular solution to the Painlev${\acute {\rm e}}$ III equation
\begin{eqnarray}\label{soisiasai}
\frac{1}{t}\ \frac{\mbox{d}}{\mbox{d} t}\Big(t\,\frac{\mbox{d} U}{\mbox{d} t}\Big)=\frac{1}{2}\ \sinh(2 U)\ .
\end{eqnarray}
This equation admits a one-parameter family of solutions regular at $t>0$ (see e.g.\cite{McCoy:1976cd}) called
the Painlev${\acute {\rm e}}$ III transcendents.
The special solution in \eqref{sssiasai} is fixed by the following boundary conditions
\begin{eqnarray}\label{sosias}
U(t)=
\begin{cases}
-\frac{1}{3}\ \log(t)+O(1)\ \ \ \ \ {\rm as}\ \ \ \ t\to 0\\
\ o(1) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm as}\ \ \ \ t\to \infty
\end{cases}\ .
\end{eqnarray}
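The special solution \eqref{sosias} can be constructed numerically by a shooting method. The following sketch (an illustration, not taken from the references; SciPy is assumed, and the monotonicity of the shooting map and the bisection bracket are assumptions) integrates \eqref{soisiasai} backward from large $t$, seeded with the linearized decaying solution $U\simeq A\,K_0(t)$, and bisects on the amplitude $A$ until the small-$t$ behavior matches $-\frac{1}{3}\log t$:

```python
# Numerical sketch: the Painleve III transcendent of Eqs. (soisiasai)-(sosias).
# We integrate (1/t) d/dt (t dU/dt) = (1/2) sinh(2U) backward from large t,
# seeded with the linearized decaying solution U ~ A*K0(t), and bisect on A
# until U(t) ~ -(1/3) log t at small t.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import k0, k1

T, t_min = 8.0, 1e-3   # start of backward integration / small-t endpoint

def small_t_slope(A):
    """Integrate backward from t=T; return sigma = -t U'(t) at t=t_min."""
    def rhs(t, y):                 # y = (U, t*U')
        U, V = y
        return [V / t, 0.5 * t * np.sinh(2.0 * U)]
    y0 = [A * k0(T), -T * A * k1(T)]   # U ~ A*K0(t) as t -> infinity
    sol = solve_ivp(rhs, (T, t_min), y0, rtol=1e-10, atol=1e-12)
    return -sol.y[1, -1]               # sigma in U ~ -sigma*log(t)

# Assumed bracket: sigma(A) grows monotonically with A on [0.05, 0.45]
lo, hi = 0.05, 0.45
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if small_t_slope(mid) < 1.0 / 3.0:
        lo = mid
    else:
        hi = mid
A_star = 0.5 * (lo + hi)
sigma = small_t_slope(A_star)
print(A_star, sigma)   # sigma should come out close to 1/3
```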
In the subsequent work\ \cite{Zamolodchikov:1994uw} Alyosha Zamolodchikov derived one more mysterious relation
\begin{eqnarray}\label{siasai}
\frac{ R}{ \pi}\ \Big(\frac{\partial E_{k}}{\partial \xi} \Big)_{\xi=2\atop k=\pm 1/4}
=-\frac{r^2}{8}+
\frac{1}{2}\ \int_r^\infty\mbox{d} t\ t\ \sinh^2U(t)\ .
\end{eqnarray}
Below, we will refer to Eqs.\,\eqref{sssiasai},\,\eqref{siasai} as the FSZ relations.
Relations similar to\ \eqref{sssiasai} and \eqref{siasai}
were also discovered in other models\ \cite{Cecotti:1992qh}, \cite{Fendley:1999zd}, \cite{Bazhanov:2004hv}.
However, all these generalizations were limited to special choices of the coupling constants and
sectors of the theories.
The long-standing consensus was that the
FSZ relations are
due to an accidental symmetry
and do not possess any interesting generalizations to general values of $\xi$ and $k$.
The first serious sign that this may not be true came from the study of $D=4$ supersymmetric
gauge theories \cite{Gaiotto:2008cd, Gaiotto:2009hg, Alday:2009yn, Alday:2009dv, Alday:2010vh}.
In these works a link was observed between certain
Thermodynamic Bethe Ansatz (TBA) type integral equations and partial differential equations
integrated by the inverse scattering methods.
Some of the integral equations were in fact identical to
the sine-Gordon TBA systems
corresponding to $\xi\not =2$ and $k\not=\pm\frac{1}{4}$.
Inspired by this remarkable development, A. Zamolodchikov and the author found
a classical integrable equation associated with the quantum sine-Gordon model
for generic $\xi$ and $k$\ \cite{Lukyanov:2010rn}.
It turned out to be the classical Modified Sinh-Gordon equation (MShG)
\begin{eqnarray}\label{shgz}
\partial_z\partial_{\bar z}\eta -\mbox{e}^{2\eta}+p(z)\,{p}({\bar z})\ \mbox{e}^{-2\eta}=0
\end{eqnarray}
with $p(z)$ of the form
\begin{eqnarray}\label{kskssls}
p(z) = z^{2\alpha}-s^{2\alpha}\,.
\nonumber
\end{eqnarray}
Parameters $\alpha$ and $s$ are real and positive,
related to the
sine-Gordon parameters $\xi$\ \eqref{ososao} and $r=MR$\ \eqref{sssat} as follows
\begin{eqnarray}
\label{aoisasos}
\alpha={\xi}^{-1}\ , \ \ \ \ \ \ \qquad\qquad
s=\Big( \frac{2\,r}{\xi\, r_\xi}\Big)^{\frac{\xi}{1+\xi}}\ ,
\end{eqnarray}
where, for future references, we use the notation
\begin{eqnarray}\label{tsrars}
r_\xi=\frac{2\sqrt{\pi}\, \Gamma(\frac{\xi}{2})}{\Gamma(\frac{3}{2}+\frac{\xi}{2}) }\ .
\end{eqnarray}
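For orientation, $r_\xi$ takes simple closed-form values at special points, e.g. $r_1=2\pi$ and $r_2=8/3$; a minimal numerical check of \eqref{aoisasos}, \eqref{tsrars} (plain Python, not from the text):

```python
# Check of Eq. (tsrars): r_xi = 2*sqrt(pi)*Gamma(xi/2)/Gamma(3/2 + xi/2),
# and of the map r -> s from Eq. (aoisasos).
from math import gamma, pi, sqrt

def r_xi(xi):
    return 2.0 * sqrt(pi) * gamma(xi / 2.0) / gamma(1.5 + xi / 2.0)

def s_of_r(r, xi):
    # s = (2 r / (xi r_xi))^(xi/(1+xi))
    return (2.0 * r / (xi * r_xi(xi))) ** (xi / (1.0 + xi))

print(r_xi(1.0))         # Gamma(1/2) = sqrt(pi), Gamma(2) = 1  ->  2*pi
print(r_xi(2.0))         # Gamma(1) = 1, Gamma(5/2) = 3*sqrt(pi)/4  ->  8/3
print(s_of_r(1.0, 2.0))  # the xi = 2 point at r = 1
```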
The MShG equation in general has no rotational symmetry.
Instead, it has the discrete
symmetry
$z \to \text{e}^\frac{{\rm i}\pi}{\alpha}\, z\,, \
{\bar z} \to \text{e}^{-\frac{{\rm i}\pi}{\alpha}}\, {\bar z}$.
Solutions of the MShG equation \eqref{shgz} relevant to the problem
respect this symmetry, are continuous at all finite nonzero $z$, and
grow slower than exponentially as $|z|\to\infty$.
In other words, they are single-valued functions
on a cone with the apex angle $\frac{\pi}{\alpha}$ including the zero of $p(z)$ (see Fig.\ref{fig0}).
There is a one-parameter family of such solutions, characterized
by the behavior at the apex:
$\eta \to 2l\,\log|z|+O(1)$ as $|z|\to 0$,
with
real $l\in (-\frac{1}{2},\,\frac{1}{2}\, )$ which turns out to be related to the
quasi-momentum\ \eqref{quasi} by
\begin{eqnarray}\label{llsslasa}
l=2\,|k|-{\textstyle\frac{1}{2}}\ .
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=10 cm]{ConeSnewX.eps}
\caption{ The world sheet for the MShG equation\ \eqref{shgz}.
The dots $A$ and $B$ indicate positions of the apex and zero of $p(z)$, respectively.
In the minisuperspace limit
($\alpha\to\infty$, $r$ kept fixed) the world sheet shrinks to a single ray.}
\label{fig0}
\end{figure}
The MShG equation
can be represented as a flatness condition
for a certain $SL(2)$ connection. In Ref.\cite{Lukyanov:2010rn} it was shown that the
monodromy from the apex to infinity corresponding to the above-described solution
is essentially the sine-Gordon $Q$-function,
whose asymptotic expansions
generate the vacuum eigenvalues of the integrals of motion
of the quantum theory.
The original motivation for the present work was to incorporate the FSZ relations
into the construction of Ref.\cite{Lukyanov:2010rn}.
This problem is solved in Section\,\ref{Sect2} of this work.
It turned out that the main player in the derivation of the generalized FSZ relations
is a properly defined ``on-shell'' action for the MShG
equation.
Remarkably it can also be interpreted as a critical value of the
Yang-Yang (YY) functional in the quantum sine-Gordon model.
In the seminal work \cite{Yang} the variational principle was applied to prove the
existence of a solution to the vacuum Bethe Ansatz (BA) equations for the XXZ spin chain.
Since then, the functional whose extremum condition reproduces the BA equations bears the Yang-Yang name.
The YY-functional proves to be useful for computing norms of the Bethe states
(see e.g. \cite{Korepin} and references therein).
Recently it has attracted a great deal of attention in the context of the relation between
supersymmetric gauge theories
and quantum mechanical integrable systems\ \cite{Moore:1997dj, Gerasimov:2006zt, Nekrasov:2009rc}.
However, the r${\hat {\rm o}}$le of the YY-functional in 2D QFT seems to be undervalued.
To the best of my knowledge,
it has never been defined in
intrinsic terms of integrable QFT.
Nevertheless, there is a brute-force approach to the calculation
of critical values of the YY-functional in the sine-Gordon QFT.
It is based on a
discretization of the theory, i.e., reducing it to a system with a finite number of degrees of freedom,
which can then
be solved by standard methods of BA.
Although this formal approach does not clarify the meaning
of the YY-functional itself, it is sufficient for the calculation of
the YY-functions, i.e., the critical values of YY-functional corresponding to
the Bethe states.
In this work we restrict our attention to the $k$-vacuum state $\mid \Psi^{{\rm (vac)}}_k\,\rangle \in \mathcal{H}_k$.
In Section\,\ref{Sect3} it is shown that the corresponding YY-function
can be identified with the on-shell action for the MShG equation.
Another purpose of Section\,\ref{Sect3} is to discuss technical tools for the calculation of the YY-function.
We review the well-known approach\ \cite{ Klumper:1991, Destri:1992qk} which allows one to
express partial derivatives of the YY-function in terms of a solution
to the non-linear integral equation from Ref.\cite{Destri:1992qk}.
Section\,\ref{SectionMini} is devoted to the so-called minisuperspace approximation
(in the stringy terminology).
The approximation implies the $\xi\to 0$ limit such that the soliton mass $M$ is kept fixed.
In this case
the sine-Gordon QFT reduces to the quantum mechanical problem of
a particle in the cosine
potential.\footnote{Note that in the conventional
classical limit, the mass of the lightest particle in \eqref{sg} is kept fixed while $M\to\infty$.}
In the minisuperspace limit the world sheet of the MShG equation collapses into a single ray (see Fig.\,\ref{fig0})
and the solution $\eta$ on the segment $(A,\,B)$ is expressed in terms of
a solution of the Painlev${\acute {\rm e}}$ III equation \eqref{soisiasai},
subject to the boundary conditions
\begin{eqnarray}\label{ytrsosias}
U(t)=
\begin{cases}
\ 2l\, \log(t)+O(1) \, \ \ \ \ \ \ \ \ {\rm as}\ \ \ \ t\to 0\\
-\log(r-t)+O(1)\ \ \ \ \ {\rm as}\ \ \ \ t\to r
\end{cases}\ .
\end{eqnarray}
Real solutions of the Painlev${\acute {\rm e}}$ III equation which are regular on the open segment $t\in (0,r)$
and satisfy
the boundary conditions \eqref{ytrsosias} form a
family parameterized by $r>0$ and $-\frac{1}{2}< l<\frac{1}{2}$.
By taking the minisuperspace limit of the generalized FSZ relation, we solve the connection problem
for the local expansions of the solution $U(t)$\ \eqref{ytrsosias} at $t=0$ and $t=r$.
The results obtained in Section\,\ref{SectionMini} provide
an interesting link between the Painlev${\acute {\rm e}}$ III and Mathieu equations.
\section{\label{Sect2}On-shell action for the ShG equation}
\subsection{From MShG to ShG}
In practical calculations
it is convenient
to trade the world sheet variable $z$ in the MShG equation
\eqref{shgz} for
\begin{eqnarray}\label{uyslaskasa}
w=\mbox{e}^{\frac{{\rm i}\pi(\alpha+1)}{2\alpha}}\ \int \mbox{d} z\ \sqrt{p(z)}\ ,
\end{eqnarray}
and similarly for ${\bar w}$.
The branch of the multivalued function\ \eqref{uyslaskasa}
can be chosen to provide the map of the cone with the cut along the ray $(AB)$ visualized in Fig.\ref{fig1a}a to the
domain of the $w$-complex plane in Fig.\ref{fig1a}b\ (see Ref.\,\cite{Lukyanov:2010rn} for details).
\begin{figure}
\centering
\includegraphics[width=12 cm]{mapsh0nm.eps}
\caption{
The $w$-image of the cut cone
under the map \eqref{uyslaskasa}.
The points on the cone and their images are denoted by the same symbols.
The segment $A{\tilde B}$ is identified
with $AB$, and the boundary line from ${\tilde B}$ to infinity is identified with the line
from $B$ to infinity. The point $O$ is an origin of the $w$-plane.}
\label{fig1a}
\end{figure}
This conformal map brings the MShG equation
to the conventional Sinh-Gordon (ShG) form
\begin{eqnarray}\label{luuausay}
\partial_w{ \partial}_{\bar w}{\hat\eta}-\mbox{e}^{2{\hat\eta}}+\mbox{e}^{-2{\hat\eta}}=0
\end{eqnarray}
for
$ {\hat\eta}= \eta-{\textstyle \frac{1}{4}}\ \log \big(\,p(z) p({\bar z})\,\big)$, which vanishes at infinity
\begin{eqnarray}\label{fssisaisa}
\lim_{|w|\to\infty}{\hat \eta}=0\ ,
\end{eqnarray}
becoming singular at the apex
\begin{eqnarray}\label{skssai}
{\hat\eta}= 2l\ \log|w-w_{A}|+ O(1)\ \ \ \ \ {\rm as}\ \ \ \ \ |w- w_{A}|\to 0
\end{eqnarray}
and at the point $B\sim {\tilde B}$
\begin{eqnarray}\label{skssaisusy}
{\hat\eta}= -\frac{1}{3}\ \log|w-w_{B}|+O(1)\ \ \ \ \ {\rm as}\ \ \ \ \
|w- w_{B}|\to 0\ .
\end{eqnarray}
Unlike the apex singularity, the asymptotic \eqref{skssaisusy} is an artifact of the
coordinate transformation\ \eqref{uyslaskasa}.
\subsection{\label{Section2b}Action functional}
To generalize relations \eqref{sssiasai},\,\eqref{siasai} we need
an extra ingredient -- the ``on-shell'' action for the ShG
equation\ \eqref{luuausay}.
It can be defined through the following limiting procedure.
Start with the domain $D$ of the complex $w$-plane depicted in Fig.\ref{fig1a}b. Cut out small sectors of radius $\epsilon$
around the points $A$, $B$ and ${\tilde B}$ to obtain the domain $D_\epsilon$ shown in Fig.\ref{fig2d}.
\begin{figure}
[!ht]
\centering
\includegraphics[width=5 cm]{mapsh0nl.eps}
\caption{The integration domain $D_{\epsilon}$ for the regularized action
\eqref{ssiisa}.}
\label{fig2d}
\end{figure}
Define the regularized action functional
\begin{eqnarray}\label{ssiisa}
{\cal A}[\,{\hat \eta}\,]&=&\lim_{\epsilon\to 0}\, \bigg[\, \int_{D_\epsilon} \frac{\mbox{d} w \wedge\mbox{d} {\bar w}}{2\pi{\rm i} }\
\big(\, \partial_w{\hat \eta} \partial_{\bar w}{\hat \eta}+4\,\sinh^2({\hat\eta})\,\big)+
\frac{ l}{\pi \epsilon}\ \int_{C_{A}}\mbox{d} \ell\
{\hat \eta}-\frac{l^2}{\alpha}\ \log(\epsilon)\nonumber\\
&-& \frac{ 1}{6\pi \epsilon}\ \int_{C_{B}}\mbox{d} \ell \
{\hat \eta}-
\frac{1 }{ 6\pi\epsilon}\ \int_{C_{{\tilde B}}}\mbox{d} \ell \
{\hat \eta}
-\frac{1}{12}\ \log(\epsilon)\, \bigg]\ .
\end{eqnarray}
The first term is the ``cutoff'' version of the naive action for the ShG equation\ \eqref{luuausay}.
The additional terms involve integrals over the
three arcs $C_{A}$, $C_{B}$ and $ C_{{\tilde B}}$,
as well as field-independent counterterms which ensure the existence of the limit.
Then the ShG equation, supplemented by the
asymptotic behaviors near the singularities \eqref{skssai}, \eqref{skssaisusy}
and at large $w$\ \eqref{fssisaisa}, constitutes a sufficient condition for an extremum of
the functional \eqref{ssiisa}:
\begin{eqnarray}\label{sossai}
\delta {\cal A}=0\ .
\end{eqnarray}
Finally we define the on-shell action ${\cal A}^*$ as
the value of ${\cal A}[{\hat \eta}]$
calculated on the solution ${\hat \eta}$ \eqref{luuausay}-\eqref{skssaisusy}.
For the variation \eqref{sossai}
the world sheet geometry, as well as the parameter $l$ controlling the behavior of the solution at the apex,
is assumed to be fixed.
Varying the on-shell action
with respect to the parameter $l$, one observes that
\begin{eqnarray}\label{uyososai}
\Big(\frac{\partial {\cal A}^*}{\partial l}\Big)_{r,\alpha}=\frac{1}{\alpha}\ {\hat \eta}_A\ ,
\end{eqnarray}
where the constant ${\hat\eta}_A$ can be thought of as a regularized value of
the solution $ {\hat \eta}$ at the apex
\begin{eqnarray}\label{eaisusy}
{\hat\eta}_A=\lim_{|w-w_A|\to 0}\big(\, {\hat \eta}(w,{\bar w})-2l\, \log|w-w_A|\,\big)\ .
\end{eqnarray}
It should be stressed that unlike $l$, which is an ``input'' parameter
of the problem, the value of the constant ${\hat \eta}_A$ is not prescribed in advance
but determined through the solution, i.e., it is rather part
of the ``output''.
Let us consider now the infinitesimal variations of the world-sheet geometry.
The corresponding $\delta{\cal A}$ do not vanish on-shell
and can be expressed through the on-shell values of the stress-energy tensor.
Under
the infinitesimal dilation
$\frac{\delta r}{r}=\frac{\delta\epsilon}{\epsilon}=\lambda \ll 1$,
\begin{eqnarray}\label{xsososa}
\delta_r {\cal A}^*=\frac{\delta r}{r}\, \bigg[\,
\lim_{\epsilon\to 0}\int_{D_\epsilon} \frac{\mbox{d} w\wedge \mbox{d} {\bar w}}{\pi{\rm i} } \ \Theta-\Big(
\frac{ l^2}{\alpha}+\frac{1}{12}\, \Big)\, \bigg]\ ,
\end{eqnarray}
where
\begin{eqnarray}\label{hssasys}
\Theta=4 \,\sinh^2({\hat \eta})
\end{eqnarray}
is the trace of
the stress-energy tensor for the classical ShG equation.
The other two non-vanishing components of $T_{\mu\nu}$
are given by
\begin{eqnarray}\label{sosisia}
T= (\partial_w{\hat \eta})^2\, ,\ \ \ \ \ \ \ {\bar T}= (\partial_{\bar w}{\hat \eta})^2\ .
\end{eqnarray}
By virtue of the ShG equation, they satisfy
the continuity equations
\begin{eqnarray}\label{sa]assa}
\partial_{\bar w}T=\partial_w\Theta\ ,\ \ \ \ \ \partial_{ w}{\bar T}=\partial_{\bar w}\Theta\ ,
\end{eqnarray}
and, hence, they can be expressed in terms of a single scalar potential
\begin{eqnarray}\label{ystsossai}
T=\partial^2_w\Phi\ ,\ \ \ \ \ {\bar T}=\partial^2_{\bar w}\Phi\ ,\ \ \ \ \ \
\Theta=\partial_w\partial_{\bar w}\Phi\ .
\end{eqnarray}
Combining the last formula with\ \eqref{xsososa}, one has
\begin{eqnarray}\label{isxssosa}
r\, \Big(\frac{\partial {\cal A}^*}{\partial r}\Big)_{\alpha,l}=
\lim_{\epsilon\to 0}\int_{D_\epsilon} \frac{\mbox{d} w\wedge \mbox{d}{\bar w} }{\pi{\rm i} } \ \partial_w\partial_{\bar w}\Phi
-\Big(
\frac{ l^2}{\alpha}+\frac{1}{12}\, \Big)\ .
\end{eqnarray}
The two-fold integral here can be reduced to a line integral over the boundary of $D_\epsilon$.
The line integrals over the arcs $C_A$, $C_B$ and $C_{\tilde B}$
from Fig.\ref{fig2d} cancel out the term in the brackets in \eqref{isxssosa}.
This follows
from the asymptotic formulas
\begin{eqnarray}\label{oosaosa}
{ \Phi}(w,\,{\bar w})= -2l^2\ \log|w-w_{A}|+O(1)\ , \ \ \ \ \ \ \ \ \ |w- w_{ A}|\to 0
\end{eqnarray}
and
\begin{eqnarray}\label{kssysai}
{ \Phi}(w,\,{\bar w})
=-\frac{1}{18}\times
\begin{cases}
\log|w-w_{B}|+O(1)\ ,\ \ \ \ \ \ \ \ \ |w- w_{ B}|\to 0\\
\log|w-w_{\tilde B}|+O(1)\ ,\ \ \ \ \ \ \ \ \ |w- w_{\tilde B}|\to 0
\end {cases}\ ,
\end{eqnarray}
which are consequences of Eqs.\,\eqref{sosisia},\,\eqref{ystsossai} and\ \eqref{skssai},\,\eqref{skssaisusy}.
To proceed further, we need
to use some properties of the potential $\Phi$ discussed in Appendix\,\ref{AppendixA}.
Namely, for $|w|>|w_B|$
\begin{eqnarray}\label{ssisaias}
\Phi\big( w\,\mbox{e}^{ \frac{{\rm i} \pi (\alpha+1)}{\alpha}},\, {\bar w}\,\mbox{e}^{-\frac {{\rm i} \pi (\alpha+1)}{\alpha}}\,\big)=
\Phi(w,\,{\bar w})
\end{eqnarray}
and
\begin{eqnarray}\label{sosssiasai}
\lim_{|w|\to \infty}{ \Phi}(w,\,{\bar w})= 0\ .
\end{eqnarray}
Eq.\eqref{ssisaias}
implies that
the half-infinite boundary rays $(B,\,\infty)$ and $({\tilde B},\,\infty)$ from Fig.\ref{fig2d}
do not contribute to the integral \eqref{isxssosa}.
Now, taking into account Eq.\eqref{sosssiasai}, it is straightforward to show that
\begin{eqnarray}\label{xososa}
r\, \Big(\frac{\partial {\cal A}^*}{\partial r}\Big)_{\alpha,l}=-\frac{r}{2\pi}\
\sin\Big({ \frac{\pi}{2\alpha}}\Big)\ \big(\, {\mathfrak J}_1+ {\bar {\mathfrak J}}_1\,\big)
\ ,
\end{eqnarray}
where notations from Ref.\cite{Lukyanov:2010rn} are adopted,
\begin{eqnarray}\label{sisisias}
\sin\big({\textstyle \frac{\pi}{2\alpha}}\big)\ {\mathfrak J}_1&=&
{\textstyle \frac{1}{4}}\ \ \mbox{e}^{\frac{{\rm i} (\alpha+1) \pi}{2\alpha}}\ \int_{C}\big(\, \mbox{d} w\, T+\mbox{d} {\bar w}\, \Theta\,\big)\\
\sin\big({\textstyle \frac{\pi}{2\alpha}}\big)
\ {\bar {\mathfrak J}}_1&=&
{\textstyle \frac{1}{4}}\
\ \mbox{e}^{-\frac{{\rm i} (\alpha+1) \pi}{2\alpha}}\int_{ C} \big(\, \mbox{d} {\bar w}\, {\bar T}+ { \mbox{d} w}\, \Theta\,\big)\ .\nonumber
\end{eqnarray}
The integration contour $C$ is visualized in
Fig.\,\ref{fig4a}.
\begin{figure}
\centering
\includegraphics[width=10 cm]{mapsh0nr.eps}
\caption{ The integration contour $C$
in \eqref{sisisias}. The contour on the cone and its $w$-image
are denoted by the same symbol.}
\label{fig4a}
\end{figure}
Due to the continuity equations, $ {\mathfrak J}_1$ and ${\bar {\mathfrak J}}_1$ are
integrals of motion, i.e., they do
not change under continuous deformations of the integration contour.
Finally, let us consider the variation of the on-shell ShG action
under an infinitesimal change of the apex angle $\frac{\pi}{\alpha}$.
In this case, using the simple electrostatic analogy, one can express $\delta_\alpha {\cal A}^*$
through the torque applied to the boundary $\partial D_\epsilon$
\begin{eqnarray}\label{tyssosiosa}
\delta_\alpha {\cal A}^*&=&\delta\Big(\frac{\pi}{\alpha}\Big)\ \lim_{\epsilon\to 0} \bigg[\,
-\int_{\partial D_\epsilon}\frac{\mbox{d} \ell}{\pi}\ \epsilon^{\mu\nu} x^\mu n^{\sigma}\ T_{\nu\sigma}+
\frac{l}{\pi}\ {\hat \eta}_A+ \frac{l^2}{\pi}\ \log(\epsilon)\, \bigg]\ ,
\end{eqnarray}
where $x^{1}=\Re e(w-w_A)$ and $x^2=\Im m (w-w_A)$ are real coordinates
on $D_\epsilon$, $n^\sigma$ is a unit external normal to the boundary $\partial D_\epsilon$
and
$ T_{\mu\nu}= -\frac{1}{4}\ \partial_\mu{\hat \eta}\partial_\nu{\hat \eta}+
\delta_{\mu\nu}\ \big[\, \frac{1}{8}\
\big(\partial_\sigma{\hat\eta})^2+2\, \sinh^2({\hat \eta})\, \big]$.
The integration contour $\partial D_\epsilon$ contains
two components $\partial D^{+}_\epsilon$ and $\partial D^{-}_\epsilon$,
related by reflection on the axis $x^{2}=0$.
Since each component contributes equally, we replace the integral in \eqref{tyssosiosa}
by $2\int_{\partial D^{+}_\epsilon}$ and then evaluate it
using the identity
\begin{eqnarray}\label{saosai}
4\ \int_{C}\mbox{d} \ell\ \epsilon^{\mu\nu} x^\mu n^{\sigma} T_{\nu\sigma}
=\int_C \mbox{d} x^\mu\, \partial_\mu \Phi-\int_C\mbox{d} \ell\ \partial_\mu \big( x^\mu \Phi)
\end{eqnarray}
and Eq.\eqref{sosssiasai}.
This yields
\begin{eqnarray}\label{issusossa}
\alpha^2\, \Big(\frac{\partial {\cal A}^*}{\partial \alpha}\Big)_{r,l}
=-{\textstyle \frac{1}{2}}\ \Phi_A
-l\ {\hat \eta}_A\ ,
\end{eqnarray}
where ${\hat\eta}_A$ is given by Eq.\eqref{eaisusy} and $\Phi_A$ stands for
another ``output'' constant determined through the solution of the ShG equation -- the regularized value of
the potential at the apex
\begin{eqnarray}\label{isusy}
\Phi_A=\lim_{|w-w_A|\to 0}\big(\, \Phi(w,{\bar w})+2l^2\, \log|w-w_A|\,\big)\ .
\end{eqnarray}
\subsection{Generalized FSZ relations}
The compatibility of equations \eqref{uyososai},\,\eqref{xososa} and \eqref{issusossa} derived above
implies
\begin{eqnarray}\label{sisissa}
\alpha\, \Big(\frac{\partial {\mathfrak F}}{\partial l}\Big)_{r,\alpha}&=&- r\,\Big(\frac{\partial{\hat \eta}_A}{\partial r}\Big)_{\alpha,l}\\
\alpha^2\, \Big(\frac{\partial {\mathfrak F}}{\partial \alpha}\Big)_{r,l}&=&
{\textstyle \frac{1}{2}}\
r\,\Big(\frac{\partial{\Phi}_A}{\partial r}\Big)_{\alpha,l}
+l\, r\,\Big(\frac{\partial{\hat \eta}_A}{\partial r}\Big)_{\alpha,l}\ ,\nonumber
\end{eqnarray}
where we introduce the notation
\begin{eqnarray}\label{skasisaus}
{\mathfrak F}=
\frac{r}{2\pi}\ \sin\Big({ \frac{\pi}{2\alpha}}\Big)\ \big(\, {\mathfrak J}_1+ {\bar {\mathfrak J}}_1\,\big)\ .
\end{eqnarray}
According to Ref.\cite{Lukyanov:2010rn} this constant
is related to the sine-Gordon $k$-vacuum energy $E_k$
\begin{eqnarray}\label{sksjsasayusa}
{\mathfrak F}=\frac{R}{\pi}\ (\,E_k-e_\infty\, R\,)\ ,
\end{eqnarray}
provided $l=2|k|-\frac{1}{2}$, $\alpha=\xi^{-1}$, and
$e_\infty$ stands for the specific energy of the system
with the infinitely large space size\ \cite{Destri:1990ps}:
\begin{eqnarray}\label{vsfd}
e_{\infty}= \lim_{R\to\infty}\frac{E_{k}}{R}= -\frac{ M^2}{4}\ \tan\Big(\frac{\pi\xi}{2}\Big)\ .
\end{eqnarray}
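A trivial numerical check of \eqref{vsfd} (an illustration, not from the text): the specific bulk energy vanishes at $\xi=2$, since $\tan(\pi)=0$:

```python
# Check of Eq. (vsfd): e_inf = -(M^2/4) * tan(pi*xi/2).
import math

def e_inf(xi, M=1.0):
    return -0.25 * M**2 * math.tan(math.pi * xi / 2.0)

print(e_inf(0.5))   # -(1/4) tan(pi/4) = -1/4
print(e_inf(2.0))   # tan(pi) = 0: the bulk energy vanishes at xi = 2
```

Note also that the formula is singular at $\xi=1$, the free-fermion point.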
Thus, for $0<k<\frac{1}{2}$, relations \eqref{sisissa} are recast into the form
\begin{eqnarray}\label{ososaasi}
&& \frac{R}{2\pi\xi}\ \Big(\frac{\partial E_k}{\partial k}\Big)_{r,\xi}=-
r\,\Big(\frac{\partial{\hat \eta}_A}{\partial r}\Big)_{\alpha,l}\\
&&\frac{R}{\pi}\ \Big(\frac{\partial E_k}{\partial \xi}\Big)_{r,k}=-
\frac{r^2}{8\cos^2(\frac{\pi}{2\alpha})}- {\textstyle \frac{1}{2}}\
r\,\Big(\frac{\partial{\Phi }_A}{\partial r}\Big)_{\alpha,l} -l\ r\,\Big(\frac{\partial{\hat \eta}_A}{\partial r}\Big)_{\alpha,l}
\ .\nonumber
\end{eqnarray}
Formulas \eqref{ososaasi} generalize the FSZ
relations \eqref{sssiasai} and \eqref{siasai} to arbitrary values
of the sine-Gordon coupling constant and the
quasi-momentum. Indeed,
for $\xi=2$
the apex angle of the cone in Fig.\ref{fig1a}a becomes $2\pi$, whereas
$k=\frac{1}{4}$ corresponds to $l=0$, i.e., the solution of
the (M)ShG equation remains finite at the tip $A$.
In this special case, ${\hat \eta}(w,{\bar w})$ is expressed in terms of the
Painlev${\acute {\rm e}}$ III transcendent
\eqref{soisiasai},\, \eqref{sosias}:
\begin{eqnarray}\label{sososasai}
{\hat \eta}(w,{\bar w})=U\big(\, 4\, |w-w_{B}|\, \big)\ .
\end{eqnarray}
Since $|w_A-w_B|=r/4$ (see Fig.\ref{fig1a}b), the value
${\hat \eta}$
at the apex is given by
\begin{eqnarray}\label{soisai}
{\hat {\eta}}_A= U(r)\ ,
\end{eqnarray}
whereas, as follows from the general relations $\partial_w\partial_{\bar w}\Phi=4\ \sinh^2({\hat \eta})$ and \eqref{sosssiasai},
\begin{eqnarray}\label{uossaisa}
r\, \frac{\mbox{d} \Phi_A}{\mbox{d} r}=- \int_r^\infty\mbox{d} t\ t\ \sinh^2U(t)\ .
\end{eqnarray}
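The relation $|w_A-w_B|=r/4$ used above can be checked numerically for any $\alpha$: along the ray from the apex to the zero of $p(z)$ one has $|\mbox{d} w|=\sqrt{s^{2\alpha}-z^{2\alpha}}\,\mbox{d} z$, and with $s$ fixed by \eqref{aoisasos} the total length comes out $r/4$. A sketch (SciPy assumed; the straight-line integration path along the ray is an assumption):

```python
# Numerical check that |w_A - w_B| = r/4 for the map (uyslaskasa),
# with s related to r and xi by Eq. (aoisasos).
from math import gamma, pi, sqrt
from scipy.integrate import quad

def apex_to_zero_distance(xi, r):
    a = 1.0 / xi                                           # alpha = 1/xi
    r_xi = 2.0 * sqrt(pi) * gamma(xi / 2.0) / gamma(1.5 + xi / 2.0)
    s = (2.0 * r / (xi * r_xi)) ** (xi / (1.0 + xi))       # Eq. (aoisasos)
    # |dw| = sqrt(s^(2a) - z^(2a)) dz along the ray z in (0, s)
    dist, _ = quad(lambda z: sqrt(max(s**(2 * a) - z**(2 * a), 0.0)), 0.0, s)
    return dist

for xi in (0.5, 1.0, 2.0):
    print(xi, apex_to_zero_distance(xi, r=1.0))   # each close to 1/4
```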
\subsection{Normalized on-shell action }
Although the on-shell action ${\cal A }^*$ disappears from the generalized FSZ relations, it is the main player
in the derivation of \eqref{ososaasi}.
Let us discuss some of its properties.
The R.H.S. of \eqref{sksjsasayusa} decays exponentially as $r\to\infty$ (see e.g. \cite{Destri:1994bv}).
This enables us to represent the on-shell action in the form
\begin{eqnarray}\label{ppsossais}
{\cal A}^*={\cal A}^*_{\infty}+\int_r^\infty\frac{\mbox{d} r}{r}\ {\mathfrak F}\ ,
\end{eqnarray}
where the integration constant stands for
$\lim_{r\to\infty}{\cal A}^*$.
The calculations outlined in Appendix\,\ref{AppendixB} yield its explicit form
\begin{eqnarray}\label{ssisaisa}
{\cal A}^*_\infty&=&
\log\big(3^{\frac{1}{12}} 2^{-\frac{2}{9}}\big)
+(3\xi+1)\ \log\big(A_G\,2^{-\frac{1}{9}}\,\big)\\
&+&2\xi k\
\log\Big(\frac{4k}{\mbox{e} }\Big)+
\xi\ \int_{0}^{2k}\mbox{d} x\ \log\left(\,\frac{2^{-2 x}\,\Gamma(1- x)}{\Gamma(1+ x)}\,\right)\ ,
\nonumber\end{eqnarray}
where
$A_G=1.28243\ldots$ is Glaisher's constant and we use the sine-Gordon variables $\xi=\alpha^{-1}$ and $k=(2l+1)/4>0$.
The second term in Eq.\eqref{ppsossais} is of primary interest, thus we introduce the special notation
\begin{eqnarray}\label{uayossiasa}
{\mathfrak Y}=\int_R^\infty\frac{\mbox{d} R}{\pi}\ (\,E_k-e_\infty\, R\,)\ .
\end{eqnarray}
Evidently it is the on-shell ShG action
normalized by
the condition
\begin{eqnarray}\label{jassay}
\lim_{r\to\infty} {\mathfrak Y}=0\ .
\end{eqnarray}
Then Eqs.\eqref{uyososai},\,\eqref{issusossa} are replaced by
\begin{eqnarray}\label{sisisaas}
\Big(\frac{\partial {\mathfrak Y}}{\partial k}\Big)_{r,\xi}
&=& 2\xi\ {\hat \eta}_A-
\Big(\frac{\partial {\cal A}_\infty^*}{\partial k}\Big)_\xi \\
\Big(\frac{\partial {\mathfrak Y}}{\partial \xi}\Big)_{r,k}
&=&{\textstyle \frac{1}{2}}\ \Phi_A+l\, {\hat \eta}_A-\Big(\frac{\partial {\cal A}_\infty^*}{\partial \xi}\Big)_k\ , \nonumber
\end{eqnarray}
where we still assume that $0<k<\frac{1}{2}$.
Using the relation (see the conformal perturbation theory expansion \eqref{isaiaissus} below)
\begin{eqnarray}\label{uyast}
\lim_{R\to 0} RE_{k}= -{ \frac{\pi}{6}}\ c_{\rm eff}\ ,
\end{eqnarray}
where
\begin{eqnarray}\label{isossiasai}
c_{\rm eff}=1-\frac{ 24\xi k^2}{1+\xi}
\end{eqnarray}
is the ``effective'' central charge,
one can represent ${\mathfrak Y}$ in the form which is appropriate for the study of the $R\to 0$ limit,
\begin{eqnarray}\label{sisaiaissus}
{\mathfrak Y}= {\textstyle \frac{1}{6}}\ c_{\rm eff}\ \log(MR)+
{\mathfrak Y}_0-\frac{(MR)^2}{8\pi}\ \tan\Big(\frac{\pi\xi}{2}\Big)-
\int_0^R\frac{\mbox{d} R}{\pi}\,
\left(\, E_{k}+ { \frac{\pi c_{\rm eff}}{6 R}}\, \right)\ .
\end{eqnarray}
Here $ {\mathfrak Y}_0$ is some $R$-independent constant.
To calculate this constant explicitly
one should write
${\mathfrak Y}$ as ${\cal A}^*-{\cal A}^*_{\infty}$,
express the functional \eqref{ssiisa} in terms of the original variables of
the MShG equation\ \eqref{shgz}, and then analyze the small-$r$ limit.
The straightforward calculations yield
\begin{eqnarray}\label{ssisisa}
{\mathfrak Y}_0 &=&{\textstyle \frac{1}{12}}\ \log\big(\, 4\,
\xi^\xi\, (1+\xi)^{-1-\xi}\, \big)-
{\textstyle \frac{1}{6}}\ c_{\rm eff}\ \log(r_\xi)
\\
&-&
\int_0^\infty\frac{\mbox{d} x}{x}\
\left(\, \frac{\sinh( x) \cosh(4\xi k x)}{ 2x\sinh(\xi x)\sinh\big(x(1+\xi)\big)
}-\frac{1}{2\xi(1+\xi)\,x^2}+\frac{c_{\rm eff}}{6}\ \mbox{e}^{-2 x}\, \right)\, ,\nonumber
\end{eqnarray}
where $r_\xi$ is given by
Eq.\eqref{tsrars}.
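Spot checks of the effective central charge \eqref{isossiasai} (direct consequences of the formula, coded as an illustration): at the supersymmetric point $\xi=2$, $k=\pm\frac{1}{4}$, one finds $c_{\rm eff}=0$, in accord with the vanishing ground-state energy in these sectors, while $k=0$ gives the free-boson value $c_{\rm eff}=1$:

```python
# Spot checks of Eq. (isossiasai): c_eff = 1 - 24*xi*k^2/(1+xi).
def c_eff(xi, k):
    return 1.0 - 24.0 * xi * k**2 / (1.0 + xi)

print(c_eff(2.0, 0.25))   # 0 at the supersymmetric point xi = 2, k = 1/4
print(c_eff(1.0, 0.0))    # 1, the k = 0 free-boson value
```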
\section{\label{Sect3} YY-function in the sine-Gordon model}
In this section we identify $ {\mathfrak Y}$\ \eqref{uayossiasa} with the YY-function and
briefly review the
approach to numerical calculation
of its partial derivatives.
\subsection{YY-function for the inhomogeneous 6-vertex model}
The sine-Gordon model admits an integrable lattice regularization based on
the conventional $R$-matrix of the six-vertex model (see Fig.\,\ref{fig5}).
\begin{figure}
\centering
\includegraphics[width=10 cm]{trace2.eps}
\caption{ Partition function $Z_N={\rm Tr}\big[\, q^{k\sum_j \sigma_j^z}\, {\boldsymbol \tau}^N\,\big]$
of the inhomogeneous 6-vertex model on an infinite cylinder.
Here ${\boldsymbol \tau}$ is the monodromy matrix along the infinite direction and
$q=\mbox{e}^{\frac{{\rm i}\pi\xi}{1+\xi}}$. $R_{ij}^{kl}(\lambda)$ are conventional Boltzmann weights
for the $6$-vertex model satisfying the Yang-Baxter equation. }
\label{fig5}
\end{figure}
Here I shall recall some basic facts concerning the lattice BA equations which are relevant for the purposes of this work.
All the details can be found in Refs.\cite{Destri:1994bv, Destri:1997yz}.
The energy-momentum spectrum in the lattice theory can
be calculated by means of the
algebraic BA, or Quantum Inverse Scattering Method: a BA state is identified by an unordered finite set
of distinct, generally complex numbers $\theta_j$ which satisfy the BA equations
\begin{eqnarray}\label{iosisisa}
\bigg[\,\frac{s(\theta_j+\Theta+\frac{{\rm i}\pi}{2})
\,s(\theta_j-\Theta+\frac{{\rm i}\pi}{2})}
{s(\theta_j+\Theta-\frac{{\rm i}\pi}{2})
\, s(\theta_j-\Theta-\frac{{\rm i}\pi}{2})}\,\bigg]^N
=-\mbox{e}^{\frac{4{\rm i}\pi \xi k }{1+\xi}}\
\prod_{n}\frac{s(\theta_j-\theta_n+{\rm i}\pi)}
{s(\theta_j-\theta_n-{\rm i}\pi)}\ ,
\end{eqnarray}
where
\begin{eqnarray}\label{sossaui}
s(x)=\sinh\Big(\frac{x}{1+\xi}\Big)\ ,
\end{eqnarray}
and $ N$ stands for one-half of the number of sites along the compactified direction in Fig.\,\ref{fig5}.
The parameter $\Theta$ controls the world-sheet inhomogeneity of the Boltzmann weights,
whereas $k$ in \eqref{iosisisa}
is proportional to the twist angle for the quasiperiodic
boundary conditions imposed
along the compactified direction. Then
the energy $E^{(N)}$ and momentum $P^{(N)}$ of the BA state
can be extracted from the formulas
\begin{eqnarray}\label{saissais}
\exp\Big(-{\rm i}\ \frac{ E^{(N)}\pm P^{(N)} }{2N}\,\Big)=
\prod_{j}\frac{s(\frac{{\rm i}\pi}{2}+\Theta\pm\theta_j)}
{s(\frac{{\rm i}\pi}{2}-\Theta\mp\theta_j)}\ .
\end{eqnarray}
For the vacuum state all the BA roots are real and
their number coincides with $N$, which is assumed to be even below.
Following Yang and Yang \cite{Yang}, the BA equations in this case
can be brought to the form of the extremum condition
\begin{eqnarray}\label{zssisaisa}
\frac{\partial Y^{(N)} }{\partial \theta_j}=0\ \ \ \ \
\ \ \ \ \
\big(\, j=-{\textstyle\frac{N}{2}},\, -{\textstyle \frac{N}{2}}+1,\ldots
{\textstyle \frac{N}{2}}-2,\, {\textstyle \frac{N}{2}}-1\,\big)
\end{eqnarray}
for the YY-functional defined by the formulas:
\begin{eqnarray}\label{trsossia}
{ Y}^{(N)}=2\ \sum_{j} \Big(\, V(\theta_j)-\frac{2\xi k\, \theta_j }{1+\xi}\,\Big)
+ \sum_{j,n}\ U(\theta_j-\theta_n)
\end{eqnarray}
with
\begin{eqnarray}\label{sosoisai}
V(\theta)=- \frac{N}{\pi}\ \Xint-_{-\infty}^\infty
\frac{\mbox{d}\omega}{\omega^2}\
\frac{
\sinh(\frac{\pi\omega \xi}{2})\, \cos(\omega\Theta)}
{ \sinh(\frac{\pi\omega(1+\xi)}{2})} \ \mbox{e}^{{\rm i}\omega\theta}
\end{eqnarray}
and
\begin{eqnarray}\label{saosisai}
U(\theta)=
\frac{1}{\pi}\ \Xint-_{-\infty}^\infty
\frac{\mbox{d}\omega}{\omega^2}\ \
\frac{
\sinh(\frac{\pi\omega\xi}{2}) \cosh(\frac{\pi\omega}{2})}
{ \sinh(\frac{\pi\omega(1+\xi)}{2})}\ \mbox{e}^{{\rm i}\omega\theta}\ .
\end{eqnarray}
Here and below the symbol $\Xint-$ stands for a principal value integral
defined as the half-sum
$\frac{1}{2}\, \big(\,
\int_{-\infty-{\rm i} 0}^{+\infty-{\rm i} 0}+
\int_{-\infty+{\rm i} 0}^{+\infty+{\rm i} 0}\, \big)$.
Eqs.\eqref{zssisaisa} can be interpreted as an equilibrium condition for a system of $N$
one-dimensional ``electrons'' in the presence of
confining and linear external potentials.
For large separations, $\theta\gg 1$,
\begin{eqnarray}\label{isussoosia}
U(\theta)= -\frac{\xi}{1+\xi}\ |\theta|+O\big(\,\mbox{e}^{-\frac{2|\theta|}{1+\xi}}\,\big)\ ,
\end{eqnarray}
therefore the 2-body potential is essentially a 1D repulsive Coulomb potential
slightly modified at short distances.
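As a numerical illustration (a sketch; the parameter values and helper names are ours), $U(\theta)$ from Eq.\eqref{saosisai} can be evaluated by splitting off the pure $1/\omega^2$ part of the even integrand, which under the half-sum prescription yields the exact Coulomb term $-\frac{\xi}{1+\xi}\,|\theta|$, while the remainder is a regular integral:

```python
import numpy as np
from scipy.integrate import quad

xi = 2.0 / 3.0   # anisotropy parameter, 0 < xi < 1

def g(w):
    # even integrand factor sinh(pi w xi/2) cosh(pi w/2)/sinh(pi w (1+xi)/2);
    # it tends to xi/(1+xi) at w=0 and to 1/2 as w -> infinity
    w = max(abs(w), 1e-6)            # avoid 0/0 and cancellation noise
    if w > 50.0:
        return 0.5                   # asymptote; corrections are O(e^{-pi w xi})
    return np.sinh(np.pi*w*xi/2)*np.cosh(np.pi*w/2)/np.sinh(np.pi*w*(1 + xi)/2)

def U(theta):
    g0 = xi / (1.0 + xi)
    # the g0/w^2 piece integrates (half-sum prescription) to -g0*|theta| exactly;
    # what is left is a regular, exponentially localized integrand
    f = lambda w: (g(w) - g0)*np.cos(w*theta)/max(w, 1e-6)**2
    val, _ = quad(f, 0.0, 50.0, limit=300)
    return (2.0/np.pi)*val - g0*abs(theta)

# large-separation 1D Coulomb behavior: U(theta) ~ -xi/(1+xi)*|theta|
print(U(6.0), -xi/(1 + xi)*6.0)
```

For $\theta=6$ and $\xi=\frac{2}{3}$ the exponentially small correction is of order $\mbox{e}^{-2\theta/(1+\xi)}\approx 10^{-3}$, so the two printed numbers nearly coincide.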
Since
\begin{eqnarray}\label{soosia}
V(\theta)= \frac{N\xi}{2(1+\xi)}\ |\,\theta-\Theta\,|+
\frac{N\xi}{2(1+\xi)}\ |\,\theta+\Theta\,|+O\big(\,\mbox{e}^{-\frac{2|\theta\pm \Theta|}{1+\xi}}\,\big)\ ,
\end{eqnarray}
$V(\theta)$
can be interpreted as a potential produced by
two heavy positive charges $+\frac{N\xi}{2(1+\xi)}$
placed at $\pm \Theta$. We shall always assume
that the external linear potential in \eqref{trsossia} is sufficiently weak and the inequality
\begin{eqnarray}\label{ysoisisai}
-{\textstyle \frac{1}{2}}<k< {\textstyle \frac{1}{2}}
\end{eqnarray}
is fulfilled. For
\begin{eqnarray}\label{saosisaui}
0<\xi<1\ ,
\end{eqnarray}
the Hessian of the system \eqref{zssisaisa},
$\frac{\partial^2 Y^{(N)}}{\partial \theta_j\partial\theta_n}$, is positive definite; therefore
we will focus primarily on this case.
As physical intuition suggests,
the YY-functional \eqref{trsossia} has a stable minimum at
some real distribution of the BA roots
\begin{eqnarray}\label{sosaosao}
\theta^{(N)}_{-\frac{N}{2}}<\theta^{(N)}_{-\frac{N}{2}+1}<\ldots <\theta^{(N)}_{\frac{N}{2}-2}<\theta^{(N)}_{\frac{N}{2}-1}\ .
\end{eqnarray}
The main subject of our interest is the YY-function, i.e., the critical value of ${ Y}^{(N)}$ calculated at this minimum.
With some abuse of notation, we will
denote it by the same symbol as the YY-functional,
\begin{eqnarray}\label{sossaisa}
{ Y}^{(N)}={ Y}^{(N)}(\Theta,\,\xi,\, k)\ .
\end{eqnarray}
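The equilibrium conditions \eqref{zssisaisa} are easy to solve numerically for small $N$. In the sketch below (our own implementation; the solver and grid choices are ours) the forces are computed from the odd-integrand representations $U'(\theta)=-\frac{2}{\pi}\int_0^\infty\mbox{d}\omega\,\big(g_U(\omega)-\frac{1}{2}\big)\frac{\sin(\omega\theta)}{\omega}-\frac{1}{2}\,{\rm sgn}(\theta)$ and $V'(\theta)=\frac{2N}{\pi}\int_0^\infty\mbox{d}\omega\, g_U(\omega)\cos(\omega\Theta)\,\frac{\sin(\omega\theta)}{\omega}$, which follow from \eqref{sosoisai}, \eqref{saosisai}; the $\frac{1}{2}$-subtraction resums the slowly decaying tail of the even factor $g_U$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

xi, Theta, N, k = 2.0/3.0, 1.0, 4, 0.0

def gU(w):
    # even factor from the integrand of U; tends to xi/(1+xi) at w=0
    w = max(w, 1e-8)
    return np.sinh(np.pi*w*xi/2)*np.cosh(np.pi*w/2)/np.sinh(np.pi*w*(1 + xi)/2)

def Uprime(t):
    # derivative of the 2-body potential; the 1/2-tail gives -sgn(t)/2
    if t == 0.0:
        return 0.0
    f = lambda w: (gU(w) - 0.5)*np.sin(w*t)/max(w, 1e-8)
    val, _ = quad(f, 0.0, 40.0, limit=200)
    return -(2.0/np.pi)*val - 0.5*np.sign(t)

def Vprime(t):
    # derivative of the confining ("two heavy charges") potential
    def f(w):
        w = max(w, 1e-8)
        g = np.sinh(np.pi*w*xi/2)*np.cos(w*Theta)/np.sinh(np.pi*w*(1 + xi)/2)
        return g*np.sin(w*t)/w
    val, _ = quad(f, 0.0, 40.0, limit=200)
    return (2.0*N/np.pi)*val

def grad(th):
    # dY/dtheta_j: the equilibrium (Bethe Ansatz) conditions
    return np.array([2*Vprime(th[j]) - 4*xi*k/(1 + xi)
                     + 2*sum(Uprime(th[j] - th[n]) for n in range(N) if n != j)
                     for j in range(N)])

roots = fsolve(grad, [-1.6, -0.4, 0.4, 1.6], xtol=1e-12)
print(np.sort(roots))   # for k=0 the equilibrium configuration is symmetric about 0
```

Since the Hessian is positive definite for $0<\xi<1$, the equilibrium is unique and the root finder converges from any reasonable ordered initial guess.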
Using the YY-function,
the ground state energy \eqref{saissais} can be written as
\begin{eqnarray}\label{iasosaisa}
{ E}^{(N)}=\Big(\frac{\partial { Y}^{(N)}}{\partial \Theta}\Big)_{N,\xi,k}\ ,
\end{eqnarray}
whereas
the momentum associated with the ground state is of course zero.
At large $N$ and finite $\Theta$,
the distribution of the BA roots
\begin{eqnarray}\label{bsosiosa}
\rho^{(N)}(\theta_{n+\frac{1}{2}}) =\frac{1}{N(\theta_{n+1}-\theta_{n})}\ \ \ \ \ \ \ \ \
\ \ \ \big(\, \theta_{n+\frac{1}{2}}
\equiv {\textstyle \frac{1}{2}}\, (\,
\theta_{n+1}+\theta_{n}\,)\, \big)
\end{eqnarray}
is well approximated by the continuous density (see Fig.\ref{fig00})
\begin{eqnarray}\label{aiosasau}
\rho(\theta)= \frac{1}{2\pi}\ \Big[\, \frac{1}{\cosh(\theta-\Theta)}+ \frac{1}{\cosh(\theta+\Theta)}\, \Big]\ .
\end{eqnarray}
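Since each root carries weight $1/N$ in \eqref{bsosiosa}, the density \eqref{aiosasau} must integrate to unity; a one-line numerical check (our own script):

```python
import numpy as np
from scipy.integrate import quad

Theta = 2.0
# continuous density of BA roots: two sech lobes centered at +Theta and -Theta
rho = lambda t: (1.0/np.cosh(t - Theta) + 1.0/np.cosh(t + Theta))/(2.0*np.pi)

total, _ = quad(rho, -np.inf, np.inf)
print(total)   # -> 1.0, since each sech term integrates to pi
```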
\begin{figure}
\centering
\includegraphics[width=9 cm]{rho.eps}
\caption{$\rho^{(N)}$ from
Eq.\eqref{bsosiosa} for $N=400$ and $\Theta=2,\, k=0,\, \xi=\frac{2}{3}.$ The solid line represents the
continuous density \eqref{aiosasau}. }
\label{fig00}
\end{figure}
\noindent
Therefore the following limit exists
\begin{eqnarray}\label{saosoisa}
\lim_{N\to\infty\atop\Theta-{\rm fixed}}N^{-2}\, { Y}^{(N)}(\Theta)=y_{\infty}(\Theta)\ ,
\end{eqnarray}
and it is a simple exercise to show that
\begin{eqnarray}\label{soissisai}
y_{\infty}(\Theta)=
-\frac{1}{\pi}\ \Xint-_{-\infty}^\infty\frac{\mbox{d} \omega}{\omega^2}\
\frac{\sinh(\frac{\pi\omega\xi}{2})\ \cos^2(\omega\Theta)}{
\sinh(\frac{\pi\omega(1+\xi)}{2}) \cosh(\frac{\pi\omega}{2})}\ .
\end{eqnarray}
For large $\Theta$, Eq.\eqref{soissisai} yields
\begin{eqnarray}\label{soiass}
y_{\infty}(\Theta)= \frac{\xi\, |\Theta|}{1+\xi}+\frac{y_\infty(0)}{2}+
\frac{2}{\pi}\ \mbox{e}^{-2|\Theta|}\ \tan\Big(\frac{\pi\xi}{2}\,\Big)+
O\Big( \mbox{e}^{-\frac{4|\Theta|}{ 1+\xi}}\Big)\ .
\end{eqnarray}
The constant term here coincides with half of the value of \eqref{soissisai} taken at $\Theta=0$.
This is not an accidental relation.
Indeed, as $|\Theta|\to +\infty $, all the BA roots split into two clusters centered at
$\pm \Theta$.
The systems of BA equations for each cluster are completely separated in this limit
and reduce to the original form \eqref{iosisisa} with
$\Theta=0$ and $N$ replaced by $N/2$.
Hence for any even $N$
\begin{eqnarray}\label{sikssa}
{ Y}^{(N)}(\Theta)=\frac{\xi N^2}{1+\xi}\ |\Theta|+2\, Y^{({N}/{2})}(0)+o(1)\ \ \ \ \ \ \ {\rm as}\ \ \ \ \Theta\to\pm \infty\ ,
\end{eqnarray}
where the first term describes monopole-monopole interaction of the ``electron'' clusters
while the second one represents their intrinsic potential energy.\footnote{In view of the mechanical analogy,
it would be natural to include an additional term
$-\frac{\xi N^2}{1+\xi}\ |\Theta|$ into the R.H.S. of definition\ \eqref{trsossia}.
This term represents the ion-ion potential energy and
does not affect the equilibrium conditions\ \eqref{zssisaisa}.}
Combining \eqref{iasosaisa} with \eqref{sikssa} one also has
\begin{eqnarray}\label{siosisausy}
E^{(N)}(\Theta)- \frac{\xi\, N^2 }{1+\xi}=o(1)\ \ \ \ \ \ \ {\rm as}\ \ \ \ \Theta\to+ \infty\ .
\end{eqnarray}
It should be emphasized that asymptotic formulas \eqref{sikssa} and \eqref{siosisausy}
do not assume the large-$N$ limit and apply to any finite $N$.
\subsection{Scaling limit}
The sine-Gordon QFT \eqref{sg} manifests itself in
the
scaling limit when both $N,\, \Theta\to+\infty$ while
the scaling parameter
\begin{eqnarray}\label{sissa}
r=4\, N\ \mbox{e}^{-\Theta}
\end{eqnarray}
is kept fixed (RG-invariant).
In this case the L.H.S. of \eqref{siosisausy} does not vanish, but
has a simple relation to the $k$-vacuum energy \cite{Destri:1994bv}:
\begin{eqnarray}\label{gsossia}
\lim_{N,\Theta\to+\infty\atop
r-{\rm fixed}}\ \left(\, { E}^{(N)}-\frac{\xi\, N^2}{1+\xi}\ \,\right)=
\frac{RE_k }{2\pi}+ \frac{ c_{\rm eff}}{12}\ ,
\end{eqnarray}
where the effective central charge $c_{\rm eff}$ is given by Eq.\eqref{isossiasai}.
In order to study the scaling behavior of the YY-function, it makes sense
to consider only the part of the ``electron-ion'' potential energy corresponding to the mutual interaction of
the clusters,
\begin{eqnarray}\label{sossisa}
{ Y}_{\rm int}^{(N)}(\Theta)= { Y}^{(N)}(\Theta)-\frac{\xi N^2}{1+\xi}\ |\Theta|-2\, Y^{(N/2)}(0)\ ,
\end{eqnarray}
which vanishes as $\Theta\to+\infty$ for any fixed $N$.
Taking into account Eqs.\eqref{iasosaisa},\,\eqref{gsossia}, we get
\begin{eqnarray}\label{issai}
\lim_{N, \Theta\to+\infty\atop
r-{\rm fixed}} { Y}_{\rm int}^{(N)}(\Theta)=-\int_0^r\frac{\mbox{d} r}{\pi r}\
\left(\, RE_k+ { \frac{\pi c_{\rm eff}}{6 }}\, \right)\ ,
\end{eqnarray}
or, equivalently, using \eqref{sisaiaissus},\footnote{
Notice that $Y^{(N)}(0)$ can be interpreted as the YY-function for the spin-$\frac{1}{2}$ Heisenberg chain.
As $N\to\infty$ $$Y^{(N)}(0)= y_{\infty}(0)\ N^2+
{\textstyle \frac{1}{6}}\ c_{\rm eff}\ \log\big({\textstyle \frac{\pi N }{4}}\big)+
{\mathfrak Y}_0+o(1)\, ,$$
where ${\mathfrak Y}_0$ and $y_\infty(0)$ are given by Eqs.\eqref{ssisisa} and \eqref{soissisai}, respectively.}
\begin{eqnarray}\label{issaiioi}
\lim_{N, \Theta\to+\infty\atop
r-{\rm fixed}} { Y}_{\rm int}^{(N)}(\Theta)=
{\mathfrak Y}- {\textstyle \frac{1}{6}}\ c_{\rm eff}\ \log(r)+\frac{r^2}{8\pi}\ \tan\Big(\frac{\pi\xi}{2}\Big)-{\mathfrak Y}_0\ .
\end{eqnarray}
The last formula allows one to identify the
YY-function in the sine-Gordon QFT with the normalized on-shell action ${\mathfrak Y}$.
The following comment is in order here.
Our analysis is based on the existence of solutions \eqref{sosaosao}
of the vacuum BA equations. It
can be directly applied to
the case $0<\xi<1$ only. At $\xi=1$, the sine-Gordon model is equivalent to the theory of free massive Dirac fermions
and a closed form of the YY-function can be easily derived from definition\ \eqref{uayossiasa}:
\begin{eqnarray}\label{sssiossisa}
{\mathfrak Y}=
-r\ \int_{-\infty}^\infty\frac{\mbox{d}\tau}{ 2\pi^2}\ \tau\, \sinh(\tau)\, \log\Big[\,\big(1+\mbox{e}^{-r\cosh(\tau)+2\pi {\rm i} k})
(1+\mbox{e}^{-r\cosh(\tau)-2\pi {\rm i} k}\big)\, \Big]\ .
\end{eqnarray}
It is expected that for $\xi>1$ the YY-function is uniquely defined
through analytic continuation from the segment $\xi\in (0,1)$.
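A direct numerical evaluation of \eqref{sssiossisa} is a useful sanity check (a sketch; the test values are ours). Expanding the logarithm to first order in $\mbox{e}^{-r\cosh\tau}$ and using $\int_0^\infty \mbox{d}\tau\,\tau\sinh(\tau)\,\mbox{e}^{-r\cosh\tau}=K_0(r)/r$ gives the large-$r$ behavior ${\mathfrak Y}\approx -\frac{2\cos(2\pi k)}{\pi^2}\,K_0(r)$, against which the integral can be compared:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0   # modified Bessel function K_0

def Y_ff(r, k):
    # free-fermion (xi = 1) YY-function; the two logarithms combine into
    # log(1 + 2 cos(2 pi k) x + x^2) with x = e^{-r cosh(tau)}, which is real
    def f(tau):
        x = np.exp(-r*np.cosh(tau))
        return tau*np.sinh(tau)*np.log(1.0 + 2.0*np.cos(2*np.pi*k)*x + x*x)
    val, _ = quad(f, -15.0, 15.0, limit=300)
    return -r*val/(2.0*np.pi**2)

r, k = 6.0, 0.1
approx = -(2.0*np.cos(2*np.pi*k)/np.pi**2)*k0(r)   # leading large-r behavior
print(Y_ff(r, k), approx)
```

At $r=6$ the relative error of the one-Bessel approximation is of order $\mbox{e}^{-r}$, well below one percent.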
\subsection{BA roots at the large-$N$ limit}
Properties of the BA roots in the scaling limit were discussed in Ref.\,\cite{Destri:1997yz}.
The roots accumulate at $\theta=\pm \Theta$. However,
in the central region (see Fig.\ref{fig00}) and in the tails of the distribution
the roots remain isolated, and
their behavior can be described as follows
(see Tables\,\ref{F-Cyns} and \ref{Fa-Cyns} for illustration):
\begin{table}
\begin{center}
\begin{tabular}{| c || l | l | l | l | l |}
\hline \rule{0mm}{3.6mm}
$n$ & $N=100$ & $N=200$ & $N=400$ & $N=800$ & $N=1600$ \\
\hline
$0$ & 1.04348 & 1.04342 &1.04340 &1.04340 & 1.04340 \\
$1$ & 3.01807 & 3.01640 &3.01598 &3.01588 & 3.01585 \\
$2$ & 5.01975 & 5.01202 &5.01009 &5.00960 & 5.00948 \\
$3$ & 7.03507 & 7.01378 &7.00848 &7.00716 & 7.00683 \\
$4$ & 9.06565 & 9.02023 &9.00896 &9.00615 & 9.00545 \\
$5$ & 11.1150 & 11.0317 &11.0111 &11.0059 & 11.0046 \\
$6$ & 13.1873 & 13.0489 &13.0149 &13.0064 & 13.0043 \\
$7$ & 15.2870 & 15.0729 &15.0204 &15.0074 & 15.0042 \\
$8$ & 17.4186 & 17.1044 &17.0280 &17.0090 & 17.0043 \\
$9$ & 19.5874 & 19.1447 &19.0378 &19.0112 & 19.0046 \\
\hline
\end{tabular}
\end{center}
\caption{ $\frac{r}{2\pi}\ \exp\big(\theta^{(N)}_n\big)$ for $r=1$, $\xi=\frac{2}{3}$ and $k=0$}
\label{F-Cyns}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{| c || l | l | l | l | l |}
\hline \rule{0mm}{3.6mm}
$n$ & $N=100$ & $N=200$ & $N=400$ & $N=800$ & $N=1600$ \\
\hline
$0$ & 1.02837 & 1.02831 &1.02830 &1.02830 & 1.028299 \\
$1$ & 3.01276 & 3.01111 &3.01069 &3.01059 & 3.01056 \\
$2$ & 5.01652 & 5.00881 &5.00689 &5.00641 & 5.00628 \\
$3$ & 7.03273 & 7.01148 &7.00619 &7.00487 & 7.00454 \\
$4$ & 9.06381 & 9.01843 &9.00718 &9.00436 & 9.00366 \\
$5$ & 11.1135 & 11.0302 &11.0096 &11.0045 & 11.0032 \\
$6$ & 13.1860 & 13.0477 &13.0136 &13.0051 & 13.0030 \\
$7$ & 15.2858 & 15.0718 &15.0194 &15.0063 & 15.0031 \\
$8$ & 17.4176 & 17.1035 &17.0271 &17.0081 & 17.0033 \\
$9$ & 19.5864 & 19.1438 &19.0369 &19.0104 & 19.0038 \\
\hline
\end{tabular}
\end{center}
\caption{ $\frac{4 N}{\pi}\ \exp\big(\theta^{(N)}_{n-\frac{N}{2}}+\Theta\big)$ for $r=1$, $\xi=\frac{2}{3}$ and $k=0$}
\label{Fa-Cyns}
\end{table}
\begin{itemize}
\item
There exist limits
\begin{eqnarray}\label{xsosasai}
\theta_j=\lim_{N, |\Theta|\to\infty\atop
r,\, j-{\rm fixed} }\theta_j^{(N)}\ \ \ \ \ \ \ \ (\, j=0,\,\pm1,\,\pm 2,\ldots\, )
\end{eqnarray}
and, for an arbitrary $\Theta\geq 0$,
\begin{eqnarray}\label{sisusossais}
{\tau}^{(+)}_n&=&\lim_{N\to\infty\atop
n-{\rm fixed}} \Big(\, \theta^{(N)}_{n-\frac{N}{2}}+\log(N)+\Theta\, \Big)
\\
{\tau }^{(-)}_n&=& \lim_{N\to\infty\atop
n-{\rm fixed}}\Big(\, \theta^{(N)}_{\frac{N}{2}-1-n} -\log(N)-\Theta\,\Big)
\ \ \ \ \ \ \ \ \ \ \ \ (\, n=0, \,1\ldots\, ) \ .\nonumber
\end{eqnarray}
\item
The limiting values of the roots
possess the following $n\to+\infty$ asymptotic behavior
\begin{eqnarray}\label{saklsisais}
\mbox{e}^{\theta_n}&=&
\frac{2\pi}{r}\ \big(\, 2n+1+
2k\,\big)+
O\big(n^{-1}\big)\\
\mbox{e}^{-\theta_{-n-1}}&=&
\frac{2\pi}{r}\ \big(\, 2n+1-
2k\,\big)+O\big(n^{-1}\big)
\nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{ossuusiosisa}
\exp\big(\pm \tau^{(\pm)}_n\big)=\frac{\pi}{2}\
\big(\, 2n+1\pm 2 k\,\big)+O\big(n^{-1}\big)\ .
\end{eqnarray}
\end{itemize}
\noindent
To probe
the infinite sequences
$\{\theta_j\}_{j=-\infty}^\infty$ and
$\{\tau^{(\pm)}_n\}_{n=0}^\infty$
it is useful to consider
certain generating functions encoding their properties.
Let $\zeta_+(\omega)$ and $\zeta_-(\omega)$ be functions defined as the analytic continuation of
convergent series
\begin{eqnarray}\label{ssisai}
\zeta_+(\omega)&=& \sum_{n=0}^\infty\mbox{e}^{-{\rm i}\omega\theta_n}\ \ \ \ \ \ \ \ \ \ \ \Im m (\omega)<-1\ ,\nonumber\\
\zeta_-(\omega)&=& \sum_{n=0}^\infty\mbox{e}^{-{\rm i}\omega\theta_{-1-n}}\ \ \ \ \ \ \ \,\Im m(\omega)>1\ .
\end{eqnarray}
As follows from the asymptotic formulas \eqref{saklsisais},
$\zeta_+(\omega)\ \big(\,\zeta_-(\omega)\,\big)$ is analytic in the half plane $\Im m(\omega)< 1$ $(\Im m (\omega)>- 1)$ except at
a simple pole at $ \omega=-{\rm i}$ $(\omega={\rm i})$ with the residue $ - \frac{{\rm i} r}{4\pi }$ $ \big( \frac{{\rm i} r}{4\pi }\big)$.
Also note that $ \zeta_\pm(0)=\mp k$.
Therefore
\begin{eqnarray}\label{sososasau}
\zeta(\omega)=\zeta_+(\omega)+\zeta_-(\omega)\ ,
\end{eqnarray}
is an analytic function in the strip $|\Im m(\omega)|<1$ such that
\begin{eqnarray}\label{sususay}
\zeta(0)=0\ .
\end{eqnarray}
In the limit $r\to 0$
\begin{eqnarray}\label{oaasauas}
\zeta(\omega)= \Big(\frac{r}{4}\Big)^{{\rm i}\omega}\ \big(\,\zeta^{\rm(cft)}_k(\omega)+o(1)\,\big)+
\Big(\frac{r}{4}\Big)^{-{\rm i}\omega}\ \big(\, \zeta^{\rm(cft)}_{-k}(-\omega)+o(1)\,\big)\ ,
\end{eqnarray}
where $\zeta^{\rm(cft)}_{\pm k}(\pm \omega)$
are the zeta functions for the sequences $\{\tau^{(\pm )}_n\}_{n=0}^\infty$ \eqref{sisusossais}, i.e.,
\begin{eqnarray}\label{sisasaaus}
\zeta^{\rm(cft)}_{\pm k}(\pm \omega)=\sum_{n=0}^\infty \exp\big( - {\rm i}\omega\tau^{(\pm)}_n\,\big)\ .
\end{eqnarray}
The function\ $\zeta^{\rm(cft)}_{ k}(\omega)$ was introduced (in a different overall normalization) and
studied in Ref.\cite{Bazhanov:1996dr}.
Its exhaustive description
was found later in Ref.\cite{Bazhanov:1998wj} (see also related works \cite{Voros:1992}
and \cite{Dorey:1998pt}), where
it was shown that $\zeta^{\rm(cft)}_{ k}(\omega)$ coincides with the zeta function of
the Schr${\ddot {\rm o}}$dinger operator
\begin{eqnarray}\label{xssiisa}
-\partial_x^2+ x^{2\alpha}+\frac{l(l+1)}{x^2}\ .
\end{eqnarray}
More precisely, for $0< k <\frac{1}{2}$,
the sequences $\big\{{\tau}^{(\pm)}_n\,\big\}_{n=0}^\infty$
are simply related to
the spectral sets $ \big\{{\cal E}^{(\pm)}_n\}_{n=0}^\infty$ of this differential operator:
\begin{eqnarray}\label{saskalska}
\exp\big(\pm {\tau}^{(\pm)}_n\big)=\frac{\xi r_\xi}{8}\ \
\left(\, {\cal E}_n^{(\pm)}\, \right)^{\frac{1+\xi}{2}}\ ,
\end{eqnarray}
with
$\alpha=\xi^{-1}, \, l=2k-\frac{1}{2}$ and
$r_\xi$ given by \eqref{tsrars}.
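At the free-fermion point this correspondence can be checked directly: for $\xi=1$, $k=\frac{1}{4}$ one has $\alpha=1$, $l=0$, and \eqref{xssiisa} becomes the half-line oscillator $-\partial_x^2+x^2$ with a Dirichlet condition at the origin, whose exactly known eigenvalues ${\cal E}_n=4n+3$ are proportional to the combination $2n+1+2k$ appearing in \eqref{ossuusiosisa}, as \eqref{saskalska} requires. A finite-difference check (discretization parameters are ours):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# xi = 1, k = 1/4  =>  alpha = 1, l = 0: half-line oscillator -d^2/dx^2 + x^2
h, M = 0.01, 1400
x = h*np.arange(1, M + 1)        # grid x = h..14; Dirichlet walls at both ends
E = eigh_tridiagonal(2.0/h**2 + x**2, -np.ones(M - 1)/h**2,
                     eigvals_only=True, select='i', select_range=(0, 2))
print(E)    # close to the exact values 4n+3 = 3, 7, 11
```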
With the above properties of the BA roots it is not difficult to analyze
the large-$N$ limit of the relation
\begin{eqnarray}\label{uspssa}
\Big(\frac{\partial Y^{(N)}}{\partial k}\Big)_{N,\Theta,\xi}=-\frac{4\,\xi }{1+\xi}\ \sum_j\theta^{(N)}_j\ ,
\end{eqnarray}
which is derived by differentiating \eqref{trsossia}
with the use of
BA equations \eqref{zssisaisa}. The scaling analog of \eqref{uspssa}
reads
\begin{eqnarray}\label{spssyssta}
\Big(\frac{\partial {\mathfrak Y}}{\partial k}\Big)_{r,\xi}=-\frac{4{\rm i}\,\xi }{1+\xi}\ { \zeta}'(0)\ ,
\end{eqnarray}
where prime stands for the derivative with respect to $\omega$. As
follows from\ \eqref{oaasauas} and $\zeta^{\rm(cft)}_k(0)=-k$,
\begin{eqnarray}\label{siadsayqq}
\Big(\frac{\partial {\mathfrak Y}}{\partial k}\Big)_{r,\xi}=
-\frac{8k\,\xi }{1+\xi}\ \log\Big(\frac{r}{4}\Big)
-\frac{4{\rm i}\,\xi }{1+\xi}\ \left(\, \partial_\omega\zeta^{\rm(cft)}_k(0)+
\partial_\omega\zeta^{\rm(cft)}_{-k}(0)\, \right) +o(1)\ \ \ \ \ {\rm as}\ \ \ r\to 0\ .
\end{eqnarray}
The subleading term of this asymptotic expansion is expressed in terms of the determinant of the differential
operator \eqref{xssiisa} and can be calculated explicitly:
\begin{eqnarray}\label{sossias}
\Big(\frac{\partial{\mathfrak Y}}{\partial p}\Big)_{r,\xi}=
\log\bigg[\, \Big(\frac{r}{r_\xi}\Big)^{-\frac{2p}{\xi(1+\xi)}}\
\xi^{\frac{2p}{\xi}}\
(1+\xi)^{-\frac{2 p}{1+\xi}}\ \
\frac{\Gamma(1+\frac{p}{\xi})\Gamma(1-\frac{p}{1+\xi})}{\Gamma(1-\frac{p}{\xi})
\Gamma(1+\frac{p}{1+\xi})}\, \bigg]+o(1)\ \ {\rm as} \ \ r\to 0\, ,
\end{eqnarray}
where $k$ has been traded for the equivalent parameter $p=2\xi k$.
Of course, Eq.\eqref{sossias} can be alternatively obtained by means of relations \eqref{sisaiaissus},\,\eqref{ssisisa}.
Note that the expression in the brackets $\big[\,\cdots\,\big]$ coincides with the
Liouville reflection amplitude (analytically continued to the domain $-1<\xi<0$)
introduced in Ref.\cite{Zamolodchikov:1995aa}.
\subsection{Calculation of partial derivatives of the YY-function}
For $0<\xi<1$ the so-called $Q$-function can be defined through the convergent product
\begin{eqnarray}\label{saossisa}
Q(\theta)={\mathfrak C}\ \ \mbox{e}^{\frac{2k \theta}{1+\xi}}\ \prod_{n=0}^\infty
4\ \mbox{e}^{\frac{\theta_{-n-1}-\theta_n}{1+\xi}}\
s\big(\theta_n-\theta\big)\, s\big(\theta- \theta_{-n-1}\big)\ ,
\end{eqnarray}
where the abbreviation $s(\theta)$
\eqref{sossaui} is applied.
The $\theta$-independent factor ${\mathfrak C}$ can be chosen at will.
In what follows it is assumed that
\begin{eqnarray}\label{skskssisai}
{\mathfrak C}= 2^{\frac{1}{1+\xi}}\
\prod_{n=0}^{\infty}\ \exp\Big[ \,
{\textstyle \frac{1}{1+\xi}}\ \big(\,
2\, \log\big({\textstyle \frac{r}{2\pi (2n+1)}}\big)
+\theta_n- \theta_{-n-1}\,\big)
\, \Big]\ .
\end{eqnarray}
In the scaling limit, BA equations \eqref{zssisaisa}
boil down to
\begin{eqnarray}\label{soisisa}
\epsilon(\theta_j)=\pi\, (2j+1)\ \ \ \ \ \ (j=0,\,\pm 1,\,\pm 2\ldots\, )\ ,
\end{eqnarray}
where
\begin{eqnarray}\label{sosoia}
\epsilon(\theta)={\rm i} \log\left(\,\frac{Q(\theta+{\rm i}\pi\xi)}{Q(\theta-{\rm i}\pi\xi)}\,\right)
\end{eqnarray}
and the branch of the log is fixed by the condition
\begin{eqnarray}\label{ssosisasiu}
\epsilon(\theta)= \frac{r\mbox{e}^\theta}{2}-2\pi k+o(1)\ \ \ \ \ \ \ \ \ \ {\rm for}\ \ \ \ \ \ \Re e(\theta)\to+\infty \ \ \ \
{\rm and}\ \ \ |\Im m (\theta)|<\frac{\pi}{2}\ .
\end{eqnarray}
Using the analytic properties of $\zeta_\pm(\omega)$ \eqref{ssisai},
definitions \eqref{saossisa} and \eqref{sosoia} can be transformed
into the integral representations
\begin{eqnarray}\label{jsaospsosa}
\log Q\big(\theta+{\textstyle \frac{{\rm i}\pi (1+\xi)}{ 2}} \big)={\rm i}\pi k+
\frac{r\cosh(\theta)}{2 \cos(\frac{\pi\xi}{ 2}) }-
\frac{1}{2}\ \Xint-_{-\infty}^\infty\frac{\mbox{d}\omega}{\omega }\
\frac{{ \zeta}(\omega)}{\sinh(\frac{\pi\omega(1+\xi)}{2})}\ \mbox{e}^{{\rm i}\omega\theta}
\end{eqnarray}
and
\begin{eqnarray}\label{soisoai}
\epsilon(\theta)=-2\pi k
+r\ \sinh(\theta)-{\rm i}\ \int_{-\infty}^\infty\frac{\mbox{d}\omega}{\omega }\
\frac{ \sinh(\frac{\pi\omega(1-\xi)}{2}) }{\sinh(\frac{\pi\omega(1+\xi)}{2})}\ \zeta(\omega)\ \mbox{e}^{{\rm i}\omega\theta}\ ,
\end{eqnarray}
respectively.
On the other hand, the BA equations\ \eqref{soisisa} imply
(see Ref.\cite{Destri:1992qk} and related Ref.\cite{Klumper:1991} for the original derivation)
\begin{eqnarray}\label{sksklsskla}
\zeta(\omega)=
\frac{{\rm i}\, \omega\, \sinh(\frac{\pi\omega (1+\xi)}{ 2})}{
\cosh(\frac{\pi\omega}{ 2})\,
\sinh(\frac{\pi\xi \omega}{ 2})}\
\int_{-\infty}^{\infty} \frac{\mbox{d}\theta}{2\pi}\ \mbox{e}^{-{\rm i}\omega \theta}\,
\Im
m\Big[\, \log\big(1+\mbox{e}^{-{\rm i} \epsilon(\theta-{\rm i} 0)}\, \big)
\,\Big]\ .
\end{eqnarray}
Note that at the free-fermion point ($\xi=1$) $\epsilon(\theta)=r\, \sinh(\theta)-2\pi k$,
therefore Eq.\eqref{sksklsskla} gives
\begin{eqnarray}\label{ssaisisai}
\zeta(\omega)= \omega \int_{-\infty}^\infty\frac{\mbox{d}\theta}{2\pi}\, \mbox{e}^{-{\rm i}\omega\theta}
\Big[\, \mbox{e}^{-\frac{\pi\omega}{2}}\ \log\big(1+\mbox{e}^{-r\cosh(\theta)+ 2\pi{\rm i} k}\,\big)-
\mbox{e}^{\frac{\pi\omega}{2}}\ \log\big(1+\mbox{e}^{-r\cosh(\theta)- 2\pi{\rm i} k}\,\big) \Big] .
\end{eqnarray}
In general,
Eqs.\eqref{soisoai} and \eqref{sksklsskla} are combined into a single integral equation for $\epsilon(\theta)$.
Once the numerical data for $\epsilon(\theta)$ are available,
$\zeta(\omega)$ can be computed by means of \eqref{sksklsskla}.
Eq.\eqref{sksklsskla} shows that $\zeta(\omega)$
is a meromorphic function with simple poles located at
$\omega=\pm {\rm i}\, (2 n+1)$ and $\omega= \pm \frac{2{\rm i}}{\xi}\ (n+1) \ (n=0,\,1\ldots)$.
The residue values are
the $k$-vacuum eigenvalues of local and nonlocal integrals of motion in the
quantum sine-Gordon model\ \cite{Bazhanov:1996dr,Lukyanov:2010rn}.
In particular, at the boundary of the strip of analyticity $|\Im m(\omega)|<1$, $ \zeta(\omega)$ has simple poles
\begin{eqnarray}\label{uysoosaao}
\zeta(\omega)= \mp
\frac{\mathfrak F}{r} \cot\Big(\frac{\pi\xi}{2}\Big)\ \frac{{\rm i} }{ \omega\pm {\rm i} }+O(1)\ \ \ \ \ {\rm as}\ \ \ \
\omega\to\pm{\rm i}\ ,
\end{eqnarray}
where ${\mathfrak F}$ is given by Eq.\eqref{sksjsasayusa}.
In this way, the problem of numerical calculation
of $k$- and $r$-partial derivatives of the YY-function can be solved by means of relations
\begin{eqnarray}\label{usystsoissisa}
\Big(\frac{\partial{\mathfrak Y}}{\partial r}\Big)_{\xi,k}
&=& \tan\Big(\frac{\pi\xi}{2}\Big)\ \lim_{\omega\to\mp{\rm i}} (1 \mp{\rm i}\, \omega)\,\zeta(\omega)\ ,\nonumber \\
\Big(\frac{\partial{\mathfrak Y}}{\partial k}\Big)_{r,\xi}&=&-\frac{4{\rm i}\, \xi }{1+\xi}\ \zeta'(0)\ .
\end{eqnarray}
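At the free-fermion point the second of these relations can be tested end to end (a sketch; the test values are ours): differentiating \eqref{sssiossisa} with respect to $k$ and comparing with $-2{\rm i}\,\zeta'(0)$, where at $\xi=1$ Eq.\eqref{ssaisisai} gives $\zeta'(0)=\frac{{\rm i}}{\pi}\int_{-\infty}^{\infty}\mbox{d}\theta\ \arctan\frac{\sin(2\pi k)\,\mbox{e}^{-r\cosh\theta}}{1+\cos(2\pi k)\,\mbox{e}^{-r\cosh\theta}}$.

```python
import numpy as np
from scipy.integrate import quad

def Y_ff(r, k):
    # free-fermion YY-function: the two logarithms combine into a real one
    def f(tau):
        x = np.exp(-r*np.cosh(tau))
        return tau*np.sinh(tau)*np.log(1.0 + 2.0*np.cos(2*np.pi*k)*x + x*x)
    return -r*quad(f, -15.0, 15.0, limit=300)[0]/(2.0*np.pi**2)

def dY_dk_via_zeta(r, k):
    # -2i zeta'(0) at xi = 1, i.e. (2/pi) * integral of arg(1 + x e^{2 pi i k})
    def f(th):
        x = np.exp(-r*np.cosh(th))
        return np.arctan2(np.sin(2*np.pi*k)*x, 1.0 + np.cos(2*np.pi*k)*x)
    return (2.0/np.pi)*quad(f, -15.0, 15.0, limit=300)[0]

r, k, eps = 1.5, 0.15, 1e-4
fd = (Y_ff(r, k + eps) - Y_ff(r, k - eps))/(2*eps)   # numerical k-derivative
print(fd, dY_dk_via_zeta(r, k))
```

The two numbers agree to the accuracy of the finite-difference step, illustrating the $k$-derivative relation at $\xi=1$.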
The calculation of the $\xi$-derivative turns out to be a more delicate problem.
Rather naive manipulations with the lattice YY-functional\ \eqref{trsossia} suggest
that
\begin{eqnarray}\label{assjsajasju}
\Big(\frac{\partial{\mathfrak Y}}{\partial \xi}\Big)_{r, k}&=&
\frac{1}{4}\ \int_{-\infty}^\infty
\frac{\mbox{d}\omega}{\omega}\
\frac{ \sinh(\pi \omega)}
{ \sinh^2(\frac{\pi\omega(1+\xi)}{2})}\ \zeta(\omega)\,
\zeta( -\omega)\ .
\end{eqnarray}
In Appendix\,\ref{AppendixC}
we present some evidence in support of this relation.
Unfortunately, it still lacks a rigorous proof.
\section{\label{SectionMini} Minisuperspace limit}
\subsection{Minisuperspace limit of the YY-function}
The small-$R$ expansion of the $k$-vacuum energy in the quantum sine-Gordon model
was argued in Ref.\cite{Zamolodchikov:1994uw},
\begin{eqnarray}\label{isaiaissus}
\frac{RE_k}{\pi}=- { \frac{1}{6}}+\frac{4k^2\xi}{1+\xi }-
\sum_{n=1}^\infty\, f _n\ r^{\frac{4 n}{1+\xi}}\ .
\end{eqnarray}
The first coefficient $f_1$ has a relatively simple explicit form
\begin{eqnarray}\label{soisasa}
f_1= 4\ \
\frac{ \gamma^2(\frac{\xi}{1+\xi})\gamma(\frac{\xi(1- 2k)}{1+\xi})\gamma(\frac{\xi(1+2k)}{1+\xi})}{\gamma(\frac{2\xi}{1+\xi})
\, (\, (1+\xi)\, r_\xi\,)^{{\frac{4}{1+\xi}}}}\ ,
\end{eqnarray}
where $\gamma(x)=\Gamma(x)/\Gamma(1-x)$ and $r_\xi$ \eqref{tsrars}.
Let us consider
the $\xi\to 0$ limit of \eqref{isaiaissus}
in which the parameters $r$ and $k$
are kept fixed. One finds
\begin{eqnarray}\label{ssasoaaiso}
\frac{RE_k}{\pi}=-\frac{ 1}{6}+ \xi\ a+o(\xi)
\end{eqnarray}
with
\begin{eqnarray}\label{sisaasu}
a=\nu^2+\frac{q^2}{2\,(\nu^2-1)} +O(q^4)\ .
\end{eqnarray}
Here we have denoted
\begin{eqnarray}\label{usoisisai}
q =\Big(\frac{r}{4}\Big)^2\ ,\ \ \ \ \ \ \
\nu=2 k\ .
\end{eqnarray}
In this special (minisuperspace) limit
the sine-Gordon QFT reduces to the quantum mechanical problem of
a particle in a cosine potential
whose energy coincides with
$a$ from Eq.\eqref{ssasoaaiso}.\footnote{
The minisuperspace approximation for the closely related quantum sinh-Gordon model was discussed in Ref.\cite{Lukyanov:2000jp}.}
Note that \eqref{usoisisai} are conventional notations in the theory of the Mathieu
equation \cite{Stegun}.
For given $\nu$, $a$ is determined by the
Whittaker equation
\begin{eqnarray}\label{sakssaisa}
\sin^2\Big(\frac{\pi\nu}{2}\Big)=\Delta_q( a)\ \sin^2 \Big(\frac{\pi\sqrt{a}}{2}\Big)\ ,
\end{eqnarray}
where $\Delta_q(a)$ is Hill's determinant
\begin{eqnarray}\label{salososa}
\Delta_q(a)=
\det \begin{pmatrix}
&\cdots& \cdots & \cdots & \cdots & \cdots & \cdots & \cdots& \\
&\cdots& \gamma_{-2}& 1 & \gamma_2 & \cdots & \cdots & \cdots & \\
&\cdots& \cdots &\gamma_{0}& 1 &\gamma_0 & \cdots & \cdots & \\
&\cdots& \cdots & \cdots & \gamma_2 & 1 & \gamma_2 & \cdots & \\
&\cdots& \cdots & \cdots & \cdots & \cdots & \cdots & \cdots &
\end{pmatrix}\ ,\ \ \ \ \ \ \ \ \gamma_{2n}=\frac{q}{4n^2-a}\ .
\end{eqnarray}
The solution
of Eq.\eqref{sakssaisa} is a multivalued function, but the condition \eqref{sisaasu}
specifies the proper branch unambiguously.\footnote{It
is implemented in $Mathematica$ as
${\rm MathieuCharacteristicA}[\,\nu,\, q\,]$ with $-1<\nu<1$.}
To simplify formulas, below we will treat $a$ as a function of the variables $r$ and $\nu$, i.e.,
\begin{eqnarray}\label{aauaussay}
a=a(r,\nu)\ .
\end{eqnarray}
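For readers without access to $Mathematica$, $a(r,\nu)$ is easy to generate from the Floquet (tridiagonal) form of the Mathieu equation: substituting $y=\sum_n c_n\,\mbox{e}^{{\rm i}(\nu+2n)x}$ into $y''+(a-2q\cos 2x)\,y=0$ shows that $a$ is an eigenvalue of the matrix with diagonal $(\nu+2n)^2$ and off-diagonal entries $q$. A sketch (the truncation size is ours; for the moderate values of $q$ used here the correct branch is the eigenvalue closest to $\nu^2$, while for large $q$ the branch must be tracked continuously in $q$):

```python
import numpy as np

def mathieu_a(nu, q, M=40):
    # truncated Floquet matrix for y'' + (a - 2 q cos 2x) y = 0,
    # y = sum_n c_n e^{i(nu+2n)x}; here q = (r/4)^2
    n = np.arange(-M, M + 1)
    mat = np.diag((nu + 2.0*n)**2) + q*(np.eye(2*M + 1, k=1) + np.eye(2*M + 1, k=-1))
    w = np.linalg.eigvalsh(mat)
    return w[np.argmin(np.abs(w - nu**2))]   # branch closest to nu^2

# check against the small-q expansion a = nu^2 + q^2/(2(nu^2-1)) + O(q^4)
nu, q = 0.4, 0.01
print(mathieu_a(nu, q), nu**2 + q**2/(2*(nu**2 - 1)))
```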
Now, using Eqs.\eqref{sisaiaissus} and \eqref{ssasoaaiso} it is straightforward to
obtain the limiting behavior of the YY-function:
\begin{eqnarray}\label{osaosasao}
{\mathfrak Y}=
\frac{1}{6}\ \log\left(\frac{ r\xi A_G^{12}}{4}\right)+
\xi\ \left(\, \frac{1}{4}\,
\log\big(\, \mbox{e}^{-\frac{1}{3}}\, 2^{-\frac{2}{3}}\,\xi\,\big)+ {\cal Y}(r,\nu)\, \right)+o(\xi)\ ,
\end{eqnarray}
where
\begin{eqnarray}\label{soisosa}
{\cal Y} (r,\nu)&=&{\cal Y}_0(\nu)
-\nu^2\ \log\Big(\frac{r}{4}\Big)-\frac{r^2}{16}
-\int_0^{r}\frac{\mbox{d} t}{t}\ \big(\, a(t,\nu)-\nu^2\,\big)\ ,
\end{eqnarray}
with
\begin{eqnarray}\label{tskssia}
{\cal Y}_0(\nu)=
\int_0^\infty\frac{\mbox{d} x}{x}\
\left(\, \nu^2-\frac{ \sinh^2(\nu x)}{x\sinh(x)}
\, \right)\ \mbox{e}^{-x} \ .
\end{eqnarray}
Note that
the above form of ${\cal Y}( r,\nu)$ can be alternatively rewritten as\footnote{ For the large-$r$ expansion of $a$
see, e.g., formulas 28.8.1, 28.8.2 in Ref.\cite{Wolf}.}
\begin{eqnarray}\label{sioisai}
{\cal Y}( r,\nu)=
\log\big(2^{\frac{1}{6}}\, A_G^3\big)+
\frac{1}{4}\ \log(r)-\frac{r}{2}+
\int_{r}^\infty\frac{\mbox{d} t}{ t}
\ \Big(\, a(t,\nu)+\frac{t^2}{8}-\frac{t}{2}+\frac{1}{4}\, \Big)\ .
\end{eqnarray}
\subsection{\label{MiniQ}Minisuperspace limit of the $Q$-function}
The minisuperspace approximation of the $Q$-function at $r=0$ was argued in Appendix B
of Ref.\cite{Bazhanov:1996dr}.
The analysis was based on general
properties of the $Q$-function and
only minor adjustments are needed to extend it to the case $r>0$.
The $Q$-function is a quasiperiodic solution (see Eq.\eqref{saossisa})
\begin{eqnarray}\label{iskissasua}
Q\big(\theta+{\rm i}\pi(1+\xi)\, \big)=\mbox{e}^{2{\rm i}\pi k}\ Q(\theta)
\end{eqnarray}
of Baxter's equation (see, e.g., Ref.\cite{Lukyanov:2010rn})
\begin{eqnarray}\label{isssisa}
T_{\frac{1}{2}}(\theta)\, Q(\theta)=Q\big(\,\theta+{\rm i}\pi\xi\, \big)+
Q\big(\,\theta-{\rm i}\pi\xi\, \big)\ ,
\end{eqnarray}
where $ T_{\frac{1}{2}}(\theta)$ stands for the $k$-vacuum eigenvalue
of the transfer matrix. If the
overall normalization factor in \eqref{saossisa}
is chosen as in Eq.\eqref{skskssisai}, then the $Q$-function
also
obeys the so-called quantum Wronskian relation
\begin{eqnarray}\label{hasta}
Q\big(\theta+{\textstyle\frac{{\rm i}\xi\pi}{2}}, \, k\, \big)\,
Q\big(\theta-{\textstyle\frac{{\rm i}\xi\pi}{2}}, -\, k\, \big) -
Q\big(\theta-{\textstyle\frac{{\rm i}\xi\pi}{2}}, \,k \, \big)\,
Q\big(\theta+{\textstyle\frac{{\rm i}\xi\pi}{2}}, -\, k\, \big)=2{\rm i}\sin(2\pi k )\ ,
\end{eqnarray}
where we explicitly indicate the dependence on the quasi-momentum.
In the minisuperspace limit
\begin{eqnarray}\label{yiusosisai}
T_{\frac{1}{2}}(\theta)=2+(\pi\xi )^2\ w(\theta)+o(\xi)\ ,
\end{eqnarray}
whereas Baxter's equation reduces to a second-order differential equation.
For the conformal case (i.e., for $r=0$) discussed in \cite{Bazhanov:1996dr},
$w(\theta)=\mbox{e}^{2\theta}-(2k)^2$. Similarly, in the case of finite $r$ one can show that
\begin{eqnarray}\label{jaausy}
w(\theta)=\frac{r^2}{8}\ \cosh(2\theta)-A\ ,
\end{eqnarray}
with some $\theta$-independent constant $A$ such that
\begin{eqnarray}\label{kausya}
\lim_{r\to 0}\, A=(2k)^2\ .
\end{eqnarray}
Thus the minisuperspace limit of the $Q$-function can be described as follows.
Let $F_\nu(z)$ be
Floquet's solution
\begin{eqnarray}\label{issuas}
F_\nu(z+{\rm i}\pi)=\mbox{e}^{{\rm i} \pi\nu}\ F_\nu(z)\ ,\ \ \ \ \ \ \ \ F_{-\nu}(z)=F_{\nu}(-z)
\end{eqnarray}
of the modified
Mathieu equation
\begin{eqnarray}\label{yqtqrs}
- \frac{\mbox{d}^2 F}{\mbox{d} z^2}+\Big(\, A-\frac{r^2}{8} \, \cosh(2z )\,\Big)\, F=0
\end{eqnarray}
normalized by the condition
\begin{eqnarray}\label{iosausau}
W[F_\nu, F_{-\nu}]=-2\ \sin(\pi\nu )\ ,
\end{eqnarray}
where
$W[f,g]$ stands for the Wronskian $f g'-g f'$.
Then, as follows from \eqref{iskissasua}-\eqref{jaausy},
\begin{eqnarray}\label{aosasu}
\lim_{\xi\to 0\atop
\theta,\,r,\, k-{\rm fixed}}\, \sqrt{\pi \xi}\ Q(\theta, k)= F_\nu(\theta)\ \ \ \ \ \ \ \ \ \ {\rm with}\ \ \ \ \ \ \ \ \nu=2k\ .
\end{eqnarray}
For given $\nu$, the constant $A$ in \eqref{yqtqrs} is determined by
the quasiperiodicity condition \eqref{issuas}, which implies
the Whittaker equation \eqref{sakssaisa} with $a$ replaced by $A$.
The extra condition \eqref{kausya} enables us to choose
the branch of the solution of \eqref{sakssaisa} unambiguously.
Thus we conclude that
\begin{eqnarray}\label{sosasai}
A=a(r,\nu)\ ,
\end{eqnarray}
where $a$ is the same function as in Eqs.\eqref{ssasoaaiso},\,\eqref{aauaussay}.
It is useful to note that Eq.\eqref{iosausau}
is equivalent to the following normalization
condition\footnote{ For $0< \nu<1$,
$F_\nu(z)={\cal N}_\nu\, \big(\,ce_\nu({\rm i}\, z)-{\rm i}\, se_\nu({\rm i}\, z)\, \big)\,,\ \ {\cal N}_\nu=
\sqrt{\frac{\sin(\pi\nu)}{se_\nu'(0)\, ce_\nu(0)}}$, where $ce_\nu( x)$ and $ se_\nu(x)$
are returned by the
$Mathematica$ functions ${\rm MathieuC}[a,\, q,\, x]$ and ${\rm MathieuS}[a,\, q,\, x]$, respectively.
Their $x$-derivatives are implemented as
${\rm MathieuCPrime}[a,\, q,\, x]$ and ${\rm MathieuSPrime}[a,\, q,\, x]$. Here $a=a(r,\nu)$ and
$q=\big(\frac{r}{4}\big)^2$.}
\begin{eqnarray}\label{ksaskla}
\int_0^\pi \mbox{d} y\ F_\nu ({\rm i}\, y)\, F_\nu (-{\rm i}\, y)=2\pi\sin(\pi \nu)\ \bigg[\,
\Big(\frac{\partial a}
{\partial \nu}\Big)_r\, \bigg]^{-1}\ .
\end{eqnarray}
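Relation \eqref{ksaskla} can be verified numerically (a sketch; the construction is ours): building $F_\nu(z)=\lambda\sum_n c_n\,\mbox{e}^{(\nu+2n)z}$ from the eigenvector $c_n$ of the truncated Floquet matrix, one finds $W[F_\nu,F_{-\nu}]=-2\lambda^2 S_0 S_1$ with $S_0=\sum_n c_n$, $S_1=\sum_n(\nu+2n)c_n$, while $\int_0^\pi F_\nu({\rm i}y)F_\nu(-{\rm i}y)\,\mbox{d}y=\pi\lambda^2\sum_n c_n^2$; the right-hand side of \eqref{ksaskla} involves $\partial a/\partial\nu$, computed here by a central finite difference.

```python
import numpy as np

def floquet(nu, q, M=40):
    # eigenpair of the truncated Floquet matrix: eigenvalue a = a(r,nu)
    # with q = (r/4)^2, branch closest to nu^2, and the real eigenvector c_n
    n = np.arange(-M, M + 1)
    mat = np.diag((nu + 2.0*n)**2) + q*(np.eye(2*M + 1, k=1) + np.eye(2*M + 1, k=-1))
    w, v = np.linalg.eigh(mat)
    i = np.argmin(np.abs(w - nu**2))
    return w[i], v[:, i], nu + 2.0*n

nu, q = 0.3, 0.5
a, c, d = floquet(nu, q)
lam2 = np.sin(np.pi*nu)/(c.sum()*(d*c).sum())  # fixes W[F_nu,F_-nu] = -2 sin(pi nu)
lhs = np.pi*lam2*(c**2).sum()                  # = int_0^pi F_nu(iy) F_nu(-iy) dy
eps = 1e-5
da_dnu = (floquet(nu + eps, q)[0] - floquet(nu - eps, q)[0])/(2*eps)
rhs = 2.0*np.pi*np.sin(np.pi*nu)/da_dnu
print(lhs, rhs)
```

The overall sign of the eigenvector drops out of both $S_0 S_1$ and $\sum_n c_n^2$, so the normalization $\lambda^2$ is well defined.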
Eq.\eqref{aosasu} dictates that the BA roots $\{\theta_j\}_{j=-\infty}^\infty$ \eqref{xsosasai}
in the minisuperspace limit turn out to be the zeros of the Mathieu
function,\footnote{The remaining terms $O(n^{-1})$ in the large-$n$ asymptotic
formulas \eqref{saklsisais} diverge as $\xi\to 0$.
For this reason these formulas are not applicable in
the minisuperspace limit.}
\begin{eqnarray}\label{ysasaioaois}
\lim_{\xi\to 0\atop j,\,r,\,k-{\rm fixed}}\theta_j=z_j\ \ \ :\ \ \ F_\nu(z_j)=0\ \ \ \ \ (j=0,\,\pm1,\,\pm2,\ldots\,)\ .
\end{eqnarray}
Using the properties of $F_\nu(z)$, it is not difficult to derive the sum rule
\begin{eqnarray}\label{sskssia}
\Sigma=\sin(\pi\nu )\
\int_{-\infty}^{\infty}\ \frac{ \mbox{d} x}
{F_\nu(x+\frac{{\rm i}\pi}{2})\,F_{\nu}(-x-\frac{{\rm i}\pi}{2})}\ ,
\end{eqnarray}
where $\Sigma$ stands for the regularized sum of $(-2z_j)$:
\begin{eqnarray}\label{iasissaasuy}
\Sigma=
2\ \sum_{n=0}^{\infty}\ \Big( \,
{ \frac{
\nu }{ n+1}}-z_n- z_{-n-1}\,\Big)-2\nu\ \log
\Big(\frac{r\mbox{e}^{\gamma_E}}{ 4\pi}\Big)
\end{eqnarray}
($\gamma_E$ is Euler's constant).
Note that
$\Sigma=-2{\rm i}\, \lim_{r\to 0}\zeta'(0)$, and hence Eqs.\,\eqref{spssyssta},\,\eqref{osaosasao} imply that
\begin{eqnarray}\label{siaassyua}
\Sigma= \int_r^{\infty}\frac{\mbox{d} t}{t}\ \Big(\frac{\partial a}
{\partial \nu}\Big)_t\ .
\end{eqnarray}
\subsection{Connection problem for the Painlev${\acute {\bf e}}$ III equation}
We now turn to the ShG equation\ \eqref{luuausay} at the minisuperspace limit.
In this limiting situation the triangle $AB{\tilde B}$ in Fig.\ref{fig1a}b shrinks to a segment while
${\hat\eta}$ becomes a certain solution of the Painlev${\acute {\rm e}}$ III equation\ \eqref{soisiasai}
\begin{eqnarray}\label{mznzhx}
\lim_{\alpha\to\infty\atop
r,\,l-{\rm fixed}}{\hat\eta}(w,{\bar w})=U(4\, |w-w_A|)\ \ \ \ \ \ \ \ \ \ \ \
\big(\, 0<|w-w_A|< r/4\,\big)\ ,
\end{eqnarray}
such that
\begin{eqnarray}\label{sksisauas}
\mbox{e}^{2U(t)}=\kappa^2\ t^{4\nu-2}+o(t^{4\nu-2})\ \ \ \ \ \ \ {\rm for}\ \ \ \ \ t\to 0\ .
\end{eqnarray}
Parameters $\nu$ and $\kappa$ are related to $l$ and $\eta_A$\ \eqref{eaisusy} as follows
\begin{eqnarray}\label{iiuuwwq}
\nu&=&l+{\textstyle \frac{1}{2}}\\
\log\kappa&=& -l\, \log(4)+\lim_{\alpha\to \infty\atop
l,\, r-{\rm fixed}}\eta_A\ .\nonumber
\end{eqnarray}
Let us discuss the solution satisfying \eqref{sksisauas} in the context of the
the general theory of the Painlev${\acute {\rm e}}$ III equation.
For $0<\nu<1$ and real $\kappa$,
the asymptotic condition\ \eqref{sksisauas} unambiguously specifies a two-parameter family
of real solutions of the Painlev${\acute {\rm e}}$ III equation.
A systematic small-$t$ expansion of \eqref{sksisauas} has the form of the double series
\cite{Zamolodchikov:1994uw}
\begin{eqnarray}\label{utsosaisa}
\mbox{e}^{2U(t)}=\kappa^2\ t^{4\nu-2}+
\sum_{m,n=0\atop m+n>1}^{\infty}\
B_{m,n}\ t^{ 4(1-\nu) m +4\nu n-2}
\end{eqnarray}
whose coefficients are uniquely determined
through the parameters $\nu$ and
$\kappa$ by a recursion relation which follows from the
differential equation \eqref{soisiasai}.
The expansion\ \eqref{utsosaisa} is expected to converge for sufficiently small $t$.
Let $t=r$ be the
closest singularity to the origin.
The differential equation \eqref{soisiasai} possesses the Painlev${\acute {\rm e}}$ property which states
that, except at $t=0$ and $t=\infty$, the only possible singularities of $\mbox{e}^{2U}$
are the second order poles of the form
$\mbox{e}^{2U(t)}=\frac{4}{(t-r)^2}-
\frac{4}{r\,(t-r)}+C+o(1)$
with some constant $C$.
Further terms in this
Laurent expansion are expressed in terms of $r$ and $C$. They can be easily generated
through the differential equation\ \eqref{soisiasai}.
It will be convenient for us to replace $C$
by an equivalent parameter $c$ such that
\begin{eqnarray}\label{skkss}
\mbox{e}^{2U(t)}=\frac{4}{(t-r)^2}-
\frac{4}{r\,(t-r)}+\frac{13-16\, c}{3\, r^2}+ \frac{2\, ( 16\, c-7)}{3\, r^3}\ (t-r)+O\big((t-r)^2\big)\ .
\end{eqnarray}
We will focus on the case when the closest pole to the origin is located on the positive real axis, i.e., $r>0$.
This requirement imposes certain constraints
on admissible values of $\kappa$ and $c$.
Within the admissible domains,
each pair $(\nu,\kappa)$ or $(r, c)$ can serve as an independent set of parameters
for the two-parameter family of real solutions of the Painlev${\acute {\rm e}}$ III equation
which are regular on the segment $t\in (0,r)$ and characterized by the behaviour $U\to (2\nu-1)\,\log(t)+O(1)$ as $t\to 0$.
However,
it is more convenient to choose $(r,\nu)$ with $r>0$, $0<\nu<1$, as a basic set of independent parameters.
At this point, we turn to the problem of finding the functions
$\kappa=\kappa(r,\nu)$ and $c=c(r,\nu)$, i.e., the connection problem
for the local expansions
\eqref{utsosaisa} and \eqref{skkss}.
It is relatively easy to establish
the relation
\begin{eqnarray}\label{naahsg}
\Big(\frac{\partial c}{\partial \nu}\Big)_r=- r\,\Big(\frac{\partial \log\kappa}{\partial r }\Big)_\nu\ .
\end{eqnarray}
The proof is similar to our previous derivation of the
generalized FSZ relations: one should consider
the action functional
\begin{eqnarray}\label{aosioas}
{\cal S}[U]=\frac{1}{4} \lim_{\epsilon\to 0}\bigg[
\int_\epsilon^{r}\mbox{d} t\,
t \Big( {\dot U}^2+\sinh^2(U)-\frac{2}{(r-t)^2}
\Big)+2 (2\nu-1)\, U(\epsilon)
-(2\nu-1)^2 \log(\epsilon) \bigg]
\end{eqnarray}
where the dot stands for the $t$-derivative.
For $U(t)$,\ $t\in(0, r)$ satisfying the boundary conditions \eqref{sksisauas},\,\eqref{skkss},
the functional ${\cal S}[U]$ is well defined and its variation vanishes
provided $U(t)$ satisfies the Painlev${\acute {\rm e}}$ III equation and $\delta U(r)=0$.
Let ${\cal S}^*$ be the on-shell value of \eqref{aosioas}.
One can show that
\begin{eqnarray}\label{sosaiuas}
r\, \Big(\frac{\partial {\cal S}^*}{\partial r}\Big)_\nu&=&\frac{1}{4 }-\frac{r^2}{8}-c \\
\Big(\frac{\partial {\cal S}^*}{\partial \nu}\Big)_r&=&\log\kappa\ ,\nonumber
\end{eqnarray}
and the compatibility of these equations implies\ \eqref{naahsg}.
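In more detail, differentiating the first relation in \eqref{sosaiuas} with respect to $\nu$ at fixed $r$, and the second with respect to $r$ at fixed $\nu$, one finds
\begin{eqnarray*}
r\ \frac{\partial^2 {\cal S}^*}{\partial \nu\,\partial r}\,=\,-\Big(\frac{\partial c}{\partial \nu}\Big)_r\ ,
\ \ \ \ \ \ \ \ \ \
r\ \frac{\partial^2 {\cal S}^*}{\partial r\,\partial \nu}\,=\, r\,\Big(\frac{\partial \log\kappa}{\partial r }\Big)_\nu\ ,
\end{eqnarray*}
so that the equality of the mixed derivatives is precisely the statement\ \eqref{naahsg}.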
Now let us apply the results of the previous sections.
At the minisuperspace limit the
first generalized FSZ relation\ \eqref{ososaasi} yields the formula identical to\ \eqref{naahsg} with $c(r, \nu)$
replaced by $a(r,\nu)$.
Therefore, we conclude that $c-a$ does not depend on $\nu$.
It is
straightforward to analyze the $r\to 0$ behaviour of the on-shell action:
\begin{eqnarray}\label{jayast} {\cal S}^*= \Big(\,\frac{1}{4}-\nu^2\,\Big)\ \log(r)+\nu\log\Big(\frac{8\nu}{\mbox{e}}\Big)
-\frac{1}{4}\ \log\Big(\frac{4}{\mbox{e}}\Big) -\frac{r^2}{16}+O(r^4)\ .
\end{eqnarray}
This asymptotic formula,
combined with \eqref{sosaiuas}, implies that $c=\nu^2+O(r^4)$, and hence
$c-a=O(r^4)$ (see Eq.\eqref{sisaasu}).
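Indeed, differentiating \eqref{jayast} gives
$$
r\,\Big(\frac{\partial {\cal S}^*}{\partial r}\Big)_\nu=\frac{1}{4}-\nu^2-\frac{r^2}{8}+O(r^4)\ ,
$$
and comparing this with the first relation in \eqref{sosaiuas}, whose right-hand side is $\frac{1}{4}-\frac{r^2}{8}-c$, one obtains $c=\nu^2+O(r^4)$.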
In Appendix\,\ref{AppendixD} it is explained how to systematically recover the small-$r$ expansion of $c(r,\nu)$.
The calculations yield the expansion
\begin{eqnarray}\label{skssia}
c&=&\nu^2+\frac{1}{2\,( \nu^2-1)}\ \Big(\frac{r}{4}\Big)^{4}+
\frac{ 5\, \nu^2+7}{ 32\, (\nu^2-1)^3\, (\nu^2-4)}\ \Big(\frac{r}{4}\Big)^{8}\\
&+&
\frac{9\, \nu^4+58\, \nu^2+29}{ 64\, (\nu^2-1)^5\, (\nu^2-4)\, (\nu^2-9)}\
\Big(\frac{r}{4}\Big)^{12}+O(r^{16})\ ,\nonumber
\end{eqnarray}
which matches exactly the small-$r$ expansion of $a(r,\nu)$
(see Eq.\eqref{sisaasu} and formula 20.3.15 in Ref.\cite{Stegun}).
The formal derivation of the relation
\begin{eqnarray}\label{nsassysa}
c=a(r,\,\nu)
\end{eqnarray}
can be obtained with the use of equations \eqref{ossai}, \eqref{uyytsoisai}
from Appendix\,\ref{AppendixA},
combined with the results from Section\,\ref{MiniQ}.\footnote{
Note that Eq.\eqref{nsassysa} allows one to derive the following large-$\alpha$ asymptotic formula
for the on-shell action \eqref{ssiisa}:
$${\cal A}^*=\frac{1}{6}\ \log\Big(\,
\frac{ 3^{\frac{1}{2}}\, A_G^{18}\, r}{2^4\, \alpha}\, \Big)+\frac{1}{\alpha}\
\bigg( {\cal S}^*+\frac{1}{4}\ \log\Big(\,\frac{ 2^{8l^2-2}\, A_G^{12}}{ \mbox{e}^{\frac{4}{3}}\, \alpha\, r}\,\Big)\, \bigg)
+o\big(\alpha^{-1}\big)\, .$$}
Eq.\eqref{nsassysa}, together with
\begin{eqnarray}\label{saoisa}
\kappa
&=& 8\nu\ r^{-2\nu}\ \exp\bigg[\,
\int_0^{r}\frac{\mbox{d} t}{t}\ \bigg(\,2\nu- \Big(\frac{\partial a}
{\partial \nu}\Big)_t\,\bigg) \, \bigg]\\
&=&8^{1-2\nu}\ \ \frac{\Gamma(1-\nu)}{\Gamma(\nu)}\
\exp\bigg[\,\int_r^{\infty}\frac{\mbox{d} t}{t}\ \Big(\frac{\partial a}
{\partial \nu}\Big)_t\,\bigg] \ ,\nonumber
\end{eqnarray}
leads to an explicit solution of the connection problem for local expansions\ \eqref{utsosaisa} and
\eqref{skkss}.
The domain of applicability of Eqs.\eqref{nsassysa},\,\eqref{saoisa}
is given by the inequalities
(see Fig.\,\ref{fig5g}a)
\begin{eqnarray}\label{asosiisaa}
0<\nu< 1\ ,\ \ \ \ \ \ \ \kappa>8^{1-2\nu}\ \ \frac{\Gamma(1-\nu)}{\Gamma(\nu)}\ .
\end{eqnarray}
It can be equivalently described
in terms of the pair $(r,c)$:
\begin{eqnarray}\label{akjasjwqi}
r>0\ ,\ \ \ \ \ \ \ \ \ \ a_0(r)< c< b_1(r)\ ,
\end{eqnarray}
where $a_0=a(r,0)$ and $b_1=\lim_{\nu\to 1}a(r,\nu)$ stand for the minimum and maximum of
the first conduction band for the Mathieu equation, respectively (see Fig.\,\ref{fig5g}b).
\begin{figure}
\centering
$(a)$\ \
\includegraphics[width=7 cm]{KN.eps}
\hskip 1.cm (b)\ \
\includegraphics[width=7 cm]{CR.eps}
\caption{ The admissible parameter domain
for the two-parameter family of
solutions of the Painlev${\acute {\rm e}}$ III equation:
$(a)$ in terms of $(\nu,\kappa)$ from \eqref{sksisauas}. $(b)$ in terms of $(r,c )$
from\ \eqref{skkss}. }
\label{fig5g}
\end{figure}
Finally, let us briefly discuss the minisuperspace limit for $\Phi^{(-)}=\frac{1}{2}\, (\,\Phi-{\hat \eta})$.
Contrary to ${\hat \eta}(w,{\bar w})$, the potential $\Phi(w,{\bar w})$ does not have a finite limit for $0<|w-w_A|< r/4$.
However, the divergent part of $\Phi^{(-)}(w,{\bar w})$ for $0<|w-w_A|< r/4$
is somewhat trivial and can be resolved by means of the decomposition
\eqref{rwsosaosai} from Appendix\,\ref{AppendixA}.
The non-trivial part ${\tilde \Phi}^{(-)}$
is specified by the conditions\ \eqref{sosiosai},\eqref{usysosiosa}
and remains finite as $\alpha\to \infty$.
It is convenient to introduce
\begin{eqnarray}\label{sosisaiu}
W(t)=
-l (l+1)\ \log|w-w_A|+
\lim_{\alpha\to\infty\atop
r,\,l-{\rm fixed}} {\tilde \Phi}^{(-)}(w,{\bar w})\ ,
\end{eqnarray}
where $t=4\,|w-w_A|<r$ and $l=\nu-\frac{1}{2}$.
As follows from Eqs.\eqref{sksai},\, \eqref{rwsosaosai}, $W(t)$
satisfies the linear inhomogeneous differential equation (assuming $U$ is given)
\begin{eqnarray}\label{siksisasai}
\frac{4}{t}\ \frac{\mbox{d} }{\mbox{d} t}\Big(\, t\, \frac{\mbox{d} W }{\mbox{d} t}\,\Big)= \mbox{e}^{-2U(t)}-1\ ,
\end{eqnarray}
and the boundary condition
\begin{eqnarray}\label{xskisai}
W(t)=
\Big(\,\frac{1}{4}-\nu^2\,\Big)\ \log\Big(\frac{t}{4}\Big) +o(1)\ ,\ \ \ {\rm as}\ \ \ \ \ t\to 0\ .
\end{eqnarray}
Therefore
\begin{eqnarray}\label{ysosaisa}
W(t)=\Big(\,\frac{1}{4}-\nu^2\,\Big)\ \log\Big(\frac{t}{4}\Big)
-\frac{t^2}{16}-\frac{1}{4}\ \int_0^t\mbox{d} \tau\, \tau\, \log\Big(\frac{\tau}{t}\Big)\ \mbox{e}^{-2U(\tau)}\ .
\end{eqnarray}
The $t$-derivative
of ${ W}(t)$ at $t=r$ is given by
\begin{eqnarray}\label{xisiiauysr}
r\,{\dot W}(r)=\frac{1}{4}-\frac{r^2}{8}-\nu^2+\frac{1}{4 }\ \int_0^r\mbox{d} t\, t\, \mbox{e}^{-2U(t)}
\end{eqnarray}
and it is not difficult to show that
\begin{eqnarray}\label{lsissu}
{\dot W}(r)=\Big(\frac{\partial {\cal S}^*}{\partial r}\Big)_\nu
\ .
\end{eqnarray}
The last two relations
combined with Eqs.\eqref{sosaiuas},\,\eqref{nsassysa} imply\label{isiiauysr}
\begin{eqnarray}\label{aiassayas}
\frac{1}{4 }\ \int_0^r\mbox{d} t\, t\, \mbox{e}^{-2U(t)}=\nu^2-a(r,\nu)\ .
\end{eqnarray}
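Explicitly, Eqs.\eqref{lsissu} and \eqref{sosaiuas} give $r\,{\dot W}(r)=r\,\big(\frac{\partial {\cal S}^*}{\partial r}\big)_\nu=\frac{1}{4}-\frac{r^2}{8}-c$, and together with \eqref{xisiiauysr} and $c=a(r,\nu)$ from \eqref{nsassysa} this yields
$$
\frac{1}{4}-\frac{r^2}{8}-\nu^2+\frac{1}{4 }\ \int_0^r\mbox{d} t\, t\, \mbox{e}^{-2U(t)}=
\frac{1}{4}-\frac{r^2}{8}-a(r,\nu)\ ,
$$
from which \eqref{aiassayas} follows by cancellation.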
By transforming the integrals as in Ref.\cite{Wu:1975mw},
the value of $W(t)$ at $t=r$ can be expressed in terms of $a$, $\kappa$ and ${\cal S}^*$:
\begin{eqnarray}\label{isuusasy}
W(r)=\frac{1}{2}\ \log\Big(\frac{r}{4}\Big)-\frac{r^2}{8}+
\Big(\nu-\frac{1}{2}\Big)^2
+2\nu^2\,\log(2)-a
+\nu\ \log(\kappa)-{\cal S}^*\ ,
\end{eqnarray}
which is equivalent to the following relations
\begin{eqnarray}\label{usossai}
W(r)=\Big(\, \frac{1}{4}-\nu^2\,\Big)\, \log\Big(\frac{r}{4}\Big)-
\frac{r^2}{16}-
\int_0^{r}\frac{\mbox{d} t}{t}\, \bigg(\,
t\, \Big(\frac{\partial a}
{\partial t}\Big)_\nu+
\nu\, \Big(\frac{\partial a}
{\partial \nu}\Big)_t-a(t,\nu)-\nu^2\, \bigg)\ ,
\end{eqnarray}
or
\begin{eqnarray}\label{sossisosisai}
W(r)&=&{\cal Y}_0(\nu)-
\log\bigg[\, A_G^3\ \ \ \mbox{e}^{-\frac{1}{4}-\nu^2}\ 2^{2\nu^2+\frac{2}{3}}\
\bigg(\, \frac{\Gamma(1+\nu)}{\Gamma(1-\nu)}\,\bigg)^{\nu}\,\bigg]\\
&+&
\int_r^{\infty}\frac{\mbox{d} t}{t}\ \bigg(\,
t\, \Big(\frac{\partial a}
{\partial t}\Big)_\nu+
\nu\, \Big(\frac{\partial a}
{\partial \nu}\Big)_t-a(t,\nu)
+\frac{t^2}{8}-\frac{1}{4} \,\bigg)\ ,\nonumber
\end{eqnarray}
where ${\cal Y}_0(\nu)$ is given by Eq.\eqref{tskssia}.
Note that the last term in Eq.\eqref{sossisosisai} vanishes as $r\to\infty$ while the remaining part
reproduces the result quoted in Ref.\cite{Zamolodchikov:1994uw}.
\section{Concluding remark}
In this work we have described the link between the action functional for the classical
ShG equation and the YY-function corresponding to the $k$-vacuum states in the quantum sine-Gordon model.
The natural question arises: Can this relation be generalized for the excited states?
Nowadays the machinery of the Destri-de Vega equation for the excited states is well developed
\cite{Destri:1997yz, Feverati:1998dt}, so that the calculation of the YY-function for
the excited states does not seem to be a particularly complicated problem.
However, it remains unclear how to construct integrable classical equations
associated with the excited states and perhaps more importantly, what all of this really means.
\section*{Acknowledgments}
Numerous discussions with A.B. Zamolodchikov were highly valuable for me.
I also want to acknowledge helpful discussions with V. Bazhanov, N. Nekrasov, S. Shatashvili and F. Smirnov.
\bigskip
\noindent This research was supported in part by DOE grant
$\#$DE-FG02-96 ER 40949.
\section{Introduction}
This article belongs to the area of group theory. One interesting algebraic property of groups is coherence: a group is called coherent if every finitely generated subgroup is finitely presented. Classical examples of coherent groups are free groups and free abelian groups. The standard example of an incoherent group is the direct product $F_2\times F_2$ of two free groups.
We are interested in understanding which graph products, Artin and Coxeter groups are coherent. More precisely, let $\Gamma=(V,E)$ be a finite simplicial non-empty graph with vertex set $V$ and edge set $E$. A vertex labeling on $\Gamma$ is a map $\varphi:V\rightarrow \left\{\text{non-trivial finitely generated abelian groups}\right\}$ and an edge labeling on $\Gamma$ is a map $\psi:E\rightarrow\mathbb{N}-\left\{0,1\right\}$. A graph $\Gamma$ with a vertex and edge labeling is called a vertex-edge-labeled graph.
\begin{NewDefinition}
Let $\Gamma$ be a vertex-edge-labeled graph.
\begin{enumerate}
\item[(i)] If $\psi(E)=\left\{2\right\}$, then $\Gamma$ is called a graph product graph. The graph product $G(\Gamma)$ is the group obtained from the free product of
the $\varphi(v)$ by adding the commutator relations $[g,h] = 1$ for all $g\in \varphi(v)$, $h\in\varphi(w)$ such that $\left\{v,w\right\}\in E$.
\item[(ii)] If $\varphi(V)=\left\{\ensuremath{\mathbb{Z}}\right\}$, then $\Gamma$ is called an Artin graph and the corresponding Artin group $A(\Gamma)$ is given by the presentation
\[
A(\Gamma)=\langle V\mid \underbrace{vwvw\ldots}_{\psi(\left\{v,w\right\})-\text{letters}}=\underbrace{wvwv\ldots}_{\psi(\left\{v,w\right\})-\text{letters}}\text{ if }\left\{v,w\right\}\in E\rangle
\]
If $\Gamma$ is an Artin graph and $\psi(E)=\left\{2\right\}$, then $\Gamma$ is called a right angled Artin graph and the Artin group $A(\Gamma)$ is called right angled Artin group.
\item[(iii)] If $\varphi(V)=\left\{\ensuremath{\mathbb{Z}}_2\right\}$, then $\Gamma$ is called a Coxeter graph and the corresponding Coxeter group $C(\Gamma)$ is given by the presentation
\[
C(\Gamma)=\langle V\mid v^2, (vw)^{\psi(\left\{v, w\right\})}\text{ if }\left\{v,w\right\}\in E\rangle
\]
If $\Gamma$ is a Coxeter graph and $\psi(E)=\left\{2\right\}$, then $\Gamma$ is called a right angled Coxeter graph and the Coxeter group $C(\Gamma)$ is called right angled Coxeter group.
\end{enumerate}
\end{NewDefinition}
The first examples to consider are the extremes. If $\Gamma$ is a discrete vertex-edge-labeled graph, then $G(\Gamma)$ is a free product of finitely generated abelian groups. In particular, if $\Gamma$ is a discrete Artin graph with $n$ vertices, then $A(\Gamma)$ is the free group $F_n$ of rank $n$. On the other hand, if $\Gamma$ is a complete graph product graph, then $G(\Gamma)$ is a finitely generated abelian group. The corresponding Coxeter group of the following Coxeter graph is the symmetric group ${\rm Sym}(5)$ and the corresponding Artin group of the following Artin graph is the braid group $B_5$ on $5$ strands.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.75, transform shape]
\draw[fill=black] (0,0) circle (1pt);
\draw[fill=black] (2,0) circle (1pt);
\draw[fill=black] (2,2) circle (1pt);
\draw[fill=black] (0,2) circle (1pt);
\draw (0,0)--(2,0);
\draw (0,0)--(0,2);
\draw (2,0)--(2,2);
\draw (0,2)--(2,2);
\draw (0,0)--(2,2);
\draw (2,0)--(0,2);
\node at (0,-0.25) {$\ensuremath{\mathbb{Z}}_2$};
\node at (2,-0.25) {$\ensuremath{\mathbb{Z}}_2$};
\node at (0, 2.25) {$\ensuremath{\mathbb{Z}}_2$};
\node at (2,2.25) {$\ensuremath{\mathbb{Z}}_2$};
\node at (1,-0.25) {$3$};
\node at (1, 2.25) {$3$};
\node at (2.25, 1) {$3$};
\node at (-0.25,1) {$2$};
\node at (0.5, 0.8) {$2$};
\node at (1.5,0.8) {$2$};
\draw[fill=black] (4,0) circle (1pt);
\draw[fill=black] (6,0) circle (1pt);
\draw[fill=black] (6,2) circle (1pt);
\draw[fill=black] (4,2) circle (1pt);
\draw (4,0)--(6,0);
\draw (4,0)--(4,2);
\draw (6,0)--(6,2);
\draw (4,2)--(6,2);
\draw (4,0)--(6,2);
\draw (6,0)--(4,2);
\node at (4,-0.25) {$\ensuremath{\mathbb{Z}}$};
\node at (6,-0.25) {$\ensuremath{\mathbb{Z}}$};
\node at (4, 2.25) {$\ensuremath{\mathbb{Z}}$};
\node at (6,2.25) {$\ensuremath{\mathbb{Z}}$};
\node at (5,-0.25) {$3$};
\node at (5, 2.25) {$3$};
\node at (6.25, 1) {$3$};
\node at (3.74,1) {$2$};
\node at (4.5, 0.8) {$2$};
\node at (5.5,0.8) {$2$};
\end{tikzpicture}
\end{center}
\end{figure}
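Reading off the defining presentations from these labeled graphs (with the vertices ordered as $v_1,\dots,v_4$ so that the edges labeled $3$ join consecutive vertices), one obtains the familiar presentations
\begin{eqnarray*}
C(\Gamma)&=&\langle v_1,\ldots, v_4\mid v_i^2=1,\ \ (v_iv_{i+1})^3=1,\ \ (v_iv_j)^2=1\ \ {\rm for}\ \ |i-j|\geq 2\rangle\ \cong\ {\rm Sym}(5)\ ,\\
A(\Gamma)&=&\langle v_1,\ldots, v_4\mid v_iv_{i+1}v_i=v_{i+1}v_iv_{i+1},\ \ v_iv_j=v_jv_i\ \ {\rm for}\ \ |i-j|\geq 2\rangle\ \cong\ B_5\ ,
\end{eqnarray*}
where the first isomorphism sends $v_i$ to the transposition $(i,\, i+1)$ and the second sends $v_i$ to the standard braid generator $\sigma_i$.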
It is natural to ask which graph products, Artin and Coxeter groups are coherent. For right angled Artin groups this has been answered by Droms \cite{Droms}: A right angled Artin group $A(\Gamma)$ is coherent iff $\Gamma$ has no induced cycle of length $>3$.
We show that Droms arguments can be extended to a much larger class of groups.
\begin{NewTheoremA}
Let $\Gamma$ be a graph product graph. If $\Gamma$ has no induced cycle of length $>3$, then $G(\Gamma)$ is coherent.
\end{NewTheoremA}
A right angled Artin group $A(\Gamma)$ where $\Gamma$ has the shape of a cycle of length $>3$ is incoherent by Droms' result. For arbitrary graph products $G(\Gamma)$ where $\Gamma$ has the shape of a cycle of length $>3$ this result does not hold: we prove in Theorem B that a right angled Coxeter group which is defined via a graph with the shape of a cycle of length $>3$ is always coherent. On the other hand, if $\Gamma=(V,E)$ is a graph product graph with the shape of a cycle of length $4$ such that $\#\varphi(v)\geq 3$ for all $v\in V$, then $G(\Gamma)$ is incoherent. This follows from the observation that $G(\Gamma)$ is the direct product of the free products of opposite vertex groups, $G(\Gamma)=(\varphi(v_1)*\varphi(v_3))\times(\varphi(v_2)*\varphi(v_4))$. Since the kernel of the canonical map $\varphi(v_1)*\varphi(v_3)\rightarrow\varphi(v_1)\times\varphi(v_3)$ is a free group of rank $\geq 2$, see \cite[I.1.3 Pr. 4]{Serre}, it follows that $F_2\times F_2$ is a subgroup of $G(\Gamma)$. It is known that $F_2\times F_2$ is incoherent, hence $G(\Gamma)$ is incoherent.
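Schematically, this argument can be summarized by the subgroup chain
$$
F_2\times F_2\ \leq\ \ker\big(\varphi(v_1)*\varphi(v_3)\rightarrow\varphi(v_1)\times\varphi(v_3)\big)\times
\ker\big(\varphi(v_2)*\varphi(v_4)\rightarrow\varphi(v_2)\times\varphi(v_4)\big)\ \leq\ G(\Gamma)\ ,
$$
where each kernel contains a copy of $F_2$ since it is a free group of rank $\geq 2$.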
Let $\Gamma=(V, E)$ be a graph product graph with the shape of a cycle of length $\geq 5$ such that $\infty>\#\varphi(v)\geq 3$ for all $v\in V$. We do not know if $G(\Gamma)$ is coherent or not. We only know that $F_2\times F_2$ is not a subgroup of $G(\Gamma)$, because $G(\Gamma)$ is Gromov hyperbolic \cite{Hyperbolic} and it is known that Gromov hyperbolic groups do not contain a copy of $\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}}$. If $G(\Gamma)$ is incoherent, then we would have the same characterization for coherence of graph products of finite abelian vertex groups with cardinality $\geq 3$ as for right angled Artin groups. But if $G(\Gamma)$ is coherent, then the characterization would be more complicated.
It was proven by Wise and Gordon \cite{Wise2}, \cite{Gordon} that an Artin group $A(\Gamma)$ is coherent iff
$\Gamma$ has no induced cycle of length $>3$,
every complete subgraph of $\Gamma$ with $3$ or $4$ vertices has at most one edge label $>2$
and $\Gamma$ has no induced subgraph of the following shape:
\newpage
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.75, transform shape]
\draw[fill=black] (0,0) circle (1pt);
\draw[fill=black] (2,0) circle (1pt);
\draw[fill=black] (1,2) circle (1pt);
\draw[fill=black] (1,-2) circle (1pt);
\draw (0,0)--(1,2);
\draw (0,0)--(2,0);
\draw (0,0)--(1, -2);
\draw (2,0)--(1,2);
\draw (2,0)--(1,-2);
\node at (0.25,1) {$2$};
\node at (1.75,1) {$2$};
\node at (1,0.2) {$m>2$};
\node at (0.25,-1) {$2$};
\node at (1.75,-1) {$2$};
\node at (-0.25,0) {$\ensuremath{\mathbb{Z}}$};
\node at (2.25,0) {$\ensuremath{\mathbb{Z}}$};
\node at (1,2.25) {$\ensuremath{\mathbb{Z}}$};
\node at (1,-2.25) {$\ensuremath{\mathbb{Z}}$};
\end{tikzpicture}
\end{center}
\end{figure}
Concerning coherence of Coxeter groups we found two results in the literature. A simple criterion for the coherence of Coxeter groups which depends only on the edge labeling and the number of generators was proven by McCammond and Wise
\cite[Theorem 12.2]{WiseCoxeter}: If $\Gamma=(V, E)$ is a Coxeter graph and $\psi(e)\geq \#V$ for all $e\in E$, then $C(\Gamma)$ is coherent. Further, Jankiewicz and Wise proved with probabilistic methods that many infinite Coxeter groups where the Coxeter graph is complete are incoherent \cite[Theorem 1.2]{Jankiewicz}.
We present two results regarding coherence of Coxeter groups.
\begin{NewTheoremB}
Let $\Gamma$ be a Coxeter graph.
\begin{enumerate}
\item[(i)] If $\Gamma$ has no induced cycle of length $>3$ and the Coxeter group of every complete subgraph of $\Gamma$ is slender (i.e. every subgroup is finitely generated), then $C(\Gamma)$ is coherent. In particular, if $\Gamma$ has the shape of a tree, then $C(\Gamma)$ is coherent.
\item[(ii)] If $\Gamma$ has the shape of a cycle of length $>3$, then $C(\Gamma)$ is coherent.
\end{enumerate}
\end{NewTheoremB}
It is obvious that finite Coxeter groups are slender. Concerning slenderness of infinite Coxeter groups we show:
\begin{NewPropositionC}
Let $C(\Gamma)$ be an infinite Coxeter group. Then $C(\Gamma)$ is slender iff $C(\Gamma)$ decomposes as $C(\Gamma)\cong C(\Gamma_1)\times C(\Gamma_2)$, where $\Gamma_1$, $\Gamma_2$ are induced subgraphs of $\Gamma$ and $C(\Gamma_1)$ is a finite subgroup and $C(\Gamma_2)$ is a finite direct product of irreducible Euclidean reflection groups.
\end{NewPropositionC}
Irreducible Euclidean reflection groups were classified in terms of Coxeter diagrams, see \cite{Humphreys}.
It follows from Droms' characterization that the smallest incoherent right angled Artin graph is a cycle of length $4$. Concerning right angled Coxeter groups we show that a smallest incoherent right angled Coxeter graph has $6$ vertices and $9$ edges.
We prove Theorems A and B by induction on the cardinality of the vertex set of $\Gamma$. Our main technique in the proofs of the main theorems is based on the
following result by Karrass and Solitar \cite[Theorem 8]{Karrass}: An amalgam $A*_C B$ where $A, B$ are coherent groups and $C$ is slender is also coherent.
\section{Graphs}
In this section we briefly present the main definitions and properties
concerning simplicial graphs. For more background results see \cite{Diestel}.
A {\it simplicial graph} is a pair $\Gamma=(V,E)$ of sets such that $V\neq\emptyset$ and $E$ is a set of $2$-element subsets of $V$. The elements of $V$ are called {\it vertices} and the elements of $E$ are its {\it edges}. If $V'\subseteq V$ and $E'\subseteq E$, then $\Gamma'=(V', E')$ is called a {\it subgraph} of $\Gamma$. If $\Gamma'$ is a subgraph of $\Gamma$ and $E'$ contains all the edges $\left\{v, w\right\}\in E$ with $v, w\in V'$, then $\Gamma'$ is called an {\it induced subgraph} of $\Gamma$. A subgraph $\Gamma'$ is called {\it proper} if $\Gamma'\neq \Gamma$. A {\it path} of length $n$ is a graph $P_n=(V, E)$ of the form $V=\left\{v_0, \ldots, v_n\right\}$ and $E=\left\{\left\{v_0, v_1\right\}, \left\{v_1, v_2\right\}, \ldots, \left\{v_{n-1}, v_n\right\}\right\}$ where the $v_i$ are all distinct. If $P_n=(V,E)$ is a path of length $n\geq 2$, then the graph $C_{n+1}:=(V, E\cup\left\{\left\{v_n, v_0\right\}\right\})$ is called a {\it cycle} of length $n+1$. A graph $\Gamma=(V,E)$ is called {\it connected} if any two vertices $v, w\in V$ are contained in a subgraph $\Gamma'$ of $\Gamma$ such that $\Gamma'$ is a path. A maximal connected subgraph of $\Gamma$ is called a {\it connected component} of $\Gamma$. A graph $\Gamma$ is called a {\it tree} if $\Gamma$ is a connected graph without induced cycle. A graph is called {\it complete} if there is an edge for every pair of distinct vertices.
Dirac proved in \cite{Dirac} the following result which we will use in the proofs of Theorem A and Theorem B.
\begin{proposition}
\label{Dirac}
Let $\Gamma$ be a connected non-complete finite simplicial graph. If $\Gamma$ has no induced cycle of length $>3$, then there exist proper induced subgraphs $\Gamma_1$, $\Gamma_2$ with the following properties:
\begin{enumerate}
\item[(i)] $\Gamma=\Gamma_1\cup\Gamma_2$,
\item[(ii)] $\Gamma_1\cap\Gamma_2$ is complete.
\end{enumerate}
\end{proposition}
\section{Slender and coherent groups}
In this section we present the main definitions and properties concerning slender and coherent groups. We start with the following definition.
\begin{definition}
\begin{enumerate}
\item[(i)] A group $G$ is said to be slender (or Noetherian) if every subgroup of $G$ is finitely generated.
\item[(ii)] A group $G$ is called coherent if every finitely generated subgroup is finitely presented.
\end{enumerate}
\end{definition}
One can easily verify from the definition of slenderness and coherence that finite groups are slender and coherent. Further, it follows from the classification of finitely generated abelian groups that these groups are slender and coherent. A standard example of a group which is not slender is a free group $F_n$ for $n\geq 2$. This follows from the fact that the commutator subgroup of $F_n$ for $n\geq 2$ is not finitely generated.
Concerning slenderness of graph products we want to remark the following result:
\begin{example}
Let $\Gamma=(V,E)$ be a graph product graph such that $\#\varphi(v)\geq 3$ for all $v\in V$ and $G(\Gamma)$ the corresponding graph product. Then $G(\Gamma)$ is slender iff $\Gamma$ is a complete graph.
\end{example}
\begin{proof}
If $\Gamma$ is a complete graph, then $G(\Gamma)$ is a finitely generated abelian group and hence slender. Let us assume that $\Gamma=(V, E)$ is not complete. Then there exist vertices $v_1, v_2\in V$ such that $\left\{v_1, v_2\right\}\notin E$. Therefore $\Gamma':=(\left\{v_1, v_2\right\}, \emptyset)$ is an induced subgraph and $G(\Gamma')=\varphi(v_1)*\varphi(v_2)$ is a subgroup of $G(\Gamma)$, see \cite{Green}. Since $\#\varphi(v_1)\geq 3$, it follows that the kernel of the natural map $\varphi(v_1)*\varphi(v_2)\rightarrow\varphi(v_1)\times\varphi(v_2)$ is a free group of rank $\geq 2$. The free group $F_2$ is not slender, therefore $G(\Gamma')$ and $G(\Gamma)$ are not slender.
\end{proof}
It is not hard to see that the following result is true.
\begin{lemma}
\label{slender}
Let $1\rightarrow G_1\rightarrow G_2\rightarrow G_3\rightarrow 1$ be a short exact sequence of groups. Then $G_2$ is slender iff $G_1$ and $G_3$ are slender.
In particular, semidirect products of slender groups are slender and finite direct products of slender groups are slender.
\end{lemma}
Concerning slenderness of Coxeter groups we prove:
\begin{NewPropositionC}
Let $C(\Gamma)$ be an infinite Coxeter group. Then $C(\Gamma)$ is slender iff $C(\Gamma)$ decomposes as $C(\Gamma)\cong C(\Gamma_1)\times C(\Gamma_2)$, where $\Gamma_1$, $\Gamma_2$ are induced subgraphs and $C(\Gamma_1)$ is a finite subgroup and $C(\Gamma_2)$ is a finite direct product of irreducible Euclidean reflection groups.
\end{NewPropositionC}
\begin{proof}
If $C(\Gamma)$ is slender, then $F_2$ is not a subgroup of $C(\Gamma)$ and it follows by \cite[17.2.1]{Davis} that $C(\Gamma)$ decomposes as $C(\Gamma)\cong C(\Gamma_1)\times C(\Gamma_2)$, where $C(\Gamma_1)$ is a finite subgroup and $C(\Gamma_2)$ is a finite direct product of irreducible Euclidean reflection groups.
Now, assume that $C(\Gamma)$ has the above decomposition.
Let $G$ be an irreducible Euclidean reflection group; then $G$ decomposes as a semidirect product of a finitely generated abelian group and a finite group, see \cite{Humphreys}. Since finitely generated abelian groups and finite groups are slender, it follows by Lemma \ref{slender} that $G$ is slender. Now we know that $C(\Gamma)$ is a direct product of slender groups. It follows again by Lemma \ref{slender} that $C(\Gamma)$ is slender.
\end{proof}
Since $\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2$ is the only irreducible right angled Euclidean reflection group we immediately obtain the following corollary of Proposition C:
\begin{corollary}
Let $G$ be an infinite right angled Coxeter group. Then $G$ is slender iff there exist $n, k\in\mathbb{N}$ such that $G\cong \ensuremath{\mathbb{Z}}^n_2\times (\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2)^k$.
\end{corollary}
For example, the corresponding right angled Coxeter groups of the following graphs are slender:
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\draw[fill=black] (0,0) circle (1pt);
\draw[fill=black] (0,1) circle (1pt);
\draw[fill=black] (0,2) circle (1pt);
\draw (0,0)--(0,2);
\node at (-0.5,0) {$\Gamma_1$};
\draw[fill=black] (2,0) circle (1pt);
\draw[fill=black] (1.5, 1) circle (1pt);
\draw[fill=black] (2.5, 1) circle (1pt);
\draw[fill=black] (2, 2) circle (1pt);
\draw (2,0)--(1.5,1);
\draw (2,0)--(2.5,1);
\draw (1.5,1)--(2.5,1);
\draw (1.5,1)--(2,2);
\draw (2.5,1)--(2,2);
\node at (1.5,0) {$\Gamma_2$};
\draw[fill=black] (4,0) circle (1pt);
\draw[fill=black] (3.5, 1) circle (1pt);
\draw[fill=black] (4.5, 1) circle (1pt);
\draw[fill=black] (4, 1.5) circle (1pt);
\draw[fill=black] (4, 2.5) circle (1pt);
\draw (4,0)--(3.5,1);
\draw (4,0)--(4.5,1);
\draw (3.5,1)--(4.5,1);
\draw (3.5,1)--(4,2.5);
\draw (4.5,1)--(4,2.5);
\draw (3.5,1)--(4,1.5);
\draw (4.5,1)--(4,1.5);
\draw (4,1.5)--(4,2.5);
\draw[dashed] (4, 0)--(4,1.5);
\node at (3.5,0) {$\Gamma_3$};
\end{tikzpicture}
\end{center}
\end{figure}
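These examples are built from the infinite dihedral group, which gives a minimal illustration of the corollary:
$$
\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2=\langle a, b\mid a^2=b^2=1\rangle\ ,
\ \ \ \ \ \ \ \ 1\rightarrow \langle ab\rangle\cong\ensuremath{\mathbb{Z}}\rightarrow \ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2\rightarrow\ensuremath{\mathbb{Z}}_2\rightarrow 1\ ,
$$
so $\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2$ is slender by Lemma \ref{slender}, and the groups $\ensuremath{\mathbb{Z}}^n_2\times(\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2)^k$ are slender as finite direct products of slender groups.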
\begin{lemma}
\label{slendercoherent}
Let $1\rightarrow G_1\xrightarrow{\iota} G_2\xrightarrow{\pi} G_3\rightarrow 1$ be a short exact sequence of groups. If $G_1$ and $G_3$ are slender and coherent groups, then $G_2$ is slender and coherent.
In particular, semidirect products of slender and coherent groups are slender and coherent and finite direct products of slender and coherent groups are slender and coherent.
\end{lemma}
\begin{proof}
The slenderness of $G_2$ follows by Lemma \ref{slender}. Let $U$ be a finitely generated subgroup of $G_2$. We have ${\rm ker}(\pi_{\mid U})\subseteq\iota(G_1)\cong G_1$. The group $G_1$ is slender and coherent, thus ${\rm ker}(\pi_{\mid U})$ is finitely presented. The quotient $U/{\rm ker}(\pi_{\mid U})$ is isomorphic to a subgroup of $G_3$. We know that $G_3$ is slender and coherent, hence $U/{\rm ker}(\pi_{\mid U})$ is finitely presented. So far we have shown that ${\rm ker}(\pi_{\mid U})$ and $U/{\rm ker}(\pi_{\mid U})$ are finitely presented groups. Now it is not hard to see that $U$ is finitely presented.
\end{proof}
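The last step uses the standard fact that an extension of a finitely presented group by a finitely presented group is finitely presented. Concretely, if ${\rm ker}(\pi_{\mid U})=\langle x_1,\ldots, x_m\mid r_1,\ldots, r_p\rangle$ and $U/{\rm ker}(\pi_{\mid U})=\langle y_1,\ldots, y_n\mid s_1,\ldots, s_q\rangle$, then choosing lifts ${\tilde y}_k\in U$ of the $y_k$ one obtains the finite presentation
$$
U=\big\langle\, x_1,\ldots, x_m,\ {\tilde y}_1,\ldots,{\tilde y}_n\ \big|\
r_i=1,\ \ s_j({\tilde y})=w_j(x),\ \ {\tilde y}_k^{\pm 1}\, x_l\, {\tilde y}_k^{\mp 1}=u^{\pm}_{kl}(x)\,\big\rangle\ ,
$$
where $w_j(x)$ and $u^{\pm}_{kl}(x)$ are words in the $x$'s expressing the lifted relators $s_j({\tilde y})\in{\rm ker}(\pi_{\mid U})$ and the conjugates ${\tilde y}_k^{\pm1}x_l{\tilde y}_k^{\mp1}$, respectively.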
The following consequence of Lemma \ref{slendercoherent} will be needed in the proof of Theorem B.
\begin{corollary}
\label{Coxeterslendercoherent}
Let $C(\Gamma)$ be a slender Coxeter group. Then $C(\Gamma)$ is coherent.
\end{corollary}
\begin{proof}
If $C(\Gamma)$ is finite, then it is obvious that $C(\Gamma)$ is coherent. Otherwise it follows by Proposition C that $C(\Gamma)$ decomposes as a finite direct product of a finite group and irreducible Euclidean reflection groups. Since an irreducible Euclidean reflection group is a semidirect product of a finitely generated abelian group and a finite group, it follows by Lemma \ref{slendercoherent} that this group is slender and coherent.
Thus, $C(\Gamma)$ is a finite direct product of slender and coherent groups and it follows by Lemma \ref{slendercoherent} that $C(\Gamma)$ is also coherent.
\end{proof}
The crucial argument in the proofs of the main theorems is the following result which was proven by Karrass and Solitar.
\begin{proposition}(\cite[Theorem 8]{Karrass})
\label{Karrass}
Let $A*_C B$ be an amalgam. If $A$ and $B$ are coherent groups and $C$ is slender, then $A*_C B$ is coherent.
In particular, coherence is preserved under taking free products.
\end{proposition}
A direct consequence of the result of Karrass and Solitar is the next corollary:
\begin{corollary}
\label{amalgam}
\begin{enumerate}
\item[(i)] Let $\Gamma$ be a graph product graph and $\Gamma_1, \Gamma_2$ induced subgraphs such that $\Gamma=\Gamma_1\cup\Gamma_2$. If $G(\Gamma_1)$ and $G(\Gamma_2)$ are coherent and $G(\Gamma_1\cap\Gamma_2)$ is slender, then $G(\Gamma)$ is coherent.
\item[(ii)] Let $\Gamma$ be a Coxeter graph and $\Gamma_1, \Gamma_2$ induced subgraphs such that $\Gamma=\Gamma_1\cup\Gamma_2$. If $C(\Gamma_1)$ and $C(\Gamma_2)$ are coherent and $C(\Gamma_1\cap\Gamma_2)$ is slender, then $C(\Gamma)$ is coherent.
\end{enumerate}
\end{corollary}
\begin{proof}
It follows from the presentations of the graph product $G(\Gamma)$ and the Coxeter group $C(\Gamma)$ that
\[
G(\Gamma)=G(\Gamma_1)*_{G(\Gamma_1\cap\Gamma_2)} G(\Gamma_2)\text{ and } C(\Gamma)=C(\Gamma_1)*_{C(\Gamma_1\cap\Gamma_2)} C(\Gamma_2).
\]
By Proposition \ref{Karrass}, $G(\Gamma)$ and $C(\Gamma)$ are coherent.
\end{proof}
\section{Proof of Theorem A}
We are now ready to prove Theorem A.
\begin{proof}
We prove this result by induction on $\#V=n$.
Suppose that $n=1$. Then $G(\Gamma)$ is a finitely generated abelian group and therefore coherent.
Now we assume that $n>1$. If $\Gamma$ is not connected, then there exist induced proper disjoint subgraphs $\Gamma_1, \Gamma_2$ with $\Gamma=\Gamma_1\cup \Gamma_2$. It is evident from the presentation of the graph product that $G(\Gamma)=G(\Gamma_1)*G(\Gamma_2)$. By the induction assumption, $G(\Gamma_1)$ and $G(\Gamma_2)$ are coherent. By Corollary \ref{amalgam}, coherence is preserved under taking free products, hence $G(\Gamma)$ is coherent. If $\Gamma$ is connected, then we have the following cases:
\begin{enumerate}
\item If $\Gamma$ is complete, then $G(\Gamma)$ is a finitely generated abelian group and thus coherent.
\item If $\Gamma$ is not complete, then by Proposition \ref{Dirac} there exist proper induced subgraphs $\Gamma_1, \Gamma_2$ with the following properties: $\Gamma=\Gamma_1\cup\Gamma_2$ and $\Gamma_1\cap\Gamma_2$ is complete. It follows that
$G(\Gamma)=G(\Gamma_1)*_{G(\Gamma_1\cap\Gamma_2)} G(\Gamma_2)$. The groups $G(\Gamma_1), G(\Gamma_2)$ are coherent by the induction assumption. The group $G(\Gamma_1\cap\Gamma_2)$ is a finitely generated abelian group and hence slender. By Corollary \ref{amalgam}, $G(\Gamma)$ is coherent.
\end{enumerate}
\end{proof}
\section{Proof of Theorem B}
The proof of Theorem B is very similar to the proof of Theorem A.
\begin{proof} $ $\\
\begin{enumerate}
\item[to (i):] The proof is again by induction on $\#V=n$.
Suppose that $n=1$. Then $C(\Gamma)\cong \ensuremath{\mathbb{Z}}_2$ and is therefore coherent.
Now we assume that $n>1$. If $\Gamma$ is not connected, then there exist induced proper disjoint subgraphs $\Gamma_1, \Gamma_2$ with $\Gamma=\Gamma_1\cup \Gamma_2$. It is evident from the presentation of the Coxeter group that $C(\Gamma)=C(\Gamma_1)*C(\Gamma_2)$. By the induction assumption, $C(\Gamma_1)$ and $C(\Gamma_2)$ are coherent. By Corollary \ref{amalgam}, coherence is preserved under taking free products, hence $C(\Gamma)$ is coherent. If $\Gamma$ is connected, then we have the following cases:
\begin{enumerate}
\item If $\Gamma$ is complete, then $C(\Gamma)$ is slender and by Corollary \ref{Coxeterslendercoherent} we know that slender Coxeter groups are coherent.
\item If $\Gamma$ is not complete, then by Proposition \ref{Dirac} there exist proper induced subgraphs $\Gamma_1, \Gamma_2$ with the following properties: $\Gamma=\Gamma_1\cup\Gamma_2$ and $\Gamma_1\cap\Gamma_2$ is complete. It follows that
$C(\Gamma)=C(\Gamma_1)*_{C(\Gamma_1\cap\Gamma_2)} C(\Gamma_2)$. The groups $C(\Gamma_1), C(\Gamma_2)$ are coherent by the induction assumption and the group $C(\Gamma_1\cap\Gamma_2)$ is slender. By Corollary \ref{amalgam}, $C(\Gamma)$ is coherent.
\end{enumerate}
\item[to (ii):] Let $\Gamma$ be a Coxeter graph with the shape of a cycle of length $>3$. Then there exist proper subgraphs $\Gamma_1$ and $\Gamma_2$ with the following properties: $\Gamma_1, \Gamma_2$ are paths, $\Gamma=\Gamma_1\cup\Gamma_2$ and $\Gamma_1\cap\Gamma_2=(\left\{v_i, v_j\right\}, \emptyset)$ where $v_i$ and $v_j$ are distinct vertices. The Coxeter group $C(\Gamma)$ has the following decomposition:
\[
C(\Gamma)=C(\Gamma_1)*_{C(\Gamma_1\cap\Gamma_2)} C(\Gamma_2).
\]
Since $\Gamma_1, \Gamma_2$ are trees, it follows by Theorem B(i) that $C(\Gamma_1), C(\Gamma_2)$ are coherent groups. Further, $C(\Gamma_1\cap\Gamma_2)$ is isomorphic to $\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2$ and therefore slender. By Corollary \ref{amalgam} we obtain that $C(\Gamma)$ is coherent.
\end{enumerate}
\end{proof}
\section{Small right angled Coxeter groups}
We turn now to the proof of the following result.
\begin{proposition}
Let $\Gamma=(V, E)$ be a right angled Coxeter graph.
\begin{enumerate}
\item[(i)] If $\# V\leq 5$ or $\#V=6$ and $\#E\leq 8$, then $C(\Gamma)$ is coherent.
\item[(ii)] If $\Gamma$ has the shape of the following graph $K_{3,3}$,
then the corresponding right angled
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.5, transform shape]
\draw[fill=black] (0,0) circle (1pt);
\draw[fill=black] (0,1) circle (1pt);
\draw[fill=black] (0,2) circle (1pt);
\draw[fill=black] (3,0) circle (1pt);
\draw[fill=black] (3,1) circle (1pt);
\draw[fill=black] (3,2) circle (1pt);
\draw (0,0)--(3,0);
\draw (0,0)--(3,1);
\draw (0,0)--(3, 2);
\draw (0,1)--(3,0);
\draw (0,1)--(3,1);
\draw (0,1)--(3,2);
\draw (0,2)--(3,0);
\draw (0,2)--(3,1);
\draw (0,2)--(3,2);
\end{tikzpicture}
\end{center}
\caption*{$K_{3,3}$}
\end{figure}
Coxeter group $C(\Gamma)$ is incoherent.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item[to (i):] If $\#V\leq 4$, then $\Gamma$ has no induced cycle of length $>3$ or $\Gamma$ is a cycle of length $4$. It follows by Theorem B that $C(\Gamma)$ is coherent.
If $\#V=5$, then (i) $\Gamma$ is not connected or (ii) $\Gamma$ has no cycle of length $> 3$ or (iii) $\Gamma$ is a cycle of length $5$ or (iv) $\Gamma$ has the shape of one of the following graphs:
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.5, transform shape]
\draw[fill=black] (1,4) circle (1pt);
\draw[fill=black] (5,4) circle (1pt);
\draw[fill=black] (11,4) circle (1pt);
\draw[fill=black] (14.5,4) circle (1pt);
\draw (0,0)--(2,0);
\draw (4,0)--(6,0);
\draw (8,0)--(10, 0);
\draw (0,2)--(2,2);
\draw (4,2)--(6,2);
\draw (8,2)--(10, 2);
\draw (13,2)--(16, 2);
\draw (0,0)--(0,2);
\draw (2,0)--(2,2);
\draw (4,0)--(4, 2);
\draw (6,0)--(6,2);
\draw (8,0)--(8,2);
\draw (10,0)--(10, 2);
\draw (0,2)--(1,4);
\draw (4,2)--(5,4);
\draw (6,2)--(5, 4);
\draw (8,2)--(11,4);
\draw (10,2)--(11,4);
\draw (10,0)--(11, 4);
\draw (14.5,0)--(13,2);
\draw (14.5,0)--(16,2);
\draw (13,2)--(14.5, 4);
\draw (16,2)--(14.5,4);
\draw[fill=black] (0,0) circle (1pt);
\draw[fill=black] (2,0) circle (1pt);
\draw[fill=black] (4,0) circle (1pt);
\draw[fill=black] (6,0) circle (1pt);
\draw[fill=black] (8,0) circle (1pt);
\draw[fill=black] (10,0) circle (1pt);
\draw[fill=black] (14.5,0) circle (1pt);
\draw[fill=black] (0,2) circle (1pt);
\draw[fill=black] (2,2) circle (1pt);
\draw[fill=black] (4,2) circle (1pt);
\draw[fill=black] (6,2) circle (1pt);
\draw[fill=black] (8,2) circle (1pt);
\draw[fill=black] (10,2) circle (1pt);
\draw[fill=black] (13,2) circle (1pt);
\draw[fill=black] (14.5,2) circle (1pt);
\draw[fill=black] (16,2) circle (1pt);
\end{tikzpicture}
\end{center}
\end{figure}
In cases (i), (ii) and (iii), it follows from Corollary \ref{amalgam} and Theorem B that $C(\Gamma)$ is coherent. If $\Gamma$ has the shape of one of the above graphs, then it is easy to verify that there exist induced subgraphs $\Gamma_1$ and $\Gamma_2$ such that $C(\Gamma)=C(\Gamma_1)*_{C(\Gamma_1\cap\Gamma_2)} C(\Gamma_2)$, where $C(\Gamma_1), C(\Gamma_2)$ are coherent and $C(\Gamma_1\cap\Gamma_2)$ is slender. By Corollary \ref{amalgam} we obtain that $C(\Gamma)$ is coherent.
If $\#V=6$ and $\#E\leq 8$, it follows by arguments similar to those in the case $\#V=5$ that $C(\Gamma)$ is coherent. A table of connected graphs with six vertices is given in \cite{Cvetkovic}.
\item[to (ii):] The corresponding right angled Coxeter group $C(K_{3,3})$ is isomorphic to
\[
(\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2)\times(\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2).
\]
Since the kernel of the natural map $\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2\rightarrow \ensuremath{\mathbb{Z}}_2\times\ensuremath{\mathbb{Z}}_2\times\ensuremath{\mathbb{Z}}_2$ is a free group of rank $\geq 2$, it follows that $F_2\times F_2$ is a subgroup of $C(K_{3,3})$. The product $F_2\times F_2$ is incoherent and hence $C(K_{3,3})$ is incoherent.
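In fact, a standard Euler characteristic computation pins down the rank: the kernel $K$ is free of finite rank, has index $8$, and the Euler characteristic is multiplicative on subgroups of finite index, so
\[
\chi(\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2*\ensuremath{\mathbb{Z}}_2)=3\cdot\tfrac{1}{2}-2=-\tfrac{1}{2},\qquad
\chi(K)=8\cdot\bigl(-\tfrac{1}{2}\bigr)=-4=1-{\rm rank}(K),
\]
hence $K$ is free of rank $5$.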
\end{enumerate}
\end{proof}
\section{INTRODUCTION}
Service robots work in human-populated environments. Robustly tracking surrounding people allows robots to serve and navigate safely and politely. In this work, we address the egocentric perceptual task of multi-person tracking using a tour-guide robot (TGR). The TGR is a robot that can show people around a site such as museums, castles, and aquariums, and can introduce the surroundings to people \cite{al2016tour}. In this scenario, targets frequently occlude each other and step in and out of the camera view. Moreover, humans in a group often dress in the same clothes, making it difficult to distinguish them from one another. Despite the above challenges, individuals in a guided group must be tracked and assigned consistent IDs during the whole tour-guide service period.
However, there is a vast gap between the existing MOT or MPT datasets and the tour-guide scenario, making existing trackers perform unsatisfactorily. Some examples of such datasets are the well-known MOTChallenge \cite{milan2016mot16} and KITTI \cite{geiger2012we}, targeting applications in video surveillance and autonomous driving, respectively. Furthermore, in these datasets, a person is assigned a new ID if he/she leaves the field of view or is occluded for a prolonged period and then reappears. This is very different from our scenario, where consistent IDs are required during the whole tracking period. More recently, an egocentric dataset, JRDB, was presented in \cite{martin2021jrdb}, which is more relevant to our work. JRDB is a multi-modal dataset captured using a moving robot on a university campus, both indoors and outdoors. However, this dataset is still inapplicable in our scenario. On the one hand, JRDB was captured in daily life, where few interactions between students and the robot exist. As a result, cases in which humans disappear and then reappear are rare. On the other hand, people in JRDB dress casually, and almost no two people dress alike.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{intro.pdf}
\caption{(a) The tour-guide robot that we use in this paper. (b)-(f) Examples of our dataset captured in 5 rounds. In the first round, people wear their own clothes. In the following 4 rounds, people wear the same clothes.}
\label{intro}
\end{figure}
To solve the above problems, we present the TGRDB dataset, a new large-scale dataset for TGR. To capture our dataset, we use a $180^{\circ}$ fisheye RGB camera mounted on a moving (sometimes standing) robot, shown in Fig. \ref{intro}. In an indoor tour-guide scenario, the robot shows 5 or 6 individuals around. At first, participants wear their own clothes and complete the first capturing round. Then they are required to change and put on the same clothes, as shown in Fig. \ref{intro}. There are 5 capturing rounds in total for each group, including one casual-cloth round and 4 same-cloth rounds, and there are 18 groups in total. As a result, 90 video sequences are captured, totaling 5.6 hours. We annotate each video frame with whole-body and head-shoulder bounding boxes and unique IDs, resulting in over 3.2 million bounding boxes and 450 trajectories. We hope this dataset will drive the progress of research in service robotics, long-term multi-person tracking, and fine-grained \cite{yin2020fine} or clothes-inconsistency \cite{wan2020person} person re-identification. Associated with this dataset, a new metric named TGRHOTA is proposed. It is a simplified but more practical version of HOTA \cite{luiten2021hota}. When evaluating the association score, TGRHOTA only treats a prediction--ground-truth match as a true positive of interest if the same pairing holds at all previous frames. We describe it in Sec. \ref{metric}.
As part of our work, we present TGRMPT, a novel and practical multi-person tracker for the tour-guide scenario. Since a person's head-shoulder region contains discriminative information such as hairstyle \cite{xu2020black} and involves fewer distractors than the whole-body bounding box when he/she is partially occluded, we integrate head-shoulder information into the tracker. Specifically, we adopt the state-of-the-art DeepSORT \cite{wojke2017simple} framework and tailor it to our scenario. Both whole-body and head-shoulder detectors are trained using our dataset, as well as deep appearance embedding feature extractors. To integrate these two types of information, we simply concatenate the two embedding features extracted from the corresponding bounding boxes, resulting in stronger appearance descriptors. Extensive experiments verify the effectiveness of our proposal.
To summarize, our contributions are as follows:
\begin{itemize}
\item We release the TGRDB dataset, the first large-scale dataset targeting tour-guide robot applications. This dataset not only benefits the domain of service robotics but can also drive the progress of domains related to multi-person tracking and person re-identification.
\item We propose a more practical metric, TGRHOTA, to evaluate trackers in the tour-guide scenario. Different from existing metrics, TGRHOTA punishes trackers when a new ID assignment occurs if the target has already been assigned an ID before.
\item We propose a novel head-shoulder aided multi-person tracker, named TGRMPT, that leverages the best of the information contained in the whole body and the head-shoulder region. Experiments show that our tracker achieves state-of-the-art results.
\end{itemize}
\section{Related Work}
\subsection{Tracking Methods}
Most multi-object tracking algorithms consist of two critical components, object detection and data association, responsible for estimating the bounding boxes and obtaining the identities, respectively.
\textbf{Detection in tracking}: Benefiting from the development of deep learning, the detection algorithm has been significantly improved. In addition, the quality of detection directly affects the tracking results. Therefore, many researchers \cite{yu2016poi,xu2019spatial,zhou2020tracking,sun2020transtrack,xu2021transcenter} are committed to improving the detection ability.
POI \cite{yu2016poi} achieves state-of-the-art tracking performance due to its high-performance detector and deep learning-based appearance features.
STRN \cite{xu2019spatial} presents a similarity learning framework between tracks and objects, which encodes various spatial-temporal relations. The tracking-by-detection pipeline achieves leading performance, but its model complexity and computational cost remain unsatisfactory.
CenterTrack \cite{zhou2020tracking} proposes to estimate detection boxes and offsets using the data of two adjacent frames, which improves the ability to recover missing or occluded objects.
\cite{sun2020transtrack} leverages the transformer architecture, an attention-based query-key mechanism, to integrate the detection information of the previous frame into the detection process of the current frame.
\cite{xu2021transcenter} leverages the information of two adjacent frames to improve detection under the transformer framework and proposes dense pixel-level multi-scale queries that are mutually correlated within the transformer attention and produce abundant but less noisy tracks.
In addition, many works directly use the off-shelf detection methods, e.g., two-stage \cite{ren2015faster} or one-stage object detectors \cite{lin2017focal}, YOLO series detectors \cite{redmon2018yolov3}, anchor-free detectors \cite{law2018cornernet}.
\textbf{Data association}: As the core component of multi-object tracking, data association first computes the similarity between tracklets and detection boxes, then matches them according to the similarity. Many methods focus on data association to improve tracking performance. SORT~\cite{bewley2016simple} is a simple but effective tracking framework that employs Kalman filtering in image space and frame-by-frame data association using the Hungarian method, with an association metric that measures bounding box overlap. To address occlusions, \cite{wojke2017simple} proposes adopting an independent ReID model to extract appearance features from the detection boxes to enhance the association metric of SORT.
To save computational time, \cite{zhang2021fairmot,wang2020towards,liang2020rethinking,lu2020retinatrack} integrate the detection and embedding models into a single network.
To address the missed true objects and fragmented trajectories caused by simply discarding detections with low scores, ByteTrack \cite{zhang2021bytetrack} proposes to track by associating all detection boxes. To recover true objects, the association method in \cite{zhang2021bytetrack} utilizes similarities to filter out background detections with low scores.
Many researchers split the whole tracking task into isolated sub-tasks, such as object detection, feature extraction, and data association, which may lead to local optima. To address this issue, the research in \cite{peng2020chained}, as well as the subsequent works \cite{zhou2020tracking,tokmakov2021learning,pang2021quasi}, proposes to use an end-to-end model to unify the three isolated subtasks.
\subsection{Tracking Datasets}
Our TGRDB is a multi-person tracking dataset and benchmark containing fine-grained and clothes-inconsistent targets in the tour-guide scenario. \cite{milan2016mot16,dendorfer2020mot20,martin2021jrdb,yang2019person,yin2020fine} are the datasets most relevant to TGRDB. MOT \cite{milan2016mot16,dendorfer2020mot20} is a well-known multi-object tracking benchmark, and many methods are based on it. JRDB \cite{martin2021jrdb} is a novel multi-modal dataset collected from the mobile social robot JackRabbot. \cite{yang2019person} is an image dataset for cloth-changing person ReID. \cite{yin2020fine} released a fine-grained ReID dataset containing targets with the same clothes. In addition, the autonomous driving datasets \cite{geiger2012we,yu2018bdd100k,sun2020scalability}, which annotate a large number of pedestrian bounding boxes, are also related to our TGRDB. To the best of our knowledge, there is no tracking dataset containing fine-grained and clothes-inconsistent targets on the same scale as our TGRDB in the tour-guide scenario.
\subsection{Metrics}
Within the last few years, new MOT metrics have been proposed enormously.
To remedy the lack of generally applicable metrics in MOT, \cite{bernardin2008evaluating} introduces two novel metrics, the multi-object tracking precision (MOTP) and the multi-object tracking accuracy (MOTA), that intuitively express a tracker’s overall strengths.
\cite{ristani2016performance} proposes a new precision-recall measure of performance, IDF1, that treats errors of all types uniformly and emphasizes correct identification over sources of errors.
MOTA and IDF1 overemphasize the importance of detection and association separately. To address this, \cite{luiten2021hota} presents a novel MOT evaluation metric, higher-order tracking accuracy (HOTA), which is a unified metric that explicitly balances the effect of detection, association, and localization. Nonetheless, none of the above metrics is applicable in our scenario. Hence we propose a new metric, TGRHOTA, for fair comparison of trackers in TGR.
\section{Dataset and Metric}
\subsection{Dataset}
Our TGRDB dataset was collected with a $180^\circ$ fisheye RGB camera on board a tour-guide robot, shown in Fig. \ref{intro}. Equipped with a $360^\circ$ 2D LiDAR, our TGR can autonomously navigate and show people around. A pan-tilt unit with two degrees of freedom is mounted at the neck, so the robot can rotate its head to look at what it is interested in.
\begin{figure}[th]
\centering
\includegraphics[width=0.45\textwidth]{route.pdf}
\caption{The tour-guide route used to capture our TGRDB dataset. There are 4 exhibit points where the robot stays and introduces exhibits to participants. The pictures are captured at these points, respectively.}
\label{route}
\end{figure}
We collect our dataset in an indoor tour-guide scenario. From the start point, the robot navigates to the first preset exhibit point, stays for a while to introduce the surrounding exhibits to participants, then moves to the next exhibit point, as shown in Fig. \ref{route}. These steps repeat until it returns to the start point. There are four exhibit points in total, and the robot stays for around 22 seconds at each point. During the introduction period, the head of the robot alternately looks forward or rotates randomly. Participants are required to walk freely and behave naturally. They need to follow the robot when it moves from one exhibit point to another, during which targets are out of the camera's field of view.
We divide all the participants into 18 groups, each containing 5 or 6 individuals. Each group is required to finish 5 tour-guide rounds, including 1 casual-cloth round and 4 same-cloth rounds, as shown in Fig. \ref{intro}.
Each video frame is annotated with both whole-body and head-shoulder bounding boxes, as well as unique IDs. For each target, his/her ID remains consistent no matter which clothes he/she wears. As a result, a total of 90 video sequences are captured at 25 fps. The average duration of each video is 3.76 minutes. We divide the dataset into train and test sub-datasets. Table \ref{dataset} shows more details of our dataset.
\begin{table}[ht]
\centering
\caption{Statistical comparisons between TGRDB and existing datasets. K = Thousand, M = Million, min = minutes.}
\label{dataset}
\setlength{\tabcolsep}{0.3mm}{
\begin{tabular}{l|c|c|c|c|c|c}
\hline
& No. of & No. of & No. of & No. of & & \\
& sequences & frames & boxes & IDs & Duration & Cloth \\ \hline
MOT17\cite{milan2016mot16} & 14 & 34K & 290K & - & - & Casual only \\
MOT20\cite{dendorfer2020mot20} & 8 & 13K & 1.6M & - & - & Casual only \\ \hline
KITTI\cite{geiger2012we} & 22 & 15K & 6K & - & - & Casual only \\
BDD100K\cite{yu2018bdd100k} & 1.6K & 318K & 440K & - & - & Casual only \\
Waymo\cite{sun2020scalability} & 1.15K & 600K & 2.1M & - & - & Casual only \\ \hline
JRDB\cite{martin2021jrdb} & 54 & 28K & 2.4M & - & 64min & Casual only \\
Ours(Train) & 50 & 281K & 1.8M & 51 & 188min & Casual \& Same \\
Ours(Test) & 40 & 232K & 1.4M & 40 & 150min & Casual \& Same \\
\textbf{Ours(Total)} & \textbf{90} & \textbf{513K} & \textbf{3.2M} & \textbf{91} & \textbf{338min} & \textbf{Casual \& Same} \\
\hline
\end{tabular}}
\end{table}
\subsection{Metric} \label{metric}
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{metric.pdf}
\caption{Unlike existing metrics, in TGRHOTA, one predicted trajectory can be matched to only one ground-truth trajectory, and vice versa. In other words, existing metrics select TPs of interest from matches in all dashed areas, while our TGRHOTA only selects from the green area.}
\label{metric_fig}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{hota_vs_tgrhota.pdf}
\caption{An example showing how HOTA fails to correctly rank tracking performance in TGR because it takes all TPs into account. Thick line: ground-truth trajectory. Thin lines: prediction trajectories. All detections are TPs.}
\label{hota_vs_tgrhota}
\end{figure}
An appropriate metric is essential when comparing different trackers. Existing metrics, such as HOTA \cite{luiten2021hota}, MOTA \cite{bernardin2008evaluating} and IDF1 \cite{ristani2016performance}, treat all matched prediction (pr) and ground-truth (gt) pairs as true positives (TPs), which may be reasonable in video surveillance or autonomous driving. However, they are not applicable in TGR because consistent IDs are required during the whole tour-guide period. Recall that HOTA \cite{luiten2021hota} counts TPAs (True Positive Associations), FNAs (False Negative Associations), and FPAs (False Positive Associations) for each TP and defines the association score as follows:
\begin{equation}
AssA=\frac{1}{|TP|}\sum_{c\in TP}{\frac{|TPA(c)|}{|TPA(c)|+|FNA(c)|+|FPA(c)|}},
\end{equation}
where all matched pr and gt pairs are considered as TPs of interest. In other words, as shown in Fig. \ref{metric_fig}, TPs of interest are selected from matches within all dashed areas. However, in TGR, we only consider TPs in the green area, i.e., matches that are consistent with those at all previous frames. Formally, we define the set of TPs in TGR as follows:
\begin{equation}
TP'=\{tp_t\in TP|\forall t'<t, tp_{t'}\equiv tp_t \text{ or } tp_{t'}\not\equiv tp_t\}
\end{equation}
where $t$ is the time index, $\equiv$ denotes that two TPs have the same prID and the same gtID, and $\not\equiv$ denotes that they have both a different prID and a different gtID. Thus, the association score in TGR can be written as:
\begin{equation}
AssA'=\frac{1}{|TP'|}\sum_{c\in TP'}{\frac{|TPA(c)|}{|TPA(c)|+|FNA(c)|+|FPA(c)|}}.
\end{equation}
The detection score, $DetA$, and the calculation of the final HOTA score, is the same as in \cite{luiten2021hota}, except that we use $AssA'$ instead of $AssA$.
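To make the $TP'$ selection rule concrete, the following minimal Python sketch (our own illustration, not the official evaluation code) filters TPs given as (frame, prID, gtID) triples, assumed sorted by frame index:

```python
def select_tp_prime(matches):
    """Keep a TP at frame t only if every TP at an earlier frame either
    pairs the same prID with the same gtID, or differs in both IDs."""
    kept = []
    for t, pr, gt in matches:
        consistent = True
        for t2, pr2, gt2 in matches:
            if t2 >= t:
                break  # matches are sorted by frame; only t' < t matters
            same = (pr2 == pr and gt2 == gt)  # tp' and tp identical
            diff = (pr2 != pr and gt2 != gt)  # tp' and tp fully disjoint
            if not (same or diff):            # IDs agree in exactly one slot
                consistent = False
                break
        if consistent:
            kept.append((t, pr, gt))
    return kept

# ID-switch example: prID 2 steals gtID 1 at frame 3 and is excluded.
tps = [(1, 1, 1), (2, 1, 1), (3, 2, 1)]
print(select_tp_prime(tps))  # -> [(1, 1, 1), (2, 1, 1)]
```

This mirrors the behavior illustrated in Fig. \ref{metric_fig}: once a predicted trajectory has been matched to one ground-truth trajectory, later matches that reuse only one of the two IDs no longer count.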
Fig. \ref{hota_vs_tgrhota} shows an example where HOTA gives the two trackers the same score, which is counterintuitive in our TGR scenario, where tracker B performs much better than tracker A. By treating all TPs equally, even those that have been assigned another ID at a previous frame, HOTA is not applicable in our scenario. On the contrary, our TGRHOTA gives much more reasonable scores.
\section{TGRMPT: Head-Shoulder Aided Multi-Person Tracking}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{pipeline.pdf}
\caption{The pipeline of our proposed TGRMPT framework. Integrating both whole-body (\textit{wb}) and head-shoulder (\textit{hs}) information, our method produces strong and discriminative appearance feature descriptors. Followed by the Hungarian algorithm, a robust tracking result is obtained.}
\label{pipeline}
\end{figure*}
Fig. \ref{pipeline} depicts our proposed TGRMPT tracker. By sequentially applying whole-body (\textit{wb}) and head-shoulder (\textit{hs}) detectors and \textit{wb}/\textit{hs} feature extractors, deep appearance signatures describing the global whole body and the local head-shoulder region are generated. Concatenating these two types of features results in a final strong descriptor that contains both global and local information. The subsequent Hungarian algorithm produces the final robust tracking result.
\subsection{Detector}
Detection is the core component of existing MOT or MPT systems. In consideration of system speed, we deploy YOLOv5s \cite{glenn_jocher_2021_5563715} as our detector. We fine-tune this network on our dataset using \textit{wb} and \textit{hs} annotations to form the \textit{wb} and \textit{hs} detectors, respectively. The corresponding outputs are denoted as $\mathcal{D}_t^{wb}=\{d_1^{wb},...,d_N^{wb}\}$ and $\mathcal{D}_t^{hs}=\{d_1^{hs},...,d_M^{hs}\}$. Defining the IoU between $d_i^{wb}$ and $d_j^{hs}$ as $IoU=\frac{|d_i^{wb}\cap d_j^{hs}|}{|d_j^{hs}|}$, we use the Hungarian algorithm to match \textit{wb} and \textit{hs} detections, resulting in matched pairs denoted as $\mathcal{D}_t^{match}=\{(d_{i1}^{wb},d_{j1}^{hs}),...,(d_{iK}^{wb},d_{jK}^{hs})\}$. We denote those \textit{wb} detections that are not matched to any \textit{hs} detections as $\mathcal{D}_t^{wb^-}=\{d_1^{wb^-},...,d_L^{wb^-}\}$, and we discard those \textit{hs} detections that are not matched to any \textit{wb} detections.
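The \textit{wb}--\textit{hs} matching step can be sketched as follows. This is an illustrative sketch rather than our exact implementation: the $0.5$ overlap threshold is an assumption of ours, and SciPy's \texttt{linear\_sum\_assignment} stands in for the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def box_overlap(wb, hs):
    # Boxes as (x1, y1, x2, y2); ratio of intersection area to the
    # head-shoulder area, following the definition in the text.
    ix1, iy1 = max(wb[0], hs[0]), max(wb[1], hs[1])
    ix2, iy2 = min(wb[2], hs[2]), min(wb[3], hs[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    hs_area = (hs[2] - hs[0]) * (hs[3] - hs[1])
    return inter / hs_area if hs_area > 0 else 0.0

def match_wb_hs(wb_boxes, hs_boxes, min_overlap=0.5):
    # Cost matrix 1 - overlap, minimized by the Hungarian algorithm.
    cost = np.array([[1.0 - box_overlap(wb, hs) for hs in hs_boxes]
                     for wb in wb_boxes]).reshape(len(wb_boxes), len(hs_boxes))
    rows, cols = linear_sum_assignment(cost)
    pairs = [(i, j) for i, j in zip(rows, cols)
             if 1.0 - cost[i, j] >= min_overlap]
    matched_wb = {i for i, _ in pairs}
    unmatched_wb = [i for i in range(len(wb_boxes)) if i not in matched_wb]
    return pairs, unmatched_wb  # unmatched hs detections are discarded
```

The returned pairs correspond to $\mathcal{D}_t^{match}$ and the unmatched indices to $\mathcal{D}_t^{wb^-}$.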
\subsection{Appearance Descriptor}
Appearance descriptors are used to measure the similarities between detections at the current frame and history trajectories. They offer a significant, sometimes the only, cue to re-identify a target who was missed for a period of time and reappears. To obtain strong and discriminative appearance descriptors, we leverage ResNet18 \cite{he2016deep} as our feature extraction network and train it using the TGRDB dataset described above. We choose ResNet18 due to the trade-off between speed and performance. To integrate global and local information, the \textit{wb} and \textit{hs} feature extraction networks are trained individually using \textit{wb} and \textit{hs} re-identification datasets, respectively. For a detection pair $(d_i^{wb},d_j^{hs})\in \mathcal{D}_t^{match}$, the corresponding appearance features are $(f_i^{wb}, f_j^{hs})$. For a detection $d_i^{wb^-}\in\mathcal{D}_t^{wb^-}$, we denote its appearance feature as $(f_i^{wb^-}, \mathbf{0})$ where $\mathbf{0}$ is a zero vector with the same dimension as $f_i^{wb^-}$. At last, we concatenate each pair of features and form the final appearance descriptors denoted as $\mathcal{F}_t^{fused}=\{[f_{i1}^{wb},f_{j1}^{hs}],...,[f_{iK}^{wb},f_{jK}^{hs}],[f_1^{wb^-}, \mathbf{0}],...,[f_L^{wb^-}, \mathbf{0}]\}$.
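Building $\mathcal{F}_t^{fused}$ then amounts to concatenation with zero padding; a minimal sketch:

```python
import numpy as np

def fuse_features(matched_feats, unmatched_wb_feats):
    """Build F_t^fused: matched (f_wb, f_hs) pairs are concatenated, while
    wb-only detections get a zero vector in the hs slot."""
    fused = [np.concatenate([f_wb, f_hs]) for f_wb, f_hs in matched_feats]
    fused += [np.concatenate([f_wb, np.zeros_like(f_wb)])
              for f_wb in unmatched_wb_feats]
    return fused
```

The zero padding keeps all fused descriptors at the same dimension, so matched and wb-only detections can share one cost matrix in the association step.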
\subsection{Tracking} \label{tracking}
We adopt the DeepSORT \cite{wojke2017simple} framework to perform tracking. Given a set of appearance descriptors, $\mathcal{F}_t^{fused}$, we associate them with trajectories from previous frames. To do so, we keep the latest $P$ descriptors for each trajectory and leverage cosine similarity to measure the distance between trajectories and detections. Specifically, for one detection $f\in \mathcal{F}_t^{fused}$ and one trajectory $\mathcal{T}=\{f_1,...,f_P\}$, a set of cosine similarities, $S(f,\mathcal{T})=\{s_1,...,s_P\}$, is calculated, and we average them to produce the final distance, $D(f,\mathcal{T})=\sum_{i=1}^{P}(1-s_i)/P$. We also explore the performance of the minimum distance value, $D(f,\mathcal{T})=\min_i(1-s_i)$, in experiments. After computing the pairwise distances between current detections and history trajectories, we obtain a $Q\times (K+L)$ cost matrix, where $Q$ is the number of trajectories, i.e., the number of tracked targets. The Hungarian algorithm is employed to get the final association results, and trajectories are updated using the corresponding associated detections. Any association that has a distance greater than the preset threshold $\tau$ is treated as unmatched.
The original DeepSORT is designed for video surveillance and is not well suited to our tour-guide scenario. In the data association process of DeepSORT, a matching cascade is introduced to give higher priority to trajectories that have missed targets for less time. Meanwhile, a so-called gating mechanism is applied so that only detections near the predicted locations of trajectories are considered. These two tricks may cause more track fragments and worsen the tracking performance in our scenario, as frequent occlusion and long-term missing exist in our dataset. Consequently, we abandon these two tricks in our tracker. Another trick to handle the challenges in TGR is to assign a relatively large value to the age threshold, $\alpha$. When a trajectory is missed, i.e., no detections are associated with it, for $\alpha$ consecutive frames, it is deleted. We explore different values of $\alpha$ in our ablation experiments and conclude that $\alpha=\infty$, i.e., no trajectories are deleted during tracking, gives the best performance.
\section{Experiments}
To evaluate on our TGRDB dataset, the widely used metrics HOTA \cite{luiten2021hota}, MOTA \cite{bernardin2008evaluating} and IDF1 \cite{ristani2016performance} are adopted. Besides, we report results evaluated using the proposed TGRHOTA metric as well. For a more comprehensive analysis, we split the test dataset into casual-cloth and same-cloth sub-datasets and conduct experiments on each; the training dataset is not split.
\subsection{Ablation Studies}
\subsubsection{Hyper-Parameters}
Different values of the hyper-parameters $\alpha$ and $\tau$ may impact the final performance dramatically. We first fix the distance threshold $\tau$ at 0.5 and vary the age threshold $\alpha$ to explore its impact. In this stage, we employ the whole-body branch only. The results are shown in Table \ref{alpha}. Intuitively, $\alpha$ impacts data association the most, as can be seen from the AssA and IDF1 scores. In the TGR scenario, people's IDs are required to remain consistent even after long-term target missing, so a greater $\alpha$ should give better performance. This is verified by the results in Table \ref{alpha}, where an infinite $\alpha$ gives the best performance. This means that no tracks are deleted during the tour-guide period, which gives the tracker a chance to recover a target even if he/she has disappeared for a long time.
\begin{table}[th]
\centering
\caption{The impact of age threshold $\alpha$ when distance threshold $\tau=0.5$.}
\label{alpha}
\begin{subtable}[t]{0.495\linewidth}
\caption{Casual Cloth}
\centering
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{l|cccc}
\hline
& \multicolumn{4}{|c}{$\alpha$} \\
& 30 & 100 & 500 & $\infty$ \\ \hline
HOTA$\uparrow$ & 33.6 & 36.0 & 37.8 & \textbf{41.2} \\ \hline
AssA$\uparrow$ & 15.0 & 17.1 & 18.9 & \textbf{22.5} \\ \hline
MOTA$\uparrow$ & 86.2 & 86.3 & 86.0 & 85.5 \\ \hline
IDF1$\uparrow$ & 27.4 & 29.6 & 31.2 & \textbf{36.4} \\ \hline
\end{tabular}}
\end{subtable}
\begin{subtable}[t]{0.495\linewidth}
\caption{Same Cloth}
\centering
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{l|cccc}
\hline
& \multicolumn{4}{|c}{$\alpha$} \\
& 30 & 100 & 500 & $\infty$ \\ \hline
HOTA$\uparrow$ & 31.7 & 34.8 & 37.2 & \textbf{42.5} \\ \hline
AssA$\uparrow$ & 13.1 & 15.8 & 18.1 & \textbf{23.6} \\ \hline
MOTA$\uparrow$ & 86.9 & 87.0 & 86.9 & 86.4 \\ \hline
IDF1$\uparrow$ & 24.9 & 28.4 & 30.8 & \textbf{37.7} \\ \hline
\end{tabular}}
\end{subtable}
\end{table}
\begin{figure}[th]
\centering
\includegraphics[width=0.45\textwidth]{distance.pdf}
\caption{The impact of different distance threshold $\tau$ values using the two matching distance calculation ways.}
\label{distance}
\end{figure}
We then fix $\alpha=\infty$ and vary the distance threshold $\tau$ to explore its impact. In data association, there are two ways, \textit{mean} and \textit{min} as described in Section \ref{tracking}, to compute the distance between a detection and a trajectory. We show their results\footnote{For simplicity, only results on the same-cloth sub-dataset are shown, as the casual-cloth sub-dataset yields similar results.} in
Fig. \ref{distance}, where we conclude that the averaged distance gives much better performance. The reason is apparent: the $P$ galleries reserved in each trajectory may contain noisy samples, and the minimum distance suffers heavily from this noise.
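The two distance choices can be sketched as follows; the gallery contents and features are illustrative only:

```python
import numpy as np

def track_distance(galleries, det_feat, mode="mean"):
    """Distance between a detection feature and a trajectory's reserved
    galleries. 'mean' averages over all P galleries, so a few noisy
    galleries are diluted; 'min' takes the single closest gallery and is
    therefore hijacked by any noisy sample resembling the detection."""
    d = [1.0 - g @ det_feat / (np.linalg.norm(g) * np.linalg.norm(det_feat))
         for g in galleries]
    return float(np.mean(d)) if mode == "mean" else float(min(d))
```

With a single corrupted gallery close to a wrong detection, `min` collapses to a near-zero distance while `mean` stays large, which matches the behavior observed in Fig. \ref{distance}.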
The above experiments on hyper-parameters indicate that $\alpha=\infty$ and $\tau=0.85$ give the best performance. We will fix them in the following experiments and explore other aspects of our method.
\subsubsection{Addition of Head-Shoulder}
Head-shoulder contains non-negligible features, such as haircut, glasses, complexion, etc., which provide discriminative cues for person re-identification. We conduct comparative experiments employing: 1) \textit{wb}, the whole body only; 2) \textit{hs}, the head shoulder only; 3) \textit{wb}+\textit{hs}, both whole body and head shoulder. The results are shown in Table \ref{wb_hs}. Unsurprisingly, the method that integrates both \textit{wb} and \textit{hs} information gives the best performance.
\begin{table}[th]
\centering
\caption{The improvement of performance after adding head-shoulder.}
\label{wb_hs}
\begin{subtable}[t]{0.495\linewidth}
\caption{Casual Cloth}
\centering
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{l|ccc}
\hline
& \textit{wb} & \textit{hs} & \textit{wb}+\textit{hs} \\ \hline
HOTA$\uparrow$ & 71.2 & 53.0 & \textbf{72.7} \\ \hline
MOTA$\uparrow$ & 87.1 & 54.1 & \textbf{87.2} \\ \hline
IDF1$\uparrow$ & 81.2 & 58.8 & \textbf{84.2} \\ \hline
\end{tabular}}
\end{subtable}
\begin{subtable}[t]{0.495\linewidth}
\caption{Same Cloth}
\centering
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{l|ccc}
\hline
& \textit{wb} & \textit{hs} & \textit{wb}+\textit{hs} \\ \hline
HOTA$\uparrow$ & 67.2 & 51.9 & \textbf{68.7} \\ \hline
MOTA$\uparrow$ & 87.6 & 56.2 & \textbf{87.6} \\ \hline
IDF1$\uparrow$ & 77.2 & 56.3 & \textbf{77.9} \\ \hline
\end{tabular}}
\end{subtable}
\end{table}
To dig deeper into how head shoulder helps improve the overall performance, we show precision and recall scores of association in Fig. \ref{assa}. Compared to the \textit{wb}-only method, on the casual-cloth sub-dataset, involving head shoulder improves precision and recall by 2.3\% and 2.9\%, respectively; head shoulder thus contributes roughly equally to both. On the same-cloth sub-dataset, however, the respective improvements are 5.9\% and 1.4\%, so head shoulder contributes much more to precision. In other words, with the help of head shoulder, the tracker does better at distinguishing different people even if they wear the same clothes. This is due to the discriminative details contained in the head shoulder, which are absent from the whole body.
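One simple way such two-branch cues could be combined is a weighted sum of the per-branch distances; the weight `w_hs` and this particular fusion rule are illustrative assumptions, not necessarily the exact TGRMPT fusion:

```python
import numpy as np

def cosine_dist(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def fused_distance(wb_det, hs_det, wb_gal, hs_gal, w_hs=0.5):
    """Weighted sum of whole-body (wb) and head-shoulder (hs) distances.

    When two people wear identical clothes the wb distance is
    uninformative, and the hs term is what still separates them.
    """
    d_wb = cosine_dist(wb_det, wb_gal)
    d_hs = cosine_dist(hs_det, hs_gal)
    return (1 - w_hs) * d_wb + w_hs * d_hs
```

Even when the whole-body features of two same-cloth people are indistinguishable, the head-shoulder term keeps the fused distance large for the wrong identity, which is the precision gain seen on the same-cloth sub-dataset.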
\begin{figure}[!t]
\centering
\begin{subtable}[t]{0.45\linewidth}
\centering
\includegraphics[align=c, width=1\textwidth]{tradition_cloth_AssPr_vs_AssRe_AssA.png}
\label{assa_tradition}
\caption{Casual cloth}
\end{subtable}
\begin{subtable}[t]{0.45\linewidth}
\centering
\includegraphics[align=c, width=1\textwidth]{similar_cloth_AssPr_vs_AssRe_AssA.png}
\label{assa_similar}
\caption{Same cloth}
\end{subtable}
\caption{Precision and recall scores of data association. The integration of head shoulder contributes equally to precision and recall scores on the casual-cloth sub-dataset while contributing more to the precision score on the same-cloth sub-dataset.}
\label{assa}
\end{figure}
\subsubsection{New Metric}
We show scores of our proposed TGRHOTA metric in Table \ref{tgrhota}. The TGRHOTA scores are higher than the HOTA scores, a situation similar to that of tracker B in Fig. \ref{hota_vs_tgrhota}. That is to say, once a target is assigned an ID, our tracker keeps it consistent most of the time.
\begin{table}[th]
\centering
\caption{The evaluation result of new TGRHOTA metric.}
\label{tgrhota}
\begin{subtable}[t]{0.495\linewidth}
\caption{Casual Cloth}
\centering
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{l|ccc}
\hline
& \textit{wb} & \textit{hs} & \textit{wb}+\textit{hs} \\ \hline
TGRHOTA$\uparrow$ & 74.3 & 55.5 & \textbf{75.2} \\ \hline
\end{tabular}}
\end{subtable}
\begin{subtable}[t]{0.495\linewidth}
\caption{Same Cloth}
\centering
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{l|ccc}
\hline
& \textit{wb} & \textit{hs} & \textit{wb}+\textit{hs} \\ \hline
TGRHOTA$\uparrow$ & 70.1 & 54.9 & \textbf{72.1} \\ \hline
\end{tabular}}
\end{subtable}
\end{table}
\subsubsection{FPS}
On a single thread, our method runs at 16 FPS end-to-end on an Nvidia 2080Ti GPU. Running the \textit{wb} and \textit{hs} detectors in parallel may speed up the method further.
\subsection{Comparison with State-of-the-Art}
We compare our proposal with two recent state-of-the-art methods, CenterTrack \cite{zhou2020tracking} and ByteTrack \cite{zhang2021bytetrack}. Results are shown in Table \ref{sota}. We train CenterTrack and ByteTrack on our dataset and fine-tune some hyper-parameters. Our method achieves the best performance. We show the comparison results for reference only, as we have carefully tailored DeepSORT to fit the tour-guide scenario, especially in data association. Since CenterTrack and ByteTrack are designed for video surveillance, it is foreseeable that they perform poorly without any scenario-oriented modifications.
\begin{table}[!t]
\centering
\caption{Comparison with State-of-the-Art.}
\label{sota}
\setlength{\tabcolsep}{0.1mm}{
\begin{tabular}{l|cccc|cccc}
\hline
& \multicolumn{4}{c|}{Casual Cloth} & \multicolumn{4}{c}{Same Cloth} \\ \cline{2-9}
Method & MOTA & IDF1 & HOTA & TGRHOTA & MOTA & IDF1 & HOTA & TGRHOTA \\ \hline
CenterTrack & 88.1 & 25.1 & 32.0 & 39.3 & 87.7 & 22.0 & 29.8 & 35.5 \\
ByteTrack & 87.7 & 26.8 & 33.1 & 39.7 & 87.3 & 23.7 & 30.8 & 34.5 \\
Ours & 87.2 & \textbf{84.2} & \textbf{72.7} & \textbf{75.2} & 87.6 & \textbf{77.9} & \textbf{68.7} & \textbf{72.1} \\ \hline
\end{tabular}}
\end{table}
\section{Conclusion}
We release the TGRDB dataset, the first large-scale dataset for TGR applications. It is captured by a TGR in an indoor tour-guide scenario. We annotate the dataset with whole-body and head-shoulder bounding boxes, as well as a unique ID for each participant. Frequent occlusion, long-term target missing, and fine-grained targets (people dressed identically) are the main challenges in this dataset. We believe that TGRDB will help future research in service robotics, long-term multi-person tracking, and fine-grained or clothes-changing person re-identification. Along with the dataset, we propose a more practical metric, TGRHOTA, to evaluate trackers in the tour-guide scenario. Different from existing metrics, TGRHOTA only considers TPs over the most recent previous frames, which is more applicable to TGR. As part of our work, we propose TGRMPT, a novel head-shoulder aided multi-person tracking system that leverages the discriminative cues contained in the head shoulder that are absent from the whole body. Extensive experiments have confirmed the significant advantages of our proposal.
\section*{Acknowledgment}
This work is supported in part by the Cloud Brain Foundation Grant U21A20488 of Zhejiang Lab. The authors would also like to thank Fellow Academician Jianjun Gu for establishing and leading the Tour-Guide Robot project.
\section{Proof of Theorem 1}
\thscala*
\begin{proof}
There are two restrictions on the total size of raw files: one is the restriction of total capacity, and the other is the restriction of the maximal total value of stored files.
Under the former restriction, each file $f$ is stored as $f.cp$ replicas. Due to the assumption of redundant capacity, the total size of all replicas cannot exceed $\frac{1}{2}$ of the total capacity. That is,
\[
\sum_f (f.size \times f.cp) \leq \frac{1}{2} N_s \times minCapacity.
\]
Because $f.cp = k\times \frac{f.value}{minValue}$, we have
\[
\sum_f (f.size \times k\times \frac{f.value}{minValue}) \leq \frac{1}{2} N_s \times minCapacity.
\]
Then we have
\[
k \times\frac{\sum_f (f.size \times f.value)}{minValue \times \sum_f f.size} \leq \frac{1}{2} \times \frac{N_s \times minCapacity}{\sum_f f.size}.
\]
Let $r_1 = \frac{\sum_f (f.size \times f.value)}{minValue \times \sum_f f.size}$; then we have
\[
k r_1 \leq \frac{1}{2} \times \frac{N_s \times minCapacity}{\sum_f f.size}.
\]
Then
\[
\sum_f f.size \leq \frac{ N_s \times minCapacity}{2r_1 k }.
\]
Under the latter restriction, the total value of files cannot exceed $N_v^m \times minValue$. That is,
\[
\sum_f f.value \leq N_v^m \times minValue.
\]
Because $N_v^m = capPara\times N_s$,
\[
\frac{\sum_f f.value}{\sum_f f.size} \leq \frac{capPara\times N_s \times minValue}{\sum_f f.size}.
\]
Therefore,
\[
\sum_f f.size \leq \frac{N_s \times minCapacity }{r_2},
\]
where
\[
r_2 = \frac{minCapacity \times \sum_f f.value}{minValue \times \sum_f f.size \times capPara}.
\]
\end{proof}
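The central algebraic step of this proof, namely that the derived size bound is just a rewriting of the capacity constraint $\sum_f (f.size \times f.cp) \leq \frac{1}{2} N_s \times minCapacity$, can be sanity-checked numerically. The file distribution below is synthetic:

```python
import random

# Synthetic file distribution; sizes and values are illustrative only.
random.seed(1)
minValue, k = 10.0, 3
files = [{"size": random.uniform(1, 20),
          "value": minValue * random.randint(1, 8)} for _ in range(200)]
for f in files:
    f["cp"] = k * f["value"] / minValue   # replicas per file

sum_size = sum(f["size"] for f in files)
sum_repl = sum(f["size"] * f["cp"] for f in files)   # total size of all replicas
r1 = sum(f["size"] * f["value"] for f in files) / (minValue * sum_size)
```

Since $k\,r_1 \sum_f f.size = \sum_f (f.size \times f.cp)$ holds identically, bounding the right-hand side by $\frac{1}{2}N_s \times minCapacity$ is equivalent to the stated bound on $\sum_f f.size$.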
\section{Proof of Theorem 2}
\thadd*
\begin{proof}
In this special case, all files have the same size $f.size$. A sector $s$ with capacity $s.capacity$ can store $\frac{s.capacity}{f.size}$ backups. We define $N_{cp}=kN_v$ as the total number of file backups, because $N_{cp} = \sum_f f.cp = \sum_f \frac{f.value}{minValue} \times k = kN_v$. Additionally, let $X_i$ be the event that backup $i$ is stored in this sector and $S=\sum_{i=1}^{N_{cp}}X_i$. Because of the assumption of redundant capacity, we have $\mathrm{E}[S]\leq \frac{s.capacity}{2 f.size}$. By the multiplicative Chernoff bound, we have
\begin{align*}\small
& \Pr\left[s.freeCap\leq \frac{1}{8} s.capacity\right]\\
= & \Pr\left[\sum_{i=1}^{N_{cp}} X_i\geq \frac{7}{8}\frac{s.capacity}{f.size}\right]\\
\leq & \Pr\left[S\geq \frac{7}{4}\mathrm{E}\left[S\right]\right]\\
\leq & \exp\left\{\left(\log\frac{e}{4}\right)\frac{3}{4}\mathrm{E}\left[S\right]\right\}\\
\leq & \exp\left\{\left(\log\frac{e}{4}\right)\frac{3 s.capacity}{8 f.size}\right\}\\
\leq & \exp\left\{-0.144\frac{s.capacity}{f.size}\right\}
\end{align*}
By applying union bound, we obtain
\small{\[
\Pr\left[ \exists s, s.freeCap\leq \frac{1}{8}s.capacity\right] \leq N_s\exp\left\{-0.144\frac{s.capacity}{f.size}\right\}.
\]}
\end{proof}
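The concentration claim can be checked with a small Monte Carlo simulation; the sector count and per-sector capacity below are arbitrary choices, not parameters from the protocol:

```python
import random, math

random.seed(0)
N_s, C = 20, 160          # number of sectors and per-sector capacity (in backups)
N_cp = N_s * C // 2       # redundant capacity: expected load per sector is C/2
trials, bad = 500, 0
for _ in range(trials):
    load = [0] * N_s
    for _ in range(N_cp):
        load[random.randrange(N_s)] += 1   # uniformly random allocation
    if max(load) >= 7 * C / 8:             # some sector has < 1/8 free capacity
        bad += 1
empirical = bad / trials
bound = N_s * math.exp(-0.144 * C)         # union bound from Theorem 2
print(empirical, bound)
```

In this regime the union bound is already tiny, and unsurprisingly no overloaded sector is ever observed in the simulation.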
\input{proof_3}
\end{appendices}
\section{Conclusion}\label{conclusion}
In this paper, we propose FileInsurer, a novel design for blockchain-based \emph{Decentralized Storage Network}, which achieves both scalability and reliability. FileInsurer is the first DSN protocol that gives full compensation to file loss and has provable robustness. Our work also raises many open problems. First, are there other approaches to enhance the reliability of \emph{Decentralized Storage Networks}? For example, a reputation mechanism~\cite{chen2021provable} on storage providers may be also helpful to reduce the loss of files. Second, are there other ways to support dynamic content in sectors other than DRep? Furthermore, can the idea of FileInsurer be extended to decentralized insurance in other scenarios?
\section{Introduction}
File storage is a fundamental issue in distributed systems. Recently, the developments of Web 3.0~\cite{alabdulwahhab2018web}, Non-Fungible Tokens (NFTs)~\cite{wang2021non,chenabsnft}, and Metaverse~\cite{ryskeldiev2018distributed} have raised high requirements on reliability and accessibility of file storage. For example, the metadata of NFTs should be verifiable and accessible in NFT markets, as the values of NFTs disappear if the metadata is lost. Billions of metadata generated by blockchain applications are searching for reliable storage services.
Traditionally, people store files in personal storage or cloud storage service. However, personal storage struggles to keep files secure and accessible. Additionally, cloud storage lacks transparency and trust~\cite{benisi2020blockchain}. It is hard for users to recognize how many backups of their files should be stored to guarantee security. Moreover, file loss often occurs in cloud storage.
Due to the defects of personal storage and cloud storage, more and more users choose to store files in blockchain-based \emph{Decentralized Storage Networks} (DSNs) such as Sia~\cite{vorick2014sia}, Filecoin~\cite{benet2018filecoin}, Arweave~\cite{williams2019arweave}, and Storj~\cite{wilkinson2014storj}. In a DSN, storage providers contribute their available hard disks to store files from clients and earn profits in return. The storing, discarding, and other state-changing events of files are recorded on the blockchain. Files can be stored by multiple storage providers to enhance security. Additionally, in Filecoin, backups become replicas once they have been proved via proof-of-replication (PoRep). PoRep effectively resists Sybil attacks~\cite{douceur2002sybil}, in which a storage provider pretends to store multiple backups by forging multiple identities while actually storing only one.
PoRep can also be used to ensure that providers cannot cheat on their available storage space.
However, the existing DSN protocols do not adequately guarantee reliability. In a DSN, some files are always stored by only a small subset of storage providers due to the issue of scalability. Therefore, it is impossible to completely avoid file loss. When files are lost, the owners of these files receive only little compensation.
In this paper, we aim to enhance the reliability of decentralized file storage from the perspective of economic incentives. For the Bitcoin blockchain~\cite{nakamoto2008bitcoin}, the greatest success is the economic incentive approach: awarding a certain amount of tokens to encourage miners to mine actively. Thus, in the era of blockchain, the design of economic approaches is becoming more and more important. We build a decentralized insurance scheme on files stored in a DSN to protect the interests of users when their files are lost. Under this insurance scheme, storage providers need to pledge a deposit before storing files. If a file is lost, which means that all providers storing this file are corrupted, the total deposit from these providers can fully compensate for the loss of this file.
We hope that the deposit is small so as to incentivize participants to contribute their storage space. Let the \emph{deposit ratio} denote the ratio of the sum of deposits to the total value of files. Chen et al.~\cite{chen2020decentralized} first studied how to decrease the deposit ratio in a decentralized custody scheme with insurance. However, the methodology in \cite{chen2020decentralized} cannot be directly applied to our scenario, because storage providers and files change over time in a DSN, while \cite{chen2020decentralized} is only suitable for a static setting. Our approach is to achieve provable robustness by ensuring \emph{storage randomness}. Storage randomness requires that the locations of replicas are randomly selected by the DSN, such that these locations are
uniformly distributed. Consequently, attackers must corrupt a considerable portion of providers even if they only want to destroy all backups of a small portion of files. Therefore, the randomness ensures that only a relatively small deposit needs to be pledged by storage providers to cover the potential file loss.
\subsection*{Main Contributions}
We propose FileInsurer, a novel design for blockchain-based \emph{Decentralized Storage Network}, to achieve both scalability and reliability of file storage. In our protocol, storage providers are required to pledge deposits to registered sectors and the locations of files are randomly selected. To further ensure storage randomness, locations of files' replicas shall change from time to time because the list of sectors is dynamic.
Our protocol advances the technology of decentralized file storage in the following three aspects.
\begin{itemize}
\item Firstly, FileInsurer supports dynamic content stored in sectors with low cost, which is necessary to ensure \emph{storage randomness}. FileInsurer deploys \emph{Dynamic Replication} (DRep) to support adding and refreshing stored files. DRep is also able to resist Sybil attacks and make sure the free space of sectors is indeed available.
\item Secondly, FileInsurer achieves provable robustness. In FileInsurer, files are stored as replicas in sectors. Naturally, a file is missing if and only if all replicas of this file have been destroyed, and a sector is collapsed as long as any bit in this sector is lost. Under mild conditions, we prove that no more than 0.1\% of the total value of all files is lost even if half of the storage collapses.
\item Thirdly, FileInsurer implements an insurance scheme on DSN that can provide full compensation for the loss of those missing files. The compensation is covered by the deposit of all crashed storage sectors. Our theoretical analysis indicates that only a small deposit ratio is needed to cover all of the file loss in FileInsurer.
\end{itemize}
To the best of our knowledge, FileInsurer is the first DSN protocol that can provide full compensation for the file loss and has provable robustness.
\subsection*{Paper Organization}
The rest of this paper is organized as follows. Section~\ref{related} introduces the related works of decentralized storage protocols. In Section~\ref{prelimiaries}, we describe the structure and components of FileInsurer protocol. Then, we continue to introduce the protocol design of FileInsurer in Section~\ref{protocoldesign}. In Section~\ref{analysis}, we propose the theoretical analysis on the scalability, robustness, and deposit issue of our protocol. We also compare FileInsurer with other blockchain-based decentralized storage protocols. In addition, some practical problems in FileInsurer are detailedly discussed in Section~\ref{discussion}. Finally, we summarize our protocol and raise some open problems in Section~\ref{conclusion}.
\section{Protocol Design of FileInsurer}\label{protocoldesign}
In this section, we introduce the protocol design of FileInsurer in detail. The insurance scheme is built into the protocol so that storage providers are responsible for file loss and their deposits can fully compensate clients whose files are lost. To support dynamic file storage in sectors, storage randomness is needed to randomly distribute the locations of replicas in the DSN, which is realized by randomly selecting and refreshing the locations of replicas.
Additionally, files with higher values have more replicas, so it is harder to destroy all replicas of such files.
In FileInsurer, all file replicas and Capacity Replicas are generated by PoRep, which means that WinningPoSt can be easily achieved. Therefore, the Expected Consensus deployed by Filecoin can be directly applied to our consensus algorithm. Additionally, FileInsurer protocol can be deployed as a smart contract or sidechain in other blockchain protocols such as Ethereum~\cite{wood2014ethereum} and Algorand~\cite{gilad2017algorand, chen2019algorand}.
\subsection{Fee mechanism}\label{fee_mechanism}
In our DSN design, clients need to pay a fee when they obtain the storage service and retrieval service. Moreover, there are three kinds of fees in FileInsurer, which are the traffic fee, storage rent, and prepaid gas fee.
\subsubsection{Traffic fee}
The traffic fee needs to be paid when a client occupies the network bandwidth of providers by transmitting files, retrieving files, or other interactions. This mechanism is necessary because malicious clients might otherwise transmit files while paying nothing, congesting the providers' network. The traffic fee must be committed to the storage provider before the file transmission, and the provider obtains the fee only after it has confirmed the file.
\subsubsection{Storage rent} Clients need to pay the storage rent for the used storage space, which is proportional to the size of the file times the number of replicas. The unit rent is the same for all files, and the network informs the client how much rent it should pay. The client will be automatically charged storage rent in the task \textsf{Auto\_CheckAlloc} which will be introduced in \cref{protocol}.
In particular, the network distributes revenue by time period. In a time period, all storage rent is stored in the network at first. At the end of the period, the network distributes the rent to owners of proper functioning sectors during this period. Storage providers are paid proportionally according to their total storage capacity, without paying attention to which file is stored in which sector.
\subsubsection{Prepaid gas fee} After a client stores files on the network, the network needs to periodically check the proof and refresh the file storage locations. These operations use the consensus space and thus incur a gas fee. The gas fee for these operations should be prepaid by the user as these operations are performed automatically. The prepaid gas fee shall be collected together with storage rent through
\textsf{Auto\_CheckAlloc}.
In addition, anyone who submits requests to the network must pay a gas fee to avoid wasting valuable consensus space. The design of the gas fee mechanism is part of the network design. As our DSN design does not focus on the network design, we can use other existing gas fee mechanisms and do not detailedly address it in this work.
\subsection{Deposit and Compensation}
When registering a sector, the storage provider should pledge to DSN with a certain amount of deposit. The deposit is locked until the sector safely quits the system or is corrupted. If the sector safely quits, the deposit would be withdrawn to the storage provider. If the sector is corrupted, the deposit must be confiscated.
When the deposit of a sector is confiscated, it is stored in the network to compensate for lost files. File loss means that all copies of a file are no longer available, i.e., the storage sectors storing these copies are all corrupted. When a file is lost, the network provides users with compensation equal to the value of the file. Values of files are declared by users when storing them. If a user reports a value higher than the true value of her file, she pays a higher storage rent; if she reports a lower value, the compensation is lower once her file gets lost.
The \emph{deposit ratio} $\gamma_{deposit}$ of FileInsurer is defined as the ratio of the sum of deposits to the maximal value of files stored in the network. It can be understood as how much deposit is required for each unit of value stored in the network. A lower deposit ratio gives providers more incentive to participate in the decentralized storage network, and thus makes our protocol more competitive.
Now we show how to calculate the deposit from $\gamma_{deposit}$ when registering a sector. Assume that the total size of sectors in FileInsurer is $N_s\times minCapacity$ and the maximal total value of stored files is $N_v^m \times minValue$. For a sector $s$ with capacity $s.capacity$, the deposit should be the proportion of $s.capacity$ in the network multiplied by the total deposit, which is
$\gamma_{deposit} \times N_v^m \times minValue \times \frac{s.capacity}{N_s \times minCapacity}$. Let $capPara = \frac{N_v^m}{N_s}$ be a constant; the deposit then becomes $ s.capacity \times \gamma_{deposit} \times\frac{ capPara \times minValue }{minCapacity}$, which can be calculated from $s.capacity$, $\gamma_{deposit}$, and some constants alone. The setting of $\gamma_{deposit}$ is discussed in \Cref{th:ratio}.
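The simplified per-sector deposit formula can be written as a one-line function; the numeric parameters in the usage check below are arbitrary:

```python
def sector_deposit(capacity, gamma_deposit, cap_para, min_value, min_capacity):
    """Deposit pledged when registering a sector of the given capacity.

    Equal to the sector's share of total capacity times the network-wide
    total deposit gamma_deposit * N_v^m * minValue; the N_s terms cancel,
    leaving only per-sector quantities and network constants.
    """
    return capacity * gamma_deposit * cap_para * min_value / min_capacity
```

A quick check confirms that this matches the unsimplified expression $\gamma_{deposit} \times N_v^m \times minValue \times \frac{s.capacity}{N_s \times minCapacity}$ for any choice of $N_s$.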
\input{protocol}
\section{Discussion}\label{discussion}
In previous sections, we have proposed the general framework of FileInsurer and theoretically proved the excellent performance of FileInsurer. Besides, some practical issues exist and we explore the corresponding solutions for them under FileInsurer in this section.
\subsection{Distributions and Parameters}\label{dis:r12}
The value and size of a file follow a certain distribution in the DSN. We make the following reasonable assumptions about this distribution.
\begin{itemize}
\item The maximal value of a file is bounded by a constant. Therefore, $r_1$ (defined in ~\cref{r1}) is bounded by a constant.
\item The average value of a unit size is a bounded constant. Then it's reasonable to assume that $\frac{\sum_f f.value}{\sum_f f.size}$ is bounded by a constant. Therefore, $r_2$ (defined in ~\cref{r2}) is bounded by a constant.
\end{itemize}
The parameters of FileInsurer should be set properly according to the distribution of files. For example, we should set parameters such that $2r_1 k$ is not far from $r_2$, to further improve the scalability bound in \cref{th:scala}. This also helps to avoid the bad situation in which the total value of files is far below the maximum while the used space has already reached its limit.
\subsection{Storage Randomness When Adding or Removing Sectors}\label{maintainrandom}
In our analysis of storage randomness, we ignore the case where the network adds or removes sectors online. When a new sector $s$ is registered in the network, in order to maintain the independent and identically distributed property of the allocations, the network should traverse each allocation and swap it into that sector with probability $\frac{s.capacity}{N_s\times minCapacity}$. Such an operation is impractical because traversing over all files is too expensive. A good approximation is for the network to first calculate how many file backups need to be swapped into the sector by sampling from a Poisson distribution, and then randomly select that many file backups to swap in.
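The Poisson approximation above can be sketched as follows; the interface and parameter names are illustrative:

```python
import numpy as np

def swaps_into_new_sector(num_backups, new_capacity, total_capacity, rng):
    """Approximate per-backup Bernoulli(p) swapping with one Poisson draw.

    Instead of traversing every allocation and swapping it with
    probability p = new_capacity / total_capacity, draw the number of
    swapped backups from Poisson(num_backups * p) and then sample that
    many backup indices uniformly without replacement.
    """
    p = new_capacity / total_capacity
    n_swap = min(rng.poisson(num_backups * p), num_backups)
    return rng.choice(num_backups, size=n_swap, replace=False)
```

The expected number of swapped backups equals `num_backups * p`, matching the per-backup Bernoulli scheme, but only `n_swap` backups are touched instead of all of them.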
If a sector is disabled, we can require it to keep storing all replicas it currently holds even while they are slowly being swapped out. As a result, attacking the corresponding files does not get easier. When all of its files have been swapped out, the sector no longer exists in the network, so storage randomness is guaranteed.
\subsection{Adjusting to Extremely Large Files}
In some special cases, a few huge files, whose sizes are comparable to the capacity of sectors, need to be stored in the network. Such very large files might break storage randomness because their allocations might fail to find enough space in one turn. To address this problem, the network specifies an upper limit $sizeLimit$ on the size of a single file. A file with size greater than $sizeLimit$ is converted into a collection of segments
by an erasure code, such that each segment's size is bounded by $sizeLimit$ and the file can still be recovered even if half of the segments are lost. Therefore, we can simply regard each segment as an individual file with value $\frac{2 \cdot f.value}{k}$, where $k$ is the number of segments. In practice, a common erasure code such as the Reed--Solomon code~\cite{reed1960polynomial} can be applied to achieve this.
\subsection{Storing Files with Widely Varying Values}\label{sublinear}
In the FileInsurer protocol, the value of each file is required to be an integer multiple of $minValue$, so a file with value $v$ can be treated as $\frac{v}{minValue}$ documents each worth $minValue$. This means that a high-value file needs many replicas in the system, and the number of replicas is linear in the file's value. A compromise solution is to pre-divide files into value levels and establish a storage subnetwork for each level. Clients can then choose which subnetwork to store their files in based on the value level.
\subsection{Avoiding Selfish Storage Providers}
Selfish storage providers are providers who store files but do not normally provide retrieval services. Assume the ratio of selfish storage providers to all providers in the network is $\alpha$. Then, in expectation, an $\alpha^k$ fraction of files suffers from the threat of the selfish providers' collusion, where $k$ is the number of copies of a stored file.
As a result, any protocol that fixes file storage locations cannot fundamentally solve the problem of selfish storage providers. A natural advantage of FileInsurer is that its file refresh mechanism eliminates this threat at the root: because file storage locations are refreshed, no single file is completely controlled by selfish storage providers for a long time.
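The $\alpha^k$ estimate for fixed storage locations can be checked with a quick simulation; for simplicity the sketch assumes replica locations are independent and uniform over providers, ignoring that real replicas occupy distinct sectors:

```python
import random

def selfish_control_fraction(alpha, k):
    """Expected fraction of files whose k replica holders are all selfish,
    assuming independent, uniformly random replica locations."""
    return alpha ** k

def simulate(alpha, k, n_files, n_providers, seed=0):
    """Empirical estimate: mark alpha * n_providers providers as selfish
    and place each file's k replicas on uniformly random providers."""
    rng = random.Random(seed)
    selfish = set(range(int(alpha * n_providers)))
    hit = sum(all(rng.randrange(n_providers) in selfish for _ in range(k))
              for _ in range(n_files))
    return hit / n_files
```

For $\alpha = 0.2$ and $k = 3$, already $0.8\%$ of files are fully controlled by selfish providers in expectation, and the simulation agrees with the closed form.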
\subsection{Supports for IPFS}
Filecoin has shown how to support IPFS in a blockchain-based DSN, and FileInsurer has a similar approach. In FileInsurer, the hashes and locations of files are all stored in blockchain. Therefore, it's easy to build and update DHTs and Merkle DAGs on FileInsurer so that anyone can address files stored in FileInsurer through IPFS paths. The retrieval of files can be also realized through BitSwap protocol.
\section{Experiment}
\label{experiment}
Recall that $N_{cp}=kN_v$ is the total number of file backups and each file $f$ needs $f.cp$ backups stored on the network. \Cref{table:experiment} shows the capacity usage of the most loaded sector in the experiment. We run the experiment under two settings. In the first setting, we reallocate all file backups in one go, $100$ times. In the second setting, we allocate each file backup and then randomly refresh the location of a file backup $100N_{cp}$ times. In the experiment, we record the maximum capacity usage at any time; the results are shown in \cref{table:experiment}. A maximum capacity usage of less than $1$ means that no file backups are allocated to sectors with insufficient capacity.
Under the distributions tested in the experiments, we find that the probability that file backups are allocated to sectors with insufficient capacity is sufficiently small.
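A simplified version of the first setting can be reproduced in a few lines; unlike the actual experiment, the sketch assumes unit-size backups and equal-sized sectors, so the size/value distributions $[1]$--$[5]$ are not modeled:

```python
import random

def max_capacity_usage(N_cp, N_s, reallocations=100, seed=0):
    """Simplified experiment: N_cp unit-size backups are repeatedly
    reallocated uniformly at random over N_s equal-sized sectors whose
    total capacity is twice the total backup size (redundant capacity);
    the worst per-sector usage ever observed is returned."""
    rng = random.Random(seed)
    sector_cap = 2 * N_cp / N_s   # total capacity = 2 x total backup size
    worst = 0.0
    for _ in range(reallocations):
        load = [0] * N_s
        for _ in range(N_cp):
            load[rng.randrange(N_s)] += 1
        worst = max(worst, max(load) / sector_cap)
    return worst
```

The maximum usage stays slightly above $0.5$ (the expected per-sector usage under redundant capacity), in line with the $0.51$--$0.61$ range reported in the table.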
\begin{table}[]
\caption{\textbf{Experiment result:} maximum capacity usage of sectors}
\centering{\small
\begin{tabular}{c c|c c c c c}
\hline\hline
\multicolumn{7}{c}{reallocate all file backups $100$ times} \
\cr \cline{1-7}
\multicolumn{2}{c}{parameter} & \multicolumn{5}{c}{maximum capacity usage} \
\cr \cline{1-2} \cline{3-7}
$N_{cp}$ & $N_s$ & $[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$\\
\hline
$10^5$ & 5 & 0.511 & 0.508 & 0.514 & 0.511 & 0.509\\\hline
$10^5$ & 10 & 0.519 & 0.518 & 0.521 & 0.518 & 0.515\\\hline
$10^5$ & 20 & 0.525 & 0.524 & 0.536 & 0.530 & 0.529\\\hline
$10^5$ & 50 & 0.565 & 0.539 & 0.558 & 0.549 & 0.548\\\hline
$10^5$ & 100 & 0.571 & 0.566 & 0.584 & 0.572 & 0.569\\\hline
$10^6$ & 50 & 0.515 & 0.513 & 0.517 & 0.515 & 0.513\\\hline
$10^6$ & 100 & 0.522 & 0.523 & 0.530 & 0.530 & 0.521\\\hline
$10^6$ & 200 & 0.538 & 0.530 & 0.542 & 0.534 & 0.533\\\hline
$10^6$ & 500 & 0.558 & 0.548 & 0.569 & 0.570 & 0.557\\\hline
$10^6$ & 1000 & 0.591 & 0.571 & 0.598 & 0.594 & 0.576\\\hline
$10^7$ & 500 & 0.516 & 0.515 & 0.522 & 0.521 & 0.518\\\hline
$10^7$ & 1000 & 0.524 & 0.521 & 0.531 & 0.528 & 0.524\\\hline
$10^7$ & 2000 & 0.540 & 0.534 & 0.544 & 0.545 & 0.534\\\hline
$10^7$ & 5000 & 0.562 & 0.554 & 0.581 & 0.573 & 0.560\\\hline
$10^7$ & 10000 & 0.589 & 0.576 & 0.609 & 0.606 & 0.585\\\hline
$10^8$ & 5000 & 0.520 & 0.518 & 0.522 & 0.520 & 0.517\\\hline
$10^8$ & 10000 & 0.526 & 0.525 & 0.537 & 0.529 & 0.524\\\hline
$10^8$ & 20000 & 0.541 & 0.534 & 0.550 & 0.547 & 0.538\\\hline
$10^8$ & 50000 & 0.562 & 0.555 & 0.580 & 0.571 & 0.559\\\hline
$10^8$ & $10^5$ & 0.591 & 0.582 & 0.614 & 0.599 & 0.586\\\hline
\hline
\end{tabular}
\begin{tabular}{c c|c c c c c}
\hline\hline
\multicolumn{7}{c}{refresh the location of a file backup $100N_{cp}$ times} \
\cr \cline{1-7}
\multicolumn{2}{c}{parameter} & \multicolumn{5}{c}{maximum capacity usage} \
\cr \cline{1-2} \cline{3-7}
$N_{cp}$ & $N_s$ & $[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$\\
\hline
$10^5$ & 5 & 0.517 & 0.511 & 0.519 & 0.515 & 0.514\\\hline
$10^5$ & 10 & 0.524 & 0.523 & 0.529 & 0.522 & 0.519\\\hline
$10^5$ & 20 & 0.532 & 0.529 & 0.538 & 0.535 & 0.531\\\hline
$10^5$ & 50 & 0.550 & 0.551 & 0.566 & 0.554 & 0.557\\\hline
$10^5$ & 100 & 0.588 & 0.571 & 0.599 & 0.595 & 0.581\\\hline
$10^6$ & 50 & 0.518 & 0.516 & 0.521 & 0.519 & 0.517\\\hline
$10^6$ & 100 & 0.525 & 0.522 & 0.532 & 0.529 & 0.526\\\hline
$10^6$ & 200 & 0.536 & 0.535 & 0.546 & 0.542 & 0.541\\\hline
$10^6$ & 500 & 0.565 & 0.563 & 0.582 & 0.575 & 0.562\\\hline
$10^6$ & 1000 & 0.592 & 0.581 & 0.610 & 0.605 & 0.589\\\hline
$10^7$ & 500 & 0.520 & 0.518 & 0.525 & 0.523 & 0.522\\\hline
$10^7$ & 1000 & 0.533 & 0.527 & 0.534 & 0.533 & 0.531\\\hline
$10^7$ & 2000 & 0.542 & 0.535 & 0.553 & 0.549 & 0.540\\\hline
$10^7$ & 5000 & 0.565 & 0.562 & 0.586 & 0.582 & 0.569\\\hline
$10^7$ & 10000 & 0.610 & 0.591 & 0.626 & 0.613 & 0.599\\\hline
$10^8$ & 5000 & 0.529 & 0.527 & 0.539 & 0.532 & 0.529\\\hline
$10^8$ & 10000 & 0.542 & 0.536 & 0.543 & 0.546 & 0.537\\\hline
$10^8$ & 20000 & 0.551 & 0.547 & 0.560 & 0.558 & 0.548\\\hline
$10^8$ & 50000 & 0.575 & 0.569 & 0.599 & 0.584 & 0.577\\\hline
$10^8$ & $10^5$ & 0.611 & 0.604 & 0.639 & 0.628 & 0.611\\\hline
\hline
\end{tabular}}
\begin{tablenotes}
\footnotesize
\item $[1]$: Uniform distribution in interval $[0,1]$
\item $[2]$: Uniform distribution in interval $[1,2]$
\item $[3]$: Exponential distribution
\item $[4]$: Normal distribution with $\mu = \sigma^2$
\item $[5]$: Normal distribution with $\mu = 2\sigma^2$
\end{tablenotes}
\label{table:experiment}
\end{table}
\section{Preliminaries}\label{prelimiaries}
FileInsurer is a protocol for building a blockchain-based Decentralized Storage Network (DSN)~\cite{benisi2020blockchain}. A DSN can be structured as an independent blockchain, or as a decentralized application (DApp) running on an existing blockchain or another type of distributed network. In a DSN, a group of participants called {\em storage providers} rent out their unused hardware storage space to store the {\em clients'} files, thereby realizing distributed file storage.
In this section, we introduce the structure and components of the FileInsurer protocol. In particular, we deploy \emph{Dynamic Replication} (DRep) to support dynamic content in sectors
at low cost, which is important for ensuring \emph{storage randomness}. We also explain why compensation is necessary in DSN.
\subsection{Participants}
There are two kinds of participants in DSN: \textit{clients} and \textit{storage providers}.
\subsubsection{Client}
Clients are the participants who want to store files in the network. A client declares which file needs to be stored via a \textsf{File\_Add} request. Once her file is stored, the client pays rent for the storage service at periodic intervals (introduced in Section \ref{fee_mechanism}), depending on the file's value and size.
Clients can also ask DSN to discard previously stored files via a \textsf{File\_Discard} request. Besides, clients can retrieve any file stored in DSN via a \textsf{File\_Get} request by paying the retrieval fee. Since uploaded files are public in DSN, a client who is concerned about privacy can encrypt her files before uploading.
\subsubsection{Storage Providers}
Storage providers are the participants who rent out their hard disks to store clients' files and offer a file-retrieval service in exchange for payments.
When receiving a \textsf{File\_Add} request from a client, DSN
automatically selects several independent storage providers to store the file, so that robustness is guaranteed by replication.
After receiving a file from a client, a provider declares that she has obtained the file via a \textsf{File\_Confirm} request. In addition, while storing a file, a provider must repeatedly submit proofs of file storage to DSN at each specified checkpoint via \textsf{File\_Prove} requests, showing that she is still storing the file. To guarantee security, each storage provider must pledge a deposit, so that her deposit can be liquidated to compensate clients' losses once her disk is corrupted. When a client requests retrieval of a specified file, the providers storing this file compete to respond to the request for the corresponding payment. Hence a {\em Retrieval Market} is formed, in which clients and providers exchange files without DSN acting as a witness.
\subsection{Data Structures}
\Cref{fig:datastructure} shows a brief description of the data structures of the FileInsurer. There are four main data structures, which are sector, file descriptor, allocation table, and pending list.
\subsubsection{Sector}
A disk sector is the smallest unit that a provider rents out to store files. Sector sizes vary but must be an integer multiple of a minimum value $minCapacity$, which can be set to $64$GB or another deterministic value. A sector is considered corrupted as soon as any bit in it is destroyed, and a file is missing if and only if all sectors storing it are corrupted. In the FileInsurer protocol, providers may divide their storage space into multiple sectors, but are not allowed to register multiple disks as the same sector. In addition, FileInsurer requires that each file be stored in its entirety within a single sector, rather than being dispersed across multiple sectors.
This requirement ensures that the owner of a lost file obtains compensation that fully makes up for her loss.
\subsubsection{File descriptor}
The file descriptor $f$ describes a file stored in the network, including its size, value, Merkle root, the number of copies, and other necessary information. When a file is stored, the following two conditions must be satisfied.
\begin{itemize}
\item The total size of the files stored in a sector must not exceed the capacity of this sector.
\item If a file $f$ is lost, meaning that all sectors storing it are corrupted, then the total deposit of these sectors must be at least $f.value$, so as to make up for the loss of the file's owner.
\end{itemize}
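The two conditions above can be sketched as a feasibility check over a candidate set of sectors. The field names (`freeCap`, a per-sector `deposit` attributable to the sector at registration) and the helper itself are illustrative assumptions, not protocol definitions.

```python
# Illustrative sketch of the two storage conditions: per-sector capacity
# and aggregate deposit coverage for the worst case in which every
# chosen sector is corrupted.
def allocation_feasible(file_size, file_value, sectors):
    # 1) the file must fit entirely into every chosen sector,
    #    since a file is never dispersed across sectors
    if any(s["freeCap"] < file_size for s in sectors):
        return False
    # 2) if all chosen sectors are corrupted, their combined deposit
    #    must cover the file's value
    return sum(s["deposit"] for s in sectors) >= file_value

ok = allocation_feasible(
    file_size=4, file_value=30,
    sectors=[{"freeCap": 10, "deposit": 16}, {"freeCap": 6, "deposit": 16}],
)
```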
\subsubsection{Allocation table}
FileInsurer selects feasible sectors to store each file and records the choice in the allocation table. The allocation table is updated when a file is stored in the network, when a file is discarded, or when the storage location of a file is transferred. The allocation table is part of the network consensus and supports fast random access.
\subsubsection{Pending list}
In the design of FileInsurer, some tasks need to be executed automatically at a specific time in the future, such as regularly checking whether a file is stored correctly. FileInsurer therefore maintains a pending list that records these tasks and their corresponding execution times. When a new time point $t$ is reached, the tasks in the pending list whose timestamp is $t$ are automatically executed by the network. As the gas fee for these tasks must be paid in advance, tasks placed in the
pending list must have a clear upper bound on gas usage. In the basic design of FileInsurer, these tasks are generated only through network consensus.
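A minimal sketch of the pending list follows: tasks are keyed by their execution time and drained when the network reaches that time point. The string task representation and the `tick` driver are illustrative assumptions; real tasks would additionally carry a prepaid gas bound.

```python
from collections import defaultdict

# Minimal pending-list sketch: time point -> list of scheduled tasks.
pending = defaultdict(list)

def schedule(time, task):
    # in the protocol, only network consensus may generate such tasks,
    # and the gas for each task is paid in advance (not modeled here)
    pending[time].append(task)

def tick(now):
    # execute (here: return) every task whose timestamp equals `now`
    return pending.pop(now, [])

schedule(5, "check_proof(f1)")
schedule(5, "refresh(f2)")
schedule(7, "check_alloc(f3)")
ran = tick(5)
```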
\begin{figure}[ht]
\begin{algorithm}[H]
\caption{Data structures}
\small
\setstretch{.9}
\textbf{Sector}\\
$\textsf{sector}: (\textsf{owner},~ \textsf{id},~ \textsf{capacity},~ \textsf{freeCap},~ \textsf{state})$
\begin{itemize}
\item \textsf{owner}: the provider who owns the sector.
\item \textsf{id}: the id of the sector, a provider cannot have two sectors with the same \textsf{id}.
\item \textsf{capacity}: the storage capacity of the sector.
\item \textsf{freeCap}: current free capacity of the sector.
\item \textsf{state}: \texttt{normal} means this sector has capacity to accept new files, \texttt{disable} means the sector no longer accepts new files.
\end{itemize}
\textbf{File descriptor}\\
$\textsf{fileDescriptor}: (\textsf{size},~\textsf{value},~\textsf{merkleRoot},~\textsf{cp},~\textsf{cntdown},~\textsf{state})$
\begin{itemize}
\item \textsf{size}: the size of the file.
\item \textsf{value}: the value of the file.
\item \textsf{merkleRoot}: Merkle root of the file.
\item \textsf{cp}: the number of replicas to be stored in the network, determined by the file value.
\item \textsf{cntdown}: the number of checkpoints until the next refresh of the file's storage location.
\item \textsf{state}: \texttt{normal} means this file needs to be stored, \texttt{discard} means this file is discarded.
\end{itemize}
\textbf{Allocation table}\\
$\textsf{allocTable}: \left\{\left(\textsf{fileDescriptor},~\textsf{index}\right)\rightarrow \textsf{allocEntry}\right\}$
$\textsf{allocEntry}: (\textsf{prev},~\textsf{next},~\textsf{last},~\textsf{state})$
\begin{itemize}
\item \textsf{prev}: the current sector storing the file.
\item \textsf{next}: the next sector to store the file.
\item \textsf{last}: time of the last proof of storage.
\item \textsf{state}: \texttt{alloc} means the file is being (re)allocated to a sector, \texttt{confirm} means that the file is confirmed by the next sector to store, \texttt{normal} means the current sector is storing the file, \texttt{corrupted} means the current sector is corrupted.
\end{itemize}
\textbf{Pending list}\\
$\textsf{pendingList}: \left\{\textsf{time}\rightarrow \left[\textsf{task},\textsf{task},...\right]\right\}$
\begin{itemize}
\item \textsf{time}: time point when the tasks need to be automatically executed.
\item \textsf{task}: description and parameters of the task to be executed.
\end{itemize}
\end{algorithm}
\caption{\textbf{The data structures of FileInsurer}}
\label{fig:datastructure}
\end{figure}
\subsection{Interactions between Participants and Network}
This subsection introduces in detail the aforementioned operations performed by clients and storage providers.
\subsubsection{Client requests}
\begin{itemize}
\item \textsf{File\_Add}: \textit{Client stores a file in DSN.}\\
A client submits an order through a \textsf{File\_Add} request to inform DSN of the file's descriptor $f$, containing the size $f.size$, value $f.value$, Merkle root $f.merkleRoot$, the number of replicas $f.cp$, and other necessary information. DSN automatically allocates $f.cp$ feasible sectors. Once these sectors are found, the client transmits the file to them.
\item \textsf{File\_Discard}: \textit{Client discards a file stored in DSN.}\\
Clients do not need to specify in advance how long a file will be stored. Instead, a client can discard a file at any time by submitting a \textsf{File\_Discard} request to DSN containing the file's descriptor $f$.
\item \textsf{File\_Get}: \textit{Client retrieves a file from DSN.}\\
Each client can request any file in DSN via a \textsf{File\_Get} request by paying a certain amount of tokens. Because the requested file is available in multiple providers' sectors, the retrieval request can be satisfied by receiving a copy from any one of these providers.
\end{itemize}
\subsubsection{Provider requests}
\begin{itemize}
\item \textsf{Sector\_Register}: \textit{Provider registers a new sector in DSN.}\\
When providers launch a new storage space, they have two options: register the whole space as a single sector, or divide it into several parts, each registered as a separate sector. When a sector is registered, the provider pledges a deposit proportional to the sector's capacity.
\item \textsf{Sector\_Disable}: \textit{An operation to affirm that a sector no longer accepts new files.}\\
In the FileInsurer protocol, providers are not allowed to revoke sectors they have previously leased to the network. Instead, when a provider decides to stop providing storage service from a sector, she declares the sector disabled, meaning that it no longer accepts any new files. After all files stored in this sector have been reallocated to other sectors by the network, the sector is removed from the network.
\item \textsf{File\_Confirm}: \textit{The provider confirms to the network that a file has been received.}\\
The network automatically assigns storage sectors for each file, and the provider of each assigned sector must confirm to the network after receiving the client's file.
\item \textsf{File\_Prove}: \textit{The provider submits the certificate to the network of its correct storage of files.}\\
While providers store files, they must repeatedly submit proofs of replication to show that they are still storing them. Proofs are posted on and verified by DSN.
\item \textsf{File\_Supply}: \textit{The provider responds a \textsf{File\_Get} request from a client.}\\
Once the supply and demand for a file have been matched, the transmission of the file is carried out off-chain.
\end{itemize}
\subsection{Dynamic Content in Sectors}\label{PoRep}
In FileInsurer, the content of a sector must change from time to time, which is necessary to ensure \emph{storage randomness}. FileInsurer resists Sybil attacks by storing files as multiple unique replicas. In FileInsurer, these replicas are generated by PoRep, and the free capacity of a sector must be proven to be genuinely available. A trivial approach is to re-encode the whole sector into a new replica via PoRep whenever its content changes. However, this is not a wise solution, because it would place an extremely high burden on providers and require far more PoRep verification.
We propose a novel solution called \emph{Dynamic Replication} (DRep) to solve this problem. Unlike Filecoin, we do not encode a whole sector into one replica; instead, each file in a sector is a unique replica. We define a \emph{Capacity Replica} (CR) as a replica of zero bits generated by the PoRep process. When a sector is registered, it is exactly filled with $l$ unique CRs. While storing files, the sector is required to contain as many CRs as possible, so the unsealed space of a sector is always smaller than the size of one CR. Figure~\ref{fig:dcs} shows some examples of DRep.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{sector.png}
\caption{\textbf{Examples of DRep}: Initially the sector contains six Capacity Replicas (as shown in (a)). After filling some files, there are two Capacity Replicas left (as shown in (b)). When the total size of files decreases, the provider regenerates the $CR3$ (as shown in (c)).}
\label{fig:dcs}
\end{figure}
Using CRs to ensure that the free space of sectors is genuinely available is efficient. All CRs only need to be generated by PoRep once and are then continuously verified via WindowPoSt~\cite{benet2018filecoin}. If a CR has been discarded, the provider can regenerate it via $\textsf{PoRep.Setup}$, because the raw data of a CR are all zeros; the full PoRep process is unnecessary, since the Merkle roots of CRs have already been verified. Therefore, DRep does not place an extra verification burden on the DSN, and providers do not need to generate the SNARK of PoRep again.
Additionally, FileInsurer changes the location of replicas at low cost. Suppose a replica of a file $f$ needs to be transferred to another sector. The provider does not need to generate a new replica of $f$ by PoRep, but simply transfers the old one. A liveness issue arises: the provider may fail to transfer the replica of $f$ to the successor provider. This is not a problem, however, because the successor can fetch the source data of $f$ from other providers and recover the replica via \textsf{PoRep.Setup}. As with CRs, these replicas do not need to be verified again, since they can be recovered from the raw file. Therefore, the movement of replicas is efficient.
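The CR invariant illustrated in Figure~\ref{fig:dcs} can be sketched numerically: a sector keeps as many sealed CRs as fit into its free space, so its unsealed space is always smaller than one CR. The units and the `cr_size` value below are illustrative assumptions.

```python
# Sketch of the DRep Capacity Replica (CR) invariant. Sizes are in
# arbitrary units; cr_size is the size of one zero-filled replica.
def cr_count(capacity, file_sizes, cr_size):
    free = capacity - sum(file_sizes)
    assert free >= 0, "files exceed sector capacity"
    return free // cr_size  # sealed CRs the provider must maintain

def unsealed_space(capacity, file_sizes, cr_size):
    free = capacity - sum(file_sizes)
    # whatever free space cannot hold a whole CR stays unsealed
    return free - cr_count(capacity, file_sizes, cr_size) * cr_size

# a freshly registered sector of six CR-sizes holds exactly six CRs
n0 = cr_count(capacity=24, file_sizes=[], cr_size=4)
# after storing files of total size 14, two CRs remain
n1 = cr_count(capacity=24, file_sizes=[9, 5], cr_size=4)
```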
\subsection{Storage Market and Retrieval Market}
Similar to Filecoin, there are two markets in FileInsurer, the Storage Market and the Retrieval Market.
The Retrieval Market in FileInsurer is the same as that in Filecoin. In the DSN, clients can send requests to retrieve any file $f$, and any participant holding $f$ or a replica of $f$ can answer the request. The retrieval process is accomplished via the BitSwap protocol of IPFS.
However, the Storage Market in FileInsurer is quite different from that in Filecoin. In FileInsurer, the price of storing a file is determined by the size and the value of the file. Clients do not need to negotiate storage prices with providers, and do not even need to specify who stores their files. Prices for storage services may change over time, as discussed in Section~\ref{fee_mechanism}.
\subsection{Source of Randomness}
Like other DSN protocols, FileInsurer needs a large number of on-chain random bits. To achieve this at low cost, we use a pseudorandom number generator~\cite{james1990review,pareek2014overview} to expand a short random beacon into a long pseudo-random bit string. The problem of generating an unbiased and unpredictable public random beacon on a blockchain has been well studied~\cite{cachin2005random,bhat2021randpiper,das2021spurt}. Combining these two techniques, we can cheaply obtain enough public pseudo-random bits. In this paper, we omit the implementation details of generating and using random bits, as they are not our main contribution.
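As an illustration, a short public beacon can be expanded deterministically with a simple hash-counter construction. The concrete PRNG is deliberately left open in this paper, so the construction below is an assumption for exposition only.

```python
import hashlib

# Sketch: expand a short on-chain random beacon into a long
# deterministic pseudo-random byte stream by hashing beacon||counter.
def expand_beacon(beacon: bytes, n_bytes: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(beacon + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

stream = expand_beacon(b"round-42-beacon", 1000)
```

Every node that sees the same beacon derives the same bit string, which is exactly the property needed for public, reproducible sector sampling.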
\subsection{Necessity of Compensation in DSN}\label{comneeded}
Compensation is needed in DSN because the scalability of DSN leads to an unavoidable risk of losing data. By necessity, scalable storage means that each participant in DSN stores only a very small part of all the data; hence, much of the data in DSN is stored by only a small fraction of participants.
Consequently, if a constant fraction of sectors (or storage capacity), for example $0.1$, crashes instantaneously, some data may be lost. DSN thus carries an inherent risk of file loss.
To illustrate, consider some concrete examples. In Storj, a file is lost if more of its shards become unavailable than the erasure code can recover. In Filecoin, a file is lost if and only if all sectors storing its replicas crash.
To balance the safety and the scalability of DSN, compensation is an effective way to motivate users to take part in distributed file storage: a reasonable deposit compensates users for losses caused by missing data.
\section{Proof of Theorem 3}
We define the state of a FileInsurer network as $(F,S,A,C)$, consisting of the files $F$, sectors $S$, all allocations $A$, and corrupted bits $C$ in the network. We also define $V_{lost}^{(F,S,A,C)}$ as the total value of the lost files and $V_{confiscated}^{(F,S,A,C)}$ as the confiscated deposits of the corrupted sectors.
\begin{lemma}
For a specific state $(F,S,A,C)$, keeping the content and availability of each physical disk in the network unchanged, the network can be viewed as another state $(F',S,A',C)$ in which the value of each file is $minValue$. The state $(F',S,A',C)$ satisfies $V_{lost}^{(F,S,A,C)}\leq V_{lost}^{(F',S,A',C)}$.
\label{lemma:filesize}
\end{lemma}
\begin{proof}
We divide each file descriptor $f$ into $\frac{f.value}{minValue}$ different file descriptors. These file descriptors all have value $minValue$ and the same Merkle root as $f$. We divide the $f.cp$ allocations of $f$ equally among these new file descriptors, so each new file descriptor has exactly $k$ allocations. Defining $F'$ as the set of these new file descriptors and $A'$ as the new file allocations, we construct a state $(F',S,A',C)$ in which the value of each file is $minValue$.
The content and availability of each physical disk in the network are the same in states $(F,S,A,C)$ and $(F',S,A',C)$. Since the state of the sectors is unchanged, we immediately obtain $V_{confiscated}^{(F,S,A,C)}= V_{confiscated}^{(F',S,A',C)}$. For each file $f$ lost in state $(F,S,A,C)$, all of its backups are lost, so every new file descriptor generated from $f$ is also lost in state $(F',S,A',C)$. Since the value of $f$ equals the sum of the values of the file descriptors it generates, $V_{lost}^{(F,S,A,C)}\leq V_{lost}^{(F',S,A',C)}$.
\end{proof}
\begin{lemma}
$\forall 0<p\leq \frac{1}{5}$ and $5p\leq x\leq 1$, $D_{KL}(x||p)\geq \frac{1}{2} x\log\frac{x}{p}$.
\label{lemma:1}
\end{lemma}
\begin{proof}
Throughout the proof below, we use $x\geq p$ without further mention. Let
\[
\begin{cases}
f(x)=x^2\log\frac{x}{p}-(1-x)^2\log\frac{1-x}{1-p}\\
g(x)=\log\frac{x-1}{p-1} \left(-\log\frac{x}{p}+x-1\right)-x \log\frac{x}{p}\\
h(x)=\frac{x\log\frac{x}{p}}{-(1-x)\log\frac{1-x}{1-p}}
\end{cases}
\]whose domain is $x\in[p,1]$. First, we have $f(x)\geq 0$ because $\frac{\mathrm{d}f}{\mathrm{d}x}=1+2D_{KL}(x||p)\geq 0$ and $f(p)=0$. Then, we have $g(x)\geq 0$ because $\frac{\mathrm{d}g}{\mathrm{d}x}=\frac{f(x)}{x(1-x)}\geq 0$ and $g(p)=0$. Finally, we obtain $h(x)$ is monotonically increasing because $\frac{\mathrm{d}h}{\mathrm{d}x}=\frac{g(x)}{(x-1)^2 \log ^2\frac{1-x}{1-p}}\geq 0$.
Because $h(x)$ is monotonically increasing, $\forall x\geq 5p$, $ h(x)\geq h(5p)=\frac{5 p \log 5}{(1-5p) \log\frac{1-p}{1-5p}}$. We can find that
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}p} \frac{5 p \log 5}{(1-5p) \log\frac{1-p}{1-5p}}
= & \frac{5 \log 5 \left((1-p) \log\frac{1-p}{1-5 p}-4 p\right)}{(1-5 p)^2 (1-p) \log ^2\left(\frac{1-p}{1-5 p}\right)}\\
\geq & \frac{5 \log 5 \left((1-p) (1-\frac{1-5p}{1-p})-4 p\right)}{(1-5 p)^2 (1-p) \log ^2\left(\frac{1-p}{1-5 p}\right)}\\
= & 0.
\end{align*}
As $p\rightarrow 0$, $\frac{5 p \log 5}{(1-5p) \log\frac{1-p}{1-5p}}\rightarrow\frac{5}{4}\log 5>2$. Since the expression is nondecreasing in $p$, we have $\frac{5 p \log 5}{(1-5p) \log\frac{1-p}{1-5p}}>2$ for all $0< p\leq \frac{1}{5}$; that is, $\forall x\geq 5p$, $h(x)>2$. Finally,
\begin{align*}
D_{KL}(x||p)&= x\log\frac{x}{p} + (1-x)\log\frac{1-x}{1-p}\\
&= x\log\frac{x}{p} \left (1+\frac{(1-x)\log\frac{1-x}{1-p}}{x\log\frac{x}{p}}\right)\\
&= x\log\frac{x}{p} \left (1-\frac{1}{h(x)}\right)\\
&\geq \frac{1}{2} x\log\frac{x}{p}.
\end{align*}
\end{proof}
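As a numeric spot-check of the lemma above (an illustration, not a proof), one can evaluate $D_{KL}(x||p)$ against $\frac12 x\log\frac{x}{p}$ on a grid over the stated parameter range:

```python
import math

# Check D_KL(x||p) >= (1/2) * x * log(x/p) for 0 < p <= 1/5 and
# 5p <= x <= 1, on a finite grid (natural logarithms throughout).
def dkl(x, p):
    if x == 1:
        return math.log(1 / p)
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

violations = 0
for p in (0.01, 0.05, 0.1, 0.2):
    for i in range(1, 101):
        x = 5 * p + (1 - 5 * p) * i / 100  # grid over (5p, 1]
        if dkl(x, p) < 0.5 * x * math.log(x / p) - 1e-12:
            violations += 1
```

The inequality is nearly tight at $x=5p$, which matches the proof's use of $h(5p)$ as the critical point.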
\begin{lemma}
Assume that the total size of the corrupted sectors is $\lambda N_s \times minCapacity$, and denote the total value of the lost files by $V_{lost}$. Then, with probability at least $1-c$, $V_{lost}$ satisfies
{\small
\[
V_{lost} \leq minValue \times \max\left\{5N_v\lambda^k,N_v\lambda^\frac{k}{2},4\frac{\log\binom{N_s}{\lambda N_s}-\log c}{k\log\frac{1}{\lambda}}\right\}.
\]
}
\label{lemma:2}
\end{lemma}
\begin{proof}
By \cref{lemma:filesize}, we may assume in the subsequent analysis that each file has value $minValue$. Under this relaxation, the setting simplifies as follows: there are $N_v$ files, each with the same value $minValue$, and each file needs to be stored in $k$ sectors. Each storage location of each file is sampled independently and identically.
For any fixed scheme in which the adversary corrupts a $\lambda$ fraction of capacity, i.e., the total size of the corrupted sectors is $\lambda N_s \times minCapacity$, define the random variable $X_i$ as the indicator that file $f_i$ is lost. Recall that \emph{storage randomness} means all replicas are evenly and randomly distributed. Then all $X_i$ are independent and $\Pr[X_i=1]=\lambda^{k}$ whenever sectors with total capacity $\lambda N_s \times minCapacity$ are corrupted.
Denote $\gamma = \frac{V_{lost}}{minValue}$.
By Chernoff bound and \cref{lemma:1}, when $\gamma \geq 5N_v\lambda^k $, we obtain
\begin{align*}
& \Pr\left[\sum_iX_i \geq \gamma \right]\\
\leq & \exp\left\{-N_v\left(\frac{\gamma}{N_v}\log\frac{\gamma}{N_v\lambda^k}+\left(1-\frac{\gamma}{N_v}\right)\log\frac{N_v-\gamma}{N_v-N_v\lambda^k}\right)\right\}\\
\leq & \exp\left\{-\frac{\gamma}{2}\log\frac{\gamma}{N_v\lambda^k}\right\}.
\end{align*}
Consider the number of ways in which an adversary can corrupt sectors with total capacity $\lambda N_s \times minCapacity$. We apply a simple scaling that treats a sector of capacity $s.capacity$ as $\frac{s.capacity}{minCapacity}$ sectors of capacity $minCapacity$. After this scaling, the adversary has $\binom{N_s}{\lambda N_s}$ options for which sectors to corrupt. Therefore, in the original setting, the number of such corruption schemes does not exceed $\binom{N_s}{\lambda N_s}$.
By the union bound, when $\gamma\geq 5N_v\lambda^k$, the probability that no adversary scheme can cause $\gamma$ lost files is at least
\[
1-\binom{N_s}{\lambda N_s}\exp\left\{-\frac{\gamma}{2}\log\frac{\gamma}{N_v\lambda^k}\right\}.
\]
When $\gamma\geq N_v\lambda^\frac{k}{2}$, we have $\log\frac{\gamma}{N_v\lambda^k}\geq \log\left(\lambda^{\frac{-k}{2}}\right)=-\frac{k}{2}\log\lambda$. Then we find that
\begin{align*}
& \gamma \geq 4\frac{\log\binom{N_s}{\lambda N_s}-\log c}{k\log\frac{1}{\lambda}}\\
\Leftrightarrow & \gamma\frac{k}{4}\log\frac{1}{\lambda} \geq -\log\frac{c}{\binom{N_s}{\lambda N_s}}\\
\Rightarrow & \frac{\gamma}{2}\log\frac{\gamma}{N_v\lambda^k} \geq -\log\frac{c}{\binom{N_s}{\lambda N_s}}\\
\Leftrightarrow & -\frac{\gamma}{2}\log\frac{\gamma}{N_v\lambda^k} \leq \log\frac{c}{\binom{N_s}{\lambda N_s}}\\
\Leftrightarrow & \exp\left\{-\frac{\gamma}{2}\log\frac{\gamma}{N_v\lambda^k}\right\} \leq \frac{c}{\binom{N_s}{\lambda N_s}}\\
\Leftrightarrow & \binom{N_s}{\lambda N_s}\exp\left\{-\frac{\gamma}{2}\log\frac{\gamma}{N_v\lambda^k}\right\} \leq c.
\end{align*}
This shows that when $\gamma$ exceeds all three thresholds above, the probability that an adversary can cause $\gamma$ lost files does not exceed $c$. In other words, with probability at least $1-c$, $\gamma$ satisfies
\[
\gamma \leq\max\left\{5\lambda^kN_v,\lambda^\frac{k}{2}N_v,4\frac{\log\binom{N_s}{\lambda N_s}-\log c}{k\log\frac{1}{\lambda}}\right\}.
\]
Therefore, with probability at least $1-c$, $V_{lost}$ satisfies
{\small
\[
V_{lost} \leq minValue \times \max\left\{5N_v\lambda^k,N_v\lambda^\frac{k}{2},4\frac{\log\binom{N_s}{\lambda N_s}-\log c}{k\log\frac{1}{\lambda}}\right\}.
\]
}
\end{proof}
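The probabilistic core of the preceding proof, that a file with $k$ independently placed replicas is lost with probability $\lambda^k$ when a $\lambda$ fraction of capacity is corrupted, can be checked with a small Monte Carlo sketch. Unit-capacity sectors and all parameter values are illustrative simplifications.

```python
import random

# Monte Carlo sketch: i.i.d. uniform replica placement over unit-
# capacity sectors, with a fixed adversarial set covering a `lam`
# fraction of them; a file is lost iff all k replicas land there.
def empirical_loss_rate(n_files, k, n_sectors, lam, rng):
    corrupted = set(range(int(lam * n_sectors)))  # adversary's fixed choice
    lost = 0
    for _ in range(n_files):
        if all(rng.randrange(n_sectors) in corrupted for _ in range(k)):
            lost += 1
    return lost / n_files

rng = random.Random(1)
rate = empirical_loss_rate(200_000, k=3, n_sectors=100, lam=0.2, rng=rng)
# theory predicts lambda ** k = 0.2 ** 3 = 0.008
```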
\throb*
\begin{proof}
We now apply an upper bound on the binomial coefficient obtained via Stirling's formula,
\begin{align*}
\binom{N_s}{\lambda N_s} & = \frac{N_s!}{(\lambda N_s)!(N_s-\lambda N_s)!}\\
& \leq \frac{e}{2\pi}\frac{N_s^{N_s+\frac12}}{(\lambda N_s)^{\lambda N_s+\frac12}(N_s-\lambda N_s)^{N_s-\lambda N_s+\frac12}}\\
& = \frac{e}{2\pi}\sqrt{\frac{1}{N_s\lambda(1-\lambda)}}\left(\frac{1}{\lambda^\lambda(1-\lambda)^{1-\lambda}}\right)^{N_s}\\
& \leq \frac{e}{2\pi}\left(\frac{1}{\lambda^\lambda(1-\lambda)^{1-\lambda}}\right)^{N_s}
\end{align*}
Using this upper bound, we obtain a simpler bound on $\gamma_{lost}^v$:
\begin{align*}
\gamma_{lost}^v & \leq\max\left\{5\lambda^k,\lambda^\frac{k}{2},4\frac{\log\frac{e}{2\pi}-N_s\log\left(\lambda^\lambda(1-\lambda)^{1-\lambda}\right)-\log c}{N_vk\log\frac{1}{\lambda}}\right\}\\
& = \max\left\{5\lambda^k,\lambda^\frac{k}{2},4\frac{\frac{\log\frac{e}{2\pi}-\log c}{N_s}-\log\left(\lambda^\lambda(1-\lambda)^{1-\lambda}\right)}{capPara\cdot \gamma_v^m k\log\frac{1}{\lambda}}\right\}.
\end{align*}
\end{proof}
\section{Proof of Theorem 4}
\thratio*
\begin{proof}
By assumption, the total size of the corrupted sectors is at most $\lambda N_s \times minCapacity$.
Because the deposit of the corrupted sectors must always cover the file loss, for all $\frac{1}{N_s}\leq \lambda^{\prime} \leq \lambda$ we must have $\lambda^{\prime} \gamma_{deposit} N_v^m \geq \gamma$, which is equivalent to
\[
\gamma_{deposit} \geq \max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda} \left\{ \frac{\gamma}{\lambda^{\prime} N_v^m}\right\}.
\]
Then, with probability at least $1-c$, the following $\gamma_{deposit}$ suffices for full compensation:
{\small
\[
\gamma_{deposit} \geq \max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda} \max \left\{5(\lambda^{\prime})^{k-1},(\lambda^{\prime})^{\frac{k}{2}-1},4\frac{\log\binom{N_s}{\lambda^{\prime} N_s}-\log c}{\lambda^{\prime}N_v^m k\log\frac{1}{\lambda^{\prime}}}\right\}.
\]
}
For the third term, we have
\begin{align*}
& \max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda} 4\frac{\log\binom{N_s}{\lambda^{\prime} N_s}-\log c}{\lambda^{\prime}N_v^m k\log\frac{1}{\lambda^{\prime}}}\\
\leq & \max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda}
\frac{4\log\left(N_s^{\lambda^{\prime} N_s}\right)-4\log c}{\lambda^{\prime} N_v^mk\log\frac{1}{\lambda^{\prime}}}\\
= & \max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda}
\frac{4\lambda^{\prime} N_s\log N_s-4\log c}{\lambda^{\prime} N_v^mk\log\frac{1}{\lambda^{\prime}}}\\
\leq & \left(\max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda}\frac{4N_s\log N_s}{N_v^mk\log\frac{1}{\lambda^{\prime}}}\right)+\left(\max_{\frac{1}{N_s}\leq\lambda^{\prime}\leq \lambda}\frac{-4\log c}{\lambda^{\prime} N_v^m k\log\frac{1}{\lambda^{\prime}}}\right)\\
\leq & \frac{4N_s\log N_s}{N_v^mk\log\frac{1}{\lambda}}+\frac{-4N_s\log c}{N_v^mk\log N_s}.
\end{align*}
Then
{\small
\[
\gamma_{deposit} \geq \max\left\{5 \lambda^{k-1},\lambda^{\frac{k}{2} - 1},
\frac{4N_s\log N_s}{N_v^mk\log\frac{1}{\lambda}}+\frac{-4N_s\log c}{N_v^mk\log N_s}\right\}.
\]
}
As $capPara = \frac{N_v^m}{N_s} $,
{\small
\[
\gamma_{deposit} \geq \max\left\{5 \lambda^{k-1},\lambda^{\frac{k}{2} - 1},
\frac{4}{k\times capPara }\left( \frac{\log N_s}{\log\frac{1}{\lambda}}+\frac{\log \frac{1}{c}}{\log N_s}\right)\right\}.
\]
}
\end{proof}
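The final bound can be evaluated numerically for concrete settings. The parameter values below are illustrative choices, not values prescribed by the paper.

```python
import math

# Evaluate the three terms of the gamma_deposit lower bound:
#   max{ 5*lam^(k-1),
#        lam^(k/2 - 1),
#        (4/(k*capPara)) * (log(Ns)/log(1/lam) + log(1/c)/log(Ns)) }
def deposit_bound(lam, k, n_s, cap_para, c):
    t1 = 5 * lam ** (k - 1)
    t2 = lam ** (k / 2 - 1)
    t3 = (4 / (k * cap_para)) * (
        math.log(n_s) / math.log(1 / lam) + math.log(1 / c) / math.log(n_s)
    )
    return max(t1, t2, t3)

# e.g. tolerating lambda = 0.1 corrupted capacity with k = 6 replicas
b = deposit_bound(lam=0.1, k=6, n_s=10_000, cap_para=100, c=1e-6)
```

For these values the third term dominates, and increasing $capPara$ (more stored value per sector) shrinks it.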
\subsection{Main Protocol}
\label{protocol}
\begin{figure*}[htbp]
\centering
\subfigure[\textbf{Storing files on the network}: First, the file is announced to the network, which returns the sectors where the file will be stored. The client then sends the file to those sectors. The file is successfully stored after the system executes \textsf{Auto\_CheckAlloc}, and rent is paid each time the system executes \textsf{Auto\_CheckProof}.]
{
\includegraphics[scale=.4]{fig/file.pdf}
}
\subfigure[\textbf{Renting sectors to the network}: After a sector has been registered, files are swapped into or out of the sector through \textsf{Auto\_Refresh} from time to time. Moreover, the network may also instruct the sector to take in new files via the corresponding \textsf{File\_Add}.]
{
\includegraphics[scale=.4]{fig/sector.pdf}
}
\caption{\textbf{Brief overview of the protocol}: How files and sectors interact with the network. The symbol ``F'' represents a file, and ``R'' represents a file replica.}
\label{brief}
\end{figure*}
\begin{table}[htbp]
\setstretch{0.6}
\caption{Descriptions of parameters and functions}
\label{table:description}
{\centering
\small
\begin{tabular}{c c}
\textbf{Notation} & \multicolumn{1}{p{0.6\columnwidth}}{\textbf{Description}} \\ [0.5ex]
\toprule
$RandomSector()$ & \multicolumn{1}{m{0.6\columnwidth}}{Sample a random sector. The probability of selecting each sector is proportional to its capacity.}\\
\midrule
$SampleExp(x)$ & \multicolumn{1}{m{0.6\columnwidth}}{Sample from an exponential distribution with mean $x$.}\\
\midrule
$RandomIndex(f)$ & \multicolumn{1}{m{0.6\columnwidth}}{Sample a number between $1$ and $f.cp$ uniformly at random.}\\
\midrule
$DelayPerSize$ & \multicolumn{1}{m{0.6\columnwidth}}{The maximum transmit time allowed per unit file size. This constant multiplied by the file size is the upper limit of the file transfer time allowed by the network.}\\
\midrule
$AvgRefresh$ & \multicolumn{1}{m{0.6\columnwidth}}{The average number of $ProofCycle$s between refreshes of a file's storage location.} \\
\midrule
$ProofCycle$ & \multicolumn{1}{m{0.6\columnwidth}}{Time interval between each inspection proof.}\\
\midrule
$ProofDue$ & \multicolumn{1}{m{0.6\columnwidth}}{The specified upper limit of the time elapsed since the last proof.}\\
\midrule
$ProofDeadline$ & \multicolumn{1}{m{0.6\columnwidth}}{The tolerable upper limit of the time elapsed since the last proof.}\\
\bottomrule\\
\end{tabular}}
\end{table}
FileInsurer mainly includes three parts:
\begin{itemize}
\item \textsf{File\_}: protocols with the \textsf{File\_} prefix handle the storage of data on the network,
\item \textsf{Sector\_}: protocols with the \textsf{Sector\_} prefix handle sector registration and revocation,
\item \textsf{Auto\_}: protocols with the \textsf{Auto\_} prefix are mainly used for the maintenance of the network. They are special because they cannot be called by anyone and are executed automatically at specific times.
\end{itemize}
Figure \ref{brief} presents a brief overview of the FileInsurer protocol by explaining how files and sectors interact with the network. Table \ref{table:description} lists all parameters and functions used in the FileInsurer protocol.
\input{code/file_client}
\subsubsection{File\_}
\Cref{file:client} shows the network response to the clients' \textsf{File\_} requests. When a client makes a \textsf{File\_Add} request, the network first generates a file descriptor and samples $f.cp$ sectors for storage. The probability of each sector being selected is proportional to the capacity of this sector. The number of backups that need to be stored is calculated as $f.cp=\frac{f.value}{minValue}k$, where $minValue$ is a parameter representing the lower limit of the value of files the network stores, and each $f.value$ must be an integer multiple of $minValue$. Next, a waiting time is calculated, and the client needs to transfer the file to the owners of the selected sectors before the waiting time expires. Once the waiting time expires, a task named \textsf{Auto\_CheckAlloc} is performed automatically to confirm whether the file is successfully stored on the network. When a client submits a \textsf{File\_Discard} request, the network simply sets the state of the corresponding file descriptor to \texttt{discard}.
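As a rough illustration of this allocation step, the sketch below computes the backup count and samples storage sectors with probability proportional to capacity. The function and field names (\texttt{backup\_count}, \texttt{capacity}) are our own, not part of the protocol specification, and we sample with replacement for simplicity, matching the i.i.d. allocation assumed in the later analysis.

```python
import random

def backup_count(value, min_value, k):
    # f.cp = (f.value / minValue) * k; f.value is required to be an
    # integer multiple of minValue, so the result is an integer.
    assert value % min_value == 0
    return (value // min_value) * k

def sample_sectors(sectors, n):
    # RandomSector(): each sector is chosen with probability
    # proportional to its capacity (here, with replacement).
    weights = [s["capacity"] for s in sectors]
    return random.choices(sectors, weights=weights, k=n)

sectors = [{"id": i, "capacity": c} for i, c in enumerate([1, 2, 4, 8])]
cp = backup_count(value=300, min_value=100, k=3)   # 9 replicas
chosen = sample_sectors(sectors, cp)
```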
\input{code/file_provider}
\Cref{file:provider} illustrates the network response to providers' \textsf{File\_} requests. When receiving a \textsf{File\_Confirm} request, the network sets the state of the corresponding allocation entry to \texttt{confirm}, meaning the sector has successfully received the file. When the network receives a \textsf{File\_Prove} request, it updates the last proof time of the file storage after checking the correctness of the proof.
\input{code/sector}
\subsubsection{Sector\_}
It is simple for the network to respond to \textsf{Sector\_} requests. The pseudo-code is shown in Figure \ref{sector}. A new sector is registered when a \textsf{Sector\_Register} request is received, and the state of a sector is set to \texttt{disable} when a \textsf{Sector\_Disable} request is received.
Once all files in a disabled sector have been swapped out, the sector can be removed.
\subsubsection{Auto\_}
Note that the tasks with the \textsf{Auto\_} prefix cannot be called by anyone and are executed automatically at specific times. In the design of the FileInsurer protocol, the network maintains a pending list to ensure that these tasks are executed at the specified times. There are 4 kinds of tasks with the \textsf{Auto\_} prefix: \textsf{Auto\_CheckAlloc}, \textsf{Auto\_CheckProof}, \textsf{Auto\_Refresh}, and \textsf{Auto\_CheckRefresh}. In simple terms, \textsf{Auto\_CheckAlloc} checks that a file has been correctly stored on the network, \textsf{Auto\_CheckProof} performs periodic proof checking, while \textsf{Auto\_Refresh} and \textsf{Auto\_CheckRefresh} refresh the storage locations of files in order to ensure the randomness of storage. The period of proof checking should be short, whereas the frequency of refreshing file storage locations can be very low.
\input{code/auto_checkalloc}
\textsf{Auto\_CheckAlloc} is executed automatically at some time after a \textsf{File\_Add} request is responded to by the network. The network confirms whether all $f.cp$ sectors have received the file described by $f$. If so, the network changes the state of the file descriptor to \texttt{normal}; otherwise, it informs the client that the file upload failed.
\input{code/auto_checkproof}
Every file needs to be checked at specific times to verify that it is stored properly. In each time period, a task named \textsf{Auto\_CheckProof} automatically runs to check whether each proof of storage of the file is timely. We provide the pseudo-code of \textsf{Auto\_CheckProof} in Figure \ref{auto:checkproof}. We use WindowPoSt of Filecoin~\cite{benet2018filecoin} to implement the proof process. A sector is punished if it cannot submit the proof of storage of its files within $ProofDue$ time, and its corresponding deposit is liquidated if the proof cannot be provided within $ProofDeadline$ time.
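The timing rule of this check can be summarized by a small decision function. This is our own hedged sketch of the rule just described, not pseudo-code taken from the protocol.

```python
def check_proof(now, last_proof_time, proof_due, proof_deadline):
    # Auto_CheckProof decision for one file replica: the sector is
    # punished once the proof is older than ProofDue, and its deposit
    # is liquidated once the proof is older than ProofDeadline.
    elapsed = now - last_proof_time
    if elapsed <= proof_due:
        return "ok"
    if elapsed <= proof_deadline:
        return "punish"
    return "liquidate"
```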
\input{code/auto_refresh}
Whenever a random number\footnote{This random number follows an exponential distribution.} of checkpoints have passed, a task named \textsf{Auto\_Refresh} is called to randomly refresh one of the storage locations of a file. Figure \ref{auto:refresh} shows the details of \textsf{Auto\_Refresh} and the corresponding task \textsf{Auto\_CheckRefresh}. The probability of sampling the new storage sector is proportional to the capacity of the sector. The network then calculates a waiting time, and the current sectors that store this file need to transfer it to the selected sector before the waiting time expires. Once the waiting time expires, the task named \textsf{Auto\_CheckRefresh} is executed automatically to confirm whether the file is successfully stored in the new sector.
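A minimal sketch of the refresh scheduling, assuming the exponential waiting time has mean $AvgRefresh \times ProofCycle$; the function names are ours.

```python
import random

def schedule_next_refresh(now, avg_refresh, proof_cycle):
    # SampleExp: exponential waiting time with mean
    # avg_refresh * proof_cycle until the next Auto_Refresh.
    return now + random.expovariate(1.0 / (avg_refresh * proof_cycle))

def refresh_one_replica(file, sectors):
    # RandomIndex(f): pick one of the f.cp replicas uniformly at
    # random, then resample its sector proportionally to capacity.
    idx = random.randrange(file["cp"])
    new_sector = random.choices(
        sectors, weights=[s["capacity"] for s in sectors], k=1)[0]
    return idx, new_sector

f = {"cp": 5}
secs = [{"capacity": 1}, {"capacity": 3}]
t = schedule_next_refresh(0.0, 4, 1.0)
i, s = refresh_one_replica(f, secs)
```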
\section{Related Works}\label{related}
\subsection{InterPlanetary File System (IPFS)}
The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files~\cite{benet2014ipfs}. Files, identified by their cryptographic hashes, are stored and exchanged by nodes in IPFS. Nodes also provide the service of retrieving files to earn profits through BitSwap protocol. The routing of IPFS is achieved by Distributed Hash Tables (DHTs), which is an efficient way to locate data among IPFS nodes. Based on BitSwap and DHTs, IPFS builds an Object Merkle DAG which allows participants to address files through IPFS paths.
\subsection{FileCoin}\label{related:filecoin}
Filecoin builds a blockchain-based \emph{Decentralized Storage Network} which runs in the top layer of IPFS~\cite{benet2018filecoin}. There are three types of participants in Filecoin, which are clients, storage miners, and retrieval miners. Specifically, clients pay to store and retrieve files, storage miners earn profits by registering sectors to offer storage, and retrieval miners earn profits by serving data to clients.
\subsubsection{Proof-of-Replication~(PoRep)}
PoRep~\cite{benet2017proof} is a kind of proof-of-storage scheme deployed in Filecoin. In the PoRep scheme, the prover firstly generates a replica of file $\mathcal{D}$, denoted by $\mathcal{R}^{\mathcal{D}}_{ek}$, through the process of \textsf{PoRep.setup($\mathcal{D}, ek$)}. $ek$ is a randomly chosen encryption key that with $ek$, $\mathcal{R}^{\mathcal{D}}_{ek}$ can be encrypted from $\mathcal{D}$, and $\mathcal{D}$ can be decrypted from $\mathcal{R}^{\mathcal{D}}_{ek}$. The prover then submits the hash root of $\mathcal{R}^{\mathcal{D}}_{ek}$ to the DSN. Finally, the prover proves that $\mathcal{R}^{\mathcal{D}}_{ek}$ is a replica of $\mathcal{D}$ with encryption key $ek$ via SNARK.
The verification of SNARK is very efficient. However, the calculation of $\mathcal{R}^{\mathcal{D}}_{ek}$ takes a long time because it cannot be parallelized. Additionally, the calculation of SNARK consumes a lot of computational resources.
\subsubsection{Filecoin Sectors}
In Filecoin, sectors are divided into sealed ones and unsealed ones\footnote{See in \url{https://spec.filecoin.io/systems/filecoin_mining/sector/}}. Only sealed sectors are part of the Filecoin network and can get rewards for storage. Unsealed sectors only contain raw data, and a sealed sector can be registered from an unsealed sector by PoRep. Storage miners pledge deposits when registering a sector, but when the sector crashes, that deposit is burnt rather than used to compensate clients for the file loss.
When registering an unsealed sector, if the sector is not full, the remaining space of the sector is filled with zeros before encoding by PoRep. If a sealed sector does not contain any files, which means the contents of that sector are all zeros when registering, it is called committed capacity (CC). Other sealed sectors are called regular sectors. A CC sector can be upgraded to a regular sector by discarding the CC sector and registering a new regular sector. However, the content of a regular sector can no longer be changed.
\subsubsection{Proof-of-Spacetime}
PoSt is another kind of proof-of-storage scheme for storage miners to prove that they are actually storing a replica. There are two kinds of PoSt in Filecoin. WinningPoSt serves as a part of the Expected Consensus of Filecoin, while WindowPoSt guarantees that a miner continuously maintains a replica over time. Sybil attacks are thus prevented by the combination of WindowPoSt and PoRep because storage miners must actually store all replicas.
\subsubsection{Storage Market and Retrieval Market}
There are two markets in Filecoin, the Storage Market and the Retrieval Market. In the Storage Market, storage miners and clients negotiate on the price and length of storage. Similarly, retrieval miners and clients would negotiate on the price of file retrieving.
\subsection{Other Solutions to Decentralized File Storage}
\subsubsection{Storj}
Storj~\cite{wilkinson2014storj} is a sharding~\cite{kokoris2018omniledger,zhang2020cycledger} based protocol to achieve a peer-to-peer cloud storage network implementing end-to-end encryption. It stores files in encrypted shards to ensure that the file itself cannot be recovered by anyone other than the owner. Moreover, it uses erasure coding to ensure file availability in case some shards are lost.
\subsubsection{Sia}
Sia~\cite{vorick2014sia} is a platform for decentralized storage enabling the formation of storage contracts between peers. The Sia protocol provides an algorithm of storage proof in order to build storage contracts. According to the file contract, storage providers need to generate proof-of-storage periodically. The client needs to pay for each valid storage proof.
\subsubsection{Arweave}
Arweave~\cite{williams2019arweave} is a mechanism design-based approach to achieving a sustainable and permanent ledger of knowledge and history. Storing files on Arweave only requires a single upfront fee, after which the files become part of the consensus. Arweave uses the mechanism of Proof of Access in consensus to ensure that miners need to store as many files as possible to participate in mining.
\section{Analysis}\label{analysis}
In this section, we analyze the performance of our protocol and compare FileInsurer with other DSN protocols in detail.
\subsection{Notation and Assumption}
\begin{table}[htbp]
\setstretch{0.7}
\caption{Notation Table}
\label{table:notation}
{\centering
\small
\begin{tabular}{c c}
\textbf{Notation} & \multicolumn{1}{p{0.7\columnwidth}}{\textbf{Description}} \\ [0.5ex]
\toprule
$minCapacity$ & \multicolumn{1}{m{0.7\columnwidth}}{The minimum capacity of a sector. The capacity of each sector is an integer multiple of $minCapacity$.}\\
\midrule
$minValue$ & \multicolumn{1}{m{0.7\columnwidth}}{The minimum value of a file. The value of each file is an integer multiple of $minValue$.} \\
\midrule
$N_f$ & \multicolumn{1}{m{0.7\columnwidth}}{The number of files. }\\
\midrule
$N_{s}$ & \multicolumn{1}{m{0.7\columnwidth}}{The ``weighted'' number of sectors. $N_s\times minCapacity$ indicates the total capacity of the network.}\\
\midrule
$N_v$ & \multicolumn{1}{m{0.7\columnwidth}}{The ``weighted'' number of files. $N_v\times minValue$ indicates the total value of files stored on the network.}\\
\midrule
$N_v^m$ & \multicolumn{1}{m{0.7\columnwidth}}{The maximum ``weighted'' number of files the network is designed to carry. $N_v^m\times minValue$ is the maximum value the network can carry.}\\
\midrule
$\gamma^m_v$ & \multicolumn{1}{m{0.7\columnwidth}}{$\gamma^m_v=\frac{N_v}{N_v^m}$ is the ratio of the total value stored in FileInsurer to the maximal value.}\\
\midrule
$\gamma_{deposit}$ & \multicolumn{1}{m{0.7\columnwidth}}{The deposit ratio: the ratio of the sum of deposits to the maximal value of files stored in the network.}\\
\midrule
$capPara$ & \multicolumn{1}{m{0.7\columnwidth}}{$capPara$ is defined as $\frac{N_v^m}{N_s}$.}\\
\midrule
$c$ & \multicolumn{1}{m{0.7\columnwidth}}{Security parameter. We set it to be $10^{-18}$.}\\
\midrule
$k$ & \multicolumn{1}{m{0.7\columnwidth}}{The number of backups that should be stored for a file whose value is $minValue$.}\\
\bottomrule\\
\end{tabular}}
\end{table}
Before the analysis, we list in Table~\ref{table:notation} the notations needed in the sequel. Additionally, the following assumptions are necessary for our theoretical analysis.
\begin{itemize}
\item \textbf{Consensus security}: FileInsurer requires that the network consensus itself is secure. The issue of consensus security is not the target of this paper.
\item \textbf{Adversary ability}: FileInsurer allows an adversary to corrupt $\lambda$ proportion of network capacity immediately.
\item \textbf{Redundant capacity}: FileInsurer requires that the total capacity in the network is no less than twice the total size of all files' replicas. This assumption is deployed to ensure \emph{storage randomness}.
\end{itemize}
\subsection{Performance of FileInsurer}
\subsubsection{Analysis for Capacity Scalability}
We measure the capacity scalability of FileInsurer by the maximal total size of files it can store. The following theorem indicates that FileInsurer is scalable in capacity.
\begin{restatable}{theorem}{thscala}\label{th:scala}
The total size of files that can be stored in FileInsurer is
{\small
\[
\min \left\{\frac{ N_s \times minCapacity}{2r_1 k}, \frac{N_s \times minCapacity }{r_2} \right\},
\]
}
where
{\small
\begin{align}
r_1 & = \frac{\sum_f f.size \times f.value}{minValue \times \sum_f f.size},\label{r1}\\
r_2 & = \frac{minCapacity \times \sum_f f.value}{minValue \times \sum_f f.size \times capPara}.\label{r2}
\end{align}
}
\end{restatable}
The proof of Theorem~\ref{th:scala} is provided in Appendix A.
We claim that each of $r_1$ and $r_2$ is bounded by a constant in Section~\ref{dis:r12}. Then the total size of raw files that can be stored in FileInsurer is $\tilde O(N_s \times minCapacity)$, which is almost linear in the total size of sectors.
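To make the bound concrete, the following sketch evaluates $r_1$, $r_2$, and the storable-size bound of Theorem~\ref{th:scala} for a toy list of files; all input values are illustrative, not drawn from the paper.

```python
def capacity_bound(files, min_value, min_capacity, n_s, k, cap_para):
    # files: list of (size, value) pairs. Evaluates r1 and r2 from
    # Eqs. (r1)-(r2) and then the bound of Theorem th:scala.
    tot_size = sum(s for s, _ in files)
    tot_sv = sum(s * v for s, v in files)
    tot_v = sum(v for _, v in files)
    r1 = tot_sv / (min_value * tot_size)
    r2 = (min_capacity * tot_v) / (min_value * tot_size * cap_para)
    cap = n_s * min_capacity
    return min(cap / (2 * r1 * k), cap / r2)

bound = capacity_bound(files=[(2, 100), (3, 100)], min_value=100,
                       min_capacity=10, n_s=100, k=5, cap_para=4)
```

Here $r_1 = r_2 = 1$, so the bound reduces to $N_s \times minCapacity / (2k) = 100$.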
\subsubsection{Storage Randomness}
Storage randomness is an important issue in FileInsurer. Storage randomness ensures that the locations of replicas are evenly distributed. Therefore,
the adversaries must corrupt a huge number of sectors even if they only want to destroy all replicas of a small portion of files. In FileInsurer, replicas are stored in randomly selected sectors in \textsf{File\_Add}, and their locations are randomly refreshed by \textsf{Auto\_Refresh}. Such operations make the locations of all replicas independent and identically distributed.
However, when the total used space is close to the capacity of the DSN, the processes of \textsf{File\_Add} and \textsf{Auto\_Refresh} may find that the free space of a selected sector is not enough to store a replica. We call this event a \emph{collision}. Although sectors can be reselected to store these replicas, storage randomness would be affected. Therefore, redundant capacity is required to avoid collisions. We claim that the frequency of collisions is negligible, as supported by a preliminary theoretical proof and further experiments.
We first consider a trivial case that all files have the same size. The following theorem indicates that a collision happens with an extremely low probability.
\begin{restatable}{theorem}{thadd}\label{th:adding}
If all files have the same size $f.size$, then, denoting by $s.capacity$ and $s.freeCap$ the total and free capacity of a sector $s$,
{\small\[
\Pr\left[ \exists s,~s.freeCap\leq \frac{1}{8}s.capacity\right] \leq N_s\exp\left\{-0.144\frac{s.capacity}{f.size}\right\}.
\]}
\label{th:random}
\end{restatable}
The proof of Theorem \ref{th:adding} is provided in Appendix B.
By \Cref{th:random}, when $\frac{s.capacity}{f.size} \geq 1000$ and $N_s \leq 10^{12}$, we have $\Pr\left[ \exists s,~s.freeCap\leq \frac{1}{8}s.capacity\right] < 10^{-50}$.
A replica of the file can be stored in any sector $s$ with $s.freeCap \geq \frac{1}{8}s.capacity$ because $f.size < \frac{1}{8}s.capacity$. This result indicates that the probability of collision is extremely low under these conditions.
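The numerical claim can be checked by evaluating the bound of Theorem~\ref{th:adding} directly:

```python
import math

def collision_prob_bound(n_s, capacity_over_size):
    # Upper bound of Theorem th:adding on the probability that some
    # sector drops below 1/8 of its capacity in free space.
    return n_s * math.exp(-0.144 * capacity_over_size)

b = collision_prob_bound(10**12, 1000)   # well below 1e-50
```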
We further consider the general case in which the size of files follows a certain distribution. We conduct a series of numerical experiments in two different settings. In the first setting, we reallocate all file backups in one go, $100$ times. In the second setting, we allocate each file backup and then randomly refresh the location of a file backup $100N_{cp}$ times. Recall that $N_{cp}=kN_v$ is the number of file backups and each file $f$ needs to store $f.cp$ backups on the network.
In the experiments, we test several distributions for the size of file backups. We focus on the maximum ratio of capacity usage. If the ratio is less than $1$, no file backups are allocated to sectors with insufficient capacity. \Cref{table:experiment} shows the results of our experiments. We can find that the maximum ratios of capacity usage never exceed $0.64$ under all tested distributions, which means that the probability that file backups are allocated to sectors with insufficient capacity is very small. Therefore, the results of our experiments indicate that collisions would hardly occur when the average size of file backups is much smaller than the sector capacity.
We also discuss how to maintain storage randomness when the list of sectors changes in \cref{maintainrandom}. These results show that storage randomness is easy to guarantee in practice. Therefore, each allocation of replicas is assumed to be independent and identically distributed in the following analyses.
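A small-scale version of the allocation experiment can be reproduced with a few lines. This sketch is our own simplification: it uses equal-capacity sectors (so capacity-proportional sampling reduces to uniform sampling) and the redundant-capacity assumption that total capacity is twice the total replica size; the parameters are far smaller than those in Table~\ref{table:experiment}.

```python
import random

def max_capacity_usage(n_backups, n_sectors, size_dist, trials=20):
    # Allocate n_backups backups of random size into n_sectors
    # equal-capacity sectors and return the worst observed ratio of
    # used space to sector capacity over all trials.
    worst = 0.0
    for _ in range(trials):
        sizes = [size_dist() for _ in range(n_backups)]
        capacity = 2.0 * sum(sizes) / n_sectors  # redundant capacity
        load = [0.0] * n_sectors
        for s in sizes:
            load[random.randrange(n_sectors)] += s
        worst = max(worst, max(load) / capacity)
    return worst

u = max_capacity_usage(10**4, 20, random.random)
```

Since total capacity is twice the total load, the busiest sector sits at a usage ratio of at least $0.5$ and, with many small backups, concentrates close to it, in line with the table.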
\begin{table}[]
\caption{\textbf{Experiment result:} maximum capacity usage of sectors}
\centering{\small
\begin{tabular}{c c|c c c c c}
\hline\hline
\multicolumn{7}{c}{reallocate all file backups $100$ times} \
\cr \cline{1-7}
\multicolumn{2}{c}{parameter} & \multicolumn{5}{c}{maximum capacity usage} \
\cr \cline{1-2} \cline{3-7}
$N_{cp}$ & $N_s$ & $[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$\\
\hline
$10^5$ & 20 & 0.525 & 0.524 & 0.536 & 0.530 & 0.529\\\hline
$10^5$ & 100 & 0.571 & 0.566 & 0.584 & 0.572 & 0.569\\\hline
$10^6$ & 200 & 0.538 & 0.530 & 0.542 & 0.534 & 0.533\\\hline
$10^6$ & 1000 & 0.591 & 0.571 & 0.598 & 0.594 & 0.576\\\hline
$10^7$ & 2000 & 0.540 & 0.534 & 0.544 & 0.545 & 0.534\\\hline
$10^7$ & 10000 & 0.589 & 0.576 & 0.609 & 0.606 & 0.585\\\hline
$10^8$ & 20000 & 0.541 & 0.534 & 0.550 & 0.547 & 0.538\\\hline
$10^8$ & $10^5$ & 0.591 & 0.582 & 0.614 & 0.599 & 0.586\\\hline
\hline
\end{tabular}
\begin{tabular}{c c|c c c c c}
\hline\hline
\multicolumn{7}{c}{refresh the location of a file backup $100N_{cp}$ times} \
\cr \cline{1-7}
\multicolumn{2}{c}{parameter} & \multicolumn{5}{c}{maximum capacity usage} \
\cr \cline{1-2} \cline{3-7}
$N_{cp}$ & $N_s$ & $[1]$ & $[2]$ & $[3]$ & $[4]$ & $[5]$\\
\hline
$10^5$ & 20 & 0.532 & 0.529 & 0.538 & 0.535 & 0.531\\\hline
$10^5$ & 100 & 0.588 & 0.571 & 0.599 & 0.595 & 0.581\\\hline
$10^6$ & 200 & 0.536 & 0.535 & 0.546 & 0.542 & 0.541\\\hline
$10^6$ & 1000 & 0.592 & 0.581 & 0.610 & 0.605 & 0.589\\\hline
$10^7$ & 2000 & 0.542 & 0.535 & 0.553 & 0.549 & 0.540\\\hline
$10^7$ & 10000 & 0.610 & 0.591 & 0.626 & 0.613 & 0.599\\\hline
$10^8$ & 20000 & 0.551 & 0.547 & 0.560 & 0.558 & 0.548\\\hline
$10^8$ & $10^5$ & 0.611 & 0.604 & 0.639 & 0.628 & 0.611\\\hline
\hline
\end{tabular}}
\begin{tablenotes}
\footnotesize
\item $[1]$: Uniform distribution in interval $[0,1]$
\item $[2]$: Uniform distribution in interval $[1,2]$
\item $[3]$: Exponential distribution
\item $[4]$: Normal distribution with $\mu = \sigma^2$
\item $[5]$: Normal distribution with $\mu = 2\sigma^2$
\end{tablenotes}
\label{table:experiment}
\end{table}
\subsubsection{Analysis of Robustness}
We consider the robustness of FileInsurer as its ability to resist corruption of sectors. The following theorem indicates that FileInsurer is quite robust. The proof is left to Appendix C.
\begin{restatable}{theorem}{throb}\label{th:rob}
Assume that the total size of corrupted sectors is $\lambda N_s \times minCapacity$. Denote the total value of lost files by $V_{lost}$, and let $\gamma_{lost}^v = \frac{V_{lost}}{N_v \times minValue}$ represent the ratio of the value of lost files to the total value of all files.
Then with a probability of not less than $1-c$, $\gamma_{lost}^v$ satisfies
{\small
\[
\gamma_{lost}^v \leq\max\left\{5\lambda^k,\lambda^\frac{k}{2},\frac{4\left(\frac{\log\frac{e}{2\pi}-\log c}{N_s}-\log\left(\lambda^\lambda(1-\lambda)^{1-\lambda}\right)\right)}{\gamma_v^m k \log\frac{1}{\lambda} \times capPara }\right\}.
\]
}
\end{restatable}
Let us present a concrete example to show that the result of Theorem~\ref{th:rob} is quite strong. Set $k = 20$, $N_s = 10^6$, and $capPara = 10^3$. Let $\lambda = 0.5$, which means that half of the capacity of FileInsurer is corrupted. Then
{\small
\[
\gamma_{lost}^v\leq \max\left\{5 \times 10^{-6},0.001,
\frac{1}{\gamma^m_v} \times 5 \times 10^{-6}
\right\}.
\]
}
When $\gamma^m_v \geq 0.005$, $\gamma_{lost}^v \leq 0.001$. It means that in this case, even when half of the capacity of FileInsurer is corrupted, the value of lost files is no more than $0.1\%$ of the value of all stored files.
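Plugging in the numbers is a quick sanity check; the first two terms follow from direct computation, and here we take the third term as displayed above.

```python
lam, k = 0.5, 20
t1 = 5 * lam**k                  # 5 * 2**-20, about 5e-6
t2 = lam**(k / 2)                # 2**-10, about 1e-3
gamma_m_v = 0.005
t3 = (1 / gamma_m_v) * 5e-6      # third term as displayed above
bound = max(t1, t2, t3)          # about 1e-3
```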
\subsubsection{Deposit Ratio}
The following theorem indicates that only a small deposit ratio is needed for full compensation.
\begin{restatable}{theorem}{thratio}\label{th:ratio}
Assume that the total size of corrupted sectors is no more than $\lambda N_s \times minCapacity$. If the deposit ratio satisfies
{\small
\[
\gamma_{deposit} \geq \max\left\{5 \lambda^{k-1},\lambda^{\frac{k}{2} - 1},
\frac{4}{k\times capPara }\left( \frac{\log N_s}{\log\frac{1}{\lambda}}+\frac{\log \frac{1}{c}}{\log N_s}\right)\right\},
\]
}
then full compensation can be achieved with a probability of not less than $1-c$.
\end{restatable}
The proof of \cref{th:ratio} is in
Appendix D.
Set $k = 20$, $N_s = 10^6$, $capPara = 10^3$ and $\lambda = 0.5$. Then $\gamma_{deposit} = 0.0046$ is enough to ensure full compensation, which is relatively small.
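This value can be reproduced directly from the bound of Theorem~\ref{th:ratio}; note that the ratios of logarithms make the third term independent of the logarithm base.

```python
import math

def deposit_ratio_bound(lam, k, n_s, cap_para, c):
    # Lower bound on gamma_deposit from Theorem th:ratio.
    t1 = 5 * lam**(k - 1)
    t2 = lam**(k / 2 - 1)
    t3 = (4 / (k * cap_para)) * (math.log(n_s) / math.log(1 / lam)
                                 + math.log(1 / c) / math.log(n_s))
    return max(t1, t2, t3)

g = deposit_ratio_bound(0.5, 20, 10**6, 10**3, 1e-18)   # ~0.0046
```

The third term dominates here, giving roughly $0.0046$ as stated.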
\subsection{Comparison with Existing Protocols}
\begin{table}
\centering\setstretch{1.1}{\small\setlength{\tabcolsep}{1pt}
\caption{Comparison of DSN Protocols}
\label{table:comparison}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Property & FileInsurer &Filecoin &Arweave &Storj &Sia \\
\hline
Capacity Scalability & Yes & Yes & Yes & Yes & Yes\\
Preventing Sybil Attacks & Yes & Yes & Yes & Yes & No\\
Provable Robustness & Yes & No & No & No & No\\
Compensation for File Loss & Yes & No$^{[1]}$ & No & No & No \\
\hline
\end{tabular}}
\footnotesize{$^{[1]}$ Provides only limited file loss compensation}\\
\end{table}
\Cref{table:comparison} shows the comparison between FileInsurer and existing DSN protocols including Filecoin, Arweave, Sia, and Storj. We observe that FileInsurer is the only DSN protocol that has provable robustness and gives full compensation for file loss.
\section{Comments on Brane Stability and
Supersymmetry}
It is appropriate to address the issue of
stability of brane configurations
and its relevance to the anomaly analysis\footnote
{We would like to thank K. Bardakci for useful
conversations regarding this issue.}.
For a generic brane configuration, there are forces between
nonparallel branes. If they do not cancel, this configuration is
not stable. One can no more trust string
perturbation theory in an unstable brane configuration than
one can trust perturbative expansion around a false
vacuum in field theory. Anomaly calculations
are in some sense more robust than many other perturbative
calculations, but
one must still know the correct spectrum of massless fermions
in \emph {some} true vacuum to correctly compute the anomaly.
Of course, this was the original motivation
for 't Hooft's anomaly matching conditions.
In the above, we have relied on
string perturbation when we obtained the massless
fermion contents and their quantum numbers. When the
brane configuration is unstable, there is no known reason to
expect a priori that such analysis captures correctly
the spectrum.
On the other hand, supersymmetry
is the only general condition under which
the forces between branes cancel.
If supersymmetry is completely
broken in a brane configuration, the latter is
\emph {generically} unstable.
For N identical D-branes to
preserve some supersymmetry in a string compactification, they
must wrap around the supersymmetric cycles classified in
\cite {ooy, bbmooy}. Between a pair of D-branes, the pattern
of supersymmetry breaking depends on their relative
arrangement. For the
case of intersection at right angle, some
supersymmetries survive provided that
\cite
{ghm}
\begin{equation} \label {eq:cond-i-brane-susy}
\dim [T(M_1) \cap N(M_2)]
+ \dim [N(M_1) \cap T(M_2)] = 0 \mod 4.
\end{equation}
The expression on the LHS of this equation is sometimes denoted
$nd + dn$ in the literature
because it is the number of spacetime coordinates
for which the boundary condition of the relevant open string
is Neumann on one end and Dirichlet on the other. When \eqr
{cond-i-brane-susy} is not satisfied,
anomaly calculations based on perturbative
string theory need not be
reliable. For example, if $nd+dn = 2$,
it may be shown that the force between the two D-branes is
attractive. It is believed that in this case there exists
a stable nonmarginal bound state \cite {pol}.
There seems a priori to be no reason to expect that
the correct degrees of freedom
of the bound state can be obtained from a
perturbative string analysis carried out at the unstable
configuration.
On the other hand, \eqr {cond-i-brane-susy}
was not needed in the analysis carried out in this paper.
In fact it follows through as long as
\begin{equation}
\dim [T(M_1) \cap T(M_2)]
+ \dim [N(M_1) \cap N(M_2)] = 0 \mod 2,
\end{equation}
a condition satisfied by any pair of D-branes that can
coexist in the same string theory.
This seems to suggest that even for nonsupersymmetric
brane configurations, at least
the massless fermion contents might be captured correctly.
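As a toy bookkeeping check (our own illustration, for branes intersecting at right angles along coordinate axes), one can represent each brane by the set of its tangent directions. Then $nd+dn$ is the size of the symmetric difference of the two sets, while $\dim[T\cap T]+\dim[N\cap N]$ equals the spacetime dimension minus $nd+dn$, which in ten dimensions is even exactly when $nd+dn$ is even.

```python
def nd_plus_dn(t1, t2):
    # nd + dn = dim[T(M1) n N(M2)] + dim[N(M1) n T(M2)]: the size of
    # the symmetric difference of the two tangent-direction sets.
    return len(t1 ^ t2)

def tt_plus_nn(t1, t2, dim=10):
    # dim[T(M1) n T(M2)] + dim[N(M1) n N(M2)] = dim - (nd + dn).
    return len(t1 & t2) + (dim - len(t1 | t2))

# Two D4-branes with worldvolumes along directions 01234 and 01256:
a, b = set(range(5)), {0, 1, 2, 5, 6}
ndn = nd_plus_dn(a, b)   # 4, so the supersymmetry condition holds
```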
\myref {
\bibitem {gs} M.~B.~Green, J.~H.~Schwarz,
\plb 149,84,117.
\bibitem {ch} C.~G.~Callan, J.~A.~Harvey, \npb 250,85,427.
\bibitem {bh} J. Blum, J.~A.~Harvey, \npb416,94,119,hep-th/9310035.
\bibitem {ghm} M.~B.~Green, J.~A.~Harvey, G.~Moore, \cqg 14,97,47,
hep-th/9605033.
\bibitem {bsv2} M.~Bershadsky, V.~Sadov, C.~Vafa,
\npb463,96,420, hep-th/9511222.
\bibitem {ooy} H.~Ooguri, Y. Oz, Z.~Yin,
\npb477,96,407, hep-th/9606112.
\bibitem {bbmooy} \beckerk, \beckerm, D.~R.~Morrison,
H.~Ooguri, Y. Oz, Z. Yin, \npb 480,96,225,
hep-th/9608116.
\bibitem {bsv1} M.~Bershadsky, V.~Sadov, C.~Vafa,
\npb463,96,398, hep-th/9510225.
\bibitem {ov1} H.~Ooguri, C.~Vafa, \npb463,96,55, hep-th/9511164.
\bibitem {bv} M.~Bershadsky, A. Johansen,
T. Pantev, V.~Sadov,
C.~Vafa,
\npb448,95,166, hep-th/9612052.
\bibitem {ov2} H.~Ooguri, C.~Vafa, hep-th/9702180.
\bibitem {vz} C.~Vafa, B.~Zwiebach, hep-th/9701015.
\bibitem {ho} K.~Hori, Y.~Oz, hep-th/9702173.
\bibitem {top1} M.~Blau, G.~Thompson,
\npb492,97,545, hep-th/9612143.
\bibitem {top2} L. Baulieu, A. Losev, N. Nekrasov,
hep-th/9707174.
\bibitem {witea} E.~Witten, hep-th/9610234.
\bibitem {bcr} L. Bonora, C. S. Chu, M. Rinaldi, hep-th/9710063.
\bibitem {Tsi} E.~Witten, \npb460,96,541, hep-th/9511030.
\bibitem {Tbwb} M.~R.~Douglas, hep-th/9512077.
\bibitem {Tgfab} M.~R.~Douglas, hep-th/9604198.
\bibitem {gh3} P. Griffiths and J. Harris,
{\sl Principles of Algebraic Geometry}, Chap. 3,
Wiley-Interscience,
New York 1978.
\bibitem {wz} J.~Wess, B.~Zumino, \plb37,71,95.
\bibitem {zlec} B.~Zumino, Lectures given at Les Houches
Summer School on Theoretical Physics, 1983.
\bibitem {pol-RR} J.~Polchinski,
\prl75,95,4724, hep-th/9510017.
\bibitem {dght} S. Deser, A. Gomberoff, M. Henneaux, C. Teitelboim,
\plb400,97,80, hep-th/9702184.
\bibitem {pol} J.~Polchinski, Lectures given at TASI Summer
School on Fields, Strings and Duality, 1996,
hep-th/9611040.
\bibitem {ss} J.~H.~Schwarz, A. Sen, \npb411,94,35, hep-th/9304154.
\bibitem {sorokin} G. Dall'Agata, K. Lechner,
D. Sorokin, hep-th/9707044.
\bibitem {wy} T. Y. Wu, C. N. Yang, \prd12,75,3845.
\bibitem {as} M.~F.~Atiyah, I.~M.~Singer,
\pnas81,84,2597.
\bibitem {asz} O.~Alvarez, I.~M.~Singer, B.~Zumino, \cmp96,84,409.
\bibitem {sumi} T.~Sumitani, \jpa17,84,L811.
\bibitem {agg} L.~Alvarez-Gaume, P.~Ginsparg, \annp161,85,423,
Erratum \ibid171,86,233.
\bibitem {sz} J.~Manes, R.~Stora, B.~Zumino, \cmp102,85,157.
\bibitem {pol-wit} J.~Polchinski, E.~Witten, \npb460,96,525,
hep-th/9510169.
\bibitem {bdl} M. Berkooz, M.~R.~Douglas, R. G. Leigh, \npb480,96,265,
hep-th/9606139.
\bibitem {bt} R.~Bott, L.~W.~Tu, {\sl Differential Forms in
Algebraic Topology}, Chap. 2, Springer-Verlag,
New York, 1982.
\bibitem {vafainst} C.~Vafa, \npb463,96,35, hep-th/9512078.
\bibitem {bbs} \beckerk, \beckerm, A.~Strominger,
\npb456,95,130, hep-th/9507158.
\bibitem {hl} R. Harvey, B. Lawson, \am148,82,47.
\bibitem {mclean} R. C. McLean,
http://www.math.duke.edu/preprints/96-01.ps.
\bibitem {sv} S. Shatashvili, C.~Vafa,
\sm1,95,347, hep-th/9407025.
\bibitem {pt} G. Papadopoulos, P. K. Townsend,
\plb357,95,300, hep-th/9506150
}
\end {document}
\section{Introduction}
Accurate modeling and forecasting of electricity demand and prices are very important issues for decision making in deregulated electricity markets. Different techniques have been developed
to describe and forecast the dynamics of electricity load. Short-term forecasting has proved to be a very challenging task due to the specific features of electricity demand and prices. Figures \ref{fig_price_week} and \ref{fig_quantity_week} show how the electricity equilibrium price and quantity change during one week. Functional data analysis is extensively used in other fields of science, but it has been little explored in the electricity market setting.
\begin{figure}[H]
\begin{minipage}[b]{.4\textwidth}
\centering
\includegraphics[width=1\textwidth]{price_week.jpg}
\caption{Electricity equilibrium prices during a week}
\label{fig_price_week}
\end{minipage}
\hfill
\begin{minipage}[b]{.4\textwidth}
\centering
\includegraphics[width=1\textwidth]{quantity_week.jpg}
\caption{Electricity equilibrium quantities during a week}
\label{fig_quantity_week}
\end{minipage}
\end{figure}
We consider the Italian electricity market (IPEX). IPEX consists of different markets, including a day-ahead market. The day-ahead market is managed by Gestore del Mercato Elettrico (GME), where prices and demand are determined the day before delivery. Supply and demand curves on day-ahead electricity markets are the result of thousands of bid and ask entries in the day-ahead auction, for each of the 24 hours. In principle, it would be possible to represent, and forecast, these curves by taking into account each production and each consumption unit as a separate time series, and then joining these together to construct the final curves, and thus the resulting price. However, the huge number of these units makes this naive strategy infeasible, unless one has extremely high computing capacity and complex machine learning algorithms available.
In this paper, we present a more parsimonious approach. The idea is to represent each curve using non-parametric mesh-free interpolation techniques, so that we can obtain an approximation of the original curve with far fewer parameters than the original one. The original curve, in fact, depends on hundreds of parameters and is obtained as follows.
The producers submit offers where they specify the quantities and the minimum price at which they are willing to sell. The demanders submit bids where they specify
the quantities and the maximum price at which they are willing to buy. They are then aggregated by an independent system operator (ISO) in order to construct the supply and demand curves.
Once the offers and bids are received by the ISO, supply and demand curves are established
by summing up individual supply and demand schedules. In the case of demand, the first
step is to replace ``zero price'' bids by the market maximum price (for the Italian electricity market, the market
maximum price is 3000 Euro) without changing the corresponding quantities. After this
replacement, the bids are sorted from the highest to the lowest with respect to prices. The
corresponding value of the quantities is obtained by cumulating each single demand bid.
For supply curve, in contrast, the offers are sorted from the lowest to the highest with respect to prices and the corresponding value of the quantities is obtained by cumulating each
single supply offer. The market equilibrium is
the point where both curves intersect each other and the price balances supply and demand
schedules (see, e.g., Figure \ref{Fig_equilibr_point}). This point determines the market clearing price and the traded quantity. Accepted offers and bids are those that fall to the left of the intersection of the two curves, and all of them are exchanged at the resulting price.
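As an illustration of this aggregation procedure, the following sketch (our own simplified Python re-implementation with made-up toy bid layers, not the actual GME matching engine) cumulates offers and bids into step curves and locates the clearing point:

```python
MAX_PRICE = 3000  # Italian market cap, substituted for zero-price demand bids

def clearing_point(offers, bids):
    """Toy version of the ISO matching: offers/bids are (price, quantity) pairs."""
    # zero-price bids are replaced by the market maximum price
    bids = [(MAX_PRICE if p == 0 else p, q) for p, q in bids]
    # cumulative step curves evaluated at a candidate price
    def supplied(p):   # offers sorted ascending and cumulated
        return sum(q for price, q in offers if price <= p)
    def demanded(p):   # bids sorted descending and cumulated
        return sum(q for price, q in bids if price >= p)
    # the equilibrium is the cheapest offer price at which supply covers demand
    for price in sorted({p for p, _ in offers}):
        if supplied(price) >= demanded(price):
            return price, min(supplied(price), demanded(price))
    return None

offers = [(0, 10), (10, 20), (50, 30)]   # hypothetical offer layers
bids = [(0, 25), (40, 10), (5, 5)]       # hypothetical bid layers
print(clearing_point(offers, bids))      # -> (50, 25)
```

The linear scans keep the sketch short; the real curves described above are obtained by sorting and cumulating once.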
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{equilibrium.jpg}
\caption{The market equilibrium point}
\label{Fig_equilibr_point}
\end{figure}
In the beginning of the 2000s the number of papers focused on electricity price forecasting started to increase dramatically. A great variety of methods and models has emerged during the last twenty years. Weron \cite{Weron} (2014) made an overview of the existing literature on electricity price forecasting and divided electricity price models into five different groups: multi-agent, fundamental, reduced-form, statistical and
computational intelligence models. A review of probabilistic forecasting was done in \cite{Weron2} (2018) by Weron and Nowotarski. Most models have in common that they focus on the price itself or related time series. In such a way, these models do not take into account the underlying mechanism that determines the price process, namely the intersection of the electricity supply and demand curves.
Some of the recent approaches try to analyse the real offered volumes for selling and purchasing electricity. This commonly leads to the problem of a large amount of data and, therefore, high complexity. In particular, Eichler, Sollie, and Tuerk in 2012 \cite{Eichler} investigated a new approach that exploits information available in the supply and demand curves for the German day-ahead market. They proposed the idea that the form of the supply and demand curves or, more precisely, the
spread between supply and demand, reflects the risk of extreme price fluctuations. They
utilize the curves to model a scaled supply and demand spread using
an autoregressive time series model in order to construct a flexible model adapted to changing
market conditions. Furthermore, Aneiros, Vilar, Cao, and San Roque in 2013 \cite{Aneiros} dealt with the prediction of the residual demand curve in the electricity spot market using two functional models. They tested this method as a tool for optimizing bidding strategies for the Spanish day-ahead market. Then Ziel and Steinert in 2016 \cite{Ziel1} proposed a model for the German European Power Exchange (EPEX) market, which considers all the supply and demand information of the system and discusses the effects of the changes in supply and demand. Their idea was to fill the gap between research done in time-series
analysis, where the structure of the market is usually left out, and the research done in structural analysis, where empirical data is utilized very rarely and even less thoroughly. They provided deep insight on the
bidding behavior of market participants. They also showed that incorporating the sale and purchase data yields promising results for forecasting the likelihood of extreme price
events. In 2016 Shah \cite{Tesi} also considered the idea of modeling the daily supply and demand curves, predicting them and finding the intersection of the predicted curves in order to obtain the predicted market clearing price and volume. He used a functional approach, namely B-spline approximation, to convert the resulting piecewise constant curves into smooth functions.
As far as we know, non-parametric mesh-free interpolation techniques were never considered for the problem of modeling the daily supply and demand curves.
We are going to use a relatively new modeling technique based on functional data analysis for demand and price prediction. The first task for this purpose is to design an appropriate algorithm to represent the information about electricity prices and demands, in particular, to approximate a monotone piecewise constant function. Accuracy of the approximation and running time are both very important for us. The basic novelty of our approach is that we represent the information about electricity prices and demands using the functional data analysis approach.
The main idea behind functional data analysis is, instead of considering a collection of data points, to consider the data as a single structured object. This makes it possible to use
additional information contained in the functional structure of the data.
Once the data are converted to functional form, they can be evaluated at all values over some interval.
The most promising technique
to do so is the use of (integrals of) Radial Basis Functions, which
have been used in several other applications (image reconstruction, medical
imaging, geology, etc.) and allow a very flexible adaptation of the
interpolating curves to real data. The use of radial basis functions has attracted increasing attention in recent years as an elegant scheme for high-dimensional scattered data approximation, an accepted method for machine learning, and one of the foundations of meshfree methods. The initial motivation for RBF methods came from geodesy, mapping, and meteorology. RBF methods were first studied by Roland Hardy, an Iowa State geodesist, in 1968, when he developed one of the first effective methods for the interpolation
of scattered data. Later, in 1986, Charles Micchelli, an IBM mathematician, developed the theory behind the multiquadric method. Micchelli made the connection between scattered data interpolation and positive definite functions \cite{Micchelli}. RBF methods are now considered an effective way to solve partial differential equations and to represent topographical surfaces as well as other intricate three-dimensional shapes, having been successfully applied in such diverse areas as climate modeling, facial recognition, topographical map production, auto and aircraft design, ocean floor mapping, and medical imaging (see, for example, \cite{Bozzini}, \cite{Fasshauer2011}, \cite{Emma1}). RBF methods are still an active area of mathematical research, as many open questions remain. We will present different techniques for this interpolation, with their advantages and drawbacks, and with an
application to the Italian day-ahead market.
The paper is organized as follows. Section 2 describes the theoretical background, namely, mesh-free interpolation techniques based on radial basis function approximation.
Section 3 presents the database from the
Italian electricity market.
Section 4 is devoted to a short description of the numerical schemes and to the analysis of the results.
Section 5 concludes the paper.
\section{Meshless approximation}
Let us briefly note some features of the supply and demand curves that are relevant for our modeling:
\begin{itemize}
\item By construction, the curves are monotone.
\item The values attained by the supply curve are roughly clustered around {\bf layers}, corresponding to different
production technologies. In Italy they are non-dispatchable renewables, gas, coal, hydro, oil.
\item The fact that renewables come first makes the supply curve intrinsically ``meshless''.
\item Demand is much more inelastic than supply.
\end{itemize}
So, we are dealing with a scattered data interpolation problem. We have a large amount of points (each point represents price and amount of electricity) that we want to approximate. We can formalize this problem as follows.
Given a set of $N$ distinct \textit{data points} $X_N=\{x_i: i=1, 2, \ldots, N\}$ arbitrarily distributed on a domain $\Omega\subset {\mathbb{R}}$ and a set of \textit{data values} (or function values) $Y_N=\{y_i: i=1, 2,\ldots, N\}\subset {\mathbb{R}}$, the data interpolation problem consists in finding a function $s_f: \Omega \rightarrow {\mathbb{R}}$ such that
\begin{equation}\label{interpol_equation}
s_f(x_i)=y_i,\, i=1,\ldots,N.
\end{equation}
Let us recall briefly the most popular methods for the interpolation problem. Polynomial interpolation is the interpolation of a given data set by the polynomial of lowest possible degree that passes through the points of the dataset. For
given data sites $X_N$ and function values $Y_N$ there exists exactly one polynomial $p\in \pi_{N-1}({\mathbb{R}})$ that interpolates the data at the data sites. Therefore the space $\pi_{N-1}({\mathbb{R}})$ depends neither on the
data sites nor on the function values but only on the number of points.
Runge's phenomenon (1901) shows that for high values of $N$, the interpolation polynomial may oscillate wildly between the data points. Besides, polynomial interpolation does not guarantee monotonicity of the curves (see Figure \ref{PlinomAppr}).
\begin{figure}[h!]
\caption{Approximation of supply curve with polynomials}
\centering
\includegraphics[width=1\linewidth]{ApproximationOfSupplyCurveWithPolynomials}
\label{PlinomAppr}
\end{figure}
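The loss of monotonicity is easy to reproduce. Below is a small self-contained Python check (our own toy data, unrelated to the market curves) in which the degree-4 Lagrange interpolant of nondecreasing data dips below zero between the nodes:

```python
xs = [0, 1, 2, 3, 4]
ys = [0, 0, 1, 1, 1]          # nondecreasing (step-like) data

def lagrange(x):
    """Evaluate the unique degree-4 interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# the interpolant oscillates: it undershoots the data between the nodes
print(lagrange(0.5))           # about -0.367, below every data value
```

So even though the data are monotone, the interpolating polynomial is not.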
It is a well-established fact that a large
data set is better dealt with by splines than by polynomials. An aspect to notice, in contrast
to polynomials, is that the accuracy of the interpolation process using splines is based not on the
polynomial degree but on the spacing of the data sites. In particular, cubic splines are widely used to fit a smooth continuous function through discrete data. However, spline interpolation requires a mesh.
Notice that for all methods, the interpolant $s_f$ is expressed as a linear
combination of some basis functions $B_i$ , i.e. $$s_f(t)= \displaystyle \sum_{k=1}^d c_k B_k(t).$$
The basis functions in, e.g., polynomial interpolation do not depend on the data points. Another approach is to use a basis which depends on the data points.
One simple way to solve problem \eqref{interpol_equation} is to choose a fixed function $\phi:{\mathbb{R}}\rightarrow {\mathbb{R}}$
and to form the interpolant as
$$s_f(x)= \displaystyle \sum_{i=1}^N \alpha_i \phi(\|x-x_i\|),$$
where the coefficients $\alpha_i$ are determined by the interpolation conditions
$s_f(x_i)=y_i$. Therefore, the scattered data interpolation problem leads
to the solution of a linear system
\begin{equation*}
A\alpha=y, \text{ where } A_{i,j}=\phi(\|x_i-x_j\|).
\end{equation*}
The solution of the system requires
that the matrix $A$ is non-singular. It is enough to know in advance that the matrix is positive definite (see \cite{Wendland} for more details). Let us recall the definition of a strictly positive definite function.
\begin{definition}\label{pos_def}
A real-valued function $\Phi:{\mathbb{R}}\longrightarrow {\mathbb{R}}$ is called
\textit{positive semi-definite} if, for all $m\in {\mathbb{N}}$ and for any set of pairwise distinct points
$x_1, x_2, \ldots,x_m$, the $m\times m$ matrix
$$A=\left(\Phi(x_i-x_j)\right)_{i, j=1}^m$$
is positive semi-definite, i.e. for every column vector $z$ of $m$ real numbers the scalar $z^T A z\geq 0$. The function $\Phi:{\mathbb{R}}\longrightarrow {\mathbb{R}}$ is called (strictly)
\textit{ positive definite} if the matrix $A$ is positive definite, i.e. for every non-zero column vector $z$ of $m$ real numbers the scalar $z^T A z> 0$.
\end{definition}
The most important property of positive semi-definite matrices is that their eigenvalues are nonnegative, and so is their determinant.
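Returning to the linear system $A\alpha=y$ above: with a Gaussian kernel it can be assembled and solved directly. The following sketch (our own minimal Python illustration with made-up nodes; a naive elimination routine stands in for a library solver) verifies the interpolation conditions:

```python
import math

def rbf_interpolant(nodes, values, eps=1.0):
    """Solve A*alpha = y for s(x) = sum_i alpha_i * phi(|x - x_i|), Gaussian phi."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    n = len(nodes)
    # interpolation matrix A_ij = phi(|x_i - x_j|) and right-hand side
    A = [[phi(abs(xi - xj)) for xj in nodes] for xi in nodes]
    b = list(values)
    # naive Gaussian elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    alpha = [0.0] * n
    for i in range(n - 1, -1, -1):
        rest = sum(A[i][j] * alpha[j] for j in range(i + 1, n))
        alpha[i] = (b[i] - rest) / A[i][i]
    return lambda x: sum(a * phi(abs(x - xi)) for a, xi in zip(alpha, nodes))

s = rbf_interpolant([0.0, 1.0, 2.0], [1.0, 2.0, 0.0])
print([round(s(x), 6) for x in (0.0, 1.0, 2.0)])  # values close to [1.0, 2.0, 0.0]
```

The positive definiteness of the Gaussian kernel guarantees that the elimination never hits a zero pivot for distinct nodes.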
A radial function is a real-valued function whose value depends only on the distance from a center $\mathbf {c}$. One useful characterization of positive semi-definite univariate functions was given by Schoenberg in 1938 in terms of completely monotone functions: a continuous function $\phi:[0, \infty)\rightarrow {\mathbb{R}} $ is positive semi-definite if and only if $\phi\in C^\infty(0, \infty)$ and $(-1)^k\phi^{(k)}(r)\geq 0$ for all $r> 0$ and $k=0, 1,\ldots$.
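As a quick illustration of this criterion (our own example, not part of the original characterization), take $\phi(r)=e^{-\varepsilon r}$ with $\varepsilon>0$. Then
$$\phi^{(k)}(r)=(-\varepsilon)^k e^{-\varepsilon r},\qquad (-1)^k\phi^{(k)}(r)=\varepsilon^k e^{-\varepsilon r}\geq 0,\qquad r>0,\ k=0,1,\ldots,$$
so $\phi$ is completely monotone and hence positive semi-definite by Schoenberg's characterization.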
Some standard radial basis functions are
\begin{itemize}
\item ${\displaystyle \phi (r)=e^{-(\varepsilon r)^{2}}} $ (Gaussian),
\item $\phi (r)=e^{-\varepsilon r}(\varepsilon r+1)$ (Mat\'{e}rn),
\item $\phi (r)=(1-\varepsilon r)^4_+(4\varepsilon r+1)$ (Wendland),
\end{itemize}
where $\varepsilon>0$ denotes a shape parameter and $r=\|x\|_2$.
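A direct Python transcription of these three kernels (our own illustration; the shape parameter value is arbitrary) makes their common behaviour visible: each equals 1 at $r=0$ and decays with distance:

```python
import math

def gaussian(r, eps=1.0):
    return math.exp(-(eps * r) ** 2)

def matern(r, eps=1.0):
    return math.exp(-eps * r) * (eps * r + 1)

def wendland(r, eps=1.0):
    # (1 - eps*r)_+ denotes the positive part
    return max(1.0 - eps * r, 0.0) ** 4 * (4 * eps * r + 1)

for phi in (gaussian, matern, wendland):
    vals = [phi(r) for r in (0.0, 0.5, 1.0, 2.0)]
    print(phi.__name__, [round(v, 4) for v in vals])
```

Note that the Wendland kernel is compactly supported: it vanishes identically for $\varepsilon r\geq 1$.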
The idea of meshless approximation with radial basis functions is to find an approximant of $f$ in the following form:
$$ s_f(x) := \sum_{i=1}^N \alpha_i \phi(\|x - x_i\|) $$
where:
\begin{itemize}
\item the coefficients $\alpha_i$ and the {\bf centers} $x_i$ are to be chosen so that the interpolant is as close as possible to the original function $f$;
\item $\phi: {\mathbb{R}} \to {\mathbb{R}}$ is a {\bf radial basis function} (RBF).
\end{itemize}
Notice that the radial basis function satisfies $\phi \geq 0$; thus, with $\alpha_i \geq 0$,
$$ \sum_{i=1}^M \alpha_i \phi(\|x - x_i\|) \geq 0. $$
As we need to approximate piecewise constant monotone function from $[0, M]$ to ${\mathbb{R}}^+$, we decided to use the integrals of RBF. Namely, we want to find an approximant of the form
$$ s_f(t)= \int_0^t \sum_{i=1}^M \alpha_i \phi(\lambda_i \|x - x_i\|)\ dx = \sum_{i=1}^M \alpha_i \int_0^t \phi(\lambda_i \|x - x_i\|)\ dx $$
where $\lambda_i$ is a shape parameter for every center $x_i$. As radial basis functions, we choose Gaussian functions for analytical tractability.
Evidently, any supply curve and any demand curve can be approximated by a combination of error functions, the error function being the integral of a normalized Gaussian function. The standard error function is defined as:
$${\displaystyle {\begin{aligned}\operatorname {erf} (x)={\frac {1}{\sqrt {\pi }}}\int _{-x}^{x}e^{-t^{2}}\,dt={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt.\end{aligned}}} $$
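To see why these building blocks suit piecewise constant monotone curves, note that a single term $a(\operatorname{erf}(c(x-b))+1)$ reproduces a smooth jump of height $2a$ located at $b$ and sharpened by $c$. A small Python check with made-up jump locations and heights (using the standard-library \texttt{math.erf}):

```python
import math

def erf_step(x, a, b, c):
    """One building block: a smooth jump of height 2*a centered at b."""
    return a * (math.erf(c * (x - b)) + 1)

def s(x):
    # two sharp jumps: height 5 at x=1 and height 3 at x=3 (a = height/2)
    return erf_step(x, 2.5, 1.0, 10.0) + erf_step(x, 1.5, 3.0, 10.0)

# away from the jumps the sum sits on the plateaus of the step function
print(round(s(0.0), 6), round(s(2.0), 6), round(s(4.0), 6))  # -> 0.0 5.0 8.0
```

A large $c$ makes the transition sharp; a moderate $c$ smooths the corresponding step of the curve.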
In order to find the unknown coefficients $\alpha_i, \lambda_i, x_i$, we need to solve the global minimization problem:
$$\underset{p}{\min} \|s_f(x_i,p)-y_i\|_2^2,$$
where $p = (\alpha_i,\lambda_i,x_i)_{i=1,\ldots,N}$ and
$$ s_f(t,p) := \sum_{i=1}^M \alpha_i \int_0^t \phi(\lambda_i \|x - x_i\|)\ dx $$
and $\phi(t) = (\mathrm{erf}(t) + 1)/2$ is the primitive of a Gaussian kernel. However, this optimization problem is computationally very heavy, as it is a nonlinear and nonconvex minimization over $p \in {\mathbb{R}}^{3M}$.
For this reason, we divide our global problem into simpler subproblems of lower dimensionality, so that the final computation is faster. We describe two realizations of this approach in Section~\ref{sec4}.
\section{Data set}
We now use the data about supply and demand bids from the
Italian day-ahead electricity market, taken from the GME website www.mercatoelettrico.org. We consider the time period from 01.01.2017 to 31.12.2017. These data are in aggregated form, i.e., bids coming from different agents, but with the same price, are aggregated into one price layer. Even in this form, we are dealing with a massive amount of data. For instance, \textbf{2 800 687} offer and \textbf{558 926} bid layers were observed during this period.
\begin{table}[h!]
\centering
\caption{Data}
\label{tab1}
\begin{tabular}{|l|c|c|c|}
\hline
Date & \multicolumn{1}{l|}{Hour} & \multicolumn{1}{l|}{Volume (MW)} & \multicolumn{1}{l|}{Price (Euro)} \\ \hline
01-01-2017 & 1 & 13392.7 & 0 \\ \hline
01-01-2017 & 1 & 25 & 0.1 \\ \hline
01-01-2017 & 1 & 113.8 & 1 \\ \hline
01-01-2017 & 1 & 11 & 3.5 \\ \hline
01-01-2017 & 1 & 270.3 & 5 \\ \hline
01-01-2017 & 1 & 0.5 & 6 \\ \hline
.................. & \multicolumn{1}{l|}{......} & ...................... & .................... \\ \hline
31-12-2017 & 24 & 370 & 554.2 \\ \hline
31-12-2017 & 24 & 352 & 554.3 \\ \hline
31-12-2017 & 24 & 365 & 554.5 \\ \hline
31-12-2017 & 24 & 97 & 700 \\ \hline
31-12-2017 & 24 & 60000 & 3000 \\ \hline
\end{tabular}
\end{table}
This means that, on average, there are 324 offer and 65 bid layers for each hour of the year, corresponding to one supply curve and one demand curve, respectively.
It is a known fact that the dynamics of electricity trade displays a set of characteristic dependencies: on external weather conditions, on the hour of the day, on the day of the week, and on the time of the year. Variations in prices all depend on the principles of demand and supply. First of all, on the day-ahead market the energy is typically traded on an hourly basis, which means that the prices can and will vary per hour. For example, at 9:00 a.m. there could be a price peak, while at 4:00 a.m. prices could be only half of the peak price. Second, the weekly seasonal behaviour matters. Usually, it is necessary to differentiate between the two weekend days (Saturday and Sunday), the first business day of the week (Monday), the last business day of the week (Friday) and the remaining business days (see e.g. \cite{Andreis}). Thirdly, electricity spot prices display a strong yearly seasonal pattern: for instance, demand increases in summer, as consumers turn their air conditioners on, and also in winter because of electric heating in housing.
Since the number of offers (or bids) directly affects the complexity of the approximation, we decided to explore the relationship between the number of bids and offers and such characteristics as the hour of the day, the day of the week, and the month of the year. Based on the dependence between these three factors and electricity prices, we could expect that some hours and days have far fewer offers and bids than others. This analysis is presented in Figures~\ref{off_dep_hour}~--~\ref{off_dep_mon}.
The main conclusion that we have drawn is that there is no direct relationship between the number of offer and bid layers and the hour of the day, the day of the week, or the time of the year. In particular, during the 24 hours of the day the number of offer layers varies between 299 and 332, and the number of bid layers varies between 61 and 66. With regard to the day of the week, the number of offer layers varies between 310 and 329, and the number of bid layers varies between 55 and 68. Based on this observation, we decided to choose the same number of basis functions independently of the hour of the day, the day of the week, and the time of the year.
\begin{figure}[H]
\begin{minipage}{0.5\textwidth}
\includegraphics[width=1\textwidth]{Hour_dep_off_bid.jpg}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{\multirow{2}{*}{Hour}} & \multicolumn{2}{c|}{Number} & \multicolumn{1}{l|}{\multirow{2}{*}{Hour}} & \multicolumn{2}{c|}{Number} \\ \cline{2-3} \cline{5-6}
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{of offers} & \multicolumn{1}{l|}{of bids} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{of offers} & \multicolumn{1}{l|}{of bids} \\ \hline
\textbf{1} & 300 & 64 & \textbf{13} & 329 & 64 \\ \hline
\textbf{2} & 299 & 64 & \textbf{14} & 329 & 64 \\ \hline
\textbf{3} & 300 & 64 & \textbf{15} & 330 & 64 \\ \hline
\textbf{4} & 300 & 64 & \textbf{16} & 332 & 64 \\ \hline
\textbf{5} & 301 & 63 & \textbf{17} & 332 & 63 \\ \hline
\textbf{6} & 303 & 63 & \textbf{18} & 332 & 63 \\ \hline
\textbf{7} & 307 & 62 & \textbf{19} & 331 & 64 \\ \hline
\textbf{8} & 318 & 63 & \textbf{20} & 329 & 65 \\ \hline
\textbf{9} & 325 & 65 & \textbf{21} & 329 & 66 \\ \hline
\textbf{10} & 326 & 64 & \textbf{22} & 323 & 64 \\ \hline
\textbf{11} & 329 & 64 & \textbf{23} & 321 & 63 \\ \hline
\textbf{12} & 329 & 65 & \textbf{24} & 314 & 61 \\ \hline
\end{tabular}
\end{minipage}
\caption{Hour dependence of the number of offer and bid layers}\label{off_dep_hour}
\end{figure}
\begin{figure}[H]
\begin{minipage}{0.5\textwidth}
\includegraphics[width=1\textwidth]{Week_dep_off_bid.jpg}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\begin{tabular}{ccclll}
\cline{1-3}
\multicolumn{1}{|c|}{\multirow{2}{*}{Day}} & \multicolumn{2}{c|}{Number} & \multicolumn{3}{l}{\multirow{7}{*}{}} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{of offers} & \multicolumn{1}{l|}{of bids} & \multicolumn{3}{l}{} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Sunday}} & \multicolumn{1}{c|}{310} & \multicolumn{1}{c|}{55} & \multicolumn{3}{l}{} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Monday}} & \multicolumn{1}{c|}{310} & \multicolumn{1}{c|}{56} & \multicolumn{3}{l}{} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Tuesday}} & \multicolumn{1}{c|}{322} & \multicolumn{1}{c|}{68} & \multicolumn{3}{l}{} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Wednesday}} & \multicolumn{1}{c|}{324} & \multicolumn{1}{c|}{67} & \multicolumn{3}{l}{} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Thursday}} & \multicolumn{1}{c|}{326} & \multicolumn{1}{c|}{68} & \multicolumn{3}{l}{} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Friday}} & \multicolumn{1}{c|}{327} & \multicolumn{1}{c|}{68} & \multicolumn{3}{c}{\multirow{5}{*}{\textbf{}}} \\ \cline{1-3}
\multicolumn{1}{|c|}{\textbf{Saturday}} & \multicolumn{1}{c|}{329} & \multicolumn{1}{c|}{68} & \multicolumn{3}{c}{} \\ \cline{1-3}
\end{tabular}
\end{minipage}
\caption{Weekly dependence of the number of offer and bid layers}\label{off_dep_week}
\end{figure}
\begin{figure}[H]
\begin{minipage}{0.6\textwidth}
\includegraphics[width=1\textwidth]{Month_dep_off_bid.jpg}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{tabular}{|c|c|c|ccc}
\cline{1-3}
\multirow{2}{*}{Month} & \multicolumn{2}{c|}{Number} & \multicolumn{1}{l}{\multirow{2}{*}{}} & \multicolumn{2}{c}{} \\ \cline{2-3}
& \multicolumn{1}{l|}{of offers} & \multicolumn{1}{l|}{of bids} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \cline{1-3}
\textbf{January} & 331 & 65 & \textbf{} & & \\ \cline{1-3}
\textbf{February} & 341 & 79 & \textbf{} & & \\ \cline{1-3}
\textbf{March} & 324 & 81 & \textbf{} & & \\ \cline{1-3}
\textbf{April} & 305 & 72 & \textbf{} & & \\ \cline{1-3}
\textbf{May} & 298 & 57 & \textbf{} & & \\ \cline{1-3}
\textbf{June} & 298 & 54 & \textbf{} & & \\ \cline{1-3}
\textbf{July} & 322 & 55 & \textbf{} & & \\ \cline{1-3}
\textbf{August} & 305 & 58 & \textbf{} & & \\ \cline{1-3}
\textbf{September} & 300 & 64 & \textbf{} & & \\ \cline{1-3}
\textbf{October} & 309 & 66 & \textbf{} & & \\ \cline{1-3}
\textbf{November} & 348 & 58 & \textbf{} & & \\ \cline{1-3}
\textbf{December} & 357 & 57 & \textbf{} & & \\ \cline{1-3}
\end{tabular}
\end{minipage}
\caption{Monthly dependence of the number of offer and bid layers}\label{off_dep_mon}
\end{figure}
\section{Numerical experiments}\label{sec4}
Since the maximum market clearing price for the period under review (i.e., from 01.01.2017 to 31.12.2017) is 350 \euro, in all the experiments we restricted ourselves to a maximum price of 400 \euro. For the realization of our algorithm we use the function \texttt{lsqcurvefit} from the MATLAB Optimization Toolbox.
First, we download the data from a text file and choose the number of basis functions $M$. After that, we need to divide our problem into $M$ sub-problems. Then each part of the supply curve is approximated by one error function.
Our first attempt (Method 1) was simply to divide the $y$-axis uniformly into $M$ equal intervals (see Figure \ref{methods1}). However, this approach is ineffective, as a single huge jump can span many intervals, uselessly consuming many components.
To resolve this problem we created a simple algorithm (Method 2) that finds the points $p_1, \ldots, p_M$ on the $y$-axis such that our supply curve takes the value exactly $p_i$ on some non-trivial interval (see Figure \ref{methods2}).
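A possible realization of this breakpoint search (our own Python sketch; the real inputs are the cumulated GME layers, here replaced by a made-up sample) run-length encodes the sampled curve and keeps the levels of the widest plateaus:

```python
def breakpoints(samples, m):
    """Pick m-1 y-axis levels where the piecewise constant curve stays flat longest."""
    runs, start = [], 0
    for i in range(1, len(samples) + 1):
        if i == len(samples) or samples[i] != samples[start]:
            runs.append((i - start, samples[start]))  # (plateau width, level)
            start = i
    widest = sorted(runs, reverse=True)[: m - 1]      # keep the widest plateaus
    return sorted(level for _, level in widest)

curve = [0, 0, 0, 0, 5, 5, 7, 7, 7, 7, 7, 9]   # hypothetical sampled supply curve
print(breakpoints(curve, 3))                    # -> [0, 7]
```

Choosing levels on which the curve is constant over a non-trivial interval ensures that each sub-problem contains a whole jump rather than a fragment of one.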
\begin{figure}[h!]
\begin{minipage}[b]{.4\textwidth}
\centering
\includegraphics[width=1\textwidth]{devided1.jpg}
\caption{Method 1}\label{methods1}
\end{minipage}
\hfill
\begin{minipage}[b]{.4\textwidth}
\centering
\includegraphics[width=1\textwidth]{devided2.jpg}
\caption{Method 2} \label{methods2}
\end{minipage}
\end{figure}
Then we solve $M$ times the same optimization problem for the values of the supply curve between $p_i$ and $p_{i+1}$ using the function \texttt{lsqcurvefit} (see Figure \ref{Devided}). On each part we need to find only the 3 coefficients $a_i, b_i, c_i$ of the function \begin{equation}
G(x)=\sum_{i=1}^M a_i (\operatorname {erf}(c_i\cdot(x-b_i))+1).
\end{equation}
Here, for convenience of representation, we use $\operatorname {erf}(c_i\cdot(x-b_i))+1$ instead of $\operatorname {erf}(c_i\cdot(x-b_i))$, as our data values are never negative.
The \texttt{lsqcurvefit} function solves nonlinear data-fitting problems in the least-squares sense. Suppose that we have data points $X_N=\{x_i: i=1, 2, \ldots, N\}$ and data values $Y_N=\{y_i: i=1, 2,\ldots, N\}\subset {\mathbb{R}}$, and we want to find a function $f$ such that $f(x_i)\approx y_i,\, i=1,\ldots,N.$ We can consider a family of functions $\{f(x,p): p\in {\mathbb{R}}^k\}$, depending on some parameter $p\in {\mathbb{R}}^k$. Let $p_0\in {\mathbb{R}}^k$ be an ``initial guess'' such that $f(x_i,p_0)$ is reasonably close to $y_i$. The function \texttt{lsqcurvefit} starts at $p_0$ and finds coefficients $p$ in some neighborhood of $p_0$ that best fit the data set $Y_N$:
$$\underset{p}{\min}\|f(x_i,p)-y_i\|_2^2.$$
Notice that this function works well only if the number of parameters $(p_1, \ldots, p_k)$ is not very large. That is why we are forced to divide our problem into many local problems.
For optimizing the numerical procedure we solved some parts of the optimization problem by ourselves: in fact, when the interval $[p_i,p_{i+1}]$ contains only one jump, then
$$ a_i := f(p_{i+1}) - f(p_i) $$
for any kernel function $\phi$ with unit integral.
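In place of \texttt{lsqcurvefit}, a crude but dependency-free analogue of one local sub-problem can be written as a grid search in $(b_i, c_i)$ with the optimal $a_i$ obtained in closed form (our own Python sketch; the grids and the synthetic data are made up):

```python
import math

def fit_segment(xs, ys, b_grid, c_grid):
    """Least-squares fit of a*(erf(c*(x-b))+1) to one segment of the curve."""
    best = None
    for b in b_grid:
        for c in c_grid:
            phi = [math.erf(c * (x - b)) + 1 for x in xs]
            denom = sum(p * p for p in phi)
            if denom == 0.0:
                continue
            a = sum(p * y for p, y in zip(phi, ys)) / denom  # optimal a, closed form
            err = sum((a * p - y) ** 2 for p, y in zip(phi, ys))
            if best is None or err < best[0]:
                best = (err, a, b, c)
    return best

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.5 * (math.erf(4.0 * (x - 1.0)) + 1) for x in xs]   # synthetic segment
err, a, b, c = fit_segment(xs, ys, b_grid=[0.5, 1.0, 1.5], c_grid=[2.0, 4.0, 8.0])
print(round(a, 6), b, c)   # recovers 2.5, 1.0, 4.0
```

A real solver refines $(a,b,c)$ continuously from an initial guess instead of scanning a grid, but the structure of the local problem is the same.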
\begin{figure}[H]
\caption{Local interpolation by one error function with \texttt{lsqcurvefit} function}
\label{Devided}
\centering
\includegraphics[width=0.45\linewidth]{algorithm2}
\includegraphics[width=0.45\linewidth]{algorithm3}
\end{figure}
A summary of the results is shown in Table \ref{tab2}. For all experiments we used the data for the period from 01.01.2017 to 31.12.2017. We used different numbers of basis functions to approximate the supply and demand curves, and then compared the equilibrium price obtained as the intersection of the approximants ($P_{appr}$) with the true equilibrium price ($P$). We did this for each hour of each day, and then computed the average value of $|P - P_{appr}|$ (Error) over all 8 664 hours of the year and the maximum value of $|P - P_{appr}|$ (Max error).
These empirical results show that the accuracy of our approximation is good enough if we use 5 basis functions for the demand curve and 15 basis functions for the supply curve. Beyond that, increasing the number of functions leads to more time consumption, while the gain in accuracy is less significant.
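The two reported metrics are simply the mean and the maximum absolute deviation between true and approximated clearing prices; as a sanity check (with made-up numbers, not the actual experiment):

```python
def price_errors(true_prices, approx_prices):
    """Mean and max of |P - P_appr| over all hours."""
    devs = [abs(p - q) for p, q in zip(true_prices, approx_prices)]
    return sum(devs) / len(devs), max(devs)

true_p   = [50.0, 62.0, 48.5, 70.0]   # hypothetical clearing prices
approx_p = [51.0, 60.0, 48.5, 73.0]   # hypothetical approximated prices
print(price_errors(true_p, approx_p))   # -> (1.5, 3.0)
```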
\bigskip
\begin{table}[H]
\centering
\caption{Results of numerical experiment}
\label{tab2}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Number of functions} & \multicolumn{3}{c|}{Results} \\
\hline
For demand & For supply & Error & Max error &Running time \\
\hline
5 &5 & 3.9 \euro &28.6 \euro & 69 min. \\
5 &10 & 2.2 \euro & 14.9 \euro & 82 min. \\
5 &15 & 1.5 \euro & 11.1 \euro & 103 min. \\
5 &20 & 1.3 \euro & 9.1 \euro & 110 min. \\
5 &25 & 1.2 \euro & 9.3 \euro & 135 min. \\
5 &30 & 1.2 \euro & 9.4 \euro & 159 min. \\
5 &35 & 1.2 \euro & 9.8 \euro & 177 min. \\
5 &40 & 1.2 \euro & 9.6 \euro & 190 min. \\
5 &45 & 1.2 \euro & 9.6 \euro & 199 min. \\
5 &50 & 1.2 \euro & 9.6 \euro & 207 min. \\
\hline
10 &5 & 3.9 \euro & 39.5 \euro & 100 min. \\
10 &10 & 2.1 \euro & 14.9 \euro & 128 min. \\
10 &15 & 1.4 \euro & 8.9 \euro & 146 min. \\
10 &20 & 1.2 \euro & 9.1 \euro & 162 min. \\
10 &25 & 1.1 \euro & 9.5 \euro & 183 min. \\
10 &30 & 1.1 \euro & 9.3 \euro & 199 min. \\
10 &35 & 1.0 \euro & 9.4 \euro & 223 min. \\
10 &40 & 0.98 \euro & 9.8 \euro & 241 min. \\
10 &45 & 0.98 \euro & 9.6 \euro & 255 min. \\
10 &50 & 0.98 \euro & 9.6 \euro & 273 min. \\
\hline
\end{tabular}
\end{table}
As a last step, we analyzed the stability of the coefficients for the case when we approximate the supply curve with 10 basis functions and the demand curve with 5 basis functions over the same period of time, i.e.
$$S(x)=\sum_{i=1}^{10} A_i (\operatorname {erf}(C_i\cdot(x-B_i))+1)\quad\text{ and } \quad D(x)=\sum_{i=1}^5 E_i (\operatorname {erf}(K_i\cdot(x-L_i))+1).$$
From Table \ref{tab3} we can see that these coefficients do not have a stable behavior (the table reports the minimum, mean, and maximum values). Although the values attained by the supply curve are clustered around layers, which correspond to different
production technologies, we came to the conclusion that we cannot choose these coefficients uniformly for all curves, but need to calculate them for each supply and demand curve.
\begin{figure}[H]
\caption{Supply curve approximated with 10 basis functions}
\centering
\includegraphics[width=1\linewidth]{levels_ten}
\label{Algorithm}
\end{figure}
\begin{table}[H]
\centering
\caption{Stability of the coefficients}
\label{tab3}
\begin{tabular}{|c|c|c|c|}
\hline
& Min & Mean & Max \\
\hline
\multicolumn{4}{|c|}{Coefficients for supply curve} \\
\cline{1-4}
$A_1$ & 10 & 14.76981 & 18 \\
$A_2$ & 10.5 & 15.15519 &21 \\
$A_3$ & 10.5 & 15.21438 & 19.5 \\
$A_4$ & 11 & 15.53944 & 22 \\
$A_5$ & 11 & 16.8968 & 27.5 \\
$A_6$ & 12.5 & 20.44287 & 27 \\
$A_7$ & 14.5 & 22.15457 & 33 \\
$A_8$ &19 & 29.69132 & 57.5 \\
$A_9$ &17 & 24.48784 & 48 \\
$A_{10}$ & 21 & 25.64777 & 50 \\
\hline
\multicolumn{4}{|c|}{Coefficients for demand curve} \\
\cline{1-4}
$E_1$ & 12 & 30.95154 & 37.5 \\
$E_2$ & 25 & 34.31039 & 58.5 \\
$E_3$ & 25 & 36.24469 & 50 \\
$E_4$ & 33 & 40.19715 & 50 \\
$E_5$ & 50 & 58.29623 & 75 \\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
We presented a parsimonious way to represent supply and demand curves, using a mesh-free method based on Radial Basis Functions. Using the tools of functional data analysis, we are able to approximate the original curves with far fewer parameters than the original ones. Namely, in order to approximate piecewise constant monotone functions, we use linear combinations of integrals of Gaussian functions.
The real data about supply and demand bids from the Italian day-ahead electricity market showed that there is no direct relationship between the number of offer and bid layers and the hour of the day, the day of the week, and the time of the year. Based on this observation, we decided to choose the same number of basis functions independently of the hour of the day, the day of the week, and the time of the year.
The numerical results showed that the accuracy of our approximation is good enough, if we use 5 basis function for the demand curve and 15 basis function for the supply curve, and then the increase in the number of functions leads to more time-consumption, but the increase of the accuracy is less significant.
\section*{Acknowledgements}
The authors thank Enrico Edoli, Marco Gallana, and Emma Perracchione for several useful discussions.
The authors wish to thank also Florian Ziel, Carlo Lucheroni, Stefano Marmi, Sergei Kulakov, Enrico Moretto for their comments and suggestions. The authors would like to thank the participants of the following events: Energy Finance Christmas Workshop (2018) in Bolzano; Quantitative Finance Workshop (2019) in Zurich; Energy Finance Italia Workshop in Milan (2019), the Freiburg-Wien-Zurich Workshop (2019) in Padova.
The first author is pursuing her Ph.D. with a fellowship for international students funded by Fondazione Cassa di Risparmio di Padova e Rovigo (CARIPARO) and acknowledges the support of this project. The second author acknowledges financial support from the research projects of the University of Padova BIRD172407-2017 "New perspectives in stochastic methods for finance and energy markets" and BIRD190200/19 "Term Structure Dynamics in Interest Rate and Energy Markets: Modelling and Numerics".
\section{Introduction}
\label{sec:Introduction}
\IEEEPARstart{T}{he} task of object detection in aerial images is to extract the positions of objects in categories defined by domain experts, according to how common a kind of object is and its value for real-world applications \cite{DOTA,GLSNet}. With the rapid development of remote sensing acquisition technology, object detection methods for aerial images are widely used in ship monitoring, maritime search and rescue, traffic control, power line inspection, military reconnaissance, and other fields \cite{OcSaFPN}. In addition, for unmanned aerial vehicles (UAVs) and satellites with limited energy resources, aerial image object detection is also used as a pre-processing step to select important images with suspected targets and return them to the ground station first. This places higher requirements on the accuracy and speed of object detection algorithms \cite{Lightweight-RS}. Given this rapidly growing demand, traditional object detection algorithms, which extract hand-designed low-level features and then perform classification and bounding box prediction, can no longer meet the requirements of detection tasks involving high speed, multiple categories, and complex backgrounds \cite{Optical Satellite Images With Complex Backgrounds,Ship Detection From Optical Satellite Images Based on Saliency Segmentation,A Survey}.
Widely used convolutional neural networks (CNNs) have greatly promoted the development of object detection technology. Overall, current object detection networks can be divided into single-stage detectors and two-stage detectors. On the one hand, the two-stage detector can also be called the region proposal-based method. This type of algorithm usually uses a small two-class network to extract region proposals first, and then performs bounding box refinement and category prediction. The region proposal-based method can achieve higher detection accuracy, but its structure is not well suited to edge computing devices. DPM \cite{DPM}, R-CNN \cite{RCNN}, Fast R-CNN \cite{FASTRCNN}, Faster R-CNN \cite{FASTERRCNN}, the feature pyramid network (FPN) \cite{FPN}, Cascade R-CNN \cite{Cascade}, Mask R-CNN \cite{Mask r-cnn}, etc. are representative algorithms. On the other hand, the single-stage detector can also be called the regression-based method, including Overfeat \cite{Overfeat}, you only look once (YOLO) \cite{YOLO}, YOLOv2 \cite{YOLOv2}, YOLOv3 \cite{YOLOv3}, YOLOv4 \cite{YOLOv4}, YOLOX \cite{YOLOX}, the single shot detector (SSD) \cite{SSD}, FCOS \cite{Fcos}, and RetinaNet \cite{Focalloss}. The single-stage detector performs object detection on an entire image in one pass, and treats the prediction of bounding boxes as a coordinate regression task. This kind of method is well suited for embedding in edge computing devices, but its accuracy is generally not as good as that of two-stage detectors.
It is worth mentioning that some methods transform the prediction of bounding boxes into the task of predicting the key points of the bounding boxes \cite{CornerNet,CornerNet-Lite,FoveaBox}. These methods are fast and efficient, but struggle in natural-image scenes where objects occlude each other.
At present, encouraged by the great success of deep-learning-based object detection in natural images \cite{Deep learning,ResNet}, many researchers have proposed utilizing a similar methodology for object detection in aerial images \cite{A Survey,Ship Detection Visual Attention}. Figure~\ref{rs-and-natural} shows the difference between natural images and aerial images. First of all, the backgrounds of aerial images are more complex, which places higher demands on the algorithms. Secondly, aerial images are taken from a vertical viewing perspective, so there is almost no occlusion between objects. In addition, aerial images carry rich scene-target semantic information. Finally, due to the different acquisition platforms, flight trajectories, and sensors used when acquiring aerial images, almost every aerial image product has a unique resolution and imaging characteristics. This leads to drastic scale changes of the same object, and the scales of different objects also differ considerably. To sum up, common deep-learning-based object detection algorithms from the natural image domain cannot be directly applied to aerial image object detection.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Definitions/rs-and-natural.png}
\end{center}
\caption{The difference between natural images and aerial images. (\textbf{a}) A port image obtained by an aerospace or aviation platform. (\textbf{b}) A port image captured by surveillance equipment or a handheld device.}
\label{rs-and-natural}
\end{figure}
In this paper, in order to obtain improved object detection results with optical remote sensing images, we put forward an effective network structure, namely, the relationship representation network for object detection in aerial images (RelationRS). In RelationRS, we first propose a multi-scale feature fusion module to deal with dramatic scale changes, which can dynamically learn the relationships of objects at different scales. Besides, the potential relationships between the scenes of different aerial image patches can be learned by this module at the same time. The dual relationship module realizes the extraction and characterization of multi-scale relationships and relationships between different scenes, and dynamically generates weight parameters according to the input image data to guide the fusion of multi-scale features. Given that objects in aerial images usually do not occlude each other, we then introduce the bridging visual representation module (BVR) \cite{BVR}, which integrates key-point detection and bounding box detection, into the field of aerial image object detection to cope with complex backgrounds. BVR can build relationships between different types of representations.
The proposed algorithm tested on the DOTA dataset \cite{DOTA} demonstrates that the proposed RelationRS achieves a state-of-the-art detection performance.
The rest of this paper is organized as follows. Section~\ref{sec:RelatedWork} gives a brief review of the related work on aerial image object detection based on deep learning, the multi-scale feature representations, and the conditional convolution mechanism. In Section~\ref{sec:ProposedMethod}, we introduce the proposed method in detail. The details of the dataset, the experiments conducted in this study, and the results of the experiments are presented in Section~\ref{sec:ExperimentsandResults}. Section~\ref{sec:Conclusions} concludes this paper with a discussion of the results.
\section{Related Work}
\label{sec:RelatedWork}
\subsection{Object Detection of Aerial Images}
\label{sec:ObjectDetectionofAerialImages}
Given the characteristics of aerial images, many researchers currently focus on the two-stage detector as the baseline to obtain higher-precision detection results, compared to the single-stage detector.
Zou et al. \cite{SVDNet} proposed a singular value decomposition network (SVDNet) for ship object detection in aerial images. To improve detection accuracy, the SVDNet combines singular value decomposition and convolutional neural networks. Since the region proposals are extracted by the selective search algorithm \cite{selective1,selective2}, this non-end-to-end implementation greatly reduces the speed of the algorithm. Dong et al. \cite{Sig-NMS} optimized the non-maximum suppression algorithm, and the proposed Sig-NMS method has better accuracy for small objects. Both Deng et al. \cite{Toward fast and accurate} and Xiao et al. \cite{Airport detection based on a multiscale} have used sliding windows to extract region proposals, then used convolutional neural networks to extract deep learning-based features, and finally used non-rotating rectangular boxes to characterize objects. A large number of regions in aerial images belong to the background class where no object exists. Therefore, the method of using the sliding windows to extract region proposals is inefficient, and it is easy to cause an imbalance between positive and negative samples. Xu et al. \cite{Deformable convnet with aspect ratio} and Ren et al. \cite{Small object based on Faster} applied deformable convolution \cite{Deformable} to aerial image object detection tasks with complex boundaries. Through expanding the receptive field of the convolution kernel, the deformable convolution with variable sampling position is beneficial to extract the complex boundaries of objects in aerial images. Similarly, Hamaguchi et al. \cite{Effective use of dilated convolutions} has used the dilated convolution filters \cite{dilated convolution} to optimize the receptive field size of the convolution kernel in aerial image object detection and segmentation, which is conducive to the extraction of scene semantic information. 
CDD-Net \cite{CDD-Net} used an attention mechanism to improve the ability of multi-scale feature representation. In addition, CDD-Net used a local semantic feature network to extract the feature information of the area around an object, to compensate for the limited filter size of the CNNs. Finally, the detection task is performed with both the feature maps of the region proposals and the information around the objects. To improve the efficiency of multi-scale feature utilization, A2RMNet \cite{A2RMNet} adopted a gate structure to automatically select and fuse multi-scale features, and then adopted the same region proposal feature map pooling method as R2CNN \cite{R2CNN} to obtain feature maps with three different aspect ratios; the attention mechanism is then used to optimize and merge the three feature maps to improve the accuracy of object detection.
Different from the algorithms using the rectangular bounding box frame, some aerial image object detection methods based on two-stage detectors believe that the use of the rotating bounding box frame can better describe the object and reduce the influence of background pixels. Although RRPN \cite{RRPN} is proposed for the task of text detection, it has achieved excellent results in many competitions related to aerial imagery. Similarly, both Li et al. \cite{Rotation-insensitive} and Ding et al. \cite{RoITransformer} adopted the lightweight structure to obtain more accurate rectangular bounding box characterization with different angles in the region proposal extraction stage. When performing RoI Pooling operations, this kind of rotation bounding box frame tries to eliminate the background pixels in the region proposal to improve the characteristics of the object itself. Yang et al. \cite{Scrdet} constructed a two-stage detector, which performs well in small objects and densely arranged objects detection tasks, namely SCRDet. Firstly, for SCRDet, a new feature fusion branch, called SF-Net, is proposed to improve the recall value of the region proposals. Then, SCRDet adopted a MDA-Net with self-attention mechanism to reduce the interference of background pixels and improve the feature representation ability of the object itself. Finally, the loss function is improved with a constant factor. Both CAD-Net \cite{CAD-Net} and GLS-Net \cite{GLSNet} improved feature representation ability of the object itself by global semantic branches with attention mechanism or saliency algorithm. These global semantic branches are used to compensate for the lack of scene information caused by the limited size of the receptive field.
RADet \cite{RADet} improved the feature representation ability under scale changes by fusing the front layer feature maps and the deeper layer feature maps; this kind of fusion strategy is useful for small object detection tasks. Moreover, on the basis of Mask R-CNN \cite{Mask r-cnn}, RADet can predict the instance mask and the rotated rectangular bounding box at the same time based on the attention mechanism. In RADet, the minimum rectangular area of the object is used as the ground truth of the mask, which actually still contains the background area. For this reason, RADet does not realize a high-precision characterization of the object itself. Similarly, inspired by Mask R-CNN, Mask OBB \cite{Mask OBB} adopted the inception module \cite{Inception-v4} to fuse feature maps from different depths and utilized the attention mechanism to construct semantic segmentation feature maps. Finally, Mask OBB can predict the bounding boxes, rotated bounding boxes, and instance masks simultaneously. In order to deal with the problem of scale changes and small objects, Li et al. \cite{Learning object-wise semantic} proposed a method that can predict bounding boxes and rotated bounding boxes with a module similar to the inception module and a semantic segmentation module. However, the above two methods require multiple types of samples in the training stage. Xu et al. \cite{Gliding vertex} believe that it is difficult to directly predict the rotated bounding boxes, and that the detection of rotated objects should be implemented step by step. That is, the non-rotated bounding boxes need to be predicted first, and then the offsets of the four vertices can be calculated. Zhu et al. \cite{Adaptive period embedding} proposed that it is difficult and inefficient to directly calculate the angles of the bounding boxes. 
Therefore, two different two-dimensional periodic vectors are used to represent angles, and a new intersection over union (IoU) is used to solve the problem of the object with a large length and width ratio. Fu et al. \cite{Rotation-aware and multi-scale} extracted features of region proposals with angles and merged feature maps from different depths through bidirectional information flow to enhance the representation ability of the feature pyramid network. ReDet \cite{Redet} encodes the rotation-equivariance and the rotation-invariance, which can reduce the demand for network parameters while realizing the detection of rotated objects. Based on the characteristics of images in frequency, OcSaFPN \cite{OcSaFPN} is proposed to improve the accuracy of object detection in aerial images with noise.
Object detection algorithms based on single-stage detectors in aerial images have developed rapidly, and the accuracy gap with two-stage detectors has been narrowing or even closing. Aerial image object detection algorithms based on single-stage detectors share the characteristics of being fast, concise, and conducive to deployment on dedicated computing chips and edge computing equipment. For this reason, object detection algorithms based on single-stage detectors in aerial images have great potential for production and application in a variety of scenarios. By adding a new branch for scene prediction, You Only Look Twice (YOLT) \cite{YOLT} realizes the simultaneous prediction of objects and scene information, and forms the association between the object and the scene information through the loss function. However, this method does not make up for the shortcomings of YOLOv2 itself, and most datasets in the aerial image field lack scene annotations. Inspired by SSD, FMSSD \cite{FMSSD} adopted the dilated convolutional filter to enlarge the size of the receptive field. Although this method can extract the information of the object and its surrounding area at the same time, the size of the surrounding area is limited and the feature maps still cannot describe the scene semantics well. Zou et al. \cite{Random access memories} realized small object detection in high-resolution remote sensing images based on a Bayesian priors algorithm, and optimized the memory overhead when processing large images. On the basis of RetinaNet, R3Det \cite{R3det} predicts the rotated bounding boxes through anchors with angles and the first head network. In addition, a feature refinement module (FRM) is designed to reconstruct the entire feature map to solve the problem of misalignment. The FRM has a lightweight structure and a rigorous, efficient code implementation, and can be easily inserted into a variety of cascaded detectors.
\subsection{Multi-Scale Feature Representations}
\label{sec:Multi-ScaleFeatureRepresentations}
For convolutional neural networks, the fusion and use of multi-scale feature maps can greatly improve the detection accuracy of the algorithm in small target detection tasks and scenes with dramatic scale changes \cite{OcSaFPN}.
In the field of object detection based on convolutional neural networks, feature pyramid network (FPN) \cite{FPN} is one of the earliest effective ways to solve multi-scale problems, and it is also the most widely used feature pyramid structure. FPN receives the multi-scale features from the backbone structure and then builds a top-down information transfer path to enhance the representation capabilities of the multi-scale features. While retaining the top-down information flow path of FPN, PANet \cite{PANet} adds a bottom-up information transmission path to realize the two-way interaction between shallow features and deep features. Furthermore, the adaptively spatial feature fusion (ASFF) \cite{ASFF} is designed with dense connections to transfer information between features of different scales.
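The top-down pathway of FPN described above can be sketched as follows. The shapes, the nearest-neighbour upsampling, and the omission of the lateral $1\times1$ and output $3\times3$ convolutions are simplifying assumptions made only for illustration:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_top_down(laterals):
    """FPN-style top-down fusion.

    `laterals` is a list of (C, H, W) maps ordered from the deepest
    (smallest) level to the shallowest; each shallower map is twice the
    spatial size of the previous one.  Lateral 1x1 convolutions are
    omitted for brevity.
    """
    outputs = [laterals[0]]
    for lat in laterals[1:]:
        outputs.append(lat + upsample2x(outputs[-1]))
    return outputs[::-1]  # shallowest-first, like (P3, P4, P5)

c5 = np.ones((8, 4, 4))
c4 = np.ones((8, 8, 8))
c3 = np.ones((8, 16, 16))
p3, p4, p5 = fpn_top_down([c5, c4, c3])
```

Each level thus accumulates semantic information from all deeper levels, which is what gives the shallow, high-resolution maps their improved representation power.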
The scale-transfer module proposed by STDL \cite{STDL} reconstructs multi-scale feature maps without introducing new parameters. Kong et al. \cite{Deep feature pyramid reconfiguration} first fused multi-scale feature maps and then used a global attention branch to reconstruct these features. Both AugFPN \cite{Augfpn} and $U^{2}-ONet$ \cite{U2-ONet} output multiple feature maps of different scales, and then perform loss calculations on each level.
NAS-FPN \cite{Nas-fpn} adopted an adaptive search algorithm to allow the system to automatically find the optimal multi-scale feature information flow path, thereby forming a pyramid structure with a fixed connection path. This kind of network design logic is different from the common feature pyramid, which can avoid the complicated manual design process, but the automatically search process requires huge computing resources. Inspired by NAS-FPN, MnasFPN \cite{Mnasfpn} added the characteristics of mobile hardware to the search algorithm. Therefore, when searching for the optimal network structure, the search algorithm not only considers accuracy as the only basis for judgment but also takes the hardware characteristics into account. Therefore, the deployment of MnasFPN on mobile is more advantageous.
BiFPN \cite{BiFPN} improved multi-scale expression ability through connections across different scales and short-cut operations. OcSaFPN \cite{OcSaFPN} improved the robustness of multi-scale features on noisy data by assigning different weights to feature maps from different depths.
\subsection{Conditional Convolution Mechanism}
\label{sec:ConditionalConvolutionMechanism}
Conditional convolution, which can also be called dynamic filter, was first proposed by Jia et al. \cite{Dynamic filter}. Different from the traditional convolutional layers with fixed weight parameters in the inference stage, the parameters of a conditional convolutional layer are constantly changing with different input data. Therefore, the parameter form is more flexible. This variable parameter form can better adapt to the input data, thus it has gradually attracted the attention of many researchers.
At the same time, hyper networks \cite{Hypernetworks} were proposed to generate weights for another network. Later, this mechanism was also adopted for the style transfer task \cite{Neural style transfer}. A kind of dynamic upsampling filter is used by Jo et al. \cite{Deep video super-resolution} for the task of high-resolution video reconstruction. Similarly, Meta-SR \cite{Meta-SR} is also applied to the super-resolution reconstruction task. CondConv \cite{Condconv} described the logic of conditional convolution in detail and used the form of group convolution to deal with the situation of multiple data in a batch. Wu et al. \cite{Dynamic filtering with large sampling} adopted the idea of the dynamic filter to generate optical flow data. Both Harley et al. \cite{Segmentation-aware convolutional networks} and CondInst \cite{CondInst} adopted the conditional convolution mechanism to predict instance masks. Xue et al. \cite{Visual dynamics} adopted the conditional convolution mechanism to generate future frames. Both Sagong et al. \cite{cGANs} and Liu et al. \cite{Learning to predict layout-to-image} introduced the conditional convolution mechanism into the generative adversarial network. CondLaneNet \cite{CondLaneNet} and ConDinet++ \cite{ConDinet++} address the lane line extraction task and the road extraction task in remote sensing images, respectively.
In summary, most current research on conditional convolution focuses on pixel-level tasks, and the amount of related research is still small. Applications related to remote sensing images are even rarer.
\section{Proposed Method}
\label{sec:ProposedMethod}
In this section, we introduce the RelationRS algorithm. The flow chart of the proposed object detection method is shown in Figure~\ref{RelationRS}. The proposed RelationRS is based on the classic anchor-free detector, namely FCOS \cite{Fcos}, with the backbone module (ResNet50 \cite{ResNet}) and the FPN structure \cite{FPN}. Firstly, the input aerial image data flows through the backbone network and the FPN module, and feature maps of five scales are obtained (${P_{2}, P_{3}, P_{4}, P_{5}, P_{6}}$). In order to better adapt to multi-scale object detection tasks, the dual relationship module is designed to fuse features of different scales, which is explained in Section~\ref{sec:DualRelationshipModule}. On the one hand, this dual relationship module can learn the relationship of an object at different scales, and dynamically generate the fusion weights according to the input data to guide the fusion of multi-scale information. On the other hand, the dual relationship module can learn the potential scene semantics between different patches in one batch, and improve the detection accuracy through the comparison between different scenes. In Section~\ref{sec:BridgingVisualRepresentationsforObjectDetectioninAerialImages}, on the basis of the fused multi-scale information, we use the bridging visual representations module to suppress the influence of complex background information in aerial images, and improve accuracy through the combination of multiple features. Aerial imagery usually adopts top-view perspective imaging, thus there is little occlusion between objects (Figure~\ref{OcclusionProblem}). One of the disadvantages of key point-based object detection algorithms is that they are not robust when encountering occlusion, while their advantage lies in better positioning accuracy. For these reasons, key-point detection techniques applied to aerial imagery can play to their strengths while avoiding their weaknesses. 
By combining rectangular bounding box detection, center detection, corner detection, and classification, the interference of complex background information can be suppressed, and the positioning accuracy of objects on complex background data can be improved. Finally, high-precision aerial image object detection can be achieved.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{Definitions/RelationRS.png}
\end{center}
\caption{The proposed framework, namely, the relationship representation network for object detection in aerial images (RelationRS), is made up of three components: the baseline network, which is made up of the fully convolutional single-stage detector with no anchor setting (FCOS) \cite{Fcos} and the feature pyramid network (FPN) \cite{FPN}; the dual relationship module, which learns the potential relationship between different scales of the objects and the potential relationship between different scenes of aerial images in one batch; the bridging visual representation module for aerial image object detection task, which learns the potential relationship between different coordinate representations based on BVR module \cite{BVR}.}
\label{RelationRS}
\end{figure*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Definitions/OcclusionProblem.png}
\end{center}
\caption{Comparison of object occlusion in images under different viewing angles. (a) In natural images, there are serious occlusion problems between different people. (b) In images acquired by drones, due to the overhead perspective, there is almost no mutual occlusion between person and vehicle objects.}
\label{OcclusionProblem}
\end{figure}
\subsection{Dual Relationship Module (DRM)}
\label{sec:DualRelationshipModule}
Aerial images have obvious characteristics of diverse scales and the existence of scene semantics. To deal with the scale changes within and between classes, FPN \cite{FPN}, PANet \cite{PANet}, NAS-FPN \cite{Nas-fpn}, MnasFPN \cite{Mnasfpn}, BiFPN \cite{BiFPN}, OcSaFPN \cite{OcSaFPN}, etc. have all been proposed. These methods effectively improve the multi-scale object detection problem. However, the structures and weights of these methods are fixed in the inference stage and will not change according to the input data. As shown in Figure~\ref{MultipleScenes}, for different aerial image patches, the semantic information of the scene contained in it is different, and the object types and scales in the two scenes are also quite different. Based on the above reasons and inspired by CondInst \cite{CondInst}, the dual relationship module is designed to learn scale changes from multi-scale information, implicitly extract the connections and differences between the scenes contained in different patches in the one batch. In addition, the neural network parameters of the multi-scale information fusion module are dynamically generated to guide the fusion of multi-scale features with semantic information of the input data.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Definitions/MultipleScenes.png}
\end{center}
\caption{Sample aerial image patches with different scenes. The sizes of patches are all $1024 \times 1024$. Different patches have different scene semantics, forming a potential semantic contrast with each other. There are also intra-class and inter-class scale differences for objects between different scenarios.}
\label{MultipleScenes}
\end{figure}
Figure~\ref{DualRelationshipModule} shows the construction of the dual relationship module. Taking $batchsize=2$ as an example, the four scale feature maps extracted from the backbone network are marked as ${C_{2}, C_{3}, C_{4}, C_{5}}$. Then after taking ${C_{2}, C_{3}, C_{4}, C_{5}}$ as input, the feature maps output by FPN \cite{FPN} can be marked as ${P_{2}, P_{3}, P_{4}, P_{5}, P_{6}}$. Among them, $P_{6}$ is generated by $P_{5}$ with the maximum pooling operation. Inspired by CondInst, the key point of the dual relationship module is the generation of $P_{2}^{\prime}$ feature maps, and the generation process of $P_{2}^{\prime}$ can be described by Equation~(\ref{eq:DualRelationshipModule}):
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Definitions/DualRelationshipModule.png}
\end{center}
\caption{The construction process for the dual relationship module.}
\label{DualRelationshipModule}
\end{figure}
\begin{equation}
P_{2}^{\prime} = Conv2(Conv1(concate(resize(P_{2}, P_{3}, P_{4}, P_{5})))),
\label{eq:DualRelationshipModule}
\end{equation}
where $Conv1(\cdot)$ and $Conv2(\cdot)$ denote two convolution operations with kernel size [1, 1], and $concate(\cdot)$ denotes concatenation. The parameters of $Conv1(\cdot)$ and $Conv2(\cdot)$ are obtained by CondConv \cite{Condconv}, and the process can be expressed by Equation~(\ref{eq:CondConv}):
\begin{equation}
w_{1},w_{2},b_{1},b_{2} = CondConv(FO(C_{2}, C_{3}, C_{4}, C_{5})),
\label{eq:CondConv}
\end{equation}
where $CondConv(\cdot)$ denotes the conditional convolution module from CondConv \cite{Condconv}, $FO(\cdot)$ denotes the fusion operation seen in Figure~\ref{DualRelationshipModule} and can be built by Equation~(\ref{eq:FusionOperation}):
\begin{equation}
FO(C_{2}, C_{3}, C_{4}, C_{5}) = Conv_{3 \times 3}(concate(resize(C_{2}, C_{3}, C_{4}, C_{5}))),
\label{eq:FusionOperation}
\end{equation}
where $Conv_{3 \times 3}(\cdot)$ denotes the convolution operation with kernel size [3, 3].
Based on the above formulas, ${C_{2}, C_{3}, C_{4}, C_{5}}$ are resized and concatenated in the channel dimension. In order to obtain $w_{1},w_{2},b_{1},b_{2}$, the fused feature map first passes through a convolutional layer with kernel size [3, 3] for feature alignment and channel dimensionality reduction. Then, the dimensionality-reduced feature map is sent to the CondConv module to generate the required weights and bias values. To handle the case where the batch size is not equal to 1, we treat the different patches in a batch as different experts. This is different from the approach in CondInst. Finally, we generate the parameters to initialize the two convolutional layers for the fusion of ${P_{2}, P_{3}, P_{4}, P_{5}}$, and output the required feature map $P_{2}^{\prime}$. In this way, we get a series of multi-scale feature maps ${P_{2}^{\prime}, P_{2}, P_{3}, P_{4}, P_{5}, P_{6}}$ and send them to the head network. To reduce the number of parameters ($w_{1},w_{2},b_{1},b_{2}$), the group convolution mechanism from AlexNet \cite{AlexNet} is used in $Conv1(\cdot)$ and $Conv2(\cdot)$.
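A simplified numpy sketch of the data flow of the equations above is given below. The CondConv weight generator is replaced here by a global-average-pooled descriptor passed through a fixed random linear map, the group-convolution trick is omitted, and all shapes and channel counts are illustrative, so this is only a sketch of the mechanism, not the trained module:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w, b):
    """1x1 convolution on a (C_in, H, W) map: a per-pixel linear map."""
    return np.einsum('oc,chw->ohw', w, x) + b[:, None, None]

def resize_to(x, hw):
    """Nearest-neighbour resize of a (C, H, W) map to spatial size hw."""
    h, w = hw
    ri = np.arange(h) * x.shape[1] // h
    ci = np.arange(w) * x.shape[2] // w
    return x[:, ri][:, :, ci]

def dual_relationship(p_feats, c_feats, c_mid=8):
    """Weights for two 1x1 convs are generated from the backbone features
    C and applied to the fused FPN features P to yield P2'."""
    hw = p_feats[0].shape[1:]
    # fuse backbone maps, then global-average-pool to a descriptor vector
    fused_c = np.concatenate([resize_to(c, hw) for c in c_feats], axis=0)
    descriptor = fused_c.mean(axis=(1, 2))
    # stand-in for CondConv: a fixed random linear map from the descriptor
    cin = sum(p.shape[0] for p in p_feats)
    n1, n2 = c_mid * cin + c_mid, c_mid * c_mid + c_mid
    params = np.tanh(rng.standard_normal((n1 + n2, descriptor.size)) @ descriptor)
    w1 = params[:c_mid * cin].reshape(c_mid, cin)
    b1 = params[c_mid * cin:n1]
    w2 = params[n1:n1 + c_mid * c_mid].reshape(c_mid, c_mid)
    b2 = params[n1 + c_mid * c_mid:]
    # fuse the FPN maps and apply the two dynamically generated 1x1 convs
    fused_p = np.concatenate([resize_to(p, hw) for p in p_feats], axis=0)
    return conv1x1(conv1x1(fused_p, w1, b1), w2, b2)

p = [rng.standard_normal((4, 16 // 2**i, 16 // 2**i)) for i in range(3)]
c = [rng.standard_normal((4, 16 // 2**i, 16 // 2**i)) for i in range(3)]
p2_prime = dual_relationship(p, c)
```

The important point is that `w1, b1, w2, b2` change with every input batch, whereas an ordinary FPN fusion would use the same fixed weights for all images.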
It is worth noting that we treat two patches in a batch as experts (Figure~\ref{CondConv}), and then generate weight parameters. These parameters potentially obtain the relationships and differences between two scenarios, thus the network can dynamically extract features based on the input data. This is different from the idea of using semantic extraction branches in CAD-Net \cite{CAD-Net} and GLS-Net \cite{GLSNet} to extract the semantics of a single patch scene.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Definitions/CondConv.png}
\end{center}
\caption{CondConv \cite{Condconv} layer architecture with $n=2$ kernels. CondConv: $({\alpha}_{1}W_{1}+{\alpha}_{2}W_{2})\times x$.}
\label{CondConv}
\end{figure}
\subsection{Bridging Visual Representations for Object Detection in Aerial Images}
\label{sec:BridgingVisualRepresentationsforObjectDetectioninAerialImages}
Generally speaking, different representation methods usually lead detectors to perform well in different respects. According to the structures of current object detection networks, a two-stage detector can usually obtain a more accurate object category prediction; a detection method based on center points can improve the detection accuracy of small objects; and a corner-based method reduces the characterization dimension of the bounding boxes, which gives it an advantage in localization tasks \cite{BVR}. As shown in Figure~\ref{OcclusionProblem}, given the characteristic that objects in aerial images are rarely occluded, we believe that introducing a key-point detection algorithm based on FCOS can suppress the influence of complex backgrounds and improve detection accuracy. Based on this point of view, the BVR module is introduced into the aerial image object detection task. Through the combination of multiple representation methods, the detection accuracy on aerial images with complex backgrounds can be improved.
For a detector, the main idea of the BVR module is to regard its main representation as the master representation; the other, auxiliary representations are then adopted to enhance the master representation through a transformer module \cite{Transformer}. That is, the transformer mechanism is used to bridge different visual representations.
As for the anchor-free algorithm (FCOS), the center point locations and the corresponding features are regarded as the master representation and the query input at the same time. Compared with the standard FCOS head network, the BVR module constructs an additional point head network (Figure~\ref{BVR_FCOS}). The point head network consists of two shared convolutional layers with kernel size [3, 3], followed by two independent sub-networks to predict the scores and sub-pixel offsets for center and corner prediction \cite{BVR}. The representations produced by the point head network are regarded as the auxiliary representation and as the keys in the transformer algorithm. To reduce the amount of calculation, a top-$k$ key selection strategy is adopted to keep the set of keys no larger than $k$ (default $k=50$), according to their corner-ness scores. Besides, the cosine/sine location embedding algorithm is used to reduce the complexity of coordinate representations. Here, according to the characteristics of aerial images, the maximum number of key points is set to 400.
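The top-$k$ key selection by corner-ness score and the cosine/sine location embedding can be sketched as follows (a simplified, hypothetical illustration; the embedding dimension and temperature are placeholders, not the exact BVR settings):

```python
import math

def topk_keys(keys, scores, k=50):
    """Keep at most k keys, ranked by corner-ness score (simplified
    version of the BVR key-selection step)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [keys[i] for i in order[:k]]

def location_embedding(coord, dim=8, temperature=1000.0):
    """Cosine/sine embedding of one scalar coordinate (illustrative
    dimension and temperature)."""
    emb = []
    for i in range(dim // 2):
        freq = temperature ** (2 * i / dim)
        emb.append(math.sin(coord / freq))
        emb.append(math.cos(coord / freq))
    return emb
```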
Based on the above settings of the queries and the keys, the enhanced features $f_{i}^{\prime q}$ can be calculated by Equation~(\ref{eq:attention}):
\begin{equation}
f_{i}^{\prime q} = f_{i}^{q} + \sum_{j}S(f_{i}^{q},f_{j}^{k},g_{i}^{q},g_{j}^{k})\cdot T_{v}(f_{j}^{k}),
\label{eq:attention}
\end{equation}
where $f_{i}^{q}$ and $g_{i}^{q}$ are the input feature and geometric vector for a $query$ instance $i$; $f_{j}^{k}$ and $g_{j}^{k}$ are the input feature and geometric vector for a $key$ instance $j$; $T_{v}(\cdot)$ is a linear $value$ transformation function; and $S(\cdot)$ is a similarity function between $i$ and $j$ \cite{BVR}. $S(\cdot)$ can be described as Equation~(\ref{eq:similarityfunction}):
\begin{equation}
S(f_{i}^{q},f_{j}^{k},g_{i}^{q},g_{j}^{k}) = {softmax}_{j}(S^{A}(f_{i}^{q},f_{j}^{k})+S^{G}(g_{i}^{q},g_{j}^{k})),
\label{eq:similarityfunction}
\end{equation}
where $S^{A}(f_{i}^{q},f_{j}^{k})$ denotes the appearance similarity, computed by a scaled dot product between $query$ and $key$ features, and $S^{G}(g_{i}^{q},g_{j}^{k})$ denotes a geometric term, computed by the cosine/sine location embedding-based method \cite{BVR}.
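For scalar features, Equations~(\ref{eq:attention}) and (\ref{eq:similarityfunction}) reduce to a softmax-weighted sum over the keys; the following sketch (with made-up similarity values and an identity $T_{v}$ in the usage below) illustrates the computation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def bvr_attention(f_q, f_keys, sim_appearance, sim_geometric, t_v):
    """Scalar version of the attention update:
    f'_q = f_q + sum_j S_j * T_v(f_k_j),  S_j = softmax_j(S^A_j + S^G_j)."""
    logits = [a + g for a, g in zip(sim_appearance, sim_geometric)]
    weights = softmax(logits)
    return f_q + sum(w * t_v(fk) for w, fk in zip(weights, f_keys))
```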
The BVR module based on FCOS can be seen in Figure~\ref{BVR_FCOS}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Definitions/BVR_FCOS.png}
\end{center}
\caption{The construction process for the BVR module based on FCOS.}
\label{BVR_FCOS}
\end{figure}
\section{Experiments and Results}
\label{sec:ExperimentsandResults}
To prove the effectiveness of the proposed method, the widely used ``A Large-Scale Dataset for Object Detection in Aerial Images'' (DOTA) \cite{DOTA} dataset was used in the experiments for the object detection task in aerial images. In this section, the DOTA1.0 dataset, the implementation details, and the ablation studies conducted with the proposed method are introduced in detail, in order.
\subsection{Dataset}
\label{sec:Dataset}
\textbf{DOTA1.0}. The DOTA1.0 dataset is one of the largest published open-access datasets for object detection in aerial images. The dataset consists of 2806 large-size aerial images from Google Earth and from satellites including Jilin-1 (JL-1) and Gaofen-2 (GF-2). The dataset contains 188,282 annotated bounding boxes in 15 categories, including plane, baseball diamond (BD), bridge, ground track field (GTF), small vehicle (SV), large vehicle (LV), ship, tennis court (TC), basketball court (BC), storage tank (ST), soccer-ball field (SBF), roundabout (RA), harbor, swimming pool (SP), and helicopter (HC). Most of the ground sampling distance (GSD) values of the images in the DOTA1.0 dataset are better than 1 meter. The DOTA1.0 dataset has the characteristics of diverse scenarios, categories, scales, etc. It is still challenging to achieve high-precision object detection with this dataset.
In the experiments, the training set and validation set of DOTA1.0 were used in the training stage, and the test set was used for the inference and evaluation stages. The original images were all cropped to a size of $1024 \times 1024$, overlapping by 500 pixels. If the size of a patch was less than $1024 \times 1024$, zero padding was adopted for completion. Based on the above settings, we obtained a total of 38,504 patches for the training stage and a total of 20,012 patches for the evaluation task. Since the ground truth files of the test set of the DOTA1.0 dataset are not disclosed, we submitted the final test results in '.txt' format to the online evaluation website (\url{https://captain-whu.github.io/DOTA/evaluation.html}).
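With $1024$-pixel windows overlapping by 500 pixels, the cropping stride is $1024-500=524$ pixels along each axis; the window origins can be sketched as follows (an illustrative helper, not the exact cropping script used):

```python
def patch_origins(length, patch=1024, overlap=500):
    """Top-left origins of the sliding crop windows along one image axis.
    The final window may run past the edge; that part is zero-padded."""
    stride = patch - overlap  # 524 pixels with the settings above
    origins = [0]
    while origins[-1] + patch < length:
        origins.append(origins[-1] + stride)
    return origins
```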
\subsection{Evaluation Metrics}
\label{sec:EvaluationMetrics}
For quantitative accuracy evaluation, the mean average precision (mAP) is used in this paper. The mAP describes the mean value of the average precision (AP) values for multiple categories in a dataset. For a certain category, the AP value is the area enclosed by the coordinate axis and the broken line in the corresponding precision-recall graph. The larger the area, the higher its AP value. The details of the evaluation follow the official DOTA1.0 evaluation website (\url{https://captain-whu.github.io/DOTA/evaluation.html}).
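As a minimal numeric illustration of these definitions (the precision-recall points below are invented), the AP of one class can be computed as the area under the precision-recall broken line, and the mAP as the mean over classes:

```python
def average_precision(recalls, precisions):
    """AP of one class: area under the precision-recall broken line,
    integrated here with rectangles over the recall increments."""
    ap, prev_recall = 0.0, 0.0
    for recall, precision in zip(recalls, precisions):
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

def mean_average_precision(per_class_ap):
    """mAP: the mean of the AP values over all categories."""
    return sum(per_class_ap) / len(per_class_ap)
```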
\subsection{Implementation Details}
\label{sec:ImplementationDetails}
For the realization of RelationRS, we built the baseline network based on FCOS \cite{Fcos} with FPN \cite{FPN}. ResNet50 \cite{ResNet} pretrained on ImageNet \cite{ImageNet Classification} was adopted as the backbone network. A series of experiments was designed to better evaluate the effects of the dual relationship module and the bridging visual representations module for aerial image object detection. The environment was a single NVIDIA Tesla V100 GPU with 16 GB of memory, with PyTorch 1.8.0 and Python 3.7.10. The initial learning rate was 0.0025, the batch size of the input data was $2$, the momentum was 0.9, the weight decay was 0.0001, and minibatch stochastic gradient descent (SGD) was used for optimization. The project was built on mmdetection v2.7.0 \cite{MMDetection}.
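The stated optimizer settings correspond to the standard SGD update with momentum and weight decay; a scalar sketch of one update step with these hyperparameters, in one common formulation (the one PyTorch uses with zero dampening), is:

```python
def sgd_step(weight, grad, velocity,
             lr=0.0025, momentum=0.9, weight_decay=0.0001):
    """One scalar SGD update with momentum and (L2) weight decay:
    g = grad + wd * w;  v = mu * v + g;  w = w - lr * v."""
    g = grad + weight_decay * weight
    velocity = momentum * velocity + g
    return weight - lr * velocity, velocity
```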
\subsection{Ablation Experiments}
\label{sec:AblationExperiments}
Two ablation experiments were conducted to further discuss the influence of the dual relationship module and the bridging visual representations module. Here, the abbreviations used in the DOTA dataset are listed again: plane, baseball diamond (BD), bridge, ground track field (GTF), small vehicle (SV), large vehicle (LV), ship, tennis court (TC), basketball court (BC), storage tank (ST), soccer-ball field (SBF), roundabout (RA), harbor, swimming pool (SP), and helicopter (HC).
\subsubsection{Dual Relationship Module}
\label{sec:AblationDualRelationshipModule}
We conducted an ablation experiment to verify the effectiveness of the proposed dual relationship module. The baseline is the FCOS algorithm, and +DRM denotes the combination of FCOS and the dual relationship module. The only difference between the baseline and +DRM is whether the dual relationship module is additionally used; all other parameters used in the experiment are strictly kept consistent.
From Table~\ref{tb:plusDRM}, +DRM obtains a $mAP$ of $65.63\%$, which is $1.38\%$ higher than the $mAP$ value of the baseline ($64.25\%$). Of the 15 categories in the DOTA dataset, the baseline method only has advantages in three, plane, soccer-ball field (SBF) and basketball court (BC), which are $0.17\%$, $4.21\%$ and $5.4\%$ higher than the values of +DRM. This indicates that the performance of +DRM is not stable enough for objects with relatively large scales. For small objects, +DRM achieves better accuracy in multiple categories: the $AP$ values of +DRM for small vehicle (SV), large vehicle (LV), ship, storage tank (ST), and helicopter (HC) outperform those of the baseline by $1.66\%$, $3.07\%$, $4.97\%$, $6.48\%$, and $6.46\%$. In addition, the $AP$ values of +DRM for baseball diamond (BD), bridge, ground track field (GTF), roundabout (RA), harbor and swimming pool (SP) are also higher than those of the baseline. Therefore, the +DRM method can effectively improve the detection accuracy of small objects.
\begin{table*}[]
\caption{Detection accuracy in the ablation study of using DRM or not with DOTA test dataset. The bold numbers denote the highest values in each class.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \textbf{Plane}& \textbf{BD}& \textbf{Bridge}& \textbf{GTF}& \textbf{SV}& \textbf{LV}& \textbf{Ship}& \textbf{TC} & \multirow{2}{*}{\textbf{mAP(\%)}}\\
\cline{2-9}
~ & \textbf{BC} & \textbf{ST}& \textbf{SBF}& \textbf{RA}& \textbf{Harbor}& \textbf{SP}& \textbf{HC}\\
\midrule
\multirow{2}*{baseline} & \textbf{88.12}& 70.77& 44.04& 47.46& 76.36& 65.34& 77.96& 90.83&\multirow{2}{*}{64.25}\\
\cline{2-9}
~ & \textbf{74.31}& 78.37& \textbf{48.3}& 52.62& 72.25& 42.77& 34.27\\ \hline
\multirow{2}*{+DRM} & 87.95& \textbf{71.66}& \textbf{44.1}& \textbf{52.48}& \textbf{78.02}& \textbf{68.41}& \textbf{82.93}& 90.83&\multirow{2}{*}{\textbf{65.63}}\\
\cline{2-9}
~ & 68.91& \textbf{84.85}& 44.09& \textbf{53.6}& \textbf{72.44}& \textbf{43.42}& \textbf{40.73}\\ \hline
\bottomrule
\end{tabular}}
\label{tb:plusDRM}
\end{table*}
\subsubsection{Bridging Visual Representations for Object Detection in Aerial Images}
\label{sec:AblationBridgingVisualRepresentationsforObjectDetectioninAerialImages}
To evaluate the efficiency of the bridging visual representations module on aerial images, an ablation experiment was designed to compare the BVR module with the baseline on the DOTA test dataset. In Table~\ref{tb:plusBVR}, the baseline is the same FCOS algorithm as in Section~\ref{sec:AblationDualRelationshipModule}, and +BVR is the combination of the baseline method and the bridging visual representations module.
For the 10 categories of plane, baseball diamond (BD), bridge, ground track field (GTF), small vehicle (SV), large vehicle (LV), ship, storage tank (ST), swimming pool (SP) and helicopter (HC), the $AP$ values of +BVR are higher than those of the baseline by $1.2\%$, $1.98\%$, $1.17\%$, $5.55\%$, $2\%$, $0.31\%$, $0.78\%$, $1.9\%$, $5.01\%$ and $9.8\%$, respectively. Thus, +BVR achieves a $mAP$ value that is $1.67\%$ higher. This increase in $mAP$ proves the effectiveness of the BVR method in the field of aerial image detection.
\begin{table*}[]
\caption{Detection accuracy in the ablation study of using BVR module or not with DOTA test dataset. The bold numbers denote the highest values in each class.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \textbf{Plane}& \textbf{BD}& \textbf{Bridge}& \textbf{GTF}& \textbf{SV}& \textbf{LV}& \textbf{Ship}& \textbf{TC} & \multirow{2}{*}{\textbf{mAP(\%)}}\\
\cline{2-9}
~ & \textbf{BC} & \textbf{ST}& \textbf{SBF}& \textbf{RA}& \textbf{Harbor}& \textbf{SP}& \textbf{HC}\\
\midrule
\multirow{2}*{baseline} & 88.12& 70.77& 44.04& 47.46& 76.36& 65.34& 77.96& 90.83&\multirow{2}{*}{64.25}\\
\cline{2-9}
~ & \textbf{74.31}& 78.37& \textbf{48.3}& \textbf{52.62}& \textbf{72.25}& 42.77& 34.27\\ \hline
\multirow{2}*{+BVR} & \textbf{89.32}& \textbf{72.75}& \textbf{45.21}& \textbf{53.01}& \textbf{78.36}& \textbf{65.65}& \textbf{78.74}& 90.83&\multirow{2}{*}{\textbf{65.92}}\\
\cline{2-9}
~ & 72.88& \textbf{80.27}& 45.42& 52.36& 72.12& \textbf{47.78}& \textbf{44.07}\\ \hline
\bottomrule
\end{tabular}}
\label{tb:plusBVR}
\end{table*}
\subsection{Comparison with the State-of-the-Art}
\label{sec:ComparisonwiththeState-of-the-Art}
To examine and evaluate the performance of the proposed framework RelationRS, it is compared with state-of-the-art algorithms on the DOTA test dataset. Table~\ref{tb:RelationRS} shows the AP and mAP values of the different algorithms.
\begin{table*}[]
\caption{Comparisons with the state-of-the-art single-stage-based detectors in the DOTA1.0 test dataset with horizontal bounding boxes. The baseline is the FCOS algorithm. +DRM means the combination of the FCOS and the dual relationship module. +BVR is a combination of the baseline method and the bridging visual representations module. And the RelationRS is a network adding the DRM and the BVR module to the baseline detector. The \textcolor[RGB]{255,0,0}{red} numbers and the \textcolor[RGB]{0,0,255}{blue} numbers denote the highest values and the second highest values in each class.}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \textbf{Plane}& \textbf{BD}& \textbf{Bridge}& \textbf{GTF}& \textbf{SV}& \textbf{LV}& \textbf{Ship}& \textbf{TC} & \multirow{2}{*}{\textbf{mAP(\%)}}\\
\cline{2-9}
~ & \textbf{BC} & \textbf{ST}& \textbf{SBF}& \textbf{RA}& \textbf{Harbor}& \textbf{SP}& \textbf{HC}\\
\midrule
\multirow{2}*{YOLOv3-tiny \cite{YOLOv3}} & 61.48& 24.35& 4.3& 15.49& 20.27& 30.22& 26.96& 72&\multirow{2}{*}{25.73}\\
\cline{2-9}
~ & 26.21& 22.91& 14.05& 7.27& 28.78& 27.07& 4.55\\ \hline
\multirow{2}*{SSD \cite{SSD}} & 57.85& 32.79& 16.14& 18.67& 0.05& 36.93& 24.74& 81.16&\multirow{2}{*}{29.86}\\
\cline{2-9}
~ & 25.1& 47.47& 11.22& 31.53& 14.12& 9.09& 0\\ \hline
\multirow{2}*{YOLOv2 \cite{YOLOv2}} & 76.9& 33.87& 22.73& 34.88& 38.73& 32.02& 52.37& 61.65&\multirow{2}{*}{39.2}\\
\cline{2-9}
~ & 48.54& 33.91& 29.27& 36.83& 36.44& 38.26& 11.61\\ \hline
\multirow{2}*{RetinaNet \cite{Focalloss}} & 78.22& 53.41& 26.38& 42.27& 63.64& 52.63& 73.19& 87.17&\multirow{2}{*}{50.39}\\
\cline{2-9}
~ & 44.64& 57.99& 18.03& 51& 43.39& \textcolor[RGB]{0,0,255}{56.56}& 7.44\\ \hline
\multirow{2}*{YOLOv3 \cite{YOLOv3}} & 79& \textcolor[RGB]{0,0,255}{77.1}& 33.9& \textcolor[RGB]{0,0,255}{68.1}& 52.8& 52.2& 49.8& 89.9&\multirow{2}{*}{60}\\
\cline{2-9}
~ & \textcolor[RGB]{255,0,0}{74.8}& 59.2& \textcolor[RGB]{255,0,0}{55.5}& 49& 61.5& 55.9& 41.7\\ \hline
\multirow{2}*{SBL \cite{SBL}} & \textcolor[RGB]{0,0,255}{89.15}& 66.04& \textcolor[RGB]{0,0,255}{46.79}& 52.56& 73.06& 66.13& 78.66& \textcolor[RGB]{255,0,0}{90.85}&\multirow{2}{*}{64.77}\\
\cline{2-9}
~ & 67.4& 72.22& 39.88& \textcolor[RGB]{0,0,255}{56.89}& 69.58& \textcolor[RGB]{255,0,0}{67.73}& 34.74\\ \hline
\multirow{2}*{$SFFM^{d}$ \cite{SFFM}} & 88.1& \textcolor[RGB]{255,0,0}{82.4}& \textcolor[RGB]{255,0,0}{47.7}& \textcolor[RGB]{255,0,0}{72.9}& 45.9& \textcolor[RGB]{255,0,0}{73.5}& 64.4& 90.4&\multirow{2}{*}{\textcolor[RGB]{0,0,255}{66.3}}\\
\cline{2-9}
~ & 66.7& 50.1& \textcolor[RGB]{0,0,255}{54}& \textcolor[RGB]{255,0,0}{60.1}& \textcolor[RGB]{255,0,0}{77.8}& 51.7& \textcolor[RGB]{255,0,0}{69.5}\\ \hline
\multirow{2}*{baseline \cite{Fcos}} & 88.12& 70.77& 44.04& 47.46& 76.36& 65.34& 77.96& \textcolor[RGB]{0,0,255}{90.83}&\multirow{2}{*}{64.25}\\
\cline{2-9}
~ & \textcolor[RGB]{0,0,255}{74.31}& 78.37& 48.3& 52.62& 72.25& 42.77& 34.27\\ \hline
\multirow{2}*{+DRM} & 87.95& 71.66& 44.1& 52.48& 78.02& 68.41& \textcolor[RGB]{255,0,0}{82.93}& \textcolor[RGB]{0,0,255}{90.83}&\multirow{2}{*}{65.63}\\
\cline{2-9}
~ & 68.91& \textcolor[RGB]{255,0,0}{84.85}& 44.09& 53.6& 72.44& 43.42& 40.73\\ \hline
\multirow{2}*{+BVR} & \textcolor[RGB]{255,0,0}{89.32}& 72.75& 45.21& 53.01& \textcolor[RGB]{0,0,255}{78.36}& 65.65& 78.74& \textcolor[RGB]{0,0,255}{90.83}&\multirow{2}{*}{65.92}\\
\cline{2-9}
~ & 72.88& 80.27& 45.42& 52.36& 72.12& 47.78& 44.07\\ \hline
\multirow{2}*{RelationRS} & 88.27& 72.96& 45.47& 53.7& \textcolor[RGB]{255,0,0}{79.73}& \textcolor[RGB]{0,0,255}{70.98}& \textcolor[RGB]{0,0,255}{82.38}& \textcolor[RGB]{0,0,255}{90.83}&\multirow{2}{*}{\textcolor[RGB]{255,0,0}{66.81}}\\
\cline{2-9}
~ & 69.86& \textcolor[RGB]{0,0,255}{83.29}& 45.26& 54.61& \textcolor[RGB]{0,0,255}{72.79}& 47.85& \textcolor[RGB]{0,0,255}{44.18}\\ \hline
\bottomrule
\end{tabular}}
\label{tb:RelationRS}
\end{table*}
As shown in Table~\ref{tb:RelationRS}, the proposed RelationRS achieves the highest mAP value, outperforming YOLOv3-tiny \cite{YOLOv3}, SSD \cite{SSD}, YOLOv2 \cite{YOLOv2}, RetinaNet \cite{Focalloss}, YOLOv3 \cite{YOLOv3}, SBL \cite{SBL}, $SFFM^{d}$ \cite{SFFM}, and FCOS (baseline) \cite{Fcos} by $41.08\%$, $36.95\%$, $27.61\%$, $16.42\%$, $6.81\%$, $2.04\%$, $0.51\%$, and $2.56\%$, respectively. For the 15 categories of objects, RelationRS obtains the highest AP value on 1 class (small vehicle) and the second highest AP value on 6 classes (large vehicle, ship, tennis court, storage tank, harbor, and helicopter).
RelationRS shows good detection performance for small objects, such as small vehicle (SV), large vehicle (LV), ship, storage tank (ST), and helicopter (HC). This phenomenon is similar to the one discussed in Section~\ref{sec:AblationDualRelationshipModule}. In addition, the BVR module is used in RelationRS to improve the performance of the detector on images with complex backgrounds, so the overall accuracy over multiple categories is improved to a certain extent. Interestingly, $SFFM^{d}$ obtains the highest AP value on 7 classes and the second highest AP value on 1 class, yet its mAP value is slightly lower than that of RelationRS. This indicates that RelationRS performs more stably across the different categories of the DOTA dataset and supports the generalization ability of the detector; that is, it should be easier to extend RelationRS to specific object categories not included in the current large aerial image datasets. The above experiments fully demonstrate the effectiveness of the proposed method.
In conclusion, RelationRS constructs a dual relationship module to guide the fusion of multi-scale features. The DRM learns the comparison information between the scenes of different patches in one batch and dynamically generates the weight parameters for multi-scale fusion through conditional convolution. Furthermore, the bridging visual representations module from natural image object detection is introduced into RelationRS to improve the performance of the detector on aerial images with complex background information. To the best of our knowledge, this is the first time the performance of the BVR module has been demonstrated on aerial images. Some examples of detection results in different scenarios are shown in Figure~\ref{results}.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{Definitions/results.png}
\end{center}
\caption{Object detection results on DOTA1.0 dataset. (a)-(l) are schematic pictures of detection results in different scenarios.}
\label{results}
\end{figure*}
\section{Conclusions}
\label{sec:Conclusions}
In this paper, a single-stage object detector for aerial images, named RelationRS, is proposed. The framework combines a dual relationship module and a bridging visual representations module to address multi-scale fusion and to improve object detection accuracy in aerial images with complex backgrounds. The dual relationship module extracts the relationship between the scenes of different input patches and dynamically generates the weight parameters required for feature map fusion. In addition, based on the characteristic that objects in aerial images rarely occlude each other, and from the viewpoint of introducing a key-point detection algorithm, we demonstrate the effectiveness of the bridging visual representations module in the field of aerial image object detection. The experiments undertaken with the public DOTA1.0 dataset confirm the remarkable performance of the proposed method.
On the other hand, single-stage detectors are still not as accurate as two-stage detectors, and current neural networks still cannot be well explained. Therefore, how to better interpret the features extracted by neural networks, and how to combine them with the imaging parameters of aerial images, are key directions for improving aerial image detection accuracy in the future.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The objective of this article is to present the main ideas of a \textit{c}anonical \textit{f}uzzy \textit{l}ogic (CFL). This logic is a particular case of a non-classical logical formalism -- in the sense that it differs from that of \textit{b}ivalent classical \textit{l}ogic (BL) -- that is susceptible to diverse interpretations each of which is a non-classical logic. In this first treatment of the topic, only one of these interpretations will be considered: precisely that of CFL.
In this article BL will be understood as the mathematical logic in which there are only two possible truth values, true and false, as discussed in outstanding contributions by G. Boole, F. L. G. Frege, C. S. Peirce and B. Russell. This logic can be considered a notable improvement and an extraordinary amplification of Aristotelian logic.
CFL is a parametric type of logic inasmuch as a parameter is used: $w$ -- weight or degree of truth. It is known as canonical because there is a marked continuity with BL: the ``laws'' -- theorems -- of this logic conserve their validity in CFL. BL can be regarded as a particular ``limit'' case of CFL.
For a clear understanding of this article, knowledge of only basic notions of propositional calculus of BL and of classical set theory is required. For an introductory overview of these topics, one may consult, for example: \cite{a1}, \cite{a2}, \cite{a3} and \cite{a4}. On fuzzy logic and fuzzy set theory, one may consult, for example: \cite{b1}, \cite{b2}, \cite{c1} and \cite{c2}. An interesting view of the development of fuzzy logic and the theory of fuzzy sets, as well as of the prospects of those fields was presented by Lotfi A. Zadeh \cite{c3}.
\section{Some Correspondences between Operations of Propositional Calculus of BL and Operations of Classical Set Theory}
Variables which can be replaced by propositions are known as propositional variables. Letters such as $p$, $q$, $r$, $\ldots$ are commonly used to refer to different propositional variables. In future work on the non-classical logical formalism mentioned above, the letter $p$ will be used as an abbreviation for \textit{p}robability. Thus, to prevent confusion, in this article reference will be made to the different propositional variables as follows: $q_1$, $q_2$, $q_3$, $\ldots$.
Strictly speaking, attributing a truth value to a propositional variable is not admissible. It only makes sense to attribute a truth value -- either true or false -- to a proposition. However, in line with a license commonly found in specialized literature, expressions like ``Consider that $q_1$ has been replaced by a true proposition and $q_2$ has been replaced by a false proposition'' will be communicated in an abbreviated way such as ``$q_1$ is true and $q_2$ is false''.
In logical expressions in which only the symbol corresponding to a sole proposition (and eventually its negation) is present, the subscript can be eliminated and the proposition can be symbolized simply as $q$.
Different sets will be symbolized as $C_1$, $C_2$, $C_3$, $\ldots$. In expressions in which reference is made to only one of those sets, and eventually to its complement, the subscript can be eliminated and the set can be symbolized simply as $C$.
Two very theoretically important sets, the universal set and its complement -- the empty set -- will be symbolized respectively as follows: $\mathbb U$ and $\varnothing$.
With no exceptions, all the elements to be considered when using set theory for a specific topic belong to the universal set $\mathbb U$, also called universe of discourse or simply universe. Any element belonging to the universal set $\mathbb U$ does not belong to the corresponding empty set $\varnothing$. Therefore, in classical set theory, no element belongs to $\varnothing$.
Some of the symbols used in set theory to express different relationships between sets are those of equality ($=$), inclusion in a broad sense ($\subseteq$), and inclusion in a narrow sense ($\subset$).
$C_1 = C_2$ is read as: ``The set $C_1$ is equal to the set $C_2$''. The meaning of $C_1 = C_2$ is as follows: The same elements of a given universal set $\mathbb U$ that belong to $C_1$ belong to $C_2$. In other words, if any element whatsoever of a given universal set $\mathbb U$ belongs to $C_1$, then it also belongs to $C_2$, and if any element whatsoever of a given universal set $\mathbb U$ does not belong to $C_1$, then it does not belong to $C_2$ either.
$C_1 \subseteq C_2$ is read as: ``$C_1$ is included, in a broad sense, in $C_2$''. The meaning of $C_1 \subseteq C_2$ is as follows: Any element of a given universal set $\mathbb U$ which belongs to $C_1$ also belongs to $C_2$, and the possibility that $C_1$ may be equal to $C_2$ ($C_1 = C_2$) is not excluded.
$C_1 \subset C_2$ is read as: ``$C_1$ is included, in a narrow sense, in $C_2$''. The meaning of $C_1 \subset C_2$ is as follows: Any element of a given universal set $\mathbb U$ which belongs to $C_1$ also belongs to $C_2$, and there is at least one element in that universal set $\mathbb U$ which belongs to $C_2$ but does not belong to $C_1$. Thus the possibility that $C_1$ is equal to $C_2$ is excluded.
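These three relationships can be checked mechanically for finite sets; the following sketch restates the definitions above in executable form:

```python
def is_equal(c1, c2):
    """C1 = C2: exactly the same elements of U belong to both sets."""
    return set(c1) == set(c2)

def included_broad(c1, c2):
    """C1 ⊆ C2: every element of C1 belongs to C2 (C1 = C2 is allowed)."""
    return set(c1).issubset(c2)

def included_narrow(c1, c2):
    """C1 ⊂ C2: C1 ⊆ C2 and some element of C2 does not belong to C1."""
    return set(c1).issubset(c2) and set(c1) != set(c2)
```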
The logical connective of \textit{negation} of a proposition is usually symbolized as $\neg$. What many authors express with $\neg q$ (not $q$) will be expressed in this article by a horizontal bar above the negated proposition as $\overline{q}$.
The operator of complementation of a set $C$ is usually represented by a bar above the symbol $C$: $\overline{C}$. In this article a slightly different symbol will be used to represent that complement. In this way, confusion between the logical connective of negation in propositional calculus and the operator of complementation used in set theory can be prevented. A proposition -- that is, a statement also known as a ``sentence'' at times -- may be negated. However, according to the usual notion of a set, it would be senseless to negate a set. Therefore, the set $\xvec{C}$ will be called the complement set of $C$. Recall that all the elements belonging to the universal set $\mathbb U$ considered that do not belong to $C$ belong to $\xvec{C}$.
The truth table (a) corresponding to the negation of $q$ -- that is, $\overline{q}$ -- and the membership table (b) corresponding to the complementary set $\xvec{C}$ of any set $C$ are shown in figure \ref{f1}.
\begin{figure}[H]
\centering
\subfloat[Truth table corresponding to the negation of $q$; that is, $\overline{q}$.]{
\hspace{2in}
\begin{tabular}{c||c}
$q$ & $\overline{q}$ \\
\midrule
0 & 1 \\
1 & 0 \\
\end{tabular}
\hspace{2in}
\label{f1a}
}
\\
\subfloat[Membership table corresponding to the complementary set of $C$ -- that is, $\xvec{C}$. $C$ is any set such that all the elements belonging to it are elements of the universal set $\mathbb U$ considered above.]{
\hspace{1.9in}
\begin{tabular}{c||c}
$C$ & $\xvec{C}$ \\
\midrule
0 & 1 \\
1 & 0 \\
\end{tabular}
\hspace{2in}
\label{f1b}}%
\caption{a) Truth table corresponding to $\overline{q}$, and b) membership table corresponding to $\xvec{C}$.}
\label{f1}
\end{figure}
In a truth table, a zero (0) in the column corresponding to a proposition -- that is, a truth value equal to 0 -- represents the supposition that the proposition is false. A one (1) in the column corresponding to a proposition -- that is, a truth value equal to 1 -- represents the supposition that the proposition is true.
In the membership table, a zero (0) in the column corresponding to a set -- that is, a membership value equal to 0 -- represents the supposition that any element of the universal set $\mathbb U$ considered does not belong to that set. A one (1) in the column corresponding to a set -- that is, a membership value equal to 1 -- represents the supposition that any element whatsoever of $\mathbb U$ does belong to that set.
The first row of the truth table in figure \ref{f1a}, in which a sequence of digits (0 and 1) appears, should be interpreted as follows: If $q$ is false, then $\overline{q}$ is true. The second row of that truth table in which a sequence of digits (1 and 0) appears, should be interpreted as follows: If $q$ is true, then $\overline{q}$ is false.
The first row of the membership table in figure \ref{f1b}, in which a sequence of digits (0 and 1) appears, should be interpreted as follows: If any element belonging to the $\mathbb U$ considered does not belong to $C$, then it does belong to $\xvec{C}$. The second row of the membership table in figure \ref{f1b}, in which a sequence of digits (1 and 0) appears, should be interpreted as follows: If any element belonging to the $\mathbb U$ considered belongs to $C$, then that element does not belong to $\xvec{C}$.
Note the equality of the first and second rows, respectively, of the tables in figures \ref{f1a} and \ref{f1b}. This makes it possible to establish: 1) a correspondence between the logical connective of negation of propositional calculus and the operator of complementation of set theory; 2) a correspondence between the proposition $q$ and the set $C$; and 3) a correspondence between $\overline{q}$ and the set $\xvec{C}$.
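This correspondence can be made concrete: coding the truth values and the membership values as 0 and 1, both negation and complementation map a value $v$ to $1-v$, so the two tables coincide row by row. A small sketch:

```python
def negation(truth_value):
    """Truth value of the negation: 0 -> 1, 1 -> 0."""
    return 1 - truth_value

def complement_membership(membership_value):
    """Membership value in the complement set: 0 -> 1, 1 -> 0."""
    return 1 - membership_value

# The two tables of figure 1, generated by one and the same rule:
truth_table = [(v, negation(v)) for v in (0, 1)]
membership_table = [(v, complement_membership(v)) for v in (0, 1)]
```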
In propositional calculus, the logical connective of \textit{disjunction}, known as ``or'', is symbolized as $\lor$. The proposition $q_1 \lor q_2$ is read as ``$q_1$ or $q_2$''. In set theory, the operator of the union of sets is symbolized as $\cup$. The set $C_1 \cup C_2$ is the set resulting from the operation of union of the sets $C_1$ and $C_2$. It is admitted that each element belonging to set $C_1$, and each element belonging to set $C_2$ are elements belonging to the universal set $\mathbb U$, which should be specified unambiguously. This is achieved by specifying the elements belonging to $\mathbb U$.
Figure \ref{f2} presents a) the truth table corresponding to $q_1 \lor q_2$, and b) the membership table corresponding to $C_1 \cup C_2$.
\begin{figure}[H]
\centering
\subfloat[Truth table for the disjunction of the propositions $q_1$ and $q_2$: $q_1 \lor q_2$.]{
\hspace{1.6in}
\begin{tabular}{c|c||c}
$q_1$ & $q_2$ & $q_1 \lor q_2$ \\
\midrule
0 & 0 & 0 \\
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2in}
\label{f2a}
}
\\
\subfloat[Membership table for the union of $C_1$ and $C_2$: $C_1 \cup C_2$.]{
\hspace{1.5in}
\begin{tabular}{c|c||c}
$C_1$ & $C_2$ & $C_1 \cup C_2$ \\
\midrule
0 & 0 & 0 \\
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2in}
\label{f2b}}%
\caption{a) Truth table corresponding to $q_1 \lor q_2$; and b) membership table corresponding to $C_1 \cup C_2$.}
\label{f2}
\end{figure}
The sequence of digits 0, 0, 0 in the first rows of the tables in figures \ref{f2a} and \ref{f2b} should be interpreted as follows: In figure \ref{f2a}, that sequence means that if $q_1$ is false, and $q_2$ also is false, then the proposition $q_1 \lor q_2$ is false. In figure \ref{f2b}, that same sequence of digits means that if any element of the $\mathbb U$ considered belongs neither to $C_1$ nor to $C_2$, then that element does not belong to the set $C_1 \cup C_2$.
The sequence of digits 0, 1, 1 in the second rows of the tables in figures \ref{f2a} and \ref{f2b} should be interpreted as follows: In figure \ref{f2a}, that sequence means that if $q_1$ is false, and $q_2$ is true, then the proposition $q_1 \lor q_2$ is true. In figure \ref{f2b}, that same sequence of digits means that if any element of the $\mathbb U$ considered does not belong to $C_1$, and that element does belong to $C_2$, then that element does belong to the set $C_1 \cup C_2$.
The sequence of digits 1, 0, 1 in the third rows of the tables in figures \ref{f2a} and \ref{f2b} should be interpreted as follows: In figure \ref{f2a}, that sequence means that if $q_1$ is true and $q_2$ is false, then the proposition $q_1 \lor q_2$ is true. In figure \ref{f2b}, that same sequence of digits means that if any element of the $\mathbb U$ considered belongs to $C_1$, and that element does not belong to $C_2$, then that element does belong to $C_1 \cup C_2$.
The sequence of digits 1, 1, 1 in the fourth rows of the tables in figures \ref{f2a} and \ref{f2b} should be interpreted as follows: In figure \ref{f2a}, that sequence means that if $q_1$ is true, and $q_2$ is also true, then $q_1 \lor q_2$ is true. In figure \ref{f2b}, that same sequence of digits means that if any element of the $\mathbb U$ considered belongs to $C_1$, and that element also belongs to $C_2$, then that element belongs to $C_1 \cup C_2$.
Observe that only when both the proposition $q_1$ and the proposition $q_2$ are false, the proposition $q_1 \lor q_2$ is false. Also, only when any element of the universal set $\mathbb U$ does not belong to $C_1$ and does not belong to $C_2$, then that element does not belong to $C_1 \cup C_2$.
Note the equality of the first, second, third and fourth rows respectively, of the tables in figures \ref{f2a} and \ref{f2b}. This equality makes it possible to establish the following correspondences: 1) the propositions $q_1$ and $q_2$ correspond, respectively, to the sets $C_1$ and $C_2$; and 2) the logical connective of disjunction $\lor$ of propositional calculus corresponds to the union operator $\cup$ in set theory.
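This row-by-row agreement can be checked mechanically. The following sketch is illustrative and not part of the source; the universe and the subsets are arbitrary sample values.

```python
# Illustrative check (sample data, not from the source): the truth table of
# the disjunction "or" matches the membership table of set union, row by row.

def truth_or(q1, q2):
    """Truth value of q1 OR q2, with 0 meaning false and 1 meaning true."""
    return 1 if (q1 or q2) else 0

U = {1, 2, 3, 4}          # sample universal set (an assumption)
C1, C2 = {1, 2}, {2, 3}   # sample subsets of U

for x in U:
    in_c1 = 1 if x in C1 else 0
    in_c2 = 1 if x in C2 else 0
    in_union = 1 if x in (C1 | C2) else 0
    # membership in C1 ∪ C2 follows the same table as q1 ∨ q2
    assert in_union == truth_or(in_c1, in_c2)
```

The loop exercises all four rows of the tables in figure \ref{f2}, since the sample element set realizes every combination of membership in $C_1$ and $C_2$.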
Detailed explanations like these for each pair of tables presented in figures \ref{f3}, \ref{f4} and \ref{f5} might be tedious to read. Briefer explanations will be provided for each of those pairs of tables, and based on the above information, the missing details can be completed. Also for each pair of tables, correspondences may be established between: 1) the propositions $q_1$ and $q_2$ and the sets $C_1$ and $C_2$, respectively; and 2) a particular logical connective of propositional calculus and a particular operator of set theory.
In propositional calculus, the logical connective of \textit{conjunction} ``and'' is symbolized as $\land$. The proposition $q_1 \land q_2$ is read as ``$q_1$ and $q_2$''. In set theory, the operator of intersection is symbolized as $\cap$. The set $C_1 \cap C_2$ is the set resulting from the operation of intersection between the sets $C_1$ and $C_2$. It is admitted that each element belonging to the set $C_1$ and each element belonging to the set $C_2$ are elements belonging to a universal set $\mathbb U$ which should be specified unambiguously. This is achieved by specifying the elements in $\mathbb U$, that is, the elements belonging to the set $\mathbb U$.
The truth table (a) corresponding to $q_1 \land q_2$, and the membership table (b) corresponding to $C_1 \cap C_2$ are shown in figure \ref{f3}.
\begin{figure}[H]
\centering
\subfloat[Truth table for the conjunction of any two propositions $q_1$ and $q_2$: $q_1 \land q_2$.]{
\hspace{1.6in}
\begin{tabular}{c|c||c}
$q_1$ & $q_2$ & $q_1 \land q_2$ \\
\midrule
0 & 0 & 0 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2in}
\label{f3a}
}
\\
\subfloat[Membership table corresponding to the intersection of $C_1$ and $C_2$: $C_1 \cap C_2$.]{
\hspace{1.5in}
\begin{tabular}{c|c||c}
$C_1$ & $C_2$ & $C_1 \cap C_2$ \\
\midrule
0 & 0 & 0 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2in}
\label{f3b}}%
\caption{a) Truth table corresponding to $q_1 \land q_2$; and b) membership table corresponding to $C_1 \cap C_2$.}
\label{f3}
\end{figure}
In figure \ref{f3a}, it can be seen that only when $q_1$ is true and $q_2$ is also true is their conjunction $q_1 \land q_2$ true. This is the case considered in the fourth row of the corresponding truth table. In the other three possible cases, in the other three rows of that truth table, $q_1 \land q_2$ is false. Likewise, in figure \ref{f3b}, it is seen that only when an element of $\mathbb U$ belongs to $C_1$ and also belongs to $C_2$ does that element belong to $C_1 \cap C_2$. This case is considered in the fourth row of the respective membership table. In the other three cases, in the other three rows of that membership table, that element does not belong to $C_1 \cap C_2$.
The consideration of the tables presented in figure \ref{f3} makes it possible to establish the following correspondences: 1) a correspondence between the propositions $q_1$ and $q_2$ and the sets $C_1$ and $C_2$ respectively; and 2) a correspondence between the logical connective of conjunction $\land$ of propositional calculus, and the operator of intersection $\cap$ in set theory.
In propositional calculus, the logical connective of \textit{material implication} is symbolized as $\to$. The proposition $q_1 \to q_2$, considered as conditional, is read as ``if $q_1$, then $q_2$''. The proposition $q_2 \to q_1$ is read as ``if $q_2$, then $q_1$''. The antecedent of the proposition $q_1 \to q_2$ is $q_1$, and the consequent of that proposition is $q_2$. Likewise, the antecedent of the proposition $q_2 \to q_1$ is $q_2$, and the consequent of that proposition is $q_1$.
In set theory, the operator of the material implication of sets is symbolized as $\naturalto$. The set $C_1 \naturalto C_2$ is generated by applying the operator of material implication to the ordered pair of sets ($C_1$, $C_2$). The set $C_2 \naturalto C_1$ is generated by applying the operator of material implication to the ordered pair of sets ($C_2$, $C_1$).
Figure \ref{f4} presents (a) the truth tables corresponding to the propositions $q_1 \to q_2$ and $q_2 \to q_1$, and (b) the membership tables corresponding to the sets $C_1 \naturalto C_2$ and $C_2 \naturalto C_1$.
\begin{figure}[H]
\centering
\subfloat[Truth table corresponding to the proposition $q_1 \to q_2$ and truth table corresponding to $q_2 \to q_1$.]{
\hspace{1in}
\begin{tabular}{c|c||c|c}
$q_1$ & $q_2$ & $q_1 \to q_2$ & $q_2 \to q_1$ \\
\midrule
0 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 \\
\end{tabular}
\hspace{1in}
\label{f4a}
}
\\
\subfloat[Membership table corresponding to the set $C_1 \naturalto C_2$ and membership table corresponding to the set $C_2 \naturalto C_1$.]{
\hspace{1in}
\begin{tabular}{c|c||c|c}
$C_1$ & $C_2$ & $C_1 \naturalto C_2$ & $C_2 \naturalto C_1$ \\
\midrule
0 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 \\
\end{tabular}
\hspace{1in}
\label{f4b}}%
\caption{a) Truth tables corresponding to the propositions $q_1 \to q_2$ and $q_2 \to q_1$; and b) membership tables corresponding to the sets $C_1 \naturalto C_2$ and $C_2 \naturalto C_1$.}
\label{f4}
\end{figure}
Note that the only case in which the proposition $q_1 \to q_2$ is false is that in which the antecedent $q_1$ is true and the consequent $q_2$ is false. This is the case of the third row in the truth table $q_1 \to q_2$. In the other three possible cases, $q_1 \to q_2$ is true. Likewise, the only case in which the proposition $q_2 \to q_1$ is false is that in which the antecedent $q_2$ is true and the consequent $q_1$ is false. This is the case of the second row of the truth table for the proposition $q_2 \to q_1$. In the other three cases, $q_2 \to q_1$ is true.
In addition, notice also that the only case in which any element of the universal set $\mathbb U$ considered does not belong to the set $C_1 \naturalto C_2$ is that in which that element belongs to $C_1$ and does not belong to $C_2$. This is the case of the third row of the membership table for $C_1 \naturalto C_2$. Likewise, the only case in which any element of the universal set $\mathbb U$ considered does not belong to the set $C_2 \naturalto C_1$ is that in which that element belongs to $C_2$ and does not belong to $C_1$.
The consideration given to the tables in figure \ref{f4} makes it possible to establish the following correspondences: 1) a correspondence between the propositions $q_1$ and $q_2$ and the sets $C_1$ and $C_2$, respectively; and 2) a correspondence between the connective of material implication of propositional calculus and the operator of material implication of set theory.
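The set-theoretic material implication can be computed elementwise, mirroring its truth table. The sketch below is illustrative; the function name and the sample sets are ours, not the source's.

```python
# Illustrative sketch (names are ours): the set C1 ~> C2 consists of those
# elements of U that are "not in C1, or in C2", mirroring the truth table
# of material implication (false only in the true-antecedent, false-consequent case).

def implies_set(C1, C2, U):
    """Return the material implication of the ordered pair (C1, C2) within U."""
    return {x for x in U if (x not in C1) or (x in C2)}

U = {1, 2, 3, 4}
C1, C2 = {1, 2}, {2, 3}
# The only excluded elements are those in C1 but not in C2 (third table row).
assert implies_set(C1, C2, U) == U - (C1 - C2)
```

Reversing the ordered pair gives $C_2 \naturalto C_1$, excluding instead the elements in $C_2$ but not in $C_1$, as in the second row of figure \ref{f4b}.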
In propositional calculus, the connective of \textit{material bi-implication} (or logical equivalence) is symbolized as: $\longleftrightarrow$. The proposition $q_1 \longleftrightarrow q_2$ is true if and only if $q_1$ and $q_2$ have the same truth value; that is, if both are true or both are false. In set theory, the operator of material bi-implication is symbolized as $\naturaltolr$. Any element of the universal set $\mathbb U$ that belongs both to $C_1$ and $C_2$ belongs to $C_1 \naturaltolr C_2$; so does any element that does not belong to either $C_1$ or $C_2$. However, if any element of $\mathbb U$ belongs to only one of the two sets, $C_1$ or $C_2$, and not to the other, then it does not belong to the set $C_1 \naturaltolr C_2$.
The truth table (a) corresponding to the proposition $q_1 \longleftrightarrow q_2$, and the membership table (b) of the set $C_1 \naturaltolr C_2$ are represented in figure \ref{f5}.
\begin{figure}[H]
\centering
\subfloat[Truth table corresponding to the proposition $q_1 \longleftrightarrow q_2$.]{
\hspace{1.5in}
\begin{tabular}{c|c||c}
$q_1$ & $q_2$ & $q_1 \longleftrightarrow q_2$ \\
\midrule
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2in}
\label{f5a}
}
\\
\subfloat[Membership table corresponding to the set $C_1 \naturaltolr C_2$.]{
\hspace{1.4in}
\begin{tabular}{c|c||c}
$C_1$ & $C_2$ & $C_1 \naturaltolr C_2$ \\
\midrule
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2in}
\label{f5b}}%
\caption{a) Truth table corresponding to the proposition $q_1 \longleftrightarrow q_2$; and b) membership table corresponding to the set $C_1 \naturaltolr C_2$.}
\label{f5}
\end{figure}
The comparison of the tables presented in figures \ref{f5a} and \ref{f5b} makes it possible to establish the following correspondences: 1) a correspondence between the propositions $q_1$ and $q_2$ and the sets $C_1$ and $C_2$, respectively; and 2) a correspondence between the logical connective of material bi-implication of propositional calculus and the operator of material bi-implication of set theory.
The comparison of each of the truth tables and its corresponding membership table presented in this section makes it possible to establish the following correspondences: 1) between each proposition and a set, and 2) between each logical connective of propositional calculus and a specific operator of set theory.
Recall the number of existing logical functions, which are also propositions, of $n$ propositions, for $n=1,2,3\ldots$. If in the logical operations carried out there are $n$ propositions -- $q_1$, $q_2$, $\ldots, q_n$ -- in the corresponding truth tables, there will be $2^n$ rows because each of those propositions can have two truth values -- true or false. Each row of those tables will correspond to each possible case of different assignments for the truth values of each of those $n$ propositions. Since for each of these cases there are two possible assignments of truth value for the logical function to be specified, there are $2^{(2^n)}$ possible logical functions of $n$ propositions. Thus, for example, if $n=2$, there are 16 possible logical functions, and if $n=3$, there are 256 possible logical functions.
For each of the $2^{(2^n)}$ logical functions of $n$ propositions, for $n=1,2,3\ldots$, there is, according to the approach used, a function of $n$ sets, which also is a set.
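The count $2^{(2^n)}$ can be verified by brute-force enumeration. This is an illustrative sketch under our own naming, not a procedure from the source.

```python
from itertools import product

# Illustrative enumeration: a logical function of n propositions is an
# assignment of 0 or 1 to each of the 2**n rows of its truth table, so
# there are 2**(2**n) such functions in total.

def count_logical_functions(n):
    rows = 2 ** n                                # rows of the truth table
    output_columns = list(product((0, 1), repeat=rows))
    return len(output_columns)                   # equals 2 ** (2 ** n)

assert count_logical_functions(1) == 4
assert count_logical_functions(2) == 16
assert count_logical_functions(3) == 256
```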
For propositions resulting from the use of connectives, such as $q_1 \lor q_2$ or $q_3 \land q_4$, it can be suitable to express them in parentheses as $(q_1 \lor q_2)$ and $(q_3 \land q_4)$, respectively. Thus, if a connective from propositional calculus is used with those propositions to obtain another proposition, it is clear how that connective has operated. For example, $(q_1 \lor q_2)\to(q_3 \land q_4)$ is the proposition of a conditional nature: ``If $q_1 \lor q_2$, then $q_3 \land q_4$'', in which $q_1 \lor q_2$ is the antecedent, and $q_3 \land q_4$ is the consequent. The proposition $(q_1 \lor q_2)\to(q_3 \land q_4)$ has been obtained by the action of the connective of material implication on the following ordered pair of propositions: $\left( (q_1 \lor q_2),(q_3 \land q_4)\right)$. Likewise, for clarity, the proposition $(q_1 \lor q_2)\to(q_3 \land q_4)$ can be expressed between parentheses if an operation is carried out on it and on some other proposition, by using some logical connective. Hence, for example, $\left( (q_1 \lor q_2)\to(q_3 \land q_4) \right) \lor (q_4 \to q_5)$ is the proposition obtained through the action of the logical connective of disjunction on the propositions $(q_1 \lor q_2)\to(q_3 \land q_4)$ and $(q_4 \to q_5)$.
Given the correspondences mentioned, 1) between propositions and sets, and 2) between logical connectives and set theory operators, considerations of this same type concerning the use of parentheses are valid in this theory. Therefore, the set $(C_1 \cup C_2) \naturalto (C_3 \cap C_4)$ corresponds to $(q_1 \lor q_2)\to(q_3 \land q_4)$; and the set $\left((C_1 \cup C_2) \naturalto (C_3 \cap C_4) \right) \cup (C_4 \naturalto C_5)$ corresponds to $\left( (q_1 \lor q_2)\to(q_3 \land q_4) \right) \lor (q_4 \to q_5)$.
\section{Isomorphism between Each Law or Theorem of Propositional Calculus and the Corresponding Expression of the Universal Set}
If a function of $n$ propositions, for $n = 1, 2, 3, \ldots$, is true, regardless of the truth values of each of those $n$ propositions, then that propositional function, which also is a proposition, is considered a law -- a theorem -- of propositional calculus.
When carrying out operations with $n$ sets, for $n = 1, 2, 3, \ldots$, using the operators mentioned above, it will be supposed that each element belonging to each of those sets is an element belonging to the universal set $\mathbb U$ specified unambiguously. If the set resulting from these operations is equal to the universal set $\mathbb U$, regardless of what those sets are (with the only limitation being that specified above), then the equation which expresses the equality between the resulting set mentioned and the universal set $\mathbb U$ is considered a law -- a theorem -- of set theory.
Some examples of laws of propositional calculus and the corresponding laws of set theory will be considered below.
The law (or the principle) of the excluded middle, known also as the law (or the principle) of the excluded third (in Latin, \textit{principium tertii exclusi}) can be expressed as $q \lor \overline{q}$. Another Latin designation for that law is \textit{tertium non datur}: no third (possibility) is given.
The expression of a set which is isomorphic to that law is $C \cup {\xvec{C}}$.
In figure \ref{f6}, the truth table (a) corresponding to the law of the excluded middle, and the membership table (b) of the set whose expression is isomorphic to that law of propositional calculus are presented.
\begin{figure}[H]
\centering
\subfloat[Truth table corresponding to the law of propositional calculus $q\lor \overline{q} $.]{
\hspace{1.7in}
\begin{tabular}{c|c||c}
$q$ & $\overline{q}$ & $q\lor \overline{q} $\\
\midrule
0 & 1 & 1 \\
1 & 0 & 1 \\
\end{tabular}
\hspace{2in}
\label{f6a}
}
\\
\subfloat[Membership table corresponding to the set $C \cup {\xvec{C}}$.]{
\hspace{1.6in}
\begin{tabular}{c|c||c}
$C$ & $\xvec{C}$ & $C \cup \xvec{C}$ \\
\midrule
0 & 1 & 1\\
1 & 0 & 1 \\
\end{tabular}
\hspace{2in}
\label{f6b}}%
\caption{a) Truth table for $q\lor \overline{q}$, and b) membership table for $C \cup {\xvec{C}}$.}
\label{f6}
\end{figure}
In the truth table in figure \ref{f6a}, it can be seen that both if $q$ is false and if $q$ is true, $q\lor \overline{q}$ is true. In other words, $q\lor \overline{q}$ is true due to its logical form, regardless of whether $q$ is false or $q$ is true.
In the first row of the membership table for the set $C \cup {\xvec{C}}$, there is a sequence of digits 0, 1, 1. This sequence should be interpreted as follows: If any element of $\mathbb{U}$ does not belong to $C$, then, given the definition of the complement of $C$ (${\xvec{C}}$), that element belongs to ${\xvec{C}}$, and therefore, given the characterization of $C \cup {\xvec{C}}$, that element belongs to $C \cup {\xvec{C}}$.
In the second row of the membership table for the set $C \cup {\xvec{C}}$, there is a sequence of digits 1, 0, 1. This sequence should be interpreted as follows: If any element of $\mathbb{U}$ belongs to $C$, then, given the definition of the complement of $C$ (${\xvec{C}}$), that element does not belong to ${\xvec{C}}$, and therefore, given the characterization of $C \cup {\xvec{C}}$, that element belongs to $C \cup {\xvec{C}}$.
It can be seen, then, that any element of $\mathbb{U}$ belongs to $C \cup {\xvec{C}}$, regardless of whether it belongs to $C$, or does not belong to $C$. Therefore, $C \cup {\xvec{C}}$ is an expression of the set $\mathbb{U}$: $C \cup {\xvec{C}} = \mathbb{U}$.
Notice that $C \cup {\xvec{C}}$ is an expression of $\mathbb{U}$, which is isomorphic to $q\lor \overline{q}$. Indeed, $C$ corresponds to $q$, ${\xvec{C}}$ corresponds to $\overline{q}$ and the operator for set union -- $\cup$ -- corresponds to the connective $\lor$. The equality $C \cup {\xvec{C}}$ = $\mathbb{U}$ is a law, or theorem, of set theory.
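The law $C \cup \xvec{C} = \mathbb{U}$ holds for every subset $C$ of $\mathbb{U}$, which can be checked exhaustively for a small sample universe. The sketch below is illustrative; the universe is an arbitrary choice of ours.

```python
from itertools import chain, combinations

# Illustrative exhaustive check (sample universe chosen by us): for every
# subset C of U, the union of C with its complement equals U itself --
# the set-theoretic form of the law of the excluded middle.

U = frozenset({1, 2, 3})

def subsets(s):
    """All subsets of s, from the empty set up to s itself."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for c in subsets(U):
    C = frozenset(c)
    complement = U - C
    assert C | complement == U
```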
The law known as \textit{modus tollendo tollens} of propositional calculus can be expressed as $\left( (q_1 \to q_2) \land \overline{q}_2 \right) \to \overline{q}_1$. The expression of a set which is isomorphic to that law of propositional calculus is: $( (C_1 \naturalto C_2) \cap {\xvec{C}_2} ) \naturalto {\xvec{C}_1}$.
The truth table (a) corresponding to the law \textit{modus tollendo tollens} of propositional calculus and the membership table (b) for a set whose expression is isomorphic to that law are presented in figure \ref{f7}.
\begin{figure}[H]
\centering
\subfloat[Truth table for the law of propositional calculus $((q_1 \to q_2) \land \overline{q}_2) \to \overline{q}_1 $.]{
\begin{tabular}{c|c|c|c|c|c||c}
$q_1$ & $q_2$ & $\overline{q}_1$ & $\overline{q}_2$ & $q_1 \to q_2$ & $ (q_1 \to q_2) \land \overline{q}_2 $ & $((q_1 \to q_2) \land \overline{q}_2) \to \overline{q}_1$\\
\midrule
0 & 0 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 1 & 0 & 1 \\
\end{tabular}
\label{f7a}
}
\\
\subfloat[Membership table for the expression of a set -- $ ((C_1 \naturalto C_2) \cap {\xvec{C}_2} ) \naturalto {\xvec{C}_1}$ -- isomorphic to the law of propositional calculus $((q_1 \to q_2) \land \overline{q}_2)\to \overline{q}_1$.]{
\hspace{-.3in}
\begin{tabular}{c|c|c|c|c|c||c}
$C_1$ & $C_2$ & ${\xvec{C}_1}$ & ${\xvec{C}_2}$ & $C_1 \naturalto C_2$ & $ (C_1 \naturalto C_2) \cap {\xvec{C}_2} $ & $( (C_1 \naturalto C_2) \cap {\xvec{C}_2} ) \naturalto {\xvec{C}_1} $\\
\midrule
0 & 0 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 1 & 0 & 1 \\
\end{tabular}
\label{f7b}%
}
\caption{a) Truth table corresponding to the proposition $((q_1 \to q_2) \land \overline{q}_2)\to \overline{q}_1$, and b) membership table corresponding to $( (C_1 \naturalto C_2) \cap {\xvec{C}_2} ) \naturalto {\xvec{C}_1} $.}
\label{f7}
\end{figure}
It can be seen in figure \ref{f7} that the truth table and the membership table under consideration are the columns represented to the right of the double vertical lines. The auxiliary columns to the left of those lines are, for obvious reasons, needed to obtain them.
It can be observed that $((q_1 \to q_2) \land \overline{q}_2)\to \overline{q}_1$ is a tautology of propositional calculus, given that it is true regardless of the truth values of $q_1$ and $q_2$. Moreover, given that any element of $\mathbb U$ belongs to the set $((C_1 \naturalto C_2) \cap \xvec{C}_2 ) \naturalto \xvec{C}_1$, this set is equal to the universal set: $((C_1 \naturalto C_2) \cap \xvec{C}_2 ) \naturalto \xvec{C}_1 = \mathbb U$.
The tautology and the expression of the universal sets $\mathbb U$ considered are isomorphic. Indeed, $q_1$ corresponds to $C_1$, $\overline{q}_1$ corresponds to ${\xvec{C}_1}$, $q_2$ corresponds to $C_2$, $\overline{q}_2$ corresponds to ${\xvec{C}_2}$, and the logical connectives of conjunction $\land$ and of material implication $\to$ of propositional calculus correspond, respectively, to the operators of intersection $\cap$ and of the material implication $\naturalto$ of sets.
Reference will be made below to some other laws of propositional calculus and to the corresponding expressions of $\mathbb U$ which are isomorphic to them. Given the information provided above, it is clear how to construct the corresponding truth tables and membership tables. For this reason, those tables are not included here.
The law known as \textit{modus ponendo ponens} can be expressed as $((q_1 \to q_2)\land q_1) \to q_2$. This proposition is true regardless of the truth values of $q_1$ and $q_2$. It is then a tautology. The corresponding expression which is isomorphic to it is $((C_1 \naturalto C_2)\cap C_1) \naturalto C_2$. The corresponding law, or theorem, in set theory is $(((C_1 \naturalto C_2)\cap C_1) \naturalto C_2)=\mathbb U$.
The law of transitivity of propositional calculus can be expressed as: $((q_1 \to q_2) \land (q_2 \to q_3)) \to (q_1 \to q_3)$. The expression of the universal set which is isomorphic to that law is as follows: $((C_1 \naturalto C_2)\cap (C_2 \naturalto C_3)) \naturalto (C_1 \naturalto C_3)$. The corresponding law in set theory is: $(((C_1 \naturalto C_2)\cap (C_2 \naturalto C_3)) \naturalto (C_1 \naturalto C_3)) = \mathbb{U}$.
One of De Morgan's laws in propositional calculus is: $(\overline{q_1 \lor q_2}) \longleftrightarrow (\overline{q}_1 \land \overline{q}_2)$. The expression of the universal set which is isomorphic to that law is: $({\xvec{C_1 \cup C_2}}) \naturaltolr ({\xvec{C}_1} \cap {\xvec{C}_2})$. The corresponding law in set theory is: $(({\xvec{C_1 \cup C_2}}) \naturaltolr ({\xvec{C}_1} \cap {\xvec{C}_2})) = \mathbb{U}$.
The other of De Morgan's laws in propositional calculus is: $(\overline{q_1 \land q_2}) \longleftrightarrow (\overline{q}_1 \lor \overline{q}_2)$. The expression of the universal set which is isomorphic to that law is: $({\xvec{C_1 \cap C_2}}) \naturaltolr ({\xvec{C}_1} \cup {\xvec{C}_2})$. The corresponding law in set theory is $(({\xvec{C_1 \cap C_2}}) \naturaltolr ({\xvec{C}_1} \cup {\xvec{C}_2})) = \mathbb U$.
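Both De Morgan set laws can be verified on sample data. The sketch below is illustrative; the helper name and the sample sets are assumptions of ours.

```python
# Illustrative verification (names and data are ours): applying the
# material bi-implication operator to each side of De Morgan's set laws
# yields the universal set U.

def bi_impl(A, B, U):
    """Elements of U belonging to both A and B, or to neither (material bi-implication)."""
    return {x for x in U if (x in A) == (x in B)}

U = set(range(6))
C1, C2 = {0, 1, 2}, {2, 3}

lhs1 = U - (C1 | C2)               # complement of the union
rhs1 = (U - C1) & (U - C2)         # intersection of the complements
assert bi_impl(lhs1, rhs1, U) == U

lhs2 = U - (C1 & C2)               # complement of the intersection
rhs2 = (U - C1) | (U - C2)         # union of the complements
assert bi_impl(lhs2, rhs2, U) == U
```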
If a function of $n$ propositions, for $n = 1, 2, 3, \ldots$ is false, regardless of the truth values of each of these $n$ propositions, then that propositional function, which also is a proposition, is considered a \textit{contradiction}.
The proposition $q \land \overline{q}$ is considered below. This proposition, a propositional function of a sole proposition $q$, is a contradiction, according to the above criterion.
The truth table (a) of $q\land \overline{q} $ and the membership table (b) of the expression of a set which is isomorphic to this expression are presented in figure \ref{f8}.
\begin{figure}[H]
\centering
\subfloat[Truth table corresponding to the conjunction of $q$ and its negation; that is, $q \land \overline{q}$.]{
\hspace{1.7in}
\begin{tabular}{c|c||c}
$q$ & $\overline{q}$ & $q\land \overline{q} $\\
\midrule
0 & 1 & 0 \\
1 & 0 & 0 \\
\end{tabular}
\hspace{2in}
\label{f8a}
}
\\
\subfloat[Membership table corresponding to the intersection of $C$ and its complement; that is, $C \cap {\xvec{C}}$.]{
\hspace{1.6in}
\begin{tabular}{c|c||c}
$C$ & ${\xvec{C}}$ & $C \cap {\xvec{C}}$ \\
\midrule
0 & 1 & 0\\
1 & 0 & 0 \\
\end{tabular}
\hspace{2in}
\label{f8b}}%
\caption{a) Truth table corresponding to $q\land \overline{q} $, and b) membership table corresponding to the set $C \cap {\xvec{C}}$.}
\label{f8}
\end{figure}
The set $C \cap {\xvec{C}}$ is the intersection of the set $C$ and its complement, the set ${\xvec{C}}$. As seen in the membership table corresponding to the set $C \cap {\xvec{C}}$, presented in figure \ref{f8}, no element of the universal set $\mathbb U$ belongs to the set $C \cap {\xvec{C}}$. Therefore, $C \cap {\xvec{C}}$ is the empty set: $C \cap {\xvec{C}} = \varnothing$. This result is usually regarded as a law in set theory. For this reason, the proposition $q\land \overline{q}$ -- isomorphic to $C \cap {\xvec{C}}$ -- was considered in this section.
\section{The Propositional Calculus of a Canonical\\Fuzz\-y Logic (CFL)}
The CFL mentioned in this article is no different from other fuzzy logics -- including the version of them which is best known and most used, introduced by Lotfi A. Zadeh -- regarding what is understood by ``weight of truth''. The reasons why it is often acceptable and useful to consider that the possible truth values of a proposition are not two -- true and false -- as in BL, have been amply discussed and analyzed critically in specialized literature.
In this article it also will be accepted that within the framework of CFL, each proposition may be assigned a ``weight (or degree) of truth''. Thus, for example, the proposition $q$ may be assigned a weight $w$, which depends on $q$, such that $ 0 \leq w \leq 1$. Reference may be made to the weight of truth using the symbol $w(q)$.
If consideration is given to a set of propositions $q_1, q_2, q_3,\ldots,q_n$, reference may be made to the weight of truth $q_i$, for $i=1,2,3,\ldots,n$, using the symbol $w(q_i)$, and to the weight of truth of $\overline{q}_i$, using the symbol $w(\overline{q}_i)$.
It will be accepted that $w(q_i) + w(\overline{q}_i)=1$; and therefore, $w(q_i)=1-w(\overline{q}_i)$ and $w(\overline{q}_i) = 1-w(q_i)$. The justification for the equality, $w(q_i) + w(\overline{q}_i)=1$, will be provided when considering the law of the excluded middle.
In the applications of CFL, experts in the diverse fields of application should be the ones who determine the weights of truth of the different propositions. Thus, to determine the weight of truth of $q_1$ -- John is tall -- an anthropometric criterion must be used which will depend on the population in which John is considered. In addition, to specify the weight of truth of $q_2$ -- Ellen is rich -- an economic criterion should be used to establish that quantitative evaluation of the degree of Ellen's wealth, also within the context of a certain reference population.
In figure \ref{f9} and in the explanations given about it in the text below, it will be shown how the weight of truth $w(q_1 \lor q_2)$ is determined; that is, the weight of truth of the \textit{disjunction} of $q_1$ and $q_2$, once $w(q_1)$ and $w(q_2)$ are known.
\begin{figure}[H]
\centering
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q_1$ & $q_2$ & $q_1\lor q_2$ & $w(q_1 \lor q_2) = \displaystyle\sum_{i=1}^{4}S_i$\\
\midrule
0 & 0 & 0 & $S_1=0$ \\
0 & 1 & 1 & $S_2=w(\overline{q}_1) \cdot w(q_2)$ \\
1 & 0 & 1 & $S_3=w(q_1) \cdot w(\overline{q}_2)$ \\
1 & 1 & 1 & $S_4=w(q_1) \cdot w(q_2)$\\
\end{tabular}
\hspace{1in}
\caption{How to determine $w(q_1 \lor q_2)$. See explanation below.}
\label{f9}
\end{figure}
On the left, or more precisely, in the first three columns of figure \ref{f9}, the truth table for $(q_1 \lor q_2)$ is presented as constructed by using the same procedure as that used in BL propositional calculus. The fourth column of figure \ref{f9} is devoted to the computation of $w(q_1 \lor q_2)$, according to CFL.
The equality $w(q_1 \lor q_2) = \displaystyle\sum_{i=1}^{4}S_i$ specifies that the weight of truth of $q_1 \lor q_2$ -- $w(q_1 \lor q_2)$ -- according to CFL, is obtained by the sum of four terms. In general, there is a term, or addend, corresponding to each row of the truth table considered. In this case, there are 4 rows in the table because the proposition $q_1 \lor q_2$, which is the function of 2 propositions ($q_1$ and $q_2$), is considered.
The first addend, or summand, $S_1$ considered is equal to 0 because there is a 0 in the first row of the column corresponding to $q_1 \lor q_2$. In all cases in which in one row of the column corresponding to the proposition whose weight of truth should be computed there is a 0, the respective addend is equal to 0.
The addends corresponding to the other 3 rows of the column corresponding to $q_1 \lor q_2$ should be computed because in each one of these rows there is a 1, not a 0.
The numerical sequence corresponding to the second row of the truth table for $q_1 \lor q_2$ is as follows: 0, 1, 1. Written out: ``If $q_1$ is false and $q_2$ is true, then $q_1\lor q_2$ is true''. Given that the third digit in the numerical sequence 0, 1, 1 is a 1, the second addend $S_2$ of $w(q_1 \lor q_2)$ should be computed. This addend has 2 factors. The first of them, given that a 0 was assigned to $q_1$, is $w(\overline{q}_1)$, the weight corresponding to $q_1$ being false. The second factor, given that a 1 was assigned to $q_2$, is $w(q_2)$, the weight corresponding to $q_2$ being true. Hence, $S_2 = w(\overline{q}_1)\cdot w(q_2)$.
The numerical sequence corresponding to the third row of the truth table for $q_1 \lor q_2$ is the following: 1, 0, 1. Written out: ``If $q_1$ is true and $q_2$ is false, then $q_1\lor q_2$ is true''. Given that the third digit in that numerical sequence is a 1, the value of the third addend $S_3$ of $w(q_1 \lor q_2)$ should be computed. This addend has 2 factors. The first of them, given that a 1 was assigned to $q_1$, is $w(q_1)$, the weight corresponding to $q_1$ being true. The second factor, given that a 0 was assigned to $q_2$, is $w(\overline{q}_2)$, the weight corresponding to $q_2$ being false. Hence, $S_3 = w(q_1)\cdot w(\overline{q}_2)$.
The numerical sequence corresponding to the fourth row of the truth table for $q_1\lor q_2$ is as follows: 1, 1, 1. Written out: ``If $q_1$ is true and $q_2$ is true, then $q_1\lor q_2$ is true''. Given that the third digit in that numerical sequence (1, 1, 1) is a 1, the fourth addend $S_4$ of $w(q_1\lor q_2)$ must be computed. This addend has 2 factors. The first of them, given that a 1 was assigned to $q_1$, is $w(q_1)$, the weight corresponding to $q_1$ being true. The second factor, given that a 1 was also assigned to $q_2$, is $w(q_2)$, the weight corresponding to $q_2$ being true. Hence, $S_4 = w(q_1)\cdot w(q_2)$.
Therefore, $w(q_1 \lor q_2) = S_1 + S_2 + S_3 + S_4 =
0 + w(\overline{q}_1) \cdot w(q_2) + w(q_1) \cdot w(\overline{q}_2) + w(q_1) \cdot w(q_2) =
(1-w(q_1)) \cdot w(q_2) + w(q_1) \cdot (1-w(q_2)) + w(q_1) \cdot w(q_2) =
w(q_2)-w(q_1)\cdot w(q_2) + w(q_1)-w(q_1)\cdot w(q_2) + w(q_1) \cdot w(q_2) =
w(q_1) + w(q_2) - w(q_1)\cdot w(q_2)$.
In conclusion, according to CFL, $w(q_1 \lor q_2) = w(q_1) + w(q_2) - w(q_1)\cdot w(q_2)$.
It is important to note that the equality computed according to CFL,\\
$w(q_1 \lor q_2) = w(q_1) + w(q_2) - w(q_1)\cdot w(q_2)$,\\
is different from the equality accepted by the most commonly used fuzzy logics:\\ $w(q_1 \lor q_2) = \max(w(q_1),w(q_2))$.
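To make the comparison concrete, the two disjunction operators can be coded side by side. This is an illustrative sketch only; the function names \texttt{w\_or\_cfl} and \texttt{w\_or\_max} are ours, not CFL notation.

```python
def w_or_cfl(w1, w2):
    # CFL disjunction: w(q1 v q2) = w(q1) + w(q2) - w(q1)*w(q2)
    return w1 + w2 - w1 * w2

def w_or_max(w1, w2):
    # Disjunction in the most commonly used fuzzy logics
    return max(w1, w2)

# The two operators agree when the weights are restricted to 0 and 1,
# but differ on intermediate weights:
assert all(w_or_cfl(a, b) == w_or_max(a, b)
           for a in (0.0, 1.0) for b in (0.0, 1.0))
assert abs(w_or_cfl(0.4, 0.6) - 0.76) < 1e-12
assert w_or_max(0.4, 0.6) == 0.6
```

For instance, with $w(q_1) = 0.4$ and $w(q_2) = 0.6$, CFL yields $0.4 + 0.6 - 0.24 = 0.76$, whereas the max-based operator yields $0.6$.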
The same approach is used to obtain the weight of truth $w(q_1 \land q_2)$; that is, the weight of truth according to CFL of the \textit{conjunction} of $q_1$ and $q_2$. Figure \ref{f10} facilitates the understanding of the procedure used here:
\begin{figure}[H]
\centering
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q_1$ & $q_2$ & $q_1\land q_2$ & $w(q_1 \land q_2) = \displaystyle\sum_{i=1}^{4}S_i$\\
\midrule
0 & 0 & 0 & $S_1=0$ \\
0 & 1 & 0 & $S_2=0$ \\
1 & 0 & 0 & $S_3=0$ \\
1 & 1 & 1 & $S_4=w(q_1) \cdot w(q_2)$\\
\end{tabular}
\hspace{1in}
\caption{How to determine $w(q_1\land q_2)$. This is explained below.}
\label{f10}
\end{figure}
Figure \ref{f10} makes it possible to see that $S_4=w(q_1) \cdot w(q_2)$ is the only addend $S_i$, for $i = 1, 2, 3, 4$, different from 0 in the case of the proposition $q_1\land q_2$, according to CFL.
It is important to note that the equality computed according to CFL,
\\ $w(q_1 \land q_2) = w(q_1)\cdot w(q_2)$,\\
is different from the equality accepted by the most commonly used fuzzy logics:\\
$w(q_1 \land q_2) = \min(w(q_1),w(q_2))$.
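As with disjunction, the two conjunction operators can be contrasted numerically (an illustrative sketch; the function names are ours):

```python
def w_and_cfl(w1, w2):
    # CFL conjunction: w(q1 ^ q2) = w(q1) * w(q2)
    return w1 * w2

def w_and_min(w1, w2):
    # Conjunction in the most commonly used fuzzy logics
    return min(w1, w2)

# Agreement on the Boolean weights 0 and 1, divergence in between:
assert all(w_and_cfl(a, b) == w_and_min(a, b)
           for a in (0.0, 1.0) for b in (0.0, 1.0))
assert abs(w_and_cfl(0.4, 0.6) - 0.24) < 1e-12
assert w_and_min(0.4, 0.6) == 0.4
```

With $w(q_1) = 0.4$ and $w(q_2) = 0.6$, CFL yields $0.24$, whereas the min-based operator yields $0.4$.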
In addition, the same approach is used to determine the weights of truth $w(q_1 \to q_2)$, $w(q_2 \to q_1)$ and $w(q_1 \longleftrightarrow q_2)$, as specified in figure \ref{f11}.
\begin{figure}[H]
\centering
\subfloat[How to determine $w(q_1 \to q_2)$.]{
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q_1$ & $q_2$ & $q_1 \to q_2$ & $w(q_1 \to q_2) = \displaystyle\sum_{i=1}^{4}S_i$\\
\midrule
0 & 0 & 1 & $S_1=w(\overline{q}_1) \cdot w(\overline{q}_2)$ \\
0 & 1 & 1 & $S_2=w(\overline{q}_1) \cdot w(q_2)$ \\
1 & 0 & 0 & $S_3=0$ \\
1 & 1 & 1 & $S_4=w(q_1) \cdot w(q_2)$\\
\end{tabular}
\hspace{1in}
\label{f11a}
}
\\
\subfloat[How to determine $w(q_2 \to q_1)$.]{
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q_1$ & $q_2$ & $q_2 \to q_1$ & $w(q_2 \to q_1) = \displaystyle\sum_{i=1}^{4}S_i$ \\
\midrule
0 & 0 & 1 & $S_1=w(\overline{q}_1) \cdot w(\overline{q}_2)$ \\
0 & 1 & 0 & $S_2=0$ \\
1 & 0 & 1 & $S_3=w(q_1) \cdot w(\overline{q}_2)$ \\
1 & 1 & 1 & $S_4=w(q_1) \cdot w(q_2)$\\
\end{tabular}
\hspace{1in}
\label{f11b}}%
\\
\subfloat[How to determine $w(q_1 \longleftrightarrow q_2)$.]{
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q_1$ & $q_2$ & $q_1 \longleftrightarrow q_2$ & $w(q_1 \longleftrightarrow q_2) = \displaystyle\sum_{i=1}^{4}S_i$ \\
\midrule
0 & 0 & 1 & $S_1=w(\overline{q}_1) \cdot w(\overline{q}_2)$ \\
0 & 1 & 0 & $S_2=0$ \\
1 & 0 & 0 & $S_3=0$ \\
1 & 1 & 1 & $S_4=w(q_1) \cdot w(q_2)$\\
\end{tabular}
\hspace{1in}
\label{f11c}}%
\caption{How to determine a) $w(q_1 \to q_2)$, b) $w(q_2 \to q_1)$, and c) $w(q_1 \longleftrightarrow q_2)$.}
\label{f11}
\end{figure}
Recall that, for any $q_i$, for $i=1,2,3,\ldots$, $w(q_i) + w(\overline{q}_i)=1$. Therefore, according to the information provided in figure \ref{f11}, the following results are obtained:\\
\\
$w(q_1 \to q_2)=w(\overline{q}_1) \cdot w(\overline{q}_2) + w(\overline{q}_1) \cdot w(q_2) + 0 + w(q_1) \cdot w(q_2) = \\ w(\overline{q}_1)\cdot (w(q_2)+w(\overline{q}_2)) + w(q_1)\cdot w(q_2) = w(\overline{q}_1) + w(q_1)\cdot w(q_2) = \\ 1-w(q_1)+w(q_1)\cdot w(q_2)$\\
\\
$w(q_2 \to q_1)=w(\overline{q}_1) \cdot w(\overline{q}_2) + 0 + w(q_1) \cdot w(\overline{q}_2) + w(q_1) \cdot w(q_2) = \\ w(\overline{q}_2)\cdot (w(\overline{q}_1) + w(q_1)) + w(q_1)\cdot w(q_2) = w(\overline{q}_2) + w(q_1)\cdot w(q_2) = \\ 1-w(q_2)+w(q_1)\cdot w(q_2)$\\
\\
$w(q_1 \longleftrightarrow q_2)=w(\overline{q}_1) \cdot w(\overline{q}_2) + 0 + 0 + w(q_1) \cdot w(q_2) = \\ (1 - w(q_1))\cdot(1-w(q_2)) + w(q_1)\cdot w(q_2) = \\ 1- w(q_2) - w(q_1) + w(q_1)\cdot w(q_2) + w(q_1)\cdot w(q_2) = \\ 1-w(q_1) - w(q_2)+ 2\cdot w(q_1)\cdot w(q_2) $.\\
Hence, according to CFL,\\
$w(q_1 \to q_2) = 1-w(q_1)+w(q_1)\cdot w(q_2) \\ w(q_2 \to q_1)= 1-w(q_2) + w(q_1)\cdot w(q_2) \\ w(q_1 \longleftrightarrow q_2)= 1-w(q_1) - w(q_2)+ 2\cdot w(q_1)\cdot w(q_2) $.
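The three equalities just derived translate directly into code (an illustrative sketch; the function names are ours, not CFL notation):

```python
def w_imp(w1, w2):
    # CFL: w(q1 -> q2) = 1 - w(q1) + w(q1)*w(q2)
    return 1 - w1 + w1 * w2

def w_iff(w1, w2):
    # CFL: w(q1 <-> q2) = 1 - w(q1) - w(q2) + 2*w(q1)*w(q2)
    return 1 - w1 - w2 + 2 * w1 * w2

# On Boolean weights these reproduce the classical truth tables:
assert w_imp(0, 0) == 1 and w_imp(0, 1) == 1
assert w_imp(1, 0) == 0 and w_imp(1, 1) == 1
assert w_iff(0, 0) == 1 and w_iff(1, 1) == 1
assert w_iff(0, 1) == 0 and w_iff(1, 0) == 0
# An intermediate-weight example: w(q1) = w(q2) = 0.5
assert w_imp(0.5, 0.5) == 0.75
```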
Consideration is given below to the law of the excluded middle according to the propositional calculus of CFL.
\begin{figure}[H]
\centering
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q$ & $\overline{q}$ & $q\lor \overline{q}$ & $w(q \lor \overline{q}) = \displaystyle\sum_{i=1}^{2}S_i$\\
\midrule
0 & 1& 1 & $S_1=w(\overline{q})$ \\
1 & 0 & 1 & $S_2=w(q) $\\
\end{tabular}
\hspace{1in}
\caption{How to determine $w(q \lor \overline{q})$.}
\label{f12}
\end{figure}
According to figure \ref{f12}, the following result is obtained:\\
$w(q\lor \overline{q}) = S_1 + S_2 = w(\overline{q}) + w(q) =1 - w(q) + w(q) = 1$
\\
In this case one might wonder about how $S_1$ and $S_2$ were specified. The justification is as follows: $q\lor \overline{q}$ is a propositional function of a sole proposition $q$, and in general, for any propositional function of $n$ propositions, for $n=1,2,3\ldots$, each $S_i$ is equal to the product of $n$ factors, each of which is a certain weight of truth.
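This general procedure -- summing, over the rows of the truth table on which the propositional function is true, a product of $n$ weight factors -- can be written once and reused for any connective. The following is our own sketch; the helper name \texttt{w\_cfl} is hypothetical, not CFL terminology.

```python
from itertools import product

def w_cfl(truth_fn, weights):
    """CFL weight of truth of a propositional function of n propositions.

    truth_fn takes n Boolean arguments (one truth-table row) and returns
    the row's output.  A row assigning 1 to q_i contributes the factor
    w(q_i); a row assigning 0 contributes w(not q_i) = 1 - w(q_i).
    Only rows whose output is true are added.
    """
    total = 0.0
    for row in product((False, True), repeat=len(weights)):
        if truth_fn(*row):
            factor = 1.0
            for assigned, w in zip(row, weights):
                factor *= w if assigned else 1 - w
            total += factor
    return total

# Law of the excluded middle: weight 1 for any w(q).
assert abs(w_cfl(lambda q: q or not q, [0.37]) - 1.0) < 1e-12
# Contradiction q and not q: weight 0 for any w(q).
assert w_cfl(lambda q: q and not q, [0.37]) == 0.0
```

The same routine recovers the binary formulas derived above, e.g. $w(q_1 \lor q_2) = 0.76$ for $w(q_1) = 0.4$, $w(q_2) = 0.6$.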
In figure \ref{f13} indications are given on how to determine the weight of truth of the proposition $q\land \overline{q}$ according to the propositional calculus of CFL.
\begin{figure}[H]
\centering
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q$ & $\overline{q}$ & $q\land \overline{q}$ & $w(q \land \overline{q}) = \displaystyle\sum_{i=1}^{2}S_i$\\
\midrule
0 & 1 & 0 & $S_1=0 $ \\
1 & 0 & 0 & $S_2=0 $\\
\end{tabular}
\hspace{1in}
\caption{How to determine $w(q \land \overline{q})$.}
\label{f13}
\end{figure}
According to figure \ref{f13}, the following result is obtained:\\
$w(q\land \overline{q}) = S_1 + S_2 = 0$.
The reason why $S_1 = 0$ here is as follows: In the first row of the column corresponding to $q\land \overline{q}$, there is a 0. The reason why $S_2 = 0$ here is as follows: In the second row of the column corresponding to $q\land \overline{q}$, there also is a 0. Given that the weight of truth of $q\land \overline{q}$ (i. e., $w(q\land \overline{q})$) is equal to 0, regardless of the weight of truth of $q$ (i. e., $w(q)$), it is considered that $q\land \overline{q}$ is a contradiction of propositional calculus of CFL.
Another tautology, one of De Morgan's laws, will be considered below according to the propositional calculus of CFL.
\begin{figure}[H]
\centering
\hspace{1in}
\begin{tabular}{c|c|c|c}
$q_1$ & $q_2$ & $ (\overline{q_1 \lor q_2}) \longleftrightarrow (\overline{q}_1 \land \overline{q}_2)$ & $w((\overline{q_1 \lor q_2}) \longleftrightarrow (\overline{q}_1 \land \overline{q}_2)) = \displaystyle\sum_{i=1}^{4}S_i$\\
\midrule
0 & 0 & 1 & $S_1=w(\overline{q}_1) \cdot w(\overline{q}_2)$ \\
0 & 1 & 1 & $S_2=w(\overline{q}_1) \cdot w(q_2) $ \\
1 & 0 & 1 & $S_3=w(q_1) \cdot w(\overline{q}_2)$ \\
1 & 1 & 1 & $S_4=w(q_1) \cdot w(q_2)$\\
\end{tabular}
\hspace{1in}
\caption{How to determine $w((\overline{q_1 \lor q_2}) \longleftrightarrow (\overline{q}_1 \land \overline{q}_2))$.}
\label{f14}
\end{figure}
According to figure \ref{f14}, the following result is obtained:\\
$w((\overline{q_1 \lor q_2}) \longleftrightarrow (\overline{q}_1 \land \overline{q}_2)) = w(\overline{q}_1) \cdot w(\overline{q}_2)+w(\overline{q}_1) \cdot w(q_2)+w(q_1) \cdot w(\overline{q}_2)+w(q_1) \cdot w(q_2) = w(\overline{q}_1)\cdot(w(\overline{q}_2)+w(q_2)) + w(q_1)\cdot(w(\overline{q}_2)+w(q_2)) = w(\overline{q}_1) +w(q_1) = 1$.
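The same tautology can be checked numerically for arbitrary weights by adding the four row products read off the table in figure \ref{f14} (an illustrative sketch; the function name is ours):

```python
def w_de_morgan(w1, w2):
    # The four addends of figure 14: the biconditional between
    # not(q1 or q2) and (not q1) and (not q2) is true on every row,
    # so every row's product of weight factors is added.
    nw1, nw2 = 1 - w1, 1 - w2
    return nw1 * nw2 + nw1 * w2 + w1 * nw2 + w1 * w2

# Weight 1 regardless of the weights of truth of q1 and q2:
for w1, w2 in [(0.0, 1.0), (0.3, 0.8), (0.5, 0.5)]:
    assert abs(w_de_morgan(w1, w2) - 1.0) < 1e-12
```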
Recall that a tautology of BL propositional calculus is a propositional function of $n$ propositions, for $n=1,2,3\ldots$, which is true regardless of the truth value -- true or false -- of each of the $n$ propositions.
In general, any tautology of BL propositional calculus is also a tautology -- or law -- of CFL propositional calculus, in the sense that its weight of truth is equal to 1, regardless of the weight of truth of each of the $n$ propositions of which it is a propositional function.
A negation of any tautology of BL propositional calculus is a contradiction. Recall that in that calculus a contradiction is any propositional function of $n$ propositions, for $n=1,2,3\ldots$, which is false regardless of the truth value -- true or false -- of each of those $n$ propositions.
In general, any contradiction of propositional calculus of BL is also a contradiction of propositional calculus of CFL, in the sense that its weight of truth is equal to 0, regardless of the weight of truth of each of the $n$ propositions of which it is a propositional function.
As a consequence of the characterizations given of tautologies and contradictions, both of propositional calculus of BL and of propositional calculus of CFL, it is obvious that in both types of calculus, a) the negation of any tautology is a contradiction, and b) the negation of any contradiction is a tautology.
All the results that can be obtained with the propositional calculus of BL can also be obtained with the propositional calculus of CFL. Indeed, in the former, each proposition can have only one of two truth values: true or false. If each proposition of this calculus is reinterpreted within the framework of propositional calculus of CFL, restricting the weights of truth to only two of them (the weight of truth equal to 1, or the weight of truth equal to 0), then the second propositional calculus -- that of CFL -- makes it possible to obtain all the results achieved by using the first of these calculi -- that of BL. Thus it can be concluded that the first type of propositional calculus is one particular case, which could be viewed as a ``limit case'' of the second type of calculus, given that 0 and 1 are the minimum and maximum numerical values, respectively, which in the second calculus can be assigned to the weight of truth of any proposition.
\section{Fuzzy Sets According to CFL}
In classical set theory, one possible way of characterizing a finite set is to specify each element belonging to it. Thus, the set $C_1$ to which the natural numbers 7, 17 and 528 belong can be characterized as follows:
\begin{equation}
C_1 = \{7, 17, 528\}
\end{equation}
In some applications of this theory which mention different extra-mathematical entities, such as human beings, an expression such as the following does not suffice to characterize a set unambiguously:
\begin{equation}
C_2 = \{x_1, x_2, x_3\}
\end{equation}
In effect, the characterization of $C_2$ must also specify which people are referred to by the symbols $x_1, x_2, x_3$. However, difficulties may appear in the applications to different fields of knowledge. Some of these difficulties have led to the introduction of logics different from BL. Suppose that $C_2$ is the ``set of tall men in a certain population''. In the case considered in (1), there is no doubt as to whether each element of the set $N$ of the natural numbers belongs, or does not belong, to the set $C_1$. Each natural number either belongs or does not belong to $C_1$. Therefore, for example, given the characterization of $C_1$, it is known that the natural number 287 does not belong to $C_1$ and that the natural number 528 does belong to $C_1$. On the other hand, in case (2) it could be useful to introduce the notion of degree or ``weight of membership'' in the set $C_2$ for each element considered. This was covered in section 4 above. Considerations of this type were precisely those that led to the development of fuzzy set theory.
Just as in classical set theory, in CFL fuzzy set theory, when applying the latter to the treatment of a given topic, first the characterization of the universal set $\mathbb U$ to which all the elements belong is introduced, and one must specify the elements to which reference is made. It will be accepted that each element belongs to $\mathbb U$ with a weight of membership equal to 1. In this article only finite fuzzy universal sets will be discussed. The notation to be used is adequate for this restriction. As will be shown in other articles, that limitation can be eliminated to make it possible to operate, using the CFL approach, with both countable and uncountable infinite universal sets.
In classical set theory, each element belonging to $\mathbb U$ does not belong to any set $C$ different from $\mathbb U$. In effect, if each element of those belonging to $\mathbb U$ belonged to $C$, $C$ would not be different from $\mathbb U$, but rather equal to it; that is, the equality $C = \mathbb U$ would be valid.
In fuzzy set theory, according to CFL, each of the elements belonging to $\mathbb U$ does belong to each set characterized within the framework of a given $\mathbb U$. However, if that set is different from $\mathbb U$, then at least one of those elements belongs to it with a weight different from 1. As justified below, each element belonging to $\mathbb U$ belongs to the complement of $\mathbb U$ ($\xvec{\mathbb U}$), which is equal to the empty set $\varnothing$, with a weight of membership equal to 0.
Consider any element $x_i$ whatsoever, for $i = 1,2,3,\dots,n$, belonging to $\mathbb U$. That membership may be expressed as $x_i \in \mathbb U$. (Recall that the symbol $\in$ is used to indicate the membership of a given element in a set.)
The weight of membership of $x_i$ in $\mathbb U$ is symbolized as $w(x_i \in \mathbb U)$. In this case, that weight of membership is equal to 1: $w(x_i \in \mathbb U)=1$.
In general, for the weight of membership of any element $x_i$ of $\mathbb U$ in a set $C_j$, for $j = 1,2,3,\dots$, characterized within the framework of that $\mathbb U$, it will be accepted that its numerical value is between 0 and 1: $0 \leq w(x_i \in C_j) \leq 1$.
Notice that ($x_i \in \mathbb U$) and ($x_i \in C_j$) are, respectively, the propositions ``The element $x_i$ belongs to $\mathbb U$'' and ``The element $x_i$ belongs to $C_j$''. Therefore, the weight of membership of any element in a set may be interpreted also in CFL set theory as the weight of truth of the proposition which affirms the membership of that element in the set considered.
The propositions $x_i \in \mathbb U$ and $x_i \in \xvec{\mathbb U}$ will be expressed, respectively, in an abbreviated way, as $q_{i,\mathbb U}$ and $q_{i,\xvecsub{\mathbb U}}$. The first subscript in both cases specifies which element is mentioned and the second subscript specifies which sets -- $\mathbb U$ and $ \xvec{\mathbb U}$, respectively -- the element is said to belong to. Given that $ \xvec{\mathbb U} = \varnothing$, $q_{i,\xvecsub{\mathbb U}} = q_{i,\varnothing}$.
The propositions ($x_i \in C_j$) and ($x_i \in \xvec{C}_j$) will be expressed, respectively, in an abbreviated way, as $q_{i,j}$, and $q_{i,\xvecsub{j}}$. The first subscript in both cases specifies which element is mentioned, and the second subscript specifies unambiguously which set -- $C_j$ and $\xvec{C}_j$, respectively -- the element is said to belong to.
As seen above, each proposition of this type has a weight of truth. Consider, for example, the proposition $q_{i,j}$. Its weight of truth is symbolized as $w(q_{i,j})$. That proposition $q_{i,j}$ will be regarded as a proposition characteristic of CFL propositional calculus. Thus, $w(q_{i,j}) + w(\overline{q}_{i,j}) = 1$, and
\begin{equation}
w(\overline{q}_{i,j}) = 1 - w(q_{i,j}).
\end{equation}
It will be accepted that the following equality is valid: $w(q_{i,\xvecsub{j}}) = w(\overline{q}_{i,j})$. Therefore, due to (3),
\begin{equation}
w(q_{i,\xvecsub{j}}) = 1 - w(q_{i,j}).
\end{equation}
Written out, equation (4) can be expressed as follows: The weight of truth of $q_{i,\xvecsub{j}}$ is equal to the difference between 1 and the weight of truth of $q_{i,j}$, or in an equivalent way, the weight of membership of any element belonging to the complement of any set $C_j$ (that is, $\xvec{C}_j$) is equal to 1 minus the weight of membership of that element in $C_j$. If this result is applied to the universal set $\mathbb U$, the following is obtained: $w(q_{i,\xvecsub{\mathbb U}}) = w(q_{i,\varnothing}) = 1-w(q_{i,\mathbb U}) = 1-1 = 0 $. As previously stated, $w(q_{i,\varnothing}) = 0$.
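Equation (4) gives the complement of a fuzzy set directly. Representing a fuzzy set as a mapping from elements to weights of membership (our own representation, not CFL notation), the complement is a one-line computation:

```python
def complement(fuzzy_set):
    # Equation (4): w(x in C-bar) = 1 - w(x in C).
    return {x: 1 - w for x, w in fuzzy_set.items()}

# The universal set maps every element to weight 1; its complement
# is the empty set, mapping every element to weight 0.
U = {'x1': 1.0, 'x2': 1.0, 'x3': 1.0}
assert complement(U) == {'x1': 0.0, 'x2': 0.0, 'x3': 0.0}
```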
If desired, to recall that one is operating with fuzzy sets, according to CFL, the superscript $f$, for example, for \textit{f}uzzy may be added on the left to each set of this type, in the following way:\\
$~^f\mathbb U$; $~^f\varnothing$; $~^fC_5$; $~^f\xvec{C}_9$.\\
That will not be done here. In many cases it is unnecessary because the type of set with which one is operating is clear in context or by clarification.
In applied logic one often selects a universal set $\mathbb U$ such that all the elements belonging to it are of the same type. In this section, to provide examples, according to CFL, of some operations with sets, attention will be given below to a $\mathbb U$ such that each of the 5 elements belonging to it is a person -- a human being. Consideration will also be given to 2 of the subsets of that $\mathbb U$: $C_1$ and $C_2$.
$C_1$ and $C_2$ will be characterized as follows:\\
$C_1$: set of chess players, and\\
$C_2$: set of wealthy people.
$\mathbb U$, to which it will be admitted that persons $x_1$, $x_2$, $x_3$, $x_4$ and $x_5$ belong, was selected in a way such that $x_2$ is a distinguished professional chess player, a grandmaster, with the possibility of becoming the next world champion in chess. According to experts in chess, it may be considered that $w(q_{2,1}) = 1$.
The element $x_5$ is a chess fan whose level, according to experts in chess, is that of a third class player: $w(q_{5,1}) = 0.4$.
Regarding $x_1$, $x_3$ and $x_4$, none of these people know anything about chess rules. They do not know, for instance, how to move each one of the chess pieces, according to the rules of the game. According to chess experts, $w(q_{1,1}) = w(q_{3,1}) = w(q_{4,1}) = 0$.
In another area, let us admit that experts in economics and finances came to the following conclusions:\\
$w(q_{1,2}) = 0.9$; $w(q_{2,2}) = 0.8$; $w(q_{3,2}) = 0.7$; $w(q_{4,2}) = 0$; and $w(q_{5,2}) = 0.6$.
For clarity and to facilitate the comprehension of the operations to be carried out with the objective of providing some examples of them, the different fuzzy sets to be discussed will be expressed below as column vectors. Each row of these column vectors begins with the symbol $x_i$ for $i = 1,2,3,4,5$, corresponding to the element considered; and to the right of that symbol and separated from it by a semi-colon (;), the equation which establishes the corresponding weight of truth is provided.
\begin{equation*}
\mathbb U = \begin{array}{|cc|}
x_1; & w(q_{1,\mathbb U}) =1 \\
x_2; & w(q_{2,\mathbb U})=1 \\
x_3; & w(q_{3,\mathbb U}) =1 \\
x_4; & w(q_{4,\mathbb U}) =1 \\
x_5; & w(q_{5,\mathbb U}) =1 \\
\end{array}\quad;\qquad
\xvec{\mathbb U} =\varnothing = \begin{array}{|cc|}
x_1; & w(q_{1,\varnothing}) =0 \\
x_2; & w(q_{2,\varnothing})=0 \\
x_3; & w(q_{3,\varnothing}) =0 \\
x_4; & w(q_{4,\varnothing}) =0 \\
x_5; & w(q_{5,\varnothing}) =0 \\
\end{array}\end{equation*}
\begin{equation*}
C_1 = \begin{array}{|cc|}
x_1; & w(q_{1,1}) =0 \\
x_2; & w(q_{2,1})=1 \\
x_3; & w(q_{3,1}) =0 \\
x_4; & w(q_{4,1}) =0 \\
x_5; & w(q_{5,1}) =0.4 \\
\end{array}\quad;\qquad
\xvec{C}_1 = \begin{array}{|cc|}
x_1; & w(q_{1,\xvecsub{1}}) =1 \\
x_2; & w(q_{2,\xvecsub{1}})=0 \\
x_3; & w(q_{3,\xvecsub{1}}) =1 \\
x_4; & w(q_{4,\xvecsub{1}}) =1 \\
x_5; & w(q_{5,\xvecsub{1}}) =0.6 \\
\end{array}\end{equation*}
\\
For each element $x_i$, for $i = 1,2,3,4,5$, the numerical values of $w(q_{i,\xvecsub{1}})$ were obtained from $w(q_{i,1})$ by using (4).
\begin{equation*}
C_2 = \begin{array}{|cc|}
x_1; & w(q_{1,2}) =0.9 \\
x_2; & w(q_{2,2})=0.8 \\
x_3; & w(q_{3,2}) =0.7 \\
x_4; & w(q_{4,2}) =0 \\
x_5; & w(q_{5,2}) =0.6 \\
\end{array}\quad;\qquad
\xvec{C}_2 = \begin{array}{|cc|}
x_1; & w(q_{1,\xvecsub{2}}) =0.1 \\
x_2; & w(q_{2,\xvecsub{2}})=0.2 \\
x_3; & w(q_{3,\xvecsub{2}}) =0.3 \\
x_4; & w(q_{4,\xvecsub{2}}) =1 \\
x_5; & w(q_{5,\xvecsub{2}}) =0.4 \\
\end{array}\end{equation*}
\\
Again, (4) was used to obtain the numerical values of $w(q_{i,\xvecsub{2}})$ from the numerical values of $w(q_{i,2})$.
Some operations carried out with the fuzzy sets specified are presented below to emphasize that all the laws of classical set theory are conserved -- that is, also valid -- in CFL fuzzy set theory.
In classical set theory, for any set $C$, the following law is valid: $C \cup \xvec{C} = \mathbb U$. Written out: The union of any set $C$ whatsoever and its complement $\xvec{C}$ is equal to the universal set $\mathbb U$.
Consider any universal set $\mathbb U$ according to CFL set theory. It will be proven that if the operation of union of that universal set and its complement $\xvec{\mathbb U}$ is carried out, the following result is obtained:\\
$\mathbb U \cup \xvec{\mathbb U} = \mathbb U \cup \varnothing = \mathbb U$
In effect, the weight of truth $w(q_{i,\mathbb U} \lor q_{i,\varnothing})$ can be computed for each $x_i$, for $i = 1,2,3,\ldots, n$, belonging to that $\mathbb U$:\\
$w(q_{i,\mathbb U} \lor q_{i,\varnothing}) = w(q_{i,\mathbb U} \lor q_{i,\xvecsub{\mathbb U}}) = w(q_{i,\mathbb U} \lor \overline{q}_{i,\mathbb U}) = w(q_{i,\mathbb U})+w(\overline{q}_{i,\mathbb U}) = w(q_{i,\mathbb U}) + 1- w(q_{i,\mathbb U}) = 1$.
Given that for any element $x_i$, for $i = 1,2,3,\ldots, n$, belonging to $\mathbb U \cup \varnothing$, the corresponding weight of truth $w(q_{i,\mathbb U} \lor q_{i,\varnothing})$ is equal to 1, the union of $\mathbb U$ and $\varnothing$ ($\mathbb U \cup \varnothing$) is equal to the set $\mathbb U$: $\mathbb U \cup \varnothing = \mathbb U$.
Consider any subset $C_j$, for $j = 1,2,3,\ldots$, of a $\mathbb U$, according to CFL. In general, for any element $x_i$, for $i = 1,2,3,\ldots$, $n$, belonging to ($C_j \cup \xvec{C_j}$), the following equality is valid:\\
$w(q_{i,j} \lor q_{i,\xvecsub{j}}) = w(q_{i,j} \lor \overline{q}_{i,j}) = w(q_{i,j}) + w(\overline{q}_{i,j}) =1$.\\
Therefore, ($C_j \cup \xvec{C}_j = \mathbb U$).
In particular, it can be easily verified that for the set $\mathbb U$ of the 5 people mentioned, from which the subset $C_1$ of chess players and the subset $C_2$ of wealthy people were considered, the following equalities are valid:\\
\\
$\mathbb U \cup \xvec{\mathbb U} = \mathbb U \cup \varnothing = \mathbb U$\\
\\
$C_1 \cup \xvec{C}_1 = \mathbb U$\\
\\
$C_2 \cup \xvec{C}_2 = \mathbb U$\\
For any element $x_i$ of any $\mathbb U$, according to CFL, the following equalities are valid:\\
$w(q_{i,\mathbb U} \land q_{i,\varnothing}) = w(q_{i,\mathbb U} \land q_{i,\xvecsub{\mathbb U}}) = w(q_{i,\mathbb U} \land \overline{q}_{i,\mathbb U}) = 0$.\\
In effect, according to what was discussed in section 4, the weight of truth of the conjunction of a proposition and the negation of that proposition is equal to 0. Therefore,\\
$\mathbb U \cap \varnothing=\varnothing$.
Reasoning of this same type is valid for any element $x_i$ of any subset $C_j$ of that $\mathbb U$: $w(q_{i,j} \land q_{i,\xvecsub{j}}) = w(q_{i,j} \land \overline{q}_{i,j}) = 0$.
In particular, for the set $\mathbb U$ of 5 people of which $C_1$ and $C_2$ are subsets, it can easily be verified that the following equalities are valid:\\
\\
$\mathbb U \cap \xvec{\mathbb U} = \mathbb U \cap \varnothing = \varnothing$\\
\\
$C_1 \cap \xvec{C}_1 = \varnothing$\\
\\
$C_2 \cap \xvec{C}_2 = \varnothing$\\
Consideration will be given below to the union of $C_1$ and $C_2$ ($C_1 \cup C_2$) and to the intersection of those sets ($C_1 \cap C_2$).
\begin{align*}
(C_1 \cup C_2) = \begin{array}{|cc|}
x_1; & w(q_{1,{1}}) =0 \\
x_2; & w(q_{2,{1}})=1 \\
x_3; & w(q_{3, {1}}) =0 \\
x_4; & w(q_{4, {1}}) =0 \\
x_5; & w(q_{5, {1}}) =0.4 \\
\end{array} \hspace{.1in} \cup \hspace{.1in} \begin{array}{|cc|}
x_1; & w(q_{1,2}) =0.9 \\
x_2; & w(q_{2,2})=0.8 \\
x_3; & w(q_{3,2}) =0.7 \\
x_4; & w(q_{4,2}) =0 \\
x_5; & w(q_{5,2}) =0.6 \\
\end{array} =\\ \begin{array}{|cc|}
x_1; & w(q_{1,1} \lor q_{1,{2}}) =0.9 \\
x_2; & w(q_{2,1} \lor q_{2,{2}})= 1 \\
x_3; & w(q_{3,1} \lor q_{3,{2}}) =0.7 \\
x_4; & w(q_{4,1} \lor q_{4,{2}}) =0 \\
x_5; & w(q_{5,1} \lor q_{5,{2}}) =0.76 \\
\end{array}\end{align*}
\\
The five weights of truth for $w(q_{i,1} \lor q_{i,2})$, for $i = 1, 2, 3, 4,5$, were computed as specified in section 4.
\begin{align*}
(C_1 \cap C_2) = \begin{array}{|cc|}
x_1; & w(q_{1,{1}}) =0 \\
x_2; & w(q_{2,{1}})=1 \\
x_3; & w(q_{3, {1}}) =0 \\
x_4; & w(q_{4, {1}}) =0 \\
x_5; & w(q_{5, {1}}) =0.4 \\
\end{array} \hspace{.1in} \cap \hspace{.1in} \begin{array}{|cc|}
x_1; & w(q_{1,2}) =0.9 \\
x_2; & w(q_{2,2})=0.8 \\
x_3; & w(q_{3,2}) =0.7 \\
x_4; & w(q_{4,2}) =0 \\
x_5; & w(q_{5,2}) =0.6 \\
\end{array} = \\ \begin{array}{|cc|}
x_1; & w(q_{1,1} \land q_{1,{2}}) =0 \\
x_2; & w(q_{2,1} \land q_{2,{2}})= 0.8 \\
x_3; & w(q_{3,1} \land q_{3,{2}}) =0 \\
x_4; & w(q_{4,1} \land q_{4,{2}}) =0 \\
x_5; & w(q_{5,1} \land q_{5,{2}}) =0.24 \\
\end{array}\end{align*}
The five weights of $w(q_{i,1} \land q_{i,2})$, for $i = 1, 2, 3, 4,5$, were computed as specified in section 4.
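The two computations above can be reproduced with element-wise CFL operators applied to the example weights (an illustrative sketch; the dictionary representation and function names are ours):

```python
def f_union(A, B):
    # Element-wise CFL disjunction: w_A + w_B - w_A * w_B.
    return {x: A[x] + B[x] - A[x] * B[x] for x in A}

def f_intersection(A, B):
    # Element-wise CFL conjunction: w_A * w_B.
    return {x: A[x] * B[x] for x in A}

# The weights of membership of the chess players (C1) and the
# wealthy people (C2) from the example above:
C1 = {'x1': 0.0, 'x2': 1.0, 'x3': 0.0, 'x4': 0.0, 'x5': 0.4}
C2 = {'x1': 0.9, 'x2': 0.8, 'x3': 0.7, 'x4': 0.0, 'x5': 0.6}

union = f_union(C1, C2)
inter = f_intersection(C1, C2)
assert abs(union['x5'] - 0.76) < 1e-12   # 0.4 + 0.6 - 0.24
assert abs(inter['x5'] - 0.24) < 1e-12   # 0.4 * 0.6
assert abs(union['x2'] - 1.0) < 1e-12
assert inter['x4'] == 0.0
```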
Consider any two subsets $C_j$ and $C_k$ of any fuzzy universal set $\mathbb U$, according to CFL. The set $C_l$ is characterized as:\\
$C_l=( (\xvec{C_j \cup C_k}) \naturaltolr (\xvec{C}_j \cap \xvec{C}_k) )$.
If $C_l$ is equal to $\mathbb U$ (that is, $C_l=\mathbb U$) in the sense that any element $x_i$ of $\mathbb U$ belongs to $C_l$ with a weight of membership equal to 1 (or in an equivalent way, such that the weight of truth $w(q_{i,l})$ is equal to 1, that is, $w(q_{i,l})=1$), then it is proven that one of De Morgan's laws is valid also for the CFL fuzzy sets.
The membership table of any element of $\mathbb U$ belonging to $C_l$ is shown in figure \ref{f15}.
\begin{figure}[H]
\centering
\hspace{1in}
\begin{tabular}{c|c|c|c}
$C_j$ & $C_k$ & $C_l = ( (\xvec{C_j \cup C_k}) \naturaltolr (\xvec{C}_j \cap \xvec{C}_k) ) $ & $w(q_{i,l}) = \displaystyle\sum_{i=1}^{4}S_i$\\
\midrule
0 & 0 & 1 & $S_1=w(\overline{q}_{i,j}) \cdot w(\overline{q}_{i,k})$ \\
0 & 1 & 1 & $S_2=w(\overline{q}_{i,j}) \cdot w(q_{i,k}) $ \\
1 & 0 & 1 & $S_3=w(q_{i,j}) \cdot w(\overline{q}_{i,k})$ \\
1 & 1 & 1 & $S_4=w(q_{i,j}) \cdot w(q_{i,k})$\\
\end{tabular}
\hspace{1in}
\caption{Table of membership of any element $x_i$ of $\mathbb U$ belonging to $C_l$. As shown above, it may be interpreted also as a particular truth table.}
\label{f15}
\end{figure}
As shown in figure \ref{f15},\\
$w(q_{i,l})= \displaystyle\sum_{i=1}^{4}S_i = w(\overline{q}_{i,j}) \cdot w(\overline{q}_{i,k}) + w(\overline{q}_{i,j}) \cdot w(q_{i,k}) + w(q_{i,j}) \cdot w(\overline{q}_{i,k}) + w(q_{i,j}) \cdot w(q_{i,k}) = w(\overline{q}_{i,j}) \cdot (w(\overline{q}_{i,k}) + w(q_{i,k})) + w({q}_{i,j}) \cdot (w(\overline{q}_{i,k}) + w(q_{i,k})) = w(\overline{q}_{i,j})\cdot 1 + w(q_{i,j}) \cdot 1 = w(\overline{q}_{i,j})+ w(q_{i,j}) = 1$.
Therefore, $C_l = \mathbb U$; De Morgan's law mentioned above is also valid for CFL fuzzy sets.
In a future article, consideration will be given to the following result: Any law of classical set theory is also valid in CFL fuzzy set theory.
If for each fuzzy set characterized within the framework of a specific $\mathbb U$, the numerical value of the weight of membership in that set is known or may be computed for each element of $\mathbb U$, which may be reinterpreted as the weight of truth of the proposition which affirms that membership, then a convention that simplifies the characterization of these sets can be accepted. In effect, to characterize any of these sets, the following dyad is provided for each of its elements: 1) the denomination of the element considered, and 2) the numerical value of its weight of membership in that set. This convention will be used below to emphasize an important one-to-one correspondence between each ``classical'' set and a particular fuzzy set, according to CFL. This correspondence is obtained in the following way: Each element of $\mathbb U$ belonging to the ``classical'' set considered will belong also to the corresponding fuzzy set with a weight of membership of 1, and each element of $\mathbb U$ not belonging to the ``classical'' set will belong to the corresponding fuzzy set with a weight of membership equal to 0.
In the following sequence of correspondences, in which the ``classical'' sets were represented to the left and the fuzzy sets to the right, the convention specified has been used.
\begin{equation*}
\mathbb U = \{1,2,3,4,5,6,7\} \quad \text{corresponds to} \quad \mathbb U = \begin{array}{|cc|}
1; &1 \\
2; & 1 \\
3; & 1 \\
4; & 1 \\
5; & 1 \\
6; & 1 \\
7; & 1 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
\xvec{\mathbb U} =\varnothing = \{\quad\} \quad \text{corresponds to} \quad \xvec{\mathbb U} =\varnothing = \begin{array}{|cc|}
1; & 0 \\
2; & 0 \\
3; & 0\\
4; & 0 \\
5; & 0 \\
6; & 0 \\
7; & 0 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
C_1 = \{2,3,5,7\} \quad \text{corresponds to} \quad C_1 = \begin{array}{|cc|}
1; & 0 \\
2; & 1 \\
3; & 1 \\
4; & 0 \\
5; & 1 \\
6; & 0 \\
7; & 1 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
\xvec{C}_1 = \{1,4,6\} \quad \text{corresponds to} \quad \xvec{C}_1 = \begin{array}{|cc|}
1; &1 \\
2; & 0 \\
3; & 0 \\
4; & 1 \\
5; & 0 \\
6; & 1 \\
7; & 0 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
C_2 = \{1,3,4,7\} \quad \text{corresponds to} \quad C_2 = \begin{array}{|cc|}
1; & 1 \\
2; & 0 \\
3; & 1 \\
4; & 1 \\
5; & 0 \\
6; & 0 \\
7; & 1 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
\xvec{C}_2 = \{2,5,6\} \quad \text{corresponds to} \quad \xvec{C}_2 = \begin{array}{|cc|}
1; &0 \\
2; & 1 \\
3; & 0 \\
4; & 0 \\
5; & 1 \\
6; & 1 \\
7; & 0 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
(C_1 \cup C_2) = \{1,2,3,4,5,7\} \quad \text{corresponds to} \quad (C_1 \cup C_2) = \begin{array}{|cc|}
1; & 1 \\
2; & 1 \\
3; & 1 \\
4; & 1 \\
5; & 1 \\
6; & 0 \\
7; & 1 \\
\end{array}
\end{equation*}
\\
\begin{equation*}
(C_1 \cap C_2) = \{3,7\} \quad \text{corresponds to} \quad (C_1 \cap C_2) = \begin{array}{|cc|}
1; & 0 \\
2; & 0 \\
3; & 1 \\
4; & 0 \\
5; & 0 \\
6; & 0 \\
7; & 1 \\
\end{array}
\end{equation*}\\
\section{Discussion and Perspectives}
Some basic results have been presented regarding a variant of fuzzy logic: canonical fuzzy logic (CFL).
Two of the main objectives of this article, and of others to be published in the future, are:
1. to show the naturally existing continuity for each logic discussed (that of BL and diverse non-classical logics) between calculi of different levels, i.e., between propositional calculus and predicate calculus, which can be presented using set theory terminology; and
2. to specify how BL can be considered a particular ``limit case'' of the other logics to be considered. Thus, for example, BL propositional calculus can be regarded as a particular limit case of CFL propositional logic if in the latter the weight of truth of any proposition considered is restricted to only 2 possible numerical values, 0 or 1. Likewise, as will be discussed in future articles, classical set theory is isomorphic to a particular case of CFL set theory, that in which the weight of truth $w(x \in C)$ can have only one of two numerical values: 0 or 1. That case refers to the weight of truth of the proposition which indicates the membership of any element $x$ of the universal set $\mathbb U$ considered, belonging to any set $C$ characterized within the framework of that $\mathbb U$.
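The ``limit case'' claim for the propositional calculus can be verified exhaustively: restricted to the weights 0 and 1, the CFL connectives reproduce the Boolean truth tables of BL (an illustrative sketch; the function names are ours):

```python
def w_not(a):    return 1 - a
def w_or(a, b):  return a + b - a * b
def w_and(a, b): return a * b
def w_imp(a, b): return 1 - a + a * b

# Exhaustive check over the two admissible weights 0 and 1:
for a in (0, 1):
    assert w_not(a) == (not a)                 # BL negation
    for b in (0, 1):
        assert w_or(a, b) == (a or b)          # BL disjunction
        assert w_and(a, b) == (a and b)        # BL conjunction
        assert w_imp(a, b) == ((not a) or b)   # BL implication
```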
As seen above, the treatment given in this article to CFL set theory is still quite incomplete. For example, important operators which would extend its scope and provide versatility to its applications remain to be introduced. In addition, it is necessary to specify how CFL operates with fuzzy sets characterized within the frameworks corresponding to infinite universal fuzzy sets.\\
\section{Introduction} \label{section:Introduction}
Hubel and Wiesel \cite{Hube59a} discovered that certain visual cells in cats' striate cortex have a directional preference.
It has turned out that there exists an intriguing and extremely precise spatial and directional organization into so-called cortical hypercolumns, see Figure~\ref{fig:VisualCortex}.
A hypercolumn can be interpreted as a ``visual pixel'', representing the optical world at a single location, neatly decomposed into a complete set of orientations. Moreover, correlated horizontal connections run parallel to the cortical surface and link columns across the spatial visual field with a shared orientation preference, allowing cells to combine visual information from spatially separated receptive fields.
Synaptic physiological studies of these horizontal pathways in cats' striate cortex show that neurons with aligned receptive field sites excite each other \cite{Bosking}. Apparently, the visual system not only constructs a score of local orientations, but also accounts for context and alignment by excitation and inhibition \emph{a priori}, which can be modeled by left-invariant PDE's and ODE's on $SE(2)$ \cite{Petitot,Citti,DuitsPhDThesis,Duits2007IJCV,Boscain3,August,BenYosef2012a,Chirikjian2,MashtakovNMTMA,Gonzalo,SartiCitteCompiledBook2014,Zweck,DuitsAMS1,DuitsAMS2,DuitsAlmsick2008,FrankenPhDThesis,BarbieriArxiv2013,Mumford}.
\begin{figure}[!b]
\centering
\includegraphics[width= 0.6\hsize]{v1_simple.pdf}
\caption{The orientation columns in the primary visual cortex.}
\label{fig:VisualCortex}
\end{figure}
Motivated by the orientation-selective cells, so-called orientation scores are constructed
by lifting all elongated structures (in 2D images) along an extra orientation dimension \cite{Kalitzin97,DuitsPhDThesis,Duits2007IJCV}. The main advantage of using the orientation score is that we can disentangle the elongated structures involved in a crossing allowing for a crossing preserving flow.
Invertibility of the transform between image and score is of vital importance, to both tracking \cite{BekkersJMIV} and enhancement \cite{Sharma2014,Franken2009IJCV}, as we do not want to tamper with data-evidence in our continuous coherent state transform \cite{Alibook,Zweck} before actual processing takes place. This is a key advantage over related state-of-the-art methods \cite{AugustPAMI,MashtakovNMTMA,Zweck,Boscain3,Citti}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.7\textwidth]{OSStack.pdf}
\caption{Real part of an orientation score of an example image.}
\label{fig:OSIntro}
\end{figure}
Invertible orientation scores (see Figure~\ref{fig:OSIntro}) are obtained via a unitary transform between the space of disk-limited images $\mathbb{L}_{2}^{\varrho}(\mathbb{R}^{2}):=\{f \in \mathbb{L}_{2}(\mathbb{R}^{2}) \; |\; \textrm{support}\{\mathcall{F}_{\mathbb{R}^{2}}f\} \subset B_{\textbf{0},\varrho}\}$ (with $\varrho>0$ close to the Nyquist frequency and $B_{\textbf{0},\varrho}=\{\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}\;|\; \|\mbox{\boldmath$\omega$}\|\leq \varrho\}$), and the space of orientation scores. The space of orientation scores is a specific reproducing kernel vector subspace \cite{DuitsPhDThesis,Aronszajn1950,Alibook} of $\mathbb{L}_{2}(\mathbb{R}^{2}\times S^{1})$, see Appendix~\ref{app:new} for the connection with continuous wavelet theory. The transform from an image $f$ to an orientation score $U_f:=\mathcall{W}_\psi f$ is constructed via an anisotropic convolution kernel $\psi \in \mathbb{L}_{2}(\mathbb{R}^{2}) \!\cap\! \mathbb{L}_{1}(\mathbb{R}^{2})$:
\begin{equation} \label{OrientationScoreConstruction}
U_f(\textbf{x},\theta)=(\mathcall{W}_\psi [f])(\textbf{x},\theta)=\int_{\mathbb{R}^2}\overline{\psi(\textbf{R}_{\theta}^{-1}(\textbf{y}{-\textbf{x}}))}f(\textbf{y})d\textbf{y},
\end{equation}
where $\mathcall{W}_\psi$ denotes the transform and $\small \textbf{R}_\theta=
\left( \begin{array}{ccc}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta \\
\end{array} \right).$
Exact reconstruction is obtained by
\begin{equation}\label{OrientationScoreReconstruction}
\begin{aligned}
f(\textbf{x})
=(\mathcall{W}_\psi^*[U_f])(\textbf{x})
=\left(\mathcall{F}_{\mathbb{R}^2}^{-1}\left[M_\psi^{-1}\mathcall{F}_{\mathbb{R}^2}\left[\frac{1}{2\pi}\int_0^{2\pi}(\psi_\theta*U_f(\cdot,\theta))d\theta\right]\right]\right)(\textbf{x}),
\end{aligned}
\end{equation}
for all $\textbf{x} \in \mathbb{R}^2$, where $\mathcall{F}_{\mathbb{R}^2}$ is the unitary Fourier transform on $\mathbb{L}_2(\mathbb{R}^2)$ and $M_\psi \in C(\mathbb{R}^2, \mathbb{R})$ is given by $M_\psi(\pmb{\omega})=\int_0^{2\pi}|\hat{\psi}(\textbf{R}_\theta^{-1}\pmb{\omega})|^2 d\theta$ for all $\pmb{\omega} \in \mathbb{R}^{2}$, with $\hat{\psi}:=\mathcall{F}_{\mathbb{R}^2}\psi$, $\psi_{\theta}(\textbf{x})=\psi(R_{\theta}^{-1}\textbf{x})$. Furthermore, $\mathcall{W}_\psi^*$ denotes the adjoint of wavelet transform $\mathcall{W}_\psi:\mathbb{L}_2(\mathbb{R}^2)\rightarrow \mathbb{C}_{K}^{SE(2)}$, where the reproducing kernel norm on the space of orientation scores, $\mathbb{C}_{K}^{SE(2)}=\{\mathcall{W}_{\psi}f \; |\; f \in \mathbb{L}_{2}(\mathbb{R}^{2})\}$, is explicitly characterized in \cite[Thm.4, Eq.~11]{Duits2007IJCV}. Well-posedness of the reconstruction is controlled by $M_\psi$\cite{Duits2007IJCV,BekkersJMIV}. For details see Appendix~\ref{app:new}. Regarding the choice of $\psi$ in our algorithms, we rely on the wavelets proposed in \cite[ch:4.6.1]{DuitsPhDThesis},\cite{BekkersJMIV}.
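For illustration, Eq.~(\ref{OrientationScoreConstruction}) amounts to an FFT-based cross-correlation of the image with rotated copies of the kernel, one per sampled orientation. The following sketch (assuming \texttt{numpy} and \texttt{scipy}) uses a toy anisotropic Gaussian instead of a proper cake wavelet from \cite{DuitsPhDThesis,BekkersJMIV}, so it only illustrates the lifting, not the invertibility controlled by $M_\psi$:

```python
import numpy as np
from scipy.ndimage import rotate

def toy_wavelet(n=64, s_long=8.0, s_short=2.0):
    # Toy horizontally elongated Gaussian (NOT a proper cake wavelet).
    i, j = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
    return np.exp(-i**2 / (2 * s_short**2) - j**2 / (2 * s_long**2))

def orientation_score(f, psi, n_theta=12):
    # Eq. (1): cross-correlate f with rotated copies of psi (periodic grid).
    F = np.fft.fft2(f)
    U = np.empty(f.shape + (n_theta,))
    for k in range(n_theta):
        psi_k = rotate(psi, np.rad2deg(k * np.pi / n_theta),
                       reshape=False, order=1)
        Pk = np.fft.fft2(np.fft.ifftshift(psi_k))  # center kernel at the origin
        U[..., k] = np.fft.ifft2(F * np.conj(Pk)).real
    return U

f = np.zeros((64, 64)); f[32, :] = 1.0     # a horizontal line
U = orientation_score(f, toy_wavelet())    # strongest response in channel k = 0
```

Since the toy kernel is real and symmetric, orientations are sampled on $[0,\pi)$ here; the proper wavelets and the exact reconstruction of Eq.~(\ref{OrientationScoreReconstruction}) are beyond this sketch.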
In this article, the invertible orientation scores serve as the initial condition of left-invariant (non-)linear PDE evolutions on the rotation-translation group $\mathbb{R}^2 \rtimes SO(2) \equiv SE(2)$, where by definition, \\$\mathbb{R}^d \rtimes S^{d-1}:=\mathbb{R}^d \rtimes SO(d)/(\{0\} \times SO(d-1))$. Now in our case $d=2$, so $\mathbb{R}^2 \rtimes S^1=\mathbb{R}^2 \rtimes SO(2)$ and we identify rotations with orientations. The primary focus of this article, however, is on the numerics and comparison to the exact solutions of linear left-invariant PDE's on $SE(2)$. Here by left-invariance and linearity we can restrict ourselves in our numerical analysis to the impulse response, where the initial condition is equal to $\delta_e=\delta_0^x \otimes \delta_0^y \otimes \delta_0^\theta$, where $\otimes$ denotes the tensor product in distributional sense.
In fact, we consider all linear, second order, left-invariant evolution equations and their resolvents on $\mathbb{L}_{2}(\mathbb{R}^{2} \rtimes S^{1}) \equiv \mathbb{L}_2(SE(2))$, which actually correspond to the forward Kolmogorov equations of left-invariant stochastic processes. Specifically, there are two types of stochastic processes we will investigate in the field of imaging and vision:
\begin{compactitem}
\item The contour enhancement process as proposed by Citti et al.\cite{Citti} in the cortical modeling.
\item The contour completion process as proposed by Mumford \cite{Mumford} also called the direction process.
\end{compactitem}
In image analysis, the difference between the two processes is that contour enhancement focuses on de-noising elongated structures, while contour completion aims to bridge gaps in interrupted contours, since it contains a convection part.
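The completion process can be made concrete via its underlying stochastic differential equation (Mumford's direction process): a walker moves along its current orientation while the orientation itself undergoes Brownian motion. A minimal Monte-Carlo sketch (assuming \texttt{numpy}; the parameters are illustrative, not those used in our experiments):

```python
import numpy as np

def direction_process(n_walkers=5000, n_steps=400, dt=0.01, D33=0.08, seed=0):
    # Euler scheme for  dx = cos(theta) dt,  dy = sin(theta) dt,
    # dtheta = sqrt(2 D33) dW  (Mumford's direction process).
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers); y = np.zeros(n_walkers); th = np.zeros(n_walkers)
    for _ in range(n_steps):
        x += np.cos(th) * dt
        y += np.sin(th) * dt
        th += np.sqrt(2.0 * D33 * dt) * rng.standard_normal(n_walkers)
    return x, y, th

x, y, th = direction_process()
# Histogramming (x, y, th) over many walkers approximates the kernel K_t;
# averaging over exponentially distributed traveling times yields the resolvent.
```

This type of sampling underlies the stochastic (Monte-Carlo) approach compared in Section 4.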
Although not being considered in this article, we mention related 3D $(SE(3))$ extensions of these processes and applications (primarily in DW-MRI) in \cite{Creusen2013,MomayyezSiakhal2013,ReisertSE3-2012}. Most of our numerical findings in this article apply to these $SE(3)$ extensions as well.
Many numerical approaches for implementing left-invariant PDE's on $SE(2)$ have been investigated intensively in the fields of cortical modeling and image analysis. Petitot introduced a geometrical model for the visual cortex V1 \cite{Petitot}, further refined to the $SE(2)$ setting by Citti and Sarti \cite{Citti}. A method for completing the boundaries of partially occluded objects based on stochastic completion fields was proposed by Zweck and Williams\cite{Zweck}. Also, Barbieri et al.\cite{BarbieriArxiv2013} proposed a left-invariant cortical contour perception and motion integration model within a 5D contact manifold. In the recent work of Boscain et al.\cite{Boscain3}, a numerical algorithm for integration of a hypoelliptic diffusion equation on the group of translations and discrete rotations $SE(2,N)$ is investigated. Moreover, some numerical schemes were also proposed by August et al. \cite{August,AugustPAMI} to understand the direction process for curvilinear structure in images. Duits, van Almsick and Franken\cite{DuitsAMS1,DuitsAMS2,DuitsAlmsick2008,FrankenPhDThesis,MarkusThesis,DuitsPhDThesis} also investigated different models based on Lie groups theory, with many applications to medical imaging.
The numerical schemes for left-invariant PDE's on $SE(2)$ can be categorized into 3 approaches:
\begin{compactitem}
\item Finite difference approaches.
\item Fourier based approaches, including $SE(2)$-Fourier methods.
\item Stochastic approaches.
\end{compactitem}
Recently, several explicit representations of exact solutions were derived \cite{DuitsCASA2005,DuitsCASA2007,DuitsAMS1,MarkusThesis,DuitsAlmsick2008,Boscain1}. In this paper we will set up a structured framework to compare all the numerical approaches to the exact solutions. \\
\textbf{Contributions of the article:}
In this article, we:
\begin{compactitem}
\item compare all numerical approaches (finite difference methods, a stochastic method based on Monte Carlo simulation and Fourier based methods) to the exact solution for contour enhancement/completion. We show that the Fourier based approaches perform best and we also explain this theoretically in Theorem \ref{th:RelationofFourierBasedWithExactSolution};
\item provide a concise overview of all exact approaches;
\item implement exact solutions, including improvements of Mathieu-function evaluations in $\textit{Mathematica}$;
\item establish explicit connections between exact and numerical approaches for contour enhancement;
\item analyze the poles/singularities of the resolvent kernels;
\item propose a new probabilistic time integration to overcome the poles, and we prove this via new simple asymptotic formulas
for the corresponding kernels that we present in this article;
\item show benefits of the newly proposed time integration in contour completion via stochastic completion fields \cite{Zweck};
\item analyze errors when using the $\textbf{DFT}$ (Discrete Fourier Transform) instead of the $\textbf{CFT}$ (Continuous Fourier Transform) to transform exact formulas in the spatial Fourier domain to the $SE(2)$ domain;
\item apply left-invariant evolutions as preprocessing before tracking the retinal vasculature via the ETOS-algorithm \cite{BekkersJMIV} in optical imaging of the eye.
\end{compactitem}
\vspace{1.5ex}
\textbf{Structure of the article:} In Section 2 we will briefly describe the theory of the $SE(2)$ group and left-invariant vector fields. Subsequently, in Section 3 we will discuss the linear time dependent (convection-)diffusion processes on $SE(2)$ and the corresponding resolvent equation for contour enhancement and contour completion. In Subsection~\ref{IterationResolventOperators} we provide improved kernels via iteration of resolvent operators and give a probabilistic interpretation.
Then we show the benefit in stochastic completion fields.
For completeness, the fundamental solution and underlying probability theory for contour enhancement/completion is explained in Subsection~\ref{section:FundamentalSolutions}.
In Section 4 we will give the full implementations for all our numerical schemes for contour enhancement/completion, i.e. explicit and implicit finite difference schemes, numerical Fourier based techniques, and the Monte-Carlo simulation of the stochastic approaches. Then, in Section 5, we will provide a new concise overview of all three exact approaches in the general left-invariant PDE-setting. Direct relations between the exact solution representations and the numerical approaches are also given in this section. After that, we will provide experiments with different parameter settings and show the performance of all different numerical approaches compared to the exact solutions. Finally, we conclude our paper with applications on retinal images, showing the power of our multi-orientation left-invariant diffusion for complex vessel enhancement, i.e. in the presence of crossings and bifurcations.
\section{The $SE(2)$ Group and Left-invariant Vector Fields}
\label{section:The $SE(2)$ Group and Left-invariant Vector Fields}
\subsection{The Euclidean Motion Group $SE(2)$ and Representations}\label{section:The Euclidean Motion Group $SE(2)$ and Group Representations}
An orientation score $U:SE(2) \to \mathbb{C}$ is defined on the Euclidean motion group $SE(2)=\mathbb{R}^2 \rtimes S^1$. The group product on $SE(2)$ is given by
\begin{equation}
gg'=(\textbf{x},\theta)(\textbf{x}',\theta')=(\textbf{x}+\textbf{R}_\theta \cdot \textbf{x}',\theta+\theta'), \quad \textit{for all} \quad g,g' \in SE(2).
\end{equation}
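As a quick sanity check, the group axioms for this product (associativity, and the inverse $g^{-1}=(-\textbf{R}_\theta^{-1}\textbf{x},-\theta)$) can be verified numerically; a small sketch assuming \texttt{numpy}:

```python
import numpy as np

def R(th):  # rotation matrix R_theta
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

def mul(g, h):
    # SE(2) product: (x, th)(x', th') = (x + R_th x', th + th')
    (x, th), (xp, thp) = g, h
    return (x + R(th) @ xp, th + thp)

def inv(g):
    x, th = g
    return (-R(th).T @ x, -th)   # R_th^{-1} = R_th^T

g = (np.array([1.0, 2.0]), 0.7)
h = (np.array([-0.5, 3.0]), -1.2)
p = (np.array([2.0, 0.1]), 2.5)
lhs, rhs = mul(mul(g, h), p), mul(g, mul(h, p))   # associativity
e = mul(g, inv(g))                                # should be the unity (0, 0)
```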
The translation and rotation operators on an image $f$ are given by $(\mathcall{T}_\textbf{x}f)(\textbf{y})=f(\textbf{y}-\textbf{x})$ and $(\mathcall{R}_\theta f)(\textbf{x})=f((\textbf{R}_\theta)^{-1}\textbf{x})$. Combining these operators yields the unitary $SE(2)$ group representation $\mathcall{U}_g=\mathcall{T}_\textbf{x} \circ \mathcall{R}_\theta$. Note that $g h \mapsto \mathcall{U}_{gh}=\mathcall{U}_{g} \mathcall{U}_{h}$ and $\mathcall{U}_{g^{-1}}=\mathcall{U}_{g}^{-1}=\mathcall{U}_{g}^{*}$.
We have
\begin{equation}
\forall g \in SE(2):(\mathcall{W}_\psi \circ \mathcall{U}_g)= (\mathcall{L}_g \circ \mathcall{W}_\psi)
\end{equation}
with group representation $g \mapsto \mathcall{L}_{g}$ given by $\mathcall{L}_{g}U(h)=U(g^{-1}h)$, and consequently, the effective operator $\Upsilon:=\mathcall{W}_\psi^* \circ \Phi \circ \mathcall{W}_\psi$ on the image domain (see Figure~\ref{fig:ImageProcessingViaOS}) commutes with rotations and translations if the operator $\Phi$ on the orientation score satisfies
\begin{align}\label{rel}
\Phi \circ \mathcall{L}_g=\mathcall{L}_g \circ \Phi, \quad \textit{for all}\quad g \in SE(2).
\end{align}
Moreover, if $\Phi$ maps the space of orientation scores onto itself, sufficient condition (\ref{rel}) is also necessary for rotation and translation covariant image processing (i.e. $\Upsilon$ commutes with $\mathcall{U}_{g}$ for all $g \in SE(2)$).
For details and proof see \cite[Thm.21, p.153]{DuitsPhDThesis}.
However, operator $\Phi$ should not be right-invariant, i.e. $\Phi$ should not commute with the right-regular representation $g \mapsto \mathcall{R}_{g}$ given by $\mathcall{R}_{g}U(h)=U(hg)$, as $\mathcall{R}_{g}\mathcall{W}_{\psi}=\mathcall{W}_{\mathcall{U}_{g}\psi}$ and operator $\Upsilon$ should rather take advantage from the anisotropy of the wavelet $\psi$.
We conclude that by our construction of orientation scores \emph{only left-invariant operators are of interest}.
Next we will discuss the left-invariant derivatives (vector-fields) on smooth functions on $SE(2)$, which we will employ in the PDE of interest presented in Section~\ref{section:The PDE's of Interest}. For an intuition of left-invariant processing on orientation scores (via left-invariant vector fields) see
Figure~\ref{fig:ImageProcessingViaOS}.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{pipelineOS.pdf}
\caption{Image processing via invertible orientation scores. Operators $\Phi$ on the invertible orientation score robustly relate to operators $\Upsilon$ on the image domain. Euclidean-invariance of $\Upsilon$ is obtained by left-invariance of $\Phi$. Thus, we consider left-invariant (convection)-diffusion operators $\Phi=\Phi_t$ with evolution time $t$, which are generated by a quadratic form $Q=Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$ (
cf.~\!Eq.~\!(\ref{diffusionconvectiongenerator})) on the left-invariant vector fields $\{\mathcall{A}_i\}$, cf.~\!Eq.~\!(\ref{leftInvariantDerivatives}). We show the relevance of left-invariance of $\mathcall{A}_2$ acting on an image of a circle (as in Figure \ref{fig:OSIntro}) compared to action of the non-left-invariant derivative $\partial_y$ on the same image. }
\label{fig:ImageProcessingViaOS}
\end{figure}
\subsection{Left-invariant Tangent Vectors and Vector Fields}\label{section:Left-invariant Vector Fields}
The Euclidean motion group $SE(2)$ is a Lie group. Its tangent space at the unity element $T_e(SE(2))$ is the corresponding Lie algebra and it is spanned by the basis $\{\textbf{e}_x,\textbf{e}_y,\textbf{e}_\theta\}$. Next we derive the left-invariant derivatives associated to $\textbf{e}_x,\textbf{e}_y,\textbf{e}_\theta$, respectively.
A tangent vector $X_e \in T_e(SE(2))$ is tangent to a curve $\gamma$ at unity element $e=(0,0,0)$. Left-multiplication of the curve $\gamma$ with $g \in SE(2)$ associates to each $X_{e} \in T_{e}(SE(2))$ a corresponding tangent vector $X_{g}=(L_{g})_{*}X_{e} \in T_{g}(SE(2))$:
\begin{equation}
\begin{aligned}
\{\textbf{e}_\xi(g),\textbf{e}_\eta(g),\textbf{e}_\theta(g)\} &=\{(L_g)_{*} \textbf{e}_x,(L_g)_{*} \textbf{e}_y,(L_g)_{*} \textbf{e}_\theta\} \\ &=\{\cos\theta\textbf{e}_x\!+\!\sin\theta\textbf{e}_y,-\sin\theta\textbf{e}_x\!+\!\cos\theta\textbf{e}_y,\textbf{e}_\theta\},
\end{aligned}
\end{equation}
where $(L_g)_*$ denotes the pushforward of left-multiplication $L_gh = gh$, and where we introduce the local coordinates $\xi:= x \cos \theta + y \sin \theta$ and $\eta:= -x \sin \theta + y \cos \theta$.
As the vector fields can also be considered as differential operators on locally defined smooth functions \cite{aubin2001diffgeo}, we replace $\textbf{e}_i$ by $\partial_i$, $i=\xi,\eta,\theta$, yielding the general form of a left-invariant vector field
\begin{equation}
\begin{aligned}
&X_g(U)=(c^\xi(\cos\theta\partial_x+\sin\theta\partial_y)
+c^\eta(-\sin\theta\partial_x+\cos\theta\partial_y)+c^\theta\partial_\theta)U,
\textit{ for all } c^\xi, c^\eta, c^\theta \in \mathbb{R}.
\end{aligned}
\end{equation}
Throughout this article, we shall rely on the following notation for left-invariant vector fields
\begin{equation} \label{leftInvariantDerivatives}
\{\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3\}:=\{\partial_\xi,\partial_\eta,\partial_\theta\}=\{\cos\theta\partial_x+\sin\theta\partial_y,-\sin\theta\partial_x+\cos\theta\partial_y,\partial_\theta\},
\end{equation}
which is the frame of left-invariant derivatives acting on $SE(2)$, the domain of the orientation scores.
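This frame is non-commutative; its Lie brackets are $[\mathcall{A}_3,\mathcall{A}_1]=\mathcall{A}_2$, $[\mathcall{A}_3,\mathcall{A}_2]=-\mathcall{A}_1$ and $[\mathcall{A}_1,\mathcall{A}_2]=0$, which can be verified symbolically (a sketch assuming \texttt{sympy}):

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')
U = sp.Function('U')(x, y, th)

A1 = lambda F: sp.cos(th) * sp.diff(F, x) + sp.sin(th) * sp.diff(F, y)
A2 = lambda F: -sp.sin(th) * sp.diff(F, x) + sp.cos(th) * sp.diff(F, y)
A3 = lambda F: sp.diff(F, th)

# commutators of the left-invariant frame on SE(2):
c31 = sp.simplify(A3(A1(U)) - A1(A3(U)) - A2(U))   # [A3, A1] - A2 = 0
c32 = sp.simplify(A3(A2(U)) - A2(A3(U)) + A1(U))   # [A3, A2] + A1 = 0
c12 = sp.simplify(A1(A2(U)) - A2(A1(U)))           # [A1, A2]      = 0
```

The non-vanishing bracket $[\mathcall{A}_3,\mathcall{A}_1]=\mathcall{A}_2$ is precisely what allows the hypo-elliptic diffusions considered below to satisfy H\"{o}rmander's condition even when no diffusion takes place along $\mathcall{A}_2$.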
\section{The PDE's of Interest} \label{section:The PDE's of Interest}
\subsection{Diffusions and Convection-Diffusions on $SE(2)$ }\label{section:TimedDiffusion}
A diffusion process on $\mathbb{R}^n$ with a square integrable input image $f:\mathbb{R}^n \longmapsto \mathbb{R}$ is given by
\begin{align}
\left\{\begin{aligned}
&\partial_t u(\textbf{x},t)=\triangledown \cdot \textbf{D}\triangledown u(\textbf{x},t) \qquad \textbf{x}\in\mathbb{R}^n,t \geq 0, \\
&u(\textbf{x},0)=f(\textbf{x}).\\
\end{aligned} \right.
\end{align}
Here, the $\triangledown$ operator is defined based on the spatial coordinates with $\triangledown=(\partial_{x_1},...,\partial_{x_n})$, and the constant diffusion tensor $\textbf{D}$ is a positive definite matrix of size $n \times n$. Similarly, the left-invariant diffusion equation on $SE(2)$ is given by:
\begin{align} \label{ExactDiffusionConvectionEquation}
\left\{\begin{aligned}
\partial_t W(g,t)&=\left( \begin{array}{ccc}
\partial_\xi & \partial_\eta & \partial_\theta \end{array} \right)
\left( \begin{array}{ccc}
D_{\xi\xi} & D_{\xi\eta} & D_{\xi\theta} \\
D_{\eta\xi} & D_{\eta\eta} & D_{\eta\theta} \\
D_{\theta\xi} & D_{\theta\eta} & D_{\theta\theta}\\
\end{array} \right)
\left( \begin{array}{ccc}
\partial_\xi\\
\partial_\eta\\
\partial_\theta \end{array} \right)W(g,t),\\
W(g,t=0)&=U^{0}(g),\\
\end{aligned} \right.
\end{align}
where as a default the initial condition is usually chosen as the orientation score of image $f \in \mathbb{L}_{2}(\mathbb{R}^{2})$, $U^{0}=U_{f}=\mathcall{W}_\psi f$. From the general theory for left-invariant scale spaces \cite{DuitsSSVM2007}, the quadratic form of the convection-diffusion generator is given by
\begin{equation} \label{diffusionconvectiongenerator}
\begin{aligned}
&Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)=\sum_{i=1}^3\left(-a_i\mathcall{A}_i+\sum_{j=1}^3 D_{ij}\mathcall{A}_i \mathcall{A}_j \right),\\ &a_i,D_{ij} \in \mathbb{R}, \textbf{D}:=[D_{ij}] \geq 0, \textbf{D}^T=\textbf{D},
\end{aligned}
\end{equation}
where the first order part takes care of the convection process, moving along the exponential curves $t \longmapsto g \cdot \exp(t(\sum_{i=1}^3 a_i\mathcall{A}_i))$ over time with $g \in SE(2)$, and the second order part specifies the diffusion in the following left-invariant evolutions
\begin{align} \label{diffusionconvection}
\left\{ \begin{aligned}
&\partial_t W=Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)W,\\
&W(\cdot,t=0)=U^{0}(\cdot).\\
\end{aligned} \right.
\end{align}
In case of linear diffusion, the positive definite diffusion matrix $\textbf{D}$ is constant. Then we obtain the solution of the left-invariant diffusion equation via an $SE(2)$-convolution with the Green's function $K_t^{\textbf{D},\textbf{a}}: SE(2)\rightarrow \mathbb{R}^+$ and the
initial condition $U^{0}:SE(2) \to \mathbb{C}$:
\begin{equation} \label{SE(2)ConvolutionOnDiffusion}
\begin{aligned}
W(g,t) =(K_t^{\textbf{D},\textbf{a}} \ast_{SE(2)}U^{0})(g) &=\int \limits_{SE(2)}K_t^{\textbf{D},\textbf{a}}(h^{-1}g)U^{0}(h)\, {\rm d}h \\ &=\int \limits_{\mathbb{R}^2}\int \limits_{-\pi}^{\pi}K_t^{\textbf{D},\textbf{a}}(\textbf{R}_{\theta'}^{-1}(\textbf{x}-\textbf{x}'), \theta-\theta')U^{0}(\textbf{x}',\theta')\, {\rm d}\theta'{\rm d}\textbf{x}',
\end{aligned}
\end{equation}
for all $g=(\textbf{x},\theta)\in SE(2)$.
This can symbolically be written as $W(\cdot,t)=e^{tQ^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)}U^{0}(\cdot)$.
In this time dependent diffusion we have to set a fixed time $t>0$. In the subsequent sections we consider time integration while imposing a negatively exponential distribution $T \sim NE(\alpha)$, i.e. $P(T=t)=\alpha e^{-\alpha t}$. We choose this distribution since it is the only continuous memoryless distribution, and in order to ensure that the underlying stochastic process is Markovian, traveling time must be memoryless.
There are two specific cases of interest:
\begin{compactitem}
\item Contour enhancement, where $\textbf{a}=\textbf{0}$ and $\textbf{D}$ is symmetric positive semi-definite such that the H\"{o}rmander condition is satisfied. This includes both elliptic diffusion $\textbf{D}>0$ and hypo-elliptic diffusion in which case we have $\textbf{D} \geq 0$ in such a way that H\"{o}rmander's condition \cite{Hoermander} is still satisfied. In the linear case we shall be mainly concerned with the hypo-elliptic case $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$.
\item Contour completion, where $\textbf{a}=(1,0,0)$ and $\textbf{D}=\textrm{diag}\{0,0,D_{33}\}$ with $D_{33}>0$.
\end{compactitem}
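To fix ideas before the dedicated schemes of Section 4, the hypo-elliptic enhancement case $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$, $\textbf{a}=\textbf{0}$, can be sketched with a naive explicit Euler step on a periodic $(x,y,\theta)$-grid (assuming \texttt{numpy}; the step sizes are illustrative and not tuned for accuracy):

```python
import numpy as np

def enhancement_step(W, dx, dth, dt, D11, D33, thetas):
    # One explicit Euler step of  dW/dt = D11 A1(A1 W) + D33 A3(A3 W),
    # with A1 = cos(th) d/dx + sin(th) d/dy and A3 = d/dtheta
    # (central differences, periodic boundaries).
    d_x  = lambda F: (np.roll(F, -1, 0) - np.roll(F, 1, 0)) / (2 * dx)
    d_y  = lambda F: (np.roll(F, -1, 1) - np.roll(F, 1, 1)) / (2 * dx)
    d_th = lambda F: (np.roll(F, -1, 2) - np.roll(F, 1, 2)) / (2 * dth)
    c = np.cos(thetas)[None, None, :]
    s = np.sin(thetas)[None, None, :]
    A1 = lambda F: c * d_x(F) + s * d_y(F)
    return W + dt * (D11 * A1(A1(W)) + D33 * d_th(d_th(W)))

N, No = 32, 16
thetas = np.linspace(0, 2 * np.pi, No, endpoint=False)
W = np.zeros((N, N, No)); W[N // 2, N // 2, 0] = 1.0   # spike approximating delta_e
for _ in range(50):
    W = enhancement_step(W, dx=1.0, dth=2 * np.pi / No, dt=0.1,
                         D11=1.0, D33=0.3, thetas=thetas)
# total mass is conserved exactly by the periodic central differences
```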
Several new exact representations for the (resolvent) Green's functions in $SE(2)$ were derived by Duits et al. \cite{DuitsAMS1,DuitsAlmsick2008,DuitsCASA2005,DuitsCASA2007,MarkusThesis} in the spatial Fourier domain, as explicit formulas were still missing, see e.g.~\cite{Mumford}.
This includes the Fourier series representations, studied independently in \cite{Boscain3}, but also includes a series of rapidly decaying terms and explicit representations obtained by computing the latter series representations explicitly via the Floquet theorem, producing explicit formulas involving only 4 Mathieu functions. The works in \cite{DuitsAMS1,DuitsAlmsick2008} relied to a large extent on distribution theory to derive these explicit formulas. Here we deal with the general case with $\textbf{D}\geq 0$ and $\textbf{a} \in \mathbb{R}^{3}$ (as long as H\"{o}rmander's condition
\cite{Hoermander} is satisfied) and we stress the analogy between the contour completion and contour enhancement case in
the appropriate general setting (for the resolvent PDE, for the (convection)-diffusion PDE, and for fundamental solution PDE).
Instead of relying on distribution theory \cite{DuitsAlmsick2008,DuitsAMS1}, we obtain the general solutions more directly via Sturm-Liouville theory.
Furthermore, in Section \ref{section:Experimental results} we include, for the first time, numerical comparisons of various numerical approaches to the exact solutions. The outcome of which, is underpinned by a strong convergence theorem that we will present in Theorem~\ref{th:RelationofFourierBasedWithExactSolution}.
On top of this, in Appendix~\ref{app:A}, we shall present new asymptotic expansions around the origin that allow us to analyze the order of the singularity at the origin of the resolvent kernels. From these asymptotic expansions we deduce that the singularities in the resolvent kernels
(and fundamental solutions) are quite severe. In fact, the better the equations are numerically approximated, the weaker the completion and enhancement properties of the kernels.
To overcome this severe discrepancy between the mathematical PDE theory and the practical requirements, we propose time-integration via Gamma distributions (beyond the negative exponential distribution case).
Mathematically, as we will prove in Subsection~\ref{IterationResolventOperators}, this newly proposed time integration both reduces the singularities and maintains the formal PDE theory. In fact, using a Gamma distribution coincides with iterating the resolvents, with an iteration depth $k$ equal to the squared mean divided by the variance of the Gamma distribution.
We will also show practical experiments that demonstrate the advantage of using the Gamma-distributions: we can control and amplify the infilling property (``the spread of ink'') of the PDE's.
\subsection{The Resolvent Equation}\label{section:ResolventEquation}
Traveling time of a memoryless random walker in $SE(2)$ is negatively exponentially distributed, i.e.
\begin{align} \label{exponentialdistribution}
p(T=t)=\alpha e^{-\alpha t}, t\geq0,
\end{align}
with the expected life time $E(T)=\frac{1}{\alpha}$. Then the resolvent kernel is obtained by integrating Green's function $K_t^{\textbf{D},\textbf{a}}:SE(2)\rightarrow \mathbb{R}^+$ over the time distribution, i.e.
\[\label{ResolventKernel}
\begin{aligned}
R_{\alpha}^{\textbf{D},\textbf{a}}&=\alpha\int_0^\infty K_t^{\textbf{D},\textbf{a}}e^{-\alpha t}dt=\alpha\int_0^\infty e^{tQ}\delta_ee^{-\alpha t}dt=-\alpha(Q-\alpha I)^{-1}\delta_e,
\end{aligned}
\]
where we use short notation $Q=Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$.
Via this resolvent kernel, one gets the probability density $P_{\alpha}(g)$ of finding a random walker at location
$g \in SE(2)$ regardless of its traveling time, given that it has departed from distribution $U:SE(2) \to \mathbb{R}^{+}$:
\begin{equation} \label{Resolvent}
\begin{aligned}
P_\alpha(g)=(R_{\alpha}^{\textbf{D},\textbf{a}} \ast_{SE(2)}U)(g)=-\alpha(Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)-\alpha I)^{-1}U(g),
\end{aligned}
\end{equation}
which is the same as taking the Laplace transform of the left-invariant evolution equations~(\ref{diffusionconvection}) over time. The resolvent equation can be written as
\[
\begin{aligned}
P_\alpha(g)=\alpha\int_0^\infty e^{-\alpha t}(e^{tQ}U^{0})(g)dt=\alpha((\alpha I-Q)^{-1}U)(g).
\end{aligned}
\]
We do not want to go into the details of semigroup theory \cite{Yosida}; here $(e^{tQ}U^{0})$ is just short notation for the solution of Eq.~(\ref{diffusionconvection}).
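The identity $\alpha\int_0^\infty e^{-\alpha t}e^{tQ}\,{\rm d}t=\alpha(\alpha I-Q)^{-1}$ is easily checked numerically on a finite-dimensional surrogate of the generator (a sketch assuming \texttt{numpy} and \texttt{scipy}; the matrix $Q$ below is an arbitrary Markov generator, not a discretization of $Q^{\textbf{D},\textbf{a}}$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, alpha = 8, 0.5
M = rng.random((n, n)); np.fill_diagonal(M, 0.0)
Q = M - np.diag(M.sum(axis=1))          # rows sum to 0, so e^{tQ} is stochastic

ts = np.linspace(0.0, 60.0, 6001)       # truncated time axis (e^{-alpha t} ~ 1e-13)
dt = ts[1] - ts[0]
Fs = np.stack([np.exp(-alpha * t) * expm(t * Q) for t in ts])
lhs = alpha * dt * (0.5 * Fs[0] + Fs[1:-1].sum(axis=0) + 0.5 * Fs[-1])  # trapezoid
rhs = alpha * np.linalg.inv(alpha * np.eye(n) - Q)
# both sides are stochastic matrices: time-integrated transition probabilities
```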
Resolvents can be used in completion fields\cite{Zweck,DuitsAMS1,August}. Some resolvent kernels of the contour enhancement and completion process are given in Figure~\ref{fig:ResolventCompletionEnhancementKernels}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\hsize]{ResolventCompletionEnhancementKernels.pdf}
\caption{Left: the $xy$-marginal of the contour enhancement kernel $R_{\alpha}^{\textbf{D}}:=R_{\alpha}^{\textbf{D},\textbf{0}}$ with parameters $\alpha=\frac{1}{100}$, $\textbf{D}=\{1,0,0.08\}$, numbers of orientations $N_o = 48$ and spatial dimensions $N_s = 128$. Right: the $xy$-marginal of the contour completion kernel $R_{\alpha}^{\textbf{D},\textbf{a}}$ with parameters $\alpha=\frac{1}{100}$, $\textbf{a}=(1,0,0)$, $\textbf{D}=\{0,0,0.08\}$, $N_o = 72$ and $N_s = 192$.}
\label{fig:ResolventCompletionEnhancementKernels}
\end{figure}
\subsection{Improved Kernels via Iteration of Resolvent Operators \label{IterationResolventOperators}}
The kernels of the resolvent operators suffer from singularities at the origin. Especially for contour completion, this is cumbersome from the application point of view, since here the better one approximates Mumford's direction process and its inherent singularity in the Green's function, the less ``ink'' is spread in the areas with missing and interrupted contours. To overcome this problem we extend the temporal negatively exponential distribution in our line enhancement/completion models by a 1-parameter family of Gamma-distributions.
As a sum $T=T_{1} + \ldots + T_{k}$ of independent, negatively exponentially distributed time variables is Gamma distributed, $P(T=t)= \frac{\alpha^{k} t^{k-1}}{(k-1)!} e^{-\alpha t}$, the time integrated process is now obtained by a $k$-fold resolvent operator. While keeping the expectation of the Gamma distribution fixed by $E(T)=k/\alpha$, increasing $k$ will induce more mass transport away from $t=0$ towards the void areas of interrupted contours. For $k\geq 3$
the corresponding Green's function of the $k$-step approach no longer suffers from a singularity at the origin. This procedure is summarized in the following theorem and applied in Figure~\ref{fig:Gamma}.
\begin{theorem}\label{th:prob}
A concatenation of $k$ subsequent, independent time-integrated memoryless stochastic processes for contour enhancement(/completion), each with expected traveling time $\alpha^{-1}$,
corresponds to a time-integrated contour enhancement(/completion) process with a Gamma distributed traveling time $T=T_{1}+ \ldots +T_{k}$ with
\begin{equation}\label{GammaDistributionIntegration}
\begin{array}{l}
P(T_{i}=t)=\alpha e^{-\alpha t}, \textrm{ for } i=1,\ldots,k, \\
P(T=t)=\Gamma(t; k,\alpha):=\frac{\alpha^{k} t^{k-1}}{\Gamma(k)} e^{-\alpha t}.
\end{array}
\end{equation}
The probability density kernel of this stochastic process is given by
\begin{equation}\label{ProbabilityDensityKernel}
R_{\alpha,k}^{\textbf{D},\textbf{a}}=R_{\alpha}^{\textbf{D},\textbf{a}} *^{(k-1)}_{SE(2)}R_{\alpha}^{\textbf{D},\textbf{a}}= \alpha^{k} (\alpha I-Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))^{-k} \delta_{e}.
\end{equation}
For the linear, hypo-elliptic, contour enhancement case (i.e. $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$ and $\textbf{a}=\textbf{0}$) the kernels admit the following asymptotic formula for $|g| \ll 1$:
\begin{equation}\label{enhass}
\begin{array}{ll}
R_{\alpha,k}(g) &= \int \limits_{0}^{\infty} \frac{\alpha^{k} t^{k-1}e^{-\alpha t}}{(k-1)!}
\frac{e^{-C^2\frac{|g|^2}{4t}}}{4\pi D_{11}D_{33}t^2} {\rm d}t=
\frac{\alpha^k}{(k-1)! 4\pi D_{11}D_{33}} \int \limits_{0}^{\infty}
t^{k-3}e^{-C^2\frac{|g|^2}{4t}-\alpha t}\,{\rm d}t \\
&= \frac{2^{1-k}}{\pi D_{11}D_{33} (k-1)!}\,\alpha^{\frac{k+2}{2}}\,
(|g|C)^{k-2} \; \mathcall{K}(2-k,|g|C\sqrt{\alpha}),
\end{array}
\end{equation}
where $\mathcall{K}(n,z)$ denotes the modified Bessel function of the 2nd kind, and
with $C \in [2^{-1},\sqrt[4]{2}]$ and with
\begin{equation} \label{logmodulus}
|g|=\left|e^{c^{1}\mathcall{A}_{1}+c^{2}\mathcall{A}_{2}+c^{3}\mathcall{A}_{3}}\right|=
\sqrt[4]{\left(\frac{|c^{1}|^2}{D_{11}}+\frac{|c^{3}|^2}{D_{33}}\right)^2 +\frac{|c^{2}|^2}{D_{11}D_{33}}}
\end{equation}
with $c^{1}=\frac{\theta(y-\eta)}{2(1-\cos \theta)}$, $c^{2}=\frac{\theta(\xi-x)}{2(1-\cos \theta)}$, $c^{3}=\theta$ if $\theta \neq 0$ and $(c^{1},c^{2},c^{3})=(x,y,0)$ if $\theta=0$.
\end{theorem}
\textbf{Proof }
We consider a random traveling time $T=\sum_{i=1}^{k} T_{i}$ in a
$k$-step random process $G_{T}=\sum_{i=1}^{k}G_{T_i}$ on $SE(2)$,
with $G_{T_i}$ independent random walks whose Fokker-Planck equations are given by (\ref{diffusionconvection}), and with independent traveling times $T_{i} \sim NE(\alpha)$ (i.e. $P(T_{i}=t)=f(t):=\alpha e^{-\alpha t}$).
Then for $k \geq 2$ we have $T \sim f *_{\mathbb{R}^{+}}^{k-1} f=\Gamma(\cdot; k,\alpha)$, (with $f*_{\mathbb{R}^+}g(t)=\int_{0}^{t} f(\tau)g(t-\tau)\,{\rm d}\tau$),
which follows by consideration of the characteristic function and application of Laplace transform $\mathcall{L}$.
We have $\alpha^{k}(Q-\alpha I)^{-k}=(\alpha (Q-\alpha I)^{-1})^k$, and for $k=2$ we have the identity
\[
\begin{array}{l}
R_{\alpha,k=2}^{\textbf{D},\textbf{a}}(\textbf{x},\theta)
=\int \limits_{0}^{\infty} p(G_{T}=(\textbf{x},\theta) | T=t, G_{0}=e)\cdot p(T=t)\, {\rm d}t \\
=\int \limits_{0}^{\infty} p(G_{T}=(\textbf{x},\theta) \; |\; T=T_{1}+T_{2}=t, G_{0}=e)\cdot p(T_{1}+T_{2}=t) \, {\rm d}t \\
=\int \limits_{0}^{\infty} \int \limits_{0}^{t} p(G_{T_{1}+T_2}=(\textbf{x},\theta) \; |\; T_{1}=t-s, T_{2}=s, G_{0}=e)\cdot
p(T_{1}=t-s)\; p(T_{2}=s) \, {\rm d}s {\rm d}t \\
=\alpha^2 \, \mathcall{L}\left(t \mapsto \int \limits_{0}^{t} (K_{t-s}^{\textbf{D},\textbf{a}}*_{SE(2)}K_{s}^{\textbf{D},\textbf{a}} *_{SE(2)} \delta_{e})(\textbf{x},\theta) {\rm d}s\right)(\alpha)\\
= \alpha^2 \, \mathcall{L}\left(t \mapsto \int \limits_{0}^{t} (K_{t-s}^{\textbf{D},\textbf{a}}*_{SE(2)} K_{s}^{\textbf{D},\textbf{a}} )(\textbf{x},\theta) {\rm d}s\right)(\alpha) \\
= \alpha^2 \, \left(\mathcall{L}\left(t \mapsto K_{t}^{\textbf{D},\textbf{a}}(\cdot)\right)(\alpha) *_{SE(2)}\mathcall{L}\left(t \mapsto K_{t}^{\textbf{D},\textbf{a}}(\cdot)\right)(\alpha)\right)(\textbf{x},\theta)
= (R_{\alpha,k=1}^{\textbf{D},\textbf{a}}*_{SE(2)}R_{\alpha,k=1}^{\textbf{D},\textbf{a}})(\textbf{x},\theta).
\end{array}
\]
Thereby the main result Eq.~\!(\ref{ProbabilityDensityKernel}) follows by induction.
Result (\ref{enhass}) follows by direct computation and application of the theory of weighted
sub-coercive operators on Lie groups \cite{TerElst} to the $SE(2)$ case. We have filtration $\gothic{g}_0:=
\textrm{span}\{\mathcall{A}_{1},\mathcall{A}_{3}\}$,
and $\gothic{g}_{1}=[\gothic{g}_0,\gothic{g}_0]=\textrm{span}\{\mathcall{A}_{1},\mathcall{A}_{2},\mathcall{A}_{3}\}=\mathcall{L}(SE(2))$,
so $w_1=1$, $w_3=1$ and $w_{2}=2$ and computation of the logarithmic map on $SE(2)$,
$g=e^{\sum_{i=1}^{3}c^{i} A_{i}} \Leftrightarrow \sum_{i=1}^{3}c^{i} A_{i} = \log g$, yields a non-smooth logarithmic squared modulus
locally equivalent to smooth $|g|^2$ given by (\ref{logmodulus}), see \cite[ch:5.4,eq.5.28]{DuitsAMS1}.
$\hfill \Box$ \\
\\
We have the following asymptotic formula for $\mathcall{K}(n,z)$:
\[
\mathcall{K}(n,z)
\approx
\left\{
\begin{array}{ll}
- \log(z/2) -\gamma_{EUL} & \textrm{if }n=0, \\
\frac{1}{2}(|n|-1)! \left( \frac{z}{2}\right)^{-|n|} & \textrm{if }n\neq 0,
\end{array}
\right.
\qquad \textrm{ for }0 < z \ll 1,
\]
with Euler's constant $\gamma_{EUL}$,
and thereby Eq.~(\ref{enhass}) implies the following result:
\begin{corollary}\label{corr:X}
If $k=1$ then $R_{\alpha,k}^{\textbf{D}}(g)\equiv O(|g|^{-2})$. If $k=2$ then $R_{\alpha,k}^{\textbf{D}}(g)\equiv O(\log |g|^{-1})$.
If $k\geq 3$ then $R_{\alpha,k}^{\textbf{D}}(g)\equiv O(1)$ and the kernel has no singularity.
\end{corollary}
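The small-argument expansion of $\mathcall{K}(n,z)$ stated above can be checked numerically against \texttt{scipy.special.kv}; the following short Python sketch (ours, not part of the original analysis) verifies both branches of the expansion:

```python
from math import factorial
import numpy as np
from scipy.special import kv

z = 1e-4
# n = 0:  K_0(z) ~ -log(z/2) - gamma_EUL   for 0 < z << 1
err0 = abs(kv(0, z) - (-np.log(z / 2) - np.euler_gamma)) / kv(0, z)
# n != 0:  K_n(z) ~ (1/2) (|n|-1)! (z/2)^(-|n|)
errs = [abs(kv(n, z) - 0.5 * factorial(n - 1) * (z / 2) ** (-n)) / kv(n, z)
        for n in (1, 2, 3)]
```

The relative errors `err0` and `errs` are tiny for small $z$, consistent with the orders of the singularities listed in the corollary.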
\begin{remark}
As this approach also naturally extends to positive (non-integer) fractional powers $k \in \mathbb{Q}$, $k\geq 0$, of the resolvent operator, we wrote $\Gamma(k)$ instead of $(k-1)!$ in
Eq.~\!(\ref{GammaDistributionIntegration}). The recursion depth $k$ equals $(E(T))^2/Var(T)$, since the variance of $T$ equals $Var(T)= k/\alpha^2$.
\end{remark}
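The relation in the remark between sums of exponentially distributed traveling times, the Gamma distribution, and the recursion depth $k=(E(T))^2/Var(T)$ can be illustrated with a short Monte-Carlo sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, k, n = 2.0, 3, 200_000

# T = T_1 + ... + T_k with independent T_i ~ NE(alpha)
# is Gamma(k, alpha) distributed, Eq. (GammaDistributionIntegration)
T = rng.exponential(scale=1.0 / alpha, size=(n, k)).sum(axis=1)

# recursion depth k equals (E(T))^2 / Var(T), since Var(T) = k / alpha^2
k_est = T.mean() ** 2 / T.var()
```

For the sampled traveling times, `T.mean()` is close to $k/\alpha$ and `k_est` recovers the recursion depth $k$.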
In Figure~\ref{fig:Gamma}, we show that increasing $k$ (while fixing $E(T)=k/\alpha$) allows for better propagation of ink towards the completion areas. The same concept applies to the contour enhancement process. Here, for better accuracy, we adapt the time integration in Eq.~\!(\ref{GammaDistributionIntegration}) (using the stochastic approach outlined in Section~\ref{section:MonteCarloStochasticImplementation}) rather than iterating the resolvents in Eq.~\!(\ref{ProbabilityDensityKernel}).
\begin{figure}
\centering
\includegraphics[width=0.85\hsize]{figGamma.pdf}
\caption{Illustration of Theorem~\ref{th:prob} and Corollary~\ref{corr:X}, via the stochastic implementation for the $k$-step contour completion process ($T=\sum_{i=1}^k T_{i}$) explained in Subsection~\ref{section:MonteCarloStochasticImplementation}. We have depicted the 2D marginals of 3D completion fields \cite{Zweck}, now generalized to
$\mathcall{C}(x,y,\theta):=((Q-(\alpha k) I)^{-k}\delta_{g_{0}})(x,y,\theta) \cdot ((Q^{*}-(\alpha k) I)^{-k}\delta_{g_{1}})(x,y,\theta)$, with $Q=-\mathcall{A}_{1}+ D_{33} \mathcall{A}_{3}^2$ and with
$g_0=(\textbf{x}_0, \frac{\pi}{6})$ and $g_{1}=(\textbf{x}_1, -\frac{\pi}{6})$, $\alpha=0.1$, $D_{33}=0.1$, via a rough resolution
(on a $200\times 200 \times 32$-grid) and a finer resolution (on a $400\times 400 \times 64$-grid).
Image intensities have been scaled to full range.
The resolvent process $k=1$ suffers from: ``the better the approximation, the less relative infilling in the completion'' (cf.~left column). The singularities at $g_0$
and $g_{1}$ vanish at $k=3$. A reasonable compromise is found at $k=2$ where infilling is stronger, and where the modes (i.e. curves $\gamma$ with $\mathcall{A}_{2}\mathcall{C}(\gamma)=\mathcall{A}_{3}C(\gamma)=0$, cf.~\cite[App.~A]{BekkersJMIV},\cite{DuitsAlmsick2008}) are easy to detect. \label{fig:Gamma}}
\end{figure}
\subsection{Fundamental Solutions\label{section:FundamentalSolutions}}
The fundamental solution $S^{\textbf{D},\textbf{a}}:SE(2) \to \mathbb{R}^{+}$ associated to generator
$Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$
solves
\begin{equation} \label{fundsolPDE}
Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) \; S^{\textbf{D},\textbf{a}} =-\delta_{e}\ ,
\end{equation}
and is given by
\begin{equation}\label{FundamentalSolution}
\begin{aligned}
S^{\textbf{D},\textbf{a}}(x, y, \theta) &= \int \limits_{0}^{\infty}K_{t}^{\textbf{D},\textbf{a}}(x,y,\theta)\, {\rm d}t =
\left(-(Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1}\delta_e \right)(x,y,\theta) \\
&=\lim_{\alpha \downarrow 0}\left(\frac{-\alpha(Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)-\alpha I)^{-1}}{\alpha}\delta_e \right)(x,y,\theta)
=\lim_{\alpha \downarrow 0}\frac{R_{\alpha}^{\textbf{D},\textbf{a}}(x, y, \theta)}{\alpha}.\\
\end{aligned}
\end{equation}
There exist many intriguing relations \cite{DuitsAMS2,Boscain2} between fundamental solutions of hypo-elliptic diffusions
and left-invariant metrics on $SE(2)$, which make these solutions interesting. Furthermore, fundamental solutions on the nilpotent approximation $(SE(2))_{0}$ take a relatively simple explicit form \cite{Gaveau,DuitsAMS1}.
However, by Eq.~(\ref{FundamentalSolution}) these fundamental solutions suffer from some practical drawbacks: they are not probability kernels, in fact they are not even $\mathbb{L}_1$-normalizable, and they suffer from poles in both spatial and Fourier domain. Nevertheless, they are interesting to study for the limiting case $\alpha \downarrow 0$ and they have been suggested in cortical modeling \cite{Barbieri2012,BarbieriArxiv2013}. \\
\\
\subsection{The Underlying Probability Theory}
In this section we provide an overview of the underlying probability theory
belonging to our PDEs of interest, given by Eqs.~(\ref{diffusionconvection}), (\ref{Resolvent}) and (\ref{fundsolPDE}).
We obtain the contour enhancement case by setting $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$ and $\textbf{a}=\textbf{0}$. Then, by application of Eq.~(\ref{leftInvariantDerivatives}), Eq.~(\ref{diffusionconvection}) becomes the forward Kolmogorov equation
\begin{equation} \label{StochasticEnhancementEvolution}
\left\{
\begin{aligned}
&\partial_t W(g,t)=(D_{11}\partial_\xi^2+D_{33}\partial_\theta^2)W(g,t),\\
&W(g,t=0)=U(g)\\
\end{aligned} \right.
\end{equation}
of the following stochastic process for contour enhancement:
\begin{equation} \label{StochasticEnhancementProcess}
\left\{\begin{aligned}
&\textbf{X}(t)=\textbf{X}(0)+\sqrt{2D_{11}}\varepsilon_\xi\int^t_0(\cos\Theta(\tau)\textbf{e}_x+\sin\Theta(\tau)\textbf{e}_y)\frac{1}{2\sqrt{\tau}}\,{\rm d}\tau\\
&\Theta(t)=\Theta(0)+\sqrt{t}\sqrt{2D_{33}}\varepsilon_\theta,\qquad\varepsilon_\xi,\varepsilon_\theta\thicksim\mathcall{N}(0,1)\\
\end{aligned} \right.
\end{equation}
For contour completion, we must set the diffusion matrix $\textbf{D}=\textrm{diag}\{0,0,D_{33}\}$ and convection vector $\textbf{a}=(1,0,0)$. In this case Eq.~(\ref{diffusionconvection}) takes the form
\begin{equation} \label{StochasticCompletionEvolution}
\left\{
\begin{aligned}
&\partial_t W(g,t)=(\partial_\xi+D_{33}\partial_\theta^2)W(g,t),\qquad g\in SE(2), t>0,\\
&W(g,t=0)=U(g).\\
\end{aligned} \right.
\end{equation}
This is the Kolmogorov equation of Mumford's direction process \cite{Mumford}
\begin{equation} \label{eq:MumfordDirectionProcess}
\left\{\begin{aligned}
&\textbf{X}(t)=X(t)\textbf{e}_x+Y(t)\textbf{e}_y=\textbf{X}(0)+\int^t_0 \cos\Theta(\tau)\textbf{e}_x+\sin\Theta(\tau)\textbf{e}_y\,{\rm d}\tau\\
&\Theta(t)=\Theta(0)+\sqrt{t}\sqrt{2D_{33}}\varepsilon_\theta,\qquad\varepsilon_\theta\thicksim\mathcall{N}(0,1)\\
\end{aligned} \right.
\end{equation}
\begin{remark}
As contour completion processes aim to reconstruct the missing parts of interrupted contours based on the contextual information of the data, a positive direction $\textbf{e}_{\xi}=\cos(\theta)\textbf{e}_x+\sin(\theta)\textbf{e}_y$ in the spatial plane is given to a random walker.
In contrast, in contour enhancement processes a bi-directional movement of a random walker along $\pm\textbf{e}_{\xi}$ is included for noise removal by anisotropic diffusion.
\end{remark}
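As an illustration, a sample path of Mumford's direction process, Eq.~(\ref{eq:MumfordDirectionProcess}), can be generated in a few lines of Python; the forward-Euler discretization and the function name below are our own:

```python
import numpy as np

def mumford_path(n_steps=1000, dt=0.005, D33=0.3, rng=None):
    """One sample path of Mumford's direction process: unit-speed motion
    along e_xi = (cos Theta, sin Theta) combined with Brownian motion on
    the orientation Theta (forward-Euler discretization, our sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.concatenate(
        ([0.0], np.cumsum(np.sqrt(2 * D33 * dt) * rng.standard_normal(n_steps))))
    x = dt * np.cumsum(np.cos(theta[:-1]))   # Euler approximation of the integral
    y = dt * np.cumsum(np.sin(theta[:-1]))
    return x, y, theta
```

Note that every spatial step has length exactly $\Delta t$: the walker always moves in the positive $\textbf{e}_\xi$ direction, in line with the remark above.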
The general stochastic process on $SE(2)$ underlying Eq.~(\ref{diffusionconvection}) is:
{\small
\begin{equation} \label{eq:form}
\left\{
\begin{array}{l}
G_{n+1}:=(\textbf{X}_{n+1},\Theta_{n+1})=G_n + \Delta t \sum \limits_{i \in I} a_{i} \left.\textbf{e}_{i}\right|_{G_n} +\sqrt{\Delta t}\sum \limits_{i \in I} \epsilon_{i, n+1}\,\sum \limits_{j \in I} \sigma_{ji}\,
\left. \textbf{e}_{j}\right|_{G_n}, \\
G_{0}=(\textbf{X}_{0},\Theta_{0}),
\end{array}
\right.
\end{equation}
}
with $I = \{1,2,3\} $ in the elliptic case and $I = \{1,3\}$ in the hypo-elliptic case, and where $n =0,\ldots, N-1$, with $N \in \mathbb{N}$ the number of steps with stepsize $\Delta t >0$, $\sigma=\sqrt{2D}$ is the unique symmetric positive definite matrix such that $\sigma^2=2D$, $\{\epsilon_{i, n+1}\}_{i \in I, n =0,\ldots, N-1 }$ are independent normally distributed \mbox{$\epsilon_{i, n+1} \sim \mathcall{N}(0,1)$} and {\small $\left. \textbf{e}_{1} \right|_{G_{n}}=(\cos \Theta_{n},\sin \Theta_{n},0)$, $\left. \textbf{e}_{2} \right|_{G_{n}}=(-\sin \Theta_{n},\cos \Theta_{n},0)$, and $\left. \textbf{e}_{3} \right|_{G_{n}}=(0,0,1)$}. In case $I = \{1,2,3\}$, Eq.~(\ref{eq:form}) boils down to:
{
\begin{equation}
\begin{array}{l}
\begin{array}{l}
\left(
\begin{array}{c}
X_{n+1} \\
Y_{n+1} \\
\Theta_{n+1}
\end{array}
\right)=
\left(
\begin{array}{c}
X_{n} \\
Y_{n} \\
\Theta_{n}
\end{array}
\right)+
\Delta t \,
{\rm R}_{\Theta_n}
\left(
\begin{array}{c}
a_{1} \\
a_{2} \\
a_{3}
\end{array}
\right)
+
\sqrt{\Delta t}\,
({\rm R}_{\Theta_n})^{T} \,
\sigma \,
\rm{ R}_{\Theta_n}
\left(
\begin{array}{c}
\epsilon_{1,n+1} \\
\epsilon_{2,n+1} \\
\epsilon_{3,n+1}
\end{array}
\right),\\
\textrm{ with }{\rm R}_{\theta}=
\left(
\begin{array}{ccc}
\cos \theta & -\sin \theta & 0 \\
\sin \theta & \cos \theta & 0 \\
0 & 0 & 1
\end{array}
\right).
\end{array}
\end{array}
\end{equation}
}
See Figure~\ref{figure:StochasticRandomWalkerCompletionEnhancementResult} for random walks of the Brownian motion and the direction process in $SE(2)$.
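A minimal Python implementation of the discrete random walk Eq.~(\ref{eq:form}), for the diagonal case $\sigma=\sqrt{2D}$, can be sketched as follows (the function name is ours):

```python
import numpy as np

def se2_walk(a, D, n_steps, dt, g0=(0.0, 0.0, 0.0), rng=None):
    """Discrete left-invariant random walk on SE(2), Eq. (eq:form), for
    diagonal D (so sigma = sqrt(2D) is diagonal): drift a = (a1, a2, a3)
    and noise are expressed in the moving frame {e_1, e_2, e_3} at G_n."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.asarray(D, dtype=float))
    g = np.array(g0, dtype=float)
    path = np.empty((n_steps + 1, 3)); path[0] = g
    for n in range(n_steps):
        c, s = np.cos(g[2]), np.sin(g[2])
        E = np.array([[c, -s, 0.0],      # columns: e_1|G_n, e_2|G_n, e_3|G_n
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        eps = rng.standard_normal(3)
        g = g + dt * E @ np.asarray(a, float) + np.sqrt(dt) * E @ (sigma * eps)
        path[n + 1] = g
    return path

# direction process (completion): a = (1, 0, 0), D = diag(0, 0, D33)
# contour enhancement:            a = (0, 0, 0), D = diag(D11, 0, D33)
path = se2_walk(a=(1, 0, 0), D=(0, 0, 0.3), n_steps=1000, dt=0.005)
```

Setting $\textbf{D}=0$ and $\textbf{a}=(1,0,0)$ recovers deterministic unit-speed transport along $\textbf{e}_\xi$.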
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.7\hsize]{StochasticRandomWalkerCompletionEnhancementResult.pdf}
\caption{Top row: 20 random walks of the direction process for contour completion in $SE(2)=\mathbb{R}^2 \rtimes S^1$ by Mumford \cite{Mumford} with $\textbf{a}=(1,0,0)$, $D_{33}=0.3$, time step $\Delta t=0.005$ and 1000 steps. Bottom row: 20 random walks of the linear left-invariant stochastic process for contour enhancement within $SE(2)$ with parameter settings $D_{11}=D_{33}=0.5$ and $D_{22}=0$, time step $\Delta t=0.05$ and 1000 steps.}
\label{figure:StochasticRandomWalkerCompletionEnhancementResult}
\end{figure}
\section{Implementation} \label{section:Implementation}
\subsection{Left-invariant Differences} \label{section:Left-invariantDifferences}
\subsubsection{Left-invariant Finite Differences with B-Spline Interpolation} \label{section: Left-invariant Finite Differences with B-spline Interpolation}
As explained in Section \ref{section:The Euclidean Motion Group $SE(2)$ and Group Representations}, our diffusions must be left-invariant. Therefore, a new grid template based on the left-invariant frame $\{\textbf{e}_\xi,\textbf{e}_\eta,\textbf{e}_\theta\}$, instead of the fixed frame $\{\textbf{e}_x,\textbf{e}_y,\textbf{e}_\theta\}$, needs to be used in the finite difference methods.
\begin{figure}[!htbp]
\centering
\subfloat{\includegraphics[width=0.9\hsize]{finiteDifferenceScheme.pdf}}
\caption{Illustration of the spatial part of the stencil of the numerical scheme. The horizontal and vertical dashed lines indicate the sampling grid, which is aligned with $\{\textbf{e}_x,\textbf{e}_y\}$. The black dots are aligned with the rotated left-invariant coordinate system $\{\textbf{e}_\xi,\textbf{e}_\eta\}$ with $\theta=m \cdot \Delta\theta$, where $m \in \{0,1,...,N_o-1\}$ indexes the orientations, equidistantly sampled with distance $\Delta \theta = \frac{2\pi}{N_o}$.}
\label{fig:finiteDifferenceScheme}
\end{figure}
To understand how left-invariant finite differences are implemented, see Figure~\ref{fig:finiteDifferenceScheme}, where 2nd order B-spline interpolation \cite{Unser1993} is used to approximate off-grid samples.
The main advantage of this left-invariant finite difference scheme is the improved rotation invariance compared to finite differences applied after expressing the PDE's in fixed $(x,y,\theta)$-coordinates, such as in \cite{Boscain2,FrankenPhDThesis,Zweck}. This advantage is clearly demonstrated in \cite[Fig.~10]{Franken2009IJCV}. The drawback, however, is the low computational speed and a small amount of additional blurring caused by the interpolation scheme \cite{FrankenPhDThesis}.
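A simplified sketch of such an interpolated left-invariant derivative is given below; here we use \texttt{scipy.ndimage.map\_coordinates} with 2nd order spline interpolation as a stand-in for the B-spline scheme of Figure~\ref{fig:finiteDifferenceScheme} (the helper function and its name are ours):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def d2_xi(W, theta, h=1.0, order=2):
    """Central second difference of one orientation layer W[y, x] along the
    rotated axis e_xi = (cos(theta), sin(theta)); the off-grid neighbours
    W(x +/- h e_xi) are obtained by 2nd order spline interpolation."""
    ny, nx = W.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    dx, dy = h * np.cos(theta), h * np.sin(theta)
    Wp = map_coordinates(W, [yy + dy, xx + dx], order=order, mode='nearest')
    Wm = map_coordinates(W, [yy - dy, xx - dx], order=order, mode='nearest')
    return (Wp - 2.0 * W + Wm) / h ** 2
```

On a quadratic test function the scheme reproduces the exact directional second derivative away from the image boundary, since quadratic B-spline interpolation reproduces quadratic polynomials.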
\subsection{Left-invariant Finite Difference Approaches for Contour Enhancement and Completion}
\label{section:Left-invariant Finite Difference Approaches for Contour Enhancement}
Eq.~(\ref{StochasticEnhancementEvolution}) of the contour enhancement process and Eq.~(\ref{StochasticCompletionEvolution}) of the contour completion process describe, respectively, the Brownian motion and the direction process of oriented particles moving in $SE(2)\equiv \mathbb{R}^2 \rtimes S^1$. Next we provide and analyze finite difference schemes for both processes.
\subsubsection{Explicit Scheme for Linear Contour Enhancement and Completion}\label{section:ExplicitSchemeforLinearContourEnhancementCompletion}
We can represent the explicit numerical approximations of the contour enhancement process and contour completion process by using the generator
$Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)$ in a general form, i.e. $Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) = (D_{11}\mathcall{A}_1^2+D_{33}\mathcall{A}_3^2) = (D_{11}\partial_\xi^2+D_{33}\partial_\theta^2)$ for the diffusion process and $Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3)=(\partial_\xi+D_{33}\partial_\theta^2)$ for
the convection-diffusion process, which yield the following forward Euler discretization:
\begin{align} \label{forwardEuler}
\left\{ \begin{aligned}
&W(g,t+\Delta t)=W(g,t)+\Delta t \, Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) \, W(g,t),\\
&W(g,0)=U_f(g).\\
\end{aligned} \right.
\end{align}
We take the centered 2nd order finite difference scheme with B-spline interpolation as shown in Figure~\ref{fig:finiteDifferenceScheme} to numerically approximate the diffusion terms $(D_{11}\partial_\xi^2+D_{33}\partial_\theta^2)$, and use upwind finite differences for $\partial_\xi$. In the forward Euler discretization, the time step $\Delta t$ is critical for the stability of the algorithm. Typically, the convection process and the diffusion process impose different requirements on the step size $\Delta t$. The convection requires time steps equal to the spatial grid size ($\Delta t=\Delta x$) to prevent additional blurring due to interpolation, while the diffusion process requires a sufficiently small $\Delta t$ for stability, as we show next. In this combined case, we simulate the diffusion process and the convection process alternately with different step sizes $\Delta t$, according to the splitting scheme in \cite{Creusen2013}, where half of the diffusion steps are carried out before one convection step, and half after it.
The resolvent of the (convection-)diffusion process can be obtained by integrating and weighting each evolution step with the negative exponential distribution in Eq.~(\ref{exponentialdistribution}). We set the parameters $\textbf{a}=(1,0,0)$ and $\textbf{D}=\textrm{diag} \{1,0,D_{33}\}$ with $D_{33}/D_{11}\approx0.01$ to avoid too much blurring on $S^{1}$.
\begin{remark}
Referring to the stability analysis of Franken et al.~\cite{Franken2009IJCV} in the general gauge frame setting, we similarly obtain: $\Delta t \leq \frac{1}{2(1+\sqrt{2}+\frac{1}{q^2})}$ in our case of normal left-invariant derivatives.
For a typical value of $q=\frac{\Delta\theta}{\beta}=\frac{(\pi/24)}{0.1}$ in our convention with $\beta^2:=\frac{D_{33}}{D_{11}} = 0.01$, in which $D_{33} = 0.01$ and $D_{11} = 1$, cf.~\cite{DuitsJMIV2014b}, we obtain stability bound $\Delta t \leq 0.16$ in the case of contour enhancement Eq.~(\ref{StochasticEnhancementEvolution}).
\end{remark}
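The stability bound of the remark is easily evaluated; the following small sketch (ours) computes it for the quoted parameter settings:

```python
import numpy as np

# stability bound for the explicit scheme (see the remark above):
#   dt <= 1 / (2 (1 + sqrt(2) + 1/q^2)),  q = dtheta / beta,  beta^2 = D33/D11
D11, D33, dtheta = 1.0, 0.01, np.pi / 24
q = dtheta / np.sqrt(D33 / D11)
dt_max = 1.0 / (2.0 * (1.0 + np.sqrt(2.0) + 1.0 / q ** 2))
```

For these settings `dt_max` evaluates to $\approx 0.167$, consistent with the bound quoted in the remark.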
\subsubsection{Implicit Scheme for Linear Contour Enhancement and Completion}
The implicit scheme of the contour enhancement and contour completion is given by:
\begin{align} \label{ImplicitScheme}
\left\{ \begin{aligned}
&W(g,t+\Delta t)=W(g,t)+\Delta t \, Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3) \, W(g,t+\Delta t),\\
&W(g,0)=U_f(g).\\
\end{aligned} \right.
\end{align}
Then, the equivalent discretization form of the Euler equation can be written as:
\begin{align} \label{DiscretizationImplicitScheme}
\left\{ \begin{aligned}
&\textbf{w}^{s+1}=\textbf{w}^s+\hat{\textbf{Q}}\textbf{w}^{s+1},\\
&\textbf{w}^1=\textbf{u},\\
\end{aligned} \right.
\end{align}
in which $\hat{\textbf{Q}} \equiv \Delta t \, (Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))$, and $\textbf{w}^s$ is the solution of the PDE at $t=(s-1)\Delta t$, $s \in \{1,2,...\}$, with the initial state $\textbf{w}^1=\textbf{u}$. Following the conjugate gradient method as applied in \cite{Creusen2013}, we can solve the resulting linear system $(\textbf{I}-\hat{\textbf{Q}})\textbf{w}^{s+1}=\textbf{w}^s$ iteratively without evaluating the matrix $\hat{\textbf{Q}}$ explicitly. The advantage of an implicit method is that it is unconditionally stable, even for large step sizes.
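The matrix-free implicit step can be sketched with \texttt{scipy}'s conjugate gradient solver and a \texttt{LinearOperator}; as a toy stand-in for $\hat{\textbf{Q}}$ we take $\Delta t\, D_{33}\,\partial_\theta^2$ on a periodic $\theta$-grid (our own simplification, for illustration only):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# one implicit step (I - Qhat) w^{s+1} = w^s, matrix-free:
# Qhat is only available through its action, never assembled.
N, dt, D33 = 64, 0.1, 1.0
dth = 2 * np.pi / N

def Qhat(w):
    # dt * D33 * discrete periodic Laplacian in theta
    return dt * D33 * (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dth ** 2

A = LinearOperator((N, N), matvec=lambda w: w - Qhat(w))
w_s = np.zeros(N); w_s[N // 2] = 1.0      # spike initial condition
w_next, info = cg(A, w_s)                 # conjugate gradient, info == 0 on success
```

Since the periodic Laplacian has zero column sums, the total mass of `w_next` equals that of `w_s` up to the solver tolerance, and the step remains well-posed even for this large $\Delta t$.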
\subsection{Numerical Fourier Approaches \label{section:Duitsmatrixalgorithm}}
The following numerical scheme is a generalization of the numerical scheme proposed by Jonas August for the direction process \cite{August}.
An advantage of this scheme over others, such as the algorithm by Zweck et al. \cite{Zweck} or other finite difference schemes \cite{Franken2009IJCV}, is that (as we will show later in Theorem \ref{th:RelationofFourierBasedWithExactSolution}) it is directly related to the exact analytic solutions (approach 1) presented in Section~\ref{3GeneralFormsExactSolutions}.
The goal is to obtain a numerical approximation of the exact solution of
\begin{equation} \label{theeqn}
\alpha(\alpha I-Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))^{-1}U=P, \, U \in \mathbb{L}_{2}(G), \quad \textit{with} \quad \underline{\mathcall{A}}=(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3),
\end{equation}
where the generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ is given in the general form Eq.~(\ref{diffusionconvectiongenerator})
without further assumptions on the parameters $a_{i}>0$, $D_{ii}>0$. Recall that its solution is given by $SE(2)$-convolution with the corresponding kernel. First we write
\begin{equation} \label{ansatz}
\begin{array}{l}
\mathcall{F}[P(\cdot,e^{i\theta})](\mbox{\boldmath$\omega$})=\hat{P}(\mbox{\boldmath$\omega$},e^{i\theta})= \sum \limits_{l=-\infty}^{\infty} \hat{P}^{l}(\mbox{\boldmath$\omega$}) e^{i l \theta}, \\
\mathcall{F}[U(\cdot,e^{i\theta})](\mbox{\boldmath$\omega$})=\hat{U}(\mbox{\boldmath$\omega$},e^{i\theta})= \sum \limits_{l=-\infty}^{\infty} \hat{U}^{l}(\mbox{\boldmath$\omega$}) e^{i l \theta}. \\
\end{array}
\end{equation}
Then by substituting (\ref{ansatz}) into (\ref{theeqn}) we obtain the following 4-fold recursion
{\small
\begin{equation} \label{5recursion}
\begin{array}{l}
(\alpha \!+\!l^2 D_{33}\!+\! i\, a_{3} l+\frac{\rho^2}{2}(D_{11}+D_{22}))\hat{P}^{l}(\mbox{\boldmath$\omega$}) + \frac{ a_1(i\, \omega_x
\!+\! \omega_{y})\!+\!a_2(i \, \omega_y \!-\!\omega_{x})}{2} \hat{P}^{l-1}(\mbox{\boldmath$\omega$})\\+\frac{ a_1(i\, \omega_x\!-\! \omega_{y})+a_2(i \, \omega_y \!+\!\omega_{x})}{2} \hat{P}^{l+1}(\mbox{\boldmath$\omega$})
-
\frac{ D_{11}(i\, \omega_x\!+\! \omega_{y})^2\!+\!D_{22}(i \, \omega_y \!-\!\omega_{x})^2}{4}
\hat{P}^{l-2}(\mbox{\boldmath$\omega$}) \\
-
\frac{ D_{11}(i\, \omega_x\!-\! \omega_{y})^2+D_{22}(i \, \omega_y \!+\! \omega_{x})^2}{4}
\hat{P}^{l+2}(\mbox{\boldmath$\omega$}) = \alpha \, \hat{U}^{l}(\mbox{\boldmath$\omega$}),
\end{array}
\end{equation}
}
which can be rewritten in polar coordinates
\begin{equation} \label{recurs}
\begin{array}{l}
(\alpha + i l a_3 +D_{33}l^2+ \frac{\rho^2}{2}(D_{11}+D_{22})) \, \tilde{P}^{l}(\rho)+ \frac{\rho}{2}(i a_{1}-a_2)\, \tilde{P}^{l-1}(\rho)+ \\
\frac{\rho}{2}(i a_{1}+a_2) \, \tilde{P}^{l+1}(\rho) + \frac{\rho^2}{4}(D_{11}-D_{22})\, (\tilde{P}^{l+2}(\rho)+\tilde{P}^{l-2}(\rho))=
\alpha \, \tilde{U}^{l}(\rho)
\end{array}
\end{equation}
for all $l\in\mathbb{Z}$, with $\tilde{P}^{l}(\rho) = e^{il\varphi} \hat{P}^{l}(\mbox{\boldmath$\omega$})$ and $\tilde{U}^{l}(\rho) = e^{il\varphi} \hat{U}^{l}(\mbox{\boldmath$\omega$})$, with
$\mbox{\boldmath$\omega$}=(\rho \cos \varphi, \rho \sin \varphi)$.
Equation (\ref{recurs}) can be written in matrix-form, where a 5-band matrix must be inverted. For explicit representation
of this 5-band matrix where the spatial Fourier transform in (\ref{ansatz}) is replaced by the $\textbf{DFT}$ we refer to \cite[p.230]{DuitsPhDThesis}. Here we stick to a Fourier series on $\mathbb{T}$, $\textbf{CFT}$ on $\mathbb{R}^2$ and truncation of the series at $N \in \mathbb{N}$ which yields the
$(2N+1) \times (2N+1)$ matrix equation:
\begin{equation} \label{MatrixInverse}
{\tiny \left(
\begin{array}{ccccccc}
p_{-N} & q+t & r & 0 & 0 & 0 & 0 \\
q-t & p_{-N+1} & q+t & r & 0 & 0 & 0 \\
r & \ddots & \ddots & \ddots & r & 0 & 0 \\
0 & \ddots & q-t & p_{0} & q+t & r & 0 \\
0 & 0 & r & \ddots & \ddots & \ddots & r \\
0 & 0 & 0 & r & q-t & p_{N-1} & q+t \\
0 & 0 & 0 & 0 & r & q-t & p_{N}
\end{array}
\right)
\left(
\begin{array}{c}
\tilde{P}^{-N}(\rho) \\
\tilde{P}^{-N+1}(\rho) \\
\vdots \\
\tilde{P}^{0}(\rho) \\
\vdots \\
\tilde{P}^{N-1}(\rho)
\\
\tilde{P}^{N}(\rho)
\end{array}
\right)=
\frac{4 \alpha}{ D_{11}} \!
\left(
\begin{array}{c}
\tilde{U}^{-N}(\rho) \\
\tilde{U}^{-N+1}(\rho) \\
\vdots \\
\tilde{U}^{0}(\rho) \\
\vdots \\
\tilde{U}^{N-1}(\rho)
\\
\tilde{U}^{N}(\rho)
\end{array}
\right)
}
\end{equation}
where $p_{l}= (2l)^2 + \frac{4 \alpha + 2 \rho^2(D_{11}+D_{22})+4 i a_{3} l}{D_{33}}$, $r=\frac{\rho^2(D_{11}-D_{22})}{D_{33}}$, $q= \frac{2 \rho a_{1}i}{D_{33}}$ and $t= \frac{2 a_2 \rho}{D_{33}}.$
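For each fixed radius $\rho$, this 5-band system can be assembled and inverted with a banded solver; the following sketch (our own helper, names hypothetical) uses \texttt{scipy.linalg.solve\_banded}:

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_recursion(U_tilde, rho, alpha, a, D, N):
    """Solve the truncated 5-band system (Eq. MatrixInverse) for one radius rho.
    U_tilde: the 2N+1 coefficients tilde-U^l(rho), l = -N, ..., N."""
    a1, a2, a3 = a
    D11, D22, D33 = D
    l = np.arange(-N, N + 1)
    p = (2 * l) ** 2 + (4 * alpha + 2 * rho ** 2 * (D11 + D22) + 4j * a3 * l) / D33
    q = 2j * rho * a1 / D33
    t = 2 * rho * a2 / D33
    r = rho ** 2 * (D11 - D22) / D33
    M = 2 * N + 1
    ab = np.zeros((5, M), dtype=complex)   # banded storage ab[u + i - j, j]
    ab[0, 2:] = r                          # second super-diagonal
    ab[1, 1:] = q + t                      # super-diagonal
    ab[2, :] = p                           # main diagonal
    ab[3, :-1] = q - t                     # sub-diagonal
    ab[4, :-2] = r                         # second sub-diagonal
    rhs = (4 * alpha / D11) * np.asarray(U_tilde, dtype=complex)
    return solve_banded((2, 2), ab, rhs)
```

This exploits the penta-diagonal structure, so each radius costs only $O(N)$ operations instead of the $O(N^3)$ of a dense solve.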
\begin{remark}\label{rem:41}
The four-fold recursion Eq.~(\ref{recurs}) is uniquely determined by the truncation conditions $\tilde{P}^{-N-1}=0$, $\tilde{P}^{-N-2}=0$, $\tilde{P}^{N+1}=0$, $\tilde{P}^{N+2}=0$, which are applied in Eq.~(\ref{MatrixInverse}).
\end{remark}
\begin{remark}\label{rem:42}
When applying the Fourier transform on $SE(2)$ to the PDEs of interest, as done in \cite{DuitsAlmsick2008,Boscain3,Boscain2}, one obtains a fully isomorphic 5-band matrix system, as pointed out in \cite[App.A, Lemma A.1, Thm A.2]{DuitsAlmsick2008}. The basic underlying coordinate transition to be applied is given by
\[
(p,\phi)= (\rho,\varphi - \theta)
\]
where $p$ indexes the irreducible representations of $SE(2)$ and $\phi$
denotes the angular argument of the $p$-th irreducible function subspace $\mathbb{L}_{2}(S^{1})$ on which
the $p$-th irreducible representation acts. For further details see \cite[App.A]{DuitsAlmsick2008} and \cite{Chirikjian}.
\end{remark}
In \cite{DuitsAlmsick2008}, we showed the relation between spectral decomposition of this matrix (for $N \to \infty$) and the exact solutions of contour completion. In this paper we do the same for the contour enhancement case in Section \ref{section:FourierBasedForEnhancement}.
\subsection{Stochastic Implementation}\label{section:MonteCarloStochasticImplementation}
In a Monte-Carlo simulation as proposed in \cite{Gonzalo,BarbieriArxiv2013}, we sample the stochastic process (Eq.~\!(\ref{eq:form})) to obtain the kernels of our linear left-invariant diffusions, in particular the kernel of the contour enhancement process and the kernel of the contour completion process. Figure~\ref{figure:MentoCarloSimulation} shows the $xy$-marginals of the enhancement and completion kernels, obtained by counting the number of paths crossing each voxel in the orientation score domain. To obtain the resolvent kernels, each path is weighted using the negative exponential distribution with respect to time in Eq.~\!(\ref{exponentialdistribution}), i.e. the traveling time of each path follows a negative exponential distribution.
Figure~\ref{figure:MentoCarloSimulation} also shows, for practically reasonable parameter settings, that increasing the number of sample paths to 50000 already provides a reasonable approximation of the exact kernels.
\begin{figure}[!htb]
\centering
{\includegraphics[width=\textwidth]{stochastic.pdf}}
\caption{Stochastic random process for the contour enhancement kernel (top) and for the contour completion kernel (bottom). Both processes are obtained via Monte Carlo simulation of random process
(\ref{eq:form}). For contour enhancement, we set step size $\Delta t=0.05$, $\alpha=10$, $D_{11}=D_{33}=0.5$, and $D_{22}=0$. For contour completion, we set step size $\Delta t=0.005$, $\alpha=5$, $D_{33}=1$, and $\textbf{a}=(1,0,0)$.}
\label{figure:MentoCarloSimulation}
\end{figure}
The implementation of the $k$-fold resolvent kernels is obtained by application of Theorem~\ref{th:prob}, i.e. by imposing a Gamma distribution instead of a negative exponential distribution on the traveling time. Here stochastic implementations become slower, as one can no longer rely on the memoryless property of the negative exponential distribution: one should only take the end condition of each sample path $G_T$ after sampling a random traveling time $T\sim\Gamma(t;k,\alpha)$. Still, such stochastic implementations are favorable (in view of the singularity) over concatenating $SE(2)$-convolutions of the resolvent kernels with themselves.
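A minimal sketch of this Gamma-time sampling for the direction process is given below (our own simplification: only the end condition of each path is stored, as the memoryless property no longer applies):

```python
import numpy as np

def sample_kstep_endpoints(n_paths, k, alpha, D33, dt=0.005, rng=None):
    """Monte-Carlo endpoints of the direction process with Gamma(k, alpha)
    traveling time T = T_1 + ... + T_k (cf. Theorem th:prob); a histogram
    of the endpoints approximates the k-fold resolvent kernel."""
    rng = np.random.default_rng() if rng is None else rng
    ends = np.empty((n_paths, 3))
    for i in range(n_paths):
        T = rng.gamma(shape=k, scale=1.0 / alpha)     # random traveling time
        n_steps = max(1, int(T / dt))
        dtheta = np.sqrt(2 * D33 * dt) * rng.standard_normal(n_steps)
        theta = np.concatenate(([0.0], np.cumsum(dtheta)))
        x = dt * np.cumsum(np.cos(theta[:-1]))
        y = dt * np.cumsum(np.sin(theta[:-1]))
        ends[i] = (x[-1], y[-1], theta[-1])
    return ends
```

Binning `ends` over an $(x,y,\theta)$-grid then yields the Monte-Carlo approximation of $R_{\alpha,k}$.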
\section{Implementation of the Exact Solution in the Fourier and the Spatial Domain and their Relation to Numerical Methods}\label{section:Comparison}
In previous works by Duits and van~Almsick \cite{DuitsCASA2005,DuitsCASA2007,DuitsAlmsick2008}, three methods were applied, producing three different exact representations for the kernels (or ``Green's functions'') of the forward Kolmogorov equations of the contour completion process:
\begin{enumerate}
\item The first method involves a spectral decomposition of the bi-orthogonal generator in the $\theta$-direction for each fixed spatial frequency $(\omega_{x},\omega_y)=(\rho \cos\varphi, \rho \sin\varphi) \in \mathbb{R}^{2}$ which is an unbounded Mathieu operator, producing a (for reasonably small times $t>0$) \emph{slowly converging} Fourier series representation. Disadvantages include the Gibbs phenomenon. Nevertheless, the Fourier series representation in terms of \emph{periodic} Mathieu functions directly relates to the numerical algorithm proposed by August in \cite{August}, as shown in \cite[ch:5]{DuitsAlmsick2008}. Indeed the Gibbs phenomenon appears in this algorithm as the method requires some smoothness of data: running the algorithm on a sharp discrete delta-spike provides Gibbs-oscillations. The same holds for Fourier transform on $SE(2)$ methods \cite{DuitsAlmsick2008,Boscain3,Boscain2}, recall Remark \ref{rem:42}.
\item The second method unwraps for each spatial frequency the circle $S^{1}$ to the real line $\mathbb{R}$, to solve the Green's function with absorbing boundary conditions at infinity which results in a quickly converging series in rapidly decaying terms expressed in \emph{non-periodic} Mathieu functions. There is a nice probabilistic interpretation: The $k$-th number in the series reflects the contribution of sample-paths in a Monte-Carlo simulation, carrying homotopy number $k \in \mathbb{Z}$, see Figure~\ref{fig:K0K1K2}.
\item The third method applies the Floquet theorem to the resulting series of the second method; application of the geometric series then produces a formula involving only 4 Mathieu functions \cite{DuitsAlmsick2008,MarkusThesis}.
\end{enumerate}
We briefly summarize these results in the general case and then we provide the end-results of the three approaches for respectively the contour enhancement case and the contour completion case in the theorems below.
In Figure~\ref{fig:EnhancementKernel}, we show an illustration of an exact resolvent enhancement kernel and an exact fundamental solution and their marginals.
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\hsize]{fig11Heat.pdf}
}
\caption{Top row, left: The three marginals of the exact Green's function $R_{\alpha}^{\textbf{D}}$ of the resolvent process where $\textbf{D} = \textrm{diag}\{D_{11},0,D_{33}\}$, with parameter settings {\small $\alpha=0.025$ and $\textbf{D}=\{1,0,0.08\}$}.
Top row, right: The isotropic case of the exact Green's function $R_{\alpha}^{\textbf{D}}$ of the resolvent process with {\small $\alpha=0.025$, $\textbf{D}=\{1,0.9,1\}$}.
Bottom row: The fundamental solution $S^{\textbf{D}}$ of the resolvent process with {\small $\textbf{D}=\{1,0,0.08\}$}. The iso-contour values are indicated in the figure.
}\label{fig:EnhancementKernel}
\end{figure}
Furthermore, we investigate the distribution of the exact kernel of the stochastic line propagation process with periodic boundaries from $-\pi-2k\pi$ to $\pi+2k\pi$. The probability density distribution of the kernel shows that most of the random walks move within $k=2$ loops, i.e. with $\theta$ ranging from $-3\pi$ to $3\pi$. See Figure~\ref{fig:K0K1K2}, where it can be seen
that the series of rapidly decaying terms of method 2 can, for reasonable parameter settings, already be truncated at $N=1$ or $N=2$.
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\hsize]{K0K1K2.pdf}
}
\caption{Top row, left to right: Two random walks in $SE(2)=\mathbb{R}^2 \rtimes S^1$ (and their projections on $\mathbb{R}^2$) of the stochastic process for contour enhancement for the $k=0, 1, 2$ cases (where $k$ denotes the number of loops), with $\mathbf{D}=\{0.5,0.,0.19\}$ (800 steps, step size $\Delta t = 0.005$). Bottom row, left to right: the intensity projection of the exact enhancement kernels corresponding to the three cases in the top row, i.e. $\theta$ ranging from $-\pi$ to $\pi$ for the $k=0$ case, from $-3\pi$ to $-\pi$ and $\pi$ to $3\pi$ for the $k=1$ case, and from $-5\pi$ to $-3\pi$ and $3\pi$ to $5\pi$ for the $k=2$ case, with {\small $\alpha=\frac{1}{40}$, $\mathbf{D}=\{0.5,0.,0.19\}$}.}
\label{fig:K0K1K2}
\end{figure}
In Appendix~\ref{app:A} we analyze the asymptotic behavior of the spatial Fourier transform of the kernels at the origin and at infinity. It turns out that the fundamental solutions (the case $\alpha \downarrow 0$) are the only kernels with a pole at the origin. This reflects that fundamental solutions are not $\mathbb{L}_{1}$-normalizable, in contrast to resolvent kernels and temporal kernels. Furthermore, the Fourier transform of any kernel restricted to a fixed $\theta$-layer has a rapidly decaying direction $\omega_{\eta}$ and a slowly decaying direction $\omega_{\xi}$. Therefore we analyze the decay of the spatially Fourier transformed kernels along these axes at infinity, and we deduce that all resolvent kernels and fundamental solutions have a singularity at the origin, whereas the time-dependent kernels do not suffer from such a singularity.
\subsection{Spectral Decomposition and the 3 General Forms of Exact Solutions}\label{3GeneralFormsExactSolutions}
In this section, we will derive 3 general forms of the exact solutions. To this end we note that the analysis of strongly continuous semigroups \cite{Yosida} and their resolvents starts with the analysis of the generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$. Symmetries of the solutions
directly follow from the symmetries of the generator. Furthermore, spectral analysis of the generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ as an unbounded operator on $\mathbb{L}_{2}(SE(2))$ provides spectral decomposition and explicit formulas for the time-dependent kernels, their resolvents and fundamental solutions as we will see next.
First of all, the domain of the self-adjoint operator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ equals
\[
\begin{array}{l}
\mathcall{D}(Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))=\mathbb{H}_{2}(\mathbb{R}^{2}) \otimes \mathbb{H}_{2}(S^{1}), \textrm{ with second order Sobolev space} \\
\mathbb{H}_{2}(S^{1})\equiv \{\phi \in \mathbb{H}_{2}([0,2\pi])\;|\; \phi(0)=\phi(2\pi) \textrm{ and } {\rm d}\phi(0)={\rm d}\phi(2\pi)\},
\end{array}
\]
where ${\rm d}\phi \in \mathbb{H}_{1}(S^{1})$ is the weak derivative of $\phi$ and where both Sobolev spaces $\mathbb{H}_{2}(S^{1})$ and $\mathbb{H}_{2}(\mathbb{R}^{2})$ are endowed with the $\mathbb{L}_{2}$-norm. Operator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$ is equivalent to the corresponding operator
\[
\mathcall{B}^{\textbf{D},\textbf{a}}:=(\mathcall{F}_{\mathbb{R}^{2}} \otimes \textrm{id}_{\mathbb{L}_{2}(S^{1})}) \circ Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}) \circ (\mathcall{F}_{\mathbb{R}^{2}}^{-1} \otimes \textrm{id}_{\mathbb{H}_{2}(S^{1})}),
\]
where $\otimes$ denotes the tensor product in distributional sense, $\mathcall{F}_{\mathbb{R}^{2}}$ denotes the unitary Fourier transform operator on $\mathbb{L}_{2}(\mathbb{R}^{2})$ almost everywhere given by
\[
\mathcall{F}_{\mathbb{R}^{2}}f(\mbox{\boldmath$\omega$})=\hat{f}(\mbox{\boldmath$\omega$}):= \frac{1}{2\pi} \int_{\mathbb{R}^{2}} f(\textbf{x}) e^{-i\, \mbox{\boldmath$\omega$} \cdot \textbf{x}}\, {\rm d}\textbf{x},
\]
and where $\textrm{id}_{\mathbb{H}_{2}(S^{1})}$ denotes the identity map on $\mathbb{H}_{2}(S^{1})$.
This operator $\mathcall{B}^{\textbf{D},\textbf{a}}$ is given by
\[
(\mathcall{B}^{\textbf{D},\textbf{a}}\hat{U})(\mbox{\boldmath$\omega$},\theta)= (\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}\hat{U}(\mbox{\boldmath$\omega$},\cdot))(\theta),
\]
where for each fixed spatial frequency $\mbox{\boldmath$\omega$}=(\rho \cos \varphi, \rho \sin \varphi) \in \mathbb{R}^{2}$ operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}: \mathbb{H}_{2}(S^{1}) \to \mathbb{L}_{2}(S^{1})$ is a mixture of multiplier operators and
weak derivative operators $d=\partial_{\theta}$:
\begin{equation}
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}= -\sum \limits_{j=1}^{2} a_{j} m_j + \sum \limits_{k,j=1}^{2} D_{kj} m_k m_j
+(-a_{3} + 2 D_{j3} m_j) d + D_{33} d^2,
\end{equation}
with multipliers $m_{1}=i \rho \cos (\varphi - \theta)$ and $m_{2}= -i \rho \sin(\varphi - \theta)$ corresponding to respectively $\partial_{\xi}=\cos \theta \partial_{x} +\sin \theta \partial_{y}$ and $\partial_{\eta}=-\sin \theta \partial_{x} +\cos \theta \partial_{y}$.
By straightforward trigonometric relations it follows that for each $\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}$ operator
$\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$ boils down to a second-order Mathieu-type operator (i.e. an operator of the type $\frac{d^2}{dz^2}-2q\cos(2z)+a$).
In case of the contour enhancement we have
\[\label{ContourEnhancementMathieuOperator}
\begin{array}{l}
\left(\textbf{a}=\textbf{0} \textrm{ and }\textbf{D}=\textrm{diag}\{D_{11},D_{22},D_{33}\}\textrm{ and } D_{11},D_{22} \geq 0, D_{33}> 0 \right) \Rightarrow \\
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}= - D_{11} \rho^2 \cos^{2}(\varphi - \theta) - D_{22} \rho^{2}\sin^{2}(\varphi - \theta)+
D_{33} \partial_{\theta}^2.
\end{array}
\]
In case of the contour completion we have
\[
(\textbf{a}=(1,0,0) \textrm{ and }D_{33}>0 ) \Rightarrow
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}= - i\rho \cos(\varphi - \theta) + D_{33} \partial_{\theta}^2.
\]
Operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$ satisfies
\[
(\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}})^* \Theta = \overline{\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}\overline{\Theta}},
\]
and moreover it admits a right-inverse kernel operator
$K:\mathbb{L}_{2}(S^{1}) \to \mathbb{H}_{2}(S^{1})$ given by
\begin{equation}\label{relconj}
Kf(\theta) = \int \limits_{S^{1}} k(\theta,\nu) f(\nu) {\rm d}\nu,
\end{equation}
with a kernel satisfying $k(\theta,\nu)=k(\nu,\theta)$ (without conjugation). This kernel $k$
relates to the fundamental solution of operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$:
\[
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}} \hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\cdot) =\delta^{\theta}_{0},
\textrm{ for all }\mbox{\boldmath$\omega$}=(\rho \cos\varphi, \rho \sin \varphi) \in \mathbb{R}^{2},
\]
with $S^{\textbf{D},\textbf{a}} :SE(2)\setminus \{e\} \to \mathbb{R}$ infinitely differentiable. By left-invariance of our generator $Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}})$, we
have
\[
k(\theta,\nu)= \hat{S}^{\textbf{D},\textbf{a}}(\rho \cos(\varphi-\theta), \rho \sin (\varphi-\theta),\nu-\theta),
\]
where $\hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)$ denotes the spatial Fourier transform of the fundamental solution $S^{\textbf{D},\textbf{a}}:SE(2) \setminus \{e\} \to \mathbb{R}^{+}$. Now that we have analyzed the generator of our PDE evolutions, we summarize 3 exact approaches describing the kernels of the PDE's of interest.
\subsubsection*{Exact Approach 1}
Kernel operator $K$ given by Eq.~(\ref{relconj}) is compact and its kernel satisfies $k(\theta,\nu) = k(\nu,\theta)$ and thereby it has a complete bi-orthonormal basis of eigenfunctions $\{\Theta_{n}\}_{n \in \mathbb{Z}}$:
\[
\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}} \Theta_{n}^{\mbox{\boldmath$\omega$}}= \lambda_{n} \Theta_{n}^{\mbox{\boldmath$\omega$}} \textrm{ and } K \Theta_{n}^{\mbox{\boldmath$\omega$}} =\lambda_{n}^{-1} \Theta_{n}^{\mbox{\boldmath$\omega$}}, \textrm{ with }0\geq \lambda_{n} \to -\infty.
\]
As operator $\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}$ is a Mathieu-type operator, these eigenfunctions $\Theta_{n}$ can be expressed in terms of periodic Mathieu functions, and the corresponding
eigenvalues can be expressed in terms of Mathieu characteristics, as we will explicitly see in the subsequent subsections for both the contour-enhancement and contour-completion cases.
The resulting solutions of our first approach are
\begin{equation} \label{sols1}
\boxed{
\begin{array}{l}
W(x,y,\theta,s)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{W}(\cdot,\theta,s)](x,y) \textrm{ with }
\hat{W}(\mbox{\boldmath$\omega$},\theta,s)= \sum \limits_{n \in \mathbb{Z}} e^{s \lambda_{n}} (\hat{U}(\mbox{\boldmath$\omega$},\cdot),\overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}})\Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta), \\
P_{\alpha}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{P}_{\alpha}(\cdot,\theta)](x,y) \textrm{ with }
\hat{P}_{\alpha}(\mbox{\boldmath$\omega$},\theta)= \alpha \sum \limits_{n \in \mathbb{Z}} \frac{1}{\alpha -\lambda_n} (\hat{U}(\mbox{\boldmath$\omega$},\cdot),\overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}}) \Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta), \\
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\textbf{\mbox{\boldmath$\omega$}},\theta)= \frac{\alpha}{2\pi} \sum \limits_{n \in \mathbb{Z}} \frac{1}{\alpha-\lambda_n} \overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta)}\, \Theta_{n}^{\mbox{\boldmath$\omega$}}(0),\\
S^{\textbf{D},\textbf{a}}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{S}^{\textbf{D},\textbf{a}}(\cdot,\theta)](x,y) \textrm{ with }
\hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)=-\frac{1}{2\pi} \sum \limits_{n \in \mathbb{N}} \frac{1}{\lambda_n} \overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}(\theta)}\, \Theta_{n}^{\mbox{\boldmath$\omega$}}(0).
\end{array}
}
\end{equation}
\begin{remark}
If $\textbf{a}=\textbf{0}$ then $(\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}})^*=(\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}})$ and $\overline{\Theta_{n}^{\mbox{\boldmath$\omega$}}}=\Theta_{n}^{\mbox{\boldmath$\omega$}}$ and the $\{\Theta_{n}^{\mbox{\boldmath$\omega$}}\}$ form an orthonormal basis for $\mathbb{L}_{2}(S^{1})$ for each fixed $\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}$.
\end{remark}
\subsubsection*{Exact Approach 2}
The problem with the solutions (\ref{sols1}) is that these Fourier series representations do not converge quickly for small $s>0$.
Therefore, in the second approach we unfold the circle and for the moment we replace the $2\pi$-periodic boundary condition in $\theta$ by absorbing boundary conditions at infinity,
and we consider the auxiliary problem of finding $R^{\textbf{D},\textbf{a},\infty}_{\alpha}: (\mathbb{R}^{2} \times \mathbb{R}) \setminus \{e\} \to \mathbb{R}^{+}$, such that
\begin{equation} \label{unfoldeqs}
\begin{array}{l}
(-Q^{\textbf{D},\textbf{a}}+\alpha I) R^{\textbf{D},\textbf{a},\infty}_{\alpha} =\alpha \delta_{0}^{x} \otimes \delta_{0}^{y} \otimes \delta_{0}^{\theta}, \\
R^{\textbf{D},\textbf{a},\infty}_{\alpha}(\cdot, \theta) \to 0 \textrm{ as }|\theta| \to \infty.
\end{array}
\Leftrightarrow \forall_{\mbox{\boldmath$\omega$} \in \mathbb{R}^{2}}\;:\:
\left\{
\begin{array}{l}
(-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I) \hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}(\mbox{\boldmath$\omega$}, \cdot) =\alpha \, \frac{1}{2\pi}\, \delta_{0}^{\theta}, \\
\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}(\mbox{\boldmath$\omega$}, \theta) \to 0 \textrm{ as }|\theta| \to \infty.
\end{array}
\right.
\end{equation}
The spatial Fourier transform of the corresponding fundamental solution again follows by taking the limit $\alpha \downarrow 0$: $\hat{S}^{\infty}:=\lim \limits_{\alpha \downarrow 0}\alpha^{-1}\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}$. Now the solution of (\ref{unfoldeqs}) is given by
\begin{equation}\label{genform}
\boxed{
\hat{R}^{\textbf{D},\textbf{a},\infty}_{\alpha}(\mbox{\boldmath$\omega$}, \theta)=
\frac{\alpha}{ 2\pi D_{33}\, W_{\rho}}
\left\{
\begin{array}{l}
G_{\rho}(\varphi)F_{\rho}(\varphi-\theta), \textrm{ for }\theta \geq 0, \\
F_{\rho}(\varphi)G_{\rho}(\varphi-\theta), \textrm{ for }\theta \leq 0,
\end{array}
\right.
\textrm{ for all }
\mbox{\boldmath$\omega$}=(\rho \cos \varphi, \rho \sin\varphi)}
\end{equation}
where $\theta \mapsto F_{\rho}(\varphi-\theta)$ is the unique solution in the nullspace of operator $-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I$ satisfying $F_{\rho}(\theta) \to 0$ for $\theta \to +\infty$,
and where $\theta \mapsto G_{\rho}(\varphi-\theta)$ is the unique solution in the nullspace of operator $-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I$ satisfying $G_{\rho}(\theta) \to 0$ for $\theta \to -\infty$.
The Wronskian of $F_{\rho}$ and $G_{\rho}$ is given by
\begin{equation}\label{WronskianComputation}
W_{\rho}=F_{\rho}G_{\rho}'-G_{\rho}F_{\rho}'=F_{\rho}(0)G_{\rho}'(0)-G_{\rho}(0)F_{\rho}'(0).
\end{equation}
See Figure~\ref{fig:ContinuousFit}.
We conclude with the periodized solutions
\begin{equation} \label{periodized}
\boxed{
\begin{array}{l}
R_{\alpha}^{\textbf{D},\textbf{a}}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\cdot,\theta)](x,y) \textrm{ with }
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)
= \sum \limits_{n \in \mathbb{Z}} \hat{R}_{\alpha}^{\textbf{D},\textbf{a},\infty}(\mbox{\boldmath$\omega$},\theta+2n \pi) , \\
S^{\textbf{D},\textbf{a}}(x,y,\theta)= [\mathcall{F}^{-1}_{\mathbb{R}^{2}}\hat{S}^{\textbf{D},\textbf{a}}(\cdot,\theta)](x,y) \textrm{ with }
\hat{S}^{\textbf{D},\textbf{a}}(\mbox{\boldmath$\omega$},\theta)=\sum \limits_{n \in \mathbb{Z}}
\hat{S}^{\textbf{D},\textbf{a},\infty}(\mbox{\boldmath$\omega$},\theta+2n \pi).
\end{array}
}
\end{equation}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\hsize]{ContinuousFit.pdf}
\caption{Illustration of the continuous fit of $\theta \mapsto \hat{R}_{\alpha}^{\textbf{D},\textbf{0},\infty}(\mbox{\boldmath$\omega$},\theta)$ in Eq.~(\ref{genform}) for contour enhancement with parameter settings
$D_{11}=1, D_{22}=0, D_{33}=0.05$ and $\alpha=\frac{1}{20}$, at $(\omega_x, \omega_y)=(\frac{\pi}{20},\frac{\pi}{20})$.}
\label{fig:ContinuousFit}
\end{figure}
For further details see \cite{DuitsAlmsick2008,DuitsCASA2005,DuitsCASA2007,DuitsAMS1,MarkusThesis}. Here we omit the details on these explicit solutions for the general case as the proof is fully equivalent to \cite[Lemma 4.4\&Thm 4.5]{DuitsAlmsick2008}, and moreover the techniques are directly generalizable from standard Sturm-Liouville theory.
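The periodization step in Eq.~(\ref{periodized}) is straightforward once the unfolded kernel can be evaluated. The following Python sketch illustrates the truncated wrapping; the exponentially decaying \texttt{kernel\_inf} is a hypothetical stand-in for $\theta \mapsto \hat{R}_{\alpha}^{\textbf{D},\textbf{a},\infty}(\mbox{\boldmath$\omega$},\theta)$ at a fixed frequency, not the actual Mathieu-function solution.

```python
import numpy as np

def periodize(kernel_inf, theta, n_max=2):
    """Wrap a kernel on the unfolded theta-line onto the circle by summing
    2*pi-shifted copies (cf. Eq. (periodized)), truncated at |n| <= n_max."""
    return sum(kernel_inf(theta + 2 * np.pi * n)
               for n in range(-n_max, n_max + 1))

# Hypothetical stand-in, exponentially decaying in |theta| like the true
# unfolded solutions; the decay rate 0.8 is arbitrary.
decay = 0.8
kernel_inf = lambda th: np.exp(-decay * np.abs(th))

theta = np.linspace(-np.pi, np.pi, 101)
k2 = periodize(kernel_inf, theta, n_max=2)
k10 = periodize(kernel_inf, theta, n_max=10)

# The shifted terms decay geometrically, so n_max = 2 is already accurate.
print(np.max(np.abs(k2 - k10)))
```

Since the shifted copies decay rapidly in $n$ (cf. Figure~\ref{fig:K0K1K2}), truncating at $|n|\leq 2$ already agrees with a much longer sum to high precision.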
\subsubsection*{Exact Approach 3}
In the third approach, where for simplicity we restrict ourselves to the contour enhancement and contour completion cases, we apply the well-known Floquet theorem to the second-order ODE
\begin{equation}\label{Mathieu}
(-\mathcall{B}^{\textbf{D},\textbf{a}}_{\mbox{\boldmath$\omega$}}+\alpha I)F(\theta)=0 \Leftrightarrow
y''(z) -2 q_{\rho} \cos(2z)\, y(z) = -a_{\rho}\, y(z), \quad \textrm{with } y(z)=F(\theta),\ z=\textstyle\frac{\varphi-\theta}{\mu},
\end{equation}
with $\mu \in \{1,2\}$. For the precise formulas of $a_{\rho}$, $q_{\rho}$ and $\mu$ in the cases of contour enhancement and contour completion we refer to the next subsections.
Note that in both the contour enhancement and contour completion cases we use the Mathieu functions (following the conventions of \cite{AbraMathieu,Schaefke}) with
\begin{equation} \label{MatheiuEllipticFunctions}
\boxed{
\begin{array}{l}
\textrm{me}_\nu(z;q_\rho)=\textrm{ce}_\nu(z;q_\rho)+i\textrm{se}_\nu(z;q_\rho)\\
\textrm{me}_{-\nu}(z;q_\rho)=\textrm{ce}_\nu(z;q_\rho)-i\textrm{se}_\nu(z;q_\rho)
\end{array},
}
\end{equation}
where $z= \varphi-\theta, \nu=\nu(a_{\rho},q_{\rho})$, $\textrm{ce}_\nu(z;q_\rho)$ denotes the cosine-elliptic functions and $\textrm{se}_\nu(z;q_\rho)$ denotes the sine-elliptic functions, given by
\begin{equation*} \label{CosineSineEllipticFunctions}
\begin{array}{l}
\textrm{ce}_\nu(z;q_\rho)=\sum \limits_{r=-\infty}^{\infty} \textrm{c}_{2r}^\nu(q_\rho)\cos{(\nu+2r)}z\; \textrm{with}\; \textrm{ce}_\nu(z;0)=\cos{\nu z}\\
\textrm{se}_{\nu}(z;q_\rho)=\sum \limits_{r=-\infty}^{\infty} \textrm{c}_{2r}^\nu(q_\rho)\sin{(\nu+2r)}z\; \textrm{with}\; \textrm{se}_\nu(z;0)=\sin{\nu z}
\end{array}.
\end{equation*}
For details see \cite{Schaefke}.
Then, we have \[
F_{\rho}(z)=\textrm{me}_{-\nu}(z/\mu ,q_{\rho}),\;
G_{\rho}(z)=\textrm{me}_{\nu}(z/\mu ,q_{\rho}),
\]
with $\mu=1$ in the contour enhancement case and $\mu=2$ in the contour completion case. Furthermore, $a_{\rho}$ denotes the Mathieu characteristic, $q_{\rho}$
the Mathieu coefficient, and $\nu=\nu(a_{\rho},q_{\rho})$ the purely imaginary Floquet exponent (with $i \nu <0$)
of the Mathieu ODE (\ref{Mathieu}), whose general form is
\[
y''(z)- 2q \cos(2z) y(z)= -a y(z).
\]
Application of this theorem to the solutions $F_{\rho}$ and $G_{\rho}$ in Eq.~(\ref{genform}) yields
\begin{equation} \label{geom}
F_{\rho}\left(z -2n \pi\right)=e^{\frac{2n \pi \,i\, \nu}{\mu}} F_{\rho}\left(z\right) \textrm{ and }G_{\rho}\left(z -2n \pi\right)=e^{-\frac{2n \pi\, i \,\nu}{\mu}} G_{\rho}\left(z\right), \qquad z=\varphi-\theta.
\end{equation}
Substitution of (\ref{geom}) into (\ref{periodized}) together with the geometric series
\[
\sum \limits_{n=0}^{\infty} \left(e^{2 \nu \pi i/\mu} \right)^{n}=\frac{1}{1-e^{2 i\nu \pi/\mu}} \textrm{ and } \frac{1+e^{2i\nu \pi/\mu}}{1-
e^{2i \nu \pi/\mu}}= -\textrm{coth}\,(i \nu \pi/\mu)= i\cot(\nu \pi/\mu),
\]
with Floquet exponent $\nu=\nu(a_{\rho},q_{\rho}), \ \textrm{Im}(\nu)>0$,
yields
the following closed form solution expressed in 4 Mathieu functions:
{\small
\begin{equation} \label{sols3}
\boxed{
\begin{array}{l}
[\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\cdot,\theta)](\mbox{\boldmath$\omega$})= \frac{\alpha}{D_{33}\, i \, W_{\rho}}
\Big[ \\
\quad \Big(-\cot(\frac{\nu \pi}{\mu})\big(\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu}, q_{\rho})+
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\big)\\
\qquad +\,
\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})-
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\Big)\,{\rm u}(\theta)\\
\quad +\Big(-\cot(\frac{\nu \pi}{\mu})\big(\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})-
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\big)\\
\qquad +\,
\textrm{ce}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{se}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})+
\textrm{se}_{\nu}(\frac{\varphi}{\mu},q_{\rho})\, \textrm{ce}_{\nu}(\frac{\varphi-\theta}{\mu},q_{\rho})\Big)\,{\rm u}(-\theta)\Big]
\end{array}
}
\end{equation}
}
with Floquet exponent $\nu=\nu(a_{\rho},q_{\rho})$ and where $\theta \mapsto {\rm u}(\theta)$ denotes the unit step function.
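As a quick numerical sanity check (with an arbitrarily chosen purely imaginary $\nu$ with $\textrm{Im}(\nu)>0$), the geometric series and the cotangent identity used above can be verified directly:

```python
import numpy as np

# Purely imaginary Floquet exponent with Im(nu) > 0 (a made-up value),
# so that r = exp(2*pi*i*nu/mu) satisfies |r| < 1 and the series converges.
nu, mu = 0.7j, 1
r = np.exp(2j * np.pi * nu / mu)

# Geometric series: sum_{n>=0} r^n = 1/(1-r).
partial = sum(r**n for n in range(200))
assert np.isclose(partial, 1 / (1 - r))

# (1+r)/(1-r) = -coth(i*nu*pi/mu) = i*cot(nu*pi/mu).
lhs = (1 + r) / (1 - r)
assert np.isclose(lhs, -1 / np.tanh(1j * nu * np.pi / mu))
assert np.isclose(lhs, 1j / np.tan(nu * np.pi / mu))
print("identities verified")
```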
Next we will summarize the main results, before we consider the special cases of the contour enhancement and the contour completion.
\begin{theorem}\label{th:exact}
The exact solutions of all linear left-invariant (convection)-diffusions on $SE(2)$, their resolvents, and their fundamental solutions given by
\[
W(g,t)=(K_{t}^{\textbf{D},\textbf{a}} *_{SE(2)} U)(g), \ \ P_{\alpha}(g)= (R_{\alpha}^{\textbf{D},\textbf{a}} *_{SE(2)} U)(g), \ \
S^{\textbf{D},\textbf{a}}= (Q^{\textbf{D},\textbf{a}}(\underline{\mathcall{A}}))^{-1} \delta_{e},
\]
admit three types of exact representations. The first type is a series involving periodic Mathieu functions, given by Eq.~(\ref{sols1}).
The second type is a rapidly decaying series involving non-periodic Mathieu functions given by Eq.~(\ref{genform}) together with Eq.~(\ref{periodized}),
and the third one involves only four non-periodic Mathieu functions and is given by Eq.~(\ref{sols3}).
\end{theorem}
\subsubsection{The Contour Enhancement Case}
In case $\textbf{D}=\textrm{diag}\{D_{11},D_{22}, D_{33}\}$ with $D_{11},D_{33}>0$ and $D_{22}\geq 0$ and $\textbf{a}=\textbf{0}$,
the settings in the solution formula of the first approach Eq.\!~(\ref{sols1}) are
\begin{equation} \label{efenh}
\begin{array}{l}
\Theta_{n}(\theta)= \frac{\textrm{me}_{n}(\varphi-\theta,q_{\rho})}{\sqrt{2\pi}},\;
q_{\rho}=\frac{\rho^2 (D_{11}-D_{22})}{4 D_{33}},\;
\lambda_{n}=-a_{n}(q_{\rho}) D_{33} - \frac{\rho^{2}(D_{11}+D_{22})}{2},
\end{array}
\end{equation}
where $\textrm{me}_{n}(z,q)$ denotes the periodic Mathieu function with parameter $q$ and $a_{n}(q)$ the corresponding Mathieu characteristic,
and with Floquet exponent $\nu = n \in \mathbb{Z}$.
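The characteristics $a_{n}(q_{\rho})$ can also be approximated without any Mathieu-function library by discretizing the Mathieu operator in the Fourier basis $e^{irz}$, which yields a symmetric matrix with diagonal $r^{2}$ and coupling $q$ between modes $r$ and $r\pm 2$. The following Python sketch (with arbitrary illustrative parameter values) demonstrates this, together with the rapid decay of the spectral weights $1/(\alpha-\lambda_{n})$ appearing in Eq.~(\ref{sols1}):

```python
import numpy as np

def mathieu_characteristics(q, n_modes=40):
    """Eigenvalues a of -y'' + 2q*cos(2z)*y = a*y on 2*pi-periodic functions,
    discretized in the Fourier basis e^{irz}: diagonal r^2, coupling q at r+-2."""
    r = np.arange(-n_modes, n_modes + 1)
    A = np.diag(r.astype(float) ** 2)
    for i in range(len(r) - 2):
        A[i, i + 2] = A[i + 2, i] = q
    return np.sort(np.linalg.eigvalsh(A))

# Illustrative (arbitrary) parameters; lambda_n as in Eq. (efenh).
D11, D22, D33, rho, alpha = 1.0, 0.0, 0.1, 2.0, 0.05
q = rho**2 * (D11 - D22) / (4 * D33)
a = mathieu_characteristics(q)
lam = -a * D33 - rho**2 * (D11 + D22) / 2
weights = 1 / (alpha - lam)
print(weights[:5])  # rapidly decaying spectral weights
```

For $q=0$ the computed characteristics reduce to the well-known values $a_{n}(0)=n^{2}$, which serves as a sanity check of the discretization.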
The settings of the solution formula of the second approach
Eq.\!~(\ref{Mathieu}) together with Eq.\!~(\ref{periodized}) are
\begin{equation} \label{aqenh}
\begin{array}{l}
a_{\rho}=\frac{-\alpha -\frac{\rho^2}{2}(D_{11}+D_{22})}{D_{33}}, \
q_{\rho}=\frac{\rho^2 (D_{11}-D_{22})}{4 D_{33}}, \
\mu = 1, \ W_{\rho}=-2i \, \textrm{se}_{\nu}'(0,q_{\rho})\textrm{ce}_{\nu}(0,q_{\rho}),
\end{array}
\end{equation}
where $\textrm{se}_\nu'{(0,q_\rho)}=\left.\frac{d}{dz}\textrm{se}_\nu(z,q_\rho)\right|_{z=0}$.
The third approach Eq.\!~(\ref{sols3}) yields for $D_{11}>D_{22}$ the result
in \cite[Thm 5.3]{DuitsAMS1}.
\begin{remark}
As the generator $Q^{\textbf{D},\textbf{0}}(\underline{\mathcall{A}})=D_{11}\mathcall{A}_{1}^{2} + D_{33}\mathcall{A}_{3}^{2}$ is invariant under the reflection
$\mathcall{A}_{3} \mapsto -\mathcall{A}_{3}$ we have that our real-valued kernels satisfy $K(x,y,\theta)=K(-x,-y,\theta)$. As a result the spatially Fourier transformed enhancement kernels given by
$\hat{K}_{t}^{\textbf{D}}(\mbox{\boldmath$\omega$},\theta)$, $\hat{R}_{\alpha}^{\textbf{D}}(\mbox{\boldmath$\omega$},\theta)$, $\hat{S}^{\textbf{D}}(\mbox{\boldmath$\omega$},\theta)$ are real-valued.
This is indeed the case in e.g. Eq.\!~(\ref{genform}) and Eq.\!~(\ref{sols3}), as for $q,z \in \mathbb{R}$ and $\overline{\nu}=-\nu$, we have
$\overline{\textrm{me}_{\nu}(z,q)}=
\textrm{me}_{\overline{\nu}}(-\overline{z},\overline{q})=\textrm{me}_{\nu}(z,q)$,
so that
$\textrm{se}_{\nu}(z,q) \in i\mathbb{R}$ and $\textrm{ce}_{\nu}(z,q) \in \mathbb{R}$.
\end{remark}
\subsubsection{The Contour Completion Case}
In case $\textbf{D}=\textrm{diag}\{0,0, D_{33}\}$ with $D_{33}>0$ and $\textbf{a}=(1,0,0)$,
the settings in the solution formula of the first approach Eq.\!~(\ref{sols1}) are
\begin{equation} \label{efcom}
\begin{array}{l}
\Theta_{n}(\theta)= \frac{\textrm{ce}_{n}\left(\frac{\varphi-\theta}{2},q_{\rho}\right)}{\sqrt{\pi}} , n \in \mathbb{N}\cup \{0\}, \qquad
\lambda_{n}=-\frac{a_{n}(q_{\rho})D_{33}}{4}, \ \ q_{\rho}=\frac{2\rho i}{D_{33}},
\end{array}
\end{equation}
where $\textrm{ce}_{n}$ denotes the even periodic Mathieu-function
with Floquet exponent $n$.
The settings of the solution formula of the second approach
Eq.\!~(\ref{Mathieu}) together with Eq.\!~(\ref{periodized}) are
\begin{equation} \label{aqcom}
\begin{array}{l}
a_{\rho}= -\frac{4\alpha}{D_{33}}, \
q_{\rho}= \frac{2\rho i}{D_{33}}, \
\mu = 2, \ W_{\rho}= - i \, \textrm{se}_{\nu}'(0,q_{\rho})\textrm{ce}_{\nu}(0,q_{\rho}).
\end{array}
\end{equation}
See Figure~\ref{fig:ExactCompletionKernel} for plots of completion kernels.
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\hsize]{fig10FP.pdf}
}
\caption{The marginals of the exact Green's functions for contour completion. All the figures have the same settings: $\sigma=0.4$, $\textbf{D}=\{0,0,0.08\}$ and $\textbf{a}=(1,0,0)$. Top row, left: The resolvent process with {\small $\alpha=0.1$}.
Right: The resolvent process with {\small $\alpha=0.01$}.
Bottom row: The fundamental solution of the resolvent process (the limit {\small $\alpha \downarrow 0$}). The iso-contour values are indicated in the Figure.
}\label{fig:ExactCompletionKernel}
\end{figure}
\subsubsection{Overview of the Relation of Exact Solutions to Numerical Implementation Schemes}
Theorem~\ref{th:exact} provides three types of exact solutions for our PDE's of interest, and the question arises how these exact solutions
relate to the common numerical approaches to these PDE's.
The solutions of the first type relate to $SE(2)$-Fourier and finite-element-type techniques (but then in a Fourier basis), as we will show for the general case in Section~\ref{section:Duitsmatrixalgorithm}. The general idea is that if the dimension of the square band matrices (where the band size is at most $5$) tends to infinity, the exact solutions arise in the spectral decomposition of the numerical matrices.
To compare the second and third types of exact solutions to the numerics we must sample the solutions involving non-periodic Mathieu functions
in the Fourier domain. Unfortunately, as also reported by Boscain et al. \cite{Boscain2}, well-tested and complete publicly available packages for Mathieu-function evaluation
are not easy to find. The routines for Mathieu-function evaluation in \emph{Mathematica 7, 8, 9} show proper results only for specific parameter settings:
in the case of contour enhancement their evaluations numerically break down for the interesting cases $D_{11}=1$ and $D_{33}<0.2$, see Figure~\ref{fig:MathieuImplementationComparison} in Appendix~\ref{app:B}. Therefore, in Appendix~\ref{app:B}, we provide
our own algorithm for Mathieu-function evaluation relying on standard theory of continued fractions \cite{ContinuedFractions}.
This allows us to sample the exact solutions in the Fourier domain for comparisons. Still there are two issues left that we address in the next section:
1. One needs to analyze errors that arise by replacing $\textbf{CFT}^{-1}$ (Inverse of the Continuous Fourier
Transform) by the $\textbf{DFT}^{-1}$ (Inverse of the Discrete Fourier Transform), 2. One needs to deal with singularities at the origin.
\subsubsection{The Direct Relation of Fourier Based Techniques to the Exact Solutions}\label{section:FourierBasedForEnhancement}
In \cite{DuitsAlmsick2008} we have related matrix-inversion in Eq.~(\ref{MatrixInverse}) to the exact solutions for the contour completion case. Next we follow a similar approach for the contour enhancement case with ($D_{22}=0$, i.e. hypo-elliptic diffusion), where again we relate diagonalization of the five-band matrix to the exact solutions.
\begin{theorem}\label{th:RelationofFourierBasedWithExactSolution}
Let $\pmb{\omega}=(\rho\cos\varphi, \rho\sin\varphi) \in \mathbb{R}^2$ be fixed. In case of contour enhancement with $\textbf{D}=\textrm{diag}\{D_{11},0,D_{33}\}$ and $\textbf{a}= \mathbf{0}$, the solution of the matrix system (\ref{5recursion}), for $N \rightarrow \infty$, can be written as\\
{\small
\begin{equation}
\boxed{
\hat{P}=S \Lambda^{-1} S^T \hat{\mathbf{u}}}
\end{equation}
}
with
\begin{equation}\label{recursionParameters}
\begin{array}{l}
\hat{P}=\{\tilde{P}^\ell(\rho)\}_{\ell \in \mathbb{Z}},\quad \hat{\mathbf{u}}=\{\tilde{u}^\ell(\rho)\}_{\ell \in \mathbb{Z}}, \quad S=[S_n^\ell]=[c_\ell^n(q_\rho)],\\
\Lambda=\textrm{diag}\{\alpha-\lambda_n({\rho})\},\quad \lambda_n(\rho)=-a_{2n}(q_\rho)D_{33}-\frac{\rho^2 D_{11}}{2}, \quad q_\rho=\frac{\rho^2D_{11}}{4 D_{33}},\\
\end{array}
\end{equation}
and where
\begin{align*}
c_\ell^n=
\left\{
\begin{array}{ll}
\textrm{Mathieu coefficient } c_\ell^n(q_\rho), & \textrm{if } \ell \textrm{ is even},\\
0, & \textrm{if } \ell \textrm{ is odd}.
\end{array}
\right.
\end{align*}
In fact Eq.~(\ref{5recursion}), for $N \rightarrow \infty$, boils down to a steerable $SE(2)$ convolution \cite{FrankenPhDThesis} with the corresponding exact kernel $R_\alpha^{\textbf{D},\textbf{a}}:SE(2)\rightarrow \mathbb{R}^+$.
\end{theorem}
\begin{proof}
Both $\{\theta \mapsto \frac{1}{\sqrt{2\pi}}e^{i\ell(\varphi-\theta)}\;|\; \ell \in \mathbb{Z}\}$
and $\{\theta \mapsto \Theta_n^{\pmb{\omega}}(\theta):=\frac{\textrm{me}_n(\varphi-\theta,q_\rho)}{\sqrt{2\pi}}\;|\;n \in \mathbb{Z}\}$
form an orthonormal basis of $\mathbb{L}_2({S^1})$. The corresponding basis transformation is given by $S$. As this basis transformation is unitary, we have $S^{-1}=S^\dagger=\bar{S}^T$. As a result we have
\begin{equation}
\begin{aligned}
\tilde{P}^\ell(\rho) = \sum_{m,n,p \in \mathbb{Z}} S_m^\ell \delta_n^m \frac{1}{\alpha-\lambda_n(\rho)} (S^\dagger)_p^n\tilde{u}^p(\rho) =\sum_{n \in \mathbb{Z}}\sum_{p \in \mathbb{Z}} \frac{c_\ell^n(q_\rho)c_p^n(q_\rho)\tilde{u}^p(\rho)}{\alpha-\lambda_n(\rho)}.
\end{aligned}
\end{equation}
Thereby, as $me_n(z)=\sum_{\ell \in \mathbb{Z}}c_\ell^n(q_\rho)e^{i \ell z}$, we have:
\begin{equation}
\begin{aligned}
\hat{P}_\alpha(\pmb{\omega},\theta)
=\alpha \sum_{\ell \in \mathbb{Z}} \tilde{P}^\ell(\rho)e^{i \ell (\varphi - \theta)}
=\alpha \sum_{n \in \mathbb{Z}}\sum_{p \in \mathbb{Z}}\frac{me_n(\varphi - \theta, q_\rho)c_p^n(q_\rho)e^{ip\varphi} \hat{u}^p(\rho)}{\alpha-\lambda_n(\rho)},
\end{aligned}
\end{equation}
where we recall $\tilde{u}^p=e^{ip\varphi}\hat{u}^p$.
Now setting $u=\delta_e \Leftrightarrow \hat{u}(\pmb{\omega},\theta)=\frac{1}{2\pi}\delta_0^\theta \Leftrightarrow \forall_{p \in \mathbb{Z}}\; \hat{u}^p=\frac{1}{2\pi}$, we obtain the exact kernel
\begin{equation}
\hat{R}_\alpha^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta)=\frac{\alpha}{2\pi}\sum_{n \in \mathbb{Z}}\frac{\Theta_n^{\pmb{\omega}}(\theta)\Theta_n^{\pmb{\omega}}(0)}{\alpha-\lambda_n(\rho)},
\end{equation}
from which the result follows. $\hfill \Box$
\end{proof}
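The identity $\hat{P}=S\Lambda^{-1}S^{T}\hat{\mathbf{u}}$ is the standard spectral solve of a symmetric system. The following Python sketch checks it on a hypothetical symmetric 5-band matrix standing in for the truncated system (\ref{5recursion}); the matrix entries are random and purely illustrative:

```python
import numpy as np

# Hypothetical symmetric 5-band matrix standing in for the truncated
# discretization (band offsets -2..2), shifted to be well-conditioned.
rng = np.random.default_rng(0)
N = 41
M = np.zeros((N, N))
for k in range(3):
    band = rng.standard_normal(N - k)
    M += np.diag(band, k) + (np.diag(band, -k) if k else 0)
M += N * np.eye(N)

u = rng.standard_normal(N)

# Spectral solve: M = S diag(mu) S^T  =>  M^{-1} u = S diag(1/mu) S^T u.
mu, S = np.linalg.eigh(M)
p_spectral = S @ ((S.T @ u) / mu)

# Direct solve for comparison.
p_direct = np.linalg.solve(M, u)
print(np.max(np.abs(p_spectral - p_direct)))
```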
\textbf{Conclusion:} This theorem supports our numerical findings that will follow in Section~\ref{section:Experimental results}. The small relative errors are due to the rapid convergence $\frac{1}{\alpha-\lambda_n(\rho)} \rightarrow 0$ as $n\rightarrow \infty$, so that truncation of the 5-band matrix produces very small \emph{uniform} errors compared to the exact solutions. It is therefore not surprising that the Fourier based techniques outperform the finite difference schemes in terms of numerical approximation (see the experiments in Section~\ref{section:Experimental results}).
\subsection{Comparison to The Exact Solutions in the Fourier Domain\label{ch:comparison}}
In the previous section we have derived the Green's function of the exact solutions of the system
\begin{align} \label{ResolventEquations2}
\left\{
\begin{aligned}
&(\alpha I-Q^{\textbf{D},\textbf{a}}) R_{\alpha}^{\textbf{D},\textbf{a}}=\alpha \delta_e\\
&R_{\alpha}^{\mathbf{D},\mathbf{a}}(x,y,-\pi)
=
R_{\alpha}^{\mathbf{D},\mathbf{a}}(x,y,\pi)
\end{aligned}
\right. \end{align}
in the continuous Fourier domain. However, we still need to produce nearly exact solutions $R_{\alpha}^{\textbf{D},\textbf{a}}(x,y,\theta_r)$ in the spatial domain, given by
\begin{equation}\label{ContinuousExactSolutions}
\begin{aligned}
R_{\alpha}^{\textbf{D},\textbf{a}}(x,y,\theta_r)&=\left(\frac{1}{2\pi}\right)^2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\pmb{x}}d\pmb{\omega}\\
&=\left(\frac{1}{2\pi}\right)^2\int_{-\varsigma\pi}^{\varsigma\pi}\int_{-\varsigma\pi}^{\varsigma\pi}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\pmb{x}}d\pmb{\omega}+I_\varsigma(\textbf{x},r),
\end{aligned}
\end{equation}
where $\pmb{x}=(x,y) \in \mathbb{R}^2$, $\pmb{\omega}=(\omega_x,\omega_y)\in \mathbb{R}^2$, $\theta_r=(\frac{2\pi}{2R+1} \cdot r) \in [-\pi,\pi]$ are the discrete angles with $r \in \{-R,-(R-1),...,0,...,R-1,R\}$, $\varsigma$ is an oversampling factor, and $I_\varsigma(\textbf{x},r)$ represents the tails of the exact solutions due to their support outside the range $[-\varsigma\pi,\varsigma\pi]$ in the Fourier domain, given by
\begin{equation}
I_{\varsigma}(\textbf{x},r)=\left(\frac{1}{2\pi}\right)^2 \int_{\mathbb{R}^2 \setminus [-\varsigma\pi,\varsigma\pi]^2}e^{-s|\pmb{\omega}|^2}\hat{R}_{\alpha}^{\textbf{D}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\textbf{x}}d\pmb{\omega}.
\end{equation}
However, in practice we sample the exact solutions in the Fourier domain and then obtain the spatial kernel by directly applying the $\textbf{DFT}^{-1}$. Errors will emerge by using the $\textbf{DFT}^{-1}$ instead of the $\textbf{CFT}^{-1}$. More precisely, we shall rely on the $\textbf{CDFT}^{-1}$ (Inverse of the Centered Discrete Fourier Transform). Next we analyze and estimate these errors via Riemann sum approximations \cite{RiemannSum}. The nearly exact solutions of the spatial kernel in Eq.~(\ref{ContinuousExactSolutions}) can be written as
\begin{equation}\label{DiscreteExactSolutions}
\begin{array}{l}
R^{\textbf{D},\textbf{a}}_\alpha(x,y,\theta_r)=\left(\frac{1}{2\pi}\right)^2 \sum\limits_{p'=-\varsigma P}^{\varsigma P}\sum\limits_{q'=-\varsigma Q}^{\varsigma Q}\hat{R}^{\textbf{D},\textbf{a}}_\alpha (\omega_{p'}^1,\omega_{q'}^2,\theta_r)e^{i(\omega_{p'}^1 x+\omega_{q'}^2 y)}\Delta\omega^1\Delta\omega^2\\
\qquad\qquad\qquad +\,I_\varsigma(\textbf{x},r)+O\left(\frac{1}{2P+1}\right)+O\left(\frac{1}{2Q+1}\right)\\
\qquad =\frac{1}{2P+1}\frac{1}{2Q+1} \sum\limits_{p'=-\varsigma P}^{\varsigma P}\sum\limits_{q'=-\varsigma Q}^{\varsigma Q}\hat{R}^{\textbf{D},\textbf{a}}_\alpha (\omega_{p'}^1,\omega_{q'}^2,\theta_r)e^{i(\omega_{p'}^1 x+\omega_{q'}^2 y)}\\
\qquad\qquad\qquad +\,I_\varsigma(\textbf{x},r)+O\left(\frac{1}{2P+1}\right)+O\left(\frac{1}{2Q+1}\right),
\end{array}
\end{equation}
where
$\Delta\omega^1=\frac{2\pi}{2P+1}=\frac{2\pi}{x_{dim}},
\Delta\omega^2=\frac{2\pi}{2Q+1}=\frac{2\pi}{y_{dim}}$ and $P, \, Q \in \mathbb{N}$ determine the number of samples in the spatial domain, with discrete frequencies and angles given by
\begin{equation}\label{discretefrequencies}
\omega_{p'}^1=\frac{2\pi}{2P+1} \cdot p' \in [-\varsigma\pi,\varsigma\pi],\;
\omega_{q'}^2=\frac{2\pi}{2Q+1} \cdot q' \in [-\varsigma\pi,\varsigma\pi],\;
\theta_r=\frac{2\pi}{2R+1} \cdot r \in [-\pi,\pi]
\end{equation}
There are three approximation terms in Eq.~(\ref{DiscreteExactSolutions}); two of them, i.e. $O\left(\frac{1}{2P+1}\right)$ and $O\left(\frac{1}{2Q+1}\right)$, are standard Riemann-sum approximation errors. However, $I_\varsigma(\textbf{x},r)$ is harder to control and estimate.
This is one of the reasons why we include a spatial Gaussian blurring with small scale $0<s \ll 1$. This means that instead of solving
$R_\alpha^{\textbf{D},\textbf{a}}=
\alpha(\alpha I-Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1}\delta_e$,
we compute
\begin{equation}\label{ResolventWithGaussian}
R_\alpha^{\textbf{D},\textbf{a},s}=e^{s\Delta}\alpha(\alpha I-Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1}\delta_e
=\alpha(\alpha I - Q^{\textbf{D},\textbf{a}}(\mathcall{A}_1,\mathcall{A}_2,\mathcall{A}_3))^{-1} e^{s\Delta} \delta_e.
\end{equation}
So instead of computing the impulse response of a resolvent diffusion we compute the response of a spatially blurred spike $G_s \otimes \delta_0^\theta$ with Gaussian kernel $G_s(x)=\frac{e^{-\frac{||x||^2}{4s}}}{4\pi s}$. Another reason for including a linear isotropic diffusion is that the kernels $R_\alpha^{\textbf{D},\textbf{a},s}$ are not singular at the origin. The singularity at the origin $(0,0,0)$ of $R_\alpha^{\textbf{D},\textbf{a}}$ reproduces the original data, whereas the tails of $R_\alpha^{\textbf{D},\textbf{a}}$ take care of the actual visual enhancement. Therefore, reducing the singularity at the origin by slightly increasing $s>0$ amplifies the enhancement properties of the kernel in practice.
However, $s>0$ should not be too large as we do not want the isotropic diffusion to dominate the anisotropic diffusion.
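As a minimal numerical sanity check (a Python sketch with an assumed sampling grid, not part of the actual $SE(2)$ implementation), one can verify that the Gaussian kernel $G_s$ above integrates to one and has the Fourier symbol $e^{-s|\pmb{\omega}|^2}$:

```python
import numpy as np

# Hypothetical sampling grid; s = sigma^2 / 2 with sigma = 1 pixel.
s = 0.5
dx = 0.05
x = np.arange(-10.0, 10.0, dx)
X, Y = np.meshgrid(x, x)

# 2D Gaussian kernel G_s(x) = exp(-||x||^2 / (4 s)) / (4 pi s).
G = np.exp(-(X**2 + Y**2) / (4.0 * s)) / (4.0 * np.pi * s)

# Riemann approximations of its total mass and of its continuous
# Fourier transform at omega = (1, 0), which should equal e^{-s}.
mass = G.sum() * dx * dx
G_hat = (G * np.exp(-1j * 1.0 * X)).sum() * dx * dx
print(mass, abs(G_hat - np.exp(-s)))
```

Both Riemann sums are accurate here because the Gaussian and its transform decay rapidly within the sampled window.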
\begin{theorem}
The exact solutions of $R_\alpha^{\textbf{D},\textbf{a},s}:SE(2) \rightarrow \mathbb{R}^+$ are given by
\begin{equation}\begin{aligned} \label{ExactSolutionsFourier}
\left(\mathcall{F}_{\mathbb{R}^2}R_\alpha^{\textbf{D},\textbf{a},s}(\cdot,\theta)\right)(\pmb{\omega})
=
\left(\mathcall{F}_{\mathbb{R}^2}R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\theta)\right)(\pmb{\omega})e^{-s|\pmb{\omega}|^2},
\end{aligned}\end{equation}
where analytic expressions for $\hat{R}_\alpha^{\textbf{D},\textbf{a}}(\pmb{\omega,\theta})=\left[\mathcall{F}_{\mathbb{R}^2}(R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\theta))\right](\pmb{\omega})$ in terms of Mathieu functions are provided in Theorem \ref{th:exact}. For the spatial distribution, we have the following error estimation:
\begin{equation}\label{ExactSolutionRiemannSumApproximation}
R_\alpha^{\textbf{D},\textbf{a},s}(\textbf{x},\theta_r)=\left(\left[\mathbf{CDFT}\right]^{-1}(\hat{R}_{\alpha}^{\mathbf{D},\mathbf{a},s}(\pmb{\omega}_\cdot^1,\pmb{\omega}_\cdot^2,\theta_r))\right)(\textbf{x})+I_\varsigma^s(\textbf{x},r)+O\left(\frac{1}{2P+1}\right)+O\left(\frac{1}{2Q+1}\right),
\end{equation}
for all $\textbf{x}=(x,y) \in \mathbb{Z}_P \times \mathbb{Z}_Q$, with discretization in Eq.~(\ref{discretefrequencies}), $\varsigma \in \mathbb{N}$ denotes the oversampling factor in the Fourier domain and $s=\frac{1}{2}\sigma^2$ is the spatial Gaussian blurring scale with $\sigma \approx 1, 2$ pixel length, and \\
\begin{equation}
I_\varsigma^s(\textbf{x},r)=\int_{\mathbb{R}^2 \setminus [-\varsigma\pi,\varsigma\pi]^2}e^{-s|\pmb{\omega}|^2}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\cdot\textbf{x}}d\pmb{\omega}.
\end{equation}
\label{thm:DiscreteExactSolutionsErrorEstimation}
\end{theorem}
First of all we recall Eq.~(\ref{ResolventWithGaussian}), from which Eq.~(\ref{ExactSolutionsFourier}) follows. Eq.~(\ref{ExactSolutionRiemannSumApproximation}) follows by standard Riemann-sum approximation akin to Eq.~(\ref{DiscreteExactSolutions}).
Finally, we note that due to H\"{o}rmander theory \cite{Hoermander} the kernel $R_\alpha^{\textbf{D},\textbf{a}}$ is smooth on $SE(2)\setminus\{e\}$, with unity element $e=(0,0,0)$. Now, thanks to the isotropic diffusion, $R_\alpha^{\textbf{D},\textbf{a},s}$ is well-defined and smooth on the whole group $SE(2)$.
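The $O\left(\frac{1}{2P+1}\right)$ and $O\left(\frac{1}{2Q+1}\right)$ terms above express the first-order accuracy of uniform Riemann sums. A generic Python illustration of this $O(1/N)$ behavior (independent of the kernels themselves):

```python
import math

def left_riemann_error(n):
    # Left Riemann sum of exp on [0, 1] versus the exact value e - 1;
    # for a smooth non-periodic integrand the error decays like 1/n.
    h = 1.0 / n
    approx = sum(math.exp(k * h) for k in range(n)) * h
    return abs(approx - (math.e - 1.0))

# Tenfold refinement reduces the error roughly tenfold (first order).
ratio = left_riemann_error(10) / left_riemann_error(100)
print(ratio)  # close to 10
```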
\begin{remark}
In the isotropic case $D_{11}=D_{22}$ we have, for $\rho=|\pmb\omega| \gg 0$ fixed, the asymptotic formula
\begin{equation}
(D_{11}\rho^2+D_{33}\rho_\theta^2+\alpha I)\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb\omega,\rho_\theta)=1
\Longrightarrow \hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb\omega,\rho_\theta)=\frac{1}{D_{11}\rho^2+D_{33}\rho_\theta^2+\alpha} = O\left(\frac{1}{\rho^2}\right).
\end{equation}
\end{remark}
As a result, we obtain the estimate
\begin{equation}
\begin{array}{l}
|I_\varsigma^s(\textbf{x},r)|=|\int_{\mathbb{R}^2 \setminus [-\varsigma\pi,\varsigma\pi]^2}e^{-s|\pmb{\omega}|^2}\hat{R}_{\alpha}^{\textbf{D},\textbf{a}}(\pmb{\omega},\theta_r)e^{i\pmb{\omega}\textbf{x}}d\pmb{\omega}|
\leq 2\pi\int_{\varsigma\pi}^\infty e^{-s\rho^2}\frac{C}{\rho}d\rho = \pi C \; \Gamma(0,\pi^2 s \varsigma^2),
\end{array}
\end{equation}
for fixed $\textbf{a}$, $C \approx \frac{1}{D_{11}}$ (for $D_{33}$ small), and where $\Gamma(a,z)$ denotes the upper incomplete Gamma function. We have $s=\frac{1}{2}\sigma^2$. For typical parameter settings in the contour enhancement case, $\sigma=1$ pixel length, $D_{11}=1$, $D_{33}=0.05$, we have
\begin{align} \label{GammaDistribution}
|I_\varsigma^s(\textbf{x},r)|\leq\left\{
\begin{aligned}
&(0.00124)\pi C, &\varsigma=1 \\
&(10^{-10})\pi C, &\varsigma=2
\end{aligned}
\right. \end{align}
which is sufficiently small for $\varsigma \geq 2$.
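The values in Eq.~(\ref{GammaDistribution}) can be reproduced numerically. The following Python sketch (with an assumed plain trapezoidal quadrature) evaluates $\Gamma(0,\pi^2 s \varsigma^2)=\int_{\pi^2 s \varsigma^2}^{\infty} e^{-t}\,t^{-1}\,{\rm d}t$ for $s=\frac{1}{2}$ and $\varsigma\in\{1,2\}$:

```python
import numpy as np

def upper_gamma0(z, tail=80.0, n=200_000):
    # Gamma(0, z) = int_z^infinity exp(-t)/t dt, via a composite
    # trapezoidal rule on [z, z + tail]; the truncated remainder is
    # bounded by exp(-(z + tail)) and is negligible here.
    t = np.linspace(z, z + tail, n + 1)
    f = np.exp(-t) / t
    h = tail / n
    return (f[:-1] + f[1:]).sum() * h / 2.0

s = 0.5  # s = sigma^2 / 2 with sigma = 1 pixel length
bounds = {}
for varsigma in (1, 2):
    bounds[varsigma] = upper_gamma0(np.pi**2 * s * varsigma**2)
print(bounds)  # roughly 1.2e-3 for varsigma = 1 and 1.3e-10 for varsigma = 2
```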
\subsubsection{Scale Selection of the Gaussian Mask and Inner-scale}\label{ScaleSelectionGaussianMask}
In the previous section, we proposed to use a narrow spatial isotropic Gaussian window to control errors caused by using the $\textbf{DFT}^{-1}$. In $\mathbb{R}$, we have $\sqrt{4\pi s} \mathcall{F}G_{s}=G_{\frac{1}{4s}}$, i.e.
\begin{equation}\label{GaussianFunction}
\begin{aligned}
(\mathcall{F}G_{s})(\omega)&=e^{-s||\omega||^2}, \qquad
G_s(x)=\frac{1}{\sqrt{4\pi s}}e^{\frac{-||x||^2}{4s}},
\qquad
\sigma_s \cdot \sigma_f=1,
\end{aligned}
\end{equation}
where $\sigma_f$ denotes the standard deviation of the Fourier window, and $\sigma_s$ denotes the standard deviation of the spatial window.
In our convention, we always take $\Delta x=\frac{l}{N_s}$ as the spatial pixel unit length, where $l$ gives the spatial physical length and $N_s$ denotes the number of samples.
The size of the Fourier Gaussian window can be represented as $2\sigma_f=\nu\cdot\varsigma\pi$,
where $\nu\in[\frac{1}{2},1]$ is the factor that specifies the percentage of the maximum frequency we sample in the Fourier domain and $\varsigma$ is the oversampling factor. Then, the sizes of the continuous and discrete spatial Gaussian windows $\sigma_{s}$ and $\sigma_{s}^{Discrete}$ are given by:
\begin{equation}
\sigma_{s}=\frac{2}{\nu \varsigma \pi},\qquad
\sigma_{s}^{Discrete}=\sigma_s \cdot \frac{l}{N_s}=\frac{2}{\nu \varsigma \pi}\left(\frac{l}{N_s}\right).
\end{equation}
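The scale bookkeeping above can be made concrete in a few lines of Python (a sketch with assumed values $l=1$, $N_s=65$, $\varsigma=1$, and two sample choices of $\nu$):

```python
import math

l, N_s, varsigma = 1.0, 65, 1     # physical length, samples, oversampling
dx = l / N_s                      # spatial pixel unit length
results = []
for nu in (0.75, 1.0):            # fractions of the maximum frequency
    sigma_f = 0.5 * nu * varsigma * math.pi     # from 2 sigma_f = nu varsigma pi
    sigma_s = 2.0 / (nu * varsigma * math.pi)   # so that sigma_s * sigma_f = 1
    sigma_s_disc = sigma_s * dx
    s_s = 0.5 * sigma_s_disc**2                 # Gaussian scale vs inner-scale dx^2/2
    results.append((nu, sigma_s * sigma_f, 2 * sigma_s_disc / dx,
                    s_s / (0.5 * dx**2)))
print(results)
```

For these $\nu$ the discrete window width $2\sigma_s^{Discrete}$ lies between one and two pixels, and the Gaussian scale stays below the inner-scale $\frac{1}{2}(\Delta x)^2$.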
From Figure~\ref{fig:GaussianScale}, we can see that a Fourier Gaussian window with $\nu<1$ corresponds to a spatial Gaussian blurring of slightly more than 1 pixel unit.
\begin{figure}[htbp]
\centering
\subfloat{\includegraphics[scale=0.6]{GaussianScale.pdf}}
\caption{Illustration of the scales between a Fourier Gaussian window and the corresponding spatial Gaussian window. Here we define the number of samples $N_s=65$.}
\label{fig:GaussianScale}
\end{figure}
If we set the oversampling factor $\varsigma=1$, one has $2\sigma_{s}^{Discrete}\in \left[\Delta x, 2\Delta x\right]$. Then, the scale of the spatial Gaussian window satisfies $s_s=\frac{1}{2}(\sigma_s^{Discrete})^2\leq\frac{1}{2}(\Delta x)^2$, where $\frac{1}{2}(\Delta x)^2$ is called the \emph{inner-scale} \cite{FlorackInnerScale1992}, by definition the minimum reasonable Gaussian scale given the sampling distance.
\subsubsection{Comparison by Relative $\ell_K-$errors in the Spatial and Fourier Domain}\label{section:ComparisonRelativeErrorFormula}
Firstly, we explain how to make comparisons in the Fourier domain. Before the comparison, we apply a normalization such that all the DC components in the discrete Fourier domain add up to 1, i.e.
\begin{align*}
\sum_{r=-R}^R\sum_{x=-P}^P\sum_{y=-Q}^Q R_\alpha^{\textbf{D},\textbf{a}}(x,y,\theta_r)\Delta x \Delta y \Delta \theta=\sum_{r=-R}^R \left(\left[\mathbf{CDFT}\right]R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\cdot,\theta_r)\right)(0,0)\cdot \Delta\theta=1,
\end{align*}
where the $\textbf{CDFT}$ and its inverse are given by
\begin{equation}
\begin{array}{l}
\left[\mathbf{CDFT}\left(R_\alpha^{\textbf{D},\textbf{a}}(\cdot,\cdot,\theta_r)\right)\right][p',q']:=\sum\limits_{p=-P}^P\sum\limits_{q=-Q}^Q R_\alpha^{\textbf{D},\textbf{a}}(p,q,\theta_r)e^{\frac{-2\pi i p p'}{2P+1}}e^{\frac{-2\pi i q q'}{2Q+1}},\\
\left[\mathbf{CDFT^{-1}}\left([p',q']\rightarrow \hat{R}_\alpha^{\textbf{D},\textbf{a}}(\omega_{p'}^1,\omega_{q'}^2,\theta_r)\right)\right][p,q]\\
\qquad\qquad\qquad:=\left(\frac{1}{2P+1}\frac{1}{2Q+1}\right)\sum\limits_{p'=-P}^P\sum\limits_{q'=-Q}^Q \hat{R}_\alpha^{\textbf{D},\textbf{a}}(\omega_{p'}^1,\omega_{q'}^2,\theta_r)e^{\frac{2\pi i p p'}{2P+1}}e^{\frac{2\pi i q q'}{2Q+1}},
\end{array}
\end{equation}
in order to be consistent with the normalization in the continuous domain:
\begin{align*}
\int_{-\pi}^\pi\hat{R}_\alpha^{\textbf{D},\textbf{a}}(0,0,\theta){\rm d}\theta=\int_{-\pi}^\pi\int_\mathbb{R}\int_\mathbb{R} R_\alpha^{\textbf{D},\textbf{a}}(x,y,\theta){\rm d}x {\rm d}y {\rm d}\theta=1.
\end{align*}
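As a quick consistency check of the transform pair above, the following Python sketch implements the centered DFT directly on a small grid (grid sizes are arbitrary) and verifies that $\mathbf{CDFT}^{-1}\circ\mathbf{CDFT}$ is the identity:

```python
import numpy as np

P = Q = 3  # small (2P+1) x (2Q+1) grid, indices p, q in [-P, P]
p = np.arange(-P, P + 1)
# 1D centered DFT matrix F[p', p] = exp(-2 pi i p p' / (2P+1)).
F = np.exp(-2j * np.pi * np.outer(p, p) / (2 * P + 1))

rng = np.random.default_rng(0)
f = rng.standard_normal((2 * P + 1, 2 * Q + 1))   # placeholder kernel slice
f_hat = F @ f @ F.T                               # CDFT in x and y
f_back = (F.conj() @ f_hat @ F.conj().T).real / ((2 * P + 1) * (2 * Q + 1))
print(np.allclose(f_back, f))  # True
```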
The relative errors $\epsilon_R^f$ in the Fourier domain are calculated as follows:
\begin{equation}\label{RelativeError}
\epsilon_R^f=\frac{\textbf{|}\hat{R}_\alpha^{\textbf{D},\textbf{a},exact}(\omega_{\cdot}^1,\omega_{\cdot}^2,\theta_\cdot)-\hat{R}_\alpha^{\textbf{D},\textbf{a},approx}(\omega_{\cdot}^1,\omega_{\cdot}^2,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}{{\textbf{|}\hat{R}_\alpha^{\textbf{D},\textbf{a},exact}(\omega_{\cdot}^1,\omega_{\cdot}^2,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}
},
\end{equation}
where $K \in \mathbb{N}$ indexes the $\ell_K$ norm on the discrete domain $\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R$. Akin to comparisons in the Fourier domain, we compute relative errors $\epsilon_R^s$ in the spatial domain as follows:
\begin{equation}\label{RelativeErrorSpatial}
\epsilon_R^s=\frac{\textbf{|}R_\alpha^{\textbf{D},\textbf{a},exact}(x_\cdot,y_\cdot,\theta_\cdot)-R_\alpha^{\textbf{D},\textbf{a},approx}(x_\cdot,y_\cdot,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}{{\textbf{|}R_\alpha^{\textbf{D},\textbf{a},exact}(x_\cdot,y_\cdot,\theta_\cdot)\textbf{|}_{\ell_K(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )}}
},
\end{equation}
where we first normalize the approximation kernel with respect to the $\ell_1(\mathbb{Z}_P \times \mathbb{Z}_Q \times \mathbb{Z}_R )$ norm.
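For reference, a compact Python sketch of this spatial-domain error measure (with the $\ell_1$ pre-normalization; array shapes and data below are placeholders):

```python
import numpy as np

def relative_lk_error(exact, approx, K=1):
    # Relative l_K error between two sampled kernels; both are first
    # l_1-normalized so that overall scaling conventions do not matter.
    exact = exact / np.abs(exact).sum()
    approx = approx / np.abs(approx).sum()
    num = (np.abs(exact - approx) ** K).sum() ** (1.0 / K)
    den = (np.abs(exact) ** K).sum() ** (1.0 / K)
    return num / den

kernel = np.random.default_rng(1).random((5, 5, 5))  # placeholder samples
print(relative_lk_error(kernel, 3.0 * kernel))       # ~0: scale-invariant
```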
\section{Experimental Results}\label{section:Experimental results}
To compare the performance of different numerical approaches with the exact solution, Fourier and spatial kernels with special parameter settings are produced from different approaches in both enhancement and completion cases. The evolution of all our numerical schemes starts with a spatially blurred orientation score spike, i.e. $(G_{\sigma_s}*\delta_0^{\mathbb{R}^2})\otimes\delta_0^{S^1}$, which corresponds to the Fourier Gaussian window mentioned in Section~\ref{ch:comparison} for the error control of the exact kernel in Theorem \ref{thm:DiscreteExactSolutionsErrorEstimation}. We vary $\sigma_s>0$ in our comparisons. We analyze the relative errors of both spatial and Fourier kernels with changing standard deviation $\sigma_s$ of Gaussian blurring in the finite difference and the Fourier based approaches for contour enhancement, see Figure~\ref{fig:RelativeErrorWithChangingSigma}.
All the kernels in our experiments are $\ell_1$-normalized before comparisons are done. In the contour completion experiments, we construct all kernels with the number of orientations $N_o = 72$ and spatial dimensions $N_s = 192$, while in the contour enhancement experiments we set $N_o = 48$ and $N_s = 128$. Our experiments do not aim at convergence speed in terms of $N_o$ and $N_s$, as this can be derived theoretically from Theorem \ref{th:RelationofFourierBasedWithExactSolution}; we rather stick to reasonable sampling settings to compare our methods, and to analyze a reasonable choice of $\sigma_s > 0$.
\begin{figure}[!htbp]
\centering
\subfloat{\includegraphics[width=.6\textwidth]{RelativeErrorWithChangingSigma.pdf}}
\caption{The relative errors, Eq.~(\ref{RelativeError}), of the finite difference (FD), and Fourier based techniques (FBT) with respect to the exact methods (Exact) for contour enhancement. Both $\ell_1$ and $\ell_2$ normalized spatial and Fourier kernels are calculated based on different standard deviation $\sigma_s$ ranging from 0.5 to 1.7 pixels, with parameter settings $\textbf{D}=\{1.,0.,0.03\}, \alpha=0.05$ and time step size $\Delta t=0.005$ in the FD explicit approach.
}
\label{fig:RelativeErrorWithChangingSigma}
\end{figure}
From Figure~\ref{fig:RelativeErrorWithChangingSigma} we deduce that the relative errors of the $\ell_1$ and $\ell_2$ normalized finite difference (FD) spatial kernels converge to an offset of approximately $5\%$. This offset is understood by the additional numerical blurring due to the B-spline approximation in Section \ref{section: Left-invariant Finite Differences with B-spline Interpolation}, which is needed for rotation covariance in discrete implementations \cite[Figure~\!10]{Franken2009IJCV}, but which does affect the actual diffusion parameters. The relative errors of the Fourier based techniques (FBT) decay very slowly from $0.61\%$ as $\sigma_s$ increases. We conclude that an appropriate stable choice of $\sigma_s$ for a fair comparison of our methods is $\sigma_s=1$, recall also Section~\ref{ScaleSelectionGaussianMask}.
\begin{table*}[!ht]
\centering
\caption{Enhancement kernel comparison of the exact analytic solution with the numerical Fourier based techniques, the stochastic methods and the finite difference schemes.}
\begin{tabular}{r|l|l|l}
\toprule
Relative Error & $\textbf{D}=\{1.,0.,0.05\}$ & $\textbf{D}=\{1.,0.,0.05\}$ & $\textbf{D}=\{1.,0.9,1.\}$ \\
(\textbf{\%}) & $\alpha=0.01$ & $\alpha=0.05$ & $\alpha=0.05$ \\
\midrule
$\ell_1$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{0.12} \qquad \textbf{1.30} & \textbf{0.35} \qquad \textbf{1.92} & \textbf{2.27} \qquad \textbf{0.60} \\
\textit{Exact-Stochastic} & 2.18 \qquad 3.94 & 1.74 \qquad 3.82 & 2.66 \qquad 2.54 \\
\textit{Exact-FDExplicit} & 5.07 \qquad 1.82 & 5.70 \qquad 2.34 & 2.99 \qquad 3.56 \\
\textit{Exact-FDImplicit} & 5.08 \qquad 2.29 & 5.70 \qquad 3.03 & 3.00 \qquad 5.59 \\
\midrule
$\ell_2$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{1.40} \qquad \textbf{1.37}& \textbf{2.39} \qquad \textbf{2.30} & \textbf{2.24} \qquad \textbf{1.23} \\
\textit{Exact-Stochastic} & 2.26 \qquad 2.32& 3.50 \qquad 3.16 & 2.93 \qquad 2.65 \\
\textit{Exact-FDExplicit} & 4.80 \qquad 1.72& 4.97 \qquad 1.60 & 2.90 \qquad 3.15 \\
\textit{Exact-FDImplicit} & 5.17 \qquad 2.11& 5.80 \qquad 2.29 & 5.42 \qquad 5.56 \\
\bottomrule
\end{tabular}%
\caption*{
\textbf{Measurement method abbreviations}: ($\textit{Exact}$) - Ground truth measurements based on the analytic solution by using Mathieu functions in Section~\ref{3GeneralFormsExactSolutions}, ($\textit{FBT}$) - Fourier based techniques in Section~\ref{section:Duitsmatrixalgorithm} and Section~\ref{section:FourierBasedForEnhancement}, ($\textit{Stochastic}$) - Stochastic method in Section~\ref{section:MonteCarloStochasticImplementation} (with $\Delta t=0.02$ and $10^8$ samples), ($\textit{FDExplicit}$) and ($\textit{FDImplicit}$) - Explicit and implicit left-invariant finite difference approaches with B-Spline interpolation in Section~\ref{section:Left-invariant Finite Difference Approaches for Contour Enhancement}, respectively. The settings of time step size are $\Delta t=0.005$ in the $\textit{FDExplicit}$ scheme, and $\Delta t=0.05$ in the $\textit{FDImplicit}$ scheme. }
\label{tab:RelativeErrorEnhancementComparison}%
\end{table*}
Table~\ref{tab:RelativeErrorEnhancementComparison} shows the validation results of our numerical enhancement kernels, in comparison with the exact solution using the same parameter settings. The upper and lower halves of the table show the relative errors of the $\ell_1$ and $\ell_2$ normalized kernels, respectively. In all three parameter settings, the kernels obtained via the FBT method provide the best approximation to the exact solutions, with the smallest relative errors in both the spatial and the Fourier domain.
Overall, the stochastic approach (a Monte Carlo simulation with $\Delta t=0.02$
and $10^8$ samples) performs second best.
Although the finite difference schemes perform less well compared to the more computationally demanding FBT and stochastic approaches, the relative errors of the FD explicit approach are still acceptable: less than $5.7\%$. The $5\%$ offset is understood by the B-spline interpolation needed to compute on a left-invariant grid. Here we note that finite differences do have the advantage of straightforward extensions to non-linear diffusion processes \cite{Citti,Creusen2013,FrankenPhDThesis,Franken2009IJCV}, which will also be employed in the subsequent application section. In the FD implicit approach, a larger step size can be used than in the FD explicit approach, yielding a much faster implementation while still having negligible influence on the relative errors.
Table~\ref{tab:RelativeErrorCompletionComparison} shows the validation results of the numerical completion kernels with three sets of parameters. Again, the $\ell_1$ and $\ell_2$ normalized FBT kernels show the best performance (at most $1.5\%$ relative error) in the comparison.
\begin{table*}[!ht]
\centering
\caption{Completion kernel comparison of the exact analytic solution with the numerical Fourier based techniques, the stochastic methods and the finite difference schemes.}
\begin{tabular}{r|l|l|l}
\toprule
Relative Error & $\textbf{D}=\{0.,0.,0.08\}$& $\textbf{D}=\{0.,0.,0.08\}$ & $\textbf{D}=\{0.,0.,0.18\}$ \\
& $\textbf{a}=(1.,0.,0.)$ & $\textbf{a}=(1.,0.,0.)$ & $\textbf{a}=(1.,0.,0.)$ \\
(\textbf{\%}) & $\alpha=0.01$ & $\alpha=0.05$ & $\alpha=0.05$ \\
\midrule
$\ell_1$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{0.02} \qquad \textbf{1.06}& \textbf{0.11} \qquad \textbf{1.17} & \textbf{0.05} \qquad \textbf{0.52} \\
\textit{Exact-Stochastic} & 2.49 \qquad 3.31 & 2.37 \qquad 5.40 & 1.95 \qquad 4.26\\
\textit{Exact-FDExplicit} & 1.91 \qquad 8.36& 4.29 \qquad 8.68 & 4.57 \qquad 9.03 \\
\midrule
$\ell_2$-norm & Spatial \quad Fourier & Spatial \quad Fourier & Spatial \quad Fourier \\
\midrule
\textit{Exact-FBT} & \textbf{0.94} \qquad \textbf{1.21}& \textbf{1.20} \qquad \textbf{1.50} & \textbf{0.65} \qquad \textbf{0.79} \\
\textit{Exact-Stochastic} & 4.96 \qquad 3.40 & 4.84 \qquad 3.25 & 4.39 \qquad 2.45\\
\textit{Exact-FDExplicit} & 6.60 \qquad 5.50& 7.92 \qquad 6.56 & 8.46 \qquad 6.48 \\
\bottomrule
\end{tabular}%
\caption*{
\textbf{Measurement method abbreviations}:
($\textit{Exact}$) - Ground truth measurements based on the analytic solution by using Mathieu functions in Section~\ref{3GeneralFormsExactSolutions}, ($\textit{FBT}$) - Fourier based techniques in Section~\ref{section:Duitsmatrixalgorithm} and Section~\ref{section:FourierBasedForEnhancement}, ($\textit{Stochastic}$) - Stochastic method in Section~\ref{section:MonteCarloStochasticImplementation}
(with $\Delta t=0.02$ and $10^8$ samples), ($\textit{FDExplicit}$) - Explicit left-invariant finite difference approaches with B-Spline interpolation in Section~\ref{section:Left-invariant Finite Difference Approaches for Contour Enhancement}. The settings of time step size are $\Delta t=0.005$ in the $\textit{FDExplicit}$ scheme.}
\label{tab:RelativeErrorCompletionComparison}%
\end{table*}
\section{Application of Contour Enhancement to Improve Vascular Tree Detection in Retinal Imaging}\label{section:Applications on Retinal Image}
In this section, we show the potential of achieving better vessel tracking results by applying the $SE(2)$ contour enhancement approach to challenging retinal images in which the vascular tree (starting from the optic disk) must be detected. The retinal vasculature provides a convenient means for non-invasive observation of the human circulatory system. A variety of eye-related and systemic diseases such as glaucoma \cite{CassonGlaucoma2012}, age-related macular degeneration, diabetes, hypertension, arteriosclerosis or Alzheimer's disease affect the vasculature and may cause functional or geometric changes \cite{Ikram2013}. Automated quantification of these defects enables
massive screening for systemic and eye-related vascular diseases on the basis of fast and inexpensive imaging modalities, i.e. retinal photography. To automatically extract and assess the state of the retinal vascular tree, vessels have to be segmented, modeled and analyzed. Bekkers et al. \cite{BekkersJMIV} proposed a fully automatic multi-orientation vessel tracking method (ETOS) that performs excellently in comparison with other state-of-the-art algorithms. However, the ETOS algorithm often suffers from low signal-to-noise ratios, crossings and bifurcations, or problematic regions caused by leakages/blobs due to certain diseases. See Figure~\ref{fig:ProblematicRetinalVesselTracking}.
\begin{figure}[htbp]
\centering
\subfloat{\includegraphics[width=.75\textwidth]{ProblematicRetinalVesselTracking.pdf}}
\caption{Three problematic cases for the ETOS tracking algorithm \cite{BekkersJMIV}. From left to right: blurry crossings, small vessels with noise, and small vessels with high curvature.
}
\label{fig:ProblematicRetinalVesselTracking}
\end{figure}
We aim to solve these problems via left-invariant contour enhancement processes on invertible orientation scores as pre-processing for subsequent tracking\cite{BekkersJMIV}, recall Figure~\ref{fig:OSIntro}. In our enhancements, we rely on non-linear extension \cite{Franken2009IJCV} of finite difference implementations of the contour enhancement process to improve adaptation of our model to the data in the orientation score. Finally, the ETOS tracking algorithm \cite{BekkersJMIV} is performed on the enhanced retinal images with respect to various problematic tracking cases, in order to show the benefit of the left-invariant diffusion on $SE(2)$.
As a proof of concept, we show examples of tracking on left-invariantly diffused invertible orientation scores on cases where standard ETOS-tracking without left-invariant diffusion fails, see~\mbox{Figure~\ref{fig:VesselTrackingonCEDOSRetinalImages}.}
\begin{figure}[htbp]
\centering
\includegraphics[width=.75\textwidth]{VesselTrackingonCEDOSRetinalImages.pdf}
\caption{Vessel tracking on retinal images. From top to bottom: the original retinal images with erroneous ETOS tracking, and the enhanced retinal images with accurate tracking after enhancement.}
\label{fig:VesselTrackingonCEDOSRetinalImages}
\end{figure}
All experiments in this section use the same parameters. All retinal images have size $400 \times 400$. Parameters used for tracking are the same as in the ETOS algorithm \cite{BekkersJMIV}: number of orientations $N_o = 36$, wavelet periodicity $2\pi$. The following parameters are used for the non-linear coherence-enhancing diffusion (CED-OS): the spatial scale of the Gaussian kernel for isotropic diffusion is $t_s=\frac{1}{2}\sigma_s^2=12$, the scale for computing Gaussian derivatives is $t_s'=0.15$, the metric $\beta=0.058$, the end time $t=20$, and $c=1.2$ controls the balance between isotropic and anisotropic diffusion; for details see \cite{Franken2009IJCV}.
\section{Conclusion}
We analyzed linear left-invariant diffusion, convection-diffusion and their resolvents on invertible orientation scores, following 3 numerical and 3 exact approaches. In particular, we considered the Fokker--Planck equations of Brownian motion for contour enhancement, and of the direction process for contour completion. We provided 3 exact solution formulas for the generic left-invariant PDE's on $SE(2)$, placing previous exact formulas into context. These formulas involve either infinitely many periodic or non-periodic Mathieu functions, or only 4 non-periodic Mathieu functions.
Furthermore, as resolvent kernels suffer from severe singularities, which we analyzed in this article, we proposed a new time integration via Gamma distributions, corresponding to iterations of resolvent kernels. We derived new asymptotic formulas
for the resulting kernels and showed benefits towards applications, illustrated via stochastic completion fields in Figure~\ref{fig:Gamma}.
Numerical techniques can be categorized into 3 approaches: finite difference, Fourier based and stochastic approaches. Regarding the finite difference schemes, rotation and translation covariance on reasonably sized grids requires B-spline interpolation \cite{Franken2009IJCV} (towards a left-invariant grid), which includes additional numerical blurring. We applied this both to implicit schemes and to explicit schemes with an explicit stability bound. Regarding the Fourier based techniques (which are equivalent to $SE(2)$ Fourier methods, recall Remark~\ref{rem:42}), we established an explicit connection in Theorem \ref{th:RelationofFourierBasedWithExactSolution} to the exact representations in periodic Mathieu functions, from which convergence rates are directly deduced. This is confirmed in the experiments, as these techniques perform best in the numerical comparisons.
We compared the exact analytic solution kernels to the numerically computed kernels for all schemes, computing the relative $\ell_1$ and $\ell_2$ errors in both the spatial and the Fourier domain. We also analyzed errors due to Riemann sum approximations that arise from using the $\textbf{DFT}^{-1}$ instead of the $\textbf{CFT}^{-1}$. Here, we needed to introduce a spatial Gaussian blurring with a small ``inner-scale'' due to finite sampling. This small Gaussian blurring allows us to control truncation errors, to maintain exact solutions, and to reduce the singularities.
We implemented all the numerical schemes in $\textit{Mathematica}$, and constructed the exact kernels based on our own implementation of Mathieu functions to avoid the numerical errors and slow speed caused by $\textit{Mathematica}$'s Mathieu functions.
We showed that the FBT, stochastic and FD approaches provide reliable numerical schemes. Based on the error analysis, we demonstrated that the best numerical results were obtained using the FBT, with negligible differences. The stochastic approach (via a Monte Carlo simulation) performs second best.
The errors of the FD method are larger, but still within an admissible range, and finite differences do allow for non-linear adaptation. Preliminary results in a retinal vessel tracking application show that the PDE's acting in the orientation score domain preserve crossings and help the ETOS algorithm \cite{BekkersJMIV} to achieve more robust tracking.
\section*{Acknowledgements}
The research leading to the results of this article
has received funding from the European Research Council under the European Community's 7th Framework Programme (FP7/2007-2014)/ERC grant agreement No. 335555. The China Scholarship Council (CSC) is gratefully acknowledged for the financial support No. 201206300010.
\begin{figure}[htbp]
\centering
\includegraphics[width=.75\textwidth]{logos.pdf}
\label{fig:logos}
\end{figure}
\subsection{Overview}
Throughout, we work over the complex numbers ${\mathbb{C}}$. Let $C$ be a nonsingular irreducible projective curve of genus $g \geq 2$. The purpose of this paper is to explore cohomological structures for the moduli space of degree $d$ semistable $\mathrm{SL}_n$-Higgs bundles on $C$ with respect to an effective divisor $D$ of degree $\mathrm{deg}(D)>2g-2$. More precisely, we show that the support theorem \cite{dC_SL} and the topological mirror symmetry conjecture \cite{HT, GWZ, MS}, which were proven in the case $\mathrm{gcd}(n,d)=1$, actually hold for \emph{arbitrary} $d$.
For this more general setting, the essential difference with the coprime case is that the moduli space may be singular due to the presence of a strictly semistable locus. Hence it is natural to consider intersection cohomology. Our main tool is an Ng\^o-type support inequality for weak abelian fibrations, recently established in \cite{MS2}, which works for singular ambient spaces and intersection cohomology complexes.
As an immediate application of our results, we also give a proof of a generalized version of the Harder--Narasimhan theorem \cite{HN} for intersection cohomology and arbitrary degree.
\subsection{Moduli of $\mathrm{SL}_n$-Higgs bundles}
We fix $D$ to be an effective divisor of degree $\mathrm{deg}(D)>2g-2$ and we fix $L \in \mathrm{Pic}^d(C)$ to be a degree $d$ line bundle on $C$. We denote by $M_{n,L}$ the moduli space of semistable Higgs bundles
\[
({\mathcal E}, \theta): \quad \theta: {\mathcal E} \to {\mathcal E}\otimes {\mathcal O}_C(D), \quad \mathrm{rank}({\mathcal E})=n, \quad \mathrm{det}({\mathcal E})\simeq L, \quad \mathrm{trace}(\theta) = 0,
\]
where the (semi-)stability is with respect to the slope $\mu({\mathcal E},\theta) = \mathrm{deg}({\mathcal E})/\mathrm{rank}({\mathcal E})$. The moduli space $M_{n,L}$ admits a proper surjective morphism
\begin{equation}\label{Hitchin}
h: M_{n,L} \to A = \bigoplus_{i= 2}^n H^0(C, {\mathcal O}_C(iD)), \quad ({\mathcal E} ,\theta) \mapsto \mathrm{char}(\theta)
\end{equation}
known as the Hitchin fibration \cite{Hit, Hit1}. Here $\mathrm{char}(\theta)$ denotes the characteristic polynomial of the Higgs field $\theta: {\mathcal E} \to {\mathcal E} \otimes {\mathcal O}_C(D)$:
\[
\mathrm{char}(\theta) = (a_2, a_3, \dots, a_n), \quad a_i = \mathrm{trace}(\wedge^i\theta) \in H^0(C, {\mathcal O}_C(iD)).
\]
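Concretely (up to the standard sign conventions for the coefficients of the characteristic polynomial), a point $a=(a_2,\dots,a_n)$ determines the equation

```latex
\[
\lambda^{n} + a_2\,\lambda^{n-2} - a_3\,\lambda^{n-3} + \dots + (-1)^{n} a_n = 0,
\]
```

whose zero locus in $\mathrm{Tot}_C({\mathcal O}_C(D))$, with $\lambda$ the tautological section, is the spectral curve $C_a$.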
Alternatively, we may view a closed point $a\in A$ as a spectral curve $C_a \subset \mathrm{Tot}_C({\mathcal O}_C(D))$ which is a degree $n$ cover over the zero section $C$. Let the elliptic locus $A^{\mathrm{ell}} \subset A$ be the open subset consisting of \emph{integral} spectral curves. The fibers of the restricted Hitchin fibration over $A^{\mathrm{ell}}$
\begin{equation}\label{eqn8}
h^{\mathrm{ell}}: M_{n,L}^{\mathrm{ell}} \rightarrow A^{\mathrm{ell}}
\end{equation}
are compactified Prym varieties of the integral spectral curves $C_a$. In particular, the open subvariety $M_{n,L}^{\mathrm{ell}}$ is nonsingular and contained in the stable locus $M_{n,L}^s$:
\[
M_{n,L}^{\mathrm{ell}} \subset M_{n,L}^s \subset M_{n,L}.
\]
\subsection{Support theorem for $\mathrm{SL}_n$}
By \cite{BBD}, we have the decomposition for the direct image complex of the intersection cohomology complex
\[
Rh_* \mathrm{IC}_{M_{n,L}} \simeq \bigoplus_{\alpha,i} \mathrm{IC}_{Z_{\alpha,i}}({\mathcal L}_{\alpha,i})[-r_i] \in D^b_c(A), \quad r_i \in {\mathbb{Z}}
\]
into (shifted) simple perverse sheaves. Here $D^b_c(-)$ denotes the bounded derived category of constructible sheaves, the $Z_{\alpha,i} \subset A$ are irreducible closed subvarieties, each ${\mathcal L}_{\alpha,i}$ is a simple local system on an open subset of $Z_{\alpha,i}$, and $\mathrm{IC}_{Z_{\alpha,i}}({\mathcal L}_{\alpha,i})$ is the intermediate extension of ${\mathcal L}_{\alpha,i}$ to $Z_{\alpha,i}$. We call the $Z_{\alpha,i}$ the \emph{supports} of the direct image complex $Rh_*\mathrm{IC}_{M_{n,L}}$; they are important invariants of the map $h: M_{n,L} \to A$.
The following theorem, which generalizes de Cataldo's $\mathrm{SL}_n$-support theorem \cite{dC_SL} in the case of $\mathrm{gcd}(n,d)=1$, shows that the decomposition theorem of the Hitchin fibration $h: M_{n,L} \to A$ is governed by the elliptic locus (\ref{eqn8}).
\begin{thm}[Support theorem]\label{thm0.1}
The generic point of any support of ${Rh}_* \mathrm{IC}_{M_{n,L}}$ lies in the elliptic locus $A^{\mathrm{ell}}$.
\end{thm}
In fact, by combining the techniques of \cite{CL, dC_SL} and \cite{MS2}, we prove in Sections \ref{sec1} and \ref{Sec2} a more general support theorem (Theorem \ref{thm1.1}) for certain relative moduli spaces of Higgs bundles associated with a cyclic \'etale Galois cover $\pi: C' \to C$. These moduli spaces are tightly connected to the endoscopic theory for $\mathrm{SL}_n$ \cite{Ngo0,Ngo} and the topological mirror symmetry for Hitchin systems \cite{HT, GWZ, MS}.
\subsection{Topological mirror symmetry}
Motivated by the Strominger--Yau--Zaslow mirror symmetry, Hausel--Thaddeus \cite{HT} conjectured that the moduli of semistable $\mathrm{SL}_n$- and $\mathrm{PGL}_n$-Higgs bundles should have identical (properly interpreted) Hodge numbers. In the case of $\mathrm{gcd}(n,d)=1$, the match of the Hodge numbers for the $\mathrm{SL}_n$- and $\mathrm{PGL}_n$-Higgs moduli spaces was formulated precisely in \cite{HT} using singular cohomology, and was proven recently in \cite{GWZ, LW, MS} by different methods. From the viewpoint of $S$-duality \cite[Section 5.4]{Survey} and the approach of \cite{MS}, the Hausel--Thaddeus conjecture is closely connected to the endoscopy theory and the fundamental lemma for $\mathrm{SL}_n$.
In this paper, we explore the Hausel--Thaddeus conjecture for arbitrary degree $d$. Under the assumption $\mathrm{deg}(D)>2g-2$, we prove that an analog of the Hausel--Thaddeus conjecture holds for intersection cohomology and arbitrary degree $d$. Our approach follows the spirit of \cite{MS}: we view the (refined) Hausel--Thaddeus conjecture \cite[Conjecture 4.5]{Survey} as an extension of Ng\^o's geometric stabilization theorem \cite{Ngo} in his proof of the fundamental lemma of the Langlands program. Our new input is the support theorem for $\mathrm{SL}_n$ and its endoscopic groups (see Theorem \ref{thm1.1}), relying on the framework of \cite{MS2}.
We now introduce some notation and state the main theorem.
Let $\Gamma = \mathrm{Pic}^0(C)[n]$ be the group of $n$-torsion line bundles on $C$. The finite group $\Gamma$ admits a non-degenerate Weil pairing \cite[Section 1.3]{MS}, which, after identifying $\Gamma$ with $H_1(C, {\mathbb{Z}}/n{\mathbb{Z}})$, coincides with the intersection pairing. Hence we obtain a canonical isomorphism between $\Gamma$ and its group of characters $\hat{\Gamma} = \mathrm{Hom}(\Gamma, {\mathbb{G}}_m)$:
\begin{equation}\label{Weil_Pairing}
\Gamma = \hat{\Gamma}.
\end{equation}
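To unwind (\ref{Weil_Pairing}), recall the standard description (a sketch under the identifications above, included only for the reader's convenience): since $\mathrm{Pic}^0(C)$ is an abelian variety of dimension $g$, its $n$-torsion satisfies
\[
\Gamma = \mathrm{Pic}^0(C)[n] \simeq ({\mathbb{Z}}/n{\mathbb{Z}})^{2g} \simeq H_1(C, {\mathbb{Z}}/n{\mathbb{Z}}),
\]
and under this identification the Weil pairing $\langle -,-\rangle: \Gamma \times \Gamma \to \mu_n$ becomes the (symplectic, hence non-degenerate) intersection pairing. Each $\gamma \in \Gamma$ thus determines a character $\kappa_\gamma := \langle \gamma, - \rangle \in \hat{\Gamma}$, and (\ref{Weil_Pairing}) sends $\gamma \mapsto \kappa_\gamma$.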
For the $\mathrm{SL}_n$-Higgs moduli space $M_{n,L}$ associated with the line bundle $L$, the corresponding $\mathrm{PGL}_n$-Higgs moduli space $[M_{n,L}/\Gamma]$ is a Deligne--Mumford stack obtained as the quotient of the natural finite group action of $\Gamma = \mathrm{Pic}^0(C)[n]$ on $M_{n,L}$:
\[
{\mathcal L} \cdot ({\mathcal E}, \theta) = ({\mathcal E}\otimes {\mathcal L}, \theta), \quad \quad {\mathcal L} \in \Gamma,\quad ({\mathcal E}, \theta) \in M_{n,L}.
\]
Note that when $\mathrm{gcd}(n,d) \neq 1$, the $\mathrm{SL}_n$- and the $\mathrm{PGL}_n$-Higgs moduli spaces are singular, as a variety and as a Deligne--Mumford stack respectively. For an element $\gamma \in \Gamma$, we denote by $M^\gamma_{n,L} \subset M_{n,L}$ the fixed locus of $\gamma$. Let
\[
h_\gamma: M^\gamma_{n,L} \to A_\gamma := h(M^\gamma_{n,L}) \subset A
\]
be the morphism induced by the Hitchin fibration (\ref{Hitchin}); it recovers $h$ when $\gamma = 0$. We denote by $i_\gamma: A_\gamma \hookrightarrow A$ the closed embedding and by $d_\gamma$ the codimension of $A_\gamma$ in $A$. The $\Gamma$-action on $M_{n,L}$ induces a $\Gamma$-action on the fixed locus $M^\gamma_{n,L}$. This action is fiberwise with respect to the morphism $h_\gamma$, which induces a canonical decomposition
\[
{Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L}} = \bigoplus_{\kappa} \left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L}}\right)_\kappa \in D^b_c(A_\gamma), \quad \kappa \in \hat{\Gamma}
\]
into eigen-subcomplexes \cite[Lemma 3.2.5]{NL}. The following theorem is a sheaf-theoretic version of the Hausel--Thaddeus conjecture for the divisor $D$, which resembles the fundamental lemma.
\begin{thm}\label{thm0.2}
Assume that $\gamma \in \Gamma$ and $\kappa \in \hat{\Gamma}$ are matched via the Weil pairing (\ref{Weil_Pairing}).
\begin{enumerate}
\item[(a)](Endoscopic decomposition) We have an isomorphism
\begin{equation}\label{thm0.2_a}
\left({Rh_\gamma}_* \mathrm{IC}_{M_{n,L}}\right)_\kappa \simeq {i_\gamma}_*\left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L}}\right)_\kappa [-2d_\gamma] \in D^b_c(A).
\end{equation}
\item[(b)](Transfer) Assume $L'\in \mathrm{Pic}^{d'}(C)$ with $\mathrm{gcd}(d,n)=\mathrm{gcd}(d',n)$. Then we have
\[
\left({Rh_\gamma}_*\mathrm{IC}_{M^\gamma_{n,L}}\right)_\kappa \simeq \left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L'}}\right)_{q\kappa} \in D^b_c(A_\gamma)
\]
where $q$ is an integer coprime to $n$ satisfying that
\begin{equation}\label{condition}
d \equiv d'q \quad\mathrm{mod}~~~n.
\end{equation}
\end{enumerate}
Moreover, both (a) and (b) hold in the bounded derived categories $D^b\mathrm{MHM}(-)$ of mixed Hodge modules refining $D_c^b(-)$.
\end{thm}
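To illustrate the condition (\ref{condition}) with a concrete (hypothetical) choice of numbers: take $n=4$, $d=2$ and $d'=6$, so that $\mathrm{gcd}(d,n) = \mathrm{gcd}(d',n)=2$. An admissible $q$ must be coprime to $4$ and satisfy $2 \equiv 6q \ (\mathrm{mod}~4)$; since $6q \equiv 2q \ (\mathrm{mod}~4)$, this forces $q$ to be odd, and $q=3$ works:
\[
d'q = 18 \equiv 2 = d \quad (\mathrm{mod}~4), \quad \mathrm{gcd}(3,4)=1.
\]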
Theorem \ref{thm0.2} concerns Higgs bundles with respect to a divisor $D$ satisfying $\mathrm{deg}(D)>2g-2$. By taking global cohomology, it recovers an identity between the (stringy) \emph{intersection} E-polynomials of the $\mathrm{SL}_n$- and the $\mathrm{PGL}_n$-Higgs moduli spaces. This is analogous to the original Hausel--Thaddeus conjecture \cite{HT, Survey}; see Section \ref{HT_conj}. A natural question is whether the intersection E-polynomial version of the Hausel--Thaddeus conjecture holds for $D= K_C$. This was recently conjectured by Mauri \cite{Mauri}, who also verified it in the case $n=2$.
\begin{rmk}
In \cite[Remark 3.30]{Survey}, Hausel proposed that a version of the topological mirror symmetry conjecture \cite{HT} should hold without the coprime assumption between the degree and the rank, and asked which cohomology theory one should use to formulate it. As mentioned above, Mauri proposed to use \emph{intersection cohomology}. Theorem \ref{thm0.2} provides further evidence that intersection cohomology is the correct theory for formulating the topological mirror symmetry for possibly singular moduli spaces. Our reasons come naturally from the decomposition theorem \cite{BBD} and the support theorem (Theorem \ref{thm1.1}).
\end{rmk}
\subsection{The Harder--Narasimhan theorem}\label{0.1} The moduli space $N_{n,L}$ of (slope-)semistable vector bundles on $C$ of rank $n$ and determinant isomorphic to $L$ is an irreducible projective variety
which has been studied intensively for decades. Similar to the Higgs case, the finite group $\Gamma = \mathrm{Pic}^0(C)[n]$ acts on $N_{n,L}$ via tensor product
\begin{equation}\label{action}
{\mathcal L} \cdot {\mathcal E} = {\mathcal L} \otimes {\mathcal E}, \quad \quad {\mathcal L} \in \Gamma= \mathrm{Pic}^0(C)[n], \quad {\mathcal E} \in N_{n,L}.
\end{equation}
Harder and Narasimhan \cite{HN} proved that, when $\mathrm{gcd}(n, d)=1$, the $\Gamma$-action on the cohomology $H^*(N_{n,L}, {\mathbb{C}})$ induced by (\ref{action}) is trivial. Other proofs of the Harder--Narasimhan theorem have been found by Atiyah--Bott \cite{AB} and Hausel--Pauly \cite{HP}.
The following theorem is a generalization of the Harder--Narasimhan theorem for arbitrary degree $d$. It is an immediate consequence of Theorem \ref{thm0.2}.
\begin{thm}\label{thm0.3}
The $\Gamma$-action on $\mathrm{IH}^*(N_{n,L}, {\mathbb{C}})$ induced by (\ref{action}) is trivial. Consequently, we obtain the match of the intersection cohomology groups for the varieties $N_{n,L}$ and $\check{N}_{n,L} := N_{n,L}/\Gamma$:
\begin{equation}\label{SL_PGL}
\mathrm{IH}^*(N_{n,L}, {\mathbb{C}}) = \mathrm{IH}^*(\check{N}_{n,L}, {\mathbb{C}}).
\end{equation}
\end{thm}
The varieties $N_{n,L}$ and $\check{N}_{n,L}$ may be viewed as the moduli spaces of semistable $\mathrm{SL}_n$- and $\mathrm{PGL}_n$-bundles on the curve $C$, and Theorem \ref{thm0.3} shows that they share the same intersection cohomology.
An alternative proof of Theorem \ref{thm0.3} may be obtained from Kirwan surjectivity for intersection cohomology \cite{Kir0, Kiem}.\footnote{We are grateful to Young-Hoon Kiem and Mirko Mauri for very interesting and helpful discussions on this.} Our approach is to realize Theorem \ref{thm0.3} as a consequence of (a version of) the Hausel--Thaddeus topological mirror symmetry for Hitchin systems; this is close to \cite{HP} in spirit. The proof of Theorem \ref{thm0.3} given here suggests that the isomorphism (\ref{SL_PGL}) is essentially a consequence of the fact that the Hitchin systems for $\mathrm{SL}_n$ and $\mathrm{PGL}_n$ share the same Hitchin base, over which the two decomposition theorems coincide after restriction to the generic points of the supports. Hence a version of (\ref{SL_PGL}) may hold for a general group $G$ and its Langlands dual $G^\vee$, which we will explore in subsequent work.
\subsection{Acknowledgements}
We would like to thank Young-Hoon Kiem and Mirko Mauri for very helpful discussions. J.S. was supported by the NSF grant DMS-2000726.
\section{Support theorems for Hitchin fibrations}\label{sec1}
Throughout the rest of the paper, we fix a curve $C$ of genus $g \geq 2$, an integer $n \geq 2$, and a line bundle $L \in \mathrm{Pic}^d(C)$. Let $D$ be an effective divisor of degree $\mathrm{deg}(D) > 2g-2$.
\subsection{Support theorem}
Assume $n = mr$. Following \cite{MS}, we introduce the endoscopic moduli space $M_{r,L}(\pi)$ associated with a cyclic \'etale Galois cover $\pi: C' \to C$ which plays a crucial role in the cohomological study of $M_{n,L}$.
Let $\pi: C' \to C$ be a degree $m$ cyclic \'etale Galois cover with Galois group $G_\pi \simeq {\mathbb{Z}}/m{\mathbb{Z}}$. We denote by $M_{r,L}(\pi)$ the moduli of rank $r$ semistable Higgs bundles $({\mathcal E}, \theta)$ on $C'$ with respect to the divisor $D' : = \pi^* D$ satisfying that
\[
\mathrm{det}(\pi_* {\mathcal E}) \simeq L , \quad \mathrm{trace}(\pi_*\theta) = 0.
\]
Here the pushforward $\pi_*\theta$ is an element of $H^0(C, \pi_* {\mathcal O}_{C'}(D'))$, and $\mathrm{trace}(\pi_*\theta)$ is its projection to the direct summand $H^0(C, {\mathcal O}_C(D))$:
\[
\mathrm{trace}(\pi_*\theta) \in H^0(C, {\mathcal O}_C(D)) \subset H^0(C, \pi_*{\mathcal O}_{C'}(D')).
\]
The moduli space $M_{r,L}(\pi)$ lies in the moduli of semistable $\mathrm{GL}_r$-Higgs bundles on $C'$, and the Hitchin fibration associated with the latter induces a Hitchin fibration
\begin{equation}\label{Hitchin_relative}
h_\pi: M_{r,L}(\pi) \to A(\pi);
\end{equation}
see \cite[Section 1.2]{MS} for more details. The Hitchin base $A(\pi)$ naturally sits inside the $\mathrm{GL}_r$-Hitchin base $\widetilde{A}'$ associated with the curve $C'$,
\[
A(\pi) \subset \widetilde{A}': = \bigoplus_{i=1}^r H^0(C', {\mathcal O}_{C'}(iD')).
\]
We define the \emph{elliptic locus} $A^\mathrm{ell}(\pi) \subset A(\pi)$ to be the intersection of $A(\pi)$ with the elliptic locus of $\widetilde{A}'$ parameterizing integral spectral curves over $C'$.
Our main result of Sections \ref{sec1} and \ref{Sec2} is a support theorem for the Hitchin fibration (\ref{Hitchin_relative}) associated with the endoscopic moduli spaces.
\begin{thm}[Support Theorem]\label{thm1.1}
The generic point of any support of ${Rh_\pi}_* \mathrm{IC}_{M_{r,L}(\pi)}$ lies in the elliptic locus $A^{\mathrm{ell}}(\pi)$.
\end{thm}
When $m =1$ and $\pi = \mathrm{id}$, the moduli space $M_{r,L}(\pi)$ and its Hitchin fibration (\ref{Hitchin_relative}) recover the $\mathrm{SL}_n$-Higgs moduli space $M_{n,L}$ and (\ref{Hitchin}). Hence Theorem \ref{thm1.1} recovers Theorem \ref{thm0.1}. It also generalizes \cite[Theorem 2.3]{MS} for nonsingular ambient spaces.
Theorem \ref{thm1.1} is a first step towards the study of the global topology for $\mathrm{SL}_n$-Higgs moduli space $M_{n,L}$ and the associated endoscopic moduli spaces. It shows that their global intersection cohomology groups are governed by the (nonsingular) elliptic parts. A similar phenomenon was proven for the $\mathrm{GL}_n$-Higgs moduli spaces and moduli of 1-dimensional semistable sheaves on toric del Pezzo surfaces \cite{MS2}.
\subsection{Weak abelian fibrations}
Since in general the total moduli space $M_{r,L}(\pi)$ may be singular, we use the framework developed in \cite{MS2} to study the Hitchin fibration $h_\pi: M_{r,L}(\pi) \to A(\pi)$. We first show that $h_\pi$ admits the structure of a weak abelian fibration.
For a smooth $A(\pi)$-group scheme $g_\pi: P(\pi) \to A(\pi)$ with geometrically connected fibers acting on $M_{r,L}(\pi)$, we say that the triple $(M_{r,L}(\pi), A(\pi), P(\pi))$ is a \emph{weak abelian fibration} of relative dimension $e$, if
\begin{enumerate}
\item[(a)] every fiber of the map $g_\pi$ is pure of dimension $e$, and $M_{r,L}(\pi)$ has pure dimension \[\mathrm{dim}M_{r,L}(\pi) =e +\mathrm{dim}A(\pi),
\]
\item[(b)] the action of $P(\pi)$ on $M_{r,L}(\pi)$ has \emph{affine} stabilizers, and
\item[(c)] the Tate module $T_{\overline{\mathbb{Q}}_l}(P(\pi))$ associated with the group scheme $P(\pi)$ is polarizable.
\end{enumerate}
We refer to \cite[Section 2]{MS2} for more details about these conditions.
In the following, we complete $h_\pi: M_{r,L}(\pi) \to A(\pi)$ into a weak abelian fibration by constructing the group scheme $P(\pi)$, following \cite[Section 4]{dC_SL} and \cite[Section 2.4]{MS}.
Let ${\mathcal C} \to A(\pi)$ be the universal spectral curve given by the restriction of the universal spectral curve on $\widetilde{A}'$. The relative degree 0 Picard variety\footnote{It parameterizes line bundles on the closed fibers whose restrictions to each irreducible components are of degree 0.} $\mathrm{Pic}^0({\mathcal C}/A(\pi))$ admits a map
\[
\mathrm{Pic}^0({\mathcal C}/A(\pi))\to \mathrm{Pic}^0(C)\times A(\pi)
\]
between $A(\pi)$-group schemes as the composition (see the paragraph following \cite[Proposition 2.5]{MS}):
\[
\mathrm{Pic}^0({\mathcal C}/A(\pi)) \to \mathrm{Pic}^0(C')\times A(\pi) \to \mathrm{Pic}^0(C)\times A(\pi).
\]
We define $P(\pi)$ to be the identity component of the kernel of this map, which is naturally an $A(\pi)$-group scheme.\footnote{We note that the group scheme $P(\pi)$ is denoted by $P^0$ in \cite{MS}.} By viewing a Higgs bundle in $M_{r,L}(\pi)$ as a pure 1-dimensional semistable sheaf on the spectral curve $C_a$, the $A(\pi)$-group scheme $P(\pi)$ acts on $M_{r,L}(\pi)$ via tensor product. It was proven in \cite[Proposition 2.6]{MS} that $(M_{r,L}(\pi), A(\pi), P(\pi))$ is a weak abelian fibration of relative dimension $e := \mathrm{dim}M_{r,L}(\pi) - \mathrm{dim}A(\pi)$ when $\mathrm{gcd}(n,d)=1$. In fact, this holds also in the singular case:
\begin{prop}[\emph{c.f. \cite[Proposition 2.6]{MS}}]\label{prop2.2}
The triple $(M_{r,L}(\pi), A(\pi), P(\pi))$ is a weak abelian fibration of relative dimension $e= \mathrm{dim}M_{r,L}(\pi) - \mathrm{dim}A(\pi)$.
\end{prop}
\begin{proof}
The condition (a) is obvious. The condition (c) only concerns the group scheme $P(\pi)$ and was already verified in (ii) of \cite[Proof of Proposition 2.6]{MS}. As in (i) of \cite[Proof of Proposition 2.6]{MS}, the affineness of the stabilizers for the $P(\pi)$-action on $M_{r,L}(\pi)$ follows from the same statement for the corresponding $\mathrm{GL}_r$-Higgs moduli space \cite[Lemma 3.5.4]{dCRS}, since the stabilizers of the $P(\pi)$-action are closed subgroups of the stabilizers of the $\mathrm{Pic}^0({\mathcal C}/\widetilde{A}')$-action. Hence the condition (b) holds as well.
\end{proof}
\subsection{$\delta$-inequalities}
For a closed point $a \in A(\pi)$, we denote by $\delta(a)$ the dimension of the affine part of the algebraic group $P(\pi)_a$ over $a$. This defines an upper semi-continuous function
\[
\delta: A(\pi) \to {\mathbb{N}}, \quad a \mapsto \delta(a).
\]
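For orientation (a standard fact in the $\mathrm{GL}_r$-setting, stated here only as a guide to the meaning of $\delta$): over a point $a$ whose spectral curve $C_a$ is integral, the fiber of the Picard group scheme is $\mathrm{Pic}^0(C_a)$, and the dimension of its maximal affine subgroup is the $\delta$-invariant
\[
\delta(C_a) = p_a(C_a) - g(\widetilde{C}_a),
\]
the difference between the arithmetic genus of $C_a$ and the genus of its normalization $\widetilde{C}_a$; in particular $\delta$ vanishes exactly when $C_a$ is nonsingular.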
For a closed subvariety $Z \subset A(\pi)$, we define $\delta_Z$ to be the minimal value of the function $\delta$ on $Z$. Following the strategy of \cite{CL, dC_SL}, it was proven in \cite[Section 2]{MS} that $\delta$-inequalities of the group scheme $P(\pi)$ effectively control the decomposition theorem for $h_\pi: M_{r,L}(\pi) \to A(\pi)$, as we now review.
A key observation of \cite{MS} is that, when $\mathrm{deg}(D)> 2g-2$, a combination of the multi-variable $\delta$-inequality \cite[Proposition 2.7]{MS} and the support inequality (\ref{support}) below implies that the decomposition theorem of $h_\pi: M_{r,L}(\pi) \to A(\pi)$ has no support with generic point lying in $A(\pi) \setminus A^{\mathrm{ell}}(\pi)$.
\begin{prop}[\cite{MS} Section 2.5: Proof of Theorem 2.3 (a)] \label{prop2.3}
Assume that for any support $Z$ of ${Rh_\pi}_*\mathrm{IC}_{M_{r,L}(\pi)}$, we have
\begin{equation}\label{support}
\mathrm{codim}_{A(\pi)}Z \leq \delta_Z.
\end{equation}
Then the generic points of all supports are contained in $A^{\mathrm{ell}}(\pi)$.
\end{prop}
When the ambient space $M_{r,L}(\pi)$ is nonsingular, the support inequality (\ref{support}) follows from Ng\^o's work \cite{Ngo}. A version allowing singular ambient spaces was established recently in \cite{MS2}, generalizing Ng\^o's original support inequality.
Recall that $e$ is the relative dimension for the weak abelian fibration $(M_{r,L}(\pi),A(\pi),P(\pi))$ of Proposition \ref{prop2.2}.
\begin{thm}[\cite{MS2} Theorem 1.8]\label{thm2.4}
Suppose we have the vanishing
\begin{equation}\label{truncation}
\tau_{>2e}\left( Rh_*\mathrm{IC}_{M_{r,L}(\pi)}[-\mathrm{dim}M_{r,L}(\pi)]\right) = 0,
\end{equation}
where $\tau_{>\bullet}(-)$ denotes the standard truncation functor.
Then the inequality (\ref{support}) holds for any support $Z$.
\end{thm}
As a consequence of Proposition \ref{prop2.3} and Theorem \ref{thm2.4}, Theorem \ref{thm1.1} follows from the relative dimension bound (\ref{truncation}), which we prove in the next section.
\section{Proper approximations and support theorems}\label{Sec2}
\subsection{Overview}
The main purpose of this section is to complete the proof of Theorem \ref{thm1.1}. As we explained at the end of Section \ref{sec1}, it suffices to prove the relative dimension bound (\ref{truncation}) which we complete in the following.
\subsection{Proper approximations}
We follow the strategy of \cite[Section 3]{MS2} to prove (\ref{truncation}).
Let $q: {\mathcal W} \to W$ be a morphism from a nonsingular Artin stack of finite type to an algebraic variety. Modelled on \cite[Proposition 3.6]{MS2}, we say that $q$ has \emph{a proper approximation} if, for any $R >0$, there exist a nonsingular scheme $W_R$ and an Artin stack ${\mathcal X}_R$ fitting into a commutative diagram
\begin{equation}\label{diagram}
\begin{tikzcd}[column sep=small]
W_R \arrow[dr, "p_W"] \arrow[rr, hook, "j"] & & {\mathcal X}_R \arrow[dl, "p_{\mathcal X}"] \\
& {\mathcal W} &
\end{tikzcd}
\end{equation}
satisfying the following properties:
\begin{enumerate}
\item[(a)] $p_{\mathcal X}$ is an affine space bundle,
\item[(b)] $j: W_R \hookrightarrow {\mathcal X}_R$ is an open immersion,
\item[(c)] the composition $q_R: W_R \xrightarrow{p_W}{\mathcal W} \xrightarrow{q} W$ is projective, and
\item[(d)] for the complement ${\mathcal Z}_R: = {\mathcal X}_R \setminus W_R$, we have
\[
\mathrm{codim}_{{\mathcal X}_R}\left( {\mathcal Z}_R\right)>R.
\]
\end{enumerate}
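A minimal toy example of a proper approximation (ours, not from \cite{MS2}): take ${\mathcal W} = B{\mathbb{G}}_m$, $W = \mathrm{pt}$, and $q$ the structure map. For any $R>0$, set
\[
{\mathcal X}_R = [{\mathbb{A}}^{R+1}/{\mathbb{G}}_m], \quad W_R = ({\mathbb{A}}^{R+1}\setminus \{0\})/{\mathbb{G}}_m = {\mathbb{P}}^R,
\]
where ${\mathbb{G}}_m$ acts by scaling. Then $p_{\mathcal X}: {\mathcal X}_R \to B{\mathbb{G}}_m$ is an affine space bundle of rank $R+1$, $j: {\mathbb{P}}^R \hookrightarrow {\mathcal X}_R$ is an open immersion, $q_R: {\mathbb{P}}^R \to \mathrm{pt}$ is projective, and ${\mathcal Z}_R = [\{0\}/{\mathbb{G}}_m] \simeq B{\mathbb{G}}_m$ has codimension $R+1 > R$ in ${\mathcal X}_R$.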
\begin{prop}\label{Prop3.2}
Assume that $q: {\mathcal W} \to W$ has a proper approximation. Then the following statements hold.
\begin{enumerate}
\item[(1)] We have a splitting
\begin{equation}\label{splitting}
Rq_* {\mathbb{C}} \simeq \mathrm{IC}_W[-\mathrm{dim}W] \oplus {\mathcal K} \in D_c^+(W).
\end{equation}
\item[(2)] Let $q': {\mathcal W}' \to W'$ be the pullback of $q$ along a morphism $f: W' \to W$ with ${\mathcal W}'$ a nonsingular stack. Then $q'$ has a proper approximation.
\end{enumerate}
\end{prop}
\begin{proof}
(1) follows from \cite[Section 3.4]{MS2}. In fact, although \cite[Proposition 3.4]{MS2} concerns a more specific geometry, the proof only relies on the diagram (\ref{diagram}) and the properties (a-d) above. More precisely, we view the complex
\[
Rq_* {\mathbb{C}} = R(q \circ p_{\mathcal X})_* {\mathbb{C}}
\]
as a homotopy colimit of truncations of the direct image complexes $Rq_{R*} {\mathbb{C}}$, and use the decomposition theorem for the projective morphism $q_R: W_R \to W$ to deduce the desired splitting (\ref{splitting}).
(2) is deduced by pulling back the diagram (\ref{diagram}) along $f: W' \to W$.
\end{proof}
\subsection{Connecting to $\mathrm{GL}_r$-Hitchin fibrations}
Recall the Hitchin fibration $h_\pi: M_{r,L}(\pi) \to A(\pi)$ associated with $\pi: C' \to C$ with relative dimension
\[
e = \mathrm{dim}M_{r,L}(\pi) - \mathrm{dim}A(\pi).
\]
To verify the relative dimension bound (\ref{truncation}) for $M_{r,L}(\pi)$, we consider the stack ${\mathcal M}_{r,L}(\pi)$ of semistable Higgs bundles $({\mathcal E}, \theta)$ with $\mathrm{det}(\pi_*{\mathcal E})\simeq L \in \mathrm{Pic}^d(C)$ and $\mathrm{trace}( \pi_*\theta) = 0$. We denote by $q:{\mathcal M}_{r,L}(\pi) \to M_{r,L}(\pi)$ the map from the stack to the good moduli space.
For our purpose, we also consider the $\mathrm{GL}_r$-Hitchin fibration $\widetilde{h}: \widetilde{M}'_{r,d} \to \widetilde{A}'$ associated with the curve $C'$. Here $\widetilde{M}'_{r,d}$ is the moduli space of semistable Higgs bundles
\[
({\mathcal E}, \theta), \quad \quad \theta: {\mathcal E} \to {\mathcal E}\otimes{\mathcal O}_{C'}(D'), \quad \quad D' = \pi^*D
\]
of rank $r$ and degree $d$ on $C'$, and $\widetilde{h}$ is the Hitchin fibration sending $({\mathcal E}, \theta)$ to its characteristic polynomial
\[
\mathrm{char}(\theta) \in \widetilde{A}' = \bigoplus_{i=1}^r H^0(C', {\mathcal O}_{C'}(iD')).
\]
We denote by $\widetilde{{\mathcal M}}'_{r,d}$ the corresponding moduli stack with the natural morphism $\widetilde{q}: \widetilde{{\mathcal M}}'_{r,d} \to \widetilde{M}'_{r,d}$. We recall the following proposition from \cite{MS2} concerning $\widetilde{{\mathcal M}}'_{r,d}$.
\begin{prop}[\cite{MS2} Proposition 2.9 (2) and Proposition 3.6] \label{Prop3.3}
The stack $\widetilde{{\mathcal M}}'_{r,d}$ is nonsingular, and $\widetilde{q}: \widetilde{{\mathcal M}}'_{r,d} \to \widetilde{M}'_{r,d}$ has a proper approximation.
\end{prop}
Now we connect the moduli spaces and stacks for the endoscopic groups and $\mathrm{GL}_r$ via the construction of \cite[Section 5]{MS}.
We consider the moduli space $\widetilde{M}_{1,0}$ (resp. moduli stack $\widetilde{{\mathcal M}}_{1,0}$) of Higgs bundles on $C$ with rank 1 and degree 0. More concretely, they can be described as:
\[
\widetilde{M}_{1,0} = \mathrm{Pic}^0(C)\times H^0(C, {\mathcal O}_C(D)), \quad \widetilde{{\mathcal M}}_{1,0} = {{\mathcal P}}ic^0(C)\times H^0(C, {\mathcal O}_C(D))
\]
where $\mathrm{Pic}^0(-)$ and ${{\mathcal P}}ic^0(-)$ stand for the degree 0 Picard scheme and stack respectively. We denote by
\[
q_P: \widetilde{{\mathcal M}}_{1,0} \to \widetilde{M}_{1,0}
\]
the natural morphism. The group scheme $ \widetilde{M}_{1,0}$ acts on $\widetilde{M}'_{r,d}$:
\[
({\mathcal L}, \sigma) \cdot ({\mathcal E}, \theta) = (\pi^*{\mathcal L} \otimes {\mathcal E}, \pi^*\sigma + \theta), \quad \quad ({\mathcal L}, \sigma) \in \widetilde{M}_{1,0}, \quad ({\mathcal E}, \theta) \in \widetilde{M}'_{r,d}
\]
which induces a morphism
\[
t: \widetilde{M}_{1,0} \times M_{r,L}(\pi) \to \widetilde{M}'_{r,d}
\]
by restricting the action to $M_{r,L}(\pi) \subset \widetilde{M}'_{r,d}$. The map $t$ can be interpreted as the quotient map by the finite group $\Gamma = \mathrm{Pic}^0(C)[n]$ acting diagonally on the two factors; see \cite[Section 5.3]{MS}. Similarly, we have the $\Gamma$-quotient map for the moduli stacks:
\[
\widetilde{{\mathcal M}}_{1,0} \times {\mathcal M}_{r,L}(\pi) \to \widetilde{{\mathcal M}}'_{r,d}
\]
inducing the following Cartesian diagram
\begin{equation}\label{BC}
\begin{tikzcd}
\widetilde{{\mathcal M}}_{1,0} \times {\mathcal M}_{r,L}(\pi) \arrow[r] \arrow[d]
& \widetilde{{\mathcal M}}'_{r,d} \arrow[d] \\
\widetilde{M}_{1,0} \times M_{r,L}(\pi) \arrow[r, "t"]
& \widetilde{M}'_{r,d}
\end{tikzcd}
\end{equation}
where the horizontal arrows are quotient maps by the $\Gamma$-actions and the vertical arrows are the maps from the stacks to the good moduli spaces.
\begin{prop}\label{Prop3.4}
The moduli stack ${\mathcal M}_{r,L}(\pi)$ is nonsingular, and the left vertical map of (\ref{BC})
\[
g:= q_P \times q: \widetilde{{\mathcal M}}_{1,0} \times {\mathcal M}_{r,L}(\pi) \to \widetilde{M}_{1,0} \times M_{r,L}(\pi)
\]
has a proper approximation.
\end{prop}
\begin{proof}
By the discussion in the proof of \cite[Proposition 4.1]{MS}, the obstruction space for an element $({\mathcal E}, \theta) \in {\mathcal M}_{r,L}(\pi)$ is the second cohomology group of the following complex
\[
\left[(\pi_*{\mathcal E}{nd}({\mathcal E}))_0 \xrightarrow{\pi_*\mathrm{ad}(\theta)} (\pi_*{\mathcal E}{nd}({\mathcal E}))_0\otimes {\mathcal O}_C(D)\right]
\]
obtained by removing the trace from the pushforward of the complex
\begin{equation}\label{obstruction}
\left[{\mathcal E}{nd}({\mathcal E}) \xrightarrow{\mathrm{ad}(\theta)} {\mathcal E}{nd}({\mathcal E})\otimes {\mathcal O}_{C'}(D')\right].
\end{equation}
Here $(\pi_*{\mathcal E}{nd}({\mathcal E}))_0$ denotes the kernel with respect to the trace on the curve $C$:
\[
\mathrm{tr}_C: \pi_* {\mathcal E}{nd}({\mathcal E}) \xrightarrow{\pi_*\mathrm{tr}_{C'}} \pi_*{\mathcal O}_{C'} \to {\mathcal O}_C.
\]
In particular, the obstruction space for $({\mathcal E}, \theta) \in {\mathcal M}_{r,L}(\pi)$ is a subspace of the second cohomology group of (\ref{obstruction}) on $C'$, which is the obstruction space for $({\mathcal E}, \theta) \in \widetilde{{\mathcal M}}'_{r,d}$ viewed as a $\mathrm{GL}_r$-Higgs bundle on $C'$. Its vanishing follows from (the proof of) Proposition \ref{Prop3.3} on the smoothness of $\widetilde{{\mathcal M}}'_{r,d}$. This shows that ${\mathcal M}_{r,L}(\pi)$ is nonsingular.
Consequently, we obtain the smoothness of $\widetilde{{\mathcal M}}_{1,0} \times {\mathcal M}_{r,L}(\pi)$. The second part is a corollary of Proposition \ref{Prop3.2} (2) and Proposition \ref{Prop3.3}.
\end{proof}
By Propositions \ref{Prop3.2} (1) and \ref{Prop3.4}, we get the following result.
\begin{cor}\label{Cor3.5}
We have a splitting
\begin{equation}\label{eqn15}
Rg_* {\mathbb{C}} \simeq \mathrm{IC}_{\widetilde{M}_{1,0} \times M_{r,L}(\pi) }[-\mathrm{dim}\widetilde{M}_{1,0} -\mathrm{dim}M_{r,L}(\pi)] \oplus {\mathcal K}
\end{equation}
with ${\mathcal K}$ some complex bounded from below.
\end{cor}
\subsection{Proof of Theorem \ref{thm1.1}}\label{Sec2.4}
We verify (\ref{truncation}) in this section which completes the proof of Theorem \ref{thm1.1}. For convenience, we use the following simplified notation (only) in Section \ref{Sec2.4}:
\[
\begin{split}
H:= \widetilde{M}_{1,0} ,\quad M:= M_{r,L}(\pi),\quad \widetilde{M}':=
\widetilde{M}'_{r,d}, \\ {\mathcal H}:= \widetilde{{\mathcal M}}_{1,0}, \quad {\mathcal M}:= {\mathcal M}_{r,L}(\pi), \quad \widetilde{{\mathcal M}}':= \widetilde{{\mathcal M}}'_{r,d}.
\end{split}
\]
\medskip
\noindent {\bf Fact 1.} For the morphism $q: {\mathcal M} \to M$, we have a splitting
\[
Rq_* {\mathbb{C}} \simeq \mathrm{IC}_M[-\dim M] \oplus {\mathcal K}' .
\]
\begin{proof}[Proof of Fact 1.]
Since $H$ is nonsingular, we have
\[
\mathrm{IC}_{H\times M} \simeq {\mathbb{C}}_H[\mathrm{dim}H] \boxtimes \mathrm{IC}_M.
\]
On the other hand, the left-hand side of (\ref{eqn15}) is equal to
\[
Rg_* {\mathbb{C}} = \bigoplus_{i\geq 0}{\mathbb{C}}_H \boxtimes Rq_* {\mathbb{C}}_{M} [-2i].
\]
Hence by restricting (\ref{eqn15}) to $\mathrm{pt}\times M \subset H \times M$, we obtain that
\[
\bigoplus_{i\geq 0 } Rq_* {\mathbb{C}}_M [-2i]\simeq \mathrm{IC}_M[-\mathrm{dim}M] \oplus \cdots \in D^+_c(M).
\]
Since $\mathrm{IC}_M[-\mathrm{dim}M]$ is simple, it has to be a direct summand of some $Rq_*{\mathbb{C}}_M[-2k]$. By comparing over the nonsingular locus of $M$, we see that $k=0$.
\end{proof}
\medskip
\noindent {\bf Fact 2.} Let $h_{\mathcal M}: {\mathcal M} \to A(\pi)$ be the composition
\[
h_{\mathcal M}: {\mathcal M} \xrightarrow{q} M \xrightarrow{h_\pi} A(\pi).
\]
Then we have
\[
\tau_{>2e}\left(Rh_{{\mathcal M}!} {\mathbb{C}}_{\mathcal M}\right) =0, \quad \quad e = \mathrm{dim}M-\mathrm{dim}A(\pi) = \mathrm{dim}{\mathcal M}-\mathrm{dim}A(\pi)+1.
\]
\begin{proof}[Proof of Fact 2]
We consider the map $h_{\widetilde{{\mathcal M}}'} : \widetilde{{\mathcal M}}' \to \widetilde{A}'$ given as the composition
\[
h_{\widetilde{{\mathcal M}}'}= \widetilde{h}\circ \widetilde{q}: \widetilde{{\mathcal M}}' \to \widetilde{M}' \to \widetilde{A}'.
\]
By \cite[Proposition 2.9 (1)]{MS2} (see also \cite[Section 10]{CL}) we have the dimension bound for any closed fiber:
\[
\mathrm{dim}h^{-1}_{\widetilde{{\mathcal M}}'}(a) \leq \mathrm{dim}\widetilde{{\mathcal M}}'-\mathrm{dim}\widetilde{A}' = e +(g-1), \quad \forall a\in \widetilde{A}'.
\]
Hence, for the morphism $h_{{\mathcal H}\times{\mathcal M}}: {\mathcal H} \times {\mathcal M} \to H^0(C, {\mathcal O}_C(D)) \times A(\pi)$ given by the composition
\[
h_{{\mathcal H}\times{\mathcal M}}: {\mathcal H} \times {\mathcal M} \to H \times M \to H^0(C, {\mathcal O}_C(D)) \times A(\pi),
\]
we obtain from the diagram (\ref{BC}) that
\[
\mathrm{dim}h_{{\mathcal H}\times{\mathcal M}}^{-1}(w,s) = \mathrm{dim}h^{-1}_{\widetilde{{\mathcal M}}'}\left(t(w,s)\right) \leq e+ (g-1), \quad \quad \forall (w,s)\in H^0(C, {\mathcal O}_C(D))\times A(\pi).
\]
On the other hand,
\[
\mathrm{dim}h_{{\mathcal H}\times{\mathcal M}}^{-1}(w,s) = \mathrm{dim}h_{\mathcal M}^{-1}(s) +(g-1).
\]
Consequently $\mathrm{dim}h_{\mathcal M}^{-1}(s) \leq e$ for any closed point $s\in A(\pi)$. Fact 2 follows from \cite[Lemma 3.5]{MS2} and base change.
\end{proof}
As explained in the paragraph following \cite[Proposition 3.4]{MS}, Facts 1 and 2 imply the relative dimension bound (\ref{truncation}) immediately. This completes the proof of Theorem \ref{thm1.1}. \qed
\section{The Hausel--Thaddeus conjecture}
\subsection{Overview}
We complete the proof of Theorem \ref{thm0.2} in this section. As a consequence of Theorem \ref{thm1.1}, we first show that both sides of (\ref{thm0.2_a}) are semisimple objects with $A_\gamma$ as their only support. Then Theorem \ref{thm0.2} (a) is reduced to establishing the desired isomorphism over an arbitrary Zariski open subset of the locus $A_\gamma \subset A$. This is essentially identical to the proof of \cite[Theorem 3.2]{MS}, which only relies on the calculation over the elliptic locus \cite{Ngo, Yun3}.
Theorem \ref{thm0.2} (b) is more complicated, since this is a new phenomenon when $\mathrm{gcd}(n,d)\neq 1$.\footnote{When $\mathrm{gcd}(n,d) = \mathrm{gcd}(n,d')=1$, the condition (\ref{condition}) specializes to the condition that $\kappa' = d'^{-1}d\kappa$ as in \cite[Theorem 0.5]{MS}.} Again, we use the support theorem to reduce the desired isomorphism to a calculation of the $G_\pi$-action on the $m$ components of the moduli space $M_{r,L}(\pi)$. This is carried out in Section \ref{Sec3.5}.
In Section \ref{HT_conj}, we further discuss the connection between Theorem \ref{thm0.2} and the original formulation of the Hausel--Thaddeus conjecture \cite{HT}.
\subsection{Supports for $h: M_{n,L} \to A$}
Recall the $\mathrm{SL}_n$-Hitchin fibration $h: M_{n,L} \to A$, and the elliptic locus $A^{\mathrm{ell}} \subset A$ which is the open subset of $A$ consisting of integral spectral curves. The fiberwise $\Gamma$-action on $M_{n,L}$ yields the canonical decomposition
\begin{equation*}
{Rh}_* \mathrm{IC}_{M_{n,L}} = \bigoplus_{\kappa} \left({Rh}_* \mathrm{IC}_{M_{n,L}}\right)_\kappa , \quad \kappa \in \hat{\Gamma}.
\end{equation*}
Let $\gamma \in \Gamma$ be the element matched with the nontrivial character $\kappa \in \hat{\Gamma}$ via the Weil pairing (\ref{Weil_Pairing}). Ng\^o proved in \cite[Theorem 7.8.5]{Ngo} that the restriction of the object
\begin{equation}\label{kappa_object}
\left({Rh}_*\mathrm{IC}_{M_{n,L}}\right)_\kappa
\end{equation}
to $A^{\mathrm{ell}}$ has
\[
A_\gamma^{\mathrm{ell}}: = A_\gamma \cap A^{\mathrm{ell}} \subset A
\]
as its only support. Hence we obtain the following proposition concerning the lefthand side of (\ref{thm0.2_a}) from Theorem \ref{thm0.1}:
\begin{prop}\label{prop3.1}
We have that $A_\gamma$ is the only support of the object (\ref{kappa_object}).
\end{prop}
\subsection{The moduli spaces $M_{r,L}(\pi)$ and $M^\gamma_{n,L}$}
Now we prove a support theorem for the fibration $h_\gamma: M^\gamma_{n,L} \to A_\gamma$ concerning the object in the righthand side of (\ref{thm0.2_a}). We achieve this using the moduli space $M_{r,L}(\pi)$ discussed in Sections \ref{sec1} and \ref{Sec2}.
Assume $\kappa$ has order $m$ in $\hat{\Gamma}$. Therefore $\gamma$ is an $m$-torsion line bundle. Let $\pi: C' \to C$ be the degree $m$ cyclic \'etale Galois cover associated with $\gamma$ \cite[Section 1.3]{MS}. In the following, we construct the commutative diagram
\begin{equation}\label{diagram111}
\begin{tikzcd}
M_{r,L}(\pi) \arrow[r, "q_M"] \arrow[d, "h_{\pi}"]
& M_\gamma \arrow[d, "h_\gamma"] & \\
A{(\pi)} \arrow[r, "q_A"]
& A_\gamma
\end{tikzcd}
\end{equation}
connecting $h_\pi$ and $h_\gamma$, where the bottom horizontal map $q_A$ is the $G_\pi$-quotient; see \cite[Section 1.5]{MS} for the coprime case. Note that the map $q_M$ is the free $G_\pi$-quotient in the coprime case, but it is more complicated in general without the coprime assumption (Remark \ref{rmk1}).
We first review the construction of \cite[Section 7]{HT} which gives the top horizontal map $q_M$. Let $({\mathcal E}, \theta)$ be a rank $r$ Higgs bundle on the curve $C'$, then $(\pi_* {\mathcal E}, \pi_*\theta)$ is a rank $n (= rm)$ Higgs bundle on $C$. Here the bundle $\pi_*{\mathcal E}$ is simply the pushforward of ${\mathcal E}$ along $\pi: C' \to C$, and the Higgs field $\theta$ is given by descending the block-diagonal Higgs field $\bigoplus_{g\in G_\pi} g^*\theta$ on the vector bundle
\begin{equation}\label{eqn17}
\pi^*\pi_*{\mathcal E} = \bigoplus_{g\in G_\pi} g^*{\mathcal E}
\end{equation}
along the $G_\pi$-quotient $\pi: C'\to C$. We recall the following well-known lemma.
\begin{lem}\label{lem3.2} The Higgs bundle $({\mathcal E}, \theta)$ is semistable if and only if $(\pi_*{\mathcal E} , \pi_*\theta)$ is semistable.
\end{lem}
\begin{proof}
The \emph{if} part is obvious: for any sub-Higgs bundle destabilizing $({\mathcal E}, \theta)$, its pushforward along $\pi$ will destabilize $(\pi_*{\mathcal E}, \pi_*\theta)$. For the \emph{only if} part, we consider the decomposition (\ref{eqn17}):
\begin{equation}\label{eqn18}
\pi^*\pi_*({\mathcal E}, \theta) = \bigoplus_{g\in G_\pi} g^*({\mathcal E}, \theta).
\end{equation}
In particular, if $({\mathcal E},\theta)$ is semistable, then (\ref{eqn18}), as a direct sum of semistable Higgs bundles of the same slope, is also semistable. Hence the pullback of any sub-Higgs bundle destabilizing $(\pi_*{\mathcal E}, \pi_*\theta)$ would destabilize (\ref{eqn18}) as well. This completes the proof.
\end{proof}
By Lemma \ref{lem3.2}, the pushforward $\pi_*$ induces a morphism between the moduli spaces
\begin{equation}\label{eqn19}
M_{r,L}(\pi) \to M_{n,L}.
\end{equation}
Moreover, by \cite[Proposition 3.3]{NR}, the restriction of (\ref{eqn19}) to the Zariski dense open subset $M_{r,L}(\pi)^\circ \subset M_{r,L}(\pi)$ formed by points not fixed by any element of $G_\pi$ is a free $G_\pi$-quotient with image lying in $M^\gamma_{n,L}$. In conclusion, we obtain
\[
q_M: M_{r,L}(\pi) \to M^\gamma_{n,L} \subset M_{n,L},
\]
which completes the diagram (\ref{diagram111}).
\begin{rmk}\label{rmk1}
When $\mathrm{gcd}(n,d)=1$ so that there are no strictly semistable objects, both varieties $M_{r,L}(\pi)$ and $M^\gamma_{n,L}$ are nonsingular, and the map $q_M$ induced by $\pi_*$ is a \emph{free} $G_\pi$-quotient \cite[Proposition 7.1]{HT}. However, this may fail when $\mathrm{gcd}(n,d)\neq 1$. For example, the rank 1 stable Higgs bundle $({\mathcal O}_{C'}, 0)$ is a $G_\pi$-fixed point.
\end{rmk}
\begin{lem}\label{lem3.4}
We have a splitting
\[
{Rq_M}_* \mathrm{IC}_{M_{r,L}(\pi)} = \mathrm{IC}_{M^\gamma_{n,L}} \oplus \cdots.
\]
\end{lem}
\begin{proof}
Over an open subset of $M^\gamma_{n,L}$ where $q_M$ is a free $G_\pi$-quotient, we have the canonical splitting
\[
{Rq_M}_* {\mathbb{C}} = \left({Rq_M}_* {\mathbb{C}}\right)^{G_\pi} \oplus \left({Rq_M}_* {\mathbb{C}}\right)_{\mathrm{var}} = {\mathbb{C}} \oplus \left({Rq_M}_* {\mathbb{C}}\right)_{\mathrm{var}}
\]
with $\left({Rq_M}_* {\mathbb{C}}\right)_{\mathrm{var}}$ the variant part. The lemma follows.
\end{proof}
To analyze the supports for $h_\gamma: M^\gamma_{n,L} \to A_\gamma$, we note the following standard lemma.
\begin{lem}\label{lem3.5}
Let $f: X \to Y$ be a finite surjective map between irreducible varieties. Then for any semisimple perverse sheaf $\mathrm{IC}_X({\mathcal L})$ with full support $X$, the pushforward $f_*\mathrm{IC}_X({\mathcal L})$ is a semisimple perverse sheaf with full support $Y$.
\end{lem}
\begin{proof}
To show that $f_*\mathrm{IC}_X({\mathcal L})$ is an intermediate extension of a local system on an open subset of $Y$, it suffices to prove the support condition (see \cite[Section 2.1 (12),(13)]{dCM1}):
\[
\mathrm{dim}\left(\mathrm{supp}\,{\mathcal H}^{-i}(-)\right) <i, \quad \quad \textup{for } i<\mathrm{dim}Y
\]
for $f_*\mathrm{IC}_X({\mathcal L})$ and its dual. This follows from the finiteness of $f$ and the same support conditions for $\mathrm{IC}_X({\mathcal L})$ and its dual on $X$.
\end{proof}
\begin{prop}\label{prop3.6}
Assume that $\gamma\in \Gamma$ and $\kappa \in \hat{\Gamma}$ are matched via the Weil pairing (\ref{Weil_Pairing}), and $\kappa' \in \langle \kappa \rangle$. The object
\begin{equation}\label{ttt}
\left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L}}\right)_{\kappa'}
\end{equation}
has full support $A_\gamma$.
\end{prop}
\begin{proof}
We first consider the map $h_\pi: M_{r,L}(\pi) \to A(\pi)$ and observe that the object
\begin{equation}\label{eqn20}
\left({Rh_\pi}_* \mathrm{IC}_{M_{r,L}(\pi)}\right)_{\kappa'}
\end{equation}
has full support $A(\pi)$ for $\gamma$ and $\kappa'$ as in the assumption and $\pi: C' \to C$ given by $\gamma$. When $\mathrm{gcd}(n,d) =1$ this is verified in \cite[Theorem 2.3 (b) and Proposition 2.10]{MS}, which relies on the support theorem (\cite[Theorem 2.3 (a)]{MS}) and a direct calculation over the elliptic locus. Since the moduli space $M_{r,L}(\pi)$ is nonsingular over the elliptic locus and the calculation of \cite{MS} there does not rely on the coprime assumption, we obtain from Theorem \ref{thm1.1} that the full support property still holds for (\ref{eqn20}).
To prove the proposition, we use the commutative diagram (\ref{diagram111}) which induces a canonical $\Gamma$-equivariant isomorphism
\[
{Rq_A}_*{Rh_\pi}_* \mathrm{IC}_{M_{r,L}(\pi)} = {Rh_\gamma}_*{Rq_M}_* \mathrm{IC}_{M_{r,L}(\pi)}.
\]
Taking the $\kappa'$-isotypic parts, we get
\begin{equation}\label{eqn21}
{Rq_A}_*\left({Rh_\pi}_* \mathrm{IC}_{M_{r,L}(\pi)}\right)_{\kappa'} = \left({Rh_\gamma}_*{Rq_M}_* \mathrm{IC}_{M_{r,L}(\pi)}\right)_{\kappa'}
\end{equation}
where both sides are semisimple objects due to the decomposition theorem. Since $q_A$ is a finite quotient map and (\ref{eqn20}) has full support $A(\pi)$, the lefthand side of (\ref{eqn21}) has full support $A_\gamma$ by Lemma \ref{lem3.5}. Furthermore, Lemma \ref{lem3.4} implies that (\ref{ttt}) is a direct summand of the righthand side of (\ref{eqn21}). This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm0.2} (a)}
Theorem \ref{thm0.2} (a) is an immediate consequence of Propositions \ref{prop3.1} and \ref{prop3.6}.
More precisely, since both sides of (\ref{thm0.2_a}) have $A_\gamma$ as their only support, it suffices to show the isomorphism over an \emph{arbitrary} open subset of $A_\gamma$, which is proven essentially by \cite[Theorem B]{Yun3}; see also \cite[Theorem 3.2]{MS}. \qed
\begin{rmk}
In fact, even without the coprime assumption, the proof of \cite[Theorem 3.2]{MS} works over the elliptic locus $A^\mathrm{ell}_\gamma \subset A_\gamma$. In particular, we may choose the open subset in the proof above to be the elliptic locus.
\end{rmk}
\subsection{Proof of Theorem \ref{thm0.2} (b)}\label{Sec3.5}
Since the object (\ref{ttt}) has full support $A_\gamma$, its isomorphism class is determined by the restriction over a Zariski open subset. In view of the diagram (\ref{diagram111}), it suffices to treat the $G_\pi$-equivariant objects
\begin{equation}\label{eqn23}
\left({Rh_\pi}_*{\mathbb{C}}_{h_\pi^{-1}(V)}\right)_{\kappa'}
\end{equation}
over an arbitrary Zariski open $V \subset A(\pi)$. After shrinking $V$, we may assume that all the fibers of $h_\pi$ are nonsingular and $G_\pi$ acts freely on $V$. By \cite[Proposition 7.2.3]{Ngo} (see \cite[Theorem 5.0.2]{dCRS} for the Hodge module version), the isomorphism class of the object (\ref{eqn23}) is completely determined by the $G_\pi$-equivariant local system given by the relative top degree cohomology:
\[
\left({R^{2s}h_\pi}_*{\mathbb{C}}_{h_\pi^{-1}(V)}\right)_{\kappa'}.
\]
Here $s$ is the dimension of a fiber of $h_\pi$ over $V$. The sheaf
\[
{R^{2s}h_\pi}_*{\mathbb{C}}_{h_\pi^{-1}(V)}
\]
is a rank $m$ trivial local system indexed by the $m$ connected components of a general fiber of $h_{\pi}$, which are further identified with the $m$ connected components of the degree $d$ Prym variety
\[
\mathrm{Prym}^d(C'/C):= \mathrm{Nm}^{-1}(L), \quad \mathrm{Nm} = \mathrm{det}(\pi_*-): \mathrm{Pic}^d(C') \to \mathrm{Pic}^d(C)
\]
associated with the cyclic Galois cover $\pi: C' \to C$; see \cite[Section 1]{MS}.
In conclusion, the isomorphism class of (\ref{eqn23}) is completely determined by the $G_\pi$- and the $\Gamma$-actions on the $m$ connected components of $\mathrm{Prym}^d(C'/C)$. These two actions commute with each other.
Now we want to connect the Hitchin fibrations \[
h_{\pi,L}: M_{r,L}(\pi) \to A(\pi), \quad\quad h_{\pi,L'}: M_{r,L'}(\pi) \to A(\pi)
\]
where the line bundles $L$ and $L'$ are of degrees $d$ and $d'$ respectively.\footnote{In this section we use $h_{\pi,L}$ to denote the Hitchin fibration $M_{r,L}(\pi) \to A(\pi)$ to indicate its dependence on the line bundle $L$.}
We first note the following elementary lemma which justifies the condition (\ref{condition}).
\begin{lem}\label{lem3.8}
There is an integer $q$ coprime to $n$ such that
\[
d \equiv d'q \pmod{n}.
\]
\end{lem}
\begin{proof}
Assume that
\[
\mathrm{gcd}(n,d) = \mathrm{gcd}(n,d') = a.
\]
Then both ideals $(d)$ and $(d')$ of ${\mathbb{Z}}/n{\mathbb{Z}}$ coincide with $(a)$. Hence the generators $d$ and $d'$ differ by a unit of ${\mathbb{Z}}/n{\mathbb{Z}}$.
\end{proof}
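For concreteness, the unit $q$ of Lemma \ref{lem3.8} can be located by a brute-force search over residues coprime to $n$. The following Python sketch is purely illustrative (the function name is ours and it plays no role in the argument):

```python
from math import gcd

def find_unit_q(n, d, dp):
    """Brute-force search for a unit q modulo n with d = dp * q (mod n).

    Lemma 3.8 guarantees such a q exists whenever gcd(n, d) == gcd(n, dp),
    since d and dp then generate the same ideal of Z/nZ.
    """
    for q in range(1, n):
        if gcd(q, n) == 1 and (dp * q - d) % n == 0:
            return q
    return None

# Example: n = 6, d = 4, d' = 2 satisfy gcd(6, 4) = gcd(6, 2) = 2,
# and q = 5 is a unit mod 6 with 2 * 5 = 10 = 4 (mod 6).
```

Units modulo $n$ are precisely the residues coprime to $n$, so the search only tests those.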
In the following, the integer $q$ will be chosen as in Lemma \ref{lem3.8}. The proof of Theorem \ref{thm0.2} (b) follows from the following two steps.
\subsubsection{Step 1: Connecting $h_{\pi,L'}$ to $h_{\pi,L'^{\otimes q}}$}\label{3.5.1}
The $G_\pi$-equivariant objects (\ref{eqn23}) associated with the Hitchin fibrations $h_{\pi,L'}$ and $h_{\pi,L'^{\otimes q}}$ are completely determined by the $G_\pi$- and the $\Gamma$-actions on the Prym varieties
\[
\mathrm{Prym}^{d'}(C'/C): = \mathrm{Nm}^{-1}(L')
\]
and
\[
\mathrm{Prym}^{d'q}(C'/C) : = \mathrm{Nm}^{-1}(L'^{\otimes q})
\]
respectively. An argument identical to that for \cite[Proposition 2.11]{MS} then yields
\begin{equation*}
\left({Rh_{\pi,L'}}_* {\mathbb{C}}_{h_{\pi,L'}^{-1}(V)} \right)_{q\kappa} \simeq \left({Rh_{\pi,L'^{\otimes q}}}_* {\mathbb{C}}_{h^{-1}_{\pi,L'^{\otimes q}}(V)} \right)_{\kappa} \in D_c^b(V).
\end{equation*}
In view of Proposition \ref{prop3.6}, this further implies that
\begin{equation}\label{extra}
\left({Rh_\gamma}_*\mathrm{IC}_{M^\gamma_{n,L'}}\right)_{q\kappa} \simeq \left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L'^{\otimes q}}}\right)_{\kappa} \in D^b_c(A_\gamma).
\end{equation}
\subsubsection{Step 2: Connecting $M^\gamma_{n,L'^{\otimes q}}$ and $M^\gamma_{n,L}$}
By the choice of $q$ we have
\begin{equation}\label{eqn25}
\mathrm{deg}(L'^{\otimes q}) - \mathrm{deg}(L) \equiv 0 \pmod{n}.
\end{equation}
Note that for two line bundles $L_1$ and $L_2$ with $L_1 = L_2 \otimes N^{\otimes n}$, there is a natural identification of the moduli spaces
\begin{equation*}
M_{n, L_1} \xrightarrow{\simeq} M_{n, L_2}, \quad ({\mathcal E}, \theta) \mapsto ({\mathcal E}\otimes N, \theta)
\end{equation*}
compatible with the $\Gamma$-actions and the Hitchin fibrations. Therefore, by (\ref{eqn25}) we have natural isomorphisms
\[
M_{n,L'^{\otimes q}} \xrightarrow{\simeq} M_{n,L},\quad \quad M^\gamma_{n,L'^{\otimes q}} \xrightarrow{\simeq} M^\gamma_{n,L}
\]
which further induce
\begin{equation}\label{extra1}
\left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L'^{\otimes q}}}\right)_{\kappa} \simeq \left({Rh_\gamma}_* \mathrm{IC}_{M^\gamma_{n,L}}\right)_{\kappa}.
\end{equation}
The proof of Theorem \ref{thm0.2} (b) is completed by combining (\ref{extra}) and (\ref{extra1}).
\qed
\subsection{The Hausel--Thaddeus conjecture}\label{HT_conj}
In this section, we give a few remarks regarding the relation of our result with the Hausel--Thaddeus conjecture.
The original form of the Hausel--Thaddeus conjecture involves
Higgs bundles of type $\mathrm{SL}_n$ and $\mathrm{PGL}_n$ with $D = K_C$ and in the coprime setting $\mathrm{gcd}(n,d) = 1$. It relates the singular cohomology of $M_{n,L}$ with the stringy cohomology of $[M_{n,L}/\Gamma]$, twisted by a particular gerbe $\alpha$
whose appearance is motivated by SYZ mirror symmetry. In the coprime setting, as explained in the appendix of \cite{LW}, the $\alpha$-twisted cohomology of the sector
\[
[M_{n,L}^\gamma/\Gamma], \quad \gamma \in \Gamma
\]
is equivalent to a certain isotypic component of the singular cohomology of $M_{n,L}^\gamma$. Hence the original Hausel--Thaddeus formulation is implied by the formulation as in Theorem \ref{thm0.2}, after passing to global sections. In the non-coprime setting, however, it is not clear to us how to define the corresponding gerbe $\alpha$ on the singular stack $[M_{n,L}/\Gamma]$ and so we do not have a direct definition of the $\alpha$-twisted intersection cohomology. As a result, the formulation we give here in terms of the endoscopic decomposition seems more natural.
If we consider the case of Higgs bundles with $D= K_C$ but general degree $d$, then our argument no longer applies, and one can ask what the correct formulation should be. Mauri has conjectured in \cite{Mauri} that one should again work with the intersection cohomology to formulate the Hausel--Thaddeus conjecture, and has verified this in rank $2$. From the perspective of enumerative geometry, another natural option is to work with the cohomology of the so-called BPS sheaf $\phi_{\mathrm{BPS}}$, a perverse sheaf on $M_{n,L}$ defined by Davison--Meinhardt \cite{DM1} and Toda \cite{Toda}. When $\deg(D) > 2g-2$, these coincide, but it is unclear to us which cohomology should be better behaved and, indeed, it seems plausible that both could satisfy a version of the Hausel--Thaddeus conjecture.
Finally, it is reasonable to expect Theorem \ref{thm0.2} can be extended to the case of Higgs bundles for a general reductive group $G$ and its Langlands dual $G^\vee$, and we hope to explore this in subsequent work.
\section{Vector bundles and Higgs bundles}
In this section, we discuss the interplay between the moduli of vector bundles and the moduli of Higgs bundles, and complete the proof of Theorem \ref{thm0.3}. As before, we fix a line bundle $L \in \mathrm{Pic}^d(C)$ and an effective divisor $D$ of degree $\mathrm{deg}(D) > 2g-2$.
\subsection{Moduli spaces $M_{n,L}$ and $N_{n,L}$}
We would like to study the topology of $N_{n,L}$ via the Higgs moduli space $M_{n,L}$.
We consider the ${\mathbb{C}}^*$-action on $M_{n,L}$ by the scaling action on the Higgs field:
\[
\lambda\cdot ({\mathcal E}, \theta) = ({\mathcal E}, \lambda \theta), \quad \quad \lambda \in {\mathbb{C}}^*.
\]
The ${\mathbb{C}}^*$-fixed locus $F \subset M_{n,L}$ can be decomposed as
\[
F = N_{n,L} \sqcup F'.
\]
Here the first connected component parameterizes (S-equivalence classes of) semistable Higgs bundles with $\theta =0$, and is naturally isomorphic to $N_{n,L}$. The restriction of the $\Gamma$-action on $M_{n,L}$ to $N_{n,L}$ recovers (\ref{action}).
We apply hyperbolic localization to connect the intersection cohomology of the moduli spaces $M_{n,L}$ and $N_{n,L}$.
\subsection{Hyperbolic Localization}
We consider the following subvarieties of $M_{n,L}$ obtained from the scaling ${\mathbb{C}}^*$-action:
\[
M^+: = \{x\in M_{n,L}: \lim_{\lambda \to 0 }\lambda\cdot x \in F\}, \quad M^-: = \{x\in M_{n,L}: \lim_{\lambda\to \infty}\lambda\cdot x \in F\}.
\]
Let $f^{+}, f^{-}, g^+, g^-$ be the inclusions
\begin{equation}\label{fg}
f^+: F \hookrightarrow M^+, \quad f^-: F \hookrightarrow M^-, \quad g^+: M^+ \hookrightarrow M_{n,L}, \quad g^-: M^- \hookrightarrow M_{n,L}.
\end{equation}
Following \cite{Kir2, Braden}, we consider the \emph{hyperbolic localization functor}:
\begin{equation}\label{functor}
(-)^{!*} : D^b_c(M_{n,L}) \rightarrow D^b_c(F), \quad {\mathcal K} \mapsto (f^+)^*(g^+)^!{\mathcal K}.
\end{equation}
We obtain from the main theorem of Kirwan \cite{Kir2} that there is an isomorphism
\begin{equation}\label{localization}
\mathrm{IH}^*(M_{n,L}, {\mathbb{C}}) \simeq H^*\left(F,~~ (\mathrm{IC}_{M_{n,L}})^{!*}[-\mathrm{dim}M_{n,L}]\right).
\end{equation}
In fact, Kirwan proved (\ref{localization}) for normal projective varieties with ${\mathbb{C}}^*$-actions. In the case of the moduli of Higgs bundles, one may deduce (\ref{localization}) by applying Kirwan's theorem to a compactification $M_{n,L} \subset \overline{M}_{n,L}$ \cite{Compact, Hausel} where the ${\mathbb{C}}^*$-action can be lifted, and then restrict the isomorphism (\ref{localization}) for $\overline {M}_{n,L}$ to the open subvariety $M_{n,L}$; see the first paragraph in \cite[Proof of Corollary 1.5]{HP}.
Concerning the righthand side of (\ref{localization}), Braden showed in \cite{Braden} that there is a splitting
\begin{equation}\label{decomp}
(\mathrm{IC}_{M_{n,L}})^{!*} \simeq \bigoplus_{i} \mathrm{IC}_{Y_i}({\mathcal L}_i)[d_i]
\end{equation}
with $Y_i \subset F$ irreducible closed subvarieties, ${\mathcal L}_i$ local systems on open subsets of $Y_i$, and $d_i \in {\mathbb{Z}}$.
Recall the finite group $\Gamma = \mathrm{Pic}^0(C)[n]$. For a $\Gamma$-action on a ${\mathbb{C}}$-vector space $V$, we have the canonical decomposition
\[
V = V^{\Gamma} \oplus V_{\mathrm{var}}
\]
with $V^\Gamma$ the $\Gamma$-invariant part and $ V_{\mathrm{var}}$ the variant part. The following proposition concerns the $\Gamma$-actions on the intersection cohomology groups of $M_{n,L}$ and $N_{n,L}$.
\begin{prop} \label{prop4.1}
We have
\[
\mathrm{dim} \mathrm{IH}^*(N_{n,L}, {\mathbb{C}})_\mathrm{var} \leq
\mathrm{dim} \mathrm{IH}^*(M_{n,L}, {\mathbb{C}})_\mathrm{var}.
\]
\end{prop}
\begin{proof}
We first show that the righthand side of the decomposition (\ref{decomp}) contains \[
\mathrm{IC}_{N_{n,L}}[\mathrm{dim}M_{n,L} - \mathrm{dim}N_{n,L}]
\]
as a direct summand component. Consider the open subvariety $M_{n,L}^{s} \subset M_{n,L}$ formed by stable Higgs bundles. By definition we have $M_{n,L}^s \cap N_{n,L} = N_{n,L}^s$ where $N_{n,L}^s$ is the locus of stable vector bundles. Both $M_{n,L}^s$ and $N_{n,L}^s$ are nonsingular. The component of the attracting locus $(M^s)^+$ over $N_{n,L}^s$ is an open subvariety of $M_{n,L}$, so we have the splitting over the stable locus $M_{n,L}^s$:
\[
(f^+)^*(g^+)^!{\mathbb{C}}_{M_{n,L}^s} \simeq {\mathbb{C}}_{N_{n,L}^s} \oplus \cdots.
\]
In particular, this shows that there is a term in the righthand side of (\ref{decomp}) with
\[
Y_0 = N_{n,L}, \quad {\mathcal L}_0 = {\mathbb{C}}, \quad d_0 = \mathrm{dim}M_{n,L} - \mathrm{dim}N_{n,L}.
\]
Hence (\ref{decomp}) induces an isomorphism
\begin{equation}\label{eq5}
\mathrm{IH}^*(M_{n,L}, {\mathbb{C}}) \simeq \mathrm{IH}^*(N_{n,L}, {\mathbb{C}}) \oplus \left( \bigoplus_{j>0} H^{*-\mathrm{dim}M_{n,L}+d_j}(F,~~\mathrm{IC}_{Y_j}({\mathcal L}_j)) \right).
\end{equation}
Since the $\Gamma$- and the ${\mathbb{C}}^*$-actions on $M_{n,L}$ commute, the embeddings (\ref{fg}) are $\Gamma$-equivariant. The hyperbolic localization functor (\ref{functor}) and the isomorphisms (\ref{localization}) and (\ref{decomp}) are also $\Gamma$-equivariant. Consequently, (\ref{eq5}) is a $\Gamma$-equivariant isomorphism, and comparing the variant parts of both sides implies the proposition.
\end{proof}
\subsection{Codimension estimate}
Recall that $d_\gamma$ is the codimension of $A_\gamma$ in $A$. We have
\[
d_\gamma = \mathrm{dim}A - \mathrm{dim}A_\gamma = \mathrm{dim}A - \mathrm{dim}A(\pi)
\]
where $\pi: C' \to C$ is the \'etale Galois cover associated with $\gamma$. By the formulas of \cite[Section 6.1]{dC_SL} for the Hitchin bases, we obtain the following codimension formula for endoscopic loci.
\begin{lem}\label{lem4.2}
Assume that $\gamma \in \Gamma$ has order $m$ with $n = mr$. We have
\[
d_\gamma = \frac{n(n - r) \cdot \mathrm{deg}(D)}{2}.
\]
In particular for fixed rank $n$, we have $\mathrm{min}_{\gamma\neq 0}\{d_\gamma\} \to +\infty$ when $\mathrm{deg}(D) \to \infty$.
\end{lem}
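The growth claim of Lemma \ref{lem4.2} is easy to check numerically. The following Python sketch (illustrative only; the function names are ours) evaluates the codimension formula and the dimension $\mathrm{dim}N_{n,L} = (n^2-1)(g-1)$ used below:

```python
def d_gamma(n, m, deg_D):
    """Codimension formula of Lemma 4.2 for gamma of order m, with n = m * r.

    The product n(n - r)deg(D) = r^2 m(m - 1)deg(D) is always even,
    since m(m - 1) is even, so integer division by 2 is exact.
    """
    assert n % m == 0 and m > 1
    r = n // m
    return n * (n - r) * deg_D // 2

def dim_N(n, g):
    # dim N_{n,L} = (n^2 - 1)(g - 1), independent of deg(D)
    return (n * n - 1) * (g - 1)

# For n = 2, g = 2: dim N = 3, while d_gamma(2, 2, deg_D) = deg_D,
# so any deg(D) > 3 already gives the inequality needed for Theorem 0.3.
```

Since $d_\gamma$ grows linearly in $\mathrm{deg}(D)$ while $\mathrm{dim}N_{n,L}$ does not depend on $D$, the inequality $d_\gamma > \mathrm{dim}N_{n,L}$ holds for all nonzero $\gamma$ once $\mathrm{deg}(D)$ is large enough.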
Now we complete the proof of Theorem \ref{thm0.3}.
\subsection{Proof of Theorem \ref{thm0.3}}
For a fixed genus $g$ curve $C$ and rank $n$, we work with Higgs bundles with $\mathrm{deg}(D)$ large enough that $d_\gamma > \mathrm{dim}N_{n,L}$ for any nonzero $\gamma \in \Gamma$. This is possible due to Lemma \ref{lem4.2} and the fact that $\mathrm{dim}N_{n,L} = (n^2-1)(g-1)$ is independent of $\mathrm{deg}(D)$.
Theorem \ref{thm0.2} (a) implies that the variant part
\begin{equation*}
\left(Rh_* \mathrm{IC}_{M_{n,L}}\right)_{\mathrm{var}} \in D^b_c(A)
\end{equation*}
(contributed by the nontrivial characters) is concentrated in degrees $\geq \mathrm{min}_{\gamma\neq 0}\{2d_\gamma\}$. Taking global cohomology, we have
\[
\mathrm{IH}^k(M_{n,L}, {\mathbb{C}})_{\mathrm{var}} = 0, \quad \quad \forall k < \mathrm{min}_{\gamma\neq 0}\{2d_\gamma\},
\]
which further yields from Proposition \ref{prop4.1} that
\[
\dim \mathrm{IH}^k(N_{n,L}, {\mathbb{C}})_{\mathrm{var}} \leq \dim \mathrm{IH}^k(M_{n,L}, {\mathbb{C}})_\mathrm{var} =0, \quad \forall k < \mathrm{min}_{\gamma\neq 0}\{2d_\gamma\}.
\]
By our choice of $D$ we conclude that $\mathrm{IH}^*(N_{n,L}, {\mathbb{C}})_{\mathrm{var}} = 0$. This proves the triviality of the $\Gamma$-action on $\mathrm{IH}^k(N_{n,L}, {\mathbb{C}})$.
To prove (\ref{SL_PGL}), we consider the natural finite quotient map
\[
f: N_{n,L} \to N_{n,L}/\Gamma = \check{N}_{n,L}.
\]
Since the intersection cohomology complex $\mathrm{IC}_{N_{n,L}}$ is naturally $\Gamma$-equivariant, the pushforward complex $f_*\mathrm{IC}_{N_{n,L}}$ admits a canonical decomposition with respect to the $\Gamma$-action:
\[
f_*\mathrm{IC}_{N_{n,L}} = \left(f_*\mathrm{IC}_{N_{n,L}}\right)^\Gamma \oplus \left(f_*\mathrm{IC}_{N_{n,L}}\right)_\mathrm{var}.
\]
By the first part of the theorem, the cohomology of $\left(f_*\mathrm{IC}_{N_{n,L}}\right)_\mathrm{var}$ vanishes. Therefore it suffices to show that the complex $\left(f_*\mathrm{IC}_{N_{n,L}}\right)^\Gamma$ coincides with $\mathrm{IC}_{\check{N}_{n,L}}$, which follows from Lemma \ref{lem3.5}. \qed
\section{Introduction}
Recent years have seen extensive efforts to gain a quantitative understanding
of the low-energy dynamics of hadrons. The principal theoretical tools in
this endeavour are Chiral Perturbation Theory (\ensuremath{\chi}PT)
\cite{Gasser:1983yg,Gasser:1984gg}
and numerical simulations of QCD on a space-time lattice. While \ensuremath{\chi}PT\ is an
effective theory based on hadronic degrees of freedom, lattice QCD seeks to
describe hadronic properties from first principles in terms of the fundamental
constituents, i.e. the quarks and gluons. Lattice QCD and \ensuremath{\chi}PT\ interact in
two ways: on the one hand, for performance reasons, lattice simulations are
usually performed at unphysically heavy light quark masses (although recently,
simulation results at physical light quark masses and below
\cite{Durr:2010aw,Durr:2010vn,Aoki:2009ix}
have become available), and thus \ensuremath{\chi}PT\ is used to extrapolate results
obtained in a range of masses to the physical point, in order to obtain
physical predictions; on the other hand, lattice simulations allow for
the calculation of low-energy matrix elements that can also be computed in
\ensuremath{\chi}PT. Thus the low-energy constants of \ensuremath{\chi}PT\ can be determined from first
principles
(cf. e.g. \cite{Heitger:2000ay,Giusti:2003iq,Giusti:2004yp,Gattringer:2005ij,
Hasenfratz:2008ce,Beane:2011zm,Bernardoni:2011fx,
Damgaard:2012gy,Borsanyi:2012zv,Herdoiza:2013sla}).
An important long-term goal is the quantitative description of nucleon
properties, for which a wealth of data has been accumulated by numerous
experiments. However, baryonic systems are more difficult to treat
theoretically: while the range of validity of baryonic \ensuremath{\chi}PT\ is largely
unknown, one finds that baryonic correlation functions computed in lattice QCD
suffer from an exponentially increasing noise-to-signal ratio. Therefore, the
interplay between lattice QCD and \ensuremath{\chi}PT\ has mostly been studied in the context
of mesonic systems. In addition to investigations of masses and decay
constants, the focus has recently shifted to dynamical observables, such as
form factors, which depend on a momentum transfer. For instance, the vector
form factor, which describes the coupling of a photon to the pion and is thus
directly accessible to experiment, has been calculated to a fair level of
accuracy in lattice simulations
\cite{Capitani:2005ce,Brommel:2006ww,Jiang:2006gna,Kaneko:2007nf,
Alexandrou:2007pn,Boyle:2008yd,Aoki:2009qn,Nguyen:2011ek,
Fukaya:2012dla,Brandt:2013mb}.
While some of the systematics remain to be understood, the various
determinations of the pion charge radius, $\langle r^2\rangle^\pi_{_{\rm V}}$,
are mostly compatible with one another and also consistent with experiment.
On the other hand, the scalar pion form factor, defined by
\begin{equation}
F^\pi_{_{\rm S}}\left(Q^2\right) \equiv
\left<\pi^+\left(p_f\right)\right|\,m_{\rm d}\overline{d}d
+m_{\rm u}\overline{u}u\,\left|\pi^+\left(p_i\right)\right>,
\qquad Q^2=-q^2=-(p_f-p_i)^2
\label{eq:defff}
\end{equation}
is not directly accessible to experiment, since the Higgs (whose coupling
to the pion is determined by this form factor) is far too heavy to matter
in the low-energy regime of QCD. However, the scalar radius
\begin{equation}
\left\langle r^2\right\rangle^\pi_{_{\rm S}} = - \frac{6}{F^\pi_{_{\rm
S}}(0)} \frac{\partial F^\pi_{_{\rm S}}(Q^2)}{\partial Q^2}
\Big|_{Q^2=0} \label{eq:scalarr}
\end{equation}
of the pion can be related in \ensuremath{\chi}PT\ to the ratio of the pion decay constant and
its value at vanishing quark mass via
\cite{Gasser:1983kx}
\begin{equation}
\frac{F_\pi}{F} = 1
+ \frac{1}{6} M_\pi^2 \left\langle r^2\right\rangle^\pi_{_{\rm
S}}
+ \frac{13M_\pi^2}{192\pi^2F_\pi^2} + O(M_\pi^4) \,.
\end{equation}
The scalar radius can also be linked to $\pi\pi$-scattering amplitudes
\cite{Donoghue:1990xh,Gasser:1990bv,Moussallam:1999aq},
and the most recent phenomenological estimate of ref.
\cite{Colangelo:2001df},
based on this approach, is $\left\langle
r^2\right\rangle^\pi_{_{\rm{S}}}=0.61\pm0.04$~fm$^2$.
The chiral expansion of the pion scalar radius at next-to-leading order (NLO)
\cite{Gasser:1983kx}
contains only a single low-energy constant $\bar\ell_4$. Since $\bar\ell_4$
also appears in the NLO expressions of other observables, one can test the
consistency of \ensuremath{\chi}PT\ by comparing the lattice estimate of $\bar\ell_4$
extracted from the scalar form factor with that obtained from pseudoscalar
meson decay constants. Moreover, computing the pion scalar form factor in
lattice QCD gives a first-principles determination of $\bar\ell_4$ without any
modelling assumption, which would otherwise be implicit in a phenomenological
estimate. Another interesting feature of the pion scalar radius, from a
more technical point of view, is that a recent calculation in partially
quenched \ensuremath{\chi}PT\
\cite{Juttner:2011ur,Juttner:2012xs}
indicates that the disconnected contribution to the scalar radius is
not negligible.
Determining the scalar form factor of the pion in lattice QCD is
computationally very demanding, due to the occurrence of quark-disconnected
diagrams (see figure \ref{fig:3ptdiagrams}). Such contributions are absent in
the corresponding hadronic matrix element of the vector current as a result
of charge conjugation invariance. Disconnected diagrams are expensive to
compute on the lattice, because they require the trace of the propagator from
a point to itself to be evaluated; in order to reliably estimate this
quantity, it is necessary to compute the propagator from each point of the
lattice to itself. Naively, this would require an inversion of the lattice
Dirac operator for each lattice point, which is prohibitively
expensive. Efficient methods to calculate such all-to-all propagators have
therefore been developed, including the use of noisy sources
\cite{Bitar:1988bb},
low-mode averaging
\cite{Neff:2001zr,Giusti:2004yp,Bali:2005fu},
hopping parameter expansions
\cite{Thron:1997iy},
and truncated solver methods
\cite{Collins:2007mh}.
Nevertheless, the computational effort involved is significant.
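The idea behind noisy sources can be sketched generically as a stochastic trace estimator: for noise vectors $\eta$ with $\mathrm{E}[\eta\eta^\dagger]$ equal to the identity, one has $\mathrm{E}[\eta^\dagger D^{-1}\eta] = \mathrm{tr}\,D^{-1}$. The following Python toy model (a dense matrix standing in for the Dirac operator; it is not the production code used in this work) illustrates the principle with $Z_2$ noise:

```python
import numpy as np

def stochastic_trace(solve, dim, n_src, rng):
    """Estimate tr(D^{-1}) with Z2 noise: E[eta^T D^{-1} eta] = tr(D^{-1})
    whenever the noise satisfies E[eta eta^T] = identity."""
    est = 0.0
    for _ in range(n_src):
        eta = rng.choice([-1.0, 1.0], size=dim)  # one Z2 noise source
        est += eta @ solve(eta)                  # one inversion per source
    return est / n_src

# Toy check: for a diagonal D the Z2 estimator is exact source by source,
# since eta_i^2 = 1 and there are no off-diagonal cross terms.
D = np.diag([1.0, 2.0, 4.0, 8.0])
rng = np.random.default_rng(0)
estimate = stochastic_trace(lambda v: np.linalg.solve(D, v), 4, 1, rng)
# estimate equals 1 + 1/2 + 1/4 + 1/8 = 1.875 exactly in this diagonal case
```

For a non-diagonal operator the cross terms only vanish on average, and the variance of the estimator is what the variance-reduction techniques cited above are designed to tame.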
The pion scalar form factor is therefore far less well studied than the
vector form factor; so far only one calculation of the full scalar form
factor
\cite{Aoki:2009qn},
which has been performed on a rather small $32\times16^3$ lattice, exists.
In this paper we expand on our account in
\cite{Gulpers:2012kd}
by presenting the details and results of our calculation of the pion
scalar form factor using $\mathcal{O}(a)$-improved Wilson fermions.
Details of the lattice ensembles and observables used are given in
section~\ref{sec:sim}, and the methods used to calculate the disconnected
contribution using a combination of stochastic sources and a generalized
hopping parameter expansion are described in section~\ref{sec:inv}.
Our data analysis methods are detailed in section~\ref{sec:ratio}, and the
results for the form factor, as well as the scalar radius, including the
determination of the low-energy constant $\bar\ell_4$ from the chiral
extrapolation of the scalar radius are given in section~\ref{sec:results}.
We conclude with a summary of our main findings and
several remarks on the differences between our results and those of
\cite{Aoki:2009qn}
in section~\ref{sec:conclusions}.
\section{Simulation Setup}
\label{sec:sim}
Our calculation of the scalar pion form factor is performed with
$N_f=2$ dynamical flavors of non-perturbatively $\mathcal{O}(a)$-improved
Wilson fermions. The corresponding Dirac operator $D_{_{SW}}$ is given by
\begin{equation}
D_{\rm{sw}} = D_{\rm{w}} + c_{\rm{sw}}\,\frac{i}{4} \sigma_{\mu\nu}
\hat{F}_{\mu\nu} \label{eq:SW}
\end{equation}
where
\begin{equation}
D_{\rm{w}}=\frac{1}{2\kappa}\,\hbox{\upshape \small1\kern-3.3pt\normalsize1} -\frac{1}{2}\,H\label{eq:WilsonDirac}
\end{equation}
is the unimproved Wilson-Dirac operator, and the term with coefficient
$c_{\rm{sw}}$ in \eqref{eq:SW} is the Sheikholeslami-Wohlert (clover) term
\cite{Sheikholeslami:1985ij}
implementing $\mathcal{O}(a)$-improvement
\cite{Luscher:1996sc}.
Since the latter is local, all couplings between neighboring lattice points
appearing in \eqref{eq:WilsonDirac} are contained in the hopping matrix $H$.
The hopping parameter $\kappa$ determines the bare quark mass
\begin{equation}
m = \frac{1}{2a}\left(\frac{1}{\kappa}-\frac{1}{\kappa_c}\right)\,,
\end{equation}
where $\kappa_c$ is the critical value for which the quark (and hence pion)
mass vanishes. For our simulations we use gauge ensembles produced as part of
the CLS initiative, which have been generated using L\"uscher's
deflation-accelerated DD-HMC
algorithm
\cite{Luscher:2005rx,Luscher:2007es}.
An overview of the ensembles used in this study can be found in
table~\ref{tab:ensembles}. Here we use the non-perturbative determination of
the improvement coefficient $c_{\rm{sw}}$ for $N_f=2$ flavors
\cite{Jansen:1998mx}
at a single value of the gauge coupling, $\beta=5.3$. The corresponding
lattice spacing of $a=0.063$~fm was determined via the mass of the $\Omega$
baryon
\cite{Capitani:2011fg}.
A similar result for the lattice spacing was obtained by the ALPHA
collaboration using the kaon decay constant
\cite{Fritzsch:2012wq}.
\begin{table}[h]
\begin{tabular}{ccccccccccccccccc}
\hline\hline
$\beta$ && $a [\textnormal{fm}]$ && lattice && $m_\pi [\textnormal{MeV}]$ &&
$m_\pi L$ && $\kappa$ && Label && $N_{\rm{cfg}}$\\
\hline
$5.3$ && $0.063$ && $64\times32^3$ && 650 && 6.6 && $0.13605$ && E3 && $156$\\
$5.3$ && $0.063$ && $64\times32^3$ && 605 && 6.2 && $0.13610$ && E4 && $162$\\
$5.3$ && $0.063$ && $64\times32^3$ && 455 && 4.7 && $0.13625$ && E5 && $1000$\\
\hline
$5.3$ && $0.063$ && $96\times48^3$ && 325 && 5.0 && $0.13635$ && F6 && $300$\\
$5.3$ && $0.063$ && $96\times48^3$ && 280 && 4.3 && $0.13638$ && F7 && $351$\\
\hline\hline
\end{tabular}
\caption{Overview of the CLS ensembles used in this work. The lattice spacing
given was determined using the $\Omega$ baryon mass
%
\cite{Capitani:2011fg}.
%
Note that all ensembles fulfill $m_\pi L>4$.
}
\label{tab:ensembles}
\end{table}
\section{Calculation of disconnected diagrams}
\label{sec:inv}
\subsection{Inversion with stochastic sources}
\label{subsec:stochsources}
\begin{figure}[t]
\centering
\includegraphics[trim = 25mm 238mm 10mm 33mm, scale=0.75]{3pt.pdf}
\caption{The three contributions to the three-point function: the connected
diagram on the left, the disconnected diagram in the middle, and the
subtracted vacuum contribution on the right. The middle diagram
contains the loop factor $L(\vec{p},t)$.}\label{fig:3ptdiagrams}
\end{figure}
While the connected three-point function can be calculated using conventional
point-to-all propagators and the extended propagator method
\cite{Martinelli:1988rr},
the disconnected three-point function is computationally more demanding,
since the calculation of the loop $L(\vec{p},t)$
(c.f. figure \ref{fig:3ptdiagrams}) requires the all-to-all propagator,
i.e. the inverse of a generic lattice Dirac operator~$D$ for arbitrary source
and sink positions:
\begin{equation}
L(\vec{p},t) = \sum\limits_{\vec{x}}
e^{i\vec{p}\cdot\vec{x}}\,\,\textnormal{Tr}\left[\Gamma
D^{-1}(x,x)\right]\,.\label{eq:loop}
\end{equation}
One particular method for calculating the all-to-all propagator is based on
the use of stochastic sources
\cite{Bitar:1988bb,Bali:2009hu}.
As a first step one selects $N$ random source vectors, $\left|\eta_i\right>$,
which fulfill the conditions
\begin{equation}
\frac{1}{N}\sum\limits_{i=1}^N \left|\eta_i\right> =
0 +
\mathcal{O}\left(1/\sqrt{N}\right)\hspace{0.3cm}\textnormal{,}\hspace{2.5cm}
\frac{1}{N}\sum\limits_{i=1}^N
\left|\eta_i\right>\left<\eta_i\right|=\hbox{\upshape \small1\kern-3.3pt\normalsize1} +
\mathcal{O}\left(1/\sqrt{N}\right)\,. \label{eq:condstochsources}
\end{equation}
After solving the Dirac equation
$D\,\left|s_i\right> = \left|\eta_i\right>$ for all $N$ sources,
an estimate of the propagator is given by
\begin{equation}
D^{-1} = \frac{1}{N} \sum\limits_{i=1}^N
\left|s_i\right>\left<\eta_i\right|\,.\label{eq:invwithstoch}
\end{equation}
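As an illustration of \eqref{eq:invwithstoch} (a toy sketch, not our production code: a small random matrix stands in for the lattice Dirac operator, and $Z_2$ noise vectors are used), the stochastic estimate of a propagator trace can be written in a few lines:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
# Toy stand-in for a lattice Dirac operator: well conditioned, non-symmetric.
D = np.eye(n) + 0.1 * rng.standard_normal((n, n))
exact = np.trace(np.linalg.inv(D))

def stochastic_trace(N):
    """Estimate tr D^{-1} from N Z_2 noise sources eta, solving D s = eta."""
    acc = 0.0
    for _ in range(N):
        eta = rng.choice([-1.0, 1.0], size=n)
        s = np.linalg.solve(D, eta)   # s = D^{-1} eta, one inversion per source
        acc += eta @ s                # <eta| D^{-1} |eta>
    return acc / N

est = stochastic_trace(1000)
print(exact, est)                     # agree up to O(1/sqrt(N)) noise
```

The cost grows linearly with $N$ while the noise only falls like $1/\sqrt{N}$, which is exactly the trade-off discussed below.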
While the statistical error associated with the stochastic noise scales like
$N^{-1/2}$, the numerical cost of the method is proportional to the number of
stochastic sources, $N$. It is then clear that one has to optimize the value
of $N$, in order to balance good statistical accuracy against an acceptable
numerical effort. The generalized hopping parameter expansion described in the
following section is designed to reduce the statistical error of the
disconnected contribution for a given number of stochastic sources.
\subsection{The generalized hopping parameter expansion}
The inverse of the Wilson-Dirac operator can be expressed in terms of a
hopping parameter expansion (HPE)
\cite{Thron:1997iy,Bali:2009hu}.
As already indicated in \eqref{eq:WilsonDirac}, the unimproved Wilson-Dirac
operator can be split into two parts, one of which is proportional to the unit
matrix while the other matrix, the hopping term $H$, contains all couplings of
neighboring lattice points,
\begin{equation}
D_{\rm{w}}=\frac{1}{2\kappa}\,\hbox{\upshape \small1\kern-3.3pt\normalsize1} -\frac{1}{2}\,H\,,
\end{equation}
where $\kappa$ denotes the hopping parameter. For the calculation of the quark
propagator $D_{\rm{w}}^{-1}$, the hopping parameter expansion amounts to
performing a geometric series expansion in $\kappa$,
\begin{align}
D_{\rm{w}}^{-1}& = 2\kappa \sum\limits_{i=0}^{k-1} \left(\kappa\,H\right)^{i} +
\left(\kappa\,H\right)^{k} D_{\rm{w}}^{-1} \,. \label{eq:hpewoimp}
\end{align}
The advantage of rewriting the propagator in this way lies in the fact that
$D_{\rm{w}}^{-1}$ on the right-hand side is multiplied by $k$ powers of
$\kappa<1$. Hence one expects that the noise introduced by the stochastic
inversion of $D_{\rm{w}}$ is reduced accordingly.
When $\mathcal{O}(a)$-improvement is employed, equation \eqref{eq:hpewoimp}
must be generalized. According to equation \eqref{eq:SW} the improved operator
has
the form
\begin{equation}
D_{\rm{sw}}=\frac{1}{2\kappa}\,\hbox{\upshape \small1\kern-3.3pt\normalsize1} -\frac{1}{2}\,H + c_{\rm{sw}} B
\,,\label{eq:WD}
\end{equation}
where $B=\frac{i}{4}\sigma_{\mu\nu}\hat{F}_{\mu\nu}$ is the clover term. This can be
rewritten as
\begin{equation}
D_{_{SW}} =
A-\frac{1}{2}\,H=A\left(\hbox{\upshape \small1\kern-3.3pt\normalsize1}-\frac{1}{2}\,A^{-1}H\right)\hspace{0.5cm}
\textnormal{where}\hspace{0.3cm}A =
\frac{1}{2\kappa}\,\hbox{\upshape \small1\kern-3.3pt\normalsize1} + c_{_{SW}} B\,,\label{eq:Drewritten}
\end{equation}
which again allows for a geometric series expansion, resulting in
\begin{equation}
D_{_{SW}}^{-1} = \sum\limits_{i=0}^{k-1}
\left(\frac{1}{2}\,A^{-1}\,H\right)^{i}\,A^{-1} +
\left(\frac{1}{2}\,A^{-1}\,H\right)^{k} D_{_{SW}}^{-1}\,.\label{eq:hpe}
\end{equation}
In \eqref{eq:hpe}, the inverse of the matrix $A$, which is defined
in \eqref{eq:Drewritten}, appears. Without $\mathcal{O}(a)$-improvement, i.e.
$c_{_{SW}}=0$, this inverse is trivial, $A^{-1}=2\kappa$, and \eqref{eq:hpe}
reduces to \eqref{eq:hpewoimp}. For $c_{_{SW}}\neq0$, one can show that the
matrix $A$ is block-diagonal due to the local form of the clover term.
Therefore one only has to invert two $6\times6$ matrices for each lattice
point, which is still comparatively cheap in terms of the required computer
time.
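The rewriting in \eqref{eq:hpe} is an exact algebraic identity for any invertible $A$ and any order $k$, independently of whether the geometric series converges. A quick numerical check with small random matrices standing in for $A$ and $H$ (a sketch, not lattice code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 6
# Toy stand-ins: A (block-diagonal and cheap to invert on the lattice) and H.
A = 4.0 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
H = rng.standard_normal((n, n))
D = A - 0.5 * H                       # D_sw = A - H/2
D_inv = np.linalg.inv(D)

M = 0.5 * np.linalg.inv(A) @ H        # expansion matrix (1/2) A^{-1} H
series = sum(np.linalg.matrix_power(M, i) for i in range(k)) @ np.linalg.inv(A)
remainder = np.linalg.matrix_power(M, k) @ D_inv

# D^{-1} = sum_{i<k} M^i A^{-1} + M^k D^{-1}, term by term as in eq. (hpe)
assert np.allclose(series + remainder, D_inv)
```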
The inverse $D_{_{SW}}^{-1}$ on the right-hand side of \eqref{eq:hpe} can now
be estimated with stochastic sources as described above. In order to find a
good compromise between statistical fluctuations and low numerical cost, one
can now tune two parameters, namely the number of stochastic sources $N$ and
the order $k$ of the hopping parameter expansion.
\begin{figure}[h]
\centering
\includegraphics[scale=0.65]{loopgaugemean.pdf}
\caption{The relative statistical error of the loop $L(\vec{p}=0,\,t=0)$ for
different numbers of stochastic sources $N$ and orders $k$ of the hopping
parameter expansion, computed on the E4 ensemble.}
\label{fig:loopgaugemean}
\end{figure}
As an example of how these two parameters influence the effort required to reach
a given statistical precision, we show in figure \ref{fig:loopgaugemean} the
standard deviation of the loop $L(\vec{p}=0,\,t=0)$ divided by its gauge mean
(i.e. the relative statistical error) computed on 33
configurations of the E4 ensemble (cf. table \ref{tab:ensembles}). The loop
has been calculated stochastically without employing the HPE, as well as for
$k=2$, $4$, $6$ terms in the hopping parameter expansion, using $N=3$, $5$ and
$7$ sources in each case. One can see clearly that increasing the order of the
HPE decreases the statistical error of the loop. In addition, we observe the
expected behavior for the scaling of the error, $\sigma\propto1/\sqrt{N}$,
as indicated by the linear curves in figure \ref{fig:loopgaugemean}. Therefore
the intercept on the $y$-axis shows the remaining gauge noise in the
calculation. To obtain a good balance between the accuracy of the calculation
and the computer time needed, we use $N=3$ stochastic sources and the order
$k=6$ of the generalized HPE for the calculation of the loop. At this point
the error is already close to the gauge noise, and the relatively small gain
in statistical accuracy does not justify a further increase in the number of
stochastic sources $N$.
For the method to produce an exact result for the loop, the trace of the first
$k$ terms in the generalized hopping parameter expansion of
equation~\eqref{eq:hpe}, i.e. of
\begin{equation}
X\equiv \sum\limits_{i=0}^{k-1}
\Gamma\left(\frac{1}{2}\,A^{-1}\,H\right)^{i}\,A^{-1}
\,, \label{eq:otherterms}
\end{equation}
must be calculated as well. This can also be done with stochastic sources, by
inserting a unit matrix in \eqref{eq:otherterms} and using
\eqref{eq:condstochsources}, i.e.
\begin{equation}
\mathrm{tr~}X = \mathrm{tr~}(X\hbox{\upshape \small1\kern-3.3pt\normalsize1}) = \frac{1}{M}\sum\limits_{i=1}^M
\mathrm{tr~}\left(X\left|\eta_i\right>\left<\eta_i\right|\right) +
\mathcal{O}\left(1/\sqrt{M}\right)
= \frac{1}{M}\sum\limits_{i=1}^M
\left<\eta_i\right|X\left|\eta_i\right> + \mathcal{O}\left(1/\sqrt{M}\right)\,.
\end{equation}
Since this calculation does not require much computer time compared to the
inversion, we can use a large number of sources, $M=50$. A more detailed
discussion of the tuning of the generalized hopping parameter expansion
can be found in
\cite{Gulpers:Diplom}.
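Schematically, the pieces combine as follows: the contribution of the first $k$ terms is cheap (here evaluated exactly; in our calculation it is itself estimated with $M=50$ inexpensive stochastic sources), while the noisy remainder is suppressed by $k$ powers of the expansion matrix. A toy sketch with $\Gamma$ set to the unit matrix and random matrices standing in for $A$, $H$ (illustrative only, no gauge fields or momentum phases):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, N = 40, 4, 200
A = 4.0 * np.eye(n)                   # toy A; on the lattice it is block-diagonal
H = 0.8 * rng.standard_normal((n, n))
D = A - 0.5 * H
exact = np.trace(np.linalg.inv(D))    # the "loop" for Gamma = 1 in this toy

M = 0.5 * np.linalg.inv(A) @ H
# Contribution of the first k HPE terms (cheap, no inversion of D needed):
tr_X = np.trace(sum(np.linalg.matrix_power(M, i) for i in range(k))
                @ np.linalg.inv(A))

# Stochastic estimate of the remainder tr[ M^k D^{-1} ]:
Mk = np.linalg.matrix_power(M, k)
acc = 0.0
for _ in range(N):
    eta = rng.choice([-1.0, 1.0], size=n)
    acc += eta @ (Mk @ np.linalg.solve(D, eta))
est = tr_X + acc / N
print(exact, est)   # noise enters only through the M^k-suppressed remainder
```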
\section{Extracting the form factor}
\label{sec:ratio}
\subsection{Two- and three-point functions}
The scalar form factor of the pion can be determined from appropriate combinations of
the two- and three-point correlation functions. In order to compute the ground
state energy of a pion with momentum $\vec{p}$ we consider the two-point
function
\begin{equation}
C_{2\textrm{pt}}(t,\vec{p}) = \sum_{\vec{x}} e^{-i\vec{p}\cdot\vec{x}}
\langle \phi(t,\vec{x})\phi(0)\rangle
\end{equation}
of the pseudoscalar density
\begin{equation}
\phi(x)=\overline{q}(x)\gamma_5 q(x)\,.
\label{eq:piop}
\end{equation}
On a periodic lattice with time extent~$T$ the asymptotic behavior at large
Euclidean times~$t$ is given by
\begin{equation}
C_{2\textnormal{pt}}(t,\vec{p}) \sim
\frac{Z(\vec{p})^2}{2 E_\pi(\vec{p})}\left[e^{- t E_\pi(\vec{p})}
+ e^{-(T-t)E_\pi(\vec{p})} \right]\,,
\label{eq:2pt}
\end{equation}
where $E_\pi(\vec{p})$ is the energy of the pion, and
$Z(\vec{p})^2=\left|\left<\pi(\vec{p})\right|\phi(0)\left|0\right>\right|^2$
is the squared matrix element of the pseudoscalar density between a pion state
and the vacuum.
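For the cosh-like form \eqref{eq:2pt}, the pion energy can be extracted from, e.g., the identity $C(t-1)+C(t+1)=2\cosh(E_\pi)\,C(t)$, which follows directly from \eqref{eq:2pt} once the ground state dominates. A minimal sketch on synthetic data (the energy and amplitude below are illustrative, not our measured values):

```python
import numpy as np

T, E = 64, 0.18      # time extent and pion energy in lattice units (illustrative)
t = np.arange(T)
amp = 1.7            # hypothetical Z(p)^2 / (2 E_pi)
C = amp * (np.exp(-E * t) + np.exp(-E * (T - t)))

# C(t-1) + C(t+1) = 2 cosh(E) C(t) holds exactly for the form above:
t0 = 20
E_eff = np.arccosh((C[t0 - 1] + C[t0 + 1]) / (2 * C[t0]))
print(E_eff)         # recovers E = 0.18
```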
In order to describe the coupling of a scalar particle to the pion, one has to
consider insertions of the local scalar density
\begin{equation}
\mathcal{O}_{\rm S}(y)=\overline{q}(y)q(y)\,.
\end{equation}
The scalar form factor can be extracted from the three-point correlation
function
\begin{equation}
C_{3\textnormal{pt}}(t,t_s,\vec{p}_i,\vec{p}_f) =
\sum_{\vec{x},\vec{y}} e^{-i\vec{p}_f\cdot\vec{x}+i\vec{q}\cdot\vec{y}}
\langle\phi(t_s,\vec{x})\mathcal{O}(t,\vec{y})\phi(0)\rangle\,,
\end{equation}
where $\vec{p}_i, \vec{p}_f$ denote the three-momenta of the initial and final
pions, respectively, and $Q^2=-q^2=-(p_f-p_i)^2$ is the squared momentum
transfer. For $0\ll t\ll t_s$ the three-point function behaves like
\begin{equation}
C_{3\textnormal{pt}}(t,t_s,\vec{p}_i,\vec{p}_f) \sim
\frac{Z(\vec{p}_i) Z(\vec{p}_f)}{4E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}
\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)\left|\pi(\vec{p}_i)\right>
e^{-(t_s-t)E_\pi(\vec{p}_f)}e^{-t E_\pi(\vec{p}_i)}\,, \label{eq:3pt}
\end{equation}
and the matrix element
$\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)\left|\pi(\vec{p}_i)\right>$
that occurs in equation\,\eqref{eq:3pt} is the desired scalar form
factor. Note that in the scalar case the vacuum contribution
\begin{equation}
C_{\textrm{vac}}(t,t_s,\vec{p}_i,\vec{p}_f) = C_{2\textrm{pt}}(t_s,\vec{p}_f)
\sum_{\vec{y}} e^{i\vec{q}\cdot\vec{y}}\left<\mathcal{O}_{\rm{S}}(t,\vec{y})
\right>
\end{equation}
is non-zero for $\vec{q}=0$ and must be subtracted prior to fitting numerical
data for $C_{3\textnormal{pt}}$ to
equation~\eqref{eq:3pt}. Figure\,\ref{fig:3ptdiagrams} shows the three
diagrams that contribute to the three-point function, i.e. the quark-connected
and disconnected diagrams, as well as the subtracted vacuum contribution.
Our simulations are performed using Wilson fermions, which break chiral
symmetry explicitly. As a consequence, the scalar operator
$\mathcal{O}=\overline{q}q$ undergoes an additive renormalization besides the
multiplicative one, i.e.
\begin{equation}
\left<\mathcal{O}^{\rm R}_{\rm S}\right> = Z_{{\rm S}}
\left<\mathcal{O}_{\rm S}-b_0\right>\,.
\end{equation}
The subtraction of the vacuum contribution (cf. figure \ref{fig:3ptdiagrams})
ensures that the cubically divergent additive renormalization $b_0$ of the
scalar operator is canceled. Since the multiplicative renormalization constant
$Z_{\rm{S}}$ has not been determined in our calculation, the form factor data in
this paper are presented unrenormalized. Note, however, that $Z_{\rm{S}}$ drops out in
the calculation of the scalar radius (cf. equation \eqref{eq:scalarr}), which
implies that our results can be readily compared to phenomenology and other
lattice determinations.
\subsection{Building ratios}
To extract the scalar matrix element
$\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)\left|\pi(\vec{p}_i)\right>$,
it is convenient to form appropriate ratios of three- and two-point
functions. Here we follow the approach of ref.
\cite{Boyle:2007wg},
focusing, in particular, on the two ratios called $R_1$ and $R_3$,
\begin{align}
&R_1(t,t_s,\vec{p}_i,\vec{p}_f) =
\sqrt{\frac{C_{3\textnormal{pt}}(t,t_s,\vec{p}_i,\vec{p}_f)
C_{3\textnormal{pt}}(t,t_s,\vec{p}_f,\vec{p}_i)}
{C_{2\textnormal{pt}}(t_s,\vec{p}_i)C_{2\textnormal{pt}}(t_s,\vec{p}_f)}}\,\,,
\label{eq:Ratio1}\\
&R_3(t,t_s,\vec{p}_i,\vec{p}_f) =
\frac{C_{3\textnormal{pt}}(t,t_s,\vec{p}_i,\vec{p}_f)}
{C_{2\textnormal{pt}}(t_s,\vec{p}_f)}\cdot\sqrt{\frac{C_{2\textnormal{pt}}
(t_s, \vec{p}_f)
C_{2\textnormal{pt}}(t,\vec{p}_f)C_{2\textnormal{pt}}((t_s-t),\vec{p}_i)}
{C_{2\textnormal{pt}}(t_s,\vec{p}_i)C_{2\textnormal{pt}}(t,\vec{p}_i)
C_{2\textnormal{pt}}((t_s-t),\vec{p}_f) } }\label{eq:Ratio3}\,.
\end{align}
When the expressions of equations \eqref{eq:2pt} and \eqref{eq:3pt} for the
asymptotic forms of the two- and three-point functions are inserted into the
definition of $R_1$ one obtains
\begin{equation}
R_1(t,t_s,\vec{p}_i,\vec{p}_f)
\sim\frac{\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)
\left|\pi(\vec{p}_i)\right>}{2\sqrt{E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}}
\sqrt{\frac{e^{-E_\pi(\vec{p}_i) t_s}e^{-E_\pi(\vec{p}_f) t_s}}
{(e^{-E_\pi(\vec{p}_i) t_s} +
e^{-E_\pi(\vec{p}_i)(T-t_s)})\cdot(e^{-E_\pi(\vec{p}_f)
t_s} + e^{-E_\pi(\vec{p}_f) (T-t_s)})}} \label{eq:R1}\,.
\end{equation}
Here all overlap factors $Z(\vec{p})$, as well as any dependence on the time
$t$ of the operator insertion cancel. The remaining dependence on the
source-sink separation $t_s$ is due to the backward propagating pion, and the
corresponding expression under the square root in equation \eqref{eq:R1}
approaches unity as $T\to\infty$. For any finite value of $T$, it is easily
determined, since all pion energies $E_\pi(\vec{p})$ are known from the
two-point functions.
Inserting equations \eqref{eq:2pt} and \eqref{eq:3pt} into the expression for
$R_3$ leads to
\begin{equation}
R_3(t,t_s,\vec{p}_i,\vec{p}_f)
\sim\frac{\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)
\left|\pi(\vec{p}_i)\right>}{2\sqrt{E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}}\,f(t,
t_s)\, ,
\label{eq:R3}
\end{equation}
where the factor
\begin{equation}
\begin{split}
f(t,t_s) = &\frac{e^{-(t_s-t)E_\pi(\vec{p}_f)}\,e^{-t E_\pi(\vec{p}_i)}}{(e^{-t_s
E_\pi(\vec{p}_f)} + e^{-(T-t_s) E_\pi(\vec{p}_f)})}\times\\
&\sqrt{\frac{(e^{-t_s E_\pi(\vec{p}_f)} + e^{-(T-t_s) E_\pi(\vec{p}_f)})(e^{-t
E_\pi(\vec{p}_f)} + e^{-(T-t)
E_\pi(\vec{p}_f)})(e^{-(t_s-t) E_\pi(\vec{p}_i)} + e^{-(T-(t_s-t))
E_\pi(\vec{p}_i)})}{(e^{-t_s E_\pi(\vec{p}_i)} + e^{-(T-t_s)
E_\pi(\vec{p}_i)})(e^{-t E_\pi(\vec{p}_i)} + e^{-(T-t)
E_\pi(\vec{p}_i)})(e^{-(t_s-t) E_\pi(\vec{p}_f)} + e^{-(T-(t_s-t))
E_\pi(\vec{p}_f)})}}
\end{split}
\end{equation}
depends on both $t_s$ and the time $t$ of the
operator insertion. As in the case of $R_1$, the time dependence can be
determined for every $t$ and $t_s$ once the pion energies are known from the
two-point functions. For large time separations $0\ll t\ll t_s\ll T/2$, the
factor $f(t,t_s)\rightarrow 1$, i.e. the ratio $R_3$ forms a plateau, which is
proportional to the form factor.
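Since $f(t,t_s)$ depends only on the pion energies and the lattice time extent, it can be tabulated once the two-point fits are done. The following sketch (with illustrative energies, not our fitted values) checks that $f\to1$ for $0\ll t\ll t_s\ll T/2$:

```python
import numpy as np

T = 256
Ei, Ef = 0.15, 0.25          # illustrative pion energies in lattice units

def c2(t, E):
    """Time dependence of the two-point function; overlap factors cancel in f."""
    return np.exp(-E * t) + np.exp(-E * (T - t))

def f(t, ts):
    num = np.exp(-(ts - t) * Ef) * np.exp(-t * Ei) / c2(ts, Ef)
    rad = (c2(ts, Ef) * c2(t, Ef) * c2(ts - t, Ei)) / \
          (c2(ts, Ei) * c2(t, Ei) * c2(ts - t, Ef))
    return num * np.sqrt(rad)

print(f(24, 48))             # approaches 1 deep inside the plateau region
```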
Note that equation \eqref{eq:R3} is only valid when the same interpolating
operator (e.g. with smeared or point-like quark fields) is used at the pion
source and the pion sink. Otherwise, not all factors $Z(\vec{p})$ cancel out,
since
they depend on the source type
\cite{Bonnet:2004fr}.
Moreover, the three-point functions must obviously be computed with the same
type of source and sink as the two-point functions.
In the calculation of the quark-connected contribution to the three-point
function, Gaussian smearing
\cite{Gusken:1989ad,Alexandrou:1990dq,Allton:1993wc}
was only applied at the source. Therefore, the
connected
part could only be determined via the ratio $R_1$. By contrast, for the
quark-disconnected part we had smeared-smeared pion two-point functions at our
disposal. Since we found that the ratio $R_3$ gives a much cleaner signal than
$R_1$, we have computed $R_3$ for smeared-smeared correlation functions, in
order to determine the quark-disconnected contribution.
\section{Results}
\label{sec:results}
In this section we present our results for the ratios from which the scalar
form factor can be determined. For these results to be reliable it is
important to address the issue of unwanted contributions from excited states
which may arise if the separations in Euclidean time are not large enough to
guarantee that the correlation functions $C_{2\textrm{pt}}$ and
$C_{3\textnormal{pt}}$ can be described by their asymptotic behavior. We have
therefore performed a systematic study of the $t_s$-dependence of the ratios
$R_1$ and $R_3$.
Twisted boundary conditions
\cite{Bedaque:2004kc,Sachrajda:2004mi,Flynn:2005in,deDivitiis:2004kq,
Boyle:2007wg}
are widely used to compute vector form factors for nearly arbitrary momentum
transfers $Q^2$. In the case of the scalar form factor this is not an option, since
the effect of the twist angle cancels in the quark-disconnected contribution.
Therefore we discuss our results for vanishing momentum transfer, as well as
two non-zero values of $Q^2$ which can be realized via the usual
Fourier momenta.
\subsection{Ratios for $Q^2=0$}
In the case of vanishing momentum transfer, $Q^2=0$, i.e. for
$\vec{p}_i=\vec{p}_f=\vec{p}$, the ratios $R_1$ and $R_3$ are
identical. Specifically, for $\vec{p}_i=\vec{p}_f=0$ we have
\begin{equation}
R_1(t,t_s,0,0) \equiv R_3(t,t_s,0,0) =
\frac{C_{3\textnormal{pt}}(t,t_s,0,0)}{C_{2\textnormal{pt}}
(t_s,0)}
\sim \frac{\left<\pi(0)\right|\mathcal{O}_{\rm{S}}(0)
\left|\pi(0)\right>}{2m_\pi}
\underbrace{\frac{e^{-m_\pi t_s}}{e^{-m_\pi t_s} +
e^{-m_\pi(T-t_s)}}}_{{=f(t_s)}}\,,\label{eq:Rq20}
\end{equation}
where we have assumed that the ground state
dominates. Equation~\eqref{eq:Rq20} can be used to extract the form factor for
$Q^2=0$ from the simulated three- and two-point function data at zero
momentum. To increase the statistics we have exploited translational invariance
by computing the disconnected contribution for four different pion source
positions separated by $T/4$.
\begin{figure}
\centering
\includegraphics[scale=0.65]{conmom0ratiovsts.pdf}
\includegraphics[scale=0.65]{discmom0ratiovsts.pdf}
\caption{Plateau values plotted against the different $t_s$ for
vanishing momentum transfer $Q^2=0$ for the E5 ensemble. The
connected contribution
(smeared-local) is shown on the left, the disconnected (smeared-smeared)
on the right. A function of the form \eqref{eq:Rq20} has been
fitted to the data.}
\label{fig:mom0ratiovsts}
\end{figure}
To investigate the $t_s$-dependence of the ratios, we fitted constants
to the plateau regions of the ratios for the different values of $t_s$. The
plateau values obtained are plotted against $t_s$ in figure
\ref{fig:mom0ratiovsts} for the E5 ensemble, which has the highest statistics of
all ensembles studied so far. The blue line indicates a function of the form
\eqref{eq:Rq20}, which has been fitted to the data.
Clearly, the data deviate from the expected $t_s$-dependence for the
smaller values $t_s<24$ for both the connected and the disconnected
contribution. However, for larger source-sink separations our data show the
expected $t_s$-dependence. The deviation at small $t_s$ indicates the presence
of excited state contributions for $t_s<24$.
\begin{figure}
\centering
\includegraphics[scale=0.65]{conmom0.pdf}
\includegraphics[scale=0.65]{discmom0.pdf}
\caption{Results for the ratios corrected by the $t_s$-dependence for
vanishing momentum transfer $Q^2=0$ for the E5 ensemble. The
connected contribution
(smeared-local) is shown on the left, the disconnected (smeared-smeared)
on the right. The blue lines indicate the results
of the global fit to a constant. The fit ranges in $t$ are listed in table
\ref{tab:fitrangest}.}
\label{fig:mom0}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.65]{F7conmom0.pdf}
\includegraphics[scale=0.65]{F7discmom0.pdf}
\caption{Same as figure \ref{fig:mom0} shown for the F7
ensemble. The fit ranges in $t$ are listed in table
\ref{tab:fitrangest}.}
\label{fig:F7mom0}
\end{figure}
In figures \ref{fig:mom0} and \ref{fig:F7mom0} the ratios are
plotted against the time $t$ of the
operator insertion at each value of the sink timeslice $t_s$ for E5 and for F7,
which has the lightest pion mass of all ensembles studied so far.
To account for the $t_s$-dependence (cf. equation
\eqref{eq:Rq20}) we have divided the ratios by the factor $f(t_s)$. Provided
that excited state contributions are sufficiently suppressed, one expects the
quantity $R(t,t_s,0,0)/f(t_s)$ to form plateaus in $t$ about $t_s/2$, which
are independent of $t_s$.
From the plots for the E5 ensemble one can easily see that the
ratios show a systematic trend as
the source-sink separation $t_s$ is increased, which is particularly apparent
in the case of the quark-connected contribution. At the same time one observes
that consistent plateaus are obtained when $t_s\geq24$. For the
quark-disconnected part the trend is somewhat obscured by the larger statistical
errors. The same $t_s$-behavior was already observed in the plateau values
shown in figure
\ref{fig:mom0ratiovsts}. The most likely explanation is the presence of excited
state contributions for $t_s<24$.
In order to avoid a systematic bias, we have excluded ratios with $t_s<24$
from the subsequent analysis.
\begin{table}
\begin{tabular}{|cr|l|}
\hline
\multicolumn{2}{|c|}{label} & $t_s$ values in global fit\\
\hline\hline
\multirow{2}{*}{E3 - E5} & connected & $24$, $28$, $32$\\
& disconnected & $24$, $26$, $28$, $30$, $32$\\
\hline
\multirow{2}{*}{F6, F7} & connected & $28$, $36$, $40$, $44$, $48$\\
& disconnected & $24$, $28$, $32$, $36$, $40$, $44$, $48$\\
\hline
\end{tabular}
\caption{The values of $t_s$ that have been used in the global fits.}
\label{tab:fitranges}
\end{table}
The blue lines in the plots of figures \ref{fig:mom0} and
\ref{fig:F7mom0} show the results of
global fits to a constant within the plateau regions, applied to the data
computed for $t_s\geq24$. The values of $t_s$ that have been used for the
global fit are listed in table \ref{tab:fitranges}. Furthermore, in table
\ref{tab:fitrangest} we have compiled the fit ranges in $t$ applied to the
ensembles E5 and F7, which are shown in figures \ref{fig:mom0} and
\ref{fig:F7mom0}. The fit
result is proportional to the unnormalized scalar form factor
at vanishing momentum transfer,
\begin{equation}
\frac{R(t,t_s,0,0)}{f(t_s)} =
\frac{\left<\pi(0)\right|\mathcal{O}_{\rm{S}}(0)
\left|\pi(0)\right>}{2m_\pi} = \frac{1}{2m_\pi} F_{_{\rm S}}^{\rm
bare}(Q^2=0)\,,
\end{equation}
where the pion mass $m_\pi=E_\pi(0)$ is known from the two-point function
$C_{2\textnormal{pt}}(t,0)$.
We end this discussion with the observation that our method for the evaluation
of the quark-disconnected contribution can resolve the corresponding ratio
with good statistical accuracy at vanishing momentum transfer. The
plots on the right-hand side of figures \ref{fig:mom0} and
\ref{fig:F7mom0} clearly show a good signal,
which differs from zero by several standard deviations.
\begin{figure}[h]
\centering
\includegraphics[scale=0.65]{conmom1.pdf}
\includegraphics[scale=0.65]{discmom1.pdf}
\caption{Results for the ratios corrected by the time-dependence for
$Q^2=0.278$~GeV$^2$ for the E5 ensemble. The connected
contribution (smeared-local) is shown on the
left, the
disconnected (smeared-smeared) on the right. The blue lines indicate the results
of the global fit. The fit ranges in $t$ are listed in table
\ref{tab:fitrangest}.}
\label{fig:mom1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.65]{F7conmom1.pdf}
\includegraphics[scale=0.65]{F7discmom1.pdf}
\caption{Same as figure \ref{fig:mom1} shown for the F7
ensemble at $Q^2=0.121$~GeV$^2$. The fit ranges in $t$ are listed in table
\ref{tab:fitrangest}.}
\label{fig:F7mom1}
\end{figure}
\subsection{Ratios for $Q^2\neq0$}
As was mentioned above, we cannot employ twisted boundary conditions to study
the $Q^2$-dependence of the scalar form factor. Non-vanishing values of $Q^2$
are obtained by projecting the final-state pion and the insertion point of the
operator onto the values of $\vec{p}_f$ and $\vec{q}$, respectively.
On a finite lattice with spatial extent~$L$ the Fourier momenta are discrete,
and the smallest possible momentum is $\left|\vec{p}\right|=2\pi/L$.
For the
ensembles E5 and F7, the minimum momentum transfer corresponds to
$Q^2=0.278$~GeV$^2$ and $Q^2=0.121$~GeV$^2$, respectively.
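These values follow from the Fourier momenta together with the continuum dispersion relation; a quick cross-check using the rounded pion masses of table \ref{tab:ensembles} (so the E5 number differs marginally from the quoted $0.278$~GeV$^2$, which uses the measured pion energies):

```python
import numpy as np

hbar_c = 0.197327              # GeV fm
a = 0.063                      # lattice spacing in fm

def q2_min(m_pi, L):
    """Q^2 for p_i = 0 and |p_f| = 2*pi/(L*a), continuum dispersion relation."""
    p = 2.0 * np.pi / (L * a) * hbar_c       # smallest Fourier momentum in GeV
    e_f = np.hypot(m_pi, p)                  # E_f = sqrt(m_pi^2 + p^2)
    return p**2 - (e_f - m_pi)**2            # Q^2 = |q|^2 - (E_f - E_i)^2

print(q2_min(0.455, 32))       # E5: ~0.28 GeV^2
print(q2_min(0.280, 48))       # F7: ~0.12 GeV^2
```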
\begin{table}
\begin{tabular}{|c||p{0.3cm}cp{0.3cm}cp{0.3cm}|p{0.3cm}cp{0.3cm}cp{0.3cm}||p{
0.3cm}cp{0.3cm}cp{0.3cm}|p{0.3cm}cp{0.3cm}cp{0.3cm}|}
\hline
label &\multicolumn{5}{|c|}{connected $Q^2=0$}
&\multicolumn{5}{|c||}{disconnected $Q^2=0$}&\multicolumn{5}{|c|}{connected
$Q_1^2$}
&\multicolumn{5}{|c|}{disconnected $Q_1^2$} \\
&&$t_s$ && $t$&&&$t_s$ && $t$ &&&$t_s$ && $t$&&&$t_s$ && $t$ &\\
\hline\hline
E5 && $24$ && $5-19$ &&& $24$ && $4-20$&
&& $24$ && $7-14$ &&& $24$ && $9-21$&\\
&& && &&& $26$ && $4-22$&
&& && &&& $26$ && $10-23$&\\
&& $28$ && $6-22$ &&& $28$ && $5-23$&
&& $28$ && $9-15$ &&& $28$ && $10-24$&\\
&& && &&& $30$ && $5-25$&
&& && &&& $30$ && $10-26$&\\
&& $32$ && $12-20$ &&& $32$ && $6-26$&
&& $32$ && $10-16$ &&& $32$ && $10-27$&\\
\hline
F7 && && &&& $24$ && $4-20$&
&& && &&& $24$ && $3-21$&\\
&& $28$ && $6-15$ &&& $28$ && $5-23$&
&& $28$ && $6-12$ &&& $28$ && $4-25$&\\
&& && &&& $32$ && $6-26$&
&& && &&& $32$ && $7-29$&\\
&& $36$ && $13-28$ &&& $36$ && $7-29$&
&& $36$ && $6-13$ &&& $36$ && $12-32$&\\
&& $40$ && $18-25$ &&& $40$ && $8-32$&
&& $40$ && $18-27$ &&& $40$ && $16-35$&\\
&& $44$ && $18-28$ &&& $44$ && $8-36$&
&& $44$ && $18-27$ &&& $44$ && $20-39$&\\
&& $48$ && $19-29$ &&& $48$ && $8-40$&
&& $48$ && $18-28$ &&& $48$ && $23-43$&\\
\hline
\end{tabular}
\caption{Values of the source-sink separation $t_s$ and the interval in $t$
used in the global fits to the connected and disconnected contributions to the
E5 and F7 ensembles.}
\label{tab:fitrangest}
\end{table}
To increase statistics for quark-disconnected contributions we have again used
four different source positions in the calculation of quark propagators.
Additionally, we have averaged over all equivalent
momenta, e.g. $(0,0,2\pi/L)$, $(0,2\pi/L,0)$ and $(2\pi/L,0,0)$ for
the smallest non-zero value of $Q^2$.
As explained above, we use the ratio $R_1$ of equation \eqref{eq:R1} in the
analysis of the connected contribution, and the ratio $R_3$ of equation
\eqref{eq:R3} for the disconnected one.
Both ratios have known time-dependences which we can correct for. The ratios
with the time-dependence divided out are shown in figures
\ref{fig:mom1} and \ref{fig:F7mom1}, where
they are plotted against the operator insertion time $t$ for different values
of $t_s$. Within our statistical accuracy we do not see a trend in the data
computed for non-vanishing momentum transfer at different values of $t_s$,
unlike the case of $Q^2=0$ discussed earlier. Nonetheless, we again exclude
the data with $t_s<24$ from the analysis, to be sure that systematic effects
from excited states are under control.
\par
As before, the blue lines in figures \ref{fig:mom1} and
\ref{fig:F7mom1} indicate the results from a
global fit to the plateau regions for different values of $t_s\geq24$. From
the fit results the scalar form factor for this momentum transfer can be
calculated,
\begin{align}
&R_1(t,t_s,\vec{p}_i,\vec{p}_f)/f(t_s) =
\frac{\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)
\left|\pi(\vec{p}_i)\right>}{2\sqrt{E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}} =
\frac{1}{2\sqrt{E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}}\,F_{_{\rm S}}^{\rm
bare}(Q^2)\,, \\
&R_3(t,t_s,\vec{p}_i,\vec{p}_f)/f(t,t_s) =
\frac{\left<\pi(\vec{p}_f)\right|\mathcal{O}_{\rm{S}}(0)
\left|\pi(\vec{p}_i)\right>}{2\sqrt{E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}} =
\frac{1}{2\sqrt{E_\pi(\vec{p}_i)E_\pi(\vec{p}_f)}}\,F_{_{\rm S}}^{\rm
bare}(Q^2)\,.
\end{align}
While the relative contribution of the quark-disconnected diagram to the form
factor is smaller than in the case of vanishing momentum transfer,
$Q^2=0$, we note that our method is clearly able to resolve a signal.
In addition, we have included data for another momentum transfer, for which
the final-state pion is projected onto the momentum
$\left|\vec{p}_f\right|=2\cdot2\pi/L$. The
corresponding pion two-point functions $C_{2\textrm{pt}}(t_s,\vec{p}_f)$, which
occur in the ratios, fluctuate strongly, especially as $t_s$ approaches
$T/2$, such that a reliable estimate for the form factor is not
possible using the two-point data themselves. Instead of dividing the
three-point function by $C_{2\textnormal{pt}}(t_s,\vec{p}_f)$ we use the fitted
two-point function in order to compute the ratios $R_1$ and $R_3$, which
reduces their statistical fluctuations.
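As a minimal sketch of this last step (with hypothetical amplitude and energy parameters, not our actual analysis code), the fitted two-point function replaces the noisy measured data in the ratio:

```python
import numpy as np

# Illustrative sketch with hypothetical parameters: the fitted form of the
# pion two-point function replaces the noisy measured data in the ratio R_1.
def fitted_c2pt(t, A, E, T):
    """Symmetric two-point function A*(exp(-E*t) + exp(-E*(T-t)))."""
    return A * (np.exp(-E * t) + np.exp(-E * (T - t)))

def ratio_r1(c3pt, t_s, A, E, T):
    """R_1 = C_3pt(t, t_s) / C_2pt(t_s), using the fitted two-point function."""
    return c3pt / fitted_c2pt(t_s, A, E, T)
```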
\subsection{The $Q^2$ dependence of the form factor}
\begin{figure}[b]
\centering
\includegraphics[scale=0.65]{E5q2.pdf}
\includegraphics[scale=0.65]{F7q2.pdf}
\caption{The $Q^2$-dependence of the scalar form factor: on the left-hand side
E5 with a pion mass of $455$~MeV, on the right-hand side F7 with
$m_\pi=280$~MeV. The red points show the results for the total form factor
and the green points for the connected contribution only.}
\label{fig:q2}
\end{figure}
We briefly recall the definition of the scalar radius in terms of the scalar
form factor
\begin{equation}
\left\langle r^2\right\rangle^\pi_{_{\rm S}} = -\frac{6}{F^\pi_{_{\rm S}}(0)}
\frac{\partial F^\pi_{_{\rm S}}(Q^2)}{\partial
Q^2}\Big|_{Q^2=0}\,.
\label{eq:scalarrdef}
\end{equation}
The scalar form factor admits an expansion, which has the general form
\begin{equation}
F^\pi_{_{\rm S}}\left(Q^2\right) = F^\pi_{_{\rm S}}(0)
\left(1 - \frac{1}{6}\left\langle r^2\right\rangle^\pi_{_{\rm S}} Q^2
+ \mathcal{O}(Q^4) \right)\,,
\label{eq:Q2dependence}
\end{equation}
and which is consistent with the definition \eqref{eq:scalarrdef} of the scalar
radius.
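Explicitly, differentiating \eqref{eq:Q2dependence} gives
\begin{equation*}
\frac{\partial F^\pi_{_{\rm S}}(Q^2)}{\partial Q^2}\Big|_{Q^2=0}
= -\frac{F^\pi_{_{\rm S}}(0)}{6}\left\langle r^2\right\rangle^\pi_{_{\rm S}}\,,
\end{equation*}
so that inserting this slope into \eqref{eq:scalarrdef} indeed reproduces
$\left\langle r^2\right\rangle^\pi_{_{\rm S}}$.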
In practice, the slope at $Q^2=0$ is difficult to determine on the lattice in
a model-independent way. Usually one fits the lattice data for form factors
obtained at a few discrete values of $Q^2$ to some phenomenological model such
as vector meson dominance. In the case of the pion vector form factor, which
is amenable to the use of twisted boundary conditions, it is possible to tune
$Q^2$ so as to generate a high density of data points in the immediate
vicinity of $Q^2=0$ from which the slope can be extracted without any model
assumptions \cite{Brandt:2013dua}.
Here we must resort to a more naive treatment, since twisted boundary
conditions cannot be used to evaluate the quark-disconnected contribution, so
that the resolution in $Q^2$ is rather coarse. As a consequence, we
estimate the scalar radius from a linear fit over a relatively broad
interval in $Q^2$, using only three data points. However, we compare different
fit ans\"atze in an attempt to investigate the systematics of this procedure.
In figure \ref{fig:q2} we show the $Q^2$-dependence for the ensembles E5 and
F7. Both plots show the total form factor and the results
obtained when the
disconnected contributions are neglected. According to
\eqref{eq:Q2dependence},
a linear function is fitted to the data to estimate the scalar
radius. For both ensembles shown here the descending slope of the linear curves
is clearly steeper for the total form factor than for the connected part
only. This stresses the importance of including the disconnected diagram for
determining the scalar radius.
\begin{figure}[t]
\centering
\includegraphics[scale=0.75]{radiuscomp.pdf}
\caption{A comparison of different descriptions of the form factor
data: a linear interpolation (yellow) at the two smallest values of
$Q^2$, and both a linear (red) and a VMD-inspired (green) fit to the
three smallest values of $Q^2$, with the radii resulting from
considering only the connected part and the complete three-point
function shown using open and filled symbols, respectively.}
\label{fig:compare}
\end{figure}
\begin{table}[b]
\centering
\begin{tabular}{|cr||c|c|c|c|c||c|}
\hline
&& $F^\pi_{_{\rm S}}\left(0\right)$ & $Q_1^2
\left[\textnormal{GeV}^2\right]$ & $F^\pi_{_{\rm
S}}\left(Q_1^2\right)$ & $Q_2^2
\left[\textnormal{GeV}^2\right]$ &
$F^\pi_{_{\rm S}}\left(Q_2^2\right)$ & $\left\langle
r^2\right\rangle^\pi_{_{\rm S}} \left[\textnormal{fm}^2\right]$ \\
\hline\hline
\multirow{2}{*}{E3} & connected & $1.39\pm 0.01$ &\multirow{2}{*}{$0.319$}&
$1.20\pm0.03$ &\multirow{2}{*}{$0.565$}& $1.02\pm0.12$ &
$0.099\pm0.018$\\
& total & $1.97\pm0.11$ && $1.61\pm0.07$ && $1.33\pm0.14$ &
$0.134\pm0.032$\\
\hline
\multirow{2}{*}{E4} & connected & $1.39\pm0.01$ &\multirow{2}{*}{$0.311$}&
$1.17\pm0.04$ &\multirow{2}{*}{$0.548$}& $0.93\pm0.11$
& $0.125\pm0.017$\\
& total & $1.88\pm0.09$ && $1.70\pm0.06$ && $1.38\pm0.13$ &
$0.208\pm0.027$\\
\hline
\multirow{2}{*}{E5} & connected & $1.36\pm0.01$ &\multirow{2}{*}{$0.278$}&
$1.11\pm0.05$ &\multirow{2}{*}{$0.471$}& $1.02\pm0.19$
& $0.149\pm0.028$\\
& total & $1.82\pm0.05$ && $1.34\pm0.06$ && $1.17\pm0.20$ &
$0.208\pm0.027$\\
\hline
\multirow{2}{*}{F6} & connected & $1.44\pm0.03$ &\multirow{2}{*}{$0.128$}&
$1.36\pm0.07$ &\multirow{2}{*}{$0.221$}& $1.02\pm0.14$
& $0.197\pm0.069$\\
& total & $1.97\pm0.10$ && $1.60\pm0.08$ && $1.17\pm0.16$ &
$0.396\pm0.081$\\
\hline
\multirow{2}{*}{F7} & connected & $1.39\pm0.03$ &\multirow{2}{*}{$0.121$}&
$1.26\pm0.06$ &\multirow{2}{*}{$0.203$}& $1.17\pm0.23$
& $0.175\pm0.088$\\
& total & $1.88\pm0.09$ && $1.37\pm0.08$ && $1.23\pm0.24$
& $0.487\pm0.083$\\
\hline
\end{tabular}
\caption{Numerical results of the scalar pion form factor $F^\pi_{_{\rm
S}}\left(Q^2\right)$ for three different momentum transfers $Q^2$ and the
results for the scalar radius $\left\langle
r^2\right\rangle^\pi_{_{\rm S}}$ as determined from an uncorrelated linear fit.}
\label{tab:results}
\end{table}
\normalsize
For all ensembles studied so far, we find the results for the three different
$Q^2$ to be consistent with a linear $Q^2$ dependence within their statistical
errors. In order to investigate the systematic effect in the determination of
the scalar radius arising from the ansatz for the $Q^2$ dependence, we have
compared the linear fit to a VMD-inspired fit of the form $1/(1 + Q^2/M^2)^2$
as well as a linear interpolation using only the two smallest $Q^2$ values. As
can be inferred from figure \ref{fig:compare} no statistically significant
effect in the determination of $\left\langle r^2\right\rangle^\pi_{_{\rm S}}$
arising from the use of different ans\"atze is observed. This indicates that
any possible curvature contained in the data cannot be resolved at the current
level of statistical accuracy.\par
We choose the linear fit as a reasonable
compromise between achieving a well-motivated description of the data and
keeping the statistical error of the fitted radius in check. The results for
the form factor and the scalar radius from the linear fit are summarized in
table \ref{tab:results}.
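As an illustrative cross-check (a sketch, not the analysis code used in this work), the E5 ``total'' entries of table \ref{tab:results} can be approximately reproduced by an uncorrelated weighted linear fit:

```python
import numpy as np

# Illustrative sketch only: uncorrelated weighted linear fit
# F(Q^2) = a + b*Q^2 to the E5 "total" entries of the results table.
HBARC2 = 0.0389379                   # conversion factor (hbar*c)^2 in GeV^2 fm^2

q2 = np.array([0.0, 0.278, 0.471])   # momentum transfers in GeV^2
f  = np.array([1.82, 1.34, 1.17])    # F_S(Q^2), total
df = np.array([0.05, 0.06, 0.20])    # statistical errors

w = 1.0 / df**2                      # weights of the uncorrelated fit
xm = np.sum(w * q2) / np.sum(w)
ym = np.sum(w * f) / np.sum(w)
b = np.sum(w * (q2 - xm) * (f - ym)) / np.sum(w * (q2 - xm) ** 2)
a = ym - b * xm
r2 = -6.0 * b / a * HBARC2           # <r^2>_S in fm^2
print(round(r2, 2))                  # ~0.21, compatible with the quoted 0.208(27)
```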
\subsection{Chiral extrapolation}
\label{subsec:extra}
Since our simulations of the scalar radius have been performed with pion masses
larger than the physical mass $m_\pi>m_{\pi,\textnormal{phys}}$, we have
to perform a chiral extrapolation. In chiral perturbation theory at NLO the
scalar radius of the pion is
\cite{Gasser:1983yg,Gasser:1990bv,Bijnens:1998fm}
\begin{equation}
\left\langle r^2\right\rangle^\pi_{_{\rm S}} = \frac{1}{(4\pi F)^2}
\left(-\frac{13}{2}\right)
+ \frac{6}{(4\pi F)^2}\left[\overline{\ell}_4
+ \ln\left(\frac{m_{\pi,\textnormal{phys}}^2}{m_\pi^2}\right)\right]\,,
\label{eq:NLO}
\end{equation}
where $F=92.2$~MeV
\cite{PDG:2012}
is the pion decay constant.
\begin{figure}[b]
\begin{center}
\includegraphics[scale=0.75]{radiusvsmpicomp.pdf}
\end{center}
\caption{The $m_\pi^2$-dependence of the scalar radius. The blue band is a
fit to the lattice data obtained from both quark-connected and
-disconnected diagrams.}
\label{fig:rsqvsmpisq}
\end{figure}
In figure \ref{fig:rsqvsmpisq} the values obtained for
$\left\langle r^2\right\rangle^\pi_{_{\rm S}}$ are plotted against the square
of the pion mass, $m_\pi^2$. The point shown at the physical pion mass is the
value obtained from $\pi\pi$-scattering
\cite{Colangelo:2001df}.
The expression \eqref{eq:NLO} from NLO \ensuremath{\chi}PT\ has been fitted to the data
and the resulting curve is shown in blue. This fit allows a determination of
the low-energy constant $\overline{\ell}_4$, for which we find
$\overline{\ell}_4=4.74\pm0.09$, where the error is statistical
only.
This is in excellent agreement with the result of ref.
\cite{Brandt:2013dua},
which was extracted from chiral fits to the pseudoscalar decay constant
computed on the CLS ensembles at three different lattice spacings. The result
for the scalar radius at physical pion mass obtained from our NLO fit is
\begin{equation}
\left\langle r^2\right\rangle^\pi_{_{\rm S}} = 0.635\pm0.016\
\textnormal{fm}^2\,,
\end{equation}
which agrees very well with the $\pi\pi$-scattering value
$\left\langle r^2\right\rangle^\pi_{_{\rm S}}=0.61\pm0.04$~fm$^2$ reported in
ref.
\cite{Colangelo:2001df}.
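As a quick numerical check (illustrative only), one can evaluate \eqref{eq:NLO} at the physical point, where the chiral logarithm vanishes, using the fitted value $\overline{\ell}_4=4.74$:

```python
import math

# Illustrative check: evaluate the NLO formula at the physical pion mass,
# where the chiral logarithm vanishes, with the fitted lbar4 = 4.74.
HBARC2 = 0.0389379                    # (hbar*c)^2 in GeV^2 fm^2
F = 0.0922                            # pion decay constant in GeV
lbar4 = 4.74

pref = 1.0 / (4.0 * math.pi * F) ** 2             # 1/(4 pi F)^2 in GeV^-2
r2 = pref * (-13.0 / 2.0 + 6.0 * lbar4) * HBARC2  # <r^2>_S in fm^2
print(round(r2, 3))                   # ~0.636, cf. the quoted 0.635(16) fm^2
```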
In figure \ref{fig:rsqvsmpisq} one can see that our data are well described by
\ensuremath{\chi}PT\ at NLO. As already indicated in figure \ref{fig:q2}, the
quark-disconnected contribution to the scalar radius of the pion is not
negligible. The yellow points in figure \ref{fig:rsqvsmpisq} show the data
obtained from the connected contribution only. For the ensembles analyzed so
far, we find that the disconnected contribution to the scalar radius becomes
more important as the pion mass approaches its physical value. Clearly,
neglecting the disconnected diagram fails to reproduce the phenomenological
expectation for the scalar radius.
These findings differ from the results obtained by the JLQCD and
TWQCD collaborations
\cite{Aoki:2009qn},
where no significant pion mass dependence of the scalar radius was observed.
The reason for this discrepancy is presently unknown. Here we only comment
that the two simulations in question differ substantially regarding the value
of the lattice spacing, the minimum value of $m_\pi L$, and the type of
fermionic discretization. It should also be noted that the contribution of
quark-disconnected diagrams in
\cite{Aoki:2009qn},
though significant, was observed to be much smaller than in our study.
Clearly, more work is needed to investigate the systematics of these
calculations. To this end we will add more ensembles at smaller pion masses
and different lattice spacings.
\section{Conclusions}
\label{sec:conclusions}
The combination of the hopping parameter expansion with the use of stochastic
sources provides a powerful means for estimating quark-disconnected
contributions to hadronic form factors. We have been able to obtain a clearly
non-vanishing signal for the scalar form factor of the pion both at $Q^2=0$
(where there is a large subtraction of the vacuum contribution) and
at non-vanishing momentum transfer, where the
correlation functions become intrinsically noisy.
We find that the disconnected contribution to the scalar form factor is
not negligible, and that indeed the purely connected part of the form factor
fails to reproduce the expected logarithmic behaviour of the pion scalar radius
as a function of the pion mass. This is in qualitative agreement with what
has been found in partially quenched \ensuremath{\chi}PT\
\cite{Juttner:2011ur}.
From our determination of the pion scalar radius, we can derive a lattice
estimate of the low-energy constant $\overline{\ell}_4=4.74\pm0.09$, which is in
fair agreement with the phenomenological estimate
$\overline{\ell}_4=4.4\pm 0.2$
\cite{Colangelo:2001df}
based on the analysis of $\pi\pi$-scattering
amplitudes.
The present study is based on a single, albeit rather fine, lattice spacing.
It is therefore important to repeat this study on ensembles with different
values of the lattice spacing, to estimate the size of discretization effects
and perform an extrapolation to the continuum limit. Another potential source
of systematic error is finite-volume effects. While all of our lattices
satisfy $m_\pi L\ge 4$, it is desirable to include further, even larger,
lattice volumes to ensure that finite-volume effects are indeed fully under
control.
Another source of systematic error in the determination of the pion scalar
radius, and hence of $\bar{\ell}_4$, is the simple linear fit
used to estimate the derivative of the scalar form factor at vanishing $Q^2$.
It would be highly desirable to augment this somewhat naive approximation
by using partially twisted boundary conditions for the connected part along
the lines of
\cite{Jiang:2006gna,Brandt:2013mb}.
Unfortunately this method is fundamentally inapplicable to the disconnected
part, where the same quark propagator connects to the operator insertion on
both sides, and some interpolation will necessarily be required in this case.
However, all our data are consistent with a linear $Q^2$ dependence,
and any possible curvature cannot be resolved with our current accuracy.
Finally, another potential source of systematic error lies in the use of NLO \ensuremath{\chi}PT\
formulae, which may not always give a good description of pion form factors
\cite{Brandt:2013dua}.
The ability of the NLO expressions to describe the numerical data crucially
depends on the overall accuracy of the latter. If the statistical errors in
the determinations of the scalar form factor and radius can be substantially
decreased, one may have to resort to \ensuremath{\chi}PT\ at NNLO.
\begin{acknowledgments}
We acknowledge useful discussions with Andreas J\"uttner, Bastian Brandt and
Harvey B.~Meyer. Our calculations were performed on the ``Wilson'' HPC
Cluster at the Institute for Nuclear Physics, University of Mainz. We thank
Dalibor Djukanovic and Christian Seiwerth for technical support. We are
grateful for computer time allocated to project HMZ21 on the BlueGene
computers ``JUGENE'' and ``JUQUEEN'' at NIC, J\"ulich. This research has been
supported in part by the DFG in the SFB~1044. We are grateful to our
colleagues in the CLS initiative for sharing ensembles.
\end{acknowledgments}
\bibliographystyle{h-physrev4}
\section{Introduction}
With billions of videos uploaded to online video platforms, retrieving the video that best matches a given query is essential for efficient access to desired content. We therefore tackle the tasks of Text-to-Video (T2V) and Video-to-Text (V2T) retrieval in this paper. T2V ranks all candidate videos for each caption query, while V2T ranks all candidate captions for each video query.
Unlike images, video is a medium comprising multiple modalities. Considering and exploring the different modalities in videos is therefore essential for video understanding. Some traditional methods~\cite{Gabeur2020multi,dzabraev2021mdmmt} have exploited this point. For example, MMT~\cite{Gabeur2020multi} first extracted audio, visual, motion, face, scene, appearance, and OCR multi-modal features to obtain a better video representation. However, the coarse-grained strategy of concatenating all these features and feeding them into a transformer encoder may lead the model to focus on certain modalities, while other informative modalities are overwhelmed and ignored, hindering a full understanding of multi-modal video content.
Recently, some methods~\cite{luo2021clip4clip,Cao2022visual,min2022hunyuan_tvr} have utilized the pre-trained text-image retrieval model CLIP (contrastive language-image pretraining)~\cite{clip}, trained on 400 million text-image pairs to learn a joint representation of text and images, as the backbone for the text-video retrieval task. For instance, CLIP4Clip~\cite{luo2021clip4clip} first utilized CLIP to extract visual frame features and caption token features, and then accumulated the similarity scores between frame-level video features and sentence-level caption features for the final results, achieving a notable improvement in TVR. Based on CLIP4Clip, HunYuan\_tvr~\cite{min2022hunyuan_tvr} formulated video-sentence, clip-phrase, and frame-word relationships to explore hierarchical cross-modal interactions. Unfortunately, these CLIP-based works entirely ignore other rich multi-modal signals in videos, such as audio, motion, and text.
In this paper, we propose a novel method, Multi-Level Multi-Modal Hybrid Fusion (M2HF), which not only exploits multi-modal content in a fine-grained, multi-level way, but also embraces the powerful pre-trained model CLIP. First, M2HF early fuses audio and motion features respectively with visual features extracted by CLIP, producing audio-guided visual features and motion-guided visual features that explicitly attend to the sound source and moving objects. Then, we exploit the relationships between visual features, audio-guided visual features, motion-guided visual features, and text content from ASR (automatic speech recognition) in a multi-level way with caption queries. When a modality is missing, we present a simple alignment strategy to compensate. Finally, the results at each level are integrated by selecting the best ranking for each text-video pair as the final retrieval result.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose a \textit{Multi-Level Multi-Modal Hybrid Fusion} network which improves the performance of the text-video retrieval task by utilizing the capability of CLIP and exploring rich multi-modal information.
\item We explore \textit{multi-modal complement and multi-modal alignment} based on early fusion strategy. Multi-level framework is designed by building relationships between language queries and visual, audio, motion, and text information extracted from videos for fully exploiting interactions between caption and video.
\item We devise a \textit{late multi-modal balance fusion} method to integrate results of each level by choosing the best ranking among them for obtaining the final ranking result.
\item We introduce a novel \textit{multi-modal balance loss} for end-to-end training (E2E) of TVR task. M2HF can also be trained in an ensemble manner. The experimental results on the public datasets show remarkable performance in E2E and ensemble settings.
\item \textit{M2HF} achieves new state-of-the-art Rank@1
retrieval results of 64.9\%, 68.2\%, 33.2\%, 57.1\%, 57.8\% on MSR-VTT, MSVD, LSMDC, DiDemo, and ActivityNet.
\end{itemize}
\section{Related Work}
\subsection{Multi-modal Fusion}
\subsubsection{Early Fusion.} Such methods mainly fuse multiple modalities at the feature level. Bilinear pooling-based approaches fuse two modalities by learning a joint representation space, \emph{e.g.}, MLB (low-rank bilinear pooling)~\cite{Kim2017Hadamard} and MFB (multi-modal factorized bilinear pooling)~\cite{Yu2017Multimodal}, etc. Attention-based methods fuse features from different modalities based on the correlation, including channel-wise attention~\cite{hu2018squeeze}, non-local model~\cite{Wang2018Nonlocal}, and transformer~\cite{Vaswani2017Attention,Xu2020Cross}, etc.
\subsubsection{Late Fusion.}
Late fusion, also known as decision-level fusion, first trains different models on different modalities, and then fuses the predictions of these trained models~\cite{Snoek2005Early}. Late fusion methods mainly design the different combination strategies to merge models' outputs, such as voting, average combination, ensemble learning, and other combination methods.
\subsubsection{Hybrid Fusion.} The hybrid fusion method combines early fusion and late fusion, absorbing the advantages of both fusions. In this work, we employ a hybrid fusion mechanism to simultaneously exploit cross-modal relationships and tolerate the intervention and asynchrony of different modalities.
\subsection{Text-Video Retrieval}
For TVR, two research directions mainly exist: multi-modal features and large-scale pre-trained models.
\subsubsection{Text-Video Retrieval based on Multi-modal Features.}
One direction applies rich multi-modal cues to retrieve videos. MMT~\cite{Gabeur2020multi} encoded seven modalities such as audio, visual, and motion separately, and then fed them into a transformer for better video representation.
MDMMT~\cite{dzabraev2021mdmmt} extends MMT by training on a combination of multiple datasets.
MDMMT-2~\cite{kunitsyn2022mdmmt} introduced a three-stage training process and double positional encoding for better retrieval performance. However, these methods mainly feed various multi-modal features into an encoder to produce video representations, a fusion strategy that is rather simple and coarse-grained. In contrast, our method employs a fine-grained hybrid fusion scheme to fuse multi-modal features in a multi-level manner.
\begin{figure*}[t]
\centering
\includegraphics[width=0.89\linewidth]{fig/Main_Network.png}
\caption{The architecture of our multi-level multi-modal hybrid fusion network (M2HF) for text-video retrieval.}
\label{fig:Main_Network}
\end{figure*}
\subsubsection{Text-Video Retrieval based on CLIP.}
Another direction attempts to utilize pre-trained CLIP~\cite{clip} as the backbone for TVR task. The seminal work CLIP4Clip~\cite{luo2021clip4clip} exploited CLIP to extract features of visual frames and captions, and then computed the similarity scores between video and caption features. Based on CLIP, Fang \emph{et al.}~\cite{fang2021clip2video} introduced temporal difference block and temporal alignment block to enhance temporal relationships between video frames and video captions. Cheng \emph{et al.}~\cite{cheng2021improving} proposed a novel dual Softmax loss (DSL). Wang \emph{et al.}~\cite{wang2022disentangled} carefully studied the cross-modality interaction process and representation learning for TVR, and proposed a disentangled framework, including a weighted token-wise interaction (WTI) block and a channel decorrelation regularization block, to model the sequential and hierarchical representation. Very recently, Gorti \emph{et al.}~\cite{Gorti2022xpool} leveraged CLIP as a backbone and proposed a parametric text-conditioned pooling to aggregate video frame representations based on the similarity between video frame and text. However, these CLIP-based methods mainly focus on the visual modality, while ignoring other information in videos, such as motion, audio, and text, which are still important cues for TVR task.
\section{Proposed Method}
\subsection{Overall Architecture}
Fig.~\ref{fig:Main_Network} illustrates the entire pipeline of our M2HF. Given a set of video-caption pairs $\{(V_{1},c_{1}),...,(V_{n},c_{n})\}$, our method measures the similarity of video and caption at four levels. To realize multi-modal complementarity, M2HF establishes relationships and conducts similarity computation between caption $c_{j}$ and the visual $v_{i}$, audio $a_{i}$, motion $m_{i}$, and text $t_{i}$ signals extracted from video $V_{i}$, respectively.
Multi-modal fusion proceeds in a hybrid way: motion and audio features are early fused with visual features, \emph{i.e.}, the motion-visual fusion and audio-visual fusion in Fig.~\ref{fig:Main_Network}, and the ranking results of all levels are late fused into the final retrieval results by selecting the best ranking from the output of each level. We aggregate multi-modal cues in a fine-grained way for more accurate retrieval, and provide a multi-modal alignment method for the situation of modality missing.
Furthermore, two training strategies (E2E and ensemble) are provided in this paper, and a novel multi-modal balance loss is proposed to support E2E training by minimizing the score of each pair and balancing the per-level losses.
\subsection{Visual-to-Caption Level}
Visual-to-caption level is designed to build the cross-modality relationship between visual features from the video and caption query features. The image encoder ViT~\cite{dosovitskiy2020vit} and the text encoder Bert~\cite{Devlin2019Bert} of CLIP are first used to extract visual features ($v_{i} \in \mathbb{R}^{F \times d_{v}}$) and caption features ($c_{i} \in \mathbb{R}^{T \times d_{c}}$), respectively, where $F$ is the number of video frames, $T$ is the number of caption tokens, and $d_{v}$ and $d_{c}$ represent the dimensions of visual and caption features, respectively. To compute the similarity matrix $\mathcal{S}_{c-v}$ between caption features and visual features, we choose the weighted token-wise interaction (WTI) function~\cite{wang2022disentangled}. The entire process is computed as:
\begin{equation}
\begin{split}
c2v\_logits & = \sum_{p=1}^{T}f^{p}_{cw,\theta}(c_{i})\max^{F}_{q=1}\left(\frac{c^{p}_{i}}{\left \|c^{p}_{i}\right \|_{2}}\right)^{\tat}\frac{v^{q}_{i}}{\left \|v^{q}_{i}\right \|_{2}}, \\
v2c\_logits & = \sum_{q=1}^{F}f^{q}_{vw,\theta}(v_{i})\max^{T}_{p=1}\left(\frac{c^{p}_{i}}{\left \|c^{p}_{i}\right \|_{2}}\right)^{\tat}\frac{v^{q}_{i}}{\left \|v^{q}_{i}\right \|_{2}}, \\
\mathcal{S}_{c-v}[i,i] & = \wti(c_{i},v_{i}) = \frac{c2v\_logits + v2c\_logits}{2.0},
\end{split}
\label{equ:wti}
\end{equation}
where $f_{cw,\theta}$ and $f_{vw,\theta}$ both consist of an MLP (multilayer perceptron) followed by a Softmax, $i$ is a sampled index, and $p$ and $q$ denote the indices of caption tokens and video frames, respectively.
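As a toy sketch of this scoring function (the token and frame weights below stand in for the learned MLP+Softmax outputs $f_{cw,\theta}$ and $f_{vw,\theta}$; dimensions are hypothetical), the WTI score can be computed as:

```python
import numpy as np

# Toy sketch of the weighted token-wise interaction; the weight vectors
# stand in for the learned MLP+Softmax outputs and are assumed to sum to one.
def wti(c, v, w_c, w_v):
    """c: (T, d) caption tokens, v: (F, d) frame features,
    w_c: (T,) token weights, w_v: (F,) frame weights."""
    cn = c / np.linalg.norm(c, axis=1, keepdims=True)
    vn = v / np.linalg.norm(v, axis=1, keepdims=True)
    sim = cn @ vn.T                        # (T, F) cosine similarities
    c2v = np.sum(w_c * sim.max(axis=1))    # best frame per caption token
    v2c = np.sum(w_v * sim.max(axis=0))    # best caption token per frame
    return 0.5 * (c2v + v2c)
```

For identical token and frame sets the score is exactly one, and it is bounded by one in general since cosine similarities are.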
\subsection{Audio-to-Caption Level}
\label{subsec:audio-to-caption}
In the audio-to-caption level, audio features and visual features are early fused to highlight the visual semantic information related to audio. Then, the audio-guided visual features are used to build connections with caption features. Audio features ($a_{i} \in \mathbb{R}^{F \times d_{a}}$) are extracted from the log mel-spectrogram via the VGGish~\cite{Shawn2017CNN} pre-trained on AudioSet~\cite{Gemmeke2017Audio}, where $d_{a}$ is the dimension of audio features.
We adopt an MFB-based method to early fuse the audio and visual features, yielding high-level semantic audio-visual fusion features $\mathcal{F}_{av_{i}} \in \mathbb{R}^{F \times d_{v}}$. Specifically, audio features $a_{i}$ and visual features $v_{i}$ are projected and aligned to the same dimension $kd$ using linear layers and ReLU. The aligned audio and visual features are multiplied and fed into a sum pooling layer with kernel size $k$. The formulation is as follows:
\begin{equation}
\mathcal{F}_{av_{i}} = \text{Drop}(\text{SP}(\Psi^{\tat}a_{i} \odot \Phi^{\tat}v_{i},k)),
\end{equation}
where $\Psi \in \mathbb{R} ^{d_{a} \times(kd)}$ and $\Phi \in\mathbb{R}^{d_{v} \times(kd)}$ are two learnable matrices, $\odot$ represents element-wise product, $\text{SP}(\cdot,k)$ is the sum pooling with kernel size $k$ and stride $k$, and $\text{Drop}(\cdot)$ is a dropout layer to prevent the over-fitting. To stabilize the model training, power and $L_{2}$ normalizations are utilized:
\begin{equation}
\label{equ:map_mfb_s}
\mathcal{F}_{av_{i}} \leftarrow \sign (\mathcal{F}_{av_{i}}) \sqrt{\left | \mathcal{F}_{av_{i}} \right |},
\mathcal{F}_{av_{i}} \leftarrow \mathcal{F}_{av_{i}}/\left \| \mathcal{F}_{av_{i}} \right \|.
\end{equation}
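A minimal sketch of this fusion step (hypothetical dimensions; the random projection matrices stand in for the learned $\Psi$ and $\Phi$, and dropout is omitted):

```python
import numpy as np

# Sketch of MFB fusion: project, multiply element-wise, sum-pool with
# kernel/stride k, then apply power and L2 normalization.
def mfb_fuse(a, v, Psi, Phi, k):
    """a: (F, d_a) audio, v: (F, d_v) visual, Psi: (d_a, k*d), Phi: (d_v, k*d)."""
    z = (a @ Psi) * (v @ Phi)                      # element-wise product, (F, k*d)
    z = z.reshape(z.shape[0], -1, k).sum(axis=2)   # sum pooling over groups of k
    z = np.sign(z) * np.sqrt(np.abs(z))            # power normalization
    return z / np.linalg.norm(z, axis=1, keepdims=True)  # L2 normalization
```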
Next, the audio-visual fusion feature guides the raw visual features $v_{i}$ by channel-wise attention operation for obtaining the final audio-guided visual features. A squeeze-and-excitation operation~\cite{hu2018squeeze} is applied to produce channel-wise attentive weights ($\mathcal{W}_{i}^{\mathcal{A}} \in \mathbb{R}^{d_{v}\times 1}$). This process is formulated as:
\begin{equation}
\mathcal{W}_{i}^{\mathcal{A}} = \delta (\mathbf{W}_{2} \sigma (\mathbf{W}_{1}(\mathcal{F}_{av_{i}}))),
\end{equation}
where $\mathbf{W}_{1} \in \mathbb{R}^{d_v \times d}$ and $\mathbf{W}_{2} \in \mathbb{R}^{d \times d_v}$ are two linear transformations with $d=\frac{d_v}{2}$; $\sigma$ and $\delta$ denote the ReLU and sigmoid functions, respectively.
The final audio-guided visual features are obtained via:
\begin{equation}
av_{i} = \mathcal{W}_{i}^{\mathcal{A}} \odot v_{i}.
\end{equation}
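A minimal sketch of this guidance step (hypothetical weight matrices standing in for the learned $\mathbf{W}_{1}$ and $\mathbf{W}_{2}$; any pooling over frames is omitted):

```python
import numpy as np

# Sketch of the squeeze-and-excitation gating: fused features produce
# channel weights in (0, 1) that modulate the raw visual features.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gate(fusion, v, W1, W2):
    """fusion, v: (F, d_v); W1: (d_v, d), W2: (d, d_v) with d = d_v // 2."""
    h = np.maximum(fusion @ W1, 0.0)   # squeeze + ReLU
    w = sigmoid(h @ W2)                # excitation: channel weights in (0, 1)
    return w * v                       # audio-guided visual features
```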
Similar to the visual-to-caption level, the relationship between audio-guided visual features and caption features is formulated by WTI. The similarity matrix $\mathcal{S}_{c-a}$ is obtained analogously to Eq.~(\ref{equ:wti}) by replacing $v_{i}$ with $av_{i}$.
Considering that not all videos contain audio signals, \emph{i.e.}, the modality missing problem, we pad missing audio features with the element $1$. The primary idea of this alignment strategy is that the guidance mechanism can still work, falling back to guiding with the original visual features.
\subsection{Motion-to-Caption Level}
Motion-to-caption level early fuses motion features with visual features, obtaining motion-guided visual features that explicitly consider object movement in the visual modality. The fused features are then used to compute the similarity with caption features. In our work, S3D~\cite{Xie2018RethinkingSF} pre-trained on Kinetics~\cite{carreira2017quo} is applied to extract motion features ($m_{i} \in \mathbb{R}^{F \times d_{m}}$), where $d_{m}$ is the dimension of motion features.
For the fusion of motion features and visual features, we utilize the encoder of transformer block~\cite{Vaswani2017Attention}. The detailed calculation process is as follows:
\begin{equation}
\begin{split}
&\Encoder(Q,K,V) = \LN(X+Y),\\
&X=\MHA(\tilde{Q},\tilde{K},\tilde{V}), Y=\FFN(\LN(X+\tilde{Q})),\\
&\tilde{Q}=Q\mathbf{W}_{\tilde{Q}}, \tilde{K}=K\mathbf{W}_{\tilde{K}}, \tilde{V}=V\mathbf{W}_{\tilde{V}},
\end{split}
\label{equ:Encoder}
\end{equation}
where $Q$, $K$, $V \in \mathbb{R}^{F \times d}$ are input features of transformer's encoder; $\mathbf{W}_{\tilde{Q}},\mathbf{W}_{\tilde{K}},\mathbf{W}_{\tilde{V}} \in \mathbb{R}^{d \times d}$ are projection matrices; $\LN$ refers to the layer normalization; $\MHA$ is the multi-head attention~\cite{Vaswani2017Attention} with 4 heads; and $\FFN$ is the feed forward network.
As shown in Fig.~\ref{fig:Main_Network}, motion features $m_{i}$ and visual features $v_{i}$ are first fed into the intra-modality attention module to learn the informative segments of each modality.
The motion modality is taken as an example to explain the intra-modality attention module. Specifically, motion features are first projected, yielding query features ($Q \in \mathbb{R}^{F \times d_{m}}$), key features ($K \in \mathbb{R}^{F \times d_{m}}$), and value features ($V \in \mathbb{R}^{F \times d_{m}}$). They are then fed into the encoder of the transformer, producing the self-attentive motion features $m^{self} = \Encoder(Q,K,V)$ via Eq.~(\ref{equ:Encoder}). Self-attentive visual features $v^{self}$ are obtained in the same way.
Next, the inter-modality attention module is introduced to exploit the relationship between motion and visual features, again via the encoder of the transformer. Different from the intra-modality computation, the key and value features of the inter-modality attention are the concatenation of one modality's features and the self-attentive features of the other modality. Cross-modality features $m^{cross}$ and $v^{cross}$ are obtained as:
\begin{equation}
\begin{split}
&m^{cross} = \Encoder(m_{i},\cat(m_{i},v^{self}_{i}),\cat(m_{i},v^{self}_{i})),\\
&v^{cross} = \Encoder(v_{i},\cat(v_{i},m^{self}_{i}),\cat(v_{i},m^{self}_{i})),
\end{split}
\end{equation}
where $\cat$ is the concatenation of two features in the temporal dimension. The cross-modality features are then integrated by the encoder of the transformer, yielding motion-visual fusion features $\mathcal{F}_{mv_{i}} \in \mathbb{R}^{F\times d_{v}}$ via:
\begin{equation}
\begin{split}
&\mathcal{F}_{mv_{i}} = \Encoder(Q,K,V),\\
&Q = m^{cross}_{i} \odot v^{cross}_{i}, K,V = \cat(m^{cross}_{i}, v^{cross}_{i}).
\end{split}
\end{equation}
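A toy single-head sketch of this inter-modality attention pattern (layer normalization, multi-head splitting, and the feed-forward network of the full encoder are omitted; dimensions are hypothetical):

```python
import numpy as np

# Toy single-head scaled dot-product attention, standing in for the
# transformer encoder used above (LN, multi-head, and FFN omitted).
def attn(q, k, v):
    s = q @ k.T / np.sqrt(q.shape[1])             # scaled dot-product scores
    w = np.exp(s - s.max(axis=1, keepdims=True))  # row-wise softmax
    w = w / w.sum(axis=1, keepdims=True)
    return w @ v

# Inter-modality pattern: keys/values concatenate one modality's features
# with the self-attentive features of the other modality.
def inter_modal(m, v_self):
    kv = np.concatenate([m, v_self], axis=0)
    return attn(m, kv, kv)
```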
After that, $\mathcal{F}_{mv_{i}}$ is used to guide the visual features to highlight the moving objects. The guidance weights are first estimated via the squeeze-and-excitation block~\cite{hu2018squeeze} as follows:
\begin{equation}
\mathcal{W}_{i}^{\mathcal{M}} = \delta (\mathbf{W}_{4}\sigma(\mathbf{W}_{3}F_{mv_{i}})),
\end{equation}
where $\mathbf{W}_{3} \in \mathbb{R}^{d_v \times d}$ and $\mathbf{W}_{4} \in \mathbb{R}^{d \times d_v}$ are two linear transformations with $d=\frac{d_v}{2}$.
Motion-guided visual features $mv_{i}$ are obtained via:
\begin{equation}
mv_{i} = \mathcal{W}_{i}^{\mathcal{M}} \odot v_{i}.
\end{equation}
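A minimal sketch of the guidance step, assuming $\sigma$ is ReLU and $\delta$ is the sigmoid (the usual squeeze-and-excitation choices, which the text does not spell out), with random stand-in features:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
F_frames, d_v = 12, 64
d = d_v // 2                                 # reduction: d = d_v / 2
F_mv = rng.standard_normal((F_frames, d_v))  # motion-visual fusion features
v    = rng.standard_normal((F_frames, d_v))  # visual features v_i

# hypothetical SE-style linear transformations W_3 and W_4
W3 = rng.standard_normal((d_v, d)) * 0.1
W4 = rng.standard_normal((d, d_v)) * 0.1
weights = sigmoid(relu(F_mv @ W3) @ W4)      # guidance weights in (0, 1)
mv = weights * v                             # motion-guided visual features
print(mv.shape)  # (12, 64)
```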
Finally, the similarity matrix $\mathcal{S}_{c-m}$ between motion-guided visual features and caption features is calculated via WTI, \emph{i.e.}, by replacing $v_{i}$ in Eq.~(\ref{equ:wti}) with $mv_{i}$.
\subsection{Text-to-Caption Level}
Text information is extracted from the video via ASR. Since text and caption belong to the same modality, the similarity matrix $\mathcal{S}_{c-t}$ can be computed directly without intervention from other modalities. Jaccard scores~\cite{Jaccard1912Distrbution} are computed for each pair of caption and text. Before the computation, several pre-processing operations are conducted. First, stop words, including pronouns, integrated nouns, and other less representative words, are filtered from text and caption. Then, the remaining words are filtered again to keep only nouns, since nouns are more representative than verbs, adverbs, prepositions, and others. Next, the filtered tokens are reduced to a common root. Finally, all letters are lowercased, yielding the final token sets of text $S_{t}$ and caption $S_{c}$. The Jaccard score is calculated as:
\begin{equation}
\mathcal{S}_{c-t}[i,i] = \text{Jaccard}(S_{c},S_{t}) = \frac{\text{len}(S_{c}\cap S_{t})}{\text{len}(S_{c} \cup S_{t})},
\end{equation}
where $\text{Jaccard}(\cdot, \cdot)$ computes the Jaccard similarity, and $\text{len}(\cdot)$ returns the number of tokens in a set.
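The score itself is a few lines of Python. In the sketch below, the stop-word removal, noun filtering, and stemming steps are stubbed with plain lowercase tokenization, so the token sets are purely illustrative:

```python
def jaccard(caption_tokens, text_tokens):
    """Jaccard similarity between two token sets: |intersection| / |union|."""
    S_c, S_t = set(caption_tokens), set(text_tokens)
    union = S_c | S_t
    return len(S_c & S_t) / len(union) if union else 0.0

# toy preprocessing: lowercase split only; the paper additionally removes
# stop words, keeps only nouns, and reduces tokens to a common root
caption = "A man fixes the bike wheel with a wrench".lower().split()
text    = "wrench turning the nut on the wheel of the bike".lower().split()
score = jaccard(caption, text)
print(round(score, 3))  # 0.333  (4 shared tokens out of 12 distinct)
```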
\subsection{Ensemble and E2E Text-to-Video Retrieval}
\subsubsection{Ensemble Retrieval.} Inspired by ensemble learning, we
first train each multi-modality level model independently, and then fuse their predictions. The dual softmax loss (DSL)~\cite{cheng2021improving} is applied for the visual-to-caption, audio-to-caption, and motion-to-caption levels. DSL pursues the dual optimal match and thus obtains good retrieval performance. Specifically, the similarity matrices $\mathcal{S}_{c-v}$, $\mathcal{S}_{c-a}$, and $\mathcal{S}_{c-m}$ are fed into the DSL function.
We take $\mathcal{S}_{c-v}$ as an example; the visual-to-caption loss $\mathcal{L}_{v} = -\frac{1}{B}\sum_{i=1}^{B}\mathbf{L}_{v}[i]$ is formulated as follows:
\begin{equation}
\begin{split}
&P_{v2c}[i,j] = \frac{e^{(\lambda\mathcal{S}_{c-v}[i,j])}}{\sum_{k=1}^{B}{e^{(\lambda\mathcal{S}_{c-v}[k,j])}}}, \\
&P_{c2v}[i,j] = \frac{e^{(\lambda\mathcal{S}_{c-v}[i,j])}}{\sum_{k=1}^{B}{e^{(\lambda\mathcal{S}_{c-v}[i,k])}}}, \\
&\mathbf{L}_{v2c}[i] = log(\frac{e^{(\eta \mathcal{S}_{c-v}[i,i] P_{v2c}[i,i])}}{\sum_{j=1}^{B}e^{(\eta \mathcal{S}_{c-v}[i,j] P_{v2c}[i,j])}}),\\
&\mathbf{L}_{c2v}[i] =log(\frac{e^{(\eta \mathcal{S}_{c-v}[i,i] P_{c2v}[i,i])}}{\sum_{j=1}^{B}e^{(\eta \mathcal{S}_{c-v}[j,i] P_{c2v}[j,i])}}), \\
&\mathbf{L}_{v} = \mathbf{L}_{v2c} + \mathbf{L}_{c2v},
\end{split}
\end{equation}
where $\lambda$ is a temperature hyper-parameter to smooth the gradients, $B$ is the batch size, and $\eta$ is a logit scaling factor. $\mathcal{L}_{a}$ and $\mathcal{L}_{m}$ are obtained in the same way.
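A NumPy sketch of the dual softmax loss for one similarity matrix may clarify the data flow: a softmax prior along the opposite axis re-weights the similarities before the symmetric cross-entropy. The temperature/scaling values and the random matrix with a boosted diagonal (standing in for matched caption-video pairs) are placeholders.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dsl_loss(S, lam=10.0, eta=10.0):
    """Dual softmax loss sketch over a B x B caption-video similarity matrix."""
    B = S.shape[0]
    diag = (np.arange(B), np.arange(B))
    P_v2c = softmax(lam * S, axis=0)   # prior over captions for each video
    P_c2v = softmax(lam * S, axis=1)   # prior over videos for each caption
    # cross-entropy on the re-weighted logits, both retrieval directions
    L_v2c = np.log(softmax(eta * S * P_v2c, axis=1)[diag])
    L_c2v = np.log(softmax(eta * S * P_c2v, axis=0)[diag])
    return -np.mean(L_v2c + L_c2v)

rng = np.random.default_rng(0)
B = 4
S = rng.uniform(size=(B, B))
S[np.arange(B), np.arange(B)] += 1.0   # matched pairs are more similar
loss = dsl_loss(S)
print(loss >= 0.0)  # True
```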
For evaluation, a novel late fusion strategy, called multi-modal balance fusion (MMBF), is proposed to fuse the outputs of all levels by selecting the best ranking from each level. The rankings of the levels are denoted as $\mathcal{R}_{c-v}$, $\mathcal{R}_{c-a}$, $\mathcal{R}_{c-m}$, and $\mathcal{R}_{c-t}$, obtained from the respective similarity matrices. The final ranking is then
\begin{equation}
\mathcal{R} = \min(\mathcal{R}_{c-v}, \mathcal{R}_{c-a}, \mathcal{R}_{c-m}, \mathcal{R}_{c-t}),
\end{equation}
where $\min$ is the element-wise minimum operation.
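MMBF itself is a one-liner; the sketch below uses hypothetical per-query ranks (rank 1 = ground truth retrieved first) to show how complementary levels combine — each level succeeds on different queries, yet the element-wise minimum recovers the best rank everywhere.

```python
import numpy as np

# hypothetical ranks of the ground-truth item for 5 queries, one array per level
R_cv = np.array([1, 7, 3, 2, 9])   # visual-to-caption level
R_ca = np.array([4, 1, 8, 6, 2])   # audio-to-caption level
R_cm = np.array([3, 5, 1, 4, 6])   # motion-to-caption level
R_ct = np.array([9, 2, 4, 1, 1])   # text-to-caption level

# MMBF: keep the best (minimum) rank achieved by any level per query
R = np.minimum.reduce([R_cv, R_ca, R_cm, R_ct])
print(R.tolist())  # [1, 1, 1, 1, 1]
```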
\subsubsection{E2E Retrieval.} In addition, we introduce a novel multi-modal balance loss (MMBL) to train the model in an end-to-end manner. Specifically, MMBL takes the element-wise minimum over the levels, yielding the final balanced loss:
\begin{equation}
\label{equ:mmbl}
\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\min(\mathbf{L}_{v}, \mathbf{L}_{a}, \mathbf{L}_{m})[i].
\end{equation}
We also tried similar fusion methods, including averaging, element-wise maximum, and element-wise addition, and found that the element-wise minimum achieves the best performance. The evaluation of E2E retrieval also uses MMBF.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{fig/Qualitative_results.png}
\caption{Qualitative comparisons of our method with CLIP4Clip~\cite{luo2021clip4clip}. ``Blue'', ``Green'', ``Orange'', and ``Purple'' represent visual, text, motion, and audio cues, respectively.}
\label{fig:Qualitative_results}
\end{figure*}
\begin{table*}[htp]
\centering
\begin{tabular}{|l|l l l l l|l l l l l|}
\hline
\multirow{2}{*}{Methods} & & & T2V & & & & & V2T & & \\
& R@1 & R@5 & R@10 & MdR & MnR & R@1 & R@5 & R@10 & MdR & MnR \\
\hline \hline
T2VLAD \cite{wang2021t2vlad} & 29.5 & 59.0 & 70.1 & 4.0 & - & - & - & - & - & - \\
CLIP4Clip \cite{luo2021clip4clip} & 44.5 & 71.4 & 81.6 & 2.0 & 15.3 & 42.7 & 70.9 & 80.6 & 2.0 & - \\
VCM \cite{Cao2022visual} & 43.8 & 71.0 & 80.9 & 2.0 & 14.3 & 45.1 & 72.3 & 82.3 & 2.0 & 10.7 \\
X-Pool \cite{Gorti2022xpool} & 46.9 & 72.8 & 82.2 & 2.0 & 14.3 & - & - & - & - & - \\
CAMOE \cite{cheng2021improving} & 48.8 & 75.6 & 85.3 & 2.0 & 12.4 & 50.3 & 74.6 & 83.8 & 2.0 & 9.9 \\
DCR \cite{wang2022disentangled} & 55.3 & 80.3 & 87.6 & 1.0 & - & 56.2 & 79.9 & 87.4 & 1.0 & - \\
Hun Yuan\_tvr (ViT-B/16) \cite{min2022hunyuan_tvr} & 55.0 & 80.4 & 86.8 & 1.0 & 10.3 & 55.5 & 78.4 & 85.8 & 1.0 & 7.7 \\
Hun Yuan\_tvr (ViT-L/14) \cite{min2022hunyuan_tvr} & 53.2 & 77.6 & 83.9 & 1.0 & 10.1 & 54.0 & 78.8 & 87.1 & 1.0 & 8.3 \\
\hline
Ours\_E2E (ViT-B/16) & 60.5 & 83.5 & 90.3 & 1.0 & \textbf{7.0} & 60.7 & 85.3 & 91.4 & 1.0 & 5.9 \\
Ours\_E2E (ViT-L/14) & 60.9 & 83.6 & 89.6 & 1.0 & 8.3 & 60.8 & 83.6 & 90.2 & 1.0 & 5.7 \\
Ours\_Ensemble (ViT-B/16) & 63.0 & 84.6 & 90.9 & 1.0 & 7.2 & 63.0 & 85.1 & 91.5 & 1.0 & 4.8 \\
Ours\_Ensemble (ViT-L/14) & \textbf{64.9} & \textbf{85.0} & \textbf{91.6} & \textbf{1.0} & 7.9 & \textbf{66.2} & \textbf{85.8} & \textbf{91.9} & \textbf{1.0} & \textbf{5.0} \\
\hline
\end{tabular}
\vskip -0.2cm
\caption{Retrieval results on MSR-VTT 1K dataset.}
\label{tab:compar_msrvtt}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{fig/Ablation_results.png}
\caption{Ablation studies of M2HF. ``Audio'', ``Motion'', ``Text'', and ``Visual'' denote models using only the corresponding level.}
\label{fig:Ablation_results}
\end{figure*}
\begin{table*}[htp]
\centering
\begin{tabular}{|l|l l l l l|l l l l l|}
\hline
\multirow{2}{*}{Methods} & & & T2V & & & & & V2T & & \\
& R@1 & R@5 & R@10 & MdR & MnR & R@1 & R@5 & R@10 & MdR & MnR \\
\hline \hline
CLIP4Clip \cite{luo2021clip4clip} & 46.2 & 76.1 & 84.6 & 2.0 & 10.0 & 48.4 & 70.3 & 77.2 & 2.0 & - \\
X-Pool \cite{Gorti2022xpool} & 47.2 & 77.4 & 86.0 & 2.0 & 9.3 & - & - & - & - & - \\
CAMOE \cite{cheng2021improving} & 49.8 & 79.2 & 87.0 & - & 9.4 & - & - & - & - & - \\
DCR \cite{wang2022disentangled} & 50.0 & 81.5 & 89.5 & 2.0 & - & 58.7 & 92.5 & 95.6 & 1.0 & - \\
Hun Yuan\_tvr (ViT-B/16) \cite{min2022hunyuan_tvr} & 54.6 & 82.4 & 89.6 & 1.0 & 8.0 & 58.0 & 85.4 & 89.6 & 1.0 & 5.5 \\
Hun Yuan\_tvr (ViT-L/14) \cite{min2022hunyuan_tvr} & 57.8 & 83.3 & 89.6 & 1.0 & 7.8 & 63.4 & 88.1 & 92.6 & 1.0 & 3.3 \\
\hline
Ours\_E2E (ViT-B/16) & 60.0 & 84.8 & 90.8 & 1.0 & 6.4 & 70.0 & \textbf{90.0} & 93.5 & 1.0 & 3.3 \\
Ours\_E2E (ViT-L/14) & 62.6 & 85.9 & 91.5 & 1.0 & 5.7 & 71.3 & 89.6 & 95.8 & 1.0 & 2.9 \\
Ours\_Ensemble (ViT-B/16) & 61.1 & 86.0 & 91.5 & 1.0 & 6.3 & 69.7 & 86.9 & 92.8 & 1.0 & 3.5 \\
Ours\_Ensemble (ViT-L/14) & \textbf{68.2} & \textbf{88.6} & \textbf{92.9} & \textbf{1.0} & \textbf{5.1} & \textbf{75.5} & 83.0 & \textbf{96.0} & \textbf{1.0} & \textbf{2.3} \\
\hline
\end{tabular}
\vskip -0.2cm
\caption{Retrieval results on MSVD dataset.}
\label{tab:compar_msvd}
\end{table*}
\begin{table*}[htp]
\centering
\begin{tabular}{|l|l l l l l|l l l l l|}
\hline
\multirow{2}{*}{Methods} & & & T2V & & & & & V2T & & \\
& R@1 & R@5 & R@10 & MdR & MnR & R@1 & R@5 & R@10 & MdR & MnR \\
\hline \hline
CLIP4Clip \cite{luo2021clip4clip} & 11.2 & 26.9 & 34.8 & 25.3 & - & - & - & - & - & - \\
T2VLAD \cite{wang2021t2vlad} & 14.3 & 32.4 & - & 16 & - & 14.2 & 33.5 & - & 17 & - \\
X-Pool \cite{Gorti2022xpool} & 25.2 & 43.7 & 53.5 & 8.0 & 53.2 & - & - & - & - & - \\
CAMOE \cite{cheng2021improving} & 25.9 & 46.1 & 53.7 & - & 54.4 & - & - & - & - & - \\
DCR \cite{wang2022disentangled} & 26.5 & 47.6 & 56.8 & 7.0 & - & 27.0 & 45.7 & 55.4 & 8.0 & - \\
Hun Yuan\_tvr (ViT-B/16) \cite{min2022hunyuan_tvr} & 26.3 & 46.1 & 54.1 & 7.0 & 55.3 & 27.1 & 46.6 & 54.5 & 7.0 & 45.7 \\
Hun Yuan\_tvr (ViT-L/14) \cite{min2022hunyuan_tvr} & 29.7 & 46.4 & 55.4 & 7.0 & 56.4 & 30.1 & 47.5 & 55.4 & 7.0 & 48.9 \\
\hline
Ours\_E2E (ViT-B/16) & 27.4 & 45.2 & 53.2 & 9.0 & 46.7 & 27.0 & 45.7 & 53.8 & 8.0 & 43.1 \\
Ours\_E2E (ViT-L/14) & 31.4 & 49.1 & 58.5 & 6.0 & 44.4 & 30.5 & 48.6 & 59.6 & 6.0 & 38.1 \\
Ours\_Ensemble (ViT-B/16) & 30.0 & 49.0 & 58.5 & 6.0 & 40.4 & 29.9 & 49.4 & 58.5 & 6.0 & 35.2 \\
Ours\_Ensemble (ViT-L/14) & \textbf{33.2} & \textbf{54.3} & \textbf{63.8} & \textbf{4.0} & \textbf{34.9} & \textbf{34.8} & \textbf{55.0} & \textbf{64.3} & \textbf{4.0} & \textbf{28.8} \\
\hline
\end{tabular}
\vskip -0.2cm
\caption{Retrieval results on LSMDC dataset.}
\label{tab:compar_lsmdc}
\end{table*}
\begin{table*}[htp]
\centering
\begin{tabular}{|l|l l l l l|l l l l l|}
\hline
\multirow{2}{*}{Methods} & & & T2V & & & & & V2T & & \\
& R@1 & R@5 & R@10 & MdR & MnR & R@1 & R@5 & R@10 & MdR & MnR \\
\hline \hline
CLIP4Clip \cite{luo2021clip4clip} & 41.4 & 58.2 & 79.1 & 2.0 & - & 42.8 & 69.8 & 79.0 & 2.0 & - \\
CAMOE \cite{cheng2021improving} & 43.8 & 71.4 & - & - & - & 45.5 & 71.2 & - & - & - \\
DCR \cite{wang2022disentangled} & 49.0 & 76.5 & 84.5 & 2.0 & - & 49.9 & 75.4 & 83.3 & 2.0 & - \\
Hun Yuan\_tvr (ViT-B/16) \cite{min2022hunyuan_tvr} & 52.1 & 78.2 & 85.7 & 1.0 & 11.1 & 54.8 & 79.9 & 87.2 & 1.0 & 7.1 \\
Hun Yuan\_tvr (ViT-L/14) \cite{min2022hunyuan_tvr} & 49.5 & 73.7 & 81.6 & 2.0 & 14.8 & 50.3 & 76.5 & 83.7 & 1.0 & 10.4 \\
\hline
Ours\_E2E (ViT-B/16) & 53.0 & 76.7 & 84.5 & 1.0 & 11.5 & 53.7 & 76.2 & 84.9 & 1.0 & 8.1 \\
Ours\_E2E (ViT-L/14) & 54.1 & 76.9 & 85.5 & 1.0 & 11.1 & 53.5 & 77.6 & 86.0 & 1.0 & 8.3 \\
Ours\_Ensemble (ViT-B/16) & 55.1 & 79.3 & 85.5 & 1.0 & 10.0 & 56.2 & 79.0 & 86.0 & 1.0 & 7.3 \\
Ours\_Ensemble (ViT-L/14) & \textbf{57.1} & \textbf{79.3} & \textbf{87.5} & \textbf{1.0} & \textbf{9.5} & \textbf{58.0} & \textbf{80.4} & \textbf{89.6} & \textbf{1.0} & \textbf{7.1} \\
\hline
\end{tabular}
\vskip -0.2cm
\caption{Retrieval results on DiDeMo dataset.}
\label{tab:compar_didemo}
\end{table*}
\begin{table*}[htp!]
\centering
\begin{tabular}{|l|l l l l l|l l l l l|}
\hline
\multirow{2}{*}{Methods} & & & T2V & & & & & V2T & & \\
& R@1 & R@5 & R@10 & MdR & MnR & R@1 & R@5 & R@10 & MdR & MnR \\
\hline
w/o Audio & 60.6 & \textbf{83.9} & 89.6 & 1.0 & 8.9 & 60.4 & 83.1 & \textbf{90.3} & 1.0 & 6.8 \\
w/o Motion & 58.5 & 81.1 & 88.2 & 1.0 & 9.9 & 57.9 & 82.1 & 90.1 & 1.0 & 7.0 \\
w/o Text & 59.2 & 82.2 & 88.3 & 1.0 & 8.8 & 58.9 & 82.5 & 89.5 & 1.0 & 6.0 \\
w/o Visual & 59.9 & 82.9 & 88.9 & 1.0 & 9.0 & 59.6 & 82.4 & 89.0 & 1.0 & 6.4 \\
Ours\_E2E (ViT-L/14) & \textbf{60.9} & 83.6 & \textbf{89.6} & \textbf{1.0} & \textbf{8.3} & \textbf{60.8} & \textbf{83.6} & 90.2 & \textbf{1.0} & \textbf{5.7} \\
\hline
w/o Audio & 61.4 & 83.6 & 90.1 & 1.0 & 8.6 & 62.3 & 83.6 & 90.6 & 1.0 & 5.6 \\
w/o Motion & 62.3 & 83.7 & 90.6 & 1.0 & 8.7 & 63.5 & 83.9 & 90.7 & 1.0 & 5.6 \\
w/o Text & 63.4 & 83.9 & 90.5 & 1.0 & 8.3 & 64.9 & 84.5 & 91.0 & 1.0 & 5.4 \\
w/o Visual & 62.1 & 82.2 & 90.2 & 1.0 & 9.0 & 62.9 & 83.9 & 90.7 & 1.0 & 5.8 \\
Ours\_Ensemble (ViT-L/14) & \textbf{64.9} & \textbf{85.0} & \textbf{91.6} & \textbf{1.0} & \textbf{7.9} & \textbf{66.2} & \textbf{85.8} & \textbf{91.9} & \textbf{1.0} & \textbf{5.0} \\
\hline
\end{tabular}
\vskip -0.2cm
\caption{Ablation studies on MSR-VTT 1K dataset.}
\label{tab:msrvtt_ablation_studies}
\end{table*}
\section{Experiments}
\subsection{Experimental Settings}
\subsubsection{Datasets.}
We conduct extensive experiments on five common benchmarks (MSR-VTT, MSVD, LSMDC, DiDeMo, and ActivityNet) to validate our method.
\textbf{MSR-VTT}~\cite{xu2016msr} is a large-scale dataset containing 10,000 video clips, each described with 20 natural sentences via Amazon Mechanical Turk. Following the setting of~\cite{yu2018joint}, 9,000 and 1,000 videos are used for training and testing, respectively.
\textbf{MSVD}~\cite{chen2011collecting} has 1,970 video clips, and each video clip contains about 40 sentences. We adopt the original data split, 1,200 videos for training, 100 videos for validation, and 670 videos for testing.
\textbf{LSMDC}~\cite{rohrbach2015long} is composed of 118,081 video clips extracted from 202 movies, each with a caption. The validation and evaluation sets contain 7,408 and 1,000 videos, respectively.
\textbf{ActivityNet}~\cite{krishna2017dense}
contains 20,000 YouTube videos with 100k captions. We follow the standard split: 10,009 videos for training and 4,917 videos for validation. Like~\cite{zhang2018cross}, we concatenate all the captions of a video into a paragraph.
\textbf{DiDeMo}~\cite{anne2017localizing} contains 10,000 videos annotated with 40,000 sentences. All captions of a video are concatenated into a paragraph~\cite{liu2019use}.
\subsubsection{Metrics.}
We report the standard retrieval metrics: Recall at rank N (R@N, higher is better), median rank (MdR, lower is better), and mean rank (MnR, lower is better).
\subsubsection{Implementation Details.}
Our method is implemented with PyTorch 1.7.1 and trained on an NVIDIA Tesla A100 GPU. We set the initial learning rate to $1e-7$ for CLIP and $1e-4$ for the remaining parameters. For MSR-VTT, MSVD, and LSMDC, the frame number $F$ and token number $T$ are 12 and 32, respectively; for DiDeMo and ActivityNet, $F=64$ and $T=64$. The Adam optimizer with a batch size of 128 is used to train the model for 5 epochs.
\subsection{Comparison with State-of-the-art Methods}
In this subsection, we compare M2HF with state-of-the-art methods on MSR-VTT, MSVD, LSMDC, DiDeMo, and ActivityNet benchmarks.
Table \ref{tab:compar_msrvtt} reports the results on MSR-VTT: our M2HF significantly surpasses CLIP4Clip by 20.4\% in R@1 and outperforms the very recent parallel work Hun Yuan\_tvr by 9.9\%. Table \ref{tab:compar_msvd} shows that M2HF achieves a 10.4\% improvement on MSVD compared to Hun Yuan\_tvr. For LSMDC, as shown in Table \ref{tab:compar_lsmdc}, our approach gains 3.5\% over Hun Yuan\_tvr. As reported in Table \ref{tab:compar_didemo}, M2HF remarkably outperforms the state-of-the-art method by 7.6\% on DiDeMo. All the quantitative results consistently illustrate the superiority of M2HF.
Fig.~\ref{fig:Qualitative_results} shows two qualitative comparison examples, which indicate that using only the visual modality is not enough to represent videos well. In contrast, our multi-modal complement method provides multi-modal cues to obtain more accurate results. In the first example, images help match the word ``baby''. The harmonica sound made by the baby and the text information from the off-screen sound can be associated with ``harmonica''. The baby's movements match ``swirling back'' and ``forth dancing''. In the second example, ``little boy'' corresponds to a semantic visual target. The keywords ``mouthwash'' and ``taste'' in the text match the relevant tokens in the caption. The ``crying'' sound made by the little boy is captured with the help of audio signals. His movements are related to ``walk out of''. However, the visual-based method CLIP4Clip can only retrieve ``a little boy''.
\subsection{Ablation Studies}
As reported in Table \ref{tab:msrvtt_ablation_studies}, ablation experiments on MSR-VTT are conducted to evaluate the effect of each level in M2HF. ``w/o Audio'', ``w/o Motion'', ``w/o Text'', and ``w/o Visual'' denote removing the corresponding level from M2HF. Quantitative results demonstrate that each modality contributes to the performance of text-video retrieval.
Fig.~\ref{fig:Ablation_results} shows qualitative studies of our proposed method. ``Audio'', ``Motion'', ``Text'', and ``Visual'' show the effect of using only the corresponding level for T2V retrieval. The green and red boxes represent true and false retrieval results, respectively. These four examples illustrate the advantages of each modality. In the first column, the ``Audio'' level can catch the instrument's sound source with the guidance of the audio signal, whereas the other levels cannot provide the same contribution. The second example utilizes the motion features of jumping to predict the right retrieval, since the ``Motion'' level can pay attention to moving objects. The third one shows the effect of the ``Text'' level: there are six shared keywords between caption tokens and text tokens, namely ``wheel'', ``bike'', ``frame'', ``wrench'', ``nut'', and ``axle''. This level is particularly helpful for cases containing many abstract nouns. In the last example, there is no sound and no moving object, so the visual level is powerful enough to detect ``ocean floor'' and ``scuba divers''. Overall, the quantitative and qualitative results illustrate the superiority of our multi-level, multi-modal method for TVR.
In addition, when removing the multi-modal alignment (Sec.~\ref{subsec:audio-to-caption}), the retrieval performance drops from 53.0\% to 52.5\% on DiDeMo. We also conduct ablation experiments on the multi-modal balance loss (Eq.~\eqref{equ:mmbl}) on MSR-VTT. Compared with the element-wise minimum (60.5\%), the R@1 of the other fusion methods is lower: averaging (55.7\%), element-wise maximum (54.6\%), and element-wise addition (55.2\%).
More detailed experimental results and ablation studies are included in the \emph{Supplementary Material}.
\section{Conclusion}
Based on the multi-modal nature of videos, we proposed a novel multi-level multi-modal hybrid fusion network for text-video retrieval. The core idea is to explore fine-grained multi-modal cues in a multi-level way; M2HF can also leverage the powerful knowledge of a pre-trained text-image retrieval model (\emph{i.e.}, CLIP). To address multi-modal complementarity and multi-modal alignment, we introduced a hybrid fusion method. Moreover, two training strategies are exploited: end-to-end training with a multi-modal balance loss and ensemble training with multi-modal balance fusion. Extensive quantitative and qualitative comparisons and ablation experiments validate our method. M2HF achieves state-of-the-art performance for TVR on MSR-VTT, MSVD, LSMDC, DiDeMo, and ActivityNet.
\section*{Appendix}
\section{Comparison of model robustness on all the corruptions in CIFAR-10-C and ImageNet-C}\label{apx:big_table}
We first define \emph{mCE}, a quantity that we use to measure the robustness improvement of the models compared to a baseline model. Consider a total of $K$ corruptions, each with $S$ severities. Let $f$ be a model, and $E_{k,s}(f)$ be the model's test error under the $k$-th corruption in the benchmark with severity $s$, $k=1,\ldots, K$, $s=1,\ldots, S$. Let $f_0$ be the baseline model. We define mCE as the following quantity:
\[
\text{mCE} = \frac{1}{K} \sum_{k=1}^{K} \frac{\sum_{s=1}^S E_{k,s}(f)}{\sum_{s=1}^S E_{k,s}(f_0)}.
\]
For our CIFAR-10-C results in Table~\ref{tab:cifar_full}, we use the naturally trained WideResNet model as the baseline model. We present the full test accuracy results on CIFAR-10-C and ImageNet-C in Tables~\ref{tab:cifar_full} and~\ref{tab:imagenet_full}, respectively.
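The mCE definition translates directly into a few lines of NumPy; the error arrays below are random placeholders for the per-corruption, per-severity test errors.

```python
import numpy as np

def mCE(E_f, E_f0):
    """Mean corruption error of model f relative to baseline f0.

    E_f, E_f0: arrays of shape (K corruptions, S severities) of test errors.
    For each corruption, sum errors over severities, take the ratio to the
    baseline, then average the ratios over corruptions.
    """
    return np.mean(E_f.sum(axis=1) / E_f0.sum(axis=1))

rng = np.random.default_rng(0)
E_f0 = rng.uniform(0.1, 0.5, size=(15, 5))  # baseline errors, 15 corruptions x 5 severities
print(mCE(E_f0, E_f0))                      # baseline against itself -> 1.0
print(mCE(0.5 * E_f0, E_f0))                # halving every error -> 0.5
```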
\begin{table}[h]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ c c c c c c c | c }
\hline
& natural & Gauss & adversarial & low pass & high pass & AutoAugment & all-but-one\\ \hline
clean images & 0.9626 & 0.9369 & 0.8725 & 0.9235 & 0.9378 & \textbf{0.9693} & 0.9546 \\ \hline
brightness & 0.9493 & 0.9244 & 0.8705 & 0.8996 & 0.9275 & \textbf{0.9635} & 0.9407 \\ \hline
contrast & 0.8225 & 0.5703 & 0.7700 & 0.6917 & 0.7806 & \textbf{0.9526} & 0.9015 \\ \hline
defocus blur & 0.8456 & 0.8371 & 0.8355 & 0.9063 & 0.7489 & \textbf{0.9229} & 0.9495 \\ \hline
elastic transform & 0.8600 & 0.8429 & 0.8175 & \textbf{0.8838} & 0.7870 & 0.8726 & 0.9221 \\ \hline
fog & 0.8997 & 0.7194 & 0.7263 & 0.8191 & 0.8811 & \textbf{0.9463} & 0.9061 \\ \hline
Gaussian blur & 0.7273 & 0.7907 & 0.8213 & \textbf{0.8929} & 0.6453 & 0.8840 & 0.9448 \\ \hline
glass blur & 0.5677 & 0.8046 & 0.8017 & \textbf{0.8770} & 0.4735 & 0.7621 & 0.8503 \\ \hline
impulse noise & 0.5428 & 0.8308 & 0.6881 & 0.5999 & 0.3619 & \textbf{0.8560} & 0.9016 \\ \hline
jpeg compression & 0.8009 & \textbf{0.9078} & 0.8541 & 0.8405 & 0.6395 & 0.8142 & 0.8807 \\ \hline
motion blur & 0.8079 & 0.7715 & 0.8045 & \textbf{0.8605} & 0.7206 & 0.8491 & N/A \\ \hline
pixelate & 0.7317 & 0.8983 & 0.8531 & \textbf{0.9156} & 0.6234 & 0.7066 & 0.9369 \\ \hline
shot noise & 0.6773 & \textbf{0.9233} & 0.8275 & 0.7447 & 0.5374 & 0.7834 & 0.9342 \\ \hline
snow & 0.8505 & 0.8835 & 0.8258 & 0.8688 & 0.7929 & \textbf{0.8939} & N/A \\ \hline
speckle noise & 0.7041 & \textbf{0.9171} & 0.8183 & 0.7502 & 0.5603 & 0.8125 & 0.9352 \\ \hline
zoom blur & 0.8046 & 0.8163 & 0.8279 & 0.8987 & 0.6514 & \textbf{0.8994} & 0.9412 \\ \hline
average & 0.7728 & 0.8292 & 0.8095 & 0.8299 & 0.6754 & \textbf{0.8613} & N/A \\ \hline
mCE & 1.000 & 0.9831 & 1.0825 & 0.8924 & 1.4449 & \textbf{0.6376} & N/A \\ \hline
\end{tabular}}
\end{center}
\caption{Test accuracy on clean images and all 15 corruptions in CIFAR-10-C. We compare $6$ models: the naturally trained model, Gaussian data augmentation with parameter $0.1$, the adversarially trained model, a low pass filter front end with bandwidth $15$, a high pass filter front end with bandwidth $31$, and AutoAugment without brightness and contrast. Every test accuracy for the corruptions is obtained by averaging over $5$ severities. The ``average'' row provides the average test accuracy over all the corruptions. We also present the results for all-but-one training. More specifically, for a given corruption type and severity, we train on all the other corruptions at the same severity and evaluate on the given one. Due to a software dependency issue, we were not able to implement two of the corruptions on the training data; therefore, we only report the all-but-one results for $13$ of the $15$ corruptions. The test accuracy on clean images for all-but-one is averaged over all the ``all-but-one'' models. Since these test accuracies are not achieved by a single model, we do not compare them with other models, nor do we calculate the average corruption test accuracy and mCE.}\label{tab:cifar_full}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{ c c c c c c }
\hline
& natural & Gauss & low pass & high pass & AutoAugment \\ \hline
clean images & 0.7623 & 0.7425 & 0.7082 & 0.7500 & \textbf{0.7725} \\ \hline
brightness & 0.6975 & 0.6687 & 0.6214 & 0.6923 & \textbf{0.7406} \\ \hline
contrast & 0.4449 & 0.3578 & 0.3473 & 0.4911 & \textbf{0.5656} \\ \hline
defocus blur & 0.5023 & 0.5294 & \textbf{0.5803} & 0.4414 & 0.5414 \\ \hline
elastic transform & 0.5637 & 0.6000 & \textbf{0.6211} & 0.5255 & 0.5846 \\ \hline
fog & 0.5715 & 0.4736 & 0.4031 & 0.6459 & \textbf{ 0.6534} \\ \hline
frosted glass blur & 0.4187 & 0.5217 & \textbf{0.6000} & 0.3460 & 0.5073 \\ \hline
Gaussian noise & 0.4492 & \textbf{0.6956} & 0.4897 & 0.3979 & 0.5798 \\ \hline
impulse noise & 0.4210 & \textbf{0.6785} & 0.4736 & 0.3737 & 0.5832 \\ \hline
jpeg compression & 0.6630 & \textbf{0.6997} & 0.5688 & 0.6388 & 0.6893 \\ \hline
pixelate & 0.5826 & 0.6173 & 0.6790 & 0.5237 & \textbf{0.6814} \\ \hline
shot noise & 0.4294 & \textbf{0.6820} & 0.4894 & 0.3837 & 0.5845 \\ \hline
zoom blur & 0.3663 & 0.3653 & \textbf{0.4177} & 0.2826 & 0.3398\\ \hline
average & 0.5092 & 0.5741 & 0.5243 & 0.4785 & \textbf{0.5876} \\ \hline
\end{tabular}
\end{center}
\caption{Test accuracy on clean images and 12 corruptions in ImageNet-C. Instead of using the compressed ImageNet-C images provided in~\cite{hendrycks2019benchmarking}, the models are evaluated on the corruptions applied in memory. Due to a software dependency issue, we were not able to implement $3$ of the $15$ corruptions in memory, and thus we only report the test accuracy for $12$ corruptions. We compare $5$ models: the naturally trained model, Gaussian data augmentation with parameter $0.4$, a low pass filter front end with bandwidth $45$, a high pass filter front end with bandwidth $223$, and AutoAugment. Every test accuracy for the corruptions is obtained by averaging over $5$ severities.}\label{tab:imagenet_full}
\end{table}
\section{Fourier heat maps}\label{apx:model_heatmaps}
In this section, we provide the Fourier heat maps for the intermediate layers of the model. We first define the Fourier heat map of the output of a layer. Recall that the $H$-layer feedforward neural network is a function that maps $X$ to a vector $z\in\mathbb{R}^K$, known as the logits. Let $W_h$ be the weights and $\rho_h$ be the possibly nonlinear activation in the $h$-th layer. We let
\[
z_h(X) = \rho_h(\cdots \rho_2(\rho_1(X, W_1), W_2) \cdots, W_h) \in \mathbb{R}^{p_h}
\]
be the output of the $h$-th layer, so that the logits are $z(X) = z_H(X)$. The model makes its prediction by choosing $y = \arg\max_k z(X)[k]$. Recall that for a validation image $X$, we can generate a perturbed image with Fourier basis noise, i.e., ${\widetilde X}_{i,j} = X + rvU_{i,j}$. We then compute the layer outputs $z_h(X)$ and $z_h({\widetilde X}_{i,j})$, given the clean and perturbed images, respectively, and obtain $\norms{z_h(X) - z_h({\widetilde X}_{i,j})}$ as the model's output change at the $h$-th layer. We conduct this procedure for $n$ validation images $X^{(1)}, \ldots, X^{(n)}$, compute the average output change, and use this average as a measure of the model's stability to Fourier basis noise. More specifically, we generate the Fourier heat map of the $h$-th layer, denoted by $Z_h\in\mathbb{R}^{d_1\times d_2}$, as a matrix with entries $Z_h[i, j] = \frac{1}{n}\sum_{\ell=1}^n \norms{z_h(X^{(\ell)}) - z_h({\widetilde X^{(\ell)}}_{i,j})}$.
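A sketch of the heat-map procedure, with a fixed linear map standing in for the layer $z_h$ and the Fourier basis image $U_{i,j}$ built from a conjugate-symmetric single-frequency spectrum (one common construction; the paper's exact basis and the averaging over $n$ validation images are simplified here to a single image and a few low frequencies):

```python
import numpy as np

def fourier_basis(d1, d2, i, j):
    """Real image with a single symmetric Fourier component at (i, j),
    normalized to unit l2 norm."""
    spectrum = np.zeros((d1, d2), dtype=complex)
    spectrum[i, j] = 1.0
    spectrum[-i % d1, -j % d2] += 1.0   # conjugate-symmetric partner -> real image
    U = np.real(np.fft.ifft2(spectrum))
    return U / np.linalg.norm(U)

def layer(X):
    """Stand-in for an intermediate layer z_h; here just a fixed linear map."""
    return (X @ W).ravel()

rng = np.random.default_rng(0)
d1 = d2 = 32
W = rng.standard_normal((d2, 8)) * 0.1
X = rng.uniform(size=(d1, d2))           # clean "image"
r, v = 1.0, 4.0                          # sign r (random in the paper) and noise norm v
Z = np.zeros((d1, d2))
for i in range(4):                       # only a few low frequencies for brevity
    for j in range(4):
        X_tilde = X + r * v * fourier_basis(d1, d2, i, j)
        Z[i, j] = np.linalg.norm(layer(X) - layer(X_tilde))
print(Z.shape)
```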
In Figure~\ref{fig:model_heatmaps_full}, for $5$ different models, we demonstrate the Fourier heat maps for the outputs of $5$ layers in the WideResNet architecture: the output of the initial convolutional layer, the outputs of the first, second, and third residual blocks, and the logits; we also provide the test error heat map in the last column. In Figure~\ref{fig:imgnet_fourier}, we plot the test error Fourier heat map for two ImageNet models.
\begin{figure}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline
& init conv & 1st res block & 2nd res block & 3rd res block & logits & test error \\ \hline
natural train &
\includegraphics[scale=0.15]{nat_hmp_0_norm_4.pdf} &
\includegraphics[scale=0.15]{nat_hmp_1_norm_4.pdf} &
\includegraphics[scale=0.15]{nat_hmp_2_norm_4.pdf} &
\includegraphics[scale=0.15]{nat_hmp_3_norm_4.pdf} &
\includegraphics[scale=0.15]{nat_hmp_4_norm_4.pdf} &
\includegraphics[scale=0.15]{nat_test_err_norm_4.pdf} \\ \hline
Gauss augmentation &
\includegraphics[scale=0.15]{gauss_hmp_0_norm_4.pdf} &
\includegraphics[scale=0.15]{gauss_hmp_1_norm_4.pdf} &
\includegraphics[scale=0.15]{gauss_hmp_2_norm_4.pdf} &
\includegraphics[scale=0.15]{gauss_hmp_3_norm_4.pdf} &
\includegraphics[scale=0.15]{gauss_hmp_4_norm_4.pdf} &
\includegraphics[scale=0.15]{gauss_test_err_norm_4.pdf} \\ \hline
adversarial train &
\includegraphics[scale=0.15]{adv_hmp_0_norm_4.pdf} &
\includegraphics[scale=0.15]{adv_hmp_1_norm_4.pdf} &
\includegraphics[scale=0.15]{adv_hmp_2_norm_4.pdf} &
\includegraphics[scale=0.15]{adv_hmp_3_norm_4.pdf} &
\includegraphics[scale=0.15]{adv_hmp_4_norm_4.pdf} &
\includegraphics[scale=0.15]{adv_test_err_norm_4.pdf} \\ \hline
fog noise-3 &
\includegraphics[scale=0.15]{fog3_hmp_0_norm_4.pdf} &
\includegraphics[scale=0.15]{fog3_hmp_1_norm_4.pdf} &
\includegraphics[scale=0.15]{fog3_hmp_2_norm_4.pdf} &
\includegraphics[scale=0.15]{fog3_hmp_3_norm_4.pdf} &
\includegraphics[scale=0.15]{fog3_hmp_4_norm_4.pdf} &
\includegraphics[scale=0.15]{fog3_test_err_norm_4.pdf} \\ \hline
AutoAugment &
\includegraphics[scale=0.15]{autoaug_1_hmp_0_norm_4.pdf} &
\includegraphics[scale=0.15]{autoaug_1_hmp_1_norm_4.pdf} &
\includegraphics[scale=0.15]{autoaug_1_hmp_2_norm_4.pdf} &
\includegraphics[scale=0.15]{autoaug_1_hmp_3_norm_4.pdf} &
\includegraphics[scale=0.15]{autoaug_1_hmp_4_norm_4.pdf} &
\includegraphics[scale=0.15]{autoaug_1_test_err_norm_4.pdf} \\ \hline
\end{tabular}}
\caption{Model heat maps for naturally trained model, Gaussian data augmentation, adversarially trained model, data augmentation with ``fog noise'' at severity $3$ (additive noise that matches the Fourier statistics of fog-3 corruption), and AutoAugment.}
\label{fig:model_heatmaps_full}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.3\linewidth]{imagenet_nat_sparse_norm_10000.pdf} &
\includegraphics[width=0.3\linewidth]{imagenet_gauss_sparse_norm_10000.pdf} \\
\includegraphics[width=0.3\linewidth]{imagenet_nat_dense_norm_10000.pdf} &
\includegraphics[width=0.3\linewidth]{imagenet_gauss_dense_norm_10000.pdf}
\end{tabular}
\caption{Fourier heat map of ImageNet models with perturbation $\ell_2$ norm $40$. In a large area around the center of the Fourier spectrum, the model has test error at least 95\%. First row: heat map of the full Fourier spectrum ($224\times 224$); second row: heat map of the $63 \times 63$ low frequency centered square in the Fourier spectrum.}
\label{fig:imgnet_fourier}
\end{figure}
\section{Experiment detail}\label{apx:experiment_detail}
In Figure~\ref{fig:filter_front_end}, we visualize the high pass filtered images using normalization. The specific method is as follows. For an image $X \in [0,1]^{d_1\times d_2}$ (for RGB images we can divide the pixel values by $255$), we compute the mean and standard deviation of all the pixels:
\begin{align*}
\bar{X} = & \frac{1}{d_1d_2} \sum_{i,j}X_{i,j} \\
s_X = & \left(\frac{1}{d_1d_2}\sum_{i,j} (X_{i,j} - \bar{X})^2 \right)^{1/2},
\end{align*}
and then the normalized image is defined as
\[
X_{\text{norm}} = \frac{1}{s_X} (X - \bar{X}).
\]
In Figure~\ref{fig:filter_front_end}, we visualize $X_{\text{norm}}$ using the $\mathsf{imshow}$ function in the $\mathsf{matplotlib.pyplot}$ python package.
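The normalization above is straightforward to reproduce; a minimal sketch on a random stand-in image:

```python
import numpy as np

def normalize_image(X):
    """Per-image standardization used to visualize high-pass filtered images:
    subtract the mean pixel value, divide by the pixel standard deviation."""
    mean = X.mean()
    std = np.sqrt(((X - mean) ** 2).mean())
    return (X - mean) / std

rng = np.random.default_rng(0)
X = rng.uniform(size=(32, 32))     # pixel values in [0, 1]
X_norm = normalize_image(X)
print(X_norm.mean(), X_norm.std())  # approximately 0 and 1
```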
\section{Introduction}\label{sec:intro}
Although deep learning computer vision models achieve remarkable performance on standard i.i.d.\ benchmarks, these models lack the robustness of the human vision system when the train and test distributions differ~\cite{recht2019imagenet}. For example, it has been observed that commonly occurring image corruptions, such as random noise, contrast changes, and blurring, can lead to significant performance degradation~\cite{dodge2017study, azulay2018deep}. Improving distributional robustness is an important step towards safely deploying models in complex, real-world settings.
Data augmentation is a natural and sometimes effective approach to learning robust models. Examples of data augmentation include adversarial training~\cite{goodfellow2014explaining} and image transformations applied to the training data, such as flipping, cropping, adding random noise, and even stylized image transformations~\cite{geirhos2018imagenet}.
However, data augmentation rarely improves robustness across all corruption types. Performance gains on some corruptions may be met with dramatic reductions on others. As an example, in \cite{ford2019adversarial} it was observed that Gaussian data augmentation and adversarial training improve robustness to noise and blurring corruptions on the CIFAR-10-C and ImageNet-C common corruption benchmarks \cite{hendrycks2019benchmarking}, while significantly degrading performance on the fog and contrast corruptions. This raises a natural question:
\emph{What is different about the corruptions for which augmentation strategies improve performance vs.\ those for which performance is degraded?}
Understanding these tensions and why they occur is an important first step towards designing robust models. Our operating hypothesis is that the frequency information of these different corruptions offers an explanation of many of these observed trade-offs. Through extensive experiments involving perturbations in the Fourier domain, we demonstrate that these two augmentation procedures bias the model towards utilizing low frequency information in the input. This low frequency bias results in improved robustness to corruptions which are more high frequency in nature while degrading performance on corruptions which are low frequency.
Our analysis suggests that more diverse data augmentation procedures could be leveraged to mitigate these observed trade-offs, and indeed this appears to be true. In particular we demonstrate that the recently proposed AutoAugment data augmentation policy~\cite{cubuk2018autoaugment} achieves state-of-the-art results on the CIFAR-10-C benchmark. In addition, a follow-up work has utilized AutoAugment in a way to achieve state-of-the-art results on ImageNet-C~\cite{anonymous2020augmix}.
Some of our observations could be of interest to research on security. For example, we observe perturbations in the Fourier domain which, when applied to images, cause model error rates to exceed 90\% on ImageNet while preserving the semantics of the image. These qualify as simple, single query\footnote{In contrast, methods for generating small adversarial perturbations require thousands of queries \cite{guo2019simple}.} black box attacks that satisfy the content preserving threat model~\cite{gilmer2018motivating}. This observation was also made in concurrent work \cite{tsuzuku2018structural}.
Finally, we extend our frequency analysis to obtain a better understanding of worst-case perturbations of the input. In particular adversarial perturbations of a naturally trained model are more high-frequency in nature while adversarial training encourages these perturbations to become more concentrated in the low frequency domain.
\input{preliminary.tex}
\section{The robustness problem}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1.0\linewidth]{filtering.png}
\end{center}
\caption{Models can achieve high accuracy using information from the input that would be unrecognizable to humans. Shown above are models trained and tested with aggressive high and low pass filtering applied to the inputs. With aggressive low-pass filtering, the model is still above 30\% accuracy on ImageNet when the images appear to be simple globs of color. In the case of high-pass (HP) filtering, models can achieve above 50\% accuracy using features in the input that are nearly invisible to humans. As shown on the right hand side, the high pass filtered images needed to be normalized in order to properly visualize the high frequency features (the method we use to visualize the high pass filtered images is provided in the appendix).}\label{fig:filter_front_end}
\end{figure}
How is it possible that models achieve such high performance in the standard settings where the training and test data are i.i.d., while performing so poorly in the presence of even subtle distributional shift? There has been substantial prior work towards obtaining a better understanding of the \emph{robustness problem}. While this problem is far from being completely understood, perhaps the simplest explanation is that models lack robustness to distributional shift simply because there is no reason for them to be robust \cite{jo2017measuring, geirhos2018imagenet, ilyas2019adversarial}. In naturally occurring data there are many correlations between the input and target that models can utilize to generalize well. However, utilizing such sufficient statistics will lead to dramatic reduction in model performance should these same statistics become corrupted at test time.
As a simple example of this principle, consider Figure 8 in \cite{jacobsen2018excessive}. The authors experimented with training models on a ``cheating'' variant of MNIST, where the target label is encoded by the location of a single pixel. Models tested on images with this ``cheating'' pixel removed would perform poorly. This is an unfortunate setting where Occam's razor can fail. The simplest explanation of the data may generalize well in perfect settings where the training and test data are i.i.d., but fail to generalize \emph{robustly}. Although this example is artificial, it is clear that model brittleness is tied to latching onto non-robust statistics in naturally occurring data.
As a more realistic example, consider the recently proposed \emph{texture hypothesis}~\cite{geirhos2018imagenet}. Models trained on natural image data can obtain high classification performance relying on local statistics that are correlated with texture. However, texture-like information can become easily distorted due to naturally occurring corruptions caused by weather or digital artifacts, leading to poor robustness.
In the image domain, there is a plethora of correlations between the input and target. Simple statistics such as colors, local textures, shapes, even unintuitive high frequency patterns can all be leveraged in a way to achieve remarkable i.i.d.\ generalization. To demonstrate, we experimented with training and testing ImageNet models when severe filtering is performed on the input in the frequency domain. While modest filtering has been used for model compression~\cite{dziedzic2019band}, we experiment with extreme filtering in order to test the limits of model generalization. The results are shown in Figure~\ref{fig:filter_front_end}. When low-frequency filtering is applied, models can achieve over 30\% test accuracy even when the image appears to be simple globs of color. Even more striking, models achieve 50\% accuracy in the presence of severe high frequency filtering, using high frequency features which are nearly invisible to humans. In order to even visualize these high frequency features, we had to normalize the pixel statistics to have unit variance. Given that these types of features are useful for generalization, it is not so surprising that models leverage these non-robust statistics.
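The filtering front ends used in these experiments can be sketched with NumPy FFTs. The sketch below (our illustration, not the exact experimental code) keeps or removes a centered square of low frequencies:

```python
import numpy as np

def _low_freq_mask(shape, bandwidth):
    # 1 inside the bandwidth x bandwidth square centered at the
    # (fftshift-ed) lowest frequency, 0 outside
    mask = np.zeros(shape)
    c0, c1 = shape[0] // 2, shape[1] // 2
    h = bandwidth // 2
    mask[c0 - h:c0 + h + 1, c1 - h:c1 + h + 1] = 1.0
    return mask

def low_pass(x, bandwidth):
    """Keep only the lowest frequencies of a single-channel image."""
    f = np.fft.fftshift(np.fft.fft2(x)) * _low_freq_mask(x.shape, bandwidth)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def high_pass(x, bandwidth):
    """Remove the lowest frequencies (the complement of low_pass)."""
    return x - low_pass(x, bandwidth)
```

For RGB images the filters are applied per channel; the bandwidths used in our tables are stated where each experiment is described.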
It seems likely that these invisible high frequency features are related to the experiments of~\cite{ilyas2019adversarial}, which show that certain imperceptibly perturbed images contain features which are useful for generalization. We discuss these connections more in Section~\ref{sec:advex}.
\section{Trade-off and correlation between corruptions: a Fourier perspective}\label{sec:trade_off}
The previous section demonstrated that both high and low frequency features are useful for classification. A natural hypothesis is that data augmentation may bias the model towards utilizing different kinds of features in classification. What types of features models utilize will ultimately determine the robustness at test time. Here we adopt a Fourier perspective to study the trade-off and correlation between corruptions when we apply several data augmentation methods.
\subsection{Gaussian data augmentation and adversarial training bias models towards low frequency information}\label{sec:low_pass}
Ford et al.~\cite{ford2019adversarial} investigated the robustness of three models on CIFAR-10-C: a naturally trained model, a model trained by Gaussian data augmentation, and an adversarially trained model. It was observed that Gaussian data augmentation and adversarial training improve robustness to all noise and many of the blurring corruptions, while degrading robustness to fog and contrast. For example adversarial training degrades performance on the most severe contrast corruption from 85.66\% to 55.29\%. Similar results were reported on ImageNet-C.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.3\linewidth]{clean_image_heatmap.pdf} & \includegraphics[width=0.6\linewidth]{cifar_heatmaps.pdf}
\end{tabular}
\caption{Left: Fourier spectrum of natural images; we estimate $\mathbb{E}[|\mathcal{F}(X)[i,j]|]$ by averaging all the CIFAR-10 validation images. Right: Fourier spectrum of the corruptions in CIFAR-10-C at severity $3$. For each corruption, we estimate $\mathbb{E}[|\mathcal{F}(C(X) - X)[i, j]|]$ by averaging over all the validation images. Additive noise has relatively high concentrations in high frequencies while some corruptions such as fog and contrast are concentrated in low frequencies.}
\label{fig:spectrum}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\linewidth]{cifar10_heatmap.pdf}
\caption{Model sensitivity to additive noise aligned with different Fourier basis vectors on CIFAR-10. We fix the additive noise to have $\ell_2$ norm $4$ and evaluate three models: a naturally trained model, an adversarially trained model, and a model trained with Gaussian data augmentation. Error rates are averaged over $1000$ randomly sampled images from the test set. In the bottom row we show images perturbed with noise along the corresponding Fourier basis vector. The naturally trained model is highly sensitive to additive noise in all but the lowest frequencies. Both adversarial training and Gaussian data augmentation dramatically improve robustness in the higher frequencies while sacrificing the robustness of the naturally trained model in the lowest frequencies (i.e., in both models the blue area in the middle is smaller than that of the naturally trained model).}
\label{fig:model_heatmaps_cifar}
\end{figure*}
We hypothesize that some of these trade-offs can be explained by the Fourier statistics of different corruptions. Denote a (possibly randomized) corruption function by $C:\mathbb{R}^{d_1\times d_2} \rightarrow \mathbb{R}^{d_1 \times d_2}$. In Figure~\ref{fig:spectrum} we visualize the Fourier statistics of natural images as well as the average delta of the common corruptions. Natural images have higher concentrations in low frequencies; thus, when we refer to a ``high'' or ``low'' frequency corruption we will always use this term on a relative scale. Gaussian noise is uniformly distributed across the Fourier frequencies and thus has much higher frequency statistics relative to natural images. Many of the blurring corruptions remove or change the high frequency content of images; as a result, $C(X) - X$ will have a higher fraction of high frequency energy. For corruptions such as contrast and fog, the energy of the corruption is concentrated more on low frequency components.
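These spectra can be estimated with a short NumPy loop. The sketch below is our illustration, where \texttt{corrupt} stands for any corruption function $C$ (the averaging over severities is omitted):

```python
import numpy as np

def corruption_spectrum(images, corrupt):
    """Estimate E[|F(C(X) - X)[i, j]|] by averaging the magnitude of
    the 2-D DFT of the corruption delta over a batch of images.
    The spectrum is fftshift-ed so the lowest frequency is centered."""
    acc = 0.0
    for x in images:
        delta = corrupt(x) - x
        acc = acc + np.abs(np.fft.fftshift(np.fft.fft2(delta)))
    return acc / len(images)
```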
The observed differences in the Fourier statistics suggest an explanation for why the two augmentation methods improve performance in additive noise but not fog and contrast --- the two augmentation methods encourage the model to become invariant to high frequency information while relying more on low frequency information. We investigate this hypothesis via several perturbation analyses of the three models in question. First, we test model sensitivity to perturbations along each Fourier basis vector. Results on CIFAR-10 are shown in Figure~\ref{fig:model_heatmaps_cifar}. The difference between the three models is striking. The naturally trained model is highly sensitive to additive perturbations in all but the lowest frequencies, while Gaussian data augmentation and adversarial training both dramatically improve robustness in the higher frequencies. For the models trained with data augmentation, we see a subtle but distinct lack of robustness at the lowest frequencies (relative to the naturally trained model). Figure~\ref{fig:model_heatmaps_imagenet} shows similar results for three different models on ImageNet. Similar to CIFAR-10, Gaussian data augmentation improves robustness to high frequency perturbations while reducing performance on low frequency perturbations.
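The Fourier basis perturbations used in this analysis can be constructed as follows (a minimal NumPy sketch of real-valued basis matrices $U_{i,j}$; the random sign and the default norm of $4$ follow the CIFAR-10 setup):

```python
import numpy as np

def fourier_basis_vector(d, i, j):
    """Real d x d matrix with unit l2 norm whose 2-D DFT is supported
    only at (i, j) and its conjugate-symmetric frequency (-i, -j)."""
    freq = np.zeros((d, d), dtype=complex)
    freq[i % d, j % d] = 1.0
    freq[-i % d, -j % d] = 1.0   # symmetry makes the inverse DFT real
    u = np.real(np.fft.ifft2(freq))
    return u / np.linalg.norm(u)

def perturb(x, i, j, eps=4.0, rng=None):
    """Add noise of l2 norm eps aligned with U_{i,j}, with random sign."""
    rng = np.random.default_rng() if rng is None else rng
    sign = rng.choice([-1.0, 1.0])
    return x + sign * eps * fourier_basis_vector(x.shape[0], i, j)
```

Evaluating the model error rate on images perturbed this way, for every $(i, j)$, yields one cell of the heat maps.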
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\linewidth]{imagenet_model_heatmap.pdf}
\caption{Model sensitivity to additive noise aligned with different Fourier basis vectors on ImageNet validation images. We fix the basis vectors to have $\ell_2$ norm $15.7$. Error rates are averaged over the entire ImageNet validation set. We present the $63 \times 63$ square centered at the lowest frequency in the Fourier domain. Again, the naturally trained model is highly sensitive to additive noise in all but the lowest frequencies. On the other hand, Gaussian data augmentation improves robustness in the higher frequencies while sacrificing the robustness to low frequency perturbations. For AutoAugment, we observe that its Fourier heat map has the largest blue/yellow area around the center, indicating that AutoAugment is relatively robust to low to mid frequency corruptions.}
\label{fig:model_heatmaps_imagenet}
\end{figure*}
To test this further, we added noise with fixed $\ell_2$ norm but different frequency bandwidths centered at the origin. We consider two settings, one where the origin is centered at the lowest frequency and one where the origin is centered at the highest frequency. As shown in Figure~\ref{fig:controlled_norm}, for a low frequency centered bandwidth of size $3$, the naturally trained model has less than half the error rate of the other two models. For high frequency bandwidth, the models trained with data augmentation dramatically outperform the naturally trained model.
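This band-limited noise can be sampled as follows (our NumPy sketch; in the experiments the noise is sampled per channel, as described in the caption of Figure~\ref{fig:controlled_norm}):

```python
import numpy as np

def band_limited_noise(d, bandwidth, norm, high=False, rng=None):
    """Sample i.i.d. Gaussian noise, keep only a bandwidth x bandwidth
    square of frequencies centered at the lowest (high=False) or the
    highest (high=True) frequency, and rescale to a fixed l2 norm."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fft2(rng.standard_normal((d, d)))
    c, h = d // 2, bandwidth // 2
    mask = np.zeros((d, d))
    mask[c - h:c + h + 1, c - h:c + h + 1] = 1.0
    if high:
        # in the unshifted spectrum, index (d/2, d/2) is the highest frequency
        noise = np.real(np.fft.ifft2(f * mask))
    else:
        # shift so the lowest frequency sits at the center, then mask
        noise = np.real(np.fft.ifft2(np.fft.ifftshift(np.fft.fftshift(f) * mask)))
    return noise * (norm / np.linalg.norm(noise))
```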
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{controlled_perturbation.pdf}
\caption{Robustness of models under additive noise with fixed norm and different frequency distribution. For each channel in each CIFAR-10 test image, we sample i.i.d.\ Gaussian noise, apply a low/high pass filter, and normalize the filtered noise to have $\ell_2$ norm $8$, before adding it to the image. We vary the bandwidth of the low/high pass filter and generate the two plots. The naturally trained model is more robust to the low frequency noise with bandwidth $3$, while Gaussian data augmentation and adversarial training make the model more robust to high frequency noise.}
\label{fig:controlled_norm}
\end{figure}
This is consistent with the hypothesis that the models trained with the noise augmentation are biased towards low frequency information. As a final test, we analyzed the performance of models with a low/high pass filter applied to the input (we call the low/high pass filter the \emph{front end} of the model). Consistent with prior experiments, we find that applying a low-pass front end degrades performance on fog and contrast while improving performance on additive noise and blurring. If we instead bias the model towards high frequency information we observe the opposite effect: applying a high-pass front end degrades performance on all corruptions (as well as on clean data), but the degradation is more severe on the high frequency corruptions. These experiments again confirm our hypothesis about the robustness properties of models with a high (or low) frequency bias.
To better quantify the relationship between frequency and robustness for various models we measure the ratio of energy in the high and low frequency domain. For each corruption $C$, we apply high pass filtering with bandwidth $27$ (denote this operation by $H(\cdot)$) on the delta of the corruption, i.e., $C(X) - X$. We use $\frac{\norms{H(C(X) - X)}^2}{\norms{C(X) - X}^2}$ as a metric of the fraction of high frequency energy in the corruption. For each corruption, we average this quantity over all the validation images and all $5$ severities. We evaluate $6$ models on CIFAR-10-C, each trained differently --- natural training, Gaussian data augmentation, adversarial training, trained with a low pass filter front end (bandwidth $15$), trained with a high pass filter front end (bandwidth $31$), and trained with AutoAugment (see a more detailed discussion on AutoAugment in Section~\ref{sec:beyond}). Results are shown in Figure~\ref{fig:scatter}. Models with a low frequency bias perform better on the high frequency corruptions. The model trained with a high pass filter has a forced high frequency bias. While this model performs relatively poorly on even natural data, it is clear that high frequency corruptions degrade performance more than the low frequency corruptions. Full results, including those on ImageNet, can be found in the appendix.
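By Parseval's theorem this energy fraction can be computed directly in the frequency domain; a minimal NumPy sketch (our illustration, with $H$ taken to zero out the centered low frequency square) is:

```python
import numpy as np

def high_freq_energy_fraction(delta, bandwidth=27):
    """Fraction of the energy of `delta` outside the centered
    bandwidth x bandwidth square of low frequencies.  By Parseval,
    this equals ||H(delta)||^2 / ||delta||^2 for the high pass H."""
    f = np.fft.fftshift(np.fft.fft2(delta))
    total = np.sum(np.abs(f) ** 2)
    c0, c1 = delta.shape[0] // 2, delta.shape[1] // 2
    h = bandwidth // 2
    low = np.sum(np.abs(f[c0 - h:c0 + h + 1, c1 - h:c1 + h + 1]) ** 2)
    return (total - low) / total
```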
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\hspace{-0.1in} \includegraphics[width=0.33\linewidth]{scatter_mean_gauss_adv_low_12.pdf} \hspace{-0.1in} & \hspace{-0.1in} \includegraphics[width=0.33\linewidth]{scatter_mean_high_12.pdf} \hspace{-0.1in} & \hspace{-0.1in}
\includegraphics[width=0.33\linewidth]{scatter_mean_auto_12.pdf} \hspace{-0.1in} \\
\multicolumn{3}{c}{\includegraphics[width=0.98\linewidth]{legend.pdf}}
\end{tabular}
\caption{Relationship between test accuracy and fraction of high frequency energy of the CIFAR-10-C corruptions. Each scatter point in the plot represents the evaluation result of a particular model on a particular corruption type. The x-axis represents the fraction of high frequency energy of the corruption type, and the y-axis represents change in test accuracy compared to a naturally trained model. Overall, Gaussian data augmentation, adversarial training, and adding low pass filter improve robustness to high frequency corruptions, and degrade robustness to low frequency corruptions. Applying a high pass filter front end yields a more significant accuracy drop on high frequency corruptions compared to low frequency corruptions. AutoAugment improves robustness on nearly all corruptions, and achieves the best overall performance. The legend at the bottom shows the slope ($k$) and residual ($r$) of each fitted line.}
\label{fig:scatter}
\end{figure}
\subsection{Does low frequency data augmentation improve robustness to low frequency corruptions?}\label{sec:asymmetry}
While Figure~\ref{fig:scatter} shows a clear relationship between frequency and the robustness gains of several data augmentation strategies, the Fourier perspective does not predict transfer between data augmentation and robustness in all situations.
We experimented with applying additive noise that matches the statistics of the fog corruption in the frequency domain. We define ``fog noise'' to be the additive noise distribution $\sum\limits_{i,j} \mathcal{N}(0, \sigma_{i,j}^2) U_{i,j}$ where the $\sigma_{i,j}$ are chosen to match the typical norm of the fog corruption on basis vector $U_{i,j}$ as shown in Figure~\ref{fig:spectrum}. In particular, the marginal statistics of fog noise are identical to the fog corruption in the Fourier domain. However, data augmentation on fog noise \emph{degrades} performance on the fog corruption (Table~\ref{tab:fog_noise}). This occurs despite the fact that the resulting model yields improved robustness to perturbations along the low frequency vectors (see the Fourier heat maps in the appendix).
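Sampling from this distribution amounts to drawing an independent Gaussian coefficient for each Fourier basis vector. The sketch below is our transcription of the formula, where \texttt{sigma} is the array of per-frequency standard deviations estimated from the fog corruption (a hypothetical input here):

```python
import numpy as np

def fourier_basis_vector(d, i, j):
    """Real d x d matrix with unit l2 norm supported on frequency
    (i, j) and its conjugate-symmetric partner."""
    freq = np.zeros((d, d), dtype=complex)
    freq[i % d, j % d] = 1.0
    freq[-i % d, -j % d] = 1.0
    u = np.real(np.fft.ifft2(freq))
    return u / np.linalg.norm(u)

def fog_noise(sigma, rng=None):
    """Sample sum_{i,j} N(0, sigma[i,j]^2) U_{i,j}."""
    rng = np.random.default_rng() if rng is None else rng
    d = sigma.shape[0]
    noise = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            if sigma[i, j] > 0:
                noise += rng.normal(0.0, sigma[i, j]) * fourier_basis_vector(d, i, j)
    return noise
```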
\begin{table}[h]
\begin{center}
\begin{tabular}{cccccc}
\hline
fog severity & 1 & 2 & 3 & 4 & 5 \\ \hline
naturally trained & 0.9606 & 0.9484 & 0.9395 & 0.9072 & 0.7429 \\ \hline
fog noise augmentation & 0.9090 & 0.8726 & 0.8120 & 0.7175 & 0.4626 \\ \hline
\end{tabular}
\end{center}
\caption{Training with fog noise hurts performance on fog corruption.}\label{tab:fog_noise}
\end{table}
We hypothesize that the story is more complicated for low frequency corruptions because of an asymmetry between high and low frequency information in natural images. Given that natural images are concentrated more in low frequencies, a model can more easily learn to ``ignore'' high frequency information than low frequency information. Indeed, as shown in Figure~\ref{fig:filter_front_end}, model performance drops off far more rapidly when low frequency information is removed than when high frequency information is removed.
\subsection{More varied data augmentation offers more general robustness}\label{sec:beyond}
The trade-offs between low and high frequency corruptions for Gaussian data augmentation and adversarial training lead to the natural question of how to achieve robustness to a more diverse set of corruptions. One intuitive solution is to train on a variety of data augmentation strategies.
Towards this end, we investigated the learned augmentation policy AutoAugment \cite{cubuk2018autoaugment}. AutoAugment applies a learned mixture of image transformations during training and achieves state-of-the-art performance on CIFAR-10 and ImageNet. In all of our experiments with AutoAugment, we remove the brightness and contrast sub-policies as they explicitly appear in the common corruption benchmarks.\footnote{Our experiment is based on the open source implementation of AutoAugment at\\ \url{https://github.com/tensorflow/models/tree/master/research/autoaugment}.} Despite the fact that this policy was tuned specifically for clean test accuracy, we found that it also dramatically improves robustness on CIFAR-10-C. Here we present part of the results in Table~\ref{tab:cifar_comp}; the full results can be found in the appendix. In the third plot in Figure~\ref{fig:scatter}, we also visualize the performance of AutoAugment on CIFAR-10-C.
More specifically, on CIFAR-10-C we compare the robustness of the naturally trained model, Gaussian data augmentation, the adversarially trained model, and AutoAugment. We observe that among the four models, AutoAugment achieves the best average corruption test accuracy of 86\%. Using the mean corruption error (mCE) metric proposed in~\cite{hendrycks2019benchmarking} with the naturally trained model as the baseline (see a formal definition of mCE in the appendix), we observe that AutoAugment achieves the best mCE of $64$; in comparison, Gaussian data augmentation and adversarial training achieve mCE of $98$ and $108$, respectively. In addition, AutoAugment improves robustness on all but one of the corruptions compared to the naturally trained model.
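For reference, the mCE computation with the naturally trained model as baseline can be sketched as follows (our transcription of the metric from~\cite{hendrycks2019benchmarking}; the error arrays are hypothetical inputs):

```python
import numpy as np

def mean_corruption_error(model_err, baseline_err):
    """mCE: for each corruption, sum the model's error rate over the
    severities, normalize by the baseline's summed error, then average
    the resulting ratios over corruption types (in percent)."""
    model_err = np.asarray(model_err, dtype=float)       # (n_corruptions, n_severities)
    baseline_err = np.asarray(baseline_err, dtype=float)
    ce = model_err.sum(axis=1) / baseline_err.sum(axis=1)
    return 100.0 * ce.mean()
```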
\begin{table}[h]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ c c c | c c c | c c c c c | c c c | c c c c}
\hline
& & & \multicolumn{3}{c}{noise} & \multicolumn{5}{|c|}{blur} & \multicolumn{3}{|c|}{weather} & \multicolumn{4}{c}{digital} \\ \hline
\hspace{-0.1in} {model} \hspace{-0.1in} & \hspace{-0.1in} {acc} \hspace{-0.1in} & \hspace{-0.1in} {mCE} \hspace{-0.1in} & \hspace{-0.1in} {speckle} \hspace{-0.1in} & \hspace{-0.1in} {shot} \hspace{-0.1in} & \hspace{-0.1in} {impulse} \hspace{-0.1in} & \hspace{-0.1in} {defocus} \hspace{-0.1in} & \hspace{-0.1in} {Gauss} \hspace{-0.1in} & \hspace{-0.1in} {glass} \hspace{-0.1in} & \hspace{-0.1in} {motion} \hspace{-0.1in} & \hspace{-0.1in} {zoom} \hspace{-0.1in} & \hspace{-0.1in} {snow} \hspace{-0.1in} & \hspace{-0.1in} {fog} \hspace{-0.1in} & \hspace{-0.1in} {bright} \hspace{-0.1in} & \hspace{-0.1in} {contrast} \hspace{-0.1in} & \hspace{-0.1in} {elastic} \hspace{-0.1in} & \hspace{-0.1in} {pixel} \hspace{-0.1in} & \hspace{-0.1in} {jpeg} \hspace{-0.1in} \\ \hline
{natural} & 77 & 100 & 70 & 68 & 54 & 85 & 73 & 57 & 81 & 80 & 85 & 90 & 95 & 82 & 86 & 73 & 80 \\ \hline
{Gauss} & 83 & 98 & \textbf{92} & \textbf{92} & 83 & 84 & 79 & \textbf{80} & 77 & 82 & 88 & 72 & 92 & 57 & 84 & \textbf{90} & \textbf{91} \\ \hline
{adversarial} & 81 & 108 & 82 & 83 & 69 & 84 & 82 & \textbf{80} & 80 & 83 & 83 & 73 & 87 & 77 & 82 & 85 & 85 \\ \hline
{Auto} & \textbf{86} & \textbf{64} & 81 & 78 & \textbf{86} & \textbf{92} & \textbf{88} & 76 & \textbf{85} & \textbf{90} & \textbf{89} & \textbf{95} & \textbf{96} & \textbf{95} & \textbf{87} & 71 & 81 \\ \hline
\end{tabular}}
\end{center}
\caption{Comparison between the naturally trained model (natural), Gaussian data augmentation (Gauss), the adversarially trained model (adversarial), and AutoAugment (Auto) on CIFAR-10-C. We remove all corruptions that appear in this benchmark from the AutoAugment policy. All numbers are in percentage. The first column shows the average top-1 test accuracy on all the corruptions; the second column shows the mCE; the rest of the columns show the average test accuracy over the $5$ severities for each corruption. We observe that AutoAugment achieves the best average test accuracy and the best mCE. In most of the blurring and all of the weather corruptions, AutoAugment achieves the best performance among the four models.}\label{tab:cifar_comp}
\end{table}
As for the ImageNet-C benchmark, instead of using the compressed ImageNet-C images provided in~\cite{hendrycks2019benchmarking}, we evaluate the models on corruptions applied in memory,\footnote{The dataset of images with corruptions applied in memory can be found at~\url{https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/image/imagenet2012_corrupted.py}.} and observe that AutoAugment also achieves the highest average corruption test accuracy. The full results can be found in the appendix. As for the compressed ImageNet-C images, we note that a follow-up work has utilized AutoAugment in a way to achieve state-of-the-art results~\cite{anonymous2020augmix}.
\subsection{Adversarial examples are not strictly a high frequency phenomenon}\label{sec:advex}
Adversarial perturbations remain a popular topic of study in the machine learning community. A common hypothesis is that adversarial perturbations lie primarily in the high frequency domain. In fact, several (unsuccessful) defenses have been proposed motivated specifically by this hypothesis. Under the assumption that compression removes high frequency information, JPEG compression has been proposed several times \cite{liu2018feature, aydemir2018effects, das2018shield} as a method for improving robustness to small perturbations. Studying the statistics of adversarially generated perturbations is not a well defined problem because these statistics will ultimately depend on how the adversary constructs the perturbation. This difficulty has led to many false claims of methods for detecting adversarial perturbations \cite{carlini2017adversarial}. Thus the analysis presented here is intended to better understand common hypotheses about adversarial perturbations, rather than to detect all possible perturbations.
For several models we use PGD to construct adversarial perturbations for every image in the test set. We then analyze the delta between the clean and perturbed images and project these deltas into the Fourier domain. By aggregating across the successful attack images, we obtain an understanding of the frequency properties of the constructed adversarial perturbations. The results are shown in Figure~\ref{fig:adv_perturb_spectrum}.
For the naturally trained model, the measured adversarial perturbations do indeed show higher concentrations in the high frequency domain (relative to the statistics of natural images). However, for the adversarially trained model this is no longer the case. The deltas for the adversarially trained model resemble that of natural data.
Our analysis provides some additional understanding of a number of observations in prior work on adversarial examples. First, while adversarial perturbations for the naturally trained model do indeed show higher concentrations in the high frequency domain, this does not mean that removing high frequency information from the input results in a robust model. Indeed, as shown in Figure~\ref{fig:model_heatmaps_cifar}, the naturally trained model is not worst-case or even average-case robust at any frequency (except perhaps the extreme low frequencies). Thus, we should expect that if we adversarially search for errors in the low frequency domain, we will find them easily. This explains why JPEG compression, or any other method based specifically on removing high frequency content, should not be expected to be robust to worst-case perturbations.
Second, the fact that adversarial training biases these perturbations towards the lower frequencies suggests an intriguing connection between adversarial training and the DeepViz~\cite{olah2017feature} method for feature visualization. In particular, optimizing the input in the low frequency domain is one of the strategies utilized by DeepViz to bias the optimization in the image space towards semantically meaningful directions. Perhaps the reason adversarially trained models have semantically meaningful gradients~\cite{tsipras2018robustness} is because gradients are biased towards low frequencies in a similar manner as utilized in DeepViz.
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\linewidth]{adv_pert_heatmap.png}
\caption{(a) and (b): Fourier spectrum of adversarial perturbations. For any image $X$, we run the PGD attack~\cite{madry2017towards} to generate an adversarial example $C(X)$. We estimate the Fourier spectrum of the adversarial perturbation, i.e., $\mathbb{E}[|\mathcal{F}(C(X) - X)[i, j]|]$, where the expectation is taken over the perturbed images which are incorrectly classified. (a) naturally trained; (b) adversarially trained. The adversarial perturbations for the naturally trained model are uniformly distributed across frequency components. In comparison, adversarial training biases these perturbations towards the lower frequencies. (c) and (d): Adding Fourier basis vectors with large norm to images is a simple method for generating content-preserving black box adversarial examples. }
\label{fig:adv_perturb_spectrum}
\end{figure*}
As a final note, we observe that adding certain Fourier basis vectors with large norm ($24$ for ImageNet) degrades test accuracy to less than 10\% while preserving the semantics of the image. Two examples of the perturbed images are shown in Figure~\ref{fig:adv_perturb_spectrum}. If additional model queries are allowed, subtler perturbations will suffice --- the perturbations used in Figure~\ref{fig:model_heatmaps_imagenet} can drop accuracies to less than 30\%. Thus, these Fourier basis corruptions can be considered as content-preserving black box attacks, and could be of interest to research on security. Fourier heat maps with larger perturbations are included in the appendix.
\section{Conclusions and future work}\label{sec:conclusion}
We obtained a better understanding of trade-offs observed in recent robustness work in the image domain. By investigating common corruptions and model performance in the frequency domain we establish connections between frequency of a corruption and model performance under data augmentation. This connection is strongest for high frequency corruptions, where Gaussian data augmentation and adversarial training bias the model towards low frequency information in the input. This results in improved robustness to corruptions with higher concentrations in the high frequency domain at the cost of reduced robustness to low frequency corruptions and clean test error.
Solving the robustness problem via data augmentation alone feels quite challenging given the trade-offs we commonly observe. Naively augmenting with specific corruptions often does not transfer well to held-out corruptions \cite{geirhos2018generalisation}. However, the impressive robustness of AutoAugment gives us hope that data augmentation done properly can play a crucial role in mitigating the robustness problem.
Care must be taken, though, when utilizing data augmentation for robustness not to overfit to the validation set of held-out corruptions. The goal is to learn \emph{domain invariant features} rather than simply to become robust to a specific set of corruptions. The fact that AutoAugment was tuned specifically for clean test error, and transfers well even after removing the contrast and brightness parts of the policy (as these corruptions appear in the benchmark), gives us hope that this is a step towards more useful domain invariant features. The robustness problem is certainly far from solved, and our Fourier analysis shows that the AutoAugment model is not strictly more robust than the baseline --- there are frequencies for which robustness is degraded rather than improved. Because of this, we anticipate that robustness benchmarks will need to evolve over time as progress is made. These trade-offs are to be expected, and researchers should actively search for new blind spots induced by the methods they introduce. As we grow in our understanding of these trade-offs we can design better benchmarks to obtain a more comprehensive perspective on model robustness.
While data augmentation is perhaps the most effective method we currently have for the robustness problem, it seems unlikely that data augmentation \emph{alone} will provide a complete solution. Towards that end it will be important to develop orthogonal methods --- e.g. architectures with better inductive biases or loss functions which when combined with data augmentation encourage extrapolation rather than interpolation.
\subsubsection*{Acknowledgments}
We would like to thank Nicolas Ford and Norman Mu for helpful discussions.
\bibliographystyle{abbrv}
\section{Preliminaries}\label{sec:prelim}
We denote the $\ell_2$ norm of vectors (and in general, tensors) by $\norms{\cdot}$. For a vector $x \in \mathbb{R}^d$, we denote its entries by $x[i]$, $i\in\{0, \ldots, d-1\}$, and for a matrix $X\in\mathbb{R}^{d_1\times d_2}$, we denote its entries by $X[i,j]$, $i\in\{0, \ldots, d_1-1\}$, $j\in\{0, \ldots, d_2-1\}$. We omit the dimension of image channels, and denote them by matrices $X\in\mathbb{R}^{d_1 \times d_2}$. We denote by $\mathcal{F}: \mathbb{R}^{d_1\times d_2} \rightarrow \mathbb{C}^{d_1 \times d_2}$ the 2D discrete Fourier transform (DFT) and by $\mathcal{F}^{-1}$ the inverse DFT. When we visualize the Fourier spectrum, we always shift the low frequency components to the center of the spectrum.
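As a concrete illustration of these conventions (a sketch we add for the reader, using NumPy; the $32 \times 32$ grayscale image is a placeholder):

```python
import numpy as np

# Sketch of the visualization convention above: compute the 2D DFT and
# shift the low frequency components to the center of the spectrum.
def centered_spectrum(X):
    return np.abs(np.fft.fftshift(np.fft.fft2(X)))

X = np.random.rand(32, 32)   # placeholder grayscale image
S = centered_spectrum(X)
# After the shift, the DC (lowest frequency) component sits at the center.
```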
We define high pass filtering with bandwidth $B$ as the operation that sets to zero all the frequency components outside of a centered square of width $B$ in the Fourier spectrum with the highest frequency shifted to the center, and then applies the inverse DFT. The low pass filtering operation is defined similarly, with the difference that the centered square is applied to the Fourier spectrum with the low frequencies shifted to the center.
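The filtering operations just defined can be sketched as follows (our illustration, not code from the original experiments; high pass is analogous, with the square centered on the high frequencies):

```python
import numpy as np

# Sketch of low pass filtering with bandwidth B: keep a centered B x B
# square of the low-frequency-centered spectrum, zero the rest, invert.
def low_pass(X, B):
    d1, d2 = X.shape
    F = np.fft.fftshift(np.fft.fft2(X))
    mask = np.zeros((d1, d2))
    c1, c2 = d1 // 2, d2 // 2
    mask[c1 - B // 2 : c1 + (B + 1) // 2,
         c2 - B // 2 : c2 + (B + 1) // 2] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```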
We assume that the pixels take values in range $[0, 1]$. In all of our experiments with data augmentation we always clip the pixel values to $[0, 1]$. We define Gaussian data augmentation with parameter $\sigma$ as the following operation: In each iteration, we add i.i.d. Gaussian noise $\mathcal{N}(0, \widetilde{\sigma}^2)$ to every pixel in all the images in the training batch, where $\widetilde{\sigma}$ is chosen uniformly at random from $[0, \sigma]$. For our experiments on CIFAR-10, we use the Wide ResNet-28-10 architecture~\cite{zagoruyko2016wide}, and for our experiment on ImageNet, we use the ResNet-50 architecture~\cite{he2016deep}. When we use Gaussian data augmentation, we choose parameter $\sigma=0.1$ for CIFAR-10 and $\sigma = 0.4$ for ImageNet. All experiments use flip and crop during training.
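A minimal sketch of this augmentation step (our illustration, assuming a NumPy array batch with pixel values in $[0,1]$; the function name is ours):

```python
import numpy as np

# Gaussian data augmentation with parameter sigma: per training batch,
# draw sigma_tilde ~ Uniform[0, sigma], add i.i.d. N(0, sigma_tilde^2)
# pixel noise, then clip pixel values back to [0, 1].
def gaussian_augment(batch, sigma, rng=None):
    rng = rng or np.random.default_rng()
    sigma_tilde = rng.uniform(0.0, sigma)
    noisy = batch + rng.normal(0.0, sigma_tilde, size=batch.shape)
    return np.clip(noisy, 0.0, 1.0)
```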
\paragraph*{Fourier heat map} We will investigate the sensitivity of models to high and low frequency corruptions via a perturbation analysis in the Fourier domain. Let $U_{i,j} \in \mathbb{R}^{d_1 \times d_2}$ be a real-valued matrix such that $\norms{U_{i,j}} = 1$, and $\mathcal{F}(U_{i,j})$ only has up to two non-zero elements located at $(i, j)$ and its symmetric coordinate with respect to the image center; we call these matrices the 2D \emph{Fourier basis} matrices~\cite{bracewell1986fourier}.
Given a model and a validation image $X$, we can generate a perturbed image with Fourier basis noise. More specifically, we can compute ${\widetilde X}_{i,j} = X + rvU_{i,j}$,
where $r$ is chosen uniformly at random from $\{-1, 1\}$, and $v>0$ is the norm of the perturbation. For multi-channel images, we perturb every channel independently. We can then evaluate the models under Fourier basis noise and visualize how the test error changes as a function of $(i,j)$, and we call these results the Fourier heat map of a model. We are also interested in understanding how the outputs of the models' intermediate layers change when we perturb the images using a specific Fourier basis, and these results are relegated to the Appendix.
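A sketch of how such Fourier basis perturbations can be generated (our illustration; the indices here refer to the unshifted spectrum, and a single channel is perturbed):

```python
import numpy as np

def fourier_basis(d1, d2, i, j):
    """Real matrix with unit l2 norm whose DFT is supported at (i, j)
    and its conjugate-symmetric frequency, as in the definition above."""
    spec = np.zeros((d1, d2), dtype=complex)
    spec[i % d1, j % d2] = 1.0
    spec[-i % d1, -j % d2] += 1.0   # Hermitian partner, so the inverse DFT is real
    U = np.fft.ifft2(spec).real
    return U / np.linalg.norm(U)

def perturb(X, i, j, v, rng=None):
    rng = rng or np.random.default_rng()
    r = rng.choice([-1.0, 1.0])     # random sign, as in the definition above
    return X + r * v * fourier_basis(X.shape[0], X.shape[1], i, j)
```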
\section*{Introduction}
\label{sec:intro}
The main result of this paper, Theorem~\ref{thm:main-thm}, calculates the topological higher simple structure sets $\sS^{s}_{\del} (L \times D^{m})$ in the sense of surgery theory, see Section~\ref{sec:higher-str-sets} for a definition, for $L$ a fake lens space with the fundamental group $\ZZ/N \cong G = \pi_{1} (L)$ for $N = 2^{K}$ and $m \geq 1$. The case $m=0$ was done by Wall for $N$ odd \cite[14.E]{Wall(1999)}, by Lopez de Medrano \cite{LdM(1971)} and Wall for $N=2$ \cite[14.D]{Wall(1999)} and by Macko and Wegner for $N=2^{K}$ \cite{Macko-Wegner(2009)} and for general $N = 2^{K} \cdot M$, where $K \geq 1$ and $M$ is odd \cite{Macko-Wegner(2011)}. The case $m \geq 1$ was done by Madsen and Rothenberg for $N=2$ and $N$ odd \cite{Madsen-Rothenberg(1989)}. Hence the remaining cases are when $m \geq 1$ and $N = 2^{K} \cdot M$, where $K \geq 1$ and $M$ is odd. In this paper we take care of the case $N=2^K$ and we plan to address the most general case in a subsequent work. We also obtain Corollary~\ref{cor:higehr-str-sets-L-times-S} where we calculate $\sS^{s} (L \times S^m)$ for $m \geq 3$.
The calculations build on the work from all the above mentioned sources. The basic idea is to extend the methods of \cite{Macko-Wegner(2009)}, which in turn use \cite{Wall(1999)}, from the case $m=0$ to the case $m \geq 1$. Several issues need to be verified and adjusted in the present case, which is what we do. We use the $\rho$-invariant for manifolds with boundary from \cite{Madsen-Rothenberg(1989)} and some of its general properties, but we need to do a bit more in our special case $N=2^K$. Most importantly, a formula of Wall for the $\rho$-invariant of certain closed manifolds from \cite[Theorem 14C.4]{Wall(1999)} needs to be generalized and slightly modified. The generalization and its proof is the main technical result of this paper. It is formulated in Theorem~\ref{thm:rho-formula-cp-times-disk} and the proof is based on a technical Proposition~\ref{prop:alpha-and-beta-depend-linearly-on-s-4i}.
The paper is organized as follows. In Section~\ref{sec:results} we state the main Theorem~\ref{thm:main-thm} and Corollary~\ref{cor:higehr-str-sets-L-times-S}. Section~\ref{sec:higher-str-sets} contains the definition of the higher simple structure sets for a general manifold $X$ and their properties. Section~\ref{sec:higher-str-set-for-cp} deals with the cases when $X$ are complex projective spaces, which we also need as explained at the beginning of that section and Proposition~\ref{prop:alpha-and-beta-depend-linearly-on-s-4i} is proved. Section~\ref{sec:lens-times-disk} contains the first results about the cases when $X$ are fake lens spaces based on the long surgery exact sequence, which reduces the calculation to a certain extension problem, see Theorem~\ref{thm:how-to-split-alpha-k}. In Section~\ref{sec:the-rho-invariant} the $\rho$-invariant is discussed which is the main tool to solve the remaining extension problem, in particular the above mentioned Theorem~\ref{thm:rho-formula-cp-times-disk} is proved. The formula from this theorem is then used in Section~\ref{sec:calculations} which contains the proof of the main Theorem~\ref{thm:main-thm} and Corollary~\ref{cor:higehr-str-sets-L-times-S}. The final Section~\ref{sec:final-remarks} contains a discussion of further directions.
\section{Results}
\label{sec:results}
A {\it fake lens space} $L = L (\alpha)$ is a topological manifold given as the orbit space of a free action $\alpha$ of a finite cyclic group $G = \ZZ/N$ on a sphere $S^{2d-1}$. The main result of this paper is the following theorem about them.
\begin{thm}\label{thm:main-thm}
Let $L = L (\alpha)$ be a $(2d-1)$-dimensional fake lens space for some free action $\alpha$ of the cyclic group $G = \ZZ/N$ with $N = 2^K$ for some $K \geq 1$ on $S^{2d-1}$ with $d \geq 2$ and let $k \geq 1$. Then we have isomorphisms
\begin{align*}
(\wrho_{\del},\bbr_{0},\bbr,\br) \co \sS^{s}_{\del} (L \times D^{2k}) & \xra{\cong}
\begin{cases}
F^{+} \oplus \ZZ \oplus T'_{N} \oplus T_{2} \quad & d = 2e, k=2l \\
F^{-} \oplus \ZZ/2 \oplus T'_{N} \oplus T_{2} \quad & d = 2e, k=2l+1 \\
F^{-} \oplus \ZZ \oplus T'_{N} \oplus T_{2} \quad & d = 2e+1, k=2l \\
F^{+} \oplus \ZZ/2 \oplus T'_{N} \oplus T_{2} \quad & d = 2e+1, k=2l+1
\end{cases} \\
(\bbr,\br) \co \sS^{s}_{\del} (L \times D^{2k+1}) & \xra{\cong} T_{2} (\textup{odd}) \quad \textup{also} \; k=0,
\end{align*}
where the meaning of the symbols in the target is as follows.
\begin{enumerate}
\item $F^{+}$ is a free abelian group of rank $2^{K-1}$;
\item $F^{-}$ is a free abelian group of rank $2^{K-1} - 1$;
\item Let $c_N (d,k) = e-1$ when $(d,k)=(2e,2l)$ and let $c_N (d,k) = e$ in other cases. Then
\[
T'_{N} \cong \bigoplus_{i=1}^{c_N(d,k)} \ZZ/2^{\textup{min} \{ 2i , K \}};
\]
\item Let $c_2 (d,k) = e$ when $(d,k)=(2e+1,2l)$ and let $c_2 (d,k) = e-1$ in other cases. Then
\[
T_{2} \cong \bigoplus_{i=1}^{c_2 (d,k)} \ZZ/2;
\]
\item Let $c_2 (d,k,\textup{odd}) = e-1$ when $(d,k)=(2e,2l+1)$ and let $c_2 (d,k,\textup{odd}) = e$ in other cases. Then
\[
T_{2} (\textup{odd}) \cong \bigoplus_{i=1}^{c_2 (d,k,\textup{odd})} \ZZ/2;
\]
\item The symbol $\wrho_{\del}$ denotes the reduced $\rho$-invariant for manifolds with boundary;
\item The invariant $\bbr$ is an invariant derived from the splitting invariants along $4i$-dimensional submanifolds;
\item The invariant $\br$ consists of the splitting invariants along $(4i-2)$-dimensional submanifolds;
\item The invariant $\bbr_{0}$ is an invariant derived from the splitting invariants along $2k$-dimensional submanifolds.
\end{enumerate}
\end{thm}
The invariant $\wrho_{\del}$ is defined in Section~\ref{sec:the-rho-invariant}, the invariants $\br$ are defined in Section~\ref{sec:lens-times-disk} and the invariants $\bbr_{0}$ and $\bbr$ are defined in Section~\ref{sec:calculations}.
\begin{cor} \label{cor:higehr-str-sets-L-times-S}
For $k \geq 2$ we have isomorphisms
\[
(\red_\del,\br',\br'') \co \sS^{s} (L \times S^{2k}) \cong \sS^{s}_{\del} (L \times D^{2k}) \oplus T_N (d) \oplus T_2 (d)
\]
and for $k \geq 1$ we have isomorphisms
\[
(\red_\del,\br',\br'') \co \sS^{s} (L \times S^{2k+1}) \cong \sS^{s}_{\del} (L \times D^{2k+1}) \oplus T_N (d) \oplus T_2 (d)
\]
where
\[
T_N (d) \cong \bigoplus_{i=1}^{\lfloor (d-1)/2 \rfloor} \ZZ/2^{K} \quad \textup{and} \quad T_2 (d) \cong \bigoplus_{i=1}^{\lfloor d/2 \rfloor} \ZZ/2.
\]
\end{cor}
The map $\red_\del$ is explained in Section~\ref{sec:higher-str-sets} and the invariants $\br',\br''$ in Section~\ref{sec:calculations}. Together with Theorem \ref{thm:main-thm} this shows that $\sS^{s} (L \times S^{m})$ is calculated by the invariants $\wrho_{\del},\bbr_{0},\bbr,\br,\br', \br''$ where each symbol has to be appropriately interpreted depending on parity of $d$ and $m$.
\section{Higher structure sets and the long surgery exact sequence}
\label{sec:higher-str-sets}
We review basic definitions and properties of the higher simple structure sets from surgery theory which we use. More detailed and more comprehensive information can be found in \cite{Wall(1999)}, \cite{Ranicki(2002)}, \cite{Crowley-Lueck-Macko(2019)}, \cite{Kirby-Siebenmann(1977)}, \cite{Quinn(1970)}, \cite{Madsen-Rothenberg(1989)}, \cite{Weiss-Williams(2001)}.
Let~$X$ be a closed~$n$-dimensional topological manifold. The {\it simple structure set}~$\sS^{s} (X)$ of~$X$ in the sense of surgery theory is defined to be the set of equivalence classes of simple homotopy equivalences~$h \co M \ra X$, with the source an~$n$-dimensional closed manifold, modulo homeomorphism up to homotopy in the source. Knowledge of~$\sS^{s} (X)$ is generally regarded as understanding manifolds in the simple homotopy type of~$X$. Many calculations are known, e.g. for $X = S^n$, $\RR P^n$, $\CC P^n$, $T^n = (S^{1})^{\times n}$, lens spaces, see e.g. \cite{Wall(1999)}.
Let~$Y$ be a compact~$n$-dimensional manifold with (a possibly empty) boundary. Then the simple structure set $\sS^{s}_{\del} (Y)$ is defined to be the set of equivalence classes of simple homotopy equivalences~$h \co (M,\del M) \ra (Y,\del Y)$, with the source an~$n$-dimensional compact manifold with boundary and whose restriction~$h| \co \del M \ra \del Y$ is a homeomorphism, modulo homeomorphism up to homotopy relative boundary in the source. We regard knowledge of~$\sS^{s}_{\del} (Y)$ as understanding manifolds in the simple homotopy type of~$Y$ relative to~$\del Y$.
If $X$ is closed we can take for any $m \geq 1$ the compact manifold with boundary $Y = X \times D^{m}$ and consider $\sS^{s}_{\del} (X \times D^{m})$. It turns out that these simple structure sets form a group, where the group operation is obtained geometrically by ``stacking''. In fact there is a space sometimes denoted $\sStw^{s} (X)$ whose $m$-th homotopy group is~$\sS^{s}_{\del} (X \times D^{m})$, \cite{Quinn(1970)} (this includes the case $m=0$). Therefore we sometimes call the simple structure sets of $X \times D^m$ {\it higher simple structure sets} of $X$.
The space~$\sStw^{s} (X)$ is closely related to automorphism spaces of~$X$ and hence its knowledge not only tells us about the manifolds in the homotopy type of $X \times D^{m}$ relative $X \times S^{m-1}$, but it also possibly tells us something about the space of self-homeomorphisms of $X$ (see \cite{Weiss-Williams(2001)} for more details).
On the other hand, given $X$, we might be interested in the simple structure sets of closed manifolds $X \times S^{m}$ for some $m \geq 1$. If $m \geq 3$ then transversality, restriction and the $\pi-\pi$-theorem of \cite[Chapter 4]{Wall(1999)} provide us with a map denoted $\res_{\pitchfork} \co \sS^{s} (X \times S^m) \ra \sS^{s} (X \times D^{m},X \times S^{m-1})$, where now $\sS^{s} (Y,\del Y)$ is yet another version of the structure set, in which we do not require a homeomorphism on the boundary. When $Y = X \times D^{m}$ with $m \geq 3$, the set $\sS^{s} (Y,\del Y)$ can be readily computed, again by the $\pi-\pi$-theorem, as we explain below, see~\eqref{eqn:str-set-rel-not-rel-bdry}. The above map is surjective (by taking a double) and its kernel is $\sS^{s}_{\del} (X \times D^{m})$, so that we have
\begin{equation} \label{eqn:higher-str-set-X-times-S}
(\red_\del,\res_{\pitchfork}) \co \sS^{s} (X \times S^m) \cong \sS^{s}_{\del} (X \times D^{m}) \times \sS^{s} (X \times D^{m},X \times S^{m-1})
\end{equation}
and hence any knowledge of the higher structure sets also tells us about~$\sS^{s} (X \times S^{m})$.
Thanks to the $s$-cobordism theorem elements in $\sS^{s}_{\del} (X \times D^{m})$ can also be represented by simple homotopy equivalences $h \co X \times D^{m} \ra X \times D^{m}$ so that the restriction of $h$ to the product of $X$ with the lower hemisphere of the boundary $S^{m-1}$ is the identity and the restriction of $h$ to the product of $X$ with the upper hemisphere is some homeomorphism (which a-priori does not commute with the projection to that hemisphere), see \cite{Madsen-Rothenberg(1989)}. Hence the source manifold is fixed in this description which may be of advantage in some constructions.
There are versions of the above concepts where the word simple is dropped, but we will not use them in the present paper. Of course, if the corresponding Whitehead group vanishes there is no difference, see~\cite[Chapters 1,2]{Crowley-Lueck-Macko(2019)}. Since this is the case in the simply-connected situation we often leave out the word simple when dealing with such manifolds.
The main tool for computing $\sS^{s}_{\del} (X \times D^{m})$ for a given $a$-dimensional manifold $X$ with $G = \pi_1 (X)$ and $m \geq 0$, so that the dimension of $X \times D^{m}$ is $n = a+m \geq 5$, is the long surgery exact sequence:
\begin{equation} \label{eqn:surgery-exact-sequence}
\begin{split}
\cdots \xra{\eta} \sN_\partial (X \times D^{m+1}) \xra{\theta}
L^s_{n+1} (\ZZ G) & \xra{\partial} \\ \quad \quad \xra{\partial} \sS_\del^s (X \times D^{m})
\xra{\eta} \sN_\partial (X \times D^{m}) & \xra{\theta} L^s_{n}
(\ZZ G) \xra{\del} \cdots.
\end{split}
\end{equation}
For detailed explanations of the terms we refer the reader to \cite[Chapter 10]{Wall(1999)} or \cite[Chapter 10]{Crowley-Lueck-Macko(2019)} and \cite{Kirby-Siebenmann(1977)} in the topological case. Here we only review the facts that are essential in this paper.
The sequence \eqref{eqn:surgery-exact-sequence} is a long exact sequence of abelian groups, which is a geometric fact for the terms with $m \geq 2$ and for smaller $m$ it is shown using algebraic theory of surgery of Ranicki \cite{Ranicki(1992)}. The $L$-groups are $4$-periodic in $n$ and can be defined algebraically using quadratic forms. They are functorial in $G$ and using the functoriality it is convenient to denote
\begin{equation} \label{eqn:reduced_L-group}
L_{n}^{s} (\ZZ G) \cong L_{n} (\ZZ) \oplus \widetilde{L}_{n}^{s} (\ZZ G).
\end{equation}
The normal invariants \cite[Chapter 6]{Crowley-Lueck-Macko(2019)} are a generalized cohomology theory
\begin{equation} \label{eqn:normal-invariants-general-formula}
\begin{split}
\sN_\partial (X \times D^{m}) \cong [X \times D^{m},X \times S^{m-1} ; \G/\TOP,\ast] & \cong \\
\cong H^{0} (X \times D^{m},X \times S^{m-1} ; \bL_{\bullet} \langle 1 \rangle) \cong & H_{n} (X ; \bL_{\bullet} \langle 1 \rangle)
\end{split}
\end{equation}
whose coefficient spectrum~$\bL_{\bullet} \langle 1 \rangle$ is well understood; its associated infinite loop space is the well-known space $\G/\TOP$ with homotopy groups
\begin{equation} \label{eqn:htpy-grps-g-mod-top}
\pi_{n} (\G/\TOP) \cong L_{n} (\ZZ) \cong \ZZ, 0, \ZZ/2, 0 \quad \textup{for} \; n \equiv 0,1,2,3 \pmod{4}, \; n \geq 1.
\end{equation}
Its homotopy type is recalled in \eqref{eqn:htpy-type-of-g-top}. In particular, it also possesses an almost $4$-periodicity.
Elements in $\sN_{\del} (X \times D^{m})$ can be represented by degree one normal maps of the form $(f,\ol f) \co (M,\del M) \ra (X \times D^{m},X \times S^{m-1})$ where the restriction of $f$ to $\del M$ is a homeomorphism. The map $\theta \co \sN_\partial (X \times D^{m}) \ra L_{n} (\ZZ G)$ is called the surgery obstruction map. The summand $L_{n} (\ZZ)$ is always hit by this map thanks to the existence of the Milnor and Kervaire manifolds \cite{Madsen-Milgram(1979)}.
In fact, we will need a more detailed description of the surgery obstruction map $\theta \co \sN (X) \ra L_{n} (\ZZ)$ in the case $X$ is closed with $\pi_{1} (X) = \{ 1\}$ and with dimension $n=4i$. Let $(f,\ol f) \co M \ra X$ be a degree one normal map representing an element in $\sN (X)$ for such an $X$. These data contain in particular the bundle map $\ol f \co \tau_M \ra \xi$ from the stable tangent microbundle $\tau_M$ to some stable microbundle $\xi$ over $X$.\footnote{The microbundles are used here since we are in the topological category, see~\cite{Kirby-Siebenmann(1977)}} Then under the identification $L_{4i} (\ZZ) \cong \ZZ$ of \eqref{eqn:htpy-grps-g-mod-top} the surgery obstruction is the difference of signatures divided by $8$
\begin{equation} \label{eqn:surgery-obstruction-is-difference-of-signatures}
\theta (f, \ol f) = 1/8 \cdot (\sign (M) - \sign (X)) = 1/8 \cdot (\ell (M)[M] - \ell (X)[X]) \in \ZZ,
\end{equation}
where $\ell$ denotes the total Hirzebruch $\ell$-class which is a rational linear combination of the rational Pontrjagin classes of the tangent microbundle, which are well defined for topological manifolds due to the homotopy equivalence of the rationalized classifying spaces $\BO_{\QQ} \simeq \BTOP_{\QQ}$, \cite{Kirby-Siebenmann(1977)}. In general we have the class $\ell (\xi) \in H^{4 \ast} (X;\QQ)$ as $\ell (\xi)= \sum_i \ell_{i} (\xi)$ with components $\ell_{i} (\xi) \in H^{4i} (X;\QQ)$ for any stable topological microbundle $\xi$ over $X$. Here we use the notation as in \cite[13B]{Wall(1999)}, which is in the PL-case, the use in the topological case is again justified by \cite{Kirby-Siebenmann(1977)}.
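For orientation we record the lowest-dimensional case $4i = 4$ explicitly (a standard specialization of the above, added here for convenience): in this case $\ell (X) = 1 + \ell_{1} (X)$ with $\ell_{1} = p_{1}/3$, so \eqref{eqn:surgery-obstruction-is-difference-of-signatures} becomes
\[
\theta (f, \ol f) = \tfrac{1}{8} \bigl( \ell_{1} (M)[M] - \ell_{1} (X)[X] \bigr) = \tfrac{1}{24} \bigl( p_{1} (M)[M] - p_{1} (X)[X] \bigr) \in \ZZ,
\]
which is the Hirzebruch signature theorem in dimension $4$, applied to both $M$ and $X$ and divided by $8$.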
Denote by $\hat{f} \co X \ra \G/\TOP$ the map corresponding to the degree one normal map $(f,\ol f) \co M \ra X$ under the bijection $\sN (X) \cong [X,\G/\TOP]$ of \eqref{eqn:normal-invariants-general-formula}. According to \cite[Chapter 13B, page 188]{Wall(1999)} there exists a characteristic class $\ell (\G/\TOP) \in H^{4\ast} (\G/\TOP;\QQ)$ such that
\begin{equation} \label{eqn:char-class-formula-surgery-obstr-1-ctd-case-dim-4i}
\theta (f, \ol f) = (\ell (X) \cdot \hat{f}^{\ast} \ell (\G/\TOP)) \; [X] \in L_{4i} (\ZZ) = \ZZ.
\end{equation}
The equations \eqref{eqn:surgery-obstruction-is-difference-of-signatures} and \eqref{eqn:char-class-formula-surgery-obstr-1-ctd-case-dim-4i} together give a relationship between the surgery obstruction and the coefficients of the $\ell (M)$ or equivalently the coefficients of $\ell (\xi)$ since $\tau_M \cong f^{\ast} \xi$. Note that $\ell (\xi)$ a-priori differs from $\ell (X)$ and their difference can be used to calculate $\theta (f,\ol f)$. In fact as explained in \cite[page 210]{Davis(2000)} in a slightly different notation we have that
\begin{equation} \label{eqn:L-of-xi-versus-L-of-G-mod-TOP}
\ell (\xi) = (8 \cdot \hat{f}^{\ast} \ell (\G/\TOP)+1) \cdot \ell (X),
\end{equation}
which gives $\ell (\xi)$ as a function of $\hat f$, a fact which will be used in Section \ref{sec:higher-str-set-for-cp}. We also have
\begin{equation} \label{eqn:theta-via-ell-classes}
\theta (f, \ol f) = (1/8) \cdot (\ell (\xi) - \ell (X)) [X] \in \ZZ.
\end{equation}
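For the reader's convenience we note how this follows: evaluating \eqref{eqn:L-of-xi-versus-L-of-G-mod-TOP} against the fundamental class and comparing with \eqref{eqn:char-class-formula-surgery-obstr-1-ctd-case-dim-4i} gives
\[
\tfrac{1}{8} \bigl( \ell (\xi) - \ell (X) \bigr) [X] = \tfrac{1}{8} \cdot 8 \cdot \bigl( \hat{f}^{\ast} \ell (\G/\TOP) \cdot \ell (X) \bigr) [X] = \theta (f, \ol f).
\]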
Coming back to the case of $\sS (X \times D^{m},X \times S^{m-1})$ for $m \geq 3$ we note that the $\pi-\pi$-theorem of \cite[Chapter 4]{Wall(1999)} and the homotopy invariance of normal invariants tell us that
\begin{equation} \label{eqn:str-set-rel-not-rel-bdry}
\sS (X \times D^{m},X \times S^{m-1}) \cong \sN (X \times D^{m},X \times S^{m-1}) \cong \sN (X).
\end{equation}
The almost $4$-periodicity for normal invariants and $L$-groups has as a consequence an almost $4$-periodicity for higher structure sets. This was established by Siebenmann \cite{Kirby-Siebenmann(1977)} and the precise statement is that for any compact manifold $X$ with boundary $\del X$ which might be empty we have an exact sequence of abelian groups
\begin{equation} \label{eqn:siebenmann-periodicity}
0 \ra \sS^{s}_{\del} (X) \xra{CW} \sS^{s}_{\del} (X \times D^{4}) \xra{t} \ZZ.
\end{equation}
The map $t$ is the zero map if $\del X \neq \emptyset$. The map $CW$ was a zig-zag of maps in the original source, but Cappell and Weinberger provided us in \cite{Cappell-Weinberger(1985)} with a geometric description, see also \cite{Crowley-Macko(2011)}.
This leaves us with a smaller number of cases to calculate, namely those of $L \times D^m$ for $m=0,\ldots,7$. We do this, but as we will see our calculations also turn out to be independent of this periodicity result, which is perhaps also an interesting point. The periodicity is also related to the $\rho$-invariant from Section~\ref{sec:the-rho-invariant} as we discuss there.
Next we discuss some properties of the higher simple structure sets which hold specifically for manifolds we are interested in.
We start with the join construction. The main idea is that if a group $G$ (in our case $G \leq S^{1}$) acts freely on spheres $S^{a}$ and $S^{b}$ then the natural extension of this action to the join $S^{a+b+1} = S^{a} \ast S^{b}$ is also free. For the corresponding operation on fake lens spaces we will use the notation $L(\alpha \ast \beta) = L (\alpha) \ast L (\beta)$. This operation gives certain maps between simple structure sets of lens spaces of different dimensions with the same fundamental group \cite{Wall(1999)}. Madsen and Rothenberg noticed that this construction can be modified to also obtain maps between higher simple structure sets. The construction appears at the end of paragraph 2 in \cite{Madsen-Rothenberg(1989)}, see also \cite{Macko(2007)}. It may be described as an iterated cone construction. Note that the join may be seen as a union of cones, and this is one idea in generalizing Wall's construction to the iterated construction of \cite{Madsen-Rothenberg(1989)}. If $L (\alpha_1)$ is the standard $1$-dimensional lens space we obtain in this way maps between higher simple structure sets which we call the {\it suspension maps} and we denote them
\begin{equation} \label{eqn:suspension-higher-str-sets}
\Sigma \co \sS^{s}_{\del} (L^{2d-1} (\alpha) \times D^{m}) \ra \sS^{s}_{\del} (L^{2d+1} (\alpha \ast \alpha_1) \times D^{m}).
\end{equation}
An analogous map exists also for the complex projective spaces (that means when $G = S^{1}$).
Another piece of structure is functoriality with respect to restricting the group actions, which allows us to map between the higher simple structure sets of fake lens spaces of the same dimension but with different fundamental groups, and also to map the higher simple structure sets of complex projective spaces to the higher simple structure sets of fake lens spaces. Given $H < G \leq S^1$, restricting the action provides us with fiber bundles
\begin{equation} \label{eqn:transfer-from-G-to-H}
p_H^G \co L(\alpha|_{H}) \lra L(\alpha),
\end{equation}
which induce the vertical ``transfer'' maps in the following diagram
\begin{equation} \label{eqn:transfer-str-sets-and-ni}
\begin{split}
\xymatrix{
\sS^s_{\del} (L(\alpha) \times D^{m}) \ar[r]^{\eta} \ar[d]_{(p_H^G)^{!}} & \sN_{\del} (L(\alpha) \times D^{m}) \ar[d]^{(p_H^G)^{!}} \\
\sS^s_{\del} (L(\alpha|_{H}) \times D^{m}) \ar[r]^{\eta} & \sN_{\del} (L(\alpha|_{H}) \times D^{m}).
}
\end{split}
\end{equation}
\section{The long surgery exact sequence for a complex projective space} \label{sec:higher-str-set-for-cp}
For our calculation we also need to deal with a version of our problem for the complex projective spaces. This case is easier due to the triviality of the fundamental group, but at the same time it will illustrate the strategy which we will employ later. We assume $d \geq 2$ and $k \geq 1$ from now on.
The connection with lens spaces is that for $H=\ZZ/N$ and $G = S^{1}$ and $\alpha_1$ the standard action of $H$ on $S^{2d-1}$ the map $p_H^G \co L^{2d-1} (\alpha_1) \lra \CC P^{d-1}$ from \eqref{eqn:transfer-from-G-to-H} gives via Diagram~\eqref{eqn:transfer-str-sets-and-ni} maps from the higher structure sets and normal invariants of $\CC P^{d-1}$ to the higher structure sets and normal invariants of $L^{2d-1} (\alpha_1)$.
The complex projective space $\CC P^{d-1}$ can be viewed as the quotient of the diagonal $S^1$-action on $S^{2d-1} = S^1 \ast \cdots
\ast S^1$ ($d$-factors). As a real manifold it has dimension $2d-2$ and $\pi_1 (\CC P^{d-1}) = 1$. Hence from (\ref{eqn:surgery-exact-sequence}) we have that for $n-1 = 2d-2+m$ the long surgery exact sequence for $\CC P^{d-1}$ splits into the short exact sequences
\begin{equation} \label{ses-cp^d-1}
0 \ra \sS_\del (\CC P^{d-1} \times D^m) \ra \sN_{\del} (\CC P^{d-1} \times D^m) \xra{\theta}
L_{n-1}(\ZZ) \ra 0,
\end{equation}
since in the simply connected case the map $\theta$ is always surjective \cite{Madsen-Milgram(1979)}. The last term in \eqref{ses-cp^d-1} is $0$ if $n-1$ is odd, so it is convenient to split the discussion into two cases, namely when $m$ is even and odd. For the normal invariants we have from \eqref{eqn:normal-invariants-general-formula} and \eqref{eqn:htpy-grps-g-mod-top} and using the Atiyah-Hirzebruch spectral sequence that
\begin{equation}
\begin{split} \label{eqn:normal-invariants-cp^d-1}
\sN_\del (\CC P^{d-1} \times D^{2k+1}) \cong & \quad 0, \\
\sN_\del (\CC P^{d-1} \times D^{2k}) \cong & \bigoplus_{i=1}^{\infty}
H^{4i} (\CC P^{d-1}_+ \wedge S^{2k} ;\ZZ) \oplus \\
& \bigoplus_{i=1}^{\infty} H^{4i-2} (\CC P^{d-1}_+ \wedge S^{2k};\ZZ/2),
\end{split}
\end{equation}
where of course all but a finite number of summands are zero. Further we can identify the factors
\begin{align}
\bs_{4i} & \co \sN_{\del}(\CC P^{d-1} \times D^{2k}) \ra H^{4i} (\CC P^{d-1}_+ \wedge S^{2k};\ZZ) \cong \ZZ \cong L_{4i} (\ZZ) \\
\bs_{4i-2} & \co \sN_{\del}(\CC P^{d-1} \times D^{2k}) \ra H^{4i-2} (\CC P^{d-1}_+ \wedge S^{2k};\ZZ_2)
\cong \ZZ/2 \cong L_{4i-2} (\ZZ)
\end{align}
as surgery obstructions of degree one normal maps obtained from an element $(f,\overline{f}) \co M \ra \CC P^{d-1} \times D^{2k}$ of $\sN_{\del}(\CC P^{d-1} \times D^{2k})$ by first making $f$ transverse to $\CC P^{j} \times D^{2k}$ and then taking the surgery obstruction of the degree one map obtained by restricting to the preimage of $\CC P^{j} \times D^{2k}$. Here $j = 2i-k$ when we want $\bs_{4i}$ and $j = 2i-k-1$ when we want $\bs_{4i-2}$. The maps $\bs_{2i}$ are called the {\it splitting invariants}. This description is obtained analogously to \cite[14C]{Wall(1999)} building on \cite[13B]{Wall(1999)} which in turn builds on \cite{Sullivan(1996)}. We will sometimes use (\ref{eqn:normal-invariants-cp^d-1}) to identify the elements of $\sN_{\del} (\CC P^{d-1} \times D^{2k})$ by $s = (s_{2i})_i$.
The surgery obstruction map $\theta$ takes the top summand of $\sN_{\del} (\CC P^{d-1} \times D^{2k})$ isomorphically onto $L_{2d-2+2k} (\ZZ)$. Hence the short exact sequence (\ref{ses-cp^d-1}) splits and we obtain a bijection of $\sS_{\del} (\CC P^{d-1} \times D^{2k})$ given by the splitting invariants $\bs_{2i}$ for $k \leq i \leq k+d-2$:
\begin{equation} \label{eqn:ss-cp^d-1}
\bigoplus_{k \leq i \leq k+d-2} \bs_{2i} \co \sS_{\del} (\CC P^{d-1} \times D^{2k}) \xra{\cong} \bigoplus_{k \leq i \leq k+d-2} L_{2i} (\ZZ).
\end{equation}
Notice that if we compare $\sS (\CC P^{d-1})$ with $\sS_{\del} (\CC P^{d-1} \times D^{2k})$ we have one more summand corresponding to $\bs_{2k}$ which in case $k=2l$ corresponds to the extra $\ZZ$-summand in \eqref{eqn:siebenmann-periodicity}.
Later we will need to identify the indexes $i$ of the splitting invariants $\bs_{4i}$ in the above isomorphisms \eqref{eqn:normal-invariants-cp^d-1} and \eqref{eqn:ss-cp^d-1}. They depend on the parity of $d$ and $k$, so to this end we introduce the following notation, where $I_{4}^{N} (d,k)$ is the indexing set of the normal invariants with indexes divisible by $4$ and $I_{4}^{S} (d,k)$ is the indexing set of the higher structure set with indexes divisible by $4$ in both cases for $k \geq 1$. Note that the dimension of the manifolds involved is $2d-2+2k$.
The set $I_{4}^{N} (d,k)$ is defined as the set of $i \in \ZZ$ such that
\
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$I_{4}^{N} (d,k)$ & $k = 2l$ & $k = 2l+1$ \\
\hline
$d = 2e$ & $l \leq i \leq e+l-1$ & $l+1 \leq i \leq e+l$ \\
\hline
$d = 2e+1$ & $l \leq i \leq e+l$ & $\quad l+1 \leq i \leq e+l$ \\
\hline
\end{tabular}
\end{center}
\
The set $I_{4}^{S} (d,k)$ is defined as the set of $i \in \ZZ$ such that
\
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$I_{4}^{S} (d,k)$ & $k = 2l$ & $k = 2l+1$ \\
\hline
$d = 2e$ & $l \leq i \leq e+l-1$ & $l+1 \leq i \leq e+l-1$ \\
\hline
$d = 2e+1$ & $l \leq i \leq e+l-1$ & $l+1 \leq i \leq e+l$ \\
\hline
\end{tabular}
\end{center}
\
Hence, for example, for the free part we have
\begin{equation} \label{eqn:free-part-ss-cp^d-1}
\bigoplus_{i \in I_{4}^{S} (d,k)} \bs_{4i} \co \textup{Free} \; \sS_{\del} (\CC P^{d-1} \times D^{2k}) \xra{\cong} \bigoplus_{i \in I_{4}^{S} (d,k)} L_{4i} (\ZZ) \cong \bigoplus_{i \in I_{4}^{S} (d,k)} \ZZ.
\end{equation}
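For illustration, here is a small consistency check of the tables (using only the index ranges discussed above): for $(d,k) = (3,2)$ we have $e = l = 1$, the manifold $\CC P^{2} \times D^{4}$ has dimension $8$, and the tables give
\[
I_{4}^{N}(3,2) = \{1,2\}, \qquad I_{4}^{S}(3,2) = \{1\}.
\]
Indeed, the splitting invariants for $\sS_{\del} (\CC P^{2} \times D^{4})$ are $\bs_{4}$ and $\bs_{6}$, of which only $\bs_{4}$ has index divisible by $4$, while the normal invariants carry in addition the top summand $\bs_{8}$ mapping onto $L_{8} (\ZZ)$.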
Similarly it is possible to identify the indexing sets $I^{N}_{2} (d,k)$ and $I^{S}_{2} (d,k)$ of the $\ZZ/2$-summands.
When studying the $\rho$-invariant later in Section~\ref{sec:the-rho-invariant} we will also need to understand the structure sets of the closed manifolds $\CC P^{d-1} \times S^{2k}$ to some extent. Note that we have a map
\begin{align}
\begin{split} \label{eqn:cp-times-disk-to-cp-times-sphere}
\glue \co \sS_{\del} (\CC P^{d-1} \times D^{2k}) & \ra \sS (\CC P^{d-1} \times S^{2k}) \\
[h \co Q \ra \CC P^{d-1} \times D^{2k}] & \mapsto [\glueh \co Q(h) = Q \cup_{\del h} \CC P^{d-1} \times D^{2k} \ra \CC P^{d-1} \times S^{2k}]
\end{split}
\end{align}
where $\glueh$ is the obvious map. An analysis analogous to the first part of this section shows that the structure set $\sS (\CC P^{d-1} \times S^{2k})$ is isomorphic to a sum of several copies of $\ZZ$ and $\ZZ/2$ via a set of splitting invariants $\bs_{4j,0}$ and $\bs_{4i,1}$ along submanifolds
\begin{equation} \label{eqn:splitting-invariants-for-cp-times-sphere}
\CC P^{2j} \times \{ \ast \} \; \textup{for} \; \bs_{4j,0} \quad \textup{and} \quad \CC P^{2i-k} \times S^{2k} \; \textup{for} \; \bs_{4i,1}.
\end{equation}
The map ``$\glue$'' sends $\bs_{4i}$ to $\bs_{4i,1}$ and hence the elements in its image have $\bs_{4j,0} = 0$.
What we will need in Section~\ref{sec:the-rho-invariant} is a relationship between the splitting invariants $\bs_{4i}$ and the $\ell$-class of $Q(h)$. This will be obtained analogously to Wall's argument in \cite[14C]{Wall(1999)} for the case of $X = \CC P^{d-1}$, but we have to make a couple of adjustments. Recall that we have
\[
H^{\ast} (\CC P^{d-1} \times S^{2k};\ZZ) \cong \ZZ [x,y]/(x^d,y^2) \quad |x| = 2, \; |y| = 2k.
\]
Setting $\bar x = (\glueh)^{\ast} (x)$ and $\bar y = (\glueh)^{\ast} (y)$ we obtain the isomorphism
\[
H^{\ast} (Q(h);\ZZ) \cong \ZZ [\bar x, \bar y]/({\bar x}^d, {\bar y}^2 ) \quad |\bar x| = 2, \; |\bar y| = 2k.
\]
Let
\[
\ell (Q(h)) = \sum_{\substack{u = 0, \ldots, d-1 \\ v = 0,1}} \alpha_{u,v} {\bar x}^{u} {\bar y}^{v}
\]
and remember that $\alpha_{u,v} = 0$ if $u+k \cdot v$ is odd. If $(\glueh,\ol \glueh) \co Q(h) \ra \CC P^{d-1} \times S^{2k}$ is the associated degree one normal map with the bundle map $\ol \glueh \co \tau_{Q(h)} \ra \xi$, this means that
\[
\ell (\xi) = \sum_{\substack{u = 0, \ldots, d-1 \\ v = 0,1}} \alpha_{u,v} {x}^{u} {y}^{v}.
\]
Analogously let
\[
\hat{\glueh}^{\ast} \ell (\G/\TOP) = \sum_{\substack{u = 0, \ldots, d-1 \\ v = 0,1}} \beta_{u,v} {x}^{u} {y}^{v}
\]
and remember that $\beta_{u,v} = 0$ if $u+k \cdot v$ is odd. We know from \eqref{eqn:L-of-xi-versus-L-of-G-mod-TOP} that
\[
\ell (\xi) = (8 \cdot \hat{\glueh}^{\ast} \ell (\G/\TOP)+1) \cdot \ell (\CC P^{d-1} \times S^{2k}).
\]
We want to show that the coefficients $\alpha_{u,v}$ are linear in $\bs_{4i}$. Hence it is enough to show that the $\beta_{u,v}$ are linear in $\bs_{4i}$. To this end we recall that the map ``$\glue$'' maps the splitting invariants $\bs_{4i}$ to $\bs_{4i,1}$ and that these are surgery obstructions of the restrictions of $\glueh$ to the preimage $W_{2i}$ of $\CC P^{2i-k} \times S^{2k}$. Now using formula \eqref{eqn:char-class-formula-surgery-obstr-1-ctd-case-dim-4i} we get
\begin{equation} \label{eqn:splitting-invariants-vs-pullback-of-l-G-TOP-1}
8 \cdot \bs_{4i} (h) = 8 \cdot \bs_{4i,1} (\glueh) = (\ell (\CC P^{2i-k} \times S^{2k}) \cdot \hat{\glueh}^{\ast} \ell (\G/\TOP)) [\CC P^{2i-k} \times S^{2k}].
\end{equation}
The splitting invariants $\bs_{4i,0}$ are surgery obstructions of the restrictions of $\glueh$ to the preimage $W_{2i}$ of $\CC P^{2i} \times \{ \ast \}$ and using formula \eqref{eqn:char-class-formula-surgery-obstr-1-ctd-case-dim-4i} we get
\begin{equation} \label{eqn:splitting-invariants-vs-pullback-of-l-G-TOP-0}
8 \cdot \bs_{4i,0} (\glueh) = (\ell (\CC P^{2i} \times \{ \ast \}) \cdot \hat{\glueh}^{\ast} \ell (\G/\TOP)) [\CC P^{2i} \times \{ \ast \}].
\end{equation}
These are similar to the equations on the top half of page 203 in \cite[14C]{Wall(1999)} and we proceed analogously, making the necessary adjustments. Recall that $\ell (S^{2k}) = 1$ and hence $\ell (\CC P^{2i-k} \times S^{2k}) = \ell (\CC P^{2i-k})$. Denote
\begin{equation} \label{eqn:l-of-cp-j-k}
\ell (\CC P^{j} \times S^{2k}) = \sum_{w=0}^{\lfloor j/2 \rfloor} \gamma_{j,w} x^{2w}
\end{equation}
for appropriate $\gamma_{j,w}$, where we know $\gamma_{j,0} = 1$.
Since $[\CC P^{2i-k} \times S^{2k}]$ is cohomologically dual to $x^{2i-k}y$, the equation \eqref{eqn:splitting-invariants-vs-pullback-of-l-G-TOP-1} gives
\begin{equation}
8 \cdot \bs_{4i,1} (\glueh) = \sum_{u} \gamma_{2i-k,i-(u+k)/2} \beta_{u,1}
\end{equation}
by extracting the coefficient of $x^{2i-k}y$ in the product of the cohomology classes. Varying $i$ this gives a system of linear equations; since we know $\gamma_{j,0} = 1$, it has a unique solution and we obtain that the $\beta_{u,1}$ are linear in $\bs_{4i,1} (\glueh) = \bs_{4i} (h)$.
Similarly, since $[\CC P^{2i} \times \{ \ast \}]$ is cohomologically dual to $x^{2i}$, the equation \eqref{eqn:splitting-invariants-vs-pullback-of-l-G-TOP-0} gives
\begin{equation}
8 \cdot \bs_{4i,0} (\glueh) = \sum_{u} \gamma_{2i,i-u/2} \beta_{u,0}
\end{equation}
by extracting the coefficient of $x^{2i}$ in the product of the cohomology classes. Varying $i$ this again gives a regular system of linear equations, since we know $\gamma_{j,0} = 1$. Because $8 \cdot \bs_{4i,0} (\glueh)=0$ (recall that elements in the image of the map ``$\glue$'' have $\bs_{4j,0} = 0$), we obtain that $\beta_{u,0}=0$ for all $u$.
Putting both cases together we have the following proposition.
\begin{prop} \label{prop:alpha-and-beta-depend-linearly-on-s-4i}
With the above notation the coefficients $\beta_{u,v}$ and hence also the coefficients $\alpha_{u,v}$ depend linearly on $\bs_{4i}$.
\end{prop}
\section{The long surgery exact sequence for a lens space} \label{sec:lens-times-disk}
Now we turn to the higher structure sets of fake lens spaces. We note that any fake lens space $L^{2d-1} (\alpha)$ is homotopy equivalent to a lens space $L^{2d-1} (\alpha_{(u_1,1,\cdots,1)})$, where $\alpha_{(u_1,\cdots,u_d)}$ denotes the linear action of $\ZZ/N$ on $S^{2d-1}$ in which the chosen generator acts by multiplication by $e^{2 \pi i u_j / N}$ on the $j$-th complex coordinate of $S^{2d-1} = S (\CC^{d})$, see \cite[14E]{Wall(1999)}. Since a homotopy equivalence induces an isomorphism on higher structure sets $\sS^{s}_{\del} (X \times D^{m})$, see \cite{Ranicki(1992)}, \cite{Ranicki(2009)}, it is enough to calculate the higher simple structure sets of $L^{2d-1} (\alpha_{(u_1,1,\cdots,1)})$ for $u_1 = 1, \ldots, N-1$. For simplicity we will work with the case $u_1 = 1$ here. The other cases yield the same results; only the algebra gets a little more complicated and is solved in the same way as in \cite{Macko-Wegner(2009)}. Therefore from now on we abbreviate $L^{2d-1} = L^{2d-1} (\alpha_{(1,\ldots,1)})$, or simply $L$ if the dimension is clear. Moreover, we assume $N = 2^K$.
We start by summarizing what we already know about the terms in~\eqref{eqn:surgery-exact-sequence} for $X = L^{2d-1}$. The $L$-theory we need is described in the following proposition from \cite{Hambleton-Taylor(2000)}. The symbol $R_{\CC} (G)$ denotes the complex representation ring of a group $G$ and the superscripts $\pm$ denote the $\pm$-eigenspaces with respect to the involution given by complex conjugation. The symbol $\Gsign$ means the $G$-signature and Arf is the Arf invariant.
\begin{thm} \cite{Hambleton-Taylor(2000)} \label{L(G)}
For $G = \ZZ/N$ with $2|N$ we have that
\begin{align*}
L^s_n (\ZZ G) & \cong
\begin{cases}
4 \cdot R_{\CC}^+ (G) & n \equiv 0 \; (\mod 4) \; (\Gsign, \; \mathrm{real}) \\
0 & n \equiv 1 \; (\mod 4) \\
4 \cdot R_{\CC}^- (G) \oplus \ZZ/2 & n \equiv 2 \; (\mod 4) \;
(\Gsign, \; \mathrm{purely}
\; \mathrm{imaginary}, \mathrm{Arf}) \\
\ZZ/2 & n \equiv 3 \; (\mod 4) \; (\mathrm{codimension} \; 1 \;
\mathrm{Arf})
\end{cases} \\
\widetilde L^s_{2k} (\ZZ G) & \cong 4 \cdot \RhG^{(-1)^k} \;
\textit{where} \; \RhG^{(-1)^k} \; \textit{is} \; R_{\CC}^{(-1)^k}
(G) \; \textit{modulo the regular representation.}
\end{align*}
\end{thm}
For the normal invariants~$\sN_{\del} (Y) \cong [Y/\del Y;\G/\TOP]$, using localization at $2$ and away from $2$, we have in general the following homotopy pullback square \cite{Madsen-Milgram(1979)}
\begin{equation} \label{eqn:htpy-type-of-g-top}
\begin{split}
\xymatrix{
\G/\TOP \ar[r] \ar[d] & \prod_{i > 0} K(\ZZ_{(2)},4i) \times K(\ZZ/2,4i-2) \ar[d] \\
\BO[1/2] \ar[r] & \BO_{\QQ} \simeq \prod_{i > 0} K(\QQ,4i)
}
\end{split}
\end{equation}
which induces a Mayer--Vietoris sequence for the homotopy sets of mapping spaces. For the products $Y = L \times D^{m}$ we notice that
\[
Y/\del Y = (L \times D^{m}) / (L \times S^{m-1}) \simeq L_{+} \wedge S^{m} \simeq (L \wedge S^{m}) \vee S^{m}.
\]
Hence we can use the known calculations for both wedge summands to obtain
\begin{equation} \label{ni-lens-space-times-disk}
\sN_{\del} (L \times D^{m}) \cong \bigoplus_{i=1}^{\infty} H^{4i} (L_{+} \wedge S^{m};\ZZ) \oplus
\bigoplus_{i=1}^{\infty} H^{4i-2} (L_{+} \wedge S^{m};\ZZ/2).
\end{equation}
At this point it is useful to split the discussion into the case when $m$ is odd and when $m$ is even.
\nin \textbf{Case $m=2k+1$.} We have
\begin{equation} \label{ni-lens-space-times-odd-disk}
\sN_{\del} (L \times D^{2k+1}) \cong \bigoplus_{i=1}^{\infty} H^{4i} (L_{+} \wedge S^{2k+1};\ZZ) \oplus
\bigoplus_{i=1}^{\infty} H^{4i-2} (L_{+} \wedge S^{2k+1};\ZZ/2).
\end{equation}
The first summand is zero except in two instances, where it is $\ZZ$: when $(d,k)=(2e,2l)$, so that $2d-1+2k+1 = 4(e+l)$, and when $(d,k)=(2e+1,2l+1)$, so that $2d-1+2k+1=4(e+l+1)$. The remaining summands are $\ZZ/2$ until we reach the dimension $2d-1+2k+1$. We denote these summands by $\bt_{4i-2}$ and the indexing set for the indices $i$ by $J_{2}^{N} (d,k,\textup{odd})$.
\nin \textbf{Case $m=2k$.} This case is basically a shifted copy of the case $k=0$ plus a summand coming from the sphere $S^{2k}$. As in the complex projective case, we introduce the notation $\bt_{2i}$ for the invariants given by the respective summands, although in the present case we do not have a simple identification as splitting invariants. Nevertheless we will see that these invariants are closely related to the $\bs_{2i}$. Note that we have
\begin{equation} \label{eqn:cohlgy-L-plus-smash-S}
H^{4i} (L_+ \wedge S^{2k};\ZZ) \cong \begin{cases} \ZZ & k = 2l, i = l \\ \ZZ/{2^K} & 2k < 4i < 2(d+k)-1 \end{cases}
\end{equation}
and we denote the summands
\begin{align}
\bt_{4i} & \co \sN_{\del} (L \times D^{2k}) \ra H^{4i} (L_+ \wedge S^{2k};\ZZ) \cong \ZZ/{2^K} \; \textup{or} \; \ZZ \\
\bt_{4i-2} & \co \sN_{\del} (L \times D^{2k}) \ra H^{4i-2} (L_+ \wedge S^{2k};\ZZ/2) \cong \ZZ/2.
\end{align}
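For completeness we sketch how the identification \eqref{eqn:cohlgy-L-plus-smash-S} arises: using the splitting $L_{+} \wedge S^{2k} \simeq (L \wedge S^{2k}) \vee S^{2k}$ from above and the suspension isomorphism we have
\[
H^{4i} (L_{+} \wedge S^{2k};\ZZ) \cong \widetilde H^{4i-2k} (L;\ZZ) \oplus \widetilde H^{4i} (S^{2k};\ZZ),
\]
where $\widetilde H^{2j} (L^{2d-1};\ZZ) \cong \ZZ/{2^K}$ for $0 < 2j < 2d-1$, which yields the $\ZZ/{2^K}$-summands in the range $2k < 4i < 2(d+k)-1$, while the second summand contributes the copy of $\ZZ$ precisely when $4i = 2k$, that is, when $k = 2l$ and $i = l$.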
As in the case of $\CC P^{d-1}$, it is convenient to introduce the indexing set $J^{N}_{4} (d,k)$ of those $i$ for which the invariants $\bt_{4i}$ are non-zero.
\
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$J_{4}^{N} (d,k)$ & $k = 2l$ & $k = 2l+1$ \\
\hline
$d = 2e$ & $l \leq i \leq e+l-1$ & $l+1 \leq i \leq e+l$ \\
\hline
$d = 2e+1$ & $l \leq i \leq e+l$ & $l+1 \leq i \leq e+l$ \\
\hline
\end{tabular}
\end{center}
\
Similarly one could define $J^{N}_{2} (d,k)$. Though elementary, it is also helpful to tabulate the dimension $n = 2d-1+2k$ of $L^{2d-1} \times D^{2k}$ in terms of the parity of $d$ and $k$.
\
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$n$ & $k = 2l$ & $k = 2l+1$ \\
\hline
$d = 2e$ & $4(e+l)-1$ & $4(e+l)+1$ \\
\hline
$d = 2e+1$ & $4(e+l)+1$ & $4(e+l)+3$ \\
\hline
\end{tabular}
\end{center}
\
Coming back to the general case of any $m$ we obtain even more information from the following proposition.
\begin{prop} \label{prop:theta-for-L-x-D}
\begin{enumerate}
\item If $n=2d-1+2k=4u-1$ then the map
\[
\theta \co \sN_{\del}(L^{2d-1} \times D^{2k}) \ra L^s_{4u-1}(\ZZ G) =
\ZZ/2
\]
is given by $\theta (x) = \bt_{4u-2} (x) \in \ZZ/2$.
\item
The map
\[
\theta \co \sN_\partial(L^{2d-1} \times D^{2k+1}) \ra L^s_{2d+2k}(\ZZ G)
\]
maps onto the summand $L_{2d+2k}(\ZZ)$.
\end{enumerate}
\end{prop}
\begin{proof}
For part (1) in the case $N=2$ we refer the reader to \cite[Section 4]{Madsen-Rothenberg(1989)}. The case $N=2^K$ is obtained by restricting the action from $G$ to $\ZZ/2$, which yields the transfer maps \eqref{eqn:transfer-str-sets-and-ni}; these are surjective on normal invariants. Since the map $\theta$ is the surgery obstruction map in both cases, if it were trivial for $N = 2^K$, meaning that any degree one normal map would be normally cobordant to a simple homotopy equivalence, then the transfer of such a normal cobordism would be a normal cobordism for the transferred problem. Hence, by the surjectivity of the transfer map, the map $\theta$ would also be trivial for $N = 2$, which is a contradiction.
Part (2) is the already mentioned general statement in topological surgery, due to the existence of the Milnor manifolds and Kervaire manifolds \cite{Madsen-Milgram(1979)}.
\end{proof}
In view of Proposition \ref{prop:theta-for-L-x-D} we denote
\begin{equation}
\widetilde{\sN}_{\del} (L \times D^{m}) := \ker \big( \theta \co \sN_{\del}(L^{2d-1} \times D^{m}) \ra L^s_{2d-1+m}(\ZZ G) \big)
\end{equation}
and the corresponding indexing sets as $J_{4}^{tN} (d,k)$, $J_{2}^{tN} (d,k)$ and $J_{2}^{tN} (d,k,\textup{odd})$. Notice that we have $J_{4}^{tN} (d,k) = J_{4}^{N} (d,k)$.
We can now summarize what we know. Our information is enough to solve the case $m=2k+1$; the other case will take more time.
\nin \textbf{Case $m=2k+1$.}
We have the isomorphism
\begin{equation} \label{eqn:end-result-odd-disk}
\sS^{s}_{\del} (L \times D^{2k+1}) \cong \widetilde{\sN}_{\del} (L \times D^{2k+1}) \cong \bigoplus_{J_{2}^{tN} (d,k,\textup{odd})} \ZZ/2.
\end{equation}
\nin \textbf{Case $m = 2k$.}
\nin We obtain the short exact sequence
\begin{equation} \label{ses-lens-2d-1}
0 \ra \wL^s_{2d+2k} (\ZZ G) \xra{\partial} \sS^{s}_{\del} (L^{2d-1} \times D^{2k})
\xra{\eta} \widetilde{\sN}_{\del}(L^{2d-1} \times D^{2k}) \ra 0,
\end{equation}
where
\begin{align}
\begin{split} \label{eqn:tilde-N-L-x-D}
n = 4u-1 \; : \; \widetilde{\sN}_{\del} (L^{2d-1} \times D^{2k}) & = \mathrm{ker} \; \big (
\bt_{4u-2} \co {\sN}_{\del} (L^{2d-1} \times D^{2k}) \ra \ZZ_2 \big ), \\
n = 4u+1 \; : \; \widetilde{\sN}_{\del} (L^{2d-1} \times D^{2k}) & = \sN_{\del} (L^{2d-1} \times D^{2k}).
\end{split}
\end{align}
It will be convenient to use the decomposition
\begin{equation} \label{red-ni-lens-spaces}
\widetilde \sN_{\del} (L^{2d-1} \times D^{2k}) \cong T_{F} (d,k) \oplus T_{N} (d,k) \oplus T_{2} (d,k),
\end{equation}
where
\[
T_F (d,k) \cong \begin{cases} \ZZ (t_{4l}) & k = 2l \\ 0 & k = 2l+1 \end{cases}
\]
and
\[
T_N (d,k) = \bigoplus_{i \in rJ_{4}^{tN} (d,k)} \ZZ/N (t_{4i}), \quad T_2 (d,k) = \bigoplus_{i\in J_{2}^{tN} (d,k)} \ZZ/2 (t_{4i-2}).
\]
Here $rJ_{4}^{tN} (d,k) = J_{4}^{tN} (d,k) \smallsetminus \{ l \}$ in case $k=2l$ and $rJ_{4}^{tN} (d,k) = J_{4}^{tN} (d,k)$ in case $k = 2l+1$. Then the cardinality of $rJ_{4}^{tN} (d,k)$ is equal to $c_N (d,k)$ from the statement of Theorem~\ref{thm:main-thm}. Similarly one may define $rJ_{2}^{tN} (d,k) = J_{2}^{tN} (d,k)$ in case $k=2l$ and $rJ_{2}^{tN} (d,k) = J_{2}^{tN} (d,k) \smallsetminus \{ l+1 \}$ in case $k = 2l+1$. Then the cardinality of $rJ_{2}^{tN} (d,k)$ is equal to $c_2 (d,k)$ from the statement of Theorem~\ref{thm:main-thm}.
The first term in the sequence
(\ref{ses-lens-2d-1}) is understood by Theorem \ref{L(G)}, the third
term is understood by (\ref{red-ni-lens-spaces}). Hence we are left
with an extension problem, which we solve in Section \ref{sec:the-rho-invariant}.
However, before that we mention some useful properties of the above calculations with respect to increasing $d$ via the suspension map \eqref{eqn:suspension-higher-str-sets} and with respect to changing the group $G$ via the transfer maps \eqref{eqn:transfer-str-sets-and-ni}.
First note that the inclusion $L^{2d-1} \times D^{2k} \subset L^{2d+1} \times D^{2k}$ induces by transversality a restriction map on the normal invariants denoted by
\begin{equation} \label{eqn:restriction-ni-dimension}
\res \co \sN_{\del} (L^{2d+1} \times D^{2k}) \lra \sN_{\del} (L^{2d-1} \times D^{2k}).
\end{equation}
This map is related to the suspension homomorphism $\Sigma$ from \eqref{eqn:suspension-higher-str-sets} by a commutative diagram analogous to the diagram from \cite[Lemma 14A.3]{Wall(1999)}. In our situation we need to compose the horizontal maps $\eta$ with the projections onto the reduced normal invariants and to consider the map on $\widetilde{\sN}$ induced by \eqref{eqn:restriction-ni-dimension} (denoted by the same symbol), so that we have the commutative diagram
\begin{equation} \label{susp-diagram}
\begin{split}
\xymatrix{
\sS^s_{\del} (L^{2d-1} \times D^{2k}) \ar[r]^{\eta} \ar[d]_{\Sigma} & \widetilde{\sN}_{\del} (L^{2d-1} \times D^{2k}) \\
\sS^s_{\del} (L^{2d+1} \times D^{2k}) \ar[r]^{\eta} & \widetilde{\sN}_{\del} (L^{2d+1} \times D^{2k}).
\ar[u]_{\res} }
\end{split}
\end{equation}
Clearly we have $t_{2i} = \res (t_{2i})$ and so the vertical map $\res$ from the diagram is an isomorphism when $n+2=2d+1+2k=4u-1$ and it is onto with kernel $\ZZ/N \oplus \ZZ/2$ when $n+2=2d+1+2k=4u+1$. A similar diagram exists also for the situation $\CC P^d = \CC P^{d-1} \ast \mathrm{pt}$ and the corresponding map $\res$ is always surjective in that case.
It is also useful to investigate Diagram \eqref{eqn:transfer-str-sets-and-ni} in the light of the calculations of the normal invariants. Of course, we study the case when the groups involved are $G < S^{1}$, so that we have the $S^{1}$-bundle $L^{2d-1} \ra \CC P^{d-1}$. In view of the isomorphisms \eqref{eqn:normal-invariants-cp^d-1} and \eqref{ni-lens-space-times-disk} the transfer map on normal invariants is just the induced map in cohomology, and hence on summands it is either an isomorphism or the reduction modulo $N$ from $\ZZ$ to $\ZZ/N$. The indexing sets match. It will also be helpful to notice that the composition
\begin{equation} \label{eqn:fibering-ni-lens-x-disk-by-cp-x-disk}
\sS_{\del} (\CC P^{d-1} \times D^{2k}) \xra{\eta} \sN_{\del} (\CC P^{d-1} \times D^{2k}) \xra{\textup{proj} \circ (p_{G}^{S^{1}})^{!}} \widetilde{\sN}_{\del} (L^{2d-1} \times D^{2k})
\end{equation}
is surjective for $n-1=2d-2+2k=4u+2$. This can be phrased as saying that any representative of any element in $\sS^{s}_{\del} (L^{2d-1} \times D^{2k})$ is normally cobordant to a representative of possibly another element of the same group which fibers over a fake $\CC P^{d-1} \times D^{2k}$. In case $n-1=2d-2+2k=4u$ this map is close to being surjective. Its image is the subgroup consisting of all but the top $\ZZ/N$ summand (the one which is the image of $\bs_{4u}$ with the index $u \in rJ_{4}^{N} (d,k)$). However, the corresponding summand in $\widetilde{\sN}_{\del} (L^{2d+1} \times D^{2k})$, which maps to this one under the map $\res$ from Diagram \eqref{susp-diagram}, is in the image of the map \eqref{eqn:fibering-ni-lens-x-disk-by-cp-x-disk} with $d$ replaced by $d+1$. This can be phrased by saying that in case $n-1=2d-2+2k=4u$ the suspension of any element of $\sS^{s}_{\del} (L^{2d-1} \times D^{2k})$ is normally cobordant to a representative of an element of $\sS^{s}_{\del} (L^{2d+1} \times D^{2k})$ which fibers over a fake $\CC P^{d} \times D^{2k}$. Compare to \cite[Lemma 14E.9]{Wall(1999)}.
\section{The $\rho$-invariant} \label{sec:the-rho-invariant}
\subsection{Definition of the $\rho$-invariant} \label{subsec:def-of-rho-invariant}
Just as in \cite{Macko-Wegner(2009)} we will solve the extension problem by employing the~$\rho$-invariant. We first recall the definition of the $\rho$-invariant for closed manifolds as used in \cite[subsection 4.1]{Macko-Wegner(2009)} and its generalization from \cite{Madsen-Rothenberg(1989)}.
Let $\RhG := R(G) / \langle \textup{reg} \rangle$ where quotienting by $\langle
\textup{reg} \rangle$ means dividing by the regular representation and $\QQ \RhG = \QQ \otimes \RhG$.
\begin{defn}{\cite[Remark after Corollary 7.5]{Atiyah-Singer-III(1968)}} \label{defn-rho-1}
Let $X^{2u-1}$ be a closed oriented manifold with a reference map $\lambda (X) \co X \ra BG$. Define
\begin{equation}
\rho (X,\lambda(X)) = \frac{1}{r} \cdot \Gsign(\widetilde Y) \in \QQ \RhG^{(-1)^u}
\end{equation}
for some $r \in \NN$ and some $(Y,\partial Y)$ with $\partial Y = r \cdot X$ for which there exists $\lambda_Y \co Y \ra BG$ restricting to $\lambda (X)$ on each boundary component.
\end{defn}
As explained in \cite{Atiyah-Singer-III(1968)} this is well defined. It is also explained there that there is another definition, which works for actions of compact Lie groups, in particular for $S^1$-actions, on certain odd-dimensional manifolds. Whenever the two definitions apply, they coincide. For $G < S^1$ we identify $R(G)$ with $\ZZ \Gh$ and we adopt the notation $\RhG = \ZZ [\chi] / \langle 1 +
\chi + \cdots + \chi^{N-1} \rangle$ following \cite[section 4.1]{Macko-Wegner(2009)}. As also explained in \cite[Definition 4.2]{Macko-Wegner(2009)}, or \cite[Definition 2.5]{Crowley-Macko(2011)}, the~$\rho$-invariant defines a function on simple structure sets:
\begin{defn} \label{defn:reduced-rho}
Let $X$ be a closed oriented manifold of dimension $n = (2u-1)$ with a reference map $\lambda (X) \co X \ra BG$. Define the function
\[
\wrho \co \sS^{s} (X) \ra \QQ \RhG^{(-1)^u} \quad \textup{by} \quad
\wrho ([h]) = \rho (M,\lambda (X) \circ h) - \rho (X,\lambda (X)),
\]
where the orientation on $M$ is chosen so that the homotopy
equivalence $h \co M \ra X$ is a map of degree $1$.
\end{defn}
The definition in the relative setting comes from \cite[Section 3]{Madsen-Rothenberg(1989)}. We need a little preparation.
Consider a closed oriented $a$-dimensional manifold $X$ and $m \geq 1$ such that $n = a+m = 2u-1$, and an element $[h \co M \ra X \times D^m]$ in $\sS^{s}_\partial (X \times D^m)$. Let $M(h)$ be the closed manifold given by
\begin{equation} \label{defn:M(h)}
M (h) := M \cup_{\partial h} (X \times D^m).
\end{equation}
If $h$ is the identity we obtain $M (\id) \cong X \times S^m$; in general the map $h$ induces a homotopy equivalence $M(h) \simeq X \times S^m$. We equip $M$ with an orientation so that $h$ is a map of degree $1$. The orientation on the closed manifold $M(h)$ can then be chosen so that it agrees with the given orientation on $M$ and it reverses the orientation on $X \times D^m$. If $X$ possesses a reference map $\lambda (X) \co X \ra BG$ then we obtain a reference map $\lambda (M(h)) \co M(h) \simeq X \times S^m \ra X \ra BG$. As explained in~\cite[Definition 2.5]{Crowley-Macko(2011)} (also a minor modification of \cite[(3.7)]{Madsen-Rothenberg(1989)}) we can make the following definition:
\begin{defn} \label{defn:reduced-rho-del}
Let $X$ be a closed oriented $a$-dimensional manifold with a reference map $\lambda (X) \co X \ra BG$ and let $m \geq 1$ be such that $n=a+m=2u-1$. Define the function
\[
\wrho_\partial \co \sS^{s}_\partial (X \times D^m) \ra \QQ \RhG^{(-1)^u}
\quad \textup{by} \quad \wrho_\partial ([h]) := \rho (M(h),\lambda(M(h))).
\]
\end{defn}
Again this is well defined; notice that $\wrho_\partial ([\id]) = 0$.
\subsection{Properties of the $\rho$-invariant} \label{subsec:prop-of-rho-inv}
First a basic example. For the standard $1$-dimensional lens space $L=L^1(\alpha_1)$ we have \cite[Proof of Theorem 14C.4]{Wall(1999)}
\begin{equation} \label{rho-alpha-k}
\rho (L^1(\alpha_1)) = f = \frac{1+\chi}{1-\chi} \in \QQ \RhGm.
\end{equation}
As explained in \cite[Chapter 14E, page 215]{Wall(1999)} we have
\begin{equation}
(1-\chi)^{-1} = - (1/N) \cdot (1 + 2 \chi + 3 \chi^2 + \cdots + N \chi^{N-1})
\end{equation}
which together with \eqref{rho-alpha-k} gives an expression used in calculations in \cite{Macko-Wegner(2009)}.
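For the reader's convenience, the formula for $(1-\chi)^{-1}$ above can be verified by the direct computation
\[
(1-\chi) \cdot \sum_{j=0}^{N-1} (j+1) \chi^{j}
= \sum_{j=0}^{N-1} (j+1) \chi^{j} - \sum_{j=1}^{N} j \chi^{j}
= \sum_{j=0}^{N-1} \chi^{j} - N \chi^{N} = -N
\]
in $\RhG$, since $\chi^{N} = 1$ and the class of the regular representation $1 + \chi + \cdots + \chi^{N-1}$ vanishes there.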
Next we note that the $\rho$-invariant in Definitions \ref{defn:reduced-rho} and \ref{defn:reduced-rho-del} is a homomorphism. This was shown in general in \cite{Crowley-Macko(2011)}. Also the $\rho$-invariant commutes with Siebenmann $4$-periodicity, alias the Cappell--Weinberger map, from \eqref{eqn:siebenmann-periodicity}, see also \cite{Crowley-Macko(2011)}. As mentioned in the introduction this reduces the number of cases we need to study, but our treatment turns out to be independent of this observation.
Recall that for the join $L (\alpha \ast \beta)$ of fake lens spaces $L (\alpha)$ and $L (\beta)$ we have \cite[chapter 14A]{Wall(1999)}
\begin{equation} \label{rho-join}
\rho (L(\alpha \ast \beta)) = \rho (L(\alpha)) \cdot \rho (L(\beta)).
\end{equation}
Recall that when $\beta$ is the standard $1$-dimensional representation this operation produces a map between the structure sets of lens spaces of different dimensions. We are interested in its generalization given by the suspension map $\Sigma$ of \eqref{eqn:suspension-higher-str-sets} and in the behavior of $\wrho_{\del}$ with respect to this map. Madsen and Rothenberg show in \cite[Lemma 3.9]{Madsen-Rothenberg(1989)} that the $\wrho_{\del}$-invariant is indeed multiplicative with respect to this $\Sigma$. They use a slightly different definition of the $\wrho_{\del}$-invariant, which differs from ours by a factor corresponding to the element $f$ from \eqref{rho-alpha-k}. Hence the commutativity of the diagram from~\cite[Lemma 3.9]{Madsen-Rothenberg(1989)} translates into the formula
\begin{equation} \label{eqn:rho-mult-wrt-join-higher-str-sets}
\wrho_{\del} (\Sigma ([h])) = f \cdot \wrho_{\del} ([h]) \in \QQ \RhG^{(-1)^{u+1}}
\end{equation}
for $h \co M \ra L^{2d-1} \times D^{2k}$ representing an element in $\sS^{s}_{\del} (L^{2d-1} \times D^{2k})$.
The $\rho$-invariant also behaves naturally with respect to the passage to a subgroup. How this works in our notation is explained in detail in \cite[Remark 4.4]{Macko-Wegner(2011)}.
\subsection{The main diagram}
\label{sec:com-ladder}
We now have all the ingredients we need to analyze the surgery exact sequence for $X = L^{2d-1} \times D^{2k}$. Denoting $n = 2d-1+2k$, these can be summarized in the following commutative ladder:
\[
\xymatrix{
0 \ar[r] & \wL_{n+1} (\ZZ G) \ar[r] \ar[d]_{G-\textup{sign}} & \sS^{s}_{\del} (L \times D^{2k}) \ar[d]^{\wrho_{\del}} \ar[r] & \wsN_{\del} (L \times D^{2k}) \ar[d]^{[\wrho_{\del}]} \ar[r] & 0 \\
0 \ar[r] & 4 \cdot \RhG^\pm \ar[r] & \QQ \RhG^{\pm} \ar[r] & \QQ \RhG^{\pm}/4 \cdot \RhG^{\pm} \ar[r] & 0
}
\]
By the following theorem it remains to understand the kernel of $[\wrho_\del]$.
\begin{thm} \label{thm:how-to-split-alpha-k}
Let $\bar T := \ker \big( [\wrho_{\del}]: \wsN_{\del} (L \times D^{2k}) \lra \QQ\RhG^{(-1)^{d+k}}/4 \cdot \RhG^{(-1)^{d+k}}
\big)$. Then we have
\[
\sS^{s}_{\del} (L \times D^{2k}) \cong F^{(-1)^{d+k}} \oplus \bar T
\]
where $F^{(-1)^{d+k}} := \wrho_{\del} (\sS^{s}_{\del} (L \times D^{2k}))$ is a free abelian group of rank $N/2-1$ if $d+k$ is odd and of rank $N/2$ if $d+k$ is even.
\end{thm}
\begin{proof}
The proof is the same as the proof of Theorem 5.1 in~\cite{Macko-Wegner(2009)}. The ranks of $F^{\pm}$ are determined in \cite{Macko-Wegner(2009)} at the end of Section 4.1.
\end{proof}
\subsection{The generalized formula of Wall}
\label{subsec:generalized-formula-of-Wall}
In addition to the information we obtained in the previous subsection we need formulas to calculate the homomorphism $[\wrho_{\del}]$ in some cases. The formulas and the proofs are generalizations of the similar formulas in \cite[section 4.2]{Macko-Wegner(2009)}. The starting point is the following generalization of the formula of Wall from {\cite[Theorem 14C.4]{Wall(1999)}}.
\begin{thm} \label{thm:rho-formula-cp-times-disk}
Let $h \co Q \ra \CC P^{d-1} \times D^{2k}$ represent an element in the higher structure set $\sS_{\del} (\CC P^{d-1} \times D^{2k})$. Then for $t \in S^1$ we have
\[
\wrho_{S^1,\del} (t,[h]) = \sum_{i \in I_{4}^{S} (d,k)} 8 \cdot \bs_{4i} (\eta ([h])) \cdot (f^{d+k-2i} - f^{d+k-2i-2}) \in \CC
\]
where $f = (1+t)/(1-t)$ and the indexing set $I_{4}^{S} (d,k)$ is defined in Section~\ref{sec:higher-str-set-for-cp}.
\end{thm}
\begin{proof}
The proof follows the same general pattern as the proof of Theorem 14C.4 in~\cite{Wall(1999)}. We have to make sure that all steps either carry over from the case of $\CC P^{d-1}$ to our case of $\CC P^{d-1} \times D^{2k}$ or can be modified accordingly. In particular, we have to take care of the extra summand coming from the splitting invariant $\bs_{2k}$ along $\{ \textup{pt} \} \times D^{2k}$.
Firstly, starting with the homotopy equivalence $h \co Q \ra \CC P^{d-1} \times D^{2k}$ which is a homeomorphism on the boundary, we use the homeomorphism $\del h$ to form the closed manifold $Q(h) = Q \cup_{\del h} (\CC P^{d-1} \times D^{2k})$ and it is the $\rho$-invariant of $Q(h)$ that we are studying. The map $h$ also induces a homotopy equivalence $\glueh \co Q (h) \ra \CC P^{d-1} \times S^{2k}$, which induces an isomorphism on cohomology. The passage from $h$ to $\glueh$ was already explained in the second half of Section~\ref{sec:higher-str-set-for-cp} together with the behavior of the respective invariants and now we will use that knowledge.
The manifold $Q (h)$ can be viewed as obtained from a free action of $S^{1}$ on the product $S^{2d-1} \times S^{2k}$ and hence the Atiyah-Singer-Bott formula (7.9) from \cite{Atiyah-Singer-III(1968)} can be used. It expresses the $\rho$-invariant in terms of the $\sL$-class of $Q (h)$ as
\[
\rho (Q(h),t) = \varepsilon - 2^{d+k-1} \Bigg( \frac{te^{\bar x}+1}{te^{\bar x}-1}\Bigg) \sL (Q(h)) [Q(h)],
\]
where $\varepsilon$ is zero in our case, $t \in S^{1} \subset \CC$ and $\bar x \in H^{2} (Q(h);\ZZ)$ is the first Chern class of the associated complex line bundle, and hence we can assume it is the generator $\bar x$ which we chose earlier arbitrarily. The class $\sL (X)$ is the characteristic class given by (6.5) in \cite{Atiyah-Singer-III(1968)} and is related to $\ell (X)$ for a $(2u-2)$-dimensional manifold $X$ via the equation $\ell (X) [X] = 2^{u-1} \cdot \sL (X)[X]$.
In Proposition \ref{prop:alpha-and-beta-depend-linearly-on-s-4i} at the end of Section \ref{sec:higher-str-set-for-cp} we showed that the coefficients of $\ell (Q(h))$ are linear in the splitting invariants $\bs_{4i}$. Hence, just as in the proof of Theorem 14C.4 in \cite{Wall(1999)}, we obtain that the $\rho$-invariant can be expressed as a linear combination of the splitting invariants. But here we have to take as the indexing set $I_{4}^{S}(d,k)$, which has one more element than $I_{4}^{S}(d,0)$, and this has to be taken into account. So we have
\[
\rho (h) = a_{d+k} + b_{d+k}^j \cdot \bs_{4j} + b_{d+k}^{j+1} \cdot \bs_{4(j+1)} + \cdots + b_{d+k}^{r} \cdot \bs_{4r}, \quad \textup{where} \; I_{4}^{S}(d,k) = \{ j, \ldots, r \}.
\]
We proceed by induction on $d$. At the start of the induction, when $d=1$, one had to deal in \cite[Theorem 14C.4]{Wall(1999)} with the point being the quotient of the free action of $S^1$ on itself. Here we instead have the product of the point with the sphere $S^{2k}$, seen as the quotient of the free action of $S^1$ on $S^1 \times S^{2k}$ which is trivial on the second coordinate. Since this is the boundary of $S^1 \times D^{2k+1}$ we have that
\[
\rho (h) = 0
\]
and hence $a_{d+k} = 0$.
In the induction step we use that the $\rho$-invariant is multiplicative with respect to the modified join. From \eqref{eqn:rho-mult-wrt-join-higher-str-sets} it follows that
\[
b_{d+k}^{s} = f^{d+k-2s-2} b_{2s+2}^{s}
\]
just as in \cite[Theorem 14C.4]{Wall(1999)}. Hence we assume we have all the $b_{d+k}^{s}$ except $b_{2r+2}^{r}$, which we need to calculate.
To calculate $b_{2r+2}^{r}$ we need to look at the relation between $\bs_{4r}$ and the coefficients in $\sL (Q(h))$ more closely. Let $\delta_r x^{2r-k}y$ be the coefficient of $\bs_{4r}$ in $\sL (Q(h))$.
As in \cite[Theorem 14C.4]{Wall(1999)} consider the preimage $W_{2r}$ of $\CC P^{2r-k} \times S^{2k}$, which is the dual of $x^{2r-k}y$, with normal bundle $\nu_{2r}$ of $\iota_{r} \co W_{2r} \hookrightarrow Q(h)$ and observe that the leading $0$-dimensional term in $\sL (\nu_{2r})$ is $1$. We look at the equation
\[
8 \cdot \bs_{4r} (h) = \sign (W_{2r}) = 2^{2r} \sL (W_{2r}) [W_{2r}] = 2^{2r} (\iota_{r})^{\ast} \sL (Q(h)) \cdot \sL (\nu_{2r})^{-1} [W_{2r}].
\]
Both sides are linear functions of $\bs_{4r}$; the coefficient of $\bs_{4r}$ on the left hand side is $8$ and on the right hand side it is $2^{2r} \cdot \delta_r$, so we get
\[
\delta_{r} = 2^{3-2r}.
\]
Finally, exactly as in \cite[Theorem 14C.4]{Wall(1999)} we obtain
\[
b_{2r+2}^{r} = 2^{3-2r} \cdot \bigl(-2^{2r+1}\bigr) \cdot 2t/(1-t)^{2} = 8 (f^2-1).
\]
This finishes the proof.
\end{proof}
Next, similarly as in \cite{Macko-Wegner(2009)}, we pass from the formula for the $\rho$-invariant of elements in $\sS^{s}_{\del} (\CC P^{d-1} \times D^{2k})$ to the $\rho$-invariant of elements in $\sS^{s}_{\del} (L^{2d-1} \times D^{2k})$. The idea is the same: we use the transfer maps induced by $p \co L^{2d-1} \ra \CC P^{d-1}$ followed by the projection on the reduced normal invariants (see the discussion at the end of Section \ref{sec:lens-times-disk}):
\begin{equation}
\begin{split}
\xymatrix{
\sS^{s}_{\del} (\CC P^{d-1} \times D^{2k}) \ar[d] \ar[r] & \sN_{\del} (\CC P^{d-1} \times D^{2k}) \ar[d] \\
\sS^{s}_{\del} (L^{2d-1} \times D^{2k}) \ar[r] & \widetilde{\sN}_{\del} (L^{2d-1} \times D^{2k}).
}
\end{split}
\end{equation}
Let us now remember that we are only interested in the case $N = 2^K$ for $K \geq 1$ and hence assume this from now on. The composition from the upper left to the lower right corner is surjective if $n-1= 2(d-1)+2k = 4u+2$, which immediately gives the desired formula for $[\widetilde{\rho}_{\del}]$, see the first part of Proposition~\ref{rho-formula-lens-sp} below. If $n-1=4u$, then this composition is onto the subgroup obtained by leaving out the last $\ZZ/N$-summand. To include the last summand into the formula the same trick as in \cite[Lemma 4.10, 4.11]{Macko-Wegner(2009)} can be used in our situation. Its proof only used the suspension map $\Sigma$, which is available in our case as well, and the multiplicativity of the $\rho$-invariant with respect to $\Sigma$; the rest of the proof was completely algebraic and hence carries over word-for-word to our case, so that we obtain the following lemma.
\begin{lem} \label{all-summands-n=4m+2}
Let $d = 2e$ and $k=2l$ or let $d = 2e+1$ and $k = 2l+1$ so that in either case we have $n-1=2(d-1)+2k=4u$.
Let $c \in \sS^s_{\del} (L^{2d-1} \times D^{2k})$ be such that $c = b + a$ where $b = p^{!} (\widetilde b)$ for some
$\widetilde b \in \sS_{\del} (\CC P^{d-1} \times D^{2k})$ with $s_{2i} = \bs_{2i} (\eta \widetilde b)$ and $a$ is such that there exists an $\widetilde a \in \sN_{\del} (\CC P^{d-1} \times D^{2k})$ such that $\bs_{2i} (\widetilde a) = 0$ for $i < 2u$, $s_{4u} = \bs_{4u} (\widetilde a)$ and $\eta (a) = p^{!} (\widetilde a)$. Then
\[
\widetilde\rho_{\del} (c) = \sum_{i \in I^{S}_{4} (d,k)} \!\! 8 \cdot s_{4i} \cdot (f^{d+k-2i} - f^{d+k-2i-2}) + 8 \cdot s_{4u}
\cdot f + z \quad \in \quad \QQ\RhGm
\]
for some $z \in 4 \cdot \RhGm$.
\end{lem}
Using the discussion around \eqref{eqn:fibering-ni-lens-x-disk-by-cp-x-disk} this immediately gives an analogue of Proposition 4.12 from \cite{Macko-Wegner(2009)}, which reads as follows.
\begin{prop} \label{rho-formula-lens-sp}
Let $t = (t_{2i})_i \in \widetilde \sN_{\del} (L^{2d-1} \times D^{2k})$. If $n-1= 2(d-1) + 2k$, then we have for the homomorphism $[\wrho_\del] \co \widetilde \sN_{\del} (L^{2d-1} \times D^{2k}) \lra \QQ \RhG^{(-1)^{d+k}} / 4 \cdot \RhG^{(-1)^{d+k}}$ that
\begin{align*}
n-1 = 4u+2 \; : \; [\widetilde \rho_{\del}] (t) & = \sum_{i \in J^{N}_{4} (d,k)} 8 \cdot
t_{4i} \cdot f^{d+k-2i-2} \cdot (f^2-1) \\
n-1 = 4u \; : \; [\widetilde \rho_{\del}] (t) & = \sum_{i \in J^{N}_{4} (d,k) \smallsetminus \{u\}} 8 \cdot
t_{4i} \cdot f^{d+k-2i-2} \cdot (f^2-1) + 8 \cdot t_{4u}
\cdot f.
\end{align*}
\end{prop}
\section{Calculations} \label{sec:calculations}
We want to complete the proof of Theorem \ref{thm:main-thm} by calculating the summand $\bar T = \ker [\wrho_{\del}]$ from Theorem~\ref{thm:how-to-split-alpha-k}. We achieve this in Propositions \ref{T-2} and \ref{T-N} using formulas from Proposition~\ref{rho-formula-lens-sp}.
In \cite{Macko-Wegner(2009)} in a similar situation the reduced normal invariants were an abelian group which was a sum of $N$-torsion and $2$-torsion. As described in \eqref{red-ni-lens-spaces} here we have an extra summand from $D^{2k}$ and also the indexing is a little different. Nevertheless it is possible to make only minor changes in the setup from \cite{Macko-Wegner(2009)} to obtain our results.
\begin{prop} \label{T-2}
We have
\[
T_2 (d,k) \subseteq \bar T.
\]
\end{prop}
\begin{proof}
By Proposition \ref{rho-formula-lens-sp} the formula for $[\wrho_{\del}]$ only depends on the components $t_{4i}$. These vanish on $T_2 (d,k)$, and hence $T_2 (d,k) \subseteq \ker [\wrho_{\del}] = \bar T$.
\end{proof}
Next we investigate the behavior of the map $[\wrho_{\del}]$ on the summands $T_F (d,k)$ and $T_N (d,k)$ via the formulas from Proposition \ref{rho-formula-lens-sp}. Denote
\begin{equation} \label{eqn:bar-T-F-plus-N}
\bar T_{F \oplus N} (d,k) = T_{F \oplus N} (d,k) \cap \bar T, \quad \textup{where} \quad T_{F \oplus N} (d,k) = T_{F} (d,k) \oplus T_{N} (d,k).
\end{equation}
The formulas are very similar to those from Proposition 4.15 from \cite{Macko-Wegner(2009)}, except that the source of the map has an extra copy of $\ZZ$ and the indexing is a little different. Nevertheless the same general strategy as in \cite{Macko-Wegner(2009)} can be employed to find the kernel of $[\wrho_{\del}]$. The strategy is explained at the beginning of Section 5 in \cite{Macko-Wegner(2009)}. Briefly, first a passage from $T_N (d)$ is made to the underlying abelian group $\ZZ/N [x](d)$ of the appropriately truncated polynomial ring in the variable $x$ over $\ZZ/N$ via isomorphisms (5.1) and (5.3), so that $[\wrho]$ is transformed into maps (5.2) and (5.4); here the references are to formulas in \cite{Macko-Wegner(2009)}. Then another passage is made to $\ZZ[x](d)$, the underlying abelian group of the appropriately truncated polynomial ring in the variable $x$ with integer coefficients $\ZZ$. The problem is thus translated to finding
\begin{equation} \label{eqn:rho-hat}
\textup{the preimage} \; A_K (d) \; \textup{of} \; 4 \cdot \RhG^{(-1)^d} \; \textup{under the map} \; [\widehat{\rho}] \co \ZZ[x](d) \ra \QQ \RhG^{(-1)^d}
\end{equation}
given by the same formulas (5.2) and (5.4) keeping in mind that the coefficients are now integers. In symbols, the situation in \cite{Macko-Wegner(2009)} was like this:
\begin{equation} \label{eqn:scheme-T-vs-Z-x}
\xymatrix{
T_N (d) \ar[r]^(0.4){\cong} & \ZZ/N [x](d) & \ZZ[x](d) \ar[l]_(0.4){\red_N}
}
\end{equation}
so that the following transformations lead to the map $[\widehat{\rho}]$ given for $d=2e$ as
\begin{equation} \label{eqn:rho-hat-formula}
t = (t_{4i}) \mapsto q_t (x) = \sum t_{4(i+1)} \cdot x^{c-i-1} \; \leadsto \; [\widehat{\rho}] (q) = 8 \cdot (f^{2}-1) \cdot q (f^{2}).
\end{equation}
and for $d = 2e+1$ as
\begin{equation} \label{eqn:rho-hat-formula-second}
t = (t_{4i}) \mapsto q_t (x) = \sum (t_{4(i+1)} - t_{4i}) \cdot x^{c-i-1} + t_{4} \cdot x^{c-1} \; \leadsto \; [\widehat{\rho}] (q) = 8 \cdot f \cdot q (f^{2}).
\end{equation}
In both cases $c = \lfloor (d-1)/2 \rfloor$. Then $\bar T_N (d) = \red_{N} (A_K (d))$, and the subgroup $A_K (d)$ is shown to be equal to a certain subgroup $B_K (d)$ in Theorem 5.4 of \cite{Macko-Wegner(2009)}, which is the main calculation of that paper.
In our case these maps have to be modified as follows. The indexing set is $J_{4}^{tN} (d,k)$ and for $c (a) =\lfloor (a-1)/2 \rfloor$ we use the following truncated polynomial ring $\ZZ [x] (a) = \{ q \in \ZZ[x] \; | \; \deg(q) \leq c(a)-1 \}$. The coefficients of the polynomial $q$ are indexed as $q = q_0 + q_1 \cdot x + \cdots + q_{c(a)-1} \cdot x^{c(a)-1}$. Note that $c(2b+1)=c(2b+2)$; hence, in picking $a$ below, we do have some freedom, which we will use in order to get compatibility between the truncation and the signs of the eigenspaces.
We have four cases in each of which the maps $\red_{N,d,k} \co q \mapsto t$ correspond to the composition of $\red_N$ and the inverse of the isomorphism $T_N (d) \xra{\cong}\ZZ/N [x](d)$ from Diagram~\eqref{eqn:scheme-T-vs-Z-x}:
\noindent \textbf{Case $d=2e$, $k=2l$.} Hence $n=2d-1+2k=4(e+l)-1$ and we have
\begin{equation} \label{eqn:red-2e-2l}
\red_{N,d,k} \co \ZZ [x] (d+2) \ra T_{F \oplus N} (d,k) = T_F (d,k) \oplus T_N (d,k)
\end{equation}
given by
\begin{equation} \label{eqn:rho-hat-del-formula-2e-2l}
q \mapsto t=(t_{4j} := q_{(e+l-1)-j})_j \; \leadsto \; [\widehat{\rho}] (q) = 8 \cdot (f^{2}-1) \cdot q (f^{2}).
\end{equation}
\noindent \textbf{Case $d=2e+1$, $k=2l+1$.} Hence $n=2d-1+2k=4(e+l)+3$ and we have
\begin{equation} \label{eqn:red-2e+1-2l+1}
\red_{N,d,k} \co \ZZ [x] (d+1) \ra T_{F \oplus N} (d,k) = T_F (d,k) \oplus T_N (d,k)
\end{equation}
given by
\begin{equation} \label{eqn:rho-hat-del-formula-2e+1-2l+1}
q \mapsto t=(t_{4j} := q_{(e+l)-j})_j \; \leadsto \; [\widehat{\rho}] (q) = 8 \cdot (f^{2}-1) \cdot q (f^{2}).
\end{equation}
\noindent \textbf{Case $d=2e$, $k=2l+1$.} Hence $n=2d-1+2k=4(e+l)+1$ and we have
\begin{equation} \label{eqn:red-2e-2l+1}
\red_{N,d,k} \co \ZZ [x] (d+1) \ra T_{F \oplus N} (d,k) = T_F (d,k) \oplus T_N (d,k)
\end{equation}
given by
\begin{equation} \label{eqn:rho-hat-del-formula-2e-2l+1}
q \mapsto t=(t_{4j} := \sum_{v=1}^{j-l} q_{e-v})_j \; \leadsto \; [\widehat{\rho}] (q) = 8 \cdot f \cdot q (f^{2}).
\end{equation}
\noindent \textbf{Case $d=2e+1$, $k=2l$.} Hence $n=2d-1+2k=4(e+l)+1$ and we have
\begin{equation} \label{eqn:red-2e+1-2l}
\red_{N,d,k} \co \ZZ [x] (d+2) \ra T_{F \oplus N} (d,k) = T_F (d,k) \oplus T_N (d,k)
\end{equation}
given by
\begin{equation} \label{eqn:rho-hat-del-2e+1-2l}
q \mapsto t=(t_{4j} := \sum_{v=0}^{j-l} q_{e-v})_j \; \leadsto \; [\widehat{\rho}] (q) = 8 \cdot f \cdot q (f^{2}).
\end{equation}
In each case the truncation $\ZZ[x](a)$ is chosen so that $c(a)$ is the cardinality of the indexing set $J_{4}^{N}(d,k)$ and so that $a$ and $(d+k)$ have the same parity.
Hence we now have the maps
\begin{equation} \label{eqn:rho-hat-k-not-0}
[\widehat{\rho}] \co \ZZ[x](a) \ra \QQ \RhG^{(-1)^{a}}
\end{equation}
with appropriate $a$ in the respective cases and we need to find the preimage $A_K (a)$ of $4 \cdot \RhG^{(-1)^{a}}$.
This means that the setup is the same as in \cite{Macko-Wegner(2009)}. Indeed, let us in particular look at the definition of the sets $B_K (a)$ on pages 17--18 of \cite{Macko-Wegner(2009)}. First it is mentioned that for each $n \in \NN$ there are universal polynomials $r_n^-$ and $r_n^+$ of degree $n$, with leading coefficient $1$ and with certain properties (they are explicitly constructed later in Definition 5.26 of \cite{Macko-Wegner(2009)}). Then a definition is made
\begin{equation} \label{eqn:B_K-a}
B_{K} (a) := \left \{ \sum_{n = 0}^{c(a)-1} a_n \cdot 2^{\textup{max}\{K-2n-2,0\}} \cdot r^{(-1)^{a}}_{n} \; | \; a_n \in \ZZ \right \}
\end{equation}
By universality we mean that the polynomials $r_n^-$ and $r_n^+$ do not depend on $a$: the same polynomial $r_n^-$ or $r_n^+$ is used in all the sets $B_K (a)$. The main calculational result of \cite{Macko-Wegner(2009)} is Theorem 5.4, where it is proved that
\begin{equation} \label{eqn:A_K-a=B_K-a}
A_K (a) = B_K (a).
\end{equation}
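As a plausibility check (this computation is ours, not from \cite{Macko-Wegner(2009)}), note that reducing the basis element $2^{\max\{K-2n-2,0\}} \cdot r_{n}^{(-1)^{a}}$ of $B_K (a)$ modulo $N = 2^K$ yields an element of order
\[
2^{K - \max\{K-2n-2,\,0\}} = 2^{\min\{K,\,2n+2\}},
\]
which, with $i = n+1$, matches the cyclic summands $\ZZ/2^{\min\{K,2i\}}$ appearing in Proposition \ref{T-N} below.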
\begin{prop} \label{T-N}
When $k=2l$ we have
\[
\bar T_{F \oplus N} (d,k) \cong \ZZ \oplus \bigoplus_{i =1}^{c_{N} (d,k)} \ZZ/{2^{\min\{K,2i\}}}.
\]
When $k=2l+1$ we have
\[
\bar T_{F \oplus N} (d,k) \cong \bigoplus_{i =1}^{c_{N} (d,k)} \ZZ/{2^{\min\{K,2i\}}}.
\]
\end{prop}
\begin{proof}
Equation \eqref{eqn:A_K-a=B_K-a} and the definition of $B_K(a)$ show that $A_K (a)$ is a free abelian subgroup of $\ZZ[x](a)$ with a basis given by the polynomials $2^{\max\{K-2n-2,0\}} \cdot r_n^\pm$. Under the homomorphism $\ZZ[x](a) \ra T_{F \oplus N} (d,k)$ the subgroup $A_K (a)$ for the appropriate $a$ is mapped onto a subgroup isomorphic to the direct sum claimed in the proposition, since $c_N (d,k)$ is the cardinality of the indexing set $J_{4}^{N} (d,k)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main-thm}]
The proof for $L \times D^{2k}$ follows from Theorem \ref{thm:how-to-split-alpha-k}, Proposition \ref{T-N} and suitable reindexing in Proposition \ref{T-2}. The part about the invariant $\wrho_{\del}$ follows directly from Theorem \ref{thm:how-to-split-alpha-k}. The invariants $\bbr_{0}$ are taken out of the sums $\bar T_{F \oplus N} (d,k)$ of Proposition \ref{T-N} and $T_{2} (d,k)$ of Proposition \ref{T-2}, respectively. They are given as splitting invariants along $\{ \ast \} \times S^{2k}$ and hence play a special role; by extracting them from the sums we stress this point. The invariant $\bbr$ is then defined abstractly as coming from the torsion summand of Proposition \ref{T-N}, and the invariant $\br$ is defined abstractly as coming from the torsion summand of Proposition \ref{T-2}. The proof for $L \times D^{2k+1}$ follows from \eqref{eqn:end-result-odd-disk} by calculating the cardinality of the indexing set.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:higehr-str-sets-L-times-S}]
Follows from the isomorphism \eqref{eqn:higher-str-set-X-times-S} and the calculation of the structure sets $\sS^{s} (X \times D^{m},X \times S^{m-1}) = \sN (X)$ from \eqref{eqn:str-set-rel-not-rel-bdry}. The normal invariants $\sN (L)$ are calculated in (3.9) of \cite{Macko-Wegner(2009)} which gives the indexing in the statement of the corollary and the invariants $\br'$ and $\br''$ are given by the corresponding components.
\end{proof}
\section{Final Remarks} \label{sec:final-remarks}
One obvious future direction would be to try to obtain a better geometric description of the invariants $\bbr$ from Theorem \ref{thm:main-thm}.
In \cite{Macko-Wegner(2011)} an inductive obstruction-theoretic description of the corresponding invariants was offered for the case $m=0$. Such a description works also in the present case, with a proof very similar to the case $m=0$, so we refrain from repeating it here and refer the reader to \cite{Macko-Wegner(2011)}. Of course, a non-inductive description would be even better, but this remains open even in the case $m=0$.
Another future direction would be to address the case $N = 2^K \cdot M$ for $M$ odd, again similarly as is done in \cite{Macko-Wegner(2011)}. We plan to do this in a future work. The reason it is not included here is that the localization, which is the main tool in \cite{Macko-Wegner(2011)}, becomes more difficult, and hence we want to deal with it separately.
\section*{Acknowledgments}
We thank James F. Davis for a discussion about formula \eqref{eqn:L-of-xi-versus-L-of-G-mod-TOP}.
\small
\section{User Manual}
\setcounter{figure}{0}
\subsection{Introduction}
The software demo developed by us comprises a synthetic tabular data generation pipeline. It was implemented using Python 3.7.* along with the Flask library to work as a web application on a local server. The application's functionality and usage are described in the \nameref{Fmarker} \& \nameref{Umarker} sections, respectively. In addition, a video of the demo can be seen \href{https://drive.google.com/file/d/1VK6479YPnjg0zVbfdgJb2G4_7lz2CWp6/view}{\underline{here}}.
\subsection{Functionality}
\label{Fmarker}
Our demo comprises the following salient features:
\begin{enumerate}
\item \textbf{Synthetic Data Generator: }Our software is a cross-platform application that sits on top of a Python interpreter. Moreover, it is relatively lightweight and can be set up easily using pip.
Our application is also robust against missing values and supports date-type formats. We believe these factors increase its usability in real-world scenarios.
\item \textbf{Synthetic Data Evaluator: }In addition to our generator, we also provide a detailed evaluation of the synthetic data. The report provides end users with visual plots comparing the real and synthetic distributions of individual columns, as shown in sub-figures \ref{fig:workclass} \& \ref{fig:age} of Fig.~\ref{fig:visual_plots}. In addition, the synthetic data's utility for ML applications along with its privacy preservability metrics are reported, as can be seen in sub-figures \ref{fig:utility} \& \ref{fig:privacy} of Fig.~\ref{fig:efficacy}. Note that the table-evaluator\footnote{\url{https://github.com/Baukebrenninkmeijer/Table-Evaluator}} library aided us in generating this evaluation report.
\end{enumerate}
\subsection{Usage}
\label{Umarker}
The following step-by-step instructions are provided to allow end-users to use our product in a hassle-free manner.
\begin{enumerate}[start=1,label={\bfseries Step \arabic*:},leftmargin=1.425cm]
\item Open the terminal and navigate to the root directory of the software package to run the following command: \texttt{python3 server.py} (or \texttt{python server.py}, depending on your installation).
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/1.png}%
}
\medskip
Open the browser, the application should now be available at the following address: \texttt{http://127.0.0.1:5000/}.
\end{minipage}
\item If this is the first time running the web application, it is advised to click on the ``Train a new model'' button to begin training the model with a dataset. Otherwise, click on the ``Use existing mode'' button to use an existing trained model.
If you clicked on the ``Use existing mode'' button, please go to \textbf{step 8}. If not, please continue with the next step.
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/2.png}%
}
\medskip
Click on the ``Browse'' button to select the dataset on which the model needs to train. Afterwards click on the ``Upload'' button.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/3.png}%
}
\medskip
The software will auto-detect the column types, and will give the option to adjust a column's data type and inclusion in the training. Note that the red highlighted column shows the current columns in the uploaded \texttt{csv} file. The yellow highlighted column gives the option to include or exclude a particular column in the training process by clicking on the switch button. The highlighted green column is the auto detected data type. It also has the option to be adjusted as needed. Simply click on it and select the desired data type from the drop down menu. Click on the ``Submit'' button after choosing the right settings.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/4.png}%
}
\medskip
In the following page, specify the problem type for the given dataset. The software currently provides the following problem types: None, Binary Classification and Multi-class Classification. If unsure, leave it as None. Then enter the number of epochs needed to train the model. Click on ``Train Model'' to start the training.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/5.png}%
}
\medskip
Once the model has finished training, the option to train a new model or proceed to the synthesizer is presented. To generate synthetic data, click on ``Proceed to data synthesizer''.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/6.png}%
}
\medskip
The trained models can be found in the dropdown menu of the Models field, as seen in the figure above. Click on it, and select the trained model. After this, type the amount of rows to be generated in the second field, and click on ``Start Synthesizer'' to start the process.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/7.png}%
}
\medskip
Once the data is generated, the following page shows a snippet of the synthetic data. The generated data can be saved locally by clicking on ``Download csv''. This page also gives you the option to generate a report for the given data. In order to generate the report in PDF format, simply click on ``Generate report'' and continue with step 10.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/8.png}%
}
\medskip
The following page is presented while the report is being generated. It will automatically redirect to the PDF once it is completed.
\end{minipage}
\item \begin{minipage}[t]{\linewidth}
\raggedright
\adjustbox{valign=t}{%
\includegraphics[width=.8\linewidth]{user_manual/9.png}%
}
\medskip
Finally, once the PDF has been generated, it can be saved locally by clicking on ``Save as'' or ``Print'' in the browser.
\end{minipage}
\end{enumerate}
\begin{figure}[H]
\begin{center}
\subfloat[Cumulative distribution comparison of Age in Adult]{
\includegraphics[width=0.47\columnwidth,height=4.5cm]{user_manual/age.JPG}
\label{fig:age}
}
\hfil
\subfloat[Frequency comparison of categories within Workclass in Adult]{
\includegraphics[width=0.47\columnwidth,height=4.25cm]{user_manual/workclass.JPG}
\label{fig:workclass}
}
}
\caption{Visual plots comparing the generated vs real data distribution.}
\label{fig:visual_plots}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\subfloat[ML Utility]{
\includegraphics[width=0.57\columnwidth]{user_manual/utility.jpg}
\label{fig:utility}
}
\hfil
\subfloat[Privacy Preservability]{
\includegraphics[width=0.57\columnwidth]{user_manual/privacy.jpg}
\label{fig:privacy}
}
\caption{ML utility and privacy preservability of the generated data.}
\label{fig:efficacy}
\end{center}
\end{figure}
\section{Decentralized Architecture}
\label{sec:relatedwork}
GANs comprise two types of adversarial deep neural networks: a generator and a discriminator. During training the two take turns. First the generator tries to synthesize data that is as indistinguishable as possible from the real data in order to fool the discriminator. Then the discriminator tries to tell the fake data apart from the real data to counter the generator.
Model weights are updated using a minimax loss function, which tries to minimize the loss of the generator (related to the number of its fakes correctly detected by the discriminator) while maximizing the loss of the discriminator (related to how often the generator successfully fooled the discriminator); see Fig.~\ref{fig:centralized_gan}.
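The alternating scheme just described optimizes the standard GAN minimax objective, which in the original formulation can be written as
\[
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\textup{data}}} \bigl[ \log D(x) \bigr] + \mathbb{E}_{z \sim p_{z}} \bigl[ \log \bigl( 1 - D(G(z)) \bigr) \bigr],
\]
where $G$ and $D$ denote the generator and discriminator, $p_{\textup{data}}$ is the real-data distribution and $p_{z}$ is the input noise distribution of the generator.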
Training GANs is computationally demanding since they comprise at least two deep neural networks (i.e., one generator and one discriminator) and are trained using big datasets. Therefore, a decentralized training framework can be highly beneficial in such a setting, but so far it has been explored only for image GANs. Existing
solutions to decentralized GANs training can be classified into two categories: (1) Multi-Discriminator (MD) structure~\cite{mdgan, asyndgan, temporary_dis} and (2) Federated Learning (FL) structure~\cite{fedgan, fegan, fl}.
{\bf Multi-Discriminator} has one single generator in the server and multiple discriminators distributed across the clients. The structure is illustrated in Fig.~\ref{fig:mdgan}.
The server determines the network architecture for the generator and discriminators.
The generator is located at the server and trains its network using random inputs and the gradients from all discriminators, as is typically done for centralized GANs. In contrast, the discriminators located at the decentralized data sources train their networks locally using outputs from the generator, i.e., synthesized data. Such a structure ensures that the clients' data never needs to leave their machines.
The downside of the MD structure is that it induces significant communication overhead between the generator and discriminator, i.e., sending synthesized data to all discriminators, and returning discriminator's gradients to the generator per training epoch.
In addition, client discriminators tend to over-fit to their local data with more training epochs. MD-GAN~\cite{mdgan} counters the latter issue by allowing clients to randomly swap their models in a peer-to-peer way every several epochs. Even so, each discriminator is treated with the same weight to update the generator. Thus, the convergence of the generator is not optimal~\cite{fegan} when the quantity and distribution of data is highly skewed among clients.
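As a minimal illustration (our sketch, not the MD-GAN implementation, where the swap of course happens over the network), the peer-to-peer swap amounts to permuting the discriminator models among the clients every few epochs:

```python
import random

def swap_discriminators(disc_models, rng=random):
    """Randomly permute discriminator models among clients, as MD-GAN
    does every several epochs to reduce over-fitting of each
    discriminator to its local data."""
    order = list(range(len(disc_models)))
    rng.shuffle(order)  # random permutation of client indices
    return [disc_models[i] for i in order]
```

No model is lost in the process: the swap is a bijection, so every discriminator ends up at exactly one client.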
{\bf Federated Learning} \cite{fl} structure (shown in Fig.~\ref{fig:fedgan}) is composed of multiple GANs (each with a discriminator and a generator) on the clients, who have direct access to data. Each client first trains a GAN using its local data and then sends the GAN model to the federator. The key roles of the federator are: (i) during initialization, to determine the GANs architecture; and (ii) during training, to aggregate the local GAN models into a global GAN and redistribute it to all clients.
Communication occurs when clients upload their model weights to the federator and when the federator redistributes the updated weights. Such a round of communication and merging of local models is commonly referred to as a global {\emph{training round}} in FL studies. The resulting overhead is lower than for the MD structure, which requires communication between server and clients per training epoch. Additionally, transferring model weights to/from the federator is more efficient than transferring synthesized data to each discriminator in the case of the MD structure.
The FL structure also scales well with the number of clients, as the computational complexity of model aggregation is lower than that of training the generator network.
Another advantage of FL structure is to allow weighting local models during aggregation, which helps to accelerate the convergence of the generator under skewed data distributions among clients.
Local data ratios and Kullback-Leibler (KL) weighting methods from~\cite{fegan} are introduced to address skewed data challenges for image data.
\noindent\textbf{Architecture choice for Fed-TGAN\xspace}. The FL structure has multiple benefits, ranging from communication overhead, scalability, training stability, and handling skewed client data, compared to the MD structure. In this work, we thus adopt the FL structure for enabling training tabular GANs on decentralized data sources. In summary, \emph{the proposed Fed-TGAN\xspace is composed of one federator and multiple clients, following the training procedure of the FL structure.}
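To make the FL training procedure concrete, the following minimal Python sketch (our illustration; models are abstracted as flat parameter lists rather than actual networks, and `local_train` is a placeholder for a client's local GAN training) shows one global training round: clients train locally, the federator aggregates their models with per-client weights, and the global model is redistributed:

```python
def aggregate(client_params, client_weights):
    """Federator step: weighted average of client parameter vectors."""
    assert abs(sum(client_weights) - 1.0) < 1e-9, "weights must sum to 1"
    n_params = len(client_params[0])
    global_params = [0.0] * n_params
    for params, w in zip(client_params, client_weights):
        for i, p in enumerate(params):
            global_params[i] += w * p
    return global_params

def training_round(client_models, client_weights, local_train):
    """One global round: local training, upload, aggregate, redistribute."""
    updated = [local_train(m) for m in client_models]  # clients train locally
    global_model = aggregate(updated, client_weights)  # federator aggregates
    return [list(global_model) for _ in client_models] # redistribute copies
```

With uniform weights $1/P$ this reduces to vanilla federated averaging; a Fed-TGAN\xspace-style scheme instead assigns data-dependent per-client weights.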
\section{Experimental Analysis}
\label{ssec:setup}
Our algorithm Fed-TGAN\xspace is evaluated on four commonly used datasets, and compared with three alternative architectures. To evaluate the similarity between the real and the synthetically generated data, we use the average Jensen--Shannon divergence (Avg-JSD) for categorical columns and the average Wasserstein distance (Avg-WD) for continuous columns. We also provide an ablation analysis to highlight the efficacy of the proposed client weighting strategies of Fed-TGAN\xspace. A training time analysis is reported at the end to show the time efficiency of all algorithms.
\subsection{Experimental setup}
\hspace{1em} {\bf Datasets}. We test our algorithm on four commonly used machine learning datasets: {Adult},
{Covertype} and {Intrusion} are from the UCI machine learning repository~\cite{UCIdataset}, and {Credit} is from Kaggle~\cite{kagglecredit}. Due to our computational limitations, we randomly sample 40k rows from each of the above datasets. The details of each dataset are shown in Tab.~\ref{table:DD}.
\begin{table}[t]
\centering
\caption{Description of datasets.}
\vspace{-0.7em}
\begin{tabular}{ |c|c|c|c|c| }
\hline
\multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Rows [\#]}} & \multicolumn{3}{c|}{\textbf{Columns [\#]}}\\
\cline{3-5}
& & \textbf{Categorical} & \textbf{Continuous}& \textbf{Total} \\
\hline
Adult & 40k & 9 & 5 & 14 \\
Covertype & 40k & 45 & 10 & 55 \\
Credit & 40k & 1 & 30 & 31\\
Intrusion & 40k & 20 & 22 & 42 \\
\hline
\end{tabular}
\label{table:DD}
\vspace{-1em}
\end{table}
{\bf Baselines}. We compare Fed-TGAN\xspace against 3 baselines: (i) the multi-discriminator structure, (ii) the {vanilla federated learning structure} and (iii) the {centralized approach}, abbreviated as \textbf{MD-TGAN}, \textbf{vanilla FL-TGAN}, and \textbf{Centralized}, respectively. The aim is to learn a CTGAN model from distributed clients using the three frameworks on the basis of CTGAN's default settings for encoding features~\cite{ctgan}. Specifically, we use 10 as the limiting number of estimated modes for the VGM encoders for each continuous column and use one-hot encoding for categorical columns. We re-implement all baselines using the PyTorch v1.8.1 RPC framework.
MD-TGAN clients swap discriminator models with each other at the end of each training epoch~\cite{mdgan}. For a fair comparison with MD-TGAN, we also force Fed-TGAN\xspace and vanilla FL-TGAN to share the model weights with the federator at the end of each training epoch. Due to this, the notion of \textit{per round}, commonly used in FL studies, equals \textit{per epoch} in this paper, if not otherwise stated.
Vanilla FL-TGAN is identical to Fed-TGAN\xspace, except it uses identical weights for all clients equal to $\frac{1}{P}$ where $P$ is the number of clients.
Due to the different learning speed per epoch of the four frameworks, for a fair comparison we fix the number of epochs so that the training time is similar. In particular, we use 500, 500, 500, and 150 epochs for Fed-TGAN\xspace, vanilla FL-TGAN, Centralized, and MD-TGAN, respectively.
We repeat each experiment 3 times and report the average.
{\bf Testbed}. Experiments are run under Ubuntu 20.04 on two machines. Each machine is equipped with 32 GB of memory, a GeForce RTX 2080 Ti GPU and a 10-core Intel i9 CPU. Each CPU core has two threads; hence each machine contains 20 logical CPU cores in total. The machines are interconnected via 1G Ethernet links (measured speed: 943Mb/s). One machine hosts the federator, the other all the clients. When not otherwise stated, both the federator and the clients use the GPU for training. For the experiments in Sec.~\ref{ssec:training_time}, when the CPU is used to host clients for Fed-TGAN\xspace and MD-TGAN, CPU affinity (via the \textit{taskset} command in Linux) is used to bind each client to one logical CPU core to reduce interference between different processes.
\subsection{Evaluation metrics}
\label{sec:metrics}
To evaluate the performance of the generator, 40k rows of synthetic data are sampled from the trained generator at the end of each epoch. We use two metrics to quantitatively measure the statistical similarity between the real and synthetic data:
{\bf Average Jensen-Shannon divergence (Avg-JSD).} Used for categorical columns (see Sec.~\ref{sec:model} for its definition). First, we compute the JSD between the synthetic and real data for each categorical column. Second, we average the obtained JSDs to obtain a compact comprehensible score, abbreviated as {Avg-JSD}. The closer to 0 {Avg-JSD} is, the more realistic the synthetic data is.
{\bf Average Wasserstein distance (Avg-WD).} Used for continuous columns\footnote{We use WD over JSD for continuous columns since JSD is not well-defined when the synthetic values lie outside of the original value range from the real dataset, i.e., the KL divergence is not defined when comparing the similarity of probability distributions with non-overlapping support.} (see Sec.~\ref{sec:model} for its definition). Unlike JSD, WD is unbounded and can vary greatly depending on the scale of the data. To make the WD scores comparable across columns, before computing the WD we fit and apply a min-max normalizer to each continuous column in the real data and apply the same normalizer to the corresponding columns in the synthetic data. We average all column WD scores to obtain the final score abbreviated as {Avg-WD}. The closer to 0 {Avg-WD} is, the more realistic the synthetic data is.
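For concreteness, the two column-wise scores can be computed with the following pure-Python sketch (our illustration; in practice library implementations such as SciPy's can be used instead). It relies on the fact that the one-dimensional Wasserstein distance between two equal-size samples equals the mean absolute difference of their sorted values:

```python
from collections import Counter
from math import log2

def jsd(real_col, synth_col):
    """Jensen-Shannon divergence (base 2, so bounded by 1) between the
    category frequencies of a real and a synthetic categorical column."""
    p_cnt, q_cnt = Counter(real_col), Counter(synth_col)
    cats = set(p_cnt) | set(q_cnt)
    p = {c: p_cnt[c] / len(real_col) for c in cats}
    q = {c: q_cnt[c] / len(synth_col) for c in cats}
    m = {c: (p[c] + q[c]) / 2 for c in cats}  # mixture distribution
    kl = lambda a, b: sum(a[c] * log2(a[c] / b[c]) for c in cats if a[c] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def wd_minmax(real_col, synth_col):
    """1-D Wasserstein distance between equal-size samples, after a
    min-max normalizer fitted on the real column is applied to both."""
    lo, hi = min(real_col), max(real_col)
    norm = lambda xs: [(x - lo) / (hi - lo) for x in xs]
    a, b = sorted(norm(real_col)), sorted(norm(synth_col))
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

Averaging `jsd` over all categorical columns and `wd_minmax` over all continuous columns yields Avg-JSD and Avg-WD, respectively.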
\subsection{Result analysis}
We first evaluate how realistic the generated synthetic data is. Sec.~\ref{ssec:iid} designs an experiment where every client holds the whole dataset, to test the performance of each framework in the ideal case. Then, in Sec.~\ref{ssec:imbalanced}, we implement a scenario where the data on the clients is IID, but the data quantities are highly imbalanced across clients. The objective of this experiment is to show the effect of our model aggregation weighting method. Finally, in Sec.~\ref{ssec:ablation}, we design an ablation analysis where one client has a much larger amount of data, but of low quality. This shows the efficacy of the table-similarity aware component in the calculation of the model aggregation weights.
\subsubsection{Ideal case of full dataset}
\label{ssec:iid}
\begin{figure}[t]
\begin{center}
\subfloat[Avg-JSD by epoch]{
\includegraphics[width=0.47\columnwidth]{figures/5full_intrusion_avg_jsd_epoch.png}
\label{fig:5full_intrusionjsd}
}
\subfloat[Avg-JSD by time]{
\includegraphics[width=0.47\columnwidth]{figures/5full_intrusion_avg_jsd_time.png}
\label{fig:5full_intrusionjsdt}
}
\hspace{\fill}
\subfloat[Avg-WD by epoch]{
\includegraphics[width=0.47\columnwidth]{figures/5full_intrusion_avg_wd_epoch.png}
\label{fig:5full_intrusionwd}
}
\subfloat[Avg-WD by time]{
\includegraphics[width=0.47\columnwidth]{figures/5full_intrusion_avg_wd_time.png}
\label{fig:5full_intrusionwdt}
}
\caption{MD-TGAN\xspace, Fed-TGAN\xspace and Centralized\xspace: 5 clients each with a \textit{complete} Intrusion data copy.}
\label{fig:5full_intrusion}
\end{center}
\vspace{-1.2em}
\end{figure}
This experiment uses one server (or federator) and 5 clients. Each client is provided with a copy of the full real dataset. This represents the ideal case of perfectly identical clients, i.e., each client has identical IID data. We compare Fed-TGAN\xspace, MD-TGAN\xspace and Centralized\xspace. Since in this case the aggregation weights of Fed-TGAN\xspace are the same as for vanilla FL-TGAN\xspace due to the identical data, we skip vanilla FL-TGAN\xspace. Results for the Intrusion dataset are shown in Fig.~\ref{fig:5full_intrusion}. Avg-JSD and Avg-WD are presented both by epoch and by time (in seconds), as the different architectures spend vastly different amounts of time per epoch. For categorical columns, Fed-TGAN\xspace converges faster both by epoch and by time (see Fig.~\ref{fig:5full_intrusionjsd} and \ref{fig:5full_intrusionjsdt}). Moreover, the Avg-JSD of MD-TGAN\xspace converges quite slowly after epoch 24. For continuous columns, in terms of epochs, the Avg-WD of Fed-TGAN\xspace converges faster at the beginning, then becomes slightly worse than the Avg-WD of MD-TGAN\xspace (see Fig.~\ref{fig:5full_intrusionwd}). However, inspecting the result by time, Fed-TGAN\xspace not only converges faster, but also achieves a lower Avg-WD than the other two architectures (see Fig.~\ref{fig:5full_intrusionwdt}). The performance gap between the centralized approach and Fed-TGAN\xspace may look counter-intuitive; however, similar results are reported by FeGAN~\cite{fegan}. The reason is that Fed-TGAN\xspace sees the data five times per epoch, compared to the centralized approach which sees it only once. This boosts the diversity of samples seen by Fed-TGAN\xspace, thereby providing superior performance.
We summarize the final similarity results of all three approaches on all four datasets in Tab.~\ref{table:5full_iid}. The scores are taken at the time (in seconds) when Centralized\xspace finishes 500 epochs of training. One can see that Fed-TGAN\xspace consistently achieves higher similarity (lower Avg-JSD and Avg-WD values) than the other two approaches.
\begin{table}[t]
\centering
\caption{Final similarity for MD-TGAN\xspace and Fed-TGAN\xspace and Centralized\xspace: 5 clients each having a complete data copy.}
\vspace{-0.5em}
\begin{tabular}{|C{1.4cm}|C{3cm}|C{3cm}|}
\hline
\textbf{Dataset} & \textbf{Avg JSD \small(MD\xspace/Fed\xspace/Centralized\xspace)} & \textbf{Avg WD \small(MD\xspace/Fed\xspace/Centralized\xspace)} \\
\hline
{Adult} & 0.072/\textbf{0.059}/0.117& 0.014/\textbf{0.012}/0.015 \\
{Covertype} & 0.038/\textbf{0.018}/0.075&0.022/\textbf{0.021}/0.086 \\
{Credit} & 0.083/\textbf{0}/0.012 &\textbf{0.006}/\textbf{0.006}/0.041 \\
{Intrusion} & 0.095/\textbf{0.031}/0.032& 0.027/\textbf{0.02}/0.026 \\
\hline
\end{tabular}
\label{table:5full_iid}
\vspace{-1em}
\end{table}
\subsubsection{Imbalanced amount of IID data}
\label{ssec:imbalanced}
For this experiment, we design a scenario where the number of data rows is highly imbalanced across clients. Specifically, the group includes 5 clients: 4 of them contain only 500 rows of data randomly sampled from the original dataset, while the last client contains the full dataset. We choose 500 rows because it is the batch size used by CTGAN, i.e., the minimum number of rows needed to form one mini-batch per epoch.
This scenario demonstrates the effect of the model aggregation weights that the federator calculates during initialization.
Results are shown in Fig.~\ref{fig:5imbalance_intrusion}. For categorical columns, one can see that the Avg-JSD by epoch converges around 35\% faster for Fed-TGAN\xspace than for vanilla FL-TGAN\xspace (epoch 17 versus epoch 26, see Fig.~\ref{fig:5imbalance_intrusionjsd}).
Moreover, the Avg-JSD value of Fed-TGAN\xspace after convergence is also smaller than that of MD-TGAN\xspace and vanilla FL-TGAN\xspace. A similar pattern holds when measuring the Avg-JSD by time (see Fig.~\ref{fig:5imbalance_intrusionjsdt}). For continuous columns, Fed-TGAN\xspace converges faster at the very beginning. Between 80 and 400 seconds, Fed-TGAN\xspace is slightly worse than MD-TGAN\xspace and vanilla FL-TGAN\xspace; from then until the end, MD-TGAN\xspace and Fed-TGAN\xspace perform similarly, see Fig.~\ref{fig:5imbalance_intrusionwdt}. A similar pattern emerges when computing the Avg-WD by epoch, see Fig.~\ref{fig:5imbalance_intrusionwd}. Full results on the four datasets are presented in Tab.~\ref{table:5imbalance_iid}. We notice that, except for the Intrusion dataset, Fed-TGAN\xspace and vanilla FL-TGAN\xspace perform similarly on continuous columns, whereas for categorical columns Fed-TGAN\xspace outperforms vanilla FL-TGAN\xspace on most datasets.
Fed-TGAN\xspace converges better than vanilla FL-TGAN\xspace because the model trained on 40k rows converges better than the models trained on 500 rows, given that all clients hold IID samples from the original dataset. As Fed-TGAN\xspace gives more weight to the better-trained model, it benefits from that model's better convergence.
Fed-TGAN\xspace and vanilla FL-TGAN\xspace perform similarly on the Adult dataset, likely because Adult has fewer columns and is thus simpler to learn. The Avg-WD results for the Adult, Covertype and Credit datasets are similar because, in each of the 500-row IID samples, the distributions of the continuous columns are well preserved with respect to the original data.
From Fig.~\ref{fig:5imbalance_intrusion} and Tab.~\ref{table:5imbalance_iid} (results are taken at the time when MD-TGAN finishes 150 epochs of training), we conclude that under an imbalanced data quantity distribution across clients, vanilla FL-TGAN\xspace not only suffers from slow convergence, but also produces poor sample quality.
\begin{figure}[t]
\vspace{-0.5em}
\begin{center}
\subfloat[Avg-JSD by epoch]{
\includegraphics[width=0.49\columnwidth]{figures/5weights_intrusion_avg_jsd_epoch.png}
\label{fig:5imbalance_intrusionjsd}
}
\subfloat[Avg-JSD by time]{
\includegraphics[width=0.5\columnwidth]{figures/5weights_intrusion_avg_jsd_time.png}
\label{fig:5imbalance_intrusionjsdt}
}
\hspace{\fill}
\subfloat[Avg-WD by epoch]{
\includegraphics[width=0.49\columnwidth]{figures/5weights_intrusion_avg_wd_epoch.png}
\label{fig:5imbalance_intrusionwd}
}
\subfloat[Avg-WD by time]{
\includegraphics[width=0.5\columnwidth]{figures/5weights_intrusion_avg_wd_time.png}
\label{fig:5imbalance_intrusionwdt}
}
\vspace{-0.5em}
\caption{MD-TGAN, Fed-TGAN\xspace and vanilla FL-TGAN: 4 clients have 500, 1 client has 40k rows of sampled IID data.}
\label{fig:5imbalance_intrusion}
\end{center}
\vspace{-1.2em}
\end{figure}
\begin{table}[t]
\centering
\caption{Final similarity for MD-TGAN\xspace, Fed-TGAN\xspace and vanilla FL-TGAN\xspace: 4 clients have 500, 1 client has 40k rows of sampled IID data.}
\vspace{-0.5em}
\begin{tabular}{|C{1.4cm}|C{3cm}|C{3cm}|}
\hline
\textbf{Dataset} & \textbf{Avg JSD \small(MD/Fed/Vanilla-FL)} & \textbf{Avg WD \small(MD/Fed/Vanilla-FL)} \\
\hline
{Adult} &0.07/\textbf{0.062/0.062} &0.014/\textbf{0.012/0.012} \\
{Covertype} &0.029/\textbf{0.026}/0.032 &\textbf{0.02/0.02/0.02} \\
{Credit} &0.078/\textbf{0.007}/0.011&0.006/\textbf{0.005/0.005} \\
{Intrusion} &0.092/\textbf{0.037}/0.044 &\textbf{0.025}/\textbf{0.025}/0.052\\
\hline
\end{tabular}
\label{table:5imbalance_iid}
\vspace{-1em}
\end{table}
\subsubsection{Ablation analysis}
\label{ssec:ablation}
Recall the weights calculation process in Fig.~\ref{fig:algo_weights}. $SD_i$ is composed of two parts: (1) the ratio of the number of data rows locally available at client $i$ to the global number of data rows, i.e., $\frac{N_i}{N_{all}}$; and (2) the similarity between the local data distribution of client $i$ and the global distribution, i.e., $1 - \frac{SS_i}{\sum_{i=1}^{P} SS_i}$. Our experiment in Sec.~\ref{ssec:imbalanced} shows the difference between Fed-TGAN\xspace and vanilla FL-TGAN\xspace (i.e., Fed-TGAN\xspace with equal weights for all clients). The results show that weighting clients differently based on their amount of data is indeed useful when the data quantity per client is skewed.
The contribution of the data quantity ratio is intuitive. In this ablation analysis, we therefore design a scenario where the client weights of Fed-TGAN\xspace are calculated using only the data quantity ratio of each client, without the similarity component.
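The two components can be sketched numerically as follows. This is a simplified illustration, not the exact algorithm of Fig.~\ref{fig:algo_weights}: the helper name is ours, $SS_i$ is reduced to a single per-client divergence scalar, and we assume the quantity ratio and similarity term are combined by a product, normalised to sum to one:

```python
import numpy as np

def aggregation_weights(n_rows, divergences):
    """n_rows[i]: rows held by client i; divergences[i]: aggregate statistical
    divergence SS_i between client i's columns and the federator's global
    estimate (larger means less similar). Returns weights summing to 1."""
    n = np.asarray(n_rows, float)
    ss = np.asarray(divergences, float)
    quantity = n / n.sum()              # N_i / N_all
    similarity = 1.0 - ss / ss.sum()    # 1 - SS_i / sum_i SS_i
    sd = quantity * similarity          # assumed combination of the two parts
    return sd / sd.sum()

# A data-rich but dissimilar client is down-weighted relative to its row count.
w = aggregation_weights([40000, 10000, 10000], [0.8, 0.1, 0.1])
```

In the example, the 40k-row client holds two thirds of the rows, yet its high divergence pushes its merged weight below that share, which is the behaviour the ablation below isolates.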
\begin{figure}[t]
\vspace{-0.5em}
\begin{center}
\subfloat[Avg-JSD by epoch]{
\includegraphics[width=0.49\columnwidth]{figures/5ablation_intrusion_avg_jsd_epoch.png}
}
\subfloat[Avg-JSD by time]{
\includegraphics[width=0.5\columnwidth]{figures/5ablation_intrusion_avg_jsd_time.png}
}\hspace{\fill}
\subfloat[Avg-WD by epoch]{
\includegraphics[width=0.49\columnwidth]{figures/5ablation_intrusion_avg_wd_epoch.png}
\label{fig:ablation_c}
}
\subfloat[Avg-WD by time]{
\includegraphics[width=0.5\columnwidth]{figures/5ablation_intrusion_avg_wd_time.png}
\label{fig:ablation_d}
}
\vspace{-0.5em}
\caption{MD-TGAN\xspace, Fed-TGAN\xspace and Fed-TGAN\xspace without similarity weights: 4 clients have 10k, 1 client has 40k rows of sampled \textit{Non-IID} data.}
\label{fig:5ablation_intrusion}
\end{center}
\vspace{-1em}
\end{figure}
\begin{table}[t]
\centering
\caption{Final similarity for MD-TGAN\xspace, Fed-TGAN\xspace and Fed-TGAN\xspace without similarity weights (Fed$\backslash$SW): 4 clients have 10k, 1 client has 40k rows of sampled \textit{Non-IID} data.}
\vspace{-0.5em}
\begin{tabular}{|C{1.4cm}|C{3cm}|C{3cm}|}
\hline
\textbf{Dataset} & \textbf{Avg JSD \small(MD/Fed/Fed$\backslash$SW)} & \textbf{Avg WD \small(MD/Fed/Fed$\backslash$SW)} \\
\hline
{Adult} & 0.37/\textbf{0.149}/0.261&0.107/\textbf{0.026}/0.027 \\
{Covertype} & 0.089/\textbf{0.05}/0.06 &0.125/\textbf{0.045}/0.056 \\
{Credit} &0.074/\textbf{0.014}/0.06&0.04/\textbf{0.01}/0.015 \\
{Intrusion} &0.208/\textbf{0.068}/0.073 &0.107/\textbf{0.032}/0.036\\
\hline
\end{tabular}
\label{table:5ablation}
\vspace{-1em}
\end{table}
To better show the importance of the similarity weights, we design a specific scenario for this experiment. Still with 5 clients, 4 of them contain 10k rows of IID data sampled from the original dataset, while the last client is modified to contain 40k rows formed by repeating a single row, sampled from the original dataset, 40k times. This last client thus has a large number of rows that carry very little information. Fig.~\ref{fig:5ablation_intrusion} shows the results on the Intrusion dataset.
One can immediately notice that this scenario hits MD-TGAN\xspace badly, since it treats all clients equally when updating the generator's weights. Moreover, in Fig.~\ref{fig:ablation_c} and \ref{fig:ablation_d}, one can see that the client with 40k repeated rows introduces oscillations into the curves of Fed-TGAN\xspace both with and without the similarity component. As expected, Fed-TGAN\xspace without the similarity component performs worse than Fed-TGAN\xspace. The results in Tab.~\ref{table:5ablation} (scores are taken at the time when MD-TGAN finishes 150 epochs of training) show that Fed-TGAN\xspace clearly outperforms MD-TGAN\xspace and Fed-TGAN\xspace without similarity computation on all datasets. The similarity component of Fed-TGAN\xspace therefore makes model convergence more stable.
\subsection{Training time analysis}
\label{ssec:training_time}
The experiments above all focus on the quality of the generated data. In this section, we study the training efficiency of MD-TGAN and Fed-TGAN\xspace. The first experiment scenario is the same as in Sec.~\ref{ssec:iid}: the FL system consists of 5 clients, each of which possesses the full original dataset. Fig.~\ref{fig:time_per_epoch} shows the time distribution of MD-TGAN and Fed-TGAN\xspace during one training epoch on the Intrusion dataset. \textit{Calculation on C} is measured at the server (or federator): when the server or federator invokes training on the clients, the time is measured from the start of the first client to the completion of all clients.
For MD-TGAN, all clients need to wait for synthesized data from the generator. We exclude this waiting time from \textit{Calculation on C} and add it to \textit{Communication}. \textit{Communication} counts the time for exchanging model weights, swapping discriminators between clients, and sharing training data between the server (or federator) and the clients.
We can see that for Fed-TGAN\xspace, the calculation time on federator is negligible since it is only averaging model weights. Fed-TGAN\xspace has a slightly higher calculation time on clients because it trains both generator and discriminator networks on clients.
Moreover, the communication time of MD-TGAN is much higher: to update the generator or the discriminators, MD-TGAN needs to send the generated data from the generator to each discriminator, and since MD-TGAN has only one server, this task cannot be distributed. Fig.~\ref{fig:time_per_epoch} shows that the per-epoch training time of MD-TGAN is more than 200\% higher than that of Fed-TGAN\xspace, and the communication time of Fed-TGAN\xspace is only 30\% of that of MD-TGAN.
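For reference, the aggregation performed by the federator amounts to a single weighted average over the model parameters, which is why its computation time is negligible compared to client-side training. A minimal numpy sketch (the helper name is ours; real models are PyTorch state dicts, flattened here to plain vectors):

```python
import numpy as np

def merge_models(client_params, weights):
    """FedAvg-style merge: weighted average of the clients' flattened
    parameter vectors. One pass over the parameters, no training involved."""
    merged = np.zeros_like(client_params[0], dtype=float)
    for w, p in zip(weights, client_params):
        merged += w * np.asarray(p, float)
    return merged

# Two toy "models" merged with weights 0.4 and 0.6.
merged = merge_models([np.zeros(4), np.ones(4)], [0.4, 0.6])
```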
\begin{figure}[t]
\vspace{-0.7em}
\begin{center}
\subfloat[Training time per epoch]{
\includegraphics[width=0.49\columnwidth]{figures/intrusion_5full_time_distribution.png}
\label{fig:time_per_epoch}
}
\subfloat[Impact of epochs per round]{
\includegraphics[width=0.5\columnwidth]{figures/intrusion_me_time_distribution.png}
\label{fig:varying_epochs}
}
\vspace{-0.7em}
\caption{(1) Epoch training time per phase of MD-TGAN\xspace and Fed-TGAN\xspace with 5 clients. \textit{S}erver, \textit{C}lients. (2) Total training time with varying local epochs per round for Fed-TGAN\xspace.}
\label{fig:timedistribution_intrusion}
\end{center}
\vspace{-1em}
\end{figure}
In the second experiment, instead of aggregating the models of every client at the end of each epoch, we vary the number of local training epochs between aggregations for Fed-TGAN\xspace. Fig.~\ref{fig:varying_epochs} shows the total training time for different numbers of local epochs per round, where all clients train for 500 epochs in total. With more local epochs per round, fewer rounds are needed for the whole training process: for 1, 10, 25 and 50 local epochs, the corresponding numbers of rounds are 500, 50, 20 and 10.
The large time decrease between 1 local epoch and the other settings is simply due to the reduced number of model aggregations; the differences among the remaining settings are less significant.
Fig.~\ref{fig:timedistribution_interval} shows the generation results under different numbers of local epochs. We see that for categorical columns, the Avg-JSD converges to a small value in all settings. For continuous columns, the Avg-WD of Fed-TGAN\xspace with 10 local epochs per round converges fastest and provides the best result until 1150s. This indicates that it is possible to speed up training of Fed-TGAN\xspace by using more local epochs while still preserving the statistical similarity between the real and synthetic datasets. However, increasing the local epochs to a large value can lead to over-fitting on the clients' local data, ultimately deteriorating performance. The number of local epochs thus introduces a trade-off between efficiency and performance.
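The interplay between local epochs and aggregation rounds boils down to how many merges a fixed per-client epoch budget buys. A small sketch (the helper name is ours; the budget of 500 epochs and the settings 1/10/25/50 match the experiment above):

```python
def rounds_for(total_epochs, local_epochs):
    """Number of aggregation rounds when each client trains `local_epochs`
    between merges, with the per-client total fixed at `total_epochs`."""
    assert total_epochs % local_epochs == 0
    return total_epochs // local_epochs

# More local epochs per round -> fewer (communication-heavy) merge rounds.
schedule = {e: rounds_for(500, e) for e in (1, 10, 25, 50)}
```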
\begin{figure}[t]
\vspace{-1em}
\begin{center}
\subfloat[Avg-JSD by time]{
\includegraphics[width=0.49\columnwidth]{figures/5full_variation_intrusion_avg_jsd_time.png}
\label{fig:time_analysis_variation_interval_jsd}
}
\subfloat[Avg-WD by time]{
\includegraphics[width=0.49\columnwidth]{figures/5full_variation_intrusion_avg_wd_time.png}
\label{fig:time_analysis_variation_interval_wd}
}
\vspace{-0.7em}
\caption{Varying epochs per round with 5 clients.}
\label{fig:timedistribution_interval}
\end{center}
\vspace{-1.2em}
\end{figure}
Next, we evaluate the per-epoch training time of Fed-TGAN\xspace and MD-TGAN under varying conditions. First, we fix the number and type of data (10k IID rows from the Intrusion dataset) on each client, and vary the number of clients from 5 to 20. Due to computing resource limitations, all experiments with a varying number of clients run the clients on CPUs, while the server (or federator) uses the GPU. To limit interference between processes, CPU affinity is used to bind each client to one logical CPU core.
Fig.~\ref{fig:varying_number} clearly shows that Fed-TGAN\xspace scales better than MD-TGAN\xspace with the number of clients. In MD-TGAN\xspace, the central server increasingly becomes the bottleneck when adding clients, due to the large amount of data exchanged with each client. Second, we fix the number of clients to 5 and vary the number of IID rows sampled from the Intrusion dataset from 10k to 40k. These experiments are run with clients on both CPU and GPU. The result in Fig.~\ref{fig:varying_data} shows that with an increasing number of data rows per client, both Fed-TGAN\xspace and MD-TGAN\xspace experience an increase in training time per epoch. The difference between the two algorithms is small when training on CPU, but on GPU the gap widens with the amount of data per client. The reason is that when sharing data between client and server, tensors residing on the GPU must first be moved to the CPU before being sent through the PyTorch RPC framework. Since Fed-TGAN\xspace trains all tabular GAN models locally on each client, its training is highly accelerated by GPUs: clients in the federated setting only need to move the model from GPU to CPU at the end of a round to share it with the federator. As the server in MD-TGAN exchanges messages far more often than the federator in Fed-TGAN\xspace, GPUs do not accelerate the training of MD-TGAN as much as that of Fed-TGAN\xspace.
\subsection{Further discussion}
When calculating the weights for merging local client models during initialization, we only use the individual column distributions to compare the local data with the global distribution. However, for tabular data, the inter-dependency between columns is also an important factor.
\begin{figure}[t]
\vspace{-0.5em}
\begin{center}
\subfloat[Scalability on number of clients]{
\includegraphics[width=0.49\columnwidth]{figures/intrusion_training_time_by_worker.png}
\label{fig:varying_number}
}
\subfloat[Scalability on data per client]{
\includegraphics[width=0.5\columnwidth]{figures/intrusion_training_time_by_number_gpu.png}
\label{fig:varying_data}
}
\caption{MD-TGAN\xspace and Fed-TGAN\xspace with: (a) Varying number of clients and fixed data per client. (b) Fixed 5 clients and varying amounts of data per client.}
\label{fig:time_intrusion}
\end{center}
\vspace{-1em}
\end{figure}
We do not use it for privacy reasons: since the server (or federator) cannot collect real data from the clients, inter-dependencies between columns cannot be inferred from the per-column distributions alone.
Analyzing the column inter-dependency of client data is, however, not useless. Recall the experiment in Sec.~\ref{ssec:ablation}: one malicious client contains 40k rows consisting of a single row repeated 40k times, while the other 4 clients each contain 10k rows of IID data sampled from the original dataset. For a dataset of one row repeated 40k times, every column is constant, so the correlation between any two columns is degenerate (Pearson correlation is undefined for constant columns). Therefore, while analyzing self-reported column inter-dependencies may not improve the similarity calculation under the current privacy-preserving rule, it may still help to identify some types of malicious clients.
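This degenerate case can be checked directly: for a table of one repeated row, every column has zero variance, so Pearson correlation (which divides by the column standard deviations) is undefined, surfacing as NaN in NumPy. That signature alone would flag such a client. A minimal sketch with hypothetical values:

```python
import numpy as np

# A "client" holding 40k rows that are all copies of one sampled row.
repeated = np.tile([[3.0, 7.0, 1.0]], (40000, 1))

# Constant columns have zero standard deviation, so the correlation
# coefficient is 0/0: every entry of the matrix is NaN.
with np.errstate(invalid="ignore", divide="ignore"):
    corr = np.corrcoef(repeated, rowvar=False)
```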
\textbf{Further note on Privacy-Preserving Technologies}.
FL indeed emerges as a viable solution that enables collaborative distributed learning without disclosing the original data. Orthogonal to FL, there is an array of privacy-preserving techniques that can be applied jointly to further strengthen the privacy guarantees of GANs and FL, namely differential privacy (DP)~\cite{dwork2006our, DPGAN, pategan} and homomorphic encryption (HE)~\cite{gentry2010computing, crossSilos}. Exploring advanced privacy-enhancing technologies is beyond the scope of this paper and will be addressed in future work.
\section{Introduction}
\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=1\textwidth]{figures/motivation_age_with_number_vertical.png}
\vspace{-1.0em}
\caption{Challenge of initializing column distribution of a tabular model with example of \textit{age} column from the \textbf{Adult} dataset: (1) original data, (2) skewed 1\% sampled data to build VGM encoder, and (3) generated data based on model using VGM encoder built on skewed data. (4) Comparison with generated data based on model using VGM encoder built from original data.}
\label{fig:motivation}
\end{center}
\vspace{-1em}
\end{figure*}
Generative Adversarial Networks (GANs)~\cite{gan} are an emerging methodology to synthesize data, ranging from images~\cite{stylegan, stylegan2}, to text~\cite{semeniuta2018accurate}, to tables~\cite{tablegan, ctgan}. At the core of a GAN is the training of two competing neural networks, a generator and a discriminator, where the former iteratively generates synthetic data and the latter judges its quality. During the training process, the discriminator needs to access the original data to provide feedback to the generator by comparing it with the generated data. However, such a privilege of direct data access may no longer be taken for granted due to the ever increasing concern for data privacy. For instance, training a medical image generator~\cite{asyndgan} on data from multiple hospitals precludes centralized data processing and calls for decentralized and privacy-preserving learning solutions.
In response to such demand, the federated learning (FL) paradigm has emerged. FL features decentralized local processing, under which machine learning (ML) models are first trained on clients' local data in parallel and subsequently securely aggregated by the federator. As such, the local data is not directly accessed, except by its owner, and only intermediate models are shared. The key design choice in constructing an FL framework for GANs is how to effectively distribute the training of the generator and discriminator networks across data sources. On the one hand, discriminators are typically located on clients' premises due to the need to process the clients' data. On the other hand, the prior art explores disparate approaches to training image generators: centrally at the server~\cite{mdgan} or locally at the clients~\cite{fegan}. Although tabular data is the most dominant data type in industry~\cite{arik2019tabnet}, there is no prior study on training GANs for tabular data under the FL paradigm.
Training state-of-the-art tabular GANs, e.g., CTGAN~\cite{ctgan}, from decentralized data sources in a privacy-preserving manner presents multiple additional challenges compared to image GANs. These challenges are closely related to how current tabular GANs explicitly model each column, be it a continuous or a categorical variable, via data-dependent encoding schemes and statistical distributions. Hence, the first challenge is to unify the encoding schemes across data sources that are not independently and identically distributed (Non-IID), and to do so in a privacy-preserving manner\footnote{Privacy-preserving solutions refer to ones that do not require full knowledge of the local data.}. Secondly, the convergence speed of GANs critically depends on how local models are merged~\cite{jill_fed}. For image GANs~\cite{fedgan}, the merging weights are determined jointly by the data quantity and the (dis)similarity of the class distribution across clients. Beyond that, tabular GANs need a more fine-grained (dis)similarity mechanism for deciding the merging weights, i.e., one that captures the differences in every column across clients.
In this paper, we design a federated learning framework, Fed-TGAN\xspace, that allows tabular GAN models to be trained from decentralized clients. In the architecture of Fed-TGAN\xspace, (i) each client trains its generator and discriminator networks on its local data, and (ii) the federator aggregates the generators and discriminators. We also propose two algorithmic features that address the fine-grained per-column modeling in a privacy-preserving manner. First, the novel feature encoding scheme of Fed-TGAN\xspace reconstructs each entire column distribution by bootstrapping from each client's partial information. Second, a precise weighting scheme effectively merges local models by considering the data quantity and the distribution dissimilarity of every column across all clients.
We design and implement a first-of-its-kind federated learning framework for tabular GANs using the PyTorch RPC framework, and extensively evaluate Fed-TGAN\xspace on a wide range of client scenarios with disparate data distributions.
Specifically, Fed-TGAN\xspace is compared with three architectural baselines: (1) a centralized approach, (2) vanilla federated learning, and (3) a multi-discriminator architecture~\cite{mdgan} comprising a single generator and multiple discriminators. The evaluation is performed on four commonly used machine learning datasets, and the statistical similarity between the generated and real data is reported as the evaluation metric. Our results show that Fed-TGAN\xspace remarkably reduces the training time per epoch compared to the multi-discriminator solution by up to 200\%.
Additionally, under an unbalanced amount of local data among the clients, Fed-TGAN\xspace converges much faster than vanilla federated learning. And for scenarios where the clients' data is not independently and identically distributed, the convergence of Fed-TGAN\xspace is not only stable, but also yields better similarity between generated and real data.
The main contributions of this study can be summarized as follows:
\begin{itemize}
\item We design and prototype a one-of-a-kind federated learning framework for the decentralized learning of tabular GANs (i.e. CTGAN) on distributed clients' data.
\item We create a privacy preserving feature encoding method, which allows the federator to build the global feature encoders (for both categorical and continuous columns) without accessing local data.
\item We design a table-similarity aware weighting scheme for merging local models, which is shown to achieve a faster convergence speed when the data quantity and data quality are highly imbalanced among clients.
\item We extensively evaluate Fed-TGAN\xspace to synthesize four widely used tabular datasets on the prototype testbed.
Across various client scenarios, Fed-TGAN\xspace achieves remarkably high similarity to the original data while also converging faster than vanilla FL and MD-GAN.
\end{itemize}
\section{Conclusion}
Due to ever increasing distributed data sources and privacy concerns, it is imperative to learn GANs in a decentralized and privacy-preserving manner, features offered by federated learning systems. While the prior art demonstrates the feasibility of learning image GANs in FL systems, it remained unknown whether tabular data, the predominant data type in industry, and its GANs can be deployed in an FL framework. This paper proposes and implements Fed-TGAN\xspace, a first-of-its-kind FL architecture and prototype for tabular GANs, overcoming challenges specific to tabular data. The two main features of Fed-TGAN\xspace are (i) privacy-preserving feature encoding to enable model initialization across heterogeneous data sources, and (ii) table-similarity aware weighting for merging local models. We extensively evaluate Fed-TGAN\xspace using a state-of-the-art tabular GAN and compare it with two alternative decentralized architectures, i.e., MD-TGAN\xspace and vanilla FL-TGAN\xspace, as well as a centralized approach. Our results show that Fed-TGAN\xspace can generate synthetic tabular data that preserves high similarity to the original data with faster convergence, even in the challenging case of Non-IID data among clients. The prototype of Fed-TGAN\xspace is currently being tested by a Fortune 500 financial institution. The promising evaluation results confirm that Fed-TGAN\xspace can help large organizations unlock data stored across multi-national silos to build better tabular data synthesizers in a privacy-preserving manner. We plan to release the source code after publication of the paper.
\ifnotdoubleblind
\fi
\bibliographystyle{abbrv}
\section{Fed-TGAN\xspace}
\label{sec:model}
In this section we introduce the design of Fed-TGAN\xspace, which adapts the FL structure presented in Sec.~\ref{sec:relatedwork} to overcome the challenges discussed in Sec.~\ref{sec:motivation}. To this end, we first add an initialization step to standardize the encoding of each column across all participants. Second, we choose the best encoding in a privacy-preserving manner by estimating the global data distribution without directly accessing the participants' local data. Third, we introduce a multi-dimensional weighting mechanism to ensure model convergence under Non-IID data distributions across multiple columns.
\ifshownotationtable
\begin{table}[t]
\begin{center}
\caption{Symbol description}
\label{tab:symboltable}
\begin{tabular}{|L{1.6cm} |C{6cm} |}
\hline
\textbf{Symbol} & \textbf{Description}\\
\hline
\hline
$P$\xspace & Number of clients\\
\hline
$Q$\xspace & Number of table columns\\
\hline
\Ni{i} & Number of table rows at client $i$\\
\hline
$N$\xspace & Total number of table rows across all clients\\
\hline
$X_{ij}$\xspace & Category frequency distribution of categorical column $j$\\
\hline
$X_{j}$\xspace & Aggregated category frequency distribution of column $j$\\
\hline
\vgmij{i} & Variational Gaussian mixture model of continuous column $j$ at client $i$\\
\hline
$VGM_{j}$\xspace & Aggregated global variational Gaussian mixture model of column $j$\\
\hline
$LE_{j}$\xspace & Label encoder for categorical column $j$\\
\hline
\Dij{i} & Sampled data for column $j$ and client $i$\\
\hline
$\bm{S}$ & $P$\xspace$\times$$Q$\xspace matrix of divergence terms for each column $j$ and client $i$\\
\hline
\end{tabular}
\end{center}
\end{table}
\fi
\subsection{Privacy-preserving feature encoding}
Our privacy-preserving model initialization comprises two steps as shown in Fig.~\ref{fig:collect_distribution} and~\ref{fig:distribute_information}.
{\bf Step 1}. Each of the $P$\xspace clients extracts the statistical properties of the local data and sends them to the federator. The information sent differs based on the column type. For any categorical column $j$, each client $i$ computes and sends the category frequency distribution $X_{ij}$\xspace. This information is used in three ways. First, the federator uses all distinct categories to build the label encoder $LE_{j}$\xspace for column~$j$.
A label encoder is a table which maps all possible distinct values of a categorical column into their corresponding rank in one-hot encoding. Second, the frequency information is used to build an aggregated global frequency distribution $X_{j}$\xspace for column $j$. Third, the sum of the frequency values is used to compute the number of table rows: i) available locally \Ni{i} at each client $i$; ii) available globally $N$\xspace across all clients.
The global label frequency distribution $X_{j}$\xspace, \Ni{i} and $N$\xspace are needed to estimate the similarity of clients' local data for computing the clients' weights for model aggregation. If no categorical columns are present in the tabular data, the client sends out \Ni{i} instead.
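The federator-side aggregation for a categorical column can be sketched as follows (a minimal illustration in plain Python; the function name and dict-based representation are ours, not Fed-TGAN\xspace's actual code):

```python
from collections import Counter

def aggregate_categorical(client_freqs):
    """Federator-side aggregation for one categorical column.

    client_freqs: list of dicts, one per client, mapping category -> count.
    Returns (label_encoder, global_freq, rows_per_client, total_rows).
    """
    global_freq = Counter()
    rows_per_client = []
    for freq in client_freqs:
        global_freq.update(freq)
        # The sum of frequencies gives the local row count N_i.
        rows_per_client.append(sum(freq.values()))
    # Label encoder: map each distinct category to its one-hot rank.
    label_encoder = {cat: rank for rank, cat in enumerate(sorted(global_freq))}
    return label_encoder, dict(global_freq), rows_per_client, sum(rows_per_client)
```

The same pass thus yields $LE_{j}$\xspace, $X_{j}$\xspace, each \Ni{i}, and $N$\xspace without the federator ever seeing individual rows.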
For any continuous column $j$, each client $i$ fits and sends the parameters of a VGM model \vgmij{i}. To estimate the global distribution of column $j$ the federator uses \vgmij{1}, \vgmij{2}, $\dots$, \vgmij{P} to create the data sets \Dij{1}, \Dij{2}, $\dots$, \Dij{P} with \Ni{1}, \Ni{2}, $\dots$, \Ni{P} data points. The federator then uses these data sets to fit a new global VGM model $VGM_{j}$\xspace for column $j$\footnote{It might be possible to fit the global model directly from the parameters of the local models by, e.g., adapting \cite{DBLP:conf/icpr/BruneauGP08}. This is left for future work.}.
$VGM_{j}$\xspace is used as the final encoder for column $j$.
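The sampling step can be sketched as follows, assuming each local VGM is summarized by per-mode weights, means, and standard deviations (a simplification of the full VGM parameterization; the names below are illustrative). The federator would then fit $VGM_{j}$\xspace on the pooled samples, e.g., with a Bayesian Gaussian mixture implementation:

```python
import numpy as np

def sample_from_vgm(weights, means, stds, n_rows, rng):
    """Draw n_rows synthetic points from a client's local VGM,
    given as per-mode (weight, mean, std) arrays."""
    modes = rng.choice(len(weights), size=n_rows, p=weights)
    return rng.normal(np.asarray(means)[modes], np.asarray(stds)[modes])

# Federator side: recreate D_1j..D_Pj with N_1..N_P points and pool them;
# the global VGM_j is then fit on `pooled` (fitting step omitted here).
rng = np.random.default_rng(0)
local_params = [([0.5, 0.5], [0.0, 10.0], [1.0, 1.0]),  # client 1: two modes
                ([1.0], [5.0], [2.0])]                   # client 2: one mode
rows = [1000, 500]
pooled = np.concatenate(
    [sample_from_vgm(w, m, s, n, rng) for (w, m, s), n in zip(local_params, rows)]
)
```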
{\bf Step 2}. The federator distributes all the column encoders $LE_{j}$\xspace and $VGM_{j}$\xspace to each client. Clients use this information to encode the local data and initialize the local models. Models initialized by using the same encoders will have the same input/output layers. This solves the first challenge outlined in Sec.~\ref{sec:motivation}. Note that the number and structure of the internal layers used for the generator and discriminator networks are predefined and independent of the data. In our evaluation against the MD structure this information is also used by the server to initialize the hosted generator network.
Note that in this process the federator never directly accesses the local data of the clients, only their statistical distribution, thus this addresses the second challenge from Sec.~\ref{sec:motivation}.
\subsection{Table-aware similarity weighting scheme}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1\textwidth]{figures/weights_calculation.png}
\caption{Weights calculation of Fed-TGAN\xspace. Starting with a matrix of divergence scores for each client and table: (1) normalizes scores by column; (2) aggregates the scores per client; (3) incorporates differences in local data quantities; and, (4) performs the final weight normalization.}
\label{fig:algo_weights}
\end{center}
\vspace{-1em}
\end{figure*}
After model initialization, the federator uses the collected global data statistics to pre-compute the weights for each client. These weights are used during training in the model aggregation (shown in Fig.~\ref{fig:training_process}) to smooth convergence in the presence of skewed data across the clients. The weight calculation process is presented in Fig.~\ref{fig:algo_weights}.
{\bf Step 0} is to build a $P$\xspace$\times$$Q$\xspace divergence matrix $\bm{S}$ where $P$\xspace is the number of clients and $Q$\xspace is the number of columns.
Each matrix element $S_{ij}$ is the divergence of client $i$ for column $j$ when compared to the global statistics of column $j$. The metric used depends on the type of column.
\begin{itemize}
\item {\bf Categorical columns} use the Jensen-Shannon Divergence (JSD)~\cite{jsd}. The JSD between two probability vectors $p$ and $q$ is defined mathematically as $\sqrt{\frac{D(p||m)+D(q||m)}{2}}$ where $m$ is the point-wise mean of $p$ and $q$, and $D$ is the Kullback-Leibler divergence~\cite{Joyce2011}. The JSD distance metric is symmetric and bounded between 0 and 1 enabling a hassle-free interpretation of results. For each categorical column $j$ and client $i$ we compute $S_{ij}$ as JSD between $X_{ij}$\xspace and $X_{j}$\xspace, i.e., $S_{ij} = \text{JSD}($$X_{ij}$\xspace, $X_{j}$\xspace$)$.
\item {\bf Continuous columns} use the Wasserstein Distance (WD)~\cite{wgan_test}. The first Wasserstein distance between two distributions $u$ and $v$ is defined as $WD(u,v)=\inf_{\pi\in\Gamma(u,v)}\int_{\mathbb{R}\times\mathbb{R}}|x-y|\,d\pi(x,y)$ where $\Gamma(u,v)$ is the set of probability distributions on $\mathbb{R}\times\mathbb{R}$ whose marginals are $u$ and $v$ on the first and second factors, respectively. It can be interpreted as the minimum cost to transform one distribution into another, where the cost is given by the amount of distribution mass shifted times the distance it is shifted.
For each continuous column $j$, we use the data sets \Dij{i} created previously for each client $i$ to compute $S_{ij}$ as the WD between \vgmij{i} and $VGM_{j}$\xspace.
\end{itemize}
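Both divergence terms can be computed directly from the statistics collected at initialization. A minimal NumPy sketch (assuming category frequency vectors are aligned on the same order, e.g., via the label encoder; base-2 logarithms so the JSD distance is bounded by 1; and equal-sized 1-D samples for the empirical WD):

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon distance between two aligned probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0          # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return np.sqrt(0.5 * (kl(p, m) + kl(q, m)))

def wd1(u, v):
    """First Wasserstein distance between two equal-sized 1-D samples,
    e.g., draws from VGM_ij and the global VGM_j."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))
```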
{\bf Step 1} normalizes the matrix $\bm{S}$ across the $P$\xspace clients for each table column $j$. This is done by dividing each matrix element by the sum of the elements in the corresponding matrix column. This step maintains the relative divergence between different clients with respect to the global column data distribution while giving the same importance to all columns (all columns sum up to 1).
{\bf Step 2} aggregates the divergence across the different table columns $j$. This is done via a sum along the rows of the matrix. For each client $i$ the resulting score $SS_i$ can already represent the divergence between client and global data distribution, but it does not yet take into account possible differences in the amount of local data available at each client.
{\bf Step 3} fuses the divergence in data values and data quantity at each client. Step 3 first normalizes the divergence metric between 0 and 1 across the clients. Then it uses the complement to represent similarity instead of divergence and combines it with the ratio of local data available with respect to the global data, i.e., $\frac{N_i}{N_{all}}$. The resulting $SD_{i}$ takes into account differences in both the number and the distribution of values of the local vs.\ global data, across all the dimensions given by the different columns, addressing the third challenge from Sec.~\ref{sec:motivation}.
{\bf Step 4} computes the final weights $W_i$. The $W_i$ for each client $i$ are obtained by passing the $SD_{i}$ to a softmax function. $W_i$ is the weight that the federator will use when it aggregates the model from client $i$.
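The four steps can be sketched end-to-end as follows. Note two assumptions on our part: Step 3's normalization is taken to be min-max, and the similarity is combined with the data-quantity ratio by multiplication (the text does not pin down the operator):

```python
import numpy as np

def client_weights(S, n_rows):
    """Sketch of the table-aware weighting.
    S: P x Q divergence matrix from Step 0; n_rows: rows per client."""
    S = np.asarray(S, float)
    n = np.asarray(n_rows, float)
    S1 = S / S.sum(axis=0, keepdims=True)                 # Step 1: column-normalize
    SS = S1.sum(axis=1)                                   # Step 2: per-client divergence
    sim = 1.0 - (SS - SS.min()) / (SS.max() - SS.min())   # Step 3: complement (assumed min-max)
    SD = sim * (n / n.sum())                              # Step 3: fuse with N_i / N (assumed product)
    e = np.exp(SD - SD.max())                             # Step 4: softmax
    return e / e.sum()
```

In this sketch, a client whose columns diverge less from the global statistics, or who holds more data, receives a larger aggregation weight $W_i$.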
\subsection{Implementation details}
Fed-TGAN\xspace is implemented using the PyTorch RPC framework. This choice makes it easy to control the flow of the training steps from the federator. Clients just need to join the group, then wait to be initialized and assigned work. To parallelize the training across all clients, RPC provides a function \textit{rpc\_async()} which allows the federator to make non-blocking RPC calls to run functions at a client.
To implement synchronization points, RPC provides a blocking function \textit{wait()} that waits for the return of a previously issued \textit{rpc\_async()} call. The return value of \textit{rpc\_async()} is a \textit{future} object; once \textit{wait()} is called on it, the process blocks until the return values are received from the client.
The federator starts the training on all clients via \textit{rpc\_async()}. Then it waits for the new model from each client via \textit{wait()}. Once all models are received, they are aggregated into a single model using the client weights and redistributed to the clients via \textit{rpc\_async()}. Once all clients confirm the reception of the updated model (via \textit{wait()}) the federator starts the next round of training. We plan to provide the source code via Github after publication of the paper.
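The merge step itself is a weighted average of the client parameters using the precomputed $W_i$, in the style of FedAvg. A sketch with NumPy arrays standing in for model tensors (names are illustrative, not the actual Fed-TGAN\xspace code):

```python
import numpy as np

def aggregate_models(state_dicts, weights):
    """Weighted average of client model parameters.
    state_dicts: list of {param_name: ndarray}, one per client;
    weights: the per-client W_i computed by the federator."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged
```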
One weakness of the current RPC framework (PyTorch v1.8.1) is that it does not support transmitting tensors residing on the GPU through an RPC call. This means that each time we collect or update the model weights, we pay an extra time cost to move the weights from GPU to CPU, or to reload them from CPU to GPU.
\section{Preliminary and Motivation}
\label{sec:motivation}
\begin{figure*}[t]
\begin{center}
\subfloat[Centralized GAN]{
\includegraphics[width=0.26\textwidth]{figures/centralized_gan.png}
\label{fig:centralized_gan}
}\hspace{\fill}
\subfloat[Multi-Discriminator structure]{
\includegraphics[width=0.35\textwidth]{figures/mdgan_plat.png}
\label{fig:mdgan}
}
\hspace{\fill}
\subfloat[Federated Learning structure]{
\includegraphics[width=0.35\textwidth]{figures/fegan_plat.png}
\label{fig:fedgan}
}
\vspace{-0.5em}
\caption{Privacy-preserving framework for distributed GAN}
\label{fig:algo_structure_moti}
\end{center}
\vspace{-1em}
\end{figure*}
{\textbf{Preliminary}}. Key in federated, and generally decentralized, learning is that all participating nodes use the same model structure. This structure heavily depends on the input data type and its encoding. Previous federated GAN studies~\cite{mdgan, fedgan, asyndgan} focus on only one type of data, i.e., images. Image data makes it easy to pre-define the encoding and neural network structure independently from the specific images located at each participating node. However, the same does not apply to tabular data. Each column requires an encoding which shapes the input layer, influencing the model structure. These encodings depend on both the data type, i.e., categorical or continuous, and the data values, i.e., the data distribution. For example, the state-of-the-art generative tabular data models CTGAN~\cite{ctgan} and CTAB-GAN~\cite{ctabgan} use one-hot encoding for categorical columns and variational Gaussian mixture (VGM) encoding for continuous columns. Both encoding types require knowing the per-column global data properties: one-hot encoding requires the list of all possible distinct values, and VGM encoding depends on the estimation of all possible modes with their means and variances. Hence, image federated learning systems cannot readily be applied to tabular data problems.
Tabular GANs on federated learning systems need to agree on a common encoding and, consequently, a common model structure during initialization. For this it is important to know the column-by-column data distribution across all participants. This is straightforward if privacy is of no concern: e.g., collect all the client data on one node, decide the encoding, and distribute the decision to all other nodes. However, this goes against the fundamental aim of federated learning: training models without sharing detailed information about local data, to preserve privacy. For categorical columns the problem can be solved by the participants sharing the list of distinct values in each column, with little to no privacy infringement. But for continuous columns the problem is not as straightforward, due to the VGM requirement.
{\textbf{Motivation Example}}. We demonstrate the challenge of encoding continuous columns with the following experiment using the Adult dataset (see Sec.~\ref{ssec:setup} for details on the setup and dataset). Here we momentarily relax the privacy requirement and assume that the federator coordinating the federated learning has access to 1\% of the global data. This 1\% data is used to build the VGM encoders for all continuous columns, which are then distributed and used by all clients to encode the local data. Without a global view it is impossible to know how well the 1\% data represents the global population. If this 1\% data is sampled in a skewed way it can severely degrade the encoding quality, leading to poor model performance. We show this effect on the \textit{age} column. We select the 1\% data from the tail of the age distribution. The distributions of the original and selected data are shown in Fig.~\ref{fig:motivation}(1) and Fig.~\ref{fig:motivation}(2). A VGM encoder fit on the sampled data will encode well only the data between 75 and 90 years, which, however, represents only the data above the 99$^{th}$ percentile of the real age distribution. Using this encoder to train a model leads to poor generation performance, i.e., the generated samples are not representative of the original data (see Fig.~\ref{fig:motivation}(3)). For comparison, Fig.~\ref{fig:motivation}(4) shows the distribution of samples generated with a VGM encoder built from all of the data.
Another key issue for federated tabular learning is the weighting of models from different clients during model aggregation. This issue is exacerbated with tabular data. Non-IID data across clients can lead to poor training convergence and, ultimately, poor model performance. Federated learning systems counter this effect by weighting each client model differently, based on the similarity of local data to the global data. Image federated learning estimates this similarity based on the distribution of labels, which can be seen as 1-dimensional data. But for tabular data, each column can be seen as one dimension, requiring a multi-dimensional solution. Moreover, while the same method as for image labels can be applied to categorical columns, one cannot directly estimate the similarity for continuous columns without knowing all the data points. Thus, a new weighting method is needed for tabular data.
\section{Introduction}
Generics, statements in the form of \textit{``bananas are yellow"} or \textit{``birds fly"}, express generalizations about members of a category, and are frequent in everyday language \cite{carlson1977reference, carlson1995generic}. These expressions refer to a category as a whole (e.g., \textit{birds}), as opposed to a single instance of a category (e.g., \textit{a bird}), and while referring to a conceptual category as a whole, generic statements may assert information that, while typical, does not cover all instances, and hence are not necessarily invalidated by counter-examples \cite{mccawley1981everything, gelman2004learning}. For example, the statement \textit{``birds fly"} is not invalidated by the existence of penguins, a bird that cannot fly. Moreover, experience with only a single instance of a conceptual category can be sufficient to acquire generic knowledge \cite{carlson1995generic, prasada2000acquiring}. Besides the prevalence and the expressive power of generics in language, generics also play an important role in child language acquisition. During conceptual development, generic statements complement observational learning and help construct conceptual knowledge, allowing children to learn hierarchical information that cannot be learned from the world with experience alone \cite{cimpian2009information, gelman2009learning}.
\citeA{prasada2000acquiring} emphasized the acquisition problem posed by the imprecise nature of generic language and our ability to learn generic knowledge from minimal experience, outlining that the requirements of a formal system for acquiring generic knowledge are to complement other learning mechanisms and to establish relationships between categories and properties. By age 2, children manifest the capabilities of such a system, as they can recognize category labels \cite{graham2004thirteen} and make use of names to classify objects, even if an instance is atypical for a given category \cite{nazzi2001linguistic, jaswal2007looks}. Considering how generic utterances such as \textit{``penguins are birds"} can override interpretation of perceptual features and allow learning hierarchical knowledge even from a young age \cite{gelman2009learning, Cimpian2010}, it is clear that language can inform perceptual learning, making generics a critical component of language acquisition worth modeling.
Inspired by the plethora of research on generics and child language development, we introduce and implement a developmentally plausible model for learning concepts from generic language that assumes no prior conceptual knowledge, learns meanings of words and concepts in a manner similar to infants (from the ground up), and demonstrates the capacity to make inference with the learned concepts. The importance of generic language has been recognized in the artificial intelligence and natural language processing community for tasks that involve knowledge acquisition, ontology development, and semantic inference \cite<e.g.,>{Reiter2010, Friedrich2015, Sedghi2018}. These approaches generally make use of large-scale resources and employ methods such as supervised learning that are not suitable for modeling child language and conceptual development. We base our system on ADAM, a software platform for modeling early language acquisition \cite{adam2021}. Through pairs of perceptual representations of situations and linguistic utterances describing these situations, ADAM learns interpretable representations for \textit{concepts} such as objects, attributes, relations, and actions (see Model and Representation section for further detail). ADAM's cognitively plausible design choices regarding perceptual representations \cite{marr1982vision, biederman1987recognition} and simple word/pattern learning mechanisms \cite{webster-marcus-1989-automatic, Yu2012, stevens2017pursuit} lay the groundwork for modeling higher level semantics in child language acquisition. Given the interplay between generic statements, concepts, and observational learning in natural language \cite{carlson1977reference, carlson1995generic, gelman2009learning} and ADAM's power to learn meanings of concepts and identify them across situations, we consider ADAM to be a promising system for modeling acquisition of generic knowledge and the resultant semantic category inference.
We expand ADAM's modeling capabilities to capture semantic associations and generics by introducing a generic learner module and combining ADAM's representations with an additional layer of abstraction, a network data structure called \textit{the concept network}. The concept network organizes the associations between concepts and learns hierarchies and properties about the concepts through observation as well as generic utterances. Through three tasks that use generic language across different learning curricula, we demonstrate that ADAM, coupled with the generic learner module and the concept network, can acquire generic knowledge, establish semantic certainty with generics language, and make category inference. Our demonstrations provide an example of how ADAM can be used to model language acquisition.
\section{Model and Representation}
\subsection{Meaning Representation in ADAM}
\begin{figure}
\centering
\includegraphics[width=8cm]{bird.png}
\hrule
\centering
\includegraphics[width=8.5cm]{animals.png}
\caption{Examples of a concept network. Nodes represent concepts (e.g. \textit{bird}, \textit{animal}, \textit{sitting down}). The edges are labeled by the semantic relation between the concepts (slot), and the association strength (weight). The top shows a portion of the concept graph for \textit{bird}. The bottom shows \textit{animal} and its neighbors.}
\label{fig:cn-graph}
\end{figure}
ADAM is a software platform for experiments in child language learning. The system can process a range of expressions covering a very young learner's vocabulary and grammar, such as objects (\textit{``a ball"}), adjectives (\textit{``a red ball''}), prepositions (\textit{``a ball on a table"}) and actions (\textit{``Mom rolls a ball"}). ADAM uses \textit{perception graphs} to represent situations (e.g. a cup sitting on a table) that it perceives. Perception graphs consist of perceptually plausible components, such as geons \cite{biederman1987recognition}, body parts, colors, and regions. For example, when the model observes a cup on the table, the model perceives a structured graph that consists of the cup's color, hollow shape, its handle, and its position relative to the table. The model learns patterns over observed perception graphs. The patterns represent hypotheses about the meaning of individual \textit{concepts} and are consolidated throughout the learning process. For example, the learner could perceive perception graphs and utterances from multiple situations that involve a bear, such as hearing \textit{ ``a brown bear"} while observing a bear by itself, hearing \textit{``a bear sits"} while observing a bear sitting, and hearing \textit{``two bears"} while observing two bears, and eventually learn a representation of \textit{``bear"}'s meaning. ADAM uses the learned mapping between linguistic structures and meaning representations to describe new scenes. Before learning, we define a configurable curriculum that pairs observations and descriptions. During testing, the perception input is generated without the descriptions.
\subsection{The Concept Network}
We extend ADAM's representations of concepts and patterns with the concept network, a graph-based data structure that represents learned concepts (e.g., objects, attributes, and actions) as nodes and the semantic associations between them as edges. The concept network enables one to see whether any two concepts are related, what their relation is, and how strong this relation is, by looking at the edge connecting the two concepts. Contrary to ADAM's perception graphs and patterns that are used to describe the components of a scene or perceptual properties of a perceived object, the concept network represents the overall semantic and conceptual knowledge of the learner learned over time. We represent each concept as a single node in the network; for example, \textit{bird} is represented by a single concept node in the concept network, and other semantically related concepts are its neighbors. Figure \ref{fig:cn-graph} (top) visualizes a section of the learned concept network structure for the \textit{bird} concept. Using the concept network, we can infer that a concept such as \textit{bird} is an \textit{animal}, and that it is associated with the concept \textit{fly}. Since these associations are formed through the learner's observations of occurrences of concepts throughout learning, the association between the \textit{bird} and \textit{fly} concepts would suggest that a bird was observed while flying. Similarly, the network can be used to observe the categories formed by the learner; Figure \ref{fig:cn-graph} (bottom) shows the \textit{animal} node and its neighbors. To construct the nodes and edges in this network, as the learner observes situations and perceives concepts and semantic relations between them (as described in \citeA{adam2021}), it simultaneously creates nodes for concepts (e.g., \textit{bird}, \textit{fly}) and edges to associate them in the network.
The concept network consists of about 250 concepts when generated with ADAM's standard curriculum (see \citeA{adam2021} for a complete list of content descriptors).
\subsubsection{Edges and Semantic Associations}
The edges in the concept network represent the semantic relation between the two concepts and the association strength between them. For action concepts, the relation often denotes an argument relation with the neighboring concept; the argument relation is labeled with \textit{slots} that describe a slot an argument can take in a phrase. For instance, given \textit{``Mom drinks juice"}, \textit{Mom} has argument slot position \textit{slot-1} with \textit{drink}, while \textit{juice} has \textit{slot-2}. The strengths in the network start from \textit{0} and have a maximum of \textit{1.0}. Throughout learning, the association strength between two concepts is updated with each co-occurrence of the two concepts, using the plateauing update function \(a \leftarrow a + 0.2\,(1 - a)\). The edges that represent semantic associations initialized with generic statements have maximum association strength, since we tend to believe what we are told is true. These edges are also marked with a slot label that matches the argument relation of the statement, or the \textit{is} label if the statement is a predicate.
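The edge updates can be sketched as follows (a minimal stand-in for ADAM's internal data structures; the class and method names are ours, not ADAM's API):

```python
class ConceptNetwork:
    """Sketch of the concept network's edge-strength bookkeeping."""

    def __init__(self):
        # (concept_a, concept_b, slot) -> association strength in [0, 1]
        self.edges = {}

    def observe(self, a, b, slot):
        # Co-occurrence update: a <- a + 0.2 * (1 - a), plateauing at 1.
        key = (a, b, slot)
        s = self.edges.get(key, 0.0)
        self.edges[key] = s + 0.2 * (1.0 - s)

    def assert_generic(self, a, b, slot):
        # Generic utterances establish semantic certainty: strength = 1.
        self.edges[(a, b, slot)] = 1.0
```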
\subsubsection{Matrix Representation}
We represent the network as a weighted adjacency matrix to facilitate rich operations such as vector similarity between concepts. In order to preserve argument relation information in the edges in the matrix, each action concept in the concept network is first translated to multiple concepts that include the argument relation (e.g., \textit{drink - slot 1} to represent drink in \textit{``Mom drinks"} and \textit{drink - slot 2} to represent drink in \textit{``drinking juice"}).
\subsubsection{Concept Similarity} We use concept similarity for evaluation. Each concept is represented as an adjacency vector extracted from the weighted adjacency matrix. Vectors for \textit{category concepts} (e.g., \textit{animal}) are represented as the average of the vectors of all members of that category. Similarity between pairs of concepts is measured using cosine similarity. Measuring concept similarity provides insight into the hierarchical structures that naturally emerge throughout learning, as visualized in Figure \ref{fig:clustering}, which shows a clustered heatmap of the vector representations of concepts that are learned through a curriculum of objects and actions, without generics. We see the formation of categories that correspond to body parts, liquids, and animate objects.
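A toy illustration of the matrix view and the similarity computation (the concepts and association strengths below are made up for illustration, not taken from a learned network):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two adjacency vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rows are concept adjacency vectors over association strengths;
# columns index the other concepts they may connect to.
M = np.array([[0.0, 0.2, 0.0, 0.8],   # dog
              [0.2, 0.0, 0.0, 0.9],   # cat
              [0.0, 0.0, 0.7, 0.0]])  # juice
# A category vector is the mean of its members' vectors.
animal = M[:2].mean(axis=0)
```

Under this representation, \textit{dog} is far more similar to the \textit{animal} category vector than \textit{juice} is.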
\subsection{Generic Utterances}
To teach generic language and explicit categories to the learner, we present simple scenes paired with generic utterances such as \textit{``bears sit"} and \textit{``bears are brown"}. Since generic language encodes generic knowledge (e.g., \textit{``birds fly"} stipulates that birds generally can fly), inputs to the learner that are in generic form maximize the corresponding association strength when observed by the learner. For instance, while the non-generic utterance \textit{``a bear sits"} increases the association strength between \textit{bear} and \textit{sit}, observing \textit{``bears sit"} maximizes it.
\begin{figure}
\centering
\includegraphics[width=8cm]{cluster-small.png}
\caption{A qualitative demonstration of how hierarchical categories can be measured with concept similarity. We see the formation of categories in the concept space, e.g., animate objects \textit{(cat, dog, Mom, baby, Dad)}, liquids \textit{(water, juice, milk)}, and body parts \textit{(head, hand)}. The vector representations of learned concepts were clustered with hierarchical clustering method \textit{clustermap} \protect\cite{mullner2011modern, waskom2020seaborn}.}
\label{fig:clustering}
\end{figure}
\subsubsection{Generic Learner Module}
The ADAM system is built of learning modules. Each module targets a specific type of learning, such as objects, actions, and relations. We build a generic learner module to enable learning from generic utterances. The generic learner first verifies that the utterance is a generic by checking whether all the recognized nouns in the utterance are bare plurals, an indicative property of generics in English \cite{lyons1977semantics, gelman2004learning}.
Once confirmed, the learner recognizes the concepts mentioned in the utterance, and forms a semantic connection between these concepts in the concept network. The association strength of this connection is maximal, which implies a semantic certainty that is learned from generic input.
In the special case where the generic statement is a predicate containing a previously unknown category, such as \textit{animal} in \textit{``dogs are animals,"} we create a new category concept node in the concept network and associate the object concept \textit{(dog)} with the category concept \textit{(animal)}. If a novel object concept appears in a generic statement as a member of a category concept, such as \textit{wug} in \textit{``wugs are animals,"} we create a new object concept and associate it with the category concept as well as the features of the members of the category. Overall, the module can interpret statements in the form of \textit{``birds fly,"} \textit{``birds are animals"} and \textit{``wugs are animals"}.
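An illustrative sketch of the predicate cases (\textit{``dogs are animals''} and \textit{``wugs are animals''}), assuming the bare-plural check has already confirmed the utterance is generic. A plain dict-of-sets stands in for the concept network, and feature inheritance for novel objects is omitted; the function and structures here are ours, not ADAM's actual API:

```python
def interpret_generic_predicate(network, subject, predicate_noun, known_objects):
    """Handle generics of the form '<subject>s are <predicate_noun>s'.

    network: dict mapping category concept -> set of member concepts;
    known_objects: set of object concepts learned so far."""
    # Create the category concept node if it does not exist yet.
    network.setdefault(predicate_noun, set())
    # Novel object concept (e.g., 'wug'): create it as well.
    if subject not in known_objects:
        known_objects.add(subject)
    # Associate the object concept with the category concept.
    network[predicate_noun].add(subject)
    return network
```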
\section{Model Evaluation}
We present three tasks to illustrate the behavior of the system across different learning conditions.
\subsection{Task 1: Generic Color Predicates}
The generic color predicates task shows how the system can learn generics by establishing stronger associations between learned concepts. In other words, we measure how the generic input can change the learner's understanding of the world by influencing semantic connections. In this task, we first teach objects and arbitrary colors to the system with non-generic utterances (e.g., \textit{a red truck}) based on the standard object and color curricula provided in the ADAM system. We then evaluate the associations between the object and color concepts that the system has learned. Then, we pick colors that are typical for the object concepts and teach them with generic statements (\textit{``watermelons are green," ``papers are white," ``cookies are light brown"}), and evaluate the associations between the object and color concepts again. In the non-generic training phase, each object appears with a variety of arbitrary colors (e.g., cookies with blue, green, and light-brown colors), which we expect will lead the model to learn associations between an object and many colors. We expect the presentation of generic input to yield stronger associations between the object and the color used in a generic utterance.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\pgfplotstableread{
X Object Color Before After
1 cookie blue 0.2 0
2 cookie green 0.488 0
3 cookie light-brown 0.2 0.8
4 cookie red 0.2 0
5 paper blue 0.2 0
6 paper dark-brown 0.36 0
7 paper red 0.2 0
8 paper white 0.2 0.8
9 watermelon dark-brown 0.2 0
10 watermelon green 0.2 0.8
11 watermelon light-brown 0.2 0
12 watermelon red 0.2 0
}\datatable
\begin{axis}[
title=Association Strengths Between Objects and Associated Colors,
title style={align=center, font=\small},
axis lines*=left, ymajorgrids,
width=8cm, height=4cm,
ymin=0,
ybar stacked,
ylabel=Association\\Strength,
ylabel style={align=center, yshift=-1.5ex, font=\small},
yticklabel style={font=\small},
bar width=6pt,
legend style={at={(0.5,-1), font=\small},
anchor=north,legend columns=-1},
xtick=data,
xticklabels from table={\datatable}{Color},
xticklabel style={yshift=-5ex,rotate=60,anchor=mid east, font=\small},
draw group line={Object}{cookie}{cookie}{-4ex}{5pt},
draw group line={Object}{paper}{paper}{-4ex}{5pt},
draw group line={Object}{watermelon}{watermelon}{-4ex}{5pt},
after end axis/.append code={
\path [anchor=base east, yshift=0.5ex]
(rel axis cs:0,0) node [yshift=-4ex, font=\small] {Object}
(rel axis cs:0,0) node [yshift=-8ex, font=\small] {Color};
}
]
\addplot[ybar, fill=blue!80!white] table [x=X, y=Before] {\datatable}; \addlegendentry{Before Generics}
\addplot[ybar, fill=red!40!white] table [x=X, y=After] {\datatable}; \addlegendentry{Increase After Generics}
\end{axis}
\end{tikzpicture}
\caption{Results of the generic color predicates task, plotting the associations strengths between objects and colors that are associated with them, before and after observing generic input. At first, each object is associated with an arbitrary set of colors that reflect the curriculum. The only associations that increase are correctly the ones observed through generics.}
\label{fig:color-plot}
\end{figure}
\begin{figure*}[t]
\centering
\begin{tikzpicture}
\pgfplotstableread{
X Object Animal-Similarity Food-Similarity People-Similarity Curriculum
1 wug\\(animal) 0.990 0.196 0 objects-and-kinds
2 vonk\\(food) 0.141 0.981 0 objects-and-kinds
3 snarp\\(people) 0 0 1 objects-and-kinds
4 wug\\(animal) 0.913 0.095 0.336 objects-kinds-and-generics
5 vonk\\(food) 0.112 0.981 0 objects-kinds-and-generics
6 snarp\\(people) 0.345 0 0.928 objects-kinds-and-generics
7 wug\\(animal) 0.991 0.376 0.611 obj-actions-kinds-generics
8 vonk\\(food) 0.401 0.949 0.181 obj-actions-kinds-generics
9 snarp\\(people) 0.611 0.161 0.996 obj-actions-kinds-generics
}\datatable
\begin{axis}[
title={Similarity of Novel Objects to Each Category across Curricula},
ybar, axis on top,
height=3.5cm, width=15cm,
bar width=0.2cm,
ymajorgrids, tick align=inside,
enlarge y limits={value=.1,upper},
ymin=0, ymax=1,
axis x line*=bottom,
axis y line*=left,
tickwidth=0pt,
enlarge x limits=true,
legend style={
at={(1.1,0.5)},
anchor=north,
legend columns=1, font=\small
},
ylabel={Similarity},
ylabel style={yshift=-2ex, font=\small},
yticklabel style={font=\small},
xtick=data,
xticklabels from table={\datatable}{Object},
xticklabel style={align=center, yshift=-5ex, anchor=north, font=\small},
draw group line={Curriculum}{objects-and-kinds}{Level 1 (Low)}{-4ex}{7pt},
draw group line={Curriculum}{objects-kinds-and-generics}{Level 2 (Medium)}{-4ex}{7pt},
draw group line={Curriculum}{obj-actions-kinds-generics}{Level 3 (High)}{-4ex}{7pt},
after end axis/.append code={
\path [anchor=base east, yshift=0.5ex]
(rel axis cs:0,0) node [yshift=-8ex, font=\small] {Object}
(rel axis cs:0,0) node [yshift=-4ex, font=\small] {{Complexity}};
}
]
\addplot[ybar, fill=blue!80!white] table [x=X, y=Animal-Similarity] {\datatable}; \addlegendentry{Animal}
\addplot[ybar, fill=red!40!white] table [x=X, y=Food-Similarity] {\datatable}; \addlegendentry{Food}
\addplot[ybar, fill=orange!5!white] table [x=X, y=People-Similarity] {\datatable}; \addlegendentry{People}
\end{axis}
\end{tikzpicture}
\caption{Results of the category inference task, plotting the similarity of novel objects (e.g., \textit{wug}) to animal, food, and people categories across curricula of varying complexity. The correct categories of objects, i.e., the categories in which they were presented in the generic input, are labeled in parentheses. Regardless of the curriculum complexity and concept category, the learner associates new objects most strongly with the correct category. Incorrect associations increase with curriculum complexity.}
\label{fig:category-plot}
\end{figure*}
\subsubsection{Results and Discussion}
We measure the association strengths between objects and all associated colors before and after observing generic color predicates. The results are plotted in Figure \ref{fig:color-plot}. Prior to hearing generic input, objects have associations to many arbitrary colors, reflecting the initial non-generic curriculum. Once the generic color predicates are observed, we see that each object has a significantly stronger association with the appropriate color as stated in the generic utterances. Upon observing generics, only the association strengths for the correct colors are updated. These results demonstrate that our model interprets and uses generics as designed, to establish strong associations between an object and a prototypical version of some feature.
\subsection{Task 2: Category Inference}
The category inference task demonstrates how the model forms categories, e.g., \textit{animals} and \textit{food}. As visualized in Figure \ref{fig:clustering}, concept categories form naturally in the representational space throughout learning, as similar objects have similar semantic roles in actions; for instance, only animate objects \textit{eat}, and only food objects are \textit{eaten}. In this task, we label these implied categories using generic statements (e.g., \textit{``cats are animals,'' ``dogs are animals''}). Then, we compare the novel concepts placed in these categories with each possible category. We first teach the learner a particular curriculum, then a set of previously unknown objects in known categories (e.g., \textit{``wugs are animals''}), and finally evaluate the similarity of the new item (e.g., \textit{wug}) to each possible category. We repeat this process for different curricula with increasing semantic complexity, by including more complicated content such as \textit{plurals} and \textit{actions}. Table \ref{task-2-tables} shows different learning curricula (top), example utterances (center), and examples with unknown categories (bottom).
We expected that, while the model should be able to form categories regardless of the curricula, the formed categories would be most distinct in the less complex conditions.
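To make the similarity evaluation concrete, here is a toy sketch. The vectors and the use of plain cosine similarity are our own illustrative assumptions (the actual system compares association patterns in its concept network); the point is only that a novel concept is assigned to the category whose profile it most resembles.

```python
import math

def cosine(u, v):
    """Cosine similarity between two association vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Hypothetical association profiles over shared semantic nodes
# (say: eats, is-eaten, walks); all numbers are invented for illustration.
wug = [0.9, 0.1, 0.8]          # novel concept taught as an animal
categories = {
    "animal": [1.0, 0.0, 0.9],
    "food":   [0.1, 1.0, 0.0],
    "people": [0.7, 0.0, 1.0],
}

sims = {name: cosine(wug, vec) for name, vec in categories.items()}
print(max(sims, key=sims.get))  # -> animal
```

As in Figure \ref{fig:category-plot}, overlap between profiles (shared nodes such as \textit{eating}) raises the off-category similarities without changing the winner.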
\begin{table}[h]
\begin{center}
\caption{Inputs to the learner in the category inference task: (top) learning curricula; (center) examples of situations and utterances; (bottom) inputs presented to the learner. Animal and people categories are treated as separate categories. }
\label{task-2-tables}
\vskip 0.12in
\begin{tabular}{lll}
\hline
& Learning Curriculum\\
\hline
1 & Objects, categories \\
2 & Objects, categories, generics \\
3 & Objects, colors, actions, plurals, categories, generics \\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace*{-\baselineskip}
\begin{table}[h]
\begin{center}
\begin{tabular}{lll}
\hline
Curriculum & Examples\\
\hline
Objects & a house; a dog\\
Colors & a red truck; papers are white\\
Actions & a baby drinks milk from a cup\\
Plurals & two balls; many cookies \\
Category Generics & bears are animals \\
Action Generics & cats walk; Moms eat\\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace*{-\baselineskip}
\begin{table}[h]
\begin{center}
\begin{tabular}{lll}
\hline
New Object & Category & Utterance \\
\hline
wug & animal & wugs are animals \\
vonk & food & vonks are foods \\
snarp & people & snarps are people \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Results and Discussion}
The results of the category inference task are plotted in Figure \ref{fig:category-plot}. In the easiest curriculum setting, containing only objects and categories, every object is most similar to the category in which it was learned, i.e., \textit{wugs} are most similar to \textit{animals}, \textit{vonks} to \textit{foods}, and \textit{snarps} to \textit{people}. As expected, we see some similarity between animals and foods, because \textit{chicken} is a member of both categories. The second curriculum condition, which includes objects, categories, and generics, shows that while the correct trend holds, there is some association between animals and people due to generic statements that apply to both categories, such as \textit{sitting}. Finally, in the most complex curriculum condition, which includes objects, actions, plurals, colors, categories, and generics, the associations across categories increase, but the correct category still has the highest association strength. There is some similarity between animals and foods, and between people and animals, due to shared concepts such as \textit{chicken} and \textit{eating}, respectively. Overall, although associations between object concepts and incorrect categories grow with curriculum complexity, the novel objects are always associated most strongly with the correct category in which they were taught through generic statements, regardless of the curriculum or the concept category.
\subsection{Task 3: Joint Category}
The goal of the joint category task is to demonstrate how the ADAM system learns categories across curricula with different contents. Specifically, we create learning curricula for each of the four conditions shown in Table \ref{curriculum-contents-table} and evaluate the model behavior. We use combinations of \textit{chicken}, \textit{beef}, and \textit{cow} objects; \textit{chicken} is used as an example of a lexical item that is shared by two very similar yet distinct concepts (chicken as an animal, and chicken as food) and hence is a member of both the \textit{food} and \textit{animal} categories. While \textit{beef} and \textit{cow} could refer to the same thing at a certain semantic level, they are regarded as disjoint examples of a food and an animal category, respectively. We hypothesized that while observing \textit{chicken} would cause some semantic association between the food and animal categories, the model should show no such association when it observes just \textit{beef} and \textit{cow} and no \textit{chicken}.
To execute the task, we run four different versions of the ADAM system, each one trained with one of the four curriculum conditions shown in Table \ref{curriculum-contents-table}. Then, similar to the category inference task, each system is presented with a set of previously unknown objects in known categories, e.g., ``\textit{wugs are animals}''. Finally, we evaluate the similarity of the new object concept to the animal and food categories.
\begin{table}[H]
\begin{center}
\caption{Curricula and contents for the joint category task}
\label{curriculum-contents-table}
\vskip 0.12in
\begin{tabular}{ll}
\hline
& Test Objects Included in the Learning Curriculum\\
\hline
1 & None (no chicken, beef, or cow) \\
2 & Beef (\textit{food}) and cow (\textit{animal}) \\
3 & Chicken (\textit{food} and \textit{animal})\\
4 & Chicken, beef, and cow \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\pgfplotstableread{
X Animal-Similarity Food-Similarity Curricula-Content
1 0.991965983 0 None
2 0.997029206 0 {Beef\\and Cow}
3 0.993005827 0.477896999 Chicken
4 0.990172229 0.26934181 {Chicken,\\Beef,\\and Cow}
}\datatable
\begin{axis}[
title={Similarity of the Wug Concept to Animal \\ and Food Categories across Curricula},
title style={align=center},
ybar, axis on top,
height=3.5cm, width=8cm,
bar width=0.2cm,
ymajorgrids, tick align=inside,
enlarge y limits={value=.1,upper},
ymin=0, ymax=1,
axis x line*=bottom,
axis y line*=left,
tickwidth=0pt,
enlarge x limits=true,
legend style={at={(0.5,-0.6)}, font=\small,
anchor=north,legend columns=-1},
ylabel={Similarity},
ylabel style ={font=\small, yshift=-2ex},
yticklabel style={font=\small},
xtick=data,
xticklabels from table={\datatable}{Curricula-Content},
xticklabel style={align=center, anchor=north, font=\small, yshift=-1ex},
after end axis/.append code={
\path [anchor=base east, yshift=0.5ex]
(rel axis cs:0,0) node [yshift=-4ex, font=\small] {Curriculum};
}
]
\addplot[ybar, fill=blue!80!white] table [x=X, y=Animal-Similarity] {\datatable}; \addlegendentry{Similarity to Animals}
\addplot[ybar, fill=red!40!white] table [x=X, y=Food-Similarity] {\datatable}; \addlegendentry{Similarity to Foods}
\end{axis}
\end{tikzpicture}
\caption{Results of the joint category task. Through generic input, a novel object, \textit{wug}, is learned as an \textit{animal}. The plot shows the similarity between the wug concept and the animal and food categories. Wug is more similar to foods when the curriculum includes chicken. The similarity is lower when beef and cow are added and is zero without chicken.}
\label{fig:joint-plot}
\end{figure}
\subsubsection{Results and Discussion}
Figure \ref{fig:joint-plot} plots the results of the joint category task, showing the similarity of the \textit{wug} concept to the animal and food categories across curricula with different contents. In the first condition, with a curriculum that includes none of \textit{chicken}, \textit{beef}, or \textit{cow}, the similarity to animals is strong, while the similarity to the food category is essentially zero, indicating no association between animals and foods. Likewise, in the second condition, when the curriculum includes \textit{beef} and \textit{cow} but not \textit{chicken}, the learner does not form any association between the food and animal categories. However, when we introduce \textit{chicken} into the curriculum, the association between animals and foods grows. Finally, including \textit{chicken}, \textit{beef}, and \textit{cow} in the curriculum leads to some similarity between foods and animals, but less than when only \textit{chicken} was included. That these changes match our expectations suggests that ADAM can successfully capture differences in the curricula and their contents. While doing so, the system remains robust, as the similarity to the correct category outscores the similarity to the incorrect category in every condition.
\section{Conclusion \& General Discussion}
We have illustrated that the grounded language acquisition system of ADAM, coupled with the learner module and concept network, can be successfully used for modeling semantics of generic learning. We demonstrated that the model learns from generic language, makes desired semantic associations between learned concepts, and forms semantic categories that reflect the contents of the training curricula.
The concept network makes it possible to perform operations on learned concepts and on the semantics of generics, enabling generic utterances to establish associations between concepts and consequently form semantic categories. In the future, we plan to further examine the role of association weights in category formation, and how the system performs on different languages. We also hope to explore how we can use the matrix representation of the concept network to integrate ADAM's structured and interpretable meaning representations with distributional models that operate on continuous space, such as neural networks. Moreover, while we did not include the meaning patterns of the learned concepts in the concept network, the flexible nature of the network makes this possible, creating the potential for further semantic analyses that utilize perceptual properties of learned concepts.
Overall, our system's ability to acquire word meanings and robustness in capturing desired semantic properties across diverse learning curricula shows its promising capacity to model different aspects of language acquisition and to investigate the question of how children discover the expression of generics in their own languages.
\section{Acknowledgements}
Approved for public release; distribution is unlimited. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00111990060. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.
\bibliographystyle{apacite}
\setlength{\bibleftmargin}{.125in}
\setlength{\bibindent}{-\bibleftmargin}
\section{Introduction}
The discovery of regular solutions to the Yang-Mills field equations in four-dimensional Euclidean space, which correspond to the absolute minimum of the action, has led to an intensive study of such theories and to the search for multidimensional generalizations of the self-duality equations. In Refs.~\cite{corr83,ward84}, such equations were found and classified. These are first-order equations whose solutions satisfy the Yang-Mills field equations as a consequence of the Bianchi identity. Later, solutions to these equations were found and then used to construct classical solitonic solutions of the low-energy effective theory of the heterotic string~(see Refs.~\cite{fair84,fubi85,corr85,ivan92,harv91,khur93,guna95,logi05,logi08,ivan05,gemm13}).
\par
An alternative approach to the construction of self-duality equations, proposed in Ref.~\cite{tchr80}, was to consider self-duality relations between higher-order terms of the field strength. An explicit example of instantons satisfying such self-duality relations was obtained on the eight-dimensional sphere in Ref.~\cite{gros84}. As shown in Ref.~\cite{duff91}, these instantons play a role in smoothing out the singularity of heterotic string soliton solutions by incorporating one-loop corrections. In Refs.~\cite{duff97,olsen00,mina01,pedd08,bill09,fuci09}, these exotic solutions were used to construct various string and membrane solutions. In Refs.~\cite{bern03,hase14}, they were used to construct the higher-dimensional quantum Hall effect.
\par
In this letter, we study the ADHM construction of the self-dual instantons in eight dimensions and we find a multi-instanton solution generalizing the solution that was found in Ref.~\cite{gros84}.
\section{Preliminaries}
In this section, we give a brief summary of the Clifford algebra and the octonion algebra. We list those features of the mathematical structure that are relevant to our work.
\par
We recall that the Clifford algebra $Cl_{0,7}(\mathbb{R})$ is a real associative algebra generated by the elements $\Gamma_1,\Gamma_2,\dots,\Gamma_7$ and defined by the relations
\begin{equation}\label{1}
\Gamma_i\Gamma_j+\Gamma_j\Gamma_i=-2\delta_{ij}.
\end{equation}
The element $\Omega=\Gamma_1\Gamma_2\dots \Gamma_7$ commutes with all other elements of the algebra, and its square $\Omega^2=1$. Therefore the pair $\Gamma^{\pm}=\frac{1}{2}(1\pm\Omega)$ forms a complete system of mutually orthogonal central idempotents, and hence the algebra $Cl_{0,7}(\mathbb{R})$ decomposes into the direct sum of two ideals. It can be shown that these ideals are isomorphic to the algebra $M_{8}(\mathbb{R})$ of all real matrices of size $8\times 8$. The latter, in turn, is the algebra of endomorphisms of the octonion algebra. Let us describe these relations in a little more detail.
\par
The algebra of octonions $\mathbb O$ is a real linear algebra with the canonical basis $1,e_{1},\dots,e_{7}$ such that
\begin{equation}\label{2}
e_{i}e_{j}=-\delta_{ij}+c_{ijk}e_{k},
\end{equation}
where the structure constants $c_{ijk}$ are completely antisymmetric, with the nonzero components $c_{123}=c_{145}=c_{167}=c_{246}=c_{275}=c_{374}=c_{365}=1$. The algebra of octonions is not associative but alternative, i.e., the associator $(x,y,z)=(xy)z-x(yz)$ is totally antisymmetric in $x,y,z$. Consequently, any two elements of $\mathbb O$ generate an associative subalgebra. The algebra of octonions satisfies the identity $((zx)y)x=z(xyx)$, which is called the right Moufang identity. The algebra $\mathbb O$ admits an involution (i.e., an anti-automorphism of period two) $x\to\bar x$ such that the elements $t(x)=x+\bar x$ and $n(x)=\bar{x}x$ are in $\mathbb R$. In the canonical basis, this involution is defined by $\bar e_{i}=-e_{i}$. It follows that the bilinear form $(x,y)=\frac12(\bar xy+\bar yx)$ is positive definite and defines an inner product on $\mathbb O$. It is easy to prove that the quadratic form $n(x)$ admits the composition property $n(xy)=n(x)n(y)$. Since this form is positive definite and multiplicative, it follows that $\mathbb O$ is a division algebra.
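As a numerical sanity check (an illustration only, not part of the construction itself), the multiplication law (\ref{2}) with the structure constants above can be implemented directly, and the stated properties verified on random octonions; the coefficient-list representation below is our own convention.

```python
import itertools
import random

# Oriented Fano triples carrying c_{ijk} = +1; c is completely antisymmetric.
TRIPLES = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 7, 5), (3, 7, 4), (3, 6, 5)]
C = {}
for t in TRIPLES:
    for p in itertools.permutations(t):
        idx = [t.index(v) for v in p]
        inv = sum(idx[a] > idx[b] for a in range(3) for b in range(a + 1, 3))
        C[p] = -1 if inv % 2 else 1  # sign of p as a rearrangement of t

def omul(x, y):
    """Octonion product; x, y are length-8 coefficient lists, index 0 = real part."""
    z = [0.0] * 8
    z[0] = x[0] * y[0] - sum(x[i] * y[i] for i in range(1, 8))
    for k in range(1, 8):
        z[k] = x[0] * y[k] + x[k] * y[0]
    for (i, j, k), s in C.items():
        z[k] += s * x[i] * y[j]
    return z

def n(x):  # n(x) = \bar{x} x  (the canonical basis is orthonormal)
    return sum(v * v for v in x)

def sub(x, y):
    return [a - b for a, b in zip(x, y)]

random.seed(0)
rnd = lambda: [random.uniform(-1, 1) for _ in range(8)]
x, y, z = rnd(), rnd(), rnd()

assoc = lambda a, b, c: sub(omul(omul(a, b), c), omul(a, omul(b, c)))

comp_err = abs(n(omul(x, y)) - n(x) * n(y))      # composition: n(xy) = n(x)n(y)
alt_err = max(map(abs, assoc(x, x, y)))          # alternativity: (x, x, y) = 0
mouf_err = max(map(abs, sub(omul(omul(omul(z, x), y), x),
                            omul(z, omul(omul(x, y), x)))))  # ((zx)y)x = z(xyx)
print(comp_err < 1e-12, alt_err < 1e-12, mouf_err < 1e-12)  # -> True True True
```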
\par
Denote by $R_{x}$ the operator of right multiplication on $x$ in the octonion algebra, i.e. $yR_{x}=yx$ for all $y\in\mathbb{O}$. The set of all such operators generates a subalgebra in the algebra $\text{End}\,\mathbb{O}$ of endomorphisms of the linear space $\mathbb{O}$. Using the multiplication law (\ref{2}) and antisymmetry of the associator $(e_{i},e_{j},e_{k})$, we prove the equalities
\begin{equation}\label{3}
R_{e_{i}}R_{e_{j}}+R_{e_{j}}R_{e_{i}}=-2\delta_{ij}1_8,
\end{equation}
where $1_8$ is the identity $8\times 8$ matrix. Comparing (\ref{3}) with (\ref{1}), we see that the correspondence $\Gamma_{i}\to R_{e_{i}}$ can be extended to the homomorphism $Cl_{0,7}(\mathbb R)\to\text{End}\,\mathbb O$. Since the algebra $M_{8}(\mathbb{R})$ is simple, it follows that this mapping is surjective and therefore $\text{End}\,\mathbb O\simeq M_8(\mathbb R)$. Note also that the product
\begin{equation}\label{4}
R_{e_1}R_{e_2}\dots R_{e_7}=1_8.
\end{equation}
This equality follows from the simplicity of $M_{8}(\mathbb{R})$ and the fact that the element $\Omega$ lies in the center of $Cl_{0,7}(\mathbb R)$.
\par
Along with the operator of right multiplication, one can define the operator $L_x$ of left multiplication by $x$ in $\mathbb{O}$, namely $yL_{x}=xy$ for $y\in\mathbb{O}$. Suppose $\text{End}(\mathbb {O})^{(-)}$ is the Lie algebra obtained by introducing the commutator multiplication on the vector space $\text{End}(\mathbb{O})$. Its subalgebra generated by all operators of right and left multiplication on $\mathbb{O}$ is called the Lie multiplication algebra of $\mathbb{O}$ and is denoted by $\text{Lie}(\mathbb{O})$. It is well known (see, e.g., the review \cite{baez02}) that this algebra is generated by the operators of right multiplication alone and is isomorphic to $so(8)\oplus\mathbb{R}1_8$. Using the antisymmetry of $(x,a,b)$, it is easy to prove the identity
\begin{equation}\label{40}
R_aR_b=R_{ab}-[R_a,L_b].
\end{equation}
Note that the identity (\ref{40}) and other relations in this paper rely on the right operator action. For example, $R_a R_b$ sends $x$ to $(xa)b$. In order to pass to the left operator action, it is enough to redefine $R_a\leftrightarrow L_a$. It follows, in particular, that the product $R_aR_b\in\text{Lie}(\mathbb{O})$ for any $a,b\in\mathbb{O}$. Note also that
\begin{equation}\label{14}
R_{\bar a}=|a|^2R_{a^{-1}},\quad R_{a^{-1}}=R_{a}^{-1},\quad R_aR_bR_a=R_{aba},
\end{equation}
where $|a|^2=\bar{a}a$. It follows from the homomorphism $Cl_{0,7}(\mathbb{R})\to \text{End}(\mathbb{O})$ that $R_{x}\in SO(8)$ if and only if $|x|=1$. Since the norm of $a/|a|$ is equal to 1 for $a\ne0$, we get
\begin{equation}\label{45}
R_x^t=R_x^{-1},\quad \det R_x=1,\quad R_{\bar{a}}=R_a^t,
\end{equation}
where $x=a/|a|$ and $0\ne a\in\mathbb{O}$. The last equality in (\ref{14}) is easily proved using the first two formulas in (\ref{14}) and (\ref{45}).
\par
We now denote the unit of $\mathbb{O}$ by the symbol $e_0$, so that $R_{e_0}=1_8$, and consider the combinations
\begin{equation}
R_{\mu\nu}=R_{e_{\mu}}\bar{R}_{e_{\nu}}-R_{e_{\nu}}\bar{R}_{e_{\mu}},
\end{equation}
where $\bar{R}_{e_{\mu}}\equiv R_{\bar{e}_{\mu}}$ and the Greek index takes values from 0 to 7. Then it easily follows from the identity (\ref{4}) that
\begin{equation}\label{5}
R_{[\mu\nu}R_{\lambda\rho]}=\frac{1}{4!}\varepsilon_{\mu\nu\lambda\rho\alpha\beta\gamma\sigma}
R_{[\alpha\beta}R_{\gamma\sigma]}.
\end{equation}
Hence, the tensor $R_{[\mu\nu}R_{\lambda\rho]}$ satisfies the self-duality equations.
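These operator identities can be verified numerically. The sketch below (an illustration; the row-vector convention for the right action is our own choice) builds the matrices $R_{e_i}$ from the multiplication law (\ref{2}) and checks the Clifford relations (\ref{3}), the product identity (\ref{4}), and one representative component of the self-duality equations (\ref{5}), namely $R_{[01}R_{23]}=R_{[45}R_{67]}$; the remaining components follow in the same way.

```python
import itertools
import numpy as np

# Oriented triples with c_{ijk} = +1 (see Eq. (2)); c is totally antisymmetric.
TRIPLES = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 7, 5), (3, 7, 4), (3, 6, 5)]
C = {}
for t in TRIPLES:
    for p in itertools.permutations(t):
        idx = [t.index(v) for v in p]
        inv = sum(idx[a] > idx[b] for a in range(3) for b in range(a + 1, 3))
        C[p] = -1 if inv % 2 else 1

def omul(x, y):
    z = np.zeros(8)
    z[0] = x[0] * y[0] - x[1:] @ y[1:]
    z[1:] += x[0] * y[1:] + y[0] * x[1:]
    for (i, j, k), s in C.items():
        z[k] += s * x[i] * y[j]
    return z

E = np.eye(8)  # coefficient vectors of the basis e_0, ..., e_7

def R(a):
    """Matrix of right multiplication, acting on coefficient ROW vectors:
    y R_a = ya, so row i of R_a holds the coefficients of e_i a."""
    return np.array([omul(E[i], a) for i in range(8)])

Rb = lambda mu: R(np.concatenate(([E[mu][0]], -E[mu][1:])))  # R_{\bar e_mu}

# Clifford relations (3): R_i R_j + R_j R_i = -2 delta_{ij} 1_8 for i, j = 1..7
for i in range(1, 8):
    for j in range(1, 8):
        lhs = R(E[i]) @ R(E[j]) + R(E[j]) @ R(E[i])
        assert np.allclose(lhs, -2 * (i == j) * np.eye(8))

# Product identity (4): R_{e_1} R_{e_2} ... R_{e_7} = 1_8
P = np.eye(8)
for i in range(1, 8):
    P = P @ R(E[i])
assert np.allclose(P, np.eye(8))

# One component of the self-duality equations (5)
def Rmn(m, n):
    return R(E[m]) @ Rb(n) - R(E[n]) @ Rb(m)

def antisym(idx):
    """R_{[mu nu} R_{lambda rho]} for the four indices in idx."""
    T = np.zeros((8, 8))
    for p in itertools.permutations(range(4)):
        inv = sum(p[a] > p[b] for a in range(4) for b in range(a + 1, 4))
        q = [idx[i] for i in p]
        T += (-1) ** inv * Rmn(q[0], q[1]) @ Rmn(q[2], q[3])
    return T / 24.0

assert np.allclose(antisym((0, 1, 2, 3)), antisym((4, 5, 6, 7)))
print("all identities verified")
```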
\section{ADHM construction}
Let $M_{m,n}(\mathbb{R})$ be the set of all real $m\times n$ matrices, $\widetilde{K}$ a linear subspace of $\text{End}\,\mathbb{O}$, and $M$ a real $8m\times 8n$ matrix. We call $M$ the $\widetilde{K}$-matrix of size $m\times n$ if it is representable as a matrix with elements from $\widetilde{K}$, i.e. if $M\in M_{m,n}(\mathbb{R})\otimes\widetilde{K}$. In the case when $m=n$ and $M\in\mathbb{R}1_n\otimes\widetilde{K}$, where $1_n$ is the identity $n\times n$ matrix, we say that the matrix $M$ is diagonal over $\widetilde{K}$. Finally, we say that this matrix is real over $\text{End}\,\mathbb{O}$ if $M\in M_{m,n}(\mathbb{R})\otimes\mathbb{R}1_8$.
\par
Now let $K=\{R_a\mid a\in\mathbb{O}\}$. We choose two $K$-matrices $C$ and $D$ of size $(n + N)\times N$ in such a way that for any $x\in K$ the matrix
\begin{equation}\label{8}
M(x)=Cx+D
\end{equation}
satisfies the following conditions:
\par\medskip
i) the matrix $\bar{M}^{t}M$ is real over $\text{End}\,\mathbb{O}$;
\par
ii) the matrix $\bar{C}^{t}M(\bar{M}^{t}M)^{-1}\bar{M}^{t}C$ is real over $\text{End}\,\mathbb{O}$;
\par
iii) the matrix $M$ has the maximal rank $8N$.
\par\medskip\noindent
Here $\bar{M}$ signifies the transposition of elements in the blocks, $M^t$ signifies the transposition of the blocks, and $\bar{M}^t$ signifies the transposition of the real matrix. Note also that everywhere below we say that the matrix is real, implying that it is real over $\text{End}\,\mathbb{O}$.
\par
In addition to $K$, we need one more example of subspace of $\text{End}\,\mathbb{O}$. Suppose $\widetilde{K}$ is the Lie algebra generated by $K$, i.e. $\widetilde{K}$ is the Lie multiplication algebra of $\mathbb{O}$. Then it follows from (\ref{40}) that $M(x)$ is a $\widetilde{K}$-matrix of size $(n+N)\times N$. This construction will be used in Appendix A to calculate the gauge group of $N$-instanton.
\par
Now we choose a matrix $U(x)$ of size $(8n+8N)\times 8n$ over $\mathbb{R}$ such that
\begin{equation}\label{9}
\bar{U}^{t}M=0,\quad \bar{U}^{t}U=1_n
\end{equation}
and define the linear potential
\begin{equation}\label{7}
A_{\mu}(x)=\bar{U}^{t}\partial_{\mu}U.
\end{equation}
Then it follows from conditions i) and ii) that the corresponding completely antisymmetric 4-tensor $F_{[\mu\nu}F_{\lambda\rho]}$ will have the form
\begin{equation}\label{6}
F_{[\mu\nu}F_{\lambda\rho]}
=\bar{U}^{t}CR_{[\mu\nu}R_{\lambda\rho]}(\bar{M}^{t}M)^{-1}\bar{C}^{t}U\bar{U}^{t}C(\bar{M}^{t}M)^{-1}\bar{C}^{t}U.
\end{equation}
To prove this equality, it suffices to use the conditions (\ref{9}) and the reality of $\bar{C}^tC$, which follows from condition i). Obviously, the tensor (\ref{6}) satisfies the self-duality equations.
\par
We note that the use of the octonion algebra in this construction is not necessary. Instead of the algebra $\text{End}\,\mathbb{O}$, we can use the matrix algebra $M_{8}(\mathbb{R})$, which is isomorphic to $\text{End}\,\mathbb{O}$. In this case, it suffices to construct the homomorphism $Cl_{0,7}(\mathbb{R})\to M_{8}(\mathbb{R})$ and find the images of the generators of the Clifford algebra in $M_{8}(\mathbb{R})$. This idea was realized in Ref.~[26], where an expression for $F_{[\mu\nu}F_{\lambda\rho]}$ coinciding with (\ref{6}) for $n=1$ was obtained.
\par
It is shown in Appendix A that the gauge group of the instanton (\ref{6}) is $Spin(8n)$. Therefore, when replacing $U$ with $UT$, where $T\in Spin(8n)$, the potential (\ref{7}) undergoes a gauge transformation, and the $N$-instanton (\ref{6}) does not change. Similarly, the $N$-instanton will be invariant under the transformations
\begin{equation}
M\to XMY,\quad U\to XU,
\end{equation}
where $X\in O(8n+8N)$ and $Y\in GL(N,\mathbb{R})\otimes\mathbb{R}1_8$. Therefore, without loss of generality, we can assume that the matrix (\ref{8}) has the form
\begin{equation}\label{10}
M(x)=\begin{pmatrix}\Lambda\\B-x1_{N}\end{pmatrix},
\end{equation}
where $\Lambda$ and $B$ are constant $K$-matrices and $x\in K$.
\par
It follows from the condition i) that the matrices $\bar{B}^tB+\bar{\Lambda}^t\Lambda$ and $\bar{B}^tx+\bar{x}B$ must be real. On the other hand, $\bar{B}^tx+\bar{x}B$ is real if and only if $B$ is symmetric. Therefore, the matrix $\bar{B}^tB+\bar{\Lambda}^t\Lambda$ is also symmetric. Generally speaking, the same $N$-instanton can be obtained from different matrices $\Lambda$ and $B$. Suppose $P$ is a special orthogonal $(8n\times8n)$-matrix, and $S\in O(N)\otimes\mathbb{R}1_8$. Then the transformation
\begin{equation}\label{44}
\Lambda\to P\Lambda S,\quad B\to S^tBS
\end{equation}
leads only to a replacement of bases and therefore leaves the $N$-instanton unchanged. Moreover, the blocks of $\bar{B}^tB+\bar{\Lambda}^t\Lambda$ and $S$ are scalar $8\times8$ matrices. Therefore, by the principal axis theorem, the matrix $\bar{B}^tB+\bar{\Lambda}^t\Lambda$ can be taken to be diagonal over $K$.
\par
In order to modify the condition ii), we note that
\begin{align}\label{22}\notag
&\bar{C}^tM(\bar{M}^{t}M)^{-1}\bar{M}^tC\\
&=B(\bar{M}^{t}M)^{-1}\bar{B}^t-\{B\bar{x}(\bar{M}^{t}M)^{-1}+(\bar{M}^{t}M)^{-1}x\bar{B}^t\}+(\bar{M}^{t}M)^{-1}|x|^2.
\end{align}
If the matrix $B$ is non-degenerate then the reality condition of the first term on the right hand side is equivalent to the reality of the matrix
\begin{equation}\label{23}
(\bar{B}^t)^{-1}(\bar{M}^{t}M)B^{-1}=(\bar{B}^t)^{-1}\{(\bar{B}^tB+\bar{\Lambda}^t\Lambda)-(\bar{x}B+\bar{B}^tx)+|x|^2\}B^{-1}.
\end{equation}
Since the matrix $B$ is symmetric, this condition is satisfied if and only if the matrices
\begin{equation}\label{11}
B\bar{B}\quad\text{and}\quad B(\bar{B}B+\bar{\Lambda}^t\Lambda)^{-1}\bar{B}
\end{equation}
are real.
\par
Now we find the reality condition of the second term on the right hand side of (\ref{22}). Suppose
\begin{equation}\label{20}
X=B\bar{x}(\bar{M}^{t}M)^{-1}+(\bar{M}^{t}M)^{-1}x\bar{B}\quad\text{and}\quad Y=(\bar{M}^{t}M)X(\bar{M}^{t}M).
\end{equation}
\par\noindent
Obviously, $X$ is real iff $Y$ is real. Using (\ref{10}), we represent $Y$ in the form
\begin{align}
&x\bar{B}(\bar{B}B+\bar{\Lambda}^t\Lambda)+(\bar{B}B+\bar{\Lambda}^t\Lambda)B\bar{x}\label{27}\\
&-x\bar{B}(\bar{x}B+\bar{B}x)-(\bar{x}B+\bar{B}x)B\bar{x}\label{25}\\
&+(x\bar{B}+B\bar{x})|x|^2.\label{21}
\end{align}
We need the following two simple statements:
\par\medskip
a) the matrix $R_aR_{\bar{b}}+R_bR_{\bar{a}}$ is real for any $a,b\in\mathbb{O}$;
\par\medskip
b) the equality $R_aR_{\bar{b}}+R_bR_{\bar{a}}=R_{\bar{b}}R_a+R_{\bar{a}}R_b$ is true for any $a,b\in\mathbb{O}$.
\par\medskip\noindent
In order to prove the statements, it is enough to use the identity $(ya)b+(yb)a=y(ab+ba)$, and notice that $\bar{a}=2a_0-a$ and $\bar{b}=2b_0-b$, where $a_0,b_0\in\mathbb{R}$. As a result, we get $ab+\bar{b}\bar{a}=ba+\bar{a}\bar{b}$ and
\begin{equation}
R_aR_{\bar{b}}+R_bR_{\bar{a}}=R_{4a_0b_0-ab-\bar{b}\bar{a}},
\end{equation}
that proves the statements.
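Both statements are also easy to confirm numerically. The following check (illustrative only; the matrix conventions are those of the sketch above and our own) verifies that $R_aR_{\bar{b}}+R_bR_{\bar{a}}$ is a scalar multiple of $1_8$ and coincides with $R_{\bar{b}}R_a+R_{\bar{a}}R_b$ for random $a,b$.

```python
import itertools
import numpy as np

# Same conventions as in Eq. (2): oriented triples with c_{ijk} = +1.
TRIPLES = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 7, 5), (3, 7, 4), (3, 6, 5)]
C = {}
for t in TRIPLES:
    for p in itertools.permutations(t):
        idx = [t.index(v) for v in p]
        inv = sum(idx[a] > idx[b] for a in range(3) for b in range(a + 1, 3))
        C[p] = -1 if inv % 2 else 1

def omul(x, y):
    z = np.zeros(8)
    z[0] = x[0] * y[0] - x[1:] @ y[1:]
    z[1:] += x[0] * y[1:] + y[0] * x[1:]
    for (i, j, k), s in C.items():
        z[k] += s * x[i] * y[j]
    return z

E = np.eye(8)
R = lambda a: np.array([omul(E[i], a) for i in range(8)])  # right multiplication
conj = lambda a: np.concatenate(([a[0]], -a[1:]))          # octonion conjugation

rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)

X = R(a) @ R(conj(b)) + R(b) @ R(conj(a))
Y = R(conj(b)) @ R(a) + R(conj(a)) @ R(b)

assert np.allclose(X, X[0, 0] * np.eye(8))  # statement a): real over End(O)
assert np.allclose(X, Y)                    # statement b)
print("statements a) and b) verified")
```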
\par
It follows from a) that the matrix (\ref{21}) is real. In order to prove the reality of (\ref{25}), we consider the matrix
\begin{equation}\label{26}
x\bar{B}(\bar{x}B+\bar{B}x)+B\bar{x}(\bar{x}B+\bar{B}x).
\end{equation}
It is real by the statement a). Summing (\ref{25}) and (\ref{26}), and using the reality of $\bar{x}B+\bar{B}x$, we get
\begin{equation}\label{30}
A(x)=\{B(\bar{x}B+\bar{B}x)-(\bar{x}B+\bar{B}x)B\}\bar{x}.
\end{equation}
Obviously, $A(x)$ is real iff (\ref{25}) is real. As noted above, the matrix $B$ is symmetric and the matrix $S=\bar{B}B+\bar{\Lambda}^t\Lambda$ is real and diagonal over $K$. Therefore $B\bar{B}=(\bar{B}B)^t=\bar{B}B$. Using this equality and the reality of $B\bar{B}$, we transform (\ref{30}) to the form
\begin{equation}\label{28}
A(x)x\bar{B}=|x|^2\{(B\bar{x}+x\bar{B})-(\bar{x}B+\bar{B}x)\}B\bar{B}.
\end{equation}
It follows from b) that the expression in curly brackets is zero. Therefore, if the matrix $\bar{B}$ is non-degenerate, then $A(x)=0$. This proves the reality of the matrix (\ref{25}).
\par
Similar considerations apply to the matrix (\ref{27}). It is enough to replace $\bar{x}B+\bar{B}x$ with $\bar{B}B+\bar{\Lambda}^t\Lambda$. As a result, we obtain an analogue of (\ref{30})
\begin{equation}
\tilde{A}(x)=\{(\bar{B}B+\bar{\Lambda}^t\Lambda)B-B(\bar{B}B+\bar{\Lambda}^t\Lambda)\}\bar{x},
\end{equation}
where $\tilde{A}(x)$ is again a real matrix for any $x\in K$. Substituting $x=x_0$, we prove the reality of the expression in curly brackets. But then $\tilde{A}(\vec{x})$, where $\vec{x}=\frac{1}{2}(x-\bar{x})$, cannot be real unless this expression vanishes. Hence, the matrix (\ref{27}) is real iff
\begin{equation}\label{29}
(\bar{B}B+\bar{\Lambda}^t\Lambda)B=B(\bar{B}B+\bar{\Lambda}^t\Lambda).
\end{equation}
Note that the reality of the second matrix in (\ref{11}) is a consequence of (\ref{29}) and the reality of $B\bar{B}$ or $\bar{\Lambda}^t\Lambda$.
\par
As for the condition iii), it is satisfied if and only if the equations $B\xi=x\xi$ and $\Lambda \xi=0$ have a unique solution $\xi=0$ for any $x\in K$. Thus, we obtain the following equivalent reformulation of conditions i), ii), and iii) (under the condition that the matrix $B$ is non-degenerate):
\par\medskip
i)$'$ the matrix $B$ is symmetric, and the matrix $\bar{B}B+\bar{\Lambda}^t\Lambda$ is real and diagonal over $K$;
\par
ii)$'$ the matrices $B$ and $\bar{B}B+\bar{\Lambda}^t\Lambda$ are mutually commutative and the matrix $\bar{\Lambda}^t\Lambda$ is real;
\par
iii)$'$ the equations $B\xi=x\xi$ and $\Lambda \xi=0$ have a unique solution $\xi=0$ for any $x\in K$.
\par\medskip
Unlike conditions i) and ii), all matrices in i)$'$ and ii)$'$ are constant, which greatly simplifies the study of the moduli space. It is difficult to find the moduli parameters directly from condition ii), because these parameters must satisfy the constraints for arbitrary $x$. That is why the multi-instanton solutions in Ref.~\cite{naka16}, where the condition ii)$'$ is absent, were obtained only in the limited situation where the instantons are well separated.
\section{Multi-instanton solutions}
Let us consider in more detail the case $n=1$, when the potential (\ref{7}) takes values in the Lie algebra $so(8)$. To construct an $N$-instanton satisfying the conditions (\ref{9}), we will search for the matrix $U=U(x)$ in the following form
\begin{equation}\label{43}
U=k\begin{pmatrix}-1_8\\V\end{pmatrix},
\end{equation}
where the column vector $V=(v_1,\dots,v_N)^{t}$, $v_i\in K$, and $k>0$ is real. Substituting this expression into formula (\ref{7}) and using the conditions (\ref{9}), we get the expression
\begin{equation}\label{15}
A_{\mu}=\frac{1}{2}\frac{\bar{V}^t\partial_{\mu}V-\partial_{\mu}\bar{V}^tV}{1+\bar{V}^tV},
\end{equation}
where $\bar{V}^t=\Lambda(B-x1_8)^{-1}$ and $1+\bar{V}^tV=k^{-2}$. Note that the potentials (\ref{15}) and (\ref{7}) for $n=1$ are gauge equivalent.
\par
We now return to conditions i)$'$, ii)$'$, and iii)$'$. Suppose $\Lambda=(\lambda_1,\dots,\lambda_N)$ and $B=(b_{ij})$, where $b_{ij}=b_{ji}$. Then the diagonal elements of $\bar{B}B+\bar{\Lambda}^t\Lambda$ have the form
\begin{equation}
k_j=\sum_{i=1}^N|b_{ij}|^2+|\lambda_{j}|^2
\end{equation}
and are therefore real. Hence, condition i)$'$ is reduced to the $N(N-1)/2$ relations
\begin{equation}\label{31}
\sum_{i=1}^N\bar{b}_{ij}b_{ik}+\bar{\lambda}_{j}\lambda_{k}=0,\qquad j<k,
\end{equation}
which express the vanishing of the off-diagonal elements of $\bar{B}B+\bar{\Lambda}^t\Lambda$.
\par
In order to rewrite condition ii)$'$, we consider the transformation $\Lambda\to T\Lambda$, where $T\in Spin(7)$. Since the group $Spin(7)$ acts transitively on the set of elements of norm 1 in $K$, we can assume that $\lambda_1$ is real and positive. On the other hand, the matrix $\bar{\Lambda}^t\Lambda$ is real and the elements of its first row have the form $\lambda_1\lambda_i$. Therefore, all elements of $\Lambda$ must be real. Hence, condition ii)$'$ is reduced to the relations
\begin{equation}\label{32}
(k_i-k_j)b_{ij}=0,\qquad\lambda_i\in\mathbb{R}1_8,\quad\lambda_1>0.
\end{equation}
(Here and everywhere below we write $\lambda_1>0$, implying that $\lambda_1=k1_8$ with $k>0$.) Thus, if $b_{ij}=0$, then no additional condition arises. If $b_{ij}\ne0$, then we have the additional condition
\begin{equation}\label{33}
\sum_{k=1}^N|b_{ik}|^2+\lambda_{i}^2=\sum_{k=1}^N|b_{jk}|^2+\lambda_{j}^2.
\end{equation}
\par
Finally, the condition iii)$'$ is equivalent to the following requirement. For any $x\in K$, the system of equations
\begin{equation}
\sum_{j=1}^Nb_{ij}\xi_j=x\xi_i,\qquad \sum_{i=1}^N\lambda_{i}\xi_i=0
\end{equation}
has only the zero solution $\xi_i=0$.
\par
We turn to the study of special cases. Obviously, the conditions i)$'$, ii)$'$, and iii)$'$ are automatically satisfied in the 1-instanton case. Let $N=2$. If $b_{12}=0$, then it is obvious that $\lambda_1>0$ and $\lambda_2=0$, while $b_{11}$ and $b_{22}$ are independent variables. Suppose $b_{12}\ne 0$. Then it follows from (\ref{31}) and (\ref{32}) that
\begin{equation}\label{13}
\bar{b}_{11}=-\bar{b}_{12}b_{22}b_{12}^{-1}-\lambda_{1}\lambda_{2}b_{12}^{-1},
\end{equation}
where $\lambda_2$ is real and $\lambda_1>0$. This equation has a solution in the space $K$. Indeed, using the identities (\ref{14}), we prove that the first term on the right-hand side of (\ref{13}) belongs to $K$. The second term is in $K$ because $\lambda_1$ and $\lambda_2$ are real. Hence, $\bar{b}_{11}\in K$, as it should be. For $N=2$, the condition (\ref{33}) takes the form
\begin{equation}\label{34}
|b_{11}|^2+\lambda_{1}^2=|b_{22}|^2+\lambda_{2}^2.
\end{equation}
In this case, $\lambda_1>0$, $b_{22}$, and $b_{12}\ne0$ can be selected as independent elements.
\par
In order to satisfy condition iii)$'$, we eliminate the element $\xi_1$ from the above system. We obtain
\begin{align}
(x\lambda_1^{-1}\lambda_2-b_{11}\lambda_1^{-1}\lambda_2+b_{12})\xi_2&=0,\\
(x+b_{12}\lambda_1^{-1}\lambda_2-b_{22})\xi_2&=0.
\end{align}
Hence, the condition iii)$'$ is satisfied if and only if, for any $x$, at least one of the coefficients of $\xi_2$ is nonzero. It is easy to see that this is equivalent to the condition
\begin{equation}
b_{11}-b_{12}\lambda_1\lambda_2^{-1}\ne b_{22}-b_{12}\lambda_1^{-1}\lambda_2
\end{equation}
when $\lambda_2\ne0$, and to the condition $x\ne b_{22}$ when $b_{12}=\lambda_2=0$. Thus, the ansatz (\ref{15}) defines a 2-instanton in the following cases:
\par
1) the elements $\lambda_1>0$, $b_{11}$ and $b_{22}$ are independent variables, and $b_{12}=\lambda_2=0$;
\par
2) the elements $\lambda_1>0$, $b_{22}$ and $b_{12}\ne0$ are independent variables, and $b_{11}$ and $\lambda_2$ are defined by the formulas (\ref{13}) and (\ref{34}) respectively.
\par
Obviously, in both cases there are 17 free real parameters.
\par
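To make the counting explicit, note that each independent element of $K$ contributes 8 real parameters, while the positive real $\lambda_1$ contributes one:

```latex
\underbrace{8}_{b_{11}}+\underbrace{8}_{b_{22}}+\underbrace{1}_{\lambda_1>0}=17
\quad\text{(case 1)},\qquad
\underbrace{8}_{b_{12}}+\underbrace{8}_{b_{22}}+\underbrace{1}_{\lambda_1>0}=17
\quad\text{(case 2)}.
```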
The case $N=3$ is investigated in a similar way. Suppose $B$ is a non-degenerate symmetric matrix of size $3\times3$ over $K$. Then condition i)$'$ reduces to solving the system of equations
\begin{align}\label{18}
\bar{b}_{11}b_{12}+\bar{b}_{12}b_{22}+\bar{b}_{13}b_{23}+\lambda_{1}\lambda_{2}&=0,\notag\\
\bar{b}_{11}b_{13}+\bar{b}_{12}b_{23}+\bar{b}_{13}b_{33}+\lambda_{1}\lambda_{3}&=0,\\
\bar{b}_{12}b_{13}+\bar{b}_{22}b_{23}+\bar{b}_{23}b_{33}+\lambda_{2}\lambda_{3}&=0.\notag
\end{align}
Consider the following four possibilities.
\par
1) Let $b_{12}=b_{13}=b_{23}=0$. Setting $\lambda_1>0$, we have $\lambda_2=\lambda_3=0$. Condition ii)$'$ is satisfied automatically. Condition iii)$'$ is satisfied if $b_{11}+b_{22}+b_{33}\ne0$. Thus, we have 25 free real parameters in total.
\par
2) Let $b_{12}\ne0$ and $b_{13}=b_{23}=0$. Setting $\lambda_1>0$, we have $\lambda_3=0$. So we reduce the system under consideration to one equation equivalent to (\ref{13}). Arguing as above, we prove that the elements $b_{12}\ne0$, $b_{22}$, $b_{33}$ and $\lambda_1>0$ are independent variables, and $b_{11}$ and $\lambda_2$ are defined by the formulas (\ref{13}) and (\ref{34}) respectively. Thus, we again have a total of 25 free real parameters.
\par
3) Let $b_{12}\ne0$, $b_{13}\ne0$ and $b_{23}=0$. We choose the elements $b_{12}\ne0$, $b_{22}$ and $\lambda_1>0$ as independent variables. In terms of these, $\bar{b}_{11}$, $b_{13}$ and $b_{33}$ are easily expressed:
\begin{align}
\bar{b}_{11}&=-\bar{b}_{12}b_{22}b_{12}^{-1}+\lambda_{1}\lambda_{2}b_{12}^{-1},\label{35}\\
b_{33}&=-\bar{b}_{13}^{-1}\bar{b}_{11}b_{13}+\lambda_{1}\lambda_{3}\bar{b}_{13}^{-1},\label{38}\\
b_{13}&=-\lambda_{2}\lambda_{3}\bar{b}_{12}^{-1}.\label{37}
\end{align}
It follows from (\ref{14}) that the solutions of these equations must belong to $K$. To satisfy ii)$'$, we impose the conditions
\begin{align}
|b_{11}|^2+\lambda_{1}^2&=|b_{22}|^2+\lambda_{2}^2,\label{36}\\
|b_{22}|^2+\lambda_{2}^2&=|b_{33}|^2+\lambda_{3}^2.\label{39}
\end{align}
We find the variables $b_{11}$ and $\lambda_2$ by solving the system (\ref{35}) and (\ref{36}). Substituting (\ref{37}) into (\ref{38}), we get $b_{33}$. Then (\ref{39}) and (\ref{37}) give $\lambda_3$ and $b_{13}$. As a result, we obtain a solution with 17 free real parameters.
\par
4) Let $b_{12}\ne0$, $b_{13}\ne0$, and $b_{23}\ne0$. Then the first equation in (\ref{18}) has a solution $b_{11}\in K$ only if $b_{23}=kb_{13}$, where $k\in\mathbb{R}$. We rewrite the other two equations as a system with respect to the unknowns $b_{33}$ and $\lambda_3$. This system has a solution in $K$ only if $b_{13}=\lambda_1$ and $\lambda_2\ne k\lambda_1$. The conditions (\ref{36}) and (\ref{39}) fix the values of $\lambda_2$ and $k$. Hence, the independent variables are $b_{12}\ne0$, $b_{22}$ and $\lambda_1>0$. As a result, we again obtain a solution with 17 free real parameters.
\par
We see that the number of free parameters does not increase with the number of nonzero off-diagonal elements of $B$. This is due to the need to have solutions in the space $K$, which is a very strong condition. It can be shown that a similar situation takes place for arbitrary $N$. Indeed, let $B_0$ be a non-degenerate symmetric $N\times N$ matrix over $K$ in which all off-diagonal elements are zero. Then the independent variables of the system (\ref{31}) are $b_{11},\dots,b_{NN}$ and $\lambda_1$. Now let $B_1$ be a non-degenerate matrix that differs from $B_0$ only in the element $b_{jk}\ne0$ with $j<k$. It follows from the $(j,k)$-th equation in (\ref{31}) that $\bar{b}_{jj}$ and $b_{kk}$ are linearly dependent. Hence $B_0$ and $B_1$ contain the same number of independent elements. Using induction on the number of nonzero off-diagonal matrix elements, we consider the transition from $B_i$ to $B_{i+1}$, where again $B_{i+1}$ differs from $B_i$ only in the off-diagonal element $b_{jk}\ne0$. Suppose $b_{jk}$ and $b_{jl}$, where $j<l$, are independent elements. Then the $(k,l)$-th equation in (\ref{31}) contains the term $\bar{b}_{jk}b_{jl}$. In order for this equation to have a solution in the space $K$, an additional condition is necessary.
\par
Thus, to avoid the occurrence of additional conditions, it is necessary to choose off-diagonal elements with non-matching indices. In this case, as is easy to see, the number of independent elements of $B$ cannot exceed $N$. Note now that together with $b_{jk}\ne0$, an additional condition (\ref{33}) arises that binds the variables $\lambda_j$ and $\lambda_k$. Therefore, the number of independent elements of $\Lambda$ also cannot increase. Thus, we have proved that an $N$-instanton in eight dimensions satisfying the conditions i), ii), and iii) cannot have more than $8N+1$ free parameters.
\par
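The same counting as in the cases $N=2,3$ makes the bound transparent: at most $N$ independent octonionic elements of $B$ survive, and $\Lambda$ retains only the single real scale $\lambda_1$:

```latex
\dim W'_1(N)\le
\underbrace{8N}_{b_{11},\dots,b_{NN}\in K}+\underbrace{1}_{\lambda_1>0}
=8N+1.
```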
A solution containing exactly $8N+1$ free parameters is easy to construct explicitly. Suppose that the $N\times N$ matrix $B$ is non-degenerate and diagonal over $K$, and the matrix $\Lambda=(\lambda_1,\dots,\lambda_N)$, where all $\lambda_i>0$. Then condition i)$'$ is satisfied for a suitable orthogonal transformation $B\to S^{-1}BS$ and $\Lambda\to\Lambda S$, condition ii)$'$ is satisfied by choosing $\sum_i\lambda_is_{ij}=0$ for $j=2,\dots,N$, and condition iii)$'$ holds for $x\ne b_i\equiv b_{ii}$. Substituting the values of $\Lambda$ and $B$ in (\ref{15}), we find the potential
\begin{equation}\label{19}
A_{\mu}=\frac{1}{2}R_{\nu\mu}\partial_{\nu}\ln\left(1+\sum^{N}_{i=1}\frac{\lambda_i^2}{|b_i-x|^2}\right),
\end{equation}
which is an eight-dimensional analogue of the 't Hooft instanton in dimension four. In particular, for $N=1$, the obtained instanton is gauge equivalent (outside the point $x=b_1$) to the 1-instanton that was found in Ref.~\cite{gros84}. The singularities in the gauge field at $x=b_i$ are not physical but are artifacts of our choice $v_0=-1_8$ in (\ref{43}).
\par
Following Ref.~\cite{gros84} and regarding the field strength $F=\{F_{\mu\nu}\}$ as a curvature, we define the topological charge
\begin{equation}
Q=k\int\text{tr}(F\wedge F\wedge F\wedge F),
\end{equation}
for a suitable normalization constant $k$, as the fourth Chern number. It was shown in Ref.~\cite{naka16} that the topological charge so defined is quantized in the well-separated limit, i.e., if $|b_i-b_j|^2\gg\lambda_i\lambda_j$ for all $i\ne j$. Such an instanton can therefore be regarded as a superposition of $N$ instantons, and hence its topological charge coincides with $N$. On the other hand, if $Q$ is the topological charge of a 't Hooft type instanton admitting the well-separated limit, then $Q$ must be the topological charge of any 't Hooft type solution, since the latter can be obtained from a solution with well-separated instantons by continuous deformation of $B$ and $\Lambda$. Thus, the topological charge of the instanton (\ref{19}) is $N$.
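For orientation, in one common convention $Q$ coincides with the fourth term of the Chern character, which fixes the constant $k$; the value below is quoted only as an illustrative normalization and depends on the conventions chosen for the trace:

```latex
Q=\mathrm{ch}_4
=\frac{1}{4!}\left(\frac{i}{2\pi}\right)^{4}\int\text{tr}(F\wedge F\wedge F\wedge F)
=\frac{1}{384\pi^{4}}\int\text{tr}(F\wedge F\wedge F\wedge F).
```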
\section{Conclusion and discussions}
In this article, we have studied the ADHM construction of self-dual instantons in eight dimensions, which was proposed in Ref.~\cite{naka16}. Using the well-known connection between the Clifford algebra $Cl_{0,7}(\mathbb{R})$ and the octonion algebra $\mathbb{O}$, we found new restrictions on the matrices of the construction instead of those used in the cited work. This made it possible to calculate the dimension of the space of solutions of this construction in the case $n=1$ and to find a new $N$-instanton solution of the 't~Hooft type. In addition, we have shown that the gauge group of the theory is $Spin(8n)$.
\par
Since the moduli space $W_n(N)$ here is the quotient of the space of all pairs $(\Lambda,B)$ satisfying conditions i)$'$-iii)$'$ with respect to the equivalence relation given by (\ref{44}), it contains an open everywhere dense subset $W'_n(N)$, which is a smooth manifold. We have shown that $\dim W'_1(N)=8N+1$. Unfortunately, the construction of charts of $W'_1(N)$ is complicated by the need to take into account the factorization with respect to the relations (\ref{44}). Therefore, we have built charts (or independent parameters of $N$-instantons) only for the manifolds $W'_1(2)$ and $W'_1(3)$. We stress that our method does not work for $N>3$, since equations (\ref{31}) become nonlinear and, of course, all these calculations say practically nothing about the topology of $W_1(N)$.
\par
Unlike the approximate solutions of the 't~Hooft type constructed in Ref.~\cite{naka16}, the solution (\ref{19}) is exact. It contains $8N+1$ free parameters and its topological charge is $N$. The singularities in (\ref{19}) are not physical but are artifacts of our choice of the singular gauge. For $N=1$, the instanton (\ref{19}) is gauge equivalent to the 1-instanton that was found in Ref.~\cite{gros84}. Note also that the projection of $\mathbb{O}$ onto its subalgebra $\mathbb{H}$ transforms (\ref{19}) into the 't~Hooft instanton in dimension four.
\section{Introduction}
\label{introduction}
\subsection{Superconductivity in diamond: a still incomplete puzzle}\label{intro}
Superconductivity (SC) was recognized in doped diamond through a relatively broad
transition in the electrical resistance at 2.5~K \ldots 2.8~K
after doping it with $\sim 3\%$ boron \cite{EKI04,tak04}. A critical temperature onset of 8.7~K was reported in
polycrystalline diamond thin films one year later \cite{tak05}.
Angle-resolved photoemission spectroscopy (ARPES) studies revealed
that holes in the diamond bands determine the metallic character of the heavily
boron-doped superconducting diamond \cite{yok05}.
Meanwhile, there are indications of SC below 12~K in
B-doped carbon nanotubes \cite{mur08}, at
$\sim 25~$K in heavily B-doped diamond
samples \cite{oka15} or even at $\sim 55$~K in 27\% B-doped Q-carbon, an amorphous form of carbon \cite{bha17}.
In spite of several experimental and theoretical studies on the influence of boron as a trigger of SC in diamond and in some
carbon-based compounds, the real role of boron and the origin of SC in diamond are not as clear as they may appear. For example,
scanning tunneling microscopy studies of superconducting B-doped polycrystalline diamond
samples revealed
granular SC with an
unclear boron concentration within the superconducting regions \cite{zha14}. In spite of an apparently homogeneous
boron concentration, a strong modulation of the order parameter was reported in \cite{zha14}, putting into question the
real role of boron in the SC of diamond. High resolution structural studies accompanied by electron energy loss spectroscopy (EELS)
on B-doped diamond single crystals revealed the presence of boron in distorted regions of the diamond lattice inducing
the authors to argue that
SC may appear in the disordered structure and not in the defect-free B-doped lattice of diamond \cite{bla14}.
This peculiarity may explain the high sensitivity of the superconducting
critical current between hydrogen- and oxygen-terminated boron-doped diamond \cite{yam17}.
Furthermore, we may ask whether the SC phase in B-doped diamond is homogeneously distributed in
the reported samples. In general, this does not appear to be the case. For example, the SC phase
was observed only within a near surface region
of less than $\sim 1~\mu$m in heavily B-doped diamond single crystals, showing clear polycrystallinity and granular properties \cite{blaepl}.
Some studies associated the Mott transition observed as a function of the B-concentration
with metallic B-C bilayers; the measured SC was
detected at the surface of the doped crystals having a short modulation period between the B-C bilayers \cite{pol16}.
Apparently, differences in the growth processes
of the diamond samples cause differences in the morphology of the SC phase, an issue that still needs more studies.
Several recent studies indicate a granular nature of the samples' structure and of the SC phase \cite{pol16,zha11,zha16,zha19}.
The granular nature of the SC phase and localized disorder
may play an important role, as recently reported experimental facts indicate, namely: (a) There is not always a clear
correlation between
the B-concentration threshold for the metal-insulator transition
and the one for SC \cite{bou17}, and (b) there is no simple dependence between
the free-carrier concentration and the critical temperature characterized by transport measurements \cite{cre18}.
Another open issue in the SC puzzle of doped diamond is the fact that implanting boron into diamond
via irradiation does not trigger SC at least above 2~K \cite{hee08}. It has been argued that this irradiation
process does not trigger SC because of the defects that remained in the diamond lattice after
ion irradiation. However, this is not at all clear because the sample was heated to $900^\circ$C
during irradiation and afterwards annealed at $1700^\circ$C in vacuum \cite{hee08}, see also similar results in
\cite{bev16,tsu12,tsu06}. It might be that the absence of SC in the
irradiated samples after high-temperature annealing is related to the absence of certain defects and not the other way around \cite{bas08}.
That certain lattice defects are of importance for SC can be seen from
the reentrance of SC observed in ion-irradiated B-doped diamond after high-temperature
annealing \cite{cre18}.
This reentrance of SC has been attributed to the partial removal of vacancies
previously produced by
light ion irradiation. It has been argued that vacancies change the effective carrier density by compensating the
boron acceptors and therefore suppressing SC \cite{cre18}.
\subsection{Defect-induced phenomena and the unconventional magnetization observed in nitrogen-doped diamond crystals}
According to recent studies, the reported SC in B-doped diamond appears to be
more complicated because not only the boron concentration matters but also some kind of disorder or even magnetic
order may play a role.
Coexistence of SC and ferromagnetism (FM) was recently reported in hydrogenated
B-doped nanodiamond films at temperatures below 3~K \cite{zha17,zha20}. Earlier studies
\cite{tala05} showed the existence of ferromagnetic hysteresis
at room temperature in the magnetization of nanodiamonds after nitrogen or carbon irradiation.
Recently published studies revealed the existence of large hysteresis in field and temperature
in the magnetization below an apparent critical temperature
$T_c \simeq 30~$K in B-free bulk diamond single crystals. Those crystals were produced under high-temperature and high-pressure
(HTHP) conditions with an average N-concentration $\lesssim 100$~ppm \cite{bardia}.
In this last work, a correlation was found between
the strength of the hysteretic behavior in the magnetization and the N-content, in particular the concentration of C-centres.
We would like to stress that the possible origins for an irreversible behavior in field and/or temperature in the magnetization of a
material can be: (a) Intrinsic magnetic anisotropy, (b) the existence of magnetic domains and the pinning of their walls, or (c) the
pinning of vortices or fluxons in a superconducting matrix. The origin (a) can be ruled out because the measured behavior
does not depend on the direction of the applied field with respect to the main axes of the crystals nor on their shape \cite{bardia}. This
independence has been confirmed once more in the diamond crystals studied in this work.
The relatively
low average N-content and further details of the
measured diamond samples
suggest the existence of a granular structure in the concentration of nitrogen \cite{bardia}.
To facilitate the comparison of the present results with those of Ref.~\cite{bardia},
we give a summary below of the main results obtained from the measurements of
eight different nitrogen-doped crystals:\\
- (a)
All samples with nitrogen concentration below 120~ppm show unconventional
magnetic moment behavior in the field hysteresis and temperature dependence below $\sim 30$~K.
As an example, we show in Fig.~\ref{ht} the field hysteresis of one of the samples measured in this study,
before and after annealing. Above $\sim 30$~K all samples behave as a typical undoped
diamagnetic diamond.
\begin{figure}
\includegraphics[width=1\columnwidth]{ht2.eps}
\caption{\label{ht} Left-bottom axes: Magnetic field hysteresis loop at $T = 2~$K of the magnetic moment of
sample 4 studied in this work, before and after annealing four hours at 1200$^\circ$C in vacuum.
Right-upper axes: the field hysteresis of a ferromagnetic/superconducting LSMO/YBCO bilayer
at 65~K, taken from \cite{bardia}.
A small diamagnetic linear field background was subtracted from the data points.}
\end{figure}
- (b) The irreversible magnetization of the diamond samples
can be phenomenologically understood as the
superposition of diamagnetic, superparamagnetic (or ferromagnetic) and superconducting
contributions. We stress that the main characteristics of the anomalous hysteresis are also observed in ferromagnetic/superconducting oxide bilayers for fields applied parallel to the
main area (and interface between) of the two layers, see Fig.~\ref{ht}.
More hysteresis
loops can be seen in \cite{bardia} and in its supplementary information. The similarities in the minutest details between the
field loops obtained at different temperatures starting both in the virgin, zero field cooled states, are
striking. Note that we are comparing two data sets with very similar absolute magnetic moments using
the same SQUID magnetometer. The strength of the magnetic moment signal does indicate that we are dealing with a
huge effect, 5 orders of magnitude larger than the sensitivity of our SQUID magnetometer (of the order of
$10^{-8}~$emu).
The similarities in the field hysteresis and also in the
temperature dependence of the magnetic susceptibility
suggest the existence of superparamagnetic and superconducting regions in the N-doped diamond crystals.\\
- (c) The amplitude of the anomalous magnetization signal below $\sim 30$~K increases with the
nitrogen-related C-centres concentration. \\
- (d)
The obtained phase diagrams for both phases found in the N-doped diamond samples below 30~K indicate
that the ordering phenomena should have a common origin. In contrast to the amplitude of the anomalous magnetization, the
critical temperature - below which the hysteretic behavior is measured - does not depend significantly on the C-centre concentration.
This result suggests that localized C-centre clusters with similar C-centre concentration trigger the anomalies below 30~K. \\
- (e)
The magnetic and superconducting regions are of granular nature, embedded in the dielectric diamond matrix.
The Bean-like model used to fit the field hysteresis suggests the existence of superconducting shielding currents
and field gradients within grains of size larger than a few nanometers.\\
- (f) The total mass of the
regions of the samples that originate the anomalous magnetization behavior was estimated as $\sim 10^{-4}$ of the total
sample mass, emphasizing its granular nature and the difficulties one has to localize them in large bulk samples. \\
We note that nitrogen, as a donor impurity in diamond, is expected
to trigger SC with higher critical temperatures than with boron \cite{bas08}. This prediction is based on the higher binding energy
of substitutional nitrogen in diamond, in comparison to boron. However, if one takes boron as an example, a nitrogen concentration of
a few percent appears
to be experimentally difficult to achieve in equilibrium conditions. Therefore, it has been argued that disorder or defective regions and an inhomogeneous N-concentration
in the diamond lattice may play a role in triggering granular SC \cite{bas08}.
Interestingly, compression-shear deformed diamond was recently
proposed as a new way to trigger phonon-mediated SC up to 12~K in undoped diamond \cite{liu20}.
Defect-induced superconductivity is actually not a new phenomenon but was already reported in different
compounds. For example,
strain-induced SC at interfaces of semiconducting
layers has been treated theoretically
based on the influence of partial flat-bands \cite{tan014}.
SC was found in semiconducting superlattices up to 6~K and attributed to
dislocations \cite{fog01} at the interfaces \cite{fog06}.
Furthermore, SC was discovered at artificially produced interfaces in Bi and BiSb bicrystals having a
$T_c \lesssim 21~$K, whereas neither Bi nor BiSb are superconducting \cite{mun08}.
Finally, SC at two-dimensional interfaces
has been recently studied at the
stacking faults of pure graphite or multilayer graphene structures theoretically \cite{kop13,mun13,pam17}
and experimentally \cite{bal13,cao18}.
\subsection{Aims of the present work}
Because of the
large penetration depth at the selected electron energy, we use a weak electron irradiation to study the influence of
lattice defects produced or changed by the irradiation
on the magnetization of bulk N-doped diamond
crystals. Weak electron irradiation means that its fluence is
of the order of the concentration of C-centres, i.e., a few tens of ppm.
Magnetization measurements
can provide valuable information, especially in the case that the phase(s) at the origin of the anomalies of interest is (are) not homogeneously
distributed in the samples.
One aim of this work is therefore to check with temperature and field dependent measurements of the magnetization, whether its anomalous
behavior found in N-doped diamond crystals
can be affected by a weak electron irradiation.
If the irradiation does change the anomalies observed in the magnetization, which lattice
defect is mostly affected by the irradiation? Can it be correlated to the magnetic anomalies?
A recently published study on the influence of ion irradiation on the SC of heavily B-doped diamond samples, showed
that SC is suppressed after
He-ion irradiation of $5 \times 10^{16}$ at 1~MeV producing
a vacancy concentration of $\simeq 3 \times 10^{21}$cm$^{-3}$ ($\sim 2\%$) \cite{cre18}.
This apparent complete suppression of SC after irradiation was experimentally observed
by electrical resistance measurements. Therefore, we may further ask whether the
nominal concentration of produced vacancies by the irradiation that affects the magnetic properties
is also of the order of the C-centres concentration.
Furthermore, can a high-temperature annealing in vacuum of the irradiated samples recover
the anomalous behavior in the magnetization?
\section{Experimental details}
\subsection{Samples and characterization methods}
We selected four (111) diamond bulk crystals, three of them were previously studied in detail \cite{bardia}.
The N-doped diamond single crystals were obtained from the Japanese company SUMITOMO.
The crystals
were cleaned with acid (a boiling mixture of nitric acid (100\%), sulphuric acid (100\%) and perchloric acid (70\%)
with a mixing ratio of 1:3:1 for 4~h). All four ``as-received" samples were cleaned before any measurement
was started. The
presence of magnetic impurities was characterized by
particle induced X-ray emission (PIXE) with ppm resolution in a diamond sample with similar magnetic characteristics
to those of the samples used in this study \cite{bardia}. The PIXE results indicated an overall impurity concentration
below 10~ppm with a boron concentration below the resolution limit of 1~ppm.
Table~I shows further characteristics of the samples.
Because of the dielectric properties of the diamond
matrix, we are actually rather restricted to measuring the magnetic moment of the samples to
characterize their magnetic response as a function of field and temperature.
Even after irradiation the samples remain highly insulating, although their
yellow color becomes opaque. The magnetic moment of
the samples was measured using a superconducting quantum interference
device magnetometer (SQUID) from Quantum Design. Due to the expected granularity of the phases responsible for
the anomalous behavior of the magnetization, local magnetic measurements (MFM or micro-Hall sensors, as example)
are certainly of interest and in principle possible. However, assuming that the phases of interest are at the near surface region,
the relatively large area of the samples (some mm$^2$, see Table~I)
makes local measurements time consuming, typically several months for an area of 1~mm$^2$ and at a given temperature and
magnetic field.
As in \cite{bardia}, the presence of regions with internal stress was investigated by polarized light microscopy.
Within experimental resolution, those
regions were not significantly changed after electron irradiation.
\begin{widetext}
\begin{table}
\caption{Measured samples with their main characteristics and treatments. ($\star$): These samples were previously characterized in \cite{bardia}.
The concentration of paramagnetic (PM)-centres from SQUID measurements was obtained from the linear slope
of the magnetic susceptibility vs. $1/T$ assuming $J = 1/2$ and $g_J = 2$
and has an experimental error $\lesssim \pm 7~$ppm.
The error of the average C-centres concentration obtained from EPR is
$\sim 15\%$.
The written range of the C-centres concentrations (error $\lesssim 5\%$)
obtained from IR-absorption spectroscopy represents the minimum and maximum concentrations
measured at different locations of the same
sample, an indication of the inhomogeneous N-distribution.
The electron irradiation was performed in vacuum
and at $900^\circ$C. Each annealing treatment
was 4~h in vacuum at 1200$^\circ$C with a heating and cooling time of 10~h. }
\begin{tabular}{@{}llllllll@{}}
\hline
&Name & Size & Mass & PM & C & C & Treatment \\
&&&&SQUID&EPR&IR&\\
&&mm$^3$ & mg & ppm & ppm & ppm &\\
\hline
1& CD2318-02 & $2.4 \times 2.4 \times 1.7$ & 34.1 &- & 40&$33 \ldots 47$ &as-received\\
\hline
2& CD2318-01$^\star$ & $2.4 \times 2.4 \times 1.7$ & 34.0&- &-&$60 \ldots 65$& {as-received} \\
&& & &80 & 20 & $15 \ldots 27$&e$^-$-irrad.~($2\times10^{18}$cm$^{-2}$) \\
&& &&76 & -& $13 \ldots 25$&annealed\\
\hline
3& CD1512-02$^\star$ & $1.6 \times 1.5 \times 1.2$ & 9.8 & - & -&$18 \ldots 25$& {as-received} \\
&& & & 40&- &-& e$^-$-irrad.~($1\times10^{18}$cm$^{-2}$) \\
& & & &39&10&5 \ldots 10&annealed \\
& & &&44& -&- & annealed \\
\hline
4& CD2520$^\star$ & $2.5 \times 2.5 \times 2$ & 44.1&- &-&26 \ldots 40& {as-received} \\
&& &&- &-& -& annealed \\
\hline
\end{tabular}
\label{table1}
\end{table}
\end{widetext}
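For completeness, the PM-centre concentrations quoted in Table~I follow from the slope of the susceptibility vs.~$1/T$ via the standard Curie law (a textbook expression, written here in SI units only as an illustration):

```latex
\chi=\frac{n\,\mu_0\,g_J^{2}\,\mu_B^{2}\,J(J+1)}{3k_BT}
\;\stackrel{J=1/2,\ g_J=2}{=}\;
\frac{n\,\mu_0\,\mu_B^{2}}{k_BT},
```

where $n$ is the volume density of paramagnetic centres.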
Continuous wave (cw) EPR measurements were performed at room temperature with a BRUKER EMX Micro X-band spectrometer at 9.416$\,$GHz using a BRUKER ER 4119HS cylindrical cavity. Absolute concentrations of paramagnetic C-centres (usually called P1-centres in EPR) and NV$^-$-centres were determined in the dark by double integration of the corresponding EPR spectra and comparison with an ultramarine standard sample of known spin number. The external magnetic field $B_0$ was oriented along the [111] crystallographic axis of the diamond single crystals. In order to avoid saturation effects, a 10~kHz modulation frequency at a microwave (MW) power of $P_{MW} = 630$~nW was employed. We verified that the EPR signal intensities $I_{EPR}$ of both centres deviate from a square-root dependence on $P_{MW}$ by less than 10\,\% between $P_{MW} = 203$~nW and 630~nW; saturation effects can therefore be neglected at room temperature and such low MW power levels. Modulation amplitudes of 0.1~mT and 0.03~mT were employed to
measure the C- and NV$^-$-centres, respectively, in order to avoid broadening effects.\\
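The spin counting described above amounts to a double integration of the derivative cw-EPR spectrum and a scaling by the standard's known spin number. A minimal numerical sketch of this procedure (hypothetical helper names; it assumes identical spectrometer settings, cavity $Q$ and sample positioning for sample and standard, and neglects baseline corrections):

```python
import numpy as np

def double_integral(field, derivative):
    """Double integration of a cw-EPR derivative spectrum.
    The first integration yields the absorption line shape; the second
    yields the total intensity, proportional to the number of spins."""
    # first integration: derivative -> absorption (trapezoidal cumulative sum)
    absorption = np.concatenate(
        ([0.0], np.cumsum(0.5 * (derivative[1:] + derivative[:-1]) * np.diff(field))))
    # second integration: absorption -> total EPR intensity
    return np.sum(0.5 * (absorption[1:] + absorption[:-1]) * np.diff(field))

def spins_from_standard(intensity_sample, intensity_std, n_spins_std):
    """Scale the sample intensity to a standard of known spin number,
    assuming identical cavity Q, MW power, receiver gain and modulation."""
    return n_spins_std * intensity_sample / intensity_std
```

For a Gaussian absorption line the double integral of its derivative recovers the line area, so the ratio of double integrals directly gives the ratio of spin numbers.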
A reliable method to characterize the granular, inhomogeneous concentration of nitrogen and the N-based centres, such as the C-centres (neutral nitrogen N$^0$ with a maximum absorption at 1344~cm$^{-1}$) and N$^{+}$ (positively charged single substitutional nitrogen with a maximum absorption at 1332~cm$^{-1}$), is infrared (IR) microscopy. IR measurements and IR spectral imaging were carried out with a Hyperion 3000~IR microscope coupled to a Tensor~II FTIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany). The microscope is equipped with both a single-element MCT detector and a $64 \times 64$ pixel focal plane array (FPA) detector. Measurements with the MCT detector were carried out with a spectral resolution of 1~cm$^{-1}$, whereas the resolution of the FPA detector used for imaging was limited to 4~cm$^{-1}$. Diamonds were fixed in a Micro Vice sample holder (S.T. Japan) and carefully aligned horizontally.
The concentration values of C-centres shown in Table~I were obtained from IR transmission spectra taken with the MCT detector at various positions of the diamond surface. The MCT detector was favored over the FPA detector for these measurements because of the narrow widths of the above-mentioned bands. Quantitative concentration data of N$^+$- and C-centres were derived from the spectra using the relationships between the peak intensities of these bands and the concentrations of the corresponding nitrogen centres given by Lawson et al. \cite{law98}, see Section~\ref{is}.
\subsection{Electron irradiation}
\label{ei}
Electron irradiation was done at 10~MeV energy using a B10-30MPx accelerator (Mevex Corp., Stittsville, Canada) with a total irradiation fluence
of $1 \times 10^{18}$~electrons/cm$^2$ (sample 3) and
$2 \times 10^{18}$~electrons/cm$^2$ (sample 2). The electron irradiation was performed at 900$^\circ$C in vacuum in order to remove a certain amount of
disorder.
Despite the large number of studies on irradiation effects in diamond, the literature on
electron irradiation damage in particular is less extensive.
For example, electron irradiation at high temperatures was used
to significantly increase the density of nitrogen-carbon vacancy (NV) centres \cite{aco09,cap19}.
Several characteristics of the electron irradiation damage in diamond
were published by Campbell and Mainwood \cite{cam00}. From that study, we estimate a
maximum penetration depth of $\simeq 15~$mm at the electron energy used here.
This indicates that the irradiated electrons completely penetrate the samples.\\
There are two recent works directly related to our studies. One of them investigated
the effects of electron irradiation on in-situ heated nano-particles of diamond \cite{min20}. These samples were
irradiated with the same accelerator as our crystals.
In \cite{cap19} the authors increased the NV-centre concentration through electron irradiation while maintaining the bulk diamonds
at high temperatures. Both studies showed a higher conversion rate of C-centres into NVs with in-situ annealing.
Taking those studies into account, we used 10~MeV electron irradiation to convert C-centres into NVs.
The selected fluences were chosen to produce a {\em nominal} concentration of vacancies similar to the
concentration of magnetic C-centres existing before irradiation.
In this way, we can directly check whether a change in the concentration of C-centres affects the magnetization of our samples.
In terms of created vacancies, homogeneously induced by the electron irradiation, the
maximal nominal concentration would be $5 \times 10^{18}~$vac./cm$^{3}$ for sample 3, i.e. a concentration of
$\simeq 30$~ppm (60~ppm for sample 2), of the same order as the
C-centres obtained by IR and EPR, see Table~I.
Obviously, the vacancy concentration that remains in the samples is smaller because of
the high temperature of the samples during the irradiation
process \cite{new02}. It means that some of the vacancies
can give rise to N-related defects and others diffuse to the sample surface.
From the change in the concentration of NV-centres one can estimate the vacancy concentration
that remains.
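The nominal vacancy numbers quoted above can be reproduced with a short estimate. The production rate of roughly 5 vacancies per electron per cm at 10~MeV is our reading of the quoted figures (fluence $1\times10^{18}$~cm$^{-2}$ yielding $5\times10^{18}$~vac./cm$^{3}$), not an independently measured value:

```python
# Nominal vacancy concentration produced by electron irradiation (a sketch;
# the production rate is inferred from the numbers quoted in the text).
N_A = 6.022e23             # Avogadro constant, 1/mol
RHO = 3.52                 # diamond density, g/cm^3
M_C = 12.011               # molar mass of carbon, g/mol
N_ATOMS = RHO / M_C * N_A  # carbon atoms per cm^3, ~1.76e23

def vacancy_ppm(fluence_cm2, production_rate_per_cm=5.0):
    """Nominal (pre-annealing) vacancy concentration in atomic ppm."""
    n_vac = fluence_cm2 * production_rate_per_cm   # vacancies per cm^3
    return n_vac / N_ATOMS * 1e6

# sample 3 (1e18 cm^-2): ~28 ppm, quoted as ~30 ppm; sample 2: twice that
```

As the text notes, the vacancy concentration actually retained is smaller, since part of the vacancies is consumed by N-related defects or diffuses to the surface during the hot irradiation.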
\section{Electron Paramagnetic Resonance}
\label{epr}
\begin{figure}
\includegraphics[width=1\columnwidth]{epr.eps}
\caption{EPR spectra obtained at room temperature of samples~1 (as-received) and 2 (after irradiation), see Table I,
for $\bf{B_0} \|$[111] showing the signals from: (a) C-centres and (b) from NV$^-$-centres.
The spectrum of the NV$^-$-centres was recorded with a tenfold higher receiver gain.
The inset in (b) displays the $^{14}$N hyperfine splitting of the $m_S = +1 \leftrightarrow 0$ signal of the NV$^-$-centres at 233.8 mT.}
\label{fig:epr1}
\end{figure}
Figure~\ref{fig:epr1} illustrates EPR spectra of the diamond single crystal samples 1 and 2 recorded for $\bf{B_0} \|$[111]. Both samples show the intense signals of the C-donor of nitrogen incorporated at carbon lattice sites having an electron spin $S = 1/2$ and the typical $^{14}$N hyperfine (hf) splitting into three lines
(Fig.\ref{fig:epr1}(a)) \cite{lou78}. For the chosen orientation of the single crystals the external magnetic field $\bf{B_0}$ points along the $C_3$ symmetry axis of one of the four magnetically nonequivalent C sites in the diamond lattice \cite{lan91}. The $C_3$ symmetry axis of the C-centres defines the symmetry axes of its axially symmetric $g$ and $^{14}$N hf coupling tensor. The two outer hf lines at 331.8 and 339.9 mT together with the central hf line at 335.8 mT belong to this orientation of the C-centres. The symmetry axes of the other three sites of the C-centres make an angle of 109.47$^\circ$ with $\bf{B_0}$ and lead to hf lines at 332.8~mT and 338.9~mT in addition to the central hf line at 335.8~mT. Sample 3 showed a comparable spectrum of the C-centres. The concentrations of the C-centres in the three diamond single crystal samples 1, 2, and 3 (Table I) were determined by double integration of the full EPR spectrum taking into account the quality factor of the cavity and comparison with an ultramarine standard sample with known spin number.
We emphasize that the concentration of C-centres obtained with this method is similar to the one obtained by
IR spectroscopy, see Table~I and the next section.
A broader magnetic field scan with a larger amplification as done for sample 2 (Fig.\ref{fig:epr1}(b)) revealed
additional signals of the NV$^-$-centres with $S = 1$ \cite{lou78}. Note that the intense signals at about 336~mT in Figure~\ref{fig:epr1}(b) are due to C-centres. The NV$^-$-centre has been assigned to a nitrogen atom substituting a carbon lattice site, associated with a nearest-neighbour carbon vacancy. NV$^-$-centres could not be observed for sample 1. The $C_3$ symmetry axis of the
NV$^-$-centres is oriented along the [111] direction and determines the symmetry
axis of the axially symmetric zero-field-splitting tensor of these centres. In samples 2 and 3, the concentration of NV$^-$-centres
is $\sim(2 \pm 0.3)$~ppm. Therefore, we assume that these centres
do not play any role in the phenomena we discuss in this work.
\section{Infrared spectroscopy}
\label{is}
In a first published study of similar diamond crystals, a correlation between the magnitude of
the anomalous magnetization below 30~K and the C-centres was found \cite{bardia}.
Therefore, we characterized the concentration and the spatial distribution of this magnetic defect.
The characterization of these centres in diamond using IR spectroscopy has been reported in several earlier studies \cite{boy95,law98,hai12,kaz16}, including radiation damage and subsequent annealing \cite{col09}.
The C-centres are responsible for the broad absorption peak at 1130~cm$^{-1}$
followed by a very sharp absorption maximum at 1344~cm$^{-1}$, see Fig.~\ref{ir}. The broad one at 1130~cm$^{-1}$ is attributed to a quasi-local
vibration at single substitutional nitrogen atoms \cite{bri91,law98,hai12,kaz16}, whereas the one at 1344~cm$^{-1}$ is due to a local mode of vibration of the carbon atom
at the C-N bond with the unpaired electron \cite{col82}.
\begin{figure}
\includegraphics[width=1\columnwidth]{ir.eps}
\caption{(a) Infrared spectra of the absorbance vs. wavenumber of sample 1 in the as-received state, measured in transmission with a
resolution of $1$~cm$^{-1}$. The inset shows a blow-up of the high-wavenumber region. (b) Similar for sample 2 after electron irradiation. The different curves (B $\ldots$ J) in both figures were obtained at different positions of the sample.}
\label{ir}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{CI-1.eps}
\caption{Distribution of C-centres in an area of $(170 \times 170)~\mu$m$^2$ of sample~1,
obtained from the maximum intensity of the IR band at 1344~cm$^{-1}$ using a FPA detector with
a resolution of 4~cm$^{-1}$. The color scale indicates the concentration of C-centres in ppm.
Note that this
two-dimensional distribution represents the absorption integrated through the whole
sample thickness at the selected
energy.}
\label{CI}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{CI-2.eps}
\caption{Distribution of C-centres in an area of $(170 \times 170)~\mu$m$^2$ of sample~2: (a) after electron irradiation and (b) after annealing measured at the same location using a FPA detector with
a resolution of 4~cm$^{-1}$. Data were obtained from the maximum intensity of the IR band at 1344~cm$^{-1}$.
The color scale indicates the concentration of C-centres in ppm.}
\label{CI-2}
\end{figure}
The concentration range of C-centres determined from IR spectra and measured at different positions
of the samples is given in Table~I. The values were obtained using the relationship C(ppm) $=
37.5~A_{1344\text{cm}^{-1}}$ \cite{law98}, where $A_{1344\text{cm}^{-1}}$ is the peak intensity
at the corresponding wavenumber. The concentration range of N$^+$-centres
can be estimated from the absorption peak at 1332~cm$^{-1}$ (using N$^+$ (ppm) = 5.5~A$_{1332\text{cm}^{-1}}$) \cite{law98}.
In all samples and sample states, the N$^+$ concentration is much smaller (by a factor of 3 to 10) than the C-centre concentration and
therefore it will not be taken into account in the discussion of the results.
We note that the range of C-centre concentrations obtained from
IR absorption agrees very well with the average concentration obtained from EPR, see Table~I.
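The two calibration relations used above can be collected in a small helper (a direct transcription of the Lawson et al. factors quoted in the text; the peak intensities are assumed to be baseline-corrected absorption values at the respective wavenumbers):

```python
def c_centre_ppm(A_1344):
    """C-centre (N0) concentration from the 1344 cm^-1 peak intensity
    (Lawson et al. calibration quoted in the text)."""
    return 37.5 * A_1344

def n_plus_ppm(A_1332):
    """N+ concentration from the 1332 cm^-1 peak intensity
    (Lawson et al. calibration quoted in the text)."""
    return 5.5 * A_1332

# e.g. a peak intensity of ~1.1 at 1344 cm^-1 corresponds to ~41 ppm
# C-centres, within the range measured for sample 1 (33...47 ppm, Table I)
```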
As an example, we show in Fig.~\ref{ir} the IR spectra of (a) sample 1 in the as-received state and (b) sample 2 after electron irradiation.
All spectra were obtained in transmission (MCT detector) with a resolution of $1$~cm$^{-1}$ at different positions of the sample surface.
After electron irradiation, a new maximum at 1450~cm$^{-1}$ appears, see Fig.~\ref{ir}(b).
We note that
in the as-received state of all samples this maximum is absent within experimental resolution, see Fig.~\ref{ir}(a) as an example.
The maximum absorption at 1450~cm$^{-1}$ was already shown to
develop during the annealing at $T > 500^\circ$C in type I diamonds after electron-irradiation and is related to N-interstitials
in the diamond lattice \cite{woo82}. Since then, it has been studied and reported several times \cite{kif96,gos04,bab16}.
The most recently published characterization measurements indicate that the origin of this maximum is related to
H1a-centres (di-nitrogen interstitials) \cite{gos04,bab16}. The vanishing or growth of H1a-centres appears to be
correlated with the interplay between the C- and N$^+$-centre aggregation processes. The results in \cite{bab16}
suggest that an increase in the density of H1a-centres is accompanied by a decrease in the density of C-centres.
In order to quantitatively estimate the extent of the increase of the H1a-centre concentration [H1a], we use
a relation given by Dale \cite{dal15}, which relates the band integral $A$ at 1450~cm$^{-1}$ (in cm$^{-1}$) to the concentration of interstitials
according to [H1a](ppm)~$= A / f$,
with $f = 3.4 \times 10^{-17}~$cm. Spectra taken with the MCT detector at various positions of the irradiated diamond before and after annealing were baseline-corrected and normalized according to \cite{dal15} before conversion of the band integrals to [H1a]. Finally, the individual values of [H1a] were averaged. Using this procedure for sample~2, we found a slight increase of [H1a] from $\simeq 3$~ppm before annealing to $\simeq 3.3$~ppm after annealing. These values indicate that
[H1a]~$\lesssim 15\%$ of the C-centre concentration.
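The conversion of the 1450~cm$^{-1}$ band integral to [H1a] can be sketched numerically. We read the quoted relation as giving a number density $A/f$ (integrated band area divided by the calibration factor $f$), which is then converted to atomic ppm via the diamond atom density; this unit interpretation is our assumption, since the relation is quoted above in compact form:

```python
N_ATOMS = 3.52 / 12.011 * 6.022e23   # carbon atoms per cm^3, ~1.76e23

def h1a_ppm(band_integral, f_cm=3.4e-17):
    """[H1a] in atomic ppm from the integrated 1450 cm^-1 band (Dale's
    calibration factor f). Reading A/f as a number density in cm^-3 and
    converting to ppm via the diamond atom density is our assumption."""
    n_h1a = band_integral / f_cm        # interstitials per cm^3
    return n_h1a / N_ATOMS * 1e6

# a band integral of ~18 would correspond to ~3 ppm, the value found
# for sample 2 before annealing
```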
Because the concentration of C-centres plays a main role in the origin of the phenomena observed in the magnetization,
it is of interest to display the concentration distribution of these centres. Figure~\ref{CI} shows one example of this distribution
measured in sample 1 (as-received state) recorded in transmission with the FPA detector, i.e. the image accounts for the absorption around 1344~cm$^{-1}$ measured through the whole sample thickness with a spectral resolution of 4~cm$^{-1}$.
This image shows a remarkable granular-like distribution of the concentration in the selected area
of $170~\mu$m~$ \times 170~\mu$m. This image indicates the existence of regions of a few $\mu$m$^2$ with clear differences in the concentration
of C-centres of up to $\sim 25\%$ between some neighbouring regions, see Fig.~\ref{CI}. According to the IR absorption peak at 1344~cm$^{-1}$, the concentration
of C-centres can reach values up to $\sim 50~$ppm locally, depending on the region of the sample.
\begin{figure}
\includegraphics[width=1\columnwidth]{CI-3.eps}
\caption{Similar to Figs.~\ref{CI} and \ref{CI-2} but for the IR absorption at 1450~cm$^{-1}$ due to H1a-centres of sample~2: (a) after electron irradiation and (b) after annealing. The color scale indicates the amplitude of the absorption maximum at 1450~cm$^{-1}$. To approximately estimate the concentration in ppm, multiply the numbers on the color scale by two.}
\label{CI-3}
\end{figure}
Figure \ref{CI-2} shows the concentration distribution of C-centres of sample~2 after electron irradiation (a) and subsequent annealing (b), see also Table~I,
both measured at the same sample area. The distribution of the H1a-centres correlated to the maximum at 1450~cm$^{-1}$, after electron irradiation (a) and
after subsequent annealing (b) at the same sample area, is shown in Fig.~\ref{CI-3}.
Several details are of interest, namely: (1) The average concentration of C-centres after electron irradiation decreases by $\simeq 50\%$ and shows a similar granular distribution as in the as-received samples but with a larger difference (up to 50\%) between neighboring regions.
(2) After annealing, see Fig.~\ref{CI-2}(b), the
average concentration of C-centres decreases further by $\sim 6\%$ but shows a more homogeneous
distribution, see also Table I. (3) The concentration of H1a-centres (see Fig.~\ref{CI-3}), which appear only after irradiation and annealing \cite{woo82},
becomes homogeneously distributed and its average density
increases by $\sim 10\%$ after annealing.
\section{Magnetization results}
\subsection{Field hysteresis loops}
The field hysteresis loops were measured at constant temperature after zero field cooling from 380~K.
The
field was swept from $0~\rm T\rightarrow +7$~T$\rightarrow -7$~T$\rightarrow +7$~T. After this
procedure, the applied field was set to zero, the sample was heated
to $380$~K and cooled in zero field to
a new constant temperature.
The main characteristics of the anomalous hysteresis at 2~K, observed in all samples before irradiation, can be seen in Figs.~\ref{2K-mH} and \ref{ht}.
The magnetic moment $m$ of the diamond samples shown in Figs.~\ref{2K-mH} and \ref{ht}
was measured with the large
surface areas of the samples parallel to the applied magnetic field, oriented $\perp$ [111]. Measurements
at other field directions were also done in order to check for the existence of any magnetic anisotropy.
The magnetic moment did not show any anisotropy within error, in agreement with the reported data in \cite{bardia}.
\begin{figure}
\includegraphics[width=1\columnwidth]{2K-mH.eps}
\caption{\label{2K-mH} Field hysteresis loops at $T = 2~$K of the
magnetic moment of samples (a) 2 and (b) 3, before and after irradiation. Note the small remaining hysteresis between 2~T and 4~T in the irradiated sample 3.
The diamagnetic background obtained at 300~K was subtracted from the measured data. The measurements of sample 2 were done in two
field directions with similar results within resolution. The direction of the applied field shown in all figures is $\perp $ [111]. }
\end{figure}
We note that the high sample temperature maintained during irradiation is not the reason for the changes we discuss below. To prove this, we
show in Fig.~\ref{ht} the magnetization field hysteresis loop at 2~K of sample~4 before and after annealing at 1200$^\circ$C in vacuum for 4 hours.
The field loops indicate that the main field hysteresis does not change significantly after annealing.
The influence of the electron irradiation on the field hysteresis can be clearly seen in Fig.~\ref{2K-mH}. The field
hysteresis is suppressed. The s-like field-loop curve obtained for both samples at $T = 2~$K
after irradiation is due exclusively to a paramagnetic (PM) phase, not a SPM one, as the measured temperature
dependence of the magnetic moment also indicates, see Section~\ref{td} below.
\begin{figure}
\includegraphics[width=1\columnwidth]{t-dep.eps}
\caption{\label{t-dep} (a) Magnetic moment vs. inverse temperature measured at a constant field of 1~T
for sample 3 in the as-received and irradiated states. Note the temperature hysteresis between the zero field cooling (ZFC) (increasing temperature path)
and the field cooling (FC) (decreasing temperature path) states. The dashed line follows $m(T) = 45 T^{-0.35} - 56$ $[\mu$emu]
and the dashed-dotted line $m(T) = 52 T^{-0.6} - 49$ $[\mu$emu] with $T$ in K. (b) Magnetic susceptibility, defined as the ratio between the magnetic moment and the
applied field, vs. inverse temperature for sample 2 in the irradiated state at three magnetic fields. No background was subtracted from
the raw data. The straight line is a linear fit to the data. }
\end{figure}
\subsection{Temperature hysteresis loops}\label{td}
To further demonstrate that the s-like field loop observed after irradiation, see Fig.~\ref{2K-mH}, is related to a PM phase,
we discuss the measured
temperature dependence before and after irradiation. Figure~\ref{t-dep}(a) shows the magnetic moment of sample 3 before and after irradiation
vs. inverse temperature obtained at an applied field of 1~T. As discussed in detail in \cite{bardia}, the Curie-like behavior observed at relatively high temperatures,
i.e. at $T > 20~$K, see Fig.~\ref{t-dep}(a), does not scale
with the applied field as one expects for a PM phase. Its susceptibility clearly increases with the applied field \cite{bardia}, indicating the development of
a SPM phase at low temperatures. This SPM phase does not show any hysteresis in field in the measured temperature range, although its s-shape in the field hysteresis loop looks
similar to a FM state. The clear hysteresis in the temperature loop,
see Fig.~\ref{t-dep}(a), has a different origin. In \cite{bardia} it has been interpreted as due to a SC contribution added to the SPM one.
After electron irradiation we observe remarkable changes in the temperature dependence of the magnetic moment, namely:\\
(a) The main change after irradiation
is observed at $T < 30~$K, where
the large hysteretic contribution is completely suppressed.
The magnetic moment shows a Curie-law behavior in
the whole temperature range, with a weak tendency to saturation at the lowest measured temperatures at an applied field of 1~T, see Fig.~\ref{t-dep}(a),
as expected for a PM phase.\\
(b) From the slope of $m$ vs. $1/T$ and assuming
a total angular momentum $J = 1/2$ and $g_J = 2$ per PM centre, we estimate a concentration of $(40\pm 5)$~ppm of PM-centres
for sample~3 after irradiation with a fluence of $1 \times 10^{18}$~cm$^{-2}$.
This value can be interpreted as due to the following contributions: one from the C-centres with
a concentration of $\sim 10$~ppm, see Table~I, one from H1a-centres
with a concentration of $\lesssim 5$~ppm, and a rest of $\sim 25$~ppm of magnetic centres due either to vacancies and/or to lattice defects produced by them. \\
(c) The results obtained from sample 2 after irradiation are shown in Fig.~\ref{t-dep}(b) where the susceptibility vs. inverse temperature
at three applied fields is plotted. The temperature dependence and the observed scaling with applied field are compatible with a PM phase, see Fig.~\ref{t-dep}(b).
From the susceptibility
slope vs. $1/T$ of sample~2, irradiated with a twice higher electron fluence, the
calculated density of PM centres is $(80 \pm 5)$~ppm. This is
a factor of four larger than the density of C-centres obtained by EPR and IR, see Table~I. It indicates that about 60~ppm
of the PM-centres measured by the
magnetic susceptibility should be related to extra lattice defects induced by the irradiation,
with a magnetic moment of the order of $1 ~\mu_B$.\\
(d) After an electron irradiation with a fluence that
produced a decrease of a factor of two in the C-centre concentration (equivalent to some tens of ppm),
any anomalous signs of hysteresis in field and temperature are completely absent
at $T \ge 2~$K, within experimental resolution. \\
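The estimate in (b) follows from the Curie law for $S=1/2$, $g=2$ centres, $m = N \mu_B^2 B /(k_B T)$, so the number of centres follows directly from the slope of $m$ vs.\ $1/T$. A sketch of the conversion to a ppm concentration (the slope value below is chosen for illustration, consistent with the $\sim 40$~ppm quoted for sample 3):

```python
K_B = 1.380649e-23                   # Boltzmann constant, J/K
MU_B = 9.2740e-24                    # Bohr magneton, J/T
N_ATOMS = 3.52 / 12.011 * 6.022e23   # carbon atoms per cm^3

def pm_centres_ppm(slope_emu_K, B_T, mass_mg, rho=3.52):
    """PM-centre concentration (ppm) from the Curie slope of m (emu)
    vs 1/T (K) at applied field B, assuming S = 1/2 and g = 2 per centre."""
    slope_SI = slope_emu_K * 1e-3                  # emu -> A m^2
    n_centres = slope_SI * K_B / (MU_B**2 * B_T)   # number of centres
    volume_cm3 = mass_mg * 1e-3 / rho              # sample volume
    return n_centres / (N_ATOMS * volume_cm3) * 1e6

# e.g. a slope of ~122 microemu*K at 1 T for the 9.8 mg sample 3 gives ~40 ppm
```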
\subsection{Partial recovery of the anomalies in the magnetization after high temperature annealing in vacuum}
\label{partial}
\begin{figure}
\includegraphics[width=1.1\columnwidth]{difT.eps}
\caption{Temperature dependence of the difference between the field cooling (FC) and zero field cooling (ZFC) states, $m_d = m_{FC} - m_{ZFC}$, measured in sample~3 in the as-received state, after irradiation, and after
the first and second annealing treatments, at a constant applied field of 1~T. }
\label{th}
\end{figure}
In an earlier study, Creedon et al. \cite{cre18} found that the observed suppression of SC
of B-doped diamond after
a certain fluence of He-ion irradiation could be partially recovered by annealing the sample in vacuum. Therefore, we annealed
some of the samples in vacuum at 1200$^{\circ}$C for 4~h with 10-hour ramps up and down, following the sequence used in
\cite{cre18}. Figure~\ref{th} shows the difference between the magnetic moment in the FC and ZFC states of sample 3,
obtained at a constant field of 1~T, in the as-received state, after irradiation and after two identical annealing
processes. A finite, negative difference $m_d = m_{FC} - m_{ZFC}$ is observed at 5~K~$\lesssim T \lesssim 35$~K in
the as-received state, in agreement with the earlier publication \cite{bardia}. An interpretation for this anomalous difference
is given in the discussion.
After electron irradiation $m_d = (0\pm 1)~\mu$emu in the whole temperature range, see Fig.~\ref{th}.
After the first annealing, a behavior similar to that of the as-received state is observed, but $m_d$ becomes
negative only at
$T < 10~$K, showing the minimum at $\sim 3~$K (instead of 10~K as in the as-received state).
A second annealing slightly increases $|m_d|$ with a small shift to higher temperatures.
The obtained behavior of $m_d(T)$ after annealing indicates a partial recovery of the regions responsible for
the anomalous behavior of the magnetization.
This partial recovery is also observed by
measuring the field hysteresis loops after annealing, in particular the field hysteresis width.
Figure~\ref{fh} shows the field hysteresis width at a constant field
of 3~T vs. temperature for sample~3 in the as-received state and after the two post-irradiation annealing treatments. The results indicate
Fig.~\ref{fh}. However, whether a critical temperature is really reduced respect to the one in the as-received state is
not so clear because of the different temperature dependences, see
Fig.~\ref{fh}. An apparent reduction of the superconducting $T_c$ obtained from transport measurements has been reported
after subsequent annealing treatment of a previously He-ion-irradiated
B-doped diamond sample \cite{cre18}.
\begin{figure}
\includegraphics[width=1.15\columnwidth]{FH.eps}
\caption{Field hysteresis width at a constant field of 3~T measured at different temperatures for sample 3 in
the as-received state and after the first and second annealing treatments following electron irradiation. The black line follows the
equation\\ $0.05 - 2.3 \ln(T/T_c)$ with $T_c=30$~K. This phenomenological equation was found to satisfactorily describe the critical line of
several as-received N-doped diamonds \cite{bardia}. The blue and dashed lines through the data points of the annealed
samples are only a guide to the eye.}
\label{fh}
\end{figure}
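The phenomenological critical line quoted in the caption of Fig.~\ref{fh} can be evaluated directly; note that with these parameters the hysteresis width extrapolates to zero essentially at $T_c$:

```python
import numpy as np

def hysteresis_width(T, Tc=30.0, a=0.05, b=2.3):
    """Phenomenological field-hysteresis width at 3 T vs. temperature,
    width = a - b*ln(T/Tc), with the parameters quoted for the
    as-received N-doped diamonds."""
    return a - b * np.log(T / Tc)

# the width vanishes at T = Tc*exp(a/b), i.e. only ~2% above Tc
T_zero = 30.0 * np.exp(0.05 / 2.3)
```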
\section{Discussion}
Pure diamond samples, without N- or B-doping, show no anomalous behavior in the magnetization
above 2~K but the usual diamagnetic response.
A very weak Curie-like behavior of the magnetic susceptibility below 100~K is
observed in some of the ``pure'' diamond samples,
with a temperature irreversibility between the ZFC and FC states more than three orders of magnitude
smaller than the irreversibility we measured in N-doped diamond \cite{bardia}.
Because the concentration of boron in all diamond crystals presented in this work
is below 1~ppm,
we can certainly rule out B-doping as responsible for the anomalies
in the magnetization.
A correlation between the anomalous maximum in magnetization and
the concentration of C-centres has been obtained in \cite{bardia}.
Our results also indicate a correlation with the
C-centres. This correlation is, however, not a simple one, as the vanishing and recovery of the anomalies and the
corresponding C-centre concentrations in those states indicate, see Table~I.
As emphasized in the introduction, the similarities between the magnetization loops obtained in N-doped diamonds and
FM/SC bilayers, see Fig.~\ref{ht}, suggest that the anomalous behavior
is related to the existence of both a SC and a SPM phase, as proposed in the original publication.
Such an anomalous field hysteresis can be simulated by a simple
superposition of the field-dependent magnetization of each phase plus, possibly, a small diamagnetic background, linear
in field. The good fits of the field hysteresis data of the diamond samples (as well as of the bilayers)
to this model can be seen in \cite{bardia}. The obtained temperature dependence of the fit parameters
also supports this model.
At this point we would like
to make a couple of remarks on the EPR measurements. One could argue that, if such a
mixture of SC and SPM phases exists in our samples below 30~K, then we would expect to see a broadening of the EPR line related to
the C-centres at low temperatures. To check for this effect we have done
additional EPR experiments with samples 1, 2 and 3 at 11~K.
No significant line broadening of the signals of the C- and NV-centres
was observed in comparison with the room-temperature spectra.
The main reason for the absence of such a broadening of the EPR C-lines is the following.
EPR measures the average response of all C-centres, which are distributed throughout the whole sample.
Of all those regions, only a small part contains the phases responsible for the huge field (and temperature) hysteresis loops.
With our EPR equipment, it is simply not possible to resolve the signal of such a small fraction,
since there is always an overlap with the dominating signal from the C-centres located in the remaining parts of the sample.
\subsection{How large a C-centre concentration is needed to trigger ordering phenomena in diamond?}
This question has not been answered experimentally in the literature. Surprisingly, with the exception of \cite{bardia},
systematic bulk magnetization measurements of diamond samples
with different concentrations of P1- or C-centres have apparently not been published. Because the purity of diamond
crystals is nowadays high, and because especially magnetic impurities can be quantitatively measured with high resolution (e.g.
with PIXE), there is no clear reason why such measurements were not systematically done in the past.
To estimate the average distance between C-centres
we follow a procedure similar to that in \cite{hen19}, obtaining an average distance
between C-centres of $\langle d\rangle_{C-C} = (2.9 \pm 1.5)~$nm in the diamond matrix for a
mean concentration of the order of 50~ppm.
Because the concentration of C-centres is not homogeneously distributed in the diamond samples, it is
quite plausible that clusters with a smaller average distance exist.
Is such a distance
between C-centres in the diamond matrix too large to trigger ordering phenomena, even at low temperatures?
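The quoted average distance can be reproduced assuming randomly placed centres and the mean nearest-neighbour distance of a random (Poisson) point distribution, $\langle r\rangle \simeq 0.554\, n^{-1/3}$; that this is the measure behind the procedure of \cite{hen19} is our reading:

```python
def mean_nn_distance_nm(c_ppm):
    """Mean nearest-neighbour distance (nm) between randomly distributed
    C-centres at an atomic concentration c_ppm in diamond, using
    <r> ~ 0.554 * n^(-1/3) for a random point distribution."""
    n_atoms = 3.52 / 12.011 * 6.022e23      # carbon atoms per cm^3
    n = c_ppm * 1e-6 * n_atoms              # C-centres per cm^3
    return 0.554 * n ** (-1.0 / 3.0) * 1e7  # cm -> nm

# ~2.7 nm at 50 ppm, within the quoted (2.9 +/- 1.5) nm
```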
Let us take a look at two recent publications that
studied the entanglement between single defect spins \cite{dol13} and NV$^-$-N$^+$ pair centres \cite{man18}.
In the first study, deterministic entanglement of electron spins over a distance of 10~nm was demonstrated at room temperature.
In the second work, done on 1b diamond, the authors found that for all NV-centres with a neutral N within a distance of
5~nm, an electron can tunnel, giving rise to NV$^-$-N$^+$ pairs. Taking these studies into account,
a non-negligible coupling between C-centres with an average distance of less than 3~nm does not appear impossible.
A concentration of 50~ppm of C-centres is much smaller than the necessary boron concentration ($\gtrsim 2~\%$) reported in the
literature to trigger the observed
superconducting transition in the electrical resistance. As we noted in the introduction, recent experimental studies
suggest that the relationship between the carrier concentration due to B-doping and the superconducting
critical temperature is less clear than previously assumed.
It may well be that the relatively high carrier concentration in those B-doped diamonds is necessary
to achieve percolation, i.e. a superconducting current path between voltage electrodes. This does not necessarily rule out that localized superconducting grains are
formed at lower B-concentrations.
\subsection{Granularity}
We may now ask why the superconducting transition is not observed in the electrical resistance of our N-doped diamond samples \cite{bardia}.
The reason is the granularity of the C-centre distribution: it prevents the percolation of
a superconducting path between voltage electrodes when their distance is much larger than the grain size, see Fig.~\ref{CI}.
This granularity, added to the apparently small
fraction of the total sample mass that shows a superconducting response, is the reason why
transport measurements are so difficult to perform on these samples.
From the comparison between the absolute magnetic signals in the diamond samples and those measured in oxide bilayers, a rough estimate of
the equivalent mass of the SC phase in the N-doped diamond samples
gives $\sim 1~\mu$g,
i.e. the volume of a cube of $70~\mu$m side length \cite{bardia}.
This estimate suggests that micrometer-sized
clusters of superconducting C-centres
should exist in the sample, as the image in Fig.~\ref{CI} suggests.
Note that the assumption that all the measured anomalous magnetic signals would come from
very few, well localized regions with a very high C-centres concentration in the percent region, as in the case of
B-doped diamond, does
not find support in our IR-absorption images. Also, the clear suppression of
the two phases after such a weak and homogeneous electron irradiation, together with the resulting decrease
of the C-centre concentration, speaks against such an assumption. The obtained evidence speaks for a correlation of
the observed phenomena with localized clusters of C-centres with concentrations at least ten times smaller than the
reported B-concentration in B-doped diamond, see Section~\ref{intro} and references therein.
We note that several experimental details gained from our magnetization results
suggest that the SPM phase should emerge simultaneously with the SC one.
An example is the anomalous behavior of the difference $m_d = m_{FC} - m_{ZFC}$, see Fig.~\ref{th}. Note first that, if the sample
contained only a single superconducting phase, we would expect $m_d > 0$ at $T < T_c$. An interpretation for the anomalous behavior
of $m_d(T)$ has been provided in \cite{bardia}, taking into account that
the samples have both a SC and a SPM phase. In this case the anomalously large increase of the magnetic moment
$m_{ZFC}$ is due to the partial field expulsion in the SC regions.
In other words, an effective field higher than the applied one enhances the magnetic moment of the SPM phase
in the ZFC state, and therefore $m_d < 0$. At low enough temperatures, $m_{FC}$ increases and eventually $m_d > 0$, as observed.
We emphasize that similar results were obtained in FM/SC oxide bilayers, supporting this
interpretation \cite{bardia}.
\subsection{Possible origin for the existence of two antagonist phases within regions with high C-centres concentration}
Clustering of C-centres could produce a spin-glass phase arising from their spin-1/2 character as independent
magnetic moments. Following the ideas of the RVB model \cite{bas08,and87}, as the temperature is reduced the C-centres start to be weakly
coupled and could form pairwise, antiferromagnetically (AFM)
coupled singlets. A certain density of these donors may delocalize, leading to a SC state at low enough temperatures.
There are at least two possible scenarios that could explain the simultaneous
existence of the two phases. The rather conventional
possibility is that at low enough temperatures, in our case $T \lesssim 30~$K, a mixture of
paired and unpaired C-centres is formed, a spin liquid. The unpaired C-centres start to
contribute to the magnetic response under an applied magnetic field as a
SPM phase, which is the reason for the s-like field-loop contribution without any field hysteresis.
A SPM phase, instead of a FM one, is possible only if the spin-spin interaction (inversely proportional to the
distance between the spins) is weak enough within the used temperature range.
The successive coupling of the C-centres is predicted to be a kind of
hierarchical process, which leads to a non-Curie law in the magnetic susceptibility as, for example,
$\chi \propto T^{-\alpha}$. Such a process has been observed in P-doped Si with $\alpha \sim 0.6$ \cite{paa88,lak94}.
The FC susceptibility of our diamond samples in the as-received state follows a similar temperature dependence but
with $\alpha \simeq 0.35$, indicating the superposition of a strong diamagnetic response,
as expected if a SC phase develops, see Fig.~\ref{t-dep}(a).
The results in Fig.~\ref{t-dep}(a) indicate also that at high enough temperatures most of the
C-centres
depair and
contribute as independent PM centres to the magnetic susceptibility.
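The non-Curie power law quoted above, $\chi \propto T^{-\alpha}$, can be extracted from susceptibility data by a simple log-log linear regression. A minimal sketch (the data below are synthetic and purely illustrative, not the measured values of this work):

```python
import math

def fit_power_law(T, chi):
    """Least-squares fit of chi = C * T**(-alpha) via log-log regression."""
    x = [math.log(t) for t in T]
    y = [math.log(c) for c in chi]
    n = len(x)
    xm = sum(x) / n
    ym = sum(y) / n
    slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
        / sum((xi - xm) ** 2 for xi in x)
    alpha = -slope              # log chi = log C - alpha * log T
    C = math.exp(ym - slope * xm)
    return alpha, C

# Synthetic susceptibility obeying chi ~ T**-0.35 (illustrative only)
T = [5.0, 10.0, 20.0, 40.0, 80.0]
chi = [1.0 * t ** -0.35 for t in T]
alpha, C = fit_power_law(T, chi)
```

On real data, a diamagnetic (SC) contribution superimposed on the power law would show up as a systematic deviation from the straight line in the log-log plot.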
A less conventional possibility follows from arguments published in \cite{bas08} and \cite{mares08}:
the spin-spin interaction is the most robust attractive interaction
in this kind of doped system, and it may operate simultaneously with the electron-phonon interaction to create Cooper pairs.
It follows that, unlike s-wave superconductors, p-wave superconductors do not necessarily have their Cooper pairs mediated by phonons.
We may therefore argue that, instead of s-wave pairing, a p-wave SC state between the interacting donors could also be possible.
In this case the response of a system of p-wave-symmetry Cooper pairs to an applied magnetic field could produce
a mixed response of a SPM phase together with a field hysteresis loop due to the existence of vortices and/or
fluxons in the SC regions.
As an example, we refer to the theoretical study of the orbital magnetic dynamics in a p-wave superconductor
with strong crystal-field anisotropy \cite{bra06}. In this case the orbital moment of Cooper pairs (the directional order parameter)
does not lead to a definite spontaneous magnetization, i.e. no hysteresis as in a SPM phase.
Clearly, more experimental evidence tying the possible causes to a consistent, measurable effect is
necessary.
\subsection{What electron irradiation does and why the observed effect is of importance}
Let us now discuss the effect of the electron irradiation. One main result is that a nominal induced defect concentration $\sim 50$~ppm,
of the order of the concentration of C-centres in the as-received state of the samples,
eliminates completely the anomalies in the magnetization. This means that
the two phases, SC and SPM, vanish simultaneously.
Taking into account that most of the produced vacancies diffuse and/or give rise to N-related
defects, and the correlation found in \cite{bardia}, it is appropriate to correlate the vanishing of the anomalies after
irradiation with the measured decrease of $\sim 10\ldots 40$~ppm in the C-centres, see Table~I.
Qualitatively, the irradiation effect we found is similar to the vanishing of SC by He-ion irradiation
in B-doped diamond \cite{cre18}. This similarity
is remarkable because the studies in B-doped diamond show a suppression of SC after producing a vacancy concentration of
$\sim 2$\%, similar to the boron concentration. This result also indicates that in the N-doped samples
the SC/SPM regions should be
spread over the whole sample and not confined to certain localized regions, each with an orders-of-magnitude larger density of C-centres.
If that were the case, it would be
difficult to understand how a change of $\sim 40$~ppm in the defect concentration (C-centres and/or other lattice defects)
is enough to suppress the two phases.
This experimental result supports the
notion that nitrogen doping of diamond is extraordinary, especially because of the low level of doping needed to
trigger the observed irreversibilities in the magnetization, and at relatively high temperatures.
The suppression of SC
in B-doped diamond after He-irradiation has been explained assuming that the produced
vacancies act as donors, which compensate the holes introduced
by the substitutional boron atom \cite{cre18}. Evidently this argument is not applicable
in the case of the N-doped diamond, because nitrogen is already a donor in the diamond matrix.
From our results we may conclude that one main reason for the vanishing of the responsible phases by
electron irradiation is the decrease by $\simeq 50\%$ of the average concentration of C-centres, see Table~I.
However, if that decrease in the C-centres concentration were the only effect, we would still expect to
observe the two phases, though with a smaller
amplitude of the anomalies in the magnetization. This expectation is based on the rather weak dependence of
$T_c$ on the average concentration of C-centres, obtained from eight diamond samples
with differences of up to a factor of four \cite{bardia}.
It means that the irradiation-produced lattice defects strongly affect the interaction
between the remaining C-centres.
The lattice defects produced through irradiation (H1a-centres \cite{bab16}, remaining vacancies \cite{cam00} and other
N-centres) could affect not only the
randomness of the lattice strain, the distribution of the internal electric field and of the covalent bonds
that contribute to stabilize the SC phase \cite{liu20}; their magnetic moments could also play
a detrimental role in the coupling between the C-centres. If this were the case, then a
distribution of C-centres in a diamond lattice with a similar amount of
magnetic defects would not necessarily trigger SC but only PM.
In fact, the susceptibility results {\em after irradiation} indicate that most of the remaining C-centres contribute independently
to a PM state at all temperatures.
The studies in \cite{cre18} found that a partial recovery
of the SC occurs after annealing the He-irradiated B-doped diamond samples. A similar annealing
treatment done on our samples also produces a recovery of the anomalies in the magnetization, but their magnitude
is smaller than in the as-received state at similar temperatures,
see Fig.~\ref{fh}. What is actually the effect of annealing?
After electron irradiation, annealing at high temperatures and in vacuum reduces only slightly the average concentration of
C-centres ($\sim 6\%$, see Table~I) but increases significantly the
homogeneity of their spatial distribution, see Fig.~\ref{CI-2}. Because the total amount
of C-centres basically did not change after annealing, see Table~I, the partial recovery of
the responsible phases appears to be related to the clear increase in
homogeneity. Our IR results do not
indicate that the concentration of H1a-centres decreases with annealing.
\section{Conclusions}
Magnetization measurements of electron irradiated N-doped diamond crystals show the suppression of the anomalous
irreversible behavior in applied magnetic field and temperature. The suppression occurs
after producing a decrease of a few tens of ppm in the concentration of C-centres measured by IR absorption and EPR.
This is remarkable because of the relatively low density of C-centres, stressing the
extraordinary role of the C-centres in triggering those phenomena in diamond at relatively high temperatures. Spectroscopy methods
like ARPES to obtain information on the changes in the band structure produced by nitrogen would be of high interest. However,
it is not clear whether such a technique would be successful on samples similar to those studied here, due to the small mass
of the phases that originate the anomalies in the magnetization.
We believe that future work should try to study the magnetic and electrical response locally in order to localize the regions of interest.
According to our results, the regions of interest should be distributed all over the samples, and therefore such local
studies should have a reasonable chance of success. The main problem, however, would be the long
scanning time (at fixed temperature and magnetic field) for the typical areas of
the samples studied here.
\medskip
\textbf{Acknowledgements} \par
The authors thank W. B\"ohlmann for the technical support.
One of the authors (PDE) thanks G. Baskaran and G. Zhang for fruitful discussions on their work.
The studies were supported by the DFG under the grant DFG-ES 86/29-1 and DFG-ME 1564/11-1. The work in Russia was partially funded by RFBR and NSFC research project 20-52-53051.
\medskip
\section{A new synthesis of time variable $G,\Lambda$ models as MOG models}
Cosmological data from different sources testify to the fact
that our Universe is made of $73\%$ of a substance with negative pressure
(dark energy, DE), $23\%$ missing mass (dark matter, DM) and
only $4\%$ ordinary baryonic matter
\cite{Riess et al}. DM and DE may interact, and the nature of this
interaction is unknown to physics; it is not an interaction of the
electromagnetic type between material bodies.
Since the type of interaction is unknown, its mathematical form is
determined phenomenologically; in a general classification the
interaction function can be written as
$Q=Q(H,\dot{H},\rho_m,\rho_{DE},\rho_{DM},\ldots)$.
\par
Several models have been proposed to explain the Universe's
accelerated expansion \cite{jamil1}-\cite{jamil7}. The models can be divided into two general
groups. The first group, in which Einstein's theory of gravity is
corrected with new geometric terms, is known as
geometric models; the first of these is $f(R)$ gravity, which is
obtained by replacing the Ricci curvature $R$ with an arbitrary function $f(R)$
\cite{H. A. Buchdahl}. In the second group of models, the accelerated
expansion is attributed to exotic fluids with negative pressure. It
is believed that such an exotic fluid mimics a dark energy equation of
state in the present era
(for a modern review see \cite{Bamba:2013iga},\cite{Sami:2013ssa}).
Both types of models have different applications, and important results have been derived from them as alternative cosmological models
\cite{Motohashi:2010zz,Motohashi:2010tb,Motohashi:2010qj,
Appleby:2009uf,Starobinsky:2007hu}.
\par
Several properties of DE have been studied in numerous
papers \cite{Kiefer:2010pb,Shafieloo:2009ti,Sahni:2008xx}. DE can
decay \cite{Alam:2004jy} or be reconstructed from different theoretical
models \cite{Sahni:2006pa}. There is no simple and unique model that
can describe this exotic energy. Models based on scalar-tensor
fields are able to address such complex issues with
simple mathematics, to the extent possible
\cite{Gannouji:2006jm,Gannouji:2007im}, so they are very attractive
models to study. Among all the different cosmological models, a
scalar-tensor model has been proposed that is able to explain DM
and the dynamics of galaxy clusters with an additional vector field,
relying only on baryonic matter \cite{Moffat}. This model is known as
STVG or MOG. MOG can be seen as a covariant theory with
vector-tensor-scalar fields for gravity, with the following action:
\begin{eqnarray}
&&S=-\frac{1}{16\pi}\int \frac{1}{G}(R+2\Lambda)\sqrt{-g}d^4x+S_{\phi}+S_{M}\\&& \nonumber-\int \frac{1}{G}\Big[\frac{1}{2}g^{\alpha\beta}\Big(\nabla_{\alpha}\log G\nabla_{\beta}\log G+\nabla_{\alpha}\log \mu\nabla_{\beta}\log \mu\Big)+U_{G}(G)+W_{\mu}(\mu)\Big]\sqrt{-g}d^4x.
\end{eqnarray}
\par
The first term of the action is the Einstein-Hilbert Lagrangian. The
second term is the conventional scalar-field action, and the last term
contains the kinetic energy of the field $G$, which plays the role of the
gravitational constant (for slowly varying fields, $G$ can be treated
as a time-dependent gravitational constant) \cite{Moffat:2011rp}. Actions of this class are
written in covariant form and are used to investigate
astrophysical phenomena such as the rotation curves of galaxies, the mass
distribution of cosmic clusters, or gravitational lenses. The model
has been considered a suitable alternative to the $\Lambda$CDM model
\cite{Toth:2010ah}. In order to understand the role of the scalar and
vector fields, we write the equations of motion for the FLRW metric:
$$
ds^2=dt^2-a(t)^2[(1-kr^2)^{-1}dr^2+r^2d\Omega^2],\ \ d\Omega^2=d\theta^2+\sin^2{\theta}d\phi^2$$
The equations can be rewritten as generalized Friedmann equations as
follows \cite{Moffat:2007ju}:
\begin{eqnarray}
&&H^2+\frac{k}{a^2}=\frac{8\pi G\rho}{3}
-\frac{4\pi}{3}\left(\frac{\dot{G}^2}{G^2}+\frac{\dot{\mu}^2}{\mu^2}-\dot{\omega}^2-G\omega\mu^2\phi_0^2\right)\nonumber\\
&&\qquad{}+\frac{8\pi}{3}\left(
\omega GV_\phi+\frac{V_G}{G^2}+\frac{V_\mu}{\mu^2}+V_\omega
\right)
+\frac{\Lambda}{3}+H\frac{\dot{G}}{G},
\label{eq:FR1}\\
&&\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3p)
+\frac{8\pi}{3}\left(\frac{\dot{G}^2}{G^2}+\frac{\dot{\mu}^2}{\mu^2}-\dot{\omega}^2-G\omega\mu^2\phi_0^2\right)\nonumber\\
&&\qquad{}+\frac{8\pi}{3}\left(
\omega GV_\phi+\frac{V_G}{G^2}+\frac{V_\mu}{\mu^2}+V_\omega
\right)
+\frac{\Lambda}{3}+H\frac{\dot{G}}{2G}+\frac{\ddot{G}}{2G}-\frac{\dot{G}^2}{G^2},\nonumber\\
&&\ddot{G}+3H\dot{G}-\frac{3}{2}\frac{\dot{G}^2}{G}+\frac{G}{2}\left(\frac{\dot{\mu}^2}{\mu^2}-\dot{\omega}^2\right)+\frac{3}{G}V_G-V_G'\nonumber\\
&&\qquad{}+G\left[\frac{V_\mu}{\mu^2}+V_\omega\right]
+\frac{G}{8\pi}\Lambda-\frac{3G}{8\pi}\left(\frac{\ddot{a}}{a}+H^2\right)=0,\\
&&\ddot{\mu}+3H\dot{\mu}-\frac{\dot{\mu}^2}{\mu}-\frac{\dot{G}}{G}\dot{\mu}+G\omega\mu^3\phi_0^2+\frac{2}{\mu}V_\mu-V'_\mu=0,\\
&&\ddot{\omega}+3H\dot{\omega}-\frac{\dot{G}}{G}\dot{\omega}-\frac{1}{2}G\mu^2\phi_0^2+GV_\phi+V'_\omega=0.\label{eq:omega}
\end{eqnarray}
The interaction terms of the scalar and vector fields in the above
equations are self-interactions, described by arbitrary
mathematical functions: $V_\phi(\phi)$, $V_G(G)$,
$V_\omega(\omega)$, and $V_\mu(\mu)$. The resulting equations of
motion are highly nonlinear, and there is no possibility of finding
analytical solutions; the only feasible way to evaluate them is
numerical. At the same time, we must also specify the shape
of the interactions $V_{i}$; mathematical convenience may then guide
the choice of a particular family of potentials. If we identify
the $G$ scalar field with a time-variable gravitational constant $G(t)$
and neglect the contributions of the other fields in favor of
$G(t)$, then, since cosmological data imply $\frac{\dot{G}}{G}\ll
1$, the slow time evolution of $G(t)$ provides the major contribution.
In fact, large-scale cosmological data confirm our conjecture
of keeping only $G(t)$, and the kinetic part of $G(t)$ can be
neglected because:
\begin{eqnarray}
&&g^{\alpha\beta}\nabla_{\alpha}\log G\nabla_{\beta}\log G \simeq (\frac{\dot{G}}{G})^2 \ll 1.
\end{eqnarray}
Disregarding second-order derivatives of the additional fields, which
introduce additional degrees of freedom, and in the approximation
that the time evolution of the fields is very slow, the MOG action and
the Einstein-Hilbert action can be considered the same. The
difference is that $G(t)$ is now a scalar, time-variable field.
If we consider small variations of $G(t)$ and take $G(t)$ and $\Lambda$
to be functions of time \cite{Bonanno:2006xa}, the action reduces to
\begin{eqnarray}
S\simeq-\frac{1}{16\pi}\int \frac{1}{G}(R+2\Lambda)\sqrt{-g}d^4x+S_{M},
\end{eqnarray}
and the field equations become (see for instance \cite{Abdussattar})
\begin{equation}
R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\approx -8 \pi G(t) \left[ T_{\mu\nu} -
\frac{\Lambda(t)}{8 \pi G(t)}g_{\mu\nu} \right]\label{FEQ},
\end{equation}
The energy-momentum tensor of the matter fields (ordinary or exotic) is given by:
\begin{eqnarray}
T_{\mu\nu}=\mathcal{L}_{M}g_{\mu\nu}-2\frac{\delta \mathcal{L}_{M}}{\delta g^{\mu\nu}}.
\end{eqnarray}
Cosmological models based on these equations of motion have been investigated several times by different authors \cite{Abdussattar,Jamil:2009sq,Lu:2009iv,Sadeghi:2013xca}.
We approach the problem with a more general view: as shown above, MOG in the weak-field, slowly varying limit induces a gravitational field $G(t)$, so our paper can be considered a cosmological analysis of MOG in the weak-field regime. We are particularly interested in how the cosmological data
$SneIa+BAO+CIB$ constrain our model parameters.\par Our plan
in this paper is as follows. In section II we introduce the cosmological
constant and dark-energy models built from $\{H,\dot{H},\ldots\}$. In section
III we extract the dynamics of the model, the additional equation
governing $G(t)$, and the different densities. Section IV contains the
numerical analysis of the equations, section V the statefinder
parameters $(r,s)$, and section VI the observational
constraints. The final section is devoted to the conclusions.
\section{Toy models}
A DE model of our interest is described via energy density
$\rho_{D}$ \cite{Chen}:
\begin{equation}\label{eq:rhoD}
\rho_{D}=\alpha\frac{\ddot{H}}{H}+\beta \dot{H}+\gamma H^{2},
\end{equation}
where $\beta$ and $\gamma$ are positive constants, while for $\alpha$, in light of the time-variable scenario, we suppose that
\begin{equation}\label{eq:alpha}
\alpha(t)=\alpha_{0}+\alpha_{1} G(t)+\alpha_{2} t \frac{\dot{G}(t)}{G(t)},
\end{equation}
where $\alpha_{0}$, $\alpha_{1}$ and $\alpha_{2}$ are positive
constants and $G(t)$ is a varying gravitational constant. It is a
generalization of the Ricci dark energy scenario \cite{riccide} to
higher-derivative terms of the Hubble parameter. An interaction term
$Q$ between DE and a barotropic fluid with $P_{b}=\omega_{b}\rho_{b}$ is
taken to be
\begin{equation}\label{eq:Q}
Q=3Hb(\rho_{b}+\rho_{D}).
\end{equation}
We propose three phenomenological models for DE, as follows:
\begin{enumerate}
\item The first model is the simplest one, in which we assume that the time-variable cosmological constant has the same energy density as the DE:
$$\Lambda(t)=\rho_{D}.$$ In this model, $\rho_{D}$ is determined using the continuity equation with a dissipative interaction term $Q$.
\item Secondly, a generalization of the cosmological constant is proposed: a modified Ricci DE model, extended to the time-variable scenario, with an oscillatory form in terms of $H$:
$$\Lambda(t)=\rho_{b}\sin^{3}{(tH)}+\rho_{D}\cos{(tH)}.$$
Thinking of the trigonometric terms as oscillations, their
amplitudes are assumed to be proportional to the
barotropic and DE components; meanwhile these components satisfy the
continuity equations.
\item
The last toy model is inspired by the small variation of $G(t)$ and
a logarithmic term in $H$. Here the coefficients are written in terms
of the barotropic and DE densities:
$$\Lambda(t)=\rho_{b}\ln{(tH)}+\rho_{D}\sin{\left (t\frac{\dot{G}(t)}{G(t)} \right )}.$$ In this model, a time-dependent, variable-$G$ assumption is imposed.
\end{enumerate}
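For reference, the three ansätze can be transcribed directly as functions of the instantaneous densities, $t$, $H$ and $\dot{G}/G$ (a schematic transcription; the variable names are ours):

```python
import math

def lambda_model1(rho_D, rho_b, t, H, dG_over_G):
    # Model 1: Lambda(t) = rho_D
    return rho_D

def lambda_model2(rho_D, rho_b, t, H, dG_over_G):
    # Model 2: oscillatory form, amplitudes set by the two densities
    return rho_b * math.sin(t * H) ** 3 + rho_D * math.cos(t * H)

def lambda_model3(rho_D, rho_b, t, H, dG_over_G):
    # Model 3: logarithmic term in H plus a slow-G modulation
    return rho_b * math.log(t * H) + rho_D * math.sin(t * dG_over_G)
```

Each function is evaluated along the numerical solution, so the densities entering the amplitudes are those satisfying the continuity equations.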
Following the suggested models, we will study the time evolution and
cosmological predictions of our model. Furthermore, we
will compare the numerical results with a package of observational
data.
\section{Dynamic of models}
By using the
following FRW metric for a flat Universe,
\begin{equation}\label{s2}
ds^2=-dt^2+a(t)^2\left(dr^{2}+r^{2}d\Omega^{2}\right),
\end{equation}
field equations (\ref{FEQ}) can be reduced to the following Friedmann equations,
\begin{equation}\label{eq: Fridmman vlambda}
H^{2}=\frac{\dot{a}^{2}}{a^{2}}=\frac{8\pi G(t)\rho}{3}+\frac{\Lambda(t)}{3},
\end{equation}
and,
\begin{equation}\label{eq:fridman2}
\frac{\ddot{a}}{a}=-\frac{4\pi
G(t)}{3}(\rho+3P)+\frac{\Lambda(t)}{3},
\end{equation}
where $d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}$, and $a(t)$
represents the scale factor. \\
Energy conservation law $T^{;j}_{ij}=0$ reads as,
\begin{equation}\label{eq:conservation}
\dot{\rho}+3H(\rho+P)=0.
\end{equation}
Combination of (\ref{eq: Fridmman vlambda}), (\ref{eq:fridman2}) and (\ref{eq:conservation}) gives the relationship between $\dot{G}(t)$ and $\dot{\Lambda}(t)$
\begin{equation}\label{eq:glambda}
\dot{G}=-\frac{\dot{\Lambda}}{8\pi\rho}.
\end{equation}
To introduce an interaction between DE and DM (\ref{eq:conservation}) we should mathematically split it into two following equations
\begin{equation}\label{eq:inteqm}
\dot{\rho}_{DM}+3H(\rho_{DM}+P_{DM})=Q,
\end{equation}
and
\begin{equation}\label{eq:inteqG}
\dot{\rho}_{DE}+3H(\rho_{DE}+P_{DE})=-Q.
\end{equation}
For the barotropic fluid with $P_{b}=\omega_{b}\rho_{b}$ (\ref{eq:inteqm}) will take following form
\begin{equation}
\dot{\rho}_{b}+3H(1+\omega_{b}-b)\rho_{b}=3Hb\rho_{D}.
\end{equation}
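As a sanity check on this continuity equation, one can step it forward with an explicit Euler scheme against a prescribed matter-like background $H(t)=2/(3t)$. For illustration only, $\rho_{D}$ is truncated to its $\gamma H^{2}$ piece (the $\alpha$ and $\beta$ terms dropped), so this is a reduced sketch, not the full system solved below:

```python
# Forward-Euler step of  rho_b' + 3H(1 + w_b - b) rho_b = 3 H b rho_D
# with H(t) = 2/(3t) prescribed and rho_D ~ gamma * H**2 (reduced case).

def evolve_rho_b(rho_b0, t0, t1, steps, w_b=0.3, b=0.01, gamma=0.5):
    dt = (t1 - t0) / steps
    t, rho_b = t0, rho_b0
    for _ in range(steps):
        H = 2.0 / (3.0 * t)
        rho_D = gamma * H * H
        drho = -3.0 * H * (1.0 + w_b - b) * rho_b + 3.0 * H * b * rho_D
        rho_b += dt * drho
        t += dt
    return rho_b

rho_end = evolve_rho_b(1.0, 1.0, 2.0, 10000)
```

With the interaction switched off ($b=0$) the scheme reproduces the analytic dilution $\rho_{b}\propto t^{-2(1+\omega_{b})}$ for this background, which serves as the accuracy check.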
Pressure of the DE can be recovered from (\ref{eq:inteqG})
\begin{equation}
P_{D}=-\rho_{D}-\frac{\dot{\rho}_{D}}{3H}-b\frac{3H^{2}-\Lambda(t)}{8 \pi G(t)}.
\end{equation}
Therefore, with a fixed form of $\Lambda(t)$, we will be able to
observe the behavior of $P_{D}$. The cosmological parameters of our interest
are the EoS parameter of DE, $\omega_{D}=P_{D}/\rho_{D}$, the EoS parameter
of the composed fluid
$$\omega_{tot}=\frac{P_{b}+P_{D} }{\rho_{b}+\rho_{D}},$$
and the deceleration parameter $q$, which can be written as
\begin{equation}\label{eq:accchange}
q=\frac{1}{2}(1+3\frac{P}{\rho} ),
\end{equation}
where $P=P_{b}+P_{D}$ and $\rho=\rho_{b}+\rho_{D}$.
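The sign convention of Eq. (\ref{eq:accchange}) is easy to keep straight with a trivial helper (ours): $q=-1$ for a pure cosmological-constant fluid $P=-\rho$, $q=1/2$ for pressureless matter, and $q=0$ at the acceleration threshold $P=-\rho/3$.

```python
def deceleration(P, rho):
    """q = (1 + 3 P / rho) / 2; q < 0 corresponds to accelerated expansion."""
    return 0.5 * (1.0 + 3.0 * P / rho)
```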
We now have the full system of equations of motion and interaction terms, and are ready to investigate the cosmological predictions of our model.
\section{Numerical analysis of the Cosmological parameters}
In the next subsections we fully analyze the time evolution of the three
DE models. Using numerical integration, we will show how the cosmological
parameters $H$, $G(t)$, $q$, $w_{\text{tot}}$, the time decay rate
$\frac{d\log G}{dt}$ and the densities $\rho_D,\ldots$ change. Parameters
such as $H_0$ are fitted from observational data.
\subsection{Model 1: $\Lambda(t)=\rho_{D}$}
In this subsection we will consider $\Lambda(t)$ to be of the form
\begin{equation}\label{eq:lambda1}
\Lambda(t)=\rho_{D}.
\end{equation}
Therefore for the pressure of DE we will have
\begin{equation}\label{eq:P1}
P_{D}=\left( \frac{b}{8 \pi G(t)} -1 \right )\rho_{D}-\frac{\dot{\rho}_{D}}{3H}-\frac{3b}{8 \pi G(t)}H^{2}.
\end{equation}
For the dynamics of $G(t)$ we have
\begin{equation}\label{eq:G1}
\frac{\dot{G}(t)}{G(t)}+\frac{\dot{\rho}_{D}}{3H^{2}-\rho_{D}}=0.
\end{equation}
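Equation (\ref{eq:G1}) can be integrated step by step once $H(t)$ and $\rho_{D}(t)$ are supplied. As a reduced, illustrative check (not the full system behind Fig.~\ref{fig:1}), take $\rho_{D}=\gamma H^{2}$ with $H=2/(3t)$; then $G\propto t^{2\gamma/(3-\gamma)}$ analytically, and a forward-Euler pass reproduces this growing behavior:

```python
# Euler integration of  dG/G = - rho_D' / (3 H^2 - rho_D)
# for the reduced case rho_D = gamma * H**2, H(t) = 2/(3t).
# Analytically G(t) is proportional to t**(2*gamma/(3-gamma)).

def evolve_G(G0, t0, t1, steps, gamma=0.5):
    dt = (t1 - t0) / steps
    t, G = t0, G0
    for _ in range(steps):
        H = 2.0 / (3.0 * t)
        Hdot = -2.0 / (3.0 * t * t)
        rho_D = gamma * H * H
        rho_D_dot = 2.0 * gamma * H * Hdot
        G += dt * G * (-rho_D_dot / (3.0 * H * H - rho_D))
        t += dt
    return G

G_end = evolve_G(1.0, 1.0, 2.0, 20000)  # grows, consistent with Fig. 1
```

For $\gamma=0.5$ the exponent is $0.4$, i.e. $G(2)/G(1)=2^{0.4}$, matching the increasing trend of $G(t)$ reported for this model.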
Performing a numerical analysis for the general case, we recover the
graphical behavior of the different cosmological parameters. The
behavior of the gravitational constant $G(t)$ against time $t$ is presented
in Fig.~\ref{fig:1}. We see that $G(t)$ is an increasing function.
The different plots represent the behavior of $G(t)$ as a function of the
parameters of the model. For this model, with this specific behavior
of $G(t)$, the Hubble parameter $H$ decreases over
time, consistent with the $\Lambda$CDM scenario. From the analysis of the
graphical behavior of $\omega_{tot}$ we conclude that with $\alpha_{0}=1$, $\gamma=0.5$, $\beta=3.5$,
$\omega_{b}=0.3$, $b=0.01$ (interaction parameter), increasing
$\alpha_{1}$ and $\alpha_{2}$ increases the value of
$\omega_{tot}$ at later stages of evolution, while at early
stages it is a decreasing function. For instance, with
$\alpha_{1}=0.5$ and $\alpha_{2}=0.5$ (blue line), $\omega_{tot}$ is
a constant and $\omega_{tot} \approx -0.9$ (top-left plot in
Fig.~\ref{fig:2}). The top-right plot of Fig.~\ref{fig:2} presents the
behavior of $\omega_{tot}$ against time as a function of
the parameter $b$ characterizing the interaction between DE and DM. We
see that at the later stages of the evolution the interaction
$Q=3Hb(\rho_{b}+\rho_{D})$ does not play any role. An effect of
the interaction can be observed only at relatively early stages of
evolution, and only when $b$ is much higher than the realistic values
estimated from observations. The bottom-left plot shows the
decreasing behavior of $\omega_{tot}$ at early stages of evolution,
while at later stages it becomes constant. This behavior is
observed for $\alpha_{0}=\alpha_{1}=\alpha_{2}=1$, $\omega_{b}=0.1$,
$b=0.01$ and increasing $\gamma$ and $\beta$; with the increase
in $\gamma$ and $\beta$, the value of $\omega_{tot}$ increases.
The bottom-right plot represents the behavior as a function of
$\omega_{b}$. In Fig.~\ref{fig:3}, the deceleration parameter $q$ is
observed to be a negative quantity
throughout the evolution of the Universe, i.e.\ we have an ever-accelerating
Universe. The right panel (top and bottom) shows that the
behavior of $q$ does not strongly depend on the interaction
parameter $b$ or the EoS parameter $\omega_{b}$. We also see that $q$
starts its evolution from $-1$ and for a very short period of the
history becomes smaller than $-1$, but afterwards $q>-1$ for
ever, giving hope that the observational facts can be modeled (at
later stages). The left panel (top and bottom) represents the behavior
of $q$ for $\alpha_{1}=\alpha_{2}$ and \{$\gamma$, $\beta$\} (top
and bottom), respectively. With the increase in the values of the
parameters, the value of $q$ increases. Some information about
$\omega_{D}$, $\Lambda(t)$ and $\dot{G}(t)/G(t)$ can be found in the
Appendix.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_G_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_G_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_G_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_G_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of Gravitational constant $G(t)$ against $t$ for Model 1.}
\label{fig:1}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_omega_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_omega_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_omega_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_omega_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of EoS parameter $\omega_{tot}$ against $t$ for Model 1.}
\label{fig:2}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_q_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_q_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_q_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_q_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of deceleration parameter $q$ against $t$ for Model 1.}
\label{fig:3}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_Hubble_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_Hubble_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_Hubble_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_Hubble_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of Hubble parameter $H(t)$ against $t$ for Model 1.}
\label{fig:10}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_alpha_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_alpha_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_alpha_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_alpha_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\alpha$ against $t$ for Model 1.}
\label{fig:11}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_omegaD_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_omegaD_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_omegaD_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_omegaD_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\omega_{D}$ against $t$ for Model 1.}
\label{fig:12}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/int_DG_alpha.eps} &
\includegraphics[width=50 mm]{Plots/first/int_DG_b.eps}\\
\includegraphics[width=50 mm]{Plots/first/int_DG_gamma.eps} &
\includegraphics[width=50 mm]{Plots/first/int_DG_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\dot{G}(t)/G(t)$ against $t$ for Model 1.}
\label{fig:13}
\end{figure}
\subsection{Model 2: $\Lambda(t)=\rho_{b}\sin^{3}{(tH)}+\rho_{D}\cos{(tH)}$}
For the second model we will consider the following phenomenological
form of the $\Lambda(t)$
\begin{equation}\label{eq:Lambda2}
\Lambda(t)=\rho_{b}\sin^{3}{(tH)}+\rho_{D}\cos{(tH)}.
\end{equation}
Taking into account (\ref{eq: Fridmman vlambda}) we can write $\Lambda(t)$ in a different form
\begin{equation}\label{eq:Lambda2new}
\Lambda(t)=\left [ 1+\frac{\sin(tH)^{3}}{8 \pi G(t)} \right ]^{-1} \left( \frac{3H^{2}}{8 \pi G(t)}\sin{(tH)}^{3}-\rho_{D}(\sin{(tH)}^{3}-\cos{(tH)})\right ).
\end{equation}
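Eq. (\ref{eq:Lambda2new}) follows from eliminating $\rho_{b}$ through the first Friedmann equation $3H^{2}=8\pi G(\rho_{b}+\rho_{D})+\Lambda$; the algebra can be spot-checked numerically (the test values below are arbitrary, ours):

```python
import math

def lambda_model2_closed(rho_D, t, H, G):
    # Lambda of Model 2 with rho_b eliminated via
    # 3 H^2 = 8 pi G (rho_b + rho_D) + Lambda.
    s3 = math.sin(t * H) ** 3
    c = math.cos(t * H)
    pref = 1.0 / (1.0 + s3 / (8.0 * math.pi * G))
    return pref * (3.0 * H * H * s3 / (8.0 * math.pi * G) - rho_D * (s3 - c))

# Spot check: recover rho_b from the Friedmann constraint and verify
# the defining ansatz Lambda = rho_b sin^3(tH) + rho_D cos(tH).
rho_D, t, H, G = 0.2, 1.3, 0.9, 1.1
lam = lambda_model2_closed(rho_D, t, H, G)
rho_b = (3.0 * H * H - lam) / (8.0 * math.pi * G) - rho_D
residual = lam - (rho_b * math.sin(t * H) ** 3 + rho_D * math.cos(t * H))
```

The residual vanishes to machine precision, confirming the rearrangement.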
\begin{equation}\label{eq:G2}
\frac{\dot{G}(t)}{G(t)}+\frac{\dot{\Lambda}(t)}{3H^{2}-\Lambda(t)}=0,
\end{equation}
Equation (\ref{eq:G2}), combined with (\ref{eq:Lambda2new}), gives us
the behavior of $G(t)$, shown in Fig.~\ref{fig:4}. We see that $G(t)$ is an
increasing-decreasing-increasing function (top panel and
bottom-right plot). The bottom-left plot gives information
about the behavior of $G(t)$ as a function of $\gamma$ and $\beta$,
with $\alpha_{0}=1$, $\alpha_{1}=\alpha_{2}=1.5$,
$\omega_{b}=0.3$ and $b=0.01$. We see that by increasing $\gamma$ and
$\beta$ we are able to change the behavior of $G(t)$. For instance,
$\gamma=0.5$ and $\beta=3.5$ (blue line) still
preserves the increasing-decreasing-increasing behavior, while for
higher values of the parameters the behavior of $G(t)$ changes
compared to the other cases within this model and we have an
increasing-decreasing behavior. The graphical behavior of $\omega_{tot}$
can be found in Fig.~\ref{fig:5}. The behavior of the deceleration
parameter $q$ for this model is almost the same as for Model
1, where $\Lambda(t)=\rho_{D}$. We also see that with increasing
$\gamma$ and $\beta$ we increase the value of $q$ (bottom-left
plot). The presence of the interaction $Q$ and of a barotropic fluid
with EoS parameter $\omega_{b}<1$ does not leave a serious
impact on the behavior of $q$. With $q>-1$, this model can be
compared with the observational facts.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_G_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_G_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_G_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_G_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of the gravitational constant $G(t)$ against $t$ for Model 2.}
\label{fig:4}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_omega_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_omega_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_omega_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_omega_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of EoS parameter $\omega_{tot}$ against $t$ for Model 2.}
\label{fig:5}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_q_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_q_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_q_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_q_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of deceleration parameter $q$ against $t$ for Model 2.}
\label{fig:6}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_Hubble_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_Hubble_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_Hubble_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_Hubble_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of Hubble parameter $H(t)$ versus $t$ for Model 2.}
\label{fig:14}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_alpha_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_alpha_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_alpha_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_alpha_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\alpha$ versus $t$ for Model 2.}
\label{fig:15}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_omegaD_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_omegaD_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_omegaD_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_omegaD_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\omega_{D}$ against $t$ for Model 2.}
\label{fig:16}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/sec/int_DG_alpha.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_DG_b.eps}\\
\includegraphics[width=50 mm]{Plots/sec/int_DG_gamma.eps} &
\includegraphics[width=50 mm]{Plots/sec/int_DG_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\dot{G}(t)/G(t)$ against $t$ for Model 2.}
\label{fig:17}
\end{figure}
\subsection{Model 3: $\Lambda(t)=\rho_{b}\ln{(tH)}+\rho_{D}\sin{\left (t\frac{\dot{G}(t)}{G(t)}\right) }$}
For this model we consider the following phenomenological form of $\Lambda(t)$:
\begin{equation}\label{eq:Lambda3}
\Lambda(t)=\rho_{b}\ln{(tH)}+\rho_{D}\sin{\left (t\frac{\dot{G}(t)}{G(t)}\right) }.
\end{equation}
Taking into account (\ref{eq: Fridmman vlambda}), we can write $\Lambda(t)$ in a different form:
\begin{equation}\label{eq:Lambda3new}
\Lambda(t)=\left [ 1+\frac{\ln(tH)}{8 \pi G(t)} \right ]^{-1} \left( \frac{3H^{2}}{8 \pi G(t)}\ln{(tH)}-\rho_{D}\left(\ln{(tH)}-\sin{\left (t\frac{\dot{G}(t)}{G(t)}\right) }\right)\right ).
\end{equation}
The conservation relation
\begin{equation}\label{eq:G3}
\frac{\dot{G}(t)}{G(t)}+\frac{\dot{\Lambda}(t)}{3H^{2}-\Lambda(t)}=0,
\end{equation}
together with (\ref{eq:Lambda3new}), determines the behavior of
$G(t)$. This model also exhibits several interesting
facts about the behavior of the cosmological parameters. After
recovering $G(t)$ we observe that it is generally an increasing
function; its graphical behavior for the different cases is given in
Fig.~\ref{fig:7}. For instance, varying $\beta$ and $\gamma$ with
$\alpha_{0}=\alpha_{2}=1$, $\alpha_{1}=1.5$, $\omega_{b}=0.3$ and
$b=0.01$ gives the following picture: for $\gamma=0.1$ and
$\beta=2.5$ (the blue line in the bottom-left plot) $G(t)$
decreases, while for higher values of $\gamma$ and $\beta$ it
increases at the later stages of evolution. Increasing $\omega_{b}$
decreases the value of $G(t)$ (bottom-right plot). We also observe
that there is a period in the history of the evolution where $G(t)$
can be constant: with $\alpha_{0}=\alpha_{2}=1$, $\alpha_{1}=1.5$,
$\gamma=0.5$, $\beta=3.5$ and $\omega_{b}=0.3$, in the
non-interacting case $b=0$ (the blue line in the top-right plot) we
find $G(t)=\mathrm{const}\approx 1.36$ at the later stages of
evolution, while when the interaction is included and the value of
$b$ is increased, the value of $G(t)$ increases. The dependence of
$G(t)$ on $\alpha_{0}$, $\alpha_{1}$ and $\alpha_{2}$ can be found
in the top-left plot of Fig.~\ref{fig:7}. Another cosmological
parameter investigated for this model is $\omega_{tot}$, describing
the interacting DE--DM two-component fluid. From Fig.~\ref{fig:8}
we can draw conclusions about its behavior: as a function of
$\alpha_{0}$, $\alpha_{1}$ and $\alpha_{2}$, with the other
parameters fixed, $\omega_{tot}$ decreases during the initial
stages of evolution and settles to a constant value at the later
stages. Increasing $\alpha_{1}$ and $\alpha_{2}$ increases
$\omega_{tot}$, and a decreasing--increasing--constant behavior can
be obtained (top-left plot). The top-right plot shows the role of
the interaction $Q$: starting from the non-interacting case $b=0$
and increasing $b$, the value of $\omega_{tot}$ increases. The
bottom panel of Fig.~\ref{fig:8} represents the dependence of
$\omega_{tot}$ on $\{ \gamma, \beta \}$ and $\omega_{b}$. The last
parameter discussed in this section is the deceleration parameter
$q$ recovered for this specific $\Lambda(t)$. Investigating its
behavior, we conclude that for this model $\gamma > 0.1$ and
$\beta>2.5$ should be taken in order to get $q>-1$
(Fig.~\ref{fig:9}, bottom-left plot): $q$ starts its evolution from
$-1$ and then stays strictly above $-1$ at the later stages of
evolution. The interaction as well as $\omega_{b}$ have only a
small impact on the behavior of $q$. The top-left plot of
Fig.~\ref{fig:9} represents the behavior of $q$ as a function of
$\alpha_{0}$, $\alpha_{1}$ and $\alpha_{2}$. As for the other
models, additional information about the remaining cosmological
parameters of this model can be found in the Appendix.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_G_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_G_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_G_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_G_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of the gravitational constant $G(t)$ against $t$ for Model 3.}
\label{fig:7}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_omega_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_omega_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_omega_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_omega_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of EoS parameter $\omega_{tot}$ against $t$ for Model 3.}
\label{fig:8}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_q_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_q_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_q_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_q_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of deceleration parameter $q$ against $t$ for Model 3.}
\label{fig:9}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_Hubble_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_Hubble_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_Hubble_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_Hubble_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of Hubble parameter $H(t)$ against $t$ for Model 3.}
\label{fig:18}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_alpha_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_alpha_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_alpha_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_alpha_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\alpha$ against $t$ for Model 3.}
\label{fig:19}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_omegaD_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_omegaD_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_omegaD_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_omegaD_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\omega_{D}$ against $t$ for Model 3.}
\label{fig:20}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/int_DG_alpha.eps} &
\includegraphics[width=50 mm]{Plots/third/int_DG_b.eps}\\
\includegraphics[width=50 mm]{Plots/third/int_DG_gamma.eps} &
\includegraphics[width=50 mm]{Plots/third/int_DG_omegab.eps}
\end{array}$
\end{center}
\caption{Behavior of $\dot{G}(t)/G(t)$ against $t$ for Model 3.}
\label{fig:21}
\end{figure}
\section{State finder diagnostic}
In the framework of GR, dark energy can explain the present cosmic
acceleration. Besides the cosmological constant, many other
candidates for dark energy (quintom, quintessence, branes, modified
gravity, etc.) have been proposed. Since dark energy is model
dependent, a sensitive diagnostic tool is needed to differentiate
between the different models. Since $\dot{a}>0$, we have $H>0$,
which signals the expansion of the universe; likewise, $\ddot{a}>0$
implies $q<0$. Since the various dark energy models all give $H>0$
and $q<0$, these quantities alone cannot discriminate between the
models, even with increasingly accurate cosmological observational
data. For this purpose we need a geometrical tool involving
higher-order time derivatives of the scale factor. Sahni
\emph{et al.} \cite{Sahni} proposed the geometrical statefinder
diagnostic, based on the dimensionless parameters $(r, s)$, which
are functions of the scale factor and its time derivatives. These
parameters are defined as
\begin{equation}\label{eq:statefinder}
r=\frac{1}{H^{3}}\frac{\dddot{a}}{a}, \qquad
s=\frac{r-1}{3(q-\frac{1}{2})}.
\end{equation}
For $8\pi G =1$ and $\Lambda=0$ we can obtain another form of the
parameters $r$ and $s$:
\begin{equation}\label{eq:rsrhop}
r=1+\frac{9(\rho+P)}{2\rho}\frac{\dot{P}}{\dot{\rho}}, ~~~ s=\frac{(\rho+P)}{P}\frac{\dot{P}}{\dot{\rho}}.
\end{equation}
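As a consistency check on the definitions above (a standard computation, not part of the paper's models), one can verify that the flat $\Lambda$CDM scale factor $a(t)\propto\sinh^{2/3}(3t/2)$ yields exactly $\{r,s\}=\{1,0\}$; the short \texttt{sympy} sketch below confirms this numerically.

```python
import sympy as sp

# Consistency check: the flat ΛCDM scale factor a(t) ∝ sinh^{2/3}(3t/2)
# (in units where the Λ time-scale is 1) must give {r, s} = {1, 0}.
t = sp.symbols('t', positive=True)
a = sp.sinh(sp.Rational(3, 2)*t)**sp.Rational(2, 3)
H = sp.diff(a, t)/a
q = -sp.diff(a, t, 2)/(a*H**2)              # deceleration parameter
r = sp.diff(a, t, 3)/(a*H**3)               # statefinder r
s = (r - 1)/(3*(q - sp.Rational(1, 2)))     # statefinder s
for tv in (sp.Rational(3, 10), 1, sp.Rational(5, 2)):
    assert abs(float(r.subs(t, tv)) - 1.0) < 1e-8
    assert abs(float(s.subs(t, tv))) < 1e-8  # {r, s} = {1, 0}
```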
For Model 3, we present the pair $\{r,s\}$ in
Fig.~\ref{fig:rs} for two choices of $\beta$ and $\gamma$.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/third/rs_3_1.eps} &
\includegraphics[width=50 mm]{Plots/third/rs_3_2.eps}\\
\end{array}$
\end{center}
\caption{The $r$--$s$ diagram for Model 3. $\beta=2.5$ and $\gamma=0.1$ for the left plot; $\beta=3.5$ and $\gamma=0.3$ for the right plot. In both cases $\alpha_{0}=1.0$, $\alpha_{1}=1.5$, $\alpha_{2}=1.0$, $\omega_{b}=0.3$ and $b=0.01$.}
\label{fig:rs}
\end{figure}
As is well known, the pair $\{r,s\}=\{1,0\}$ corresponds to the $\Lambda$CDM model; it is indicated on our graphs for both parameter choices, so the $\Lambda$CDM limit is present in our models. We also observe the absence of an Einstein static universe, since our models never mimic the pair $\{-\infty,+\infty\}$. Thus our models reproduce the $\Lambda$CDM behavior well.
\section{Observational constraints }
To use the $SNIa$ data, we define the distance modulus $\mu$ as a
function of the luminosity distance $D_L$:
\begin{equation}
\mu=m-M=5\log_{10}{D_L},
\end{equation}
where $D_{L}$ takes the form
\begin{equation}
D_{L}=(1+z)\frac{c}{H_{0}}\int_{0}^{z}{\frac{dz'}{\sqrt{H(z')}}}.
\end{equation}
Here $m$ and $M$ denote the apparent and absolute magnitudes,
respectively. Due to the photon--baryon plasma, baryon acoustic
oscillations are imprinted at the decoupling redshift $z \approx 1090$.
A key quantity for the BAO scaling is
\begin{equation}
A=\frac{\sqrt{\Omega_{m0} } }{H(z_{b})^{1/3}} \left[ \frac{1}{z_{b}} \int_{0}^{z_{b}}{\frac{dz}{H(z)}} \right ]^{2/3}.
\end{equation}
From the WiggleZ data \cite{Blake} we know that $A = 0.474 \pm 0.034$,
$0.442 \pm 0.020$ and $0.424 \pm 0.021$ at the redshifts $z_{b} =
0.44$, $0.60$ and $0.73$, respectively. The main
statistical-analysis quantity is
\begin{equation}
\chi^{2}{(x^{j})}=\sum_{i}^{n}\frac{(f(x^{j})_{i}^{t}-f(x^{j})_{i}^{0})^{2}}{\sigma_{i}},
\end{equation}
where $f(x^{j})_{i}^{t}$ is the theoretical value given by the
model's parameters and $f(x^{j})_{i}^{0}$ is the corresponding
observational value. To conclude the work and the model analysis, we
compare our results with observational data. The SNeIa data allow us
to obtain the following observational constraints for our models.
For Model 1, we found that the best fit occurs at
$\Omega_{m0}=0.24$ and $H_{0}=0.3$, with $\alpha_{0}=0.3$,
$\alpha_{1}=0.5$, $\alpha_{2}=0.4$, $\beta=4.0$, $\gamma=1.4$,
$\omega_{b}=0.5$ and interaction parameter $b=0.02$. For Model 2,
the best fit is obtained with $H_{0}=0.5$ and $\Omega_{m}=0.4$,
with $\alpha_{0}=1.0$, $\alpha_{1}=1.5$, $\alpha_{2}=1.3$,
$\beta=3.5$, $\gamma=0.5$, $\omega_{b}=0.3$ and interaction
parameter $b=0.01$. Finally, for Model 3 the best fit is obtained
when $H_{0}=0.35$ and $\Omega_{m0}=0.28$, with the parameters
$\alpha_{0}$, $\alpha_{1}$, $\alpha_{2}$, $\beta$, $\gamma$,
$\omega_{b}$ and $b$ taking the values $0.7$, $1.0$, $1.2$, $3$,
$0.8$, $0.75$ and $0.01$, respectively. Finally, we discuss the
constraints resulting from $SNeIa+BAO+CMB$ \cite{obs}:
\begin{center}
\begin{tabular}{ | l | l | l | l | l | l | l | l | l | l |}
\hline
M & $\alpha_{0}$ & $\alpha_{1}$ & $\alpha_{2}$ & $\beta$ & $\gamma$ & $\omega_{b}$ & $b$ & $H_{0}$ & $\Omega_{m0}$ \\ \hline
1 & $0.3^{+0.35}_{-0.15}$ & $0.5^{+0.35}_{-0.4}$ & $0.4^{+0.35}_{-0.1}$ & $4.0^{+1.3}_{-2.7}$ & $1.4^{+0.25}_{-0.25}$ & $0.5^{+0.4}_{-0.5}$ & $0.01^{+0.07}_{-0.01}$ & $0.25^{+0.35}_{-0.05}$ & $0.26^{+0.04}_{-0.03}$ \\ \hline
2 & $1.2^{+0.2}_{-0.5}$ & $1.1^{+0.4}_{-0.3}$ & $0.7^{+0.55}_{-0.15}$ & $3.0^{+1.5}_{-0.8}$ & $0.7^{+0.12}_{-0.3}$ & $0.4^{+0.6}_{-0.2}$ & $0.02^{+0.03}_{-0.005}$ & $0.4^{+0.35}_{-0.2}$ & $0.3^{+0.2}_{-0.05}$ \\ \hline
3 & $0.7^{+0.5}_{-0.3}$ & $1.0^{+0.2}_{-0.4}$ & $1^{+0.1}_{-0.3}$ & $3.0^{+1.0}_{-0.3}$ & $0.8^{+0.1}_{-0.4}$ & $0.7^{+0.6}_{-0.1}$ & $0.01^{+0.09}_{-0.01}$ & $0.3^{+0.3}_{-0.1}$ & $ 0.21^{+0.15}_{-0.01} $\\ \hline
\end{tabular}
\end{center}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=50 mm]{Plots/first/mu_1.eps} &
\includegraphics[width=50 mm]{Plots/sec/mu_2.eps}\\
\end{array}$
\end{center}
\caption{Observational $SNeIa+BAO+CMB$ data for the distance modulus versus our theoretical results for Models 1 and 2.}
\label{fig:muz}
\end{figure}
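For concreteness, the distance-modulus comparison can be sketched as follows (our illustrative code, not the paper's; it assumes the standard conventions $\mu=5\log_{10}D_L[\mathrm{Mpc}]+25$ and $D_L=(1+z)(c/H_0)\int_0^z dz'/E(z')$ with $E=H/H_0$, and uses a toy flat $\Lambda$CDM expansion history in place of the paper's models):

```python
import numpy as np
from scipy.integrate import quad

def E(z, Om=0.3):                 # toy flat-LCDM H(z)/H0, an assumption here
    return np.sqrt(Om*(1 + z)**3 + 1 - Om)

def mu(z, H0=70.0, c=299792.458): # H0 in km/s/Mpc, c in km/s
    I, _ = quad(lambda zp: 1.0/E(zp), 0.0, z)
    DL = (1 + z)*(c/H0)*I         # luminosity distance in Mpc
    return 5*np.log10(DL) + 25    # distance modulus

def chi2(z_obs, mu_obs, sigma):   # the chi^2 statistic used for fitting
    model = np.array([mu(z) for z in z_obs])
    return np.sum(((model - mu_obs)/sigma)**2)
```

Minimizing \texttt{chi2} over the model parameters then yields the best-fit values, in the same spirit as the analysis above.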
From the graph of the distance modulus versus $z$ we learn how $\mu$ depends on the values of the parameters at different redshifts. For different values of $\Omega_M$ with $\Omega_D=0$, the graph is linear in the low-redshift regime $0.001<z<0.01$. For $z>0.4$ the graph takes the typical form of models with nonzero $\Omega_M$. The Hubble parameter $H$ plays a central role in the behavior of $\mu(z)$ over the different ranges of $z$, and we can use it to investigate the cosmological parameters.
\newpage
\section{Summary}
Cosmological models with time-varying gravitational and cosmological
constants have been studied frequently. Nevertheless, in view of the
cosmological data, the rate of change of $G$ is small, so the
first-order correction terms are the most important ones. Our
approach to $\Lambda(t)$, $G(t)$ models is slightly different from,
and more general than, previous work. As a proper generalization of
general relativity, the scalar-tensor-vector gravity (MOG) model has
been proposed to explain the structure of galaxies and the dark
matter problem. If we assume small variations of the scalar fields,
the MOG model at the level of the action becomes equivalent to the
Einstein--Hilbert model, provided that $G(t)$ is considered as a
slowly varying scalar field. We proposed three models of generalized
Ricci dark energy including $\Lambda(t)$ and $G(t)$ to describe the
time evolution of dark energy. Due to the complexity of the model
equations, numerical algorithms were used to obtain the cosmological
parameters. The accelerating regime and the time evolution of the
statefinder parameters $\{r,s\}$, compared with the $\Lambda$CDM
model, were studied numerically with high accuracy. We obtained the
best-fit ranges of the free parameters of our dark energy models by
comparison with the $SNeIa+BAO+CMB$ cosmological data, and found our
models to be consistent with these data, in contrast with some other
theoretical models.
\begin{acknowledgments}
The authors thank J.~W.~Moffat for useful comments about MOG.
\end{acknowledgments}
\section{Introduction}
Consider the following linear model:
\begin{equation}
\label{e:model}
\y=\A\x
\end{equation}
where $\x\in \mathbb{R}^n$ is an unknown signal, $\y\in \mathbb{R}^m$ is an observation vector and $\A\in \mathbb{R}^{m\times n}$ (with $m\ll n$) is a known sensing matrix. This model arises from compressed sensing, see, e.g., \cite{Don06}, and one of the central goals is to recover $\x$ based on $\A$ and $\y$. It has been shown that under some suitable conditions, $\x$ can be recovered exactly, see, e.g., \cite{CanT05}.
The orthogonal matching pursuit (OMP) \cite{TroG07} is one of the most commonly used algorithms for recovering $\x$ from \eqref{e:model}. A vector $\x\in \mathbb{R}^n$ is $k$-sparse if $|\text{supp}(\x)|\leq k$, where $\text{supp}(\x)=\{i:x_i\neq0\}$ is the support of $\x$.
For any set $T\subset\{1,2,\ldots,n\}$, let $\A_T$ be the submatrix of $\A$ that only contains columns indexed by $T$ and
$\x_T$ be the restriction of the vector $\x$ to the elements indexed by $T$. Then the OMP can be described by Algorithm \ref{a:OMP}.
One of the commonly used frameworks for sparse recovery is the restricted isometry property,
which was introduced in \cite{CanT05}. For any $m\times n$ matrix $\A$ and any integer $k$, $1\leq k\leq n$, the $k$-restricted isometry constant $\delta_k$ is defined as the smallest
constant such that
\begin{equation}
\label{e:RIP}
(1-\delta_k)\|\x\|_2^2\leq \|\A\x\|_2^2\leq(1+\delta_k)\|\x\|_2^2
\end{equation}
for all $k$-sparse vectors $\x$.
It was conjectured in \cite{DaiM09} that there exist a matrix with $\delta_{K+1}\leq\frac{1}{\sqrt{K}}$ and a $K$-sparse $\x$ such that the OMP fails in $K$ iterations \cite{MoS12}. Counterexamples were independently given in \cite{MoS12} and \cite{WanS12} showing that there exist a matrix with $\delta_{K+1}=\frac{1}{\sqrt{K}}$ and a $K$-sparse $\x$ such that the OMP fails in $K$ iterations. In this letter, we give a counterexample showing that for any given positive integer $K\geq 2$ and any $\frac{1}{\sqrt{K+1}}\leq t<1$, there always exist a $K$-sparse $\x$ and a matrix $\A$ with $\delta_{K+1}=t$ such that the OMP algorithm fails in $K$ iterations. This result not only greatly improves the existing results, but also gives a counterexample with $\delta_{K+1}<\frac{1}{\sqrt{K}}$ for which the OMP fails in $K$ iterations.
It was shown in \cite{DavW10} and \cite{LiuT10}, respectively, that $\delta_{K+1}<\frac{1}{3\sqrt{K}}$ and $\delta_{K+1}<\frac{1}{(1+\sqrt{2})\sqrt{K}}$ are sufficient for the OMP to recover every $K$-sparse $\x$ in $K$ iterations. This sufficient condition was independently improved to $\delta_{K+1}<\frac{1}{1+\sqrt{K}}$ in \cite{MoS12} and \cite{WanS12}. In this letter, we further improve it to $\delta_{K+1}\leq\frac{1}{1+\sqrt{K}}$.
\begin{algorithm}[h!]
\caption{OMP \cite{TroG07}, \cite{WanS12} } \label{a:OMP}
Input: measurements $\y$, sensing matrix $\A$
and sparsity $K$.\\
Initialize: $k=0, r^0=\y, T^0=\emptyset$.\\
While $k<K$
\begin{algorithmic}[1]
\STATE $k=k+1$,
\STATE $t^k=\arg \max_j|\langle r^{k-1},\A_j\rangle|$,
\STATE $T^k=T^{k-1}\bigcup\{t^k\}$,
\STATE $\hat{\x}_{T^k}=\arg \min_{\x}\parallel\y-\A_{T^k}\x\parallel_2$,
\STATE $r^k=\y-\A_{T^k}\hat{\x}_{T^k}$.
\end{algorithmic}
Output: $\hat{\x}=\arg \min_{\x: \text{supp}(\x)=T^K}\parallel\y-\A\x\parallel_2$.
\end{algorithm}
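For concreteness, Algorithm \ref{a:OMP} can be sketched in a few lines of Python/NumPy (our illustrative implementation, not code from \cite{TroG07}):

```python
import numpy as np

def omp(y, A, K):
    """Orthogonal matching pursuit: greedily recover a K-sparse x from y = A x."""
    n = A.shape[1]
    T = []                                     # current support estimate T^k
    r = y.copy()                               # residual r^0 = y
    for _ in range(K):
        t = int(np.argmax(np.abs(A.T @ r)))    # column most correlated with r
        T.append(t)
        x_T, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # least-squares refit
        r = y - A[:, T] @ x_T                  # updated residual
    x_hat = np.zeros(n)
    x_hat[T] = x_T
    return x_hat, sorted(T)
```

On a random Gaussian sensing matrix with normalized columns and moderate sparsity, this recovers a $K$-sparse vector exactly with high probability.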
\section{Main Results} In this section, we give our main results. We first construct a counterexample showing that the OMP algorithm may fail in $K$ iterations if $\frac{1}{\sqrt{K+1}}\leq\delta_{K+1}<1$.
\begin{theorem} \label{t:count}
For any given positive integer $K\geq 2$ and for any
\begin{equation*}
\frac{1}{\sqrt{K+1}}\leq t<1,
\end{equation*}
there always exist a $K$-sparse $\x$ and a matrix $\A$ with the restricted isometry constant
$\delta_{K+1}=t$ such that the OMP fails in $K$ iterations.
\end{theorem}
Our proof is similar to the method used in \cite{MoS12}, but the critical idea is different.
{\bf Proof.} For any given positive integer $K\geq 2$, let
\begin{equation*}
\B=\begin{bmatrix}
\frac{K}{K+1}\I_{K}& \frac{\1}{K+1} \\ \frac{\1^T}{K+1} & \frac{K+2}{K+1}
\end{bmatrix}
\end{equation*}
where $\1$ is a $K-$dimensional column vector with all of its entries being $1$ and $\I_{K}$ is the $K-$dimensional identity matrix.
By some simple calculations, we can show that the eigenvalues $\{\lambda_i\}_{i=1}^{K+1}$ of $\B$ are
\begin{align*}
&\lambda_1=\ldots=\lambda_{K-1}=\frac{K}{K+1}, \lambda_{K}=1-\frac{1}{\sqrt{K+1}}, \; \lambda_{K+1}=1+\frac{1}{\sqrt{K+1}}.
\end{align*}
Let $s=t-\frac{1}{\sqrt{K+1}}$ and $\C=\B-s\I_{K+1}$. Then, from the above, the eigenvalues $\{\lambda_i\}_{i=1}^{K+1}$ of $\C$ are
\begin{align*}
&\lambda_1=\ldots=\lambda_{K-1}=\frac{K}{K+1}-s \\
&\lambda_{K}=1-\frac{1}{\sqrt{K+1}}-s=1-t, \; \lambda_{K+1}=1+\frac{1}{\sqrt{K+1}}-s.
\end{align*}
Since $ \frac{1}{\sqrt{K+1}}\leq t<1$, $\C$ is a symmetric positive definite matrix. Therefore, there exists an upper triangular
matrix $\A$ such that $\A^T\A=\C$. By the aforementioned equalities and \eqref{e:RIP}, $\delta_{K+1}(\A)=t$.
Let $\x=(1,1,\ldots,1,0)^T\in \mathbb{R}^{K+1}$; then $\x$ is $K$-sparse. Let $\e_{i}$, $1\leq i\leq K+1$, denote the $i$-th column of $\I_{K+1}$. Then
one can easily show that
\begin{align*}
\frac{K}{K+1}-s=\max_{1\leq i\leq K}|\langle\A\e_i, \A\x\rangle|\leq |\langle\A\e_{K+1}, \A\x\rangle|=\frac{K}{K+1},
\end{align*}
so the OMP fails in the first iteration. Therefore, the OMP algorithm fails in $K$ iterations for the given vector $\x$ and matrix $\A$.
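The construction in the proof is easy to check numerically. The sketch below (ours, for illustration; here $K=4$ and $t=0.6$, but any admissible pair works) builds $\B$, verifies the stated eigenvalues, obtains $\A$ from a Cholesky factorization of $\C$, and confirms that the largest correlation occurs at the index $K+1$, outside the support of $\x$:

```python
import numpy as np

K, t_val = 4, 0.6                     # any 1/sqrt(K+1) <= t < 1 works
s = t_val - 1/np.sqrt(K + 1)
B = np.zeros((K + 1, K + 1))
B[:K, :K] = (K/(K + 1))*np.eye(K)
B[:K, K] = B[K, :K] = 1/(K + 1)
B[K, K] = (K + 2)/(K + 1)
# eigenvalues of B match the closed form given in the proof
expected = sorted([K/(K + 1)]*(K - 1)
                  + [1 - 1/np.sqrt(K + 1), 1 + 1/np.sqrt(K + 1)])
assert np.allclose(np.sort(np.linalg.eigvalsh(B)), expected)
C = B - s*np.eye(K + 1)
A = np.linalg.cholesky(C).T           # upper triangular with A.T @ A = C
assert np.allclose(A.T @ A, C)
delta = np.abs(np.linalg.eigvalsh(C) - 1).max()   # RIC of order K+1
assert np.isclose(delta, t_val)
x = np.append(np.ones(K), 0.0)        # the K-sparse vector from the proof
corr = np.abs(C @ x)                  # |<A e_i, A x>| = |(A^T A x)_i|
assert corr.argmax() == K             # OMP's first pick is outside supp(x)
```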
In the following, we improve the sufficient condition $\delta_{K+1}<\frac{1}{\sqrt{K}+1}$ \cite{MoS12}, \cite{WanS12} for perfect recovery
to $\delta_{K+1}\leq\frac{1}{\sqrt{K}+1}$.
\begin{theorem} \label{t:main}
Suppose that $\A$ satisfies the restricted isometry property of order $K+1$ with the
restricted isometry constant
\begin{equation}
\label{e:cond1}
\delta_{K+1}=\frac{1}{\sqrt{K}+1}
\end{equation}
then the OMP algorithm can perfectly recover any $K$-sparse signal $\x$ from
$\y=\A\x$ in $K$ iterations.
\end{theorem}
Before proving this theorem, we introduce the following two lemmas; Lemma \ref{l:main2} was proposed in \cite{WanS12}.
\begin{lemma} \label{l:main1}
For each $\x, \x'$ supported on disjoint subsets $S, S'\subseteq \{1,\ldots, n\}$ with $|S|\leq s, |S'|\leq s'$, we have
\begin{equation}
\label{e:eq1}
|\langle \A\x, \A\x'\rangle|=\delta_{s+s'}\|\x\|_2\|\x'\|_2
\end{equation}
if and only if:
\begin{equation}
\label{e:eq2}
\frac{\|\A\x\|_2^2}{\|\x\|_2^2}+\frac{\|\A\x'\|_2^2}{\|\x'\|_2^2}=2.
\end{equation}
\end{lemma}
{\bf Proof.} Let
\begin{equation}
\label{e:x}
\bar{\x}=\x/\|\x\|_2, \quad \bar{\x}'=\x'/\|\x'\|_2.
\end{equation}
Since $S\bigcap S'=\emptyset$, we have
$\|\bar{\x}+\bar{\x}'\|_2^2=\|\bar{\x}-\bar{\x}'\|_2^2=2$. By \eqref{e:RIP}, we have
\begin{equation}
\label{e:rip2}
2(1-\delta_{s+s'})\leq \|\A(\bar{\x}\pm\bar{\x}')\|_2^2\leq 2(1+\delta_{s+s'}).
\end{equation}
By the parallelogram identity and \eqref{e:x}, we have
\begin{equation}
\label{e:par}
|\langle \A\bar{\x}, \A\bar{\x}'\rangle|=\frac{1}{4}|\|\A(\bar{\x}+\bar{\x}')\|_2^2-\|\A(\bar{\x}-\bar{\x}')\|_2^2|\leq \delta_{s+s'}.
\end{equation}
By \eqref{e:x}, \eqref{e:eq1} holds if and only if the equality in \eqref{e:par} holds. By \eqref{e:rip2}, the equality in \eqref{e:par} holds if and only if
\begin{equation*}
\|\A(\bar{\x}+\bar{\x}')\|_2^2=2(1-\delta_{s+s'}), \quad \|\A(\bar{\x}-\bar{\x}')\|_2^2= 2(1+\delta_{s+s'}),
\end{equation*}
or
\begin{equation*}
\|\A(\bar{\x}-\bar{\x}')\|_2^2=2(1-\delta_{s+s'}), \quad \|\A(\bar{\x}+\bar{\x}')\|_2^2= 2(1+\delta_{s+s'}).
\end{equation*}
Therefore, \eqref{e:eq1} holds if and only if
\begin{equation*}
\|\A(\bar{\x}-\bar{\x}')\|_2^2+\|\A(\bar{\x}+\bar{\x}')\|_2^2= 4,
\end{equation*}
which is equivalent to \eqref{e:eq2}, so the lemma holds.
\begin{lemma} \label{l:main2}
For $S\subset\{1,2,\ldots,n\}$, if $\delta_{|S|}<1$, then
\begin{equation*}
(1-\delta_{|S|})\parallel\x\parallel_2\leq\parallel\A_S^T\A_S\x\parallel_2\leq(1+\delta_{|S|})\parallel\x\parallel_2
\end{equation*}
for any vector $\x$ supported on $S$.
\end{lemma}
We will prove Theorem \ref{t:main} by induction. Our proof is similar to the method used in \cite{MoS12}, but the critical idea is different.
{\bf Proof of Theorem \ref{t:main}.} First, we prove that if \eqref{e:cond1} holds, then the OMP chooses a correct
index in the first iteration.
Let $S$ denote the support of the $K$-sparse signal $\x$ and let $\alpha=\max_{i\in S}|\langle\A\e_i, \A\x\rangle|$.
Then
\begin{equation*}
|\langle\A\x,\A\x\rangle|=|\sum_{i\in S}x_i\langle\A\e_i,\A\x\rangle|\leq\alpha\parallel\x\parallel_1\leq\alpha\sqrt{K}\parallel\x\parallel_2.
\end{equation*}
By \eqref{e:RIP}, it holds that
\begin{equation}
\label{e:lb1}
|\langle\A\x,\A\x\rangle|\geq(1-\delta_{K+1})\parallel\x\parallel_2^2.
\end{equation}
By the aforementioned two inequalities, we have
\begin{equation}
\label{e:lb2}
\max_{i\in S}|\langle\A\e_i, \A\x\rangle|\geq\frac{(1-\delta_{K+1})\parallel\x\parallel_2}{\sqrt{K}}
\end{equation}
and if the equality in \eqref{e:lb2} holds, then the equality in \eqref{e:lb1} must also hold.
By Lemma 2.1 in \cite{Can08}, for each $j\not\in S$, it holds that
\begin{equation}
\label{e:ub}
|\langle\A\e_j,\A\x\rangle|\leq \delta_{K+1}\parallel\x\parallel_2.
\end{equation}
So if \eqref{e:cond1} holds and at least one of the equalities in \eqref{e:lb1} and \eqref{e:ub} fails to hold, then for each $j\not\in S$, it holds that
\begin{equation*}
|\langle\A\e_j,\A\x\rangle|<\max_{i\in S}|\langle\A\e_i, \A\x\rangle|.
\end{equation*}
Therefore, it suffices to show that the equalities in \eqref{e:lb1} and \eqref{e:ub} cannot hold simultaneously.
Suppose both the equality in \eqref{e:lb1} and the equality in \eqref{e:ub} hold. Then by Lemma \ref{l:main1}, $\parallel\A\e_j\parallel_2^2=1+\delta_{K+1}$. Let $\C=(\A_{S\bigcup j})^T\A_{S\bigcup j}$; then $C_{jj}=\parallel\A\e_j\parallel_2^2=1+\delta_{K+1}$, and thus $C_{ij}=0$ for each $i\in S$. Indeed, suppose there exists some $i\in S$ such that $C_{ij}\neq0$; then
\begin{equation*}
\parallel\A_{S\bigcup j}^T\A_{S\bigcup j}\e_j\parallel_2\geq \sqrt{C_{jj}^2+\sum_{i\in S}C_{ij}^2}>1+\delta_{K+1},
\end{equation*}
which contradicts Lemma \ref{l:main2}. Therefore, $C_{ij}=0$ for each $i\in S$. However, in this case, we have
\begin{equation*}
|\langle\A\e_j,\A\x\rangle|=0
\end{equation*}
which contradicts the equality in \eqref{e:ub}. Thus the equalities in \eqref{e:lb1} and \eqref{e:ub} cannot hold simultaneously. Therefore,
if \eqref{e:cond1} holds, then the OMP chooses a correct index in the first iteration.
By applying the method used in \cite{MoS12} or \cite{WanS12} together with the above argument, one can similarly show that if \eqref{e:cond1} holds, then the OMP chooses a correct index in each of the subsequent iterations, which proves the theorem.
\section{Future Work}
In the future, we will prove or disprove whether $\frac{1}{\sqrt{K}+1}<\delta_{K+1}<\frac{1}{\sqrt{K+1}}$ is a sufficient condition for the OMP to recover every $K$-sparse signal $\x$ from $\y=\A\x$ in $K$ iterations.
\vskip3pt
\ack{This work has been supported by NSFC (Grant No. 11201161, 11171125, 91130003) and FRQNT.
}
\vskip5pt
\noindent Jinming Wen (\textit{Dept. of Mathematics and Statistics, McGill University, Canada, H3A 2K6})
\vskip3pt
\noindent Xiaomei Zhu (\textit{College of Electronics and Information Engineering, Nanjing University of Technology, China, 211816; Dept. of Electrical and Computer Engineering, McGill University, Canada, H3A 2A7})
\vskip3pt
\noindent E-mail: [email protected]
\noindent Dongfang Li (\textit{School of Mathematics and Statistics, Huazhong University of Science and
Technology, China, 430074; Dept. of Mathematics and Statistics, McGill University, Canada, H3A 2K6})
\vskip3pt
\section{Introduction}
Let $X$ be an irreducible variety of dimension $d$ over an algebraically closed field $\mathbf{K}$, and let $D$ be a (Cartier) divisor on $X$. When $X$ is projective, the following limit, which measures how fast the dimension of the section space $\HH{0}{X}{\mathcal{O}_X(mD)}$ grows, is called the \emph{volume} of $D$: \[
\vol(D)=\vol_X(D)=\lim_{m\to\infty} \frac{\hh{0}{X}{\mathcal{O}_X(mD)}}{m^d/d!}. \]
One says that $D$ is \emph{big} if $\vol(D)>0$. It turns out that the volume is an interesting numerical invariant of a big divisor (\cite[\S 2.2.C]{Laz}), and it plays a key role in several recent works in birational geometry (\cite{BDPP}, \cite{Tsu}, \cite{HM}, \cite{Tak}).
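As a basic sanity check on this definition (a standard example, not taken from \cite{Laz}): for $X=\mathbb{P}^d$ and $D$ a hyperplane divisor, the volume recovers the self-intersection number, since

```latex
\[
\hh{0}{\mathbb{P}^d}{\mathcal{O}_{\mathbb{P}^d}(m)}
  = \binom{m+d}{d}
  = \frac{m^{d}}{d!}+O(m^{d-1}),
\qquad\text{so}\qquad
\vol_{\mathbb{P}^d}(D)
  = \lim_{m\to\infty}\frac{\binom{m+d}{d}}{m^{d}/d!}
  = 1 = D^{d}.
\]
```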
When $D$ is ample, one can show that $\vol(D)=D^d$, the self-intersection number of $D$. This is no longer true for a general big divisor $D$, since $D^d$ may even be negative. However, it was shown by Fujita \cite{Fuj} that the volume of a big divisor can always be approximated arbitrarily closely by the self-intersection number of an ample divisor on a birational modification of $X$. This theorem, known as \emph{Fujita approximation}, has several implications on the properties of volumes, and is also a crucial ingredient in \cite{BDPP} (see \cite[\S 11.4]{Laz} for more details).
In their recent paper \cite{LM}, Lazarsfeld and Musta\c{t}\v{a} obtained, among other things, a generalization of Fujita approximation to \emph{graded linear series}. Recall that a graded linear series $W_{\bullet} = \{ W_k \}$ on a (not necessarily projective) variety $X$ associated to a divisor $D$ consists of finite dimensional vector subspaces
\[ W_k \ \subseteq \ \HH{0}{X}{\mathcal{O}_X(kD)} \]
for each $k \ge 0$, with $W_0 = \mathbf{K}$, such that \[
W_k \cdot W_\ell \ \subseteq \ W_{k + \ell} \]
for all $k , \ell \ge 0 $. Here the product on the left denotes the image of $W_k \otimes W_\ell$ under the multiplication map $\HH{0}{X}{\mathcal{O}_X(kD)} \otimes \HH{0}{X}{\mathcal{O}_X(\ell D)} \longrightarrow \HH{0}{X}{\mathcal{O}_X(({k + \ell})D)}$. In order to state the Fujita approximation for $W_{\bullet}$, they defined, for each fixed positive integer $p$, a graded linear series $W^{(p)}_{\bullet}$ which is the sub graded linear series of $W_{\bullet}$ generated by $W_p$: \[
W^{(p)}_{m}=\begin{cases}
0, &\text{if $p\nmid m$;}\\
\Image \big( S^k W_p \longrightarrow W_{kp} \big), &\text{if $m=kp$.}
\end{cases} \]
Then under mild hypotheses, they showed that the volume of $W^{(p)}_{\bullet}$ approaches the volume of $W_{\bullet}$ as $p\to\infty$. See \cite[Theorem~3.5]{LM} for the precise statement, as well as \cite[Remark~3.4]{LM} for how this is equivalent to the original statement of Fujita when $X$ is projective and $W_{\bullet}$ is the complete graded linear series associated to a big divisor $D$ (i.e. $W_k=\HH{0}{X}{\mathcal{O}_X(kD)}$ for all $k\ge 0$).
The goal of this note is to generalize the Fujita approximation theorem to \emph{multigraded linear series}. We will adopt the following notations from \cite[\S 4.3]{LM}: let $D_1, \ldots, D_r$ be divisors on $X$. For $\Vec{m} = (m_1,\ldots, m_r) \in \mathbb{N}^r$ we write $\Vec{m}D = \sum m_i D_i$, and we put
$|\Vec{m}| = \sum |m_i|$.
\begin{definition}
A \emph{multigraded linear series} $W_{\Vec{\bullet}}$ on $X$ associated to the $D_i$'s consists of finite-dimensional vector subspaces
\[ W_{\Vec{k}} \ \subseteq \ \HH{0}{X}{\mathcal{O}_X(\Vec{k}D)}\]
for each $\Vec{k} \in \mathbb{N}^r$, with $W_{\Vec{0}} = \mathbf{K}$, such that
\[ W_{\Vec{k}} \cdot W_{\Vec{m}} \ \subseteq \ W_{\Vec{k} + \Vec{m}}, \]
where the multiplication on the left denotes the image of $W_{\Vec{k}} \otimes W_{\Vec{m}} $ under the natural map $\HH{0}{X}{\mathcal{O}_X(\Vec{k}D) } \otimes \HH{0}{X}{\mathcal{O}_X(\Vec{m}D) } \longrightarrow \HH{0}{X}{\mathcal{O}_X((\Vec{k}+\Vec{m})D) } $.
\end{definition}
Given $\Vec{a} \in \mathbb{N}^r$, denote by $W_{\Vec{a},\bullet}$ the singly graded linear series associated to the divisor $\Vec{a}D$ given by the subspaces $W_{k \Vec{a}} \subseteq \HH{0}{X}{\mathcal{O}_X(k\Vec{a}D)}$. Then put
\[ \vol_{W_{\Vec{\bullet}}}(\Vec{a}) \ = \ \vol(W_{\Vec{a},\bullet}) \]
(assuming that this quantity is finite). It will also be convenient for us to consider $W_{\Vec{a},\bullet}$ when $\Vec{a}\in\mathbb{Q}^r_{\ge 0}$, given by \[
W_{\Vec{a},k}=\begin{cases}
W_{k \Vec{a}}, &\text{if $k \Vec{a}\in \mathbb{N}^r$;}\\
0, &\text{otherwise.}
\end{cases} \]
Our multigraded Fujita approximation, in analogy with the singly graded version, states that (under suitable conditions) the volume of $W_{\Vec{\bullet}}$ can be approximated by the volume of the following finitely generated multigraded linear subseries of $W_{\Vec{\bullet}}$:
\begin{definition}
Given a multigraded linear series $W_{\Vec{\bullet}}$ and a positive integer $p$, define $W_{\Vec{\bullet}}^{(p)}$ to be the multigraded linear subseries of $W_{\Vec{\bullet}}$ generated by all the spaces $W_{\Vec{m}_i}$ with $|\Vec{m}_i|=p$; concretely, \[
W^{(p)}_{\Vec{m}}=\begin{cases}
\displaystyle \qquad 0, &\text{if $p\nmid |\Vec{m}|$;} \\
\displaystyle \sum_{\substack{|\Vec{m}_i|=p \\ \Vec{m}_1+\cdots+\Vec{m}_k=\Vec{m}}} W_{\Vec{m}_1}\cdots W_{\Vec{m}_k}, &\text{if $|\Vec{m}|=kp$.}
\end{cases} \]
\end{definition}
We now state our multigraded Fujita approximation when $W_{\Vec{\bullet}}$ is a complete multigraded linear series, since this is the case of most interest and allows for a more streamlined statement. We will point out in Remark~\ref{r:assumptions} afterward what assumptions on $W_{\Vec{\bullet}}$ are actually needed in the proof.
\begin{theorem} \label{t:main}
Let $X$ be an irreducible projective variety of dimension $d$, and let $D_1,\ldots,D_r$ be big divisors on $X$. Let $W_{\Vec{\bullet}}$ be the complete multigraded linear series associated to the $D_i$'s, namely \[
W_{\Vec{m}}=\HH{0}{X}{\mathcal{O}_X(\Vec{m}D)} \]
for each $\Vec{m}\in \mathbb{N}^r$. Then given any $\varepsilon >0$, there exists an integer $p_0=p_0(\varepsilon)$ having the property that if $p\ge p_0$, then
\begin{equation} \label{eq:1}
\bigg| 1- \frac{\vol_{W^{(p)}_{\Vec{\bullet}}}(\Vec{a})}{\vol_{W_{\Vec{\bullet}}}(\Vec{a})} \bigg|< \varepsilon
\end{equation}
for all $\Vec{a}\in \mathbb{N}^r$.
\end{theorem}
\noindent\textbf{Acknowledgments.} The author would like to thank Robert Lazarsfeld for raising this question during an email correspondence.
\section{Proof of Theorem~\ref{t:main}}
The main tool in our proof is the theory of \emph{Okounkov bodies} developed systematically in \cite{LM}. Given a graded linear series $W_{\bullet}$ on a $d$-dimensional variety $X$, its Okounkov body $\Delta(W_{\bullet})$ is a convex body in $\mathbb{R}^d$ that encodes many asymptotic invariants of $W_{\bullet}$, the most prominent one being the volume of $W_{\bullet}$, which is precisely $d!$ times the Euclidean volume of $\Delta(W_{\bullet})$. The idea first appeared in Okounkov's papers \cite{Oko96} and \cite{Oko03} in the case of complete linear series of ample line bundles on a projective variety. Later it was further developed and applied to much more general graded linear series by Lazarsfeld-Musta\c{t}\v{a} \cite{LM}, and also independently by Kaveh-Khovanskii \cite{KK08,KK09}.
\begin{proof}[Proof of Theorem~\ref{t:main}]
Let $T=\{ (a_1,\ldots,a_r)\in \mathbb{R}_{\ge 0}^r \mid a_1+\cdots+a_r=1 \}$, and let $T_{\mathbb{Q}}$ be the set of all points in $T$ with rational coordinates. The fraction inside \eqref{eq:1} is invariant under scaling of $\Vec{a}$ due to homogeneity, hence it is enough to prove \eqref{eq:1} for $\Vec{a}\in T_{\mathbb{Q}}$.
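To spell out the homogeneity: recall that in our setting $\vol_{W_{\Vec{\bullet}}}(\Vec{a})=\lim_{k\to\infty}\dim W_{k\Vec{a}}\,/\,(k^{d}/d!)$ \cite{LM}. For an integer $c>0$ we then have
\[
\vol_{W_{\Vec{\bullet}}}(c\Vec{a})\ =\ \lim_{k\to\infty}\frac{\dim W_{kc\Vec{a}}}{k^{d}/d!}\ =\ c^{d}\lim_{k\to\infty}\frac{\dim W_{kc\Vec{a}}}{(ck)^{d}/d!}\ =\ c^{d}\,\vol_{W_{\Vec{\bullet}}}(\Vec{a}),
\]
and the same computation applies to $W^{(p)}_{\Vec{\bullet}}$; hence the ratio in \eqref{eq:1} is unchanged under $\Vec{a}\mapsto c\Vec{a}$.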
Let $\Delta(W_{\Vec{\bullet}})\subseteq \mathbb{R}^d\times\mathbb{R}^r$ be the global Okounkov cone of $W_{\Vec{\bullet}}$ as in \cite[Theorem~4.19]{LM}, and let $\pi\colon \Delta(W_{\Vec{\bullet}})\to \mathbb{R}^r$
be the projection map. For each $\Vec{a}\in T$ we write $\Delta(W_{\Vec{\bullet}})_{\Vec{a}}$ for the fiber $\pi^{-1}(\Vec{a})$. We also define in a similar fashion the convex cone $\Delta(W^{(p)}_{\Vec{\bullet}})$ and the convex bodies $\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{a}}$. By \cite[Theorem~4.19]{LM},
\begin{equation} \label{eq:2}
\Delta(W_{\Vec{\bullet}})_{\Vec{a}}=\Delta(W_{{\Vec{a}},\bullet}) \quad
\text{for all $\Vec{a}\in T_{\mathbb{Q}}$.}
\end{equation}
Note that although \cite[Theorem~4.19]{LM} requires $\Vec{a}$ to be in the relative interior of $T$, here we know that \eqref{eq:2} holds even for those $\Vec{a}$ in the boundary of $T$ because the big cone of $X$ is open and $W_{\Vec{\bullet}}$ was assumed to be the complete multigraded linear series. By the singly-graded Fujita approximation, $\vol(W_{{\Vec{a}},\bullet})$ can be approximated arbitrarily closely by $\vol(W^{(p)}_{{\Vec{a}},\bullet})$ if $p$ is sufficiently large. (Here by $W^{(p)}_{{\Vec{a}},\bullet}$ we mean $W^{(p)}_{\Vec{\bullet}}$ restricted to the $\Vec{a}$ direction, which certainly contains $(W_{{\Vec{a}},\bullet})^{(p)}$.) Hence given any finite subset $S\subset T_{\mathbb{Q}}$ and any $\varepsilon'>0$, we have \[
\vol\bigl(\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{a}}\bigr)\ge
\vol\bigl(\Delta(W_{\Vec{\bullet}})_{\Vec{a}}\bigr)-\varepsilon' \quad
\text{for all $\Vec{a}\in S$} \]
as soon as $p$ is sufficiently large.
Because the function $\Vec{a}\mapsto \vol\bigl(\Delta(W_{\Vec{\bullet}})_{\Vec{a}}\bigr)$ is uniformly continuous on $T$, given any $\varepsilon'>0$, we can partition $T$ into a union of polytopes with disjoint interiors $T=\bigcup T_i$, in such a way that the vertices of each $T_i$ all have rational coordinates, and on each $T_i$ we have a constant $M_i$ such that
\begin{equation} \label{eq:3}
M_i \le \vol\bigl(\Delta(W_{\Vec{\bullet}})_{\Vec{a}}\bigr) \le M_i + \varepsilon' \quad
\text{for all $\Vec{a}\in T_i$.}
\end{equation}
Let $S$ be the set of vertices of all the $T_i$'s. Then, as we saw at the end of the previous paragraph, as soon as $p$ is sufficiently large we have
\begin{equation} \label{eq:4}
\vol\bigl(\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{a}}\bigr)\ge
\vol\bigl(\Delta(W_{\Vec{\bullet}})_{\Vec{a}}\bigr)-\varepsilon' \quad
\text{for all $\Vec{a}\in S$.}
\end{equation}
We claim that this implies
\begin{equation} \label{eq:5}
\vol\bigl(\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{a}}\bigr)\ge
\vol\bigl(\Delta(W_{\Vec{\bullet}})_{\Vec{a}}\bigr)-2\varepsilon' \quad
\text{for all $\Vec{a}\in T_{\mathbb{Q}}$.}
\end{equation}
To show this, it suffices to verify it on each of the $T_i$'s. Let $\Vec{v}_1,\ldots,\Vec{v}_k$ be the vertices of $T_i$. Then each $\Vec{a}\in T_i$ can be written as a convex combination of the vertices: $\Vec{a}=\sum t_j \Vec{v}_j$ where each $t_j\ge 0$ and $\sum t_j=1$. Since $\Delta(W^{(p)}_{\Vec{\bullet}})$ is convex, we have \[
\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{a}}\ \supseteq\ \sum t_j\, \Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{v}_j}, \]
where the sum on the right means the Minkowski sum. By \eqref{eq:3} and \eqref{eq:4}, the volume of each $\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{v}_j}$ is at least $M_i-\varepsilon'$, hence by the Brunn-Minkowski inequality \cite[Theorem~5.4]{KK08}, we have \[
\vol\bigl(\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{a}}\bigr)\ge M_i-\varepsilon' \quad
\text{for all $\Vec{a}\in T_i\cap T_{\mathbb{Q}}$.} \]
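In more detail, the Brunn--Minkowski inequality for convex bodies $K,L\subseteq\mathbb{R}^{d}$, namely $\vol(K+L)^{1/d}\ge\vol(K)^{1/d}+\vol(L)^{1/d}$, combined with $\vol(tK)=t^{d}\vol(K)$ and $\sum_j t_j=1$, yields
\[
\vol\Bigl(\sum_j t_j\,\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{v}_j}\Bigr)^{1/d}\ \ge\ \sum_j t_j\,\vol\bigl(\Delta(W^{(p)}_{\Vec{\bullet}})_{\Vec{v}_j}\bigr)^{1/d}\ \ge\ (M_i-\varepsilon')^{1/d}
\]
whenever $M_i\ge\varepsilon'$ (and the asserted bound is trivial otherwise).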
This combined with \eqref{eq:3} shows that \eqref{eq:5} is true on $T_i\cap T_{\mathbb{Q}}$, hence it is true on $T_{\mathbb{Q}}$ since the $T_i$'s cover $T$.
Finally, the function $\Vec{a}\mapsto \vol\bigl(\Delta(W_{\Vec{\bullet}})_{\Vec{a}}\bigr)$ is continuous and strictly positive on the compact set $T$, hence bounded below by some $\delta>0$. Choosing $\varepsilon'<\delta\varepsilon/2$, we see that \eqref{eq:1} follows from \eqref{eq:5}, and the proof is complete.
\end{proof}
\begin{remark} \label{r:assumptions}
In the statement of Theorem~\ref{t:main} we assumed that $W_{\Vec{\bullet}}$ is the complete multigraded linear series associated to big divisors. In fact, since the main tool in the proof is the theory of Okounkov bodies from \cite{LM}, in particular \cite[Theorem~4.19]{LM}, the assumptions on $W_{\Vec{\bullet}}$ that are truly indispensable are the same as those in \cite{LM} (their Conditions (A$'$) and (B$'$), or (C$'$)). The only place in the proof where completeness is invoked is the sentence right after \eqref{eq:2}, where we need \eqref{eq:2} to hold not only in the relative interior of $T$ but also on its boundary. Hence if $W_{\Vec{\bullet}}$ is only assumed to satisfy Conditions (A$'$) and (B$'$), or (C$'$), then given any $\varepsilon>0$ and any compact set $C$ contained in
$T\cap \interior\bigl(\supp(W_{\Vec{\bullet}})\bigr)$, there exists an integer
$p_0=p_0(C,\varepsilon)$ such that if $p\ge p_0$ then \[
\vol_{W^{(p)}_{\Vec{\bullet}}}(\Vec{a})> \vol_{W_{\Vec{\bullet}}}(\Vec{a})-\varepsilon \]
for all $\Vec{a}\in C\cap T_{\mathbb{Q}}$.
\end{remark}
\newcommand{\Section}{\setcounter{equation}{0}\section}
\newcommand{\int\!\!\!\int}{\int\!\!\!\int}
\def{\rm I}\hskip -0.85mm{\rm R} {{\rm I}\hskip -0.85mm{\rm R}}
\def{\rm I}\hskip -0.85mm{\rm N} {{\rm I}\hskip -0.85mm{\rm N}}
\newcommand{\underline{u}}{\underline{u}}
\newcommand{\underline{v}}{\underline{v}}
\newcommand{\overline{u}}{\overline{u}}
\newcommand{\overline{v}}{\overline{v}}
\newcommand{\underline{w}}{\underline{w}}
\newcommand{\underline{z}}{\underline{z}}
\newcommand{\overline{w}}{\overline{w}}
\newcommand{\overline{z}}{\overline{z}}
\newcommand{\underline{f}}{\underline{f}}
\newcommand{\overline{f}}{\overline{f}}
\newcommand{\underline{g}}{\underline{g}}
\newcommand{\overline{g}}{\overline{g}}
\newcommand{{\mathcal U}}{{\mathcal U}}
\newcommand{{\mathcal V}}{{\mathcal V}}
\newcommand{{\mathcal S}}{{\mathcal S}}
\newcommand{\tilde{u}}{\tilde{u}}
\newcommand{\hfill $\Box$}{\hfill $\Box$}
\DeclareMathOperator*{\esssup}{ess\,sup}
\DeclareMathOperator*{\essinf}{ess\,inf}
\def\alpha{\alpha}
\def\beta{\beta}
\def\varepsilon{\varepsilon}
\def\Delta{\Delta}
\def\delta{\delta}
\def\downarrow{\downarrow}
\def\gamma{\gamma}
\def\Gamma{\Gamma}
\def\lambda{\lambda}
\def\mu{\mu}
\def\Lambda{\Lambda}
\def\omega{\omega}
\def\Omega{\Omega}
\def\partial{\partial}
\def\Phi{\Phi}
\def\Psi{\Psi}
\def\rho{\rho}
\def\varrho{\varrho}
\def\sigma{\sigma}
\def\theta{\theta}
\def\Theta{\Theta}
\def\varphi{\varphi}
\def\phi{\phi}
\def\xi{\xi}
\def\overline{\overline}
\def\underline{\underline}
\def\uparrow{\uparrow}
\def\downarrow{\downarrow}
\def\Rightarrow{\Rightarrow}
\def\Leftrightarrow{\Leftrightarrow}
\def\nabla{\nabla}
\def\displaystyle{\displaystyle}
\newcommand{\rightharpoonup}{\rightharpoonup}
\newcommand{\rightarrow}{\rightarrow}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{example}[theorem]{Example}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{assertion}{Assertion}[section]
\newtheorem{remark}{Remark}[section]
\DeclareMathOperator{\cat}{cat}
\DeclareMathOperator{\dist}{dist}
\title[positive solutions for Kirchhoff equations
via bifurcation methods]
{Existence results of positive solutions for \\ Kirchhoff type equations
via bifurcation methods}
\author[W. Cintra]{Willian Cintra}
\author[J. R. Santos Jr.]{Jo\~ao R. Santos J\'unior}
\author[G. Siciliano]{Gaetano Siciliano}
\author[A. Su\'arez]{Antonio Su\'arez}
\address[W. Cintra]{\newline\indent Faculdade de Matem\'atica
\newline\indent
Instituto de Ci\^{e}ncias Exatas e Naturais
\newline\indent
Universidade Federal do Par\'a
\newline\indent
Avenida Augusto Corr\^{e}a 01, 66075-110, Bel\'em, PA, Brazil}
\email{\href{mailto: willian\[email protected] }{willian\[email protected]}}
\address[J. R. Santos Jr.]{\newline\indent Faculdade de Matem\'atica
\newline\indent
Instituto de Ci\^{e}ncias Exatas e Naturais
\newline\indent
Universidade Federal do Par\'a
\newline\indent
Avenida Augusto Corr\^{e}a 01, 66075-110, Bel\'em, PA, Brazil}
\email{\href{mailto: [email protected] }{[email protected]}}
\address[G. Siciliano]{\newline\indent Departamento de Matem\'atica
\newline\indent
Instituto de Matem\'atica e Estat\'istica
\newline\indent
Universidade de S\~ao Paulo
\newline\indent
Rua do Mat\~ao 1010, 05508-090, S\~ao Paulo, SP, Brazil }
\email{\href{mailto:[email protected]}{[email protected]}}
\address[A. Su\'arez]{\newline\indent Departamento de Ecuaciones Diferenciales y An{\'a}lisis Num{\'e}rico
\newline\indent
Facultad de Matem{\'a}ticas
\newline\indent
Universidad de Sevilla
\newline\indent
C/. Tarfia s/n, 41012, Sevilla, Spain.}
\email{\href{mailto: [email protected]}{[email protected]}}
\thanks{
Willian Cintra was partially supported by CAPES, Brazil. Jo\~ao R. Santos J\'unior was partially supported by CNPq-Proc. 302698/2015-9 and CAPES-Proc. 88881.120045/2016-01, Brazil. Antonio Su\'arez has been partially supported by MTM2015-69875-P (MINECO/FEDER, UE) and CNPq-Proc. 400426/2013-7.
Gaetano Siciliano was partially supported by
Fapesp, Capes and CNPq, Brazil. }
\subjclass[2010]{ 35J15,
35J25,
35Q74.
}
\keywords{ Kirchhoff type equation, logistic nonlinearity problem, bifurcation method.}
\pretolerance10000
\begin{document}
\begin{abstract}
In this paper we address the following Kirchhoff type problem
\begin{equation*}
\left\{ \begin{array}{ll}
-\Delta(g(|\nabla u|_2^2) u + u^r) = a u + b u^p& \mbox{in}~\Omega, \\
u>0& \mbox{in}~\Omega, \\
u= 0& \mbox{on}~\partial\Omega,
\end{array} \right.
\end{equation*}
in a bounded and smooth domain $\Omega$ in ${\rm I}\hskip -0.85mm{\rm R}^{N}$. By using change of variables and bifurcation methods, we
show, under suitable conditions on the parameters $a,b,p,r$ and
the nonlinearity $g$, the existence of positive solutions.
\end{abstract}
\maketitle
\setcounter{equation}{0}\Section{Introduction}
In the recent paper \cite{JG} two of the authors studied the following
Kirchhoff type problem
\begin{equation}\label{eq:JG}
\left\{ \begin{array}{ll}
-\text{div}(m(u,|\nabla u|_2^2)\nabla u ) = f(x,u)& \mbox{in}~\Omega, \\
u= 0& \mbox{on}~\partial\Omega,
\end{array} \right.
\end{equation}
in the smooth and bounded domain $\Omega\subset {\rm I}\hskip -0.85mm{\rm R}^{N}, N\geq1$,
where $f$ is a sublinear nonlinearity and $m:{\rm I}\hskip -0.85mm{\rm R}\times[0,+\infty)\to {\rm I}\hskip -0.85mm{\rm R}$
is a function such that, setting
$$m_{t}: s\in {\rm I}\hskip -0.85mm{\rm R}\mapsto m(s,t)\in {\rm I}\hskip -0.85mm{\rm R},$$
the following hold:
\begin{enumerate}[label=(m\arabic*),ref=m\arabic*,start=0]
\item\label{m_{0}} $m: {\rm I}\hskip -0.85mm{\rm R}\times[0,+\infty)\to (0,+\infty)$ is continuous; \medskip
\item\label{m_{1}} there is $\mathfrak m >0$ such that $m(s, t)\geq \mathfrak m $ for all $s\in{\rm I}\hskip -0.85mm{\rm R}$ and $t\in[0,\infty)$;
\medskip
\item\label{m_{2}} for each $t\in [0,+\infty)$ the map
$m_{t}:{\rm I}\hskip -0.85mm{\rm R}\to(0,+\infty)$ is strictly decreasing in $(-\infty,0)$ and strictly increasing in $(0,+\infty)$. \medskip
\end{enumerate}
In particular, the class of admissible functions $m$ is quite large.
In \cite{JG} the problem was addressed by finding the fixed point of the map
$$S: t\in[0,+\infty) \mapsto \int_{\Omega}|\nabla u_{t}|^{2}dx\in (0,+\infty)$$
where $u_{t}$ is the unique solution of the auxiliary problem
\begin{equation}\label{eq:Pr}
\left\{ \begin{array}{ll}
-\text{div}(m_{t}(u)\nabla u ) = f(x,u)& \mbox{in}~\Omega, \\
u= 0& \mbox{on}~\partial\Omega.
\end{array} \right.
\end{equation}
Indeed, the main difficulty in the method was to guarantee existence, uniqueness,
and a priori bounds, uniform with respect to $t$, for the solutions of \eqref{eq:Pr}.
In particular, the uniqueness was obtained in virtue of the sublinearity condition of the nonlinearity
$f$. However, as remarked in \cite[Remark 5]{JG}, the uniqueness of the solution to \eqref{eq:Pr}
was not necessary in order to employ the method:
since the solution of \eqref{eq:JG} is obtained as a fixed point of $S$,
all that is needed is the existence of a continuous branch of solutions $t\mapsto u_{t}$
to \eqref{eq:Pr}, rather than the uniqueness of the solution $u_{t}$.
We point out that methods similar to those in \cite{JG} have recently been used in
\cite{JG2} to deal with a biharmonic Kirchhoff operator.
\medskip
Motivated by the above remark, in this paper we study a Kirchhoff type problem
where the uniqueness of the solution to the related auxiliary problem is not expected, due to the fact that
the nonlinearity is not sublinear.
Indeed, in this paper bifurcation methods are used in order to circumvent the lack of uniqueness
and obtain a continuum of positive solutions. We point out that bifurcation theory has been used by other authors to study Kirchhoff problems (see, for instance, \cite{ArcoyaAmbrosetti, FMSS, shi} and references therein), and arguments combining bifurcation theory with the Bolzano Theorem were used to analyze non-local elliptic problems in \cite{ArcoyaLP}.
More specifically, we study here
the existence of classical positive solutions for the following problem
with a logistic type nonlinearity:
\begin{equation}\label{P1}
\left\{ \begin{array}{ll}
-\Delta(g(|\nabla u|_2^2) u + u^r) = a u + b u^p& \mbox{in}~\Omega, \\
u>0& \mbox{in}~\Omega, \\
u= 0& \mbox{on}~\partial\Omega,
\end{array} \right.
\end{equation}
where $g:[0,\infty) \rightarrow [0,\infty)$ is a continuous function satisfying
minimal assumptions and
$a,b,r,p$ are constants such that $a>0$, $p,r >1$.
Observe that
problem \eqref{P1} is of Kirchhoff type since the equation can be written as
$$-\text{div} \left( m(u, |\nabla u|_{2}^{2}) \nabla u\right) = a u + b u^p$$
just defining $m(u,|\nabla u|_{2}^{2}) := g(|\nabla u|_{2}^{2}) +r u^{r-1}.$
We point out here that, if we fix the value $t=|\nabla u|_{2}^{2}$ and define, for every $t\geq0$, the map
$$m_{t}: s\in \mathbb R\mapsto m(s,t)\in \mathbb R,$$
then $m$ satisfies conditions \eqref{m_{0}} and \eqref{m_{1}} above; moreover,
for suitable values of $r$, condition \eqref{m_{2}} also holds, so that $m$ falls into the class of
functions allowed in \cite{JG}.
\medskip
In this paper we give sufficient conditions on the parameters $a,b,r,p$
and on the function $g$ under which problem \eqref{P1} admits a positive solution.
We point out, however, that many other cases remain open.
Here are our main results.
\begin{taggedtheorem}{A}\label{th:A}
Assume that
$g:[0,\infty) \rightarrow [0,\infty)$ is a continuous function with $\inf_{s>0}g(s) >0$.
If one of the following conditions is satisfied:
\begin{itemize}
\item[(i)] $b\leq 0$,\smallskip
\item[(ii)] $b>0$, $r=p$ and $b<\lambda_1$,\smallskip
\item[(iii)] $b>0$ and $r>p$,
\end{itemize}
then problem \eqref{P1} admits at least one positive solution for each $a>g(0)\lambda_1$.
Hereafter $\lambda_{1}$ denotes the first eigenvalue of the Laplacian in $\Omega$ under homogeneous Dirichlet boundary conditions.
\end{taggedtheorem}
\begin{taggedtheorem}{B}\label{th:B}
Assume that $g:[0,\infty) \rightarrow [0,\infty)$ is a bounded continuous function such that $g(0)>0$.
\begin{itemize}
\item[(i)] If $r=p$ and $b>\lambda_1$ then problem \eqref{P1} admits at least one positive solution for each $a< g(0)\lambda_1$. \smallskip
\item[(ii)] If $b>0$, $1<p/r <(N+2)/(N-2)$ and $g(0)\lambda_1>\phi(s_0)$, then problem \eqref{P1}
admits at least one positive solution for each $a \in (\phi(s_0),g(0)\lambda_1)$.
Here
$$s_0 :=\left(\frac{b(p-1)}{\lambda_1(r-1)}\right)^{{1}/{(r-p)}}$$
is the maximum point of the function $\phi(s):=
\lambda_1s^{r-1}-bs^{p-1}, s\geq 0.$
\end{itemize}
\end{taggedtheorem}
\begin{taggedtheorem}{C}\label{th:C}
Assume that $g:[0,\infty) \rightarrow [0,\infty)$ is a continuous and positive function. If $b=\lambda_1$ and $r=p<2$, then problem \eqref{P1} admits a positive solution if, and only if, $a\in \lambda_1 R[g] :=\{\lambda_1g(s); s \geq 0\} \subset (0,\infty)$.
\end{taggedtheorem}
\medskip
The paper is organized as follows. In Section \ref{sec:Prelim}
some preliminary results are given and a suitable parameter-dependent problem
(see \eqref{Pa1}) is introduced in order to study the original problem \eqref{P1}.
Section \ref{sec:Change} is devoted to the study of problem \eqref{Pa1}:
by means of a change of variables, it is transformed into an equivalent
problem, which is then studied via bifurcation theory.
In Section \ref{sec:final} the proof of the main results is given.
\medskip
As for notation,
for $k\in {\rm I}\hskip -0.85mm{\rm N}$ we set $C^{k}_{0}(\overline\Omega):=C^{k}(\Omega)\cap C_{0}(\overline{\Omega})$, the set of functions
defined on $\overline \Omega$ which are of class $C^{k}$ in $\Omega$, continuous up to the boundary, and satisfy
the homogeneous Dirichlet boundary condition. Hereafter $|\cdot|_{p}$ stands for the $L^{p}(\Omega)-$norm
and the uniform norm will be denoted by $\|\cdot\|$.
\setcounter{equation}{0}\Section{Preliminaries}\label{sec:Prelim}
Our approach to deal with problem \eqref{P1}
is to consider the following auxiliary problem
depending on the positive parameter $\lambda$:
\begin{equation}\label{Pa1}\tag{$P_{\lambda}$}
\left\{ \begin{array}{ll}
-\Delta(u/\lambda + u^r) = a u + b u^p& \mbox{in}~\Omega, \\
u>0 & \mbox{in}~\Omega,\\
u= 0& \mbox{on}~\partial\Omega.
\end{array} \right.
\end{equation}
By a solution we mean a pair $(\lambda, u_{\lambda})\in (0,+\infty)\times C^{2}_{0}(\overline\Omega)$ satisfying \eqref{Pa1}
in the classical sense.
The idea is to find first
an unbounded continuum, say $\Sigma_0$, of positive solutions of (\ref{Pa1}) via bifurcation theory. Then, using the classical Bolzano Theorem, we will search for zeros of the map
$$h: (\lambda,u)\in \Sigma_{0}\longmapsto \frac{1}{\lambda} - g(|\nabla u|_2^2) \in {\rm I}\hskip -0.85mm{\rm R},$$
each of which provides a solution of (\ref{P1}).
\medskip
In the following, given functions $A \in {C}^1(\overline{\Omega})$ and $B\in {C}(\overline{\Omega})$ satisfying
$A(x) \geq A_0>0$ in $\Omega$
for some constant $A_{0}$, we will denote by
$$\sigma_1 [ -\text{div} (A(x) \nabla) + B(x)] $$
the principal eigenvalue of the problem
\begin{equation*}
\left\{\begin{array}{ll}
-\text{div} \left(A(x) \nabla u\right) + B(x)u =\lambda u &\mbox{in}~\Omega, \\
u=0&\mbox{on}~\partial\Omega.
\end{array} \right.
\end{equation*}
It is well-known (see for instance \cite{SMPbook,Djairo}) that this eigenvalue is increasing with respect to $A$
and $B$. When $A \equiv A_0$ is constant, we have $-\text{div} (A_0 \nabla) = -A_0 \Delta$, thus in this case we will
write
$$\sigma_1 [ -A_0 \Delta + B(x)].$$
By simplicity, we set
$$\lambda_1:= \sigma_1 [ -\Delta].$$
Moreover, we will denote by $\varphi_1$ the positive eigenfunction associated to $\lambda_1$ with $\|\varphi_1\|=1.$
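For later use we recall the variational characterization of this principal eigenvalue (see, for instance, \cite{SMPbook,Djairo}):
$$
\sigma_1 [ -\text{div} (A(x) \nabla) + B(x)] = \inf_{u\in H^1_0(\Omega)\setminus\{0\}} \frac{\displaystyle\int_\Omega \left(A(x)|\nabla u|^2 + B(x)u^2\right)}{\displaystyle\int_\Omega u^2},
$$
from which the monotonicity with respect to $A$ and $B$ mentioned above is immediate.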
\medskip
The first result gives necessary conditions on the parameters $a,b,r,p$
in order to get solutions to \eqref{Pa1}.
Here the following function
\begin{equation}\label{eq:phi}
\phi(s):=\lambda_1s^{r-1}-bs^{p-1}, \quad s\geq 0,
\end{equation}
plays an important role.
It is easy to see that if $r>p$ (resp. $r<p$), then $\phi$ is bounded below (resp. above) and the minimum (resp. maximum)
is attained at
\begin{equation}\label{eq:s0}
s_0 =\left(\frac{b(p-1)}{\lambda_1(r-1)}\right)^{\frac{1}{r-p}},
\end{equation}
and
$$
\phi(s_0)=\left(\frac{p-1}{\lambda_1}\right)^{\frac{p-1}{r-p}}\left(\frac{b}{r-1}\right)^{\frac{r-1}{r-p}}(p-r).
$$
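For the reader's convenience, these formulas follow by elementary calculus: $\phi'(s)=\lambda_1(r-1)s^{r-2}-b(p-1)s^{p-2}$ vanishes on $(0,\infty)$ exactly at $s=s_0$, and substituting back,
$$
\phi(s_0)=s_0^{p-1}\left(\lambda_1 s_0^{r-p}-b\right)=b\,\frac{p-r}{r-1}\,s_0^{p-1},
$$
which agrees with the displayed expression.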
With these considerations, we can show the next result.
\begin{lemma}\label{ne}
We have the following.
\begin{enumerate}
\item[(a)] Suppose $b\leq0$. Then (\ref{Pa1}) does not possess a positive solution for $\lambda \leq \lambda_1/a$.\smallskip
\item[(b)]Suppose $b>0$ and $r>p$. Then (\ref{Pa1}) does not possess a positive solution for $\lambda < \lambda_1/(a-\phi(s_0))$. Moreover, if $\lambda> \lambda_1/a$, then (\ref{Pa1}) does not possess a positive solution $(\lambda,u_\lambda)$ with $\|u_\lambda\| < (b/\lambda_1)^{1/(r-p)}$. \smallskip
\item[(c)]Suppose $b>0$ and $r=p$. If $b<\lambda_1$ (resp. $b>\lambda_1$), then (\ref{Pa1}) does not possess a positive solution for $\lambda<\lambda_1/a$ (resp. $\lambda>\lambda_1/a$). Moreover, if $b = \lambda_1$ then (\ref{Pa1}) does not possess a positive solution for $\lambda \neq \lambda_1/a$. \smallskip
\item[(d)] Suppose $b>0$, $r<p$ and $a>\phi(s_0)$. Then (\ref{Pa1}) does not possess a positive solution for $\lambda>\lambda_1/(a-\phi(s_0))$. Moreover, if $\lambda< \lambda_1/a$, then (\ref{Pa1}) does not possess a positive solution $(\lambda,u_\lambda)$ with $\|u_\lambda\| < (b/\lambda_1)^{1/(r-p)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (a) and (b) we argue as follows. Let $\varphi_1$ be the positive eigenfunction associated to $\lambda_1$ with $\|\varphi_1\|=1$. Multiplying (\ref{Pa1}) by $\varphi_1$, integrating in $\Omega$ and applying the formula of integration by parts we get
\begin{eqnarray*}
\int_\Omega \nabla\left(\displaystyle\frac{u_\lambda}{\lambda}+u_{\lambda}^r\right) \nabla \varphi_1 &=& \int_\Omega (au_{\lambda} + bu_{\lambda}^p)\varphi_1\\
\int_\Omega \lambda_1\left(\displaystyle\frac{u_\lambda}{\lambda}+u_{\lambda}^r\right) \varphi_1 &=& \int_\Omega (au_{\lambda} + bu_{\lambda}^p)\varphi_1.
\end{eqnarray*}
Hence, $(\lambda, u_\lambda)$ satisfies
\begin{equation}\label{000}
\int_\Omega u_\lambda\varphi_1\left[\lambda_1 u_\lambda^{r-1}-bu_\lambda^{p-1} - \left(a- \frac{\lambda_1}{\lambda} \right) \right]=0.
\end{equation}
If $b\leq 0$, the above equality implies that $a>\lambda_1/\lambda$, which proves (a).
On the other hand, assume $b>0$ and $r>p$. Then the function
$$\phi(s)= \lambda_1 s^{r-1} - b s^{p-1}, \quad s \geq 0,$$
is bounded below and, by a direct calculation, $\min_{0\leq s < \infty}\phi(s)$ is negative and attained at
$$s_0 =\left(\frac{b(p-1)}{\lambda_1(r-1)}\right)^{\frac{1}{r-p}}.$$
Therefore, for every
$$
\lambda \in \left(0, \frac{\lambda_1}{a-\phi(s_0)}\right)
$$
we have
$$\phi(s_0)> a - \frac{\lambda_1}{\lambda},$$
and consequently
$$\lambda_1 s^{r-1} - b s^{p-1} - \left(a - \frac{\lambda_1}{\lambda} \right)> 0 \quad \forall s \geq 0.$$
Hence (\ref{000}) cannot be satisfied by any positive function $u_\lambda$, showing that $\lambda\geq \lambda_1/(a-\phi(s_0))$ is a necessary condition for the existence of a positive solution of (\ref{Pa1}) when $b>0$ and $r>p$.\\
Now, suppose that $(\lambda,u_\lambda)$ is a positive solution of (\ref{Pa1}) with $\lambda> \lambda_1/a$. Then, it verifies (\ref{000}) and, hence,
\begin{equation}\label{0001}
\int_\Omega u_\lambda\varphi_1(\lambda_1 u_\lambda^{r-1}-bu_\lambda^{p-1}) = \left(a- \frac{\lambda_1}{\lambda} \right)\int_\Omega u_\lambda \varphi_1>0.
\end{equation}
If $\|u_\lambda\|<(b/\lambda_1)^{1/(r-p)}$, then
$$ \phi(u_\lambda(x)) = \lambda_1 u_\lambda(x)^{r-1}-bu_\lambda(x)^{p-1} <0 \quad \forall x \in \Omega$$
and, therefore,
$$
\int_\Omega u_\lambda\varphi_1(\lambda_1 u_\lambda^{r-1}-bu_\lambda^{p-1}) < 0,
$$
which is a contradiction with (\ref{0001}).
\medskip
In the case (c), since $r=p$, (\ref{000}) reduces to
$$
(\lambda_1-b)\int_\Omega u_\lambda^r\varphi_1 = \left(a- \frac{\lambda_1}{\lambda} \right)\int_\Omega u_\lambda \varphi_1.
$$
If $b<\lambda_1$ the left-hand side is positive, forcing $\lambda>\lambda_1/a$; if $b>\lambda_1$ it is negative, forcing $\lambda<\lambda_1/a$; and if $b=\lambda_1$ it vanishes, forcing $\lambda=\lambda_1/a$. This proves (c).
\medskip
The proof of paragraph (d) is similar to that of (b).
\end{proof}
In the next Section we will analyze the auxiliary problem \eqref{Pa1}.
\setcounter{equation}{0}\Section{Study of the auxiliary problem (\ref{Pa1})} \label{sec:Change}
To study (\ref{Pa1}) we introduce the following change of variables depending on the parameter $\lambda$:
\begin{equation}\label{eq:cambio}
w:=\frac{u}{\lambda} + u^r.
\end{equation}
Then, if we define the function $I_{\lambda}:s\in[0,+\infty)\mapsto \displaystyle\frac{s}{\lambda} + s^r\in[0,+\infty)$, which
is a smooth diffeomorphism, and denote its
inverse by $q_{\lambda}$, we have
$$w = I_{\lambda}(u) = \frac{u}{\lambda} + u^r \Longleftrightarrow q_{\lambda}(w)=u.$$
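To illustrate, in the particular case $r=2$ the inverse is explicit: solving $s/\lambda+s^{2}=w$ for $s\ge 0$ gives
$$
q_{\lambda}(w)=\frac{-1/\lambda+\sqrt{1/\lambda^{2}+4w}}{2}=\frac{2w}{1/\lambda+\sqrt{1/\lambda^{2}+4w}},
$$
so that, for instance, $q_{\lambda}(w)/w\to\lambda$ as $w\to 0^{+}$ and $q_{\lambda}(w)/w\to 0$ as $w\to\infty$, in accordance with the properties established below.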
The first result of this section collects some important properties related to the map $q_{\lambda}$.
\begin{lemma}\label{lem:q}
The map $\lambda \in (0,\infty) \mapsto q_{\lambda}(s)$ is continuous and increasing for each fixed $s >0$.
Now let $\lambda>0$ be fixed. Then the map $s\in(0,+\infty)\mapsto\displaystyle\frac{q_{\lambda}(s)}{s}\in(0,+\infty)$
has the following properties:
\begin{enumerate}[label=(q\arabic*),ref=q\arabic*,start=1]
\item\label{q1} it is decreasing and $q_{\lambda}(s)/s\leq \lambda$, \smallskip
\item\label{q2}
$\displaystyle \lim_{s \rightarrow 0^{+}} \displaystyle\frac{q_{\lambda}(s)}{s} = \lambda,$
\item\label{q3} $\displaystyle \lim_{s\to+\infty}\displaystyle\frac{q_{\lambda}(s)}{s} =0. $
\end{enumerate}
On the other hand, the map $s\in(0,+\infty)\mapsto\displaystyle\frac{q_{\lambda}(s)^{p}}{s}\in(0,+\infty)$
satisfies:
\begin{enumerate}[label=(q\arabic*),ref=q\arabic*,start=4]
\item\label{q4}
$\displaystyle \lim_{s \rightarrow 0^{+}}\displaystyle \frac{q_{\lambda}(s)^p}{s} = 0$,\smallskip
\item\label{q5} it is increasing if $p \geq r$, \smallskip
\item \label{q6} $\displaystyle \lim_{s\to+\infty} \displaystyle\frac{q_{\lambda}(s)^{p}}{s}=
\begin{cases}
+\infty &\mbox{ if } r<p, \\
0 &\mbox{ if }r>p, \\
1 &\mbox{ if } r=p.
\end{cases}
$
\end{enumerate}
\end{lemma}
\begin{proof}
The continuity of the map $\lambda \in (0,\infty) \mapsto q_\lambda(s)$, for each $s > 0$, follows from
the definition of $q_\lambda$. To prove that it is increasing, that is,
\begin{equation}\label{eq:qlcrescente}
\lambda'< \lambda'' \Longrightarrow q_{\lambda'}(s)< q_{\lambda''}(s),
\end{equation}
we proceed as follows. For every $s>0$, we have:
\begin{eqnarray*}
I_{\lambda''}(q_{\lambda'}(s)) &=& \frac{q_{\lambda'}(s)}{\lambda''} + q_{\lambda'}(s)^{r} \\
&<& \frac{q_{\lambda'}(s)}{\lambda'} + q_{\lambda'}(s)^{r} \\
&=& I_{\lambda'}(q_{\lambda'}(s))\\ &=& s
\end{eqnarray*}
from which \eqref{eq:qlcrescente} easily follows.
Now, let $\lambda>0$ be fixed. Since $q_{\lambda} $ is the inverse function of $I_{\lambda}$, it is increasing and verifies
\begin{equation*}\label{s}
s=I_{\lambda}(q_{\lambda}(s)) = \frac{q_{\lambda}(s)}{\lambda} + q_{\lambda}(s)^r
\end{equation*}
so that
$$ \frac{q_{\lambda}(s)}{s} = \frac{1}{1/\lambda+q_{\lambda}(s)^{r-1}} \leq \lambda
$$
which proves \eqref{q1}.
Furthermore, since $q_{\lambda}(0)= 0$ and $r>1$ it follows that
$$
\lim_{s \rightarrow 0^{+} } \frac{q_{\lambda}(s)}{s} =\lim_{s \rightarrow 0^{+}} \frac{1}{1/\lambda+q_{\lambda}(s)^{r-1}} = \lambda\quad \textrm{ and }\quad \lim_{s\to+\infty}\frac{q_{\lambda}(s)}{s}=\lim_{s\to+\infty} \frac{1}{1/\lambda+q_{\lambda}(s)^{r-1}} =0
$$
which gives \eqref{q2} and \eqref{q3}.
Moreover, being $p>1$,
$$\lim_{s \rightarrow 0^{+}} \frac{q_{\lambda}(s)^p}{s} = \lim_{s \rightarrow 0^{+}} \frac{q_{\lambda}(s)}{s} q_{\lambda}
(s)^{p-1} = 0,$$
proving \eqref{q4}.
Finally, the identity
$$ \frac{q_{\lambda}(s)^p}{s} = \frac{1}{q_{\lambda}(s)^{1-p}/\lambda+ q_{\lambda}(s)^{r-p}}$$
gives \eqref{q5} and \eqref{q6}.
\end{proof}
\begin{remark}\label{limuniforme}
We point out that, in the case $r>p>1$, for each $\overline{\lambda}>0$, the limit $\lim_{s \rightarrow \infty} q_\lambda(s)^p/s = 0$, is uniform in $\lambda \in [\overline{\lambda},\infty)$. Indeed, for each $\delta>0$, to obtain
$$
\frac{q_\lambda(s)^p}{s} \leq \delta \left(\Longleftrightarrow s \leq I_\lambda((\delta s)^{1/p}) = \frac{(\delta s)^{1/p}}{\lambda}+ (\delta s)^{r/p} \right)
$$
it is sufficient to choose $s>0$ (independent of $\lambda$) such that
$$
1 \leq \delta^{r/p} s^{(r/p)-1} \left(\leq \frac{\delta ^{1/p} s^{(1/p)-1}}{\lambda}+\delta^{r/p} s^{(r/p)-1}\right).
$$
It should be noted that, by a similar argument, the limit $\lim_{s \rightarrow \infty} q_\lambda(s)/s = 0$ is also uniform in $\lambda \in [\overline{\lambda}, \infty).$
We will use these properties later to get a priori bounds of the positive solutions of (\ref{Pa2}), uniform with respect to $\lambda \in [\overline{\lambda},\infty)$.
\end{remark}
\medskip
Thus, under the above change of variable \eqref{eq:cambio}, problem \eqref{Pa1} is equivalent to
\begin{equation}\label{Pa2}
\left\{ \begin{array}{ll}
-\Delta w = a q_{\lambda}(w) + b q_{\lambda}(w)^p& \mbox{in }\Omega, \\
w>0& \mbox{in }\Omega, \\
w= 0& \mbox{on } \partial\Omega,
\end{array} \right.
\end{equation}
in the sense that $(\lambda,u_{\lambda})$
is a solution of
\eqref{Pa1} if, and only if, $(\lambda, w_{\lambda}):=(\lambda, u_{\lambda}/\lambda+u_{\lambda}^r)$
is a solution of \eqref{Pa2}.
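For completeness, we sketch the verification of this equivalence, which is a direct substitution. If $w_{\lambda}=I_{\lambda}(u_{\lambda})=u_{\lambda}/\lambda+u_{\lambda}^{r}$, then $u_{\lambda}=q_{\lambda}(w_{\lambda})$ and
$$
-\Delta w_{\lambda} = -\Delta\Big(\frac{u_{\lambda}}{\lambda}+u_{\lambda}^{r}\Big),
\qquad
a q_{\lambda}(w_{\lambda})+b q_{\lambda}(w_{\lambda})^{p}=a u_{\lambda}+b u_{\lambda}^{p},
$$
so each of the two equations transforms into the other under the substitution. Moreover, since $I_{\lambda}$ is increasing with $I_{\lambda}(0)=0$, we have $w_{\lambda}>0$ in $\Omega$ if, and only if, $u_{\lambda}>0$ in $\Omega$, and $w_{\lambda}=0$ on $\partial\Omega$ if, and only if, $u_{\lambda}=0$ on $\partial\Omega$.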
We observe explicitly that the map
$$f: (\lambda, s)\in (0,+\infty)\times[0,+\infty)\longmapsto a q_{\lambda}(s) +bq_{\lambda}(s)^{p}\in \mathbb R$$
is locally Lipschitz.
Note that for every $\lambda>0$ there is always the trivial solution $w\equiv 0$
to \eqref{Pa2}.
\begin{remark}\label{rem:necesariaw}
In virtue of Lemma \ref{ne} we have the following necessary conditions
for the existence of solutions to \eqref{Pa2}.
\begin{enumerate}
\item[(a)] Suppose $b\leq0$. Then \eqref{Pa2} does not possess positive solution for $\lambda \leq \lambda_1/a$. \smallskip
\item[(b)] Suppose $b>0$ and $r>p$. Then \eqref{Pa2} does not possess positive solution for $\lambda < \lambda_1/(a-\phi(s_0))$. Moreover, If $\lambda> \lambda_1/a$, then \eqref{Pa2} does not possess positive solution $(\lambda,w_\lambda)$ with $$\|w_\lambda\| < \frac{a}{\lambda_1} \left(\frac{b}{\lambda_1}\right)^{1/(r-p)} + \left(\frac{b}{\lambda_1}\right)^{r/(r-p)}.$$ \smallskip
\item[(c)]Suppose $b>0$ and $r=p$. If $b<\lambda_1$ (resp. $b>\lambda_1$), then \eqref{Pa2} does not posses positive solution for $\lambda<\lambda_1/a$ (resp. $\lambda>\lambda_1/a$). Moreover, if $b = \lambda_1$ then \eqref{Pa2} does not possess positive solution for $\lambda \neq \lambda_1/a$. \smallskip
\item[(d)] Suppose $b>0$, $r<p$ and $a>\phi(s_0)$, then \eqref{Pa2} does not possess positive solution for $\lambda>\lambda_1/(a-\phi(s_0))$. Moreover, if $\lambda< \lambda_1/a$, then \eqref{Pa2} does not possess positive solution $(\lambda,w_\lambda)$ with
$$\|w_\lambda\| < \frac{a}{\lambda_1} \left(\frac{b}{\lambda_1}\right)^{1/(r-p)} + \left(\frac{b}{\lambda_1}\right)^{r/(r-p)}.$$
\end{enumerate}
\end{remark}
Problem \eqref{Pa2} will be studied with the help of bifurcation theory.
\begin{definition}
We say that $(\lambda_0,0)$ is a bifurcation point from the trivial solution of the equation in \eqref{Pa2} if there exists a sequence $(\lambda_n,w_n)$ of non-trivial solutions of \eqref{Pa2} such that
$$ (\lambda_n,w_n) \rightarrow (\lambda_0,0) \quad \mbox{as}~n \rightarrow \infty.$$
\end{definition}
Now we will
obtain an unbounded continuum of positive solutions of
(\ref{Pa2}) emanating from the trivial solution at $\lambda = \lambda_1/a$ and, from it, derive a result of existence of positive solutions of (\ref{Pa2}) and,
consequently, of (\ref{Pa1}).
To this end, consider the map $\mathfrak{F}:(0,\infty)\times {C}_0^1(\overline{\Omega}) \rightarrow {C}_0^1(\overline{\Omega})$ defined by
$$
\mathfrak{F}(\lambda, w) = w - (-\Delta)^{-1}[aq_{\lambda}(w) +b q_{\lambda}(w)^p ],
$$
where $(-\Delta)^{-1}$ is the inverse of the Laplacian operator under homogeneous Dirichlet boundary conditions.
The operator $\mathfrak{F}$ is of class $\mathcal{C}^1$ and the equation in (\ref{Pa2})
(including the boundary condition) can be written in the form
$$\mathfrak{F}(\lambda, w) =0.$$
Moreover, since we are interested only in positive solutions with $\lambda>0$, we can consider any $\mathcal{C}^1$-extension of $\mathfrak{F}$ to ${\rm I}\hskip -0.85mm{\rm R}\times C_0^1(\overline{\Omega})$. Thus, still denoting it by $\mathfrak{F}$, we have:
\begin{proposition}\label{bifur}
The value $\lambda=\lambda_1/a$ is the unique bifurcation point from the trivial solution to \eqref{Pa2} with $\lambda>0$.
Moreover, from $\lambda_{1}/a$ emanates an unbounded continuum in $(0,\infty)\times {C}_0^1(\overline{\Omega})$ of positive solutions $\widehat\Sigma_{0}$ of (\ref{Pa2}).
\end{proposition}
\begin{proof}
Thanks to \eqref{q2} and \eqref{q4} of Lemma \ref{lem:q},
we have
$$\lim_{s \rightarrow 0}\frac{aq_{\lambda}(s) + bq_{\lambda}(s)^p}{s} = a\lambda. $$
Thus, the linearization of $\mathfrak{F}$ at $(\lambda,0),\lambda>0$, is given by
$$ \partial_w\mathfrak{F}(\lambda,0) = I_{\mathcal{C}^1_{0}(\overline \Omega)} - a\lambda(-\Delta)^{-1}.$$
Consequently, $\partial_w\mathfrak{F}(\lambda,0)$ is a Fredholm operator of index zero, analytic in $\lambda$. Moreover, its kernel verifies
$$
N[\partial_w\mathfrak{F}(\lambda_1/a,0)] = \mbox{span}[\varphi_1],
$$
where $\varphi_1 > 0$ stands, as usual, for the principal eigenfunction associated to $\lambda_1$ and satisfying
$\|\varphi_1\|=1$. Furthermore, by a standard argument,
\begin{equation}\label{rrr}
\partial_\lambda \partial_w \mathfrak{F}(\lambda_1/a,0)\varphi_1 \not\in R[\partial_w \mathfrak{F}(\lambda_1/a,0)],
\end{equation}
that is, $\lambda = \lambda_1/a$ is a 1-transversal eigenvalue of the family $\partial_w \mathfrak{F}(\lambda,0)$, which is the transversality condition stated in \cite{RabC}. Therefore, we can apply the unilateral bifurcation theorem
(see \cite[Theorem 6.4.3]{Bifbook}) to conclude the existence of a continuum $\widehat{\Sigma}_0$ of positive solution of (\ref{Pa2}) satisfying one of the following non-excluding options: either
\begin{enumerate}
\item[1.] $\widehat{\Sigma}_0$ is unbounded in ${\rm I}\hskip -0.85mm{\rm R} \times {C}_0^1(\overline{\Omega})$.
\item[2.] There exists $\lambda_* \in {\rm I}\hskip -0.85mm{\rm R}$ such that $\lambda_* \neq \lambda_1/a$ and $(\lambda_*,0) \in \widehat{\Sigma}_0.$
\item[3.] $\widehat{\Sigma}_0$ contains a point $(\lambda,w) \in {\rm I}\hskip -0.85mm{\rm R} \times (Y \setminus \{0\})$, where $Y$ is the complement of $N[\partial_w\mathfrak{F}(\lambda_1/a,0)]$ in ${C}_0^1(\overline{\Omega})$.
\end{enumerate}
Let us prove that 2. and 3. cannot be satisfied.
If 2. occurs, then $\lambda_*$ is a bifurcation point from the trivial solution of (\ref{Pa2}). Since the bifurcation points from the trivial solution of (\ref{Pa2}) are necessarily of the form $\mu/a$, with $\mu$ a simple eigenvalue of $-\Delta$ in $\Omega$ under homogeneous Dirichlet boundary conditions, and the only simple eigenvalue is $\lambda_1$, we must have $\lambda_* = \lambda_1/a$, which is a contradiction.
Suppose now that 3. occurs. Note that we can take $Y= R[\partial_w\mathfrak{F}(\lambda_1/a,0)]$. Indeed, since $\partial_w\mathfrak{F}(\lambda_1/a,0)$ is a Fredholm operator of index zero, we have
$$\mbox{codim} R[\partial_w\mathfrak{F}(\lambda_1/a,0)] = \mbox{dim} N[\partial_w\mathfrak{F}(\lambda_1/a,0)] =1$$
and, hence, the complement of $R[\partial_w\mathfrak{F}(\lambda_1/a,0)]$ on ${C}_0^1(\overline{\Omega})$ is one dimensional. From the transversality condition (\ref{rrr}), we obtain
$$R[\partial_w\mathfrak{F}(\lambda_1/a,0)] \oplus N[\partial_w\mathfrak{F}(\lambda_1/a,0)] ={C}_0^1(\overline{\Omega}).$$
Now, observe that the function $w$ given in paragraph $3.$ is positive and
$$w \in Y = R[\partial_w\mathfrak{F}(\lambda_1/a,0)] = R[I- \lambda_1(-\Delta)^{-1}],$$
which is impossible, since every element of this range is $L^2(\Omega)$-orthogonal to $\varphi_1$ (by the self-adjointness of $(-\Delta)^{-1}$), whereas $\int_\Omega w\varphi_1>0$. Then $\widehat\Sigma_0$ is unbounded in ${\rm I}\hskip -0.85mm{\rm R} \times {C}_0^1(\overline{\Omega})$.
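For completeness, we also make explicit the standard argument behind the transversality condition \eqref{rrr}. Since $\partial_w\mathfrak{F}(\lambda,0) = I - a\lambda(-\Delta)^{-1}$, we have $\partial_\lambda \partial_w \mathfrak{F}(\lambda,0) = -a(-\Delta)^{-1}$ and hence
$$
\partial_\lambda \partial_w \mathfrak{F}(\lambda_1/a,0)\varphi_1 = -a(-\Delta)^{-1}\varphi_1 = -\frac{a}{\lambda_1}\varphi_1.
$$
If $-\frac{a}{\lambda_1}\varphi_1$ belonged to $R[I-\lambda_1(-\Delta)^{-1}]$, say $-\frac{a}{\lambda_1}\varphi_1 = v - \lambda_1(-\Delta)^{-1}v$, then, multiplying by $\varphi_1$, integrating in $\Omega$ and using the self-adjointness of $(-\Delta)^{-1}$,
$$
-\frac{a}{\lambda_1}\int_\Omega\varphi_1^2 = \int_\Omega v\varphi_1 - \lambda_1\int_\Omega v\,(-\Delta)^{-1}\varphi_1 = \int_\Omega v\varphi_1 - \int_\Omega v\varphi_1 = 0,
$$
a contradiction.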
\end{proof}
With the aim of obtaining a priori estimates
for the positive solutions of (\ref{Pa2}), we
first recall a result that applies to the following general problem
\begin{equation}\label{eq:Games}
\left\{ \begin{array}{ll}
-\Delta u = f(\lambda, x,u)& \mbox{in }\Omega, \\
u= 0& \mbox{on } \partial\Omega.
\end{array} \right.
\end{equation}
A solution of this problem is a pair $(\lambda,u)$. We denote by
\eqref{eq:Games}$_{\lambda}$ the above problem with $\lambda$ fixed.
\begin{theorem}(See \cite[Theorem 2.2.]{Gamez})\label{th:Gamez}
Assume that $f$ is locally Lipschitz. Suppose that $I\subset \mathbb R$
is an interval and let $\Sigma \subset I \times C^{2}_{0}(\overline\Omega)$
be a connected set of solutions of
\eqref{eq:Games}. Consider a continuous map $\overline U: I\to C^{2}_{0}(\overline\Omega)$
such that $\overline U(\lambda) $ is a super-solution of \eqref{eq:Games}$_{\lambda}$
for every $\lambda\in I$,
but not a solution.
If $u_{0}\leq (\not\equiv)\overline U(\lambda_{0})$ for some $(\lambda_{0}, u_{0})\in\Sigma$,
then $u<\overline U(\lambda)$ in $\Omega$, for all $(\lambda,u)\in \Sigma$.
\end{theorem}
Then, coming back to our problem we have the following.
\begin{lemma}\label{bound}
Let $(\lambda,w_\lambda)\in \widehat{\Sigma}_{0}$, where $\widehat\Sigma_{0}$
is given in Proposition \ref{bifur} (and hence $w_{\lambda}$ a positive solution of \eqref{Pa2}).
\begin{enumerate}
\item[(a)] If $b< 0$, then
$$\|w_\lambda\| \leq c/\lambda + c^r\quad \forall \lambda>0,$$
where $c := (-a/b)^{1/(p-1)}$.
\item[(b)] If $b=0$, then there exists $c_0>0$ such that
$$\|w_\lambda \|\leq c_0 \quad \forall \lambda > \lambda_1/a.$$
\item[(c)] If $b>0$ and $p/r<1$ then there exists $c_0>0$ such that
$$\|w_\lambda \|\leq c_0 \quad \forall \lambda \in \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}} \widehat{\Sigma}_0.$$
\item[(d)] If $b>0$, $b \neq \lambda_1$ and $p/r=1$ then for each compact subset $\Lambda \subset (0,\infty)$ there exists $c_1>0$ such that
$$\|w_\lambda \|\leq c_1 \quad \forall \lambda \in \Lambda.$$
\item[(e)] If $b>0$ and $1<p/r<(N+2)/(N-2)$, then for each compact subset $\Lambda \subset (0,\infty)$,
there exists $c_2>0$ such that
$$\|w_\lambda\| \leq c_2 \quad \forall\lambda\in \Lambda.$$
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (a), let $x_M \in \Omega$ be a point where $w_\lambda$ attains its maximum on $\Omega$. Then,
\begin{eqnarray*}
0 &\leq& -\Delta w_\lambda(x_M) = aq_{\lambda}(w_\lambda(x_M)) + b q_{\lambda}(w_\lambda(x_M))^p
\end{eqnarray*}
and, dividing by $q_{\lambda}(w_\lambda(x_M))>0$,
$$
0 \leq a+b q_{\lambda}(w_\lambda(x_M))^{p-1}.
$$
Since $b<0$, this inequality is equivalent to
$$
q_{\lambda}(w_\lambda(x_M)) \leq (-a/b)^{1/(p-1)} =: c = q_{\lambda} (I_{\lambda} (c)),
$$
and, hence,
$$w_\lambda(x_M) \leq c/\lambda+c^r = I_{\lambda}(c),$$
showing that $w_\lambda(x) \leq c/\lambda+c^r$ for all $x \in \Omega$.
\medskip
In the case $b=0$, we will build a family $\overline{W}(\lambda)$ of supersolutions of \eqref{Pa2}
for every $\lambda\in[\lambda_{1}/a,+\infty)$ and apply Theorem \ref{th:Gamez}.
To this aim, let
$e$ be the
unique (positive) solution of
\begin{eqnarray*}\label{e}
\left\{\begin{array}{rl}
-\Delta e = 1 & \mbox{in}~ \widehat{\Omega}, \\
e=0& \mbox{on}~ \partial \widehat{\Omega},
\end{array}\right.
\end{eqnarray*}
where $\widehat{\Omega}$ is a regular domain with $\Omega \subset\subset \widehat{\Omega}$; in particular
$e_m := \min_{\overline{\Omega}}e>0$.
Let $K>0$ be a constant large enough (independent of $\lambda$) such that
\begin{equation}\label{eq:Kgrande}
q_{\lambda_1/a}(Ke_m)^{r-1} > a\|e\| \quad \mbox{and} \quad Ke(x)\geq w_{\lambda_{1}/a}(x) \quad \forall x\in \Omega.
\end{equation}
Then, we consider the map
$$\overline{W}:[\lambda_1/a, \infty) \rightarrow C^2_0(\overline{\Omega}) \quad
\text{such that } \overline{W} (\lambda) = Ke.$$
We will show that $\overline{W}(\lambda) = Ke$ is a supersolution of (\ref{Pa2})
for every $\lambda\in[\lambda_{1}/a,+\infty)$, that is
$$
\forall \lambda\geq\lambda_{1}/a: \ \ K=-\Delta(Ke) \geq a q_\lambda(Ke)\quad \mbox{in } {\Omega},
$$
or equivalently
$$
\forall \lambda\geq\lambda_{1}/a: \ \ Ke \geq aq_\lambda(Ke)e \quad \mbox{in } {\Omega}.
$$
Using that
$I_\lambda(q_\lambda(s)) = q_\lambda(s)/\lambda + q_\lambda(s)^r =s$, for all $s \geq 0$,
we are actually reduced to show that
\begin{eqnarray*}
\forall \lambda\geq\lambda_{1}/a: \ \ \frac{q_\lambda(Ke)}{\lambda} + q_\lambda(Ke)^r \geq a q_\lambda(Ke)e \quad \mbox{in } {\Omega},
\end{eqnarray*}
and then, since $e(x)\geq e_m>0$ in $\overline{\Omega}$, to prove that
\begin{equation}\label{ssol}
\forall \lambda\geq\lambda_{1}/a: \ \ \frac{1}{\lambda} + q_\lambda(Ke)^{r-1} \geq a e \quad \mbox{in}~\Omega.
\end{equation}
Now since $\lim_{s \rightarrow \infty} q_\lambda(s) = \infty$, $r>1$ and $e_m>0$,
by the monotonicity of $q_{\lambda}$ with respect to $\lambda$ (see \eqref{eq:qlcrescente})
and the choice of $K$ (see \eqref{eq:Kgrande}) we obtain that, for every $\lambda\geq\lambda_{1}/a$:
$$ \frac{1}{\lambda} + q_\lambda(Ke)^{r-1} \geq \frac{1}{\lambda} + q_\lambda(Ke_m)^{r-1} > q_{\lambda_1/a}(Ke_m)^{r-1} > a\|e\| \geq ae \quad \mbox{in } \overline{\Omega},$$
showing that \eqref{ssol} is satisfied and hence, $\overline{W}(\lambda) = Ke$ is a supersolution,
but not a solution, of \eqref{Pa2} for every $\lambda\in[\lambda_{1}/a,+\infty)$.
Due to the choice of $K$ satisfying \eqref{eq:Kgrande},
all the hypotheses of Theorem \ref{th:Gamez} are satisfied and then
we have
$$w_\lambda < \overline{W}(\lambda) = Ke\leq K \max_{\overline\Omega}e:=c_{0},$$
completing the proof of (b).
\medskip
To prove (c) we argue as above, building a family $\overline{W}(\lambda)$ of supersolutions of (\ref{Pa2}). We consider again the constant map
$$\overline{W}:[\lambda_1/a, \infty) \rightarrow C^2_0(\overline{\Omega}) \quad
\text{such that } \overline{W} (\lambda) = Ke.$$
Then, $\overline{W}(\lambda) = Ke$ is a supersolution of (\ref{Pa2}) for every $\lambda \in \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_0$ if
$$
\forall \lambda\in \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_0: \ \ K \geq aq_\lambda(Ke) + b q_\lambda(Ke)^p\quad \mbox{in } {\Omega},
$$
or equivalently
$$
\forall \lambda\in \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_0: \ \ 1 \geq a\frac{q_\lambda(Ke)}{Ke}e + b \frac{q_\lambda(Ke)^p}{Ke} e\quad \mbox{in } {\Omega}.
$$
Since $r>p$, the limits $\lim_{s \rightarrow \infty} q_\lambda(s)/s = \lim_{s \rightarrow \infty} q_\lambda(s)^p/s = 0$ are uniform in $\lambda \in \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_0$ (see Remark \ref{limuniforme}), so we can find $K>0$ large enough (independent of $\lambda$) such that $\overline{W}(\lambda) = Ke$ is a supersolution of (\ref{Pa2}), proving the result.
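One admissible choice of $K$ can be made explicit. By the uniformity of the above limits, there exists $s^*>0$ such that
$$
a\,\frac{q_\lambda(s)}{s} \leq \frac{1}{2\|e\|}
\quad\mbox{and}\quad
b\,\frac{q_\lambda(s)^p}{s} \leq \frac{1}{2\|e\|}
\qquad \forall s\geq s^*,
$$
for every $\lambda$ under consideration. Then any $K\geq s^*/e_m$ works: since $Ke\geq Ke_m\geq s^*$ in $\overline{\Omega}$ and $e\leq\|e\|$,
$$
a\frac{q_\lambda(Ke)}{Ke}\,e + b \frac{q_\lambda(Ke)^p}{Ke}\, e \leq \frac{\|e\|}{2\|e\|}+\frac{\|e\|}{2\|e\|} = 1 \quad \mbox{in } \Omega.
$$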
\medskip
Now, let us prove (d). To this end, we consider two cases: $0<b <\lambda_1$ and $b>\lambda_1$. For the first case we argue as above, building a family $\overline{W}(\lambda)$ of supersolutions of (\ref{Pa2}). Thus, let $0<b < \lambda_1$ be fixed. By the monotonicity properties of principal eigenvalue with respect to the domain, we can get a regular domain $\widehat{\Omega}$ such that
$$
\Omega \subset \subset \widehat{\Omega} \quad \mbox{and} \quad b< \widehat{\lambda}_1<\lambda_1
$$
where $\widehat{\lambda}_1$ stands for the principal eigenvalue of $-\Delta$ in $\widehat{\Omega}$ under homogeneous Dirichlet boundary conditions. Define
$$\overline{W}:[\lambda_1/a, \infty) \rightarrow C^2_0(\overline{\Omega}) \quad
\text{such that } \overline{W} (\lambda) = K\widehat{\varphi}_1.$$
We will show that $\overline{W}(\lambda) = K \widehat{\varphi}_1$ is a supersolution of (\ref{Pa2})
for every $\lambda\in[\lambda_{1}/a,+\infty)$, that is
$$
\forall \lambda\geq\lambda_{1}/a: \ \ K\widehat{\lambda}_1 \widehat{\varphi}_1=-\Delta(K\widehat{\varphi}_1) \geq a q_\lambda(K\widehat{\varphi}_1) + b q_\lambda(K\widehat{\varphi}_1)^p\quad \mbox{in } {\Omega},
$$
or equivalently
\begin{equation}\label{ttt}
\forall \lambda\geq\lambda_{1}/a: \ \ \widehat{\lambda}_1 \geq \frac{a q_\lambda(K\widehat{\varphi}_1) + b q_\lambda(K\widehat{\varphi}_1)^p}{K\widehat{\varphi}_1} \quad \mbox{in } {\Omega}.
\end{equation}
Since $p=r$,
$$
\lim_{s \rightarrow\infty} \frac{a q_\lambda(s) + b q_\lambda(s)^p}{s} = b
$$
(see (\ref{q3}) and (\ref{q6}) in Lemma \ref{lem:q}) and $b<\widehat{\lambda}_1$, we can choose $K>0$ large enough such that (\ref{ttt}) holds. By Theorem \ref{th:Gamez}, we obtain the result.\\
Now, let $b>\lambda_1$. We will proceed by contradiction. If (d) fails, there exists $(\lambda_n,w_n) \in \widehat{\Sigma}_0$ such that $\lambda_n \in \Lambda \subset(0,\infty)$ and
$$
\|w_n\| \rightarrow \infty \quad \mbox{as }n \rightarrow \infty.
$$
Moreover, up to a subsequence if necessary,
$$
\lambda_n \rightarrow \lambda^*>0.
$$
Define
$$
z_n:= \frac{w_n}{\|w_n\|}, \quad n \geq 1.
$$
Since $(\lambda_n,w_n)$ is a positive solution of (\ref{Pa2}), we obtain that $(\lambda_n, z_n)$ verifies
\begin{equation}\label{zn}
\left\{\begin{array}{ll}
-\Delta z_n = \displaystyle\frac{a q_{\lambda_n}(w_n) +b q_{\lambda_n}(w_n)^p}{\|w_n\|} & \mbox{in }\Omega, \\
z_n = 0&\mbox{on }\partial\Omega.
\end{array} \right.
\end{equation}
Multiplying this equation by $z_n$, integrating in $\Omega$ and applying the formula of integration by parts gives
$$
\|z_n\|^2_{H_0^1} = \int_\Omega \frac{(a q_{\lambda_n}(w_n) +b q_{\lambda_n}(w_n)^p)z_n}{\|w_n\|}.
$$
Since $r=p$, by Lemma \ref{lem:q}, $q_\lambda(s)^p \leq s$ and $q_\lambda(s) \leq \lambda s$, for all $s \geq 0$ and $\lambda>0$. Thus,
$$
\|z_n\|^2_{H_0^1} \leq \int_\Omega \frac{(a \lambda_nw_n +b w_n)z_n}{\|w_n\|} = \int_\Omega (a \lambda_n + b )z_n^2 \leq (a \max_{n\geq1} \lambda_n +b) |\Omega|,
$$
showing that $z_n$ is bounded in $H_0^1(\Omega)$. By elliptic regularity, $z_n$ is also bounded in $W^{2,m}(\Omega)$, $m>1$. Thus, it follows from Morrey's compact embedding that, up to a subsequence if necessary,
$$
z_n \rightarrow z \quad \mbox{in }C(\overline{\Omega}).
$$
Let $\psi \in C_0^\infty(\Omega)$. Multiplying (\ref{zn}) by $\psi$, integrating in $\Omega$ and applying the formula of integration by parts gives
$$
-\int_\Omega z_n \Delta \psi = \int_\Omega \frac{(a q_{\lambda_n}(w_n) +b q_{\lambda_n}(w_n)^p)\psi}{\|w_n\|} =\int_\Omega \frac{(a q_{\lambda_n}(z_n\|w_n\|) +b q_{\lambda_n}(z_n\|w_n\|)^p)z_n\psi}{z_n\|w_n\|}.
$$
In view of (\ref{q3}) and (\ref{q6}) of Lemma \ref{lem:q}, letting $n \rightarrow \infty$ in the above equality yields
$$
-\int_\Omega z \Delta \psi = \int_\Omega b z\psi.
$$
Since $z \in H_0^1(\Omega)$, $z\geq 0$ in $\Omega$ and $\|z\| =1$, the function $z$ is a non-trivial, non-negative weak solution of $-\Delta z = bz$ in $\Omega$; hence $b$ is an eigenvalue of $-\Delta$ admitting a non-negative eigenfunction, so that $b = \lambda_1$, which contradicts the initial assumption $b>\lambda_1$.
\medskip
Finally, to prove (e) observe that
\begin{eqnarray*}
\frac{aq_{\lambda}(s) + b q_{\lambda}(s)^p}{s^{p/r}} &=& \frac{aq_{\lambda}(s) + b q_{\lambda}(s)^p}{(q_{\lambda}(s)/\lambda + q_{\lambda}(s)^r)^{p/r}}\\ &=&\frac{a}{\left(q_{\lambda}(s)^{1-\frac{r}{p}}/\lambda+q_{\lambda}( s)^{r- {r}/{p}}\right)^{p/r}} + \frac{b}{\left(q_{\lambda}( s)^{1-r}/\lambda+1\right)^{p/r}}
\end{eqnarray*}
which implies
$$ \lim_{s \rightarrow +\infty} \frac{aq_{\lambda}(s) + b q_{\lambda}(s)^p}{s^{p/r}} = b>0.$$
By a classical result of Gidas and Spruck, see \cite[Theorem 1.1]{gidas},
we obtain the conclusion.
\end{proof}
\begin{remark}
Note that when $b<0$ we have a uniform bound with respect to $\lambda$ of the positive solutions
$u_{\lambda}$ of problem \eqref{Pa1}. Indeed from (a) of Lemma \ref{bound}, since
$$
\forall x\in \Omega: \ I_{\lambda}(u_{\lambda}(x))=\frac{u_{\lambda}(x)}{\lambda} +
u_{\lambda}^{r}(x) = w_{\lambda}(x) \leq \frac c\lambda +c^{r}=I_{\lambda}(c),
$$
we obtain, by the strict monotonicity of $I_{\lambda}$, that $\|u_{\lambda}\|\leq c.$
\end{remark}
Now we have more precise information on the behaviour of the continuum of
positive solutions emanating from the trivial solution at $\lambda_{1}/a$.
\begin{proposition}\label{comportamentoSigma}
Let $\widehat{\Sigma}_{0}$ be the continuum of positive solutions of \eqref{Pa2} given in Proposition \ref{bifur}.
\begin{enumerate}
\item Suppose that one of the following conditions is satisfied:
\begin{enumerate}
\item[(i)] $b\leq 0$;
\item[(ii)] $b>0$, $r=p$ and $b<\lambda_1$;
\item[(iii)] $b>0$ and $r>p$.
\end{enumerate}
Then $(\lambda_1/a,\infty) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_{0}$.
\medskip
\item Suppose that one of the following conditions is satisfied:
\begin{enumerate}
\item[(i)] $r=p$ and $b>\lambda_1$;
\item[(ii)] $b>0$, $1<p/r <(N+2)/(N-2)$ and $a>\phi(s_0)$.
\end{enumerate}
Then $(0,\lambda_1/a) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_{0}$ and \eqref{Pa2} does not possess positive solution for $\lambda$ large. \medskip
\item Suppose that $r=p$ and $b= \lambda_1$, then $\mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_{0} = \{\lambda_1/a\}$. \end{enumerate}
\end{proposition}
\begin{proof}
First, observe that in the cases (1) and (2), by Lemma \ref{bound}, the positive solutions of (\ref{Pa2}) are bounded in ${C}(\overline{\Omega})$ and, by elliptic regularity, also
in ${C}_0^1(\overline{\Omega})$. Thus, in view of Remark \ref{rem:necesariaw}:
\begin{enumerate}
\item If (i), (ii) or (iii) occurs, then \eqref{Pa2} does not possess positive solution for $\lambda$ positive and small. Hence, since its elements are bounded in ${C}_0^1(\overline{\Omega})$, $\widehat{\Sigma}_0$ has to be unbounded with respect to large values of $\lambda$, and this gives the conclusion.
\item If (i) or (ii) occurs, then there does not exist positive solution for $\lambda$ large enough. Therefore, since there is no bifurcation point from infinity of positive solutions of \eqref{Pa2} (by Lemma \ref{bound}), the inclusion $(0,\lambda_1/a) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\widehat{\Sigma}_{0}$ must be satisfied.
\end{enumerate}
Finally, the case $r=p$ and $b=\lambda_1$ is a direct consequence of Remark \ref{rem:necesariaw} (c).
\end{proof}
Figure \ref{bif} shows some admissible situations within the setting of Proposition \ref{comportamentoSigma}.
An immediate consequence of the above proposition is the following result of existence of positive solution of (\ref{Pa2}).
\begin{figure}
\centering
\includegraphics[scale=0.6]{bif4.pdf}
\caption{\label{bif}Possible bifurcation diagrams of solutions of (\ref{Pa2}) in the cases (I) $b\leq0$; (II) $b>0$ and $r >p$; (III) $b>0$, $1<p/r< (N+2)/(N-2)$ and $a>\phi(s_0)$; (IV) $0<b<\lambda_1$ and $r=p$; (V) $b=\lambda_1$ and $r=p$; (VI) $b>\lambda_1$ and $r=p$.}
\end{figure}
\begin{corollary}\label{Existencia}
We have the following.
\begin{enumerate}
\item[(a)] Suppose that either $b \leq 0$, or $0<b<\lambda_1$ and $r=p$. Then (\ref{Pa2}) admits (at least) a positive solution if, and only if, $\lambda> \lambda_1/a$. Moreover, assuming $\lambda > \lambda_1/a$, if \smallskip
\begin{itemize}
\item[(i)] $b=0$, or \smallskip
\item[(ii)]$b<0$ and $p \geq r$ \smallskip
\end{itemize}
the solution is unique. \smallskip
\item[(b)] Suppose $b>0$ and $r>p$. Then (\ref{Pa2}) admits (at least) a positive solution if $\lambda> \lambda_1/a$. Moreover, there exists $\lambda^* \in (0,\lambda_1/a)$ such that (\ref{Pa2}) admits (at least) two positive solutions for each $\lambda \in (\lambda^*,\lambda_1/a)$.
\item[(c)] Suppose that $r=p$ and $b>\lambda_1$. Then (\ref{Pa2}) admits (at least) a positive solution if $0<\lambda<\lambda_{1}/a$.
\item[(d)] Suppose that $b>0$, $r<p$, $p/r <(N+2)/(N-2)$ and $a>\phi(s_0)$. Then (\ref{Pa2}) admits (at least) a positive solution if $0<\lambda<\lambda_{1}/a$. Moreover, there exists $\lambda^* >\lambda_1/a$ such that (\ref{Pa2}) admits (at least) two positive solutions for each $\lambda \in (\lambda_1/a, \lambda^*)$.
\item[(e)] Suppose $r=p$ and $b=\lambda_1$. Then (\ref{Pa2}) admits a positive solution if, and only if, $\lambda = \lambda_1/a$. Moreover, all positive solutions of (\ref{Pa2}) are given by
$$
w=c\varphi_1 \quad c\in {\rm I}\hskip -0.85mm{\rm R}, c>0.
$$
\end{enumerate}
\end{corollary}
\begin{proof}
(a)
The existence is a consequence of Proposition \ref{bifur} and Remark \ref{rem:necesariaw} (a).
The uniqueness follows by the same arguments as in
\cite[Section 2]{BO}, since, in case (i), $s \mapsto aq_{\lambda}(s)/s$ is decreasing
by Lemma \ref{lem:q} item \eqref{q1}, while, in case (ii),
$s \mapsto (aq_{\lambda}(s)+b q_{\lambda}(s)^p)/s$ is
decreasing when $p \geq r$, by Lemma \ref{lem:q} item \eqref{q5} and $b<0$. \medskip
(b) It follows again from Proposition \ref{bifur} combined with Remark \ref{rem:necesariaw} (b), which ensures that the bifurcation at $\lambda=\lambda_1/a$ is subcritical: indeed, for $\lambda>\lambda_1/a$ there is no positive solution of (\ref{Pa2}) with small norm.
\medskip
The proofs of (c) and (d) are similar.
\medskip
In the case (e), Remark \ref{rem:necesariaw} (c) ensures that $\lambda= \lambda_1/a$ is a necessary condition for the existence of positive solution of (\ref{Pa2}). Moreover, for all $c>0$,
\begin{eqnarray*}
-\Delta(c\varphi_1) = \lambda_1 c\varphi_1 &=& \lambda_1 I_{\lambda_1/a}(q_{\lambda_1/a} (c\varphi_1))\\
&=& \lambda_1 \left( \frac{q_{\lambda_1/a} (c\varphi_1)}{\lambda_1/a} + q_{\lambda_1/a} (c\varphi_1)^p\right)\\
&=& aq_{\lambda_1/a} (c\varphi_1) + \lambda_1 q_{\lambda_1/a} (c\varphi_1)^p,
\end{eqnarray*}
showing that $w=c \varphi_1$ is a positive solution of (\ref{Pa2}) for every $c>0$. This ends the proof.
\end{proof}
\begin{remark}\label{remark1}
Another consequence of Propositions \ref{bifur} and \ref{comportamentoSigma}
is the existence of an unbounded continuum $\Sigma_0$ of positive solutions of (\ref{Pa1}), namely,
$$\Sigma_0:= \{(\lambda,u) \in {\rm I}\hskip -0.85mm{\rm R} \times \mathcal{C}_0^1(\overline{\Omega});~ (\lambda, u/\lambda + u^r) \in \widehat{\Sigma}_0\}. $$
Moreover, Proposition \ref{comportamentoSigma} remains valid if we replace $\widehat{\Sigma}_0$ by $\Sigma_0$.
\setcounter{equation}{0}\Section{Proofs of the main Theorems }\label{sec:final}
In this section we will prove results of existence of positive solutions of (\ref{P1})
under suitable assumptions on $g$.
For the reader's convenience, we restate here the theorems we are going to prove.
The analysis will be done in three cases and in the first two we will use the following classical Bolzano Theorem (see for instance \cite{ArcoyaLP}).
\begin{theorem}\label{th:Bolzano}
Let $X$ be a Banach space, $I\subset {\rm I}\hskip -0.85mm{\rm R}$ be an interval and $\Sigma_0 \subset I \times X$ be a continuum.
Assume that $h:\Sigma_{0} \to {\rm I}\hskip -0.85mm{\rm R}$ is a continuous function such that for some $(\mu_1,u_1), (\mu_2,u_2) \in \Sigma_0$ it holds
$h(\mu_1,u_1) h(\mu_2,u_2)<0$. Then there exists $(\lambda_{*},u_{*}) \in \Sigma_0$ such that $h(\lambda_{*},u_{*})=0$.
\end{theorem}
Roughly speaking, our results depend on the position of $\mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$ with respect to
$\lambda_{1}/a$.
\subsection{The case $(\lambda_1/a,\infty) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$}
Our first result of existence of positive solution of \eqref{P1} deals with the case in which $(\lambda_1/a,\infty) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$. To this end, we will assume that $g$ satisfies the next hypothesis: \medskip
\begin{enumerate}[label=(g\arabic*),ref=g\arabic*,start=1]
\item \label{g_{1}} $g:[0,\infty) \rightarrow [0,\infty)$ is a continuous function and there exists a constant $g_0>0$ such that
$$g(s) >g_0 \quad \forall s \in (0,+\infty).$$
\end{enumerate}
\medskip
The first result is the following.
\begin{taggedtheorem}{A}
Assume that $g$ satisfies \eqref{g_{1}}. If one of the assumptions of Proposition \ref{comportamentoSigma} (1) occurs, then \eqref{P1} admits at least one solution for each $a>g(0)\lambda_1$.
\end{taggedtheorem}
\begin{proof}
If one of the assumptions of Proposition \ref{comportamentoSigma} (1) occurs, by Corollary \ref{Existencia} and Remark \ref{remark1}, $(\lambda_1/a,\infty) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$, where $\Sigma_0$ is the unbounded continuum of positive solutions of (\ref{Pa1}).
Consider the continuous map $h: \Sigma_0 \subset (0,+\infty) \times {C}_0^1(\overline{\Omega}) \rightarrow {\rm I}\hskip -0.85mm{\rm R}$ defined by
$$h(\lambda,u):= \frac{1}{\lambda} - g(|\nabla u|_2^2).$$
Then the zeros of $h$ correspond to positive solutions of (\ref{P1}). Let us apply the Bolzano Theorem to $h$.
At $(\lambda_1/a,0) \in \overline{\Sigma}_0$ we have
$$
h(\lambda_1/a,0) = \frac{a}{\lambda_1} - g(0)>0,
$$
thanks to $a>\lambda_1g(0)$; hence, by continuity, there exists $(\mu_{1},u_{1})\in\Sigma_0$, close to the bifurcation point, such that $h(\mu_{1},u_{1})>0$.
On the other hand,
$$
\limsup_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow +\infty}} h(\lambda,u) =\limsup_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow +\infty}} \left(\frac{1}{\lambda} - g(|\nabla u|_2^2)\right) = - \liminf_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow+ \infty}} g(|\nabla u|_2^2)
$$
and since $g(s) \geq g_0>0$, it follows that
$$
\limsup_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow+ \infty}} h(\lambda,u) \leq - g_0 <0.
$$
Therefore, there exists some $( \mu_{2},u_{2}) \in \Sigma_0$ such that $h(\mu_{2},u_{2}) <0$. By the Bolzano Theorem
\ref{th:Bolzano}
we find $(\lambda_{*},u_{*}) \in \Sigma_0$ satisfying
$$
h(\lambda_{*},u_{*})= \frac{1}{\lambda_{*}} - g(|\nabla u_{*}|_2^2) = 0.
$$
In particular, $(\lambda_{*},u_{*})$ is a positive solution of (\ref{Pa1}) with $1/\lambda_{*} = g(|\nabla u_{*}|_2^2)$, that is,
\begin{equation*}
\left\{ \begin{array}{ll}
-\Delta (g(|\nabla {u_{*}}|_2^2){u_{*}} +{u_{*}}^r) = a {u_{*}} + b {u_{*}}^p&\mbox{in } \Omega, \\
{u_{*}} = 0& \mbox{on }\partial\Omega.
\end{array} \right.
\end{equation*}
Thus, $u_{*}$ is a positive solution of \eqref{P1}.
\end{proof}
\subsection{The case $(0,\lambda_1/a) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$.}
Now, we will prove our second result of existence of solutions to (\ref{P1}) that deals with the case in which $(0,\lambda_1/a) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$. For this, we will assume that: \medskip
\begin{enumerate}[label=(g\arabic*),ref=g\arabic*,start=2]
\item \label{g_{2}} $g:[0,\infty) \rightarrow [0,\infty)$ is a bounded continuous function such that $g(0)>0$.
\end{enumerate}
\medskip
Then we have
\begin{taggedtheorem}{B}
Assume that $g$ satisfies \eqref{g_{2}}.
\begin{enumerate}
\item[(i)] If $r=p$ and $b>\lambda_1$ then problem (\ref{P1}) admits (at least) one solution for each $a< g(0)\lambda_1$.
\item[(ii)] If $b>0$, $r<p$, $p/r <(N+2)/(N-2)$ and $g(0)\lambda_1>\phi(s_0)$ then problem (\ref{P1}) admits (at least) one solution for each $a \in (\phi(s_0),g(0)\lambda_1)$.
\noindent Recall that $\phi$ and $s_{0}$ are defined in
\eqref{eq:phi} and \eqref{eq:s0}.
\end{enumerate}
\end{taggedtheorem}
\begin{proof}
In both cases, by Corollary \ref{Existencia} and Remark \ref{remark1}, $(0,\lambda_1/a) \subset \mbox{Proj}_{{\rm I}\hskip -0.85mm{\rm R}}\Sigma_0$. Thus:\\
If (i) occurs, then at $(\lambda_1/a,0) \in \overline{\Sigma}_0$ we have, with $h$ defined as in the proof of Theorem A,
$$
h(\lambda_1/a,0) = \frac{a}{\lambda_1} - g(0)<0,
$$
thanks to $a<g(0)\lambda_1$; hence, by continuity, there exists $(\mu_{1},u_{1})\in\Sigma_0$ such that $h(\mu_1,u_1)<0$. On the other hand, since $g$ is bounded,
$$
\liminf_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow 0^+}} h(\lambda,u) =\liminf_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow 0^+}} \left(\frac{1}{\lambda} - g(|\nabla u|_2^2)\right) = +\infty.
$$
Therefore, there exists some $(\mu_2,u_2) \in \Sigma_0$ such that $h(\mu_2,u_2)>0$. By Bolzano Theorem \ref{th:Bolzano} we find $(\lambda_*,u_*) \in \Sigma_0$ satisfying $h(\lambda_*,u_*)=0$, thus $u_*$ is a positive solution of (\ref{P1}).\\
Similarly, if (ii) occurs we have
$$
h(\lambda_1/a,0) = \frac{a}{\lambda_1} - g(0)<0 \quad \mbox{and} \quad \liminf_{\substack{(\lambda,u) \in \Sigma_0 \\ \lambda \rightarrow 0^+}} h(\lambda,u)=+\infty.
$$
By Bolzano Theorem \ref{th:Bolzano} we find $(\lambda_*,u_*) \in \Sigma_0$ satisfying $h(\lambda_*,u_*)=0$, thus $u_*$ is a positive solution of (\ref{P1}).
\end{proof}
\subsection{The case $\Sigma_0 = \{\lambda_1/a\}$}
To finish, we will prove an existence result for positive solutions of (\ref{P1}) when $\Sigma_0 = \{\lambda_1/a\}$. In this case we will not use the Bolzano Theorem \ref{th:Bolzano}, and the only assumption on $g$ is that it is a continuous positive function.
\begin{taggedtheorem}{C}
Assume that $g:[0,\infty) \rightarrow [0,\infty)$ is continuous and positive. If $b=\lambda_1$ and $r=p<2$, then problem \eqref{P1} admits a positive solution if, and only if, $a\in \lambda_1 R[g] :=\{\lambda_1g(s); s \geq 0\} \subset (0,\infty)$.
\end{taggedtheorem}
\begin{proof}
Since $b=\lambda_1$ and $r=p$, it follows from Corollary \ref{Existencia} and Remark \ref{remark1} that $\Sigma_0 = \{\lambda_1/a\}$. Moreover, all positive solutions of (\ref{P1}) are given by
$$
u_c:= q_{\lambda_1/a}(c \varphi_1) \Longleftrightarrow \frac{a}{\lambda_1}u_c + u_c^r = c \varphi_1 \quad c \in {\rm I}\hskip -0.85mm{\rm R}, ~c>0.
$$
Thus,
\begin{eqnarray*}
\nabla(au_c + \lambda_1 u_c^r) = ( a + \lambda_1 ru_c^{r-1}) \nabla u_c &=& c \lambda_1 \nabla \varphi_1 \\
\nabla u_c &=& \frac{c\lambda_1}{ a + \lambda_1 ru_c^{r-1}} \nabla \varphi_1\\
|\nabla u_c|^2 &=& \left(\frac{c\lambda_1}{a + \lambda_1 ru_c^{r-1}}\right)^2 |\nabla \varphi_1|^2
\end{eqnarray*}
and
\begin{eqnarray*}
|\nabla u_c|_2^2 &=& \int_\Omega\left(\frac{c\lambda_1}{a + \lambda_1 rq_{\lambda_1/a}(c\varphi_1)^{r-1}}\right)^2 |\nabla \varphi_1|^2\\
&=& \int_\Omega\left(\frac{\lambda_1}{a/c + \lambda_1 rc^{r-2}(q_{\lambda_1/a}(c\varphi_1)/c)^{r-1}}\right)^2 |\nabla \varphi_1|^2.
\end{eqnarray*}
Consequently, since $r<2$, the map $c \in [0,\infty)\mapsto |\nabla u_c|_2 \in [0,\infty)$ is continuous and increasing. Let us prove that it is a bijection. Indeed, by monotonicity, it follows that $c \mapsto |\nabla u_c|_2$ is an injection. To prove that it is a surjection, it is sufficient to show that
$$
\lim_{c \rightarrow +\infty} |\nabla u_c|_2 = +\infty.
$$
Indeed, it follows from Lemma \ref{lem:q} that
$$q_{\lambda_1/a}(c\varphi_1)^{r-1} \leq \lambda_1 (c \varphi_1)^{r-1}/a \leq \lambda_1 c^{r-1}/a.$$
Thus,
\begin{eqnarray*}
|\nabla u_c|_2^2 = \int_\Omega\left(\frac{c\lambda_1}{a + \lambda_1 rq_{\lambda_1/a}(c\varphi_1)^{r-1}}\right)^2 |\nabla \varphi_1|^2 &\geq& \int_\Omega\left(\frac{c\lambda_1}{a + r\lambda_1^2 c^{r-1}/a}\right)^2 |\nabla \varphi_1|^2 \\
&=&\left(\frac{\lambda_1}{ac^{-1} + r\lambda_1^2 c^{r-2}/a}\right)^2 |\nabla \varphi_1|_2^2.
\end{eqnarray*}
Since $r<2$, one can infer
$$
\lim_{c \rightarrow+\infty} |\nabla u_c|_2 = +\infty.
$$
Consequently, for each $a \in \{\lambda_1g(s); s \geq 0\}$, there exists $s'\geq0$ such that
$$
a=\lambda_1 g(s').
$$
By the previous discussion, we can choose $c>0$ satisfying
$$
|\nabla u_c|_2 = s'.
$$
Therefore
$$
a= \lambda_1 g(|\nabla u_c|_2) \Longleftrightarrow \frac{1}{\lambda_1/a} = g(|\nabla u_c|_2)
$$
and $u_c$ is a solution of (\ref{P1}).
\end{proof}
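The surjectivity step in the proof above can be illustrated on a pointwise toy model: freezing the spatial variable and replacing $c\varphi_1$ by $c$, the implicit relation $(a/\lambda_1)u + u^r = c$ defines a scalar analogue of $u_c$. The sketch below (with purely illustrative values $a=\lambda_1=1$ and $r=3/2<2$, not taken from the text) solves this relation by bisection and checks that $c \mapsto u_c$ is increasing and unbounded; it is only a heuristic, since the actual map involves $q_{\lambda_1/a}(c\varphi_1)$ and the gradient norm over $\Omega$.

```python
# Pointwise toy model for the map c -> u_c of Theorem C:
# solve (a/lam1)*u + u**r = c for u > 0 by bisection (parameters illustrative).
def u_of_c(c, a=1.0, lam1=1.0, r=1.5, tol=1e-12):
    f = lambda u: (a / lam1) * u + u ** r - c
    lo, hi = 0.0, 1.0
    while f(hi) < 0:                     # bracket the root
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):  # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

cs = [1.0, 10.0, 100.0, 1000.0]
us = [u_of_c(c) for c in cs]
assert all(u1 < u2 for u1, u2 in zip(us, us[1:]))   # c -> u_c is increasing
assert us[-1] > 50.0                                # and unbounded (~ c**(1/r))
```

In this scalar model $u_c \sim c^{1/r}$ for large $c$, mirroring the unbounded growth of $|\nabla u_c|_2$ established above for $r<2$.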
\def\figcap{\section*{Figure Captions\markboth
{FIGURECAPTIONS}{FIGURECAPTIONS}}\list
{Figure \arabic{enumi}:\hfill}{\settowidth\labelwidth{Figure
999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endfigcap\endlist \relax
\def\tablecap{\section*{Table Captions\markboth
{TABLECAPTIONS}{TABLECAPTIONS}}\list
{Table \arabic{enumi}:\hfill}{\settowidth\labelwidth{Table
999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endtablecap\endlist \relax
\def\reflist{\section*{References\markboth
{REFLIST}{REFLIST}}\list
{[\arabic{enumi}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endreflist\endlist \relax
\def\list{}{\rightmargin\leftmargin}\item[]{\list{}{\rightmargin\leftmargin}\item[]}
\let\endquote=\endlist
\makeatletter
\newcounter{pubctr}
\def\@ifnextchar[{\@publist}{\@@publist}{\@ifnextchar[{\@publist}{\@@publist}}
\def\@publist[#1]{\list
{[\arabic{pubctr}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\@nmbrlisttrue\def\@listctr{pubctr}
\setcounter{pubctr}{#1}\addtocounter{pubctr}{-1}}}
\def\@@publist{\list
{[\arabic{pubctr}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\@nmbrlisttrue\def\@listctr{pubctr}}}
\let\endpublist\endlist \relax
\makeatother
\newskip\humongous \humongous=0pt plus 1000pt minus 1000pt
\def\mathsurround=0pt{\mathsurround=0pt}
\def\eqalign#1{\,\vcenter{\openup1\jot \mathsurround=0pt
\ialign{\strut \hfil$\displaystyle{##}$&$
\displaystyle{{}##}$\hfil\crcr#1\crcr}}\,}
\newif\ifdtup
\def\panorama{\global\dtuptrue \openup1\jot \mathsurround=0pt
\everycr{\noalign{\ifdtup \global\dtupfalse
\vskip-\lineskiplimit \vskip\normallineskiplimit
\else \penalty\interdisplaylinepenalty \fi}}}
\def\eqalignno#1{\panorama \tabskip=\humongous
\halign to\displaywidth{\hfil$\displaystyle{##}$
\tabskip=0pt&$\displaystyle{{}##}$\hfil
\tabskip=\humongous&\llap{$##$}\tabskip=0pt
\crcr#1\crcr}}
\relax
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\bar{\partial}{\bar{\partial}}
\def\bar{J}{\bar{J}}
\def\partial{\partial}
\def f_{,i} { f_{,i} }
\def F_{,i} { F_{,i} }
\def f_{,u} { f_{,u} }
\def f_{,v} { f_{,v} }
\def F_{,u} { F_{,u} }
\def F_{,v} { F_{,v} }
\def A_{,u} { A_{,u} }
\def A_{,v} { A_{,v} }
\def g_{,u} { g_{,u} }
\def g_{,v} { g_{,v} }
\def\kappa{\kappa}
\def\rho{\rho}
\def\alpha{\alpha}
\def {\bar A} {\Alpha}
\def\beta{\beta}
\def\Beta{\Beta}
\def\gamma{\gamma}
\def\Gamma{\Gamma}
\def\delta{\delta}
\def\Delta{\Delta}
\def\epsilon{\epsilon}
\def\Epsilon{\Epsilon}
\def\p{\pi}
\def\Pi{\Pi}
\def\chi{\chi}
\def\Chi{\Chi}
\def\theta{\theta}
\def\Theta{\Theta}
\def\mu{\mu}
\def\nu{\nu}
\def\omega{\omega}
\def\Omega{\Omega}
\def\lambda{\lambda}
\def\Lambda{\Lambda}
\def\s{\sigma}
\def\Sigma{\Sigma}
\def\varphi{\varphi}
\def{\cal N}{{\cal N}}
\def{\cal M}{{\cal M}}
\def\tilde V{\tilde V}
\def{\cal V}{{\cal V}}
\def\tilde{\cal V}{\tilde{\cal V}}
\def{\cal L}{{\cal L}}
\def{\cal R}{{\cal R}}
\def{\cal A}{{\cal A}}
\def{\cal{G} }{{\cal{G} }}
\def{\cal{D} } {{\cal{D} } }
\defSchwarzschild {Schwarzschild}
\defReissner-Nordstr\"om {Reissner-Nordstr\"om}
\defChristoffel {Christoffel}
\defMinkowski {Minkowski}
\def\bigskip{\bigskip}
\def\noindent{\noindent}
\def\hfill\break{\hfill\break}
\def\qquad{\qquad}
\def\bigl{\bigl}
\def\bigr{\bigr}
\def\overline\del{\overline\partial}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def Nucl. Phys. { Nucl. Phys. }
\def Phys. Lett. { Phys. Lett. }
\def Mod. Phys. Lett. { Mod. Phys. Lett. }
\def Phys. Rev. Lett. { Phys. Rev. Lett. }
\def Phys. Rev. { Phys. Rev. }
\def Ann. Phys. { Ann. Phys. }
\def Commun. Math. Phys. { Commun. Math. Phys. }
\def Int. J. Mod. Phys. { Int. J. Mod. Phys. }
\def\partial_+{\partial_+}
\def\partial_-{\partial_-}
\def\partial_{\pm}{\partial_{\pm}}
\def\partial_{\mp}{\partial_{\mp}}
\def\partial_{\tau}{\partial_{\tau}}
\def \bar \del {\bar \partial}
\def {\bar h} { {\bar h} }
\def \bphi { {\bar \phi} }
\def {\bar z} { {\bar z} }
\def {\bar A} { {\bar A} }
\def {\tilde {A }} { {\tilde {A }}}
\def {\tilde {\A }} { {\tilde { {\bar A} }}}
\def {\bar J} {{\bar J} }
\def {\tilde J} { {\tilde {J }}}
\def {1\over 2} {{1\over 2}}
\def {1\over 3} {{1\over 3}}
\def \over {\over}
\def\int_{\Sigma} d^2 z{\int_{\Sigma} d^2 z}
\def{\rm diag}{{\rm diag}}
\def{\rm const.}{{\rm const.}}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}{^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}}
\def$SL(2,\IR)\otimes SO(1,1)^{d-2}/SO(1,1)${$SL(2,\relax{\rm I\kern-.18em R})\otimes SO(1,1)^{d-2}/SO(1,1)$}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}${$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}$}
\def$SO(d-1,2)/ SO(d-1,1)${$SO(d-1,2)/ SO(d-1,1)$}
\defPoisson--Lie T-duality{Poisson--Lie T-duality}
\def{\cal M}{{\cal M}}
\def\tilde V{\tilde V}
\def{\cal V}{{\cal V}}
\def\tilde{\cal V}{\tilde{\cal V}}
\def{\cal L}{{\cal L}}
\def{\cal R}{{\cal R}}
\def{\cal A}{{\cal A}}
\def{\tilde X}{{\tilde X}}
\def{\tilde J}{{\tilde J}}
\def{\tilde P}{{\tilde P}}
\def{\tilde L}{{\tilde L}}
\begin{document}
\renewcommand{\thesection.\arabic{equation}}}{\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\eeq}[1]{\label{#1}\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\eer}[1]{\label{#1}\end{eqnarray}}
\newcommand{\eqn}[1]{(\ref{#1})}
\begin{titlepage}
\begin{center}
\hfill CERN-TH/99-63\\
\hfill hep--th/9903109\\
\vskip .8in
{\large \bf On Asymptotic Freedom and Confinement \\from Type-IIB Supergravity }
\vskip 0.6in
{\bf A. Kehagias}\phantom{x} and\phantom{x} {\bf K. Sfetsos}
\vskip 0.1in
{\em Theory Division, CERN\\
CH-1211 Geneva 23, Switzerland\\
{\tt kehagias,[email protected]}}\\
\vskip .2in
\end{center}
\vskip .6in
\centerline{\bf Abstract }
\noindent
We present a new type-IIB supergravity vacuum that
describes the strong coupling regime of a non-supersymmetric gauge theory.
The latter has a running coupling such that the theory becomes
asymptotically free in the ultraviolet. It also has a running
theta angle due to a non-vanishing axion field in the
supergravity solution.
We also present a worm-hole solution, which
has finite action per unit four-dimensional volume and
two asymptotic regions, a flat space and an $AdS^5\times S^5$.
The corresponding ${\cal N}=2$ gauge theory, instead of being
finite, has a running coupling. We compute the quark--antiquark potential
in this case and find that it exhibits, under certain assumptions,
an area-law behaviour for large separations.
\vskip 0,2cm
\noindent
\vskip 4cm
\noindent
CERN-TH/99-63\\
March 1999\\
\end{titlepage}
\vfill
\eject
\def1.2{1.2}
\baselineskip 16 pt
\noindent
\def{\tilde T}{{\tilde T}}
\def{\tilde g}{{\tilde g}}
\def{\tilde L}{{\tilde L}}
\section{Introduction}
One of the well-known vacua of type-IIB supergravity is the $AdS_5\times S^5$
one, first described in \cite{SS}. The non-vanishing fields here are the
metric and a Freund--Rubin type anti-self-dual five-form. This background has
received a lot of attention recently because of its conjectured connection to
${\cal N}=4$ $SU(N)$ supersymmetric Yang--Mills (SYM) theory at large $N$
\cite{malda,kleb}.
According to this conjecture \cite{malda},
the large-$N$ limit of certain superconformal field theories (SCFT)
can be described in terms of Anti de-Sitter (AdS)
supergravity; correlation functions of the SCFT theory that lives
in the boundary
of AdS can be expressed in terms of the bulk theory \cite{kleb}.
In particular, the four-dimensional ${\cal N}=4$
$SU(N)$ SYM theory is described by the type-IIB string theory on
$AdS_5\times S^5$, where the radius of both the $AdS_5$ and
$S^5$ are proportional to $N$.
However, one is interested for obvious reasons
in non-supersymmetric YM theories.
In particular, the existence of supergravity duals of such theories will help
in understanding their strong-coupling behaviour.
There are a number of proposals in this direction.
One of them is within the type-II0 theories \cite{KT,Mh,E}, which,
although non-supersymmetric, are consistent string
theories \cite{DH}.
These theories have a tachyon in their spectrum due to lack
of supersymmetry. The tachyon is coupled to the other fields of the theory,
namely, the graviton, the dilaton and the antisymmetric-form fields.
In particular, the coupling of the tachyon to the dilaton is such that it
drives the latter to smaller values in the ultraviolet (UV), a hint for
asymptotic freedom.
However, a different approach has been proposed in
\cite{KS}. According to this, {\it the supergravity duals of
non-supersymmetric gauge theories are non-supersymmetric background solutions
in type-IIB theory}. Based on this, a solution with a
non-constant dilaton that corresponds to a gauge theory
with a UV-stable fixed point has been found \cite{KS}.
The coupling approaches the fixed point with a
power-law behaviour. The solution
is valid for strong 't Hooft coupling $g^2_{\rm H}$, which is consistent
with the fact that there are no known perturbative field theories with
UV-stable fixed point. This scenario has also been followed
in \cite{Gubs-GPPZ}.
The same power-law behaviour as in \cite{KS} has also been found in the
case of an ${\cal N}=2$ boundary gauge theory from a supergravity vacuum
with both D3- and D7-branes \cite{demelo}.
\section{A non-supersymmetric solution}
Here, we will try
to describe a celebrated feature of gauge theories, namely asymptotic
freedom, within the type-IIB theory. As we will see,
this is possible if we make a ``minimal'' extension
to the solution we found in \cite{KS} by turning on the other scalar field
of type-IIB, the axion.
The anti-self-dual
five-form $F_5$ is given by the Freund--Rubin-type ansatz, which is
explicitly written as
\begin{eqnarray}
F_{\mu\nu\rho\kappa\lambda}&=& -\frac{\sqrt{\Lambda}}{2}
\epsilon_{\mu\nu\rho\kappa\lambda}\ , \qquad
\mu,\nu,\ldots=0,1,\dots,4\ ,
\nonumber\\
F_{ijkpq}&=& \frac{\sqrt{\Lambda}}{2} \epsilon_{ijkpq}\ , \qquad
i,j,\dots=5,\dots,9\ ,
\label{FF}
\end{eqnarray}
and it is clearly anti-self-dual.
We will also assume, for the metric, four-dimensional
Poincar\'e invariance $ISO(1,3)$,
since we would like a gauge theory defined on Minkowski space-time.
In addition,
we will preserve the $SO(6)$ symmetry of the supersymmetric
$AdS_5\times S^5$ vacuum.
As a result, the $ISO(1,3)\times SO(6)$ invariant ten-dimensional metric,
in the Einstein frame, is of the form
\begin{equation}
ds^2=g_{\mu\nu}dx^\mu dx^\nu+ g_{ij}dx^idx^j\ ,
\label{anz0}
\end{equation}
where
\begin{equation}
g_{\mu\nu}dx^\mu dx^\nu =dr^2 + K(r)^2 dx_\alpha dx^\alpha\ , \qquad \alpha=0,1,2,3\ ,
\label{anz1}
\end{equation}
and $g_{ij}$ is the metric on $S^5$. The dilaton and the axion, by
$ISO(1,3)\times SO(6)$ invariance, can only be functions of $r$.
In the Euclidean regime, the supergravity equations for the
metric, the dilaton and the axion in the Einstein frame
follow from the action \cite{GGP}
\begin{equation}
S=\frac{1}{\kappa^2}\int d^{10}x\sqrt{g}\left(R-\frac{1}{2}(\partial\Phi)^2
+\frac{1}{2}e^{2\Phi}(\partial\chi)^2\right)\ .
\label{1action}
\end{equation}
Note the minus sign in front of the axion kinetic term,
which is the result of the Hodge-duality rotation of the type-IIB
nine-form \cite{GGP}. Then the field equations, taking into account the
anti-self-dual form as well, are
\begin{eqnarray}
R_{MN}&=&\frac{1}{2}\partial_M\Phi\partial_N \Phi-\frac{1}{2}e^{2\Phi}
\partial_M\chi
\partial_N\chi+\frac{1}{6}F_{MKLPQ}{F_N}^{KLPQ}\ ,
\label{RRR}\\
\nabla^2\Phi&=&-e^{2\Phi}\left(\partial\chi\right)^2\ ,
\label{akjs}\\
\nabla^2\chi& =& 0\ .
\label{Fi}
\end{eqnarray}
The equation of motion of the
five-form is the anti-self-duality condition, which is satisfied for the
ansatz \eqn{FF}. For the particular case we are considering here, the above
equations turn out to be
\begin{eqnarray}
&&R_{\mu\nu} = - \Lambda g_{\mu\nu} + {1\over 2} \partial_\mu \Phi\partial_\nu \Phi
- {1\over 2} e^{2\Phi} \partial_\mu \chi\partial_\nu \chi\ ,
\label{RRR1} \\
&& {1\over \sqrt{-g} } \partial_\mu \left(\sqrt{-g} g^{\mu\nu} \partial_\nu \Phi\right)
=-e^{2\Phi}\partial_\nu\chi\partial_\mu\chi g^{\mu\nu} \ , \label{FFFF}\\
&& {1\over \sqrt{-g} } \partial_\mu \left(\sqrt{-g} g^{\mu\nu} e^{2\Phi}
\partial_\nu \chi\right)
=0\ ,
\label{eqs1}
\end{eqnarray}
and
\begin{equation}
R_{ij} \ =\ \Lambda g_{ij} \ . \label{eqs12}
\end{equation}
Equation \eqn{eqs12} is automatically solved for a five-sphere of radius
$2/\sqrt{\Lambda}$, and a first integral of the axion equation \eqn{eqs1} is
\begin{eqnarray}
\chi' = \chi_0 K^{-4}e^{-2\Phi}\ ,
\label{eqs2}
\end{eqnarray}
where the prime denotes differentiation with respect to $r$,
and $\chi_0$ is a dimensionless integration constant.
With no loss of generality, it can be taken to be positive by appropriately
changing the sign of $r$.
Using this expression for $\chi$ in \eqn{FFFF} we obtain the differential
equation
\begin{equation}
K^4(K^4\Phi')'=-e^{-2\Phi}\chi_0^2\ , \label{FFFFF}
\end{equation}
a first integral of which is given by
\begin{equation}
K^8\Phi'^2=\chi_0^2e^{-2\Phi} + \mu\ . \label{fFFF}
\end{equation}
Equations \eqn{eqs2} and \eqn{fFFF} are sufficient to proceed and solve
for the function $K(r)$ that appears in the metric \eqn{anz1}.
The non-zero components
of the Ricci tensor for the metric \eqn{anz1} are
\begin{eqnarray}
R_{rr} & = & - 4 \frac{K''}{K} \ ,
\nonumber\\
R_{\alpha\beta} & = & - \eta_{\alpha\beta} \left( KK''
+ 3 K'^2 \right)\ ,
\label{ricc1}
\end{eqnarray}
and \eqn{RRR1} is equivalent to the following differential equation
\begin{eqnarray}
&&K'^2= {\mu\over 24}\ K^{-6} + {\Lambda\over 4}\ K^2\ .
\label{eqs3}
\end{eqnarray}
The solution of the above equation
for $\mu=\alpha^2>0$, as a function of $r$, is
\begin{equation}
K^4=\frac{\alpha}{\sqrt{6\Lambda}}\sinh\left(2 \sqrt{\Lambda}
(r_0-r)\right) \ ,\qquad r\leq r_0\ ,
\label{k44}
\end{equation}
where $r_0$ is an integration constant.
It should be noted that the case $\mu<0$ gives rise to a dilaton with bad
asymptotics and it will not be considered here.
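As a quick numerical cross-check (not part of the original derivation), one can verify that \eqn{k44} with $\mu=\alpha^2$ indeed satisfies \eqn{eqs3}. The sketch below uses the illustrative values $\alpha=\Lambda=1$, $r_0=0$ and a central finite difference for $K'$:

```python
import math

# Check that K^4 = alpha/sqrt(6 Lambda) * sinh(2 sqrt(Lambda)(r0 - r)), mu = alpha^2,
# satisfies the first integral K'^2 = (mu/24) K^{-6} + (Lambda/4) K^2.
alpha, Lam, r0 = 1.0, 1.0, 0.0   # illustrative parameter values

def K(r):
    return (alpha / math.sqrt(6.0 * Lam)
            * math.sinh(2.0 * math.sqrt(Lam) * (r0 - r))) ** 0.25

h = 1e-6
for r in (-0.3, -1.0, -2.5):                  # r < r0
    Kp = (K(r + h) - K(r - h)) / (2.0 * h)    # central difference for K'
    rhs = alpha ** 2 / 24.0 * K(r) ** (-6) + Lam / 4.0 * K(r) ** 2
    assert abs(Kp ** 2 - rhs) < 1e-6 * rhs
```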
Substituting the expression in \eqn{k44} in \eqn{fFFF} and \eqn{eqs2},
we find that the string coupling and the axion
are given by\footnote{The computation is greatly facilitated if
we first change variables as
$$z=\int K^{-4}dr=\sqrt{\frac{3}{2 \alpha^2}}
\ln \coth\left(\sqrt{\Lambda} (r_0-r)\right)\ .$$}
\begin{eqnarray}
&&e^\Phi= {\chi_0\over 2 \alpha}
\left( \Big( \coth \sqrt{\Lambda} (r_0-r)\Big)^{\sqrt{3/2}}
-\Big( \tanh \sqrt{\Lambda} (r_0-r)\Big)^{\sqrt{3/2}} \right) \ ,
\label{eff1}\\
&&\chi=-{\alpha\over \chi_0}\
{\Big(\coth \sqrt{\Lambda} (r_0-r)\Big)^{\sqrt{6}} + 1\over
\Big(\coth \sqrt{\Lambda} (r_0-r)\Big)^{\sqrt{6}} - 1 }\ .
\label{eff2}
\end{eqnarray}
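In the footnote variable $z$ the first integral \eqn{fFFF} reads $(d\Phi/dz)^2 = \chi_0^2 e^{-2\Phi}+\alpha^2$, which is solved by $e^{\Phi}=(\chi_0/\alpha)\sinh\big(\alpha(z-z_0)\big)$. A numerical sketch (illustrative values $\alpha=\chi_0=\Lambda=1$, $r_0=0$ and $z_0=0$) checking this directly against \eqn{fFFF} in the original variable $r$:

```python
import math

# Check the first integral K^8 Phi'^2 = chi0^2 e^{-2 Phi} + alpha^2 for
# e^Phi = (chi0/alpha) sinh(alpha z),  z = sqrt(6)/(2 alpha) ln coth(sqrt(Lambda)(r0-r)).
alpha, chi0, Lam, r0 = 1.0, 1.0, 1.0, 0.0   # illustrative parameter values

def K4(r):   # K^4 of the solution above
    return alpha / math.sqrt(6.0 * Lam) * math.sinh(2.0 * math.sqrt(Lam) * (r0 - r))

def Phi(r):
    t = 1.0 / math.tanh(math.sqrt(Lam) * (r0 - r))
    z = math.sqrt(6.0) / (2.0 * alpha) * math.log(t)
    return math.log(chi0 / alpha * math.sinh(alpha * z))

h = 1e-6
for r in (-0.5, -1.0, -2.0):
    dPhi = (Phi(r + h) - Phi(r - h)) / (2.0 * h)       # Phi'(r), central difference
    rhs = chi0 ** 2 * math.exp(-2.0 * Phi(r)) + alpha ** 2
    assert abs(K4(r) ** 2 * dPhi ** 2 - rhs) < 1e-5 * rhs
```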
It is convenient to identify
the parameters $\alpha$ and $\Lambda$
in such a way that, in the limit $r\to -\infty$,
the Einstein metric becomes that of $AdS_5 \times S^5$. This requires that
$\alpha= 4 \sqrt{6} R^3 e^{-4 r_0/R}$ and $\Lambda=4/R^2$. On the other hand,
$r$ cannot take arbitrarily large values since, at $r=r_0$, the function
$K(r)$ has a zero
and the space is singular in both the Einstein and the string frames.
The presence
of this singularity makes the extrapolation from the UV region,
where the space is $AdS_5\times S^5$, to the IR region problematic, which
is ultimately related to the lack of supersymmetry.
As we will see next, there are supersymmetric
backgrounds that are non-singular in the sense that they are geodesically
complete.
\section{A supersymmetric solution}
The aspect we would like to address here is supersymmetry. For this we
need the
fermionic variations \cite{GGP}, which are written, in our background as
\begin{eqnarray}
\delta \lambda &=&
-\frac{1}{2}\left(e^{\Phi}\chi'-\Phi'\right)\gamma_r\epsilon^*\ , \nonumber \\
\delta\lambda^* & =&
-\frac{1}{2}\left(e^{\Phi}\chi'+\Phi'\right)\gamma_r \epsilon\ ,
\label{lamb}\\
\delta\psi_M & =& \left(\nabla_M+\frac{1}{4}e^\Phi\partial_M\chi
-\frac{\sqrt{\Lambda}}{4}\gamma_M\right)\epsilon\ , \nonumber \\
\delta\psi_M^*& =& \left(\nabla_M-\frac{1}{4}e^\Phi\partial_M\chi
-\frac{\sqrt{\Lambda}}{4}\gamma_M\right)\epsilon^*\ ,
\label{dpsi}
\end{eqnarray}
where $\lambda,~\psi_M$ are the dilatino and the gravitino, respectively,
and $\gamma_M$ are the $\gamma$-matrices.
We may easily check, using \eqn{fFFF}, that the background is supersymmetric
for
\begin{equation}
\mu=0\ , ~~~~~~\chi'=\pm e^{-\Phi}\Phi'\ .
\label{muu}
\end{equation}
In that case, there exist Killing spinors $\epsilon=e^{\pm \Phi/4}\zeta$
where $\zeta$ are Killing spinors on the (Euclidean)
$AdS^5\times S^5$. By choosing
$\chi'= e^{-\Phi}\Phi'$, the background
breaks half of the supersymmetries and we find that
\begin{eqnarray}
K(r)& =& e^{-r/R_0}\ , \\
e^\Phi& =& g_s+\frac{R_0}{4}\chi_0e^{4r/R_0}\ , \label{PP}\\
\chi& =&a_\infty -e^{-\Phi}\ , \label{xxx}
\end{eqnarray}
where $a_\infty$ is the constant value of the axion field at infinity and
$R_0^4=4 \pi N$.
After changing coordinates as $\rho=R_0e^{r/R_0}$,
the metric for the supersymmetric solution
in the string frame becomes
\begin{eqnarray}
ds^2&=&\left(1+\frac{\chi_0}{4R^3 g_s^{1/4}}\rho^4\right)^{1/2}
\left(\frac{R^2}{\rho^2}\right.\left(d\rho^2+dx^\alpha dx_\alpha\right)
+R^2d\Omega_5^2\left.\!\!\!\!\!\!\!\phantom{\frac{1}{\rho^2}}\right)\\
&=&R^2\left(\frac{\chi_0}{4R^3 g_s^{1/4}}\right)^{1/2}
\left(1+\frac{4g_s^{1/4}R^3}{\chi_0}\frac{1}{\rho^4}
\right)^{1/2}\left(d\rho^2+\rho^2 d\Omega_5^2+dx^\alpha dx_\alpha\right)\ ,
\end{eqnarray}
where $R^4=g_s R_0^4=4 \pi g_s N$.
Thus, the background is conformally flat and it has two asymptotic
regions
\begin{eqnarray}
&&\rho\rightarrow \infty ~,~~~~~~\mbox{flat space}\ , \\
&&\rho\rightarrow 0 ~, ~~~~~~~~\mbox{$AdS_5\times S^5$ \ .}
\label{INT}
\end{eqnarray}
The action of this vacuum is easily found, following \cite{GGP}, to be
\begin{equation}
S=-\frac{1}{2\kappa^2}\int d^{10}x \sqrt{g}\nabla^2\Phi=\frac{c}{g_s}\ ,~~~~~~
c=\frac{1}{2\kappa^2}\chi_0R_0^5V(S^5)\int d^4x \ , \label{ACT}
\end{equation}
where $V(S^5)=\pi^3$.
Thus, it has a finite value per unit
four-dimensional volume. In addition it has the standard $1/g_s$ scaling of
D-branes.
As a result, this solution can be interpreted as an interpolating soliton
between flat space and Euclidean $AdS_5\times S^5$.
We also note that this solution corresponds to the large size limit of the
D-instanton solutions of \cite{BGKR,SHW}.
Concerning the above solution, we would like to stress that
it breaks half of the supersymmetries even in the two asymptotic
regions, where the space is either flat or $AdS_5\times S^5$.
Thus, the corresponding boundary gauge theory will be
an ${\cal N}=2$ SYM theory, which, however, will
have running coupling, as we will see. Hence, it should be
different from the one obtained by
orbifolding the ${\cal N}=4$ theory \cite{KSs}.
\section{Running couplings and the\\
quark-antiquark potential}
In order to discuss the running of couplings, we must identify the coordinate
in the bulk that corresponds to the energy in the boundary gauge theory.
We identify
the energy in the bulk as $U= R^2 e^{-r/R}$ and also define an energy scale
$U_1= R^2 e^{-r_0/R}$, which, since $U>U_1$, may be considered as the
IR cutoff. The identification of the
energy follows from the fact \cite{SW} that if one considers the massless
scalar equation
${1\over \sqrt{G}}\partial_M e^{-2 \Phi} \sqrt{G} G^{MN}\partial_N \Psi=0$,
where $G_{MN}$ is the $\s$-model metric, this takes the form of the usual
scalar equation in $AdS_5 \times S^5$ to leading order in the expansion for
large $U$. According to the AdS/CFT correspondence, the dependence of the
bulk fields on $U$ may be interpreted as energy dependence of the boundary
theory such that long (short) distances in the AdS space correspond to
low (high) energies in the CFT side. In particular, the $U$-dependence of the
dilaton defines the energy dependence of the YM coupling
$g_{\rm YM}^2=4\pi e^{\Phi}$, as well as of the 't Hooft coupling $g_{\rm H}^2=
g_{\rm YM}^2N$.
According to this we find the running of the 't Hooft coupling to be
\begin{equation}
g_{\rm H} = g_0 \Big({U_1\over U}\Big)^4 + {\cal O}\Big({U_1\over U}\Big)^{12} \ ,
\label{betrt}
\end{equation}
where $g_0=\sqrt{3} \chi_0/R^4 \alpha$.
As we see, $g_{\rm H}$ vanishes asymptotically, the signal
of asymptotic freedom. It should be stressed, however, that the
't Hooft coupling approaches zero with a power law and not with the familiar
perturbative logarithmic way. Note also that $\beta_g'(0)= -4$, in accordance
with the universal behaviour for the first derivative of the beta function
for the coupling that was found in \cite{KS}.
In the supersymmetric solution in particular, one finds, by using \eqn{PP},
that the beta function is
\begin{equation}
\beta(g_{\rm H})=-4(g_{\rm H}-g^*_{\rm H})\ , \label{hoo}
\end{equation}
where $g_{\rm H}^{*2}=4\pi N g_s$.
Next we turn to the quark--antiquark potential
corresponding to the supersymmetric solution of section 2, by
computing the Wilson loop.
Using standard techniques \cite{Mal1,Rey} we find that the potential is
\begin{equation}
E_{q\bar q} = \Big(-U_0 + {a \over U_0^3}\Big) {\eta_1 \over \pi} \ ,\qquad
a={\chi_0 R^5\over 4 g_s^{1/4}}\ ,
\label{qbqrq}
\end{equation}
where $\eta_1 ={\pi^{1/2} \Gamma(3/4)\over \Gamma(1/4)}\simeq 0.599$,
and $U_0$ is the turning point of the trajectory given by
\begin{eqnarray}
U_0 &=& a^{1/4} \left( {1\over 6} \Delta^{1/3} - 2 \Delta^{-1/3}\right)^{-1/2}\ ,
\nonumber\\
\Delta & \equiv &
108 b^2 + 12 \sqrt{12 + 81 b^2}\ ,\qquad b={a^{1/4}\over 2 \eta_1 R^2} L\ .
\label{gfgh}
\end{eqnarray}
It turns out that, for $a^{1/4}L/R^2 \ll 1$, the first term in \eqn{qbqrq}
is dominant, resulting in the usual Coulombic law behaviour \cite{Mal1,Rey}.
The first correction to it is
${\cal O}(L^3)$, as in \cite{KS,demelo}, which is also similar to
the behaviour one finds \cite{BISY1} using near extremal D3-branes to describe
finite-temperature effects in the ${\cal N}=4$ $SU(N)$ SYM theory at large $N$.
However, in the opposite limit of $a^{1/4}L/R^2\gg 1$, the second term
in \eqn{qbqrq} dominates, giving
\begin{equation}
E_{q\bar q} \simeq {a^{1/2}\over 2\pi R^2}\ L\ ,
\label{coonf}
\end{equation}
where we have used the fact that $U_0\simeq a^{1/4}/b^{1/3}$ in that limit.
Hence, the quark--antiquark potential exhibits the typical confining behaviour
and produces the area-law for the Wilson factor.
The plot of the potential \eqn{qbqrq} is given below.
\begin{figure}[htb]
\epsfxsize=4in
\bigskip
\centerline{\epsffile{photo3.eps}}
\caption{Plot of the quark--antiquark potential in \eqn{qbqrq} as a function of
the separation distance $L$, in units of $a=(2 \eta_1 R^2)^4$.
For small $L$ we have a Coulombic behaviour, whereas it is linear for large
$L$.}
\bigskip
\label{fig1}
\end{figure}
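Both limiting regimes can be reproduced by evaluating \eqn{qbqrq} and \eqn{gfgh} numerically; the sketch below works in units $a=R=1$ (so that $b\propto L$) and is only a consistency check of the formulas quoted above:

```python
import math

# Evaluate the quark-antiquark potential (qbqrq)-(gfgh) in units a = R = 1 and
# check the two regimes: Coulombic E ~ -const/L at small L, linear at large L.
eta1 = math.sqrt(math.pi) * math.gamma(0.75) / math.gamma(0.25)   # ~ 0.599

def E_qq(L, a=1.0, R=1.0):
    b = a ** 0.25 * L / (2.0 * eta1 * R ** 2)
    Delta = 108.0 * b ** 2 + 12.0 * math.sqrt(12.0 + 81.0 * b ** 2)
    U0 = a ** 0.25 / math.sqrt(Delta ** (1.0 / 3.0) / 6.0 - 2.0 * Delta ** (-1.0 / 3.0))
    return (-U0 + a / U0 ** 3) * eta1 / math.pi

# small L: L * E(L) approaches a negative constant (Coulomb law)
coul = [L * E_qq(L) for L in (1e-3, 1e-4)]
assert coul[1] < 0 and abs(coul[0] - coul[1]) < 1e-2 * abs(coul[1])

# large L: E(L) / L approaches a positive constant (linear, area-law behaviour)
lin = [E_qq(L) / L for L in (1e4, 2e4)]
assert lin[0] > 0 and abs(lin[0] - lin[1]) < 1e-3 * lin[0]
```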
A second criterion \cite{witten} for confinement is the existence of
a mass gap in the theory. In our case it is straightforward to check this,
by examining the massless wave equation and realizing that the
Einstein metric is just $AdS_5 \times S^5$.
At this point we emphasize that since we go into the strong coupling
regime for $U=0$,
we should introduce an infrared cutoff $U_{\rm IR}$ such that
$U\ge U_{\rm IR}$.
This does not affect the behaviour in the UV, but it has consequences
in the IR. It implies that there is a maximum length $L_{\rm IR}$, up to
which \eqn{coonf} can be trusted. Under this assumption, the range
of validity of \eqn{coonf} is $a^{-1/4} R^2 \ll L \ll L_{\rm IR}$.
An estimate for $L_{\rm IR}$ is found as follows: It is natural to take for
$U_{\rm IR}$ the value for $U$ where the string coupling becomes of order 1,
i.e. $U_{\rm IR}\sim g_s^{1/4} a^{1/4}$.
Then the cutoff is compared with
the minimum value for $U$, i.e. $U_0\sim U_{\rm IR}$, which
implies that $L_{\rm IR}\simeq a^{-1/4} R^2 g_s^{-3/4}$.
Since $g_s\to 0$ (keeping the constant $a$ in \eqn{qbqrq} finite)
we see that our result \eqn{qbqrq} is practically valid everywhere.
\bigskip\bigskip
\centerline{\bf Note added}
Just before submitting our paper to hep-th we received \cite{tseliu},
which overlaps with material in section 3.
\section{Introduction}
\pagenumbering{arabic}
\pagestyle{plain}
\setcounter{page}{1}
One of the most informative processes of Particle Physics is the $e^+e^-$
annihilation into hadrons.
It allows one to probe the structure of hadronic resonances, which is necessary for testing
various strong interaction models including QCD.
Moreover, the investigation of the high-energy behavior of $e^+e^-\rightarrow \gamma^*, Z^* \rightarrow \textit{hadrons}$ enables one to study predictions of the Standard
Model and its possible extensions at the high-energy electron-positron colliders existing at present and planned for the future (for plans of their construction see e.g. \cite{Benedikt:2020ejr,
Freitas:2021oiq}). An important characteristic of the process $e^+e^-\rightarrow \gamma^*\rightarrow {hadrons}$ is the $R$-ratio, which is proportional to its total cross-section.
The function $R\left(s\right)$ is defined in the Minkowski space-time region and depends on the Mandelstam variable
$s = \left(P_{e^+} + P_{e^-}\right)^2$.
The $R$-ratio is related to the Adler $D$-function and to the hadronic vacuum polarization function $\Pi(Q^2)$, defined in the Euclidean space, in the following way:
\begin{equation}
\label{RtoDint-rel}
D(Q^2) =-Q^2\frac{d\Pi(Q^2)}{dQ^2}= Q^2 \int\limits_0^\infty \frac{R\left(s\right)}{\left(s+Q^2\right)^2} \;d s.
\end{equation}
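Relation \eqref{RtoDint-rel} can be tested in the simplest parton-model-like situation: for a constant $R(s)=R_0$ the integral must give $D(Q^2)=R_0$ for all $Q^2$. The sketch below (with illustrative numerical inputs) uses the exact substitution $s=Q^2t/(1-t)$, which maps the relation to $D(Q^2)=\int_0^1 R\big(Q^2t/(1-t)\big)\,dt$:

```python
import math

# Numerical check of D(Q^2) = Q^2 * Int_0^inf R(s)/(s+Q^2)^2 ds.  With
# s = Q^2 t/(1-t) this becomes exactly D(Q^2) = Int_0^1 R(Q^2 t/(1-t)) dt,
# evaluated here with a composite midpoint rule (which avoids t = 1).
def adler_D(R, Q2, n=4000):
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        total += R(Q2 * t / (1.0 - t))
    return total / n

# parton-model check: constant R(s) = R0  =>  D(Q^2) = R0 for every Q^2
R0 = 3.0
for Q2 in (0.5, 1.0, 10.0):
    assert abs(adler_D(lambda s: R0, Q2) - R0) < 1e-12

# a toy R(s) = s/(s + M^2), whose integral follows in closed form by partial
# fractions:  D = Q^2/(Q^2 - M^2) - Q^2 M^2 ln(Q^2/M^2)/(Q^2 - M^2)^2
M2, Q2 = 1.0, 4.0
exact = Q2 / (Q2 - M2) - Q2 * M2 * math.log(Q2 / M2) / (Q2 - M2) ** 2
assert abs(adler_D(lambda s: s / (s + M2), Q2) - exact) < 1e-5
```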
The QCD expression for the $D$-function is of special theoretical interest because the
renormalization group (RG) equation for this quantity, defined in the Euclidean domain, does not include anomalous dimensions in the limit of massless quarks.
This means that its momentum dependence in perturbation theory (PT) can be absorbed entirely into the running of the parameter $\overline{a}_s\equiv\overline{a}_s(Q^2)=\overline{\alpha}_s(Q^2)/\pi$, which we will consider in the modified minimal-subtraction
$\overline{\rm MS}$-scheme. The dependence of the renormalized strong coupling $a_s\equiv a_s(\mu^2)=\alpha_s(\mu^2)/\pi$ on scale $\mu^2$ is described by the solution of the standard RG equation
\begin{equation}
\label{beta-exp}
\beta^{(N)}(a_s ) = \mu^2 \frac{\partial a_s}{\partial \mu^2} =
- \sum\limits_{i = 0}^{N-1} \beta_{i} a^{\;i+2}_s,
\end{equation}
where $N\geq 1$ is the order of the PT approximation of the $\beta$-function, which
equals the number of loops involved in the charge renormalization.
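Beyond one loop, Eq.(\ref{beta-exp}) is usually integrated numerically. The sketch below integrates the $N=2$ truncation with a classical fourth-order Runge--Kutta step and checks it against the exact one-loop solution $a_s(\mu^2)=a_0/\big(1+\beta_0 a_0\ln(\mu^2/\mu_0^2)\big)$ when $\beta_1$ is switched off; the initial value $a_0$ is purely illustrative:

```python
# RK4 integration of  mu^2 d a_s/d mu^2 = -(beta0 a^2 + beta1 a^3)
# in the variable L = ln(mu^2/mu0^2); this is the N = 2 (two-loop) truncation.
def run_coupling(a0, L_final, beta0, beta1=0.0, steps=10000):
    f = lambda a: -(beta0 * a ** 2 + beta1 * a ** 3)
    a, h = a0, L_final / steps
    for _ in range(steps):
        k1 = f(a)
        k2 = f(a + 0.5 * h * k1)
        k3 = f(a + 0.5 * h * k2)
        k4 = f(a + h * k3)
        a += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return a

nf = 5                                    # number of active flavours (example)
beta0 = (11.0 - 2.0 * nf / 3.0) / 4.0     # MS-bar, a_s = alpha_s/pi normalization
beta1 = (102.0 - 38.0 * nf / 3.0) / 16.0
a0, L = 0.1, 5.0                          # illustrative initial data

# one-loop cross-check against the exact solution
one_loop = a0 / (1.0 + beta0 * a0 * L)
assert abs(run_coupling(a0, L, beta0) - one_loop) < 1e-10
# with beta1 > 0 (nf = 5) the coupling decreases slightly faster
assert run_coupling(a0, L, beta0, beta1) < one_loop
```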
The continuing advances in developing analytical calculation techniques and
their computer algorithmization have made it possible to obtain analytic expressions for the
coefficients of the $\beta(a_s)$-function up to the 5-th order of PT in the $\overline{\rm MS}$-scheme. Indeed, the corresponding coefficient $\beta_4$ was independently computed by three groups of authors \cite{Baikov:2016tgj, Herzog:2017ohr, Luthe:2017ttc}.
Nowadays, the analytical expression for the $D$-function is known at the 4-loop level. The $\overline{a}^{\;2}_s$-correction in the $\overline{\rm MS}$-scheme was first independently obtained in analytical \cite{Chetyrkin:1979bj}
and in numerical \cite{Dine:1979qh} form. These results were confirmed by
analytical calculation \cite{Celmaster:1979xr}. The correct analytical expression for the next-to-next-to-leading order (${\rm{NNLO}}$) $\overline{a}^{\;3}_s$-correction was first obtained in \cite{Gorishnii:1990vf} and was afterwards verified by a non-independent computation in \cite{Surguladze:1990tg}. In this regard,
the results of the totally self-sufficient analytical calculations \cite{Chetyrkin:1996ez}, performed in an arbitrary covariant gauge with the help of a completely different technique, were particularly important and confirmed those obtained in \cite{Gorishnii:1990vf}.
The analytical calculation of the separate contributions to the total 4-th order
correction to the $D$-function continued for more than 15 years. The explicit results for the
renormalon-type terms were obtained in \cite{Broadhurst:1993ru}
from the related direct 5-loop QED computations of \cite{Broadhurst:1992si}.
The $\overline{a}^{\;4}_s$-contribution proportional to $n_f^2$ was calculated in \cite{Baikov:2001aa}, while the complete 4-th order results
for the Adler function are now known from the advanced analytical computations performed in
\cite{Baikov:2008jh} for the case of $SU(3)$ color group and in \cite{Baikov:2010je, Baikov:2012zn} for the generic simple gauge group.
Note that the independent analytical confirmation of these results was done more than five years later in \cite{Herzog:2017dtz}.
Self-consistent phenomenological applications of the
RG-improved 4-th order PT terms for the $D$-function (as well as for the $R$-ratio and the Bjorken polarized sum rule)
presume knowledge of the 4-loop $\beta$-function, which governs the running of the strong coupling
via the solution
of Eq.(\ref{beta-exp}). At this level the $\overline{\rm MS}$-scheme function $\beta(a_s)$ was analytically evaluated in \cite{vanRitbergen:1997va} and confirmed later on in \cite{Czakon:2004bu}. To use the 5-loop term of the $\beta(a_s)$-function \cite{Baikov:2016tgj, Herzog:2017ohr, Luthe:2017ttc} in the analysis of the existing $e^+e^-\rightarrow\gamma^*\rightarrow{hadrons}$ data, it is desirable to know at least partially the magnitudes of the individual $\overline{a}^{\;5}_s$ contributions to the mentioned quantities, or estimates of their total $\mathcal{O}(\overline{a}^{\;5}_s)$ PT corrections (see e.g. \cite{Baikov:2008jh, Boito:2018rwt}).
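To illustrate how the running coupling entering these expressions is obtained in practice, the following minimal sketch (added for illustration, not part of the cited calculations) numerically integrates the RG equation truncated at two loops. It assumes the $a_s=\alpha_s/\pi$ normalization, in which $\beta_0=(33-2n_f)/12$ and $\beta_1=(153-19n_f)/24$; the initial condition is purely illustrative.

```python
# Illustrative sketch: solve d a_s / d ln Q^2 = -beta0*a_s^2 - beta1*a_s^3
# (two-loop truncation, a_s = alpha_s/pi normalization) with RK4.
import math

def beta_coeffs(n_f):
    """Scheme-independent beta_0, beta_1 in the a_s = alpha_s/pi normalization."""
    return (33 - 2 * n_f) / 12.0, (153 - 19 * n_f) / 24.0

def run_coupling(a_start, q2_start, q2_end, n_f, steps=2000):
    """Evolve a_s from Q^2_start to Q^2_end by RK4 in the variable t = ln Q^2."""
    b0, b1 = beta_coeffs(n_f)
    rhs = lambda a: -b0 * a**2 - b1 * a**3
    a = a_start
    h = (math.log(q2_end) - math.log(q2_start)) / steps
    for _ in range(steps):
        k1 = rhs(a)
        k2 = rhs(a + 0.5 * h * k1)
        k3 = rhs(a + 0.5 * h * k2)
        k4 = rhs(a + h * k3)
        a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

# Asymptotic freedom: the coupling decreases as Q^2 grows
# (the starting value a_s(9 GeV^2) = 0.1 is purely illustrative).
a_low = 0.1
a_high = run_coupling(a_low, 3.0**2, 91.0**2, n_f=5)
assert 0.0 < a_high < a_low
```

The same iteration, extended with the higher coefficients $\beta_2$, $\beta_3$, $\beta_4$, gives the 4- and 5-loop running used in the phenomenological analyses mentioned above.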
In this work we examine another problem, namely the fixation of the definite $\overline{\rm MS}$-scheme $\overline{a}^{\;5}_s$ contributions to the still analytically unknown five-loop corrections to the considered renorm-group quantities. For this purpose we follow the approach of Refs.\cite{Kataev:2010dm, Kataev:2010du, Cvetic:2016rot}, which consists in representing the 5-th order expressions for the $e^+e^-$-annihilation $D$-function and $R$-ratio (and, by the same analogy, for the Bjorken polarized sum rule), written for the case of the generic simple gauge group, in the $\{\beta\}$-expanded form introduced in Ref.\cite{Mikhailov:2004iq}. As a result, we fix 8 out of 12 possible $\{\beta\}$-dependent five-loop terms and define their Lie group structure. Moreover, we demonstrate that out of the 4 remaining undetermined terms, 3 will contain the Riemann $\zeta_4$-contributions, and we also find
their exact group structure. Arguments in favor of the validity of these values, based on definite cancellations due to the scale and conformal symmetries, are given.
\section{The one-fold generalization of the Crewther relation}
Consider first the Adler $D$-function. In the massless limit this theoretical quantity is decomposed into a sum of the flavor non-singlet (NS) and singlet (SI) components:
\begin{equation}
\label{DAdler}
D^{(M)}(\overline{a}_s) = d_R \left(\sum_f {Q_f}^2\right) D^{(M)}_{NS} (\overline{a}_s) +
d_R \left(\sum_f Q_f\right)^2 D^{(M\geq 3)}_{SI} (\overline{a}_s),
\end{equation}
where $Q_f$ are the charges of quarks with flavor $f$ and $d_R$ is the dimension of the quark representation of the considered generic simple gauge group. In our study we are primarily interested in the case of the $SU(N_c)$ gauge group, where $d_R=N_c$, and its particular case of the $SU(3)$ color group, relevant for physical QCD. The number $M$ stands for the order of PT being considered.
For the non-singlet case, starting from $M\geq 2$, it is related to $N$, defined below Eq.(\ref{beta-exp}), as $M=N+1$.
The flavor-singlet contribution to the $D$-function first appears at the third order of
PT \cite{Gorishnii:1990vf}.
Another RG quantity of interest for our investigation characterizes
the purely Euclidean process of deep inelastic scattering of polarized leptons on nucleons, namely the Bjorken polarized sum rule, which is defined as:
\begin{equation}
\label{BJPSR}
S^{(M)}_{Bjp}(\overline{a}_s) = \int\limits_0^1 \bigg(g_{1, (M)}^{(lp)}(x,Q^2)-g_{1, (M)}^{(ln)}(x,Q^2)\bigg) dx =\frac{1}{6}\bigg|\frac{g_A}{g_V}\bigg|\bigg( C^{(M)}_{NS}(\overline{a}_s)+ C^{(M\geq 4)}_{SI}(\overline{a}_s)\bigg),
\end{equation}
where $g_{1, (M)}^{(lp)}(x,Q^2)$ and $g_{1, (M)}^{(ln)}(x,Q^2)$ are the structure functions of these deep-inelastic processes, $g_A$ and $g_V$ are the axial and vector neutron $\beta$-decay constants. In Eq.(\ref{BJPSR}) the dependence of $S_{Bjp}$ on $Q^2=-q^2$ at large $Q^2$ is absorbed into the coupling $\overline{a}_s$.
In the case of the generic color gauge group the analytical expressions for the second, third and fourth PT corrections to the $C_{NS}(\overline{a}_s)$-function were computed in the $\overline{\rm MS}$-scheme in Refs.\cite{Gorishnii:1985xm}, \cite{Larin:1991tj}, \cite{Baikov:2010je} respectively. It should also be recalled that, in contrast to the Adler function, the SI contribution to $S_{Bjp}(\overline{a}_s)$ manifests itself starting from the 4-th order of PT \cite{Larin:2013yba}\footnote{However, the guessed expression for the $\overline{a}^{\;4}_s$ contribution to $C^{(M=4)}_{SI}$ in \cite{Larin:2013yba} does not agree with the analytical result for the same term, directly evaluated in \cite{Baikov:2015tea}. The reason for this discrepancy remains an open question.}.
A theoretically interesting feature of
the Adler function and the Bjorken polarized sum rule
is that their Born approximations are related through consideration of the axial-vector-vector (AVV) triangle diagram. In the massless case, where conformal symmetry is respected, this relation
was derived by Crewther in \cite{Crewther:1972kn} (see \cite{Adler:1972msp} for a reanalysis and \cite{Crewther:2014kha} for a review).
The PT generalization
of the Crewther relation was discovered in \cite{Broadhurst:1993ru} at $M=3$ (or $N=2$) and reads:
\begin{equation}
\label{BK}
D^{(M)}_{NS} (\overline{a}_s)C^{(M)}_{NS}(\overline{a}_s)=
1+\bigg(\frac{\beta^{(M-1)}(\overline{a}_s)}{\overline{a}_s}\bigg)K^{(M-1)}(\overline{a}_s)+\mathcal{O}(\overline{a}^{\;M+1}_s).
\end{equation}
Here $\beta^{(2)}(\overline{a}_s)$ is the two-loop approximation of the $\beta$-function (\ref{beta-exp})
of the non-abelian strong interaction theory with a simple compact Lie group. At $M=3$ the term $K^{(2)}(\overline{a}_s)=K_1\overline{a}_s +K_2\overline{a}^{\;2}_s$ is the second-degree polynomial with coefficient $K_1$, containing only the quadratic Casimir operator $C_F$
of the fundamental representation of the gauge group,
and with coefficient $K_2$ that includes three monomials $C_F^2$, $C_FC_A$ and $C_FT_Fn_f$, where $C_A$ is the quadratic Casimir
operator of the adjoint representation, $T_F$ is the Dynkin index and $n_f$ is the number of quark flavors.
It was conjectured in \cite{Broadhurst:1993ru} that the factorization property of the $\beta(\overline{a}_s)$-function in the generalized Crewther relation may hold at $M>3$ as well. Further on, we will call
the form of Eq.(\ref{BK}) the one-fold generalization of the Crewther relation.
It follows from consideration of the PT expression for the AVV diagram, with its three possible form factors appearing in the special kinematics $(pq)=0$, that the mutual cancellation of the $C_F^M\overline{a}_s^M$-factors
in the product of the Adler function and the Bjorken polarized sum rule (see Eq.(\ref{BK})) in all orders of PT is a consequence of the Adler-Bardeen theorem on the non-renormalizability of the axial current \cite{Gabadadze:1995ei}\footnote{This
fact also follows from studies of perturbative quenched QED \cite{Kataev:2013vua}, previously called ``the finite QED program'' \cite{Johnson:1967pk} and respecting the conformal symmetry.}.
Moreover, it was shown there that
one of these three form factors remains
unrenormalized, while the remaining two are renormalized and are proportional to the conformal anomaly term $\beta^{(N)}(\overline{a}_s)/\overline{a}_s$. These
two renormalized form factors are related to the
one-fold generalization of the Crewther relation and
may be expressed through the factorized $\beta$-function in all orders of PT starting from the second one. The direct analytical calculations \cite{Mondejar:2012sz} of all 3-loop corrections to the
6 possible form factors of the AVV Green function in arbitrary kinematics reveal, in the $\overline{a}^{\;2}_s$-contributions to certain form factors, the appearance of terms
proportional to the first coefficient $\beta_0$ of the QCD $\beta$-function.
Thus, the aforesaid means that the conjecture concerning the factorization of the conformal anomaly term in the one-fold generalization of the Crewther relation in higher orders of PT \cite{Broadhurst:1993ru, Gabadadze:1995ei} {\it may be} indeed true. The subsequent, more general theoretical $x$-space analysis of Ref.\cite{Crewther:1997ux} indicates that this statement
ought to be strengthened to the assertion that {\it the $\beta(\overline{a}_s)$-function should be factorized} in all orders of PT. The studies conducted in Ref.\cite{Braun:2003rp} lead to the same conclusion.
As was shown in \cite{Garkusha:2018mua}, if this inference is really true in the gauge-invariant $\overline{\rm MS}$-scheme then it will be valid in the gauge-dependent ${\rm{MOM}}$-like schemes in the Landau gauge in all orders as well\footnote{For the most recent analytical 5-loop calculations of the definite anomalous dimensions in the $\overline{\rm MS}$-scheme in the Landau gauge see \cite{Chetyrkin:2017bjc}.}.
The important explicit 4-th order analytical computations of the PT corrections to
$D^{(4)}_{NS}(\overline{a}_s)$ and
$C^{(4)}_{NS}(\overline{a}_s)$-functions, carried out in the $\overline{\rm MS}$-scheme for the case of a generic simple gauge group in Ref.\cite{Baikov:2010je}, allowed the authors of this work to verify directly and thereby confirm the conjecture
\cite{Broadhurst:1993ru} on the validity of Eq.(\ref{BK}) at $M=4$ with the factorizable three-loop function $\beta^{(3)}(\overline{a}_s)$, which was evaluated in \cite{Tarasov:1980au, Larin:1993tp}.
At $M=4$ the term $K^{(3)}(\overline{a}_s)=K_1\overline{a}_s+K_2\overline{a}_s^{\;2}+K_3\overline{a}_s^{\;3}$ is the cubic polynomial in $\overline{a}_s$. Its $\overline{a}_s^{\;3}$-coefficient $K_3$ was obtained in Ref.\cite{Baikov:2010je}. It
contains 6 color monomials, namely $C_F^3$, $C_F^2C_A$, $C_FC_A^2$,
$C_F^2T_Fn_f$, $C_FC_AT_Fn_f$, $C_FT^2_Fn^2_f$. One should emphasize that coefficients $K_2$ and $K_3$ contain $n_f$-dependent terms. Now let us move on to the representation of the generalized Crewther relation
where at least in the 4-th order of PT the total flavor dependence of its r.h.s will be contained in coefficients of $\beta$-function only.
\section{The two-fold generalization of the Crewther relation}
At the next stage, using the 3-rd order analytical
PT expressions for $D^{(3)}_{NS}$ and $C^{(3)}_{NS}$-functions, it was found in \cite{Kataev:2010dm} that
in the case of a generic simple gauge group the generalized Crewther relation at $2\leq M\leq 3$
can be rewritten in the following two-fold representation:
\begin{equation}
\label{BK-two-fold}
D^{(M)}_{NS} (\overline{a}_s)C^{(M)}_{NS}(\overline{a}_s)=
1+\sum\limits_{n=1}^{M-1}\bigg(\frac{\beta^{(M-n)}(\overline{a}_s)}{\overline{a}_s}\bigg)^nP^{(M-n)}_n(\overline{a}_s)+\mathcal{O}(\overline{a}^{\;M+1}_s).
\end{equation}
The validity of the analogous expression at $M=4$ was assumed in \cite{Kataev:2010dm} even prior to the
appearance of the analytical results for the $\overline{a}^{\;4}_s$-corrections to the $D^{(4)}_{NS}$ and $C^{(4)}_{NS}$-functions \cite{Baikov:2010je}. This conjecture was explicitly checked somewhat later in \cite{Kataev:2010du}, where the results of Ref.\cite{Baikov:2010je} were used.
At the 4-loop level the polynomials $P^{(r)}_n(\overline{a}_s)$ in Eq.(\ref{BK-two-fold}) read:
\begin{equation}
\label{polynomial}
P^{(r)}_n(\overline{a}_s)=\sum\limits_{k=1}^r P^{(r)}_{n, k} \;\overline{a}^{\;k}_s=\sum_{p = 1}^{4-n} \overline{a}^{\;p}_s \sum_{k=1}^p P_n^{(r)}[k,p-k] C_F^k C_A^{p-k},
\end{equation}
where $r=M-n$ and for the considered case (when $M=4$)
$r=4-n$ correspondingly. The coefficients $P^{(r)}_{n, k}$ can be defined unambiguously at least in the 4-th order of PT \cite{Kataev:2010du}. An important point here is that at this level of PT all dependence on $n_f$ in the r.h.s. of Eq.(\ref{BK-two-fold}) is contained in the coefficients of the RG $\beta$-function. Thus, in contrast to the coefficients of the polynomial $K^{(M-1)}(\overline{a}_s)$ in Eq.(\ref{BK}), the terms of $P^{(r)}_n(\overline{a}_s)$ in Eq.(\ref{BK-two-fold}) are independent of the number of quark flavors.
\section{The two-fold representation for quantities related by the generalized Crewther relation}
\subsection{The case of the Adler function}
\label{SubSecAdler}
The double sum expression for the conformal symmetry breaking term in the r.h.s. of Eq.(\ref{BK-two-fold}) motivated the authors of
\cite{Cvetic:2016rot} to consider a similar representation for the NS contributions to the Adler function and the Bjorken polarized sum rule, at least at the analytically available $\mathcal{O}(\overline{a}^{\;4}_s)$ level. According to this paper, at $M=4$ the PT expression for the NS Adler function, calculated in the $\overline{\rm MS}$-scheme for the non-abelian gauge theory with a simple compact Lie group, may be presented as the following two-fold series:
\begin{equation}
\label{D-two-fold}
D^{(M)}_{NS}(\overline{a}_s) = 1 + D^{(M)}_0(\overline{a}_s)+\sum\limits_{n=1}^{M-1}\left(\frac{\beta^{(M-n)}(\overline{a}_s)}{\overline{a}_s}\right)^{n} D^{(M-n)}_{n}(\overline{a}_s)+\mathcal{O}(\overline{a}^{\;M+1}_s),
\end{equation}
where polynomials $D^{(r)}_n(\overline{a}_s)$ in the coupling constant $\overline{a}_s$ are:
\begin{equation}
\label{Drn}
D^{(r)}_n(\overline{a}_s)=\sum\limits_{k=1}^r D^{(r)}_{n,k}\overline{a}^{\;k}_s.
\end{equation}
In a more detailed form Eq.(\ref{D-two-fold}) may be written down as:
\begin{eqnarray}
\label{explicit}
\hspace{-0.7cm}
D^{(M=4)}_{NS}(\overline{a}_s)&=&1 + D^{(4)}_{0,1}\overline{a}_s+\bigg(D^{(4)}_{0,2}-\beta_0 D^{(3)}_{1,1}\bigg)\overline{a}^{\;2}_s+\bigg(D^{(4)}_{0,3}-\beta_0 D^{(3)}_{1,2}-\beta_1 D^{(3)}_{1,1}+\beta^2_0 D^{(2)}_{2,1}\bigg)\overline{a}^{\;3}_s \\ \nonumber
&+&\bigg(D^{(4)}_{0,4}-\beta_0 D^{(3)}_{1,3}-\beta_1 D^{(3)}_{1,2}-\beta_2 D^{(3)}_{1,1}+\beta^2_0 D^{(2)}_{2,2}+2\beta_0\beta_1 D^{(2)}_{2,1}-\beta^3_0 D^{(1)}_{3,1}\bigg)\overline{a}^{\;4}_s+\mathcal{O}(\overline{a}^{\;5}_s),
\end{eqnarray}
where for fixed $n$ and $k$, $D^{(r)}_{n,k}\equiv D^{(r+1)}_{n,k}\equiv D^{(r+2)}_{n,k}\equiv \dots$, e.g. $D^{(2)}_{1,2}\equiv D^{(3)}_{1,2}$.
The coefficients $D^{(r)}_{n,k}$ entering Eq.(\ref{explicit}) are determined unambiguously
as solutions of a system of linear equations analogous to those presented in \cite{Kataev:2010du}. In this way the full dependence on $n_f$ (except for the light-by-light scattering effects -- see explanations below) is absorbed into the coefficients of the $\beta$-function and their combinations (\ref{explicit}). At the four-loop level the values of $D^{(r)}_{n,k}$ were first obtained in Ref.\cite{Cvetic:2016rot}. Note also that the two-fold representation (\ref{D-two-fold}) and its counterpart for the Bjorken polarized sum rule, to be considered below in Subsec. \ref{Bjsub}, are a sufficient condition for the validity of the generalized Crewther relation in the form of Eq.(\ref{BK-two-fold}).
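The bookkeeping of Eq.(\ref{explicit}) can be verified mechanically. The following sketch (a cross-check written for this presentation, not taken from the cited works) expands the truncated two-fold series symbolically and confirms that the coefficient of $\overline{a}^{\;4}_s$ reproduces the displayed combination of $\beta$-coefficients:

```python
# Symbolic check that expanding 1 + D_0(a) + sum_n (beta(a)/a)^n D_n(a)
# to O(a^4) reproduces the beta-coefficient combinations of Eq. (explicit).
import sympy as sp

a, b0, b1, b2 = sp.symbols('a beta0 beta1 beta2')
D = sp.IndexedBase('D')  # D[n, k] stands for the coefficient D^{(r)}_{n,k}

beta_over_a = -(b0*a + b1*a**2 + b2*a**3)   # beta(a)/a truncated at three loops

series = 1 + sum(D[0, k]*a**k for k in range(1, 5))
for n in range(1, 4):                        # polynomial D_n has degree 4 - n
    series += beta_over_a**n * sum(D[n, k]*a**k for k in range(1, 5 - n))

c4 = sp.expand(series).coeff(a, 4)           # coefficient of a^4
expected = (D[0, 4] - b0*D[1, 3] - b1*D[1, 2] - b2*D[1, 1]
            + b0**2*D[2, 2] + 2*b0*b1*D[2, 1] - b0**3*D[3, 1])
assert sp.expand(c4 - expected) == 0

c2 = sp.expand(series).coeff(a, 2)           # lower orders check out as well
assert sp.expand(c2 - (D[0, 2] - b0*D[1, 1])) == 0
```

Running the same expansion one order higher yields the $M=5$ combinations used in Sec. 4 below.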
Starting from the four-loop level the coefficients of the NS Adler function in the case of the generic simple gauge group contain contributions of the light-by-light scattering type, proportional to the group structures $d^{abcd}_Fd^{abcd}_A/d_R$ and $d^{abcd}_Fd^{abcd}_Fn_f/d_R$ \cite{Baikov:2010je}.
Here $d^{abcd}_F={\rm{Tr}}(T^aT^{\mathop{\{}b}T^cT^{d\mathop{\}}})/6$ and $d^{abcd}_A={\rm{Tr}}(C^aC^{\mathop{\{}b}C^cC^{d\mathop{\}}})/6$, where
the symbol $\{\dots\}$ stands for the full symmetrization procedure of elements $T^bT^cT^d$ by superscripts $b$, $c$ and $d$ \cite{vanRitbergen:1997va}; $T^a$ are the generators of the representation of fermions, $(C^a)_{bc}=-if^{abc}$ are the generators of the adjoint representation with the antisymmetric structure constants $f^{abc}$ of the Lie algebra: $[T^a,T^b]=if^{abc}T^c$. In the case of the $SU(N_c)$ color gauge group the completeness relation for the generators in the defining representation of its Lie algebra leads to the following expressions for the aforementioned contractions (together with $d^{abcd}_Ad^{abcd}_A/d_R$):
\begin{subequations}
\begin{gather}
\label{color-structures-1}
\frac{d^{abcd}_Fd^{abcd}_F}{d_R}=\frac{(N^4_c-6N^2_c+18)(N^2_c-1)}{96N^3_c}, ~~~~ \frac{d^{abcd}_Fd^{abcd}_A}{d_R}=\frac{(N^2_c+6)(N^2_c-1)}{48}, \\ \frac{d^{abcd}_Ad^{abcd}_A}{d_R}=\frac{N_c(N^2_c+36)(N^2_c-1)}{24}.
\end{gather}
\end{subequations}
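As a quick sanity check (added here for illustration), substituting $N_c=3$ into the contractions above reproduces the standard SU(3) values $5/36$, $5/2$ and $45$:

```python
# Illustrative check that the general-N_c contractions reduce to the
# standard SU(3) values at N_c = 3.
import sympy as sp

Nc = sp.Symbol('N_c', positive=True)
dF_dF = (Nc**4 - 6*Nc**2 + 18)*(Nc**2 - 1)/(96*Nc**3)   # d_F^{abcd}d_F^{abcd}/d_R
dF_dA = (Nc**2 + 6)*(Nc**2 - 1)/48                      # d_F^{abcd}d_A^{abcd}/d_R
dA_dA = Nc*(Nc**2 + 36)*(Nc**2 - 1)/24                  # d_A^{abcd}d_A^{abcd}/d_R

su3 = [e.subs(Nc, 3) for e in (dF_dF, dF_dA, dA_dA)]
assert su3 == [sp.Rational(5, 36), sp.Rational(5, 2), 45]
```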
Although the term $d_F^{abcd}d_F^{abcd}n_f \overline{a}^{\;4}_s/d_R$ in $D^{(4)}_{NS}(\overline{a}_s)$ is proportional
to the number of flavors $n_f$, which formally enters the $\beta_0$-coefficient, we do not include it
in the $D^{(3)}_{1,3}$-coefficient in Eq.(\ref{explicit}), since such an embedding would not be supported by the QED limit \cite{Cvetic:2016rot}. Indeed, in the QED limit of the QCD-like theory with the $SU(N_c)$ group $d_F^{abcd}d^{abcd}_F/d_R=1$ and $n_f=N$, where $N$ is the number of charged leptons (structures with $d^{abcd}_A$ are nullified). This term arises from the five-loop Feynman diagram with light-by-light scattering internal subgraphs (a fermion loop with four photon propagators coming out of it), contributing to the photon vacuum polarization function. However, in QED the sum of these subgraphs is convergent and does
not give an extra $\beta_0$-dependent (or $N$-dependent) contribution to the coefficient $D^{(3)}_{1,3}$ \cite{Cvetic:2016rot}. Therefore, to get a smooth transition from the case of the $U(1)$ to the $SU(N_c)$ gauge group, these light-by-light scattering terms should be included in the $\beta$-independent coefficient $D^{(4)}_{0,4}$ at the four-loop level.
The foregoing enables us to rewrite Eq.(\ref{Drn}) at the four-loop level in the form of an
explicit decomposition in the Casimir operators:
\begin{eqnarray}
\label{Dn_double-sum}
D^{(r)}_n(\overline{a}_s) &=& \sum_{p = 1}^{4-n} \overline{a}^{\;p}_s \sum_{k=1}^p D_n^{(r)}[k,p-k] C_F^k C_A^{p-k} \nonumber \\
&+& \overline{a}^{\;4}_s \delta_{n0} \bigg(D_0^{(4)}[F,A] \frac{d_F^{abcd}d_A^{abcd}}{d_R} +
D_0^{(4)}[F,F] \frac{d_F^{abcd}d_F^{abcd}}{d_R}n_f\bigg),
\end{eqnarray}
where terms with the Kronecker symbol correspond to the light-by-light scattering effects discussed above.
Up to the 4-th order of PT the coefficients $D_n^{(r)}[k, p-k]$ are known analytically in terms of
rational numbers and the odd Riemann $\zeta$-values, namely $\zeta_3$, $\zeta_5$, $\zeta_7$ and $\zeta^2_3$ \cite{Cvetic:2016rot} (see details below). Here, as before, for fixed $n$, $k$ and $p$ the coefficients $D^{(r)}_{n}[k, p-k]\equiv D^{(r+1)}_{n}[k, p-k]\equiv D^{(r+2)}_{n}[k, p-k]\equiv \dots$, e.g. $D^{(2)}_1[2,0]=D^{(3)}_1[2,0]=D^{(4)}_1[2,0]$.
It turns out that the two-fold representation of the NS Adler function, whose detailed form at the $\mathcal{O}(\overline{a}^{\;4}_s)$ level is given in Eq.(\ref{explicit}), is in full agreement with its $\{\beta\}$-expansion structure proposed in Ref.\cite{Mikhailov:2004iq} 12 years earlier than the representation (\ref{D-two-fold}). Indeed, according to this work the coefficients $d_M$ $(1\leq M\leq 4)$ of the NS Adler function
\begin{equation}
\label{simD}
D^{(M=5)}_{NS}(\overline{a}_s)=1+d_1\overline{a}_s+d_2\overline{a}^{\;2}_s+d_3\overline{a}^{\;3}_s+d_4\overline{a}^{\;4}_s+d_5\overline{a}^{\;5}_s + \mathcal{O}(\overline{a}^{\;6}_s)
\end{equation}
may be decomposed in terms of the coefficients of the $\beta(\overline{a}_s)$-function and their definite combinations, taking the following form:
\begin{subequations}
\begin{gather}
\label{d1}
d_1 = d_1[0], \\
\label{d2}
d_2 = \beta_0 d_2[1] + d_2[0], \\
\label{d3}
d_3 = \beta_0^2 d_3[2] + \beta_1 d_3[0,1] + \beta_0 d_3[1] + d_3[0], \\
\label{d4}
d_4 = \beta_0^3 d_4[3] + \beta_1 \beta_0 d_4[1,1] + \beta_2 d_4[0,0,1] + \beta_0^2 d_4[2] + \beta_1 d_4[0,1] + \beta_0 d_4[1] + d_4[0].
\end{gather}
Relations (\ref{d1}-\ref{d4}) are in full compliance with the results of applying the two-fold representation (\ref{explicit}).
Using the two-fold form (\ref{explicit}), the authors of Ref.\cite{Cvetic:2016rot} obtained all $\{\beta\}$-expanded terms in the $d_2$, $d_3$ and $d_4$ coefficients in the $\overline{\rm MS}$-scheme. These results are
presented in Tables \ref{T-d1-3} and \ref{T-d4}.
\begin{table}[h!]
\renewcommand{\tabcolsep}{0.6cm}
\renewcommand{\arraystretch}{1.7}
\centering
\begin{tabular}{|c|c|c|}
\hline
Coefficients & Group structures & Numbers \\ \hline
$d_1[0]$ & $C_F$ & $\frac{3}{4}$ \\ \hline
& $C^2_F$ & $-\frac{3}{32}$ \\ \cline{2-3}
\multirow{-2}{*}{$d_2[0]$}& $C_FC_A$ & $\frac{1}{16}$ \\ \hline
$d_2[1]$ & $C_F$ & $\frac{33}{8}-3\zeta_3$ \\ \hline
& $C^3_F$ & $-\frac{69}{128}$ \\ \cline{2-3}
& $C^2_FC_A$ & $-\frac{101}{256}+\frac{33}{16}\zeta_3$ \\ \cline{2-3}
\multirow{-3}{*}{$d_3[0]$} & $C_FC^2_A$ & $-\frac{53}{192}-\frac{33}{16}\zeta_3$ \\ \hline
\multirow{2}{*}{$d_3[1]$} & $C^2_F$ & $-\frac{111}{64}-12\zeta_3+15\zeta_5$ \\ \cline{2-3}
& $C_FC_A$ & $\frac{83}{32}+\frac{5}{4}\zeta_3-\frac{5}{2}\zeta_5$ \\ \hline
$d_3[0,1]$ & $C_F$ & $\frac{33}{8}-3\zeta_3$ \\ \hline
$d_3[2]$ & $C_F$ & $\frac{151}{6}-19\zeta_3$ \\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{\label{T-d1-3} All terms included in the $\{\beta\}$-expansion of coefficients $d_1$, $d_2$ and $d_3$.}
\end{table}
\begin{table}[h!]
\renewcommand{\tabcolsep}{0.6cm}
\renewcommand{\arraystretch}{1.7}
\centering
\begin{tabular}{|c|c|c|}
\hline
Coefficients & Group structures & Numbers \\ \hline
& $C^4_F$ & $\frac{4157}{2048}+\frac{3}{8}\zeta_3$ \\ \cline{2-3}
& $C^3_FC_A$ & $-\frac{3509}{1536}-\frac{73}{128}\zeta_3-\frac{165}{32}\zeta_5$ \\ \cline{2-3}
& $C^2_FC^2_A$ & $\frac{9181}{4608}+\frac{299}{128}\zeta_3+\frac{165}{64}\zeta_5$ \\ \cline{2-3}
& $C_FC^3_A$ & $-\frac{30863}{36864}-\frac{147}{128}\zeta_3+\frac{165}{64}\zeta_5$ \\ \cline{2-3}
& $\frac{d^{abcd}_Fd^{abcd}_A}{d_R}$ & $\frac{3}{16}-\frac{1}{4}\zeta_3-\frac{5}{4}\zeta_5$ \\ \cline{2-3}
\multirow{-6}{*}{$d_4[0]$} & $\frac{d^{abcd}_Fd^{abcd}_F}{d_R}n_f$ & $-\frac{13}{16}-\zeta_3+\frac{5}{2}\zeta_5$ \\ \hline
& $C^3_F$ & $-\frac{785}{128}-\frac{9}{16}\zeta_3+ \frac{165}{2}\zeta_5-\frac{315}{4}\zeta_7$ \\ \cline{2-3}
& $C^2_FC_A$ & $-\frac{3737}{144}+\frac{3433}{64}\zeta_3-\frac{99}{4}\zeta^2_3-\frac{615}{16}\zeta_5+\frac{315}{8}\zeta_7$ \\ \cline{2-3}
\multirow{-3}{*}{$d_4[1]$} & $C_FC^2_A$ & $-\frac{2695}{384}-\frac{1987}{64}\zeta_3+\frac{99}{4}\zeta^2_3+\frac{175}{32}\zeta_5-\frac{105}{16}\zeta_7$ \\ \hline
& $C^2_F$ & $-\frac{111}{64}-12\zeta_3+15\zeta_5$ \\ \cline{2-3}
\multirow{-2}{*}{$d_4[0,1]$} & $C_FC_A$ & $\frac{83}{32}+\frac{5}{4}\zeta_3-\frac{5}{2}\zeta_5$ \\ \hline
& $C^2_F$ & $-\frac{4159}{384}-\frac{2997}{16}\zeta_3+27\zeta^2_3+\frac{375}{2}\zeta_5$ \\ \cline{2-3}
\multirow{-2}{*}{$d_4[2]$} & $C_FC_A$ & $\frac{14615}{256}+\frac{39}{16}\zeta_3-\frac{9}{2}\zeta^2_3-\frac{185}{4}\zeta_5$ \\ \hline
$d_4[0,0,1]$ & $C_F$ & $\frac{33}{8}-3\zeta_3$ \\ \hline
$d_4[1,1]$ & $C_F$ & $\frac{151}{3}-38\zeta_3$ \\ \hline
$d_4[3]$ & $C_F$ & $\frac{6131}{36}-\frac{203}{2}\zeta_3-45\zeta_5$ \\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{\label{T-d4} All terms included in the $\{\beta\}$-expansion of coefficient $d_4$.}
\end{table}
Note once again that the $\{\beta\}$-expanded terms of the coefficients $d_2$ and $d_3$ are independent of the number of flavors.
As was already discussed above and mentioned in the Appendix of Ref.\cite{Brodsky:2013vpa},
the term $d_4[0]$ contains the $d^{abcd}_Fd^{abcd}_A/d_R$ and $d^{abcd}_Fd^{abcd}_Fn_f/d_R$ contributions of the light-by-light scattering type (see Table \ref{T-d4}), which ensures the correct transition to the QED limit.
Supposing the counterpart of representation (\ref{D-two-fold}) to be true at the $\mathcal{O}(\overline{a}^{\;5}_s)$ level $(M=5, N=4)$, one can obtain the analogous $\{\beta\}$-expansion for the $d_5$-coefficient in (\ref{simD}), which is consistent with the one presented in \cite{Mikhailov:2016feh}:
\begin{eqnarray}
\label{d5}
d_5 &=& \beta^4_0 d_5[4] + \beta_1\beta^2_0 d_5[2,1] + \beta^3_0 d_5[3] + \beta_2 \beta_0 d_5[1,0,1] + \beta^2_1 d_5[0,2]
+ \beta_1 \beta_0 d_5[1,1] \nonumber \\
&+& \beta^2_0 d_5[2] +
\beta_3 d_5[0,0,0,1] + \beta_2 d_5[0,0,1] + \beta_1 d_5[0,1] + \beta_0 d_5[1] + d_5[0].
\end{eqnarray}
\end{subequations}
Moreover, expanding the right-hand side of the analog of Eq.(\ref{D-two-fold}) at $M=5$ in a series in the coupling constant,
one can observe that the different coefficients of the $\beta$-function in various combinations are
multiplied by the same coefficients $D^{(r)}_{n,k}$ for any $n$ (except for $n=0$).
This leads to definite relations between the coefficients of the $\{\beta\}$-expansions (\ref{d2}-\ref{d5}).
One can write them in a tabular-like manner:
\begin{subequations}
\begin{eqnarray}
\label{D_{1,1}}
D_{1,1}^{(1)} &=& -d_2[1] = -d_3[0,1] = -d_4[0,0,1] = -d_5[0,0,0,1], \\
\label{D_{1,2}}
D_{1,2}^{(2)} &=& -d_3[1] = -d_4[0,1] = -d_5[0,0,1], \\
\label{D_{1,3}}
D_{1,3}^{(3)} &=& -d_4[1] = -d_5[0,1]; \\
\label{D_{2,1}}
D_{2,1}^{(1)} &=& d_3[2] = d_4[1,1]/2 = d_5[0,2] = d_5[1,0,1]/2, \\
\label{D_{2,2}}
D_{2,2}^{(2)} &=& d_4[2] = d_5[1,1]/2; \\
\label{D_{3,1}}
D_{3,1}^{(1)} &=& -d_4[3] = - d_5[2,1]/3.
\end{eqnarray}
\end{subequations}
In accordance with the aforesaid, for instance, the coefficients satisfy $D^{(1)}_{1,1}\equiv D^{(2)}_{1,1}\equiv D^{(3)}_{1,1}$ and $D^{(3)}_{1,3}$ \footnote{
It should be noticed that
there is a misprint in formula (9) of the paper \cite{Cvetic:2016rot}: the rational $C^3_F$-coefficient in the expression for $D_1(\overline{a}_s)$ should read $785/128$ instead of $758/128$ (see $d_4[1]$ in Table \ref{T-d4}).}$\equiv D^{(4)}_{1,3}$.
Due to the relations (\ref{D_{1,1}}-\ref{D_{3,1}}) it is possible to restore 7 out of the 12
coefficients in the $\{\beta\}$-expansion of $d_5$ (\ref{d5}). One more, namely $d_5[4]$, is known and corresponds to the renormalon contributions whose general formula was obtained in \cite{Broadhurst:1993ru}.
All $\{\beta\}$-expanded coefficients of $d_5$ obtained in this way are summarized in Table \ref{T-d5}.
\begin{table}[h!]
\renewcommand{\tabcolsep}{0.6cm}
\renewcommand{\arraystretch}{1.8}
\centering
\begin{tabular}{|c|c|c|}
\hline \vspace{-0.2cm}
~~~Coefficients~~~ & ~~~Group structures~~~ & Numbers \\ \hline
\multirow{3}{*}{$d_5[0,1]$} &
$C^3_F$ & $-\frac{785}{128}-\frac{9}{16}\zeta_3+\frac{165}{2}\zeta_5-\frac{315}{4}\zeta_7$ \\ \cline{2-3}
& $C^2_FC_A$ & ~~$-\frac{3737}{144}+\frac{3433}{64}\zeta_3-\frac{99}{4}\zeta^2_3-\frac{615}{16}\zeta_5+\frac{315}{8}\zeta_7$~~ \\ \cline{2-3}
& $C_FC^2_A$ & $-\frac{2695}{384}-\frac{1987}{64}\zeta_3+\frac{99}{4}\zeta^2_3+\frac{175}{32}\zeta_5-\frac{105}{16}\zeta_7$ \\ \hline
\multirow{2}{*}{$d_5[0,0,1]$} & $C^2_F$ & $-\frac{111}{64}-12\zeta_3+15\zeta_5$ \\ \cline{2-3}
& $C_FC_A$ & $\frac{83}{32}+\frac{5}{4}\zeta_3-\frac{5}{2}\zeta_5$ \\ \hline
$d_5[0,0,0,1]$ & $C_F$ & $\frac{33}{8}-3\zeta_3$
\\ \hline
\multirow{2}{*}{$d_5[1,1]$} & $C^2_F$ & $-\frac{4159}{192}-\frac{2997}{8}\zeta_3+54\zeta^2_3+375\zeta_5$ \\ \cline{2-3}
& $C_FC_A$ & $\frac{14615}{128}+\frac{39}{8}\zeta_3-9\zeta^2_3-\frac{185}{2}\zeta_5$ \\ \hline
$d_5[0,2]$ & $C_F$ & $\frac{151}{6}-19\zeta_3$ \\ \hline
$d_5[1,0,1]$ & $C_F$ & $\frac{151}{3}-38\zeta_3$ \\ \hline
$d_5[2,1]$ & $C_F$ & $\frac{6131}{12}-\frac{609}{2}\zeta_3-135\zeta_5$ \\ \hline
$d_5[4]$ & $C_F$ & $\frac{91865}{72}-\frac{4955}{9}\zeta_3-570\zeta_5$ \\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{\label{T-d5} All extracted terms included in the $\{\beta\}$-expansion of coefficient $d_5$.}
\end{table}
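The entries of Tables \ref{T-d1-3}, \ref{T-d4} and \ref{T-d5} can be checked directly against the relations (\ref{D_{2,1}}) and (\ref{D_{3,1}}); the following illustrative consistency test (written for this presentation, not taken from the cited works) confirms that they hold exactly:

```python
# Consistency check of the table entries against the relations
#   D_{2,1}: d_3[2] = d_4[1,1]/2 = d_5[0,2] = d_5[1,0,1]/2,
#   D_{3,1}: d_4[3] = d_5[2,1]/3.
import sympy as sp

z3, z5 = sp.symbols('zeta3 zeta5')
d3_2   = sp.Rational(151, 6) - 19*z3                              # d_3[2]
d4_11  = sp.Rational(151, 3) - 38*z3                              # d_4[1,1]
d5_02  = sp.Rational(151, 6) - 19*z3                              # d_5[0,2]
d5_101 = sp.Rational(151, 3) - 38*z3                              # d_5[1,0,1]
d4_3   = sp.Rational(6131, 36) - sp.Rational(203, 2)*z3 - 45*z5   # d_4[3]
d5_21  = sp.Rational(6131, 12) - sp.Rational(609, 2)*z3 - 135*z5  # d_5[2,1]

assert sp.expand(d4_11 - 2*d3_2) == 0
assert sp.expand(d5_02 - d3_2) == 0
assert sp.expand(d5_101 - 2*d3_2) == 0
assert sp.expand(d5_21 - 3*d4_3) == 0
```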
In an analogous way, assuming the validity of the two-fold representation at the six-loop level, it is possible to restore 11 out of 19 terms (including the corresponding renormalon one from \cite{Broadhurst:1993ru}) for the $\{\beta\}$-expanded coefficient $d_6$ of the $D$-function, etc.
In numerical form, for the case of the $SU(3)$ group, the presented $\{\beta\}$-expanded coefficients read:
\begin{eqnarray}
\label{DNS-beta-concrete}
\nonumber
D^{(M=5)}_{NS}(\overline{a}_s)&=&1+\overline{a}_s+\bigg(0.6918\beta_0+\uuline{0.0833}\bigg)\overline{a}^{\;2}_s +\bigg(3.1035\beta^2_0
+0.6918\beta_1+4.9402\beta_0-\uuline{23.2227}\bigg)\overline{a}^{\;3}_s \\ \nonumber
&+&\bigg(2.1800\beta^3_0+6.2069\beta_0\beta_1
+17.6990\beta^2_0+
0.6918\beta_2+4.9402\beta_1
-101.928\beta_0 \\ \nonumber
&+&\uuline{81.1571}+0.0802n_f\bigg)\overline{a}^{\;4}_s
+\bigg(30.7398\beta^4_0+6.5401\beta^2_0\beta_1
+6.2069\beta_0\beta_2+3.1035\beta^2_1 \\ \nonumber
&+& 35.3981\beta_0\beta_1
+0.6918\beta_3+4.9402\beta_2-101.928\beta_1 \\
&+&\uwave{d_5[3]}\beta^3_0+\uwave{d_5[2]}\beta^2_0+\uwave{d_5[1]}\beta_0+\uwave{d_5[0]}\bigg)\overline{a}^{\;5}_s+\mathcal{O}(\overline{a}^{\;6}_s)
\end{eqnarray}
Thus, 4 wavy-underlined terms out of the 12 possible ones remain undetermined in $d_5$ (although the $\zeta_4$-contributions to $d_5[0], d_5[1], d_5[2]$ are fixed -- see extra clarifications below).
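The numerical SU(3) coefficients in Eq.(\ref{DNS-beta-concrete}) follow from the table entries upon substituting $C_F=4/3$, $C_A=3$. The following cross-check (illustrative, not from the cited works) reproduces the quoted values $0.6918$, $3.1035$ and $4.9402$ to the displayed precision:

```python
# Illustrative numerical check: substituting C_F = 4/3, C_A = 3 into the table
# entries reproduces the SU(3) coefficients quoted in Eq. (DNS-beta-concrete).
from math import isclose

z3 = 1.2020569031595943   # zeta(3)
z5 = 1.0369277551433699   # zeta(5)
CF, CA = 4.0/3.0, 3.0

d2_1 = CF*(33.0/8.0 - 3.0*z3)                    # coefficient of beta0 at a^2
d3_2 = CF*(151.0/6.0 - 19.0*z3)                  # coefficient of beta0^2 at a^3
d3_1 = (CF**2*(-111.0/64.0 - 12.0*z3 + 15.0*z5)
        + CF*CA*(83.0/32.0 + 1.25*z3 - 2.5*z5))  # coefficient of beta0 at a^3

assert isclose(d2_1, 0.6918, abs_tol=1e-3)
assert isclose(d3_2, 3.1035, abs_tol=1e-3)
assert isclose(d3_1, 4.9402, abs_tol=1e-3)
```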
It is worth emphasizing that the first two solid-underlined conformally-invariant $\beta$-independent terms are in agreement with those obtained with the help of the generalized Brodsky-Lepage-Mackenzie\footnote{A somewhat different extension of the BLM method, called ``sequential extended BLM'' (seBLM), was proposed in \cite{Mikhailov:2004iq}, where all $\{\beta\}$-expanded terms are absorbed into the coupling-dependent scale.} (BLM \cite{Brodsky:1982gc}) ${\rm{MS}}$-like scale fixing prescription \cite{Grunberg:1991ac}\footnote{
Note that the term $d_2[0]=0.0833$, obtained within pure QCD, is reproduced in the skeleton approach
\cite{Cvetic:2006gc}
and in the minimally extended SUSY QCD with light gluinos \cite{Kataev:2010du, Mikhailov:2004iq, Mikhailov:2016feh, Kataev:2014jba}, whereas the third-order conformally-invariant contribution $d_3[0]=-23.2227$ differs from the
corresponding results $-27.849$ of \cite{Cvetic:2006gc}
and $-35.8725$ of \cite{Kataev:2010du, Mikhailov:2004iq, Mikhailov:2016feh, Kataev:2014jba}. For comparison with the gluino-extended QCD results see Sec.\ref{Discussion} of this paper.}.
Furthermore, the numerical expression for the 4-th
order conformally-invariant term is confirmed by the
results of the first realization \cite{Brodsky:2011ta} of the ideas of the Principle of Maximum Conformality (PMC/BLM) \cite{Brodsky:2011ig}. Like the seBLM prescription, the PMC is a generalization of the BLM procedure to higher orders of PT, enabling one to systematically absorb all $\beta$-dependent terms of the $d_M$-coefficients into the scale parameter.
These coincidences of the underlined terms with the results of the PMC/BLM procedure may be considered as an argument in favor of the validity of the two-fold representation for the NS Adler function proposed in \cite{Cvetic:2016rot}, at least at the four-loop level.
In the particular case $n_f=4$ the PT expression for the $D_{NS}$-function takes the following numerical form:
\begin{gather}
\label{DNS-n=5}
\hspace{-0.5cm}
D^{(M=5)}_{NS}(\overline{a}_s)\qquad =\qquad 1\qquad +\qquad \overline{a}_s\qquad +\qquad{\underbrace{1.5246}_{\sim \#\beta_0=1.4413}}\overline{a}^{\;^2}_s\qquad +
{\underbrace{2.7590}_{\sim \#\beta^2_0=13.4701 \atop \sim\#\beta^2_0+\#\beta_1+\#\beta_0=25.9817}}\hspace{-0.5cm}\overline{a}^{\;3}_s \\ \nonumber
\qquad + {\underbrace{27.3879}_{
\sim \#\beta^3_0=19.7121 \atop \sim\#\beta^3_0+\#\beta_0\beta_1+\#\beta^2_0+\#\beta_2+\#\beta_1+\#\beta_0=-54.0900}}\hspace{-1.5cm}\overline{a}^{\;4}_s
\qquad\qquad + {\underbrace{d_5}_{
\sim \#\beta^4_0=579.0767 \atop \sim\#\beta^4_0+\#\beta^2_0\beta_1+\#\beta_0\beta_2+\#\beta^2_1+\#\beta_0\beta_1+\#\beta_3+\#\beta_2+\#\beta_1=746.8592}}\hspace{-2.45cm}\overline{a}^{\;5}_s \qquad + \qquad \mathcal{O}(\overline{a}^{\;6}_s),
\end{gather}
where the numbers above the curly brackets are equal to the total $\overline{a}^{\;M}_s$-contributions in the $M$-th order of PT and those below the brackets correspond either to the large-$\beta_0$ approximation or to the sum of the specified $\{\beta\}$-dependent terms shown in Eq.(\ref{DNS-beta-concrete}). It is seen from Eq.(\ref{DNS-n=5}) that in a number of instances the conformally-invariant terms $d_M[0]$ dominate the overall contribution of the $\{\beta\}$-expanded terms, even changing the total sign of the numerical expression for the $d_M$-coefficient. Since the generalizations of the BLM procedure allow one to eliminate all $\{\beta\}$-dependent terms (\ref{DNS-beta-concrete}) by redefining the scale parameter in every order of PT
\cite{Grunberg:1991ac, Mikhailov:2004iq, Brodsky:2011ta, Brodsky:2013vpa,
Kataev:2014jba}, leaving only the conformally-invariant pieces $d_M[0]$ in the expression for $d_M$, it would be interesting to find out the possible asymptotic behavior of these conformal contributions at large $M$.
It is clear that in this case neither the leading large-$\beta_0$ nor the subleading renormalon approximation has any impact on the asymptotics of the terms $d_M[0]$. For the renormalon studies of the perturbative series of various Euclidean quantities in QCD, see e.g. works \cite{Zakharov:1992bx, Beneke:1994qe, Beneke:1998ui} and references therein.
In turn, a non-renormalon mechanism analogous to the Lipatov technique \cite{Lipatov:1976ny} has been studied previously in a number of works (see e.g. \cite{Itzykson:1977mf, Bogomolny:1978ft} and the reviews \cite{Zinn-Justin:1980oco, Kazakov:1980rd}) to
investigate the large-order behavior of PT series in different quantum field models, such as conformal quenched QED. This approach, based on the expansion of the functional-integral representation of various Green functions around non-trivial saddle points, also indicates the factorial growth of the higher-order coefficients of the related PT series. The results of these works may be useful for careful studies of the possible asymptotics of the terms $d_M[0]$, at least in the case of perturbative quenched QED, as proposed in \cite{Itzykson:1977mf}.
\subsection{The case of the $e^+e^-$ annihilation R-ratio}
Physically, the more important quantity is not the Adler $D$-function itself but the directly measurable Minkowskian
characteristic of the $e^+e^-$ annihilation process, namely the $R$-ratio. It is related to the $D$-function
by the K\"allen-Lehmann integral representation (\ref{RtoDint-rel}). Taking into account the running of the coupling constant $\overline{a}_s(s)$ at the four-loop level in the Minkowskian region, which
may be obtained from the solution of Eq.(\ref{beta-exp}) by the transition $Q^2\rightarrow s$ from the Euclidean domain, one arrives at the following analytic correspondence:
\begin{eqnarray}
\label{correspondence}
R^{(M=5)}(\overline{a}_s)&=&D^{(M=5)}(\overline{a}_s)-\frac{\pi^2}{3}d_1\beta^2_0\overline{a}^{\;3}_s-\pi^2\bigg(d_2\beta^2_0+\frac{5}{6}d_1\beta_1\beta_0\bigg)\overline{a}^{\;4}_s \\ \nonumber
&-&\bigg[\pi^2\bigg(2d_3\beta^2_0+\frac{7}{3}d_2\beta_0\beta_1+\frac{1}{2}d_1\beta^2_1+d_1\beta_0\beta_2\bigg)-\frac{\pi^4}{5}d_1\beta^4_0\bigg]\overline{a}^{\;5}_s+\mathcal{O}(\overline{a}^{\;6}_s),
\end{eqnarray}
where the additional terms proportional to powers of $\pi^2$ are the effects of the analytic continuation from the Euclidean to the Minkowskian region. These $\pi^2$-effects may be found in the original works \cite{Kataev:1995vh, Bakulev:2010gm} up to the six-loop level. Thus, it follows from Eq.(\ref{correspondence}) that the difference between the perturbative expressions for the $R$-ratio and the Adler function starts to manifest itself from the third order of PT.
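As an illustrative cross-check of Eq.(\ref{correspondence}) (an editorial verification aid, not part of the derivation), the following Python sketch reproduces the $n_f=4$ values of $r_3$ and $r_4$ quoted below in Eq.(\ref{RNS-n=5}) from the Adler-function coefficients of Eq.(\ref{DNS-n=5}); the standard values of $\beta_0$, $\beta_1$ in the $a_s=\alpha_s/\pi$ normalization are assumed:

```python
import math

# nf = 4 values of the first two MS-bar beta-function coefficients in the
# a_s = alpha_s/pi normalization used in the text (beta0 = 11*C_A/12 - T_F*nf/3)
nf = 4
beta0 = 11 / 4 - nf / 6          # = 25/12
beta1 = 51 / 8 - 19 * nf / 24    # = 77/24

# Adler-function coefficients for nf = 4, taken from Eq. (DNS-n=5)
d1, d2, d3, d4 = 1.0, 1.5246, 2.7590, 27.3879

# pi^2-terms of analytic continuation, Eq. (correspondence)
r3 = d3 - math.pi**2 / 3 * d1 * beta0**2
r4 = d4 - math.pi**2 * (d2 * beta0**2 + 5 / 6 * d1 * beta1 * beta0)

print(round(r3, 3), round(r4, 2))
```

The small residuals with respect to $-11.5201$ and $-92.8916$ come solely from the rounding of the quoted input coefficients.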
Bearing in mind Eq.(\ref{correspondence}), one can conclude that the coefficients $r_n$ of the perturbative expression for the NS contribution to the $R$-ratio in the $M$-th order of approximation
\begin{equation}
R^{(M)}_{NS}(\overline{a}_s)=1+\sum\limits_{n=1}^{M}r_n\overline{a}^{\;n}_s
\end{equation}
will have a $\{\beta\}$-expanded form similar to that of the coefficients of the NS Adler function (\ref{d1}-\ref{d5}). However, some of them receive extra contributions proportional to the $\pi^2$-effects of analytic continuation:
\begin{eqnarray}
\label{fromDtoR}
&& r_3[2] = d_3[2] - \frac{\pi^2}{3} d_1[0], \qquad
r_4[2] = d_4[2] - \pi^2 d_2[0], \qquad r_4[1,1] = d_4[1,1] - \frac{5\pi^2}{6} d_1[0], \\ \nonumber
&& r_4[3] = d_4[3] - \pi^2 d_2[1], \qquad
r_5[2] = d_5[2] - 2 \pi^2 d_3[0], \qquad
r_5[1,1] = d_5[1,1] - \frac{7\pi^2}{3} d_2[0], \\
\nonumber
&& r_5[0,2] = d_5[0,2] - \frac{\pi^2}{2} d_1[0], ~~~
r_5[1,0,1] = d_5[1,0,1] - \pi^2 d_1[0], ~~~ r_5[3] = d_5[3] - 2 \pi^2 d_3[1], \\ \nonumber
&& r_5[2,1] = d_5[2,1] - 2 \pi^2 d_3[0,1] - \frac{7\pi^2}{3} d_2[1], \qquad
r_5[4] = d_5[4] - 2 \pi^2 d_3[2] + \frac{\pi^4}{5} d_1[0].
\end{eqnarray}
All other $\{\beta\}$-expanded terms for $R_{NS}(\overline{a}_s)$, which do not receive the analytic continuation contributions at the $\mathcal{O}(\overline{a}^{\;5}_s)$ level, coincide with their $D_{NS}$-function counterparts, e.g. $r_3[1] = d_3[1]$, $r_5[0,0,1] = d_5[0,0,1]$, etc.
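The relations above follow from Eq.(\ref{correspondence}) by inserting the $\{\beta\}$-expansions (\ref{d1}-\ref{d5}) of $d_2$ and $d_3$ and collecting the coefficients of the independent $\beta$-monomials. This bookkeeping can be automated; the following sympy sketch (an illustrative check, with the sub-coefficients treated as free symbols) verifies the $\mathcal{O}(\overline{a}^{\;5}_s)$ bracket:

```python
import sympy as sp

b0, b1, b2, pi = sp.symbols('b0 b1 b2 pi')
d1 = sp.Symbol('d1_0')                         # d1 = d1[0]
d2_0, d2_1 = sp.symbols('d2_0 d2_1')
d3_0, d3_01, d3_1, d3_2 = sp.symbols('d3_0 d3_01 d3_1 d3_2')

d2 = b0*d2_1 + d2_0                            # Eq. (d2)
d3 = b0**2*d3_2 + b1*d3_01 + b0*d3_1 + d3_0    # Eq. (d3)

# O(a_s^5) analytic-continuation bracket of Eq. (correspondence)
delta5 = -(pi**2*(2*d3*b0**2 + sp.Rational(7, 3)*d2*b0*b1
                  + sp.Rational(1, 2)*d1*b1**2 + d1*b0*b2)
           - pi**4/5*d1*b0**4)

poly = sp.Poly(sp.expand(delta5), b0, b1, b2)
# the coefficient of beta0^i beta1^j beta2^k equals r5[...] - d5[...]
assert sp.expand(poly.coeff_monomial(b0**2) + 2*pi**2*d3_0) == 0
assert sp.expand(poly.coeff_monomial(b0**3) + 2*pi**2*d3_1) == 0
assert sp.expand(poly.coeff_monomial(b0**4) + 2*pi**2*d3_2 - pi**4*d1/5) == 0
assert sp.expand(poly.coeff_monomial(b0**2*b1)
                 + 2*pi**2*d3_01 + sp.Rational(7, 3)*pi**2*d2_1) == 0
assert sp.expand(poly.coeff_monomial(b0*b1) + sp.Rational(7, 3)*pi**2*d2_0) == 0
assert sp.expand(poly.coeff_monomial(b1**2) + sp.Rational(1, 2)*pi**2*d1) == 0
assert sp.expand(poly.coeff_monomial(b0*b2) + pi**2*d1) == 0
```

Each extracted coefficient matches the corresponding difference $r_5[\ldots]-d_5[\ldots]$ in Eq.(\ref{fromDtoR}).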
In numerical form, for the case of the $SU(3)$ color gauge group, these $\{\beta\}$-expanded terms read:
\begin{eqnarray}
\label{RNS-beta-concrete}
\nonumber
R^{(M=5)}_{NS}(\overline{a}_s)&=&1+\overline{a}_s+\bigg(0.6918\beta_0+\uuline{0.0833}\bigg)\overline{a}^{\;2}_s +\bigg(-0.1864\beta^2_0
+0.6918\beta_1+4.9402\beta_0-\uuline{23.2227}\bigg)\overline{a}^{\;3}_s \\ \nonumber
&+&\bigg(-4.6475\beta^3_0-2.0178\beta_0\beta_1
+16.8766\beta^2_0+
0.6918\beta_2+4.9402\beta_1
-101.928\beta_0 \\ \nonumber
&+&\uuline{81.1571}+0.0802n_f\bigg)\overline{a}^{\;4}_s
+\bigg(-11.0380\beta^4_0-23.0458\beta^2_0\beta_1
-3.6627\beta_0\beta_2-1.8314\beta^2_1 \\ \nonumber
&+& 33.4790\beta_0\beta_1
+0.6918\beta_3+4.9402\beta_2-101.928\beta_1 \\
&+&\uwave{r_5[3]}\beta^3_0+\uwave{r_5[2]}\beta^2_0+\uwave{r_5[1]}\beta_0+\uwave{r_5[0]}\bigg)\overline{a}^{\;5}_s+\mathcal{O}(\overline{a}^{\;6}_s).
\end{eqnarray}
Note that the conformally-invariant terms $r_M[0]$ coincide with their analogs $d_M[0]$.
Note also that, starting from the three-loop level (see Eq.(\ref{RNS-beta-concrete})), the $\pi^2$-effects contributing to the $r_M[M-1]$ terms, which are responsible for the asymptotics in the large-$\beta_0$ approximation, make these terms negative, in contrast to the positive values of $d_M[M-1]$ in Eq.(\ref{DNS-beta-concrete}). Let us examine what impact these effects have on the behavior of the PT series for the $R$-ratio in the particular case $n_f=4$:
\begin{gather}
\label{RNS-n=5}
\hspace{-0.5cm}
R^{(M=5)}_{NS}(\overline{a}_s)\qquad =\qquad 1\qquad +\qquad \overline{a}_s\qquad + \qquad {\underbrace{1.5246}_{\sim \#\beta_0=1.4413}}\overline{a}^{\;^2}_s\qquad -
{\underbrace{11.5201}_{
\sim \#\beta^2_0=-0.8090 \atop \sim\#\beta^2_0+\#\beta_1+\#\beta_0=11.7026}}\hspace{-0.5cm}\overline{a}^{\;3}_s \\ \nonumber
\qquad - {\underbrace{92.8916}_{
\sim \#\beta^3_0=-42.0238 \atop \sim\#\beta^3_0+\#\beta_0\beta_1+\#\beta^2_0+\#\beta_2+\#\beta_1+\#\beta_0=-174.3695}}\hspace{-1.5cm}\overline{a}^{\;4}_s
\qquad + {\underbrace{r_5}_{
\sim \#\beta^4_0=-207.9340 \atop \sim\#\beta^4_0+\#\beta^2_0\beta_1+\#\beta_0\beta_2+\#\beta^2_1+\#\beta_0\beta_1+\#\beta_3+\#\beta_2+\#\beta_1=-646.3122}}\hspace{-2.45cm}\overline{a}^{\;5}_s \qquad + \qquad \mathcal{O}(\overline{a}^{\;6}_s).
\end{gather}
First of all, the values of $r_3$ and $r_4$ are negative and exceed in modulus the positive values of $d_3$ and $d_4$ by factors of almost 4 and 3.5, respectively. Secondly, the $\mathcal{O}(\overline{a}^{\;3}_s)$ large-$\beta_0$ contribution to $r_3$ is nearly 14 times smaller than $r_3$ itself, whereas the $\beta^3_0$-term in $r_4$ makes up 45$\%$ of it. In the third and fourth orders of PT the sum of the $\{\beta\}$-dependent terms also differs substantially from the total corrections $r_3$ and $r_4$. This means that the conformally-invariant terms $r_3[0]$ and $r_4[0]$ (\ref{RNS-n=5}) make significant contributions to the overall corrections. Thirdly, in absolute value the large-$\beta_0$ contribution to $r_5$ is three times smaller than the sum of its known $\{\beta\}$-expanded terms. Both of them are comparable in order of magnitude to the similar quantities for the Adler function but are opposite in sign.
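The $n_f=4$ numbers discussed above can be reproduced directly from the $\{\beta\}$-expanded coefficients of Eq.(\ref{RNS-beta-concrete}); a minimal Python sketch (the value of $\beta_2$ in the $a_s=\alpha_s/\pi$ normalization is the standard three-loop one, an input assumed here rather than written out in the text):

```python
# nf = 4 values of the MS-bar beta-function coefficients in the
# a_s = alpha_s/pi normalization used in the text
nf = 4
beta0 = 11/4 - nf/6                                # = 25/12
beta1 = 51/8 - 19*nf/24                            # = 77/24
beta2 = (2857/2 - 5033*nf/18 + 325*nf**2/54)/64    # standard 3-loop value

# {beta}-expanded coefficients of Eq. (RNS-beta-concrete)
r3 = -0.1864*beta0**2 + 0.6918*beta1 + 4.9402*beta0 - 23.2227
r4 = (-4.6475*beta0**3 - 2.0178*beta0*beta1 + 16.8766*beta0**2
      + 0.6918*beta2 + 4.9402*beta1 - 101.928*beta0
      + 81.1571 + 0.0802*nf)

print(round(r3, 4), round(r4, 4))
```

The output agrees with $r_3=-11.5201$ and $r_4=-92.8916$ of Eq.(\ref{RNS-n=5}) up to the rounding of the quoted coefficients.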
\subsection{The case of the Bjorken polarized sum rule}
\label{Bjsub}
Let us move on to the investigation of the second renorm-invariant quantity included in the generalized Crewther relation, namely the Bjorken sum rule for the deep inelastic scattering of polarized leptons on nucleons (\ref{BJPSR}), and more specifically the NS Bjorken coefficient function $C_{NS}(\overline{a}_s)$. The dependence of the Bjorken coefficient function on the squared Euclidean momentum was extracted from the experimental data of e.g. the CLAS Collaboration \cite{Deur:2014vea} (Jefferson Lab) (see also the detailed experimentally-oriented review \cite{Deur:2018roz}) and the COMPASS Collaboration \cite{COMPASS:2016jwv} (CERN), and was utilized for comparison with theoretical predictions of QCD \cite{Kotlorz:2018bxp}. For recent low-energy investigations of the Bjorken sum rule, see \cite{Deur:2021klh}.
The two-fold analog of Eq.(\ref{D-two-fold}) for the NS Bjorken function also enables one to predict some of its five-loop $\{\beta\}$-expanded terms and helps to carry out a partial verification of the generalization of the Crewther relation in the two-fold representation \cite{Kataev:2010dm, Kataev:2010du}.
In full analogy with expressions (\ref{d1}-\ref{d5}), for the coefficients $c_M$ of the NS Bjorken function
\begin{equation}
\label{simC}
C^{(M=5)}_{NS}(\overline{a}_s)=1+c_1\overline{a}_s+c_2\overline{a}^{\;2}_s+c_3\overline{a}^{\;3}_s+c_4\overline{a}^{\;4}_s+c_5\overline{a}^{\;5}_s + \mathcal{O}(\overline{a}^{\;6}_s)
\end{equation}
one can write down:
\begin{subequations}
\begin{gather}
\label{c1}
c_1 = c_1[0], \\
\label{c2}
c_2 = \beta_0 c_2[1] + c_2[0], \\
\label{c3}
c_3 = \beta_0^2 c_3[2] + \beta_1 c_3[0,1] + \beta_0 c_3[1] + c_3[0], \\
\label{c4}
c_4 = \beta_0^3 c_4[3] + \beta_1 \beta_0 c_4[1,1] + \beta_2 c_4[0,0,1] + \beta_0^2 c_4[2] + \beta_1 c_4[0,1] + \beta_0 c_4[1] + c_4[0],
\end{gather}
\begin{gather}
\label{c5}
c_5 = \beta^4_0 c_5[4] + \beta_1\beta^2_0 c_5[2,1] + \beta^3_0 c_5[3] + \beta_2 \beta_0 c_5[1,0,1] + \beta^2_1 c_5[0,2]
+ \beta_1 \beta_0 c_5[1,1] \\ \nonumber
+ \beta^2_0 c_5[2] +
\beta_3 c_5[0,0,0,1] + \beta_2 c_5[0,0,1] + \beta_1 c_5[0,1] + \beta_0 c_5[1] + c_5[0].
\end{gather}
\end{subequations}
Utilizing the analog of the representation (\ref{explicit}) for the non-singlet contribution to the Bjorken polarized sum rule, calculated analytically at the two-, three- and four-loop level in \cite{Gorishnii:1985xm, Larin:1991tj, Baikov:2010je} correspondingly, the authors of work \cite{Cvetic:2016rot} obtained all $\{\beta\}$-expanded terms in the $c_2$, $c_3$ and $c_4$ coefficients in the $\overline{\rm MS}$-scheme\footnote{Note that there is a misprint
in Eq.(14) of Ref.\cite{Cvetic:2016rot}: the
$C_F$-coefficient in the expression for $C_2(\overline{a}_s)$ should be $-115/24$ instead of $-151/24$ (see $c_3[2]$ in Table \ref{T-c1-3}).}. These results are given in Tables \ref{T-c1-3} and \ref{T-c4}.
\begin{table}[h!]
\renewcommand{\tabcolsep}{0.6cm}
\renewcommand{\arraystretch}{1.7}
\centering
\begin{tabular}{|c|c|c|}
\hline
Coefficients & Group structures & Numbers \\ \hline
$c_1[0]$ & $C_F$ & $-\frac{3}{4}$ \\ \hline
& $C^2_F$ & $\frac{21}{32}$ \\ \cline{2-3}
\multirow{-2}{*}{$c_2[0]$}& $C_FC_A$ & $-\frac{1}{16}$ \\ \hline
$c_2[1]$ & $C_F$ & $-\frac{3}{2}$ \\ \hline
& $C^3_F$ & $-\frac{3}{128}$ \\ \cline{2-3}
& $C^2_FC_A$ & $\frac{125}{256}-\frac{33}{16}\zeta_3$ \\ \cline{2-3}
\multirow{-3}{*}{$c_3[0]$} & $C_FC^2_A$ & $\frac{53}{192}+\frac{33}{16}\zeta_3$ \\ \hline
\multirow{2}{*}{$c_3[1]$} & $C^2_F$ & $\frac{349}{192}+\frac{5}{4}\zeta_3$ \\ \cline{2-3}
& $C_FC_A$ & $-\frac{155}{96}-\frac{9}{4}\zeta_3+\frac{5}{2}\zeta_5$ \\ \hline
$c_3[0,1]$ & $C_F$ & $-\frac{3}{2}$ \\ \hline
$c_3[2]$ & $C_F$ & $-\frac{115}{24}$ \\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{\label{T-c1-3} All terms included in the $\{\beta\}$-expansion of coefficients $c_1$, $c_2$ and $c_3$.}
\end{table}
\begin{table}[h!]
\renewcommand{\tabcolsep}{0.6cm}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{|c|c|c|}
\hline
Coefficients & Group structures & Numbers \\ \hline
& $C^4_F$ & $-\frac{4823}{2048}-\frac{3}{8}\zeta_3$ \\ \cline{2-3}
& $C^3_FC_A$ & $\frac{605}{384}+\frac{469}{128}\zeta_3+\frac{165}{32}\zeta_5$ \\ \cline{2-3}
& $C^2_FC^2_A$ & $-\frac{11071}{4608}-\frac{695}{128}\zeta_3-\frac{165}{64}\zeta_5$ \\ \cline{2-3}
& $C_FC^3_A$ & $\frac{30863}{36864}+\frac{147}{128}\zeta_3-\frac{165}{64}\zeta_5$ \\ \cline{2-3}
& $\frac{d^{abcd}_Fd^{abcd}_A}{d_R}$ & $-\frac{3}{16}+\frac{1}{4}\zeta_3+\frac{5}{4}\zeta_5$ \\ \cline{2-3}
\multirow{-6}{*}{$c_4[0]$} & $\frac{d^{abcd}_Fd^{abcd}_F}{d_R}n_f$ & $\frac{13}{16}+\zeta_3-\frac{5}{2}\zeta_5$ \\ \hline
& $C^3_F$ & $-\frac{997}{384}-\frac{481}{32}\zeta_3+ \frac{145}{8}\zeta_5$ \\ \cline{2-3}
& $C^2_FC_A$ & $\frac{85801}{4608}+\frac{169}{24}\zeta_3-\frac{365}{48}\zeta_5-\frac{105}{4}\zeta_7$ \\ \cline{2-3}
\multirow{-3}{*}{$c_4[1]$} & $C_FC^2_A$ & $-\frac{931}{768}+\frac{955}{192}\zeta_3+\frac{895}{96}\zeta_5+\frac{105}{16}\zeta_7$ \\ \hline
& $C^2_F$ & $\frac{349}{192}+\frac{5}{4}\zeta_3$ \\ \cline{2-3}
\multirow{-2}{*}{$c_4[0,1]$} & $C_FC_A$ & $-\frac{155}{96}-\frac{9}{4}\zeta_3+\frac{5}{2}\zeta_5$ \\ \hline
& $C^2_F$ & $\frac{261}{64}+\frac{87}{8}\zeta_3$ \\ \cline{2-3}
\multirow{-2}{*}{$c_4[2]$} & $C_FC_A$ & $-\frac{3151}{256}-\frac{43}{16}\zeta_3-\frac{3}{2}\zeta^2_3+\frac{15}{4}\zeta_5$ \\ \hline
$c_4[0,0,1]$ & $C_F$ & $-\frac{3}{2}$ \\ \hline
$c_4[1,1]$ & $C_F$ & $-\frac{115}{12}$ \\ \hline
$c_4[3]$ & $C_F$ & $-\frac{605}{36}$ \\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{\label{T-c4} All terms included in the $\{\beta\}$-expansion of coefficient $c_4$.}
\end{table}
As in the case of the $d_4[0]$-term for the NS Adler function, the conformally-invariant coefficient $c_4[0]$ also contains $d^{abcd}_Fd^{abcd}_A/d_R$ and $d^{abcd}_Fd^{abcd}_Fn_f/d_R$ contributions of the light-by-light scattering type, exactly the same as in $d_4[0]$ but with the opposite sign. Thus, they are safely canceled out, due to the conformal symmetry relations, in the product of the NS Adler and Bjorken functions in the generalized Crewther relation, as was observed previously in \cite{Kataev:2010du}.
Acting in full analogy with the case of the Adler function in Subsection \ref{SubSecAdler}, we obtain the analogous fifth-order $\{\beta\}$-expanded terms for the Bjorken polarized sum rule, presented in Table \ref{T-c5}.
\begin{table}[h!]
\renewcommand{\tabcolsep}{0.6cm}
\renewcommand{\arraystretch}{1.8}
\centering
\begin{tabular}{|c|c|c|}
\hline
~~~Coefficients~~~ & ~~~Group structures~~~ & Numbers \\ \hline
\multirow{3}{*}{$c_5[0,1]$} &
$C^3_F$ & $-\frac{997}{384}-\frac{481}{32}\zeta_3+\frac{145}{8}\zeta_5$ \\ \cline{2-3}
& $C^2_FC_A$ & ~~$\frac{85801}{4608}+\frac{169}{24}\zeta_3-\frac{365}{48}\zeta_5-\frac{105}{4}\zeta_7$~~ \\ \cline{2-3}
& $C_FC^2_A$ & $-\frac{931}{768}+\frac{955}{192}\zeta_3+\frac{895}{96}\zeta_5+\frac{105}{16}\zeta_7$ \\ \hline
\multirow{2}{*}{$c_5[0,0,1]$} & $C^2_F$ & $\frac{349}{192}+\frac{5}{4}\zeta_3$ \\ \cline{2-3}
& $C_FC_A$ & $-\frac{155}{96}-\frac{9}{4}\zeta_3+\frac{5}{2}\zeta_5$ \\ \hline
$c_5[0,0,0,1]$ & $C_F$ & $-\frac{3}{2}$
\\ \hline
\multirow{2}{*}{$c_5[1,1]$} & $C^2_F$ & $\frac{261}{32}+\frac{87}{4}\zeta_3$ \\ \cline{2-3}
& $C_FC_A$ & $-\frac{3151}{128}-\frac{43}{8}\zeta_3-3\zeta^2_3+\frac{15}{2}\zeta_5$ \\ \hline
$c_5[0,2]$ & $C_F$ & $-\frac{115}{24}$ \\ \hline
$c_5[1,0,1]$ & $C_F$ & $-\frac{115}{12}$ \\ \hline
$c_5[2,1]$ & $C_F$ & $-\frac{605}{12}$ \\ \hline
$c_5[4]$ & $C_F$ & $-\frac{1867}{24}$
\\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{\label{T-c5} All extracted terms included in the $\{\beta\}$-expansion of coefficient $c_5$.}
\end{table}
Note that the leading renormalon contribution $c_5[4]$ was extracted by us from the results of work \cite{Broadhurst:1993ru}. Thus, in the case of the NS Bjorken function, with the help of its two-fold representation at the five-loop level, we can also determine 8 out of the 12 coefficients in the $\{\beta\}$-expansion of $c_5$ (\ref{c5}).
In numerical form, for the case of the $SU(3)$ group, the presented $\{\beta\}$-expanded coefficients read:
\begin{eqnarray}
\label{CNS-beta-concrete}
\nonumber
C^{(M=5)}_{NS}(\overline{a}_s)&=&1-\overline{a}_s+\bigg(-2\beta_0+\uuline{0.9167}\bigg)\overline{a}^{\;2}_s +\bigg(-6.3889\beta^2_0
-2\beta_1-1.0048\beta_0+\uuline{22.3894}\bigg)\overline{a}^{\;3}_s \\ \nonumber
&+&\bigg(-22.4074\beta^3_0-12.7778\beta_0\beta_1
-24.7824\beta^2_0-2\beta_2-1.0048\beta_1
+209.4096\beta_0 \\ \nonumber
&-&\uuline{126.8456}-0.0802n_f\bigg)\overline{a}^{\;4}_s
+\bigg(-103.7222\beta^4_0-67.2222\beta^2_0\beta_1
-12.7778\beta_0\beta_2 \\ \nonumber
&-& 6.3889\beta^2_1-49.5649\beta_0\beta_1 -2\beta_3-1.0048\beta_2
+209.4096\beta_1 \\
&+&\uwave{c_5[3]}\beta^3_0+\uwave{c_5[2]}\beta^2_0+\uwave{c_5[1]}\beta_0+\uwave{c_5[0]}\bigg)\overline{a}^{\;5}_s+\mathcal{O}(\overline{a}^{\;6}_s).
\end{eqnarray}
Here 4 wavy-underlined terms out of the 12 possible ones remain undetermined (though with fixed $\zeta_4$-contributions to $c_5[0], c_5[1], c_5[2]$ -- see the extra clarifications below). One should mention that the values of the conformally-invariant double-underlined terms $c_2[0]=0.9167$ and $c_3[0]=22.3894$ (\ref{CNS-beta-concrete}) are in agreement with those obtained in \cite{Kataev:1992jm} for the Gross-Llewellyn-Smith sum rule of neutrino-nucleon deep-inelastic scattering, whose non-singlet part is identical to the NS Bjorken polarized sum rule. In that work all $n_f$-dependent terms are absorbed into the $\overline{a}_s$-dependent scale in each considered order up to the three-loop level.
For the specific case $n_f=4$ the PT expression for the NS Bjorken polarized sum rule has the following numerical form:
\begin{gather}
\label{CNS-n=5}
\hspace{-0.5cm}
C_{NS}(\overline{a}_s)\qquad =\qquad 1\qquad -\qquad \overline{a}_s\qquad -\qquad{\underbrace{3.2500}_{\sim \#\beta_0=-4.1667}}\overline{a}^{\;^2}_s\qquad -
{\underbrace{13.8502}_{
\sim \#\beta^2_0=-27.7296 \atop \sim\#\beta^2_0+\#\beta_1+\#\beta_0=-36.2396}}\hspace{-0.5cm}\overline{a}^{\;3}_s \\ \nonumber
\qquad - {\underbrace{102.4015}_{
\sim \#\beta^3_0=-202.6132 \atop \sim\#\beta^3_0+\#\beta_0\beta_1+\#\beta^2_0+\#\beta_2+\#\beta_1+\#\beta_0=24.7649}}\hspace{-1.5cm}\overline{a}^{\;4}_s
\qquad\qquad + {\underbrace{c_5}_{
\sim \#\beta^4_0=-1953.9200 \atop \sim\#\beta^4_0+\#\beta^2_0\beta_1+\#\beta_0\beta_2+\#\beta^2_1+\#\beta_0\beta_1+\#\beta_3+\#\beta_2+\#\beta_1=-2853.3681}}\hspace{-2.45cm}\overline{a}^{\;5}_s \qquad + \qquad \mathcal{O}(\overline{a}^{\;6}_s).
\end{gather}
As seen from Eq.(\ref{CNS-n=5}), the contributions predicted by the large-$\beta_0$ approximation yield plausible estimates for the actual values of the coefficients $c_M$. This observation also holds for $n_f=5$. In contrast to the Bjorken polarized sum rule, for the NS Adler function the large-$\beta_0$ approximation works worse starting from the three-loop level at $n_f=5$. This fact is in agreement with the renormalon calculus \cite{Beneke:1998ui}.
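As for the $R$-ratio, the $n_f=4$ numbers of Eq.(\ref{CNS-n=5}) follow from the $\{\beta\}$-expanded coefficients of Eq.(\ref{CNS-beta-concrete}); a short numerical sketch (again assuming the standard $\beta_0$, $\beta_1$, $\beta_2$ values in the $a_s=\alpha_s/\pi$ normalization):

```python
# nf = 4 values of the MS-bar beta-function coefficients,
# a_s = alpha_s/pi normalization
nf = 4
beta0 = 11/4 - nf/6
beta1 = 51/8 - 19*nf/24
beta2 = (2857/2 - 5033*nf/18 + 325*nf**2/54)/64

# {beta}-expanded coefficients of Eq. (CNS-beta-concrete)
c2 = -2*beta0 + 0.9167
c3 = -6.3889*beta0**2 - 2*beta1 - 1.0048*beta0 + 22.3894
c4 = (-22.4074*beta0**3 - 12.7778*beta0*beta1 - 24.7824*beta0**2
      - 2*beta2 - 1.0048*beta1 + 209.4096*beta0
      - 126.8456 - 0.0802*nf)

print(round(c2, 4), round(c3, 4), round(c4, 4))
```

The values reproduce $c_2=-3.2500$, $c_3=-13.8502$ and $c_4=-102.4015$ of Eq.(\ref{CNS-n=5}) up to the rounding of the inputs.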
Finishing this section, we conclude that the joint usage of the two-fold representation for the NS Adler function, the $R$-ratio and the Bjorken polarized sum rule together with their $\{\beta\}$-expanded form at the five-loop order allows one to fix definite previously unknown $\beta$-dependent non-conformal terms at the fifth order of PT. Knowledge of the explicit form of these contributions may be important, for instance, for the implementation of more refined estimates or calculations of these corrections. Moreover, at the fifth order of PT there is an additional approach enabling one to fix the Riemann $\zeta_4$-contributions and their Lie group structures
in 3 out of the 4 remaining undetermined $\{\beta\}$-expanded terms (see \cite{Goriachuk:2020oah}). Let us consider this statement in more detail.
\section{Riemann $\zeta_4$-contributions to $d_5$ and $c_5$-coefficients}
The remarkable all-order no-$\pi$ theorem was proved in the work \cite{Baikov:2018wgs} and its consequences were summarized in Ref.\cite{Baikov:2019zmy} for the generic one-charge theory and for QCD in particular. For instance, it guarantees the $\pi$-independence of the QCD {\it{scale-invariant}} five-loop function corresponding to every renormalized Green function or two-point correlator defined in Euclidean space. In the context of the $e^+e^-$ annihilation process this fact provides a link \cite{Baikov:2018wgs} between the $\pi^2$-dependent $\mathcal{O}(\overline{a}^{\;5}_s)$-contribution to the NS Adler function and the five-loop coefficient $\beta_4$. As known from explicit calculations \cite{Baikov:2016tgj, Herzog:2017ohr, Luthe:2017ttc}, the coefficient $\beta_4$
contains only one even Riemann zeta-function, namely the $\zeta_4=\pi^4/90$ term. Thus, the $\overline{\rm MS}$-scheme coefficient $d_5$ will also contain this transcendental constant of weight 4. This completely explains the conjecture \cite{Jamin:2017mul, Davies:2017hyl} on the first appearance of $\zeta_4$-terms in the PT expression for the NS Adler function at the fifth order and on
the absence of even Riemann zeta-functions in all higher-order PT corrections to the RG-invariant Euclidean quantities calculated in the effective renormalization $C$-scheme.
This specific QCD scheme was proposed in \cite{Boito:2016pwf}. Its characteristic feature is that the $\mu^2_C$-scale evolution of the coupling constant $a^C_s$ is governed by the $\beta^C$-function, which depends on the two coefficients $\beta_0$ and $\beta_1$ only, in the following way:
\begin{equation}
\label{beta-C}
\beta^C(a^C_s)=\mu^2_C\frac{\partial a^C_s}{\partial \mu^2_C}=\frac{-\beta_0(a^C_s)^2}{1-(\beta_1/\beta_0)a^C_s}.
\end{equation}
The transition from the $C$-scheme to the $\overline{\rm MS}$-scheme can be implemented with the help of a finite renormalization \cite{Boito:2016pwf, Jamin:2017mul}.
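Expanding Eq.(\ref{beta-C}) in powers of $a^C_s$ makes explicit that all higher coefficients of the $C$-scheme $\beta$-function are fixed by $\beta_0$ and $\beta_1$ alone, e.g. $\beta^C_2=\beta^2_1/\beta_0$, $\beta^C_3=\beta^3_1/\beta^2_0$. A short sympy illustration:

```python
import sympy as sp

a, b0, b1 = sp.symbols('a beta0 beta1', positive=True)
betaC = -b0*a**2/(1 - (b1/b0)*a)   # Eq. (beta-C)

series = sp.expand(sp.series(betaC, a, 0, 6).removeO())
# beta^C = -b0 a^2 - b1 a^3 - (b1^2/b0) a^4 - (b1^3/b0^2) a^5 - ...
assert sp.simplify(series.coeff(a, 2) + b0) == 0
assert sp.simplify(series.coeff(a, 3) + b1) == 0
assert sp.simplify(series.coeff(a, 4) + b1**2/b0) == 0
assert sp.simplify(series.coeff(a, 5) + b1**3/b0**2) == 0
```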
Along with the results of the direct examinations of \cite{Baikov:2018wgs}, this indirect
procedure also enables one to restore unambiguously the explicit form of the coefficients and group structures proportional to the $\zeta_4$-terms in the $\overline{\rm MS}$-scheme expressions for $D_{NS}$ and $C_{NS}$. In order to obtain their form for the case of the generic simple gauge group, we will adhere to the second way, which is in our opinion more transparent, supplemented by the effective charges approach \cite{Grunberg:1982fw, Kataev:1995vh}. Naturally, this roundabout way confirms the results following from the work \cite{Baikov:2018wgs}.
Using the effective charges approach, one can get the following five-loop coefficient of the effective $\beta$-function, constructed for the effective coupling of the Adler function:
\begin{eqnarray}
\label{beta4eff}
\beta^{eff}_4&=&\beta_4-3\beta_3\frac{d_2}{d_1}+\beta_2\bigg(4\frac{d^2_2}{d^2_1}-\frac{d_3}{d_1}\bigg)+\beta_1\bigg(\frac{d_4}{d_1}-2\frac{d_2d_3}{d^2_1}\bigg) \\ \nonumber
&+&\beta_0\bigg(3\frac{d_5}{d_1}-12\frac{d_2d_4}{d^2_1}-5\frac{d^2_3}{d^2_1}+28\frac{d^2_2d_3}{d^3_1}-14\frac{d^4_2}{d^4_1}\bigg).
\end{eqnarray}
To obtain the analog of this expression for the Bjorken polarized sum rule it is necessary to replace $d_n\rightarrow c_n$.
Formula (\ref{beta4eff}) is valid both in the $C$-scheme and in the $\overline{\rm MS}$-one. On the one hand, in the $C$-scheme the coefficients $d_n$ do not contain $\zeta_4$-contributions \cite{Jamin:2017mul, Baikov:2018wgs, Baikov:2019zmy} and the $\beta_n$ also do not include them, because they are expressed through $\beta_0$ and $\beta_1$ only (\ref{beta-C}). Thus, in the $C$-scheme the $\beta^{eff}_4$-coefficient is free of $\zeta_4$. On the other hand, in the $\overline{\rm MS}$-scheme the r.h.s. of Eq.(\ref{beta4eff}) contains this Riemann constant only in the five-loop coefficient $\beta_4$ and in the coefficient $d_5$. The crucial point here is the scheme invariance of the coefficients of the $\beta^{eff}$-function \cite{Stevenson:1981vj}, constructed by means of the effective charges approach within gauge-independent renormalization schemes. Therefore, the expressions for $\beta^{eff}_4$ obtained in the $C$- and $\overline{\rm MS}$-schemes should coincide. This leads to the following equality, relating the $\zeta_4$-contributions to the $\overline{\rm MS}$ coefficients $\beta_4$ and $d_5$, which was also derived previously in \cite{Baikov:2018wgs}:
\begin{equation}
\label{relation-zeta4}
0=\beta^{(\zeta_4)}_4+3\beta_0\frac{d^{(\zeta_4)}_5}{d_1} \qquad \Rightarrow \qquad d^{(\zeta_4)}_5=-d_1\frac{\beta^{(\zeta_4)}_4}{3\beta_0}.
\end{equation}
Moreover, it follows from \cite{Baikov:2018wgs} that the $\zeta_4$-part of the five-loop $\overline{\rm MS}$-scheme coefficient $\beta_4$ is divisible by $\beta_0$ without remainder and is expressed through the $\zeta_3$-part of $\beta_3$:
\begin{equation}
\label{ratio-4-0}
\frac{\beta^{(\zeta_4)}_4}{\zeta_4\beta_0}=-\frac{9}{8}\frac{\beta^{(\zeta_3)}_3}{\zeta_3}.
\end{equation}
Substituting ratio (\ref{ratio-4-0}) into Eq.(\ref{relation-zeta4}), one can obtain
\begin{equation}
\label{d5-zeta-3}
d^{(\zeta_4)}_5=\frac{3}{8}d_1\frac{\zeta_4}{\zeta_3}\beta^{(\zeta_3)}_3.
\end{equation}
Taking into account Eq.(\ref{d5-zeta-3}) and the explicit expression for the $\zeta_3$-part of $\beta_3$
\cite{vanRitbergen:1997va, Czakon:2004bu}
in the generic simple gauge group, and replacing there, with the help of the equality $T_FN_A=C_Fd_R$, the factor $N_A$, which enters in combination with the group structures $d^{abcd}_Fd^{abcd}_F$, $d^{abcd}_Fd^{abcd}_A$, $d^{abcd}_Ad^{abcd}_A$ and is equal to the dimension of the adjoint representation ($N_A=N^2_c-1$ for the $SU(N_c)$-group), we find the $\zeta_4$-contributions to the coefficients $d_5$ and $c_5$ in a more traditional form, involving $d_R$ rather than $N_A$ (see also \cite{Goriachuk:2020oah}):
\begin{eqnarray}
\label{d5-c5-z4}
d^{(\zeta_4)}_5=-c^{(\zeta_4)}_5&=&\zeta_4\bigg(-\frac{11}{2048}C_FC^4_A+\frac{11}{256}C^3_FC_AT_Fn_f-\frac{41}{512}C^2_FC^2_AT_Fn_f \\ \nonumber
&+&\frac{51}{1024}C_FC^3_AT_Fn_f
-\frac{11}{128}C^3_FT^2_Fn^2_f+\frac{7}{128}C^2_FC_AT^2_Fn^2_f+\frac{7}{256}C_FC^2_AT^2_Fn^2_f \\ \nonumber
&+&\frac{33}{128}\frac{d^{abcd}_Ad^{abcd}_A}{d_R}T_F
-\frac{39}{64}\frac{d^{abcd}_Fd^{abcd}_A}{d_R}T_Fn_f+\frac{3}{16}\frac{d^{abcd}_Fd^{abcd}_F}{d_R}T_Fn^2_f\bigg).
\end{eqnarray}
In the case of $N_c=3$ this expression takes the following simple form
\begin{equation}
\label{d5-c5-z4-N=3}
d^{(\zeta_4)}_5=-c^{(\zeta_4)}_5=\zeta_4\bigg(\frac{2673}{512}-\frac{1627}{4608}n_f+\frac{809}{6912}n^2_f\bigg),
\end{equation}
which coincides with the result for the $\zeta_4$-corrections to the $D$-function of $SU(3)$ QCD obtained in \cite{Jamin:2017mul}.
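Equation (\ref{d5-c5-z4-N=3}) can be cross-checked with exact fractions: inserting into Eq.(\ref{d5-zeta-3}) the $\zeta_3$-part of the standard four-loop $\overline{\rm MS}$ $\beta$-function for $SU(3)$ \cite{vanRitbergen:1997va}, rescaled by $1/4^4$ to the $a_s=\alpha_s/\pi$ normalization used here (an input assumed by this sketch), reproduces the $n_f$-polynomial:

```python
from fractions import Fraction as F

# zeta3-part of beta3 for SU(3) in the a_s = alpha_s/(4 pi) normalization
# (van Ritbergen et al.), rescaled by 1/4^4 to the alpha_s/pi convention
def beta3_zeta3(nf):
    return (F(3564) - F(6508, 27)*nf + F(6472, 81)*nf**2) / 256

d1 = F(1)
for nf in range(7):
    # Eq. (d5-zeta-3): d5^(zeta4) = (3/8) d1 (zeta4/zeta3) beta3^(zeta3);
    # both sides below are the coefficients of zeta4
    lhs = F(3, 8) * d1 * beta3_zeta3(nf)
    rhs = F(2673, 512) - F(1627, 4608)*nf + F(809, 6912)*nf**2
    assert lhs == rhs
```

The identity holds exactly for every $n_f$, as it must, since both sides are quadratic polynomials in $n_f$.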
Let us turn to the determination of the fifth-order $\{\beta\}$-expanded terms in the NS Adler function and the Bjorken polarized sum rule containing $\zeta_4$-contributions. As is seen from Eq.(\ref{d5-c5-z4}), the coefficients $d^{(\zeta_4)}_5$ and $c^{(\zeta_4)}_5$ contain contributions quadratic in $n_f$. Relying on the $\{\beta\}$-expansion representation for $d_5$ (\ref{d5}) and $c_5$ (\ref{c5}), one could conclude that in principle the decomposition of $d^{(\zeta_4)}_5$ and $c^{(\zeta_4)}_5$ may involve the terms $\beta^2_1$, $\beta_1\beta_0$, $\beta^2_0$, $\beta_2$, $\beta_1$, $\beta_0$ and a $\beta$-independent one. However, the contributions proportional to $\beta^2_1$, $\beta_1\beta_0$, $\beta_2$ and $\beta_1$, namely $d_5[0,2]$, $d_5[1,1]$, $d_5[0,0,1]$ and $d_5[0,1]$ (and their Bjorken counterparts), have already been determined (see Table \ref{T-d5}, Eq.(\ref{DNS-beta-concrete}) and Table \ref{T-c5}, Eq.(\ref{CNS-beta-concrete})) with the help of the two-fold representation. These terms are free of the $\zeta_4$-function. Therefore, we may write down the following expansion:
\begin{equation}
\label{d5-z4-beta}
d_5^{(\zeta_4)} = \beta^2_0 d_5^{(\zeta_4)}[2] + \beta_0 d_5^{(\zeta_4)}[1] + d_5^{(\zeta_4)}[0].
\end{equation}
Following the idea outlined in \cite{Goriachuk:2020oah}, i.e. decomposing the terms $d_5^{(\zeta_4)}[2]$, $d_5^{(\zeta_4)}[1]$, $d_5^{(\zeta_4)}[0]$ into the
Lie group structures, using the result (\ref{d5-c5-z4}) and the explicit form $\beta_0=11C_A/12-T_Fn_f/3$, we arrive at a system of linear equations, whose exact solution is:
\begin{subequations}
\begin{eqnarray}
\label{d5-z4-0}
d_5^{(\zeta_4)}[0] = - c_5^{(\zeta_4)}[0] &=& \zeta_4 \bigg(\frac{693}{2048}C_FC^4_A +
\frac{99}{512} C^2_FC^3_A - \frac{1089}{2048}C^3_FC^2_A \\ \nonumber
&+& \frac{33}{128} \frac{d_A^{abcd}d_A^{abcd}}{d_R} T_F - \frac{429}{256}\frac{d_F^{abcd}d_A^{abcd}}{d_R} C_A +
\frac{33}{64} \frac{d_F^{abcd}d_F^{abcd}}{d_R}C_A n_f \bigg), \\
\label{d5-z4-1}
d_5^{(\zeta_4)}[1] = - c_5^{(\zeta_4)}[1] &=& \zeta_4 \bigg(-\frac{615}{1024} C_FC^3_A -
\frac{339}{512} C^2_FC^2_A + \frac{165}{128} C^3_FC_A \\ \nonumber
&+& \frac{117}{64}\frac{d_F^{abcd}d_A^{abcd}}{d_R} - \frac{9}{16}\frac{d_F^{abcd}d_F^{abcd}}{d_R} n_f\bigg), \\
\label{d5-z4-2}
d_5^{(\zeta_4)}[2] = - c_5^{(\zeta_4)}[2] &=& \zeta_4 \bigg(\frac{63}{256} C_FC^2_A +
\frac{63}{128} C^2_FC_A - \frac{99}{128} C^3_F\bigg).
\end{eqnarray}
\end{subequations}
These results reproduce those presented in \cite{Goriachuk:2020oah}. However, in that work the two-fold representation was not applied, and the absence in the expansion (\ref{d5-z4-beta}) of other $\{\beta\}$-dependent terms, different from the $\beta_0$ and $\beta^2_0$ ones, was demonstrated there in a more complicated way.
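The solution (\ref{d5-z4-0})-(\ref{d5-z4-2}) can also be verified symbolically: substituting $\beta_0=11C_A/12-T_Fn_f/3$ and expanding $\beta^2_0 d^{(\zeta_4)}_5[2]+\beta_0 d^{(\zeta_4)}_5[1]+d^{(\zeta_4)}_5[0]$ reproduces Eq.(\ref{d5-c5-z4}) term by term. An illustrative sympy sketch, with the $d^{abcd}$-contractions treated as independent symbols (the equality $T_FN_A=C_Fd_R$ has already been used in both expressions):

```python
import sympy as sp

CF, CA, TF, nf = sp.symbols('CF CA TF nf')
dAA, dFA, dFF = sp.symbols('dAA dFA dFF')   # d^{abcd}d^{abcd}/d_R contractions
R = sp.Rational

b0 = R(11, 12)*CA - TF*nf/3

# Eqs. (d5-z4-0), (d5-z4-1), (d5-z4-2), coefficients of zeta4
d5_0 = (R(693, 2048)*CF*CA**4 + R(99, 512)*CF**2*CA**3 - R(1089, 2048)*CF**3*CA**2
        + R(33, 128)*dAA*TF - R(429, 256)*dFA*CA + R(33, 64)*dFF*CA*nf)
d5_1 = (-R(615, 1024)*CF*CA**3 - R(339, 512)*CF**2*CA**2 + R(165, 128)*CF**3*CA
        + R(117, 64)*dFA - R(9, 16)*dFF*nf)
d5_2 = R(63, 256)*CF*CA**2 + R(63, 128)*CF**2*CA - R(99, 128)*CF**3

# Eq. (d5-c5-z4), coefficient of zeta4
total = (R(-11, 2048)*CF*CA**4 + R(11, 256)*CF**3*CA*TF*nf
         - R(41, 512)*CF**2*CA**2*TF*nf + R(51, 1024)*CF*CA**3*TF*nf
         - R(11, 128)*CF**3*TF**2*nf**2 + R(7, 128)*CF**2*CA*TF**2*nf**2
         + R(7, 256)*CF*CA**2*TF**2*nf**2
         + R(33, 128)*dAA*TF - R(39, 64)*dFA*TF*nf + R(3, 16)*dFF*TF*nf**2)

assert sp.expand(b0**2*d5_2 + b0*d5_1 + d5_0 - total) == 0
```

In particular, the $C_A d^{abcd}_Fd^{abcd}_A/d_R$ and $C_A n_f d^{abcd}_Fd^{abcd}_F/d_R$ structures of $d^{(\zeta_4)}_5[0]$ and $\beta_0 d^{(\zeta_4)}_5[1]$ cancel pairwise in the sum.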
The expressions (\ref{d5-c5-z4}) and (\ref{d5-z4-0}-\ref{d5-z4-2}) demonstrate exactly which Lie group structures are contained in the fifth-order corrections to the NS Adler function and the NS Bjorken polarized sum rule and in some of their $\{\beta\}$-expanded terms. Note that, in contrast to the four-loop approximation, at the $\mathcal{O}(\overline{a}^{\;5}_s)$ level the light-by-light scattering effects reveal themselves not only in the conformally-invariant terms $d_5[0]$ and $c_5[0]$, but also in the
terms proportional to $\beta_0$, namely in $d_5[1]$ and $c_5[1]$. This partially occurs due to the insertion of a fermion loop into the external gluon line entering the light-by-light scattering subgraph.
Let us make a few remarks about the nature of the cancellations of certain $\zeta_4$-contributions in the generalized Crewther relation.
Since $d_1=-c_1$, then, according to Eq.(\ref{d5-zeta-3}) and its Bjorken analogue,
the sum $d^{(\zeta_4)}_5+c^{(\zeta_4)}_5=0$. Thus, the $\zeta_4$-terms are mutually canceled out in this sum. As is known, in the class of gauge-invariant ${\rm{MS}}$-like renormalization schemes the coefficients of the RG $\beta$-function do not change under the scale transformations $\mu'=\lambda\mu$, where $\lambda$ is a constant. The one-loop coefficients $d_1$ and $c_1$ are also scale-independent. Therefore, in accordance with Eq.(\ref{d5-zeta-3}), the terms proportional to $\zeta_4$ in $d_5$ and $c_5$ are scale-independent as well\footnote{It is interesting to trace explicitly how they will (or will not) change under the transformations of the Poincar\'e group (rotations, Lorentz boosts, translations, reflections) and under the special conformal transformations. The representation of the conformal transformations in momentum space was considered e.g. in \cite{Maglio:2021yaq}.}
and the expression (\ref{d5-c5-z4}) stays valid in the whole class of ${\rm{MS}}$-like schemes. Thus, these $\zeta_4$-terms are invariant under the scale transformations or, in other words, under dilatations, which are a particular case of the conformal transformations.
In this regard, considering the relation
\begin{eqnarray}
\label{crew-d5-c5}
d_5+c_5+d_1c_4+c_1d_4+d_2c_3+c_2d_3=-\beta_3K_1-\beta_2K_2-\beta_1K_3-\beta_0K_4,
\end{eqnarray}
following from the 5-loop generalization of the Crewther relation in the one-fold representation (see e.g. \cite{Garkusha:2018mua} or the 5-loop analog of Eq.(\ref{BK})), one can conclude that the fourth coefficient $K_4$, entering the conformal symmetry breaking term $(\beta^{(N=4)}(\overline{a}_s)/\overline{a}_s)K^{(N=4)}(\overline{a}_s)$, does not contain $\zeta_4$-terms. Indeed, in the sum $d_5+c_5$ they are mutually canceled out, and the remaining coefficients $d_n$, $c_n$ with $n=1,2,3,4$, $K_1$, $K_2$, $K_3$ and $\beta_k$ with $k=0,1,2,3$ are free of $\zeta_4$. This fact is a consequence of the scale invariance.
In the conformally invariant limit, when all $\beta_k=0$, the relation (\ref{crew-d5-c5}) takes the following form:
\begin{eqnarray}
\label{crew-d5-c5-conf}
d_5[0]+c_5[0]+d_1[0]c_4[0]+c_1[0]d_4[0]+d_2[0]c_3[0]+c_2[0]d_3[0]=0,
\end{eqnarray}
which can be derived from a more general expression of \cite{Kataev:2010du}.
Here the $\zeta_4$-contributions are contained only in $d_5[0]$ and $c_5[0]$, and they are mutually canceled out in this sum (\ref{d5-z4-0}). Thus, the disappearance of these terms in the mentioned sum is in agreement with
the conformal limit of the generalized Crewther identity. This means that the cancellation of $\zeta_4$-terms in $d^{(\zeta_4)}_5[0]+c^{(\zeta_4)}_5[0]$ is a consequence of the conformal symmetry, which is more general than the scale symmetry.
\section{Discussion and outlook}
\label{Discussion}
Since there is currently no consensus on the order of separation of the conformally invariant contributions in the perturbative expressions for the NS Adler function and the Bjorken polarized sum rule (see e.g. Refs.\cite{Brodsky:1982gc, Grunberg:1991ac, Mikhailov:2004iq, Brodsky:2011ta, Brodsky:2013vpa, Kataev:2014jba, Kataev:2016aib, Cvetic:2016rot, Mikhailov:2016feh, Wu:2019mky, Huang:2020skl} on this topic), we dwell on this issue in more detail and present arguments in favor of what is, in our opinion, the correct procedure for their extraction.
As we have seen in the case of the NS Adler function in the $SU(3)$ QCD the two-, three- and four-loop conformally invariant terms are $d_2[0]=0.0833$, $d_3[0]=-23.2227$, $d_4[0]=81.1571+0.0802n_f$ (\ref{DNS-beta-concrete}). The small $n_f$-dependent contribution to $d_4[0]$ is a manifestation of the light-by-light scattering effects. These values are also presented in \cite{Cvetic:2016rot} and are
in agreement with results of application of the PMC/BLM procedure\footnote{Note that the additional ambiguities of the PMC method, not discussed in this work, were considered in Ref.\cite{Chawdhry:2019uuv}. }
in the works \cite{Grunberg:1991ac}, \cite{Brodsky:2011ta} and with the expression (A5) from Appendix A of Ref.\cite{Brodsky:2013vpa}. However, they differ from those given e.g. in the main part of \cite{Brodsky:2013vpa} and in other related works \cite{Wu:2019mky, Huang:2020skl}. In these works one uses the representation of the $D$-function in terms of the photon anomalous dimension $\gamma(a_s)$ and the hadronic vacuum polarization function $\Pi(L=\ln\mu^2/Q^2, a_s)$, first studied in Refs.\cite{Chetyrkin:1980sa, Chetyrkin:1980pr, Baikov:2012zm}:
\begin{equation}
\label{D-anomalous}
D(L, a_s)=\gamma(a_s)-\beta(a_s)\frac{\partial }{\partial a_s}\Pi(L, a_s).
\end{equation}
This relation follows from the definition (\ref{RtoDint-rel}) and the inhomogeneous RG equation for $\Pi(L, a_s)$
\begin{equation}
\label{inhomog}
\mu^2\frac{d}{d\mu^2}\Pi(L, a_s)\bigg|_{L=0}=\bigg(\mu^2\frac{\partial}{\partial \mu^2}+\beta(a_s)\frac{\partial}{\partial a_s}\bigg)\Pi(L, a_s)\bigg|_{L=0}=\gamma(a_s).
\end{equation}
It was mentioned in \cite{Brodsky:2013vpa, Wu:2019mky} that the anomalous dimension $\gamma(a_s)$ is associated with the renormalization of the QED coupling only and is not related to the running of the strong coupling constant (for details see section IV of the first cited work and section 4.3 of the second one). This RG function is treated as a conformal contribution during the PMC scale setting analysis and is not decomposed into the $\{\beta\}$-expanded terms in these quoted works. We argue that these considerations are incorrect. Indeed, the $\gamma(a_s)$-function is \textit{inseparably related} to the renormalization of the QCD charge. This fact has already been discussed in \cite{Kataev:2014jba} and studied in
\cite{Kataev:2016aib}. Let us focus on this issue in more detail.
As is known, the two-point photon correlator, renormalized by the QCD radiative corrections, is
\begin{equation}
\label{correlator}
G_{\mu\nu}(q)=\frac{i}{q^2}\bigg[\bigg(-g_{\mu\nu}+\frac{q_\mu q_\nu}{q^2}\bigg)\frac{1}{1+a\Pi(L, a_s)}-\xi \frac{q_\mu q_\nu}{q^2}\bigg],
\end{equation}
where $a=a(\mu^2)=\alpha(\mu^2)/\pi$ is the electromagnetic coupling renormalized by the strong interactions only, and $\xi$ is the gauge-fixing parameter.
Taking into account the renormalization prescription for the bare and renormalized photon fields, namely $A^{\mu}_B=\sqrt{Z_{ph}}A^{\mu}$, and the non-renormalization of the longitudinal part of the propagator, one can obtain the following relation between the renormalized $\Pi$ and bare $\Pi_B$ polarization functions
\begin{equation}
\label{link}
1+a\Pi(L, a_s)=Z_{ph}\bigg(1+a_B\Pi_B(L, a_{sB})\bigg),
\end{equation}
which was presented in \cite{Chetyrkin:1980sa, Chetyrkin:1980pr}. Due to the gauge invariance of the $\overline{\rm MS}$-scheme, neither $\Pi$ nor $\Pi_B$ depends on $\xi$. The 3-rd and 4-th order corrections to $\Pi$ were presented in \cite{Baikov:2012zm}. The condition that the bare coupling constant $a_{sB}$ is independent of the scale $\mu$ leads to the following relation:
\begin{eqnarray}
\label{asB}
a_{s,B}&=&\mu^{2\varepsilon}a_s\;{{\rm{exp}}}\;\bigg(-\int\limits_0^{a_s} \frac{dx}{x}\frac{\beta(x)}{\beta(x)-\varepsilon x}\bigg) \\ \nonumber
&=&\mu^{2\varepsilon}\bigg(a_s-\frac{\beta_0}{\varepsilon}a^2_s+\left(\frac{\beta^2_0}{\varepsilon^2}
-\frac{\beta_1}{2\varepsilon}\right)a^3_s
-\left(\frac{\beta^3_0}{\varepsilon^3}-\frac{7\beta_1\beta_0}{6\varepsilon^2}
+\frac{\beta_2}{3\varepsilon}\right)a^4_s +\mathcal{O}(a^5_s)\bigg).
\end{eqnarray}
Here, in the definition of the $\beta$-function, we retain the $\varepsilon$-contribution, i.e. $\beta(a_s)=-\varepsilon a_s-\sum\limits_{i\geq 0}\beta_i a^{\;i+2}_s$, where $\varepsilon=(4-d)/2$ is the parameter of the dimensional regularization.
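The expansion (\ref{asB}) can be cross-checked independently of the integral representation: the truncated series for $a_{s,B}$ must be annihilated by $\mu^2 d/d\mu^2$ once $a_s$ runs with the $d$-dimensional $\beta$-function defined above. A minimal sympy sketch of this check (ours, not part of the original derivation; the symbol names are illustrative):

```python
import sympy as sp

a, eps, b0, b1, b2 = sp.symbols('a epsilon beta0 beta1 beta2')

# Truncated series for a_{s,B}/mu^{2 eps}, Eq.(asB)
F = (a - b0/eps*a**2 + (b0**2/eps**2 - b1/(2*eps))*a**3
     - (b0**3/eps**3 - sp.Rational(7, 6)*b1*b0/eps**2 + b2/(3*eps))*a**4)

# d-dimensional beta-function governing the running of a_s
beta = -eps*a - b0*a**2 - b1*a**3 - b2*a**4

# mu^2 d a_{s,B}/d mu^2 = mu^{2 eps} * (eps*F + F'(a)*beta): must vanish up to O(a_s^4)
residual = sp.expand(eps*F + sp.diff(F, a)*beta)
coeffs = [sp.simplify(residual.coeff(a, k)) for k in range(5)]
print(coeffs)  # [0, 0, 0, 0, 0]
```

The vanishing of the coefficients up to $O(a_s^4)$ reproduces the quoted series term by term.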
Owing to the Ward identity, with the prefactor $\mu^{2\varepsilon}$ omitted we have $a=Z_{ph}a_B$. For the QCD coupling one can write $a_s=Z^{-1}_{a_s}a_{sB}$. In the class of the ${\rm{MS}}$-like schemes $Z_{ph}$ reads:
\begin{equation}
\label{epsilon}
Z_{ph}=1+a\cdot Z(a_s)=1+a\cdot\sum\limits_{p\geq 1}a^{p-1}_s\sum\limits_{k=1}^p \frac{Z_{p,-k}}{\varepsilon^k}.
\end{equation}
Here the QED coupling $a$ is included in $Z_{ph}$ due to the Feynman diagram with a fermion loop and two external photon legs. The remaining $a_s$-corrections arise from gluon propagators, with their internal insertions, attached to the mentioned fermion loop. Naturally, part of these contributions is related to the renormalization of the QCD charge: see, for instance, Figure \ref{diagrams}, where the left diagram contributes to the renormalization of $a_s$ while the right one does not.
\begin{figure}[h!]
\centering
\scalebox{0.9}{
\includegraphics[width=\textwidth]{diagrams.pdf}
}
\centering
\captionsetup{justification=centering}
\caption{The specific diagrams contributing to the $\beta_0$-dependent (within the Naive-Nonabelianization procedure) and conformally invariant terms of the $d_2$-coefficient, respectively.}
\label{diagrams}
\end{figure}
Using the Ward identity and Eq.(\ref{epsilon}), one can obtain:
\begin{equation}
\Pi(L, a_s)=Z(a_s)+\Pi_B(L, a_{sB}),
\end{equation}
where the bare polarization function reads
\begin{equation}
\label{PiB}
\Pi_B(L, a_{sB})=\sum\limits_{p\geq 1}\bigg(\frac{\mu^2}{Q^2}\bigg)^{\varepsilon p}a^{p-1}_{sB}\sum\limits_{k=-p}^{\infty}\Pi_{p,k}\varepsilon^k.
\end{equation}
Substituting (\ref{asB}), (\ref{PiB}) and the expression for $Z(a_s)$ (\ref{epsilon}) into the definition (\ref{RtoDint-rel}), one arrives at the relations for the coefficients of the NS Adler function \cite{Chetyrkin:1980sa, Chetyrkin:1980pr, Gorishnii:1990vf}:
\begin{subequations}
\begin{eqnarray}
\label{d1Z}
d_1&=&-2Z_{2,-1}, \\
\label{d2Z}
d_2&=&-3Z_{3,-1}+\beta_0\Pi_{2,0}, \\
\label{d3Z}
d_3&=&-4Z_{4,-1}+2\beta_0\Pi_{3,0}+\beta_1\Pi_{2,0}+2\beta^2_0\Pi_{2,1}.
\end{eqnarray}
\end{subequations}
Comparing Eq.(\ref{D-anomalous}) with Eqs.(\ref{d1Z}-\ref{d3Z}), we conclude that the coefficients of the non-singlet contribution to the photon anomalous dimension $\gamma(a_s)$ are expressed through the first pole term $Z_{-1}(a_s)$ of the renormalization constant $Z(a_s)$ \cite{Kataev:2016aib}:
\begin{gather}
\gamma^{(M)}_{NS}(a_s)=1-\sum\limits_{p= 1}^{M} (p+1)Z_{p+1,\;-1}a^{p}_s=1+\sum\limits_{p=1}^M \gamma_p a^p_s, \\
\text{where}~~~ \gamma^{(M)}(a_s)=d_R\bigg(\sum\limits_f Q^2_f\bigg)\gamma^{(M)}_{NS}(a_s)+d_R\bigg(\sum\limits_f Q_f\bigg)^2\gamma^{(M\geq 3)}_{SI}(a_s).
\end{gather}
Thus, one gets $\gamma_1=-2Z_{2,-1}$, $\gamma_2=-3Z_{3,-1}$, $\gamma_3=-4Z_{4,-1}$, etc. This means that not only $\Pi(a_s)$ but also the function $\gamma(a_s)$, included in Eq.(\ref{D-anomalous}), is related to the renormalization of the QCD charge. This fact contradicts the consideration described in \cite{Brodsky:2013vpa, Wu:2019mky}. For example, the first pole coefficient $Z_{3,-1}=C^2_F/32-133C_FC_A/576+11C_FT_Fn_f/144$ \cite{Chetyrkin:1980sa, Chetyrkin:1980pr} contains an $n_f$-dependent term coming from diagrams similar to the left one presented in Figure \ref{diagrams}. The internal fermion loop, inserted into the gluon propagator, obviously contributes to the renormalization of $a_s$ within the Naive-Nonabelianization procedure. In this regard, we cannot treat the photon anomalous dimension $\gamma(a_s)$ as a conformal contribution contained in the coefficients of the $D$-function. In the $\{\beta\}$-expansion formalism this implies that, in full analogy with the coefficients of the $D_{NS}$-function, the terms $\gamma_n$ of $\gamma(a_s)$ should be decomposed in the coefficients of the $\beta$-function (see Eqs.(\ref{d1}-\ref{d5})):
\begin{subequations}
\begin{gather}
\label{g1}
\gamma_1 = \gamma_1[0], \\
\label{g2}
\gamma_2 = \beta_0 \gamma_2[1] + \gamma_2[0], \\
\label{g3}
\gamma_3 = \beta_0^2 \gamma_3[2] + \beta_1 \gamma_3[0,1] + \beta_0 \gamma_3[1] + \gamma_3[0], \\
\label{g4}
\gamma_4 = \beta_0^3 \gamma_4[3] + \beta_1 \beta_0 \gamma_4[1,1] + \beta_2 \gamma_4[0,0,1] + \beta_0^2 \gamma_4[2] + \beta_1 \gamma_4[0,1] + \beta_0 \gamma_4[1] + \gamma_4[0], ~ \dots
\end{gather}
\end{subequations}
This conclusion has already been made in the work \cite{Kataev:2016aib}. In accordance with this inference the values of the conformally-invariant terms $d_2[0]$, $d_3[0]$, $d_4[0]$, presented at the beginning of this section, are in full agreement with the results of the correct $\{\beta\}$-expansion of the $D_{NS}$-function written in terms of $\gamma(a_s)$. This fact was also properly noticed in Appendix A of Ref.\cite{Brodsky:2013vpa}, but not taken into account, for instance, in the main part of \cite{Brodsky:2013vpa} and in the subsequent works \cite{Wu:2019mky, Huang:2020skl}.
Let us focus on this issue in more detail. If one follows the logic of paper \cite{Huang:2020skl}, then it appears from its results for the vector channel of the hadronic $Z$-boson decay width (see Eqs.(19), (21), (24) in this quoted paper) that the contributions to the NS Adler function, called ``conformally invariant'' by the authors of \cite{Huang:2020skl}, would be equal to:
\begin{subequations}
\begin{eqnarray}
\label{d2hat}
\hat{d}_2[0]=\gamma_2&=&-\frac{3}{32}C^2_F+\frac{133}{192}C_FC_A-\frac{11}{48}C_FT_Fn_f, \\
\label{d3hat}
\hat{d}_3[0]=\gamma_3&=&-\frac{69}{128}C^3_F+\bigg(\frac{215}{288}-\frac{11}{24}\zeta_3\bigg)C^2_FC_A+\bigg(\frac{5815}{20736}+\frac{11}{24}\zeta_3\bigg)C_FC^2_A \\ \nonumber
&+&\bigg(-\frac{169}{288}+\frac{11}{12}\zeta_3\bigg)C^2_FT_Fn_f+\bigg(-\frac{769}{5184}-\frac{11}{12}\zeta_3\bigg)C_FC_AT_Fn_f-\frac{77}{1296}C_FT^2_Fn^2_f, \\
\label{d4hat}
\hat{d}_4[0]=\gamma_4.
\end{eqnarray}
\end{subequations}
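Note that $\hat{d}_2[0]=\gamma_2$ in (\ref{d2hat}) can be reproduced directly from the pole term $Z_{3,-1}$ quoted above via the $\beta_0$-free part of Eq.(\ref{d2Z}), i.e. $\gamma_2=-3Z_{3,-1}$. A short sympy verification of this consistency (ours):

```python
import sympy as sp

CF, CA, TF, nf = sp.symbols('C_F C_A T_F n_f')

# First pole coefficient Z_{3,-1} quoted in the text
Z3m1 = CF**2/32 - sp.Rational(133, 576)*CF*CA + sp.Rational(11, 144)*CF*TF*nf

# gamma_2 = -3*Z_{3,-1}, cf. Eq.(d2Z) with the beta_0-term stripped off
gamma2 = -3*Z3m1

# hat{d}_2[0] of Eq.(d2hat)
d2hat = (-sp.Rational(3, 32)*CF**2 + sp.Rational(133, 192)*CF*CA
         - sp.Rational(11, 48)*CF*TF*nf)

diff = sp.simplify(gamma2 - d2hat)
print(diff)  # 0
```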
Comparing these results with the analogous ones presented in Table \ref{T-d1-3}, we conclude that only the leading $C_F$-contributions to (\ref{d2hat}-\ref{d4hat}) coincide with those given in Table \ref{T-d1-3}. This is related to the fact that these contributions do not enter into the expressions for the coefficients of the RG $\beta$-function. Moreover, starting from the two-loop approximation the ``conformally invariant'' terms $\hat{d}_M[0]$ in \cite{Huang:2020skl} contain $n_f$-dependent contributions, which, as clarified by us, should be absorbed into the coefficients of the RG $\beta$-function.
This observation is confirmed by the transition to the QED limit. Indeed, in the case of the $U(1)$ gauge group with $N$ charged leptons the expressions (\ref{d2hat}-\ref{d3hat}), following from \cite{Huang:2020skl}, turn into
\begin{subequations}
\begin{eqnarray}
\label{d2hatQED}
\hat{d}^{\;{\rm{QED}}}_2[0]&=&-\frac{3}{32}-\frac{11}{48}N, \\
\label{d3hatQED}
\hat{d}^{\;{\rm{QED}}}_3[0]&=&-\frac{69}{128}+\bigg(-\frac{169}{288}+\frac{11}{12}\zeta_3\bigg)N-\frac{77}{1296}N^2,
\end{eqnarray}
\end{subequations}
in contrast to ours:
\begin{subequations}
\begin{eqnarray}
\label{d2QED}
d^{\;{\rm{QED}}}_2[0]&=&-\frac{3}{32}, \\
\label{d3QED}
d^{\;{\rm{QED}}}_3[0]&=&-\frac{69}{128}.
\end{eqnarray}
\end{subequations}
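The reduction of (\ref{d2hat}-\ref{d3hat}) to the $U(1)$ limit is a mechanical substitution $C_F=1$, $C_A=0$, $T_F=1$, $n_f=N$; a sympy sketch (ours) confirming it:

```python
import sympy as sp

CF, CA, TF, nf, N, z3 = sp.symbols('C_F C_A T_F n_f N zeta3')

# Eqs.(d2hat)-(d3hat)
d2hat = (-sp.Rational(3, 32)*CF**2 + sp.Rational(133, 192)*CF*CA
         - sp.Rational(11, 48)*CF*TF*nf)
d3hat = (-sp.Rational(69, 128)*CF**3
         + (sp.Rational(215, 288) - sp.Rational(11, 24)*z3)*CF**2*CA
         + (sp.Rational(5815, 20736) + sp.Rational(11, 24)*z3)*CF*CA**2
         + (-sp.Rational(169, 288) + sp.Rational(11, 12)*z3)*CF**2*TF*nf
         + (-sp.Rational(769, 5184) - sp.Rational(11, 12)*z3)*CF*CA*TF*nf
         - sp.Rational(77, 1296)*CF*TF**2*nf**2)

# U(1) limit with N charged leptons: C_F = 1, C_A = 0, T_F = 1, n_f = N
qed = {CF: 1, CA: 0, TF: 1, nf: N}
d2qed = sp.expand(d2hat.subs(qed))
d3qed = sp.expand(d3hat.subs(qed))
print(d2qed)  # -3/32 - 11*N/48
print(d3qed)  # -69/128 + (-169/288 + 11*zeta3/12)*N - 77*N**2/1296
```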
In formulas (\ref{d2hatQED}-\ref{d3hatQED}) the fictitious ``conformally invariant'' terms $\hat{d}^{\;{\rm{QED}}}_2[0]$, $\hat{d}^{\;{\rm{QED}}}_3[0]$, following from the results of \cite{Huang:2020skl},
contain the non-conformal $N$-dependent contributions, originating from the non-$\{\beta\}$-expanded coefficients $\gamma_2$ and $\gamma_3$ in QED, which are related to the renormalization of the charge.
Moreover, the expressions (\ref{d2hatQED}-\ref{d3hatQED}) do not correspond to Rosner's known result \cite{Rosner:1966zz} for the divergent part of the photon field renormalization constant $Z_{ph}$ in quenched QED, which does not contain the internal subgraphs renormalizing the electromagnetic charge. The result of this work is
\begin{equation}
\bigg(Z^{-1}_{ph}\bigg)_{div}=\frac{\alpha_B}{3\pi}\bigg(1+\uuline{\frac{3}{4}}\frac{\alpha_B}{\pi}~\uuline{-\frac{3}{32}}\bigg(\frac{\alpha_B}{\pi}\bigg)^2\bigg)\ln\frac{M^2}{m^2},
\end{equation}
where $\alpha_B$ is the bare fine-structure constant, $m$ is the lepton mass and $M$ is the large cutoff mass scale. The double-underlined terms are in full agreement with those presented in Table \ref{T-d1-3} and in (\ref{d2QED}), but the second underlined one contradicts the expression (\ref{d2hatQED}), which follows from Ref.\cite{Huang:2020skl} and the related papers of this team.
Let us give one more argument in favor of the validity of the approach requiring the decomposition of all coefficients of the photon anomalous dimension $\gamma(a_s)$ in powers of the coefficients of the $\beta$-function in compliance with the $\{\beta\}$-expansion (\ref{g1}-\ref{g4}). It follows from Eq.(\ref{D-anomalous}) that at the four-loop level the NS contributions to the Adler function and $\gamma(a_s)$ are related by the equality:
\begin{equation}
\label{d4-g4}
d_4=\gamma_4+3\beta_0\Pi_3+2\beta_1\Pi_2+\beta_2\Pi_1,
\end{equation}
where the coefficients $\Pi_n$ are defined as $\Pi_{NS}(a_s)=\sum\limits_{n\geq 0}\Pi_n a^n_s$. The terms $\gamma_4$ and $\Pi_3$ follow from the results of \cite{Baikov:2012zm}. It is interesting that their explicit expressions contain the Riemann $\zeta_4$-contributions, which, however,
are mutually canceled out in $d_4$ (see results of explicit calculations of $d_4$ in \cite{Baikov:2010je}):
\begin{equation}
d^{(\zeta_4)}_4=\gamma^{(\zeta_4)}_4+3\beta_0\Pi^{(\zeta_4)}_3\equiv 0.
\end{equation}
If we properly expand $\gamma_4$ (according to (\ref{g4})) and $\Pi_3$ as $\Pi_3=\beta^2_0\Pi_{3,\beta^2_0}+\beta_1\Pi_{3,\beta_1}+\beta_0\Pi_{3,\beta_0}+\Pi_{3,\beta^0_0}$, we naturally arrive at the absence of the $\zeta_4$-contributions in the expression for $d_4[0]$ (see Table \ref{T-d4}). However, as follows from the relation (\ref{d4hat}) of Ref.\cite{Huang:2020skl}, the $\hat{d}_4[0]$-term will contain $\zeta_4$-contributions\footnote{This fact immediately follows from the analytic form of $\gamma_4$
\cite{Baikov:2012zm}.}. This circumstance contradicts our outcomes and the results of \cite{Cvetic:2016rot}. Moreover, even its QED counterpart $\hat{d}^{\;{\rm{QED}}}_4[0]$ \cite{Huang:2020skl} will incorporate the contribution $11\zeta_4 N^2/32$, proportional not only to the $\zeta_4$-term but also to the $N^2$-factor. As we have already explained above, the total $N^2$-dependence should be contained in the coefficients of the RG $\beta$-function. All these facts point to the necessity of $\{\beta\}$-decomposing all coefficients of the photon anomalous dimension if we aim to extract the conformally invariant part of the NS Adler function in the correct way\footnote{The results \cite{Aleshin:2019yqj} of the three-loop calculations of the Adler $D$-function in
terms of the anomalous dimension of matter superfields
in the $\mathcal{N}=1$ SUSY QCD \cite{Shifman:2014cya}
provide the extra support of this statement.}.
Another approach, leading to results which differ from those presented in this work, consists in adding to the theory of strong interactions extra hypothetical degrees of freedom in the form of a Majorana multiplet of light gluinos \cite{Mikhailov:2004iq, Kataev:2014jba, Mikhailov:2016feh}. Such a trick was invented in \cite{Mikhailov:2004iq} to unambiguously divide the flavor dependence of the three-loop coefficient $d_3$ between $\beta_0$ and $\beta_1$. For this goal the expansion of the coefficient $d_3(n_f, n_{\tilde{g}})$, calculated in \cite{Chetyrkin:1996ez} for this minimally extended SUSY QCD with the number $n_{\tilde{g}}$ of light gluinos, in the $\overline{\rm MS}$-scheme terms $\beta_0(n_f, n_{\tilde{g}})$ and $\beta_1(n_f, n_{\tilde{g}})$ \cite{Jones:1974pg} was considered in \cite{Mikhailov:2004iq}. This was done in an era when the possibility of the existence of a light gluino had not yet been experimentally excluded. Since the coefficients of the function $\beta(n_f, n_{\tilde{g}})$ include the extra degrees of freedom $n_{\tilde{g}}$ (see work \cite{Jones:1974pg}, from the results of which the expressions below follow)
\begin{subequations}
\begin{eqnarray}
\label{b0}
\beta_0(n_f, n_{\tilde{g}})&=&\frac{11}{12}C_A-\frac{1}{3}\bigg(T_Fn_f+\frac{1}{2}C_An_{\tilde{g}}\bigg)~, \\
\label{b1}
\beta_1(n_f, n_{\tilde{g}})&=&\frac{17}{24}C^2_A-\frac{5}{12}C_A\bigg(T_Fn_f+\frac{1}{2}C_An_{\tilde{g}}\bigg)-\frac{1}{4}\bigg(C_FT_Fn_f+\frac{1}{2}C^2_An_{\tilde{g}}\bigg)~,
\end{eqnarray}
\end{subequations}
then this splitting in such a model without squarks can be performed unequivocally. However, starting from the three-loop level the $\{\beta\}$-expanded coefficients of the NS Adler function, obtained in this way, differ from those presented in Tables \ref{T-d1-3} and \ref{T-d4}. Indeed, the results of \cite{Kataev:2014jba} read:
\begin{subequations}
\begin{eqnarray}
\label{d10-gl}
d^{n_{\tilde{g}}}_1[0]&=&\frac{3}{4}C_F, ~~ d^{n_{\tilde{g}}}_2[1]=\bigg(\frac{33}{8}-3\zeta_3\bigg)C_F, ~~ d^{n_{\tilde{g}}}_2[0]=-\frac{3}{32}C^2_F+\frac{1}{16}C_FC_A, \\
\label{d32-gl}
d^{n_{\tilde{g}}}_3[2]&=&\bigg(\frac{151}{6}-19\zeta_3\bigg)C_F, ~~~
d^{n_{\tilde{g}}}_3[0,1]=\bigg(\frac{101}{16}-6\zeta_3\bigg)C_F, \\
\label{d31-gl}
d^{n_{\tilde{g}}}_3[1]&=&\bigg(-\frac{27}{8}-\frac{39}{4}\zeta_3+15\zeta_5\bigg)C^2_F+\bigg(-\frac{9}{64}+5\zeta_3-\frac{5}{2}\zeta_5\bigg)C_FC_A, \\
\label{d30-gl}
d^{n_{\tilde{g}}}_3[0]&=&-\frac{69}{128}C^3_F+\frac{71}{64}C^2_FC_A+\bigg(\frac{523}{768}-\frac{27}{8}\zeta_3\bigg)C_FC^2_A .
\end{eqnarray}
\end{subequations}
Taking into account the explicit expressions for terms $d_3[i]$ (see Table \ref{T-d1-3}) and for $d^{n_{\tilde{g}}}_3[i]$ (\ref{d32-gl}-\ref{d30-gl}), we can fix the difference $\Delta_3[i]$ between them in the $\overline{\rm MS}$-scheme:
\begin{subequations}
\begin{eqnarray}
\Delta_{3} [i]&=& d_3[i]-d^{n_{\tilde{g}}}_3[i], \\
\label{delta_d2}
\Delta_{3} [2] &=& 0, \\
\label{delta_d0}
\Delta_{3} [0] &=&\frac{C_A}{256}
\bigg(11C_F+7C_A\bigg)\underline{C_F(48\zeta_3-35)},\\
\label{delta_d1}
\Delta_{3} [1] &=&-\frac{1}{64}
\bigg(3C_F+5C_A\bigg) \underline{C_F( 48\zeta_3-35)}, \\
\label{delta_d01}
\Delta_{3} [0,1] &=&\frac{1}{16}\; \underline{C_F(48\zeta_3-35)},
\end{eqnarray}
\end{subequations}
which turns out to be proportional to the factor $C_F(48\zeta_3-35)$. A similar analysis can be performed for the Bjorken polarized sum rule as well.
Note one interesting fact. If we fix the number of quark flavors in accordance with the formal solution of the equation $\beta_0(n^*_f)=0$, which corresponds to the Banks-Zaks ansatz \cite{Banks:1981nn} and to the case of the effective conformal limit at the one-loop level, then we obtain $T_Fn^*_f=11C_A/4$. In the same way, solving the equation $\beta_1(n^{**}_f)=0$, one gets $T_Fn^{**}_f=17C^2_A/(10C_A+6C_F)$. It turns out that the differences $\Delta_3[i]$ can be rewritten in the following form:
\begin{equation}
\label{n^*_f}
\Delta_3[0]=-\beta_1(n^*_f)\Delta_3[0,1], ~~~~~ \Delta_3[1]=\frac{\beta_1(n^*_f)}{\beta_0(n^{**}_f)}\Delta_3[0,1].
\end{equation}
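The quoted values of $T_Fn^*_f$ and $T_Fn^{**}_f$, together with the identities (\ref{n^*_f}), can be verified symbolically from (\ref{b0}-\ref{b1}) at $n_{\tilde{g}}=0$ and from (\ref{delta_d0}-\ref{delta_d01}). A sympy sketch (ours):

```python
import sympy as sp

CF, CA, TF, nf, z3, x = sp.symbols('C_F C_A T_F n_f zeta3 x')

# beta-coefficients of Eqs.(b0)-(b1) at n_gluino = 0
beta0 = sp.Rational(11, 12)*CA - sp.Rational(1, 3)*TF*nf
beta1 = (sp.Rational(17, 24)*CA**2 - sp.Rational(5, 12)*CA*TF*nf
         - sp.Rational(1, 4)*CF*TF*nf)

# roots beta0(n*_f) = 0 and beta1(n**_f) = 0 in the variable x = T_F n_f
xstar = sp.solve(beta0.subs(TF*nf, x), x)[0]
xstarstar = sp.solve(beta1.subs(TF*nf, x), x)[0]

# differences Delta_3[i] of Eqs.(delta_d0)-(delta_d01)
K = CF*(48*z3 - 35)
D0 = CA/sp.Integer(256)*(11*CF + 7*CA)*K
D1 = -sp.Rational(1, 64)*(3*CF + 5*CA)*K
D01 = K/16

b1_star = beta1.subs(TF*nf, xstar)
b0_2star = beta0.subs(TF*nf, xstarstar)
check1 = sp.simplify(D0 + b1_star*D01)           # Delta_3[0] + beta1(n*_f)*Delta_3[0,1]
check2 = sp.simplify(D1 - b1_star/b0_2star*D01)  # Delta_3[1] - (beta1(n*_f)/beta0(n**_f))*Delta_3[0,1]
print(sp.simplify(xstar - 11*CA/4), check1, check2)  # 0 0 0
```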
A similar analysis was previously performed for the quantity $d_4(n_f)+c_4(n_f)$ in Refs.\cite{Kataev:2010du, Kataev:2010tps}. The values of $d_4(n^*_f)+c_4(n^*_f)$ and $d_4(n^{**}_f)+c_4(n^{**}_f)$, obtained there with the help of the $\beta$-expansion and the two-fold generalization of the Crewther relation, are in agreement with the results of the direct calculations conducted in \cite{Baikov:2010je}.
In the case of the $SU(3)$ color group the expressions (\ref{delta_d0}-\ref{delta_d01}) acquire the following form:
\begin{equation}
\label{delta-numerical}
\Delta_{3} [0]=12.6498, ~~~\Delta_{3} [1]=-8.9849, ~~~\Delta_{3} [0,1]=1.8916.
\end{equation}
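These numerical values follow directly from (\ref{delta_d0}-\ref{delta_d01}) at $C_F=4/3$, $C_A=3$; a short numerical check (ours):

```python
import sympy as sp

z3 = sp.zeta(3)
CF, CA = sp.Rational(4, 3), sp.Integer(3)   # SU(3): C_F = 4/3, C_A = 3
K = CF*(48*z3 - 35)

# Delta_3[0], Delta_3[1], Delta_3[0,1] of Eqs.(delta_d0)-(delta_d01)
D0 = CA/256*(11*CF + 7*CA)*K
D1 = -sp.Rational(1, 64)*(3*CF + 5*CA)*K
D01 = K/16
print([sp.N(v, 5) for v in (D0, D1, D01)])  # approx. 12.650, -8.9849, 1.8916
```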
The deviation $\Delta_{3} [0]$ in Eq.(\ref{delta-numerical}), obtained for the conformally invariant terms in the 3-rd order of PT, is smaller in modulus than
the value $d_3[0]=-23.2227$ from Eq.(\ref{DNS-beta-concrete}). The analogous conformally invariant term in the theory with a light gluino is $d^{n_{\tilde{g}}}_3[0]=-35.8725$ \cite{Mikhailov:2004iq, Kataev:2014jba}. The qualitative agreement between $d_3[0]$ and $d^{n_{\tilde{g}}}_3[0]$ is retained.
In their turn, the values $d_3[1]=4.9402$ and $d_3[0,1]=0.6918$ from Eq.(\ref{DNS-beta-concrete})
differ considerably from their counterparts $d^{n_{\tilde{g}}}_3[1]=13.9251$ and $d^{n_{\tilde{g}}}_3[0,1]=-1.1997$. This fact leads to a noticeable shift of the coupling-dependent scale defined at the stage of application of the PMC/BLM procedure.
However, nowadays it is known that the hypothetical models with light gluinos are ruled out. Indeed, the comparison of the three-loop results for the hadronic $\tau$ decay and the hadronic cross sections in $e^+e^-$ annihilation between $5\;{\rm{GeV}}$ and $M_Z$ with the experimental data of the LEP Collaboration (CERN)
\cite{Csikor:1996vz} indicated that light gluinos are absent. Moreover, the analysis of the LHC data on $pp$-collisions at $\sqrt{s}=13\;{\rm{TeV}}$ (CMS Collaboration) demonstrates that a gluino with mass up to $2\;{\rm{TeV}}$ is excluded at the 95\% confidence level (see \cite{CMS:2019zmd, Sarkar:2021lju}). Nevertheless, such a trick of introducing additional degrees of freedom into the theory is very useful from the perspective of super-high energy studies of special renormalization features of
the SUSY extensions of QCD \cite{Chetyrkin:1996ez}.
The two-fold formalism does not require the introduction of additional degrees of freedom and leads to results consistent with other independent studies of the structure of the perturbative series for the Adler function \cite{Grunberg:1991ac, Brodsky:2011ta} and Bjorken polarized sum rule \cite{Kataev:1992jm}.
To specify the ambiguities of application of the PMC method to the $e^+e^-$ annihilation $R$-ratio and to the Bjorken polarized sum rule in QCD, it is highly desirable to analyze the existing data at the four-loop level taking into account the modifications of the PMC expressions described above.
\section{Conclusion}
The application of the two-fold representation of the perturbative expressions for the non-singlet Adler function and the Bjorken polarized sum rule, reproducing the structure of the $\{\beta\}$-decomposition, has enabled us to define 8 out of 12 possible $\{\beta\}$-expanded terms of these physical quantities in the 5-th order of PT. We demonstrate that only the $\beta^3_0$-, $\beta^2_0$-, $\beta_0$-dependent terms and the $\beta$-independent conformally invariant contribution remain unknown. Up to the four-loop level the results of this approach are in agreement with those obtained with the help of other methods and presented previously in the literature. We emphasize that the correct $\{\beta\}$-expansion procedure requires the decomposition of the photon anomalous dimension in the coefficients of the RG $\beta$-function. Convincing justifications in favor of this statement are given. As is known, the 5-th order PT expressions for the non-singlet Adler function and the Bjorken polarized sum rule contain the Riemann $\zeta_4$-contributions. We fix them in the case of the generic simple gauge group. Moreover, we
demonstrate that these $\zeta_4$-contributions enter only three of the remaining four unknown terms of the $\{\beta\}$-expansion, namely the $\beta^2_0$-, $\beta_0$-dependent and $\beta$-free coefficients. We also define their analytical Lie group structure. Arguments in favor of the validity of these values, coming from definite cancellations due to the scale and conformal symmetries, are given. The results presented in this work may be useful in a possible future more detailed analysis of the analytical structure of the five-loop corrections to the considered renorm-group quantities.
\section*{Acknowledgments}
We would like to thank S.V. Mikhailov for numerous useful discussions and K.V. Stepanyantz for the interest in this work.
The work of VSM was supported by the Russian Science Foundation, agreement no. 21-71-30003.
Power values of power sums of consecutive integers have been of interest throughout the past 70 years. Techniques from algebraic number
theory and Diophantine approximation have allowed the resolution of such equations with small exponents, as well as proofs of general
theorems.
This can be seen in the work of Brindza \cite{Br}, Cassels \cite{Cassels}, Gy\H{o}ry, Tijdeman and Voorhoeve \cite{GTV},
Hajdu \cite{Ha}, Pint\'er \cite{P}, Sch\"affer \cite{Sc}, and Zhang and Bai \cite{ZB} among many others.
Recently, Bennett, Gy\H{o}ry and Pint\'er \cite{BGP}, Pint\'er \cite{P2}, Zhang \cite{Zhang},
as well as the author in collaboration with Bennett and Siksek \cite{BPS1}, \cite{BPS2} resolved many such equations using the modular method. A Galois-theoretic approach
can be found in \cite{PatelSiksek}.
\medskip
In this paper we consider the equation
\begin{equation} \label{eqn:main}
(x+1)^2+(x+2)^2+\cdots+(x+d)^2 =y^n, \qquad n \ge 2,
\end{equation}
with $2 \le d \le 10$. Here $x$ and $y$ denote integers. This equation was solved by Cohn \cite{Cohn}
for $d=2$ and by Zhang \cite{Zhang} for $d=3$.
\begin{thm}\label{thm:main}
Let $2\leq d \leq 10$. The only solutions to equation \eqref{eqn:main} with $n \ge 3$
are
\begin{gather*}
(d,x,y,n)=(2,-1,\pm 1,2r), \quad (2,-2,\pm 1,2r), \quad
(2,-1, 1,2r+1), \\
(2,-2,1,2r+1), \quad
(2, 118 , \pm 13 ,4), \quad (2,-121,\pm 13, 4).
\end{gather*}
The only solutions to equation \eqref{eqn:main} with $n=2$ form the infinite family with $d=2$ and
\[
2x+3+y \sqrt{2}=\pm (1+\sqrt{2})^{2r+1}, \qquad r \in {\mathbb Z}.
\]
\end{thm}
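The sporadic $d=2$ solutions listed above can be confirmed by direct evaluation; a short check (ours):

```python
# Direct check of the sporadic and sample d = 2 solutions from the theorem
def power_sum(x, d):
    """(x+1)^2 + (x+2)^2 + ... + (x+d)^2."""
    return sum((x + i)**2 for i in range(1, d + 1))

sols = [(2, 118, 13, 4), (2, -121, 13, 4),   # 119^2 + 120^2 = 28561 = 13^4
        (2, -1, 1, 5), (2, -2, 1, 3)]        # members of the trivial families
results = [power_sum(x, d) == y**n for d, x, y, n in sols]
print(results)  # [True, True, True, True]
```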
\section{Sums of Consecutive Squares}
We make use of the following Lemma of Zhang and Bai \cite{ZB}.
\begin{lem}[Zhang and Bai]\label{lem:ZB}
Let $p$ be a prime such that $ p \equiv \pm5 \mod 12$. If $p \mid d$ and $\ord_p(d) \not\equiv 0 \mod n$ then equation~\eqref{eqn:main} has no solutions.
\end{lem}
We note that Lemma~\ref{lem:ZB} immediately allows us to eliminate the cases $d=5$, $7$ and $10$. As mentioned in the introduction,
equation~\eqref{eqn:main} has been solved by Zhang \cite{Zhang} for $d=3$. It was solved for $d=2$ and $n \ge 3$ by
Cohn \cite{Cohn}, and this gives the solutions enumerated in the theorem for $n \ge 3$.
It remains to consider $d=2$ with $n=2$, and $d=4$, $6$, $8$, $9$ with $n \ge 2$.
We rewrite equation~\eqref{eqn:main} as
\begin{equation}\label{eqn:conssquares1}
dx^2 + d(d+1)x + \frac{d(d+1)(2d+1)}{6} = y^n.
\end{equation}
Factorising and completing the square gives us
\begin{equation}\label{eqn:conssquares}
d \left(\left( x + \frac{d+1}{2} \right)^2 + \frac{(d-1)(d+1)}{12} \right) = y^n.
\end{equation}
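The identity \eqref{eqn:conssquares} is elementary but easy to misplace a factor in; a quick symbolic confirmation (ours):

```python
import sympy as sp

x, d, i = sp.symbols('x d i')

# left-hand side: (x+1)^2 + ... + (x+d)^2
lhs = sp.Sum((x + i)**2, (i, 1, d)).doit()

# right-hand side of Eq.(eqn:conssquares)
rhs = d*((x + (d + 1)/2)**2 + (d - 1)*(d + 1)/12)

print(sp.simplify(lhs - rhs))  # 0
```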
\begin{lem}\label{lem:1}
Let $r = \ord_2(d)$ and suppose $r\geq 2$. Then in equation~\eqref{eqn:conssquares} we have $n \mid (r-1)$.
\end{lem}
\begin{proof}
Let $D = d/4$. We substitute into equation~\eqref{eqn:conssquares} to get,
\begin{equation}\label{eqn:dval}
D \left(\left( 2x + (d+1) \right)^2 + \frac{(d-1)(d+1)}{3} \right) = y^n.
\end{equation}
Observe that
\[
(2x+d+1)^2 \equiv 1 \pmod{4}, \qquad
\frac{(d-1)(d+1)}{3} \equiv 1 \pmod{4}.
\]
Hence the expression in brackets is congruent to $2 \pmod{4}$ and has $2$-adic valuation exactly $1$. Comparing valuations on both sides of \eqref{eqn:dval} we see that
\[
n \ord_2(y)=\ord_2(D)+1=r-1.
\]
This completes the proof.
\end{proof}
\begin{lem}\label{lem:2}
Let $r = \ord_3(d)$ and suppose $r\geq 2$. Then in equation~\eqref{eqn:conssquares} we have $n \mid (r-1)$.
\end{lem}
\begin{proof}
Let $D = d/3$. We substitute into equation~\eqref{eqn:conssquares} to get,
\[
D \left(3\left( x + \frac{(d+1)}{2} \right)^2 + \frac{(d-1)(d+1)}{4} \right) = y^n.
\]
Observe that the expression in brackets is never divisible by $3$. Hence
$r-1=\ord_3(D)=\ord_3(y^n)=n \ord_3(y)$, proving that
$n \mid (r-1)$.
\end{proof}
Applying Lemmata~\ref{lem:1} and~\ref{lem:2} allows us to eliminate $d=4$ and $d=9$,
and $d=8$ with $n \ge 3$. For the proof of Theorem~\ref{thm:main}, it remains to deal
with $d=6$, and also with $d=2$, $8$ for $n=2$.
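As a sanity check (ours, not part of the proof), a brute-force scan confirms that for the eliminated cases $d=4$ and $d=9$ the sum of consecutive squares is never a perfect power in a modest range (the exponent bound in the helper is an arbitrary cut-off):

```python
def square_sum(x, d):
    """(x+1)^2 + ... + (x+d)^2."""
    return sum((x + k)**2 for k in range(1, d + 1))

def is_perfect_power(m, nmax=10):
    """True if m = y^n for some integers y and 2 <= n <= nmax (nmax is our cut-off)."""
    if m < 1:
        return False
    for n in range(2, nmax + 1):
        y = round(m ** (1.0 / n))
        # guard against floating-point rounding of the n-th root
        if any((y + t)**n == m for t in (-1, 0, 1)):
            return True
    return False

hits = [(d, x) for d in (4, 9) for x in range(-10**4, 10**4)
        if is_perfect_power(square_sum(x, d))]
print(hits)  # [] -- consistent with the eliminations above
```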
\section{Case: $n=2$}
In this section, we deal with equation~\eqref{eqn:main} with $n=2$ and $d=2$, $6$, $8$.
First, we consider $d=2$. Then, equation \eqref{eqn:main} can be rewritten as
\[
(2x+3)^2-2y^2=-1.
\]
This yields the infinite family of solutions in Theorem~\ref{thm:main}.
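Members of this family are conveniently generated by exact integer arithmetic in ${\mathbb Z}[\sqrt{2}]$; a short sketch (ours):

```python
# The d = 2, n = 2 family: 2x+3 + y*sqrt(2) = (1+sqrt(2))^(2r+1)
def pell_unit(k):
    """Return (u, v) with u + v*sqrt(2) = (1 + sqrt(2))**k."""
    u, v = 1, 0
    for _ in range(k):
        u, v = u + 2*v, u + v   # multiply (u + v*sqrt(2)) by (1 + sqrt(2))
    return u, v

family = []
for r in range(1, 5):
    u, v = pell_unit(2*r + 1)
    x, y = (u - 3)//2, v        # u is odd for odd exponents, so x is an integer
    assert (x + 1)**2 + (x + 2)**2 == y**2
    family.append((x, y))
print(family)  # [(2, 5), (19, 29), (118, 169), (695, 985)]
```

In particular, $r=3$ gives $x=118$, $y=169=13^2$, which matches the quartic solution $(2,118,\pm 13,4)$ of Theorem~\ref{thm:main}.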
Now let $d=6$; we can rewrite equation~\eqref{eqn:main} as
\[
3(2x+7)^2+35=2y^2
\]
which is impossible as the left-hand side is $6 \pmod{8}$ and the right-hand side is $2 \pmod{8}$.
Finally let $d=8$; we can rewrite equation~\eqref{eqn:main} as
\[
2\left((2x+9)^2+21 \right)=y^2.
\]
Writing $y=2Y$, we obtain
\[
(2x+9)^2+21=2 Y^2.
\]
Again, we see that the left-hand side is $6 \pmod{8}$, yielding a contradiction.
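The two congruence arguments in this section can be confirmed by listing residues modulo $8$ (our check, with $x$ ranging over a full period):

```python
# Residues modulo 8 appearing in the d = 6 and d = 8 arguments
lhs6 = {(3*(2*x + 7)**2 + 35) % 8 for x in range(8)}
rhs6 = {(2*y**2) % 8 for y in range(1, 16, 2)}   # y odd, since y^2 = 6x^2+42x+91 is odd
lhs8 = {((2*x + 9)**2 + 21) % 8 for x in range(8)}
rhs8 = {(2*Y**2) % 8 for Y in range(8)}
print(lhs6, rhs6, lhs8, rhs8)  # the left- and right-hand residue sets are disjoint
```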
\section{The Case $d=6$}
It finally remains to solve equation~\eqref{eqn:main} for $d=6$. We suppose $n=p$ is an odd prime.
We rewrite equation~\eqref{eqn:main} as
\begin{equation}\label{eq:6squares}
X^2 + 3\times 5\times 7 = 6y^p,
\end{equation}
where $X=6x+21$. It is easy to see that $2,3,5,7 \nmid y$.
Let $K = {\mathbb Q}(\sqrt{-105})$ and write ${\mathcal O}_K={\mathbb Z}[\sqrt{-105}]$ for its ring of integers.
This has class group isomorphic to $({\mathbb Z}/2{\mathbb Z})^3$. We factorise the left-hand side
of equation~\eqref{eq:6squares} as
\[
(X + \sqrt{-105})(X - \sqrt{-105}) = 6y^p.
\]
It follows that
\[
(X + \sqrt{-105}){\mathcal O}_K = \mathfrak{p}_2\mathfrak{p}_3 \cdot \mathfrak{z}^p
\]
where $\mathfrak{p}_2$ and $\mathfrak{p}_3$ are the unique primes of ${\mathcal O}_K$ above $2$ and $3$ respectively, and $\mathfrak{z}$
is an ideal of ${\mathcal O}_K$.
Let ${\mathfrak{a}} = \mathfrak{p}_2\mathfrak{p}_3$. Then
\[
(X + \sqrt{-105}){\mathcal O}_K = \mathfrak{p}_2\mathfrak{p}_3 \cdot \mathfrak{z}^p ={\mathfrak{a}}^{1-p} \cdot ({\mathfrak{a}} \mathfrak{z})^p
=(6^{(1-p)/2})({\mathfrak{a}} \mathfrak{z})^p.
\]
It follows that ${\mathfrak{a}} \mathfrak{z}$ is a principal ideal. Write ${\mathfrak{a}}\mathfrak{z}=\gamma {\mathcal O}_K$ where $\gamma \in {\mathcal O}_K$,
and $\ord_{\mathfrak{p}_2}(\gamma)=\ord_{\mathfrak{p}_3}(\gamma)=1$.
After possibly changing the sign of $\gamma$ we obtain,
\begin{equation}\label{eqn:X}
X+\sqrt{-105}=\frac{\gamma^p}{6^{(p-1)/2}}.
\end{equation}
Subtracting the conjugate equation from this equation, we obtain
\begin{equation}\label{eq:cons6pp2}
\frac{\gamma^p}{6^{(p-1)/2}} - \frac{\bar{\gamma}^p}{6^{(p-1)/2}} = 2\sqrt{-105}.
\end{equation}
Let $L = {\mathbb Q}(\sqrt{-105}, \sqrt{6}) = {\mathbb Q}(\sqrt{-70}, \sqrt{6}) $.
Write ${\mathcal O}_L$ for the ring of integers of $L$ and let
\[
\alpha = \frac{\gamma}{\sqrt{6}} \qquad \text{and} \qquad
\beta = \frac{\bar{\gamma}}{\sqrt{6}}.
\]
Substituting into equation~\eqref{eq:cons6pp2}, we see that
\begin{equation}\label{eqn:70}
\alpha^p - \beta^p = \sqrt{-70}.
\end{equation}
\begin{lem}\label{lem:lehmer}
Let $\alpha$, $\beta$ be as above. Then
$\alpha$ and $\beta$ are algebraic integers. Moreover, $(\alpha + \beta)^2$ and $\alpha\beta$ are non-zero, coprime, rational integers,
and $\alpha/\beta$ is not a root of unity.
\end{lem}
\begin{proof}
Note that ${\mathfrak{a}} \cdot {\mathcal O}_L=\sqrt{6} {\mathcal O}_L$. As ${\mathfrak{a}} \mid \gamma$, $\overline{\gamma}$, we have
$\alpha$, $\beta \in {\mathcal O}_L$.
Now $\gamma=u+v \sqrt{-105}$ with $u$, $v \in {\mathbb Z}$. Thus
\[
(\alpha+\beta)^2=\frac{2u^2}{3}.
\]
But $\mathfrak{p}_3 \mid \gamma$ and $\mathfrak{p}_3 \mid \sqrt{-105}$, hence $\mathfrak{p}_3 \mid u$ and so $3 \mid u$. Therefore $(\alpha+\beta)^2 \in {\mathbb Z}$.
If $(\alpha+\beta)^2=0$ then $u=0$, and from equation~\eqref{eqn:X} we deduce that $X=0$, which is impossible since $X=6x+21$ is odd.
Therefore, $(\alpha+\beta)^2$ is a non-zero rational integer.
Furthermore, $\alpha \beta=(\gamma \overline{\gamma})/6$, which is clearly a non-zero rational integer.
We must check that $(\alpha+\beta)^2$ and $\alpha \beta$ are coprime. Suppose they are not coprime.
Then there is some prime ideal $\mathfrak{q}$ of ${\mathcal O}_L$ dividing both. This divides $\alpha$, $\beta$, and so
using equation~\eqref{eqn:70}, we see that $\ord_\mathfrak{q}(\sqrt{-70}) \ge p$ and arrive at a contradiction.
Finally we need to show that $\alpha/\beta=\gamma/\overline{\gamma} \in K={\mathbb Q}(\sqrt{-105})$ is not
a root of unity. But the only roots of unity in $K$ are $\pm 1$. If $\alpha/\beta=\pm 1$
then from equation~\eqref{eqn:70}, we obtain $0=\sqrt{-70}$ or $2 \alpha^p=\sqrt{-70}$, both giving
a contradiction.
\end{proof}
Continuing with the notation of the previous proof we have,
\[
\alpha-\beta=\frac{2v \sqrt{-105}}{\sqrt{6}}=v \sqrt{-70}.
\]
Therefore, equation~\eqref{eqn:70} gives $v= \pm 1$ and
\begin{equation}\label{eqn:thue}
\frac{\alpha^p - \beta^p}{\alpha - \beta} = \pm 1.
\end{equation}
To complete the proof, we need a famous theorem due to Bilu, Hanrot and Voutier \cite{BHV}.
Attached to a pair of algebraic numbers $\alpha$ and $\beta$ satisfying Lemma~\ref{lem:lehmer}
is a \textbf{Lehmer sequence} given by
\[
\tilde{u}_m=
\begin{cases}
(\alpha^m-\beta^m)/(\alpha-\beta) \qquad \text{$m$ odd}\\
(\alpha^m-\beta^m)/(\alpha^2-\beta^2) \qquad \text{$m$ even}.
\end{cases}
\]
A prime $q$ is said to be a \textbf{primitive divisor} for $\tilde{u}_m$ if it divides $\tilde{u}_m$
but does not divide $(\alpha^2-\beta^2)^2 \cdot \tilde{u}_1 \cdot \tilde{u}_2 \cdots \tilde{u}_{m-1}$.
The pair $(\alpha,\beta)$ is said to be \textbf{$m$-defective} if $\tilde{u}_m$ does not have a primitive divisor.
Observe that \eqref{eqn:thue} implies that $(\alpha,\beta)$ is $p$-defective.
By Theorem 1.4 of \cite{BHV}, if $m \ge 30$ then $\tilde{u}_m$ must have a primitive divisor. Thus
we know that $p<30$. To deal with primes in the range $7 \le p <30$ we need the results of Voutier \cite{Voutier}.
The only possible values of $p$ in that range for which $\tilde{u}_p$ has no primitive divisor are $p=7$, $13$,
and for these values Voutier gives the possibilities for $\alpha/\beta$. Examining his table quickly eliminates
these cases, as the listed values are incompatible with $\alpha/\beta=\gamma/\overline{\gamma} \in {\mathbb Q}(\sqrt{-105})$.
This proves the proposition.
It remains to deal with $p=3$ and $p=5$. We may rewrite equation~\eqref{eqn:70} as
\[
(u \pm \sqrt{-105})^p-(u \mp \sqrt{-105})^p=\sqrt{6}^p \cdot \sqrt{-70}.
\]
We merely check that these polynomial equations do not have roots in ${\mathbb Z}$.
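These root checks are short enough to mechanize. Writing $\gamma = u \pm \sqrt{-105}$ and expanding, the left-hand side equals $\pm 2\sqrt{-105}\,P_p(u)$ with $P_p(u)=\sum_{k \text{ odd}} \binom{p}{k} u^{p-k}(-105)^{(k-1)/2}$, while the right-hand side equals $6^{(p-1)/2}\cdot 2\sqrt{-105}$; so it suffices to check that $P_p(u)=\pm 6^{(p-1)/2}$ has no integer roots for $p=3,5$. A Python sketch of this check (ours, standard library only):

```python
from math import comb, isqrt

def P(p, u):
    # (u + s)^p - (u - s)^p = 2*s*P(p, u), where s = sqrt(-105)
    return sum(comb(p, k) * u ** (p - k) * (-105) ** ((k - 1) // 2)
               for k in range(1, p + 1, 2))

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

# p = 3: P(3, u) = 3u^2 - 105 = +-6 forces u^2 = 37 or 33.
assert not is_square(37) and not is_square(33)

# p = 5: P(5, u) = 5u^4 - 1050u^2 + 11025 = +-36; as a quadratic in
# t = u^2, its discriminant must be a perfect square, and it is not.
for rhs in (36, -36):
    disc = 1050 ** 2 - 4 * 5 * (11025 - rhs)
    assert not is_square(disc)

# sanity check of the expansion against the explicit polynomials
assert P(3, 2) == 3 * 4 - 105 and P(5, 2) == 5 * 16 - 1050 * 4 + 11025
```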
This completes the proof.
\section{Introduction}
Efficient, robust and high-fidelity two-qubit controlled-PHASE gate has become one of the central topics in the research frontier of quantum information with neutral atoms, which is not only important for quantum logic processing \cite{PhysRevLett.104.010503, PhysRevA.92.022336, PhysRevLett.119.160502}, but also crucial for quantum simulation \cite{nature24622} and quantum metrology \cite{nphoton.2011.35, RevModPhys.89.035002}. Rydberg blockade \cite{nphys1178, RevModPhys.82.2313, J.Phys.B.49.202001} emerges as one essential tool for this purpose, where the rapid progress in related studies over the past two decades, both theoretical and experimental, has already found many key advances in quantum information science and technology with neutral atoms \cite{PhysRevA.66.065403, PhysRevLett.99.260501, PhysRevLett.107.093601, PhysRevLett.107.133602, PhysRevLett.109.233602, Dudin887Science, PhysRevLett.110.103001, PhysRevLett.113.053601, PhysRevA.92.022336, PhysRevLett.117.223001, PhysRevLett.119.160502}. One prominent feature of neutral atoms is that they serve as not only good candidates for qubit registers, but also good choices for quantum interface with light, where Rydberg blockade has been deemed as a critical element on both sides \cite{PhysRevLett.109.233602, PhysRevLett.112.040501, PhysRevLett.115.093601, PhysRevLett.119.113601, PhysRevA.91.030301, Hao2015srep, PhysRevA.93.040303, PhysRevA.93.041802, PhysRevA.94.053830, PhysRevA.95.041801, OPTICA.5.001492, ISI:000457492900011}.
Ever since the seminal paper of Ref. \cite{PhysRevLett.85.2208}, which pioneered the field of quantum computing with neutral atoms, many mechanisms for constructing two-qubit controlled-PHASE gates via Rydberg blockade have been studied extensively. Typically, a fast and robust gate mechanism requires a relatively strong blockade shift. For those feasible gate protocols readily compatible with the currently mainstream atomic physics experimental platforms, it seems to us that they can be approximately divided into four categories. Category \RN{1}. Rydberg blockade gate with the so-called $\pi$-gap-$\pi$ pulse sequence, which comes from the initial gate designs in Ref. \cite{PhysRevLett.85.2208}. It attracts persistent theoretical interest and serves as the current mainstream blueprint for serious experimental efforts, although it requires individual site addressing. Recent progress has suggested that the gate operation can be performed on the order of several hundred nanoseconds \cite{PhysRevA.88.062337, PhysRevA.92.042710, PhysRevA.94.032306, PhysRevA.96.042306}. Nevertheless, gate fidelities reported from several labs are still at some distance from 99\%, which may be partly due to several potential shortcomings embedded in this type of protocol, including a stringent requirement on ground-Rydberg coherence. Category \RN{2}. Rydberg dressing, which was first conceived in the context of quantum gases \cite{PhysRevLett.85.1791}. The blockade effect can also be explored via Rydberg dressing of the ground state atoms \cite{PhysRevA.65.041803, PhysRevA.82.033412}, which can in turn yield a two-qubit controlled-PHASE gate protocol \cite{PhysRevA.91.012337, nphys3487}. However, it usually needs a relatively long gate time, which compromises the gate fidelity due to the finite Rydberg level lifetime. 
Besides its role in universal quantum computing, Rydberg dressing is suitable for implementing adiabatic quantum computation such as quantum annealing \cite{PhysRevA.87.052314}, and finds important applications in constructing multi-qubit quantum simulators \cite{nphys3835, nature24622}. Category \RN{3}. Rydberg anti-blockade gate \cite{PhysRevLett.98.023002, PhysRevLett.111.033607}. Such gate protocols usually require exact knowledge of the blockade shift \cite{PhysRevApplied.7.064017}, and are practically more sensitive to fluctuations of the relative motion between the two atoms. Category \RN{4}. Protocols with simplified pulse sequences but more theoretical compromises, whose best achievable fidelity is less than ideal but which are relatively straightforward to demonstrate experimentally. For example, Ref. \cite{EPL-113-4-40001} recently discussed such a gate protocol with a single square pulse driving the ground-Rydberg transition. The major challenge for those protocols is to improve the highest theoretical fidelity limit to fit the purposes of scalable or fault-tolerant quantum computing.
Over the years, intense efforts have been devoted to analyzing the performance of Rydberg blockade gates \cite{PhysRevA.72.022347, PhysRevA.77.032723, RevModPhys.82.2313, PhysRevA.94.032306, PhysRevA.96.042306}, where both the protocols' inherent physical limitations and technical imperfections have been taken into consideration. Very often, techniques of adiabatic passage \cite{PhysRevA.94.062307, PhysRevA.97.032701}, including STIRAP \cite{PhysRevA.89.030301, PhysRevA.90.033408}, are employed together with the $\pi$-gap-$\pi$ \cite{PhysRevA.96.042306} and Rydberg dressing \cite{PhysRevA.91.012337} gate protocols. Tuning the F\"{o}rster resonance with dc electric fields \cite{nphys3119} or microwave fields has also been anticipated to facilitate gate performance. Nevertheless, experimental fidelities from those two-qubit gate mechanisms seem relatively less optimistic at this moment, despite the overall rapid progress in this field. Therefore, there exists a strong demand for further exploration of gate protocols with potentially different recipes, which may overcome known inconveniences in existing protocols.
In this article, we report our recent progress in theoretically devising and analyzing a Rydberg blockade type of two-qubit controlled-PHASE gate protocol for neutral atoms, whose working principles rely upon atom-light interaction with a single off-resonant modulated laser pulse driving the ground-Rydberg transition. The modulation of the pulse waveform is engineered such that, within the required fidelity, both the control and target atoms return to their original states no matter whether the blockade takes place or not, while gaining the correct phases as required by the two-qubit gate. Approximately speaking, this type of protocol combines the advantages of the $\pi$-gap-$\pi$ gate and Rydberg dressing gates in a hybrid form, while avoiding shelving steady population in the Rydberg state of the control atom for a finite time gap during gate operation and gaining more robustness against noise. The rest of this article is organized as follows: first we present the basic mechanism of our gate protocol, then we analyze and discuss the results, and finally we conclude the article. Relevant technical details, specifics of derivations, and extra examples are included in the supplementary material.
\section{Basic mechanism}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{basic_illustration_0.png}
\caption{Schematic of the atomic structure for the Rydberg phase gate under investigation. On the left: the relevant atomic states including the Rydberg blockade between $|r\rangle$ and $|r'\rangle$, where the lasers drive $|0\rangle \leftrightarrow |r\rangle$ on the control atom and $|0\rangle \leftrightarrow |r'\rangle$ on the target atom; on the right: under the ideal blockade situation, the linkage pattern for states participating in the ground-Rydberg transitions $|01\rangle, |10\rangle$ and $|00\rangle$. See the Morris-Shore transform in Ref. \cite{PhysRevA.27.906} for a better explanation of linkage structures. State $|11\rangle$ does not participate in the prescribed interactions and stays unchanged throughout the process. The Rydberg states $|r\rangle$ and $|r'\rangle$ may be the same or different, depending on the choice of the F\"orster resonance structure. }
\label{fig:basic_0}
\end{figure}
We start with the basic ingredients, where the relevant atomic states of the atom-light interaction are shown in Fig. \ref{fig:basic_0}. The qubit basis states of the atoms may be represented by a pair of long-lived hyperfine ground clock states for typical alkali atoms, which can be manipulated by an external microwave field or an optical stimulated Raman transition \cite{PhysRevLett.114.100503, PhysRevA.92.022336, J.Phys.B.49.202001}. Modulated laser pulses will be applied to drive the ground-Rydberg transitions of the control and target atoms. The consequences of the required operation can be abstracted into two aspects: the boomerang condition that the population returns with unity probability, and the antithesis condition that the accumulated phases achieve the controlled-Z (C-Z) gate result. When combined with a local Hadamard gate on the target qubit atom ($\pi/2$ rotation for transition $|0\rangle \leftrightarrow |1\rangle$) before and after the controlled-PHASE gate, this leads to the universal controlled-NOT gate \cite{PhysRevLett.85.2208, RevModPhys.82.2313, J.Phys.B.49.202001}. If $|r\rangle, |r'\rangle$ are the same state, then individual atom addressing may not be mandatory and the experiment can be operated with one single laser. For simplicity, throughout this article the condition of symmetric driving will be presumed, namely both qubit atoms receive the same Rabi frequency and detuning in their effective ground-Rydberg transition couplings.
Assume the presence of ideal Rydberg blockade such that the double Rydberg excitation $|rr'\rangle$ is impossible. Defining the state $|R\rangle = (|r0\rangle+|0r'\rangle)/\sqrt{2}$, there exist three types of couplings: $|10\rangle\leftrightarrow|1r'\rangle$, $|01\rangle\leftrightarrow|r1\rangle$ with Rabi frequency $\Omega_s$, and $|00\rangle \leftrightarrow |R\rangle$ with Rabi frequency $\sqrt{2}\Omega_s$, as can be seen in the linkage structure of Fig. \ref{fig:basic_0}. The goal is to find an atom-light interaction process such that the induced changes in the wave functions conform to the two-qubit phase gate after the interaction is over. More specifically, let $(C_0, C_r)$ denote the wave function for the ground-Rydberg transition of $|10\rangle$ or $|01\rangle$, and let $(X_0, X_R)$ denote the wave function for the ground-Rydberg transition of $|00\rangle$. The problem reduces to determining proper and feasible $\Omega_s, \Delta$ for the time evolution:
\begin{subequations}
\label{ideal_blockade_eom}
\begin{align}
i\frac{d}{dt} \begin{bmatrix}C_0\\C_r\end{bmatrix}
=
\begin{bmatrix}
0 & \frac{1}{2}\Omega_s \\
\frac{1}{2}\Omega_s^* & \Delta
\end{bmatrix}
\cdot
\begin{bmatrix}C_0\\C_r\end{bmatrix};\\
i\frac{d}{dt} \begin{bmatrix}X_0\\X_R\end{bmatrix}
=
\begin{bmatrix}
0 & \frac{\sqrt{2}}{2}\Omega_s \\
\frac{\sqrt{2}}{2}\Omega_s^* & \Delta
\end{bmatrix}
\cdot
\begin{bmatrix}X_0\\X_R\end{bmatrix}.
\end{align}
\end{subequations}
It turns out that appropriate solutions may be obtained, where the practical task becomes to find them and examine their properties. Our tactics involve carefully refining modulations obtained from heuristic approaches. More specifically, first we design waveforms under the assumption of a perfectly adiabatic time evolution process in Eq. \eqref{ideal_blockade_eom}, and then perform optimizations to suppress the non-adiabaticity effects \cite{SuppInfo}, where numerical tools serve an essential role in this process.
Apart from technical noise, we think that two main types of intrinsic errors exist: the population leakage error due to spontaneous emission of the Rydberg levels during the interaction, and the rotation error due to the less-than-ideal Rydberg blockade with double Rydberg excitation. Nevertheless, with properly tailored smooth pulses, the mechanism of adiabatically tracking the two-atom dark state gets implicitly triggered under the presence of the dipole-dipole exchange interaction $|rr'\rangle \leftrightarrow |pp'\rangle$ \cite{SuppInfo}. Therefore, the rotation error will be suppressed, as we may observe in later discussions.
\begin{figure}[b]
\centering
\fbox{\includegraphics[width=0.95\linewidth]{hybrid_modulation_0.png}}
\caption{Numerical simulation of the time evolution. The modulation is configured as stated in the text, while $B=2\pi \times 500$ MHz, $\delta_p = 2\pi \times -3$ MHz. The first graph shows the waveform, the second graph shows the population on different atomic states, while the last graph shows the phase accumulation of the atomic wave function during the process. The target operation is a C-Z gate, and the fidelity of this example is 0.99997. To evaluate fidelity, we first numerically calculate the outcome wave functions from the time evolution. Then, with respect to the four basis states $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, we acquire the 4 by 4 transform matrix $U$ representing our gate operation. The fidelity may then be calculated as $F = ( \text{Tr}(MM^\dagger) + |\text{Tr}(M)|^2 )/20$, where $M = U_\text{C-Z}^\dagger U$ and $U_\text{C-Z}$ is the transform matrix of an ideal C-Z gate.}
\label{fig:hybrid_modulation_0}
\end{figure}
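The fidelity measure quoted in the caption of Fig.~\ref{fig:hybrid_modulation_0} is straightforward to implement. The short sketch below is our own illustration (not the authors' code), with the ideal C-Z matrix written as $\mathrm{diag}(-1,1,1,1)$ to match this protocol's phase convention of $\pi$ on $|00\rangle$; it reproduces $F=1$ for an ideal gate.

```python
import numpy as np

def cz_fidelity(U):
    """F = (Tr(M M^dag) + |Tr M|^2) / 20 with M = U_CZ^dag U,
    evaluated on the basis |00>, |01>, |10>, |11>."""
    U_cz = np.diag([-1.0, 1.0, 1.0, 1.0])  # pi phase on |00>, per this protocol
    M = U_cz.conj().T @ U
    return float((np.trace(M @ M.conj().T) + abs(np.trace(M)) ** 2).real) / 20

print(cz_fidelity(np.diag([-1.0, 1.0, 1.0, 1.0])))  # 1.0
print(cz_fidelity(np.eye(4)))                       # 0.4 (identity is far from C-Z)
```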
\section{Results and discussions}
For $|01\rangle$ and $|10\rangle$, the dynamics amounts to nothing more than a two-level system made from the ground-Rydberg transition with time-dependent Rabi frequency $\Omega_s(t)$ and detuning $\Delta(t)$. On the other hand, for $|00\rangle$, the dynamics actually probes the Rydberg dipole-dipole interaction, whose linkage pattern may be summarized as $|00\rangle \leftrightarrow |R\rangle \leftrightarrow |rr'\rangle \leftrightarrow |pp'\rangle$. In order to quantitatively describe the F\"{o}rster resonance structure of $|rr'\rangle \leftrightarrow |pp'\rangle$, we assume that the coupling strength is $B$ and the small F\"{o}rster energy penalty term is $\delta_p$ for $|pp'\rangle$. The interaction Hamiltonian is then:
\begin{eqnarray}
\label{eq:Rydberg_Hamiltonian}
H_I/\hbar = &\frac{\sqrt{2}}{2}\Omega_s|R \rangle \langle 00| + \frac{\sqrt{2}}{2}\Omega_s|rr'\rangle \langle R| + \text{H.c.}
\nonumber\\
&+ \Delta |R\rangle\langle R| + 2\Delta |rr'\rangle\langle rr'|,
\end{eqnarray}
where we have already included the rotating wave approximation. We include the Rydberg blockade as:
\begin{equation}
\label{eq:blockade_Hamiltonian}
H_F/\hbar = B|pp'\rangle \langle rr'\rangle + \text{H.c.}
+ \delta_p |pp'\rangle\langle pp'|.
\end{equation}
Following the prescribed recipes, we have obtained two categories of waveforms that yield a two-qubit phase gate with satisfying performance. One of them requires amplitude and frequency modulation simultaneously, while the other requires only amplitude modulation with a constant detuning.
For the waveform with both amplitude and frequency modulation, we make the design goal a little stricter than necessary, such that the aim is a standard C-Z gate where the population returns with a phase change of 0 for $|01\rangle$ and $|10\rangle$ and a phase change of $\pi$ for $|00\rangle$ after the interaction. Beginning with heuristic approaches \cite{SuppInfo}, we find that sinusoidal modulations fit this category well after refining the time evolution details. In particular, we have selected a set of waveforms described in the following:
\begin{subequations}
\label{eq:waveform_v7e}
\begin{align}
\Omega_s (t) = \Omega_0 + \Omega_1 \cos(2\pi t/T_g) + \Omega_2 \sin(\pi t/T_g); \\
\Delta (t) = \Delta_0 + \Delta_1 \cos(2\pi t/T_g) + \Delta_2 \sin(\pi t/T_g).
\end{align}
\end{subequations}
Via optimizations under the prescribed constraints, we have identified a set of values as: $\Omega_0=2.564, \Omega_1=0.950, \Omega_2=0.116, \Delta_0=1.004, \Delta_1=-1.093, \Delta_2=-0.002$; all coefficient units are MHz, and the gate time $T_g$ is set as 1 $\mu$s. To demonstrate the detailed dynamics of the system with respect to the Hamiltonian of Eq. \eqref{eq:Rydberg_Hamiltonian}, we present the numerical simulation results in Fig. \ref{fig:hybrid_modulation_0} without considerations for spontaneous emissions of Rydberg levels and technical noises. The modulation does not involve unreasonable high frequency components, and the atomic wave function does not go through `sudden' change during the course of gate operation. We have intentionally chosen a symmetric waveform, which simplifies the calculation process but is not mandatory. For dynamics associated with $|00\rangle$, although the situation is more complicated than the two-level system considered in Eq. \eqref{ideal_blockade_eom}, a clear signature is that the population returns almost ideally with no significant portion trapped in the Rydberg level, thanks to the adiabatic dark state driving mechanism in Rydberg blockade effect \cite{SuppInfo}.
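To make the dynamics concrete, the following Python sketch (ours, not the authors' code) integrates the effective two-level equations of Eq.~\eqref{ideal_blockade_eom} under the waveform of Eq.~\eqref{eq:waveform_v7e} with the quoted coefficients. We assume the coefficients are frequencies in MHz entering the Hamiltonian as $2\pi\times$ their values; this units convention is our guess, so we only illustrate the propagation scheme here.

```python
import numpy as np

# Waveform coefficients quoted in the text (units: MHz); the 2*pi
# conversion below is our assumed convention, not stated explicitly.
O0, O1, O2 = 2.564, 0.950, 0.116
D0, D1, D2 = 1.004, -1.093, -0.002
Tg = 1.0  # gate time in microseconds

def pulse(t):
    s = 2 * np.pi  # MHz -> rad/us (assumed convention)
    Om = s * (O0 + O1 * np.cos(2 * np.pi * t / Tg) + O2 * np.sin(np.pi * t / Tg))
    De = s * (D0 + D1 * np.cos(2 * np.pi * t / Tg) + D2 * np.sin(np.pi * t / Tg))
    return Om, De

def evolve(enhance=1.0, steps=4000):
    """Propagate (C_0, C_r) of Eq. (1); enhance = sqrt(2) gives the
    |00> channel under ideal blockade."""
    psi = np.array([1.0 + 0j, 0.0])
    dt = Tg / steps
    for k in range(steps):
        Om, De = pulse((k + 0.5) * dt)          # midpoint rule
        H = np.array([[0.0, enhance * Om / 2],
                      [enhance * Om / 2, De]])
        w, V = np.linalg.eigh(H)                # exact propagator per step
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    return psi

for g in (1.0, np.sqrt(2.0)):
    psi = evolve(g)
    print(g, abs(psi[0]) ** 2, np.angle(psi[0]))
```

If the assumed units convention is right, both channels should show near-unit return population; we deliberately do not assert specific populations or phases, since the convention is our guess.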
For the other category of only amplitude modulation, it is preferred that the pulse starts and ends at zero intensity. Among several candidate waveforms, we are particularly interested in the ones of relatively less complexities, such as:
\begin{subequations}
\label{eq:waveform_v22a}
\begin{align}
&\Omega_s (t) = \sum_{\nu=1}^{4} \beta_\nu \big(b_{\nu, n}(t/T_g) + b_{n-\nu, n}(t/T_g) \big); \\
&\Delta (t) = \Delta_0 \equiv \textit{ constant};
\end{align}
\end{subequations}
where $b_{\nu, n}$ is the $\nu$th Bernstein basis polynomial of degree $n$ \cite{SuppInfo}; we set $n=8$ and again intentionally configure a symmetric waveform. The result we pursue is in fact a controlled-PHASE gate, and a local single-qubit phase rotation is required if we want to convert it into a C-Z gate. The associated phase constraint is:
\begin{equation}
\label{C-PHASE_constraint}
\phi_{11} = \pm \pi - \phi_{00} + \phi_{01} + \phi_{10},
\end{equation}
where $\phi_{01} + \phi_{10} - \phi_{00} = \pm\pi$ if $\phi_{11} = 0$.
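The constraint in Eq.~\eqref{C-PHASE_constraint} reflects the fact that the combination $\phi_{00}+\phi_{11}-\phi_{01}-\phi_{10}$ is invariant under single-qubit $z$ rotations and equals $\pm\pi$ for a C-Z gate. A small Python check (ours, illustrative only) confirms that a diagonal gate obeying Eq.~\eqref{C-PHASE_constraint} is mapped to C-Z by local phase rotations:

```python
import numpy as np

def reduce_to_cz(phi00, phi01, phi10):
    # pick phi11 via Eq. (5) (the +pi branch)
    phi11 = np.pi - phi00 + phi01 + phi10
    U = np.diag(np.exp(1j * np.array([phi00, phi01, phi10, phi11])))
    # undo a global phase tg and single-qubit z rotations ta, tb
    tg, ta, tb = -phi00, phi00 - phi10, phi00 - phi01
    L = np.diag(np.exp(1j * np.array([tg, tg + tb, tg + ta, tg + ta + tb])))
    return L @ U

V = reduce_to_cz(0.3, -1.2, 0.7)
print(np.round(V.real, 6))  # diag(1, 1, 1, -1): a standard C-Z gate
```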
Next, we seek a set of values leading to appropriate phase gate performance. After optimization efforts, with the gate time $T_g$ set as 1 $\mu$s, we have reached a set of satisfying parameters, $\beta_1=1.419, \beta_2=0, \beta_3=5.076, \beta_4=13.425, \Delta_0=-3.512$; all coefficient units are MHz. The corresponding numerical simulation is shown in Fig. \ref{fig:amplitude_modulation_0}. The singly-excited Rydberg state $|R\rangle$ is not heavily populated throughout the interaction process, which does not share the same behavior as the obvious feature of quantum Rabi oscillation in Fig. \ref{fig:hybrid_modulation_0}. This is due to the difference in the underlying physical mechanisms between those two cases: Fig. \ref{fig:hybrid_modulation_0} shares similarities with a typical quantum Rabi oscillation while Fig. \ref{fig:amplitude_modulation_0} shares similarities with adiabatic rapid passage, as may be observed from their paths on the Bloch sphere. Nevertheless, both approaches are suitable choices for the purpose of a two-qubit phase gate.
\begin{figure}[t]
\centering
\fbox{\includegraphics[width=0.95\linewidth]{amplitude_modulation_0.png}}
\caption{Numerical simulation of the time evolution with the amplitude modulated pulse. The waveform is set as Eq. \eqref{eq:waveform_v22a}, while $B=2\pi \times 500$ MHz, $\delta_p = 2\pi \times -3$ MHz. The first graph shows the waveform, the second graph shows the population on different atomic states, while the last graph shows the phase accumulation of the atomic wave function during the process. After appropriate local phase rotations to adjust it to a standard C-Z gate, its gate error $\mathcal{E}$ is well below $1\times10^{-5}$, defined as $\mathcal{E} = 1- F$. Spontaneous emission of the Rydberg levels and technical noises are not considered here.}
\label{fig:amplitude_modulation_0}
\end{figure}
We observe that the mechanism of adiabatically tracking the two-atom dark state also plays an essential role here \cite{SuppInfo}. The amplitude modulation not only introduces the correct change in the atomic wave functions for a phase gate with respect to Eq. \eqref{ideal_blockade_eom} and Eq. \eqref{C-PHASE_constraint}, but also helps to suppress the rotation error. In other words, major limitations on the attainable fidelity are anticipated to come mostly from spontaneous emission, modulation imperfections and technical noises.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{fidelity_mcwf_0c.png}
\caption{Numerical simulation of the gate error $\mathcal{E}$ with the gate time set as 1 $\mu$s and the waveform set as in Fig. \ref{fig:amplitude_modulation_0}. Each data point is extracted from 200,000 MCWF trajectories. Two values of $B$ are considered, all with $\delta_p = 2\pi \times -3$ MHz. For simplicity, the spontaneous decay rates of all Rydberg states are taken to be the same. Fitting to straight lines indicates that the linear relation between the Rydberg decay rate and the gate error holds well for the parameter range we are interested in.}
\label{fig:fidelity_mcwf_0}
\end{figure}
Furthermore, we need to estimate the influence of the major intrinsic error source, spontaneous emission. We carry out numerical evaluations by resorting to the quantum jump approach \cite{PhysRevLett.68.580, RevModPhys.70.101}, also known as Monte-Carlo wave function (MCWF). The result is shown in Fig. \ref{fig:fidelity_mcwf_0}, where we compute the gate error as a function of the Rydberg decay rates. We deduce that in principle the Rydberg levels' spontaneous emission is the dominant theoretical limiting factor in achieving high fidelity, provided the Rydberg blockade shift is strong enough.
We have also investigated the influence on gate performance of realistic imperfections commonly encountered in experiments, including amplitude fluctuations in the laser pulse, residual thermal motion of the cold atoms and laser power imbalance at the two qubit sites \cite{SuppInfo}. We observe that the gate protocol is robust against these types of disturbance as long as they are kept at a reasonably low level. When considering the off-resonant driving of the ground-Rydberg transition, we have concentrated on the few states that are directly involved in the mechanism. Realistically, the situation is more complicated due to the atom's many other levels, which may introduce various sources of extra ac Stark shifts and decoherence \cite{PhysRevA.72.022347, PhysRevA.94.032306}.
Several major characteristics are worth mentioning here. The protocol may work without individual addressing of the qubit atoms. No microwave is required to drive Rydberg-Rydberg transitions, and hence it avoids the complications of microwave electronic equipment and antennas. It does not require exact knowledge of the magnitude of the Rydberg blockade shift. Its working condition does not involve far off-resonant detuning as in Rydberg dressing, and therefore the gate may be designed for fast operation below 1 $\mu$s with respect to realistic experimental apparatus parameters. Carrying out a two-qubit entangling gate within a single continuously shaped driving pulse has already become common practice on other platforms such as superconducting qubits \cite{PhysRevLett.107.080502}, and we think it is beneficial to design a counterpart for the neutral atom platform.
\section{Conclusion and outlook}
We have systematically presented our recent results in designing a two-qubit controlled-PHASE Rydberg blockade gate protocol for neutral atoms via off-resonant modulated driving within a single pulse. In principle, the same guidelines may be extended to help construct a generic three-qubit gate such as the Toffoli gate.
On the other hand, we believe that the result is not unique and a full characterization of accessible solutions remains an open problem. We are also looking forward to a few other future refinements, including the search for a faster gate operation, further suppression of population leakage, stronger robustness against environmental noises, and more user-friendly parameter setting. Error correction mechanism \cite{PhysRevLett.117.130503} for our gate protocol is also part of the long term goal.
For applications with readily available hardware in the immediate future \cite{PhysRevA.92.022336, PhysRevLett.119.160502}, we expect that $\gtrsim 99$\% fidelity may be obtained for a two-qubit gate in 1D, 2D or 3D atomic arrays with a gate time of less than 1 $\mu$s. Our expectation for the Rydberg blockade gate is that high-fidelity ground-Rydberg Rabi oscillation shall be directly translated into a high-fidelity controlled-PHASE gate. We also anticipate that our work will help the endeavors toward the ensemble qubit approach \cite{PhysRevLett.115.093601, PhysRevLett.119.180504} and the Rydberg-mediated atom-photon controlled-PHASE gate \cite{Hao2015srep, PhysRevA.93.040303, PhysRevA.94.053830, OPTICA.5.001492}.
\begin{acknowledgements}
The authors gratefully acknowledge the funding support from the National Key R\&D Program of China (under contract Grant No. 2016YFA0301504 and No. 2016YFA0302800). The authors also acknowledge the hospitality of Key Laboratory of Quantum Optics and Center of Cold Atom Physics, Shanghai Institute of Optics and Fine Mechanics. The authors gratefully thank the help from Professor Liang Liu, Professor Mingsheng Zhan and Professor Mark Saffman who essentially make this work possible. The authors also thank Professor Xiaodong He, Professor Dongsheng Ding and Professor Tian Xia for enlightening discussions.
\end{acknowledgements}
\bibliographystyle{apsrev4-1}
\renewcommand{\baselinestretch}{1}
\normalsize

\section{Introduction}
At present, large-scale image datasets play a critical role in the deep learning area. ImageNet\cite{imagenet_cvpr09}, COCO\cite{COCO}, and PASCAL VOC\cite{VOC} have brought modern deep learning technology into a new stage.
Bird species classification is a difficult problem, not only because of the similarity between different bird species, but also because the same species may have more than one type of plumage at different times or in different areas. Besides, pictures of birds usually show different poses and actions (e.g., birds in the water, in flight, or perching).
We present DongNiao International Birds 10000 (DIB-10K), a dataset that contains not only a large number of species but also many different poses and gestures of birds. By aggregating a tremendous number of bird images, this dataset would push the limits of the visual abilities of machine learning technologies. Currently, the whole dataset can be downloaded from the DongNiao website\footnote{\url{http://ca.dongniao.net/download.html}}.
\section{Dataset Specification and Collection}
DongNiao International Birds 10000 contains 4,876,536 thumbnails of 10,922 different bird species. It complies with the IOC 10.1 taxonomy\cite{IOC10} and covers all bird species in the world. The size of each thumbnail is 300 x 300 pixels; each contains only one bird, located at the centre of the thumbnail.
Images were crawled from the internet and then filtered by human review. After that, all images were processed through a bird-detection tool written using Tensorflow\cite{abadi2016tensorflow} with the Faster R-CNN\cite{FasterRCNN} model. Each bird in an image is cropped out and resized to fit in a square 300 x 300 pixel thumbnail.
The thumbnails were not stretched to the square directly; instead, only one edge was expanded or shrunk to 300 pixels, to keep the birds' aspect ratio. This step might cause black ``frames'' in the thumbnails, as Fig.~\ref{fig:samples} shows.
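As an illustration of this cropping-and-padding step (our own sketch; the actual pipeline code is not published here), the following pure-Python function scales the longer edge to 300 pixels, keeps the aspect ratio, and centres the result on a black square canvas, producing the black frames described above:

```python
def to_thumbnail(pixels, size=300):
    """pixels: list of rows of RGB tuples. Scale the longer edge to
    `size` via nearest-neighbour sampling, then centre the result on a
    black size x size canvas (the black 'frames' mentioned above)."""
    h, w = len(pixels), len(pixels[0])
    scale = size / max(w, h)
    nw, nh = max(1, round(w * scale)), max(1, round(h * scale))
    resized = [[pixels[min(h - 1, int(y / scale))][min(w - 1, int(x / scale))]
                for x in range(nw)] for y in range(nh)]
    canvas = [[(0, 0, 0)] * size for _ in range(size)]
    ox, oy = (size - nw) // 2, (size - nh) // 2
    for y in range(nh):
        canvas[oy + y][ox:ox + nw] = resized[y]
    return canvas

# a 600 x 400 solid-colour test image becomes a 300 x 300 thumbnail
src = [[(200, 120, 40)] * 600 for _ in range(400)]
thumb = to_thumbnail(src)
print(len(thumb), len(thumb[0]))  # 300 300
```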
To get the correct labels or species for images from the internet, we used image search engines to search for images with the scientific and/or common names of each species as the keys. Images were filtered by checking the titles of the pictures; all images whose titles contained neither a scientific name nor a common name were dropped.
Three features distinguish the DIB-10K dataset from popular image datasets such as ImageNet\cite{imagenet_cvpr09} and COCO\cite{COCO}.
{\bf Bird centred:} In every thumbnail, the bird is in the central position. This makes the dataset well suited for classification but not for object detection.
{\bf Large number of categories:} DIB-10K has more than 10,000 categories, far more than CUB-200\cite{CUB200}'s 200 species and ILSVRC\cite{ILSVRC15}'s 1,000 categories. It is currently the largest bird dataset.
{\bf Severely unbalanced:} Different bird species have different population. Also, some birds are more difficult than others to take pictures of. Hence the different categories of DIB-10K have different numbers of pictures, as Fig ~\ref{fig:dist} shows. For example, the {\it Western Jackdaw} has more than four thousand images but the {\it Mauritius Blue Pigeon}, which has already extinguished, has only 20 images. This brings a big challenge to build a machine learning classification system.
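A standard mitigation for such imbalance during training is inverse-frequency class weighting in the loss; the sketch below is a generic heuristic (our illustration, not part of the DIB-10K release), using the two species above as an example:

```python
def class_weights(counts):
    """Inverse-frequency weights w_i = N / (K * n_i), where N is the
    total number of images and K the number of classes; rare species
    then contribute to the loss as much as common ones."""
    total, k = sum(counts), len(counts)
    return [total / (k * c) for c in counts]

# Western Jackdaw (~4000 images) vs. Mauritius Blue Pigeon (20 images):
# the rare class is up-weighted by the ratio of the counts (200x).
weights = class_weights([4000, 20])
```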
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Dataset & No. of images & No. of categories \\
\hline\hline
ImageNet & 14 million & 20,000 \\
COCO & 328,000 & 91 \\
PASCAL VOC & 20,000 & 20 \\
CUB-200 & 6033 & 200 \\
DIB-10K & 4,800,000 & 10,000 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of our dataset with other popular ones}
\end{table}
\begin{figure*}
\caption{Distribution of the number of images per species, for 65 randomly chosen categories}
\begin{center}
\includegraphics[width=\linewidth]{birds_dist}
\end{center}
\label{fig:dist}
\end{figure*}
\section{Conclusion}
DIB-10K has a total of 4,876,536 images over 10,922 bird species. It covers all the bird species in the world, making it an interesting and challenging image dataset for fine-grained classification in both ornithology and machine learning.
\begin{figure*}
\caption{Five example bird images from each of several randomly chosen categories}
\begin{center}
\includegraphics[width=\linewidth, height=\textheight]{18birds_0}
\end{center}
\label{fig:samples}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth, height=\textheight]{18birds_1}
\end{center}
\end{figure*}
{\small
\bibliographystyle{ieee_fullname}
\section*{Abstract}
\input{abstract}
\section*{Introduction}
\input{intro}
\section*{Data reduction}
\input{reduc}
\section*{Flavor ratio}
\input{ratio}
\section*{Zenith angle dependence}
\input{zenith}
\input{Zsys}
\section*{Conclusions}
\input{conclusion}
\newcommand\acknowledgments{\begin{small}\section*{Acknowledgments}\end{small}}
\newcommand\altaffilmark[1]{$^{#1}$}
\newcommand\altaffiltext[1]{$^{#1}$}
\voffset=-0.6in
\title[Turbulent Grain Clustering]{A Theory of Grain Clustering in Turbulence: The Origin and Nature of Large Density Fluctuations\vspace{-0.5cm}}
\author[Hopkins]{
\parbox[t]{\textwidth}{
Philip F. Hopkins\altaffilmark{1,2}\thanks{E-mail:[email protected]}}
\vspace*{6pt} \\
\altaffiltext{1}{TAPIR, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125, USA} \\
\altaffiltext{2}{Department of Astronomy and Theoretical Astrophysics Center, University of California Berkeley, Berkeley, CA 94720\vspace{-1.1cm}} \\
}
\date{Submitted to MNRAS, August, 2013\vspace{-0.6cm}}
\begin{document}
\maketitle
\label{firstpage}
\begin{abstract}
\vspace{-0.2cm}
We propose a theory for the density fluctuations of aerodynamic grains, embedded in a (high Reynolds number) turbulent, gravitating gas disk. The theory combines calculations for the average behavior of a group of grains encountering a single turbulent eddy, with a hierarchical description of the eddy velocity statistics. We show that this makes analytic predictions for a wide range of quantities, including: the distribution of volume-average grain densities, the power spectrum and correlation functions of grain density fluctuations, and the maximum volume density of grains reached. For each, we predict how these scale as a function of grain stopping/friction time $t_{\rm s}$, spatial scale, grain-to-gas mass ratio $\tilde{\rho}$, strength of the turbulence $\alpha$, and detailed disk properties (orbital frequency $\Omega$, gas density gradients, sound speed, etc.). We test these predictions against numerical simulations with externally driven (magneto-rotational, Kelvin-Helmholtz, or ``forced'') or self-driven (streaming instability) turbulence. The simulations agree well with the analytic predictions, spanning a range of $t_{\rm s}\,\Omega \sim 10^{-4}-10$, $\tilde{\rho}\sim0-3$, $\alpha\sim 10^{-10}-10^{-2}$, with and without vertical stratification or grain-gas back-reaction, and in different numbers of spatial dimensions. Results from ``turbulent concentration'' simulations and laboratory experiments are also predicted as a special case (for Stokes and Reynolds numbers $\gg1$). We predict that vortices on a wide range of scales act to disperse and concentrate grains hierarchically (even if the gas is incompressible); for small grains this is most efficient in eddies with turnover time comparable to the stopping time, as expected. But for large grains, shear and gravity are important and lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. 
Small-scale fluctuations are also damped by bulk drift velocities and partial de-coupling of gas and grain turbulent velocities. The grain density distribution is driven to a log-Poisson shape, with fluctuations for large grains up to $\gtrsim1000$ times the mean density. We predict much smaller grains will also experience large fluctuations, but on small scales not resolved in most simulations. We provide simple analytic expressions for the important predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.
\end{abstract}
\begin{keywords}
planets and satellites: formation --- protoplanetary discs --- accretion, accretion disks --- hydrodynamics --- instabilities --- turbulence
\vspace{-1.0cm}
\end{keywords}
\vspace{-1.1cm}
\section{Introduction}
\label{sec:intro}
Dust grains and aerodynamic particles are fundamental in astrophysics. These determine the attenuation and absorption of light in the interstellar medium (ISM), interaction with radiative forces and regulation of cooling, and form the building blocks of planetesimals. Of particular importance is the question of grain clustering and clumping -- fluctuations in the local volume-average number/mass density of grains $\rho\grain$ -- in turbulent gas.
Much attention has been paid to the specific question of grain density fluctuations and grain concentration in proto-planetary disks. In general, turbulence sets a ``lower limit'' to the degree to which grains can settle into a razor-thin sub-layer; and this has generally been regarded as a barrier to planetesimal formation \citep[though see][and references therein]{goodman.pindor:2000.secular.drag.instabilities.grains,lyra:2009.semianalytic.planet.form.model.grain.settling,lee:2010.grain.settling.vs.grav.instability,chiang:2010.planetesimal.formation.review}. However, it is also well-established that the number density of solid grains can fluctuate by multiple orders of magnitude when ``stirred'' by turbulence, even in media where the turbulence is highly sub-sonic and the gas is nearly incompressible \citep[see e.g.][]{bracco:1999.keplerian.largescale.grain.density.sims,cuzzi:2001.grain.concentration.chondrules,johansen:2007.streaming.instab.sims,carballido:2008.large.grain.clustering.disk.sims,bai:2010.grain.streaming.sims.test,bai:2010.streaming.instability.basics,bai:2010.grain.streaming.vs.diskparams,pan:2011.grain.clustering.midstokes.sims}. This can occur via self-excitation of turbulent motions in the ``streaming'' instability \citep{youdin.goodman:2005.streaming.instability.derivation}, or in externally driven turbulence, such as that excited by the magneto-rotational instability (MRI), global gravitational instabilities, or convection \citep{dittrich:2013.grain.clustering.mri.disk.sims,jalali:2013.streaming.instability.largescales}. Direct numerical experiments have shown that the magnitude of these fluctuations depends on the parameter $\tau_{\rm s}=t_{\rm s}\,\Omega$, the ratio of the gas ``stopping'' time (friction/drag timescale) $t_{\rm s}$ to the orbital time $\Omega^{-1}$, with the most dramatic fluctuations around $\tau_{\rm s}\sim1$. 
These experiments have also demonstrated that the magnitude of clustering depends on the volume-averaged ratio of solids-to-gas ($\tilde{\rho}\equiv \rho\grain/\rho_{\rm g}$), and basic properties of the turbulence (such as the Mach number). These have provided key insights and motivated considerable work studying these instabilities; however, the fraction of the relevant parameter space spanned by direct simulations is limited. Moreover, it is impossible to simulate anything close to the full dynamic range of turbulence in these systems: the ``top scales'' of the system are $\scalevar_{\rm max}\sim$\,AU, while the viscous/dissipation scales $\lambda_{\nu}$ of the turbulence are $\lambda_{\nu}\sim$\,m (Reynolds numbers $Re\sim10^{6}-10^{9}$, under typical circumstances). Reliably modeling $Re\gtrsim100$ remains challenging in state-of-the-art simulations. Clearly, some analytic understanding of these fluctuations would be tremendously helpful.
The question of ``preferential concentration'' of aerodynamic particles is actually much better studied in the terrestrial turbulence literature. There, both laboratory experiments \citep{squires:1991.grain.concentration.experiments,fessler:1994.grain.concentration.experiments,rouson:2001.grain.concentration.experiment,gualtieri:2009.anisotropic.grain.clustering.experiments,monchaux:2010.grain.concentration.experiments.voronoi} and numerical simulations \citep{cuzzi:2001.grain.concentration.chondrules,yoshimoto:2007.grain.clustering.selfsimilar.inertial.range,hogan:2007.grain.clustering.cascade.model,bec:2009.caustics.intermittency.key.to.largegrain.clustering,pan:2011.grain.clustering.midstokes.sims,monchaux:2012.grain.concentration.experiment.review} have long observed that very small grains, with Stokes numbers $St\equiv t_{\rm s}/t\eddy(\lambda_{\nu})\sim1$ (ratio of stopping time to eddy turnover time at the viscous scale), can experience order-of-magnitude density fluctuations at small scales (at/below the viscous scale). Considerable analytic progress has been made understanding this regime: demonstrating, for example, that even incompressible gas turbulence is unstable to the growth of inhomogeneities in grain density \citep{elperin:1996:grain.clustering.instability,elperin:1998.grain.clustering.instability.rotation}, and predicting the behavior of the small-scale grain-grain correlation function using simple models of Gaussian random-field turbulence \citep{sigurgeirsson:2002.grain.markovian.concentration.toymodel,bec:2007.grain.clustering.markovian.flow}. 
But extrapolation to the astrophysically relevant regime is difficult for several reasons: the Reynolds numbers of interest are much larger, and as a result the Stokes numbers are also generally much larger (in the limit where grains do not cluster below the viscous/dissipation scale because $t_{\rm s}\gg t\eddy(\scalevar_{\rm max})$), placing the interesting physics well in the inertial range of turbulence, and rotation/shear, external gravity, and coherent (non-random field) structures appear critical (at least on large scales). This parameter space has not been well-studied, and at least some predictions (e.g.\ those in \citet{sigurgeirsson:2002.grain.markovian.concentration.toymodel,bec:2008.markovian.grain.clustering.model,zaichik:2009.grain.clustering.theory.randomfield.review}) would naively lead one to estimate much smaller fluctuations than are recorded in the experiments above.
However, these studies still contribute some critical insights. They have repeatedly shown that grain density fluctuations are tightly coupled to the local vorticity field: grains are ``flung out'' of regions of high vorticity by centrifugal forces, and collect in the ``interstices'' (regions of high strain ``between'' vortices). Studies of the correlation functions and scaling behavior of higher Stokes-number particles suggest that, in the inertial range (ignoring gravity and shear), the same dynamics apply, but with the scale-free replacement of a ``local Stokes number'' $t_{\rm s}/t\eddy$, i.e.\ what matters for the dynamics on a given scale are the vortices of that scale, and similar concentration effects can occur whenever the eddy turnover time is comparable to the stopping time \citep[e.g.][]{yoshimoto:2007.grain.clustering.selfsimilar.inertial.range,bec:2008.markovian.grain.clustering.model,wilkinson:2010.randomfield.correlation.grains.weak,gustavsson:2012.grain.clustering.randomflow.lowstokes}. Several authors have pointed out that this critically links grain density fluctuations to the phenomenon of intermittency and discrete, time-coherent structures (vortices) on scales larger than the Kolmogorov scale in turbulence \citep[see][and references therein]{bec:2009.caustics.intermittency.key.to.largegrain.clustering,olla:2010.grain.preferential.concentration.randomfield.notes}. In particular, \citet{cuzzi:2001.grain.concentration.chondrules} argue that grain density fluctuations behave in a multi-fractal manner: multi-fractal scaling is a key signature of well-tested, simple geometric models for intermittency \citep[e.g.][]{sheleveque:structure.functions}. In these models, the statistics of turbulence are approximated by regarding the turbulent field as a hierarchical collection of ``stretched'' singular, coherent structures (e.g.\ vortices) on different scales \citep{dubrulle:logpoisson,shewaymire:logpoisson,chainais:2006.inf.divisible.cascade.review}. 
Such statistical models have been well-tested as a description of the {\em gas} turbulence statistics \citep[including gas density fluctuations; see e.g.][]{burlaga:1992.multifractal.solar.wind.density.velocity,sorriso-valvo:1999.solar.wind.intermittency.vs.time,budaev:2008.tokamak.plasma.turb.pdfs.intermittency,shezhang:2009.sheleveque.structfn.review,hopkins:2012.intermittent.turb.density.pdfs}. However, only first steps have been taken to link them to grain density fluctuations: for example, in the phenomenological cascade model fit to simulations in \citet{hogan:2007.grain.clustering.cascade.model}.
In this paper, we use these theoretical and experimental insights to build a theory which ``bridges'' between the well-studied regime of small-scale turbulence and that of large, astrophysical particles in shearing, gravitating disks. The key concepts are based on the work above: we first assume that grain density fluctuations are driven by coherent eddies, for which we can calculate the perturbation owing to a single eddy with a given scale. Building on \citet{cuzzi:2001.grain.concentration.chondrules} and others, we then attach this calculation to a well-tested, simple, geometric cascade model for turbulence which predicts the statistics of intermittent eddies. This allows us to make predictions for a wide range of quantities, which we compare to simulations and experiments.
\begin{footnotesize}
\ctable[
caption={{\normalsize Important Variables \&\ Key Equations Derived in This Paper}\label{tbl:defns}},center,star
]{lll}{
}{
\hline\hline
\multicolumn{1}{l}{Variable} &
\multicolumn{1}{l}{Definition} &
\multicolumn{1}{l}{Eq.} \\
\hline
$\rho_{\rm g}$, $c_{s}$ & mid-plane gas density and sound speed & -- \\
$R$, $\Omega_{R}$, $V_{K}$ & distance from center of gravitational potential, Keplerian orbital frequency at $R$, and Keplerian velocity ($V_{K}\equiv \Omega_{R}\,R$) & -- \\
$\scalevar\eddy$, $v\eddy$, $\mathcal{M}_{e}$, $t\eddy$ & characteristic spatial scale, velocity, Mach number ($\mathcal{M}_{e}\equiv |v\eddy|/c_{s}$) and turnover time ($t\eddy\equiv \scalevar\eddy/|v\eddy|$) of a turbulent eddy & -- \\
$\scalevar_{\rm max}$, $v\eddy(\scalevar_{\rm max})$, $\alpha$ & maximum or ``top''/driving scale of turbulence, with eddy velocity $v\eddy(\scalevar_{\rm max}) \equiv \alpha^{1/2}\,c_{s}$ & --\\
$\lambda_{\nu}$, $Re$, $St$ & viscous/Kolmogorov or ``bottom'' scale of turbulence; Reynolds number $Re\equiv(\scalevar_{\rm max}/\lambda_{\nu})^{4/3}$; and Stokes number $St\equiv t_{\rm s}/t\eddy(\lambda_{\nu})$
& --\\
$\tilde{\rho}$ & mean ratio of the volume-average density of solids to gas, in the midplane ($\tilde{\rho} \equiv \langle \rho\grain \rangle / \langle \rho_{\rm g} \rangle$) & -- \\
$\tau_{\rm s}$ & dimensionless particle stopping time ($\tau_{\rm s}\equiv t_{\rm s}\,\Omega_{R}$) & \ref{eqn:eom.peculiar} \\
$\tilde{\tau}_{\rm s}$ & ratio of particle stopping time to eddy turnover time ($\tilde{\tau}_{\rm s}\equiv t_{\rm s}/t\eddy = \tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,(\scalevar\eddy/\scalevar_{\rm max})^{1-\zeta_{1}}$) & -- \\
$\eta,\,\Pi$ & difference between the mean gas circular velocity and Keplerian ($\eta\,V_{K} \equiv V_{K} - \langle V_{\rm gas} \rangle$;
$\Pi \equiv \eta\,V_{K}/c_{s}$) & \ref{eqn:eta} \\
$v_{\rm drift}$ & mean grain-gas relative drift velocity:
$v_{\rm drift} \equiv {2\,\eta\,V_{K}\,\tau_{\rm s}\,[{(1+\tilde{\rho})^{2}+\tau_{\rm s}^{2}/4}}]^{1/2}\,[{\tau_{\rm s}^{2}+(1+\tilde{\rho})^{2}}]^{-1}$
& \ref{eqn:vdrift} \\
$C_{\infty}$ & co-dimension of {\em gas} turbulence ($C_{\infty}\approx2$ in incompressible, subsonic turbulence, $\approx1$ in super-sonic turbulence) & \ref{eqn:deltaN} \\
$\zeta_{1}$ & scaling of one-point gas eddy velocity statistics, $\langle|v\eddy|\rangle\propto \scalevar\eddy^{\zeta_{1}}$ & \ref{eqn:veddy.scaling} \\
& in the multi-fractal models used: $\displaystyle \zeta_{1} = \zeta_{1}(C_{\infty}) \approx \frac{1}{9} + C_{\infty}\,{\Bigl[}1 - {\Bigl(}1 - \frac{2}{3\,C_{\infty}} {\Bigr)}^{1/3} {\Bigr]}$ & \ref{eqn:velslope} \\
$N_{\rm d}$ & ``wrapping dimension'' of the singular eddy structures driving density fluctuations & \ref{eqn:shrink.eddy} \\
& ($N_{\rm d}=2$ for simple vortices in the disk plane) & \\
\hline
\\
\hline
\multicolumn{1}{l}{--} &
\multicolumn{1}{l}{Useful variables for Equations below:} &
\multicolumn{1}{l}{Eq.} \\
& & \\
$\beta$
& $\displaystyle \beta \equiv \frac{|v\eddy(\scalevar_{\rm max})|}{|v_{\rm drift}|} =
\frac{|v\eddy(\scalevar_{\rm max})|\,[(1+\tilde{\rho})^{2}+\tau_{\rm s}^{2}]}{2\,\eta\,V_{K}\,\tau_{\rm s}\,[{(1+\tilde{\rho})^{2}+\tau_{\rm s}^{2}/4}]^{1/2}} =
\frac{(1+\tilde{\rho})^{2}+\tau_{\rm s}^{2}}{2\,\tau_{\rm s}\,[{(1+\tilde{\rho})^{2}+\tau_{\rm s}^{2}/4}]^{1/2}}\,{\Bigl(}\frac{\alpha^{1/2}}{\Pi}{\Bigr)}$
& \ref{eqn:v.laminar} \\
& & \\
$\langle\deltarhonoabs\rangle$
& $\displaystyle \langle\deltarhonoabs\rangle \equiv -\frac{N_{\rm d}\,\varpi(\tau_{\rm s},\,\tilde{\tau}_{\rm s})}{1+h(\scalevar\eddy)^{-1}} $ & \\
& & \\
& $\displaystyle h(\scalevar\eddy)\equiv -\tilde{\tau}_{\rm s}\,\ln{{\Bigl[}1 - \frac{(\scalevar\eddy/\scalevar_{\rm max})}{\tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,g(\scalevar\eddy)^{1/2}} {\Bigr]}}$, \ \ \ \ \
$\displaystyle g(\scalevar\eddy)\equiv \frac{1}{\beta^{2}} + \tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,\ln{{\Bigl[} \frac{1+\tilde{\tau}_{\rm s}(\scalevar_{\rm max})^{-1}}{1+\tilde{\tau}_{\rm s}^{-1}} {\Bigr]}}$, \
& \ref{eqn:g.timescale.function} \\
& & \\
& $\displaystyle \varpi = {\rm MAX}{\Bigl[}\varpi_{1},\ \varpi_{0}\equiv 2\,\tau_{\rm s,\,\rho}\,(1 + \tau_{\rm s,\,\rho}^{2})^{-1} {\Bigr]}$\ \ \ \ \ \ \ \ \ \
[\ $\tau_{\rm s,\,\rho}\equiv {\tau_{\rm s}}\,({1+\tilde{\rho}})$\ \ \ \ \ $\tilde{\tau}_{\rm s,\,\rho}\equiv\tilde{\tau}_{\rm s}\,(1+\tilde{\rho})$\ ]
& \\
& & \\
& $0 =
16\,\tilde{\tau}_{\rm s,\,\rho}^{3}\,\varpi_{1}^{4} +
32\,\tilde{\tau}_{\rm s,\,\rho}^{2}\,\varpi_{1}^{3} +
\tilde{\tau}_{\rm s,\,\rho}\,(20+7\,\tau_{\rm s,\,\rho}^{2})\,\varpi_{1}^{2}
+ 4\,(1 + \tau_{\rm s,\,\rho}^{2} - 3\,\tau_{\rm s,\,\rho}\,\tilde{\tau}_{\rm s,\,\rho})\,\varpi_{1} -
4\,(\tilde{\tau}_{\rm s,\,\rho}+2\,\tau_{\rm s,\,\rho})$ & \ref{eqn:varpi.full} \\
& & \\
\hline
\multicolumn{1}{l}{$\rho_{\rm p,\,max}$:} &
\multicolumn{1}{l}{Maximum local density of grains $\rho\grain$:} &
\multicolumn{1}{l}{Eq.} \\
& & \\
& $\displaystyle \ln{{\Bigl(}\frac{\rho_{\rm p,\,max}}{\langle \rho\grain \rangle} {\Bigr)}}
= C_{\infty}\,\int_{\lambda=0}^{\scalevar_{\rm max}}\,[1-\exp{(-|\deltarhonoabs|)}]\,{\rm d}\ln{\lambda}$
& \ref{eqn:rhomax} \\
& & \\
\hline
\multicolumn{1}{l}{$\displaystyle \Delta^{2}(k) = \frac{{\rm d}S_{\ln{\rho}}}{{\rm d}\ln{\lambda}}$:} &
\multicolumn{1}{l}{(Volume-Weighted) Grain log-density power spectrum (versus scale $\lambda$):} &
\multicolumn{1}{l}{Eq.} \\
& & \\
& $\displaystyle \Delta_{\ln{\rho}}^{2}{\Big(}k\equiv\frac{1}{\lambda}{\Bigr)}
= C_{\infty}\,|\deltarhonoabs|^{2}$
& \ref{eqn:pwrspec} \\
& & \\
\hline
\multicolumn{1}{l}{$P_{V}(\ln{\rho\grain})$:} &
\multicolumn{1}{l}{(Volume-weighted) Distribution of Grain Densities $\rho\grain$:} &
\multicolumn{1}{l}{Eq.} \\
& & \\
& $\displaystyle
P_{V}(\ln{\rho\grain})\,{\rm d}\ln{\rho\grain} \approx
\frac{(S^{-1}\,\mu^{2})^{m^{\prime}}\,\exp{(-S^{-1}\,\mu^{2})}}{\Gamma(m^{\prime}+1)} \, \frac{\mu}{S}\, {\rm d}\ln{\rho\grain}$
& \ref{eqn:pdf.S}-\ref{eqn:deltarho.int} \\
& & \\
& $\displaystyle m^{\prime} \equiv \frac{\mu}{S}\,{\Bigl\{} \frac{\mu^{2}}{S}\,{\Bigl[}1-\exp{{\Bigl(}-\frac{S}{\mu} {\Bigr)}}{\Bigr]}
- \ln{{\Bigl(}\frac{\rho\grain}{\langle \rho\grain \rangle} {\Bigr)}} {\Bigr\}}$,
\ \ \ \ $\displaystyle \mu \equiv C_{\infty}\,\int|\deltarhonoabs|\,{\rm d}\ln{\lambda}$,
\ \ \ \ $\displaystyle S \equiv C_{\infty}\,\int|\deltarhonoabs|^{2}\,{\rm d}\ln{\lambda}$
& \\
& & \\
\multicolumn{1}{l}{$P_{M}(\ln{\rho\grain})$:} &
\multicolumn{1}{l}{(Mass/particle-weighted) Distribution of Grain Densities $\rho\grain$:} &
\multicolumn{1}{l}{\,} \\
& & \\
& $\displaystyle
P_{M}(\ln{\rho\grain})\,{\rm d}\ln{\rho\grain} = \rho\grain\,P_{V}(\ln{\rho\grain})\,{\rm d}\ln{\rho\grain}$
& -- \\
& & \\
\hline\hline\\
& & \\
& & \\
& & \\
& & \\
& & \\
& & \\
& & \\
& & \\
}
\end{footnotesize}
\begin{footnotesize}
\ctable[
caption={{\normalsize Approximations for Large Scales and/or Grains (Appendix~\ref{sec:appendix:largescale})}\label{tbl:largescale}},center,star
]{lll}{
}{
\hline\hline
\multicolumn{1}{l}{--} &
\multicolumn{1}{l}{Useful variables:} &
\multicolumn{1}{l}{} \\
& & \\
$\varpi,\,|\delta_{0}|,\,\scalevar_{\rm crit}$ & $\displaystyle \varpi \sim 2\,\phi\,\frac{\tau_{\rm s,\,\rho}}{1 + \tau_{\rm s,\,\rho}^{2}}$\ \ \ ($\phi\sim0.8$), \ \ \ \ \ \ \ \ \ \ \ \
$\displaystyle |\delta_{0}| \equiv N_{\rm d}\,\varpi \sim 2\,N_{\rm d}\,\phi\,\frac{\tau_{\rm s,\,\rho}}{1 + \tau_{\rm s,\,\rho}^{2}}$\ , \ \ \ \ \ \ \ \ \ \ \ \
$\displaystyle \scalevar_{\rm crit} \equiv \beta^{-1/\zeta_{1}}\,\scalevar_{\rm max}$
& \\
& & \\
$\langle\deltarhonoabs\rangle$ & $\displaystyle \langle|\deltarhonoabs|\rangle = \frac{N_{\rm d}\,\varpi}{1+h(\scalevar\eddy)^{-1}}
\sim \frac{2\,N_{\rm d}\,\phi}{\tau_{\rm s,\,\rho}+\tau_{\rm s,\,\rho}^{-1}}\,{\Bigl[}1 + \beta^{-1}\,{\Bigl(} \frac{\scalevar\eddy}{\scalevar_{\rm max}} {\Bigr)}^{-\zeta_{1}} {\Bigr]}^{-1}
= |\delta_{0}|\,{\Bigl[}1 + (\scalevar\eddy/\scalevar_{\rm crit})^{-\zeta_{1}}{\Bigr]}^{-1}
$ & \\
& & \\
\hline
\multicolumn{1}{l}{$\rho_{\rm p,\,max}$:} &
\multicolumn{1}{l}{Maximum local density of grains $\rho\grain$:} &
\multicolumn{1}{l}{} \\
& & \\
& $\displaystyle \ln{{\Bigl(}\frac{\rho_{\rm p,\,max}(\lambda\rightarrow0)}{\langle \rho\grain \rangle} {\Bigr)}}
\sim \frac{C_{\infty}}{\zeta_{1}}\,\frac{|\delta_{0}|}{1+|\delta_{0}|}\,
\ln{{\Bigl[}1 + \beta\,(1+|\delta_{0}|) + \beta^{3/2}\{(1 + |\delta_{0}|^{2})^{1/2}-1\}
{\Bigr]}}$
& \\
& & \\
& $\displaystyle \rho_{\rm p,\,max}(\lambda) \propto \lambda^{-\gamma}$, with $\displaystyle \gamma \sim
\begin{cases}
{\displaystyle C_{\infty}\,[1 - \exp{(-|\delta_{0}|)]}}\ \ \ \ \hfill {\tiny (\lambda \gg \scalevar_{\rm crit})} \\
\\
{\displaystyle C_{\infty}\,|\delta_{0}|\,(\lambda/\scalevar_{\rm crit})^{\zeta_{1}}}\ \ \ \ \hfill {\tiny (\lambda \ll \scalevar_{\rm crit})} \\
\end{cases}
$ & \\
& & \\
\hline
\multicolumn{1}{l}{$\displaystyle \Delta^{2}(k)$:} &
\multicolumn{1}{l}{(Volume-Weighted) Grain linear-density and log-density power spectrum (versus scale $\lambda$):} &
\multicolumn{1}{l}{} \\
& & \\
& $\displaystyle \Delta_{\ln{\rho}}^{2}{\Big(}k\equiv\frac{1}{\lambda}{\Bigr)}
\sim {C_{\infty}\,|\delta_{0}|^{2}}{\Bigl[}{1 + (\lambda/\scalevar_{\rm crit})^{-\zeta_{1}}} {\Bigr]}^{-2} $
& \\
& & \\
& $\displaystyle \Delta_{{\rho}}^{2} \sim
\begin{cases}
{\displaystyle C_{\infty}\,|\delta_{0}|^{2}}\ \ \hfill {\tiny (\lambda \gg \scalevar_{\rm crit},\ \Delta_{\ln{\rho}}^{2}\ll 1)} \\
\\
{\displaystyle C_{\infty}\,|\delta_{0}|^{2}\,(\lambda/\scalevar_{\rm crit})^{2\,\zeta_{1}}}\ \ \hfill {\tiny (\lambda \ll \scalevar_{\rm crit},\ \Delta_{\ln{\rho}}^{2}\ll 1)} \\
\end{cases}
$
\ \ \
$\displaystyle\sim
\begin{cases}
{\displaystyle C_{\infty}\,(\lambda/\scalevar_{\rm max})^{-C_{\infty}}}\ \ \hfill {\tiny (\lambda \gg \scalevar_{\rm crit},\ |\delta_{0}|\gg1)} \\
\\
{\displaystyle 2\,C_{\infty}\,e^{\Delta N_{\rm int}}\,\frac{|\delta_{0}|}{|\deltarhonoabs|_{\rm int}}\,(\lambda/\scalevar_{\rm crit})^{\zeta_{1}}}\ \ \hfill {\tiny (\lambda \ll \scalevar_{\rm crit},\ |\delta_{0}|\gg1)} \\
\end{cases}
$
& \\
& & \\
\hline
\multicolumn{1}{l}{$P_{V}(\ln{\rho\grain})$:} &
\multicolumn{1}{l}{(Volume-weighted) Distribution of Grain Densities $\rho\grain$:} &
\multicolumn{1}{l}{} \\
& & \\
& $\displaystyle
P_{V}(\ln{\rho\grain})\,{\rm d}\ln{\rho\grain} \approx
\frac{(\Delta N_{\rm int})^{m^{\prime}}\,\exp{(-\Delta N_{\rm int})}}{\Gamma(m^{\prime}+1)} \, \frac{{\rm d}\ln{\rho\grain}}{|\deltarhonoabs|_{\rm int}}$\ ,
\ \ \ \ \ \ \ \ \ \
$\displaystyle m^{\prime} = |\deltarhonoabs|_{\rm int}^{-1}\,{\Bigl\{}\Delta N_{\rm int}\,{\Bigl[}1 - \exp{(-|\deltarhonoabs|_{\rm int})} {\Bigr]} - \ln{{\Bigl(} \frac{\rho\grain}{\langle \rho\grain \rangle}{\Bigr)}} {\Bigr\}}$
& \\
& & \\
& $\displaystyle \Delta N_{\rm int} = \frac{\mu^{2}}{S}$, \ \ \ \ $\displaystyle |\deltarhonoabs|_{\rm int} = \frac{S}{\mu}$,
\ \ \ \ $\displaystyle \mu \sim \frac{C_{\infty}}{\zeta_{1}}\,|\delta_{0}|\,\ln{(1+\beta)}$,
\ \ \ \ $\displaystyle S \sim \frac{C_{\infty}}{\zeta_{1}}\,|\delta_{0}|^{2}\,{\Bigl(}\ln{(1+\beta)} - \frac{\beta}{1+\beta}{\Bigr)}$
& \\
& & \\
\hline\hline\\
}
\end{footnotesize}
\vspace{-0.5cm}
\section{Arbitrarily Small Grains: Pure Gas Density Fluctuations}
\label{sec:smallgrains}
First consider the case where the grains are perfectly coupled to the gas ($t_{\rm s}\rightarrow0$), and their volume-average mass density ($\rho\grain$, as distinct from the {\em internal} physical density of a single, typical grain) is small compared to the gas density $\rho_{\rm g}$, so grain density fluctuations simply trace gas density fluctuations.
In both sub-sonic and super-sonic turbulence, the gas experiences density fluctuations directly driven by compressive (longitudinal) velocity fluctuations.
This leads to the well-known result, in {both} sub-sonic and super-sonic turbulence, that the density PDF becomes approximately log-normal \citep{passot:1988.proof.lognormal}, with a variance that scales as
$S_{\ln{\rho_{\rm g}}} = \ln[1+\mach_{\rm c}^{2}]$
where $\mach_{\rm c}$ is the rms compressive (longitudinal) component of the turbulent Mach number $\mathcal{M}$ (component projected along $\nabla\cdot {\bf v}$).
However, in sub-sonic turbulence, the gas density fluctuations quickly become small. Simulations of the (very thin) mid-plane dead-zone dust layers typically find $\mathcal{M}\lesssim0.1$; they confirm that the scaling above holds, but this produces correspondingly small fluctuations in $\rho_{\rm g}$ \citep[see e.g.][]{johansen:2007.streaming.instab.sims}.\footnote{Note that this does not necessarily mean that Mach numbers in the much larger-scale height gas disk are small, nor that they are unimportant.} Yet these same simulations record orders-of-magnitude fluctuations in $\rho\grain$.
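To make the smallness of these gas-density fluctuations concrete, the log-normal scaling can be evaluated directly (a quick numerical check of ours, not from the original text): for $S_{\ln{\rho_{\rm g}}}=\ln[1+\mathcal{M}_{c}^{2}]$, the rms fractional density fluctuation of a log-normal is $\sqrt{e^{S}-1}=\mathcal{M}_{c}$, i.e.\ only $\sim$10\% for $\mathcal{M}_{c}\sim0.1$:

```python
import math

def lognormal_density_stats(mach_c):
    """Variance of ln(rho_g) and rms fractional density fluctuation
    for the log-normal model S = ln(1 + M_c^2)."""
    S = math.log(1.0 + mach_c ** 2)
    rms = math.sqrt(math.exp(S) - 1.0)  # reduces to M_c for this model
    return S, rms

# M_c ~ 0.1 (dead-zone midplane): S ~ 0.01, i.e. ~10% fluctuations in
# rho_g -- versus the orders-of-magnitude fluctuations seen in rho_p
S, rms = lognormal_density_stats(0.1)
```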
\begin{figure}
\centering
\plotonesize{gr_response.pdf}{1.01}
\vspace{-0.5cm}
\caption{
``Response function'' $\varpi$ defined in \S~\ref{sec:model.encounters}: this is the mean divergence produced in the peculiar grain velocity distribution by a simple vortex with a turnover time $t\eddy$. {\em Top:} Limiting cases. First, small eddies ($t\eddy\ll\Omega^{-1}$), where $\varpi$ is a function of $\tilde{\tau}_{\rm s}\equiv t_{\rm s}/t\eddy$ alone ($\propto \tilde{\tau}_{\rm s}$ for $\tilde{\tau}_{\rm s}\ll1$, $\propto (2\tilde{\tau}_{\rm s})^{-1/2}$ for $\tilde{\tau}_{\rm s}\gg1$). Second, large eddies ($t\eddy\gg\Omega^{-1}$), where $\varpi$ is a function of $\tau_{\rm s}\equiv t_{\rm s}\,\Omega$ alone ($\propto 2\,\tau_{\rm s}$ for $\tau_{\rm s}\ll1$, $\propto 2\,\tau_{\rm s}^{-1}$ for $\tau_{\rm s}\gg1$). {\em Bottom:} Full solution (Eq.~\ref{eqn:varpi.full}), for different $\tau_{\rm s}$ and $t\eddy$. For small grains $\tau_{\rm s}\ll0.1$, there is a broad ``resonant'' peak around $t_{\rm s}\sim t\eddy$ (spanning $0.05\,t_{\rm s}\lesssim t\eddy\lesssim10\,t_{\rm s}$). On the largest scales (largest $t\eddy$), the value saturates -- this produces a broader and higher-amplitude ``plateau'' for large grains ($\tau_{\rm s}\gtrsim0.1$).
\label{fig:response}}
\end{figure}
\vspace{-0.5cm}
\section{Partially-Coupled Grains: The Model}
\label{sec:model}
\subsection{The Equations of Motion and Background Flow}
\label{sec:model.background}
Now consider grains with non-zero $t_{\rm s}$, in a gaseous medium and some potential field (for now we take this to be a Keplerian disk, the case of greatest interest, but generalize below). Absent grains and turbulence, the gas equilibrium is in circular orbits, at a cyclindrical radius $R$ from the potential center, with orbital frequency $\Omega(R)$. Because of pressure support, the gas does not orbit at exactly the circular velocity $V_{K}$, but at the reduced speed $V_{\rm gas}$, where
\begin{equation}
\label{eqn:eta}
\eta\,V_{K} \equiv V_{K}-\langle V_{\rm gas}(R,\,\rho\grain=0) \rangle \approx -\frac{1}{2\,\rho_{\rm g}\,V_{K}}\,\frac{\partial P}{\partial \ln{R}}
\end{equation}
Define a rotating frame with origin in the disk midplane at $R$, with the $\hat{x}$ axis along the radial direction and $\hat{y}$ axis in the azimuthal (orbital $\phi$) direction; the frame rotates at the circular velocity $V_{K}(R)$, with the angular momentum vector $\boldsymbol{\Omega}$ oriented along the $\hat{z}$ axis and $\Omega_{R}\equiv \Omega(R)$.
The local equation of motion for a grain $i$ with stopping time $t_{\rm s}$ becomes
\begin{equation}
\label{eqn:eom.1}
\frac{{\rm d}{\bf v}^{\prime}_{i}}{{\rm d}t} = 2\,{\bf v}^{\prime}_{i}\times{\boldsymbol{\Omega}}_{R} + 3\,\Omega_{R}^{2}\,x_{i}\hat{x} - \Omega_{R}^{2}\,z_{i}\,\hat{z} - \frac{{\bf v}^{\prime}_{i}-{\bf u}^{\prime}}{t_{\rm s}}
\end{equation}
where ${\bf u}^{\prime}$ is the gas velocity in the rotating frame. Note that this is a Lagrangian derivative (Eq.~\ref{eqn:eom.1} follows the grain path). With no loss of generality, we can conveniently define velocities relative
to the linearized Keplerian velocities, ${\bf v} \equiv {\bf v}^{\prime}_{i} + (3/2)\,\Omega_{R}\,x\,\hat{y}$ and ${\bf u} \equiv {\bf u}^{\prime} + (3/2)\,\Omega_{R}\,x\,\hat{y}$.
\citet{nakagawa:1986.grain.drift.solution} show that for the coupled gas-grain system with dimensionless stopping time $\tau_{\rm s} \equiv t_{\rm s}\,\Omega_{R}$ and mid-plane volume-average grain-to-gas mass ratio $\tilde{\rho} \equiv \rho\grain/\rho_{\rm g}$, this leads to a quasi-steady-state equilibrium drift solution for the grains and gas, with grain velocity (in the local rotating frame) $\langle {\bf v} \rangle = {\bf v}^{d} = v_{x}^{d}\,\hat{x} + v_{y}^{d}\,\hat{y}$ and gas velocity $\langle {\bf u} \rangle = {\bf u}^{d} = u_{x}^{d}\,\hat{x} + u_{y}^{d}\,\hat{y}$:
\begin{align}
\label{eqn:vdrift}
v_{x}^{d} & = -\frac{2\,\tau_{\rm s}}{\tau_{\rm s}^{2}+(1+\tilde{\rho})^{2}}\,\eta\,V_{K} \\
v_{y}^{d} &= -\frac{1+\tilde{\rho}}{\tau_{\rm s}^{2}+(1+\tilde{\rho})^{2}}\,\eta\,V_{K} \\
u_{x}^{d} &= +\frac{2\,\tau_{\rm s}\,\tilde{\rho}}{\tau_{\rm s}^{2}+(1+\tilde{\rho})^{2}}\,\eta\,V_{K} \\
\label{eqn:vdrift.last}
u_{y}^{d} &= -\frac{\tau_{\rm s}^{2}+(1+\tilde{\rho})}{\tau_{\rm s}^{2}+(1+\tilde{\rho})^{2}}\,\eta\,V_{K} \\
|v_{\rm drift}| &= |{\bf v}^{d}-{\bf u}^{d}| = \frac{2\,\tau_{\rm s}\,\sqrt{(1+\tilde{\rho})^{2}+\tau_{\rm s}^{2}/4}}{\tau_{\rm s}^{2}+(1+\tilde{\rho})^{2}}\,\eta\,V_{K}
\end{align}
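As a mechanical cross-check of Eqs.~\ref{eqn:vdrift}-\ref{eqn:vdrift.last}, the sketch below (our own, not part of the derivation; function names are ours, and units are chosen so that $\eta\,V_{K}=1$) verifies that the quoted closed form for $|v_{\rm drift}|$ agrees with $|{\bf v}^{d}-{\bf u}^{d}|$ computed component-wise:

```python
import numpy as np

def nsh_drift(tau_s, rho_tilde, eta_vk=1.0):
    """Quasi-steady drift solution, Eqs. (vdrift)-(vdrift.last);
    returns (vx_d, vy_d, ux_d, uy_d) in units where eta*V_K = eta_vk."""
    D = tau_s**2 + (1.0 + rho_tilde)**2
    vx = -2.0 * tau_s / D * eta_vk
    vy = -(1.0 + rho_tilde) / D * eta_vk
    ux = +2.0 * tau_s * rho_tilde / D * eta_vk
    uy = -(tau_s**2 + (1.0 + rho_tilde)) / D * eta_vk
    return vx, vy, ux, uy

def v_drift_mag(tau_s, rho_tilde, eta_vk=1.0):
    """Closed-form |v^d - u^d| quoted in the text."""
    D = tau_s**2 + (1.0 + rho_tilde)**2
    return 2.0 * tau_s * np.sqrt((1.0 + rho_tilde)**2 + 0.25 * tau_s**2) / D * eta_vk

# component-wise |v^d - u^d| must equal the closed form
for tau_s in (0.01, 0.1, 1.0, 10.0):
    for rho_tilde in (0.0, 0.5, 3.0):
        vx, vy, ux, uy = nsh_drift(tau_s, rho_tilde)
        assert abs(np.hypot(vx - ux, vy - uy) - v_drift_mag(tau_s, rho_tilde)) < 1e-12
```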
So now define the ``peculiar'' grain/gas velocity relative to the steady-state solution, ${\bf v}\equiv {\bf v}^{d} + \delta {\bf v}$ and ${\bf u} = {\bf u}^{d} + \delta {\bf u}$. Insert these definitions into Eq.~\ref{eqn:eom.1}, and -- since the turbulent velocities are much smaller than Keplerian\footnote{We show below that this is internally consistent, but this amounts to the assumption that the individual eddy sizes {\em within the dust layer} are small compared to the (full) gas disk gradient scale length, which is easily satisfied in realistic systems.} -- expand $\eta=\eta(R)$ and $V_{K}(R)$ to leading order in $x/R$. We then obtain
\begin{align}
\label{eqn:eom.peculiar}
\delta\dot{v}_{x} &\approx 2\,\Omega_{R}\,\delta v_{y} - \frac{\delta v_{x} - \delta u_{x}}{t_{\rm s}} \\
\delta\dot{v}_{y} &\approx -\frac{1}{2}\,\Omega_{R}\,\delta v_{x} - \frac{\delta v_{y} - \delta u_{y}}{t_{\rm s}}
\end{align}
The $\hat{z}$ component of Eq.~\ref{eqn:eom.1} forms a completely separable equation which is simply that of a damped harmonic oscillator. Thus retaining it has no effect on our derivation below.
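To make the free (unforced) behavior of Eq.~\ref{eqn:eom.peculiar} concrete: with $\delta{\bf u}=0$, the in-plane system has eigenvalues $-1/t_{\rm s}\pm i\,\Omega_{R}$, i.e.\ epicycles at exactly $\Omega_{R}$ with amplitude decaying as $e^{-t/t_{\rm s}}$. A minimal RK4 integration (a sketch of ours; units $\Omega_{R}=1$) confirms this:

```python
import math

def integrate_epicycle(tau_s, dt=1e-3):
    """RK4-integrate Eq. (eom.peculiar) with delta-u = 0 (units Omega = 1):
    d(dvx)/dt = 2*dvy - dvx/t_s ; d(dvy)/dt = -dvx/2 - dvy/t_s.
    Returns (dvx, dvy) after exactly one orbital period 2*pi."""
    def deriv(vx, vy):
        return (2.0 * vy - vx / tau_s, -0.5 * vx - vy / tau_s)
    vx, vy = 1.0, 0.0
    n = int(round(2.0 * math.pi / dt))
    h = 2.0 * math.pi / n
    for _ in range(n):
        ax1, ay1 = deriv(vx, vy)
        ax2, ay2 = deriv(vx + 0.5*h*ax1, vy + 0.5*h*ay1)
        ax3, ay3 = deriv(vx + 0.5*h*ax2, vy + 0.5*h*ay2)
        ax4, ay4 = deriv(vx + h*ax3, vy + h*ay3)
        vx += h/6.0 * (ax1 + 2*ax2 + 2*ax3 + ax4)
        vy += h/6.0 * (ay1 + 2*ay2 + 2*ay3 + ay4)
    return vx, vy

# after one full period the epicycle returns in phase, damped by exp(-2*pi/t_s)
vx, vy = integrate_epicycle(2.0)
assert abs(vx - math.exp(-math.pi)) < 1e-8
assert abs(vy) < 1e-8
```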
\begin{figure}
\centering
\plotonesize{gr_pdf_mri.pdf}{1.01}
\vspace{-0.6cm}
\caption{Predicted grain density distribution in numerical simulations of MRI-driven turbulence with $\tau_{\rm s}=1$ and $\tilde{\rho}=0$ (no grain-gas back-reaction). The exact prediction from our Monte-Carlo method, given the simulation parameters, is shown either assuming that vortices with fixed $t\eddy$ each produce the same, mean multiplicative effect (``mean $\delta \ln{\rho}$''), or that they draw from a Gaussian distribution (``Gaussian $\delta \ln{\rho}$''). We also show the simple closed-form fitting function (``Analytic'') derived for fluctuations on large scales (Table~\ref{tbl:largescale}). We compare to the simulation results from \citet{dittrich:2013.grain.clustering.mri.disk.sims}. The agreement is very good; the simulations are not able to distinguish the (very similar) ``mean $\delta \ln{\rho}$'' and ``Gaussian $\delta \ln{\rho}$'' models.
\label{fig:grain.rho.mri}}
\end{figure}
\vspace{-0.5cm}
\subsection{Encounters Between Grains and Individual Turbulent Structures}
\label{sec:model.encounters}
\subsubsection{A Toy Model}
\label{sec:model.encounters:toy}
Now consider an idealized encounter between a grain group with $\tau_{\rm s}$ and single, coherent turbulent eddy. We'll first illustrate the key dynamics with a purely heuristic model, then follow with a rigorous derivation (for which the key equations are given in Table~\ref{tbl:defns}).
Define the eddy coherence length $\scalevar\eddy$, and some characteristic peculiar velocity difference across $\scalevar\eddy$ of $\delta u = v\eddy = \mathcal{M}_{e}\,c_{s}$, so the eddy turnover time can be defined as $t\eddy = \scalevar\eddy/|v\eddy|$.
In inertial-range turbulence, we expect these to scale as power laws, so define
\begin{align}
\label{eqn:veddy.scaling}
\langle |v\eddy| \rangle &=\langle \mathcal{M}_{e} \rangle\,c_{s} = {|} v\eddy(\scalevar_{\rm max}) {|}\,{\Bigl(}\frac{\scalevar\eddy}{\scalevar_{\rm max}} {\Bigr)}^{\zeta_{1}} \propto \scalevar\eddy^{\zeta_{1}} \\
\langle t\eddy \rangle &\equiv \frac{\scalevar\eddy}{|v\eddy|} = t\eddy(\scalevar_{\rm max})\,{\Bigl(} \frac{\scalevar\eddy}{\scalevar_{\rm max}} {\Bigr)}^{1-\zeta_{1}}
\end{align}
It is also convenient to define the dimensionless stopping time relative to either the orbital frequency or eddy turnover time:
\begin{align}
\tau_{\rm s}\equiv t_{\rm s}\,\Omega \ \ \ \ \ \ \ \ , \ \ \ \ \ \ \ \ \tilde{\tau}_{\rm s}\equiv t_{\rm s}/t\eddy
\end{align}
For now, we will assume $\rho\grain\ll\rho_{\rm g}$, so that the back-reaction of the grains on gas can be neglected.
Consider a grain with $t_{\rm s}\ll t\eddy$, in a sufficiently small eddy that we can ignore the shear/gravity terms across it. Typical eddies are two-dimensional vortices, so the grain is quickly accelerated to the eddy velocity $v\eddy$ in an approximately circular orbit. This produces a centrifugal acceleration $a_{\rm cen}=\delta v_{\theta}^{2}/r \sim v\eddy^{2}/\scalevar\eddy = |v\eddy|/t\eddy$, which is balanced by pressure forces for the gas but causes the grain to drift radially out from the eddy center, at the approximate ``terminal velocity'' where this is balanced by the drag acceleration $\sim \delta v_{r}/t_{\rm s}$, so $\delta v_{r} \sim t_{\rm s}\,v\eddy^{2}/\scalevar\eddy = (t_{\rm s}/t\eddy)\,|v\eddy|$. If, instead, the eddy is sufficiently large, expansion of the centrifugal force gives $a_{\rm cen}\sim 2\,\Omega\,v\eddy$ (the $2\,{\bf v}\times{\boldsymbol{\Omega}}$ term in Eq.~\ref{eqn:eom.1}) -- i.e.\ the global centrifugal force sets a ``floor'' here, so the terminal velocity is $\delta v_{r} \sim 2\,(t_{\rm s}/\Omega^{-1})\,|v\eddy|$.
For large $t_{\rm s}$, consider (for large eddies) the approximation that the flow is locally one-dimensional shear ($\delta u_{y} \approx (v\eddy/\scalevar\eddy)\,x$, $\delta u_{x}=0$). The grains decay towards equilibrium $\delta \dot{v}_{x} \approx \delta \dot{v}_{y} \approx 0$ (assuming the eddy scale is sufficiently large that we can neglect spatial gradients along individual grain trajectories); this gives
\begin{align}
\label{eqn:dvx.eddy}
\delta v_{x} &\sim \frac{2\,x\,v\eddy}{\scalevar\eddy\,[\tau_{\rm s} + \tau_{\rm s}^{-1}]}
\end{align}
This scales as above for $\tau_{\rm s}\ll1$, but decreases for $\tau_{\rm s}\gg1$, because the grains are not efficiently acted upon by the eddy.
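The terminal velocity in Eq.~\ref{eqn:dvx.eddy} follows from setting $\delta\dot{v}_{x}=\delta\dot{v}_{y}=0$ in Eq.~\ref{eqn:eom.peculiar} with $\delta u_{y}=(v\eddy/\scalevar\eddy)\,x$ and $\delta u_{x}=0$; a short numerical sketch of ours (units $\Omega=1$; the test value of $\delta u_{y}$ is arbitrary) reproduces the $2/(\tau_{\rm s}+\tau_{\rm s}^{-1})$ factor exactly for the linearized system:

```python
import numpy as np

def terminal_dvx(tau_s, du_y):
    """Equilibrium (terminal) grain velocity in a local linear shear
    du_y(x), du_x = 0, from Eq. (eom.peculiar) with dv-dot = 0 (Omega = 1):
      0 = 2*dvy - (dvx - 0)/t_s
      0 = -dvx/2 - (dvy - du_y)/t_s"""
    A = np.array([[-1.0 / tau_s, 2.0],
                  [-0.5, -1.0 / tau_s]])
    b = np.array([0.0, -du_y / tau_s])
    dvx, dvy = np.linalg.solve(A, b)
    return dvx

# matches Eq. (dvx.eddy): dvx = 2*du_y/(tau_s + 1/tau_s), with du_y = (v_e/l_e)*x
for tau_s in (0.03, 0.3, 1.0, 3.0, 30.0):
    du_y = 0.7   # arbitrary test value of (v_eddy/l_eddy)*x
    assert abs(terminal_dvx(tau_s, du_y) - 2.0*du_y/(tau_s + 1.0/tau_s)) < 1e-12
```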
The grain density is determined by the continuity equation $\partial\rho\grain/\partial t + \nabla\cdot(\rho\grain\,\delta {\bf v})=0$
which we can write as
\begin{align}
\frac{{\rm D}\,\ln{\rho\grain}}{{\rm d}t} = {\Bigl(} \frac{\partial}{\partial t} + {\delta {\bf v}\cdot \nabla} {\Bigr)}\,\ln{\rho\grain} =
-\nabla\cdot \delta {\bf v}
\end{align}
where ${\rm D}/{\rm d}t$ is the Lagrangian derivative for a ``grain population.''
So the $\delta v_{x} \propto x$ term means that a population of grains, on encountering this eddy, will expand (be pushed away from the origin of the rotating frame) if $v\eddy>0$. This is just the well-known result that anti-cyclonic vortices ($v\eddy<0$) on the largest scales tend to collect grains, while cyclonic vortices ($v\eddy > 0$) disperse them; note that for the small-scale eddies, the sense is {\em always} dispersal in eddies.\footnote{This description of anti-cyclonic eddies, while common, is actually somewhat misleading. Grains always preferentially avoid regions with high absolute value of vorticity $|\boldsymbol{\omega}|\sim |{\bf v}_{e}\,\scalevar\eddy^{-1} + {\boldsymbol{\Omega}}|$. It is simply that very large eddies ($t\eddy \gtrsim\Omega^{-1}$, with $t\eddy=\scalevar\eddy/|{\bf v}_{e}|$) which are locally anti-cyclonic and in-plane ($\hat{\bf v}_{e} = -\hat{\boldsymbol{\Omega}}$) have lower $|\boldsymbol{\omega}|$ than the mean (${\bf v}_{e}=0$) Keplerian flow; so grains concentrate there by being dispersed out of higher-$|\boldsymbol{\omega}|$ regions.}
Note that above, if the eddy is just a one-dimensional flow, the grain population is preferentially dispersed in one dimension; however, when the eddies are vortices in two dimensions, the induced drift is radial (acting along both dimensions). In general, non-zero $\nabla\cdot\delta{\bf v}$ will occur along $N_{\rm d}$ dimensions, where $N_{\rm d}$ is the number of dimensions along which the eddy flow is locally ``wrapped.'' For the expected case of simple vortices this is an integer $N_{\rm d}=2$, but for eddies with complicated structure, or a population of eddies, it can take any non-integer value between zero and the total spatial dimension.
So, on encountering an eddy of scale $\scalevar\eddy$, the Lagrangian population of grains with initial extent $\lambda = \scalevar\eddy$ will shrink or grow in scale according to
\begin{align}
\label{eqn:shrink.eddy}
\frac{{\rm D}\ln{\rho\grain}}{{\rm d}t} &\sim
\begin{cases}
{\displaystyle \frac{N_{\rm d}\,\tilde{\tau}_{\rm s}}{t\eddy\,(1+\mathcal{O}(\tilde{\tau}_{\rm s}^{2}))} + \mathcal{O}(\tilde{\tau}_{\rm s}\,\eta^{2}) + \mathcal{O}(t\eddy\,\Omega)}\ \ \ \ \hfill {\tiny ({\rm small}\ \ t\eddy)}\\
\\
{\displaystyle \frac{2\,N_{\rm d}\,\tau_{\rm s}}{t\eddy\,(1+\tau_{\rm s}^{2})} + \mathcal{O}(\tau_{\rm s}\,\eta^{2}) + \mathcal{O}{\bigl(}(t\eddy\,\Omega)^{-1}{\bigr)}} \hfill {\tiny ({\rm large}\ \ t\eddy)}\\
\end{cases}
\end{align}
We will derive the exact scalings more precisely below, but this simple approximation actually does correctly capture the asymptotic behavior for small and large eddies.
\begin{figure}
\centering
\hspace{-0.4cm}
\plotonesize{gr_pdf_jy_ed2.pdf}{1.01}
\vspace{-0.3cm}
\caption{Grain density distribution, as Fig.~\ref{fig:grain.rho.mri}, for simulations of streaming-instability turbulence with $\tau_{\rm s}=0.1-1$ and $\tilde{\rho}=0.2-3$ (labeled). The simulations are from \citet{johansen:2007.streaming.instab.sims} and \citet{bai:2010.grain.streaming.sims.test}; where multiple simulations with different numerical methods are available, we plot all of them, to represent differences owing purely to numerics. As expected from the stronger response functions on large scales in Fig.~\ref{fig:response}, the PDF width is larger for $\tau_{\rm s}=1$. Increasing $\tilde{\rho}$ broadens the PDF for $\tau_{\rm s}\ll1$, but narrows it when $\tau_{\rm s}\gg1$, consistent with our lowest-order estimate that it changes the ``effective stopping time'' as $\tau_{\rm s}\rightarrow\tau_{\rm s}\,(1+\tilde{\rho})$. However, the predictions do {\em not} agree well with the case where $\tau_{\rm s}=1$ and $\tilde{\rho}=3$, probably because our assumption that the grains represent a perturbation on the gas turbulence structure is no longer valid.
\label{fig:grain.rho.jy}}
\end{figure}
We now wish to know how long (in an average sense) the perturbation affecting the local grain density distribution in Eq.~\ref{eqn:shrink.eddy} is able to act, which we define as the timescale $\delta t$. This obviously cannot be longer than the eddy coherence time, which is about an eddy turnover time $t\eddy$. Since the eddy will not, in equilibrium, accelerate grains to relative speeds {\em greater} than the flow velocity, it follows that $t_{\rm sink} = |\scalevar\eddy/\langle \delta v_{\rm induced} \rangle|$ (the timescale for grains to be fully expelled from the eddy region) is always $>t\eddy$, so this does not limit $\delta t$. However, if the grains have some non-zero initial relative velocity $v_{0}$ with respect to the eddy, and the stopping time is sufficiently long that they are not rapidly decelerated, they can drift through or cross the eddy in finite time $t_{\rm cross} \sim \scalevar\eddy/|v_{0}|$. If $t_{\rm cross} < t_{\rm s}$ and $t_{\rm cross} < t\eddy$, then $\delta t = t_{\rm cross}$ becomes the limitation. If $v_{0}$ is approximately constant (from e.g.\ global drift or much larger eddies), we have $t_{\rm cross} \sim \scalevar\eddy/|v_{0}| \sim t\eddy\,|v\eddy|/|v_{0}|$ decreasing on small scales. So we expect $\delta t/t\eddy$ is a constant $\approx1$ for large-scale eddies, then turns over as an approximate power law $\propto v\eddy \propto \scalevar\eddy^{\zeta_{1}}$, for $\scalevar\eddy$ below some $\scalevar_{\rm crit}$ where $t_{\rm cross}<{\rm MIN}(t\eddy,\,t_{\rm s})$.\footnote{It is straightforward to show that this timescale restriction guarantees our earlier assumption (where we dropped higher-order terms in the gradient of $\eta$) is valid (for timescales $<\delta t$ and sub-sonic peculiar velocities).} Again, we'll consider this in detail below.
Together, this defines what we call the ``response function'': the typical density change induced by encounter with an eddy
\begin{align}
\langle\deltarhonoabs\rangle = {\Bigl\langle}\int \frac{{\rm D}\ln{\rho\grain}}{{\rm d}t}\,{\rm d}t\,{\Bigr\rangle} \sim {\Bigl\langle}\frac{{\rm D}\ln{\rho\grain}}{{\rm d}t} {\Bigr\rangle}\,\delta t
\end{align}
\vspace{-0.2cm}
\subsubsection{Exact Solutions in Turbulence without External Gravity}
\label{sec:encounters:pure.turb}
Now we will derive the previous scalings rigorously.
Consider the behavior of grain density fluctuations in inertial-range turbulence.\footnote{We specifically will assume high Reynolds number $Re\gg1$ and Stokes number $St=t_{\rm s}/t\eddy(\lambda_{\nu})\gg1$, where $\lambda_{\nu}$ is the viscous scale, so will neglect molecular viscosity/diffusion throughout.} First we derive the ``response function'' above, i.e.\ the effect of an eddy on a grain distribution. Subtracting the bulk background flow, the grain equations of motion (Eq.~\ref{eqn:eom.peculiar}) become the Stokes equations
\begin{align}
\label{eqn:eom.noshear}
\delta\dot{{\bf v}} &= -\frac{\delta {\bf v} - \delta {\bf u}}{t_{\rm s}}
\end{align}
with the continuity equation $\partial\rho\grain/\partial t + \nabla\cdot(\rho\grain\,\delta {\bf v})=0$ which we can write as
\begin{align}
\frac{{\rm D}\,\ln{\rho\grain}}{{\rm d}t} = {\Bigl(} \frac{\partial}{\partial t} + {\delta {\bf v}\cdot \nabla} {\Bigr)}\,\ln{\rho\grain} =
-\nabla\cdot \delta {\bf v}
\end{align}
where ${\rm D}/{\rm d}t$ is the Lagrangian derivative for a ``grain population.''
Many theoretical and experimental studies have suggested that the dynamics of incompressible gas turbulence on various scales can be understood by regarding it as a collection of Burgers vortices \citep[see][and references therein]{marcu:1995.grain.burgers.vortex}. The Burgers vortex is an exact solution of the Navier-Stokes equations, and provides a model for vortices on all scales (which can be regarded as ``stretched'' Burgers vortices). In a cylindrical coordinate system centered on the vortex tube, the fluid flow components can be written as $\delta u_{z}=2\,A\,z$, $\delta u_{r}=-A\,r$, $\delta u_{\theta} = (B/2\pi\,r)\,(1-\exp{(-r^{2}/2\,r_{0}^{2})})$, where $r_{0}$ is the vortex size, $B$ the circulation parameter, and $A = \nu/B$ is the inverse of the ``vortex Reynolds number.'' Since we consider large $Re$, $A\rightarrow0$ for the large-scale vortices, so $\delta u_{z}=\delta u_{r}=0$, and we can specify $\delta u =\delta u_{\theta} \equiv u_{0}\,(r_{0}/r)\,(1-\exp{[-r^{2}/2\,r_{0}^{2}]})$.
On scales $\lesssim 1.58\,r_{0}$, $\delta u_{\theta}\propto r + \mathcal{O}(r^{2}/2.5\,r_{0}^{2})$ increases linearly with $r$, before turning over beyond the characteristic scale and decaying to zero. So, since we specifically consider the effects of an eddy on scales {\em within} the eddy size ($\lesssim r_{0}$), we can take $\delta u_{\theta} \propto r$, in which case the eddy is entirely described by the (approximately constant) turnover time $t\eddy$ such that $\delta u_{\theta} \equiv r/t\eddy$. Note that this is now the general form for {\em any} eddy with pure circulation and constant turnover time, so while motivated by the Burgers vortex should represent real eddies on a wide range of scales.
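The quoted turnover scale of this profile is easy to confirm numerically: in units of $r_{0}$ and $u_{0}$, $\delta u_{\theta}$ is solid-body near the center and peaks at $r\approx1.58\,r_{0}$ before decaying. A small sketch of ours:

```python
import numpy as np

# Burgers-vortex azimuthal profile in the A -> 0 limit (units r0 = u0 = 1)
r = np.linspace(1e-4, 5.0, 200001)
u_theta = (1.0 - np.exp(-0.5 * r**2)) / r

# solid-body (u_theta -> r/2) near the center ...
assert abs(u_theta[0] / r[0] - 0.5) < 1e-3
# ... peaking near r = 1.585 r0, beyond which the circulation decays
r_peak = r[np.argmax(u_theta)]
assert abs(r_peak - 1.585) < 0.005
```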
In the vortex plane, the equations of motion (Eq.~\ref{eqn:eom.noshear}) become
\begin{align}
\nonumber \delta \dot{v}_{x} &= \delta\dot{v}_{r^{\prime}}\,\cos{\theta} - \delta\dot{v}_{\theta}\,\sin{\theta} - \dot{\theta}\,(\delta v_{r^{\prime}}\,\sin{\theta} + \delta v_{\theta}\,\cos{\theta}) \\
\label{eqn:eom.ideal.1}
&= t_{\rm s}^{-1}\,(-\delta v_{r^{\prime}}\,\cos{\theta} + \delta v_{\theta}\,\sin{\theta} - \delta u_{\theta}\,\sin{\theta} ) \\
\nonumber \delta \dot{v}_{y} &= \delta \dot{v}_{r^{\prime}}\,\sin{\theta} + \delta \dot{v}_{\theta}\,\cos{\theta} + \dot{\theta}\,(\delta v_{r^{\prime}}\,\cos{\theta} - \delta v_{\theta}\,\sin{\theta}) \\
&= t_{\rm s}^{-1}\,(-\delta v_{r^{\prime}}\,\sin{\theta} - \delta v_{\theta}\,\cos{\theta} + \delta u_{\theta}\,\cos{\theta} )
\label{eqn:eom.ideal.2}
\end{align}
with $\dot{\theta}\equiv \delta v_{\theta}/r^{\prime}$. It is straightforward to verify that the peculiar solution is given by $\delta v_{r} = \varpi\,r/t\eddy$ ($\delta v_{r}\propto \delta v_{\theta} \propto u_{\theta} \propto r \propto \exp{(\varpi\,t/t\eddy)}$) with $\varpi$ a root of
$\varpi\,(1+\varpi\,\tilde{\tau}_{\rm s})\,(1+2\,\varpi\,\tilde{\tau}_{\rm s})^{2}-\tilde{\tau}_{\rm s}=0$; all of the roots correspond to decaying solutions except the positive real root:
\begin{align}
\label{eqn:varpi.pureturb}
\varpi &= \frac{-2 + \sqrt{2\,{\Bigl(}1 + \sqrt{1+16\,\tilde{\tau}_{\rm s}^{2}} {\Bigr)}}}{4\,\tilde{\tau}_{\rm s}} \\
\varpi &\rightarrow
\begin{cases}
{\displaystyle \tilde{\tau}_{\rm s}}\ \ \ \ \ \hfill {\tiny (\tilde{\tau}_{\rm s}\ll1)} \\
{\displaystyle (2\,\tilde{\tau}_{\rm s})^{-1/2}}\ \ \ \ \ \hfill {\tiny (\tilde{\tau}_{\rm s}\gg1)} \
\end{cases}
\end{align}
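Both Eq.~\ref{eqn:varpi.pureturb} and the quartic it solves can be verified mechanically; a sketch of ours (function name assumed):

```python
import math

def varpi_closed(tts):
    """Closed-form positive real root, Eq. (varpi.pureturb); tts = tau~_s."""
    return (-2.0 + math.sqrt(2.0 * (1.0 + math.sqrt(1.0 + 16.0 * tts**2)))) / (4.0 * tts)

# it solves varpi*(1 + varpi*tts)*(1 + 2*varpi*tts)^2 = tts ...
for tts in (0.01, 0.1, 1.0, 10.0, 100.0):
    w = varpi_closed(tts)
    assert abs(w * (1.0 + w*tts) * (1.0 + 2.0*w*tts)**2 - tts) < 1e-9 * max(tts, 1.0)

# ... with the asymptotic limits quoted in the text
assert abs(varpi_closed(1e-4) - 1e-4) < 1e-8            # varpi -> tau~_s
assert abs(varpi_closed(1e6) - (2e6)**-0.5) < 1e-6      # varpi -> (2 tau~_s)^(-1/2)
```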
Because $\delta v_{r}\propto r$ and $\delta v_{\theta}$ is independent of $\theta$, it follows that along this solution
\begin{equation}
{(}\nabla\cdot \delta {\bf v}{)}_{\rm pec} = \frac{1}{r}\frac{\partial (r\,\delta v_{r})}{\partial r} + \frac{1}{r}\frac{\partial\,\delta v_{\theta}}{\partial \theta} =
N_{\rm d}\,\frac{|\delta v_{r}|}{r} = \frac{2\,\varpi}{t\eddy}
\end{equation}
To determine the general solution, we must consider how the eddy evolves in time, since it is able to act on the grains for only finite $\delta t$. To approximate this, consider the simplest top-hat model, $\delta u = \delta u_{\theta}\Theta(0<t<\delta t)$, where $\delta {\bf u}\propto \Theta=0$ at $t<0$ and $t>\delta t$ and $\Theta=1$ ($\delta {\bf u} = \delta u_{\theta}(r)\,\hat{\theta}$) for $0<t<\delta t$. We require the net effect of the eddy on the density field (i.e.\ the late-time result of the perturbation), so we integrate ${\rm d}\ln{\rho\grain}/{\rm d}t = -\nabla\cdot{\bf \delta v}$, from the boundary condition $\delta {\bf v}=\delta {\bf v}_{0}$ at $t<0$ until some time much longer than the eddy lifetime $t\rightarrow \infty$. For the simple top-hat form of the eddy lifetime this gives\footnote{Here we note that there are two decaying oscillatory solutions to Eqs.~\ref{eqn:eom.ideal.1}-\ref{eqn:eom.ideal.2} with decay rate $\omega^{\prime} = -1/(2\,\tilde{\tau}_{\rm s})$, which correspond to the usual damped modes for grains with drag in a uniform flow (there is also a ``fast'' damped solution with $\omega^{\prime}\ll-\tilde{\tau}_{\rm s}$, but because of the rapid damping this term is not important). Although Eqs.~\ref{eqn:eom.ideal.1}-\ref{eqn:eom.ideal.2} are non-linear (because of the mixing between $x$ and $y$ in the $r$-dependence), if we linearize in $t\eddy\,\dot{\theta}/2\pi$ or $t\eddy\,\delta v_{r}/r$, it is straightforward to derive the full solution matching these terms to arbitrary initial velocities $\delta {\bf v}_{0}$. Then the solution at $t=\delta t$ is matched to the solution for the post-eddy field (where $\delta {\bf u}=0$, so $\delta\dot{\bf v} = -\delta {\bf v}/t_{\rm s}$ and the peculiar velocities simply decay exponentially on a timescale $t_{\rm s}$).
In this case it is straightforward to show that the integral over time ($t\rightarrow\infty$) of $\nabla\cdot \delta {\bf v}$ is exactly the integral of the positive real peculiar solution $|\nabla\cdot{\bf v}|_{\rm pec}$ from $t=0$ to $t=\delta t$. The finite time $t_{\rm s}$ required to accelerate grains onto this solution is offset by the non-zero decay time post-eddy, and the boundary-matched $\delta {\bf v}_{0}$ term, being oscillatory, contributes nothing to the net density change (provided the initial $\langle {\delta }v_{r} \rangle = 0$, given by our isotropy assumption). For the fully non-linear form of the equations this is not strictly true, but performing the integrals numerically for an ensemble of test particles with an initially homogeneous, isotropic density and velocity field, we find it is a very good approximation.}
\begin{align}
\nonumber
\langle
\delta \ln{\rho} \rangle &={\Bigl\langle} - \int_{t} (\nabla\cdot \delta {\bf v})\,{\rm d}\,t {\Bigr\rangle}^{\delta u = \delta u_{\theta}\Theta(0<t<\delta t)} \\
&=- | \nabla\cdot \delta {\bf v} |_{\rm pec}\,\delta t = -{2\,\varpi}\,\frac{\delta t}{{t\eddy}}
\end{align}
where the $\langle...\rangle$ denotes an average over an assumed homogeneous, isotropic initial ensemble in position and velocity space.
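The growth rate $\varpi/t\eddy$ can also be verified by direct integration: a single grain released co-moving in a solid-body eddy $\delta u_{\theta}=r/t\eddy$ spirals outward, and its late-time $\mathrm{d}\ln r/\mathrm{d}t$ converges to $\varpi/t\eddy$. A minimal sketch of ours (units $t\eddy=1$; the integration windows are arbitrary choices):

```python
import math

def vortex_growth_rate(tts, t1=20.0, t2=40.0, dt=2e-3):
    """Integrate one grain (RK4) in a solid-body eddy u = (r/t_eddy)*theta-hat
    with linear drag (t_eddy = 1, t_s = tts); return the late-time d ln r/dt."""
    def deriv(x, y, vx, vy):
        ux, uy = -y, x          # u_theta = r/t_eddy, written in Cartesian form
        return (vx, vy, -(vx - ux) / tts, -(vy - uy) / tts)
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0   # released co-moving with the eddy
    r1, t = None, 0.0
    for _ in range(int(round(t2 / dt))):
        k1 = deriv(x, y, vx, vy)
        k2 = deriv(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], vx + 0.5*dt*k1[2], vy + 0.5*dt*k1[3])
        k3 = deriv(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], vx + 0.5*dt*k2[2], vy + 0.5*dt*k2[3])
        k4 = deriv(x + dt*k3[0], y + dt*k3[1], vx + dt*k3[2], vy + dt*k3[3])
        x  += dt/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y  += dt/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        vx += dt/6.0 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
        vy += dt/6.0 * (k1[3] + 2*k2[3] + 2*k3[3] + k4[3])
        t += dt
        if r1 is None and t >= t1:
            r1 = math.hypot(x, y)
    return math.log(math.hypot(x, y) / r1) / (t2 - t1)

# converges to varpi from Eq. (varpi.pureturb): for tau~_s = 1,
# varpi = (-2 + sqrt(2*(1 + sqrt(17))))/4 = 0.30024...
assert abs(vortex_growth_rate(1.0) - 0.3002426) < 1e-4
```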
\begin{figure}
\centering
\hspace{-0.2cm}
\plotonesize{gr_pdf_tc.pdf}{1.01}
\vspace{-0.6cm}
\caption{Grain density distribution, as Fig.~\ref{fig:grain.rho.mri}, for simulations of ``turbulent concentration'' from \citet{hogan:1999.turb.concentration.sims}. Here there is no shear/gravity (see \S~\ref{sec:encounters:pure.turb}), and the flow is simulated from a fixed viscous scale to various Reynolds numbers ($Re=62,\,140,\,765$). In each case the Stokes number is unity ($t_{\rm s}\approx t\eddy(\lambda_{\nu})$, where $\lambda_{\nu}$ is the viscous scale); this gives $\tilde{\tau}_{\rm s}(\lambda=\scalevar_{\rm max})$ at the top of the cascade of $\tilde{\tau}_{\rm s}(\scalevar_{\rm max})\approx0.13,\,0.03,\,0$ ($Re=62,\,765,\,\infty$). The PDF width grows logarithmically with $Re$ as we integrate over more of the broad response function in Fig.~\ref{fig:response}. However, the decay in this function at $\tilde{\tau}_{\rm s}\ll1$, and the increase in rms turbulent velocities of grains lowering their eddy crossing times, means that the PDF does not grow indefinitely as $Re\rightarrow\infty$. For comparison, pure uncorrelated (Markovian) fluctuations predict a PDF with dispersion in $\rho\grain$ of $\approx1.7$, giving a PDF that falls below the minimum plotted range here at $\log{(\rho\grain/\langle\rho\grain\rangle)}\approx0.8$ \citep{zaichik:2009.grain.clustering.theory.randomfield.review}.
\label{fig:grain.rho.tc}}
\end{figure}
Now we need to determine $\delta t$. If the grains are ``trapped'' well within the eddy, this is simply the eddy lifetime $t\eddy$. However, we have not yet accounted for the finite spatial coherence of the eddy. If the grains are moving sufficiently fast and/or if the stopping time is large, they can cross or move ``through'' the eddy (to $r\gg r_{0}\equiv \scalevar\eddy = |v\eddy|\,t\eddy$, where the eddy circulation is super-exponentially suppressed and becomes negligible) in a timescale $t_{\rm cross}\lesssim t\eddy$. Since we are integrating a rate equation, the full $\delta t$ is simply given by the harmonic mean
\begin{equation}
\delta t^{-1} = t\eddy^{-1} + t_{\rm cross}^{-1}
\end{equation}
(capturing both limits above; see \citealt{voelk:1980.grain.relative.velocity.calc,markiewicz:1991.grain.relative.velocity.calc}). The timescale for a grain to cross a distance $\scalevar\eddy=|v\eddy|\,t\eddy$ in a smooth flow (constant $\delta{\bf u}$), with initial (relative) grain-gas velocity $|v_{0}|$, is just
\begin{align}
t_{\rm cross} &=
\begin{cases}
{\displaystyle -t_{\rm s}\,\ln{{\Bigl[}1 - \frac{\scalevar\eddy}{t_{\rm s}\,|v_{0}|} {\Bigr]}}}\ \ \ \ \ \hfill {\tiny (|v_{0}|>\scalevar\eddy/t_{\rm s})} \\
{\displaystyle \infty}\ \ \ \ \ \hfill {\tiny (|v_{0}|\le\scalevar\eddy/t_{\rm s})} \
\end{cases}
\end{align}
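The crossing time above is just the solution of $x(t)=|v_{0}|\,t_{\rm s}\,(1-e^{-t/t_{\rm s}})=\scalevar\eddy$ for a grain decelerating in a uniform gas flow; a quick sketch of ours (function name assumed) checks this, together with the ballistic and stalled limits:

```python
import math

def t_cross(t_s, v0, l_eddy):
    """Time for a grain with initial relative speed v0, decelerating as
    v(t) = v0*exp(-t/t_s) in a smooth flow, to traverse a distance l_eddy;
    infinite if it stalls first (stopping distance v0*t_s <= l_eddy)."""
    if v0 * t_s <= l_eddy:
        return math.inf
    return -t_s * math.log(1.0 - l_eddy / (t_s * v0))

# the grain's path length is x(t) = v0*t_s*(1 - exp(-t/t_s)); check x(t_cross) = l_eddy
t_s, v0, lam = 2.0, 1.0, 1.5
tc = t_cross(t_s, v0, lam)
assert abs(v0 * t_s * (1.0 - math.exp(-tc / t_s)) - lam) < 1e-12

assert abs(t_cross(1e6, v0, lam) - lam / v0) < 1e-4   # ballistic limit, large t_s
assert t_cross(1.0, 0.5, 1.0) == math.inf             # stalled: v0 <= l_eddy/t_s
```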
Note that for large $t_{\rm s}$, this is just the ballistic crossing time $\rightarrow \scalevar\eddy/|v_{0}|$, but for small $|v_{0}|<\scalevar\eddy/t_{\rm s}$ this diverges because the grain is fully stopped and trapped before crossing $\scalevar\eddy$. So now we need to determine $|v_{0}|$; this has been considered in \citet{voelk:1980.grain.relative.velocity.calc} and many subsequent calculations \citep[e.g.][]{markiewicz:1991.grain.relative.velocity.calc,pan:2010.grain.velocity.sims,pan:2013.grain.relative.velocity.calc}. Assuming the turbulence is isotropic and (on long timescales) velocity ``kicks'' from independent eddies are uncorrelated, then
\begin{equation}
\langle |{\bf v}_{0}|^{2} \rangle = |{\bf V}_{L}|^{2} + \langle |{\bf V}_{\rm rel}(\scalevar\eddy)^{2}| \rangle
\end{equation}
where $V_{L} = |{\bf V}_{L}|$ is the difference in the laminar bulk flow velocity of grains and gas (due to e.g.\ settling or gravity) and $\langle |{\bf V}_{\rm rel}(\scalevar\eddy)^{2} |\rangle$ represents the rms grain-eddy velocities (averaged on the eddy scale) due to the turbulence itself.
For the ``pure turbulence'' case here $V_{L}=0$, and $\langle | {\bf V}_{\rm rel}(\scalevar\eddy)^{2}| \rangle$ is derived in \citet{voelk:1980.grain.relative.velocity.calc} as
\begin{align}
\nonumber
\langle |{\bf V}_{\rm rel}(\scalevar\eddy)^{2}| \rangle &= \int_{k(\scalevar_{\rm max})}^{k(\scalevar\eddy)} {\rm d}k\,P(k)\,\frac{t_{\rm s}}{t_{\rm s} + t\eddy(k)} \\
&= |v\eddy(\scalevar_{\rm max})|^{2}\,\tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,\ln{{\Bigl[} \frac{1+\tilde{\tau}_{\rm s}(\scalevar_{\rm max})^{-1}}{1 + \tilde{\tau}_{\rm s}(\scalevar\eddy)^{-1}} {\Bigr]}}
\end{align}
where $k$ is the wavenumber and $P(k)=(p-1)\,k^{-p}$ is the velocity power spectrum ($\int {\rm d}k\,P(k)=\langle \delta {\bf u}^{2} \rangle = v\eddy(\scalevar_{\rm max})^{2}$; the closed-form expression here follows for any power-law $P(k)$; see \citealt{ormel:2007.closed.form.grain.rel.velocities}).
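For the Kolmogorov case $p=5/3$ (so $v\eddy\propto k^{-1/3}$ and $t\eddy(k)\propto k^{-2/3}$), the closed form can be checked against direct quadrature; a sketch of ours, in units $v\eddy(\scalevar_{\rm max})=t\eddy(\scalevar_{\rm max})=k(\scalevar_{\rm max})=1$ (function names assumed):

```python
import numpy as np

def v_rel_sq_numeric(t_s, k_eddy, n=400001):
    """Direct trapezoid quadrature of the Voelk-type integral for p = 5/3:
    P(k) = (2/3) k^(-5/3), t_eddy(k) = k^(-2/3)."""
    k = np.linspace(1.0, k_eddy, n)
    f = (2.0 / 3.0) * k**(-5.0/3.0) * t_s / (t_s + k**(-2.0/3.0))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))

def v_rel_sq_closed(t_s, k_eddy):
    """Closed-form expression from the text, with tau~_s(k) = t_s/t_eddy(k)."""
    tts_max, tts_e = t_s, t_s * k_eddy**(2.0/3.0)
    return tts_max * np.log((1.0 + 1.0/tts_max) / (1.0 + 1.0/tts_e))

for t_s in (0.1, 1.0, 10.0):
    for k_eddy in (10.0, 1000.0):
        assert abs(v_rel_sq_numeric(t_s, k_eddy) - v_rel_sq_closed(t_s, k_eddy)) < 1e-4
```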
After some simple substitution, we now have
\begin{align}
\frac{\delta t}{t\eddy} &= {\Bigl[}1 + {\Bigl(} \frac{t_{\rm cross}}{t\eddy} {\Bigr)}^{-1} {\Bigr]}^{-1} \\
\frac{t_{\rm cross}}{t\eddy} &= -\tilde{\tau}_{\rm s}\,\ln{{\Bigl[}\,1 - \frac{(\scalevar\eddy/\scalevar_{\rm max})}{\tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,g_{0}(\scalevar\eddy)^{1/2}}\, {\Bigr]}} \\
g_{0}(\scalevar\eddy) &\equiv \tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,\ln{{\Bigl[}\, \frac{1+\tilde{\tau}_{\rm s}(\scalevar_{\rm max})^{-1}}{1+\tilde{\tau}_{\rm s}(\scalevar\eddy)^{-1}}\, {\Bigr]}}
\end{align}
giving a complete description of $|\deltarhonoabs| = 2\,\varpi\,(\delta t/t\eddy)$.
\vspace{-0.2cm}
\subsubsection{Full Solution With Shear}
\label{sec:encounters:shear.solution}
Here we present the full solution (including shear) for the density fluctuations for $N_{\rm d}=2$ vortices above. Consider the same eddies as in \S~\ref{sec:encounters:pure.turb}, but retain the shear terms from Eq.~\ref{eqn:eom.1}. Eq.~\ref{eqn:eom.ideal.1}-\ref{eqn:eom.ideal.2} become
\begin{align}
\nonumber \delta \dot{v}_{x} &= \delta\dot{v}_{r^{\prime}}\,\cos{\theta} - \delta\dot{v}_{\theta}\,\sin{\theta} - \dot{\theta}\,(\delta v_{r^{\prime}}\,\sin{\theta} + \delta v_{\theta}\,\cos{\theta}) \\
\nonumber
&= t_{\rm s}^{-1}\,(-\delta v_{r^{\prime}}\,\cos{\theta} + \delta v_{\theta}\,\sin{\theta} - \delta u_{\theta}\,\sin{\theta} ) \\
\label{eqn:eom.shear.1}
&+ 2\,\Omega_{R}\,(\delta v_{r^{\prime}}\,\sin{\theta} + \delta v_{\theta}\,\cos{\theta}) \\
\nonumber \delta \dot{v}_{y} &= \delta \dot{v}_{r^{\prime}}\,\sin{\theta} + \delta \dot{v}_{\theta}\,\cos{\theta} + \dot{\theta}\,(\delta v_{r^{\prime}}\,\cos{\theta} - \delta v_{\theta}\,\sin{\theta}) \\
\nonumber
&= t_{\rm s}^{-1}\,(-\delta v_{r^{\prime}}\,\sin{\theta} - \delta v_{\theta}\,\cos{\theta} + \delta u_{\theta}\,\cos{\theta} ) \\
&- \frac{1}{2}\,\Omega_{R}\,(\delta v_{r^{\prime}}\,\cos{\theta} - \delta v_{\theta}\,\sin{\theta})
\label{eqn:eom.shear.2}
\end{align}
First note that, even when $t\eddy\ggt_{\rm s}$, there is no $\theta$-independent solution if we retain all terms (unlike the case in \S~\ref{sec:encounters:pure.turb}, valid at all $\theta$); however, the angle dependence of the equilibrium solution is weak, and the angle-averaged solution (which can only be computed numerically) is well-approximated by the solution for $\theta\approx0$ (since this dominates the contribution to $\nabla \cdot {\delta {\bf v}}$).\footnote{Because the shear terms break the symmetry of the problem, in the equilibrium solution a grain drifts on an approximately elliptical orbit, with an epicyclic correction to the circular solution which extends the orbit along the shear direction. Considering this case, but with the time-averaged growth of the grain locations $r$, replaced with their semi-major axis distribution, gives an identical derivation.}
With that caveat, we can follow the identical procedure as \S~\ref{sec:encounters:pure.turb}. In this regime there are two solution branches: the first is again an exponentially growing solution with frequency $=\varpi/t\eddy$ and divergence
\begin{equation}
\label{eqn:rhomean.for.fullderiv}
\langle \delta \ln{\rho}\rangle_{\scalevar\eddy} = {\Bigl\langle} -\int_{0}^{\infty} (\nabla\cdot \delta {\bf v})\,{\rm d}t {\Bigr\rangle} \approx -2\,\varpi(\scalevar\eddy)\,\frac{\delta t}{t\eddy}
\end{equation}
but now with $\varpi=\varpi_{1}$ given by the positive, real root of
\begin{align}
\label{eqn:varpi.full}
0 &= \nonumber
16\,\tilde{\tau}_{\rm s}^{3}\,\varpi_{1}^{4} +
32\,\tilde{\tau}_{\rm s}^{2}\,\varpi_{1}^{3} +
\tilde{\tau}_{\rm s}\,(20+7\,\tau_{\rm s}^{2})\,\varpi_{1}^{2} \\
&
+ 4\,(1 + \tau_{\rm s}^{2} - 3\,\tau_{\rm s}\,\tilde{\tau}_{\rm s})\,\varpi_{1} -
4\,(\tilde{\tau}_{\rm s}+2\,\tau_{\rm s})
\end{align}
\begin{align}
\varpi_{1} &\rightarrow
\begin{cases}
{\displaystyle \tilde{\tau}_{\rm s}}\ \ \ \ \ \ \ \ \ \ \ \hfill {\tiny (\tilde{\tau}_{\rm s}\ll1)} \\
{\displaystyle (2\,\tilde{\tau}_{\rm s})^{-1/2}}\ \ \ \ \ \ \ \ \ \ \ \hfill {\tiny (\tilde{\tau}_{\rm s}\gg1,\ \tilde{\tau}_{\rm s}\gg\tau_{\rm s})} \\
{\displaystyle 2\,(\tau_{\rm s}+\tau_{\rm s}^{-1})^{-1}}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hfill {\tiny (\tau_{\rm s}\gg\tilde{\tau}_{\rm s})} \
\end{cases}
\end{align}
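The quoted limits of $\varpi_{1}$ can be verified by solving the quartic (Eq.~\ref{eqn:varpi.full}) numerically; a sketch of ours (function name assumed):

```python
import numpy as np

def varpi_1(tau_s, tts):
    """Largest positive real root of the quartic Eq. (varpi.full);
    tau_s = t_s*Omega, tts = tau~_s = t_s/t_eddy."""
    coeffs = [16.0 * tts**3,
              32.0 * tts**2,
              tts * (20.0 + 7.0 * tau_s**2),
              4.0 * (1.0 + tau_s**2 - 3.0 * tau_s * tts),
              -4.0 * (tts + 2.0 * tau_s)]
    r = np.roots(coeffs)
    r = r[np.abs(r.imag) < 1e-9 * (1.0 + np.abs(r.real))].real
    return r[r > 0.0].max()

# small-scale limit (tau~_s >> tau_s): recovers the no-shear root, Eq. (varpi.pureturb)
for tts in (0.1, 1.0, 10.0):
    no_shear = (-2.0 + np.sqrt(2.0 * (1.0 + np.sqrt(1.0 + 16.0 * tts**2)))) / (4.0 * tts)
    assert abs(varpi_1(1e-8, tts) - no_shear) < 1e-6

# large-scale limit (tau_s >> tau~_s): varpi_1 -> 2/(tau_s + 1/tau_s)
for tau_s in (0.1, 1.0, 10.0):
    assert abs(varpi_1(tau_s, 1e-3) - 2.0 / (tau_s + 1.0 / tau_s)) < 5e-3
```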
As expected, on small scales where $t\eddy\ll \Omega^{-1}$ ($\tilde{\tau}_{\rm s}\gg\tau_{\rm s}$), this reduces to the solution for turbulence without shear (Eq.~\ref{eqn:varpi.pureturb}): we can write $\varpi_{1}$ in this limit just in terms of $\tilde{\tau}_{\rm s}$, and it scales as $\tilde{\tau}_{\rm s}$ for $\tilde{\tau}_{\rm s}\ll1$ and $(2\,\tilde{\tau}_{\rm s})^{-1/2}$ for $\tilde{\tau}_{\rm s} \gg 1$. On sufficiently large scales, $\tau_{\rm s}\gg\tilde{\tau}_{\rm s}$, we recover the ``pure shear'' solution neglecting internal eddy structure, the approximately constant $\varpi_{1}\rightarrow 2/(\tau_{\rm s}+\tau_{\rm s}^{-1})$.
However, owing to the non-linearity in Eqs.~\ref{eqn:eom.shear.1}-\ref{eqn:eom.shear.2}, there is a second ``early time'' solution branch. Upon first encountering the eddy, the grains have zero mean (peculiar) vorticity, so the coherent $\delta v$ grows with time. At sufficiently early times, $\delta v$ is small and the solution to Eqs.~\ref{eqn:eom.shear.1}-\ref{eqn:eom.shear.2} is obtained by expanding to leading order in $\delta v$. After substitution to eliminate $\delta v_{\theta}$, the system simplifies to $\tau_{\rm s}^{2}\,\delta \ddot{v}_{r} + 2\,\tau_{\rm s}\,\delta \dot{v}_{r} + (1+\tau_{\rm s}^{2})\,\delta v_{r} = 2\,\tau_{\rm s}\,\delta u_{\theta}$. Just as for the solution in \S~\ref{sec:encounters:pure.turb} above, this has two decaying oscillatory solutions which do not contribute to the integrated $t\rightarrow\infty$ divergence (since the equations are linearized), and a peculiar solution $\delta v_{r} = 2\,\delta u_{\theta}/(\tau_{\rm s}+\tau_{\rm s}^{-1})$, i.e.\ just the ``large scale'' solution from before. This leads to an identical expression (Eq.~\ref{eqn:rhomean.for.fullderiv}) for $\langle \delta \ln{\rho} \rangle$, but with $\varpi = \varpi_{0} = 2/(\tau_{\rm s}+\tau_{\rm s}^{-1})$. This solution track dominates when $\varpi_{1}<\varpi_{0}$; when $\varpi_{1}>\varpi_{0}$, the $\varpi_{0}$ solution track dominates only for an initial time $t\ll t\eddy$, until (as $\delta v_{r}$ grows) the second-order terms in $\delta v$ become important and $\varpi\rightarrow\varpi_{1}$. Comparing to direct numerical integration, it is straightforward to verify that the general solution for both regimes is accurately captured by
\begin{align}
\varpi(\scalevar\eddy) = {\rm MAX}{\Bigl[} \varpi_{1},\ \varpi_{0}=2\,(\tau_{\rm s}+\tau_{\rm s}^{-1})^{-1} {\Bigr]}
\end{align}
This is directly analogous to our heuristic estimate in \S~\ref{sec:model.encounters:toy}; global angular momentum sets a ``floor'' in the (second-order) centrifugal force, here represented by $\varpi_{0}$.
Likewise, $\delta t$ obeys the same scalings as \S~\ref{sec:encounters:pure.turb}, with ${\bf V}_{\rm rel}$ contributed by the turbulence, but now there is a non-zero laminar relative gas-grain flow, given by the equilibrium drift solution ${\bf V}_{L} = {\bf v}^{d} - {\bf u}^{d}$ in Eqs.~\ref{eqn:vdrift}-\ref{eqn:vdrift.last}:
\begin{align}
\label{eqn:v.laminar}
|{\bf V}_{L}|^{2} = \frac{4\,(1+\tilde{\rho})^{2}\,\tau_{\rm s}^{2} + \tau_{\rm s}^{4}}{[ (1+\tilde{\rho})^{2} + \tau_{\rm s}^{2} ]^{2}}\,(\eta\,V_{K})^{2} = \frac{1}{\beta^{2}}\,|v\eddy(\scalevar_{\rm max})|^{2}
\end{align}
Together with Eqs.~\ref{eqn:rhomean.for.fullderiv}-\ref{eqn:varpi.full}, we can now write
\begin{align}
\frac{\delta t}{t\eddy} &= {\Bigl[}1 + {\Bigl(}\frac{t_{\rm cross}}{t\eddy} {\Bigr)}^{-1} {\Bigr]}^{-1} \\
\frac{t_{\rm cross}}{t\eddy} &= -{\tilde{\tau}_{\rm s}}\,\ln{{\Bigl[}1 - \frac{(\scalevar\eddy/\scalevar_{\rm max})}{\tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,g(\scalevar\eddy)^{1/2}} {\Bigr]}}
\end{align}
where
\begin{align}
\label{eqn:g.timescale.function}
g(\scalevar\eddy) &\equiv \frac{\langle |{\bf v}_{0} |^{2} \rangle}{|v\eddy(\scalevar_{\rm max})|^{2}} = g_{0}(\scalevar\eddy) + \frac{1}{\beta^{2}} \\
&= \frac{1}{\beta^{2}} + \tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,\ln{{\Bigl[} \frac{1+\tilde{\tau}_{\rm s}(\scalevar_{\rm max})^{-1}}{1 + \tilde{\tau}_{\rm s}(\scalevar\eddy)^{-1}} {\Bigr]}} \nonumber
\end{align}
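To illustrate how these expressions behave, the following sketch evaluates $g$ (Eq.~\ref{eqn:g.timescale.function}) and $\delta t/t\eddy$ numerically. The inertial-range scaling $t\eddy\propto\lambda^{1-\zeta_{1}}$ (hence $\tilde{\tau}_{\rm s}(\lambda)=\tilde{\tau}_{\rm s}(\scalevar_{\rm max})\,(\lambda/\scalevar_{\rm max})^{-(1-\zeta_{1})}$), the reading of the un-subscripted $\tilde{\tau}_{\rm s}$ prefactor as evaluated at the eddy scale, and the parameter values are all assumptions for illustration:

```python
import math

# Evaluate delta_t/t_eddy from the expressions above, with lam = lambda/lambda_max.
# Assumed (illustrative): tau_s~(lam) = tau_max * lam^-(1 - zeta_1), with
# zeta_1 ~ 0.364; tau_max = tau_s~(lambda_max) and beta are free parameters here.
ZETA1 = 0.364

def tau_tilde(lam, tau_max):
    return tau_max * lam ** -(1.0 - ZETA1)

def g(lam, tau_max, beta):  # Eq. (g.timescale.function)
    g0 = tau_max * math.log((1.0 + 1.0/tau_max) / (1.0 + 1.0/tau_tilde(lam, tau_max)))
    return g0 + 1.0 / beta**2

def dt_over_teddy(lam, tau_max, beta):
    x = lam / (tau_max * math.sqrt(g(lam, tau_max, beta)))
    t_cross = -tau_tilde(lam, tau_max) * math.log(1.0 - x)  # t_cross / t_eddy
    return 1.0 / (1.0 + 1.0/t_cross)

# On the top scale the turbulent contribution vanishes, so g -> 1/beta^2:
assert abs(g(1.0, 1.0, 0.5) - 4.0) < 1e-12
# delta_t is bounded by the eddy lifetime, and shrinks on small scales:
vals = [dt_over_teddy(lam, 1.0, 0.5) for lam in (1.0, 0.3, 0.1, 0.03)]
assert all(0.0 < v < 1.0 for v in vals)
assert vals[0] > vals[-1]
```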
\begin{figure}
\centering
\plotonesize{gr_rhomax.pdf}{0.98}
\vspace{-0.15cm}
\caption{{\em Top:}
Maximum grain density measured in the MRI simulations from Fig.~\ref{fig:grain.rho.mri}, as a function of population stopping time $\tau_{\rm s}$. Different simulations with various box sizes and resolutions are plotted; the predictions should form an ``upper envelope.'' We compare our simple approximation for large eddies and full prediction, given the (finite) simulation resolution and turbulence properties. These agree very well up to $\tau_{\rm s}\sim$ a few, though they under-predict fluctuations when $\tau_{\rm s}\gg1$. On large scales, maximum densities increase rapidly with $\tau_{\rm s}$ up to $\tau_{\rm s}\sim1$.
We also show the prediction if the simulation were infinitely high-resolution (densities measured on arbitrarily small scales). In this regime, $\tau_{\rm s}\ll1$ grains also show large fluctuations; however, these high densities are manifest on very small scales. Roughly, convergence to this solution requires resolving eddies with $t\eddy\gtrsim0.05\,t_{\rm s}$; for the smallest $t_{\rm s}$ and given simulation turbulence properties and box sizes here, this would require a minimum $\sim(10^{6})^{3}$-cell simulation.
{\em Bottom:} Maximum grain density in simulations from \citet{johansen:2012.grain.clustering.with.particle.collisions}, averaged on different smoothing scales $\lambda$. For large grains, the predicted maximum increases on smaller smoothing scales $\propto \lambda^{-(1-2)}$.
\label{fig:rho.max}}
\end{figure}
\vspace{-0.5cm}
\subsection{Hierarchical Encounters with Many Structures}
\label{sec:model.hierarchy}
Now if we assume the {\em gas} turbulence follows a standard hierarchical multi-fractal cascade, then we can embed our estimates for behavior in individual eddy encounters into the statistics of the eddies themselves.
The multi-fractal models in \S~\ref{sec:intro} are predicated on two assumptions: (1) that the statistics are controlled by the most singular intermittent structures (which can therefore be treated discretely) producing random multiplicative effects of fixed amplitude; and (2) that over the (gas) inertial range, the number $m$ of such eddies/structures in a differential interval in scale\footnote{If the flow is ergodic, we can equivalently consider this the number of structures encountered by a Lagrangian parcel over a coherence timescale \citep{cuzzi:2001.grain.concentration.chondrules,hopkins:frag.theory}. As shown in \citet{hopkins:2012.intermittent.turb.density.pdfs}, that assumption leads to a remarkably accurate description of the first-order intermittency corrections to the {\em gas} density PDFs in super-sonic isothermal turbulence.} ($\Delta \ln{\lambda} = \ln{\lambda_{2}} - \ln{\lambda_{1}}$, i.e.\ at a point in space, the number of ``overlapping'' structures between scales $\lambda_{1}$ and $\lambda_{2}$) is Poisson-distributed
\begin{equation}
\label{eqn:poisson}
P(m) = P_{\Delta N}(m) = \frac{\Delta N^{m}}{m!}\,\exp{(-\Delta N)}
\end{equation}
with mean
\begin{equation}
\label{eqn:deltaN}
\Delta N = C_{\infty}\,|\Delta \ln{\lambda}|
\end{equation}
This is a purely geometric argument, which follows from the idea that the flow structure is scale-invariant over the inertial range \citep[see][]{shewaymire:logpoisson,dubrulle:logpoisson}. Here $C_{\infty}$ is the fractal co-dimension of the most singular structures.\footnote{We can think of this as a dimensional ``filling factor'' ($C_{\infty} = d - D_{\infty}$, where $D_{\infty}$ is the fractal dimension occupied by the most singular structures and $d$ the total spatial dimension), from which Eq.~\ref{eqn:deltaN} follows trivially. However this is {\em not} the same as $N_{\rm d}$, the ``wrapping dimension'' defined above. For example, a thin ring with $D_{\infty}=1$ can accelerate grains radially in two dimensions ($N_{\rm d}=2$).} In sub-sonic and/or incompressible turbulence (in both two and three dimensions), $C_{\infty}=2$ is theoretically expected and experimentally observed, with vortex tubes (in ``worms'' or ``filaments'') constituting the most singular structures (see \citealt{sheleveque:structure.functions,shezhang:2009.sheleveque.structfn.review} for reviews).\footnote{In super-sonic or highly magnetized, compressible turbulence, $C_{\infty}\approx1$, corresponding to sheets, is more appropriate \citep[see][]{muller.biskamp:mhd.turb.structfn,boldyrev:2002.structfn.model}. While we do not expect turbulence to be super-sonic here, $C_{\infty}$ may change when the grain density is large (discussed below).}
Now apply the model for gas velocity statistics from \citet{sheleveque:structure.functions,shewaymire:logpoisson,dubrulle:logpoisson} to the grain density statistics. Following \S~\ref{sec:model.encounters}, assume that each encounter in $\Delta\ln{\lambda}$ produces a multiplicative effect on the density statistics with the mean expected magnitude $\langle \delta \ln{\rho}\,[\scalevar\eddy = \lambda] \rangle$ on that scale.\footnote{This simplification, that structures produce fluctuations of mean magnitude (given their scale), is substantial; yet for the gas velocity statistics it appears sufficient to capture the power spectrum and PDF shape, and higher-order structure function/correlation statistics to $\gtrsim10$th order in experiments \citep[see][]{shezhang:2009.sheleveque.structfn.review}} In a probabilistic sense, as we sample the density statistics about some random point in space, the statistics on successive scales $\lambda_{1}$ and $\lambda_{2}$ are given by
\begin{align}
\nonumber
\ln{[\rho\grain(\lambda_{2})]} &= \ln{[\rho\grain(\lambda_{1})]} + m\,\langle \delta \ln{\rho}\,[\scalevar\eddy=\lambda_{1}]\rangle + \epsilon_{0} \\
\label{eqn:logpoisson.1}
&= \ln{[\rho\grain(\lambda_{1})]} - m\,|\deltarhonoabs| + \epsilon_{0}
\end{align}
where $m$ is Poisson-distributed as Eq.~\ref{eqn:poisson}-\ref{eqn:deltaN}, i.e.\
\begin{align}
P{\Bigl(}\ln{{\Bigl[}\frac{\rho\grain(\lambda_{2})}{\rho\grain(\lambda_{1})} {\Bigr]}}{\Bigr)}\,{\rm d}\ln{{\Bigl[}\frac{\rho\grain(\lambda_{2})}{\rho\grain(\lambda_{1})} {\Bigr]}} = P(m)\,{\rm d}m
\end{align}
Mass conservation trivially determines the integration constant
\begin{equation}
\epsilon_{0} = \Delta N\,[1-\exp{(-|\deltarhonoabs|_{\scalevar\eddy=\lambda_{1}})}]
\end{equation}
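Note that this $\epsilon_{0}$ makes mass conservation exact at each step, since $\langle e^{-m\,|\deltarhonoabs|}\rangle = \exp{[\Delta N\,(e^{-|\deltarhonoabs|}-1)]}$ for Poisson-distributed $m$, so $\langle \rho\grain(\lambda_{2})/\rho\grain(\lambda_{1}) \rangle = 1$. A quick Monte-Carlo check (an illustrative sketch, with arbitrary parameter values):

```python
import math, random

# Check that eps_0 enforces mass conservation per cascade step:
# ln(rho_2/rho_1) = eps0 - m*d, with m ~ Poisson(dN) and
# eps0 = dN*(1 - e^-d), gives <rho_2/rho_1> = 1 exactly, because
# E[e^(-m*d)] = exp(dN*(e^-d - 1)) for a Poisson variate.
rng = random.Random(42)

def poisson(lam):
    # Knuth's algorithm; fine for the modest dN used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mean_density_ratio(dN, d, n=100000):
    eps0 = dN * (1.0 - math.exp(-d))
    return sum(math.exp(eps0 - poisson(dN)*d) for _ in range(n)) / n

assert abs(mean_density_ratio(1.5, 0.5) - 1.0) < 0.01
```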
Physically, this should be interpreted as follows. Beginning at the ``top'' scale $\scalevar_{\rm max}$ (where $\rho\grain=\langle \rho\grain \rangle$ by definition), we can recursively divide the volume about a random point into smaller sub-volumes of size $\lambda$, each containing a (discrete) number of singular structures (vortices) with characteristic scale $\sim\lambda$. Each such vortex produces a multiplicative effect on the local density field $\delta \ln{\rho}$ (additive in log-space); we simplify the statistics by assigning each its mean expected effect $\langle \delta \ln{\rho}(\lambda,\,t_{\rm s},..) \rangle$. Per \S~\ref{sec:model.encounters}, this effect applies {\em within} the eddy region (dispersing grains from the eddy center). But ``dispersed'' grains must go somewhere; the $\epsilon_{0}$ term simply represents the mean effect on the density in the interstices {\em between} vortices, created by their expulsion of grains. These qualitative effects are well-known from simulations and experiments (see references in \S~\ref{sec:intro}); this is their quantitative representation.
This model also determines the scaling of the gas statistics, e.g.\ $|v\eddy| \propto \lambda^{\zeta_{1}}$. The cascade models above predict that the structure functions $\sigma_{p}(\lambda)=\langle |\Delta {\bf u}(\lambda)|^{p} \rangle \equiv \langle |\delta{\bf u}({\bf x}) - \delta{\bf u}({\bf x}+{\bf \lambda})|^{p} \rangle$ scale as power laws $\sigma_{p}(\lambda)\propto \lambda^{\zeta_{p}}$ with
\begin{equation}
\label{eqn:structfn}
\zeta_{p} = (1-\gamma)\,\frac{p}{3} + C_{\infty}\,{\Bigl[}1-{\Bigl(}1 - \frac{\gamma}{C_{\infty}}{\Bigr)}^{p/3}{\Bigr]}
\end{equation}
where $\gamma=2/3$ follows generically from the Navier-Stokes equations (from the Kolmogorov $4/5$ths law).\footnote{In the original \citet{kolmogorov:turbulence} model, $\zeta_{p}=p/3$, giving the familiar $|v\eddy|\propto\scalevar\eddy^{1/3}$. The low-order differences between this and the more detailed multi-fractal models are small. So for our purposes, forcing $\zeta_{1}=1/3$ instead of Eq.~\ref{eqn:structfn} gives very similar predictions. However, a wide range of experiments favor the scaling in Eq.~\ref{eqn:structfn}, and it is the only scaling strictly self-consistent with the log-Poisson hierarchy.} What we refer to as $v\eddy$ is the one-point function (peculiar eddy velocity differences across the eddy) $p=1$, so we have
\begin{equation}
\label{eqn:velslope}
\zeta_{1} = \frac{1}{9} + C_{\infty}\,{\Bigl[}1 - {\Bigl(}1 - \frac{2}{3\,C_{\infty}} {\Bigr)}^{1/3} {\Bigr]}
\end{equation}
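For the fiducial incompressible values $\gamma=2/3$ and $C_{\infty}=2$, these exponents are easy to evaluate: $\zeta_{3}=1$ exactly (the $4/5$ths-law constraint holds for any $C_{\infty}$), and $\zeta_{1}\approx0.364$, slightly steeper than the Kolmogorov $1/3$. A minimal sketch:

```python
# Structure-function exponents from Eq. (structfn), evaluated for the
# incompressible values gamma = 2/3, C_inf = 2 (She-Leveque scaling).
def zeta(p, gamma=2.0/3.0, C_inf=2.0):
    return (1.0 - gamma)*p/3.0 + C_inf*(1.0 - (1.0 - gamma/C_inf)**(p/3.0))

assert abs(zeta(3) - 1.0) < 1e-12      # exact: the 4/5ths-law constraint
assert abs(zeta(1) - 0.36395) < 1e-4   # vs. the Kolmogorov value 1/3
```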
\begin{figure}
\hspace{-0.4cm}
\plotonesize{gr_nongaussian_ed.pdf}{1.03}
\vspace{-0.2cm}
\caption{Density PDF in laboratory experiments of water droplets in wind tunnel turbulence \citep{monchaux:2010.grain.concentration.experiments.voronoi}. {\em Top:} Range of particle density PDFs obtained (dashed), normalized by their variance. We compare the predicted log-Poisson distribution, with the same variance and a range of $|\deltarhonoabs|_{\rm int}\sim0.01-0.2$ corresponding to model predictions. {\em Bottom:} Test of log-normality: we compare the variance in $\rho\grain$ to that in $\ln{\rho\grain}$ from the same experiments, to the range predicted for log-Poisson PDFs with the predicted range in $|\deltarhonoabs|_{\rm int}$, to the prediction from a log-normal distribution, and to that from a normal (Gaussian) distribution in linear $\rho\grain$. The experiments favor a log-Poisson distribution over either the Gaussian or the log-normal.
\label{fig:nongaussian}}
\end{figure}
\vspace{-0.5cm}
\subsection{Behavior at High Grain Densities}
\label{sec:model.highrho}
Thus far, we have neglected the back-reaction of grains on the gas (our predictions are appropriate when $\rho\grain < \rho_{\rm g}$). To extrapolate to $\rho\grain \gtrsim \rho_{\rm g}$, we require additional assumptions.
Recall, the background drift solution in Eqs.~\ref{eqn:vdrift}-\ref{eqn:vdrift.last} already accounts for $\tilde{\rho}$. So, after subtracting this flow, the Eqs.~\ref{eqn:eom.peculiar} for peculiar grain motion remain identical; but the gas equation of motion is (dropping the shear terms for simplicity)
\begin{align}
\delta \dot{{\bf u}} &= - \tilde{\rho}\,{\Bigl(}\frac{\delta {\bf u}-\delta {\bf v}}{t_{\rm s}} {\Bigr)} - \frac{1}{\rho_{\rm g}}\,\nabla \delta P_{\rm g}
\end{align}
where $\nabla \delta P_{\rm g}$ represents the peculiar hydrodynamic forces. In sub-sonic turbulence, it seems reasonable to make the {\em ansatz} that the back-reaction, while it may distort the flow $\delta {\bf u}$, does not alter the {driving} force ($\nabla \delta P_{\rm g}$) that forms the eddy.\footnote{Although we caution that this cannot be strictly true in the regime of high $\tilde{\rho}$ where the streaming instability operates \citep{goodman.pindor:2000.secular.drag.instabilities.grains,youdin.goodman:2005.streaming.instability.derivation}.} But we know that the ``zero back-reaction'' eddy structure $\delta {\bf u}_{0}\equiv \delta {\bf u}(\tilde{\rho}=0)$ is, by definition, a solution to the equation
$\delta \dot{{\bf u}}_{0} = - \rho_{\rm g}^{-1}\,\nabla \delta P_{\rm g}$.
So decompose ${\bf u}$ into the sum $\delta{\bf u}\equiv \delta {\bf u}_{0} + \delta {\bf u}^{\prime}$, substitute, and obtain
\begin{align}
\label{eqn:duprime}
\delta \dot{{\bf u}}^{\prime} &= - \tilde{\rho}\,{\Bigl(}\frac{\delta {\bf u}_{0}+\delta {\bf u}^{\prime}-\delta {\bf v}}{t_{\rm s}} {\Bigr)} =
\tilde{\rho}\,{\Bigl(}\frac{\delta{\bf v}-\delta{\bf u}_{0}}{t_{\rm s}} - \frac{\delta {\bf u}^{\prime}}{t_{\rm s}}
{\Bigr)}
\end{align}
In the limit $\tilde{\rho}\rightarrow0$, we know $\delta {\bf u}^{\prime}\rightarrow 0$. For $0<\tilde{\rho}\ll1$, we expect $|\delta {\bf u}^{\prime}|\ll \delta {\bf u}$, so we can linearize the equations of motion and obtain $\delta\dot{{\bf u}}^{\prime} \approx (\tilde{\rho}/t_{\rm s})\,(\delta {\bf v} - \delta {\bf u}_{0}) \sim \tilde{\rho}\,{\rm d}(\delta {\bf v} - \delta {\bf u}_{0})/{\rm d}t + \mathcal{O}(\tilde{\rho}^{2})$ (where the latter follows from the grain and gas momentum equations assuming $\langle |\delta {\bf u}_{0}| \rangle > \langle |\delta {\bf v} | \rangle$), so $\delta {\bf u}^{\prime} \approx \tilde{\rho}\,(\delta {\bf v} - \delta {\bf u}_{0} )$. In the limit $\tilde{\rho}\rightarrow\infty$ ($\rho_{\rm g}/\rho\grain = \tilde{\rho}^{-1} \rightarrow 0$), the gas is perfectly dragged by the grains, so $\delta {\bf u}\rightarrow \delta {\bf v}$ and $\delta {\bf u}^{\prime} \rightarrow \delta {\bf v} - \delta {\bf u}_{0}$. Linearizing in this limit in $\rho_{\rm g}/\rho\grain=\tilde{\rho}^{-1}$ similarly gives $\delta {\bf u}^{\prime} \sim (1-\tilde{\rho}^{-1})\,(\delta {\bf v} - \delta {\bf u}_{0}) + \mathcal{O}(\tilde{\rho}^{-2})$. We can simply interpolate between these limits by assuming that, in equilibrium
\begin{align}
\delta {\bf u}^{\prime} \sim \frac{\tilde{\rho}}{1+\tilde{\rho}}\,{\Bigl(} \delta{\bf v} - \delta{\bf u}_{0} {\Bigr)}
\end{align}
We stress that this is {\em not} exact, but it at least gives the correct asymptotic behavior. Inserting this into the equation for $\delta {\bf v}$, we have
\begin{align}
\delta\dot{\bf v} = -\frac{(\delta {\bf v} - [\delta{\bf u}_{0} + \delta {\bf u}^{\prime}])}{t_{\rm s}} \rightarrow -\frac{(\delta {\bf v} - \delta{\bf u}_{0})}{t_{\rm s}\,(1+\tilde{\rho})}
\end{align}
But this is our original equation for $\delta {\bf v}$, modulo the substitution $t_{\rm s}\rightarrow t_{\rm s}\,(1+\tilde{\rho})$. So -- given the extremely simple ansatz here -- our derivation of $\varpi$ and previous quantities is identical, but we should replace $t_{\rm s}$ with an ``effective'' $t_{\rm s,\,\rho}\equiv t_{\rm s}\,(1+\tilde{\rho})$. At this lowest order, back-reaction lessens the relative velocities (hence friction strength) by dragging gas with grains, and thus lengthens the ``effective'' stopping time.
The timescale $\delta t$ is of course still limited by the eddy lifetime $t\eddy$, and the crossing time solution we previously derived already accounted for $\tilde{\rho}>0$ (in the drift time), so we do not need to re-derive it. Finally, we will further assume that the back-reaction, while it may distort individual eddies, does not alter their fractal dimensions (hence other statistics like the gas power spectrum). Below, we discuss the accuracy of these assumptions.
\vspace{-0.5cm}
\section{Predictions}
\begin{figure}
\centering
\plotonesize{gr_pwrspec.pdf}{1.01}
\vspace{-0.4cm}
\caption{Power spectra of linear grain density fluctuations. We compare the streaming-instability simulations in Fig.~\ref{fig:grain.rho.jy} for $\tau_{\rm s}=0.1-1$, and the turbulent concentration simulations ({\em top left}) in \citet[][squares]{yoshimoto:2007.grain.clustering.selfsimilar.inertial.range} and \citet[][diamonds]{pan:2011.grain.clustering.midstokes.sims} for Stokes numbers $\sim5,\,10,\,50$. Simulation results within five cells of the resolution limit are shown as dashed lines; the power suppression here is artificial. Agreement is good. For large particles, the power is preferentially concentrated on large scales. For small particles, the power spectrum is quite flat, until a turnover at scales below which $t\eddy\ll t_{\rm s}$.
\label{fig:pwrspec}}
\end{figure}
\subsection{The Shape of the Grain Density Distribution}
\label{sec:pred.rhodist}
The full grain density PDF, averaged on any spatial scale, can now be calculated.
To do so, we start on the initial scale $\lambda=\scalevar_{\rm max}$. By definition, since this is the top scale of the turbulence and/or box, the density distribution is a delta function with $\rho\grain(\scalevar_{\rm max}) = \langle \rho\grain \rangle$. Now take a differential step in scale $\ln{\lambda} \rightarrow \ln{\scalevar_{\rm max}} - {\rm d}\ln{\lambda}$ and convolve this density with the PDF of density changes $P(\ln[\rho\grain(\lambda_{2})/\rho\grain(\lambda_{1})])$ from \S~\ref{sec:model.hierarchy}. Repeat until the desired scale is reached; the one-point density PDF is just this iterated to $\lambda\rightarrow0$.\footnote{Really we should truncate (or modify) this at the viscous scale $\lambda_{\nu}$; here we assume large Reynolds number $Re\rightarrow\infty$. However the distinction is important for modest Stokes numbers $St=t_{\rm s}/t\eddy(\lambda_{\nu})$ or simulations with limited resolution (small ``effective'' $St$).}
It is easiest to do this with a Monte-Carlo procedure; each point in a large ensemble represents a random point in space (thus they equally sample volume) with its own independent $\ln{\rho}_{i}$. For each step in scale $\Delta\ln{\lambda}$, draw $m=m_{i}$ for each point from the appropriate Poisson distribution for the step (Eq.~\ref{eqn:poisson}), and use $|\delta \ln{\rho}(\lambda)|$ to calculate the change in $\ln{\rho}_{i}$ (Eq.~\ref{eqn:logpoisson.1}), and repeat until the desired scale is reached.
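As a concrete (simplified) illustration of this procedure, the following sketch runs the Monte-Carlo cascade in the regime where $|\deltarhonoabs|$ is scale-independent, and checks the integrated mean and variance against the analytic expectations ($\langle\rho\grain\rangle$ conserved, and $S=C_{\infty}\,\ln(\scalevar_{\rm max}/\lambda)\,|\deltarhonoabs|^{2}$); all parameter values are arbitrary:

```python
import math, random

# Simplified Monte-Carlo cascade described in the text, for a
# scale-independent |dlnrho| (the large-eddy limit). Each of n points
# carries its own ln(rho); at each step in scale we draw
# m ~ Poisson(C_inf * dlnlam) and apply the log-Poisson increment
# with the mass-conserving eps_0 term.
rng = random.Random(7)

def poisson(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def run_cascade(C_inf=2.0, dlnrho=0.3, ln_range=2.0, nsteps=40, n=50000):
    dlnlam = ln_range / nsteps
    dN = C_inf * dlnlam
    eps0 = dN * (1.0 - math.exp(-dlnrho))
    lnrho = [0.0] * n   # rho = <rho> on the top scale lambda_max
    for _ in range(nsteps):
        for i in range(n):
            lnrho[i] += eps0 - poisson(dN) * dlnrho
    return lnrho

lnrho = run_cascade()
n = len(lnrho)
mean_rho = sum(math.exp(x) for x in lnrho) / n
mu = sum(lnrho) / n
var_lnrho = sum((x - mu)**2 for x in lnrho) / n
assert abs(mean_rho - 1.0) < 0.05               # mass conservation
assert abs(var_lnrho - 2.0*2.0*0.3**2) < 0.03   # S = C_inf * ln_range * |dlnrho|^2
```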
Recall that for the largest eddies $|\deltarhonoabs|\approx 2\,N_{\rm d}/(\tau_{\rm s}+\tau_{\rm s}^{-1})$ is approximately constant. In that case, the log-Poisson distribution in \S~\ref{sec:model.hierarchy} is scale-invariant and infinitely divisible, meaning that the integrated PDF of $\ln{\rho}$ is {\em also} exactly a log-Poisson distribution on all scales, with the same $|\deltarhonoabs|\sim$\,constant, and $m$ drawn from a Poisson distribution with the integrated $\Delta N = C_{\infty}\,\ln{(\scalevar_{\rm max}/\lambda)}$.
However, if $|\deltarhonoabs|$ depends on scale (as it does on small scales), then the convolved distribution is not exactly log-Poisson. But it is quite accurately approximated by a log-Poisson with the same mean and variance as the exact convolved distribution \citep[see][]{stewart:2006.gamma.function.convolution.approximation}. These quantities add linearly with scale. Over the differential interval in scale ${\rm d}\ln{\lambda}$ ($\Delta N=C_{\infty}\,{\rm d}\ln{\lambda}$), the added variance ($\Delta S$) in ${\Delta}\ln{\rho} = \ln{(\rho\grain[\lambda_{2}]/\rho\grain[\lambda_{1}])}$ is $\Delta S = \Delta N\,|\deltarhonoabs|^{2}$. So the exact integrated variance in the final volume-weighted $\ln{\rho}$ distribution is
\begin{align}
\label{eqn:pdf.S}
S_{\ln{\rho},\,V}(\lambda) &= \int \frac{{\rm d}S_{\ln{\rho},\,V}}{{\rm d}\ln{\lambda}}\,{\rm d}\ln{\lambda} = \int_{\lambda}^{\scalevar_{\rm max}} \Delta N\,|\delta \ln{\rho}|^{2} \\
&= \int_{\lambda}^{\scalevar_{\rm max}} C_{\infty}\,|\delta \ln{\rho}(\lambda)|^{2}\,{\rm d}\ln{\lambda} \nonumber
\end{align}
And the integrated first moment (subtracting the $\epsilon_{0}$ term) is
\begin{align}
\label{eqn:pdf.mu}
\mu &= \int \Delta N\,|\deltarhonoabs| = \int_{\lambda}^{\scalevar_{\rm max}}C_{\infty}\,|\delta \ln{\rho}(\lambda)|\,{\rm d}\ln{\lambda}
\end{align}
The approximate integrated PDF on a scale $\lambda$ is then given by
\begin{align}
P_{V}&(\ln{\rho\grain})\,{\rm d}\ln{\rho\grain} \approx \frac{\Delta N_{\rm int}^{m}\,\exp{(-\Delta N_{\rm int})}}{\Gamma(m+1)}\,\frac{{\rm d}\ln{\rho\grain}}{|\deltarhonoabs|_{\rm int}} \\
\nonumber
m &= |\deltarhonoabs|_{\rm int}^{-1}\,{\Bigl\{}\Delta N_{\rm int}\,{\Bigl[}1 - \exp{(-|\deltarhonoabs|_{\rm int})} {\Bigr]} - \ln{{\Bigl(} \frac{\rho\grain}{\langle \rho\grain \rangle}{\Bigr)}} {\Bigr\}}
\end{align}
which is just the log-Poisson distribution (Eq.~\ref{eqn:logpoisson.1}) with
\begin{align}
\Delta N \rightarrow \Delta N_{\rm int} &\equiv \frac{\mu^{2}}{S_{\ln{\rho},\,V}} \\
\label{eqn:deltarho.int}
|\deltarhonoabs| \rightarrow |\delta \ln{\rho}|_{\rm int} &\equiv \frac{S_{\ln{\rho},\,V}}{\mu}
\end{align}
Note $\Delta N_{\rm int}$ is now $\sim C_{\infty}\,\langle \ln{(\scalevar_{\rm max}/\lambda)} \rangle$, where $\langle...\rangle$ denotes an average over integration weighted by $|\deltarhonoabs|$ (i.e.\ the ``effective'' dynamic range of the cascade which contributes to fluctuations). And $|\deltarhonoabs|_{\rm int}$ similarly reflects a variance-weighted mean.
This determines the volumetric grain density distribution, i.e.\ the probability, per unit volume, of a given mean grain density $\rho\grain=M_{\rm p}(V)/V$ within that volume $V$
\begin{equation}
P_{V}(\ln{\rho\grain}) = \frac{{\rm d}P_{\rm Vol}(\ln{\rho\grain})}{{\rm d}\ln{\rho\grain}}
\end{equation}
This is trivially related to the mass-weighted grain density distribution $P_{M}$, or equivalently the Lagrangian grain density distribution (distribution of grain densities at the location of each grain, rather than at random locations in the volume):
\begin{equation}
P_{M}(\ln{\rho\grain}) = \frac{{\rm d}P_{\rm Mass}(\ln{\rho\grain})}{{\rm d}\ln{\rho\grain}} = \rho\grain\,P_{V}(\ln{\rho\grain})
\end{equation}
Note that as $\Delta N_{\rm int}\rightarrow\infty$, this distribution becomes log-normal. This is generically a consequence of the central limit theorem, for a sufficiently large number of independent multiplicative events in the density field.
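This limit is easy to verify by comparing the log-Poisson PDF above (with $m$ continued to real values via the Gamma function) against a log-normal of identical mean and variance; a minimal sketch, with arbitrary illustrative parameter values representing a deep cascade:

```python
import math

# Compare the integrated log-Poisson PDF (m continued to real values via
# the Gamma function) to a log-normal with identical mean and variance of
# ln(rho); for large dN the two agree (central limit theorem).
def log_poisson_pdf(lnrho, dN, d):
    m = (dN*(1.0 - math.exp(-d)) - lnrho) / d
    if m < 0:
        return 0.0  # beyond the m = 0 maximum-density cutoff
    return math.exp(m*math.log(dN) - dN - math.lgamma(m + 1.0)) / d

def lognormal_pdf(lnrho, dN, d):
    mu = dN*(1.0 - math.exp(-d)) - dN*d   # mean of ln(rho)
    S = dN*d*d                            # variance of ln(rho)
    return math.exp(-(lnrho - mu)**2/(2.0*S)) / math.sqrt(2.0*math.pi*S)

# Deep-cascade (illustrative) values: dN = 400, |dlnrho| = 0.05, so
# the variance of ln(rho) is unity; compare within ~1 sigma of the peak.
for x in (-1.5, -0.5, 0.5):
    ratio = log_poisson_pdf(x, 400.0, 0.05) / lognormal_pdf(x, 400.0, 0.05)
    assert abs(ratio - 1.0) < 0.05
```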
\vspace{-0.5cm}
\subsection{The Grain Density Power Spectrum}
\label{sec:pred.pwrspec}
The power spectrum of a given quantity is closely related to the real-space variance as a function of scale. Specifically, the variance in some field smoothed with an isotropic real-space window function of size $\lambda$ is related to the power spectrum by
\begin{equation}
S(\lambda) = \int {\rm d}^{3}{\bf k}\,P({\bf k})\,|W({\bf k},\,\lambda)|^{2}
\end{equation}
where $W$ is the window function. If we isotropically average, and adopt for convenience a window function which is a Fourier-space top-hat\footnote{We treat the isotropically-averaged, Fourier-space top-hat case purely for convenience, because it is usually measured and more relevant on small scales. This is not the same as assuming the power spectrum is intrinsically isotropic or that Fourier modes are uncoupled.} we obtain
\begin{equation}
S(\lambda) = \int_{k=1/\lambda}^{\infty}\,\Delta^{2}(k)\,{\rm d}\ln{k}
\end{equation}
where $\Delta^{2}(k)$ is now defined as the isotropic, dimensionless power spectrum, and is related to $S(\lambda)$ by
\begin{equation}
\frac{{\rm d}S}{{\rm d}\ln{\lambda}} = \Delta^{2}(k[\lambda])
\end{equation}
But we know how the variance ``runs'' as a function of scale, for the logarithmic density distribution. Specifically, for the distribution in Eq.~\ref{eqn:logpoisson.1}, over some differential interval in scale ${\rm d}\ln{\lambda}$ corresponding to $\Delta N=C_{\infty}\,{\rm d}\ln{\lambda}$, the variance in $\ln{\rho}$ is just $\Delta S = \Delta N\,|\deltarhonoabs|^{2}$ (and this adds linearly in scale). So
\begin{align}
\label{eqn:pwrspec}
\Delta^{2}_{\ln{\rho}}(k) &= \frac{ {\rm d}\,S_{\ln{\rho}} }{ {\rm d}\ln{\lambda} } =
C_{\infty}\,|\deltarhonoabs|^{2} = C_{\infty}\,{\Bigl[} N_{\rm d}\,\varpi(\lambda)\,\frac{\delta t}{t\eddy} {\Bigr]}^{2}
\end{align}
Recall the turnover in $(\delta t/t\eddy)$ below $\scalevar_{\rm crit}$ (\S~\ref{sec:model.encounters}), which leads to a two-power law behavior in $\Delta^{2}$:
\begin{align}
\Delta_{\ln{\rho}}^{2}(k) \propto
\begin{cases}
{\displaystyle {\rm constant}}\ \ \ \ \ \hfill {\tiny (\lambda \gg \scalevar_{\rm crit})} \\
\\
{\displaystyle v\eddy^{2} \propto k^{-2\zeta_{1}}}\ \ \ \ \ \hfill {\tiny (\lambda \ll \scalevar_{\rm crit})} \
\end{cases}
\end{align}
i.e.\ we predict a turnover/break in the power spectrum at a characteristic scale $\scalevar_{\rm crit}$ (defined in \S~\ref{sec:model.encounters} as the scale where the timescale for grains to cross an eddy is shorter than the stopping time). On large scales where eddy turnover times are long, the logarithmic statistics are nearly scale-free, but on small scales, where eddy turnover times are short compared to the stopping and drift times, the variance is suppressed.
The power spectrum for the {\em linear} density field $\rho$ is similarly trivially determined as:
\begin{equation}
\Delta_{\rho}^{2} = \frac{{\rm d}S_{\rho}}{{\rm d}\ln{\lambda}}
\end{equation}
However $S_{\rho}$ is not so trivially analytically tractable, since the total variance does not sum simply in {\em linear}\footnote{This is a general point discussed at length in \citet{hopkins:frag.theory}, Appendices~F-G; it is not, in general, possible to construct a non-trivial field distribution that is simultaneously scale-invariant under linear-space and logarithmic-space convolutions. However, as shown therein, the compound log-Poisson cascade is {\em approximately} so, to leading order in the expansion of the logarithm.} $\rho$. But it is straightforward to construct $S_{\rho}$, by simply using Eq.~\ref{eqn:logpoisson.1} to build the density PDF at each scale, directly calculating the variance in the linear $\rho$, and then differentiating. If the density PDF is approximately log-Poisson, then we have
\begin{align}
\label{eqn:Srho.approx}
S_{\rho} \approx \exp{{\Bigl\{} \Delta N_{\rm int}\,{\Bigl(}1 - e^{-|\deltarhonoabs|_{\rm int}}{\Bigr)}^{2} {\Bigr\}}} - 1
\end{align}
This leads to the somewhat cumbersome expression for $\Delta^{2}_{\rho}$:
\begin{align}
\nonumber
\Delta^{2}_{\rho} \approx&\, C_{\infty}\,S_{\rho}\,\frac{|\deltarhonoabs|_{\rm int}}{|\delta \ln{\rho}(\lambda)|^{2}}\,
e^{-2\,|\deltarhonoabs|_{\rm int}}\,{\Bigl(}e^{|\deltarhonoabs|_{\rm int}} -1 {\Bigr)}\, \\
\nonumber
&
\times{\Bigl [} 2\,|\deltarhonoabs|_{\rm int}\,{\Bigl(}e^{|\deltarhonoabs|_{\rm int}} -1 + |\delta \ln{\rho}(\lambda)| {\Bigr)} \\
& - 2\,|\deltarhonoabs|_{\rm int}^{2}
- |\delta \ln{\rho}(\lambda)|\,{\Bigl(}e^{|\deltarhonoabs|_{\rm int}} -1 {\Bigr)}
{\Bigr]}
\end{align}
But the limits are easily understood: if $|\deltarhonoabs|_{\rm int}\approx|\delta \ln{\rho}(\lambda)|$, $\Delta^{2}_{\rho}\rightarrow C_{\infty}\,[1 - \exp{(-|\deltarhonoabs|)}]^{2}\,S_{\rho}$: this just reflects the scaling from the ``number of structures.'' If $|\deltarhonoabs|\ll1$, this further becomes $\Delta^{2}_{\rho}\sim C_{\infty}\,|\deltarhonoabs|^{2} = \Delta^{2}_{\ln{\rho}}$, since for small fluctuations the linear and logarithmic descriptions are identical. If $|\deltarhonoabs|\gtrsim1$ is large, $\Delta^{2}_{\rho}\sim C_{\infty}\,\exp{(\Delta N_{\rm int})} \sim C_{\infty}\,(\lambda/\scalevar_{\rm max})^{-C_{\infty}}$ is a power-law, whose scaling (slope $C_{\infty}\sim2$) only depends on geometric scaling of the ``number of structures'' (fractal dimension occupied by vortices).
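In fact, for an exactly log-Poisson $\ln\rho\grain$, Eq.~\ref{eqn:Srho.approx} is exact, since $\langle\rho\grain^{2}\rangle/\langle\rho\grain\rangle^{2} = \exp{[2\,\epsilon_{0} + \Delta N\,(e^{-2\,|\deltarhonoabs|}-1)]} = \exp{[\Delta N\,(1-e^{-|\deltarhonoabs|})^{2}]}$. A direct Monte-Carlo check (an illustrative sketch, arbitrary parameter values):

```python
import math, random

# For exactly log-Poisson ln(rho) (ln rho = eps0 - m*d, m ~ Poisson(dN)),
# Eq. (Srho.approx) is exact: <rho^2>/<rho>^2 - 1 = exp{dN*(1 - e^-d)^2} - 1.
rng = random.Random(3)

def poisson(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_S_rho(dN, d, n=300000):
    eps0 = dN * (1.0 - math.exp(-d))
    s = s2 = 0.0
    for _ in range(n):
        rho = math.exp(eps0 - poisson(dN)*d)
        s += rho
        s2 += rho*rho
    mean = s / n
    return s2/n/mean**2 - 1.0

dN, d = 2.0, 0.4
S_pred = math.exp(dN*(1.0 - math.exp(-d))**2) - 1.0
assert abs(sample_S_rho(dN, d) - S_pred) < 0.02
```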
\vspace{-0.5cm}
\subsection{Correlation Functions}
The (isotropically averaged) autocorrelation function $\xi(r)$ is
\begin{align}
\xi(r) &\equiv \frac{1}{\langle\rho\grain\rangle^{2}}\,\langle(\rho\grain({\bf x})-\langle\rho\grain\rangle)\,(\rho\grain({\bf x}+{\bf r})-\langle\rho\grain\rangle)\rangle
\end{align}
equivalently, this is the excess probability of finding a number of grains in a volume element ${\rm d}V$ at a distance $r$ from a given particle (not a random point in space)\footnote{Since we assume a uniform grain population, we treat grain mass and number densities as equivalent.}
\begin{align}
\langle {\rm d}N_{\rm p}(r,\,r+{\rm d}r) \rangle &= \langle n_{\rm p} \rangle\,{\rm d}V\,[1 + \xi(r)]
\end{align}
$\xi(r)$ is directly related to the variance $\langle (\rho\grain[r]-\langle \rho\grain\rangle)^{2}\rangle$ of the linear density field $\rho\grain[r]$ averaged on the scale $r$, by
\begin{align}
\label{eqn:corrfn.r}
\frac{1}{V(r)}\int_{V(r)}\,\xi({\bf r}^{\prime})\,{\rm d}^{3}\,{\bf r^{\prime}} &= \frac{\langle (\rho\grain(r)-\langle\rho\grain\rangle)^{2} \rangle}{\langle \rho\grain \rangle^{2}} \equiv S_{\rho}
\end{align}
\citep{peebles:1993.cosmology.textbook}.\footnote{We assume the absolute number of grains is large so we can neglect Poisson fluctuations.} So the correlation function contains the same statistical information as the density power spectrum; and if we calculate $S_{\rho}$ above it is straightforward to determine $\xi(r)$ by Eq.~\ref{eqn:corrfn.r}.
Note that if $\xi(r)$ is a power-law, $S_{\rho}(r)\sim \xi(r)$. And if $|\deltarhonoabs|_{\rm int}\ll1$ ($\tau_{\rm s}\ll1$), Eq.~\ref{eqn:Srho.approx} simply becomes $S_{\rho}\sim \Delta N_{\rm int}\,|\deltarhonoabs|_{\rm int}^{2}$. On the largest scales $t\eddy\gtrsim\Omega^{-1}$, $|\deltarhonoabs|\sim$\,constant so $\xi(r)$ rises weakly (with $\Delta N_{\rm int}$) with decreasing $r$ as $1+\xi(r) \propto \ln{(1/r)}$; approaching scales $t\eddy\sim t_{\rm s}$, $|\deltarhonoabs| \propto \tilde{\tau}_{\rm s}$ rises so $\xi(r)\propto \tilde{\tau}_{\rm s}^{2} \propto \lambda^{-2\,(1-\zeta_{1})}$ rises as a power law with a slope near unity; finally on small scales $t\eddy\lesssim t_{\rm s}$, $|\deltarhonoabs|$ falls rapidly, so $|\deltarhonoabs|_{\rm int}$ and $\Delta N_{\rm int}$ converge and $\xi(r)\rightarrow$\,constant.
\begin{figure}
\centering
\hspace{-0.2cm}
\plotonesize{gr_corrfn.pdf}{1.01}
\vspace{-0.5cm}
\caption{Radial grain correlation functions, for the same simulations in Fig.~\ref{fig:pwrspec}. The predictions agree well at $St\gg1$ and/or scales $\lambda\gg\lambda_{\nu}$, but the clustering is under-predicted at $\lambda\lesssim\lambda_{\nu}$ for $St\sim1$, owing to non-inertial range effects we do not include. The shallow dotted line shows the slope predicted for Markovian (pure random-field) fluctuations (the amplitude is below the range plotted) in the inertial range ($\lambda\gg\lambda_{\nu}$), following \citet{bec:2007.grain.clustering.markovian.flow}: uncorrelated/incoherent fluctuations lead to negligible clustering when $St\gg1$.
\label{fig:correlation.functions}}
\end{figure}
\vspace{-0.5cm}
\subsection{Maximum Grain Densities}
\label{sec:pred.rhomax}
Using the predicted grain density PDFs, we can predict the maximum grain densities that will arise under various conditions.
In Eq.~\ref{eqn:logpoisson.1}, note that there is, in fact, a maximum density, given by $m=0$ (so only the $\epsilon_{0}$ term accumulates) on all scales. This is approximately the density where the distributions in Figs.~\ref{fig:grain.rho.mri}-\ref{fig:grain.rho.jy} ``cut off,'' more steeply than a Gaussian. It is straightforward to estimate this using $\rho\grain(\scalevar_{\rm max})=\langle \rho\grain \rangle$, and taking $m=0$ on all scales:
\begin{align}
\label{eqn:rhomax}
\ln{{\Bigl(} \frac{\rho_{\rm p,\,max}[\lambda]}{\langle\rho\grain\rangle}{\Bigr)}} &= \int_{\lambda}^{\scalevar_{\rm max}} \epsilon_{0} \\
&=
\nonumber
C_{\infty}\,\int_{\lambda}^{\scalevar_{\rm max}}
{\Bigl[}1 - \exp{{(}-{|\deltarhonoabs|}{)}} {\Bigr]}
\,{\rm d}\ln{\lambda}
\end{align}
Trivially, we see
\begin{align}
\frac{{\rm d}\ln{\rho_{\rm p,\,max}}}{{\rm d}\ln{\lambda}} = -C_{\infty}\,{\Bigl[}1 - \exp{{(}-{|\delta \ln{\rho}(\lambda)|}{)}} {\Bigr]}
\end{align}
i.e.\ $\rho_{\rm p,\,max}$ behaves {\em locally} over some scale range in $\lambda$ as a power-law $\rho_{\rm p,\,max} \propto \lambda^{-\gamma}$ with slope $\gamma \equiv C_{\infty}\,[1 - \exp{(-|\deltarhonoabs|)}]$. When $|\deltarhonoabs|\ll1$ is small, $\gamma\sim C_{\infty}\,|\deltarhonoabs|$ is also small, so $\rho_{\rm p,\, max}$ grows slowly. For sufficiently large $|\deltarhonoabs|\gtrsim1$, $\gamma\sim C_{\infty}\sim2$ saturates at a value determined by the fractal co-dimension of vortices -- $\rho_{\rm p,\,max}$ grows rapidly with scale, in a power-law fashion with slope $\sim2$ determined by the density of structures in turbulence.
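The two limits of this local slope are trivial to check numerically (a sketch only, assuming the fiducial $C_{\infty}=2$):

```python
import math

# Local power-law slope of rho_p,max from the equation above:
#   gamma = C_inf * (1 - exp(-|dlnrho|)),
# checking the two limits quoted in the text (C_inf = 2 assumed).
def slope(dlnrho, C_inf=2.0):
    return C_inf * (1.0 - math.exp(-dlnrho))

assert abs(slope(0.01) - 2.0*0.01) < 1e-3   # gamma ~ C_inf*|dlnrho| when small
assert abs(slope(10.0) - 2.0) < 1e-3        # saturation at gamma -> C_inf ~ 2
```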
\vspace{-0.5cm}
\section{Comparison with Simulations and Experiments}
Going forward, unless otherwise specified we will assume the theoretically preferred $C_{\infty}=2$ and $N_{\rm d}=2$. The values $\tilde{\rho}$ and $\tau_{\rm s}$ are necessarily specified for each experiment. With these values, we only need two additional parameters to completely determine our model predictions. One is the ratio of eddy turnover time to stopping time on the largest scales $\tilde{\tau}_{\rm s}(\scalevar_{\rm max})$ (or equivalently, the ratio $t\eddy(\scalevar_{\rm max})/\Omega^{-1}$); the other is the ratio of turbulent velocity to mean drift velocity $\beta \equiv |v\eddy(\scalevar_{\rm max})|/|v_{\rm drift}|$ (or equivalently the disk parameters $\alpha^{1/2}/\Pi$). These are properties of the gas turbulence and mean flow, so in some cases they are pre-specified, but in other cases they are determined in a more complicated manner by other forces. To compare to simulations, we also (in some cases) need to account for their finite resolution, i.e.\ the minimum $\lambda/\scalevar_{\rm max}$ or effective Reynolds number.
\vspace{-0.5cm}
\subsection{Density PDFs}
\subsubsection{Externally Driven MRI Turbulence}
\label{sec:pred.rhodist.mri}
First consider the simulations in \citet{dittrich:2013.grain.clustering.mri.disk.sims}. These solve the equations of motion for the coupled gas-grain system, in full magnetohydrodynamics (MHD), for a grain population with a single stopping time $t_{\rm s}$. The simulations are performed in a three-dimensional, vertically stratified shearing box in a Keplerian potential, and there is a well-defined local $\Omega$, $\eta$, $\Pi=0.05$. The simulations develop the magnetorotational instability (MRI), which produces a nearly constant $\alpha\approx 0.004$ (in our units defined here) in the dust layer; the back-reaction of grains on gas is ignored, so we can take $\tilde{\rho}\rightarrow0$. The authors record a grain density PDF for $\tau_{\rm s}=1$ with large $\rho\grain$ fluctuations arising as the MRI develops (their Fig.~11), to which we compare in Fig.~\ref{fig:grain.rho.mri}. Since the disk is vertically stratified, $\langle \rho\grain \rangle$ depends on vertical scale height, so we would (ideally) compare our predictions separately in each vertical layer (though they are most appropriate for the disk midplane). Lacking this information, we instead compare to the local surface density of grains relative to the mean grain surface density $\Sigma_{\rm p}/\langle \Sigma_{\rm p} \rangle$, which is independent of stratification and (as shown therein) closely reflects the distribution of mid-plane grain densities.
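As a quick consistency check on the parameters just quoted (a back-of-the-envelope evaluation of the disk-parameter relation $\beta\sim\alpha^{1/2}/\Pi$ defined earlier; the numbers are simply the stated simulation values):

```python
import math

# Parameters quoted for these MRI simulations: alpha ~ 0.004 in the dust layer, Pi = 0.05
alpha, Pi = 0.004, 0.05

# beta = |v_eddy(lambda_max)|/|v_drift| ~ alpha^{1/2}/Pi (the disk-parameter form)
beta = math.sqrt(alpha) / Pi
# beta ~ 1.26: turbulent velocities at the top of the cascade are comparable to
# (slightly exceed) the mean drift velocity in this setup.
```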
First we compare our exact prediction (computed via the Monte-Carlo method in \S~\ref{sec:pred.rhodist}). Our natural expectation $C_{\infty}=N_{\rm d}=2$ gives a remarkably accurate prediction of the simulation results! In fact, freeing $N_{\rm d}$ and fitting to the data does not significantly improve the agreement (best-fit $N_{\rm d}\approx1.9\pm0.1$). We also compare with our closed-form analytic approximation to the integrated density distribution, from Table~\ref{tbl:largescale}. This gives a very similar result, indicating that for this case (modest-resolution simulations, so densities are not averaged on extremely small scales, and large $\tau_{\rm s}=1$), the large-scale approximation is good.
In \S~\ref{sec:model.hierarchy}, we adopt the simplest assumption for the effects of an eddy (multiplication by $\langle \delta \ln{\rho} \rangle$). As noted there, one might extend this model by instead adopting a distribution of multipliers, with characteristic magnitude $\delta \ln{\rho}$. Here we consider one such example. For each ``event'' in $m$ in the log-Poisson hierarchy, instead of taking $\ln{\rho\grain}\rightarrow \ln{\rho\grain} + \langle \delta \ln{\rho} \rangle$, assume the ``multiplier'' is drawn from a Gaussian distribution, so $\ln{\rho\grain}\rightarrow\ln{(\rho\grain\,[1 + \mathcal{R}])}$ where $\mathcal{R}$ is a Gaussian random variable with dispersion $\langle \mathcal{R}^{2} \rangle^{1/2}=\langle \delta \ln{\rho} \rangle$. This is a somewhat arbitrary choice, but illustrative and motivated by Gaussian-like distributions in eddy velocities and lifetimes (and it has the advantage of continuously extending the predictions to all finite $\rho\grain$, while producing the same change in variance as our fiducial model over small steps in $\delta \ln{\rho}$);\footnote{Note that we do have to enforce a truncation where $\mathcal{R}>-1$ to prevent an unphysical negative density. However because the $|\deltarhonoabs|$ along individual ``steps'' is small, this has only a small effect on the predictions.} we could instead adopt a $\beta$-model as in \citet{hogan:2007.grain.clustering.cascade.model}, but it would require additional parameters. Here, however, we see that this makes little difference to the predicted PDF. This demonstrates that the variance predicted in $\rho\grain$ is dominated by the variance in the local turbulent field (the ``number of structures'' on different scales), and by the scales on which those structures appear -- not by the variance inherent to an individual structure on a specific scale.
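Both variants of the hierarchy are straightforward to realize by direct Monte-Carlo sampling. A minimal sketch follows (the values of $\Delta N_{\rm int}$ and $|\deltarhonoabs|_{\rm int}$ are illustrative rather than fitted, and the hard clipping of $\mathcal{R}$ is a crude stand-in for the truncation discussed in the footnote):

```python
import numpy as np

rng = np.random.default_rng(42)
DN, DELTA = 6.0, 0.4  # illustrative Delta N_int and |delta ln rho|_int (not fitted values)

def sample_lnrho(n, gaussian_multipliers=False):
    """Draw ln(rho_p/<rho_p>) from the log-Poisson hierarchy.

    Strict model: m ~ Poisson(DN) events, each multiplying rho by exp(-DELTA),
    with the mean-preserving offset DN*(1 - exp(-DELTA)).
    Variant: each event instead multiplies rho by (1 + R), R ~ Gaussian(0, DELTA),
    clipped above -1 to keep densities positive; renormalized so <rho> = 1."""
    m = rng.poisson(DN, size=n)
    if not gaussian_multipliers:
        return DN * (1.0 - np.exp(-DELTA)) - m * DELTA
    lnrho = np.array([np.sum(np.log1p(np.clip(rng.normal(0.0, DELTA, k), -0.95, None)))
                      for k in m])
    return lnrho - np.log(np.mean(np.exp(lnrho)))  # enforce <rho> = 1

# Both variants give nearly the same core PDF; the Gaussian-multiplier variant
# extends the tail smoothly past the strict cutoff at rho_p,max.
```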
\begin{figure}
\centering
\hspace{-0.2cm}
\plotonesize{gr_rhomax_teddy.pdf}{0.94}
\caption{Dependence of the maximum grain density in the $\tau_{\rm s}=1$ MRI simulations from Figs.~\ref{fig:grain.rho.mri} \&\ \ref{fig:rho.max} on the correlation time of the largest eddies in the simulation box ($t\eddy(\scalevar_{\rm max})$). Because of differences in the definition of correlation time, and the unknown vertical sedimentation, we treat the normalization of each axis as arbitrary: what matters here is the predicted trend. At fixed numerical resolution, $\rho_{\rm p,\,max}$ increases as $t\eddy(\scalevar_{\rm max})^{0.4-0.7}$ when $t\eddy(\scalevar_{\rm max})\,\Omega\lesssim1$, until saturating (with the $t\eddy(\scalevar_{\rm max})$-independent scalings in Table~\ref{tbl:largescale}) when $t\eddy(\scalevar_{\rm max})\,\Omega\gg1$.
\label{fig:rhomax.teddy}}
\end{figure}
\vspace{-0.5cm}
\subsubsection{Self-Driven (Streaming \&\ Kelvin-Helmholtz) Turbulence}
\label{sec:pred.rhodist.streaming}
Next, Fig.~\ref{fig:grain.rho.jy} repeats this comparison with a different set of simulations from \citet{johansen:2007.streaming.instab.sims} and \citet{bai:2010.grain.streaming.sims.test,bai:2010.streaming.instability.basics}. These are two- and three-dimensional non-MHD hydrodynamic shearing-box simulations, ignoring vertical gravity/stratification (so we can directly take the statistics of $\rho\grain/\langle \rho\grain \rangle$ as representative of our predictions). The simulations fix $\eta=0.005$, $\Pi=0.05$, $\Omega$, and $\tau_{\rm s}$ for monolithic grain populations, and have no external driving of turbulence. However, they do include the grain-gas back-reaction for $\tilde{\rho} = 0.2,\,1.0,\,3.0$, and so develop some turbulence naturally via a combination of streaming and Kelvin-Helmholtz-like shear instabilities, with $\alpha\sim10^{-8}-10^{-2}$ depending on the simulation properties (but recorded for each simulation therein). The two studies adopt entirely distinct numerical methods, so where possible we show the differences owing to numerics.
In nearly every case, we see good agreement with the analytic theory. This is especially true at smaller $\tau_{\rm s}$ and $\tilde{\rho}$; at $\tau_{\rm s}\gtrsim1$ and $\tilde{\rho}\gtrsim1$, our theory is less applicable but still performs reasonably well. In particular, the non-linear behaviors seen therein are all predicted: for example, the large difference between $\tilde{\rho}=0.2-1$ for $\tau_{\rm s}=0.1$ (but little change between $\tilde{\rho}=1-3$), and the much larger fluctuations seen for $\tilde{\rho}=0.2$ compared to $\tilde{\rho}=1-3$ for $\tau_{\rm s}=1$. The large increase over $\tilde{\rho}=0.2-1$ for $\tau_{\rm s}=0.1$ follows from $\tilde{\rho}$ increasing the ``effective'' stopping time as discussed in \S~\ref{sec:model.highrho}; the effect then saturates with increasing $\tilde{\rho}\gtrsim1$. However, much of the difference also owes to the different values of $\alpha$ in each simulation (the case $\tau_{\rm s}=1$, $\tilde{\rho}=0.2$ produces a very large $\alpha$, driving much of the very large variance).
We see again that adopting the pure log-Poisson model (mean $\langle \delta \ln{\rho} \rangle$) makes little difference compared to the log-Poisson-Gaussian model discussed in \S~\ref{sec:pred.rhodist.mri} above. However, we see more clearly here that allowing for additional variance in the effects of an individual eddy does, as one might expect, increase the variance in the high-$\rho\grain$ tail of the distribution, whereas a ``strict'' log-Poisson model (our default model) has an absolute cutoff at some $\rho_{\rm p,\,max}$. Interestingly, this gives a slightly better fit at high $\tau_{\rm s}$ and a poorer one at low $\tau_{\rm s}$, perhaps indicating the relative importance of in-eddy variance in the two cases.
The one case where we see relatively poor agreement is $\tau_{\rm s}=1$, $\tilde{\rho}=3$. Interestingly, if we simply take $\tau_{\rm s}=1$, $\tilde{\rho}=0$ here, the prediction agrees extremely well with the simulation. This suggests that our approximation $\tau_{\rm s}\rightarrow \tau_{\rm s,\,\rho}$ with $\tau_{\rm s,\,\rho}=\tau_{\rm s}\,(1+\tilde{\rho})$ may ``saturate'' when $\tau_{\rm s,\,\rho}\gg1$ and no longer apply (using simply $\tau_{\rm s}$ may be more accurate). This may be expected: when $t_{\rm s}\gg t\eddy$, the eddy has little chance to act on the grains, so the ``back-reaction'' in turn has only a partial effect on the eddy velocity (whereas our simple derivation in \S~\ref{sec:model.highrho} assumed they reached equilibrium).
We should also note that the finite simulation resolution limit is important here when $\tau_{\rm s}=0.1$ and $\tilde{\rho}\ll1$: we will quantify this below.
\vspace{-0.5cm}
\subsubsection{Turbulent Concentration}
\label{sec:pred.rhodist.turbconcentration}
We now compare the density PDFs measured in the ``turbulent concentration'' simulations of \citet{hogan:1999.turb.concentration.sims}; these follow a driven turbulent box (no shear or self-gravity), so we should apply the version of the model from \S~\ref{sec:encounters:pure.turb}. We expect the same $N_{\rm d}$ and $C_{\infty}$, and back-reaction is not included, so $\tilde{\rho}\rightarrow0$. We then need to know over what range to integrate the cascade: this is straightforward, since each simulation has a well-defined Reynolds number $Re = (\scalevar_{\rm max}/\lambda_{\nu})^{4/3}$. Lacking a model for the dissipation range, we simply truncate the power exponentially when $\lambda<\lambda_{\nu}$. The simulations follow particles with Stokes number $St\equiv t_{\rm s}/t\eddy(\lambda_{\nu})=1$.
Perhaps surprisingly, the model agrees fairly well with the simulations. At larger $Re$, the density PDF becomes broader, because of contributions to fluctuations over a wider range of scales (the response function in Fig.~\ref{fig:response} is broad). However, this does not grow indefinitely -- as $Re\rightarrow\infty$ we predict convergence to a finite PDF width (with $\rho_{\rm p,\,max}\sim300-1000$). This is both because the response function declines and because, as the ``top'' of the cascade grows in velocity scale, the residual (logarithmically growing) offset between grain and eddy velocities becomes larger (Eq.~\ref{eqn:g.timescale.function}), suppressing the added power.
Note, though, that our model is not designed for small $St$, and we see the effect here. The highest-density tail of the PDF is not fully reproduced (the model predictions, especially at $Re=765$, cut off more steeply). We show below that this is because grains with $St\sim1$ can continue to cluster and experience strong density fluctuations on very small scales $\lambda \lesssim \lambda_{\nu}$, which are not accounted for in our calculation. Caution is therefore warranted for particles whose key fluctuations lie outside the inertial range.
\vspace{-0.5cm}
\subsubsection{Experiments and Non-Gaussianity}
\label{sec:pred.rhodist.nongaussian}
In Fig.~\ref{fig:nongaussian}, we extend our comparisons to experimental data. There is a considerable experimental literature for $St\lesssim1$ particles in terrestrial turbulence (see \S~\ref{sec:intro}); unfortunately many of the measurements are either in regimes where our model does not apply or of quantities we cannot predict. However \citet{monchaux:2010.grain.concentration.experiments.voronoi} measure the density PDF in laboratory experiments of water droplets in wind tunnel turbulence,\footnote{Actually they measure the local Voronoi area around each particle: as noted therein, this is strictly equivalent to a local density PDF. We convert between the two as they do, taking the density to be the inverse area.} with $St\sim0.2-6$ and $Re\sim300-1000$ (Taylor $Re_{\lambda}=70-120$). They measure the PDF shape (normalized by its standard deviation) for a large number of experiments with different properties. The range of results, including time variation and variation across experiments, is shown in Fig.~\ref{fig:nongaussian}. We compare this with the predicted log-Poisson distribution: from \S~\ref{sec:pred.rhodist}, the variance in the log-Poisson is $S=\Delta N_{\rm int}\,|\deltarhonoabs|_{\rm int}^{2}$, so normalizing to fixed $\sigma=\sqrt{S}$, the PDF shape varies with the ratio $|\deltarhonoabs|_{\rm int}/\Delta N_{\rm int}$. Taking the predicted (modest) range in this parameter for the same range in simulation properties, we show the predicted PDF shapes. Within this range, the experimental PDF is consistent with the prediction.
The lower panel makes this more quantitative. For each PDF in \citet{monchaux:2010.grain.concentration.experiments.voronoi} we record the variance in linear $\rho\grain$ ($S_{\rho}$) and logarithmic $\ln{\rho\grain}$ ($S_{\ln{\rho\grain}}$). This scaling for different distributions is discussed in detail in \citet[][see Fig.~4 in particular]{hopkins:2012.intermittent.turb.density.pdfs}. If the distribution of $\rho\grain$ were exactly log-normal, then there would be a one-to-one relation between the two: $S_{\ln{\rho\grain}} = \ln{(1 + S_{\rho})}$. This appears to form an ``upper envelope'' to the experiments. If $\rho\grain$ were distributed as a Gaussian in linear $\rho\grain$, there would also be a one-to-one relation (straightforward to compute numerically); this predicts relatively small $\ln{\rho\grain}$ variation, in conflict with the experiments. For the log-Poisson distribution, the relation depends on the second parameter $|\deltarhonoabs|_{\rm int}/\Delta N_{\rm int}$. As this $\rightarrow0$, the distribution becomes log-normal; for finite values, $S_{\rho}$ is smaller than would be predicted for a log-normal with the same $S_{\ln{\rho}}$; we compare the range predicted for plausible values of $|\deltarhonoabs|_{\rm int}$ in these experiments (similar parameters to the simulations in Fig.~\ref{fig:correlation.functions}).
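The ``envelope'' behavior is easy to verify by sampling the log-Poisson distribution directly; the following sketch uses illustrative parameter values rather than the full model:

```python
import numpy as np

rng = np.random.default_rng(0)

def logpoisson_S(dN, delta, n=400000):
    """Sample ln(rho/<rho>) = dN*(1-exp(-delta)) - m*delta, with m ~ Poisson(dN),
    and return the linear and logarithmic variances (S_rho, S_lnrho)."""
    m = rng.poisson(dN, n)
    lnrho = dN * (1.0 - np.exp(-delta)) - m * delta
    return np.var(np.exp(lnrho)), np.var(lnrho)

# Lognormal envelope: S_lnrho = ln(1 + S_rho).  For the log-Poisson at finite
# delta/dN, S_rho falls below the lognormal value at the same S_lnrho:
S_rho, S_ln = logpoisson_S(4.0, 0.5)    # S_lnrho = dN*delta^2 = 1 (illustrative values)
S_rho_lognormal = np.exp(S_ln) - 1.0    # what a lognormal with the same S_lnrho would give
# Here S_rho ~ 0.86 < S_rho_lognormal ~ 1.7, i.e. below the lognormal envelope.
```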
We could extend this comparison by including the numerical simulations at higher $t_{\rm s}$ (and including gravity and shear). But the agreement with our predictions is already discussed, and it is evident by eye that the distributions in Fig.~\ref{fig:grain.rho.jy} are not exactly log-normal (they are asymmetric in log-space about the median), nor can they be strictly Gaussian in linear $\rho$ (which for such large positive fluctuations would require negative densities). While it is less obvious by eye, the non-trivial fractal spectrum in the turbulent concentration experiments, discussed at length in \citet{cuzzi:2001.grain.concentration.chondrules}, also requires non-Gaussian PDFs.
\vspace{-0.5cm}
\subsection{Density Power Spectra}
Direct measurements of the power spectrum of $\ln{\rho}$ are not available for the simulations we examine here. However, \citet{johansen:2007.streaming.instab.sims} do measure the average one-dimensional power spectra of the linear grain volume density $\rho\grain$.\footnote{We exclude their simulation ``AA'' which uses the two-fluid approximation, which the authors note cannot capture the full gas-grain cascade and so predicts an artificially steep power spectrum.} We can compute this as described in \S~\ref{sec:pred.pwrspec}, and show the results in Fig.~\ref{fig:pwrspec}.\footnote{To match what was done in that paper precisely, we calculate the volumetric density PDF and corresponding variance in linear $\rho$ ($S_{\rho}$) explicitly for each scale, use this to obtain the isotropic power spectrum, then use this to realize the density distribution repeatedly on a grid matching the simulations and compute the discrete Fourier transform, and finally plot the mean absolute magnitude of the coefficients as a function of $k$.} Down to the simulation resolution limit (where the simulated power is artificially suppressed) we see good agreement. Because the power spectrum here is in linear $\rho\grain$, saturation effects dilute the clarity of the predicted transition near $t\eddy=t_{\rm s}$, but it is still apparent. Moreover we confirm the qualitative prediction that for large $\tau_{\rm s}\gtrsim0.1$, most of the power is on relatively large scales where $t\eddy\gtrsim t_{\rm cross}$ and $t_{\rm s}$. With smaller $t_{\rm s}$, there is a larger dynamic range on large scales where $t\eddy\gg t_{\rm s}$, over which the power spectrum is flatter. For very small grains, the power would become more concentrated near $t\eddy\sim t_{\rm s}$, as in Fig.~\ref{fig:response}.
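The grid-realization procedure described in the footnote can be sketched as follows; here a Gaussian random field in $\ln\rho$ with an assumed power-law spectrum stands in for the model's scale-by-scale variances, so this illustrates only the mechanics, not the actual prediction:

```python
import numpy as np

rng = np.random.default_rng(1)

def realize_and_measure(n=64, slope=-2.0, sigma_lnrho=0.5):
    """Realize ln(rho) as a random field with isotropic power ~ k^slope on an n^3
    grid, exponentiate, and measure the shell-averaged spectrum |rho_k|(k).
    (A lognormal stand-in for the log-Poisson field; illustrative only.)"""
    k1 = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    amp = np.where(kk > 0, np.maximum(kk, 1e-10) ** (slope / 2.0), 0.0)
    modes = amp * (rng.normal(size=kk.shape) + 1j * rng.normal(size=kk.shape))
    lnrho = np.real(np.fft.ifftn(modes))          # real part enforces a real field
    lnrho *= sigma_lnrho / lnrho.std()            # pick a modest log-variance
    rho = np.exp(lnrho) / np.mean(np.exp(lnrho))  # normalize so <rho> = 1
    rhok = np.abs(np.fft.fftn(rho))
    kbins = np.arange(1, n // 2)
    pk = np.array([rhok[(kk >= kb) & (kk < kb + 1)].mean() for kb in kbins])
    return kbins, pk
```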
Freeing $C_{\infty}$ and $N_{\rm d}$, the simulations with $\tau_{\rm s}=1$ and $\tilde{\rho}\gtrsim1$ favor a slightly steeper slope than predicted in the default model, which would imply a larger $\zeta_{1}(C_{\infty})\approx0.5$ (lower $C_{\infty}$) instead of our default $\zeta_{1}\sim1/3$, but this is not very significant at present (and no significant change in $N_{\rm d}$ is favored). If confirmed, though, this would imply that the gas turbulence in this regime behaves more like compressible, super-sonic turbulence \citep[see][]{boldyrev:2002.structfn.model}; this might be expected when $\tau_{\rm s}$ and $\tilde{\rho}$ are large, since the dominant grains can efficiently compress the gas.
We also compare the power spectra from turbulent concentration simulations. This is the same information as contained in the correlation function, converted via Eq.~\ref{eqn:corrfn.r}, so we discuss it below.
\vspace{-0.5cm}
\subsection{Grain Correlation Functions}
In Fig.~\ref{fig:correlation.functions}, we compare our predictions to published grain correlation functions $\xi$ in turbulent concentration experiments. The same information, represented as the linear density power spectrum, is in Fig.~\ref{fig:pwrspec}. Here we compare the simulations from \citet{pan:2011.grain.clustering.midstokes.sims} and \citet{yoshimoto:2007.grain.clustering.selfsimilar.inertial.range}, with the model appropriate for ``pure turbulence'' (see \S~\ref{sec:pred.rhodist.turbconcentration} above).\footnote{Perhaps because of different definitions, the normalizations for the correlation functions $\xi$, at identical Stokes and Reynolds number, disagree between the authors at the factor $\sim$few level. However the shape of $\xi$ in all cases agrees extremely well in both studies. So we treat the normalization as arbitrary at this level and focus on the shape comparison.} The authors each simulate a range of Stokes numbers; here we only compare with $St>1$ simulations since our model is largely inapplicable to $St\lesssim1$.
For large $St=43\gg1$, $\xi(\lambda)\sim$\,constant on small scales (there is no power here since $t\eddy\ll t_{\rm s}$). But there is significant power on larger scales, where $t\eddy\sim t_{\rm s}$ (for $St=43$, this is when $\lambda/\lambda_{\nu}\gtrsim100$). And $\xi(\lambda)$ in all cases truncates at the very largest scales because of the finite box size/driving scale $\scalevar_{\rm max}$. For smaller $St=10$, the rising portion of $\xi(\lambda)$ continues to smaller scales, since $t\eddy\sim t_{\rm s}$ at $\lambda/\lambda_{\nu}\sim40$. For still smaller $St=5$ this extends to $\lambda/\lambda_{\nu}\sim10$. These are all confirmed in the simulations. However, for the smallest $St\sim1$, $\xi(\lambda)$ (and hence the power in density fluctuations) continues to rise even at $\lambda\lesssim \lambda_{\nu}$, where $t\eddy\ll t_{\rm s}$. This is well-known, and in fact for $St\sim1$ a power-law rise in $\xi(\lambda)$ appears to continue to $\lambda\rightarrow0$, which does not occur when $St\gg1$ \citep[see][]{squires:1991.grain.concentration.experiments,bec:2007.grain.clustering.markovian.flow,yoshimoto:2007.grain.clustering.selfsimilar.inertial.range,pan:2011.grain.clustering.midstokes.sims}. This effect is fundamentally related to the dissipation range and viscous effects not included in our model, so we do not expect to capture it. And this is why we do not reproduce the full small-scale power in the density PDFs for $St=1$ in Fig.~\ref{fig:grain.rho.tc}.
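The characteristic scales quoted above follow directly from the Kolmogorov scaling $t\eddy(\lambda)\propto\lambda^{2/3}$; a quick (order-of-magnitude) numerical check:

```python
def scale_where_teddy_equals_ts(St):
    """Scale (in viscous-scale units) where t_eddy ~ t_s, assuming the Kolmogorov
    inertial-range scaling t_eddy(lambda) = t_eddy(lambda_nu)*(lambda/lambda_nu)^{2/3},
    with St = t_s / t_eddy(lambda_nu):  lambda/lambda_nu = St^{3/2}."""
    return St ** 1.5

# St = 43 -> ~280, St = 10 -> ~32, St = 5 -> ~11 viscous lengths: order-of-magnitude
# consistent with the scales where xi(lambda) begins to rise in the simulations.
```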
For large-scale eddies and grains with $\tau_{\rm s}\gtrsim0.1$, we can compare with the simulations in \citet{carballido:2008.grain.streaming.instab.sims}. The results are consistent, but owing to limited resolution the simulations only measure significant clustering in the few smallest bins/cells quoted (see their Fig.~9), so the constraint is not particularly useful (significantly more information is available in Fig.~\ref{fig:pwrspec}).
\vspace{-0.5cm}
\subsection{Maximum Grain Densities}
\subsubsection{Dependence on Stopping Time}
Fig.~\ref{fig:rho.max} compares these predictions for the maximum grain concentration to the maximum measured in the MRI-unstable simulations from \S~\ref{sec:pred.rhodist.mri} \citep{dittrich:2013.grain.clustering.mri.disk.sims}. Recall, here $\alpha$ and $\Pi$ are approximately constant in all cases, and grain-gas back-reaction is ignored ($\tilde{\rho}\rightarrow0$), so the only varied parameter is $\tau_{\rm s}$. In the range $\tau_{\rm s}\sim0.01-1$, our predictions are in remarkably good agreement with the simulations, with the maximum grain concentration increasing from tens of percent to factors $\gtrsim 300$, for the simple assumption $N_{\rm d}=2$. Only a small range $N_{\rm d}\sim1.8-2.2$ is allowed if we free this parameter. For $1<\tau_{\rm s}\lesssim5$, the predictions are also reasonably accurate. At very large $\tau_{\rm s} \gtrsim 10$, however, we appear to under-estimate the magnitude of fluctuations; though it also appears that there is some change in the vertical structure in these simulations relative to what is expected (discussed in \citealt{dittrich:2013.grain.clustering.mri.disk.sims}), so the large $\Sigma_{\rm p,\, max}$ may not entirely reflect midplane density fluctuations.
As noted above, it is important that we account for finite resolution here. We compare the predictions using our best estimate of $\scalevar_{\rm max}$ (the driving scale) relative to the finite resolution limit (a factor of $\sim100$ in scale), to the prediction assuming infinite resolution (and $Re\rightarrow\infty$), with density measured on infinitely small scales. For large grains, this makes little difference (most power is on large scales). For small grains $\tau_{\rm s}\ll1$, however, the difference is dramatic. Very small grains with $\tau_{\rm s}\sim0.01$ may still experience factor $\gtrsim100$ fluctuations on small scales. This should not be surprising: it is already evident in the turbulent concentration simulations, which exhibit such large fluctuations (even over limited Reynolds number, but covering the range where $t_{\rm s}\sim t\eddy$) despite effectively having $\tau_{\rm s}\rightarrow0$. These $\rho\grain$ fluctuations, for small grains, occur on scales where $t_{\rm s}\sim t\eddy$, and resolving their full dynamic range (obtaining convergence) requires resolving the broad peak in the response function (Fig.~\ref{fig:response}); crudely, we estimate that $0.05\,t_{\rm s}\lesssim t\eddy\lesssim20\,t_{\rm s}$ should be spanned. This translates, even in idealized simulations, to large Reynolds numbers $Re\gtrsim10^{4}-10^{5}$ (not surprising, since the simulations in Fig.~\ref{fig:grain.rho.tc} are not converged even at $Re\sim1000$). And for the simulations here, which have a fixed box size of order the dust layer scale height, this would require resolving a factor $\sim10^{6}$ below the largest eddy scales $\scalevar_{\rm max}$ (far beyond present capabilities).
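The quoted Reynolds-number requirement follows from the Kolmogorov relations $\lambda\propto t\eddy^{3/2}$ and $Re=(\scalevar_{\rm max}/\lambda_{\nu})^{4/3}$, so that spanning a window of eddy turnover times requires $Re\approx(t_{\rm max}/t_{\rm min})^{2}$:

```python
def required_reynolds(t_min_over_ts=0.05, t_max_over_ts=20.0):
    """Re needed for the inertial range to span eddy turnover times from t_min to
    t_max around t_s: lambda ~ t_eddy^{3/2} and Re = (lambda_max/lambda_nu)^{4/3}
    together give Re = (t_max/t_min)^{(3/2)*(4/3)} = (t_max/t_min)^2."""
    return (t_max_over_ts / t_min_over_ts) ** 2

# Spanning 0.05 t_s ... 20 t_s -> Re ~ 400^2 = 1.6e5, i.e. the Re ~ 1e4-1e5 quoted.
```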
Interestingly, we predict a ``partial'' convergence: because $|\deltarhonoabs|$ is non-monotonic in $\lambda$, the fluctuations on large scales can converge at reasonable resolution (factor $\sim2$ changes relative to the simulations here make little difference to the predicted curve). Only when the resolution is increased by the much larger factor described above does the additional power manifest. So, if the ``interesting'' fluctuations are those on large scales, such simulations, or the approximations in Table~\ref{tbl:largescale}, are reasonable.
\vspace{-0.5cm}
\subsubsection{Dependence on Scale}
\citet{johansen:2012.grain.clustering.with.particle.collisions} present the maximum density as a function of scale in streaming-instability simulations similar to those in \citet{johansen:2007.streaming.instab.sims}.\footnote{We specifically compare their simulations with no collisions and no grain self-gravity.} Given $\alpha$, $\Pi$, $\tilde{\rho}$ and $\tau_{\rm s}=0.3$ specified in the simulation, it is straightforward to predict $\rho_{\rm p,\,max}(\lambda)$ and compare to their result (using $\rho\grain(\scalevar_{\rm max}) = \langle \rho\grain \rangle$ at the dust scale height). As discussed in \S~\ref{sec:pred.rhomax} and in Table~\ref{tbl:largescale}, on large scales $\rho_{\rm p,\,max}\propto \lambda^{-\gamma}$ with $\gamma\approx C_{\infty}\,[1-\exp{(-|\delta_{0}|)}]$; for $\tau_{\rm s}=0.3$ and $\tilde{\rho}=0.25$, this gives $\gamma\approx1.5$, a power-law like scaling in excellent agreement with the simulations.
\vspace{-0.5cm}
\subsubsection{Dependence on Eddy Turnover Time}
In the simulations of \citet{dittrich:2013.grain.clustering.mri.disk.sims} from Fig.~\ref{fig:rho.max}, the authors also note that in a separate series of simulations with fixed $\tau_{\rm s}=1$, they see a significant dependence of the maximum $\rho\grain$ on the lifetime/coherence time of the largest eddies. They quantify this by comparing $\rho_{\rm p,\,max}$ to (twice) the correlation time of the longest-lived Fourier modes, which should be similar to our $t\eddy(\scalevar_{\rm max})$. As discussed in Appendix~\ref{sec:appendix:largescale}, on sufficiently large scales ($t\eddy\gg \Omega^{-1}$), $|\deltarhonoabs|$ and the fluctuation properties asymptote to values independent of $t\eddy$ (the scalings in Table~\ref{tbl:largescale}). However, for smaller $t\eddy(\scalevar_{\rm max})\,\Omega\lesssim1$, since the power for large grains $\tau_{\rm s}\sim1$ is concentrated on scales with $t\eddy\sim t_{\rm s}\sim \Omega^{-1}$, the integrated power will decline if the top scales only include smaller eddies. Given the asymptotic scaling of $\varpi\propto\tilde{\tau}_{\rm s}^{-1/2}$ for $\tilde{\tau}_{\rm s}\gg\tau_{\rm s}$ ($t\eddy\ll \Omega^{-1}$) and $\tilde{\tau}_{\rm s}\gg1$ ($t\eddy\ll t_{\rm s}$), we expect the power at the largest scales to scale $\propto \tilde{\tau}_{\rm s}(\scalevar_{\rm max})^{-1/2} \propto t\eddy(\scalevar_{\rm max})^{1/2}$. Performing the full calculation for a Reynolds number comparable to the simulations, we indeed predict a scaling $\rho_{\rm p,\,max}\propto (t\eddy(\scalevar_{\rm max})\,\Omega)^{0.4-0.7}$ (for a range $\tau_{\rm s}\sim0.5-2$), until saturation. Fig.~\ref{fig:rhomax.teddy} explicitly compares this to the scaling in those simulations; the agreement is good.
\vspace{-0.5cm}
\subsubsection{Effects of Simulation Dimension}
Note that, in \citet{johansen:2007.streaming.instab.sims} and \citet{bai:2010.grain.streaming.sims.test,bai:2010.streaming.instability.basics}, some significant differences are found between two-dimensional and three-dimensional simulations. To first order, these differences are accounted for in our model, not because of a fundamental change in the behavior of grains in response to eddies, but because of the different amplitudes of turbulence and/or vertical stratification in the simulations. In three dimensions, it appears more difficult for the streaming instability to generate large-$\alpha$ turbulence. That result itself is not part of our model; however, for the given $\alpha$ in the different simulations, our predictions agree with the different simulation PDFs.
\vspace{-0.5cm}
\subsubsection{Effects of Pressure Gradients and Metallicity}
Pressure gradients ($\Pi$) and metallicity ($Z$) enter our theory indirectly, by altering the values of the parameters $\tilde{\rho}$ and $\beta$, and -- under some circumstances, such as streaming-instability turbulence -- by altering the bulk turbulent properties (velocities/eddy turnover times). We have already derived the dependence of $\beta$ on $\alpha$, $\tilde{\rho}$, and $\Pi$. And we expect $\tilde{\rho} = (\Sigma_{\rm p}/\Sigma_{\rm gas})\,(h_{\rm gas}/h_{\rm p}) \approx Z\,(\tau_{\rm s}/\alpha)^{1/2}$ \citep[see][]{carballido:2006.grain.sedimentation,youdin:2007.turbulent.grain.stirring}. When other parameters are fixed (for example, if the turbulence is externally driven), it is straightforward to accommodate these parameter variations. However, the dependence of the bulk turbulent properties $\alpha$ and $t\eddy(\scalevar_{\rm max})$ on $Z$ and $\Pi$ (or other disk properties) requires some additional model for the driving and generation of turbulence. \citet{bai:2010.grain.streaming.vs.diskparams} consider a survey of these parameters, in the regime where the turbulence is driven by the streaming instability, and find that the dust layer scale height, turbulent dispersion $\alpha$, and largest eddy scales depend in a highly non-linear and non-monotonic fashion on $Z$ and $\Pi$. If we adopt some of the simple dimensional scalings they propose for these quantities, we qualitatively reproduce the same trends they see: $\rho_{\rm p,\,max}$ increases with both increasing $Z$ (increasing $\tilde{\rho}$) and decreasing $\Pi$ (weaker drift, larger $\beta$), but it is difficult to construct a quantitative comparison.
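The $\tilde{\rho}$ scaling above is simple to evaluate directly; the parameter values in this sketch are arbitrary illustrations, not taken from any of the simulations discussed:

```python
import math

def rho_tilde(Z, tau_s, alpha):
    """Midplane grain-to-gas density ratio from the sedimentation scaling
    rho_tilde ~ Z * (tau_s/alpha)^{1/2} quoted in the text."""
    return Z * math.sqrt(tau_s / alpha)

# e.g. Z = 0.01, tau_s = 0.3, alpha = 1e-4  ->  rho_tilde ~ 0.55:
# even solar-like metallicity can give order-unity midplane dust-to-gas
# ratios when the turbulence is weak.
```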
\vspace{-0.5cm}
\section{Discussion}
\subsection{Summary}
We propose an analytic theoretical model for the clustering of aerodynamic grains in turbulent media (with or without external shear and gravity). We show that this leads to unique, definite predictions for quantities such as the grain density distribution, density fluctuation power spectrum, maximum grain densities, and correlation functions, as a function of grain stopping/friction time, grain-to-gas volume density ratio, and properties of the turbulence. Our predictions are specifically appropriate for inertial-range turbulence, with large Reynolds numbers and Stokes numbers ($t_{\rm s}$ large compared to the eddy turnover time at the viscous scale), the regime of most astrophysical relevance. Within this range, we compare these predictions to numerical simulations and laboratory experiments, with a wide range in stopping times and turbulence properties, and show that they agree well.
The model fundamentally assumes that grain density fluctuations are dominated by coherent turbulent eddies, presumably in the form of simple vortices. Such eddies act to accelerate grains and preferentially disperse them away from the eddy center (concentrating grains in the interstices between eddies). Qualitatively, such behavior has been observed in a wide range of simulations and experiments (see \S~\ref{sec:intro}). Quantitatively, our model first adopts a simple calculation of the effects of a vortex with a given eddy turnover time (and lifetime of the same duration) acting on an initially homogeneous, isotropic Lagrangian grain population. We then attach this calculation to a simple but well-studied geometric (multi-fractal) model for the statistics of eddy structures on different scales.
\vspace{-0.5cm}
\subsection{Key Conclusions and Predictions}
\begin{itemize}
\item {\bf Large grain density fluctuations are expected even in incompressible turbulence:} We predict that even a small aerodynamic de-coupling between gas and grains allows for large (order-of-magnitude) fluctuations in $\rho\grain$, even while {\em gas} density fluctuations are negligible.
\item {\bf Grain density fluctuations do not explicitly depend on the {\em driving} mechanisms of turbulence:} Our predictions are universal and apply equally to simulations with turbulence arising via MRI, Kelvin-Helmholtz, and streaming instabilities, or artificially (numerically) driven. The fluctuations {\em do} depend on the stopping time $t_{\rm s}$, the ratio of volume-averaged grain-to-gas densities $\tilde{\rho}$, and some basic properties of the turbulence (the Reynolds number and velocity/length/time scale at the driving scale). These can change depending on the driving; however, for given values, the details of the driving are not predicted to have significant effects.
\item {\bf The grain density distribution $\rho\grain$ is log-Poisson:}
\begin{align}
P_{V}&(\ln{\rho\grain})\,{\rm d}\ln{\rho\grain} \approx \frac{\Delta N_{\rm int}^{m}\,\exp{(-\Delta N_{\rm int})}}{\Gamma(m+1)}\,\frac{{\rm d}\ln{\rho\grain}}{|\deltarhonoabs|_{\rm int}} \\
\nonumber
m &= |\deltarhonoabs|_{\rm int}^{-1}\,{\Bigl\{}\Delta N_{\rm int}\,{\Bigl[}1 - \exp{(-|\deltarhonoabs|_{\rm int})} {\Bigr]} - \ln{{\Bigl(} \frac{\rho\grain}{\langle \rho\grain \rangle}{\Bigr)}} {\Bigr\}}
\end{align}
We predict $\Delta N_{\rm int}$ and $|\deltarhonoabs|_{\rm int}$ as a function of turbulent properties. This arises because the number of singular coherent structures is quantized (Poisson), and each produces a multiplicative (logarithmic) effect on the grain density field.
Generically, we suggest that this can be used as a fitting function, where the best-fit value of $\Delta N_{\rm int} \sim C_{\infty}\,\ln{(\scalevar_{\rm max}/\lambda_{\rm min})}$ measures the dynamic range of the cascade over which density fluctuations occur, and the value $|\deltarhonoabs|_{\rm int}$ reflects the rms fluctuation amplitude ``per event'' in the turbulence.
\item {\bf On large scales ($t\eddy\gtrsim\Omega^{-1}$) shear/gravity dramatically enhances density fluctuations.}
The fluctuation ``response'' in eddies is approximately scale-free, with amplitude $|\deltarhonoabs| \sim 2\,N_{\rm d}\,(\tau_{\rm s} + \tau_{\rm s}^{-1})^{-1}$. The variance in $\ln{\rho\grain}$, and maximum values of $\rho\grain$, increase with $\tau_{\rm s}$ up to a maximum near $\tau_{\rm s}\sim1$ (where maximum $\rho\grain$ values can reach thousands of times the mean); much larger grains are too weakly coupled to experience fluctuations and behave in an approximately Poisson manner.
The maximum $\rho\grain(r)$ on a smoothing scale $r$ scales $\propto r^{-\gamma}$ with $\gamma\sim C_{\infty}\,[1-\exp{(-|\deltarhonoabs|)}]$; for small grains with $|\deltarhonoabs|\ll1$, $\gamma\sim2\,|\deltarhonoabs|$ is small, so the scale-dependence is shallow. For large grains with $|\deltarhonoabs|\gtrsim1$, $\gamma$ saturates at $\sim2$ (isothermal-like).
Most of the {\em power} in $\rho\grain$ fluctuations is on large scales for large grains, while for small grains the power spectrum is approximately flat over a range of scales down to a scale $\scalevar_{\rm crit}$ where the rms eddy-crossing time becomes shorter than the grain stopping time, below which power is suppressed.
\item {\bf On small scales ($t\eddy\ll \Omega^{-1}$), grain clustering depends only on the ratio $t_{\rm s}/t\eddy$.}
The fluctuation amplitude is maximized around $t_{\rm s}\sim t\eddy$, declining $\propto t_{\rm s}/t\eddy$ for $t_{\rm s}\ll t\eddy$ (where grains are ``flung out'' of vortices at speeds limited by their terminal velocity $\propto t_{\rm s}$) and $\propto (t_{\rm s}/t\eddy)^{-1/2}$ for $t_{\rm s}\gg t\eddy$ (where eddies cannot fully trap grains, so their effects add incoherently in a Brownian random walk).
Integrated over a sufficiently broad cascade (Reynolds number $\rightarrow\infty$), this means that some eddies will always have $t_{\rm s}\sim t\eddy$, so the integrated density variance and maximum $\rho\grain$ always converge to values only weakly dependent on the absolute value of $t_{\rm s}$. The maximum $\rho\grain$ can reach several hundred times the mean grain density, even in the limit $\rho\grain\ll\rho_{\rm g}$ and $\tau_{\rm s}\ll1$. The variance is concentrated on small scales, however, and the ``resonance region'' of eddy turnover time is broad -- so resolving this in simulations or experiments requires resolved eddies at least over the range $0.05\,t_{\rm s} \lesssim t\eddy \lesssim 20\,t_{\rm s}$ (Reynolds numbers at least $\gtrsim10^{4}-10^{5}$).
The grain-grain correlation function $\xi(r)$ in this limit scales weakly on the largest scales ($\propto \ln{(1/r)}$), until approaching $t\eddy\sim t_{\rm s}$ where it rises as $\xi(r)\propto (t_{\rm s}/t\eddy)^{2} \propto r^{-2\,(1-\zeta_{1})}$ (a slope near unity), then converges (flattens to $\xi(r)\rightarrow$\,constant) below $t\eddy\lesssim t_{\rm s}$.
\item {\bf Stronger turbulence enhances clustering:} At {\em otherwise identical properties}, larger values of the Mach number, Reynolds number, or driving-scale eddy turnover time $t\eddy(\scalevar_{\rm max})\,\Omega$ give rise to a larger dynamic range of the cascade driving $\rho\grain$ fluctuations. Stronger turbulence may decrease $\langle \rho\grain \rangle$, so it is not necessarily the case that these lead to larger absolute maximum values of $\rho\grain$, but only stronger grain clumping. Conversely, larger {\em drift} (laminar relative grain-gas velocities) weakens the clustering, by suppressing the time grains interact with single eddies.
\item {\bf Higher grain-to-gas density ratios enhance clustering of small grains:} Up to a saturation level where $t_{\rm s}\,(1+\tilde{\rho}) \gtrsim1$, increasing the volume density of grains increases their effective stopping time by dragging gas in a local wake, leading to larger terminal velocities and eddy effects.
\item {\bf {\em Coherent} eddy structure is critical:} Our predictions rely fundamentally on locally coherent (albeit short-lived) structures in turbulence. We show that a purely Markovian (Gaussian random field) approximation does not produce fluctuations nearly as large (nor with the correct scaling). So in some sense inertial-range $\rho\grain$ fluctuations depend intrinsically on intermittency in the gas turbulence. This cannot be captured in models which treat density perturbations purely as a ``turbulent diffusion'' term or Brownian motion.
\end{itemize}
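As an illustration (not part of the derivation above), the log-Poisson PDF can be evaluated numerically; the values of $\Delta N_{\rm int}$ and $|\deltarhonoabs|_{\rm int}$ below are arbitrary placeholders for the quantities predicted from the turbulence properties. The snippet checks that the distribution is normalized and conserves the mean grain density:

```python
import numpy as np
from math import lgamma

def logpoisson_pdf(lnrho, dN, dd):
    """Log-Poisson PDF P_V(ln(rho/<rho>)) from the expression above.
    dN = Delta N_int (mean number of 'events'); dd = |dln rho|_int
    (log-density change per event). Uses lgamma for numerical stability."""
    m = (dN * (1.0 - np.exp(-dd)) - lnrho) / dd  # continuous Poisson index
    pdf = np.zeros_like(m)
    ok = m >= 0.0  # PDF vanishes above the maximum density (m < 0)
    logp = m[ok] * np.log(dN) - dN - np.array([lgamma(x + 1.0) for x in m[ok]])
    pdf[ok] = np.exp(logp) / dd
    return pdf

# Placeholder values; in the model these follow from the turbulence properties:
dN, dd = 10.0, 1.0
lnrho = np.linspace(-40.0, dN * (1.0 - np.exp(-dd)), 40001)
P = logpoisson_pdf(lnrho, dN, dd)
dx = lnrho[1] - lnrho[0]
norm = np.sum(P) * dx                      # ~1: the PDF is normalized
mean = np.sum(np.exp(lnrho) * P) * dx      # ~1: <rho_grain> is conserved
```

Fitting this form to a measured density PDF then recovers $\Delta N_{\rm int}$ and $|\deltarhonoabs|_{\rm int}$, as suggested above.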
\vspace{-0.5cm}
\subsection{Areas for Future Work}
This paper is intended as a first step to a model of grain clustering in inertial-range turbulence, and many aspects could be improved.
We fundamentally assume that grains have no effect on the character of the gas turbulence statistics (though they may drive that turbulence), which is probably not true when $\tilde{\rho}\gtrsim1$. And indeed, we see our predictions do not agree well with the simulations when $\tilde{\rho}\gg1$. Similarly, we appear to predict too rapid a turnover in grain clustering when $\tau_{\rm s}\gg1$. In these limits, it might be more accurate to begin from the statistics of a purely collisionless grain system, and treat the gas perturbatively (essentially the opposite of our approach).
We also simplify tremendously by only considering the mean effects of eddies with a given scale; but even at fixed eddy scale there should be a distribution of eddy structure, meaning that $\delta \ln{\rho}(\lambda)$ is not simply a number but itself a distribution. More detailed models could generalize the log-Poisson statistics to allow this. Such generalizations have been developed for the pure gas statistics \citep[see][]{castaing:1996.thermodynamic.turb.cascade,chainais:2006.inf.divisible.cascade.review}; however, experimental data has been largely unable to distinguish that case from the simplified model. We show one example of such a model, which suggests that the character of the grain density PDF at high-$\rho\grain$ may be able to distinguish such higher-order models. Other such models, for example the $\beta$-models proposed in \citet{hogan:1999.turb.concentration.sims}, may be more accurate still.
Our calculation ignores grain-grain collisions, which may significantly alter the statistics on the smallest scales in high-density regions \citep[see][]{johansen:2012.grain.clustering.with.particle.collisions}. And our scalings are derived for inertial-range turbulence; the case appropriate for small Stokes numbers where concentration occurs at/below the viscous scale is much more well-studied in the terrestrial turbulence literature (see \S~\ref{sec:intro}), and may be more relevant for the smallest grains. Thus on the smallest scales where collisions and/or viscous effects dominate, our predictions are expected to break down.
\vspace{-0.5cm}
\subsection{Implications}
The model here has a range of implications for many important astrophysical questions. An analytic model for grain clustering is particularly important in order to extrapolate to regimes which cannot easily be simulated (large Reynolds numbers and/or small scales). With an analytic description of the grain density power spectrum, it becomes straightforward to apply methods such as those in \citet{hopkins:2013.turb.planet.direct.collapse} to determine the mass and/or size spectra of grain aggregations meeting various ``interesting'' criteria (such as those aggregations which are self-gravitating).
Large grain clustering is of central importance to planetesimal formation. In grain overdensities reaching $\sim100-1000$ times the mean, local grain densities in proto-planetary disks can easily exceed gas densities, triggering additional processes such as the streaming instability. It is even possible that such large fluctuations can directly exceed the Roche density and promote gravitational collapse \citep[see][]{cuzzi:2008.chondrule.planetesimal.model.secular.sandpiles}. In future work, we will use the model here to investigate the conditions under which such collapse may be possible.
Grain-grain collisions in proto-planetary disks and the ISM depend sensitively on the small-scale clustering of grains, i.e.\ $\langle n^{2}(\lambda\rightarrow0)\rangle$, which we show can differ dramatically from a homogeneous medium, even for very small grains. Even simple clumping factors $\langle n^{2} \rangle / \langle n \rangle^{2}$ can be large ($\gg 1$). Thus grain clustering can make substantial differences to quantities such as grain collision rates and approach velocities.
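To make the clumping-factor statement concrete, here is a minimal illustration (the lognormal field and its variance are assumed purely for demonstration, not a prediction of the model): for a mean-preserving lognormal density field with variance $S$ in $\ln n$, the clumping factor is $\langle n^{2}\rangle/\langle n\rangle^{2} = e^{S}$, which is easily $\gg1$:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 2.0  # assumed variance of ln(n), for illustration only
# Mean-preserving lognormal: <n> = 1 requires <ln n> = -S/2.
n = np.exp(rng.normal(-0.5 * S, np.sqrt(S), size=2_000_000))
clumping = np.mean(n**2) / np.mean(n)**2
# Analytic expectation: e^S ~ 7.4, i.e. collision rates several times
# the homogeneous estimate at the same mean density.
```

Even this modest log-density variance boosts the collision rate well above the homogeneous value; the log-Poisson statistics derived above generally predict stronger, more intermittent small-scale clumping.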
Radiative transfer through the dusty ISM (and consequences such as emission, absorption, and cooling via dust) also depend on dust clumping. Depending on the geometry and details of the problem, this can even extend to extremely small-scale clustering properties within the dust, where inhomogeneities cannot be resolved in current simulations and may depend critically on dust clustering even independent of gas density fluctuations.
This model should be equally applicable to terrestrial turbulence, in the case of large Reynolds and Stokes numbers. We predict that even relatively large or heavy aerosols may undergo large number density fluctuations in inertial-range turbulence. We specifically provide a theoretical framework for the observations of preferential concentration of large-$St$ grains with amplitudes larger than those corresponding to pure random-phase models \citep{bec:2007.grain.clustering.markovian.flow}, with scale-dependent Stokes number $\tilde{\tau}_{\rm s}=\tau_{\rm s}/t\eddy(\lambda)$. Measurements of the clustering scales of these particles and their amplitudes can strongly constrain the role of intermittency in preferential concentration, its geometry/fractal structure, and the nature of the singular turbulent structures that drive large particle density fluctuations.
The intention here is to provide both a predictive model and a more general framework, in which to interpret simulations and experiments of grain clumping. We provide general fitting functions, which can be used in simulations to quantify important properties of turbulent fluctuations, such as the dynamic range of the cascade contributing to fluctuations and the magnitude of coherent ``events.'' They also provide a guideline for understanding on which scales simulations can resolve clumping, and to understand the regimes to which these results can be generalized.
\vspace{-0.7cm}
\begin{small}\section*{Acknowledgments}\end{small}
We thank Jessie Christiansen and Eugene Chiang for many helpful discussions during the development of this work. We also thank Andrew Youdin, Anders Johansen, and Jeff Cuzzi for several suggestions. Support for PFH was provided by NASA through Einstein Postdoctoral Fellowship Award Number PF1-120083 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060.\\
\vspace{-0.1cm}
\section{Conclusion}
\label{chap:conclusion}
\subsection{EfficientNet-B0}
An EfficientNet-B0 was trained on the \gls{cdc} dataset \cite{hung2019surface} without adding any additional augmentations.
The model achieved state-of-the-art results on the \gls{cdc} dataset with $96.95\%$ accuracy, $97.32\%$ precision, and $96.76\%$ recall.
The successful application of EfficientNet-B0 demonstrated that improvements of models on ImageNet can also improve performance in transfer learning. Furthermore, it confirmed the model's adequacy for training a binary honeycomb classifier.
\subsection{HiCIS and HiCC datasets}
The \gls{hicc} and \gls{hicis} datasets are published on GitHub for use in research: \url{https://github.com/jdkuhnke/HiC}. \Gls{hicc} contains binary classification datasets for honeycombs in concrete. \Gls{hicis} contains datasets for detecting honeycombs with bounding boxes and instance segmentation masks labeled in the \emph{MS COCO} format. These datasets provide a basis for further research into honeycomb detection. The raw images are also included.
While the instance segmentation masks were labeled in good faith, smaller honeycombs in large images with fractured honeycombs may not be labeled accurately enough for the $224\times224$ pixel patches. As a result, the \gls{hicc} dataset likely contains some incorrect class labels.
The \gls{hicis} dataset could be extended by adding more challenging honeycombs from the raw images supplied in the repository.
The labeling process could be eased by training an initial model and using an active learning approach. The model would continuously train on the data and create predictions for new unlabeled data, which, once verified, would be fed back into the model for additional training. Feng \acs{etal}\ \cite{feng2017deep} demonstrated that active learning could assist with labeling defect images.
\subsection{Differences in datasets obtained from real scenarios and scraped from the web}
Differences between honeycomb datasets scraped from the web and images obtained from real scenarios were explored.
Both experiments, \acs{ie} classification and detection, suggest that images scraped from the web and those obtained from the field differ substantially.
In the case of classifying patches, the EfficientNet-B0 trained on the \gls{hicc} \emph{web-s224-p224} dataset achieved significantly lower scores on the \emph{metis-s224-p224} test set, \acs{ie} 41.3\% \gls{ap} and 42.5\% \gls{ar}, compared to the model trained on \emph{metis-s112-p224}, which achieved 68.0\% \gls{ap} and 67.2\% \gls{ar}. Furthermore, the model trained on \emph{metis-s112-p224} achieved a 97.3\% \gls{ap} and 96.3\% \gls{ar} on the \emph{web-s224-p224} test set, close to the scores of the model trained on \emph{web-s224-p224}, which achieved 98.7\% \gls{ap} and 97.8\% \gls{ar}.
For the instance segmentation, the \gls{mrcnn} trained on any combination of \gls{hicis} training datasets performed better on the \emph{web} test set than on the \emph{metis} test set. This performance gap indicates a difference between the distributions of the two datasets.
In conclusion, the dataset scraped from the web does not represent the complete variance of honeycombs but only a limited selection. Moreover, the inclusion of web images did not necessarily improve the accuracy of the model on realistic images.
\subsection{Evaluation of instance segmentation vs. patch-based classification for honeycomb detection}
Models detecting honeycombs by classifying patches and by instance segmentation were trained on \gls{hicc} and \gls{hicis} respectively.
EfficientNet-B0 trained on \gls{hicc} \emph{metis-s112-p224} achieved the overall highest performance on \emph{metis-s112-p224} with 67.6\% \gls{ap}, 67.2\% \gls{ar}, 68.5\% precision, and 55.7\% recall. While these metrics are lower than in similar work on crack classification \cite{zhang2016road, cha2017deep, ozgenel2018performance, dorafshan2018comparison, feng2017deep}, the extent of the differences is expected, considering the size, quality, and complexity of the datasets.
Although this thesis applied \acs{gradCam} to its honeycomb classifier to assist in manual verification, the results also show that \acs{gradCam} can assist in segmenting positive honeycomb patches, potentially giving better localization masks. Fan \acs{etal}\ \cite{fan2018automatic} stated that in cases of a high ratio of negative to positive pixels, a model would be likely to learn to classify each pixel as negative. Therefore, the classification enabled by the trained EfficientNet-B0 model is essential for further research in this direction.
The \gls{mrcnn} trained on the \gls{hicis} \emph{web} and \emph{metis} datasets achieved a 12.4\% $AP_{IoU\geq50}$, 47.7\% precision, and 34.2\% recall on the \emph{metis} test set, as well as a 25.6\% $AP_{IoU\geq50}$, 64.9\% precision, and 42.1\% recall on the \emph{web} test set.
Although these metrics, especially the \glspl{ap}, are again lower than in similar work on crack detection \cite{murao2019concrete, yin2020deep}, following the same reasoning as for honeycomb classification, the extent of the differences is expected, considering the size, quality, and complexity of the datasets.
While comparing instance segmentation and patch-based classification is challenging, when assessed quantitatively and qualitatively, the differences between the methods were not significant enough to lead to an indisputable choice, although instance segmentation had a slight edge. Therefore, the decision between these problem types will depend on the context of a possible implementation in practice, particularly on which approach integrates better with active learning in a defect documentation system.
In conclusion, the user-friendly detection of honeycombs was addressed by instance segmentation and patch-based classification. The experiments confirmed that honeycombs can be recognized by \glspl{cnn}, although the small dataset still limits the performance.
\subsection{Outlook}
While this thesis developed an initial model for detecting honeycombs, its performance would not yet suffice in practical applications. Nevertheless, models trained on either the \gls{hicc} or \gls{hicis} datasets could be used in an active learning approach integrated into defect documentation systems, enabling future research into detecting construction defects, since the difficulty of obtaining labels will persist for the myriad of existing defect types.
However, to address all defect types, it will be necessary to include context information, \acs{eg} architectural plans or the \gls{bim} model, as research into image-based construction progress monitoring will enable \cite{rho2020automated,lei2019cnn,braun2020improving}.
In conclusion, \glspl{cnn} can detect honeycombs in concrete and will enable automated defect detection assisting humans in different degrees of automation until achieving satisfactory results without human verification.
\chapter{Introduction}
\section{Introduction}
\paragraph{}
Construction defects are costly for the economy.
The cost of defect elimination is between 2\% and 12.4\% of the total cost of construction \cite{lundkvist2014proactive} and much time and effort is required to inspect construction sites and document defects \cite{nguyen2015smart}.
Automating the inspection of construction projects would free up resources and may even enable more frequent inspections, leading to more efficient construction projects.
Progress in computer vision and machine learning may enable the complete automation of this process in the future.
Although deep learning is applied in many different fields, research into image-based defect detection using deep learning is still limited in the construction industry, despite the industry's large size; existing research focuses on security, progress, and productivity.
In contrast, there appear to be relatively few publications on methods utilized for object detection in quality assurance in construction. So far, the research into detecting defects has been mainly limited to defects occurring in the maintenance phase of infrastructure facilities such as roads, bridges, and sewer systems. \cite{xu2020computer}
This work focuses on the detection of honeycombs, which are large surface voids in concrete that often contain visible pebbles, as the lack of cement reveals the gravel. Honeycombs may expose reinforcements, \acs{ie} \glspl{rebar}, leading to erosion, and impair the water impermeability and static strength of the concrete.
\section{Related Work}
Most research into honeycomb detection uses a variety of sensor data, but not images.
For example, Ismail and Ong (2012) \cite{ismail2012honeycomb} used mode shapes to detect honeycombs in reinforced concrete beams. Vibration is induced into the concrete beam, and the displacement caused is measured at specific locations on the beams, describing the behavior of an object under dynamic load. Furthermore, V\"olker and Shokouhi \cite{volker2015clustering} developed a multi-sensor clustering-based method for honeycomb detection, using impact-echo, ultrasound, and ground-penetrating radar data.
To the author's best knowledge, the following is the only work using regular camera images and applying \gls{ml} for honeycomb detection; it is used as a baseline for this thesis.
Hung \acs{etal}\ \cite{hung2019surface} showed that \glspl{cnn} could classify concrete images with a precision of 93\% and recall of 93\% into honeycomb, crack, moss, blistering, and normal classes. When their dataset, \gls{cdc}, was published, it was limited in size and was scraped from the internet, introducing a bias, as demonstrated in Section \ref{sec:cdcDifferences}. Regarding honeycombs, for example, the images scraped from the internet are often explanatory illustrations and show the most easily identifiable honeycombs. Furthermore, the domain from which the pictures are drawn is limited to images of concrete, leading to the expectation of a high false positive rate when applied to realistic defect images. This cannot, therefore, prove the applicability of \glspl{cnn} for honeycomb detection for images of general defects as collected in a defect documentation system. Nevertheless, this demonstrates that \glspl{cnn} may be applicable for the detection of honeycombs in concrete, assuming the images depict only concrete.
In conclusion, research into defect detection focuses on maintenance defects and often uses images which are not created by potential users. In recent research, \Gls{cnn}-based approaches dominated the field, including classification and object detection. As a result, this work evaluates both approaches and explores differences between an existing dataset of honeycomb images and a realistic dataset of images taken by construction site inspectors.
\section{Methodology}
While multiple datasets of defects in concrete structures exist, they either focus on different defects or contain images scraped from the internet. Hung \acs{etal}\ \cite{hung2019surface} published the most relevant dataset used for honeycomb detection, but this was only published after data augmentation, increasing relabeling effort.
Two datasets are collected. First, \emph{Metis Systems AG} provided a set of honeycomb images. Second, similarly to Hung \acs{etal}\ \cite{hung2019surface}, a dataset was scraped from the internet, providing a baseline comparable to similar scraped datasets in research and enabling comparison with a realistic dataset.
\subsection{Data origins}
In the context of the research project \emph{SDaC}, \emph{Metis Systems AG} provided access to the defect images documented in their proprietary software \emph{überbau}. Inspectors document defects in \emph{überbau}, assigning each defect a title, optionally any number of images, and further attributes. These photos are often taken with a smartphone at a variety of camera angles, and lighting conditions may range from brightly sunlit to dark and lit only by a flash. The defects were accessed via an internal API and filtered for the keyword ``honeycomb'', resulting in a total of 780 images. These images were further manually classified into 191 \emph{honeycomb} and 539 \emph{other}. The honeycomb class contains images of easily identifiable honeycombs. In contrast, \emph{other} contains images of unrelated scenery as well as honeycombs that are too difficult to differentiate for initial research and would have required more resources.
To the author's knowledge, this dataset is the most extensive public collection of honeycombs in existence and is the only one consisting of images that are neither scraped from the internet nor taken especially for research.
Furthermore, all images are made public with permission of \emph{Metis Systems AG} and may be used for further research. While our definition for honeycombs may not fit that of other researchers, the raw images are also published and increase the number of publicly available photos of concrete structures with honeycombs, pores, etc., caused by errors during construction rather than deterioration.
The second dataset was obtained using a google image search similarly to Hung \acs{etal}\ \cite{hung2019surface}, who collected images for four classes. However, since only the class of honeycombs is relevant for this work, only the keyword phrases \emph{honeycomb concrete} and \emph{honeycomb on concrete surface} were used, and the number of downloaded images was increased from 50 to 100 compared to Hung \acs{etal}\ \cite{hung2019surface}. The first 100 images matching the keyword phrases were downloaded and images containing no honeycombs, those with watermarks obstructing the image, duplicates, and images of low quality were removed. The filtered images were then combined and duplicates were removed a second time resulting in 56 images depicting honeycombs.
However, the images scraped from the internet are primarily used as examples of honeycombs by different websites, introducing a bias towards very clear and large honeycombs. As a result, most images contain very noticeable pebble-like structures.
\subsection{Datasets}\label{sec:datasets}
Since honeycombs do not describe an object per se, determining their outline is a challenge in itself. In contrast, most pores have clear circular outlines, as highlighted by previous work \cite{liu2017image, zhu2008detecting, zhu2010machine, yoshitake2018image,nakabayash2020automatic}.
We use two approaches to labeling our data. First, we create instance segmentation masks. Secondly, we use simple classification labels.
We apply our labeling techniques to both our datasets.
\subsubsection{Honeycombs in concrete instance segmentation}\label{sec:hicis}
The conventional definition of a honeycomb is a surface void exceeding a certain diameter. This definition cannot be applied here, since estimating the size of surface voids is impossible due to the unknown scale of the images. Some works address this issue by defining a fixed area \cite{nakabayash2020automatic} or controlling the image capture process \cite{yoshitake2018image}, resulting in a fixed scale of the surfaces displayed in the images. However, our solution avoids these biases and enables detection at any scale.
We use the following definition as a labeling criterion: a honeycomb is a surface void in concrete with at least one partially visible pebble.
Finally, the \gls{hicis} datasets were created by labeling the images with instance segmentation masks according to the aforementioned honeycomb definition.
The datasets were split into train, validation, and test sets of 60\%, 20\%, and 20\%, respectively, creating the two datasets \emph{\gls{hicis} metis} and \emph{\gls{hicis} web} with three subsets each.
\subsubsection{Honeycombs in concrete classification}\label{sec:hicc}
In addition to our segmentation labels, we create several classification datasets. First, we use the \gls{cdc} dataset by Hung \acs{etal}\ \cite{hung2019surface}. The dataset was converted from multi-class to binary classification for honeycombs by sorting all non-honeycomb images into a single class. The resulting dataset is called \gls{cdc-bhc}.
Additionally, the classification datasets \gls{hicc} were created from our \gls{hicis} segmentation datasets.
Square patches of $224\times224$ pixels were generated from the \gls{hicis} dataset by cropping the images, then calculating the area of the instance segmentation mask in the cropped image and applying a threshold to derive binary classification labels.
The crop was moved over the image with a stride of either 112 or 224 pixels, creating two datasets for each \gls{hicis} dataset.
The datasets created with a stride of 112 pixels contain each pixel up to four times in different patches, while the others contain each pixel exactly once. Keeping the train, validation, and test splits from \gls{hicis} ensured that a specific honeycomb does not appear in different subsets. Table \ref{tab:overviewHiccDatasets} summarizes the datasets. The generated datasets originating from the \gls{hicis} dataset follow this naming convention: \emph{HiCC/\{origin\}-s\{stride\_size\}-p\{patch\_size\}}.
\begin{table}[H]
\footnotesize
\centering
\begin{tabular}{!{\extracolsep{4pt}}rrrrrrrr}
& & \multicolumn{2}{c}{\textbf{train}} & \multicolumn{2}{c}{\textbf{validation}} & \multicolumn{2}{c}{\textbf{test}}\\
\cline{3-4}\cline{5-6}\cline{7-8}
\textbf{origin} & \textbf{dataset name} & \textbf{true} & \textbf{false} & \textbf{true} & \textbf{false} & \textbf{true} & \textbf{false}\\
\toprule
\gls{cdc} \cite{hung2019surface} & \gls{cdc-bhc}-224 & 840 & 3360 & 0 & 0 & 210 & 840\\
\gls{hicis}-metis & \gls{hicc}-metis-s112-p224 & 10480 & 64359 & 3684 & 25014 & 4281 & 24700\\
\gls{hicis}-metis & \gls{hicc}-metis-s224-p224 & 2676 & 16976 & 936 & 6571 & 1080 & 6498 \\
\gls{hicis}-web & \gls{hicc}-web-s112-p224 & 573 & 823 & 156 & 28 & 132 & 20\\
\gls{hicis}-web & \gls{hicc}-web-s224-p224 & 161 & 231 & 48 & 8 & 44 & 5\\
\end{tabular}
\caption{Overview of classification datasets}\label{tab:overviewHiccDatasets}
\end{table}
The procedure generated a relatively large number of samples, more than three times as many as in the original \gls{cdc} dataset.
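The patch-generation procedure can be sketched as follows (a minimal illustration; function names and the exact area threshold are assumptions, not the thesis implementation):

```python
import numpy as np

def make_patches(image, mask, patch=224, stride=224, area_thresh=0.01):
    """Crop square patches from `image` and derive a binary honeycomb
    label from the fraction of segmentation-mask pixels in each crop.
    `area_thresh` is an assumed threshold; the thesis value may differ."""
    patches, labels = [], []
    h, w = mask.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            crop_mask = mask[y:y + patch, x:x + patch]
            frac = crop_mask.mean()  # fraction of honeycomb pixels in crop
            patches.append(image[y:y + patch, x:x + patch])
            labels.append(int(frac >= area_thresh))
    return np.array(patches), np.array(labels)
```

With `stride=112` each pixel appears in up to four patches; with `stride=224` exactly once, matching the two dataset variants described above.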
\subsection{Transfer-learning}
\Glspl{cnn} often require a large amount of data, and, even on modern systems, training time can span multiple days. Therefore, pre-trained models were used, reducing training time and improving the generalization of the models, as demonstrated by Özgenel and Sorguç \cite{ozgenel2018performance} for crack detection in concrete.
\subsubsection{Mask R-CNN with ResNet101 Backbone for Instance Segmentation}
The \gls{mrcnn} architecture is used for instance segmentation. \Gls{mrcnn} achieved state-of-the-art performance on COCO at the time of its publication by He \acs{etal}\ \cite{he2018mask} in 2017.
We used \emph{ResNet101} as a backbone.
A warmup phase starting from a learning rate of $5e-6$ and reaching $5e-3$ at the 100th iteration was used. The learning rate was then held constant until the 2000th iteration, after which it was halved every 250 iterations, except for the models trained only on \gls{hicis} \emph{metis}, for which halving started at the 1000th iteration. 512 regions of interest were generated per image. Since GPU memory was limited, images were resized to $1024 \times 1024$ pixels, and a batch size of 2 images per iteration was used. The models were trained for a total of 6000 iterations.
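The schedule described above can be sketched as a function of the iteration number (a sketch only; linear warmup is an assumption, the warmup endpoint is interpreted as iteration 100, and halving is taken to begin exactly at iteration 2000):

```python
def learning_rate(it, warmup_end=100, halve_from=2000, halve_every=250,
                  lr_start=5e-6, lr_base=5e-3):
    """Warmup, plateau, then step-halving schedule for Mask R-CNN training."""
    if it <= warmup_end:
        # linear warmup from lr_start to lr_base over the first iterations
        frac = it / warmup_end
        return lr_start + frac * (lr_base - lr_start)
    if it < halve_from:
        # plateau at the base learning rate
        return lr_base
    # halve once at halve_from and again every halve_every iterations
    n_halvings = 1 + (it - halve_from) // halve_every
    return lr_base / (2 ** n_halvings)
```

For the models trained only on \emph{metis}, `halve_from` would be set to 1000 instead.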
\subsubsection{EfficientNet for classification}\label{sec:enb-cdc}
The number of \gls{cnn}-based classification models developed since the architecture's inception in 1995 is vast \cite{li2021survey}.
Hung \acs{etal}\ \cite{hung2019surface} successfully used VGG19 \cite{simonyan2014very}, InceptionV3 \cite{szegedy2016rethinking}, and InceptionResNetV2 \cite{szegedy2017inception} to classify surface damages in concrete. EfficientNet architectures are prevalent because of their performance-to-parameter ratio \cite{tan2019efficientnet}.
EfficientNet-L2, the best model of the EfficientNet family, achieves 90.2\% top-1 accuracy on ImageNet.
Since Hung \acs{etal}\ \cite{hung2019surface} used an input image size of $227 \times 227$ pixels, the closest matching EfficientNet model, EfficientNet-B0, was used.
The application of transfer learning consisted of three stages.
First, only the output layer was trained. Second, in addition to the output layer, the last block of the EfficientNet-B0 was also trained to increase the number of trainable parameters without losing the low-level feature extractors in the early blocks. Third, the model was trained in its entirety. All training stages used Adam \cite{kingma2014adam} for optimization.
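The three stages can be summarized as a mapping from stage to the trainable parts of the network (a sketch; the block names are illustrative and do not correspond to the actual layer names of an EfficientNet implementation):

```python
ALL_BLOCKS = ("block1", "block2", "block3", "block4",
              "block5", "block6", "block7", "top")

def trainable_parts(stage, blocks=ALL_BLOCKS):
    """Which parts of EfficientNet-B0 are unfrozen in each transfer stage.

    Stage 1: only the output layer ("top").
    Stage 2: the output layer plus the last block, keeping the
             low-level feature extractors in the early blocks frozen.
    Stage 3: the entire network.
    """
    if stage == 1:
        return {"top"}
    if stage == 2:
        return {"block7", "top"}
    if stage == 3:
        return set(blocks)
    raise ValueError("stage must be 1, 2, or 3")
```

In a framework such as Keras, this mapping would translate into setting the `trainable` flag of the corresponding layers before compiling the model for each stage.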
To demonstrate the viability and effectiveness of EfficientNet-B0, the model was trained on the \gls{cdc} dataset by Hung \acs{etal}, without adding any additional augmentations since \gls{cdc} is already augmented. Table \ref{tab:resCdcTrainParams} displays the training parameters.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\textbf{stage} & \textbf{epoch} & \textbf{initial learning rate} & \textbf{beta1} & \textbf{beta2} \\
\toprule
1 & 1 & 1e-2 & 0.9 & 0.9 \\
2 & 1 & 1e-5 & 0.9 & 0.9 \\
3 & 8 & 1e-8 & 0.9 & 0.9 \\
\end{tabular}
\caption{Parameters for training EfficientNet-B0 using Adam on the CDC dataset}\label{tab:resCdcTrainParams}
\end{table}
Table \ref{tab:resHicTrainParams} describes the training stages, omitting the $\beta_1$ and $\beta_2$ values of the Adam optimizer since they are all set to $0.9$. In addition to randomly changing contrast, saturation, and brightness, the JPG quality of the input images was randomly set between 50 and 100.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}rcccccc}
& \multicolumn{2}{c}{\textbf{stage 1}} & \multicolumn{2}{c}{\textbf{stage 2}} & \multicolumn{2}{c}{\textbf{stage 3}} \\
\cline{2-3}\cline{4-5}\cline{6-7}
\textbf{training dataset} & \textbf{epochs} & \textbf{lr} & \textbf{epochs} & \textbf{lr} & \textbf{epochs} & \textbf{lr} \\
\toprule
\gls{cdc-bhc} & 1 & 1e-1 & 1 & 1e-4& 4& 1e-7 \\
\gls{hicc}-metis-s112-p224 & 1 & 1e-1 & 1& 1e-4& 4& 1e-7\\
\gls{hicc}-metis-s224-p224 & 1 & 1e-2 & 1& 1e-5 & 4 & 1e-8 \\
\gls{hicc}-web-s112-p224 & 1 & 1e-1 & 1 & 1e-4& 4& 1e-7 \\
\gls{hicc}-web-s224-p224 & 1 & 1e-2 & 1& 1e-5 & 4& 1e-8 \\
concat-s112-p224 & 1 & 1e-2 & 1 & 1e-5 & 1 & 1e-8 \\
concat-s224-p224 & 1 & 1e-2 & 1& 1e-5 & 4& 1e-8 \\
\end{tabular}
\caption{Parameters for training EfficientNet-B0 using Adam with $\beta_1=\beta_2=0.9$ on binary honeycomb datasets}\label{tab:resHicTrainParams}
\end{table}
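The per-image augmentation described above can be sketched as a parameter sampler (the factor ranges for contrast, saturation, and brightness are illustrative assumptions; only the JPG-quality range of 50 to 100 is stated in the text):

```python
import random

def sample_augmentation(rng):
    """Draw one set of augmentation parameters for a training image."""
    return {
        # factor ranges around 1.0 are illustrative assumptions
        "contrast": rng.uniform(0.8, 1.2),
        "saturation": rng.uniform(0.8, 1.2),
        "brightness": rng.uniform(0.8, 1.2),
        # JPG quality randomly set between 50 and 100, as described above
        "jpeg_quality": rng.randint(50, 100),
    }
```

The sampled parameters would then be applied to the image, e.g. via the corresponding image-processing operations of the training framework.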
\emph{Concat-s112-p224} denotes the combination of \emph{metis-s112-p224} and \emph{web-s112-p224}; analogously, \emph{concat-s224-p224} combines the non-sliding versions, meaning a stride of 224 pixels and a patch size of 224 pixels.
\section{Results and Discussion}
\label{chap:results}
\subsection{EfficientNet-B0 for Concrete Damage Classification}
EfficientNet-B0 achieved better performance than all models by Hung \acs{etal}\ \cite{hung2019surface}, thereby reaching state-of-the-art performance on the \gls{cdc} dataset.
After 10 training epochs, EfficientNet-B0 achieves an accuracy of 96.95\%, a precision of 97.32\%, and a recall of 96.76\% on the \gls{cdc} dataset. The precision and recall are higher for each class and on average than those of the previously best-performing model, InceptionResnetV2, as shown in Table \ref{tab:compareBlub}. Furthermore, EfficientNet-B0's accuracy of 96.29\% is statistically significantly higher than VGG19's 92.29\%, InceptionV3's 90.57\%, and InceptionResnetV2's 92.57\%.
In conclusion, these results demonstrate that EfficientNet-B0 should be able to reach satisfactory performance on realistic datasets if the images scraped from the web sufficiently resemble honeycombs.
\begin{table}[H]
\centering
\begin{tabular}{rcccc}
\textbf{class} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} &
\textbf{support} \\
\toprule
normal & 0.96 & 0.97 & 0.96 & 210 \\
cracked & 0.95 & 0.97 & 0.96 & 210 \\
blistering & 0.98 & 0.96 & 0.97 & 210 \\
honeycomb & 0.96 & 0.98 & 0.97 & 210 \\
moss & 1.00 & 0.97 & 0.98 & 210 \\
average & 0.97 & 0.97 & 0.97 & 1050 \\
\end{tabular}\\
$(a)$ Our finetuned EfficientNet-B0
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{rcccc}
\textbf{class} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} &
\textbf{support} \\
\toprule
normal & 0.91 & 0.93 & 0.92 & 210 \\
cracked & 0.90 & 0.90 & 0.90 & 210 \\
blistering & 0.94 & 0.91 & 0.93 & 210 \\
honeycomb & 0.89 & 0.97 & 0.93 & 210 \\
moss & 0.99 & 0.91 & 0.95 & 210 \\
average & 0.93 & 0.93 & 0.93 & 1050 \\
\end{tabular}
\\
$(b)$ InceptionResnetV2 by Hung \acs{etal}\ \cite{hung2019surface} \\
\caption{Comparison of performance on the CDC dataset of our finetuned EfficientNet-B0 vs. InceptionResnetV2 by Hung et al. \cite{hung2019surface}}\label{tab:compareBlub}
\end{table}
\subsection{Classification Results}\label{sec:resClassificaiton}
The training of EfficientNet-B0 showed clear differences between the datasets in \gls{hicc}.
All models performed best on their own test sets in terms of \gls{ap} and \gls{ar}.
Each model achieved high performance on the \gls{hicc} \emph{web} datasets.
Table \ref{tab:enb0Metrics} displays the metrics of each model on the different test sets.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}rccccr}
\textbf{test set} & \textbf{precision} & \textbf{recall} & \textbf{ap} & \textbf{ar} & \textbf{support} \\
\toprule
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{cdc-bhc}} \\
\cline{2-6}
cdc-bhc & \textbf{0.980} & \textbf{0.943} & \textbf{0.927} & \textbf{0.972} & 210 \\
web-s112-p224 & 0.974 & 0.848 & 0.980 & 0.970 & 132 \\
web-s224-p224 & \textbf{1.000} & 0.818 & 0.981 & 0.973 & 44 \\
metis-s112-p224 & 0.238 & 0.242 & 0.189 & 0.220 & 4281 \\
metis-s224-p224 & 0.226 & 0.235 & 0.181 & 0.212 & 1080 \\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{web-s224-p224}} \\
\cline{2-6}
cdc-bhc & 0.286 & 0.676 & 0.490 & 0.478 & 210\\
web-s112-p224 & 0.991 & 0.879 & \textbf{0.987} & \textbf{0.978} & 132\\
web-s224-p224 & \textbf{1.000} & 0.841 & \textbf{0.992} & \textbf{0.994} & 44\\
metis-s112-p224 & 0.432 & 0.497 & 0.416 & 0.430 & 4281\\
metis-s224-p224 & 0.419 & 0.490 & 0.413 & 0.425 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{web-s112-p224}} \\
\cline{2-6}
cdc-bhc & 0.295 & 0.738 & 0.347 & 0.389 & 210\\
web-s112-p224 & 0.959 & \textbf{0.886} & 0.977 & 0.907 & 132\\
web-s224-p224 & \textbf{1.000} & \textbf{0.864} & 0.991 & \textbf{0.994} & 44\\
metis-s112-p224 & 0.416 & 0.404 & 0.329 & 0.342 & 4281\\
metis-s224-p224 & 0.406 & 0.400 & 0.320 & 0.341 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{metis-s224-p224}}\\
\cline{2-6}
cdc-bhc & 0.577 & 0.448 & 0.554 & 0.550 & 210\\
web-s112-p224 & \textbf{1.000} & 0.689 & 0.984 & 0.974 & 132\\
web-s224-p224 & \textbf{1.000} & 0.705 & 0.991 & 0.990 & 44\\
metis-s112-p224 & 0.621 & \textbf{0.611} & 0.640 & 0.635 & 4281\\
metis-s224-p224 & 0.617 & \textbf{0.615} & 0.623 & 0.623 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{metis-s112-p224}} \\
\cline{2-6}
cdc-bhc & 0.644 & 0.319 & 0.444 & 0.460 & 210\\
web-s112-p224 & 0.988 & 0.614 & 0.962 & 0.919 & 132\\
web-s224-p224 & \textbf{1.000} & 0.591 & 0.973 & 0.963 & 44\\
metis-s112-p224 & \textbf{0.696} & 0.568 & \textbf{0.680} & \textbf{0.677} & 4281\\
metis-s224-p224 & \textbf{0.685} & 0.557 & \textbf{0.676} & \textbf{0.672} & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{concat-s224-p224}}\\
\cline{2-6}
cdc-bhc & 0.714 & 0.524 & 0.654 & 0.659 & 210\\
web-s112-p224 & 0.991 & 0.795 & 0.986 & 0.966 & 132\\
web-s224-p224 & 1.000 & 0.773 & 0.988 & 0.983 & 44\\
metis-s112-p224 & 0.608 & 0.597 & 0.636 & 0.634 & 4281\\
metis-s224-p224 & 0.596 & 0.596 & 0.623 & 0.622 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{concat-s112-p224}} \\
\cline{2-6}
cdc-bhc & 0.545 & 0.490 & 0.589 & 0.583 & 210\\
web-s112-p224& 0.951 & 0.735 & 0.977 & 0.955 & 132\\
web-s224-p224 & 0.969 & 0.705 & 0.983 & 0.971 & 44\\
metis-s112-p224 & 0.637 & 0.588 & 0.633 & 0.627 & 4281\\
metis-s224-p224 & 0.626 & 0.579 & 0.613 & 0.613 & 1080\\
\end{tabular}
\caption{Metrics on test sets for EfficientNet-B0 trained on different training sets}\label{tab:enb0Metrics}
\end{table}
The inclusion of the \gls{hicc} \emph{web} datasets in the \emph{metis} dataset does not improve performance on the \gls{hicc} \emph{metis} dataset. However, it improves the recall on the \gls{hicc} \emph{web} datasets.
Overall, the best model seems to be the one trained on the \gls{hicc} \emph{metis-s112-p224} dataset, achieving the highest \gls{ap} and \gls{ar} on the \gls{hicc} \emph{metis} datasets and values close to the highest on the \gls{hicc} \emph{web} datasets.
The high \gls{ap} and \gls{ar} achieved by this model indicate that it achieved the most distinctive separation between honeycombs and non-honeycombs. Since \gls{ap} and \gls{ar} average their underlying metrics at different thresholds, a high value means increased independence from the choice of a specific threshold. To confirm this assumption, a qualitative analysis using \acs{gradCam} is performed in Section \ref{sec:gradCramCompare}.
The higher \gls{ar} of the \emph{metis-s112-p224} model compared to its recall at a confidence threshold of $0.5$ is caused by the recall almost never dropping to zero across thresholds.
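Reading \gls{ap} as the mean of precision over a grid of confidence thresholds, as described above, the metric can be sketched as follows (the threshold grid and the skipping of thresholds without any kept predictions are assumptions):

```python
def average_precision(scores_and_labels, thresholds=None):
    """Mean precision over a grid of confidence thresholds.

    scores_and_labels: list of (confidence, is_true_positive) pairs for
    all positive predictions; precision at a threshold counts only the
    predictions whose confidence reaches that threshold.
    """
    if thresholds is None:
        thresholds = [i / 100 for i in range(0, 100, 5)]
    precisions = []
    for t in thresholds:
        kept = [label for score, label in scores_and_labels if score >= t]
        if kept:  # thresholds that keep no predictions are skipped
            precisions.append(sum(kept) / len(kept))
    return sum(precisions) / len(precisions)
```

A model whose precision stays high across the whole grid therefore scores a high \gls{ap}, independent of any single threshold choice.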
\subsubsection{Grad-CAM-based Comparison}\label{sec:gradCramCompare}
\Gls{gradCam} is a technique to highlight the regions in an image contributing the most to the prediction \cite{selvaraju2017grad}.
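The weighted combination at the heart of \gls{gradCam} can be sketched on plain Python lists (a minimal sketch of the method by Selvaraju \acs{etal}\ \cite{selvaraju2017grad}: the channel weights are the global-average-pooled gradients of the class score, and the weighted sum of feature maps is passed through a ReLU):

```python
def grad_cam(feature_maps, gradients):
    """Gradient-weighted class activation map.

    feature_maps: list of K activation maps, each an HxW list of lists.
    gradients:    gradients of the class score w.r.t. those maps (same shape).
    Returns an HxW map: ReLU(sum_k alpha_k * A_k), where alpha_k is the
    spatial mean of the k-th gradient map.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: global average pooling over each gradient map
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a, fmap in zip(alphas, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a * fmap[i][j]
    # ReLU: keep only positively contributing regions
    return [[max(0.0, v) for v in row] for row in cam]
```

The resulting low-resolution map is then upsampled and overlaid on the input image to highlight the contributing regions.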
\Gls{gradCam} confirms that the models learned the structure of honeycombs on most datasets, although most models developed some bias toward the upper left corner. The models trained on the web-scraped datasets seem to learn the structure of honeycombs poorly, except for the \emph{web-s224-p224} model, which shows a suitable \acs{gradCam}. Only the model trained on \gls{cdc-bhc} failed to classify the image correctly, and only the \emph{metis-s112-p224} model does not activate for the upper left corner.
The models whose training included \emph{metis} data handled images that are atypical of their training data satisfactorily. However, the \emph{web} models, whose training data did not contain such images, were less confident in their predictions, although they still classified correctly.
The higher confidence of the other models demonstrates the better separation of classes these models learned as supported by the \gls{ap} and \gls{ar} in Table \ref{tab:enb0Metrics}.
Among the most common false positives were pictures containing pebbles. Figure \ref{fig:gradCamPebble} depicts the \glspl{gradCam} of the models classifying an image with loose pebbles lying on the ground.
\begin{figure}[H]
\footnotesize
\centering
\begin{tabular}{cccc}
& 0.99 & 0.94 & 1.0 \\
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mmetis-s112-p224/gradcam_2_a10.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mcdc-bhc/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mmetis-s224-p224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mmetis-s112-p224/gradcam_2_a5.jpg}
\\
$(a)$ original & $(b)$ cdc-bhc & $(c)$ metis-s224-p224 & $(d)$ metis-s112-p224 \\
\addlinespace[1ex] 0.49 & 0.99 & 1.0 & 1.0 \\
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mweb-s224-p224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mweb-s112-p224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mconcat-web-metis224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mconcat-web-metis/gradcam_2_a5.jpg}
\\
$(e)$ web-s224-p224 & $(f)$ web-s112-p224 & $(g)$ concat-s224-p224 & $(h)$ concat-s112-p224 \\
\end{tabular}
\caption{Grad-CAM of EfficientNet-B0 trained on different datasets for an image containing pebbles from metis-s224-p224/test}\label{fig:gradCamPebble}
\end{figure}
The models were easily confused by loose pebbles, demonstrating that they learned that pebbles are an important visual cue for honeycombs, which corresponds to the definition requiring at least one partially visible pebble. However, the models do not misclassify loose pebbles in general, as some of the examples in the next section illustrate.
In conclusion, \acs{gradCam} confirmed the metrics-based assessment that the EfficientNet-B0 trained on \emph{metis-s112-p224} performs best; it is also the only model that did not develop a bias for the upper left corner.
Surprisingly, the inclusion of the web images did not improve the model's performance. A possible explanation could be that most web images depict excessive honeycombs, or that the heavy preprocessing of these images for web usage makes them inadequate as training data.
It is also interesting that \acs{gradCam} can be used as a rudimentary tool for segmentation, despite building only on a classification model.
\subsubsection{Patch Classification}
Since the classification model was trained on lower-resolution patches, it can be applied patch-wise to images of higher resolution to localize defects.
\Glspl{gradCam} additionally provide assistance in locating honeycombs for verification. Figure \ref{fig:resPatchCam} demonstrates how \glspl{gradCam} help explain the classification decision and localize the honeycomb: the upper left tile in image $(a)$ could easily be misidentified as a false positive, but the overlayed \acs{gradCam} activation shows that the patch was correctly classified due to the small defect in its bottom right corner.
For the two upper images $(a)$ and $(b)$, a magenta border encloses each patch that exceeds the confidence threshold of $0.5$, with the confidence written in the upper left corner of the patch. The two lower images show the corresponding \glspl{gradCam} for the upper images.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/web_Honeycombing-2.jpg.jpg}
&\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/metis_12023_18216.jpeg.jpg}
\\
$(a)$ web example & $(b)$ metis example\\
\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/web_Honeycombing-2.jpg_cam.jpg}
&\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/metis_12023_18216.jpeg_cam.jpg}
\\
$(c)$ Grad-CAM overlayed & $(d)$ Grad-CAM overlayed\\
\end{tabular}
\caption{$(a)$ and $(b)$ show example images with patch-wise classification. $(c)$ and $(d)$ show the corresponding \glspl{gradCam} activations.}\label{fig:resPatchCam}
\end{figure}
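The patch-wise application can be sketched as a sliding window over the full-resolution image (a sketch; \texttt{classify\_patch} stands in for the trained EfficientNet-B0 and is a hypothetical callable, not code from this work):

```python
def classify_patches(image_size, classify_patch, patch=224, stride=224,
                     threshold=0.5):
    """Slide a patch window over the image and keep confident detections.

    Returns (x, y, confidence) for every patch whose honeycomb
    confidence exceeds the threshold.
    """
    width, height = image_size
    hits = []
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            conf = classify_patch(x, y, patch)
            if conf > threshold:
                hits.append((x, y, conf))
    return hits
```

Each returned coordinate corresponds to one magenta-bordered patch in the figures above, with its confidence annotated in the corner.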
If a honeycomb is positioned at the edge of a patch, \acs{gradCam} can provide assistance for human verification, as illustrated by $(c)$ and $(d)$ of Figure \ref{fig:resPatchCam}. In the case of patch classification, however, \acs{gradCam} mainly serves its initially intended purpose of debugging \glspl{cnn}, that is, recognizing whether a model overfits or learns specific undesirable features.
The images scraped from the web seem to lack backgrounds typically seen on construction sites.
While the \emph{web} dataset mostly contains honeycombs on plain concrete backgrounds, the \emph{cdc-bhc} dataset includes more varied images of concrete. Unfortunately, the models trained on these datasets did not handle typical construction-site backgrounds sufficiently well, since such backgrounds are outside the scope of their training data.
Figure \ref{fig:gradCamExample4} illustrates the higher false positive rates, particularly for areas not depicting concrete surfaces. The model trained on the \emph{web} data performed particularly poorly in this case, even worse than the \emph{cdc-bhc} model, although the performance metrics are higher for the web model. However, this is to be expected since the \emph{cdc-bhc} dataset contains a wider variety of negative examples.
In conclusion, adding more images depicting construction sites may further decrease the false positive rate.
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[height=2.5cm]{appendix/patch_classification/cdc-bhc/metis_15034_21710.jpeg.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/web-s224-p224/metis_15034_21710.jpeg.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/metis-s112-p224/metis_15034_21710.jpeg.jpg}
\\
$(a)$ cdc-bhc & $(b)$ web-s224-p224 & $(c)$ metis-s112-p224
\end{tabular}
\caption{False positives for non-concrete patches by our finetuned EfficientNet-B0 trained on different training sets}\label{fig:gradCamExample4}
\end{figure}
Since none of the applied augmentations targeted differences in the input's scaling, the models could not learn honeycomb structures that are too large, as shown by Figure \ref{fig:resCamLarge} with an atypically close photo of a honeycomb. The classification model failed to recognize most patches as honeycombs because neither the model nor the training data addressed significant differences in scale.
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[height=2.5cm]{appendix/patch_classification/cdc-bhc/metis_30597_97910.jpeg_cam.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/web-s224-p224/metis_30597_97910.jpeg_cam.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/metis-s112-p224/metis_30597_97910.jpeg_cam.jpg}
\\
$(a)$ cdc-bhc & $(b)$ web-s224-p224 & $(c)$ metis-s112-p224
\end{tabular}
\caption{Atypical scaling of a honeycomb, evaluated with our finetuned EfficientNet-B0 trained on different training sets}\label{fig:resCamLarge}
\end{figure}
This weakness at handling unexpected scales could be addressed by generating the patches from the image pyramids \cite{adelson1984pyramid}. For instance, Xiao \acs{etal}\ \cite{xiao2020surface} added image pyramids to \gls{mrcnn}, which improved the performance slightly. However, Girshick \cite{girshick2015fast} argued that the improvements gained by using image pyramids are not significant enough to justify the increase in computation time for \emph{Fast R-CNN}, and Ren \acs{etal}\ \cite{ren2015faster} argued the same for \emph{Faster R-CNN}. Since Girshick \cite{girshick2015fast} also demonstrated that the model \emph{Fast R-CNN} learns scale invariance from the training data, increasing the scale variance by adding patches of image pyramids might improve the classification model.
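Generating patches from an image pyramid amounts to repeating the patch extraction at successively halved resolutions; the pyramid sizes can be sketched as follows (resampling details such as Gaussian smoothing before downscaling are omitted):

```python
def pyramid_levels(width, height, patch=224, min_side=None):
    """Image sizes of a halving pyramid, stopping once a full patch
    no longer fits into the downscaled image."""
    if min_side is None:
        min_side = patch
    levels = []
    w, h = width, height
    while min(w, h) >= min_side:
        levels.append((w, h))
        w, h = w // 2, h // 2
    return levels
```

Patches extracted from every level would then be fed to the classifier, exposing it to the same honeycomb at several apparent sizes.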
\subsection{Instance segmentation}
The \gls{mrcnn} models learned to detect honeycombs to a certain degree.
Figure \ref{fig:mrcnnTraining} depicts the training and validation losses as well as the bounding box \glspl{ap} on the validation set.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{appendix/plot/mrcnn_training_web}
\includegraphics[width=0.45\linewidth]{appendix/plot/mrcnn_training_metis}
\includegraphics[width=0.45\linewidth]{appendix/plot/mrcnn_training_metis_web}
\\
$(a)$ web \quad $(b)$ metis \quad $(c)$ web+metis \\
\caption{Training and validation losses and validation \glspl{ap} of Mask R-CNN trained on $(a)$ the \emph{web} dataset, $(b)$ the \emph{metis} dataset, and $(c)$ both}\label{fig:mrcnnTraining}
\end{figure}
Neither of the models trained on a single dataset improved significantly past iteration 3000, while the model trained on both datasets converged at around 4000 iterations.
The validation \gls{ap} fluctuated significantly more for the \emph{web} validation set than for the \emph{metis} validation set, caused by the small size of the \emph{web} dataset compared to the \emph{metis} dataset.
Table \ref{tab:resBbox} displays the metrics of the \gls{mrcnn} model trained on the \gls{hicis} \emph{web} and \emph{metis} datasets, as well as on the combination of both, achieved on the validation sets.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}lcccccc}
& \multicolumn{3}{c}{\textbf{Web}} & \multicolumn{3}{c}{\textbf{Metis}} \\
\cline{2-4}\cline{5-7}
\textbf{metric} & \textbf{W} & \textbf{M} & \textbf{W+M} & \textbf{W} & \textbf{M} & \textbf{W+M} \\
\toprule
$AP_{IoU \geq 0.5}$ & \textbf{37.4} & 16.4 & 25.6 & 8.9 & 12.3 & \textbf{12.4} \\
$AP_{IoU \geq 0.5 : 0.95 : 0.05}$ & \textbf{22.2} & 7.9 & 17.4 & 3.1 & \textbf{6.0} & 5.7 \\
$AR_{IoU \geq 0.5 : 0.95 : 0.05}$ & \textbf{28.2} & 9.6 & 18.6 & 7.6 & 8.1 & \textbf{8.8} \\
\end{tabular}
\caption{Metrics for bounding boxes}\label{tab:resBbox}
\end{table}
Table \ref{tab:resMask} presents the corresponding metrics for the instance segmentation masks.
The \emph{web} model achieved lower performance on the \emph{metis} dataset than the \emph{metis} model, although achieving a slightly higher \gls{ar}. However, the inclusion of the \emph{web} data improved the model's segmentation masks slightly.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}lcccccc}
& \multicolumn{3}{c}{\textbf{Web}} & \multicolumn{3}{c}{\textbf{Metis}} \\
\cline{2-4}\cline{5-7}
\textbf{metric} & \textbf{W} & \textbf{M} & \textbf{W+M} & \textbf{W} & \textbf{M} & \textbf{W+M} \\
\toprule
$AP_{IoU \geq 0.5}$ & \textbf{33.0} & 14.5 & 23.2 & 7.3 & 11.7 & \textbf{11.9} \\
$AP_{IoU \geq 0.5:0.95:0.05}$ & \textbf{17.2} & 6.7 & 15.9 & 2.5 & 4.1 &\textbf{4.4} \\
$AR_{IoU \geq 0.5:0.95:0.05}$ & \textbf{23.7} & 8.6 & 17.0 & 6.1 & 6.0 & \textbf{7.0} \\
\end{tabular}
\caption{Metrics for segmentation masks}\label{tab:resMask}
\end{table}
All models achieved higher scores on the \emph{web} test set independent of the training set combination.
The inclusion of the \emph{metis} training data decreased the performance on the \emph{web} test set. The significantly larger size of the \emph{metis} dataset compared to the \emph{web} dataset led to a higher emphasis on the realistic images, causing the model trained on both datasets to perform worse on the \emph{web} dataset. The decreased performance therefore affirms the assumption that the images scraped from the web represent the most easily recognizable honeycombs.
The inclusion of the \emph{web} training set improved the model slightly on the \emph{metis} dataset, achieving the best values out of the three models in nearly all metrics.
Furthermore, the model trained on both datasets outperformed the model trained on only the \emph{metis} dataset for recall values over 20\% as illustrated by Figure \ref{fig:resPrCurveTest}.
In conclusion, the two datasets differ significantly, with the web images representing a limited selection of honeycombs.
Figures \ref{fig:resPrCurveTest} $(a)$ and $(b)$ depict the precision-recall curves of all three models at an \gls{iou} threshold of $0.5$ on the \emph{web} and \emph{metis} test sets, respectively. The two precision-recall curves illustrate the cause of the low \gls{ap} and \gls{ar}: since precision or recall is zero at many thresholds, their averages are significantly lowered; the model fails to reach precision and recall values above a certain point regardless of the confidence threshold.
\begin{figure}[H]
\centering
\begin{tabular} {cc}
\includegraphics[height=6cm]{appendix/plot/mask_all_pr_curve_test_web.jpg}
&\includegraphics[height=6cm]{appendix/plot/mask_all_pr_curve_test_metis.jpg}
\\
$(a)$ \emph{web} test set & $(b)$ \emph{metis} test set \\
\end{tabular}
\caption{Precision-recall curves}\label{fig:resPrCurveTest}
\end{figure}
In summary, metrics for bounding boxes and segmentation masks showed a performance improvement when combining both datasets, indicating that increasing the variance in the dataset by increasing its size is likely to lead to further increases in performance.
\subsubsection{Evaluation for practical application}
Since considering only the average precision and recall metrics might lead to underestimating the model's performance as explained in Section \ref{sec:apIouLimitations}, the precision and recall at a specific confidence threshold were calculated. Therefore, the specification of a suitable value for the confidence threshold was required.
Table \ref{tab:valConfPr} shows the metrics of all models on both validation sets using an \gls{iou} threshold of 50\% at three different confidence thresholds for the bounding box detections. For instance, with a confidence threshold of 0.7, the model achieves a precision of 72.5\%, which is vastly higher than expected when considering only the \gls{ap}.
\begin{table}[H]
\centering
\scriptsize
\begin{tabular}{!{\extracolsep{4pt}}rccccccccc}
\multicolumn{2}{c}{} & \multicolumn{4}{c}{\textbf{web}} & \multicolumn{4}{c}{\textbf{metis}} \\
\cline{3-6} \cline{7-10}
\addlinespace[1ex] \textbf{model} & \textbf{confidence} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} \\
\toprule
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{web}}}
& 0.3 & 0.3469 & 0.4722 & 0.4000 & 36 & 0.1386 & 0.2121 & 0.1677 & 198 \\
\multicolumn{1}{c|}{}& 0.5 & 0.4706 & 0.4444 & 0.4571 & 36& 0.1954 & 0.1799 & 0.1873 & 189 \\
\multicolumn{1}{c|}{}& 0.7 & 0.6087 & 0.3889 & 0.4746 & 36& 0.2526 & 0.1379 & 0.1784 & 174 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{metis}}}
& 0.3 & 0.5333 & 0.4444 & 0.4848 & 36 & 0.2060 & 0.3179 & 0.2500 & 195 \\
\multicolumn{1}{c|}{}& 0.5 & 0.7857 & 0.3056 & 0.4400 & 36 & 0.3077 & 0.2051 & 0.2462 & 195 \\
\multicolumn{1}{c|}{}& 0.7 & 0.7500 & 0.3000 & 0.4286 & 30 & 0.4694 & 0.1447 & 0.2212 & 159 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{W+M}}}
& 0.3 &0.6429 & 0.5000 & 0.5625 & 36 & 0.2576 & 0.3434 & 0.2944 & 198 \\
\multicolumn{1}{c|}{}& 0.5 & 0.8500 & 0.4722 & 0.6071 & 36 & 0.4000 & 0.2383 & 0.2987 & 193 \\
\multicolumn{1}{c|}{}& 0.7 & 0.9286 & 0.3611 & 0.5200 & 36 & 0.5283 & 0.1637 & 0.2500 & 171 \\
\end{tabular}
\caption{Differing confidence thresholds at an $IoU=0.5$ on the validation sets}\label{tab:valConfPr}
\end{table}
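The precision and recall values in these tables follow from matching predicted boxes to ground truth at $IoU \geq 0.5$; a minimal sketch of box \gls{iou} and a greedy matching (an illustration only; the actual evaluation may order predictions by confidence and handle ties differently):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(predictions, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes."""
    unmatched = list(ground_truth)
    tp = 0
    for p in predictions:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= iou_thresh:
            unmatched.remove(best)  # each ground-truth box matches once
            tp += 1
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Only the predictions above the chosen confidence threshold enter this matching, which is why the tables report separate rows per threshold.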
Table \ref{tab:testConfPr} shows the corresponding metrics achieved on the test sets, to be compared with the validation results in Table \ref{tab:valConfPr}, since the threshold must be determined on the validation set; otherwise, there would be no evidence that the chosen threshold generalizes adequately.
\begin{table}[H]
\centering
\scriptsize
\begin{tabular}{!{\extracolsep{4pt}}cccccccccc}
\multicolumn{2}{c}{} & \multicolumn{4}{c}{\textbf{web}} & \multicolumn{4}{c}{\textbf{metis}} \\
\cline{3-6} \cline{7-10}
\addlinespace[1ex] \textbf{model} & \textbf{confidence} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} \\
\toprule
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{web}}}
& 0.3 & 0.3396 & 0.6316 & 0.4417 & 57 & 0.2222 & 0.3143 & 0.2604 & 210 \\
\multicolumn{1}{c|}{}& 0.5 & 0.4225 & 0.5263 & 0.4687 & 57 & 0.3161 & 0.3216 & 0.3188 & 171\\
\multicolumn{1}{c|}{}& 0.7 & 0.5682 & 0.4386 & 0.4950 & 57 & 0.3711 & 0.2323 & 0.2857 & 155 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{metis}}}
& 0.3 & 0.4167 & 0.4386 & 0.4274 & 57 & 0.3162 & 0.4095 & 0.3568 & 210 \\
\multicolumn{1}{c|}{}& 0.5 & 0.6800 & 0.2982 & 0.4146 & 57 & 0.4911 & 0.3293 & 0.3943 & 167 \\
\multicolumn{1}{c|}{}& 0.7 & 0.8333 & 0.1923 & 0.3125 & 52 & 0.7250 & 0.2266 & 0.3452 & 128 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{W+M}}}
& 0.3 &0.4304 & 0.5965 & 0.5000 & 57 & 0.3116 & 0.4115 & 0.3546 & 209 \\
\multicolumn{1}{c|}{}& 0.5 & 0.6486 & 0.4211 & 0.5106 & 57 & 0.4786 & 0.3415 & 0.3986 & 164 \\
\multicolumn{1}{c|}{}& 0.7 &0.8333 & 0.2632 & 0.4000 & 57 & 0.6531 & 0.2462 & 0.3575 & 130 \\
\end{tabular}
\caption{Differing confidence thresholds at an $IoU=0.5$ on the test sets}\label{tab:testConfPr}
\end{table}
The confidence threshold of 0.5 was confirmed by the $F_1$ score on the \emph{metis} test set. Each model reaches its highest $F_1$ score on the \emph{web} test set at a different confidence threshold. However, the support is smaller than for the \emph{metis} set, and the differences in $F_1$ scores were minor, making it likely that the \emph{web} test set was not large enough. Overall, the metrics confirm the selection of a confidence threshold of 0.5 based on the validation set.
The high quality of some of the model's detections is illustrated by image $(a)$ of Figure \ref{fig:resQualityHigh}: the model predicted a near-perfect segmentation mask for the honeycomb in the image. However, this is not representative of the detection quality in general.
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[height=4cm]{appendix/detection/c7/metis_eval_metis_images_test/12}
& \includegraphics[height=4cm]{appendix/detection/g/metis_images_test/22}
& \includegraphics[height=4cm]{appendix/detection/c3/metis_eval_metis_images_test/22}
\\
$(a)$ near perfect segmentation mask & $(b)$ ground truth & $(c)$ predictions\\
\end{tabular}
\caption{Near perfect segmentation mask and differing division of honeycombs}\label{fig:resQualityHigh}
\end{figure}
As illustrated by images $(b)$ and $(c)$ in Figure \ref{fig:resQualityHigh}, the model's predictions dividing honeycombs into single or multiple instances reflect the challenge of defining instances of honeycombs as discussed in Section \ref{sec:hicis}. Furthermore, the differing division of honeycombs results in fewer true positives, since the split honeycomb instances do not reach an \gls{iou} of $0.5$. One could address this issue by changing the problem type to classification and segmentation. Although an experiment classifying large honeycombs failed, as mentioned in Section \ref{sec:hicc}, the challenge of unclear instance divisions may justify further research in this direction. Nevertheless, this problem does not affect verification by inspectors and, therefore, might be addressed by increasing the training data.
While the detections at a confidence threshold of $0.3$ are mostly correct, lowering the confidence threshold further increased the number of false positives when encountering loose pebbles on the ground, as shown in Figure \ref{fig:fpPebbles}.
\begin{figure}[H]
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{appendix/detection/c3/metis_eval_metis_images_test/35}
\includegraphics[width=0.3\textwidth]{appendix/detection/c5/metis_eval_metis_images_test/35}
\includegraphics[width=0.3\textwidth]{appendix/detection/c7/metis_eval_metis_images_test/35}
&& \\
\end{tabular}
\caption{Loose pebbles detected as honeycombs}\label{fig:fpPebbles}
\end{figure}
In conclusion, the model performed reasonably well and shows promise for object detection targeting honeycombs and other defects with a vague outline. However, the differentiation of pebbles and honeycombs needs more research. Two possible approaches are the segmentation of concrete surfaces or adding more images of loose pebbles. Bunrit \acs{etal}\ \cite{bunrit2019evaluating} demonstrated that materials used in construction, such as concrete, can be classified.
Furthermore, the division of honeycombs into the correct instances may be a challenge, although likely to be solved by using an adequately large dataset.
\subsection{Instance Segmentation vs. Classifying Patches}
The previous two sections explored and discussed the results of the object detection and classification approaches. While object detection and classification metrics cannot be compared directly, they may still indicate differences.
Comparing the classification model trained on \emph{\gls{hicc}-metis-s224-p224} to the \gls{mrcnn} trained on the \gls{hicis} dataset, the classification model achieves higher precision and recall. However, considering the strictness of the \gls{iou} criterion, the difference is not large enough to dismiss the performance of the \gls{mrcnn} model. Another consideration is that users may perceive bounding boxes with segmentation masks as more natural than patch classification with \acs{gradCam}, as \acs{eg} bounding circles are commonly used to indicate an area of importance in an image.
Labeling images for instance segmentation requires vastly more effort than classification, since instance masks answer a much harder problem formulation than class labels: where an instance is located, which pixels belong to it, and how many instances the image contains. While classifying a large image as a whole may not adequately address the user's requirement to locate the defect for faster verification, slicing an image into patches for classification may solve this problem, especially considering the effort of data acquisition.
\subsubsection{Qualitative Analysis on Realistic Images}
The validity of the quantitative comparison between instance segmentation and patch-based classification is limited because both address the problem in fundamentally different ways.
Although comparing object detection and patch-based classification using the \gls{iou} is possible \cite{veeranampalayam2020comparison}, this approach reduces both to a segmentation problem, which is the initial problem type of neither approach.
Since the quantitative comparison between the instance segmentation and patch-based classification approaches was not sufficient, an expert in defect documentation appraised the predictions manually enabled by the small size of the test set.
The qualitative analysis compared the detections and patch classifications of the best models side by side, \acs{ie} the \gls{mrcnn} trained on both datasets and the EfficientNet-B0 trained only on the \gls{hicc} \emph{metis-s112-p224} dataset. For each test image the following criteria were applied: first, whether the crucial honeycomb, \acs{ie} in most cases the largest, was detected; second, whether other significant honeycombs were detected; third, how many false positives existed and whether their number exceeded the true positive count. Based on these criteria, an assessment of unsatisfactory, sufficient, or satisfactory was given. \emph{Satisfactory} describes an image with honeycomb detections that a potential user could confirm at a glance, requiring at least one correct detection. \emph{Sufficient} expresses that an image with detections could not be verified easily, either due to too many false positives or due to failing to detect the crucial honeycomb. Finally, \emph{unsatisfactory} describes detections for an image that would have been unusable, either because no honeycomb was detected at all or because none was detected correctly.
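The rubric above can be expressed as a small decision rule; the function and its argument names are illustrative only, not part of an actual evaluation tool:

```python
def assess(crucial_found: bool, any_correct: bool,
           false_positives: int, true_positives: int) -> str:
    """Three-level rubric of the qualitative analysis (sketch).

    'satisfactory'  : at least one correct detection, easy to confirm,
    'sufficient'    : usable, but the crucial honeycomb is missing or
                      false positives outnumber true positives,
    'unsatisfactory': no honeycomb detected correctly at all.
    """
    if not any_correct:
        return "unsatisfactory"
    if not crucial_found or false_positives > true_positives:
        return "sufficient"
    return "satisfactory"
```

For example, an image whose crucial honeycomb was found, with three true positives and one false positive, would be rated satisfactory.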
The results of such a qualitative analysis are, however, limited by subjective bias.
Table \ref{tab:resQualScores} summarizes the assessment. While the instance segmentation approach produced more satisfactory detections, classifying patches had fewer unsatisfactory results.
\begin{table}[H]
\centering
\begin{tabular}{rccc}
\textbf{model} & \textbf{unsatisfactory} & \textbf{sufficient} & \textbf{satisfactory} \\
\toprule
EfficientNet-B0 trained on \emph{metis-s112-p224} & 4 & 22 & 12 \\
\gls{mrcnn} trained on both datasets & 8 & 12 & 18 \\
\end{tabular}
\caption{Assessment of detecting honeycombs in 38 realistic test images}\label{tab:resQualScores}
\end{table}
When directly comparing the detections, 16 were of similar quality; the object detection approach achieved better results in 13 cases but performed worse for 8 images.
The comparison confirmed that the instance segmentation masks are more convenient for users to validate.
Figure \ref{fig:easyConfirm} demonstrates the advantages of instance segmentations since the segmentations are more distinctive, especially around the steel pipes.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics*[width=0.3\linewidth]{appendix/detection/c5/metis_web_eval_metis_images_test/6}
& \includegraphics*[width=0.3\linewidth]{appendix/patch_classification/metis-s112-p224/metis_31187_115285.jpeg_cam.jpg}
\\
$(a)$ instance segmentation & $(b)$ classifying patches \\
\end{tabular}
\caption{Intuitiveness of instance segmentation compared to patch classification}\label{fig:easyConfirm}
\end{figure}
However, this might be addressed by classifying overlapping patches, \acs{eg} with a stride of half the patch size. When decreasing the stride to a single pixel, each pixel is classified, which is essentially segmentation. However, such a segmentation would still lack the instance identification of the \gls{mrcnn} model.
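With a stride smaller than the patch size, every pixel receives several patch-level scores, which can be averaged into a per-pixel map; in the limit of a one-pixel stride this approaches a segmentation. A sketch with a placeholder classifier (patch size, stride, and the classifier itself are illustrative assumptions):

```python
import numpy as np

def patchwise_heatmap(image_h, image_w, patch, stride, classify):
    """Average overlapping patch scores into a per-pixel score map.

    `classify(y, x)` stands in for a CNN returning the honeycomb
    probability of the patch whose top-left corner is (y, x).
    """
    scores = np.zeros((image_h, image_w))
    counts = np.zeros((image_h, image_w))
    for y in range(0, image_h - patch + 1, stride):
        for x in range(0, image_w - patch + 1, stride):
            scores[y:y + patch, x:x + patch] += classify(y, x)
            counts[y:y + patch, x:x + patch] += 1
    return scores / np.maximum(counts, 1)

# Dummy classifier: only patches starting in the left half score high.
heat = patchwise_heatmap(448, 448, patch=224, stride=112,
                         classify=lambda y, x: 1.0 if x < 224 else 0.0)
```

Pixels covered by patches with disagreeing scores end up with intermediate values, giving a soft transition instead of hard patch borders.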
Classifying by patches tended to create discontinuous honeycombs, while \gls{mrcnn} produced more continuous detections as illustrated by Figure \ref{fig:detectContinuouity}.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics*[width=0.3\linewidth]{appendix/detection/c5/metis_web_eval_metis_images_test/1}
& \includegraphics*[width=0.3\linewidth]{appendix/patch_classification/metis-s112-p224/metis_31401_122952.jpeg_cam.jpg} \\
$(a)$ & $(b)$ \\
\includegraphics*[width=0.3\linewidth]{appendix/detection/c5/metis_web_eval_metis_images_test/21}
& \includegraphics*[width=0.3\linewidth]{appendix/patch_classification/metis-s112-p224/metis_30683_103088.jpeg_cam.jpg} \\
$(c)$ & $(d)$ \\
\end{tabular}
\caption{Continuity of detections}\label{fig:detectContinuouity}
\end{figure}
In conclusion, both instance segmentation and classification with \acs{gradCam} are valid approaches for further research with larger datasets. Since the labeling of honeycombs for instance segmentation is significantly more challenging than classification, the latter could be superior for future research, especially considering the expert knowledge required and the ambiguities dividing honeycombs.
However, since instance segmentations are more natural to potential users and an initial \gls{mrcnn} model could be trained, an active learning approach could reduce labeling effort as demonstrated by Feng \acs{etal}\ \cite{feng2017deep}.
Therefore, future research should look into the problem using either approach depending on the integration of an active learning framework into a defect documentation system.
\section{Methodology}
While multiple datasets of defects in concrete structures exist, they either focus on different defects or contain images scraped from the internet. Hung \acs{etal}\ \cite{hung2019surface} published the most relevant dataset used for honeycomb detection, but it was only published after data augmentation, which increases the relabeling effort.
Two datasets were collected. First, \emph{Metis Systems AG} provided a set of honeycomb images. Second, similarly to Hung \acs{etal}\ \cite{hung2019surface}, a dataset was scraped from the internet, providing a baseline against similar scraped datasets from research and enabling comparison with a realistic dataset.
\subsection{Data Origins}
In the context of the research project \emph{SDaC}, \emph{Metis Systems AG} provided access to the defect images documented in their proprietary software \emph{überbau}. Inspectors document defects in \emph{überbau}, assigning each defect a title, optionally any number of images, and further attributes. These photos are often taken with a smartphone at a variety of camera angles, and lighting conditions may range from brightly sunlit to dark and lit only by a flash. The defects were accessed via an internal API and filtered for the keyword "honeycomb", resulting in a total of 780 images. These images were further manually classified into 191 \emph{honeycomb} and 539 \emph{other}. The honeycomb class contains images of easily identifiable honeycombs. In contrast, \emph{other} contains images of unrelated scenery as well as honeycombs that are too difficult to differentiate for initial research and would have required more resources.
To the author's knowledge, this dataset is the most extensive public collection of honeycomb images and the only one comprising images that were neither scraped from the internet nor taken especially for research.
Furthermore, all images are made public with permission of \emph{Metis Systems AG} and may be used for further research. While our definition of honeycombs may not match that of other researchers, the raw images are also published, increasing the number of publicly available photos of concrete structures with honeycombs, pores, etc., caused by errors during construction rather than deterioration.
The second dataset was obtained using a Google image search, similarly to Hung \acs{etal}\ \cite{hung2019surface}, who collected images for four classes. However, since only the honeycomb class is relevant for this work, only the keyword phrases \emph{honeycomb concrete} and \emph{honeycomb on concrete surface} were used, and the number of downloaded images was increased from 50 to 100 compared to Hung \acs{etal}\ \cite{hung2019surface}. The first 100 images matching each keyword phrase were downloaded, and images containing no honeycombs, images with watermarks obstructing the view, duplicates, and images of low quality were removed. The filtered images were then combined, and duplicates were removed a second time, resulting in 56 images depicting honeycombs.
However, the images scraped from the internet are primarily used as examples of honeycombs by different websites, introducing a bias towards very clear and large honeycombs. As a result, most images contain very noticeable pebble-like structures.
\subsection{Datasets}\label{sec:datasets}
Since honeycombs do not describe an object per se, determining their outline is a challenge in itself. In contrast, most pores have clear circular outlines, as highlighted by previous work \cite{liu2017image, zhu2008detecting, zhu2010machine, yoshitake2018image,nakabayash2020automatic}.
We use two approaches to labeling our data: first, instance segmentation masks; second, simple classification labels. Both labeling approaches are applied to both datasets.
\subsubsection{Honeycombs in concrete instance segmentation}\label{sec:hicis}
The conventional definition of a honeycomb is a surface void exceeding a certain diameter. This definition cannot be applied here, since estimating the size of surface voids is impossible due to the unknown scale of the images. Some works address this issue by defining a fixed area \cite{nakabayash2020automatic} or controlling the image capture process \cite{yoshitake2018image}, resulting in a fixed scale of the surfaces displayed in the images. However, our solution avoids these biases and enables detection at any scale.
We use the following definition as a labeling criterion: a honeycomb is a surface void in concrete with at least one partially visible pebble.
Finally, the \gls{hicis} datasets were created by labeling the images with instance segmentation masks according to the aforementioned honeycomb definition.
The datasets were split into train, validation, and test sets of 60\%, 20\%, and 20\%, respectively, creating the two datasets \emph{\gls{hicis} metis} and \emph{\gls{hicis} web} with three subsets each.
\subsubsection{Honeycombs in concrete classification}\label{sec:hicc}
In addition to our segmentation labels, we create several classification datasets. First, we use the \gls{cdc} dataset by Hung \acs{etal}\ \cite{hung2019surface}. The dataset was converted from multi to binary classification for honeycombs by sorting all non-honeycomb images into a single class. The resulting dataset is called \gls{cdc-bhc}.
Additionally, the classification datasets \gls{hicc} were created from our \gls{hicis} segmentation datasets.
Square patches of $224\times224$ pixels were generated from the \gls{hicis} datasets by cropping the images, calculating the area of the instance segmentation mask inside each crop, and applying a threshold to derive binary classification labels.
The crop was moved over the image with a stride of either 112 or 224 pixels, creating two datasets for each \gls{hicis} dataset.
The datasets created with a stride of 112 pixels contain each pixel up to four times in different patches, while the others contain each pixel exactly once. Keeping the train, validation, and test splits from \gls{hicis} ensured that a specific honeycomb does not appear in different subsets. Table \ref{tab:overviewHiccDatasets} summarizes the datasets. The generated datasets originating from the \gls{hicis} datasets follow the naming convention \emph{HiCC/\{origin\}-s\{stride\_size\}-p\{patch\_size\}}.
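The patch-label generation can be sketched as follows; the 10\% area threshold is an assumed placeholder, as the text does not state the exact value used:

```python
import numpy as np

def patches_with_labels(mask, patch=224, stride=224, min_frac=0.10):
    """Cut a binary instance mask into patches and derive class labels.

    A patch is labelled honeycomb (True) when the annotated area inside
    it exceeds `min_frac` of the patch area. The 10% value is an assumed
    placeholder; the actual threshold is not specified in the text.
    """
    h, w = mask.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            frac = mask[y:y + patch, x:x + patch].mean()
            out.append(((y, x), frac >= min_frac))
    return out

mask = np.zeros((448, 448), dtype=float)
mask[:224, :224] = 1.0  # one honeycomb filling the top-left quadrant
labels = dict(patches_with_labels(mask))  # stride 224: each pixel once
```

With `stride=112`, the same image yields nine overlapping patches instead of four, so each pixel contributes to up to four labels.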
\begin{table}[H]
\footnotesize
\centering
\begin{tabular}{!{\extracolsep{4pt}}rrrrrrrr}
& & \multicolumn{2}{c}{\textbf{train}} & \multicolumn{2}{c}{\textbf{validation}} & \multicolumn{2}{c}{\textbf{test}}\\
\cline{3-4}\cline{5-6}\cline{7-8}
\textbf{origin} & \textbf{dataset name} & \textbf{true} & \textbf{false} & \textbf{true} & \textbf{false} & \textbf{true} & \textbf{false}\\
\toprule
\gls{cdc} \cite{hung2019surface} & \gls{cdc-bhc}-224 & 840 & 3360 & 0 & 0 & 210 & 840\\
\gls{hicis}-metis & \gls{hicc}-metis-s112-p224 & 10480 & 64359 & 3684 & 25014 & 4281 & 24700\\
\gls{hicis}-metis & \gls{hicc}-metis-s224-p224 & 2676 & 16976 & 936 & 6571 & 1080 & 6498 \\
\gls{hicis}-web & \gls{hicc}-web-s112-p224 & 573 & 823 & 156 & 28 & 132 & 20\\
\gls{hicis}-web & \gls{hicc}-web-s224-p224 & 161 & 231 & 48 & 8 & 44 & 5\\
\end{tabular}
\caption{Overview of classification datasets}\label{tab:overviewHiccDatasets}
\end{table}
The procedure generated a relatively large number of samples, more than three times as many as the original \gls{cdc} dataset contains.
\subsection{Transfer-learning}
\Glspl{cnn} often require a large amount of data, and even on modern systems training time can span multiple days. Therefore, pre-trained models were used, reducing training time and improving the generalization of the models, as demonstrated by Özgenel and Sorguç \cite{ozgenel2018performance} for crack detection in concrete.
\subsubsection{Mask R-CNN with ResNet101 Backbone for Instance Segmentation}
The \gls{mrcnn} architecture is used for instance segmentation. \Gls{mrcnn} achieved state-of-the-art performance on COCO at the time of its publication by He \acs{etal}\ \cite{he2018mask} in 2017.
We used \emph{ResNet101} as a backbone.
A warmup phase was used, starting from a learning rate of $5\times10^{-6}$ and reaching $5\times10^{-3}$ at the 100th iteration. The learning rate was then held constant until the 2000th iteration, from which point it was halved every 250 iterations, except for the models trained only on \gls{hicis} \emph{metis}, for which the halving started at the 1000th iteration. 512 regions of interest were generated per image. Since GPU memory was limited, images were resized to $1024\times1024$ pixels, and a batch size of 2 images per iteration was used. The models were trained for a total of 6000 iterations.
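The schedule can be written out as a function of the iteration count. The values follow the text; linear warmup is assumed, since the interpolation is not stated:

```python
def lr_at(it, halving_start=2000, base=5e-3,
          warmup_from=5e-6, warmup_until=100):
    """Mask R-CNN learning-rate schedule (sketch).

    Linear warmup to `base` until iteration `warmup_until` is an
    assumption; the text only gives the start and end values.
    For the metis-only models, halving_start would be 1000.
    """
    if it < warmup_until:
        frac = it / warmup_until
        return warmup_from + frac * (base - warmup_from)
    if it < halving_start:
        return base
    # Halve once at halving_start and again every 250 iterations.
    halvings = (it - halving_start) // 250 + 1
    return base / (2 ** halvings)
```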
\subsubsection{EfficientNet for classification}\label{sec:enb-cdc}
The number of \Gls{cnn}-based classification models developed since its inception in 1995 is vast \cite{li2021survey}.
Hung \acs{etal}\ \cite{hung2019surface} used VGG19 \cite{simonyan2014very}, InceptionV3 \cite{szegedy2016rethinking}, and InceptionResNetV2 \cite{szegedy2017inception} to successfully classify surface damage in concrete. EfficientNet architectures are prevalent because of their performance-to-parameter ratio \cite{tan2019efficientnet}.
EfficientNet-L2, the best performing model of the EfficientNet family, achieves 90.2\% top-1 accuracy on ImageNet.
Since Hung \acs{etal}\ \cite{hung2019surface} used an input image size of $227\times227$ pixels, the closest matching EfficientNet model, EfficientNet-B0, is used.
The application of transfer learning consisted of three stages.
First, only the output layer was trained. Second, the last block of EfficientNet-B0 was trained in addition to the output layer, increasing the number of trainable parameters without losing the low-level feature extractors in the early blocks. Third, the model was trained in its entirety. All training stages used Adam \cite{kingma2014adam} for optimization.
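As a sketch, the three stages can be expressed framework-independently; the block names below are placeholders for the EfficientNet-B0 feature blocks and the classification head:

```python
def set_stage(blocks, stage):
    """Mark which parts of the network are trainable in each stage.

    blocks: list of block names, the last entry being the output layer.
    Stage 1 trains only the output layer, stage 2 additionally the last
    feature block, stage 3 the whole network.
    """
    trainable = {}
    for i, name in enumerate(blocks):
        if stage == 1:
            trainable[name] = (i == len(blocks) - 1)
        elif stage == 2:
            trainable[name] = (i >= len(blocks) - 2)
        else:
            trainable[name] = True
    return trainable

# Placeholder layout: seven feature blocks plus the output layer.
blocks = [f"block{i}" for i in range(1, 8)] + ["output"]
```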
To demonstrate the viability and effectiveness of EfficientNet-B0, the model was trained on the \gls{cdc} dataset by Hung \acs{etal}\ \cite{hung2019surface}, without adding any further augmentations, since \gls{cdc} is already augmented. Table \ref{tab:resCdcTrainParams} displays the training parameters.
\begin{table}[H]
\centering
\begin{tabular}{ccccc}
\textbf{stage} & \textbf{epoch} & \textbf{initial learning rate} & $\beta_1$ & $\beta_2$ \\
\toprule
1 & 1 & 1e-2 & 0.9 & 0.9 \\
2 & 1 & 1e-5 & 0.9 & 0.9 \\
3 & 8 & 1e-8 & 0.9 & 0.9 \\
\end{tabular}
\caption{Parameters for training EfficientNet-B0 using Adam on the CDC dataset}\label{tab:resCdcTrainParams}
\end{table}
Table \ref{tab:resHicTrainParams} describes the training stages, omitting the $\beta_1$ and $\beta_2$ values of the Adam optimizer since they are all set to $0.9$. In addition to randomly changing contrast, saturation, and brightness, the JPEG quality of the input images was randomly set between 50 and 100.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}rcccccc}
& \multicolumn{2}{c}{\textbf{stage 1}} & \multicolumn{2}{c}{\textbf{stage 2}} & \multicolumn{2}{c}{\textbf{stage 3}} \\
\cline{2-3}\cline{4-5}\cline{6-7}
\textbf{training dataset} & \textbf{epochs} & \textbf{lr} & \textbf{epochs} & \textbf{lr} & \textbf{epochs} & \textbf{lr} \\
\toprule
\gls{cdc-bhc} & 1 & 1e-1 & 1 & 1e-4& 4& 1e-7 \\
\gls{hicc}-metis-s112-p224 & 1 & 1e-1 & 1& 1e-4& 4& 1e-7\\
\gls{hicc}-metis-s224-p224 & 1 & 1e-2 & 1& 1e-5 & 4 & 1e-8 \\
\gls{hicc}-web-s112-p224 & 1 & 1e-1 & 1 & 1e-4& 4& 1e-7 \\
\gls{hicc}-web-s224-p224 & 1 & 1e-2 & 1& 1e-5 & 4& 1e-8 \\
concat-s112-p224 & 1 & 1e-2 & 1 & 1e-5 & 1 & 1e-8 \\
concat-s224-p224 & 1 & 1e-2 & 1& 1e-5 & 4& 1e-8 \\
\end{tabular}
\caption{Parameters for training EfficientNet-B0 using Adam with $\beta_1=\beta_2=0.9$ on binary honeycomb datasets}\label{tab:resHicTrainParams}
\end{table}
\emph{Concat-s112-p224} denotes the combination of \emph{metis-s112-p224} and \emph{web-s112-p224}, while \emph{concat-s224-p224} combines the non-overlapping versions, \acs{ie} those with a stride of 224 pixels and a patch size of 224 pixels.
\section{Results and Discussion}
\label{chap:results}
\subsection{EfficientNet-B0 for Concrete Damage Classification}
EfficientNet-B0 achieved better performance than all models by Hung \acs{etal}\ \cite{hung2019surface}, thereby reaching state of the art on the \gls{cdc} dataset.
After 10 training epochs, EfficientNet-B0 achieves an accuracy of 96.95\%, a precision of 97.32\%, and a recall of 96.76\% on the \gls{cdc} dataset. Precision and recall are higher for each class and on average than those of the formerly best performing model, InceptionResnetV2, as shown in Table \ref{tab:compareBlub}. Furthermore, EfficientNet-B0's accuracy of 96.29\% is statistically significantly higher than VGG19's 92.29\%, InceptionV3's 90.57\%, and InceptionResnetV2's 92.57\%.
In conclusion, these results demonstrate that EfficientNet-B0 should be able to reach satisfactory performance on realistic datasets if the images scraped from the web sufficiently resemble honeycombs.
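The significance claim can be sanity-checked with a two-proportion z-test over the 1050 test images; the choice of test is an assumption here, as the text does not name the procedure used:

```python
from math import sqrt

def two_proportion_z(p1, p2, n1, n2):
    """z statistic for the difference of two proportions (pooled)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 1050  # size of the CDC test set
# EfficientNet-B0 (96.29%) vs. InceptionResnetV2 (92.57%)
z = two_proportion_z(0.9629, 0.9257, n, n)
# z is about 3.7, well beyond the 1.96 cutoff of a two-sided 5% level
```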
\begin{table}[H]
\centering
\begin{tabular}{rcccc}
\textbf{class} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} &
\textbf{support} \\
\toprule
normal & 0.96 & 0.97 & 0.96 & 210 \\
cracked & 0.95 & 0.97 & 0.96 & 210 \\
blistering & 0.98 & 0.96 & 0.97 & 210 \\
honeycomb & 0.96 & 0.98 & 0.97 & 210 \\
moss & 1.00 & 0.97 & 0.98 & 210 \\
average & 0.97 & 0.97 & 0.97 & 1050 \\
\end{tabular}\\
$(a)$ Our finetuned EfficientNet-B0
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{rcccc}
\textbf{class} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} &
\textbf{support} \\
\toprule
normal & 0.91 & 0.93 & 0.92 & 210 \\
cracked & 0.90 & 0.90 & 0.90 & 210 \\
blistering & 0.94 & 0.91 & 0.93 & 210 \\
honeycomb & 0.89 & 0.97 & 0.93 & 210 \\
moss & 0.99 & 0.91 & 0.95 & 210 \\
avg & 0.93 & 0.93 & 0.93 & 1050 \\
\end{tabular}
\\
$(b)$ InceptionResnetV2 by Hung \acs{etal}\ \cite{hung2019surface} \\
\caption{Comparison of performance on the CDC dataset of our finetuned EfficientNet-B0 vs. InceptionResnetV2 by Hung et al. \cite{hung2019surface}}\label{tab:compareBlub}
\end{table}
\subsection{Classification Results}\label{sec:resClassificaiton}
The training of EfficientNet-B0 showed clear differences between the datasets in \gls{hicc}.
All models performed best on their own test sets, considering \gls{ap} and \gls{ar}.
Each model achieved high performance on the \gls{hicc} \emph{web} datasets.
Table \ref{tab:enb0Metrics} displays the metrics of each model on the different test sets.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}rccccr}
\textbf{test set} & \textbf{precision} & \textbf{recall} & \textbf{ap} & \textbf{ar} & \textbf{support} \\
\toprule
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{cdc-bhc}} \\
\cline{2-6}
cdc-bhc & \textbf{0.980} & \textbf{0.943} & \textbf{0.927} & \textbf{0.972} & 210 \\
web-s112-p224 & 0.974 & 0.848 & 0.980 & 0.970 & 132 \\
web-s224-p224 & \textbf{1.000} & 0.818 & 0.981 & 0.973 & 44 \\
metis-s112-p224 & 0.238 & 0.242 & 0.189 & 0.220 & 4281 \\
metis-s224-p224 & 0.226 & 0.235 & 0.181 & 0.212 & 1080 \\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{web-s224-p224}} \\
\cline{2-6}
cdc-bhc & 0.286 & 0.676 & 0.490 & 0.478 & 210\\
web-s112-p224 & 0.991 & 0.879 & \textbf{0.987} & \textbf{0.978} & 132\\
web-s224-p224 & \textbf{1.000} & 0.841 & \textbf{0.992} & \textbf{0.994} & 44\\
metis-s112-p224 & 0.432 & 0.497 & 0.416 & 0.430 & 4281\\
metis-s224-p224 & 0.419 & 0.490 & 0.413 & 0.425 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{web-s112-p224}} \\
\cline{2-6}
cdc-bhc & 0.295 & 0.738 & 0.347 & 0.389 & 210\\
web-s112-p224 & 0.959 & \textbf{0.886} & 0.977 & 0.907 & 132\\
web-s224-p224 & \textbf{1.000} & \textbf{0.864} & 0.991 & \textbf{0.994} & 44\\
metis-s112-p224 & 0.416 & 0.404 & 0.329 & 0.342 & 4281\\
metis-s224-p224 & 0.406 & 0.400 & 0.320 & 0.341 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{metis-s224-p224}}\\
\cline{2-6}
cdc-bhc & 0.577 & 0.448 & 0.554 & 0.550 & 210\\
web-s112-p224 & \textbf{1.000} & 0.689 & 0.984 & 0.974 & 132\\
web-s224-p224 & \textbf{1.000} & 0.705 & 0.991 & 0.990 & 44\\
metis-s112-p224 & 0.621 & \textbf{0.611} & 0.640 & 0.635 & 4281\\
metis-s224-p224 & 0.617 & \textbf{0.615} & 0.623 & 0.623 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{metis-s112-p224}} \\
\cline{2-6}
cdc-bhc & 0.644 & 0.319 & 0.444 & 0.460 & 210\\
web-s112-p224 & 0.988 & 0.614 & 0.962 & 0.919 & 132\\
web-s224-p224 & \textbf{1.000} & 0.591 & 0.973 & 0.963 & 44\\
metis-s112-p224 & \textbf{0.696} & 0.568 & \textbf{0.680} & \textbf{0.677} & 4281\\
metis-s224-p224 & \textbf{0.685} & 0.557 & \textbf{0.676} & \textbf{0.672} & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{concat-s224-p224}}\\
\cline{2-6}
cdc-bhc & 0.714 & 0.524 & 0.654 & 0.659 & 210\\
web-s112-p224 & 0.991 & 0.795 & 0.986 & 0.966 & 132\\
web-s224-p224 & 1.000 & 0.773 & 0.988 & 0.983 & 44\\
metis-s112-p224 & 0.608 & 0.597 & 0.636 & 0.634 & 4281\\
metis-s224-p224 & 0.596 & 0.596 & 0.623 & 0.622 & 1080\\
\addlinespace[1ex] & \multicolumn{5}{c}{\textbf{concat-s112-p224}} \\
\cline{2-6}
cdc-bhc & 0.545 & 0.490 & 0.589 & 0.583 & 210\\
web-s112-p224& 0.951 & 0.735 & 0.977 & 0.955 & 132\\
web-s224-p224 & 0.969 & 0.705 & 0.983 & 0.971 & 44\\
metis-s112-p224 & 0.637 & 0.588 & 0.633 & 0.627 & 4281\\
metis-s224-p224 & 0.626 & 0.579 & 0.613 & 0.613 & 1080\\
\end{tabular}
\caption{Metrics on test sets for EfficientNet-B0 trained on different training sets}\label{tab:enb0Metrics}
\end{table}
The inclusion of the \gls{hicc} \emph{web} datasets alongside the \emph{metis} dataset does not improve performance on the \gls{hicc} \emph{metis} datasets; however, it improves the recall on the \gls{hicc} \emph{web} datasets.
Overall, the best model appears to be the one trained on the \gls{hicc} \emph{metis-s112-p224} dataset, achieving the highest \gls{ap} and \gls{ar} for the \gls{hicc} \emph{metis} datasets and values close to the highest for the \gls{hicc} \emph{web} datasets.
The high \gls{ap} and \gls{ar} achieved by this model indicate that it learned the most distinctive separation between honeycombs and non-honeycombs. Since \gls{ap} and \gls{ar} average precision and recall over different confidence thresholds, high values mean increased independence from picking a specific threshold. To confirm this assumption, a qualitative analysis using \acs{gradCam} is performed in Section \ref{sec:gradCramCompare}.
The higher \gls{ar} of the \emph{metis-s112-p224} model compared to its recall at a confidence threshold of $0.5$ is explained by its recall almost never dropping to zero across thresholds.
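How averaging over thresholds removes the dependence on a single cutoff can be illustrated with a toy sweep; the exact \gls{ap} and \gls{ar} definitions used in the evaluation may differ (e.g. regarding interpolation), so this is only a schematic version with made-up scores:

```python
import numpy as np

def threshold_sweep(scores, labels, thresholds):
    """Precision and recall of a binary classifier at several
    confidence thresholds; averaging them yields AP-/AR-style
    summaries that do not depend on one fixed cutoff."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    precisions, recalls = [], []
    for t in thresholds:
        pred = scores >= t
        tp = np.logical_and(pred, labels).sum()
        precisions.append(tp / pred.sum() if pred.sum() else 1.0)
        recalls.append(tp / labels.sum())
    return np.mean(precisions), np.mean(recalls)

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
ap, ar = threshold_sweep(scores, labels, thresholds=[0.25, 0.5, 0.75])
```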
\subsubsection{Grad-CAM-based Comparison}\label{sec:gradCramCompare}
\Gls{gradCam} is a technique to highlight the regions in an image contributing the most to the prediction \cite{selvaraju2017grad}.
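At its core, the technique pools the gradients of the class score with respect to the last convolutional feature maps into per-channel weights, forms the corresponding weighted sum of the maps, and clips negative contributions. A numpy sketch with made-up activations and gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer's activations
    (C, H, W) and the gradients of the class score w.r.t. them."""
    weights = gradients.mean(axis=(1, 2))             # global average pooling
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0)                          # keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam

acts = np.zeros((2, 4, 4))
acts[0, 1:3, 1:3] = 1.0                                # channel 0 fires centrally
grads = np.array([np.ones((4, 4)), -np.ones((4, 4))])  # score follows channel 0
heat = grad_cam(acts, grads)
```

Only the centrally firing channel receives a positive weight, so the heatmap highlights the central region while the rest stays at zero.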
\Gls{gradCam} confirms that the models learned the structure of honeycombs on most datasets, although most models develop some bias for the upper left corner. The models trained on the web-scraped datasets seem to learn the structure of honeycombs poorly, except for the \emph{web-s224-p224} model, which shows a suitable \acs{gradCam}. Only the model trained on \gls{cdc-bhc} failed to classify the image correctly, and only the \emph{metis-s112-p224} model does not activate for the upper left corner.
The models whose training included \emph{metis} data handled images that are atypical of their training data satisfactorily. The web models, whose training data did not contain such images, were less confident in their predictions, although they still classified correctly.
The higher confidence of the other models demonstrates the better separation of classes these models learned, as supported by the \gls{ap} and \gls{ar} values in Table \ref{tab:enb0Metrics}.
One of the most common false positives were pictures containing pebbles. Figure \ref{fig:gradCamPebble} depicts the \glspl{gradCam} of the models classifying an image with loose pebbles lying on the ground.
\begin{figure}[H]
\footnotesize
\centering
\begin{tabular}{cccc}
& 0.99 & 0.94 & 1.0 \\
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mmetis-s112-p224/gradcam_2_a10.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mcdc-bhc/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mmetis-s224-p224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mmetis-s112-p224/gradcam_2_a5.jpg}
\\
$(a)$ original & $(b)$ cdc-bhc & $(c)$ metis-s224-p224 & $(d)$ metis-s112-p224 \\
\addlinespace[1ex] 0.49 & 0.99 & 1.0 & 1.0 \\
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mweb-s224-p224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mweb-s112-p224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mconcat-web-metis224/gradcam_2_a5.jpg}
&
\includegraphics[width=0.15\linewidth,trim={0 0 0 7mm},clip]{appendix/cam/dsmetis/mconcat-web-metis/gradcam_2_a5.jpg}
\\
$(e)$ web-s224-p224 & $(f)$ web-s112-p224 & $(g)$ concat-s224-p224 & $(h)$ concat-s112-p224 \\
\end{tabular}
\caption{Grad-CAM of EfficientNet-B0 trained on different datasets for an image containing pebbles from metis-s224-p224/test}\label{fig:gradCamPebble}
\end{figure}
The models were easily confused by loose pebbles, demonstrating that they learned that pebbles are an important visual cue for honeycombs, which corresponds to the definition requiring at least one partially visible pebble. However, the models do not misclassify loose pebbles in general, as some of the examples in the next section illustrate.
In conclusion, \acs{gradCam} confirmed the metrics-based assessment, according to which the EfficientNet-B0 trained on \emph{metis-s112-p224} performs best; it is also the only model that did not develop a bias for the upper left corner.
Surprisingly, the inclusion of the web images did not improve the model's performance. A possible explanation could be that most web images depict excessive honeycombs, or that the heavy preprocessing of these images for web usage renders them inadequate.
It is also interesting that \acs{gradCam} can serve as a tool for coarse segmentation, despite building only on a classification model.
\subsubsection{Patch Classification}
Since the classification model has been trained on lower resolution patches, we can apply the model patch-wise onto images with larger resolutions to localize defects.
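A patch-wise application over a non-overlapping grid can be sketched as follows; `classify` stands in for the trained EfficientNet-B0, and the dummy classifier used here is purely illustrative:

```python
import numpy as np

def classify_grid(image, patch, classify, threshold=0.5):
    """Apply a patch classifier over a non-overlapping grid and
    return (row, col, confidence) for patches flagged as honeycomb.
    `classify` stands in for the trained CNN."""
    h, w = image.shape[:2]
    hits = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            conf = classify(image[y:y + patch, x:x + patch])
            if conf >= threshold:
                hits.append((y, x, conf))
    return hits

img = np.zeros((448, 448))
img[224:, 224:] = 1.0  # pretend defect in the lower-right quadrant
# Dummy classifier: mean intensity as "honeycomb confidence".
hits = classify_grid(img, 224, classify=lambda p: float(p.mean()))
```

The flagged offsets correspond to the magenta-bordered patches in the figures below, with the confidence written into each patch.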
\Glspl{gradCam} additionally provide assistance in locating honeycombs for verification. Figure \ref{fig:resPatchCam} demonstrates how \glspl{gradCam} help explain the classification decision and localize the honeycomb: the upper left tile in image $(a)$ could easily be misidentified as a false positive, but the overlayed activation shows that the patch was correctly classified due to the small defect in its bottom right corner.
For the two upper images $(a)$ and $(b)$, a magenta border encloses each patch that exceeds the confidence threshold of $0.5$, with the confidence written in the upper left corner of the patch. The two lower images show the corresponding \glspl{gradCam}.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/web_Honeycombing-2.jpg.jpg}
&\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/metis_12023_18216.jpeg.jpg}
\\
$(a)$ web example & $(b)$ metis example\\
\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/web_Honeycombing-2.jpg_cam.jpg}
&\includegraphics[height=4cm]{appendix/patch_classification/metis-s112-p224/metis_12023_18216.jpeg_cam.jpg}
\\
$(c)$ Grad-CAM overlayed & $(d)$ Grad-CAM overlayed\\
\end{tabular}
\caption{$(a)$ and $(b)$ show example images with patch-wise classification. $(c)$ and $(d)$ show the corresponding \glspl{gradCam} activations.}\label{fig:resPatchCam}
\end{figure}
If a honeycomb is positioned at the edge of a patch, \acs{gradCam} can provide assistance for human verification, as illustrated by $(c)$ and $(d)$ of Figure \ref{fig:resPatchCam}. In the case of patch classification, however, \acs{gradCam} mainly serves its originally intended purpose of debugging \glspl{cnn}, that is, recognizing whether a model overfits or learns specific undesirable features.
The images scraped from the web seem to lack backgrounds typically seen on construction sites.
While the web dataset mostly contains images of honeycombs against plain concrete backgrounds and lacks good images of construction sites, the \gls{cdc-bhc} dataset also includes more varied images of concrete. Unfortunately, the models trained on these datasets did not handle typical construction-site backgrounds sufficiently well, since such backgrounds are outside the scope of their training data.
Figure \ref{fig:gradCamExample4} illustrates the higher false positive rates, particularly for areas not depicting concrete surfaces. The model trained on the \emph{web} data performed particularly poorly in this case, even worse than the \emph{cdc-bhc} model, although the performance metrics are higher for the web model. However, this is to be expected since the \emph{cdc-bhc} dataset contains a wider variety of negative examples.
In conclusion, adding more images depicting construction sites may further decrease the false positive rate.
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[height=2.5cm]{appendix/patch_classification/cdc-bhc/metis_15034_21710.jpeg.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/web-s224-p224/metis_15034_21710.jpeg.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/metis-s112-p224/metis_15034_21710.jpeg.jpg}
\\
$(a)$ cdc-bhc & $(b)$ web-s224-p224 & $(c)$ metis-s112-p224
\end{tabular}
\caption{False positives for non-concrete patches by our finetuned EfficientNet-B0 trained on different training sets}\label{fig:gradCamExample4}
\end{figure}
Since none of the applied augmentations targeted differences in the input's scaling, the models could not learn honeycomb structures that are too large, as shown by Figure \ref{fig:resCamLarge} with an atypically close photo of a honeycomb. The classification model failed to recognize most of these patches as honeycombs since neither the model nor the training data addressed significant differences in scale.
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[height=2.5cm]{appendix/patch_classification/cdc-bhc/metis_30597_97910.jpeg_cam.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/web-s224-p224/metis_30597_97910.jpeg_cam.jpg}
&\includegraphics[height=2.5cm]{appendix/patch_classification/metis-s112-p224/metis_30597_97910.jpeg_cam.jpg}
\\
$(a)$ cdc-bhc & $(b)$ web-s224-p224 & $(c)$ metis-s112-p224
\end{tabular}
\caption{Untypical scaling of a honeycomb by our finetuned EfficientNet-B0 trained on different training sets}\label{fig:resCamLarge}
\end{figure}
This weakness at handling unexpected scales could be addressed by generating the patches from the image pyramids \cite{adelson1984pyramid}. For instance, Xiao \acs{etal}\ \cite{xiao2020surface} added image pyramids to \gls{mrcnn}, which improved the performance slightly. However, Girshick \cite{girshick2015fast} argued that the improvements gained by using image pyramids are not significant enough to justify the increase in computation time for \emph{Fast R-CNN}, and Ren \acs{etal}\ \cite{ren2015faster} argued the same for \emph{Faster R-CNN}. Since Girshick \cite{girshick2015fast} also demonstrated that the model \emph{Fast R-CNN} learns scale invariance from the training data, increasing the scale variance by adding patches of image pyramids might improve the classification model.
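Generating patches from an image pyramid, as suggested above, can be sketched as follows; the subsampling downscale is a crude stand-in (a real pipeline would use e.g. cv2.pyrDown), and the scale values are illustrative assumptions:

```python
import numpy as np

def pyramid_patches(image, patch=224, scales=(1.0, 0.5, 0.25)):
    """Yield (scale, y, x, patch) tuples from a simple image pyramid."""
    for s in scales:
        step = int(round(1 / s))
        level = image[::step, ::step]          # crude downscale by subsampling
        h, w = level.shape[:2]
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                yield s, y, x, level[y:y + patch, x:x + patch]

img = np.zeros((448, 448, 3), dtype=np.uint8)
counts = {}
for s, y, x, p in pyramid_patches(img):
    counts[s] = counts.get(s, 0) + 1
print(counts)  # -> {1.0: 4, 0.5: 1}; at scale 0.25 the level is smaller than a patch
```

Feeding such multi-scale patches into training would expose the classifier to larger apparent honeycomb sizes without changing the model itself.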
\subsection{Instance segmentation}
The \gls{mrcnn} models learned to detect honeycombs to a certain degree.
Figure \ref{fig:mrcnnTraining} depicts the training and validation losses as well as the bounding box \glspl{ap} on the validation set.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{appendix/plot/mrcnn_training_web}
\includegraphics[width=0.45\linewidth]{appendix/plot/mrcnn_training_metis}
\includegraphics[width=0.45\linewidth]{appendix/plot/mrcnn_training_metis_web}
\\
$(a)$ $(b)$ $(c)$ \\
\caption{Mask R-CNN training}\label{fig:mrcnnTraining}
\end{figure}
Neither model trained on a single dataset improved significantly past iteration 3000, while the model trained on both datasets converged at around 4000 iterations.
The validation \gls{ap} fluctuated considerably more on the \emph{web} validation set than on the \emph{metis} validation set, owing to the small size of the \emph{web} dataset.
Table \ref{tab:resBbox} displays the metrics of the \gls{mrcnn} models trained on the \gls{hicis} \emph{web} and \emph{metis} datasets, as well as on the combination of both, achieved on the validation sets.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}lcccccc}
& \multicolumn{3}{c}{\textbf{Web}} & \multicolumn{3}{c}{\textbf{Metis}} \\
\cline{2-4}\cline{5-7}
\textbf{metric} & \textbf{W} & \textbf{M} & \textbf{W+M} & \textbf{W} & \textbf{M} & \textbf{W+M} \\
\toprule
$AP_{IoU \geq 50}$ & \textbf{37.4} & 16.4 & 25.6 & 8.9 & 12.3 & \textbf{12.4} \\
$AP_{IoU \geq 0.5 : 0.95 : 0.05}$ & \textbf{22.2} & 7.9 & 17.4 & 3.1 & \textbf{6.0} & 5.7 \\
$AR_{IoU \geq 0.5 : 0.95 : 0.05}$ & \textbf{28.2} & 9.6 & 18.6 & 7.6 & 8.1 & \textbf{8.8} \\
\end{tabular}
\caption{Metrics for bounding boxes}\label{tab:resBbox}
\end{table}
Table \ref{tab:resMask} presents the corresponding metrics for the instance segmentation masks.
The \emph{web} model achieved lower performance on the \emph{metis} dataset than the \emph{metis} model, although achieving a slightly higher \gls{ar}. However, the inclusion of the \emph{web} data improved the model's segmentation masks slightly.
\begin{table}[H]
\centering
\begin{tabular}{!{\extracolsep{4pt}}lcccccc}
& \multicolumn{3}{c}{\textbf{Web}} & \multicolumn{3}{c}{\textbf{Metis}} \\
\cline{2-4}\cline{5-7}
\textbf{metric} & \textbf{W} & \textbf{M} & \textbf{W+M} & \textbf{W} & \textbf{M} & \textbf{W+M} \\
\toprule
$AP_{IoU \geq 50}$ & \textbf{33.0} & 14.5 & 23.2 & 7.3 & 11.7 & \textbf{11.9} \\
$AP_{IoU \geq 0.5:0.95:0.05}$ & \textbf{17.2} & 6.7 & 15.9 & 2.5 & 4.1 &\textbf{4.4} \\
$AR_{IoU \geq 0.5:0.95:0.05}$ & \textbf{23.7} & 8.6 & 17.0 & 6.1 & 6.0 & \textbf{7.0} \\
\end{tabular}
\caption{Metrics for segmentation masks}\label{tab:resMask}
\end{table}
All models achieved higher scores on the \emph{web} test set independent of the training set combination.
The inclusion of the \emph{metis} training data decreased the performance on the \emph{web} test set. The significantly larger \emph{metis} dataset led to a higher emphasis on the realistic images, causing the model trained on both datasets to perform worse on the \emph{web} dataset. Therefore, the decreased performance affirms the assumption that the images scraped from the web represent the most easily recognizable honeycombs.
The inclusion of the \emph{web} training set improved the model slightly on the \emph{metis} dataset, achieving the best values out of the three models in nearly all metrics.
Furthermore, the model trained on both datasets outperformed the model trained on only the \emph{metis} dataset for recall values over 20\% as illustrated by Figure \ref{fig:resPrCurveTest}.
In conclusion, the two datasets differ significantly, with the web images representing a limited selection of honeycombs.
Figure \ref{fig:resPrCurveTest} $(a)$ and $(b)$ depict the precision-recall curves of all three models at an \gls{iou} threshold of $0.5$ on the \emph{web} and \emph{metis} test sets, respectively. The two curves illustrate the cause of the low $AP$ and $AR$ values: because precision or recall drops to zero at many thresholds, their averages are pulled down substantially, with the models failing to exceed certain precision and recall values regardless of the confidence threshold.
\begin{figure}[H]
\centering
\begin{tabular} {cc}
\includegraphics[height=6cm]{appendix/plot/mask_all_pr_curve_test_web.jpg}
&\includegraphics[height=6cm]{appendix/plot/mask_all_pr_curve_test_metis.jpg}
\\
$(a)$ \emph{web} test set & $(b)$ \emph{metis} test set \\
\end{tabular}
\caption{Precision-recall curves}\label{fig:resPrCurveTest}
\end{figure}
In summary, the metrics for bounding boxes and segmentation masks both improved when combining the datasets, indicating that enlarging the dataset, and thereby its variance, is likely to improve performance further.
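The AP values reported in the tables follow the usual COCO-style computation: detections are ranked by confidence, flagged as true or false positives by IoU matching, and precision is averaged over an interpolated recall grid. A minimal sketch of that final averaging step, assuming the matching has already produced the true-positive flags:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP at one IoU threshold from per-detection confidences and TP flags."""
    order = np.argsort(scores)[::-1]            # rank detections by confidence
    flags = np.asarray(is_tp)[order]
    tp = np.cumsum(flags)
    fp = np.cumsum(~flags)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # 101-point interpolated average, as in the COCO evaluation protocol
    grid = np.linspace(0, 1, 101)
    return float(np.mean([precision[recall >= r].max() if (recall >= r).any()
                          else 0.0 for r in grid]))

ap = average_precision([0.9, 0.8, 0.7], [True, False, True], n_gt=2)
```

The example also shows why a single false positive between two true positives already depresses the interpolated average noticeably.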
\subsubsection{Evaluation for practical application}
Since considering only the average precision and recall metrics might lead to underestimating the model's performance as explained in Section \ref{sec:apIouLimitations}, the precision and recall at a specific confidence threshold were calculated. Therefore, the specification of a suitable value for the confidence threshold was required.
Table \ref{tab:valConfPr} shows the metrics of all models on both validation sets using an \gls{iou} threshold of 50\% at three different confidence thresholds for the bounding box detections. For instance, with a confidence threshold of 0.7 the model achieves a precision of 72.5\%, which is vastly higher than the \gls{ap} alone would suggest.
\begin{table}[H]
\centering
\scriptsize
\begin{tabular}{!{\extracolsep{4pt}}rccccccccc}
\multicolumn{2}{c}{} & \multicolumn{4}{c}{\textbf{Web}} & \multicolumn{4}{c}{\textbf{metis}} \\
\cline{3-6} \cline{7-10}
\addlinespace[1ex] \textbf{model} & \textbf{confidence} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} \\
\toprule
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{web}}}
& 0.3 & 0.3469 & 0.4722 & 0.4000 & 36 & 0.1386 & 0.2121 & 0.1677 & 198 \\
\multicolumn{1}{c|}{}& 0.5 & 0.4706 & 0.4444 & 0.4571 & 36& 0.1954 & 0.1799 & 0.1873 & 189 \\
\multicolumn{1}{c|}{}& 0.7 & 0.6087 & 0.3889 & 0.4746 & 36& 0.2526 & 0.1379 & 0.1784 & 174 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{metis}}}
& 0.3 & 0.5333 & 0.4444 & 0.4848 & 36 & 0.2060 & 0.3179 & 0.2500 & 195 \\
\multicolumn{1}{c|}{}& 0.5 & 0.7857 & 0.3056 & 0.4400 & 36 & 0.3077 & 0.2051 & 0.2462 & 195 \\
\multicolumn{1}{c|}{}& 0.7 & 0.7500 & 0.3000 & 0.4286 & 30 & 0.4694 & 0.1447 & 0.2212 & 159 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{W+M}}}
& 0.3 &0.6429 & 0.5000 & 0.5625 & 36 & 0.2576 & 0.3434 & 0.2944 & 198 \\
\multicolumn{1}{c|}{}& 0.5 & 0.8500 & 0.4722 & 0.6071 & 36 & 0.4000 & 0.2383 & 0.2987 & 193 \\
\multicolumn{1}{c|}{}& 0.7 & 0.9286 & 0.3611 & 0.5200 & 36 & 0.5283 & 0.1637 & 0.2500 & 171 \\
\end{tabular}
\caption{Differing confidence thresholds at an $IoU=0.5$ on the validation sets}\label{tab:valConfPr}
\end{table}
Table \ref{tab:testConfPr} shows the corresponding metrics achieved on the test sets. The threshold must be determined on the validation set (Table \ref{tab:valConfPr}); otherwise there would be no evidence that the chosen threshold generalizes adequately.
\begin{table}[H]
\centering
\scriptsize
\begin{tabular}{!{\extracolsep{4pt}}cccccccccc}
\multicolumn{2}{c}{} & \multicolumn{4}{c}{\textbf{web}} & \multicolumn{4}{c}{\textbf{metis}} \\
\cline{3-6} \cline{7-10}
\addlinespace[1ex] \textbf{model} & \textbf{confidence} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} & \textbf{precision} & \textbf{recall} & \textbf{f1-score} & \textbf{support} \\
\toprule
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{web}}}
& 0.3 & 0.3396 & 0.6316 & 0.4417 & 57 & 0.2222 & 0.3143 & 0.2604 & 210 \\
\multicolumn{1}{c|}{}& 0.5 & 0.4225 & 0.5263 & 0.4687 & 57 & 0.3161 & 0.3216 & 0.3188 & 171\\
\multicolumn{1}{c|}{}& 0.7 & 0.5682 & 0.4386 & 0.4950 & 57 & 0.3711 & 0.2323 & 0.2857 & 155 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{metis}}}
& 0.3 & 0.4167 & 0.4386 & 0.4274 & 57 & 0.3162 & 0.4095 & 0.3568 & 210 \\
\multicolumn{1}{c|}{}& 0.5 & 0.6800 & 0.2982 & 0.4146 & 57 & 0.4911 & 0.3293 & 0.3943 & 167 \\
\multicolumn{1}{c|}{}& 0.7 & 0.8333 & 0.1923 & 0.3125 & 52 & 0.7250 & 0.2266 & 0.3452 & 128 \\
\addlinespace[1ex] \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{W+M}}}
& 0.3 &0.4304 & 0.5965 & 0.5000 & 57 & 0.3116 & 0.4115 & 0.3546 & 209 \\
\multicolumn{1}{c|}{}& 0.5 & 0.6486 & 0.4211 & 0.5106 & 57 & 0.4786 & 0.3415 & 0.3986 & 164 \\
\multicolumn{1}{c|}{}& 0.7 &0.8333 & 0.2632 & 0.4000 & 57 & 0.6531 & 0.2462 & 0.3575 & 130 \\
\end{tabular}
\caption{Differing confidence thresholds at an $IoU=0.5$ on the test sets}\label{tab:testConfPr}
\end{table}
The confidence threshold of 0.5 was confirmed by the $F_1$ score on the \emph{metis} test set. Each model reaches its highest $F_1$ score on the \emph{web} test set at a different confidence threshold. However, the support is smaller than for the \emph{metis} set, and the differences in $F_1$ scores were minor, suggesting that the \emph{web} test set was not large enough. Overall, the metrics confirm the selection of a confidence threshold of 0.5 based on the validation set.
The high quality of some of the model's detection rates is illustrated by image $(a)$ of Figure \ref{fig:resQualityHigh}. The model predicted a near perfect segmentation mask for the honeycomb in the image. However, this is not representative of the detection quality in general.
\begin{figure}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[height=4cm]{appendix/detection/c7/metis_eval_metis_images_test/12}
& \includegraphics[height=4cm]{appendix/detection/g/metis_images_test/22}
& \includegraphics[height=4cm]{appendix/detection/c3/metis_eval_metis_images_test/22}
\\
$(a)$ near perfect segmentation mask & $(b)$ ground truth & $(c)$ predictions\\
\end{tabular}
\caption{Near perfect segmentation mask and differing division of honeycombs}\label{fig:resQualityHigh}
\end{figure}
As illustrated by images $(b)$ and $(c)$ in Figure \ref{fig:resQualityHigh}, the model's predictions dividing honeycombs into single or multiple instances reflect the challenge of defining instances of honeycombs discussed in Section \ref{sec:hicis}. Furthermore, differing divisions of honeycombs result in fewer true positives, since split honeycomb instances do not reach an \gls{iou} of $0.5$. One could address this issue by changing the problem type to classification and segmentation. Although an experiment classifying large honeycombs failed, as mentioned in Section \ref{sec:hicc}, the challenge of unclear instance divisions may justify further research in this direction. Nevertheless, this problem does not affect verification by inspectors and, therefore, might be addressed by increasing the training data.
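The effect of split instances on the IoU can be shown with a toy example: a single annotated honeycomb predicted as two halves leaves each prediction just below the 0.5 threshold, so neither counts as a true positive (coordinates are illustrative):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    ua = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / ua

gt = (0, 0, 100, 100)                            # one annotated honeycomb
halves = [(0, 0, 100, 48), (0, 52, 100, 100)]    # the model splits it in two
print([round(iou(h, gt), 2) for h in halves])    # -> [0.48, 0.48], both below 0.5
```

Even near-perfect pixel coverage therefore yields zero true positives once the instance division disagrees with the annotation.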
While the detections at higher confidence thresholds are mostly correct, lowering the confidence threshold to $0.3$ increased the number of false positives when the model encountered loose pebbles on the ground, as shown in Figure \ref{fig:fpPebbles}.
\begin{figure}[H]
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{appendix/detection/c3/metis_eval_metis_images_test/35}
& \includegraphics[width=0.3\textwidth]{appendix/detection/c5/metis_eval_metis_images_test/35}
& \includegraphics[width=0.3\textwidth]{appendix/detection/c7/metis_eval_metis_images_test/35}
\\
\end{tabular}
\caption{Loose pebbles detected as honeycombs}\label{fig:fpPebbles}
\end{figure}
In conclusion, the model performed reasonably well and shows promise for object detection to target honeycombs and possible defects with a vague outline. However, the differentiation of pebbles and honeycombs needs more research. Two possible approaches are the segmentation of concrete surfaces or adding more images of loose pebbles. Bunrit \acs{etal}\ \cite{bunrit2019evaluating} demonstrated that materials used in construction such as concrete could be classified.
Furthermore, the division of honeycombs into the correct instances may be a challenge, although likely to be solved by using an adequately large dataset.
\subsection{Instance Segmentation vs. Classifying Patches}
The previous two sections explored and discussed the results of the object detection and classification approaches. While object detection and classification metrics cannot be compared directly, they may still indicate differences.
Comparing the classification model trained on \emph{\gls{hicc}-metis-s224-p224} to the \gls{mrcnn} trained on the \gls{hicis} dataset, the classification model can achieve higher precision and recall. However, considering the harshness of the \gls{iou}, the difference is not large enough to dismiss the performance of the \gls{mrcnn} model. Another consideration is that users may perceive bounding boxes with segmentation masks as more natural compared to patch classification with \acs{gradCam} as \acs{eg} bounding circles are commonly used to indicate an area of importance in an image.
Labeling images for instance segmentation requires vastly more effort than for classification. Instance masks address a much more difficult problem formulation than class labels: the location of each instance, which pixels belong to it, and the number of instances in the image. While classifying a large image as a whole may not adequately address the user's need to locate the defect for faster verification, slicing an image into patches and classifying those may solve this problem, especially considering the effort of data acquisition.
\subsubsection{Qualitative Analysis on Realistic Images}
The validity of the quantitative comparison between instance segmentation and patch-based classification is limited because both address the problem in fundamentally different ways.
Although comparing object detection and patch-based classification using the \gls{iou} is possible \cite{veeranampalayam2020comparison}, this approach reduces the problem type to segmentation, which is the initial problem type of neither approach.
Since the quantitative comparison between the instance segmentation and patch-based classification approaches was not sufficient, an expert in defect documentation appraised the predictions manually, which the small size of the test set made feasible.
The qualitative analysis compared the detections and patch classifications of the best models side by side, \acs{ie} the \gls{mrcnn} trained on both datasets and the EfficientNet-B0 trained on only the \gls{hicc} \emph{metis-s112-p224} dataset. For each test image the following criteria were applied: first, whether the crucial honeycomb (in most cases the largest) was detected; second, whether other significant honeycombs were detected; third, how many false positives existed and whether their number exceeded the true positive count. Then an assessment of unsatisfactory, sufficient, or satisfactory was given. \emph{Satisfactory} describes an image whose honeycomb detections a potential user could confirm at a glance, requiring at least one correct detection. \emph{Sufficient} expresses that the detections could not be verified easily, either due to too many false positives or failure to detect the crucial honeycomb. Finally, \emph{unsatisfactory} describes detections that would have been unusable, either because no honeycomb was detected at all or none was detected correctly.
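For reproducibility, the rubric can be written down as a small helper; the argument names and the exact tie-breaking are assumptions made for illustration, not part of the original assessment:

```python
def assess(crucial_found, true_pos, false_pos):
    """Map one image's detection summary to the three-level rubric."""
    if true_pos == 0:                      # nothing detected correctly
        return "unsatisfactory"
    if not crucial_found or false_pos > true_pos:
        return "sufficient"                # hard to verify at a glance
    return "satisfactory"

print(assess(crucial_found=True, true_pos=3, false_pos=1))   # satisfactory
```

Encoding the rubric this way would let future work re-run the same assessment on new model outputs with less subjective drift.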
Since a direct comparison by metrics is not possible between classification and object detection, a qualitative analysis was necessary, although its results are limited by subjective bias.
Table \ref{tab:resQualScores} summarizes the assessment. While the instance segmentation approach produced more satisfactory detections, classifying patches had fewer unsatisfactory results.
\begin{table}[H]
\centering
\begin{tabular}{rccc}
\textbf{model} & \textbf{unsatisfactory} & \textbf{sufficient} & \textbf{satisfactory} \\
\toprule
EfficientNet-B0 trained on \emph{metis-s112-p224} & 4 & 22 & 12 \\
\gls{mrcnn} trained on both datasets & 8 & 12 & 18 \\
\end{tabular}
\caption{Assessment of detecting honeycombs in 38 realistic test images}\label{tab:resQualScores}
\end{table}
When directly comparing the detections, 16 were of similar quality; the object detection approach achieved better results in 13 cases but performed worse for 8 images.
The comparison confirmed that the instance segmentation masks are more convenient for users to validate.
Figure \ref{fig:easyConfirm} demonstrates the advantages of instance segmentations since the segmentations are more distinctive, especially around the steel pipes.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics*[width=0.3\linewidth]{appendix/detection/c5/metis_web_eval_metis_images_test/6}
& \includegraphics*[width=0.3\linewidth]{appendix/patch_classification/metis-s112-p224/metis_31187_115285.jpeg_cam.jpg}
\\
$(a)$ instance segmentation & $(b)$ classifying patches \\
\end{tabular}
\caption{Intuitiveness of instance segmentation compared to patch classification}\label{fig:easyConfirm}
\end{figure}
However, this might be addressed by classifying overlapping patches with a sliding window, \acs{eg} with a stride of half the patch size. When decreasing the stride to a single pixel, each pixel is classified, which is essentially segmentation, although such a segmentation would still lack the instance identities provided by the \gls{mrcnn} model.
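The sliding-patch idea can be sketched as follows: overlapping patch scores are averaged into a per-pixel map, and as the stride shrinks toward one pixel the map approaches a dense segmentation. The toy classifier below simply returns the patch's mean intensity in place of a CNN confidence:

```python
import numpy as np

def sliding_heatmap(image, classify, patch=224, stride=112):
    """Average patch scores into a per-pixel map; stride = patch // 2 as suggested."""
    h, w = image.shape[:2]
    score = np.zeros((h, w))
    hits = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            s = classify(image[y:y + patch, x:x + patch])
            score[y:y + patch, x:x + patch] += s
            hits[y:y + patch, x:x + patch] += 1
    return np.divide(score, hits, out=np.zeros_like(score), where=hits > 0)

# Toy input: only the lower-right quadrant is "defective".
img = np.zeros((448, 448))
img[224:, 224:] = 1.0
heat = sliding_heatmap(img, lambda p: float(p.mean()))
```

The resulting heatmap is coarser than an instance mask but already localizes the defect far better than a single whole-image label.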
Classifying by patches tended to create discontinuous honeycombs, while \gls{mrcnn} produced more continuous detections as illustrated by Figure \ref{fig:detectContinuouity}.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics*[width=0.3\linewidth]{appendix/detection/c5/metis_web_eval_metis_images_test/1}
& \includegraphics*[width=0.3\linewidth]{appendix/patch_classification/metis-s112-p224/metis_31401_122952.jpeg_cam.jpg} \\
$(a)$ & $(b)$ \\
\includegraphics*[width=0.3\linewidth]{appendix/detection/c5/metis_web_eval_metis_images_test/21}
& \includegraphics*[width=0.3\linewidth]{appendix/patch_classification/metis-s112-p224/metis_30683_103088.jpeg_cam.jpg} \\
$(c)$ & $(d)$ \\
\end{tabular}
\caption{Continuity of detections}\label{fig:detectContinuouity}
\end{figure}
In conclusion, both instance segmentation and classification with \acs{gradCam} are valid approaches for further research with larger datasets. Since the labeling of honeycombs for instance segmentation is significantly more challenging than classification, the latter could be superior for future research, especially considering the expert knowledge required and the ambiguities dividing honeycombs.
However, since instance segmentations are more natural to potential users and an initial \gls{mrcnn} model could be trained, an active learning approach could reduce labeling effort as demonstrated by Feng \acs{etal}\ \cite{feng2017deep}.
Therefore, future research should look into the problem using either approach depending on the integration of an active learning framework into a defect documentation system.
\chapter{Introduction}
\section{Introduction}
Construction defects are costly for the economy.
The cost of defect elimination is between 2\% and 12.4\% of the total cost of construction \cite{lundkvist2014proactive} and much time and effort is required to inspect construction sites and document defects \cite{nguyen2015smart}.
Automating the inspection of construction projects would free up resources and may even enable more frequent inspections, leading to more efficient construction projects.
Progress in computer vision and machine learning may enable the complete automation of this process in the future.
Although deep learning is applied to many different fields, research into image-based defect detection using deep learning is still limited in the construction industry, despite its large size, and focuses on security, progress, and productivity.
In contrast, there appear to be relatively few publications on methods utilized for object detection in quality assurance in construction. So far, the research into detecting defects has been mainly limited to defects occurring in the maintenance phase of infrastructure facilities such as roads, bridges, and sewer systems. \cite{xu2020computer}
This work focuses on the detection of honeycombs, which are large surface voids in concrete that often contain visible pebbles, as the lack of cement reveals the gravel. Honeycombs may expose reinforcements, \acs{ie} \glspl{rebar}, leading to erosion, and impair the water impermeability and static strength of the concrete.
\section{Related Work}
Most research into honeycomb detection uses a variety of sensor data, but not images.
For example, Ismail and Ong (2012) \cite{ismail2012honeycomb} used mode shapes to detect honeycombs in reinforced concrete beams. Vibration is induced into the concrete beam, and the displacement caused is measured at specific locations on the beams, describing the behavior of an object under dynamic load. Furthermore, V\"olker and Shokouhi \cite{volker2015clustering} developed a multi-sensor clustering-based method for honeycomb detection, using impact-echo, ultrasound, and ground-penetrating radar data.
To the author's best knowledge, the following is the only work using regular camera images and applying \gls{ml} for honeycomb detection; it serves as the baseline for this thesis.
Hung \acs{etal}\ \cite{hung2019surface} showed that \glspl{cnn} could classify concrete images into honeycomb, crack, moss, blistering, and normal classes with a precision of 93\% and a recall of 93\%. However, their published dataset, \gls{cdc}, is limited in size and was scraped from the internet, introducing a bias, as demonstrated in Section \ref{sec:cdcDifferences}. Regarding honeycombs, for example, images scraped from the internet are often explanatory illustrations and show the most easily identifiable honeycombs. Furthermore, the domain from which the pictures are drawn is limited to images of concrete, leading to the expectation of a high false positive rate when such models are applied to realistic defect images. Their work therefore cannot prove the applicability of \glspl{cnn} for honeycomb detection on images of general defects as collected in a defect documentation system. Nevertheless, it demonstrates that \glspl{cnn} may be applicable for the detection of honeycombs in concrete, assuming the images depict only concrete.
In conclusion, research into defect detection focuses on maintenance defects and often uses images which are not created by potential users. In recent research, \Gls{cnn}-based approaches dominated the field, including classification and object detection. As a result, this work evaluates both approaches and explores differences between an existing dataset of honeycomb images and a realistic dataset of images taken by construction site inspectors.
\section{Conclusion}
\label{chap:conclusion}
\subsection{EfficientNet-B0}
An EfficientNet-B0 was trained on the \gls{cdc} dataset \cite{hung2019surface} without adding any additional augmentations.
The model achieved state-of-the-art results on the \gls{cdc} dataset with $96.95\%$ accuracy, $97.32\%$ precision, and $96.76\%$ recall.
The successful application of EfficientNet-B0 demonstrated that model improvements on ImageNet also translate to better transfer-learning performance, and confirmed its adequacy for training a binary honeycomb classifier.
\subsection{HiCIS and HiCC datasets}
The \gls{hicc} and \gls{hicis} datasets are published on GitHub for use in research: \url{https://github.com/jdkuhnke/HiC}. \Gls{hicc} contains binary classification datasets for honeycombs in concrete. \Gls{hicis} contains datasets for detecting honeycombs with bounding boxes and instance segmentation masks labeled in the \emph{MS COCO} format. These datasets provide a basis for further research into honeycomb detection. The raw images are also included.
While the instance segmentation masks were labeled in good faith, smaller honeycombs may not be labeled accurately enough, especially for the $224 \times 224$ pixel patches derived from large images with fragmented honeycombs. As a result, the \gls{hicc} dataset likely contains some incorrect class labels.
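One plausible mechanism for such label noise is the derivation of patch labels from instance masks with a minimum-coverage rule; the threshold below is an assumption for illustration, not the actual labeling procedure:

```python
import numpy as np

def patch_labels(mask, patch=224, min_frac=0.05):
    """Label a patch positive if its honeycomb pixel fraction reaches min_frac."""
    h, w = mask.shape
    labels = {}
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            labels[(y, x)] = mask[y:y + patch, x:x + patch].mean() >= min_frac
    return labels

mask = np.zeros((448, 448))
mask[:30, :30] = 1        # small fragment, under two percent of its patch
mask[224:, 224:] = 1      # large honeycomb filling a whole patch
labels = patch_labels(mask)   # the small fragment's patch becomes a negative
```

Under such a rule, small but real honeycomb fragments silently turn into negative patches, which matches the label noise described above.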
The \gls{hicis} dataset could be extended by adding more challenging honeycombs from the raw images supplied in the repository.
The labeling process could be eased by training an initial model and using an active learning approach. The model would continuously train on data and create predictions for new unlabeled data, then feed back into the model for additional training. Feng \acs{etal}\ \cite{feng2017deep} demonstrated that active learning could assist with labeling defect images.
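A single round of the uncertainty-sampling variant of active learning can be sketched in a few lines; selecting the predictions closest to a confidence of 0.5 is one common criterion, assumed here for illustration:

```python
import numpy as np

def select_for_labeling(probs, k=3):
    """Uncertainty sampling: pick the k unlabeled images whose predicted
    honeycomb probability is closest to 0.5."""
    probs = np.asarray(probs)
    return np.argsort(np.abs(probs - 0.5))[:k]

# One round: the annotator only verifies the k most uncertain predictions
# instead of labeling every image from scratch.
unlabeled_probs = [0.98, 0.51, 0.03, 0.47, 0.88, 0.60]
print(select_for_labeling(unlabeled_probs))  # -> [1 3 5]
```

The newly verified labels would then be added to the training set and the model retrained before the next selection round.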
\subsection{Differences in datasets obtained from real scenarios and scraped from the web}
Differences between honeycomb datasets scraped from the web and images obtained from real scenarios were explored.
Both experiments, \acs{ie} classification and detection, suggest that images scraped from the web and those obtained from the field differ substantially.
In the case of classifying patches, the EfficientNet-B0 trained on the \gls{hicc} \emph{web-s224-p224} dataset achieved significantly lower scores on the \emph{metis-s224-p224} test set, \acs{ie} 41.3\% \gls{ap} and 42.5\% \gls{ar}, compared to the model trained on \emph{metis-s112-p224}, which achieved 68.0\% \gls{ap} and 67.2\% \gls{ar}. Furthermore, the model trained on \emph{metis-s112-p224} achieved a 97.3\% \gls{ap} and 96.3\% \gls{ar} on the \emph{web-s224-p224} test set, close to the scores of the model trained on \emph{web-s224-p224} itself, which achieved 98.7\% \gls{ap} and 97.8\% \gls{ar}.
For the instance segmentation, the \gls{mrcnn} trained on any combination of \gls{hicis} training datasets performed better on the \emph{web} test set than the \emph{metis} test set. The different performance on both datasets indicates a difference in the distribution of the two datasets.
In conclusion, the dataset scraped from the web represents only a limited selection of honeycombs rather than their full variance. Moreover, the inclusion of web images did not necessarily improve the accuracy of the model on realistic images.
\subsection{Evaluation of instance segmentation vs. patch-based classification for honeycomb detection}
Models detecting honeycombs by classifying patches and by instance segmentation were trained on \gls{hicc} and \gls{hicis} respectively.
EfficientNet-B0 trained on \gls{hicc} \emph{metis-s112-p224} achieved the overall highest performance on \emph{metis-s112-p224} with 67.6\% \gls{ap}, 67.2\% \gls{ar}, 68.5\% precision, and 55.7\% recall. While these metrics are lower compared to similar work for crack classification \cite{zhang2016road, cha2017deep, ozgenel2018performance, dorafshan2018comparison, feng2017deep}, the extent of the differences is expected, considering the size, quality and complexity of the datasets.
Although this thesis applied \acs{gradCam} to its honeycomb classifier to assist manual verification, the results also show that it can help segment positive honeycomb patches, potentially yielding better localization masks. Fan \acs{etal}\ \cite{fan2018automatic} stated that in cases of a high ratio of negative to positive pixels, a model is likely to learn to classify each pixel as negative. Therefore, the classification enabled by the trained EfficientNet-B0 model is essential for further research in this direction.
The \gls{mrcnn} trained on the \gls{hicis} \emph{web} and \emph{metis} datasets achieved a 12.4\% $AP_{IoU\geq50}$, 47.7\% precision, and 34.2\% recall on the \emph{metis} test set, as well as a 25.6\% $AP_{IoU\geq50}$, 64.9\% precision, and 42.1\% recall on the \emph{web} test set.
Although these metrics, especially the \glspl{ap}, are again lower compared to similar work for crack detection \cite{murao2019concrete, yin2020deep}, following the same reasoning as for honeycomb classification, the extent of the differences is expected, considering the size, quality, and complexity of the datasets.
While comparing instance segmentation and patch-based classification is challenging, when assessed quantitatively and qualitatively, the differences between the methods were not significant enough to lead to an indisputable choice. However, the former had a slight edge. Therefore, the decision between those problem types will depend on the context of possible implementation in practice, particularly on which approach integrates better with active learning in a defect documentation system.
In conclusion, the user-friendly detection of honeycombs was addressed by instance segmentation and patch-based classification. The experiments confirmed that honeycombs can be recognized by \glspl{cnn}, although the small dataset still limits the performance.
\subsection{Outlook}
While this thesis developed an initial model for detecting honeycombs, its performance would not yet suffice in practical applications. Nevertheless, models trained on the \gls{hicc} or \gls{hicis} datasets could be used in an active learning approach integrated into defect documentation systems, enabling future research into detecting construction defects, since the difficulty of obtaining labels will persist for the myriad of existing defect types.
However, addressing all defect types will require context information, \acs{eg} architectural plans or the \gls{bim} model, of the kind that research into image-based construction progress monitoring will make available \cite{rho2020automated,lei2019cnn,braun2020improving}.
In conclusion, \glspl{cnn} can detect honeycombs in concrete and will enable automated defect detection that assists humans at varying degrees of automation, until satisfactory results are achieved without human verification.
\section{Introduction}
\label{sec:intro}
The collapse of the core of a massive star at the end of its life forms a hot and dense object known as a proto-neutron star which cools via the emission of neutrinos over a period of $\sim 10\;{\rm s}$ \cite{2002RvMP...74.1015W,2007PhR...442...38J}. The spectra and flavor distribution of the neutrinos that emerge from the supernova are not the same as those emitted from the proto-neutron star: for a recent review see Mirizzi \emph{et al.} \cite{2016NCimR..39....1M}. At the present time the most sophisticated calculations of the neutrino flavor transformation adopt the so-called `bulb' model: the neutrino source is a spherically symmetric, hard neutrinosphere, the calculation assumes a steady state, and neutrinos are followed along multiple trajectories characterized by their angle of emission relative to the radial direction - the `multi-angle' approach \cite{Duan:2006an,Duan:2006jv}. The Hamiltonian governing the flavor evolution for a single neutrino depends on the local density profile plus a contribution from all the other neutrinos which are escaping the proto-neutron star - the neutrino self-interaction. The neutrino self-interaction depends upon the neutrino luminosity, mean energy and a term proportional to $1 - \cos\Theta$ due to the current-current nature of the weak interaction where $\Theta$ is the angle between two neutrino trajectories. Curiously, while the density profile and the neutrino spectra are sometimes taken from hydrodynamical simulations of supernova which include General Relativistic (GR) effects either exactly or approximately, e.g. from the simulations by Fischer \emph{et al.} \cite{2010A&A...517A..80F}, the calculations of the neutrino flavor transformation ignore them.
The flavor transformation that occurs in a supernova will alter the expected signal from the next Galactic supernova \cite{2008PhRvD..77d5023K,2009PhRvL.103g1101G,2011JCAP...10..019V,2013PhRvD..88b3008L}, as well as modify the Diffuse Supernova Neutrino Background \cite{2008JCAP...09..013C,2010ARNPS..60..439B,2010PhRvD..81e3002G,2011PhLB..702..209C,2012JCAP...07..012L,2013PhRvD..88h3012N,2016APh....79...49L}, and the nucleosynthesis that occurs in the neutrino driven wind \cite{1994ApJ...433..229W,2010JCAP...06..007C,2011JPhG...38c5201D,2015ApJ...808..188P,2015PhRvD..91f5016W}. Neutrino heating in the region behind the shock is thought to be the mechanism by which the star explodes and such heating depends upon the neutrino spectra of each flavor which depends upon the flavor transformation \cite{1985ApJ...295...14B,1996A&A...306..167J}. With so many different consequences of flavor transformation, one wonders how including GR in the flavor transformation calculations might alter our expectations.
GR effects upon neutrino oscillations in vacuum have been considered on several occasions e.g.\ \cite{1979GReGr..11..391S,1996GReGr..28.1161A,1996PhRvD..54.1587P,1997PhRvD..56.1895F,1998PThPh.100.1145K,1999PhRvD..59f7301B,2005PhRvD..71g3011L,2015GReGr..47...62V}. The inclusion of matter is occasionally considered \cite{1997PhRvD..55.7960C,2004GReGr..36.2183A,2013JCAP...06..015D,2016NuPhB.911..563Z} and the effect of GR is usually limited to a shift in the location and adiabaticity of the Mikheyev-Smirnov-Wolfenstein (MSW) resonance \cite{Mikheyev:1985aa,1978PhRvD..17.2369W} via the redshift of the neutrino energy. The effects of GR upon neutrino self-interactions have not been considered. The effect of GR has also been studied for the neutrinos emitted from the accretion disk surrounding a black hole formed in the merger of two neutron stars, a black hole and a neutron star, or in a collapsar. For example, Caballero, McLaughlin and Surman \cite{2012ApJ...745..170C} studied the GR effects for accretion disk neutrinos (but without neutrino transformation) and found the effects upon the nucleosynthesis were large because of the significant changes to the neutrino flux.
The aim of this paper is to explore the GR effects upon flavor transformation in supernovae including neutrino self-interactions and determine whether they might be important in different phases of the explosion. Our paper is organized as follows. In section \S\ref{sec:description} we describe our calculation and how the GR effects are included. Section \S\ref{sec:results} contains our results for the two representative cases we study: luminosities, mean and rms energies, density profiles and source compactness characteristic of the accretion phase, and a different set representative of the cooling phase. In section \S\ref{sec:halo} we discuss the conditions that lead to the formation of a neutrino halo - neutrinos that were emitted but which later turned-around and returned to the proto-neutron star. We present a summary and our conclusions in section \S\ref{sec:conclusions}.
\section{Calculation Description}
\label{sec:description}
\subsection{GR Effects Upon Neutrinos}
Before describing the formulation of neutrino oscillations in a curved spacetime, we first describe the three general relativistic effects that will be important. For this paper we adopt an exterior Schwarzschild metric for the space beyond the neutrinosphere\footnote{For simplicity we ignore the gravitational effect of the matter outside the neutrinosphere.} which is given by
\begin{equation}\label{eqn:metric1}
d{\tau^2} = B\left(r\right)dt^{2} - \frac{dr^{2}}{B\left(r\right)} - r^{2}d\psi^{2} - r^{2}{\sin^2}\psi\,d\phi^{2},
\end{equation}
where the function $B(r)$ is $B\left(r\right) = 1 - r_s/r$ and $r_s$ is the Schwarzschild radius given by $r_s = 2 G M$ with $M$ the gravitational mass. Throughout our paper we set $\hbar=c=1$. Since the rest mass of all neutrino species are much smaller than the typical energies of supernova neutrinos, we can comfortably take the ultra-relativistic limit and assume neutrinos follow null geodesics just like photons. The Schwarzschild metric is isotropic so all geodesics are planar. By setting $d{\tau^2} = 0$ and $d\phi=0$ so that the geodesic lies in the plane perpendicular to the equatorial plane, we obtain
\begin{equation}\label{eqn:metric2}
B\left(r\right)dt^{2} = \frac{dr^{2}}{B\left(r\right)} + r^{2}d{\psi^2}.
\end{equation}
The energy of a neutrino $E$ decreases as it climbs out of the gravitational well such that its energy at a given radial coordinate $r$ relative to its energy at $r\rightarrow \infty$, $E_{\infty}$, is
\begin{equation}
\frac{E}{E_{\infty}}= \frac{1}{\sqrt{B\left(r\right)}}.
\end{equation}
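As a quick numerical illustration (our own sketch, not from the paper; the function names are made up), the redshift factor $1/\sqrt{B(r)}$ can be evaluated for the accretion-phase parameters quoted later in the paper, $M = 1.33\,{\rm M_{\odot}}$ and $R_{\nu}=25\;{\rm km}$, using $r_s \simeq 2.95\;{\rm km}$ per solar mass:

```python
import math

def metric_B(r, r_s):
    """Schwarzschild metric function B(r) = 1 - r_s / r (r and r_s in the same units)."""
    return 1.0 - r_s / r

def redshift_factor(r, r_s):
    """E / E_infinity = 1 / sqrt(B(r)): local neutrino energy over its energy at infinity."""
    return 1.0 / math.sqrt(metric_B(r, r_s))

# Accretion-phase numbers used later in the paper: M = 1.33 M_sun, R_nu = 25 km,
# with r_s = 2 G M ~ 2.95 km per solar mass.
r_s = 2.95 * 1.33   # km
R_nu = 25.0         # km
print(redshift_factor(R_nu, r_s))   # ~1.09: the local energy is ~9% above E_infinity
```

So a neutrino at this neutrinosphere is roughly $9\%$ more energetic locally than the energy an observer at infinity would assign to it.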
The angular momentum $\ell$ of the neutrino also decreases as it climbs out of the potential well by the same scaling. This means the ratio of the neutrino's angular momentum to its energy is constant and in our chosen plane is given by
\begin{equation}\label{eqn:metric3}
\frac{\ell}{E} = \frac{r^2}{B\left(r\right)} \left|\frac{d\psi}{dt}\right| = b
\end{equation}
where $b$ is a constant called the \emph{impact parameter}. The impact parameter can be evaluated at the neutrinosphere $r=R_{\nu}$ where we find it is given by
\begin{equation}
b = \frac{R_{\nu}\sin\theta_R}{\sqrt{1 - r_s/R_{\nu}} } ,
\end{equation}
where $\theta_R$ is the emission angle of the neutrino with respect to the radial direction at the neutrinosphere. Using Eq. (\ref{eqn:metric3}) to eliminate $dt$ from Eq. (\ref{eqn:metric2}) we find\footnote{Here the plus sign is for outgoing neutrinos and the minus sign for ingoing neutrinos; this holds for all following equations.}
\begin{equation}
d\psi = \pm{\left[ {\frac{1}{b^2} - \frac{1}{r^2}\,B\left( r \right)} \right]^{-1/2}}\frac{dr}{r^2}.
\end{equation}
This equation describes the neutrino trajectory associated with a given emission angle $\theta_R$. Alternatively, using Eq. (\ref{eqn:metric3}) to eliminate $d\psi$ from Eq. (\ref{eqn:metric2}) gives
\begin{equation} \label{eqn:dt}
dt = \pm\frac{1}{B\left(r\right)}\frac{dr}{\sqrt {1 - \frac{b^2}{r^2}B\left(r\right)} }.
\end{equation}
For an observer at position $r$ the relation between the coordinate time $t$ and the local proper time\footnote{The ``local proper time'' is defined as the clock time of an observer sitting at a particular point along the neutrino trajectory.} $\tau$ is
\begin{equation}
d\tau^2 = B\left(r\right) dt^2
\end{equation}
so using the result from Eq. (\ref{eqn:dt}) we find
\begin{equation}\label{eqn:proper time}
d\tau = \pm\frac{1}{\sqrt{B\left(r\right)}}\frac{dr}{\sqrt{1 - \frac{b^2}{r^2}B\left(r\right)} }.
\end{equation}
This collection of equations will be useful when we describe flavor oscillations in a curved spacetime.
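These relations can be turned into a small numerical sketch (our own illustration with hypothetical function names): the impact parameter fixes the geodesic, and integrating Eq. (\ref{eqn:proper time}) by simple quadrature gives the proper time accumulated along an outgoing trajectory.

```python
import math

def metric_B(r, r_s):
    """Schwarzschild metric function B(r) = 1 - r_s / r."""
    return 1.0 - r_s / r

def impact_parameter(R_nu, theta_R, r_s):
    """b = R_nu sin(theta_R) / sqrt(1 - r_s / R_nu); conserved along the geodesic."""
    return R_nu * math.sin(theta_R) / math.sqrt(1.0 - r_s / R_nu)

def proper_time(R_nu, r_out, theta_R, r_s, n=20000):
    """Midpoint-rule integration of d(tau) = dr / (sqrt(B) sqrt(1 - b^2 B / r^2))
    along an outgoing trajectory from R_nu to r_out (lengths in km, c = 1)."""
    b = impact_parameter(R_nu, theta_R, r_s)
    dr = (r_out - R_nu) / n
    tau = 0.0
    for i in range(n):
        r = R_nu + (i + 0.5) * dr
        B = metric_B(r, r_s)
        tau += dr / (math.sqrt(B) * math.sqrt(1.0 - (b / r) ** 2 * B))
    return tau

# A radially emitted neutrino (theta_R = 0) travelling from R_nu = 25 km to
# r = 250 km with r_s = 3.9 km accumulates more proper time than the 225 km
# a flat-spacetime observer would expect:
print(proper_time(25.0, 250.0, 0.0, 3.9))
```

In the flat limit ($r_s = 0$, $b = 0$) the quadrature returns the coordinate distance exactly, which is a convenient consistency check.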
\subsection{Neutrino Oscillations In A Curved Spacetime}
\begin{figure}[b]
\centering
\includegraphics[scale=0.6]{fig_angles.eps}
\caption{Schematic of a neutrino trajectory in a strong gravitational field. Here $R_{\nu}$ is the radius of the neutrinosphere, $r$ is the distance from the center, $\theta_R$ is the emission angle, $\psi$ is the polar angle, and $\theta$ is the angle of intersection.}
\label{fig:angles}
\end{figure}
Our calculations of the effects of GR on neutrino flavor transformation are based upon the neutrino bulb model established by Duan \emph{et al.} \cite{Duan:2006an,Duan:2006jv}. In this model, neutrinos are emitted from a hard neutrinosphere with radius $R_{\nu}$ and for simplicity we assume the angular distribution of emission is half-isotropic. The setup is illustrated in Fig. \ref{fig:angles} which shows the trajectory of a neutrino emitted at the neutrinosphere $R_{\nu}$ with angle $\theta_{R}$ relative to the radial direction. After propagating to radial coordinate $r$, having swept through a polar angle $\psi$ from the point of emission, it makes an angle $\theta$ relative to the radial direction at $(r, \psi)$. The formulation of neutrino flavor transformation in a curved spacetime has been considered on multiple occasions \cite{1997PhRvD..56.1895F,1998PThPh.100.1145K,1999PhRvD..59f7301B,2005PhRvD..71g3011L,2015GReGr..47...62V,1997PhRvD..55.7960C}. The flavor state at some local proper time $\tau$ of a neutrino with momentum ${\bf q}$ is related to the flavor state at the local proper time of emission $\tau_0$ with momentum ${\bf q}_0$ via an evolution matrix $S(\tau,{\bf{q}};\tau_0,{\bf q}_0)$ which evolves according to the Schr\"odinger equation. In a curved spacetime the evolution matrix evolves with the local proper time $\tau$ as
\begin{equation}
i\frac{dS}{d\tau} = H\left(\tau\right)S.
\end{equation}
Here $H$ is the Hamiltonian which is also a function of the local proper time for the case of neutrinos in a non-uniform medium. The local proper time $\tau$ may be replaced with the radial coordinate $r$ by using Eq. (\ref{eqn:proper time}) once the impact parameter/emission angle is given. Similarly, the evolution of the antineutrinos is given by an evolution matrix $\bar{S}$ which evolves according to a Hamiltonian $\bar{H}$. Once the evolution matrix has been found, the probability that a neutrino in some generic initial state $\nu_{j}$ with momentum ${\bf q_0}$ at $\tau_0$ is later detected as state $\nu_i$ at proper time $\tau$ and momentum ${\bf q}$ is $P(\nu_j \rightarrow \nu_i) = P_{ij} = |S_{ij}(\tau,{\bf{q}};\tau_0,{\bf q}_0)|^2$.
The Hamiltonian $H$ is the sum of three terms: $H = H_V + H_{M} + H_{SI}$, where $H_{V}$ is the vacuum term, $H_M$ is the matter term describing the effect of passing through matter, and $H_{SI}$ is a term due to neutrino self-interactions. For the antineutrinos the Hamiltonian is also a sum of three terms, $\bar{H} = \bar{H}_V + \bar{H}_{M} + \bar{H}_{SI}$, which are related to the corresponding terms in the neutrino Hamiltonian via $\bar{H}_V = H_{V}^{\ast}$, $\bar{H}_M = -H^{\ast}_{M}$, $\bar{H}_{SI} = -H^{\ast}_{SI}$. In a flat spacetime the vacuum term for a neutrino with energy $E$ takes the form
\begin{equation}
H^{(f)}_{V} = \frac{1}{2E}\,U_V \left( \begin{array}{*{20}{c}} m_1^2 & 0 & 0 \\ 0 & m_2^2 & 0 \\ 0 & 0 & m_3^2 \end{array} \right) U_V^{\dagger}
\end{equation}
\\
where $m_i$ are the neutrino masses and $U_V$ is the unitary matrix relating the `mass' and flavor bases. The flavor basis is denoted by the superscript $(f)$ upon relevant quantities and we order the rows/columns as $e$, $\mu$, $\tau$ (here $\tau$ is the neutrino flavor, not local proper time). We adopt the Particle Data Group parameterization of the matrix $U_V$ which is in terms of three mixing angles $\theta_{12}$, $\theta_{13}$ and $\theta_{23}$ plus a CP violating phase $\delta_{CP}$ \cite{Olive:2016xmw}. In a curved spacetime the energy of a neutrino is dependent on position due to the gravitational redshift so the vacuum term will change accordingly and is
\begin{equation}
H^{(f)}_{V} = \frac{\sqrt{B(r)}}{2E_{\infty}}\,U_V \left( \begin{array}{*{20}{c}} m_1^2 & 0 & 0 \\ 0 & m_2^2 & 0 \\ 0 & 0 & m_3^2 \end{array} \right) U_V^{\dagger}.
\end{equation}
\\
The matter Hamiltonian $H_M$ in the flavor basis depends upon the electron density $n_{e}(r)$ and is simply
\begin{equation}
H^{(f)}_{M} = \sqrt{2}\,G_F\,n_e(r) \left( \begin{array}{*{20}{c}} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right).
\end{equation}
\subsection{The GR correction to neutrino self-interactions}
In addition to the vacuum and matter terms, in a neutrino dense environment such as a supernova we must add to the Hamiltonian a term due to neutrino self-interactions. The form of the self-interaction is
\begin{widetext}
\begin{equation}
\label{eqn:self-coupling}
{ H_{SI}}\left( {r,{\bf{q}}} \right) = \sqrt 2 {G_F}\sum\limits_{\alpha = e,\mu ,\tau } \int \left( 1 - {\bf{\hat q}} \cdot {\bf{\hat q}}' \right)\left[ \rho_{\alpha }\left( r,{\bf q}' \right)\,dn_{\alpha}\left( r,{\bf q}' \right) - \rho _{\bar\alpha}^*\left( r,{\bf q}' \right)\,dn_{\bar\alpha}\left( r,{\bf q}' \right) \right]\,dq'
\end{equation}
\end{widetext}
where $\rho_{\alpha}(r,{\bf q})$ is the density matrix of the neutrinos at position $r$ with momentum ${\bf q}$ and initial flavor $\alpha$ defined as $\rho_{\alpha}(r,{\bf q})=\psi_{\alpha}(r,{\bf q})\psi^{\dag}_{\alpha}(r,{\bf q})$, with $\psi_{\alpha}(r,{\bf q})$ being the corresponding normalized neutrino wave function, $dn_{\alpha}(r,{\bf q})$ is the differential neutrino number density \cite{Duan:2006an}, which is the differential contribution to the neutrino number density at $r$ from those neutrinos with initial flavor $\alpha$ and energy $\left|{\bf q}\right|$ propagating in the directions between ${\bf{\hat q}}$ and ${\bf {\hat q}}+d{\bf{\hat q}}$, per unit energy (the hats on ${\bf q}$ and ${\bf q}'$ indicate unit vectors). Note that here we have replaced the local proper time $\tau$ with the radial coordinate $r$ to denote the location along a given neutrino trajectory.
In order to use Eq. (\ref{eqn:self-coupling}) we have to first specify the expression for $dn_{\alpha}(r,{\bf q})$. This requires relating the neutrino momenta ${\bf q}$ at radial coordinate $r$ back to their values ${\bf q}_0$ at the neutrinosphere where they are initialized. After this relationship is obtained we can substitute $dn_{\alpha}(r,{\bf q})$ with $dn_{\alpha}(R_{\nu},{\bf q}_0)$ and calculate $H_{SI}$ by integrating over the neutrino momentum distributions at the neutrinosphere. While the magnitude of ${\bf q}$ is related to the magnitude of ${\bf q_0}$ via an energy redshift $q=q_0\sqrt{B(R_{\nu})/B(r)}$, relating $\hat{\bf q}$ to $\hat{\bf q}_0$ means finding the relation between the emission angle $\theta_R$ and the angle $\theta$ shown in Fig. \ref{fig:angles} since the neutrino trajectory is planar. In flat spacetime, the relation between $\theta_R$ and $\theta$ can be found through geometric arguments \cite{Duan:2006an}. In a curved spacetime, however, $\theta$ and $\theta_R$ might be expected to be related only after solving for the neutrino trajectory. But fortunately, for the Schwarzschild metric the relation between $\theta$ and $\theta_R$ can also be found simply by making use of the fact that the impact parameter $b$ is a conserved quantity along each neutrino trajectory \cite{2012ApJ...745..170C}. It makes no difference whether the impact parameter is evaluated at $R_{\nu}$ or at $r$, therefore $b(r)=b(R_{\nu})$. Using this conserved quantity we must have
\begin{equation}\label{eqn:b_conservation}
\frac{r\sin \theta}{\sqrt{1 - r_s/r}} = \frac{R_{\nu}\sin\theta_R}{\sqrt{1 - r_s/R_{\nu}} },
\end{equation}
from which we find
\begin{equation}\label{eqn:cos(theta)}
\cos\theta = \sqrt{1 - \left( \frac{R_{\nu}\sin\theta_R}{r} \right)^2 \left( \frac{1 - r_s/r}{1 - r_s/R_{\nu} } \right)} .
\end{equation}
In Fig. \ref{fig:theta_vs_thetaR} we plot the angle $\theta$ as a function of emission angle $\theta_R$ for three different ratios of $r_s$ to $R_{\nu}$ at $r=10\,R_{\nu}$. The figure shows that for each particular emission angle $\theta_R$, the trajectory bending effect always makes the angle $\theta$ larger than without GR. In the bulb model $\left( 1 - {\bf{\hat q}} \cdot {\bf{\hat q}}' \right)$ is found to be equivalent to $\left( 1 - \cos\theta \cos\theta' \right)$ after averaging over the angles in the plane perpendicular to the radial direction. Thus the correction to $\cos\theta$ by GR increases the magnitude of $H_{SI}$ by increasing the value of $1 - {\bf{\hat q}} \cdot {\bf{\hat q}}'$ for every neutrino.
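Eq. (\ref{eqn:cos(theta)}) is straightforward to evaluate numerically; the short sketch below (our own, with made-up function names) reproduces the qualitative behavior of Fig. \ref{fig:theta_vs_thetaR}: at fixed $\theta_R$, the intersection angle $\theta$ grows with $r_s/R_{\nu}$, and the impact parameter evaluated at $r$ equals the one evaluated at $R_{\nu}$.

```python
import math

def cos_theta(r, theta_R, R_nu, r_s):
    """Eq. (cos theta): local angle theta at radius r for emission angle theta_R."""
    s2 = (R_nu * math.sin(theta_R) / r) ** 2 * (1.0 - r_s / r) / (1.0 - r_s / R_nu)
    return math.sqrt(1.0 - s2)

# Tangential emission (theta_R = pi/2) evaluated at r = 10 R_nu, as in the figure.
r, R_nu, theta_R = 10.0, 1.0, math.pi / 2
for ratio in (0.0, 0.2, 0.5):
    theta = math.acos(cos_theta(r, theta_R, R_nu, ratio * R_nu))
    print(ratio, round(math.degrees(theta), 2))   # theta increases with r_s / R_nu
```

The last assertion of the test below checks the conservation of $b$ directly: $r\sin\theta/\sqrt{1-r_s/r}$ computed from the output of \texttt{cos\_theta} matches $R_{\nu}\sin\theta_R/\sqrt{1-r_s/R_{\nu}}$.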
\begin{figure}[t]
\includegraphics[scale=0.5]{fig_theta_vs_thetaR.eps}
\caption{The relationship between $\theta$ and $\theta_R$ for $r_s/R_{\nu}=0,0.2\;\rm{and}\; 0.5$ evaluated at $r=10\,R_{\nu}$.}
\label{fig:theta_vs_thetaR}
\end{figure}
Now that we have the expression relating $\theta$ to $\theta_R$, we can write the differential number density, after taking time dilation into account, as
\begin{widetext}
\begin{equation}
\label{eqn:dn_alpha}
dn_{\alpha}\left(r,{\bf q}\right)\equiv dn_{\alpha}\left(r,q,\theta\right)\equiv dn_{\alpha}\left(R_{\nu},q_0,\theta_R\right) = \frac{1}{2\pi r^{2}\sqrt{B(r)}}\,\left[ \frac{L_{\alpha,\infty}}{\left\langle E_{\alpha,\infty} \right\rangle} \right]\,f_{\alpha}\left( q_0 \right)\left(\frac{\cos\theta_R}{\cos\theta}\right)\left(\frac{dq_0}{dq}\right)\,d\cos\theta_{R},
\end{equation}
\end{widetext}
where $f_{\alpha}\left( q_0 \right)$ is the normalized distribution function for flavor $\alpha$ with momentum $q_0$ that redshifts to $q$ at $r$, $L_{\alpha,\infty}$ is the luminosity of flavor $\alpha$ at infinity if no flavor transformation had occurred, and similarly $\left\langle E_{\alpha,\infty}\right\rangle$ is the mean energy of neutrinos of flavor $\alpha$ at infinity again assuming no flavor transformation had occurred. The expression for the antineutrinos is similar. The derivation of Eq. (\ref{eqn:dn_alpha}) can be found in the Appendix.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{fig_enlarged_angle.eps}
\caption{The neutrino trajectories converging at $r=3R_{\nu}$ for (a) $r_s/R_{\nu}=0$ and (b) $r_s/R_{\nu}=0.6$.}
\label{fig:enlarged_angle}
\end{figure}
The density matrix $\rho_{\alpha}(r,{\bf q})$ for neutrinos at $r$ with momentum ${\bf q}$ is related to the corresponding density matrix at the neutrinosphere via $\rho_{\alpha}(r,{\bf q}) =S(r,{\bf q};R_{\nu},{\bf q}_0)\,\rho_{\alpha}(R_{\nu},{\bf{q}}_0)\,S^{\dag}(r,{\bf q};R_{\nu},{\bf q}_0)$ and the same for the antineutrinos using the evolution matrix $\bar{S}(r,{\bf q};R_{\nu},{\bf q}_0)$.
Combining these equations, we obtain the GR-corrected expression for the neutrino self-interaction in curved spacetime as
\begin{widetext}
\begin{eqnarray}\label{eqn:GR_self_coupling}
H_{SI}\left( {r,{\bf{q}}} \right) & = & \frac{\sqrt{2}\,G_{F}}{2\pi r^{2}\,\sqrt{B(r)}} \sum\limits_{\alpha = e,\mu ,\tau } \nonumber \\
& & \times \int \left( 1 - \cos\theta \cos\theta' \right) \left\{ \left[\frac{L_{\alpha,\infty}}{\left\langle E_{\alpha,\infty}\right\rangle} \right]\, \,\rho_{\alpha}(r,{\bf q'}) \, f_{\alpha}\left(q'_0 \right) - \left[\frac{L_{\bar\alpha,\infty}}{\left\langle E_{\bar\alpha,\infty}\right\rangle } \, \right]\, \rho^{\star}_{\bar\alpha}(r,{\bf q'}) \,f_{\bar\alpha}\left(q'_0 \right) \right\} \left(\frac{ \cos\theta_{R}'}{ \cos\theta' }\right)\,d\cos\theta_{R}'\,dq'_0. \nonumber \\ & &
\end{eqnarray}
\end{widetext}
When we take the weak gravity limit $r_s \ll r$ and $r_s \ll R_{\nu}$ we find this expression reduces to the same equation found in Duan \emph{et al.} \cite{Duan:2006an,Duan:2006jv}. This equation includes two GR effects: trajectory bending and time dilation (the energy redshift of the luminosity cancels with the energy redshift of the mean energy). In order to appreciate how significant the GR effects can be for the self-interaction Hamiltonian we show in Fig. \ref{fig:enlarged_angle} the neutrino trajectories which converge at a certain point above the surface of the central proto-neutron star. From the perspective of an observer at this point, the neutrinos seem to be coming from an expanded source whose radius is increased by a factor of $\sqrt{(1-r_s/r)/(1-r_s/R_{\nu})}$, which can be seen from Eq. (\ref{eqn:cos(theta)}). As noted earlier, the effect of trajectory bending causes the neutrino trajectories to cross at larger angles than in the case without GR. Time dilation also enhances the self-interaction because it leads to a larger effective neutrino flow rate. Close to the neutrinosphere time dilation is the larger effect because the effect of trajectory bending is small. At larger radii the situation is reversed with trajectory bending more important than time dilation.
To quantify the magnitude of the GR effects upon the self-interaction we show in the top panel of Fig. \ref{fig:enhancement_factor} the enhancement of the self-interaction due to GR, which is defined to be the ratio of the magnitude of the self-interaction potential with GR effects to that without, as a function of the coordinate $r$ and assuming no flavor oscillation occurs, for different values of $r_s/R_{\nu}$. The striking feature of the GR effects is that, even though the spacetime curvature is only pronounced near the proto-neutron star, the enhancement of the neutrino self-coupling turns out to be a long-range effect that is asymptotic to a value greater than unity which depends upon the ratio $r_s/R_{\nu}$. Since the influence of GR on neutrino flavor transformation is not just a local effect, it can have repercussions upon processes at larger radii such as neutrino heating in the accretion phase and nucleosynthesis in the cooling phase.
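The origin of this long-range behavior can be seen in a toy version of the enhancement factor (our own simplified estimate, not the paper's full multi-angle potential): take a radially propagating test neutrino and a single species with no oscillations, and form the ratio of the GR angular integral $\int_0^1(1-\cos\theta')(\cos\theta_R'/\cos\theta')\,d\cos\theta_R'$, including the $1/\sqrt{B(r)}$ time-dilation factor, to its flat-spacetime counterpart. In this toy model the ratio tends to $1/(1-r_s/R_{\nu})$ at large $r$, so the enhancement indeed saturates at a value above unity set by the compactness.

```python
import math

def metric_B(r, r_s):
    return 1.0 - r_s / r

def cos_theta(r, cos_theta_R, R_nu, r_s):
    """Eq. (cos theta) written in terms of cos(theta_R)."""
    s2 = (1.0 - cos_theta_R ** 2) * (R_nu / r) ** 2 * (1.0 - r_s / r) / (1.0 - r_s / R_nu)
    return math.sqrt(1.0 - s2)

def angular_strength(r, R_nu, r_s, n=4000):
    """Angular factor felt by a radial test neutrino: midpoint-rule integral of
    (1 - cos theta') (cos theta_R' / cos theta') over cos theta_R' in [0, 1]."""
    dc = 1.0 / n
    total = 0.0
    for i in range(n):
        cR = (i + 0.5) * dc
        c = cos_theta(r, cR, R_nu, r_s)
        total += (1.0 - c) * (cR / c) * dc
    return total

def enhancement(r, R_nu, r_s):
    """Toy GR enhancement: (time dilation) x (trajectory-bending ratio)."""
    return (angular_strength(r, R_nu, r_s) / math.sqrt(metric_B(r, r_s))
            / angular_strength(r, R_nu, 0.0))

R_nu = 1.0
for ratio in (0.2, 0.5):
    print(ratio,
          round(enhancement(1.0, R_nu, ratio * R_nu), 4),     # at R_nu: pure time dilation
          round(enhancement(1000.0, R_nu, ratio * R_nu), 4))  # asymptote ~ 1/(1 - r_s/R_nu)
```

At $r = R_{\nu}$ the angular integrals with and without GR coincide, so the enhancement reduces to the pure time-dilation factor $1/\sqrt{B(R_{\nu})}$, matching the behavior of the blue curve described in the text.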
\begin{figure}
\includegraphics[scale=0.5]{fig_enhancement_factor1.eps}
\centering
\includegraphics[scale=0.5]{fig_enhancement_factor2.eps}
\caption{\emph{Top:}
The enhancement factor as a function of distance for three different ratios of the Schwarzschild radius relative to the neutrinosphere radius. \emph{Bottom:} The enhancement factor as a function of compactness, at three different distances. The two vertical dashed lines indicate the compactness of the sources we use in our calculations for the accretion phase and cooling phase, respectively. \label{fig:enhancement_factor} }
\end{figure}
As we have seen, the magnitude of the GR effect is governed by the ratio of the neutrinosphere radius to the Schwarzschild radius of the proto-neutron star, the latter being proportional to the mass of the proto-neutron star. This suggests we define a neutrino `compactness' - similar to the definition of compactness found in O'Connor \& Ott \cite{2013ApJ...762..126O} - as
\begin{equation}
\xi_{\nu} = \frac{M/M_{\odot}}{R_{\nu}/10{\;\rm{km}}} = \frac{r_s/2.95{\;\rm{km}}}{R_{\nu}/10{\;\rm{km}}}= 3.39\frac{r_s}{R_{\nu}}.
\end{equation}
In the bottom panel of Fig. \ref{fig:enhancement_factor} we plot the enhancement factor as a function of compactness at different distances from the center of the proto-neutron star. For a very compact neutrino source we find the enhancement of the self-interaction can be as large as $300\%$ (a factor of three) if $\xi_{\nu} \sim 2.26$, which corresponds to $r_s/R_{\nu}=2/3$. We shall explain the significance of this compactness in section \S\ref{sec:halo}. The blue line in this figure shows the enhancement factor at the neutrinosphere, where the trajectory bending effect is minimal; here the enhancement is purely due to time dilation.
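For reference, the compactness defined above is trivial to evaluate; the small sketch below (ours) reproduces the values for the two cases computed later in the paper.

```python
def compactness(M_solar, R_nu_km):
    """xi_nu = (M / M_sun) / (R_nu / 10 km) = 3.39 r_s / R_nu."""
    return M_solar / (R_nu_km / 10.0)

def rs_over_Rnu(xi_nu):
    """Invert xi_nu = 3.39 r_s / R_nu to recover r_s / R_nu."""
    return xi_nu / 3.39

print(compactness(1.33, 25.0))   # accretion phase: ~0.53
print(compactness(1.44, 17.0))   # cooling phase:   ~0.85
print(rs_over_Rnu(2.26))         # ~2/3, the compactness singled out in the text
```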
\section{Numerical Calculations}
\label{sec:results}
With the formulation complete and with the insights gained from the computation of the enhancement as a function of compactness, we proceed to compute numerically the multi-angle neutrino flavor evolution for two representative cases. These are a density profile, neutrino spectra and compactness typical of the accretion phase of a supernova, and one representative of the cooling phase. The neutrino mixing angles and mass-squared differences we adopt are $m^2_2-m^2_1=7.5\times10^{-5}\;\text{eV}^2$, $m^2_{3}-m^2_{2}=-2.32\times10^{-3}\;\text{eV}^2$, $\theta_{12}=33.9^\circ$, $\theta_{13}=9^\circ$, and $\theta_{23}=45^\circ$. The CP phase $\delta_{CP}$ is set to zero. We do not consider the normal mass ordering on the basis of the results by Chakraborty \emph{et al.} \cite{2011PhRvL.107o1101C} and Wu \emph{et al.} \cite{2015PhRvD..91f5016W}.
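For concreteness, the mixing matrix and the vacuum term can be assembled as follows (our own sketch using the PDG parameterization and the parameter values above; the overall units of the Hamiltonian are left schematic, since only its structure matters here):

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta_cp=0.0):
    """PDG parameterization: U = R23(theta23) U13(theta13, delta) R12(theta12)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta_cp)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta_cp), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

def H_vacuum(E_inf, r, r_s, U, dm2_21, dm2_32):
    """sqrt(B(r)) / (2 E_inf) * U diag(m_i^2) U^dagger, with m_1^2 set to zero.
    Mass-squared splittings in eV^2, energy in MeV; units schematic."""
    m2 = np.diag([0.0, dm2_21, dm2_21 + dm2_32]).astype(complex)
    return np.sqrt(1.0 - r_s / r) / (2.0 * E_inf) * (U @ m2 @ U.conj().T)

U = pmns(np.radians(33.9), np.radians(9.0), np.radians(45.0))
H = H_vacuum(15.0, 50.0, 3.9, U, 7.5e-5, -2.32e-3)   # inverted ordering
print(np.allclose(U @ U.conj().T, np.eye(3)))        # unitarity check
```

Diagonalizing $H$ recovers the input mass-squared values scaled by $\sqrt{B(r)}/2E_{\infty}$, which is a quick sanity check that the GR redshift enters only as an overall factor in the vacuum term.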
\subsection{Application to SN accretion phase}
\begin{figure}[t]
\includegraphics[clip,angle=0,width=\linewidth]{fig_density.eps}
\caption{The matter density profiles of the $10.8\;\rm{M_{\odot}}$ simulation by Fischer \emph{et al.} \cite{2010A&A...517A..80F} at postbounce times $t_{pb} = 0.3\;{\rm s}$ (red solid line) and $t_{pb} = 2.8\;{\rm s}$ (blue solid line).}
\label{fig:density}
\end{figure}
\begin{table}[b]
\begin{tabular}{lc}
Flavor\; & \;Luminosity $L_{\alpha,\infty}$ \; \\
\hline
$e$ & $41.52\times 10^{51}\;{\rm erg/s}$ \\
$\mu$, $\tau$ & $14.23\times 10^{51}\;{\rm erg/s}$ \\
$\bar{e}$ & $42.35\times 10^{51}\;{\rm erg/s}$ \\
$\bar{\mu}$, $\bar{\tau}$ & $14.39\times 10^{51}\;{\rm erg/s}$ \\
\\Flavor\; & \;Mean Energy $\langle E_{\alpha,\infty}\rangle$ \; \\
\hline
$e$ & $10.39\;{\rm MeV}$ \\
$\mu$, $\tau$ & $16.19\;{\rm MeV}$ \\
$\bar{e}$ & $12.67\;{\rm MeV}$ \\
$\bar{\mu}$, $\bar{\tau}$ & $16.40\;{\rm MeV}$
\end{tabular}
\caption{The luminosities and mean energies used for the accretion phase calculation.} \label{tab:accretionphase}
\end{table}
For the accretion phase we use the density profile at $t_{pb}=0.3\;{\rm s}$ postbounce from Fischer \emph{et al.} \cite{2010A&A...517A..80F} for the $10.8\;{\rm M_{\odot}}$ progenitor. As previously stated, this simulation includes GR effects in both the hydrodynamics and evolution of the neutrino phase space density (see Liebend{\"o}rfer \emph{et al.} \cite{2004ApJS..150..263L} for further details about the code). The density profile at this snapshot time is shown by the red line in Fig. \ref{fig:density}. We set the neutrinosphere radius to be $R_{\nu}=25\;{\rm km}$ which corresponds to the minimum of the electron fraction for this model at this time. This working definition for the neutrinosphere radius comes from noting the coincidence of the electron fraction minimum and the neutrinosphere radii shown in figures (7) and (8) in Fischer \emph{et al.} and produces a curve which is similar to figure (15) found in their paper. We note that the value of $R_{\nu}$ we adopt is different from the value estimated by others, e.g.\ \cite{2012PhRvD..85k3007S,2011PhRvL.107o1101C}, which tend to use relatively larger values for $R_{\nu}$ during the accretion phase. From the simulation we find the mass enclosed within the $R_{\nu}=25\;{\rm km}$ radius is $M = 1.33\;{\rm M_{\odot}}$, giving a compactness of $\xi_{\nu} = 0.53$. The neutrino luminosities and mean energies we use are also taken from the same simulation and are listed in table (\ref{tab:accretionphase}). To save computational resources we use a source distribution $f_{\alpha}(q_0)$ which is a delta-function at a single energy taken to be $15\;{\rm MeV}$. Single-energy calculations were also undertaken by Chakraborty \emph{et al.} \cite{2011PhRvD..84b5002C} when they also studied the self-interaction effects during the accretion phase. As previously stated, the angular distribution is assumed to be half-isotropic which is the same distribution used in Duan \emph{et al.} \cite{Duan:2006an,Duan:2006jv}.
\begin{figure}[t]
\includegraphics[clip,angle=270,width=\linewidth]{fig_accretion.eps}
\caption{The survival probability of electron neutrinos as a function of distance in the SN accretion phase at $t_{pb}=0.3\;{\rm s}$. The result is averaged over all angular bins and $R_{\nu}$ is set to $25\;{\rm km}$. The red solid line and blue dotted line show the results with and without GR effects, respectively. The vertical dashed lines labeled $r_{sync}$ and $r_{end}$ are the predicted beginning and ending locations of bipolar oscillations as given by the equations in \cite{2011PhRvD..84b5002C}. The position of the shock wave is also indicated and labeled $r_{shock}$.}
\label{fig:accretion_phase}
\end{figure}
Our results are shown in Fig. \ref{fig:accretion_phase} which is a plot of the electron flavor survival probability averaged over all angular bins as a function of distance. In the figure we also include three vertical dashed lines to indicate the start of the bipolar oscillation region, the position of the shockwave, and the end of the bipolar oscillation region. The predictions for the beginning and end of the bipolar oscillation region come from equations given in Chakraborty \emph{et al.} \cite{2011PhRvD..84b5002C}. The change in the angle-averaged survival probability $P_{ee}$ which occurs at $r\sim 475\;{\rm km}$ is simply decoherence \cite{2011PhRvL.107o1101C}. Comparing the results with and without GR effects we see the decoherence is slightly delayed when GR is included but the difference is only of order $\sim 20\;{\rm km}$ and the final result is identical to the case without GR. Thus it appears GR has little effect upon flavor transformation during the accretion phase, and what little change does occur lies in a region where it has little consequence.
\subsection{Application to SN cooling phase}
As the proto-neutron star cools it contracts, which increases the compactness. The sensitivity of the neutrino self-interaction to the compactness means we might expect a larger effect from GR during the cooling phase. To test whether this is the case we use the density profile at $t_{pb}=2.8\;{\rm s}$ postbounce from the Fischer \emph{et al.} \cite{2010A&A...517A..80F} simulation for the same $10.8\;{\rm M_{\odot}}$ progenitor, which is shown by the blue line in Fig. \ref{fig:density}. We set the neutrinosphere radius to be $R_{\nu}=17\;{\rm km}$ which, again, is close to the minimum of the electron fraction for this model at this time and consistent with figure (15) from Fischer \emph{et al.}. The mass enclosed within this radius is $M\approx 1.44\;{\rm M_{\odot}}$, giving a compactness of $\xi_{\nu} = 0.85$. For this cooling epoch calculation we use a multi-energy as well as a multi-angle treatment. The neutrino energy range is chosen to be $E_{\infty} = 1\;{\rm MeV}$ to $E_{\infty} = 60\;{\rm MeV}$, and is divided into 300 equally spaced energy bins. To generate the neutrino spectra for flavor $\alpha$ at the neutrinosphere we use the luminosities, mean energies and rms energies at this snapshot of the simulation - listed in table (\ref{tab:coolingphase}) - and insert them into the pinched thermal spectrum of Keil, Raffelt and Janka \cite{2003ApJ...590..971K} which has the form
\begin{equation}
{f_\alpha }({q_0}) = \frac{{{{({A_\alpha } + 1)}^{{A_\alpha } + 1}}q_0^{{A_\alpha }}}}{{{{\langle {E_{\alpha ,{R_\nu }}}\rangle }^{{A_\alpha } + 1}}\Gamma ({A_\alpha } + 1)}}\exp \left( { - \frac{{({A_\alpha } + 1)\,{q_0}}}{{\langle {E_{\alpha ,{R_\nu }}}\rangle }}} \right),
\end{equation}
with $\langle E_{\alpha ,{R_\nu }}\rangle = \langle E_{\alpha ,\infty}\rangle /\sqrt{B(R_{\nu})}$ and the pinch parameter $A_{\alpha}$ for flavor $\alpha$ is given by
\begin{equation}
A_{\alpha} = \frac{2\,\langle E_{\alpha,\infty}\rangle^2 - \langle E^2_{\alpha,\infty}\rangle }{\langle E^2_{\alpha,\infty}\rangle - \langle E_{\alpha,\infty}\rangle^2 }.
\end{equation}
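As a consistency check on these expressions (taking $B(r)=1-r_s/r$, so that $r_s\approx 4.25\;{\rm km}$ for $M\approx 1.44\;{\rm M_{\odot}}$ and $B(R_{\nu})\approx 0.75$ at $R_{\nu}=17\;{\rm km}$), the electron-flavor entries of table (\ref{tab:coolingphase}) give
\begin{equation*}
\langle E_{e,R_{\nu}}\rangle = \frac{9.891\;{\rm MeV}}{\sqrt{0.75}} \approx 11.4\;{\rm MeV},
\qquad
A_{e} = \frac{2\,(9.891)^2-(11.12)^2}{(11.12)^2-(9.891)^2} \approx 2.8,
\end{equation*}
i.e.\ a spectrum slightly more pinched than a Maxwell-Boltzmann distribution, for which $A=2$.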
\begin{table}[b]
\begin{tabular}{lc}
Flavor\; & \;\;Luminosity $L_{\alpha,\infty}$ \\
\hline
$e$ & $2.504\times 10^{51}\;{\rm erg/s}$ \\
$\mu$, $\tau$ & $2.864\times 10^{51}\;{\rm erg/s}$ \\
$\bar{e}$ & $2.277\times 10^{51}\;{\rm erg/s}$ \\
$\bar{\mu}$, $\bar{\tau}$ & $2.875\times 10^{51}\;{\rm erg/s}$ \\
\\
Flavor\; & \;Mean Energy $\langle E_{\alpha,\infty}\rangle$\; \\
\hline
$e$ & $9.891\;{\rm MeV}$ \\
$\mu$, $\tau$ & $12.66\;{\rm MeV}$ \\
$\bar{e}$ & $11.83\;{\rm MeV}$ \\
$\bar{\mu}$, $\bar{\tau}$ & $12.70\;{\rm MeV}$ \\
\\
Flavor\; & \;rms Energy $\sqrt{ \langle E^2_{\alpha,\infty}\rangle }$\; \\
\hline
$e$ & $11.12\;{\rm MeV}$ \\
$\mu$, $\tau$ & $14.99\;{\rm MeV}$ \\
$\bar{e}$ & $13.65\;{\rm MeV}$ \\
$\bar{\mu}$, $\bar{\tau}$ & $15.07\;{\rm MeV}$
\end{tabular}
\caption{The luminosities, mean energies, and rms energies used for the cooling phase calculation.} \label{tab:coolingphase}
\end{table}
\begin{figure}[t]
\includegraphics[clip,angle=270,width=\linewidth]{fig_cooling.eps}
\caption{The survival probability of electron neutrinos as a function of distance using neutrino spectra and a density profile taken from the cooling phase of a simulation of a $10.8\;{\rm M_{\odot}}$ progenitor by Fischer \emph{et al.} \cite{2010A&A...517A..80F}. The electron flavor survival probability is averaged over all angular bins and energy bins. The red solid line and blue dotted line are the results with and without GR effects respectively.}
\label{fig:cooling_phase}
\end{figure}
The result of this calculation is shown in Fig. \ref{fig:cooling_phase} where we plot the electron neutrino flavor survival probability averaged over all angular bins and energy bins (using the emitted neutrino spectrum as the weighting function) as a function of distance. At this epoch self-interaction effects occur much closer to the proto-neutron star and the effect of GR is more important. The net result of adding GR is to delay the onset of bipolar oscillations by around $25\;{\rm km}$, and once more we find that the survival probability at large radii is almost identical to that without GR. But while this shift in the onset of bipolar oscillations may seem small, we note the neutrino flavor evolution in the region $50\;{\rm km} \lesssim r \lesssim 500\;{\rm km}$ was found to be crucial for determining the nucleosynthesis yields in the calculations by Duan \emph{et al.} \cite{2011JPhG...38c5201D} and Wu \emph{et al.} \cite{2015PhRvD..91f5016W}, so even a relatively small delay of flavor transformation caused by GR might have a consequence.
\section{The GR Neutrino Halo}
\label{sec:halo}
So far we have considered only cases where all neutrinos propagate to $r\rightarrow \infty$. However if the compactness of the source becomes too large the neutrinosphere sinks below the ``photon sphere'', whose radius is $3r_s/2$. When this occurs there will be a critical emission angle for neutrinos beyond which they cannot escape to infinity. Following the argument in Hartle \cite{2003gieg.book.....H}, one can show that the condition for a neutrino emitted at angle $\theta_R$ from the radial direction at $R_{\nu}$ to escape to infinity is
\begin{equation}\label{critical_angle}
\frac{2}{{3\sqrt 3 }}\frac{{{R_\nu }}}{{{r_s}}}\frac{1}{{\sqrt {1 - {r_s}/{R_\nu }} }}\sin{\theta_R} < 1.
\end{equation}
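Note that at $R_{\nu}/r_s=3/2$, i.e.\ with the neutrinosphere exactly at the photon sphere, the prefactor in Eq. (\ref{critical_angle}) is
\begin{equation*}
\frac{2}{3\sqrt 3}\cdot\frac{3}{2}\cdot\frac{1}{\sqrt{1-2/3}} = \frac{1}{\sqrt 3}\cdot\sqrt 3 = 1,
\end{equation*}
so the condition reduces to $\sin\theta_R<1$ and is only marginally violated for tangential emission; for smaller neutrinospheres a nontrivial critical angle appears.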
We show three example neutrino trajectories for the case $R_{\nu}/r_s < 3/2$ in Fig. \ref{fig:GR_neutrino_halo}. Trajectories 1 and 2 are open and a neutrino emitted along these trajectories will propagate to infinity; the trajectories of neutrinos emitted at sufficiently large angles - such as trajectory 3 - will turn around and return to the proto-neutron star. Note that the farthest place where a neutrino can turn around is the photon sphere. The consequences of such trajectories are included in simulations which include GR. In principle there is a substantial change to the flavor evolution calculations when neutrinos start to follow trajectories such as trajectory 3 in Fig. \ref{fig:GR_neutrino_halo}, because they lead to the formation of a neutrino `halo' around the proto-neutron star, similar to the neutrino halos produced by scattering on matter \cite{2012PhRvL.108z1104C,2013PhRvD..87h5037C}.
\begin{figure}[t]
\includegraphics[clip,angle=0,width=\linewidth]{fig_GR_neutrino_halo.eps}
\caption{Typical neutrino trajectories near an ultra-compact source. The inner and outer dashed lines represent the Schwarzschild radius and the photon sphere respectively. The three trajectories correspond to three different emission angles.}
\label{fig:GR_neutrino_halo}
\end{figure}
\begin{figure}[b]
\includegraphics[clip,angle=0,width=\linewidth]{fig_critical_angle.eps}
\caption{The maximum emission angle of neutrinos that can escape the source, for different values of $R_{\nu}/r_s$. The vertical dashed line indicates the position of the photon sphere.}
\label{fig:critical_angle}
\end{figure}
From Eq. (\ref{critical_angle}) we can evaluate the critical angle as a function of $R_{\nu}/r_s$; the relation is shown in Fig. \ref{fig:critical_angle}. If $R_{\nu}/r_s>3/2$, clearly neutrinos with all emission angles can escape and no neutrino halo is formed. We define a critical compactness $\xi_{\nu\star}$ to be the case where $R_{\nu}/r_s = 3/2$ and find it equal to $\xi_{\nu\star} = 2.26$, the value discussed earlier. The compactness of the sources we have considered for our previous numerical calculations did not approach this value because the mass of the proto-neutron star is not sufficiently large and the neutrinospheres lie beyond the photon sphere. To reach the critical compactness for formation of the halo we require a more massive proto-neutron star with a smaller neutrinosphere.
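The value of the critical compactness can be obtained directly. Writing the compactness as $\xi_{\nu}=(M/{\rm M_{\odot}})/(R_{\nu}/10\;{\rm km})$, consistent with the values quoted for the accretion and cooling phase calculations, and setting $R_{\nu}=\tfrac 32 r_s=3GM/c^2\approx 4.43\,(M/{\rm M_{\odot}})\;{\rm km}$, we find
\begin{equation*}
\xi_{\nu\star}=\frac{M/{\rm M_{\odot}}}{0.443\,(M/{\rm M_{\odot}})}\approx 2.26,
\end{equation*}
independent of the mass of the proto-neutron star.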
Whether a proto-neutron star surpasses the critical compactness while the proto-neutron star is still cooling via neutrino emission will depend upon the Equation of State of dense matter and the neutrino opacity \cite{2016PhR...621..127L,2016arXiv161110226H}.
Note that from causality the radius of a neutron star is required to satisfy $R_{NS} \gtrsim 2.823\, GM/c^2$ \cite{2016PhR...621..127L} which, if we set $R_{\nu} = R_{NS}$, corresponds to a compactness of $\xi_{\nu} = 2.4$, beyond the critical value $\xi_{\nu\star}$. A halo will therefore certainly form immediately preceding the collapse of a proto-neutron star to a black hole.
The formation of a neutrino halo has consequences for the cooling of the proto-neutron star as well as the flavor transformation due to neutrino self-interaction. One can find a presentation of the changes that occur to the emitted neutrino spectra as the mass of the proto-neutron star approaches its maximum mass in Liebend{\"o}rfer \emph{et al.} \cite{2004ApJS..150..263L}. In their simulations, as the maximum mass is approached (but before the black hole forms) the luminosity of the $\mu$ and $\tau$ flavors increases due to contraction of the proto-neutron star while the luminosities of electron neutrinos and electron antineutrinos drop. The mean energies of all flavors increase.
When a halo forms, in principle, one would have to completely change how the flavor calculations are undertaken in the halo region - the zone between the neutrinosphere and the photon sphere. In such cases the flavor evolution up to the photon sphere cannot be treated as an initial value problem - as we have done in this paper - because the flavor evolution up to the photon sphere of outward moving neutrinos is affected by neutrinos that were also emitted in an outward direction but which turned around and are now moving inwards. Thus in the halo region a paradigm beyond the bulb model would be needed to correctly deal with the flavor evolution. Prevailing understanding from the extant literature would indicate that in the case of three active flavors of neutrino emitted spherically symmetrically, one should not expect flavor transformation within the halo: if this is true then the only effect of the formation of a halo would be to alter the luminosity and angular distribution of the neutrinos beyond the photon sphere (which now becomes the effective neutrinosphere). But in other circumstances - such as calculations that include sterile neutrinos \cite{1999PhRvC..59.2873M,2006PhRvD..73i3007B,2012JCAP...01..013T,2014PhRvD..90j3007W,2014PhRvD..90c3013E} or calculations with non-standard neutrino interactions \cite{2007PhRvD..76e3001E,2008PhRvD..78k3004B,2010PhRvD..81f3003E,2012PhRvD..85k3007S} - flavor transformation can occur much closer to the neutrinosphere in which case the formation of a halo may have greater consequences.
\section{Summary and Conclusions}
\label{sec:conclusions}
In this paper we have considered the effects of General Relativity upon neutrino flavor transformation in a core-collapse supernova. We adopted a Schwarzschild metric to describe the spacetime and included three GR effects - trajectory bending, time dilation, and energy redshift. Of the three, time dilation is the major effect close to the proto-neutron star, and its role is taken over by trajectory bending at larger radii. The size of the GR effects was found to scale with a single parameter, the compactness of the source: the ratio of the Schwarzschild radius to the neutrinosphere radius. For large compactness with $R_{\nu}$ close to the radius of the photon sphere, the neutrino self-interaction Hamiltonian can be up to approximately three times larger than without GR. We calculated the flavor evolution in two representative cases to determine whether the GR effects led to significant differences compared to calculations without GR: a density profile and neutrino spectra typical of the accretion phase, and a density profile and neutrino spectra typical of the cooling phase. In both cases we found the effect of GR was to delay the onset of flavor transformation, but for the accretion phase the flavor transformation occurred due to decoherence at large radii where the change would have little consequence. In contrast, the change to the onset of bipolar oscillations during the cooling phase may be more important because it occurs much closer to the proto-neutron star and may impact the nucleosynthesis in the neutrino driven wind. Finally, we showed that GR effects can produce a halo of neutrinos surrounding the proto-neutron star for very compact neutrino sources. If a halo forms then, in principle, one would have to treat flavor transformation in the halo region using a different technique than the usual approach of treating it as an initial-value problem.
\begin{acknowledgments}
This research was supported at NC State University by DOE award DE-FG02-10ER41577.
\end{acknowledgments}
\section{Introduction}
Overdetermined boundary value problems in partial differential equations have connections
to various fields in mathematics; they emerge in the study of isoperimetric inequalities,
optimal design and ill-posed and free boundary problems, to name a few. In many such
problems one's interest is focused on a specific feature: the shape of the domain considered;
mainly, its (spherical) symmetry, as in Serrin's landmark paper \cite{Se} and its many offsprings
(see \cite{We}, \cite{Al}, \cite{Fr}, \cite{Le}, \cite{Pa}, and the references therein).
With the present paper, we want to start a more detailed analysis of overdetermined problems in the plane,
by exploiting the full power of the theory of analytic functions.
As a case study, we shall analyse what appears to be the simplest situation:
in a planar bounded domain $\Omega$ with boundary $\partial\Omega$
of class $C^{1,\alpha}$, we shall consider the problem
\begin{eqnarray}
-\Delta U\!\!\!&=&\!\!\!\delta_{\zeta_c} \hspace{1.5cm} \mbox{in } \Omega, \label{pb1} \\
U\!\!\!&=&\!\!\!0 \hspace{1.5cm} \mbox{on } \partial\Omega, \label{pb2} \\
\frac{\partial U}{\partial\nu}\!\!\!&=&\!\!\!\varphi
\hspace{1.5cm}\mbox{on } \partial\Omega, \label{pb3}
\end{eqnarray}
where $\nu$ is the {\it interior} normal direction to $\partial\Omega$,
$\delta_{\zeta_c}$ is the Dirac delta centered at a given point $\zeta_c\in\Omega$
and $\varphi:\partial\Omega\rightarrow\mathbb{R}$ is a positive given function of arclength,
measured counterclockwise from a reference point on $\partial\Omega$.
Problem (\ref{pb1})-(\ref{pb3}) can be interpreted as a free-boundary problem: find
a domain $\Omega$ whose Green's function $U$ with pole at $\zeta_c$ has gradient
with values on the boundary that fit those of the given function $\varphi$.
This formulation serves as a basis to model, for example, the Hele-Shaw flow, as
done in \cite{Gu} and \cite{Sa}.
By means of the Riemann Mapping Theorem, the solution of (\ref{pb1})-(\ref{pb2})
can be explicitly written in terms of a conformal mapping $f$ from the unit disk $D$
to $\Omega$, which is uniquely determined if it satisfies some suitable normalizing conditions.
Since it turns out that the normal derivative of $U$ on $\partial\Omega$ is proportional
to the modulus of the derivative of the inverse of $f,$
then by (\ref{pb3}) and classical results on holomorphic functions, we can derive an explicit formula
for $f$ in terms of $\varphi$ (see \S 2 for details). With the help
of such a formula, we obtain the following results:
\begin{itemize}
\item[(i)] existence and uniqueness theorems for a domain $\Omega$
satisfying (\ref{pb1})-(\ref{pb3}) (Theorems \ref{tom} and \ref{esistenza});
\item[(ii)] symmetry results relating the invariance of $\varphi$ under certain groups
of transformations to that of $\Omega$ (Theorems \ref{rot} and \ref{simasse});
of course, when $\varphi$ is constant,
we obtain that $\Omega$ is a disk --- a well-known result (see \cite{Pa}, \cite{Le} \cite{Al});
\item[(iii)] a formula relating the interior normal derivative of the
Green's function to the curvature of $\partial\Omega.$
\end{itemize}
\section{Construction of a forward operator and its inverse}
In what follows, $D$ will always be the open unit disk in $\mathbb{C}$ centered at $0.$
Let us recall some basic facts of harmonic and complex analysis. We refer the reader to
\cite{go} and \cite{Ma} for more details. If $\Omega\subseteq\mathbb{C}$ is a simply connected domain
bounded by a Jordan curve and
$\zeta_c\in\Omega$, then, by the Riemann Mapping Theorem, $\Omega$ is the image of an analytic function
$f:D\rightarrow\Omega$ which induces a homeomorphism between the closures
$\overline D$ and $\overline\Omega$, has non-zero derivative $f'$ in $D$ and
is such that $f(0)=\zeta_c.$
Moreover, if $\Omega$ is of class $C^{1,\alpha},$ $0<\alpha<1,$ that is its boundary
$\partial\Omega$ is locally the graph of a function of class $C^{1,\alpha},$ then,
by Kellogg's theorem, we can infer that $f\in C^{1,\alpha}(\overline D)$ (see \cite{go}).
The following elementary lemma will be useful in the sequel.
\begin{lemma}\label{teo.}
Let $\Omega$ be a bounded simply connected domain in $\mathbb{C}$ and
$f:D\rightarrow\Omega$ be one-to-one and analytic with $f\in C^{1,\alpha}(\overline D),$ $0<\alpha<1.$
Then there exists $\gamma\in\mathbb{R}$ such that
\begin{equation}\label{rappre}
f'(z)=e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log|f'(e^{it})|dt\right\}
\end{equation}
for every $z\in D$.
\end{lemma}
\begin{proof}
The function
$$
f'(z)\exp\left\{-\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log|f'(e^{it})|dt\right\},
\hspace{1cm}z\in D,
$$
is analytic, never zero in $D$ and has unitary modulus on $\partial D$; hence
it equals the number $e^{i\gamma}$ for some $\gamma\in\mathbb{R}.$
\end{proof}
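As a simple check of (\ref{rappre}), take $f(z)=\zeta_c+Re^{i\gamma}z$, which maps $D$ onto the disk of radius $R$ centered at $\zeta_c$: in this case $|f'(e^{it})|\equiv R$ and, since
$$
\frac 1{2\pi}\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\,dt=1,\hspace{1.5cm}z\in D,
$$
the right-hand side of (\ref{rappre}) reduces to $e^{i\gamma}e^{\log R}=Re^{i\gamma}=f'(z)$, as expected.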
With these premises, given two distinct numbers $\zeta_c$ and $\zeta_b\in\mathbb{C},$ we consider
\begin{center}
the set
${\mathscr O}$ of all $C^{1,\alpha},$ $0<\alpha<1,$
simply connected
\end{center}
\begin{center}
bounded domains such that
$\zeta_c\in\Omega$ and $\zeta_b\in\partial\Omega$.
\end{center}
We can put $\mathscr O$ in one-to-one
correspondence with
\begin{center}
the class
$\mathscr F$ of all one-to-one analytic mappings
\end{center}
\begin{center}
$f\in C^{1,\alpha}(\overline D)$
such that $f(0)=\zeta_c$ and $f(1)=\zeta_b$.
\end{center}
In fact, the arbitrary parameter
$\gamma$ in (\ref{rappre}) can be determined by observing that
\begin{equation}\label{detalfa}
\zeta_b-\zeta_c=\int_0^1f'(t)dt.
\end{equation}
\par
We now construct our forward operator $\mathcal{T}$ as the one that associates to each $\Omega$ in $\mathscr O$
the interior normal derivative $\frac{\partial U}{\partial\nu}$ --- as function of the arclength,
measured counterclockwise on $\partial\Omega,$ starting from $\zeta_b$ --- of the solution
of (\ref{pb1})-(\ref{pb2}). With our identification of $\mathscr O$ with $\mathscr F$ in mind, for
$f\in\mathscr F$, $\mathcal{T}(f)$ is a function of arclength $s\in[0,|\partial\Omega|]$ and it is defined
by the following remarks.
First, notice that, by Gauss-Green's formula, if $U$ satisfies (\ref{pb1})-(\ref{pb2}), then
\begin{equation*}
v(\zeta_c)=\int_{\partial\Omega}v(\zeta)\frac{\partial U}{\partial\nu}(\zeta)ds(\zeta)
\end{equation*}
for every function $v\in C^1(\overline\Omega)\cap C^2(\Omega)$ which is harmonic in $\Omega.$
Secondly, recall that any such function $v$ satisfies the well-known formula
\begin{equation*}
v(\zeta)=\frac 1{2\pi}\int_{\partial\Omega}v(\zeta')
\frac{1-|f^{-1}(\zeta)|^2}{|f^{-1}(\zeta)-f^{-1}(\zeta')|^2}\frac{ds(\zeta')}{|f'(f^{-1}(\zeta'))|},
\hspace{1cm}\zeta\in\Omega,
\end{equation*}
if $\partial\Omega$ is rectifiable (see \cite{Ma}). By comparing the last two formulas (with $\zeta=\zeta_c=f(0)),$ we obtain that
\begin{equation*}
\frac{\partial U}{\partial\nu}(\zeta)=\frac 1{2\pi|f'(f^{-1}(\zeta))|},
\hspace{1.5cm}\zeta\in\partial\Omega.
\end{equation*}
Thirdly, since the arclength on $\partial\Omega$ is related to $f$ by the formula
\begin{equation}
\label{arclength}
s(\theta)=\int_0^{\theta}|f'(e^{it})|dt,\hspace{1.5cm}\theta\in[0,2\pi],
\end{equation}
the values $\mathcal{T}(f)(s),$ $s\in[0,|\partial\Omega|],$ can be defined parametrically
by
\begin{equation}\label{param}
s = \int_0^{\theta}|f'(e^{it})|dt,\ \ \mathcal T(f)=\frac 1{2\pi|f'(e^{i\theta})|},
\ \ \theta\in[0,2\pi].
\end{equation}
It is clear that $\mathcal T(f)\in C^{0,\alpha}([0,|\partial\Omega|])$
and also that
$$
\int_0^{|\partial\Omega|}\mathcal{T}(f)(s)ds=1,\hspace{1.5cm}\mathcal{T}(f)>0\mbox{ on }[0,|\partial\Omega|],
$$
for all $f\in\mathscr F.$
We shall now prove that $\mathcal T$ is injective by showing
that each $\varphi$ in the range of $\mathcal{T}$ determines only one $f\in\mathscr F.$
In fact, for
$\varphi\in C^{0,\alpha}([0,|\partial\Omega|])$ in the
range of $\mathcal{T}$, by formulas (\ref{param}) it turns out that
\begin{equation}\label{fi}
2\pi\varphi(s(\theta))s'(\theta)=1,\hspace{1.5cm}\theta\in[0,2\pi].
\end{equation}
This last formula, once
integrated between $0$ and $\theta$, gives
\begin{equation}\label{teta}
s(\theta)=\Phi^{-1}(\theta),\hspace{1.5cm}\theta\in[0,2\pi],
\end{equation}
where $\Phi^{-1}$ is the inverse of $\Phi:[0,|\partial\Omega|]\rightarrow[0,2\pi]$ defined by
\begin{equation}\label{teta2}
\Phi(s)=2\pi\int_0^s\varphi(\sigma)d\sigma,\hspace{1.5cm}s\in[0,|\partial\Omega|].
\end{equation}
By the same formulas (\ref{param}), we then obtain that
\begin{equation}\label{f' g}
|f'(e^{i\theta})|=\frac 1{2\pi\varphi(\Phi^{-1}(\theta))},\hspace{1.5cm}\theta\in[0,2\pi],
\end{equation}
and hence (\ref{rappre}) gives
\begin{equation}\label{rappre2}
f'(z)=e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log\frac 1{2\pi\varphi(\Phi^{-1}(t))}dt\right\},
\hspace{1.5cm}z\in D,
\end{equation}
where $\gamma$ is determined by (\ref{detalfa}).
Therefore, for any $\varphi$ in the range of $\mathcal{T}$, a unique $f\in\mathscr F$
such that $\mathcal{T}(f)=\varphi$ is determined by
$$
f(z)=\zeta_c+\int_0^1f'(tz)zdt,\hspace{1.5cm}z\in D,
$$
with $f'$ given by (\ref{rappre2}).
We collect these remarks in the following theorem.
\begin{theorem}\label{tom}
Given $\Omega\in\mathscr O$, let $\zeta_b$ be a reference point on $\partial\Omega$ from which the
arclength on $\partial\Omega$ is measured counterclockwise.
\par
If $\varphi$ is the interior normal derivative of the Green's function on $\partial\Omega$ (as function of the
arclength), then a unique $f\in\mathscr F$ is determined such that
$\mathcal{T}(f)=\varphi$ and its derivative is given by
\begin{equation}\label{nicola}
f'(z)=e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log\frac 1{2\pi\varphi(s(t))}dt\right\},
\ \ z\in D,
\end{equation}
where $s$ and $\Phi$ are defined by (\ref{teta}) and (\ref{teta2}), respectively.
Moreover, the constant $\gamma$ is determined by
\begin{equation}\label{condi}
e^{i\gamma}\int_0^1\exp\left\{\frac 1{2\pi}\int_0^{2\pi}\frac{e^{i\tau}+t}{e^{i\tau}-t}
\log\frac 1{2\pi\varphi(s(\tau))}d\tau\right\}dt=\zeta_b-\zeta_c.
\end{equation}
\end{theorem}
\vspace{.5cm}
Theorem \ref{tom} tells us that the operator $\mathcal T$ is injective. A complete discussion of
its surjectivity is beyond the aims of this paper;
here we limit ourselves to suggesting the
following criterion.
Referring to \cite{Du}, let us introduce the so-called \emph{boundary rotation} of a function $f$
defined in $D$:
$$
\rho=\lim_{r\rightarrow1^-}
\int_0^{2\pi}\left|{\mbox Re}\left\{1+\frac{zf''(z)}{f'(z)}\right\}
\right|d\theta,\ \ z=re^{i\theta}\in D.
$$
We consider the class $\mathscr V$ of all normalized functions
$$
f(z)=z+a_2z^2+a_3z^3+...
$$
which are analytic, locally univalent and with $\rho<+\infty$.
The proof of the surjectivity of $\mathcal T$ reduces to the problem of finding an analytic
and univalent function
$f$ from the disk onto $f(D)=\Omega$. Therefore, we have to look for sufficient conditions for univalence.
The following theorem is based on a sufficient condition, due to Paatero, that
says that any function in
the class $\mathscr V$ with $\rho\le4\pi$ is univalent (see \cite{Du}).
\begin{theorem}\label{esistenza}
Let $\varphi\in C^1(\mathbb{R})$ be $L$-periodic, strictly positive and satisfying the compatibility condition
$\int_0^L\varphi(s)ds=1.$ If, moreover, $\varphi$ satisfies the condition
$$
\max_{[0,L]}\left|\frac{\varphi'(s)}{\varphi^2(s)}\right|\le2\pi,
$$
then $\varphi$ belongs to the range of $\mathcal T$.
\end{theorem}
\begin{proof}
By Theorem \ref{tom}, we know that a function $f\in\mathscr F$ such that $\mathcal T(f)=\varphi$ must satisfy
(\ref{nicola}). Thus, we have to check Paatero's condition on (\ref{nicola}). From that expression
we deduce that
$$
\frac{f''(z)}{f'(z)}=\frac1{2\pi}\int_0^{2\pi}\frac{2e^{it}}{(e^{it}-z)^2}\log\frac1{2\pi\varphi(s(t))}dt,
$$
where $s$ is defined as in (\ref{teta}) and (\ref{teta2}).
Now, by observing that
$$
\frac{d}{dt}\left(\frac{e^{it}+z}{e^{it}-z}\right)=\frac{-2ize^{it}}{(e^{it}-z)^2},
$$
we can integrate by parts and obtain that
$$
\frac{-izf''(z)}{f'(z)}=\frac1{2\pi}\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\frac{\varphi'(s(t))s'(t)}
{\varphi(s(t))}dt.
$$
By the maximum modulus principle, we can estimate, for $z\in D,$
\begin{eqnarray*}
\left|{\mbox Re}\left\{1+
\frac{zf''(z)}{f'(z)}
\right\}\right| &\le& 1+\left|\frac{-izf''(z)}{f'(z)}\right| \\
&\le& 1+\max_{[0,2\pi]}\left|\frac{\varphi'(s(t))s'(t)}{\varphi(s(t))}\right|,
\end{eqnarray*}
and, from (\ref{fi}), we have that $\varphi'(s)s'/\varphi(s)=\varphi'(s)/[2\pi\varphi^2(s)].$
Therefore, we can estimate the boundary rotation of $f$ in the following way:
$$
\rho\le\int_0^{2\pi}\left(1+\max_{[0,2\pi]}
\left|\frac{\varphi'(s(t))}{2\pi\varphi^2(s(t))}\right|\right)d\theta=
2\pi\left(1+\max_{[0,L]}
\left|\frac{\varphi'(s)}{2\pi\varphi^2(s)}\right|\right).
$$
By our assumptions, it follows that $\rho\le4\pi$ and hence, from Paatero's
criterion for univalence, $f$ is a homeomorphism from the disk onto $f(D).$
\end{proof}
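For instance, consider the one-parameter family of profiles
$$
\varphi(s)=\frac 1L\left(1+\varepsilon\cos\frac{2\pi s}L\right),\hspace{1.5cm}0\le\varepsilon<1,
$$
which are $L$-periodic, strictly positive and satisfy the compatibility condition $\int_0^L\varphi(s)ds=1$. Since $|\varphi'(s)|\le 2\pi\varepsilon/L^2$ and $\varphi(s)^2\ge(1-\varepsilon)^2/L^2$, the assumption of Theorem \ref{esistenza} holds whenever $\varepsilon\le(1-\varepsilon)^2$, that is for every $\varepsilon\le(3-\sqrt 5)/2\approx 0.38.$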
\section{Symmetries}
\begin{remark}
Theorem \ref{tom} allows us to rediscover a result already proved in \cite{Pa} and also in
\cite{Le} and \cite{Al}:
if $\varphi$ is constant, then $\Omega$ is a disk. More precisely,
given $\Omega\in\mathscr O$ with perimeter $L$, let
$\varphi$ be constantly equal to $C>0.$
From (\ref{nicola}), we obtain that
$$
f'(z)=e^{i\gamma}\exp\left\{\frac 1{2\pi}\log\frac 1{2\pi C}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}dt\right\}=\frac{e^{i\gamma}}{2\pi C},
$$
since $\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}dt=2\pi$. Therefore,
we get that
$$
f(z)=\zeta_c+\frac{e^{i\gamma}}{2\pi C}\, z,\hspace{1.5cm}z\in D,
$$
that is $\Omega$ is the disk centered at $\zeta_c$ with radius $\frac 1{2\pi C}.$
\end{remark}
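Note also that the compatibility condition $\int_0^L\varphi(s)ds=1$ forces $C=1/L$, so that the radius found above equals $L/2\pi$: the disk has perimeter exactly $L$, consistently with the datum.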
Now we want to show how some other symmetry properties of $\Omega$
can be derived from invariance properties of $\varphi$
and vice versa.
In what follows, for $\Omega\in\mathscr O,$ let $L=|\partial\Omega|$ and let $\varphi$ denote
the interior normal derivative on $\partial\Omega$
(as a function of arclength) of the Green's function
of $\Omega.$
In the next theorem, we will identify $\varphi$
with its $L$-periodic extension to $\mathbb{R}$ and $\mathcal{R}_{\zeta,\beta}$ will denote the clockwise
rotation of an angle $\beta$
around a point $\zeta.$
\begin{theorem}\label{rot}
Given $\Omega\in\mathscr O,$
for every $n=2,3,...,$
\begin{center}
$\mathcal{R}_{\zeta_c,\frac{2\pi}n}(\Omega)=\Omega$
if and only if $\varphi$ is $\frac Ln$-periodic.
\end{center}
\end{theorem}
\begin{proof}
Let us fix $n$ and suppose $\varphi$ measured counterclockwise from
$\zeta_b\in\partial\Omega$. Let $f\in\mathscr F$ be the unique analytic function from $D$ to $\Omega$
such that $f(0)=\zeta_c$ and $f(1)=\zeta_b.$
(i) If $\Omega$ is invariant by rotations of angle $\frac{2\pi}n$ around $\zeta_c,$
then $f$ satisfies
$$
f(ze^{i\frac {2\pi}n})=\zeta_c+[f(z)-\zeta_c]e^{i\frac {2\pi}n},\hspace{1.5cm}z\in D.
$$
By differentiating this expression, we obtain $f'(ze^{i\frac {2\pi}n})=f'(z),$ from which
$$
s\left(\theta +\frac{2\pi}n\right)=\int_{0}^{\theta+\frac{2\pi}n}|f'(e^{it})|dt=
s(\theta)+\int_{\theta}^{\theta+\frac{2\pi}n}|f'(e^{it})|dt,
$$
and hence, since $|f'(e^{it})|$ is $\frac{2\pi}n$-periodic in $t$ (because $f'(ze^{i\frac {2\pi}n})=f'(z)$),
\begin{equation}
\label{stheta}
s\left(\theta +\frac{2\pi}n\right)=
s(\theta)+s\left(\frac{2\pi}n\right),\ \ \theta\in\mathbb{R}.
\end{equation}
Now, being
$$
L=s(2\pi)=s\left(\frac{n-1}n2\pi\right)
+s\left(\frac{2\pi}n\right)=...=ns\left(\frac{2\pi}n\right),
$$
we have that $s\left(\theta+\frac{2\pi}n\right)=s(\theta)+\frac Ln.$
Thus, \eqref{stheta} and \eqref{fi}-\eqref{f' g} imply that
$$
\varphi\left(s(\theta)+\frac Ln\right)=\varphi\left(s\left(\theta+\frac{2\pi}n\right)\right)
=\frac 1{2\pi|f'(e^{i(\theta+\frac{2\pi}n)})|}=\frac 1{2\pi|f'(e^{i\theta})|}=
\varphi(s(\theta)),
$$
and hence, for every $s\in\mathbb{R},$
$$
\varphi\left(s+\frac Ln\right)=\varphi(s).
$$
(ii) If now $\varphi$ is $\frac Ln$-periodic, from (\ref{teta2}) we write
\begin{equation}\label{shu}
\Phi\left(s+\frac Ln\right)=2\pi\int_0^{s+\frac Ln}\varphi(\sigma)d\sigma=
\Phi(s)+\Phi\left(\frac Ln\right).
\end{equation}
Since (\ref{teta}) holds, it follows that
$$
2\pi=\Phi(s(2\pi))=\Phi(L)=\Phi\left(\frac{n-1}nL\right)+
\Phi\left(\frac Ln\right)=...=n\Phi\left(\frac Ln\right),
$$
and hence
$$
\Phi\left(\frac Ln\right)=\frac{2\pi}n=\theta+\frac{2\pi}n-\theta=
\Phi\left(s\left(\theta+\frac{2\pi}n\right)\right)-\Phi(s(\theta)).
$$
From this and (\ref{shu}), we infer that
$$
\Phi\left(s\left(\theta+\frac{2\pi}n\right)\right)=
\Phi(s(\theta))+\Phi\left(\frac Ln\right)=\Phi\left(s(\theta)+\frac Ln\right),
$$
and, thanks to the invertibility of $\Phi$, we obtain
$$
s\left(\theta+\frac{2\pi}n\right)=s(\theta)+\frac Ln,\hspace{1.5cm}\theta\in\mathbb{R}.
$$
By this formula, (\ref{nicola}) and the periodicity of $\varphi$, it follows that
\begin{eqnarray*}
f'(z)&=& e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log\frac 1{2\pi\varphi(s(t))}dt\right\} \\
&=& e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log\frac 1{2\pi\varphi\left(s\left(\frac{2\pi}n+t\right)-\frac Ln\right)}dt\right\} \\
&=& e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log\frac 1{2\pi\varphi\left(s\left(\frac{2\pi}n+t\right)\right)}dt\right\}.
\end{eqnarray*}
By a change of variables, we thus get
\begin{eqnarray*}
f'(z)&=& e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_{\frac{2\pi}n}^{2\pi+\frac{2\pi}n}
\frac{e^{i(t-\frac{2\pi}n)}+z}{e^{i(t-\frac{2\pi}n)}-z}
\log \frac 1{2\pi\varphi(s(t))}dt\right\} \\
&=& e^{i\gamma}\exp\left\{\frac 1{2\pi}
\int_0^{2\pi}
\frac{e^{it}+ze^{i\frac{2\pi}n}}{e^{it}-ze^{i\frac{2\pi}n}}
\log \frac 1{2\pi\varphi(s(t))}dt\right\} \\
&=& f'(ze^{i\frac{2\pi}n}).
\end{eqnarray*}
Finally we find
$$
f(z)-\zeta_c=\int_0^1f'(tz)zdt=\int_0^1f'(tze^{i\frac{2\pi}n})zdt=[f(ze^{i\frac{2\pi}n})-\zeta_c]
e^{-i\frac{2\pi}n},
$$
and hence $\mathcal{R}_{\zeta_c,\frac{2\pi}n}\Omega=\Omega$.
\end{proof}
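For example, for any integer $n\ge 2$ and $0\le\varepsilon<1,$ the profile
$$
\varphi(s)=\frac 1L\left(1+\varepsilon\cos\frac{2\pi ns}L\right),\hspace{1.5cm}s\in[0,L],
$$
is $\frac Ln$-periodic and satisfies the compatibility condition $\int_0^L\varphi(s)ds=1$; hence, by Theorem \ref{rot}, any $\Omega\in\mathscr O$ giving rise to it must be invariant under the rotation of angle $\frac{2\pi}n$ around $\zeta_c.$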
In what follows, $\mathcal{M}$ will denote mirror-reflection with
respect to a given axis.
\begin{theorem}\label{simasse}
A domain $\Omega\in\mathscr O$
is symmetric with respect
to a generic axis if and only if
$\varphi(s)=\varphi(L-s)$ for all $s\in[0,L].$
\par
Here $\varphi$ is measured counterclockwise starting from an intersection point of the
axis with $\partial\Omega$.
\end{theorem}
\begin{proof}
(i) Suppose $\Omega$ symmetric with respect
to a given axis, that is $\mathcal{M}(\Omega)=\Omega.$
Up to a rotation and a translation, we can assume the symmetry axis to coincide with the real axis,
so that $\mathcal{M} z$ is the conjugate $\overline{z}$ of $z.$
\par
Let $f\in\mathscr F$ be the unique mapping from $D$ to $\Omega$
such that $f(0)=\zeta_c$ and $f(1)=\zeta_b,$ where $\zeta_c\in\Omega$ and
$\zeta_b\in\partial\Omega$ are some reference points on the symmetry axis.
We keep in mind that arclength on $\partial \Omega$ is measured counterclockwise from $\zeta_b.$
\par
It is clear that $\zeta_c-\zeta_b\in\mathbb{R}$
and
\begin{equation}\label{sim}
\overline{f(z)}=f(\overline z);
\end{equation}
thus,
$$
\overline{f(e^{i\theta})}=f(e^{i(2\pi-\theta)}), \ \ \theta\in[0,2\pi].
$$
Differentiating the latter formula with respect to $\theta$ and taking moduli yields
\begin{equation}\label{f'}
|f'(e^{i\theta})|=|f'(e^{i(2\pi-\theta)})| , \ \ \theta\in[0,2\pi];
\end{equation}
thus, from (\ref{arclength}), we have that
$$
s(2\pi-\theta)=L-s(\theta),\hspace{1.5cm}\theta\in\mathbb{R}.
$$
From this formula and (\ref{f' g}), we obtain:
$$
\varphi(L-s(\theta))=\varphi(s(2\pi-\theta))
=\frac 1{2\pi|f'(e^{i(2\pi-\theta)})|},\hspace{1.5cm}\theta\in\mathbb{R}.
$$
Finally, from (\ref{f'}), it follows that
$$
\varphi(s)=\varphi(L-s),\hspace{1.5cm}s\in[0,L].
$$
(ii) Suppose now $\varphi(s)=\varphi(L-s)$ for all $s\in\mathbb{R}.$
From (\ref{teta2}) we write
$$
\Phi(L-s)=2\pi\int_0^{L-s}\varphi(\sigma)d\sigma=
2\pi\int_s^L\varphi(L-\sigma)d\sigma=2\pi-\Phi(s),\ \ s\in[0,L].
$$
This property of $\Phi$ and (\ref{teta}) imply that
$$
\Phi(s(2\pi-\theta))=2\pi-\theta=2\pi-\Phi(s(\theta))=\Phi(L-s(\theta)),
$$
and hence
$$
s(2\pi-\theta)=L-s(\theta),\hspace{1.5cm}\theta\in\mathbb{R},
$$
by the invertibility of $\Phi$. Then, by differentiating, we have that
$$
|f'(e^{i(2\pi-\theta)})|=s'(2\pi-\theta)=s'(\theta)=|f'(e^{i\theta})|
$$
for every $\theta\in\mathbb{R}$. Thus, by a change of variable and by simple
properties of the complex conjugate, we can write that, for $z\in D$,
\begin{eqnarray*}
\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log|f'(e^{it})|dt
&=& \int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}\log|f'(e^{i(2\pi-t)})|dt \\
&=& \int_0^{2\pi}\frac{e^{i(2\pi-t)}+z}{e^{i(2\pi-t)}-z}\log|f'(e^{it})|dt \\
&=& \overline{\left(\int_0^{2\pi}\frac{e^{it}+\overline z}{e^{it}-\overline z}\log|f'(e^{it})|dt\right)}.
\end{eqnarray*}
Therefore,
modulo a rotation,
we have obtained that
\begin{equation*}\label{sim f'}
f'(z)=\overline{f'(\overline z)},\ \ z\in D,
\end{equation*}
and hence
\begin{equation*}
f(z)=\overline{f(\overline z)},\ \ z\in D,
\end{equation*}
modulo a translation. Thus, $\mathcal{M}(\Omega)=\Omega$ for some reflection $\mathcal{M}.$
\end{proof}
\section{A formula involving curvature}
Recall that the curvature (with sign) $\kappa$ of a planar curve can be defined by the formula
\begin{equation}\label{curvatura}
\kappa=\frac{d\psi}{ds},
\end{equation}
where $\psi$ is the angle between the positive real axis and the tangent (unit) vector.
By using the conformal map $f:D\rightarrow\Omega$ already introduced and
the Hilbert transform, we can express the curvature $\kappa$ of $\partial\Omega$ in terms
of the interior normal derivative $\varphi$ of the Green's function of $\Omega$.
\begin{theorem}
Let $\Omega\in\mathscr O$ and $\varphi$ be defined as usual.
Then $\varphi$ and the curvature $\kappa$ of $\partial\Omega$ are related by the formula:
\begin{equation}\label{curvaturaomega}
\kappa(s)=2\pi\varphi(s)\left[1-\frac 1{2\pi}\int_0^{|\partial\Omega|}
\cot\left(\frac{\Phi(s)-\Phi(\sigma)}2\right)\frac d{d\sigma}(\log\varphi)(\sigma)d\sigma\right],
\end{equation}
for $s\in[0,|\partial\Omega|]$, where $\Phi$ is defined as in (\ref{teta2}).
\end{theorem}
\begin{proof}
Let $f:D\rightarrow\Omega$ be as usual. Now we compute $\kappa$ in terms of $f.$
Define
$$
\omega(\theta)=\arg(f'(e^{i\theta}))
$$
for $\theta\in[0,2\pi];$ the angle $\psi$ in (\ref{curvatura}) is given by
$$
\psi(\theta)=\arg\left(\frac d{d\theta}f(e^{i\theta})\right)=\omega(\theta)+\frac{\pi}2+\theta.
$$
From (\ref{curvatura}) and (\ref{fi}), we have that
\begin{equation}\label{curvaturafi}
\kappa(s)=\frac{d\psi}{d\theta}\frac{d\theta}{ds}=2\pi\varphi(s)[1+\omega'(\theta)],
\hspace{1.5cm}s\in[0,|\partial\Omega|].
\end{equation}
As is well-known (see \cite{Ko} and \cite{Pr}), since $\log|f'|$ and $\arg f'$ are the real and the imaginary
part of the analytic function $\log f',$ we have that
\begin{equation}\label{argomento}
\arg f'(e^{i\theta})=\mathcal H(\log s')(\theta),
\end{equation}
where $s'(\theta)=|f'(e^{i\theta})|.$
Here, $\mathcal H$ is the
Hilbert transform on the unit circle, namely,
$$
\mathcal H(\log s')(\theta)=\frac 1{2\pi}\int_0^{2\pi}\cot\left(\frac{\theta-t}2\right)\log(s'(t))dt.
$$
In our notation, (\ref{argomento}) can be rewritten as
$$
\omega=\mathcal H(\log s');
$$
thus,
$$
\omega'=\mathcal H(s''/s'),
$$
since $\mathcal H$ and $\displaystyle\frac{d}{d\theta}$ commute. From (\ref{curvaturafi}), we infer that
$$
\kappa(s(\theta))=2\pi\varphi\left[1+\mathcal H\left(\frac{s''}{s'}\right)(\theta)\right],
\ \ \theta\in[0,2\pi],
$$
and hence
$$
\kappa(s(\theta))=
2\pi\varphi(s(\theta))\left[1-\frac 1{2\pi}\int_0^{2\pi}
\cot\left(\frac{\theta-t}2\right)\frac{\varphi'(s(t))}{2\pi\varphi^2(s(t))}dt\right],
\ \ \theta\in[0,2\pi],
$$
from (\ref{fi}). Finally, we obtain (\ref{curvaturaomega}) by performing the change of variable
$\sigma=s(t)$ and by using (\ref{teta}).
\end{proof}
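For instance, when $\Omega$ is the unit disk $D$ itself, $f$ is the identity, $s(\theta)=\theta$, $L=2\pi$ and $\varphi\equiv 1/(2\pi)$; the logarithmic derivative in (\ref{curvaturaomega}) then vanishes and the formula recovers the constant curvature of the unit circle:

```latex
\[
\kappa(s)=2\pi\cdot\frac 1{2\pi}\left[1-\frac 1{2\pi}\int_0^{2\pi}
\cot\left(\frac{s-\sigma}2\right)\cdot 0\,d\sigma\right]=1,
\qquad s\in[0,2\pi].
\]
```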
\begin{remark}
Let $\mathcal D2\mathcal N$ denote the Dirichlet-to-Neumann operator; that is,
$\mathcal D2\mathcal N$ maps the values on $\partial\Omega$ of any harmonic function in $\Omega$
to the values of its (interior) normal derivative on $\partial\Omega.$ Then, formula
(\ref{curvaturaomega}) can be rewritten as
$$
\kappa=2\pi\varphi[1+\mathcal D2\mathcal N(\log(\varphi))].
$$
\end{remark}
\section*{Acknowledgments} This research was partially supported by a PRIN
grant of the Italian MIUR and by INdAM-GNAMPA.
\section{Conclusion}
\label{sec:conclude}
In this paper, we proposed a novel gateway design for app markets called AGChain\xspace.
By nature, it is a blockchain-based gateway for permanent, distributed, and secure app delegation for massive apps stored in existing markets (and custom apps via GitHub URLs).
We designed a smart contract and IPFS based architecture to make AGChain\xspace permanent and distributed, overcoming two previously underestimated design challenges: significantly reducing gas costs in our smart contract and making IPFS-based file storage truly distributed.
We further addressed three AGChain\xspace-specific system challenges to make it secure and sustainable.
We have implemented a publicly available AGChain\xspace prototype (\url{https://www.agchain.ltd/}) on Ethereum with 2,485 LOC.
We then experimentally evaluated its performance and gas costs, and empirically demonstrated security effectiveness and decentralization.
The evaluation shows that on average, AGChain\xspace introduced 12\% performance overhead in 140 app tests from seven app markets and cost 0.00008466 Ether per app upload.
We will continue to improve AGChain\xspace for it to be more user-friendly, secure, and gas-efficient.
\section{The Core AGChain\xspace Design}
\label{sec:design}
In this section, we introduce the core design of AGChain\xspace, including its objectives, threat model, the overall design, and two major challenges we addressed.
\subsection{Design Objectives and Threat Model}
\label{sec:objective}
\textbf{Design objectives.}
Our goal is to build a blockchain-based gateway that can provide permanent, distributed, and secure app delegation from existing app markets.
Note that we do not aim to fully replace existing markets because apps on AGChain\xspace eventually also come from them\footnote{As mentioned in Section\xspace\ref{sec:intro}, AGChain\xspace also supports uploading custom apps not available in any app markets, by asking users to list them via GitHub URLs.}.
In this way, existing app markets can still serve massive regular users, while users with security awareness or backup purposes can leverage the proxy of AGChain\xspace to securely download an app from a third-party market or permanently store an app on chain.
Hence, AGChain\xspace is more like a gateway to existing app markets than a standalone market.
More specifically, we have the following major design objectives:
\begin{itemize}
\item \textit{Permanent delegation.}
Once an app is uploaded to AGChain\xspace, we aim to achieve the permanent storage and delegation of this app.
The immutable nature of blockchain makes it an ideal technology for this purpose.
To utilize it for AGChain\xspace, we make these two choices.
First, we leverage an existing and mature blockchain rather than build our own blockchain as in Infnote~\cite{Infnote20}.
This is because mainstream blockchains, such as Bitcoin and Ethereum, have accumulated enough nodes over the years and are thus robust against many attacks.
Second, to program our logic on blockchain, we write a smart contract instead of creating a virtualchain as in~\cite{IoTDataBlockchain17, Blockstack16, Virtualchain16}, since a smart contract is much more lightweight and also powerful in terms of Turing-completeness.
Note that we can provide permanent access only to apps that have been accessed via AGChain\xspace at least \textit{once}.
\item \textit{Distributed delegation.}
Since raw files are not suitable to be directly stored on blockchain due to the high storage and gas cost (see Section\xspace\ref{sec:background}), we still need an efficient yet distributed file storage.
In this paper, we employ IPFS for its feature of being both distributed and permanent, whereas centralized cloud services cannot guarantee they always exist (some are even censored, such as Google Drive and Dropbox).
The basic idea of using IPFS in AGChain\xspace is to store raw app files on IPFS and keep their corresponding IPFS indexes on chain.
However, we surprisingly found that IPFS does not duplicate files to other IPFS nodes if there are no requests.
To make it distributed, we build a consortium network by utilizing IPFS gateways and crowdsourced server nodes.
\item \textit{Secure delegation.}
A practically desirable objective is to achieve secure app download delegation for apps from markets without HTTPS downloading (called unprotected markets hereafter), because this can immediately benefit millions of Chinese market users (see Section\xspace\ref{sec:insecureDownload}).
Specifically, we need to securely retrieve apps from existing app markets \textit{without} a dedicated and trusted network path.
Besides the download security, we also need to guarantee apps uploaded to AGChain\xspace are \textit{not} repackaged~\cite{DroidMOSS12} in their original markets.
\end{itemize}
Besides these core objectives, we desire one more property, \textit{sustainable delegation}, to make AGChain\xspace self-sustainable when providing monetary incentives to crowdsourced server nodes.
Together with the secure delegation, we will explain these two objectives in Section\xspace\ref{sec:sustain} due to their implementation details.
In this section, we focus on achieving permanent and distributed delegation for AGChain\xspace.
\textbf{Adversary model.}
In this paper, we assume adversaries:
\begin{itemize}
\item \textit{Cannot break TLS (Transport Layer Security) and hash (e.g., SHA-256) algorithms.}
\item \textit{Cannot compromise the underlying (Ethereum) blockchain and IPFS storage.}
\item \textit{Cannot exploit our smart contract and core server code.}
We can leverage recent advances on verification of smart contracts~\cite{Securify18} and write SGX-enabled\footnote{SGX (Software Guard eXtension) is a TEE (trusted execution environment) technique developed by Intel to protect selected code and data.} server code~\cite{RUSTSGX19} to enhance our contract and server code's security, although this is out of the scope of this paper.
\end{itemize}
We do, however, assume that attackers \textit{can} intercept unprotected network traffic, upload repackaged apps to original app markets, compromise or replace our front-end code (since it is at the user side), access our smart contract, etc.
\subsection{The Overall System Design}
\label{sec:overall}
\begin{figure*}[t!]
\begin{adjustbox}{center}
\includegraphics[width=0.9\textwidth]{newWorkflow}
\end{adjustbox}
\caption{\small A high-level design and the workflow of AGChain\xspace, which consists of four components marked in the green color.}
\label{fig:overall}
\end{figure*}
\textbf{Architecture.}
Figure~\ref{fig:overall} presents AGChain\xspace's high-level design.
As highlighted in the green color, it has four components as follows:
- \textit{Smart contract.}
The most important component is a novel (gas-efficient) smart contract, which stores all app metadata on chain and duplicates it on most Ethereum nodes worldwide.
With these data (including IPFS file indexes) and their programmed storing and retrieving logic, this smart contract is the actual control party of AGChain\xspace's entire logic.
- \textit{IPFS network.}
Another core component is an IPFS (consortium) network, which stores raw app files in a distributed manner.
The stored apps then can be automatically routed and retrieved through IPFS's content-addressing~\cite{IPFSContentAddressing}.
Note that with the smart contract and IPFS components, we guarantee the permanent and decentralized app access in AGChain\xspace.
- \textit{Server nodes}.
To achieve app download security and repackaging checking, we also need server(s) to retrieve apps from existing markets, inspect their security, and schedule their uploading, as these tasks cannot be performed on the blockchain.
Since any machine with our server code could be a server node, we propose an incentive mechanism (details in Section\xspace\ref{sec:chargeFees}) to motivate servers to join AGChain\xspace\footnote{To guarantee the security of our server code, we require the joined servers to grant us full access permission, e.g., via settings in Amazon AWS~\cite{AWSChangePermission}.}.
These \textit{crowdsourced} server nodes further enhance the decentralization.
- \textit{Front-end.} Finally, we provide a front-end web interface to help uploaders and downloaders interact with AGChain\xspace. Note that for app downloading, our front-end directly communicates with smart contract and IPFS without the server.
\textbf{Upload workflow.}
Figure~\ref{fig:overall} also shows the overall workflow of AGChain\xspace.
As shown in the left part, Alice wants to securely download an app (e.g., Alice is a security-sensitive user) or permanently store an app on chain for future use (e.g., Alice is a developer or a user who wants to back up the current version of an app).
She then acts as an app uploader:
\begin{itemize}
\item[1.] Alice just needs to input the original app page URL and clicks the ``upload'' button in the front-end. AGChain\xspace automatically finishes all the remaining steps.
\item[2.] The front-end transfers the URL to one server node.
\item[3.] After knowing the URL, the server analyzes the corresponding app page to obtain the download URL that points to the actual APK file (the file format used by Android apps), and retrieves it from its original market.
\item[4.] For the app markets using insecure app downloading (e.g., those seven markets in Table~\ref{tab:3rdmarkets}), AGChain\xspace performs one more step to check whether the retrieved file has been tampered with during network transmission.
\item[5.] For all third-party markets, we conduct repackaging checks to prevent repackaged apps~\cite{DroidMOSS12} from polluting AGChain\xspace.
We propose a lightweight yet effective certificate ID based mechanism; see details in Section\xspace\ref{sec:repackageDetection}.
During this process, we also parse the APK file to extract its package name and version (besides the certificate ID).
\item[6.] The server then uploads the raw APK file to IPFS and obtains its corresponding IPFS hash (i.e., the file index).
\item[7.] Finally, the server invokes the smart contract to store all app metadata and IPFS hash on chain. We choose the widely-used Ethereum as our underlying blockchain.
\item[8.] To spare Alice from waiting a long time for blockchain transaction confirmation (typically 9$\sim$13 seconds, according to our tests), AGChain\xspace simultaneously returns the result of package name, app version, and IPFS hash.
\end{itemize}
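Step 4 above can be sketched as comparing the checksum advertised on the market's app page against one recomputed over the bytes actually received. The following minimal Python sketch assumes an MD5 checksum and uses function names of our own choosing; AGChain\xspace's actual server code may differ.

```python
import hashlib

def verify_checksum(apk_bytes: bytes, advertised_md5: str) -> bool:
    """Compare the MD5 advertised on the market's app page with the
    MD5 of the bytes actually received over the network (step 4)."""
    computed = hashlib.md5(apk_bytes).hexdigest()
    return computed == advertised_md5.lower()

# A tampered download yields a mismatch, and the upload is aborted.
original = b"original apk bytes"
advertised = hashlib.md5(original).hexdigest()
assert verify_checksum(original, advertised)
assert not verify_checksum(b"tampered apk bytes", advertised)
```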
\textbf{Download workflow.}
As shown in the right part of Figure~\ref{fig:overall}, Bob acts as an app downloader to download apps (e.g., the app uploaded by Alice) that are already in AGChain\xspace:
\begin{itemize}
\item[a)] Bob inputs (or browses) the package name and version of the app that he wants to download in the front-end.
\item[b)] The front-end then automatically invokes the smart contract to retrieve the corresponding IPFS hash.
\item[c)] The front-end further locates the nearest IPFS network node to download the APK file and returns it to Bob.
\end{itemize}
\textbf{Major challenges.}
In the course of developing AGChain\xspace, we identify two previously underestimated challenges:
\begin{description}
\item[C1:] \textit{How to minimize gas costs in the smart contract?}
Since each app upload initializes a blockchain transaction and costs gas fees, it is critical to minimize such costs in our smart contract.
While there were some code-level gas optimizations~\cite{GASPER17,MadMax18,GASOL20}, they are still far from enough.
In Section\xspace\ref{sec:contractdesign}, we propose design-level optimizations to significantly reduce gas costs by a factor of 18.
\item[C2:] \textit{How to make IPFS storage distributed?}
As mentioned in Section\xspace\ref{sec:objective}, we surprisingly found that IPFS files stay only in the original IPFS node and are cached at IPFS gateways only when there are requests.
Once the original node goes offline, the entire IPFS network can no longer access the file (availability is restored only when some node adds the same content back).
To make IPFS storage distributed in the first place, we build an IPFS consortium network (Section\xspace\ref{sec:ipfsdesign}) that timely asks IPFS gateways and crowdsourced server nodes to backup files.
\end{description}
\subsection{Gas-Efficient Smart Contract for App Metadata Storing and Retrieving}
\label{sec:contractdesign}
In this subsection, we propose a set of mechanisms to minimize gas costs for app metadata storing (i.e., app uploads), eliminate gas costs for metadata retrieving (i.e., app downloads), and achieve a whitelist mechanism for preventing misuse of our contract functions.
These design-level optimizations are much more efficient than code-level optimizations~\cite{GASPER17,MadMax18,GASOL20} and can guide future smart contract design.
\textbf{Minimizing gas costs via log-based contract storage.}
We found that a major source of gas inefficiency comes from data storage in the smart contract.
Like many other smart contracts, AGChain\xspace needs to store app metadata as a structure defined in the contract.
However, such data storage operation, via the \texttt{SSTORE} instruction in Ethereum Virtual Machine (EVM), changes the internal block states and Ethereum's world state~\cite{WorldState}.
Therefore, it is expensive, costing 20,000 Gas\footnote{The gas cost/fee is the product of gas price (or Gwei) and the Gas consumed.} per operation~\cite{EthereumYellow14}.
Since numerous app metadata records will be stored in AGChain\xspace, it would cost a large amount of gas fees if we use the traditional contract structure.
Fortunately, we identify a logging interface in the EVM, which can be used to permanently log data in transaction receipts via the \texttt{LOG} instruction~\cite{EthereumYellow14}.
Since it changes only the block headers and does not need to change the world state, only 375 Gas is consumed per operation.
The underlying logging mechanism is complicated, and we refer interested readers to the Ethereum yellow paper~\cite{EthereumYellow14, EthLog} for more details.
While log-based contract storage significantly reduces gas fees, it has no structure information.
We thus ask our server nodes to recover the app metadata structure, which includes the app package name, app package version, app certificate serial number, the original market page URL, the repackaging detection result, and the important IPFS hash.
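The server-side recovery of this structure can be sketched as follows. The concrete payload encoding (a delimiter-joined string in the field order listed above) is an illustrative assumption, not the exact on-chain log format.

```python
# Field order mirrors the metadata listed above; the delimiter-based
# encoding is our assumption for illustration only.
FIELDS = ("package", "version", "cert_serial", "market_url",
          "repack_result", "ipfs_hash")

def encode_log(meta: dict) -> str:
    """Flatten a structured metadata record into an unstructured
    log payload before it is emitted on chain."""
    return "|".join(str(meta[f]) for f in FIELDS)

def decode_log(payload: str) -> dict:
    """Recover the structured record from a raw log payload, as the
    server nodes do when serving queries."""
    return dict(zip(FIELDS, payload.split("|")))

record = {"package": "com.example.app", "version": "1.2.0",
          "cert_serial": "0x936e", "market_url": "http://market.example/app",
          "repack_result": "pass", "ipfs_hash": "QmExampleHash"}
assert decode_log(encode_log(record)) == record
```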
\textbf{Offloading on-chain duplicate check to server nodes.}
Another major source of gas costs originates from the duplicate check, which checks duplicated app metadata before storing it on chain.
Originally, we deployed such a check in the smart contract, but we found that it is costly since each check needs to iterate over the entire app metadata structure.
Moreover, an in-contract structure has to be maintained, which renders the first log-based optimization unadoptable.
Therefore, we offload this on-chain duplicate check to server nodes, which query the latest app metadata from the smart contract before uploading any records.
If a duplicate exists, the uploading will stop.
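A minimal sketch of the offloaded check, assuming records are keyed by package name and version (the same identifiers used for downloads):

```python
def is_duplicate(existing, candidate):
    """Server-side duplicate check: a record is a duplicate if an app
    with the same package name and version is already on chain."""
    key = (candidate["package"], candidate["version"])
    return any((r["package"], r["version"]) == key for r in existing)

# `existing` would be queried from the smart contract before uploading.
chain = [{"package": "com.example.app", "version": "1.0"}]
assert is_duplicate(chain, {"package": "com.example.app", "version": "1.0"})
assert not is_duplicate(chain, {"package": "com.example.app", "version": "1.1"})
```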
\textbf{Further reducing gas costs via batch uploads.}
With the above two default optimization mechanisms, we significantly reduce gas costs by a factor of 15.75 (0.001347 vs. 0.0000855 Ether, before and after the optimization).
We further reduce the costs by providing a back-end interface of batch uploads.
Our experiment shows that by batching 10 uploads together, we save gas by a factor of 2.02 (compared with 10 separate normal uploads).
This factor increases to 2.65 for a batch upload of 100 records.
This suggests that for a large number of app uploads (e.g., a company uploading all its apps), we can use batch uploads to minimize gas costs.
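On the server side, batching amounts to grouping pending records so that each group becomes a single contract call, amortizing the per-transaction base cost. A minimal sketch (the batch size of 10 mirrors our experiment; the helper name is ours):

```python
def chunk(records, batch_size=10):
    """Group pending upload records so that each batch is submitted
    in a single contract call, amortizing per-transaction costs."""
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# 25 pending records become 3 transactions instead of 25.
batches = chunk(list(range(25)), batch_size=10)
assert [len(b) for b in batches] == [10, 10, 5]
```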
\textbf{Eliminating gas costs for all app downloads.}
While app uploading certainly consumes gas, we find a way to eliminate gas costs for all app downloads.
For a normal contract data structure, we originally used the \texttt{view} function modifier to describe the data retrieving contract function since it does not change any contract state.
Invoking such a view-only function (even by external parties) will not initiate any blockchain transaction, and thus no gas fee is needed.
However, since we have switched to log-based contract storage, we create a bloom filter~\cite{EthBloom, EthLog} to quickly locate the block headers containing our data logs, regenerate the original logs, and extract IPFS hashes from them.
Since this task is performed only at the server side via the web3 Python APIs~\cite{Web3Python}, the contract side will not cost any gas.
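The bloom filter test that lets the server skip irrelevant blocks can be illustrated with a self-contained sketch. Per the Ethereum yellow paper, three bit positions in a 2048-bit bloom are derived from the low 11 bits of the first three byte pairs of the topic's keccak-256 hash; here we substitute sha256 for keccak-256 so that the sketch runs with the standard library only, and the event signature is illustrative.

```python
import hashlib

BLOOM_BITS = 2048  # Ethereum block headers carry a 2048-bit logs bloom

def bloom_bits(topic: bytes):
    """Three bit indices per topic: the low 11 bits of byte pairs
    (0,1), (2,3), (4,5) of the topic's hash (sha256 stands in for
    Ethereum's keccak-256 in this sketch)."""
    h = hashlib.sha256(topic).digest()
    return [int.from_bytes(h[i:i + 2], "big") % BLOOM_BITS
            for i in (0, 2, 4)]

def add(bloom: int, topic: bytes) -> int:
    for b in bloom_bits(topic):
        bloom |= 1 << b
    return bloom

def maybe_contains(bloom: int, topic: bytes) -> bool:
    """False means the block definitely holds no matching log;
    True means its receipts are worth scanning for our data logs."""
    return all(bloom >> b & 1 for b in bloom_bits(topic))

event_topic = b"AppStored(address,string)"  # illustrative signature
bloom = add(0, event_topic)
assert maybe_contains(bloom, event_topic)
assert not maybe_contains(0, event_topic)  # empty bloom: definite miss
```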
\textbf{Achieving a whitelist mechanism for access control.}
Since smart contract has no access control mechanism, a contract function can be invoked by anyone.
This implies that an adversary can invoke our \texttt{storeApp()} function to upload any app in our scenario.
To prevent malicious uploads from interfering with AGChain\xspace's data, we implement a lightweight whitelist mechanism that consists of two function modifiers, \texttt{onlyOwner(address caller)} and \texttt{onlyWhitelist(address caller)}, where the parameter \texttt{address} is the invoking party's Ethereum account address.
By enforcing the \path{onlyWhitelist} \path{(msg.sender)} modifier to check the caller of \texttt{storeApp()}, we can guarantee that only an account in the whitelist can upload apps.
We also implement two contract functions to add or delete an account address from the whitelist, and they are enforced by the \texttt{onlyOwner(msg.sender)} modifier.
Note that the owner is our contract creator, and we gradually add each authorized server node into the whitelist.
By designing such a hierarchical whitelist, we can achieve effective access control to avoid malicious data injecting into AGChain\xspace.
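The hierarchical whitelist can be mirrored conceptually in Python. This is an illustrative sketch of the access-control logic only, not the deployed Solidity code, and all names are ours.

```python
class AppRegistry:
    """Conceptual mirror of the contract's hierarchical access control:
    an onlyOwner guard on whitelist updates, an onlyWhitelist guard on
    app uploads."""

    def __init__(self, owner):
        self.owner = owner          # the contract creator
        self.whitelist = set()      # authorized server addresses
        self.apps = []              # stored app metadata records

    def add_to_whitelist(self, caller, server):
        if caller != self.owner:            # onlyOwner(msg.sender)
            raise PermissionError("caller is not the contract owner")
        self.whitelist.add(server)

    def store_app(self, caller, metadata):
        if caller not in self.whitelist:    # onlyWhitelist(msg.sender)
            raise PermissionError("caller is not a whitelisted server")
        self.apps.append(metadata)

registry = AppRegistry(owner="0xOwner")
registry.add_to_whitelist("0xOwner", "0xServer1")
registry.store_app("0xServer1", {"package": "com.example.app"})
assert registry.apps == [{"package": "com.example.app"}]
```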
\subsection{IPFS Consortium Network for Distributed File Uploading and Downloading}
\label{sec:ipfsdesign}
To address the challenge C2, AGChain\xspace leverages IPFS gateways and crowdsourced server nodes to cache or back up apps so that they are truly distributed in the IPFS network.
With these gateways and servers, AGChain\xspace essentially forms an IPFS consortium network for distributed app access.
\textbf{Periodically caching app files at IPFS gateways.}
As mentioned in the challenge C2, an IPFS file stays only in the original IPFS node and is routed for access through content-addressing~\cite{IPFSContentAddressing}.
Once this node goes offline, the entire IPFS network can no longer access the file.
Fortunately, our experiment found that an IPFS gateway would cache the file for a certain period of time when there is a file access through the gateway.
We thus leverage this observation to intentionally mimic normal user requests for caching apps at IPFS gateways.
Specifically, we deploy a script in the server to send file requests through various IPFS gateways.
These requests are periodically sent before the IPFS garbage collector cleans our app files.
In this way, we guarantee that a copy or multiple copies of app files are always available in IPFS gateways.
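The caching script essentially enumerates gateway URLs of the form \texttt{<gateway>/ipfs/<hash>} and requests them periodically. A sketch follows; the gateway list and the IPFS hashes are illustrative, and the actual script also schedules its runs to beat the garbage collector.

```python
from itertools import product

# Public gateways follow the /ipfs/<hash> path convention;
# this list is illustrative, not exhaustive.
GATEWAYS = ("https://ipfs.io", "https://cloudflare-ipfs.com")

def refresh_urls(ipfs_hashes):
    """URLs the server requests periodically (before IPFS garbage
    collection runs) so each gateway keeps a warm copy of every app."""
    return [f"{gw}/ipfs/{h}" for gw, h in product(GATEWAYS, ipfs_hashes)]

urls = refresh_urls(["QmHashA", "QmHashB"])
assert "https://ipfs.io/ipfs/QmHashA" in urls
assert len(urls) == 4  # every gateway x every hash
```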
\textbf{Timely backing up apps at crowdsourced server nodes.}
To achieve truly distributed app storage, we ask each crowdsourced server to serve as an IPFS storage node and back up apps in our IPFS consortium network in a timely manner.
Each server node first uses the \texttt{ipfs daemon} command to run as an IPFS node.
It then runs a consortium synchronization script, which gets a list of IPFS hashes of the apps in AGChain\xspace and retrieves raw app files from the IPFS network.
To avoid being removed by IPFS garbage collection, the script pins these raw apps locally using the \texttt{ipfs pin} command.
In this way, we increase the data redundancy in AGChain\xspace and improve the distribution of app storage.
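The synchronization script can be sketched as generating one \texttt{ipfs pin add} invocation per on-chain IPFS hash, to be executed, e.g., via \texttt{subprocess}; the helper name is ours.

```python
def pin_commands(ipfs_hashes):
    """Commands a crowdsourced server runs (e.g., via subprocess.run)
    to pin every AGChain app locally, shielding the raw files from
    `ipfs repo gc`."""
    return [["ipfs", "pin", "add", h] for h in ipfs_hashes]

cmds = pin_commands(["QmHashA", "QmHashB"])
assert cmds[0] == ["ipfs", "pin", "add", "QmHashA"]
assert len(cmds) == 2
```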
\textbf{Identifying fast IPFS gateways for app downloading.}
Besides distributed app uploads, we also propose a mechanism to make app downloading distributed and fast.
Specifically, AGChain\xspace's front-end tests the RTT (Round-Trip Time) of public IPFS gateways, chooses the gateway with the lowest RTT, and downloads the raw app file from it.
This not only improves the performance of app downloading in IPFS but also prevents potential censorship of some IPFS gateways.
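The gateway selection can be conveyed by the following Python sketch (the actual front-end code differs): probe each public gateway once and take the one with the lowest measured RTT; function names are ours, and unreachable gateways rank last.

```python
import time
import urllib.request

def measure_rtt(url: str, timeout: float = 3.0) -> float:
    """One RTT probe against a gateway; unreachable gateways get
    infinite RTT so they are never selected."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout).close()
    except OSError:
        return float("inf")
    return time.monotonic() - start

def fastest_gateway(rtts: dict) -> str:
    """Pick the gateway with the lowest measured RTT."""
    return min(rtts, key=rtts.get)

# With pre-measured probes, selection is a simple arg-min:
probes = {"https://ipfs.io": 0.21, "https://cloudflare-ipfs.com": 0.08}
assert fastest_gateway(probes) == "https://cloudflare-ipfs.com"
```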
\section{Discussion}
\label{sec:discuss}
In this section, we discuss several potential improvements AGChain\xspace could integrate with in the near future.
\textbf{More user-friendly.}
In our current AGChain\xspace prototype, we require users to download apps according to their package names and version numbers.
A more user-friendly way is to let them input or search app names.
To do so, we can crawl the app page information from existing app markets and maintain a table of app names (in different languages) and their corresponding package names at the server side.
Since app names are not unique identifiers like package names, there is no need to store them in the smart contract.
By providing a more user-friendly search of apps, we also reduce the possibility of a usability risk that an adversary uses a slightly different package name to cheat users for downloading a fake app.
\textbf{Inviting developers.}
Although AGChain\xspace provides real-time app delegation for users to download apps just in time, it would be beneficial if AGChain\xspace already stored a large number of apps.
This also increases user-friendliness.
To do so, we plan to invite developers on Google Play to submit their apps to AGChain\xspace, which helps their apps reach more users who desire decentralized access.
Specifically, since each Google Play app page lists developers' email, we can crawl that information and send automatic emails to them.
To give developers more incentives, we can pay uploading gas fees for the first 10,000 developers who join our program and help them automatically upload their apps.
\textbf{More secure.}
While we have performed repackaging checks to detect repackaged apps, AGChain\xspace currently cannot detect the more general Android malware.
We plan to mitigate this issue by incorporating the malware scan of VirusTotal~\cite{VirusTotal}.
Specifically, we will use VirusTotal APIs to scan each uploaded app at the server side and save the URL of scanned reports as a part of app metadata into our smart contract.
In this way, users can check those scan reports during app downloading, and we will also give explicit warnings for suspicious apps.
\section{Evaluation}
\label{sec:evaluate}
In this section, we first experimentally evaluate the performance and gas costs of AGChain\xspace, and then empirically demonstrate its security effectiveness and decentralization.
\subsection{Performance}
\label{sec:performance}
\begin{table}[t!]
\caption{\small Average processing time introduced by AGChain\xspace.}
\vspace{-2ex}
\label{tab:performance}
\begin{adjustbox}{center}
\scalebox{0.85}{
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{Market} & APK & Normal & \multicolumn{4}{c}{Additional Processing Time (ms)} \\
\multirow{2}{*}{ID} & Size & Download & Check- & Repack- & IPFS & Overall \\
& (Mb) & Time (ms) & sum & aging & Upload & \% \\
\midrule
1 & 20.09 & 71063.2 & 820.1 & 377.1 & 340.7 & 2.16\% \\
2 & 27.26 & 10105.5 & 917.0 & 433.8 & 429.1 & 17.61\% \\
3 & 20.87 & 14819.2 & 1203.3 & 408.6 & 415.8 & 13.68\% \\
4 & 20.63 & 20090.9 & 1328.0 & 355.8 & 387.4 & 10.31\% \\
5 & 30.52 & 13854.8 & 990.1 & 444.9 & 393.2 & 13.20\% \\
6 & 25.29 & 21875.8 & 1870.7 & 369.8 & 396.1 & 12.05\%\\
7 & 23.86 & 17345.3 & 1013.0 & 414.3 & 432.7 & 10.72\% \\
\bottomrule
\end{tabular}
}
\end{adjustbox}
\end{table}
To fairly evaluate additional time introduced by AGChain\xspace, we use our server code to record both normal downloading time (part of step 3 in Fig.\xspace~\ref{fig:overall}) and AGChain\xspace's processing time (the rest of step 3 and steps 4 to 6; see Fig.\xspace~\ref{fig:overall}).
Note that we do not count step 8 as part of AGChain\xspace's overhead, because we simultaneously return results to users without waiting for the transaction to be confirmed.
In total, we test 140 different apps from seven markets (20 tests each) that require secure delegation.
Moreover, we perform these tests on different days.
Table~\ref{tab:performance} lists the average results of each tested market.
We first see that the normal downloading time depends mainly on the network quality between the app markets and our AWS server rather than on APK file size.
On top of this, AGChain\xspace introduces three additional processing steps:
(i) about one second for extracting and validating checksums;
(ii) $\sim$0.4s for performing repackaging checks;
and (iii) $\sim$0.4s for uploading apps to IPFS.
With these, the overall overhead introduced by AGChain\xspace is from 2.16\% to 17.61\%, with the median and average of 12.05\% and 11.39\%, respectively.
We thus conclude that AGChain\xspace's performance overhead is around 12\%, a reasonable cost for a blockchain-based system.
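Assuming the overall percentage in Table~\ref{tab:performance} is the additional processing time relative to the normal download time (which matches the table rows), the figures can be reproduced as follows:

```python
def overhead_percent(download_ms, checksum_ms, repack_ms, ipfs_ms):
    """Overall overhead: additional processing time relative to the
    normal market download time (column definitions as in the table)."""
    extra = checksum_ms + repack_ms + ipfs_ms
    return 100 * extra / download_ms

# Reproducing two rows of the table (markets 1 and 2):
assert round(overhead_percent(71063.2, 820.1, 377.1, 340.7), 2) == 2.16
assert round(overhead_percent(10105.5, 917.0, 433.8, 429.1), 2) == 17.61
```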
\subsection{Gas Costs}
\label{sec:gas}
\begin{figure}[t!]
\begin{adjustbox}{center}
\includegraphics[width=0.38\textwidth]{NewGasFee}
\end{adjustbox}
\caption{\small CDF plot of gas fees per uploading transaction.}
\label{fig:GasFee}
\end{figure}
During the 140 performance tests, we also collect their corresponding gas fees in Ether (tested in the Ethereum Rinkeby network).
One Ether was around 1,000 USD (a historically high level) at the time of our submission.
Fig.\xspace\ref{fig:GasFee} shows the CDF (cumulative distribution function) plot per app uploading in AGChain\xspace.
We can see that over 95\% of the gas fees are in the range of 0.00008245 Ether (0.082 USD) to 0.00008773 Ether (0.088 USD).
Only four tests consumed a gas fee over 0.0001 Ether, ranging from 0.00010374 to 0.00012461 Ether.
The average of all 140 gas fees is 0.00008466 Ether (0.085 USD).
Since only uploads in AGChain\xspace cost gas and one upload can serve all future downloads,
we believe that such a gas cost is acceptable.
To further reduce gas in the future, we will explore other smart contract platforms (e.g., the gas-free EOS \cite{EOS} and Hyperledger Fabric \cite{Hyperledger}) beyond the currently widely-used Ethereum.
\subsection{Security}
\label{sec:security}
Since there are no real-world attacks against AGChain\xspace, we mimic a MITM (Man-In-The-Middle) attack and a repackaging attack to demonstrate AGChain\xspace's security effectiveness.
\begin{figure*}[t!]
\begin{adjustbox}{center}
\includegraphics[width=1.0\textwidth]{repackTable}
\end{adjustbox}
\caption{\small A screenshot (with the surrounding table added) to demonstrate a repackaged app successfully detected by AGChain\xspace.}
\label{fig:repackDemo}
\end{figure*}
\begin{figure}[t!]
\begin{adjustbox}{center}
\includegraphics[width=0.38\textwidth]{MITMdemo}
\end{adjustbox}
\caption{\small A screenshot of AGChain\xspace stopping a MITM attack.}
\label{fig:MITMdemo}
\end{figure}
\textbf{Preventing a MITM attack.}
To mimic the MITM attack, we originally tried to control the network traffic of our AWS server using the mitmproxy tool~\cite{MITM} (since we cannot redirect app markets' traffic as real adversaries would).
However, it turned out that AWS disallows this.
Hence, we have to redirect the target app page URL directly in our server code.
Specifically, when a user inputs this particular Tudou Video URL (\url{http://www.appchina.com/app/com.tudou.android}) in AGChain\xspace, the actual app download URL will become the URL of a similar yet fake app (\url{https://github.com/apkchain2020/RepackagedAPK/blob/master/com.tudouship.android_4159.apk}) we prepared.
During this process, AGChain\xspace finds that the MD5 retrieved from the original app page is f5580d6a58bb9d97c27929f1a9c585f1, while the MD5 calculated from the downloaded APK file is a05e5187f4e9eb434bc3bbd792e35c54.
Since the two checksums are different, AGChain\xspace shows an alert box to the user, as shown in Fig.\xspace~\ref{fig:MITMdemo}, and does not allow this app to be uploaded.
\textbf{Detecting a repackaged app.}
To mimic the repackaging attack, we first select a target repackaged app with ground truth from \cite{Repack19} that also appears in Chinese app markets.
Our choice is the SuperSu rooting app on the Baidu market (\url{https://shouji.baidu.com/software/11569169.html}).
We then upload this original app to AGChain\xspace and find that it ``passes'' the repackaging check, as shown in Fig.\xspace~\ref{fig:repackDemo}.
We further upload the repackaged app via the URL of \url{https://github.com/apkchain2020/RepackagedAPK/blob/master/eu.chainfire.supersu_repack.apk}.
This time, AGChain\xspace finds that the certificate serial number is different from that in our certificate ID database pre-generated from official apps.
Indeed, Fig.\xspace~\ref{fig:repackDemo} also shows that the two serial numbers are different.
Hence, for the repackaged app, it ``fails'' the check.
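The certificate-based check in this demo boils down to a lookup in the pre-generated certificate ID database. A minimal sketch (the package names and serial numbers below are made up for illustration):

```python
# Hypothetical certificate-ID database mapping a package name to the serial
# number of its official developer certificate (serials here are invented).
CERT_DB = {"eu.chainfire.supersu": "0x2e1007c5"}

def passes_repackaging_check(package: str, cert_serial: str) -> bool:
    """Flag an app as repackaged when its certificate serial number
    differs from the pre-generated one for the same package."""
    expected = CERT_DB.get(package)
    if expected is None:
        return True  # no ground-truth certificate; cannot flag
    return cert_serial == expected
```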
\subsection{Decentralization}
\label{sec:decentralization}
During the 140 performance tests conducted at different times, we also identify IPFS gateways in over 20 different locations worldwide, which demonstrates the decentralization of AGChain\xspace.
Table~\ref{tab:IPFSgateways} lists the IPFS gateways we have identified.
We can see that they are distributed in 21 different locations worldwide.
Around half of the gateways (ten) are located in the US.
This is probably because many IPFS nodes run on cloud servers, which are mainly provided by US companies.
Outside the US, Europe holds seven of the remaining 11 gateways.
In comparison, there are only three gateways in Asia, and the remaining one is in Canada.
However, we believe that with the recent popularity of FinTech in Asia~\cite{AsiaFinTech20}, more and more IPFS nodes will be deployed locally in Asia for faster connections.
Additionally, according to the RTT result in Table~\ref{tab:IPFSgateways}, nearby IPFS gateways usually have small RTTs, which suggests the value of identifying fast IPFS gateways in AGChain\xspace for app downloading (see Section\xspace\ref{sec:ipfsdesign}).
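For reference, the fast-gateway identification idea (Section\xspace\ref{sec:ipfsdesign}) can be sketched as follows; approximating the RTT by the TCP-connect time is our simplifying assumption, not necessarily AGChain\xspace's exact probing method:

```python
import socket
import time

def measure_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate the RTT as the time to open a TCP connection."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable or censored gateway

def fastest_gateways(hosts, k=3, probe=measure_rtt):
    """Rank candidate IPFS gateways by RTT and keep the k fastest."""
    return sorted(hosts, key=probe)[:k]
```

Unreachable gateways sort last, so censored mirrors are filtered out automatically.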
\begin{table}[t!]
\caption{\small IPFS gateways identified in 21 different locations.}
\vspace{-2ex}
\label{tab:IPFSgateways}
\begin{adjustbox}{center}
\scalebox{0.8}{
\begin{tabular}{cccc}
\toprule
Gateway Domain & IP Address & Location & RTT (s) \\
\midrule
ipfs.jbb.one & 47.52.139.252 & Hong Kong SAR & 0.04 \\
ipfs.smartsignature.io & 13.231.230.12 & Tokyo, Japan & 0.07 \\
10.via0.com & 104.27.129.45 & San Francisco, U.S.A & 0.13 \\
ipfs.kavin.rocks & 104.28.5.229 & Dallas, U.S.A & 0.14 \\
ipfs.runfission.com & 34.233.130.24 & Ashburn, U.S.A & 0.25 \\
ipfs.k1ic.com & 39.101.143.85 & Beijing, China & 0.51 \\
ipfs.2read.net & 195.201.149.81 & Gunzenhausen, DE & 0.55 \\
ipfs.drink.cafe & 98.126.159.6 & Orange, U.S.A & 0.56 \\
gateway.pinata.cloud & 165.227.144.202 & Frankfurt, Germany & 0.56 \\
ipfs.telos.miami & 138.68.29.104 & Santa Clara, U.S.A & 0.57 \\
hardbin.com & 174.138.8.194 & Amsterdam, NL & 0.57 \\
ipfs.fleek.co & 44.240.5.243 & Portland, U.S.A & 0.57 \\
ipfs.greyh.at & 35.208.63.54 & Council Bluffs, U.S.A & 0.69 \\
gateway.temporal.cloud & 207.6.222.55 & Surrey, Canada & 0.70 \\
ipfs.azurewebsites.net & 13.66.138.105 & Redmond, U.S.A & 0.72 \\
ipfs.best-practice.se & 193.11.118.5 & Eskilstuna, Sweden & 0.73 \\
ipfs.overpi.com & 66.228.43.184 & Cedar Knolls, U.S.A. & 0.74 \\
jorropo.net & 163.172.31.60 & Paris, France & 0.76 \\
jorropo.ovh & 51.75.127.200 & Roubaix, France & 0.76 \\
ipfs.stibarc.com & 74.140.55.163 & Delaware, U.S.A & 0.81 \\
ipfs.sloppyta.co & 51.68.154.205 & Warsaw, Poland & 0.83 \\
\bottomrule
\end{tabular}
}
\end{adjustbox}
\end{table}
\section{Implementation}
\label{sec:implementation}
We have implemented a public AGChain\xspace prototype (\url{https://www.agchain.ltd/}) on the Ethereum blockchain.
Fig.\xspace~\ref{fig:homepage} shows a screenshot of its front-end homepage.
In this section, we summarize AGChain\xspace's implementation details.
Our current AGChain\xspace prototype consists of a total of 2,485 LOC (lines of code), excluding all the library code used.
Table~\ref{tab:LineOfCode} lists a breakdown of AGChain\xspace's LOC across different components and programming languages.
\begin{table}[t!]
\caption{\small A breakdown of LOC (lines of code) in AGChain\xspace.}
\vspace{-2ex}
\label{tab:LineOfCode}
\begin{adjustbox}{center}
\scalebox{0.85}{
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|}
\cline{2-6}
\multicolumn{1}{c|}{} & Front-end & Server & Contract & IPFS & \multirow{2}{*}{AGChain\xspace} \\
\multicolumn{1}{c|}{} & (F) & (S) & (C) & (I) & \\
\hline
JavaScript & 781 & & (302 in F)$^*$& (80 in F) & 781 \\
\hline
Java & & 849 & & (36 in S) & 849 \\
\hline
Python & & 723 & (445 in S)& & 723 \\
\hline
Solidity & & & 51 & & 51 \\
\hline
CSS & 81 & & & & 81 \\
\hline
Sum & 862 & 1,572 & 51 + (747) & (116)& 2,485 \\
\hline
\end{tabular}
\begin{tablenotes}
\item $^*$This means that 302 lines of code in front-end are related to smart contract. \hspace*{1.1ex}Other brackets, such as (445 in S) and (80 in F), are similar.
\end{tablenotes}
\end{threeparttable}
}
\end{adjustbox}
\end{table}
\begin{figure}[t!]
\begin{adjustbox}{center}
\includegraphics[width=0.4\textwidth]{newHomepage}
\end{adjustbox}
\caption{\small A screenshot of AGChain\xspace's front-end homepage (cropped). Besides uploading and downloading apps, users can click the ``Explore App'' button to browse the apps stored in AGChain\xspace. The bottom part outputs the metadata of each new upload, which helps users immediately download the app from AGChain\xspace using the ``Download APK'' button.}
\label{fig:homepage}
\end{figure}
\textbf{Front-end.}
We implement the front-end user interface using the React web framework~\cite{React}.
Hence, the HTML and CSS code is minimal, and some HTML contents are also dynamically generated using JavaScript.
Besides user interfaces, we write 302 JavaScript LOC on top of the web3.js library~\cite{Web3JS} to query our smart contract for retrieving IPFS hashes.
To execute IPFS commands in JavaScript for app downloading, we write an additional 80 JavaScript LOC based on the js-ipfs library~\cite{JsIPFS}.
In total, the front-end is implemented in 862 LOC.
\textbf{Server node.}
We implement our server code in a total of 1,572 LOC and run it on the AWS (Amazon Web Services).
Specifically, we write 849 Java LOC to handle requests from the front-end, securely download apps from existing markets, perform repackaging checks, and upload APK files to IPFS.
Moreover, we write 445 Python LOC to leverage web3 Python APIs to interact with the smart contract.
Lastly, for APK file parsing (used in repackaging checks), we leverage the Androguard library~\cite{Androguard} and write 278 Python LOC on top of it.
One major task of the server node is to retrieve apps from the market URLs specified by users.
However, such URLs are often only the page URLs (e.g., \url{https://shouji.baidu.com/software/11569169.html}) instead of direct download URLs.
To automatically extract download URLs, we leverage the insight that each market has a fixed transition pattern from a page URL to a download URL.
Hence, we can pre-analyze those markets to obtain their download URL patterns.
Most markets' download URLs can be directly extracted from their HTML tags, e.g., the aforementioned Baidu market example.
A few markets, e.g., Meizu and AnZhi, require calculating the download URLs from their JavaScript code.
In our current prototype, we have analyzed the download patterns of all seven markets that need AGChain\xspace's secure app download delegation.
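A minimal sketch of this pattern-based URL transformation is shown below; the Baidu pattern is hypothetical and only illustrates the idea of pre-analyzed per-market rules:

```python
import re

# Illustrative transition patterns (page URL -> download URL). The Baidu
# pattern below is invented; the real patterns were obtained by
# pre-analyzing each market's pages.
PATTERNS = {
    "shouji.baidu.com": (r"/software/(\d+)\.html",
                         "https://shouji.baidu.com/download/{0}.apk"),
}

def to_download_url(page_url: str):
    """Map a market page URL to its direct APK download URL."""
    for domain, (pattern, template) in PATTERNS.items():
        if domain in page_url:
            match = re.search(pattern, page_url)
            if match:
                return template.format(match.group(1))
    return None  # market not yet supported
```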
\textbf{Smart contract.}
We implement the smart contract with 51 LOC in the Solidity language, which was reduced from 128 LOC in the earlier version since we no longer define in-contract data structures (see Section\xspace\ref{sec:contractdesign}).
Therefore, our smart contract mainly defines a list of functions, e.g., \texttt{storeApp()} for uploading app metadata to the blockchain.
In particular, to avoid the smart contract being misused by unauthorized parties, we specify that only whitelisted server nodes can invoke the \texttt{storeApp()} function, by adding a function modifier that checks the transaction sender.
However, this also prevents the front-end's \texttt{estimateGas()} web3 API from estimating gas (see Section\xspace\ref{sec:chargeFees}).
To address this issue, we create an additional function called \texttt{storeApp\_estimate()} that duplicates \texttt{storeApp()}'s functionality but does not execute data push operations.
This function is executed locally by the EVM rather than through AGChain\xspace transactions.
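The interplay between the whitelist modifier and the estimation workaround can be modeled in a few lines. This is a Python model of the Solidity logic with hypothetical addresses; we assume \texttt{storeApp\_estimate()} omits the modifier so that any caller can estimate gas:

```python
WHITELISTED = {"0xServerNode1"}  # hypothetical server-node addresses
app_log = []                     # models the contract's log-based storage

def store_app(sender: str, ipfs_hash: str) -> None:
    """Models storeApp(): the whitelist check plays the role of the
    Solidity function modifier on the transaction sender."""
    if sender not in WHITELISTED:
        raise PermissionError("sender is not a whitelisted server node")
    app_log.append(ipfs_hash)  # in Solidity, an event/log push

def store_app_estimate(sender: str, ipfs_hash: str) -> None:
    """Models storeApp_estimate(): same body minus the data push (and,
    in this sketch, minus the modifier), so estimateGas() succeeds."""
    pass
```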
\textbf{IPFS module.}
Since we do not modify the IPFS network, IPFS-related code is implemented in other components.
For example, we write 80 JavaScript LOC for the front-end to download apps from IPFS and 36 Java LOC (without counting Java file operation code) for the server node to upload apps to IPFS.
Additionally, we need to activate the IPFS node on the server by running the \texttt{ipfs daemon} command.
\section{Introduction}
\label{sec:intro}
Smartphones are getting increasingly popular around the world.
This popularity is (partially) driven by the large number of feature-rich mobile apps in various app markets.
Apart from the official marketplaces like Google Play and Apple Store, third-party app markets, e.g., Amazon AppStore and Baidu Market, have emerged as an important supplement to the official app markets.
They provide more app options for Android users and are also popular (especially in China).
Nevertheless, all these official and third-party app markets adopt a centralized design, which falls short in several aspects (detailed further in \S\ref{sec:motivate}):
\begin{description}
\item[P1:] \textit{No transparent listing or permanent access.}
Firstly, centralized app markets could enforce strict listing policies and delist the apps as they wish.
For example, 13.7\% (1,146) of the 8,359 popular apps we measured in late 2018 have been delisted (by either the market or developers themselves) after one year in late 2019.
\item[P2:] \textit{No world-wide access and even censored access.}
Secondly, many apps on Google Play and Apple Store are available only in certain countries.
Moreover, Google Play itself is being censored in some regions, causing no easy app access for the users in those affected areas.
\item[P3:] \textit{Insufficient security guarantees.}
Thirdly, we find that a significant portion of third-party Android app markets do not provide secure app downloading.
For example, half of the top 14 Chinese app markets download apps via the insecure HTTP, such as the widely used Baidu and 360 app markets.
Moreover, even the markets with secure downloading generally do not check for app repackaging~\cite{DroidMOSS12} as official markets do.
\end{description}
While some of these problems (P1 to P3) individually look\footnote{Since general users have no paid VPN and the security status of Chinese app markets is unlikely to change in a short period of time, those technical means may not work as expected.} resolvable by technical means, like asking users to use VPNs (for P2) and forcing markets to use HTTPS (for P3), these means do not address the fundamental limitation of current app markets --- \textit{the lack of permanent and distributed (and even secure) app access}.
Moreover, the first problem, P1, is technically unresolvable except by periodic backup.
As a result, users have to worry that a certain app could ``disappear'' from the markets one day.
A famous example is that the Google Play app is no longer available on Huawei phones due to the Huawei ban~\cite{HuaweiBan}.
Even apps legitimately taken down by the markets, such as malicious apps, are still useful to certain users like security researchers.
Addressing P1 requires markets to guarantee permanent app access; however, this is impossible since each market is a centralized entity.
A natural thought for tackling these problems as a whole is to build a decentralized app market from scratch, similar to typical blockchain-based works~\cite{IoTDataBlockchain17, PubChain19, DClaims19, BBox20}.
However, this kind of design also brings a new problem because massive apps are already stored in existing markets.
Without enough apps, a decentralized market is meaningless for end users.
In this paper, we balance all these considerations and propose a novel architecture that combines the advantages of traditional IT infrastructure and decentralized blockchain.
Specifically, we propose a blockchain-based gateway that acts as a bridge between end users and app markets.
We call this novel app gateway AGChain\xspace (\underline{A}pp \underline{G}ateway \underline{Chain}).
With AGChain\xspace, users can leverage the (permanent, distributed, and secure) delegation of AGChain\xspace for indirect app downloads if they worry about the direct app downloads from existing markets.
During this process, existing markets still provide services as usual except for the apps that have been delisted.
More specifically, if a user wants to delegate an app download from an existing market URL\footnote{Besides apps on markets, we also support users to upload custom apps via GitHub URLs, e.g., https://github.com/agchain/Test/blob/master/A.apk, considering that both Google Play and Apple Store do not allow certain apps like third-party market apps to be listed.}, she can input that URL to AGChain\xspace and AGChain\xspace will automatically (i) retrieve the corresponding app from its original market, (ii) upload the raw app file to a decentralized storage, and (iii) save the app file index and important metadata on chain for controlling all future downloads directly from AGChain\xspace.
After delegation, the user (and other users) can permanently download the app from AGChain\xspace in a distributed manner.
We choose to use smart contract~\cite{buterin2014next} for programming our logic on blockchain and IPFS (Interplanetary File System)~\cite{IPFS14} for our decentralized storage.
However, instead of straightforwardly using them as in other blockchain- and IPFS-based systems~\cite{IoTDataBlockchain17, PubChain19, DClaims19, BBox20}, we identify and overcome two previously underestimated challenges.
First, we significantly reduce gas costs by proposing a set of design-level mechanisms (as compared with code-level gas optimizations~\cite{GASPER17,MadMax18,GASOL20}) for gas-efficient app storing and retrieving on chain.
Notably, we transform the typical in-contract data structures to transaction log-based contract storage, which reduces gas by a factor of 53 per operation (20,000 vs. 375 Gas).
Second, we surprisingly found that IPFS is not distributed by default --- files stay only in the original IPFS node and are cached at IPFS gateways only when there are requests.
To make IPFS truly distributed, we build an IPFS consortium network that periodically caches app files at IPFS gateways and backs up apps in a timely manner at crowdsourced server nodes.
We also propose a mechanism to identify fast and uncensored IPFS gateways for distributed app downloading.
To further make AGChain\xspace secure and sustainable, we still need to address three AGChain\xspace-specific system challenges.
First, to securely retrieve apps from existing app markets without the network security guarantee, we extract and validate the checksums that are potentially embedded in apps' market pages, and also achieve alternative security when there are no checksums.
Second, to avoid repackaged apps from polluting our market, we propose a mechanism that exploits a lightweight yet effective app certificate field to detect repackaged apps, and experimentally validate it using 15,297 repackaged app pairs.
Third, as a monetary incentive for crowdsourced server nodes, we design a mechanism of charging upload fees to maintain the platform self-sustainability.
We have implemented a public AGChain\xspace (\url{https://www.agchain.ltd/}) on the widely-used Ethereum blockchain with 2,485 lines of code in Solidity, Python, Java, and JavaScript.
To evaluate AGChain\xspace, we not only empirically demonstrate its security effectiveness (via preventing man-in-the-middle and repackaging attacks against our app delegation) and decentralization (IPFS gateways discovered in 21 different locations worldwide), but also experimentally measure the performance and gas costs.
On average, AGChain\xspace introduces 12\% performance overhead in 140 app tests from seven app markets, and costs 0.00008466 Ether (around 0.085 USD) per app upload.
We further provide a batch upload mechanism to save gas by a factor of between 2.02 (for a batch of 10 uploads) and 2.65 (for a batch of 100 uploads).
Besides minimal gas costs for app uploads, AGChain\xspace does \textit{not} consume gas for app downloads since they do not change contract state.
To sum up, this paper makes the following contributions:
\begin{itemize}
\item (A novel gateway design for app markets)
We propose a blockchain-based gateway to enable permanent, distributed, and secure app delegation for massive apps stored in existing markets (and custom apps via GitHub URLs).
Our idea of combining traditional IT infrastructure and decentralized blockchain opens a new door for better blockchain system design in the future.
\item (Addressing design and system challenges)
We propose mechanisms to achieve gas-efficient smart contract and distributed IPFS design.
We also overcome three system challenges that are specific to AGChain\xspace.
\item (Implementation and extensive evaluation)
We have implemented a publicly available prototype, AGChain\xspace, on Ethereum and conducted extensive evaluation on its performance, gas costs, security, and decentralization.
\end{itemize}
\section*{Acknowledgements}
This work is partially supported by a direct grant (ref. no. 4055127) from The Chinese University of Hong Kong.
\bibliographystyle{ACM-Reference-Format}
\section{Motivation and Background}
\label{sec:motivate}
In this section, we motivate the need of permanent, distributed, and secure app access by pinpointing pitfalls in existing app markets.
We then present the background required for understanding our blockchain-based design.
\subsection{The Delisted Apps on Google Play}
Centralized app markets could enforce strict listing policies and delist the apps as they wish.
While it is known that Google Play and Apple Store do not allow apps like third-party market apps to be listed, it is unclear how many uploaded apps were once delisted.
We estimate this percentage by specifically measuring the number of delisted apps in a set of 8,359 apps that we initially collected from Google Play in November 2018.
These apps are all popular, with one million installs each.
We then re-crawled them in the same country after one year in November 2019 and found that as high as 13.7\% (1,146) of them had been delisted in this period of time.
Although some of them could be intentionally withdrawn by their developers, we believe that a considerable portion of the total 1,146 delisted apps was due to the violation of Google Play's recent Developer Distribution policies~\cite{GooglePlayChange}.
\subsection{App Cases of No World-wide Access}
Besides no permanent app access, a more easily observable pitfall is that many apps on Google Play and Apple Store are available only in certain countries.
For example, the TVB media app~\cite{TVBban} and the popular Hulu app~\cite{Huluban} are restricted within Hong Kong and US/Japan, respectively.
Besides country-specific app control, many English-based apps are not available in Chinese app markets (and vice versa).
As a result, users who need to download apps in other countries or languages have to switch their iTunes accounts to other countries~\cite{APPLEbypass} or use VPN to bypass Google Play's checking~\cite{PLAYbypass}.
Note that the lack of world-wide app access is caused not only by markets' country-specific control but also by network-side censorship.
For example, Google Play itself is being censored in some regions~\cite{GooglePlayBan}, causing no app access for affected users.
\subsection{Insecure App Downloading in Third-party App Markets}
\label{sec:insecureDownload}
Lastly, an unexpected yet serious pitfall is that not all app markets provide secure app downloading as Google Play and Apple Store do.
Indeed, we find that half of the top 14 Chinese Android app markets still use insecure app downloading via HTTP, which makes the injection of a repackaged app~\cite{DroidMOSS12} possible through public WiFis~\cite{PublicWiFi} or network hijacking~\cite{BGPHijacking13}.
This causes a severe security risk, since around one billion Internet users in China have to use third-party app markets to download and install apps due to the Google Play ban.
\begin{table}[t!]
\caption{The measurement result of app download security in the top 16 Chinese Android app markets~\cite{ChineseAppMarkets18}.}
\vspace{-2ex}
\label{tab:3rdmarkets}
\begin{adjustbox}{center}
\scalebox{0.9}{
\begin{tabular}{ccc}
\toprule
Market Name & Company Type & Secure App Downloading?\\
\midrule
Tencent Myapp & Web Co. & HTTPS \green{\ding{52}} \\
Baidu Market & Web Co. & HTTP \red{\ding{55}} \\
360 Market & Web Co. & HTTP \red{\ding{55}} \\
\midrule
OPPO Market & HW Vendor & No Website Download\\
Huawei Market & HW Vendor & HTTPS \green{\ding{52}} \\
Xiaomi Market & HW Vendor & HTTPS \green{\ding{52}} \\
Meizu Market & HW Vendor & HTTP \red{\ding{55}} \\
Lenovo MM & HW Vendor & HTTP \red{\ding{55}} \\
\midrule
HiApk & Specialized & Cannot Find the Website \\
Wandoujia & Specialized & HTTPS \green{\ding{52}} \\
PC Online & Specialized & HTTPS \green{\ding{52}} \\
LIQU & Specialized & HTTPS \green{\ding{52}} \\
25PP & Specialized & HTTPS \green{\ding{52}} \\
App China & Specialized & HTTP \red{\ding{55}} \\
Sougou & Specialized & HTTP \red{\ding{55}} \\
AnZhi & Specialized & HTTP \red{\ding{55}} \\
\bottomrule
\end{tabular}
}
\end{adjustbox}
\end{table}
Table~\ref{tab:3rdmarkets} shows our measurement result of the top 16 Chinese Android app markets that were selected by a recent app market study~\cite{ChineseAppMarkets18}.
Specifically, we measure whether these markets use HTTPS as their medium to host app downloads.
Note that some market websites may host web content over HTTPS but provide the actual app downloading only over plain HTTP.
We distinguish such a situation by inspecting the app downloading traffic \textit{only}.
As shown in Table~\ref{tab:3rdmarkets}, half of the 14 Chinese app markets (two of the 16 do not provide web-based app downloading) use insecure HTTP downloading, including markets from Internet giants (e.g., Baidu), hardware smartphone vendors (e.g., Meizu and Lenovo), and specialized app markets (e.g., AnZhi and App China).
\subsection{Relevant Technical Background}
\label{sec:background}
To address the pitfalls mentioned above, we propose a novel blockchain-based architecture that leverages Ethereum and IPFS.
We now introduce the relevant background as follows.
\textbf{Blockchain.}
Technically, a typical blockchain is a public and distributed ledger.
It records transactions that are immutable, verifiable, and permanent~\cite{Blockchain16}.
Therefore, blockchain can be utilized as a decentralized database.
The trust among different network nodes is guaranteed by the so-called \textit{consensus} (e.g., Proof of Work) instead of the authority of a specific institution.
Consensus is the key for all nodes on a blockchain to maintain the same ledger in a way that the authenticity could be recognized by each node in the network.
\textbf{Ethereum.}
Ethereum~\cite{buterin2014next} is the second largest blockchain system and is a widely-used smart contract platform.
A smart contract is a contract that is programmed in advance with a sequence of rules and regulations and is self-executing.
Solidity is the primary language for programming smart contracts on Ethereum.
In particular, it is a Turing-complete language, which means that developers can, in theory, implement arbitrary functionality in smart contracts.
To prevent denial-of-service attacks, users need to consume gas fees to send transactions on Ethereum.
The gas fees are paid in Ethereum's native cryptocurrency called Ether (or ETH).
\textbf{IPFS.}
IPFS (Interplanetary File System)~\cite{IPFS14} is a peer-to-peer file sharing system, where files are stored in a distributed way and routed using content addressing~\cite{curran2018interplanetary}.
IPFS was proposed because the storage of large files on a blockchain is inefficient and costly.
Specifically, all nodes in a blockchain network need to endorse the entire ledger and synchronize the files stored.
As a result, the unnecessary redundancy leads to a huge waste of storage, and the latency of ledger synchronization also increases significantly.
To address this limitation, we can store data only in the IPFS storage nodes and keep the unique and permanent IPFS address (called IPFS hash) in the blockchain network.
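This division of labor can be illustrated with a toy model (our own sketch; real IPFS addresses are multihash CIDs rather than raw SHA-256 hex digests):

```python
import hashlib

ipfs_store = {}  # models IPFS: content-addressed blob storage (off chain)
chain_log = []   # models the blockchain: only small, immutable metadata

def ipfs_add(data: bytes) -> str:
    """Content addressing: the key is derived from the data itself.
    (Real IPFS uses a multihash CID, sketched here as SHA-256 hex.)"""
    cid = hashlib.sha256(data).hexdigest()
    ipfs_store[cid] = data
    return cid

def publish(apk: bytes, name: str) -> str:
    """Store the raw file off chain and keep only its hash on chain."""
    cid = ipfs_add(apk)
    chain_log.append({"name": name, "ipfs_hash": cid})
    return cid
```

The on-chain record stays tiny and fixed-size regardless of the APK's size, which is exactly why large files belong in IPFS.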
\section{Related Work}
\label{sec:related}
AGChain\xspace is mostly related to several recent works~\cite{IoTDataBlockchain17, PubChain19, DClaims19, BBox20} that also leveraged the blockchain and IPFS technology to build decentralized systems in different domains.
For example, PubChain~\cite{PubChain19} is a decentralized publication platform, which saved paper metadata in the blockchain layer and stored raw paper files in IPFS.
In particular, it designed an incentive mechanism called PubCoin to reward the participants through a process we summarized as ``publishing or reviewing as mining.''
Another paper on arXiv, DClaims~\cite{DClaims19}, presented a censorship-resistant service that uses decentralized web annotations to disseminate information on the Internet.
Similar to AGChain\xspace, it used Ethereum as the blockchain platform and integrated IPFS as the backend data storage.
Since web annotating can be very frequent among a large number of users, DClaims built a small network of nodes to assemble multiple blockchain transactions and broadcast them together, so that it could reduce the average cost of each transaction.
Besides these works, Shafagh et al.~\cite{IoTDataBlockchain17} published a pioneering study on integrating blockchain and IPFS, which was inspired by the Blockstack~\cite{Blockstack16} system's four-layer design, namely blockchain, virtualchain, routing, and storage layers.
Compared with these related studies, AGChain\xspace is unique in the following four aspects:
Firstly, unlike other blockchain systems that ask users to choose between the existing IT infrastructure and theirs, we do not aim to replace existing app markets with AGChain\xspace.
Instead, we designed AGChain\xspace to be a gateway that not only brings permanent app delegation to end users but also makes use of massive apps in existing markets.
We believe that such a design opens a new direction on combining the advantages of traditional IT infrastructure and decentralized blockchain.
Secondly, we significantly reduced gas costs in AGChain\xspace by proposing a set of design-level mechanisms.
In contrast, none of the aforementioned related works~\cite{IoTDataBlockchain17, PubChain19, DClaims19, BBox20} tried to do so.
There are other works on gas optimization, but they focus only on code-level optimizations.
Specifically, GASPER~\cite{GASPER17} identified gas-costly smart contract coding patterns and summarized them into two categories: loop-related patterns and useless code.
MadMax~\cite{MadMax18} leveraged control- and data-flow analysis of smart contracts' bytecode to detect the gas-related vulnerabilities, including unbounded mass operations, non-isolated external calls, and integer overflows.
Besides these two detection works, GASOL~\cite{GASOL20} introduced a gas optimization approach that replaces multiple accesses to the same global storage data with accesses to a copy kept in local memory.
An access to local memory (3 Gas per access) costs much less than an access to storage data (each write access costs 20,000 Gas in the worst case and 5,000 Gas in the best case).
However, such an optimization is still at the code level and designed only for specific gas-costly patterns.
Thirdly, none of the aforementioned related works~\cite{IoTDataBlockchain17, PubChain19, DClaims19, BBox20} mentioned the undistributed problem of IPFS, where IPFS files are cached at other nodes only when a node requests the file.
Besides our attempt at this problem in Section\xspace\ref{sec:ipfsdesign}, the IPFS designers themselves have also tried to address it.
Recently, they have launched Filecoin~\cite{Filecoin}, an incentive token mechanism for encouraging peers in the IPFS network to stay online and take the responsibility to store files.
However, Filecoin requires users to pay for file storage, and the entire process of storing a 1MB file takes five to ten minutes in the current Filecoin network~\cite{Filecoin}.
Due to these limitations, we built our own IPFS consortium network that leverages IPFS gateways and crowdsourced server nodes to periodically back up files.
Lastly, we encountered three context-specific challenges that are unique to AGChain\xspace, as already explained in Section\xspace\ref{sec:sustain}.
\section{Making It Secure \& Sustainable}
\label{sec:sustain}
So far, we have designed AGChain\xspace to be permanent and distributed through contract-based distribution and IPFS-based storage.
However, to make it secure and sustainable, we still need to address three AGChain\xspace-specific system challenges:
\begin{description}
\item[C3:] \textit{How to securely retrieve apps from app markets without a network security guarantee?}
Recall that our server needs to retrieve apps from existing markets before uploading them to IPFS.
This is straightforward for markets with HTTPS but difficult for those using insecure HTTP (see Section\xspace\ref{sec:insecureDownload}) because there is no underlying network security guarantee.
To address this, we propose two modes of secure app retrieval in Section\xspace\ref{sec:secureRetrieval}.
\item[C4:] \textit{How to avoid repackaged apps from polluting our market?}
Addressing challenge C3 guarantees that the retrieved app is the same as the one in its original app market.
However, if the market is a third-party app market, the app might be already repackaged before our (secure) retrieval.
Hence, we need a mechanism, as proposed in Section\xspace\ref{sec:repackageDetection}, to detect repackaged apps~\cite{DroidMOSS12} that are different from their official Google Play versions.
\item[C5:] \textit{How to make AGChain\xspace self-sustainable?}
As mentioned in Section\xspace~\ref{sec:overall}, each crowdsourced server node consumes computer resources and gas fees for app uploading in AGChain\xspace.
This is unsustainable if there is no monetary mechanism to incentivize these nodes.
Since uploaders benefit from AGChain\xspace (e.g., by advertising their apps or permanently storing an app for security research), it is reasonable to charge them an upload fee to pay the crowdsourced servers.
We will present this mechanism in Section\xspace\ref{sec:chargeFees}.
With upload fees, we can also prevent spammers from misusing AGChain\xspace.
\end{description}
\subsection{Secure App Retrieval from (Existing) Unprotected Markets}
\label{sec:secureRetrieval}
In this subsection, we present two methods that collectively achieve secure app retrieval even for those unprotected markets, e.g., seven markets using HTTP downloading in Table~\ref{tab:3rdmarkets}.
\textbf{Secure app downloading via checksums.}
We find that although those seven markets use HTTP to download apps, most of them allow users to browse app pages via HTTPS (e.g., \url{https://zhushou.360.cn/detail/index/soft\_id/95487}) and also allow us to extract app APK file checksums (e.g., MD5) from those pages' HTML source code.
For example, three app markets (Baidu, 360, and AppChina) directly embed the checksums in their app download URLs.
One app market, Lenovo MM, embeds the checksum in its app HTML page's \texttt{<script>} data section.
Hence, by analyzing these markets' app HTML pages, we can securely obtain app checksums and further compare them with the calculated checksums of app APK files we retrieve via HTTP.
If they match, we conclude that the retrieved app is not replaced by adversaries.
Furthermore, although Sogou market does not provide the checksum information, we find that its HTTP download URL can be directly transformed into an HTTPS version.
As a result, we can guarantee app download security for five of the seven unprotected markets.
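The checksum comparison at the core of this step can be sketched as follows; the function name and interface are our illustration, not AGChain\xspace's actual code.

```python
import hashlib

def verify_app_checksum(apk_bytes: bytes, expected_md5: str) -> bool:
    """Compare the MD5 of an APK retrieved over HTTP against the
    checksum scraped from the market's HTTPS app page (a hypothetical
    helper; the real implementation may differ)."""
    actual = hashlib.md5(apk_bytes).hexdigest()
    return actual.lower() == expected_md5.lower()
```

If the digests disagree, the retrieved APK is discarded as potentially tampered.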
\textbf{Alternative security with no checksums.}
For the remaining app markets that contain no checksums, e.g., Meizu and Anzhi, we propose a mechanism to achieve alternative download security.
The basic idea is to check the Google Play counterpart of each downloaded app.
Specifically, by leveraging the AndroZoo app repository~\cite{AndroZoo16} that contains over ten million apps from Google Play, we can check if a downloaded app is in this repository.
If it is, we conclude that the downloaded app from a third-party market is not compromised during the download process.
Furthermore, even if an app is not in the repository, we can extract its developer certificate and check whether it is from a Google Play developer.
Since the AndroZoo repository keeps evolving with Google Play, we have high confidence that most apps can be covered by either our app-level or developer-level checking.
For the few apps that are still missed, AGChain\xspace will give a warning that they might not be securely retrieved.
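The two-level fallback above can be sketched as below, assuming the AndroZoo app hashes and the known Google Play developer certificates are available as precomputed sets; all names are illustrative.

```python
def check_download_integrity(app_sha256, dev_cert_fingerprint,
                             androzoo_shas, play_dev_certs):
    """Two-level fallback check (all names illustrative): first try an
    app-level match against the AndroZoo repository, then a
    developer-level match against known Google Play developer
    certificates, otherwise flag the app with a warning."""
    if app_sha256 in androzoo_shas:
        return "app-level match"
    if dev_cert_fingerprint in play_dev_certs:
        return "developer-level match"
    return "warning: possibly not securely retrieved"
```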
\subsection{Exploiting App Certificate Info for Repackaging Detection}
\label{sec:repackageDetection}
In this subsection, we present a mechanism that exploits app certificate information to \textit{accurately} detect repackaged apps that might be uploaded to AGChain\xspace from third-party app markets.
Unlike traditional app repackaging detection~\cite{DroidMOSS12, Repack19}, the identifier of uploaded apps to AGChain\xspace, i.e., app package name, is fixed.
We can leverage this package name identifier to locate a corresponding official app on Google Play.
We then just need a mechanism to differentiate these two versions of the app and, more specifically, to determine whether they are from the same developer.
We thus investigate the app certificate data structure, which includes the certificate serial number, issuer, subject, and X509v3 extensions.
Among these fields, we find that the serial number (e.g., 0x706a633e) is the most lightweight metric that adversaries cannot manipulate (because they do not have developers' app signing keys).
In contrast, other fields like issuer and subject can be manipulated, and the X509v3 extensions are more complex than the serial number.
We further experimentally validate our detection idea.
Specifically, we use a dataset~\cite{Repack19} of 15,297 repackaged Android app \textit{pairs} (i.e., each pair contains an original app and a repackaged app) as our ground truth.
We also develop a script that automatically extracts the package name and serial number from a given app APK file.
This script leverages the popular Androguard library to parse APK files.
Note that this script has been integrated into AGChain\xspace's server code.
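A minimal sketch of the comparison logic is given below; it assumes the package name and certificate serial number have already been extracted (e.g., with Androguard), and the function name is ours, not AGChain\xspace's.

```python
def is_repackaged(official: dict, candidate: dict) -> bool:
    """Flag a candidate APK as repackaged when it reuses the official
    package name but is signed with a different certificate serial
    number.  `official` and `candidate` are {'pkg': ..., 'serial': ...}
    dicts as produced by a certificate parser such as Androguard."""
    return (candidate["pkg"] == official["pkg"]
            and candidate["serial"] != official["serial"])
```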
By running our script on the 15,297 app pairs, we find a total of 2,270 pairs in which both apps share the same package name, and in none of these pairs do the two apps have the same certificate serial number.
For the other 13,027 repackaged pairs, the two package names within each pair are different.
This experiment shows that our serial-number-based mechanism is 100\% accurate.
\subsection{Charging Upload Fees to Maintain Platform Self-Sustainability}
\label{sec:chargeFees}
In this subsection, we design a mechanism of charging app upload fees to pay crowdsourced server nodes in AGChain\xspace so that the entire AGChain\xspace platform is sustainable.
We thus revise the original design of AGChain\xspace to charge upload fees just before each server node invokes IPFS and the smart contract.
More specifically, we add two more steps between steps 5 and 6 in Fig.\xspace~\ref{fig:overall}.
The first step is to send an estimated transaction fee to our smart contract, for which we add one more function called \texttt{DonateGasFee()} in our smart contract.
We set this function \texttt{payable} so that the user side (in the form of our JavaScript code) can invoke it with a fee, e.g., \texttt{DonateGasFee().send(from: userAccount, value: fee)}.
After this call is successfully executed on the blockchain, the user side will receive a transaction ID.
In the second step, our JavaScript code automatically sends this transaction ID (and a few other metadata) to the server for verification.
We now explain how our JavaScript code estimates the gas fee and how the server verifies the payment transaction.
To calculate a gas fee at the user side, we invoke a web3 JavaScript function called \texttt{estimateGas()} to estimate the required gas of executing the transaction in the EVM.
For the transaction verification at the server, we first query the transaction according to its ID, and then determine (i) whether the destination address of this transaction is our smart contract, (ii) whether the upload fee exceeds our estimated gas, and (iii) whether this transaction has never been used before.
Only when all three conditions are satisfied does the server continue the actual app uploading to IPFS and Ethereum.
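The server-side verification can be sketched as a pure function over the queried transaction record; the field names follow web3-style conventions but are assumptions here, not AGChain\xspace's actual code.

```python
def verify_upload_payment(tx: dict, contract_addr: str,
                          estimated_fee: int, used_tx_ids: set) -> bool:
    """Check a DonateGasFee() payment transaction against the three
    conditions in the text.  `tx` stands for the transaction record
    queried from the chain by its ID (illustrative field names)."""
    if tx["to"].lower() != contract_addr.lower():   # (i) paid to our contract
        return False
    if tx["value"] < estimated_fee:                 # (ii) fee covers estimated gas
        return False
    if tx["hash"] in used_tx_ids:                   # (iii) no transaction replay
        return False
    used_tx_ids.add(tx["hash"])                     # mark the transaction as consumed
    return True
```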
\section{Introduction}\label{sec:intro}
In recent years, there has been a remarkable development
in hadron spectroscopy.
One of the most interesting observations is
the evidence for the exotic baryon $\Theta^+$, first reported
by the LEPS collaboration~\cite{Nakano:2003qx}.
Subsequently, a signal of another exotic state $\Xi^{--}$
was observed~\cite{Alt:2003vb}.
The spin and parity of $\Theta^+$ and $\Xi^{--}$
are not yet determined experimentally.
Since these states can be constructed minimally with five valence quarks,
they are called pentaquarks.
Evidence of these exotic pentaquarks has
stimulated many theoretical studies~\cite{Roy:2003hk,Zhu:2004xa,
Oka:2004xh,Diakonov:2004ie,Jaffe:2004ph,Sasaki:2004vz,Goeke:2004ht}.
In the study of these exotic particles,
it is important to identify other members with nonexotic
flavors in the same SU(3) multiplet to which the exotic particles
belong.
This is naturally expected from the successes of $SU(3)$ flavor
symmetry with its breaking in hadron masses and
interactions~\cite{deSwart:1963gc}.
In other words, the existence of exotic particles
would require the flavor partners,
if the flavor SU(3) symmetry
plays the same role as in the ordinary
three-quark baryons.
An interesting proposal was made by Jaffe and Wilczek~\cite{Jaffe:2003sg},
based on the assumption of the strong diquark correlation in hadrons
and the representation mixing of an octet ($\bm{8}$) with
an antidecuplet ($\overline{\bm{10}}$).
The attractive diquark correlation in the scalar-isoscalar channel
leads to the spin and parity $J^P=1/2^+$
for the $\Theta^+$.
With the ideal mixing of $\bm{8}$ and $\overline{\bm{10}}$,
in which states are classified by the number of strange and
antistrange quarks, $N(1710)$ and
$N(1440)$ resonances are well fit as members of the multiplet
together with the $\Theta^+$.
However, it was pointed out that mixing angles close to the ideal one
encountered a problem in the decay pattern
of $N(1710)\to \pi N$ and $N(1440)\to \pi N$.
Rather, their decays implied a small mixing
angle~\cite{Cohen:2004gu,Pakvasa:2004pg,Praszalowicz:2004xh}.
This is intuitively understood
by observing the broad decay width of $N(1440)\to \pi N$
and the narrow widths of $N(1710)\to \pi N$ and
$\Theta\to K N$~\cite{Cohen:2004gu}.
Employing the $\bm{8}$-$\overline{\bm{10}}$ mixing scenario,
here we examine the possibilities of assigning other quantum
numbers, such as $1/2^-$, $3/2^+$, and $3/2^-$, and search for
the nucleon partners among the known resonances.
For convenience,
properties of relevant resonances
are summarized in Appendix~\ref{sec:Expinfo}.
The present study is based on the
flavor SU(3) symmetry, experimental
mass spectra and decay widths of the $\Theta^+$, the $\Xi^{--}$
and known baryon resonances.
Hence, our analysis presented here is phenomenological, but
does not rely upon any specific models.
For instance, we do not have to specify the quark contents of the baryons.
Although the exotic states require at least five quarks, the nonexotic
partners do not.
Instead, we expect that the resulting properties such as masses and
decay rates reflect information from which we hope to learn internal
structure of the baryons.
\section{Analysis with pure antidecuplet}\label{sec:bar10}
First we briefly discuss the case where the
$\Theta^+$ belongs to the pure $\overline{\bm{10}}$
without mixing with other representations.
In this case, the masses of particles belonging to the $\overline{\bm{10}}$
can be determined by the Gell-Mann--Okubo (GMO) mass formula with equal
splitting
\begin{align}
M(\overline{\bm{10}};Y)
&\equiv \bra{\overline{\bm{10}};Y}\mathcal{H}
\ket{\overline{\bm{10}};Y}
= M_{\overline{\bm{10}}} - aY ,
\label{eq:bar10mass_1}
\end{align}
where $Y$ is the hypercharge of the state,
and $\mathcal{H}$ denotes the mass matrix.
Note that at this point the spin and parity $J^P$ are not yet
specified. This will be assigned as explained below.
In Eq.~\eqref{eq:bar10mass_1},
there are two parameters, $M_{\overline{\bm{10}}}$ and $a$, which are not
determined by the flavor SU(3) symmetry.
However, we can estimate the order of these parameters
by considering their physical meanings.
For instance, in a constituent quark model,
$\overline{\bm{10}}$ can be minimally expressed as four
quarks and one antiquark. Therefore, $M_{\overline{\bm{10}}}$
should be larger than the masses of three-quark
baryons, such as the lowest-lying octet baryons.
In this picture, the
mass difference of $\Xi(ssqq\overline{q})$ and $\Theta(qqqq\overline{s})$,
namely $3a$, should
be the constituent mass difference of the $s$ and the $ud$ quarks,
which is about 100-250 MeV~\cite{Hosaka:2004mv}.
On the other hand, in the chiral quark soliton model,
$3a$ is related to the pion nucleon sigma term~\cite{Schweitzer:2003fg}.
In this picture $3a$ can take values in the range of 300-400 MeV,
due to the experimental uncertainty of the pion nucleon sigma
term $\Sigma_{\pi N}=$64-79 MeV~\cite{Diakonov:2003jj,
Praszalowicz:2004mt,Ellis:2004uz}.
Note that in the chiral quark model, spin and parity
are assigned as $J^P=1/2^+$ for the antidecuplet.
Taking into account the above estimation, we test several
parameter sets fixed by the experimentally known masses
of particles.
The results are summarized
in Table~\ref{tbl:bar10result}.
First, we determine the parameters by accommodating $\Theta(1540)$
and $\Xi(1860)$ in the multiplet.
In this case we obtain the mass of the $N$ and $\Sigma$ states at
1647 and 1753 MeV, respectively.
Since these values are close to the masses
of the $1/2^-$ baryons $N(1650)$ and $\Sigma(1750)$,
we expect their spin and parity to be $J^P=1/2^-$.
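As a numerical cross-check (ours, not part of the original analysis), the equal-spacing rule of Eq.~\eqref{eq:bar10mass_1}, fixed by $\Theta(1540)$ at $Y=+2$ and $\Xi(1860)$ at $Y=-1$, indeed reproduces the interpolated $N$ and $\Sigma$ masses of Table~\ref{tbl:bar10result}:

```python
# Fit Eq. (1), M(Y) = M_10bar - a*Y, to Theta(1540) at Y=+2 and
# Xi(1860) at Y=-1, then predict the Y=+1 (N) and Y=0 (Sigma) members.
a = (1860.0 - 1540.0) / 3.0        # equal spacing: 3a = M_Xi - M_Theta
M10 = 1540.0 + 2.0 * a             # M_10bar ~ 1753.3 MeV

def gmo_antidecuplet_mass(Y):
    """Gell-Mann--Okubo equal-spacing rule for the antidecuplet."""
    return M10 - a * Y

M_N = gmo_antidecuplet_mass(1)     # ~1647 MeV, close to N(1650)
M_Sigma = gmo_antidecuplet_mass(0) # ~1753 MeV, close to Sigma(1750)
```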
For $J^P=1/2^+$, we adopt the $N(1710)$ as the nucleon partner,
and predict the $\Sigma$ and $\Xi$ states.
This assignment corresponds to the original
assignment of the prediction~\cite{Diakonov:1997mm}.
For $J^P=3/2^+$, we choose $N(1720)$, and for $J^P=3/2^-$, $N(1700)$.
In the three cases of $J^P=1/2^+, 3/2^{\pm}$,
the exotic $\Xi$ resonance is predicted to be higher than 2 GeV,
and the inclusion of $\Xi(1860)$ in the same multiplet seems to be
difficult.
Furthermore, the $\Sigma$ states around 1.8-1.9 GeV are not well
assigned (either two-star for $J^P=1/2^+$, or not seen for
$J^P=3/2^{\pm}$).
Therefore, fitting the masses in the pure antidecuplet scheme seems to favor
$J^P=1/2^-$.
Next we study the decay width of the $N^*$ resonances
with the above assignments.
For the decay of a resonance $R$,
we define the dimensionless coupling constant $g_R$ by
\begin{equation}
\Gamma_R\equiv
g_R^{2}F_I \frac{p^{2l+1}}{M_R^{2l}} ,
\label{eq:coupling}
\end{equation}
where $p$ is the relative three momentum of the decaying particles
in the resonance rest frame, $\Gamma_R$ and $M_R$ are
the decay width and the mass of the resonance $R$.
$F_I$ is the isospin factor, which takes the value
2 for $\Theta\to KN$ and 3 for $N^*\to \pi N$.
Assuming flavor $SU(3)$ symmetry,
a relation between the coupling constants of $\Theta \to K N$ and
$N^*\to \pi N$ is given by:
\begin{equation}
g_{\Theta KN}=\sqrt{6}g_{N^*\pi N} .
\label{eq:relation}
\end{equation}
Here we adopt the definition of the coupling constant
in Ref.~\cite{Lee:2004bs}.
Note that this definition
is different from
Refs.~\cite{Cohen:2004gu,Pakvasa:2004pg},
in which $g\equiv \sqrt{g_R^2 F_I}$ is used.
With these formulae~\eqref{eq:coupling} and \eqref{eq:relation},
we calculate the decay width of the $\Theta^+$
from those of $N^*\to \pi N$ of the nucleon partner.
Results are also shown in Table~\ref{tbl:bar10result}.
We quote the errors coming from experimental uncertainties
in the total decay widths and branching ratios,
taken from the Particle Data Group~\cite{Eidelman:2004wy}.
It is easily seen
that as the partial wave of the two-body final states becomes higher,
the decay width of the resonance becomes narrower, due to the
effect of the centrifugal barrier.
Considering the experimental width of the $\Theta^+$,
the results of $J^P=3/2^-$, $3/2^+$, $1/2^+$ are acceptable,
but the result of the $J^P=1/2^-$ case, which is of the order of
hundred MeV, is unrealistic.
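The translation from an $N^*\to\pi N$ width to the $\Theta^+$ width via Eqs.~\eqref{eq:coupling} and \eqref{eq:relation} can be sketched numerically as follows; the rounded, isospin-averaged hadron masses and the sample input width are our illustrative assumptions.

```python
from math import sqrt

def breakup_momentum(M, m1, m2):
    """CM momentum p of a two-body decay M -> m1 + m2 (all in MeV)."""
    return sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

def theta_width_from_nucleon(gamma_Npi, M_N, M_Theta, l,
                             m_pi=138.0, m_K=494.0, m_nuc=939.0):
    """Translate Gamma(N* -> pi N) into Gamma(Theta -> K N) using
    Gamma = g^2 F_I p^(2l+1) / M^(2l) with F_I = 3 (piN) and 2 (KN),
    and the SU(3) relation g_Theta^2 = 6 g_N*^2."""
    p_N = breakup_momentum(M_N, m_pi, m_nuc)      # N* -> pi N momentum
    p_T = breakup_momentum(M_Theta, m_K, m_nuc)   # Theta -> K N momentum
    g2_Npi = gamma_Npi * M_N**(2 * l) / (3.0 * p_N**(2 * l + 1))
    return 6.0 * g2_Npi * 2.0 * p_T**(2 * l + 1) / M_Theta**(2 * l)
```

For an s-wave ($l=0$) assignment with an illustrative $\Gamma_{\pi N}\approx 115$ MeV from $N(1650)$, this yields a $\Theta^+$ width of order a hundred MeV or more, reproducing the qualitative conclusion against $1/2^-$, while higher partial waves are strongly suppressed by the $p^{2l+1}$ barrier factor.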
In summary, it seems difficult to regard the $\Theta^+$ as a member of
the pure antidecuplet $\overline{\bm{10}}$ together with known
resonances of $J^P=1/2^{\pm},3/2^{\pm}$,
in fitting both their masses and decay widths.
\begin{table}[tbp]
\centering
\caption{Summary of Section~\ref{sec:bar10}.
Mass spectra and $\Theta^+$ decay width are shown
for several assignments of quantum numbers.
For $1/2^-$
the masses of $\Theta$
and $\Xi$ are the input parameters, while for $1/2^+,3/2^{\pm}$,
the masses of $\Theta$ and $N$ are the input parameters.
Values in square brackets are the predictions,
and we show the candidates to be assigned for the states.
All values are listed in units of MeV.}
\begin{ruledtabular}
\begin{tabular}{cccccl}
$J^P$ & $M_{\Theta}$ & $M_{N}$ & $M_{\Sigma}$ & $M_{\Xi}$
& $\Gamma_{\Theta}$ \\
\hline
& 1540 & [1647] & [1753] & 1860 & \\
$1/2^-$ & & $N(1650)$ & $\Sigma(1750)$ & &
$156.1 \ {}^{+90.8}_{-73.3}$ \\
& 1540 & 1710 & [1880] &[2050] & \\
$1/2^+$ & & & $\Sigma(1880)$ & $\Xi(2030)$ &
$\phantom{00}7.2 \ {}^{+15.3}_{-4.6}$ \\
& 1540 & 1720 & [1900] & [2080] & \\
$3/2^+$ & & & - & - & $\phantom{0}10.6 \ {}^{+7.0}_{-5.0}$ \\
& 1540 & 1700 & [1860] & [2020] & \\
$3/2^-$ & & & - & $\Xi(2030)$
& $\phantom{00}1.3 \ {}^{+1.2}_{-0.9}$ \\
\end{tabular}
\end{ruledtabular}
\label{tbl:bar10result}
\end{table}
\section{Analysis with octet-antidecuplet mixing}\label{sec:8bar10}
In this section we consider the representation mixing
between $\overline{\bm{10}}$ and $\bm{8}$.
In principle, it is possible to take into account the mixing with
multiplets of higher dimension, such as $\bm{27}$ and $\bm{35}$.
However, particles in
such higher representations will have heavier masses.
Furthermore, the higher representations bring more states
with exotic quantum numbers, which are not controlled by the known
experimental information.
Here we work under the assumption of minimal
$\bm{8}$-$\overline{\bm{10}}$ mixing.
Also we do not consider the possible mixing with other octets,
such as ground states~\cite{Guzey:2005mc}.
The nucleon and $\Sigma$ states in the $\bm{8}$
will mix with the states
in the $\overline{\bm{10}}$ of the same quantum numbers.
Denoting the mixing angles of the $N$ and the $\Sigma$ as
$\theta_N$ and $\theta_{\Sigma}$,
the physical states are represented as
\begin{equation}
\begin{split}
\ket{N_1} =& \ket{\bm{8},N} \cos\theta_N
- \ket{\overline{\bm{10}},N} \sin\theta_N , \\
\ket{N_2} =& \ket{\overline{\bm{10}},N} \cos\theta_N
+ \ket{\bm{8},N} \sin\theta_N ,
\end{split}
\label{eq:Nmixing}
\end{equation}
and
\begin{equation}
\begin{split}
\ket{\Sigma_1} =& \ket{\bm{8},\Sigma} \cos\theta_\Sigma
- \ket{\overline{\bm{10}},\Sigma} \sin\theta_\Sigma , \\
\ket{\Sigma_2} =& \ket{\overline{\bm{10}},\Sigma} \cos\theta_\Sigma
+ \ket{\bm{8},\Sigma} \sin\theta_\Sigma .
\end{split}
\label{eq:Sigmamixing}
\end{equation}
To avoid duplication,
the domain of the mixing angles is restricted to $0\leq \theta < \pi/2$,
and we will find solutions for $N_1$ and $\Sigma_1$ lighter
than $N_2$ and $\Sigma_2$, respectively.
The reason for these restrictions is explained in Appendix~\ref{sec:Mixing}.
When we construct $\overline{\bm{10}}$ and $\bm{8}$ from five quarks,
the eigenvalues of the strange quark (antiquark) number operator $n_s$
of nucleon states become fractional.
In the scenario of the ideal mixing of Jaffe and Wilczek,
the physical states are given as
\begin{align}
\ket{N_1}
&= \sqrt{\frac{2}{3}}\ket{\bm{8},N}
-\sqrt{\frac{1}{3}}\ket{\overline{\bm{10}},N} ,
\label{eq:idealN1} \\
\ket{N_2}
&= \sqrt{\frac{2}{3}}\ket{\overline{\bm{10}},N}
+\sqrt{\frac{1}{3}}\ket{\bm{8},N} ,
\label{eq:idealN2}
\end{align}
such that $\bra{N_1}n_s\ket{N_1}=0$ and $\bra{N_2}n_s\ket{N_2}=2$.
In this case, the mixing angle is
\begin{equation}
\theta_N\sim 35.2^{\circ} .
\label{eq:Nideal}
\end{equation}
This value will be compared with the angle obtained from the mass
spectrum of known resonances.
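Numerically, the ideal angle follows from $\sin\theta_N = 1/\sqrt{3}$ in Eq.~\eqref{eq:idealN1}:

```python
from math import asin, degrees, sqrt

# Ideal mixing: |N_1> carries no s-sbar pair when sin(theta_N) = 1/sqrt(3),
# cf. Eqs. (idealN1)-(idealN2); this reproduces the ~35.2 degrees of Eq. (Nideal).
theta_ideal = degrees(asin(1.0 / sqrt(3.0)))
```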
In the Jaffe-Wilczek model~\cite{Jaffe:2003sg},
N(1440) and N(1710) are assigned
to $N_1$ and $N_2$, respectively.
Notice that the separation of the $s\bar{s}$ component in the ideal
mixing is only meaningful for mixing between five-quark states,
while the number of quarks in the baryons is arbitrary in the
present general framework.
It is worth mentioning that the mixing angle $\theta_N$
for the $1/2^+$ case has been calculated
in a dynamical study of the constituent quark
model~\cite{Stancu:2004du}. The resulting value is
$\theta_N\sim 35.34^{\circ}$, which is very close to the ideal mixing
angle~\eqref{eq:Nideal}.
\subsection{Mass spectrum}\label{subsec:mass}
Let us start with the
GMO mass formulae for $\overline{\bm{10}}$ and $\bm{8}$ :
\begin{align}
M(\overline{\bm{10}};Y)
&\equiv \bra{\overline{\bm{10}};Y} \mathcal{H}
\ket{\overline{\bm{10}};Y}
= M_{\overline{\bm{10}}} - aY ,
\label{eq:bar10mass} \\
M(\bm{8};I,Y)
&\equiv \bra{\bm{8};I,Y} \mathcal{H} \ket{\bm{8};I,Y}
\nonumber \\
&= M_{\bm{8}} - bY + c
\left[ I(I+1) -\frac{1}{4}Y^2\right] ,
\label{eq:8mass}
\end{align}
where $Y$ and $I$ are the hypercharge and the isospin of the state.
Under representation mixing as in Eqs.~\eqref{eq:Nmixing} and
\eqref{eq:Sigmamixing},
the two nucleons $(N_{\bm{8}},N_{\overline{\bm{10}}})$
and the two sigma states
$(\Sigma_{\bm{8}},\Sigma_{\overline{\bm{10}}})$
mix, and their mass matrices are given by $2\times 2$ matrices.
The diagonal components are given by Eqs.~\eqref{eq:bar10mass}
and \eqref{eq:8mass}, while the off-diagonal elements are given as
\begin{equation}
\bra{\bm{8},N}\mathcal{H}\ket{\overline{\bm{10}},N}
=\bra{\bm{8},\Sigma}\mathcal{H}\ket{\overline{\bm{10}},\Sigma}
\equiv \delta .
\label{eq:delta}
\end{equation}
The equivalence of the two off-diagonal elements can be verified when
the symmetry breaking term is given by $\lambda_8$
due to the large strange quark mass~\cite{Diakonov:2003jj}.
The physical states $\ket{N_i}$ and $\ket{\Sigma_i}$
diagonalize $\mathcal{H}$.
Therefore,
we have the relations
\begin{equation}
\tan 2\theta_N
= \frac{2\delta}{M_{\overline{\bm{10}}}
-M_{\bm{8}}-a+b-\frac{1}{2}c}, \label{eq:Nmix}
\end{equation}
and
\begin{equation}
\tan 2\theta_{\Sigma}
= \frac{2\delta}{M_{\overline{\bm{10}}}
-M_{\bm{8}}-2c}. \label{eq:Sigmamix}
\end{equation}
Now we have the mass formulae for the states
\begin{align}
M_{\Theta}
=& M_{\overline{\bm{10}}}-2a ,
\label{eq:Mtheta} \\
M_{N_1}
=& \left(M_{\bm{8}}-b+\frac{1}{2}c\right)\cos^2\theta_N
+\left(M_{\overline{\bm{10}}}-a\right)\sin^2\theta_N
\nonumber \\
&-\delta \sin 2\theta_N ,
\label{eq:MN1} \\
M_{N_2}
=& \left(M_{\bm{8}}-b+\frac{1}{2}c\right)\sin^2\theta_N
+\left(M_{\overline{\bm{10}}}-a\right)\cos^2\theta_N
\nonumber \\
&+\delta \sin 2\theta_N ,
\label{eq:MN2} \\
M_{\Sigma_1}
=& \left(M_{\bm{8}}+2c\right)\cos^2\theta_{\Sigma}
+M_{\overline{\bm{10}}}\sin^2\theta_{\Sigma}
-\delta \sin 2\theta_{\Sigma} ,
\label{eq:MSigma1} \\
M_{\Sigma_2}
=& \left(M_{\bm{8}}+2c\right)\sin^2\theta_{\Sigma}
+M_{\overline{\bm{10}}}\cos^2\theta_{\Sigma}
+\delta \sin 2\theta_{\Sigma} ,
\label{eq:MSigma2} \\
M_{\Lambda}
=&M_{\bm{8}} ,
\label{eq:MLambda} \\
M_{\Xi_8}
=&M_{\bm{8}}+b+\frac{1}{2}c ,
\label{eq:MXi8} \\
M_{\Xi_{\overline{\bm{10}}}}
=&M_{\overline{\bm{10}}}+a .
\label{eq:MXibar10}
\end{align}
We have altogether six parameters $M_{\bm{8}}$,
$M_{\overline{\bm{10}}}$, $a$, $b$, $c$ and $\delta$.
Let us first examine the case of $J^P=1/2^+$~\cite{Pakvasa:2004pg}.
Possible candidates for the partners of the exotic states
$\Theta(1540)$ and $\Xi_{\overline{10}}(1860)$ are the following:
\begin{align}
&N(1440), \ N(1710),
\nonumber \\
&\Lambda(1600),
\nonumber \\
&\Sigma(1660),\ \Sigma(1880).
\nonumber
\end{align}
In order to fix the six parameters, we need
to assign six particles as input.
Using
$\Theta(1540)$, $N_1(1440)$, $N_2(1710)$,
$\Lambda(1600)$, $\Sigma_1(1660)$, $\Xi_{\overline{10}}(1860)$,
we obtain the parameters as given in Table~\ref{tbl:param1}.
The resulting mass spectrum together with the two
predicted masses,
$\Sigma_2=1894$ MeV and $\Xi_8=1797$ MeV, are given in
Table~\ref{tbl:result1}
and also shown in the left panel of Fig.~\ref{fig:spectrum}.
For reference, in Tables~\ref{tbl:param1} and
\ref{tbl:result1} we show the parameters and masses of
Ref.~\cite{Pakvasa:2004pg}, in which
all known resonances including $\Sigma(1660)$ and $\Sigma(1880)$
are used to perform the $\chi^2$ fitting.
In Fig.~\ref{fig:spectrum}, the spectra from experiment
and those before the representation
mixing are also plotted.
As we see in Table~\ref{tbl:result1} and Fig.~\ref{fig:spectrum},
without using the $\Sigma_2$ for the fitting,
this state appears in the proper position to be assigned as
$\Sigma(1880)$.
Considering the experimental uncertainty in the masses,
these two parameter sets (the one
determined in this work and the one
in Ref.~\cite{Pakvasa:2004pg})
can be regarded as essentially the same.
In both cases, we need a new $\Xi$ state
around 1790-1800 MeV,
but the overall description of the mass spectrum
is acceptable.
Note that the mixing angle $\theta_N\sim 30^{\circ}$
is compatible with the one of the ideal mixing~\eqref{eq:Nideal},
if we consider the experimental uncertainty
of masses~\cite{Pakvasa:2004pg}.
It is interesting to observe that in the spectrum of the octet,
as shown in Fig.~\ref{fig:spectrum},
the $\Xi_8$ and the $\Sigma_8$ are almost degenerate,
reflecting the large value for the parameter $c\sim$ 100 MeV,
which is responsible for the splitting of $\Lambda$ and $\Sigma$.
For the ground state octet, Eq.~\eqref{eq:8mass}
is well satisfied with $b=139.3$ MeV and $c=40.2$
MeV~\cite{Diakonov:2003jj}.
This point will be discussed later.
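As a consistency check (ours, not part of the fit), the mass formulae~\eqref{eq:MN1}--\eqref{eq:MN2} together with the mixing relation~\eqref{eq:Nmix}, evaluated with the ``This work'' parameters of Table~\ref{tbl:param1}, reproduce the input nucleon masses and the quoted mixing angle:

```python
from math import radians, degrees, atan2, sin, cos

# "This work" parameters of Table II (1/2^+ case), in MeV.
M8, M10 = 1600.0, 1753.3
a, b, c, delta = 106.7, 146.7, 100.1, 114.4

M_N8  = M8 - b + 0.5 * c   # octet nucleon mass before mixing
M_N10 = M10 - a            # antidecuplet nucleon mass before mixing

# Eq. (Nmix): tan(2 theta_N) = 2 delta / (M_N10 - M_N8)  ->  ~29 degrees.
th_check = 0.5 * degrees(atan2(2.0 * delta, M_N10 - M_N8))

# Physical masses from the mixing formulae with theta_N = 29 degrees.
th = radians(29.0)
M_N1 = M_N8 * cos(th)**2 + M_N10 * sin(th)**2 - delta * sin(2.0 * th)
M_N2 = M_N8 * sin(th)**2 + M_N10 * cos(th)**2 + delta * sin(2.0 * th)
```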
\begin{table}[tbp]
\centering
\caption{Parameters for $1/2^+$ case.
All values are listed in MeV except for the mixing angles.}
\begin{ruledtabular}
\begin{tabular}{lllllllll}
& $M_{\bm{8}}$ & $M_{\overline{\bm{10}}}$ & $a$
& $b$ & $c$ & $\delta$
& $\theta_N$ & $\theta_{\Sigma}$ \\
\hline
This work & 1600 & 1753.3 & 106.7 & 146.7 & 100.1 & 114.4
& $29.0^{\circ}$ & $50.8^{\circ}$ \\
Ref.~\cite{Pakvasa:2004pg}
& 1600 & 1755 & 107 & 144 & 93 & 123
& $29.7^{\circ}$ & $41.4^{\circ}$ \\
\end{tabular}
\end{ruledtabular}
\label{tbl:param1}
\end{table}
\begin{table}[tbp]
\centering
\caption{Mass spectra for $1/2^+$ case. All values are listed in
MeV.
Values in square brackets ($\Sigma_2$ and $\Xi_{\bm{8}}$
of this work, $\Xi_8$ of Ref.~\cite{Pakvasa:2004pg})
are predictions (those which are not used in the fitting).}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $\Theta$ & $N_1$ & $N_2$ & $\Sigma_1$ & $\Sigma_2$
& $\Lambda$ & $\Xi_{\bm{8}}$ & $\Xi_{\overline{\bm{10}}}$ \\
\hline
This work & 1540 & 1440 & 1710 & 1660
& [1894] & 1600 & [1797] & 1860 \\
Ref.~\cite{Pakvasa:2004pg}
& 1541 & 1432 & 1718 & 1650
& 1891 & 1600 & [1791] & 1862
\end{tabular}
\end{ruledtabular}
\label{tbl:result1}
\end{table}
\begin{figure*}[tbp]
\centering
\includegraphics[width=16cm,clip]{spectrum_mono.eps}
\caption{Results of mass spectra with representation mixing.
Theoretical masses of the octet, antidecuplet, and the one with mixing
are compared with the experimental masses.
In the left panel, we show the results with $J^P=1/2^+$,
while the results with $J^P=3/2^-$
(set 1) are presented in the right panel.}
\label{fig:spectrum}
\end{figure*}%
Now we examine the other cases of $J^P$.
For $J^P={1/2^-}$, as we observed in the previous
section, the pure $\overline{\bm{10}}$ assignment works well
for the mass spectrum, which implies that the mixing
with $\bm{8}$ is small,
as long as we adopt $N(1650)$ and $\Sigma(1750)$ in the multiplet.
Then the results of $1/2^-$ with the mixing
do not change from the previous results of the pure
$\overline{\bm{10}}$ assignment,
which eventually lead to
a broad width of $\Theta^+\to KN$ of order 100 MeV.
Hence, it is not realistic to assign $1/2^-$,
even if we consider the representation mixing.
Next we consider the $3/2^+$ case.
In this case candidate states
are not well established.
As we see in Appendix~\ref{sec:Expinfo},
no state exists for $\Sigma$ and $\Xi$, except for
two- or one-star resonances.
Furthermore, the states are distributed
in a wide energy range,
and sometimes
it is not possible to assign these
particles in the $\bm{8}$-$\overline{\bm{10}}$
representation scheme.
For instance, if we choose
$N(1720)$, $N(1900)$, $\Lambda(1890)$, $\Sigma(1840)$
and exotic states, no solution is found for the mixing angle.
Therefore, at this moment, it is not meaningful to study
the $3/2^+$ case unless more states
with $3/2^+$ are observed.
Now we look at the $3/2^-$ case.
In contrast to the $3/2^+$ case,
there are several well-established resonances.
Possible candidates are
\begin{align*}
&N(1520),\ N(1700), \\
&\Lambda(1520),\ \Lambda(1690), \\
&\Sigma(1670),\ \Sigma(1940), \\
&\Xi(1820).
\end{align*}
Following the same procedure as before,
we first
choose the following four resonances as inputs:
$\Theta(1540)$, $N_1(1700)$, $N_2(1520)$,
and $\Xi_{3/2}(1860)$.
For the remaining two inputs needed to determine the six parameters,
we examine four different choices of $\Sigma$ and $\Lambda$ states:
\begin{eqnarray}
\Sigma(1670) &{\rm and}& \Lambda(1690)\; \; \; {\rm (set 1)},
\nonumber \\
\Sigma(1670) &{\rm and}& \Lambda(1520)\; \; \; {\rm (set 2)},
\nonumber \\
\Sigma(1940) &{\rm and}& \Lambda(1690)\; \; \; {\rm (set 3)},
\nonumber \\
\Sigma(1940) &{\rm and}& \Lambda(1520)\; \; \; {\rm (set 4)}.
\end{eqnarray}
We have obtained the parameters as given in Table~\ref{tbl:param2},
and predicted masses of other members are shown in
Table~\ref{tbl:result2}.
The masses of $N(1520)$ and $N(1700)$ determine the
mixing angle of nucleons $\theta_N\sim 33^{\circ}$,
which is close to the ideal one.
In the parameter sets 1 and 2 (sets 3 and 4),
the $\Sigma(1670)$ state of a lower mass
(the $\Sigma(1940)$ state of a higher mass)
is chosen but with different $\Lambda$'s,
$\Lambda(1690)$ and $\Lambda(1520)$.
Accordingly, they predict the higher
$\Sigma(1834)$ state (the lower $\Sigma(1717)$ state)
with the mixing angle
$\theta_\Sigma = 44.6^{\circ} (= 66.2^{\circ})$.
Interestingly,
parameters of set 1 provide $M_{\Xi_8}\sim 1837$ MeV,
which is close to the known three-star resonance $\Xi(1820)$
of $J^P=3/2^-$.
Parameters of set 4 predict $M_{\Xi_8}\sim 1659$ MeV,
which is close to another known resonance $\Xi(1690)$.
Since the $J^P$ of this state is not known, this fitting scheme
predicts $J^P$ of $\Xi(1690)$ to be $3/2^-$.
In these two cases,
we have obtained acceptable assignments, especially for
set 1, although a
new $\Sigma$ state is necessary to complete
the multiplet in both cases.
The spectrum of set 1 is also shown in Fig.~\ref{fig:spectrum}.
Let us briefly look at the octet and antidecuplet
spectra of $1/2^+$ and $3/2^-$ resonances as shown in
Fig.~\ref{fig:spectrum}.
The antidecuplet spectrum is simple, since the GMO
mass formula contains only one parameter which describes the
size of the splitting.
Contrarily, the octet spectrum contains two parameters which
could reflect more information on different internal structure.
As mentioned before, in the octet spectrum of $1/2^+$,
the mass of $\Sigma_8$ is pushed up slightly above
$\Xi_8$, significantly higher than $\Lambda_8$.
This pattern resembles the octet spectrum which is obtained
in the Jaffe-Wilczek model, where the baryons are made with
two flavor $\bar{\bm{3}}$ diquarks and one antiquark.
In contrast, the spectrum of the octet of $3/2^-$
resembles that of the ground state octet,
which is reflected in the parameters $(b,c)=(131.9,30.5)$ MeV,
close to $(b,c)=(139.3,40.2)$ MeV for
the ground states.
This is not far from the prediction of an
additive quark model of three valence quarks.
It would be interesting to investigate further the quark
contents from such a different pattern of the mass spectrum.
\begin{table}[tbp]
\centering
\caption{Parameters for $3/2^-$ case.
All values are listed in MeV except for the mixing angles.}
\begin{ruledtabular}
\begin{tabular}{lllllllll}
& $M_{\bm{8}}$ & $M_{\overline{\bm{10}}}$ & $a$
& $b$ & $c$ & $\delta$
& $\theta_N$ & $\theta_{\Sigma}$ \\
\hline
set 1 & 1690 & 1753.3 & 106.7 & 131.9 & \phantom{0}30.5 & 82.2
& $33.0^{\circ}$ & $44.6^{\circ}$ \\
set 2 & 1520 & 1753.3 & 106.7 & \phantom{00}4.4 & 115.5 & 82.2
& $33.0^{\circ}$ & $44.6^{\circ}$ \\
set 3 & 1690 & 1753.3 & 106.7 & 170.1 & 106.9 & 82.2
& $33.0^{\circ}$ & $66.2^{\circ}$ \\
set 4 & 1520 & 1753.3 & 106.7 & \phantom{0}42.6 & 191.9 & 82.2
& $33.0^{\circ}$ & $66.2^{\circ}$ \\
\end{tabular}
\end{ruledtabular}
\label{tbl:param2}
\end{table}
\begin{table}[tbp]
\centering
\caption{Mass spectra for $3/2^-$ case. All values are listed in
MeV.
Values in square brackets
are predictions (those which are not used in the fitting).}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
& $\Theta$ & $N_1$ & $N_2$ & $\Sigma_1$ & $\Sigma_2$
& $\Lambda$ & $\Xi_{\bm{8}}$ & $\Xi_{\overline{\bm{10}}}$ \\
\hline
set 1 & 1540 & 1520 & 1700 & 1670
& [1834] & 1690 & [1837] & 1860 \\
set 2 & 1540 & 1520 & 1700 & 1670
& [1834] & 1520 & [1582] & 1860 \\
set 3 & 1540 & 1520 & 1700
& [1717] & 1940 & 1690 & [1914] & 1860 \\
set 4 & 1540 & 1520 & 1700
& [1717] & 1940 & 1520 & [1659] & 1860 \\
\end{tabular}
\end{ruledtabular}
\label{tbl:result2}
\end{table}
\subsection{Decay width}
Here we study the consistency between the mixing angle
obtained from the mass spectra and the one obtained from
the nucleon decay widths.
Using Eq.~\eqref{eq:relation},
we define a universal coupling constant $g_{\overline{\bm{10}}}$ as
\begin{equation}
g_{\Theta KN} = \sqrt{6} g_{N_{\overline{\bm{10}}} \pi N}
\equiv g_{\overline{\bm{10}}} .
\label{eq:bar10coupling}
\end{equation}
The coupling constants of the $\pi N$ decay modes
from the $N_{\bm{8}}$, $N_1$,
and $N_2$ are defined as
$g_{N_8}$, $g_{N_1}$, and $g_{N_2}$,
respectively.
The coupling constants of the physical nucleons $N_1$ and $N_2$ are
\begin{align}
g_{N_1}
&=g_{N_{\bm{8}}} \cos\theta_N
-\frac{g_{\overline{\bm{10}}}}{\sqrt{6}}\sin\theta_N
, \label{eq:N1coupling} \\
g_{N_2}
&=\frac{g_{\overline{\bm{10}}}}{\sqrt{6}}
\cos\theta_N
+g_{N_{\bm{8}}}\sin\theta_N ,\label{eq:N2coupling}
\end{align}
which are related to the decay widths
through Eq.~\eqref{eq:coupling}.
However, we cannot fix the relative phase
between $g_{N_{\bm{8}}}$ and $g_{\overline{\bm{10}}}$.
Hence, there are two possible
mixing angles, both of which
reproduce the same decay widths.
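To make this two-fold ambiguity explicit, note that
Eqs.~\eqref{eq:N1coupling} and \eqref{eq:N2coupling} form a rotation
and can be inverted as (an illustrative aside)
\begin{align}
g_{N_{\bm{8}}} &= g_{N_1}\cos\theta_N + g_{N_2}\sin\theta_N ,
\nonumber \\
\frac{g_{\overline{\bm{10}}}}{\sqrt{6}} &= g_{N_2}\cos\theta_N
- g_{N_1}\sin\theta_N . \nonumber
\end{align}
Since the decay widths constrain only the magnitudes $|g_{N_1}|$ and
$|g_{N_2}|$, the relative sign of $g_{N_1}$ and $g_{N_2}$ is left
undetermined, which translates into the unknown relative phase between
$g_{N_{\bm{8}}}$ and $g_{\overline{\bm{10}}}$ mentioned above.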
In Refs.~\cite{Pakvasa:2004pg,Praszalowicz:2004xh},
one mixing angle is determined
by neglecting $g_{\overline{\bm{10}}}$ in
Eqs.~\eqref{eq:N1coupling} and \eqref{eq:N2coupling},
which is considered to be small due to the narrow width of $\Theta^+$.
Here we include the effect of $g_{\overline{\bm{10}}}$ explicitly.
Let us examine the two cases, $1/2^+$ and $3/2^-$,
in which we have obtained reasonable mass spectra.
The data for decay widths and branching ratios
to the $\pi N$ channel
of relevant nucleon resonances
are shown in Table~\ref{tbl:Ndecay}.
Using the mixing angle determined from the mass spectrum and
experimental information on $N^*\to \pi N$ decays,
we obtain the decay width of the $\Theta^+$
as shown in Table~\ref{tbl:Thetadecay}.
The widths calculated with the ideal mixing angle are
also presented for reference.
Among the two values,
the former corresponds to the same signs of $g_{N_{\bm{8}}}$
and $g_{\overline{\bm{10}}}$ (phase 1),
while the latter to the opposite signs (phase 2).
\begin{table}[tbp]
\centering
\caption{Experimental data for the decay of
$N^*$ resonances.
Values in parentheses are the central values
quoted in PDG~\cite{Eidelman:2004wy}.}
\begin{ruledtabular}
\begin{tabular}{ccrr}
$J^P$ & Resonance & $\Gamma_{tot}$ [MeV]
& Fraction ($\Gamma_{\pi N}/\Gamma_{tot}$) \\
\hline
$1/2^+$ & N(1440) & 250-450 (350) & 60-70 (65) \% \\
& N(1710) & 50-250 (100) & 10-20 (15) \% \\
$3/2^-$ & N(1520) & 110-135 (120) & 50-60 (55) \% \\
& N(1700) & 50-150 (100) & 5-15 (10) \% \\
\end{tabular}
\end{ruledtabular}
\label{tbl:Ndecay}
\end{table}
\begin{table}[tbp]
\centering
\caption{Decay width of $\Theta^+$ determined from
the nucleon decays and the mixing angle obtained from the mass
spectra.
Phase 1 corresponds to the same signs of $g_{N_{\bm{8}}}$
and $g_{\overline{\bm{10}}}$,
while phase 2 corresponds to the opposite signs.
All values are listed in MeV.}
\begin{ruledtabular}
\begin{tabular}{ccll}
$J^P$ & $\theta_N$ & Phase 1
& Phase 2 \\
\hline
$1/2^+$ & $29^{\circ}$ (Mass) & 29.1 & 103.3 \\
& $35.2^{\circ}$ (Ideal) & 49.3 & 131.8 \\
$3/2^-$ & $33^{\circ}$ (Mass) & \phantom{0}3.1 & \phantom{0}20.0 \\
& $35.2^{\circ}$ (Ideal) & \phantom{0}3.9 &
\phantom{0}21.3 \\
\end{tabular}
\end{ruledtabular}
\label{tbl:Thetadecay}
\end{table}
For the $1/2^+$ case, the width is about 30 MeV when the mixing angle is
determined by the mass spectrum, while about 50 MeV for the ideal
mixing angle.
Both values exceed the upper bound of the experimentally observed width.
In contrast, the $3/2^-$ case predicts much narrower widths
of the order of a few MeV for both mixing angles,
which are compatible with the experimental upper bound
of the $\Theta^+$ width.
Alternatively, we can determine $\theta_N$
using the experimental decay widths of
$\Theta \to KN$, $N_1\to \pi N$ and $N_2\to \pi N$.
Here we choose the decay width of $\Theta^+$ as 1 MeV.
Using the central values of the decay widths of
$N(1440)$ and $N(1710)$ and the experimental uncertainty,
we obtain the nucleon mixing angle for the $1/2^+$ case
\begin{equation}
\begin{split}
\theta_N =& 6^{\circ} \ {}^{+9^{\circ}}_{-4^{\circ}} , \\
\theta_N=&14^{\circ}\ {}^{+10^{\circ}}_{-4^{\circ}} ,
\end{split}
\label{eq:P11}
\end{equation}
where the former corresponds to the phase 1 and
the latter to the phase 2.
On the other hand,
with $N(1520)$ and $N(1700)$, the mixing angle for the $3/2^-$ case is
\begin{equation}
\begin{split}
\theta_N =& 9^{\circ} \ {}^{+9^{\circ}}_{-8^{\circ}} , \\
\theta_N=&24^{\circ}\ {}^{+9^{\circ}}_{-9^{\circ}} .
\end{split}
\label{eq:D13}
\end{equation}
For the case of $1/2^+$, the mixing angle of Eq.~\eqref{eq:P11}
may be compared with $\theta_N\sim 30^{\circ}$, which is determined
from the fitting to the masses. If we consider the large uncertainty
of the $\pi N$ decay width of $N(1440)$, the mixing
angle~\eqref{eq:P11} can be $24^{\circ}$, which is not very far from
the angle determined by the masses $\theta_N\sim 30^{\circ}$.
On the other hand, for the case of $3/2^-$, the mixing angle~\eqref{eq:D13}
agrees well with the angle determined by the masses $\theta_N\sim 33^{\circ}$.
Considering the agreement of mixing angles and
the relatively small uncertainties in the experimental decay widths,
the results with the $3/2^-$ case are favorable in the present fitting
analysis.
\section{Summary and discussion}\label{sec:Summary}
We have studied the mass spectra and decay widths of the baryons
belonging to the $\bm{8}$ and $\overline{\bm{10}}$ based on the
flavor SU(3) symmetry.
As pointed out previously, we have confirmed again
the inconsistency between the mass spectrum
and the decay widths of flavor partners
in the octet-antidecuplet mixing scenario
with $J^P=1/2^+$.
However, the assignment of $J^P=3/2^-$ particles
in the mixing scenario well reproduced the mass spectrum
as well as
the decay widths of $\Theta(1540)$,
$N(1520)$, and $N(1700)$.
The $3/2^-$ assignment
predicted a new $\Sigma$ state
at around 1840 MeV, and the nucleon mixing angle
was close to that of ideal mixing.
The $1/2^-$ assignment was not realistic
since the widths were too large for $\Theta^+$.
In order to investigate the $3/2^+$ case,
better established experimental data
on the resonances are needed.
The assignment of $J^P=3/2^-$ for exotic baryons seems reasonable
also in a quark model, especially when the narrow width of the
$\Theta^+$ is to be
explained~\cite{Hosaka:2004bn}.
The $(0s)^5$ configuration for the $3/2^-$ $\Theta^+$ is dominated by
the $K^*N$ configuration~\cite{Takeuchi:2004fv},
which, however, cannot be the decay channel
because the sum of the $K^*$ and $N$ masses is higher than the mass of $\Theta^+$.
Hence we naturally expect (in addition to a naive suppression
mechanism due to the $d$-wave $KN$ decay) a strong suppression of the
decay of the $\Theta^+$.
The possibility
of the spin $3/2$ for the $\Theta^+$ or its excited states
has been discussed not only in quark
models~\cite{Kanada-Enyo:2004bn,Inoue:2004cc,Hosaka:2004bn,
Takeuchi:2004fv}, but also
in the $KN$ potential model~\cite{Kahana:2003rv},
the $K\Delta$ resonance model~\cite{Sarkar:2004sc}
and QCD sum rule calculations~\cite{Nishikawa:2004tk}.
The $3/2^-$ resonances of nonexotic quantum numbers have also been
studied in various models of hadrons. A conventional quark model
description with a $1p$ excitation of a single quark orbit has been
successful qualitatively~\cite{Isgur:1978xj}.
Such three-quark states can couple with meson-baryon states which
could be a source for the five- (or more-) quark content of the
resonance.
In the chiral unitary approach, $3/2^-$ states are generated by
$s$-wave scattering states of an octet meson and a decuplet
baryon~\cite{Kolomeitsev:2003kt,Sarkar:2004sc,Sarkar:2004jh}.
By construction, the resulting resonances are largely dominated by
five-quark content.
These two approaches generate octet baryons which
will eventually mix with the antidecuplet partners to generate the
physical baryons. In other words, careful investigation of the octet
states before mixing will provide further information.
In the present phenomenological study, we have found that $J^P=3/2^-$
seems to fit observations to date.
As is well known, other identifications
have also been discussed in the literature,
for instance, using large $N_c$
expansion~\cite{Jenkins:2004vb,Wessling:2004ag,Pirjol:2004dw}.
It is therefore important to determine the
quantum numbers of $\Theta^+$ in experiments~\cite{Hyodo:2003th,
Nakayama:2003by,Thomas:2003ak,Oh:2003xg,
Hanhart:2003xp,Nam:2004qy,Uzikov:2004bk,Nakayama:2004um,Hanhart:2004re,
Roberts:2004rh,Uzikov:2004er,Oh:2004wp},
not only for the exotic particles
but also for the baryon spectroscopy of nonexotic particles.
Studies of high-spin states in phenomenological models
and calculations based on QCD are strongly
encouraged~\cite{Nishikawa:2004tk}.
\begin{acknowledgments}
We would like to thank Shi-Lin Zhu and Seung-il Nam
for useful discussions.
We are grateful to
Manuel J. Vicente Vacas and Daniel Cabrera
for helpful comments.
This work was supported in part by the Grant-in-Aid for Scientific Research
((C) No.16540252) from the Ministry of Education, Culture,
Science and Technology, Japan.
\end{acknowledgments}
\section{Introduction}
\noindent In the last decades, numerous efforts have been made on algorithms that can learn from data streams. Most traditional methods for this purpose assume the stationarity of the data. However, when the underlying source generating the data stream, i.e., the joint distribution $\mathbb{P}_t(\ve{X},y)$, is not stationary, the optimal decision rule changes over time. This phenomenon is known as concept drift~\cite{ditzler2015learning,krawczyk2017ensemble}. Detecting such concept drifts is essential for the algorithm to adapt itself to the evolving data.
Concept drift can manifest itself in two fundamental forms of change from the Bayesian perspective \cite{kelly1999impact}: 1) a change in the marginal probability $\mathbb{P}_t(\ve{X})$; 2) a change in the posterior probability $\mathbb{P}_t(y|\ve{X})$. Existing studies in this field primarily concentrate on detecting the posterior distribution change $\mathbb{P}_t(y|\ve{X})$, also known as the real drift~\cite{widmer1993effective}, as it directly affects the optimal decision rule. On the other hand, only a little work aims at detecting the virtual drift \cite{hoens2012learning}, which only affects $\mathbb{P}_t(\ve{X})$. In practice, one type of concept drift typically appears in combination with the other~\cite{tsymbal2004problem}. Most methods for real drift detection assume that the true labels are available immediately after the classifier makes a prediction. However, this assumption is over-optimistic, since annotating data can be expensive in terms of both cost and labor time. Virtual drift detection, though making no use of the true labels $y_t$, carries the risk of wrong interpretation (i.e., interpreting a virtual drift as a real drift). Such misinterpretation could lead to wrong decisions about classifier updates, which still require labeled data~\cite{krawczyk2017ensemble}.
To address these issues simultaneously, we propose a novel Hierarchical Hypothesis Testing (HHT) framework with a \textbf{Request-and-Reverify} strategy for concept drift detection. HHT incorporates two layers of hypothesis tests. Different from the existing HHT methods~\cite{alippi2017hierarchical,yu2017concept}, our HHT framework is the first attempt to use labels for concept drift detection \textbf{only when necessary}. It ensures that the test statistic (derived in a fully unsupervised manner) in Layer-I captures the most important properties of the underlying distributions, and adjusts itself well in a more powerful yet conservative manner that only requires labeled data when necessary in Layer-II. Two methods, namely Hierarchical Hypothesis Testing with Classification Uncertainty (HHT-CU) and Hierarchical Hypothesis Testing with Attribute-wise ``Goodness-of-fit'' (HHT-AG), are proposed under this framework in this paper. The first method incrementally tracks the distribution change with the defined $\emph{classification uncertainty}$ measurement in Layer-I, and uses permutation test in Layer-II, whereas the second method uses the standard Kolmogorov-Smirnov (KS) test in Layer-I and two-dimensional ($2$D) KS test \cite{peacock1983two} in Layer-II. We test both proposed methods in benchmark datasets. Our methods demonstrate overwhelming advantages over state-of-the-art unsupervised methods. Moreover, though using significantly fewer labels, our methods outperform supervised methods like DDM~\cite{gama2004learning}.
\section{Background Knowledge} \label{section2}
\subsection{Problem Formulation}
Given a continuous stream of labeled samples $\{\ve{X}_t,y_t\}$, $t=1,2,...,T$, a classification model $\hat{f}$ can be learned so that $\hat{f}(\ve{X}_t) \mapsto y_t$. Here, $\ve{X}_t \in \mathbb{R}^d$ represents a $d$-dimensional feature vector, and $y_t$ is a discrete class label. Let $(\ve{X}_{T+1},\ve{X}_{T+2},...,\ve{X}_{T+N})$ be a sequence of new samples that comes chronologically with unknown labels. At time $T+N$, we split the samples in a set $S_A=(\ve{X}_{T+N-n_A+1},\ve{X}_{T+N-n_A+2},...,\ve{X}_{T+N})$ of $n_A$ recent ones and a set $S_B=(\ve{X}_{T+1},\ve{X}_{T+2},...,\ve{X}_{T+N-n_A})$ containing the $(N-n_A)$ samples that appear prior to those in $S_A$. The problem of \emph{concept drift detection} is identifying whether or not the source $\mathscr{P}$ (i.e., the joint distribution $\mathbb{P}_t(\ve{X},y)$\footnote{The distributions are deliberated subscripted with time index $t$ to explicitly emphasize their time-varying characteristics.}) that generates samples in $S_A$ is the same as that in $S_B$ (even without access to the true labels $y_t$)~\cite{ditzler2015learning,krawczyk2017ensemble}. Once such a drift is found, the machine can request a window of labeled data to update $\hat{f}$ and employ the new classifier to predict labels of incoming data.
\subsection{Related Work}
The techniques for concept drift detection can be divided into two categories depending on their reliance on labels~\cite{sethi2017reliable}: supervised (or explicit) drift detectors and unsupervised (or implicit) drift detectors. \textbf{Supervised Drift Detectors} rely heavily on true labels, as they typically monitor an error metric associated with the classification loss. Although much progress has been made on concept drift detection in a supervised manner, the assumption that ground-truth labels are available immediately for all already classified instances is typically over-optimistic. \textbf{Unsupervised Drift Detectors}, on the other hand, aim to detect concept drifts without using true labels. Most unsupervised concept drift detection methods concentrate on performing multivariate statistical tests to detect changes in the feature values $\mathbb{P}_t(\ve{X})$, such as the Conjunctive Normal Form (CNF) density estimation test~\cite{dries2009adaptive} and the Hellinger distance based density estimation test~\cite{ditzler2011hellinger}. Considering their high computational complexity, an alternative approach is to conduct a univariate test on each attribute of the features independently. For example, \cite{reis2016fast} develops an incremental (sequential) KS test which can achieve exactly the same performance as the conventional batch-based KS test.
Besides modeling virtual drifts of $\mathbb{P}_t(\ve{X})$, recent research in unsupervised drift detection attempts to model real drifts by monitoring the classifier output $\hat{y}_t$ or posterior probability as an alternative to $y_t$. The Confidence Distribution Batch Detection (CDBD) approach~\cite{lindstrom2011drift} uses Kullback-Leibler (KL) divergence to compare the classifier output values from two batches. A drift is signaled if the divergence exceeds a threshold. This work is extended in \cite{kim2017efficient} by substituting the classifier output value with the classifier confidence measurement. Another representative method is the Margin Density Drift Detection (MD3) algorithm~\cite{sethi2017reliable}, which tracks the proportion of samples that are within a classifier (i.e., SVM) margin and uses the active learning strategy of \cite{zliobaite2014active} to interactively query the information source for true labels. Though these unsupervised drift detectors do not require true labels, their major drawback is that they are prone to false positives, as it is difficult to distinguish noise from distribution changes. Moreover, the wrong interpretation of virtual drifts could cause wrong decisions on classifier updates, which incur not only more labeling effort but also unnecessary classifier re-training~\cite{krawczyk2017ensemble}.
\section{Request-and-Reverify HHT Approach}
\begin{figure}
\centering
\includegraphics[height=3.5cm,width=8.5cm]{hierarchical_architecture}
\caption{The Request-and-Reverify Hierarchical Hypothesis Testing framework for concept drift detection with expensive labels.}
\label{fig:hierarchical architecture}
\end{figure}
The observations on the existing supervised and unsupervised concept drift detection methods motivate us to propose the Request-and-Reverify Hierarchical Hypothesis Testing framework (see Fig.~\ref{fig:hierarchical architecture}). Specifically, our Layer-I test operates in a fully unsupervised manner that does not require any labels. Once a potential drift is signaled by Layer-I, the Layer-II test is activated to confirm (or deny) the validity of the suspected drift. The result of Layer-II is fed back to reconfigure or restart Layer-I when needed.%
In this way, the upper bound of HHT's Type-I error is determined by the significance level of its Layer-I test, whereas the lower bound of HHT's Type-II error is determined by the power of its Layer-I test. Our Layer-I test (and most existing single layer concept drift detectors) has low Type-II error (i.e., is able to accurately detect concept drifts), but has relatively higher Type-I error (i.e., is prone to generate false alarms). The incorporation of the Layer-II test is supposed to reduce false alarms, thus decreasing the Type-I error. The cost is that the Type-II error could be increased at the same time. In our work, we request true labels to conduct a more precise Layer-II test, so that we can significantly decrease the Type-I error with minimum increase in the Type-II error.
\subsection{HHT with Classification Uncertainty (\small{HHT-CU})}
Our first method, HHT-CU, detects concept drift by tracking the $\emph{classification uncertainty}$ measurement $u_t=\|\hat{y}_t-\hat{\mathbb{P}}(y_t|\ve{X}_t)\|_2$, where $\|\cdot\|_2$ denotes the $\ell_2$ distance, $\hat{\mathbb{P}}(y_t|\ve{X}_t)$ is the posterior probability estimated by the classifier at time index $t$, and $\hat{y}_t$ is the target label encoded from $\hat{\mathbb{P}}(y_t|\ve{X}_t)$ using the $1$-of-$K$ coding scheme~\cite{bishop2006pattern}. Intuitively, the distance between $\hat{y}_t$ and $\hat{\mathbb{P}}(y_t|\ve{X}_t)$ measures the $\emph{classification uncertainty}$ for the current classifier, and the statistic derived from this measurement should be stationary (i.e., no ``significant'' distribution change) in a stable concept. Therefore, the dramatic change of the uncertainty mean value may suggest a potential concept drift.
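As an illustrative aside (not part of the original algorithm's pseudocode), the measurement $u_t$ can be sketched as follows, assuming the classifier exposes its estimated posterior vector:

```python
import numpy as np

def classification_uncertainty(posterior):
    """u_t = ||y_hat - P(y_t | X_t)||_2, where y_hat is the 1-of-K
    encoding of the predicted (most probable) class."""
    p = np.asarray(posterior, dtype=float)
    y_hat = np.zeros_like(p)
    y_hat[np.argmax(p)] = 1.0  # 1-of-K coding of the prediction
    return float(np.linalg.norm(y_hat - p))

print(classification_uncertainty([0.9, 0.05, 0.05]))  # ~0.122: confident prediction
print(classification_uncertainty([1/3, 1/3, 1/3]))    # sqrt(2/3) ~ 0.816: maximal for K = 3
```

A flat posterior attains the maximum $\sqrt{(K-1)/K}$, consistent with the range of $u_t$ used below.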
Different from the existing work that typically monitors the derived statistic with the three-sigma rule in statistical process control~\cite{montgomery2009introduction}, we use Hoeffding's inequality \cite{hoeffding1963probability} to monitor the moving average of $u_t$ in our Layer-I test.
\begin{theorem}[Hoeffding's inequality]
Let $X_1$, $X_2$,..., $X_n$ be independent random variables such that $X_i\in[a_i,b_i]$, and let $\bar{X}=\frac{1}{n}\sum^{n}_{i=1}X_i$, then for $\varepsilon\geq0$:
\begin{equation}
\mathbb{P}\{\bar{X}-\mathbb{E}(\bar{X})\geq\varepsilon\}\leq e^{\frac{-2n^2\varepsilon^2}{\sum^{n}_{i=1}(b_i-a_i)^2}}.
\end{equation}
\end{theorem}
where $\mathbb{E}$ denotes the expectation. Using this theorem, given a specific significance level $\alpha$, the error $\varepsilon_{\alpha}$ (for random variables with unit range $b_i-a_i=1$) can be computed as:
\begin{equation}\label{error_X}
\varepsilon_{\alpha}=\sqrt{\frac{1}{2n}\ln{\frac{1}{\alpha}}}.
\end{equation}
Hoeffding's inequality does not require any assumption on the probability distribution of $u_t$. This makes it well suited to learning from real data streams \cite{frias2015online}. Moreover, Corollary~\ref{theorem:corrol}, which follows from Hoeffding's work \cite{hoeffding1963probability}, can be directly applied to detect significant changes in the moving average of streaming values.
\begin{corollary}[Layer-I test of HHT-CU]\label{theorem:corrol}
If $X_1$, $X_2$, ..., $X_n$, $X_{n+1}$, ..., $X_{n+m}$ be independent random variables with values in the interval $[a,b]$, and if $\bar{X}=\frac{1}{n}\sum^{n}_{i=1}X_i$ and $\bar{Z}=\frac{1}{n+m}\sum^{n+m}_{i=1}X_i$, then for $\varepsilon\geq0$:
\begin{equation}
\mathbb{P}\{\bar{X}-\bar{Z}-(\mathbb{E}(\bar{X})-\mathbb{E}(\bar{Z}))\geq\varepsilon\}\leq e^{\frac{-2n(n+m)\varepsilon^2}{m(b-a)^2}}.
\end{equation}
\end{corollary}
By definition, $u_t\in[0,\sqrt{\frac{K-1}{K}}]$, where $K$ is the number of classes. $\bar{X}$ denotes the \emph{classification uncertainty} moving average before a cutoff point, and $\bar{Z}$ denotes the moving average over the whole sequence. The rule to reject the null hypothesis $H_0: \mathbb{E}(\bar{X})\geq \mathbb{E}(\bar{Z})$ against the alternative $H_1: \mathbb{E}(\bar{X})<\mathbb{E}(\bar{Z})$ at the significance level $\alpha$ is $\bar{Z}-\bar{X}\geq\varepsilon_{\alpha}$, where
\begin{equation}\label{error_Z}
\varepsilon_{\alpha}=\sqrt{\frac{K-1}{K}}\times\sqrt{\frac{m}{2n(n+m)}\ln{\frac{1}{\alpha}}}.
\end{equation}
Regarding the cutoff point, a reliable location can be estimated from the minimum value of $\bar{X}_i+\varepsilon_{\bar{X}_i}$ ($1\leq i\leq n+m$)~\cite{gama2004learning,frias2015online}. This is because $\bar{X}_i$ remains approximately constant in a stable concept, so $\bar{X}_i+\varepsilon_{\bar{X}_i}$ must decrease correspondingly as $\varepsilon_{\bar{X}_i}$ shrinks.
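This Layer-I monitoring scheme can be made concrete with a minimal sketch (our illustration, not the authors' reference implementation): it tracks the running mean of $u_t$, keeps the cut point at the minimum of $\bar{X}_i+\varepsilon_{\bar{X}_i}$, and signals when the Corollary~\ref{theorem:corrol} bound is exceeded:

```python
import math

def hoeffding_eps(n, alpha, value_range=1.0):
    # Confidence bound of Eq. (2), scaled for values in an interval of width value_range
    return value_range * math.sqrt(math.log(1.0 / alpha) / (2.0 * n))

def corollary_eps(n, m, alpha, value_range=1.0):
    # Bound of Corollary 1: n points before the cut point, m points after it
    return value_range * math.sqrt(m * math.log(1.0 / alpha) / (2.0 * n * (n + m)))

class LayerITest:
    """Track the running mean of u_t, keep the cut point at the minimum of
    mean + eps, and signal a potential drift when the overall mean exceeds
    the pre-cut mean by the Corollary-1 bound."""

    def __init__(self, alpha=0.01, value_range=1.0):
        self.alpha = alpha
        self.value_range = value_range
        self.n_total, self.sum_total = 0, 0.0
        self.n_cut, self.sum_cut = 0, 0.0

    def update(self, u):
        self.n_total += 1
        self.sum_total += u
        z_bar = self.sum_total / self.n_total
        eps_z = hoeffding_eps(self.n_total, self.alpha, self.value_range)
        # Keep the cut point at the minimum of mean + eps
        if self.n_cut == 0 or z_bar + eps_z <= (
                self.sum_cut / self.n_cut
                + hoeffding_eps(self.n_cut, self.alpha, self.value_range)):
            self.n_cut, self.sum_cut = self.n_total, self.sum_total
        m = self.n_total - self.n_cut
        if m == 0:
            return False
        x_bar = self.sum_cut / self.n_cut
        return z_bar - x_bar >= corollary_eps(
            self.n_cut, m, self.alpha, self.value_range)
```

For a $K$-class problem, `value_range` would be set to $\sqrt{(K-1)/K}$, matching Eq.~(\ref{error_Z}).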
The Layer-II test aims to reduce false positives signaled by Layer-I. Here, we use the permutation test which is described in \cite{yu2017concept}. Different from \cite{yu2017concept}, which trains only one classifier $f_{ord}$ using $S_{ord}$ and evaluates it on $S'_{ord}$ to get a zero-one loss $\hat{E}_{ord}$, we train another classifier $f'_{ord}$ using $S'_{ord}$ and evaluate it on $S_{ord}$ to get another zero-one loss $\hat{E}'_{ord}$. We reject the null hypothesis if either $\hat{E}_{ord}$ or $\hat{E}'_{ord}$ deviates too much from the prediction loss of the shuffled splits. The proposed HHT-CU is summarized in Algorithm \ref{HHT-CU}, where the window size $N$ is set as the number of labeled samples to train the initial classifier $\hat{f}$.
\begin{algorithm}[htb]
\small
\caption{\small{HHT with Classification Uncertainty (HHT-CU)}}
\label{HHT-CU}
\begin{algorithmic}[1]
\Require
Unlabeled stream $\{\mathbf{X}_t\}_{t=0}^\infty$ where $\mathbf{X}_t\in \mathbb{R}^d$; Initially trained classifier $\hat{f}$; Layer-I significance level $\Theta_1$; Layer-II significance level $\Theta_2$; Window size $N$.
\Ensure
Detected drift time index $\{T_{cd}\}$; Potential drift time index $\{T_{pot}\}$.
\Variables
\State $\bar{X}_{cut}$: moving average of $u_1$, $u_2$, ..., $u_{cut}$;
\State $\bar{Z}_n$: moving average of $u_1$, $u_2$, ..., $u_n$;
\State $\varepsilon_{\bar{X}_{cut}}$ and $\varepsilon_{\bar{Z}_n}$: error bounds computed using Eqs. (\ref{error_X}) and (\ref{error_Z}) respectively;
\EndVariables
\State $\{T_{cd}\}=\phi$; $\{T_{pot}\}=\phi$;
\For {$t = 1$ to $\infty$}
\State Compute $u_t$ using $\hat{f}$;
\State Update $\bar{Z}_t$ and $\varepsilon_{\bar{Z}_t}$ by adding $u_t$;
\If {$\bar{Z}_t+\varepsilon_{\bar{Z}_t}\leq \bar{X}_{cut}+\varepsilon_{\bar{X}_{cut}}$}
\State $\bar{X}_{cut}=\bar{Z}_t$; $\varepsilon_{\bar{X}_{cut}}=\varepsilon_{\bar{Z}_t}$;
\EndIf
\If {$H_0: \mathbb{E}(\bar{X}_{cut})\geq\mathbb{E}(\bar{Z}_t)$ is rejected at $\Theta_1$}
\State $\{T_{pot}\}\leftarrow t$;
\State Request $2N$ labeled samples $\{\mathbf{X}_i,y_i\}_{i=t-N}^{t+N-1}$;
\State Perform Layer-II test using $\{\mathbf{X}_i,y_i\}_{i=t-N}^{t+N-1}$ at $\Theta_2$;
\If {(Layer-II confirms the potentiality of $t$)}
\State $\{T_{cd}\}\leftarrow t$;
\State Update $\hat{f}$ using $\{\mathbf{X}_i,y_i\}_{i=t}^{t+N-1}$;
\State Initialize declared variables;
\Else
\State Discard $t$;
\State Restart Layer-I test;
\EndIf
\EndIf
\EndFor
\end{algorithmic}
\normalsize
\end{algorithm}
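As an illustration of the Layer-II procedure, the sketch below implements the two-sided permutation test; a nearest-centroid classifier is a hypothetical stand-in for the actual base learner, and `X`, `y` denote the $2N$ requested labeled samples with the suspected drift at the midpoint:

```python
import numpy as np

def zero_one_loss(train, test):
    # Nearest-centroid classifier as a hypothetical stand-in for f
    X_tr, y_tr = train
    X_te, y_te = test
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    dist2 = ((X_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    pred = classes[np.argmin(dist2, axis=1)]
    return float(np.mean(pred != y_te))

def permutation_layer2(X, y, n_perm=200, alpha=0.01, rng=None):
    """Two-sided permutation test: the suspected drift sits at the midpoint
    of (X, y). Reject H0 (no drift) if either ordered-split loss lands in
    the upper alpha-tail of its permutation distribution."""
    rng = np.random.default_rng(rng)
    n = len(y) // 2
    e_fwd = zero_one_loss((X[:n], y[:n]), (X[n:], y[n:]))
    e_bwd = zero_one_loss((X[n:], y[n:]), (X[:n], y[:n]))
    count_f = count_b = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(y))
        Xs, ys = X[idx], y[idx]
        count_f += zero_one_loss((Xs[:n], ys[:n]), (Xs[n:], ys[n:])) >= e_fwd
        count_b += zero_one_loss((Xs[n:], ys[n:]), (Xs[:n], ys[:n])) >= e_bwd
    p_fwd = (count_f + 1) / (n_perm + 1)
    p_bwd = (count_b + 1) / (n_perm + 1)
    return min(p_fwd, p_bwd) < alpha  # True: the potential drift is confirmed
```

A drift is confirmed when an ordered split's loss is significantly worse than the losses obtained on randomly shuffled splits of the same data.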
\subsection{HHT with Attribute-wise ``Goodness-of-fit'' (HHT-AG)}
The general idea behind HHT-AG is to explicitly model $\mathbb{P}_t(\ve{X},y)$ with limited access to $y$. To this end, a feasible solution is to detect potential drift points in Layer-I by just modeling $\mathbb{P}_t(\ve{X})$, and then require limited labeled data to confirm (or deny) the suspected time index in Layer-II.
The Layer-I test of HHT-AG conducts a ``Goodness-of-fit'' test on each attribute $x^k|_{k=1}^d$ individually to determine whether the $\ve{X}$ from two windows differ: a baseline (or reference) window $W_1$ containing the first $N$ items of the stream that occur after the last detected change; and a sliding window $W_2$ containing the $N$ items that follow $W_1$. We slide $W_2$ one step forward whenever a new item appears on the stream. A potential concept drift is signaled if there is a distribution change for at least one attribute. Factoring $\mathbb{P}_t(\ve{X})$ into $\prod_{k=1}^d\mathbb{P}_t(x^k)$ for multivariate change detection was initially proposed in \cite{kifer2004detecting}. Since then, this factorization strategy has become widely used~\cite{vzliobaite2010change,reis2016fast}. However, no existing work provides a theoretical foundation for this factorization strategy. From our perspective, one possible justification is Sklar's Theorem~\cite{Sklar1959Fonctions}, which states that if $\mathbb{H}$ is a $d$-dimensional joint distribution function and if $\mathbb{F}_1$, $\mathbb{F}_2$, ..., $\mathbb{F}_d$ are its corresponding marginal distribution functions, then there exists a $d$-copula $C$: $[0,1]^d\rightarrow[0,1]$ such that:
\begin{equation}
\mathbb{H}(\mathbf{X})=C(\mathbb{F}_1(x^1),\mathbb{F}_2(x^2),...,\mathbb{F}_d(x^d)).
\end{equation}
The density function (if exists) can thus be represented as:
\small
\begin{equation}
\mathbb{P}(\mathbf{X})=c(\mathbb{F}_1(x^1),\mathbb{F}_2(x^2),...,\mathbb{F}_d(x^d))\prod_{k=1}^d\mathbb{P}(x^k)\varpropto\prod_{k=1}^d\mathbb{P}(x^k),\nonumber
\end{equation}
\normalsize
where $c$ is the density of the copula $C$.
Though Sklar's Theorem does not show practical ways to calculate $C$, it demonstrates that if $\mathbb{P}(\ve{X})$ changes, one of the marginals $\mathbb{P}(x^k)$ is likely to change as well; conversely, if none of the $\mathbb{P}(x^k)$ changes, $\mathbb{P}(\ve{X})$ is not likely to change.
This paper selects the Kolmogorov-Smirnov (KS) test to measure the discrepancy of $\mathbb{P}_t(x^k)|_{k=1}^d$ in the two windows. Specifically, the KS test rejects the null hypothesis that the observations in sets $\mathbf{A}$ and $\mathbf{B}$ originate from the same distribution at significance level $\alpha$ if the following inequality holds:
\begin{equation}
\sup\limits_{x} |\mathbb{F}_\mathbf{A}(x)-\mathbb{F}_\mathbf{B}(x)|>s(\alpha)\sqrt{\frac{m+n}{mn}},
\end{equation}
where $\mathbb{F}_\mathbf{C}(x)=\frac{1}{|\mathbf{C}|}\sum\mathbf{1}_{\{c\in\mathbf{C},c\leq x\}}$ denotes the empirical distribution function (an estimation to the cumulative distribution function $\mathbb{P}(X<x)$), $s(\alpha)$ is a $\alpha$-specific value that can be retrieved from a known table, $m$ and $n$ are the cardinality of set $\mathbf{A}$ and set $\mathbf{B}$ respectively.
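The rejection rule above translates directly into code. The sketch below is our illustration of the attribute-wise Layer-I test; the coefficient $s(\alpha)$ is computed from its standard asymptotic approximation $s(\alpha)=\sqrt{-\tfrac{1}{2}\ln(\alpha/2)}$ rather than read from a table:

```python
import math
import numpy as np

def ks_coefficient(alpha):
    # Asymptotic approximation to the tabulated coefficient s(alpha)
    return math.sqrt(-0.5 * math.log(alpha / 2.0))

def ks_2samp_reject(a, b, alpha=0.001):
    """Reject H0 (same distribution) if sup_x |F_A(x) - F_B(x)|
    exceeds s(alpha) * sqrt((m + n) / (m n)), as in the rule above."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    m, n = len(a), len(b)
    grid = np.concatenate([a, b])  # the sup is attained at a sample point
    fa = np.searchsorted(a, grid, side='right') / m
    fb = np.searchsorted(b, grid, side='right') / n
    d = np.max(np.abs(fa - fb))
    return bool(d > ks_coefficient(alpha) * math.sqrt((m + n) / (m * n)))

def layer1_hht_ag(W1, W2, alpha=0.001):
    # Signal a potential drift if any attribute's marginal changed
    return any(ks_2samp_reject(W1[:, k], W2[:, k], alpha)
               for k in range(W1.shape[1]))
```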
We then validate the potential drift points by requiring true labels of data that come from $W_1$ and $W_2$ in Layer-II. The Layer-II test of HHT-AG makes the conditionally independent factor assumption \cite{bishop2006pattern} (a.k.a. the ``naive Bayes'' assumption), i.e., $\mathbb{P}(x^i|x^j,y)=\mathbb{P}(x^i|y)$ ($1\leq i\neq j\leq d$). Thus, the joint distribution $\mathbb{P}_t(\ve{X},y)$ can be represented as:
\small
\begin{align}
\mathbb{P}_t(\ve{X},y)&=\mathbb{P}_t(y)\mathbb{P}_t(x^d|y)\mathbb{P}_t(x^{d-1}|x^d,y)...\mathbb{P}_t(x^1|x^2,...,x^d,y) \nonumber\\
&\varpropto \mathbb{P}_t(y)\prod_{k=1}^d\mathbb{P}_t(x^k|y)\varpropto\prod_{k=1}^d\mathbb{P}_t(x^k,y).
\label{Eq_nB}
\end{align}
\normalsize
According to Eq.~(\ref{Eq_nB}), we perform $d$ independent two-dimensional (2D) KS tests~\cite{peacock1983two} on each bivariate distribution $\mathbb{P}_t(x^k,y)|_{k=1}^d$ individually. The 2D KS test is a generalization of KS test on 2D plane. Although the cumulative probability distribution is not well-defined in more than one dimension, Peacock's insight is that a good surrogate is the integrated probability in each of the four quadrants for a given point $(x,y)$, i.e., $\mathbb{P}(X\leq x,Y\leq y)$, $\mathbb{P}(X\leq x,Y\geq y)$, $\mathbb{P}(X\geq x,Y\leq y)$ and $\mathbb{P}(X\geq x,Y\geq y)$. Similarly, a potential drift is confirmed if the 2D KS test rejects the null hypothesis for at least one of the $d$ bivariate distributions. HHT-AG is summarized in Algorithm \ref{HHT-AG}, where the window size $N$ is set as the number of labeled samples to train the initial classifier $\hat{f}$.
\setlength{\textfloatsep}{10pt plus 1.0pt minus 20.0pt}
\begin{algorithm}[htb]
\small
\caption{\small{HHT with Attribute-wise Goodness-of-fit (HHT-AG)}}
\label{HHT-AG}
\begin{algorithmic}[1]
\Require
Unlabeled stream $\{\mathbf{X}_t\}_{t=0}^\infty$ where $\mathbf{X}_t\in \mathbb{R}^d$; Significance level $\Theta_1$; Significance level $\Theta_2$ ($=\Theta_1$ by default); Window size $N$.
\Ensure
Detected drift time index $\{T_{cd}\}$; Potential drift time index $\{T_{pot}\}$.
\For {$i = 1$ to $d$}
\State $c_0\leftarrow 0$;
\State $\text{W}_{1,i}\leftarrow$first $N$ points in $x^i$ from time $c_0$;
\State $\text{W}_{2,i}\leftarrow$next $N$ points in $x^i$ in stream;
\EndFor
\While {not end of stream}
\For {$i = 1$ to $d$}
\State Slide $\text{W}_{2,i}$ by $1$ point;
\State Perform KS test with $\Theta_1$ on $\text{W}_{1,i}$ and $\text{W}_{2,i}$;
\If {(KS test rejects the null hypothesis)}
\State $\{T_{pot}\}\leftarrow$current time;
\State $\text{W}_{1}\leftarrow$first $N$ tuples in $(x^i,y)$ from time $c_0$;
\State $\text{W}_{2}\leftarrow$next $N$ tuples in $(x^i,y)$ in stream;
\State Perform 2D KS test with $\Theta_2$ on $\text{W}_{1}$ and $\text{W}_{2}$;
\If {(2D KS test rejects the null hypothesis)}
\State $c_0\leftarrow$current time;
\State $\{T_{cd}\}\leftarrow$current time;
\State Clear all windows and \textbf{GOTO} Step $1$;
\EndIf
\EndIf
\EndFor
\EndWhile
\end{algorithmic}
\normalsize
\end{algorithm}
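A sketch of the Layer-II building block of Algorithm~\ref{HHT-AG} is given below; computing exact critical values for Peacock's statistic is cumbersome, so this illustration calibrates significance with a permutation scheme instead (an assumption of this sketch, not part of the original test):

```python
import numpy as np

def peacock_stat(A, B):
    """Peacock's 2D KS statistic: the largest difference between the two
    samples' integrated probabilities, maximised over all data points and
    the four quadrant orientations. A and B have shape (n, 2)."""
    d = 0.0
    for x, y in np.vstack([A, B]):
        for sx in (1, -1):      # sign trick: sx = -1 flips <= into >=
            for sy in (1, -1):
                fa = np.mean((sx * A[:, 0] <= sx * x) & (sy * A[:, 1] <= sy * y))
                fb = np.mean((sx * B[:, 0] <= sx * x) & (sy * B[:, 1] <= sy * y))
                d = max(d, abs(fa - fb))
    return d

def peacock_reject(A, B, alpha=0.001, n_perm=200, rng=None):
    # Permutation-calibrated significance for the statistic above
    rng = np.random.default_rng(rng)
    d0 = peacock_stat(A, B)
    pool = np.vstack([A, B])
    n = len(A)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pool))
        count += peacock_stat(pool[idx[:n]], pool[idx[n:]]) >= d0
    return (count + 1) / (n_perm + 1) < alpha
```

In the Layer-II test of HHT-AG the second coordinate is the discrete label $y$; ties are handled naturally by the $\leq$ comparisons.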
\section{Experiments} \label{experiments}
Two sets of experiments are performed to evaluate the performance of HHT-CU and HHT-AG. First, quantitative metrics and plots are presented to demonstrate HHT-CU and HHT-AG's effectiveness and superiority over state-of-the-art approaches on benchmark synthetic data. Then, we validate, via three real-world applications, the effectiveness of the proposed HHT-CU and HHT-AG on streaming data classification and the accuracy of its detected concept drift points. This paper selects soft margin SVM as the baseline classifier because of its accuracy and robustness.
\subsection{Experimental Setup} \label{section5.1}
We compare the results with three baseline methods, three topline supervised methods, and two state-of-the-art unsupervised methods for concept drift detection. The first two baselines, DDM~\cite{gama2004learning} and EDDM~\cite{Baena2006Early}, are the most popular supervised drift detectors. The third one, which we refer to as the Attribute-wise KS test (A-KS)~\cite{vzliobaite2010change,reis2016fast}, is a benchmark unsupervised drift detector that has proved effective in real applications. Note that A-KS is equivalent to the Layer-I test of HHT-AG. The toplines selected for comparison are LFR~\cite{wang2015concept}, HLFR~\cite{yu2017concept} and HDDM~\cite{frias2015online}. HLFR is the first method for concept drift detection within an HHT framework, whereas HDDM introduces Hoeffding's inequality to concept drift detection. All of these methods operate in a supervised manner and significantly outperform DDM. However, LFR and HLFR can only support binary classification. In addition, we also compare with MD3~\cite{sethi2017reliable} and CDBD~\cite{lindstrom2011drift}, the state-of-the-art concept drift detectors that attempt to model $\mathbb{P}_t(y|\ve{X})$ without access to $y$. We use the parameters recommended in the papers for each competing method. The detailed values of the significance levels or thresholds (where they exist) are shown in Table \ref{Tab:parameter}.
\begin{table}[!hbpt]
\small
\begin{center}
\begin{tabular}{cc}\hline
Algorithms & Significance levels (or thresholds) \\ \hline
\textbf{HHT-CU} & $\Theta_1=0.01$, $\Theta_2=0.01$ \\
\textbf{HHT-AG} & $\Theta_1=0.001$, $\Theta_2=0.001$ \\
A-KS & $\Theta=0.001$ \\
MD3 & $\Theta=3$ \\
HLFR & $\delta_\star=0.01$, $\epsilon_\star=0.00001$, $\eta=0.01$ \\
LFR & $\delta_\star=0.01$, $\epsilon_\star=0.00001$ \\
DDM & $\alpha=3$, $\beta=2$ \\
EDDM & $\alpha=0.95$, $\beta=0.9$ \\
HDDM & $\alpha_W=0.005$, $\alpha_D=0.001$ \\
\hline
\end{tabular}
\caption{\small{Parameter settings for all competing algorithms.}}\label{Tab:parameter}
\end{center}
\end{table}
\subsection{Results on Benchmark Synthetic Data} \label{section5.2}
We first compare HHT-CU and HHT-AG against the aforementioned concept drift approaches on benchmark synthetic data. Eight datasets are selected from \cite{souza2015data,dyer2014compose}, namely 2CDT, 2CHT, UG-2C-2D, MG-2C-2D, 4CR, 4CRE-V1, 4CE1CF, and 5CVT. Among them, 2CDT, 2CHT, UG-2C-2D and MG-2C-2D are binary-class datasets, while 4CR, 4CRE-V1, 4CE1CF and 5CVT have multiple classes. To facilitate detection evaluation, we cluster each dataset into $5$ segments to introduce $4$ abrupt drift points, thus controlling the ground truth drift points and allowing precise quantitative analysis. Quantitative comparison is performed by evaluating detection quality. To this end, a True Positive ($\emph{TP}$) is defined as a detection within a fixed delay range after the precise concept change time, a False Negative ($\emph{FN}$) as a missed detection within the delay range, and a False Positive ($\emph{FP}$) as a detection outside the delay range or an extra detection within it. Detection quality is measured jointly by Precision, Recall and detection delay, using the $\mathbf{Precision}$-$\mathbf{Range}$ curve and the $\mathbf{Recall}$-$\mathbf{Range}$ curve, respectively (see Fig.~\ref{fig:UG_2C_2D} for an example), where $\mathbf{Precision}=\emph{TP}/(\emph{TP}+\emph{FP})$ and $\mathbf{Recall}=\emph{TP}/(\emph{TP}+\emph{FN})$.
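To make this metric concrete, here is a minimal Python sketch of the evaluation. The one-to-one matching of each detection to the nearest unmatched true drift is our assumption; the text does not specify the exact matching rule.

```python
def detection_quality(true_drifts, detections, delay_range):
    """Match each detection to an unmatched true drift lying at most
    `delay_range` samples before it; unmatched detections are FPs,
    unmatched drifts are FNs."""
    tp, matched = 0, set()
    for d in detections:
        hit = next((t for t in true_drifts
                    if t <= d <= t + delay_range and t not in matched), None)
        if hit is not None:
            tp += 1
            matched.add(hit)
    fp = len(detections) - tp
    fn = len(true_drifts) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn)
    return precision, recall

# Sweeping the delay range traces the Precision-Range / Recall-Range curves.
true_drifts = [1000, 2000, 3000, 4000]        # 4 abrupt drifts, as in the text
detections  = [1030, 2500, 3010, 4005, 4700]  # one late hit, one false alarm
for delay_range in (50, 100, 600):
    print(delay_range, detection_quality(true_drifts, detections, delay_range))
```

Widening the delay range converts the late detection at 2500 from a false positive into a true positive, which is exactly the trade-off the two curves visualize.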
\begin{figure}
\centering
\begin{tabular}{cccc}
\subfigure[] {\includegraphics[width=0.48\linewidth]{UG_2C_2D_precision_supervised_large_emb}}
\subfigure[] {\includegraphics[width=0.48\linewidth]{UG_2C_2D_recall_supervised_large_emb}}
\end{tabular}
\caption{\small{Concept drift detection evaluation using (a) the $\mathbf{Precision}$-$\mathbf{Range}$ (PR) curve and (b) the $\mathbf{Recall}$-$\mathbf{Range}$ (RR) curve on the UG-2C-2D dataset over $100$ Monte-Carlo trials. The X-axis represents the predefined detection delay range, whereas the Y-axis denotes the corresponding Precision or Recall value. For a given delay range, a higher $\mathbf{Precision}$ or $\mathbf{Recall}$ value indicates better performance. This figure shows our methods and their supervised counterparts.}}
\label{fig:UG_2C_2D}
\end{figure}
For a straightforward comparison, Table~\ref{Tab:synthetic_labels} reports the number of required labeled samples (in percentage) for each algorithm, whereas Table~\ref{Tab:NAUC} summarizes the Normalized Area Under the Curve (NAUC) values for the two kinds of curves. As can be seen, HLFR and LFR provide the most accurate detection, as expected. However, they are only applicable to binary-class datasets and require true labels for the entire data stream. Our proposed HHT-CU and HHT-AG, although slightly inferior to HLFR or LFR, strike the best tradeoff between detection accuracy and the portion of required labels, especially considering their overwhelming advantage over MD3 and CDBD, the most relevant counterparts. Although the detection modules of MD3 and CDBD operate in a fully unsupervised manner, they either fail to provide reliable detection results or generate too many false positives, which may, by contrast, require even more true labels (for classifier updates). Meanwhile, it is very encouraging that HHT-CU achieves comparable or even better results than DDM (i.e., the most popular supervised drift detector) with significantly fewer labels. This suggests that our \emph{classification uncertainty} is as sensitive as the total classification accuracy used by DDM for monitoring a nonstationary environment. Moreover, HHT-AG significantly improves the Precision value compared to A-KS, which demonstrates the effectiveness of the Layer-II test in reverifying the validity of suspected drifts and rejecting false alarms. In addition, in the extreme case where $\mathbb{P}(\ve{X})$ remains unchanged but $\mathbb{P}(y|\ve{X})$ does change, our methods (and the state-of-the-art unsupervised methods) cannot detect the concept drift, i.e., the change of the joint distribution $\mathbb{P}(\ve{X},y)$. This limitation is demonstrated in our experiments on the synthetic 4CR dataset, where $\mathbb{P}(\ve{X})$ remains the same.
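The text does not spell out the normalization behind NAUC; a natural reading, assumed in this sketch, is the trapezoidal area under the curve divided by the span of the delay-range axis, so that an ideal detector (Precision or Recall equal to 1 at every range) scores exactly 1.

```python
import numpy as np

def nauc(ranges, values):
    """Normalized area under a Precision-Range or Recall-Range curve,
    computed with the trapezoidal rule and divided by the axis span."""
    x = np.asarray(ranges, dtype=float)
    y = np.asarray(values, dtype=float)
    area = float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))
    return area / (x[-1] - x[0])

# A recall curve that rises as the allowed delay range grows.
print(nauc([20, 40, 60, 80, 100], [0.25, 0.5, 0.75, 0.75, 1.0]))
```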
\begin{table}[htb]
\small
\setlength{\textfloatsep}{0pt plus 1.0pt minus 10.0pt}
\begin{center}
\begin{tabular}{cccccccccc}\hline
& HHT-CU & HHT-AG & A-KS & MD3 & CDBD \\ \hline
2CDT & \textcolor{blue}{28.97} & $96.58$ & $78.29$ & \textcolor{red}{13.69} & $96.32$ \\
2CHT & \textcolor{blue}{28.12} & $98.01$ & $77.44$ & \textcolor{red}{11.71} & $93.19$ \\
UG-2C-2D & \textcolor{blue}{28.43} & $37.37$ & \textcolor{red}{18.68} & 31.21 & $88.02$ \\
MG-2C-2D & $25.51$ & $45.46$ & \textcolor{blue}{21.36} & \textcolor{red}{11.83} & $83.58$ \\
4CR & $0$ & $0$ & $0$ \\
4CRE-V1 & \textcolor{blue}{29.15} & $40.44$ & \textcolor{red}{20.22} \\
4CR1CF & \textcolor{blue}{12.69} & $33.64$ & \textcolor{red}{8.33} \\
5CVT & \textcolor{red}{34.65} & \textcolor{blue}{35.98} & $44.45$ \\ \hline
\end{tabular}
\caption{\small{Averaged number of required labeled samples in the testing set ($\%$) for all competing algorithms. The performances of supervised detectors (HLFR, LFR, DDM, EDDM, and HDDM) are omitted because they require all the true labels (i.e., 100\%). Also, MD3 and CDBD cannot be applied to multi-class datasets including 4CR, 4CRE-V1, 4CR1CF, and 5CVT. The least and second least labeled samples used for each dataset are marked with \textcolor{red}{red} and \textcolor{blue}{blue} respectively. Our methods and A-KS do not detect any drifts on 4CR, and thus they use ``0'' labeled samples for 4CR. MD3 in many cases uses the least labels, but its detection accuracy is the worst as in Table~\ref{Tab:NAUC}.}}\label{Tab:synthetic_labels}
\end{center}
\end{table}
\setlength{\textfloatsep}{10pt plus 1.0pt minus 50.0pt}
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\begin{table*}[!hbpt]
\small
\begin{tabularx}{\textwidth}{@{}lYYYYYYYYYYY@{}}
\toprule
&\multicolumn{2}{c}{\bfseries Our methods}
&\multicolumn{3}{c}{\bfseries Unsupervised methods}
&\multicolumn{3}{c}{\bfseries Supervised methods} \\
\cmidrule(lr){2-3} \cmidrule(l){4-6} \cmidrule(l){7-11}
& HHT-CU & HHT-AG & A-KS & MD3 & CDBD & HLFR & LFR & DDM & EDDM & HDDM \\
\midrule
2CDT & $\textcolor{red}{0.92}/\textcolor{red}{0.92}$ & $0.15/0.48$ & $0.13/0.62$ & $0.02/0.01$ & $0.08/\textcolor{green}{0.85}$ & $\textcolor{green}{0.82}/0.79$ & $0.81/0.80$ & $0.79/0.79$ & $0.77/0.77$ & $\textcolor{blue}{0.91}/\textcolor{blue}{0.91}$ \\
2CHT & $\textcolor{green}{0.86}/0.86$ & $0.15/0.48$ & $0.15/0.49$ & $0.02/0.01$ & $0.09/\textcolor{blue}{0.91}$ & $\textcolor{red}{0.93}/\textcolor{red}{0.93}$ & $\textcolor{red}{0.93}/\textcolor{red}{0.93}$ & $0.60/0.60$ & $\textcolor{blue}{0.89}/\textcolor{green}{0.89}$ & $\textcolor{blue}{0.89}/\textcolor{green}{0.89}$\\
UG-2C-2D & $\textcolor{green}{0.58}/0.58$ & $0.16/0.13$ & $0.08/0.05$ & $0.01/0.07$ & $0.04/\textcolor{red}{0.87}$ & $\textcolor{blue}{0.64}/0.42$ & $0.52/\textcolor{blue}{0.72}$ & $0.43/0.62$ & $0.33/0.49$ & $\textcolor{red}{0.88}/\textcolor{green}{0.67}$\\
MG-2C-2D & $\textcolor{green}{0.52}/0.52$ & $0.26/0.49$ & $0.21/0.43$ & $0.05/0.16$ & $0.02/\textcolor{blue}{0.80}$ & $\textcolor{red}{0.74}/\textcolor{green}{0.74}$ & $0.46/\textcolor{red}{0.91}$ & $0.37/0.60$ & $0.34/0.73$ & $\textcolor{blue}{0.68}/0.73$\\
4CR & $-/0$ & $-/0$ & $-/0$ & & & & & $\textcolor{blue}{0.94}/\textcolor{blue}{0.94}$ & $\textcolor{green}{0.86}/\textcolor{green}{0.86}$ & $\textcolor{red}{0.98}/\textcolor{red}{0.98}$ \\
4CRE-V1 & $\textcolor{green}{0.78}/\textcolor{green}{0.78}$ & $0.21/0.21$ & $0.19/0.21$ & & & & & $0.20/0.22$ & $\textcolor{blue}{0.84}/\textcolor{blue}{0.84}$ & $\textcolor{red}{0.98}/\textcolor{red}{0.98}$ \\
4CR1CF & $\textcolor{green}{0.49}/0.49$ & $\textcolor{blue}{0.66}/\textcolor{blue}{0.86}$ & $0.43/0.45$ & & & & & $0.10/0.50$ & $0.35/\textcolor{green}{0.63}$ & $\textcolor{red}{0.89}/\textcolor{red}{0.89}$ \\
5CVT & $\textcolor{red}{0.53}/\textcolor{green}{0.73}$ & $0.16/\textcolor{blue}{0.75}$ & $0.16/\textcolor{red}{0.84}$ & & & & & $\textcolor{blue}{0.43}/\textcolor{green}{0.73}$ & $0.28/0.65$ & $\textcolor{green}{0.35}/0.41$ \\
\bottomrule
\end{tabularx}
\caption{\small{Averaged Normalized Area Under the Curve (NAUC) values for the $\mathbf{Precision}$-$\mathbf{Range}$ curve (left side of the forward slash) and the $\mathbf{Recall}$-$\mathbf{Range}$ curve (right side of the forward slash) of all competing algorithms. A higher value indicates better performance. The best three results are marked with \textcolor{red}{red}, \textcolor{blue}{blue} and \textcolor{green}{green} respectively. ``-'' denotes that no concept drift is detected. MD3, CDBD, HLFR and LFR cannot be applied to multi-class datasets including 4CR, 4CRE-V1, 4CR1CF, and 5CVT. In general, the proposed methods overwhelmingly outperform the unsupervised methods and achieve performance similar to the supervised methods. In addition, attention should be paid to 4CR, on which only DDM, EDDM, and HDDM provide satisfactory detection results. This suggests that purely monitoring \emph{classification uncertainty} or modeling the marginal distribution $\mathbb{P}(\ve{X})$ becomes invalid when there is no change in $\mathbb{P}(\ve{X})$. In this case, sufficient ground truth labels are a prerequisite for reliable detection.}}\label{Tab:NAUC}
\end{table*}
\subsection{Results on Real-world Data} \label{section5.3}
In this section, we evaluate algorithm performance on real-world streaming data classification in a non-stationary environment. Three widely used real-world datasets are selected, namely \textbf{USENET1}~\cite{katakis2008ensemble}, \textbf{Keystroke}~\cite{souza2015data} and \textbf{Posture}~\cite{kaluvza2010agent}. Descriptions of these three datasets are available in \cite{yu2017concept,reis2016fast}. For each dataset, we select the same number of labeled instances to train the initial classifier as suggested in \cite{yu2017concept,reis2016fast}.
The concept drift detection and streaming classification results are summarized in Table~\ref{Tab:real_metric}. We measure the cumulative classification accuracy and the portion of required labels to evaluate prediction quality; since the classes are balanced, classification accuracy is a good indicator. In these experiments, our proposed HHT-CU and HHT-AG consistently produce significantly fewer false positives while maintaining a good true positive rate for concept drift detection. This confirms the effectiveness of the proposed hierarchical architecture for concept drift reverification. HHT-CU achieves overall the best performance in terms of accurate drift detection, streaming classification, and the rational utilization of labeled data.
\section{Conclusion}
This paper presents a novel Hierarchical Hypothesis Testing (HHT) framework with a \textbf{Request-and-Reverify} strategy to detect concept drifts. Two methods, namely HHT with Classification Uncertainty (HHT-CU) and HHT with Attribute-wise ``Goodness-of-fit'' (HHT-AG), are proposed respectively under this framework. Our methods significantly outperform the state-of-the-art unsupervised counterparts, and are even comparable or superior to the popular supervised methods with significantly fewer labels. The results indicate our progress on using far fewer labels to perform accurate concept drift detection. The HHT framework is highly effective in deciding label requests and validating detection candidates.
\setlength{\textfloatsep}{10pt plus 1.0pt minus 30.0pt}
\begin{table}[htbp]
\small
\centering
\subtable[\small{USENET1}]{
\begin{tabular}{cccccccccc}\hline
& Precision & Recall & Delay & Accuracy & Labels \\ \hline
\textbf{HHT-CU} & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $13.25$ & $\mathbf{85}$ & $\mathbf{30.77}$ \\
\textbf{HHT-AG} & - & 0 & - & $57$ & $0$ \\
A-KS & - & 0 & - & $57$ & $0$ \\
MD3 & $0.14$ & $0.25$ & $16$ & $76$ & $71.85$ \\
CDBD & $0.10$ & $0.75$ & $\mathbf{3.33}$ & $82$ & $91.15$ \\
HLFR & $0.75$ & $0.75$ & $11.67$ & $84$ & $100$ \\
LFR & $0.75$ & $0.75$ & $11.67$ & $84$ & $100$ \\
DDM & $0.75$ & $0.75$ & $18.33$ & $83$ & $100$ \\
EDDM & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $57.25$ & $81$ & $100$ \\
HDDM & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $17.75$ & $83$ & $100$ \\
NA & - & 0 & - & $57$ & $0$ \\\hline
\end{tabular}
}
\subtable[\small{Keystroke}]{
\begin{tabular}{cccccccccc}\hline
& Precision & Recall & Delay & Accuracy & Labels \\ \hline
\textbf{HHT-CU} & $\mathbf{1.00}$ & $\mathbf{0.14}$ & $1.5$ & $\mathbf{88}$ & $\mathbf{14.29}$ \\
\textbf{HHT-AG} & $0.5$ & $\mathbf{0.14}$ & $\mathbf{1}$ & $81$ & $57.11$ \\
A-KS & $0.25$ & $\mathbf{0.14}$ & $\mathbf{1}$ & $79$ & $52.43$ \\
DDM & - & 0 & - & $67$ & $100$ \\
EDDM & 0.33 & 0 & - & $68$ & $100$ \\
HDDM & $\mathbf{1.00}$ & $\mathbf{0.14}$ & $\mathbf{1}$ & $86$ & $100$ \\
NA & - & 0 & - & $56$ & $0$ \\ \hline
\end{tabular}
}
\subtable[\small{Posture}]{
\begin{tabular}{cccccccccc}\hline
& Precision & Recall & Delay & Accuracy & Labels \\ \hline
\textbf{HHT-CU} & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $2421.8$ & $\mathbf{56}$ & $14.60$ \\
\textbf{HHT-AG} & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $\mathbf{406}$ & $55$ & $17.97$ \\
A-KS & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $\mathbf{406}$ & $55$ & $\mathbf{10.54}$ \\
DDM & $0.75$ & $0.75$ & $3318.67$ & $54$ & $100$ \\
EDDM & $0.75$ & $0.75$ & $1253.4$ & $54$ & $100$ \\
HDDM & $\mathbf{1.00}$ & $\mathbf{1.00}$ & $689.25$ & $\mathbf{56}$ & $100$ \\
NA & - & 0 & - & $46$ & $0$ \\ \hline
\end{tabular}
}
\caption{\small{Quantitative metrics on real-world applications. $\mathbf{Precision}$, $\mathbf{Recall}$ and $\mathbf{Delay}$ denote the concept drift detection precision value, recall value and detection delay, whereas $\mathbf{Accuracy}$ and $\mathbf{Labels}$ denote the cumulative classification accuracy and the required portion of true labels in the testing set (\%). ``-'' denotes that no concept drift is detected or that all detected drift points are false alarms. ``NA'': using the initial classifier without any update.}}\label{Tab:real_metric}
\end{table}
\setlength{\textfloatsep}{10pt plus 1.0pt minus 10.0pt}
\nocite{gonccalves2014comparative,sobolewski2013concept,brzezinski2014prequential,wang2013concept}
\bibliographystyle{named}
\section{Introduction}
The Covid-19 pandemic is an extraordinary event in the history of our
civilization. Never before have billions of people been confined to their
homes. Self-isolation is conducive to esoteric reflection, and we
began to ruminate about Gaia.
Life on Earth, once arisen, has never completely disappeared, although
it survived at least five major extinctions. Isn't that a small (or big)
miracle?
The Gaia hypothesis suggests that Nature's mercy is neither an accident
nor a benevolent deity's work, but instead is the inevitable result
of interactions between organisms and their environment \citep{1}.
In poetic terms, the Earth with its ecosystem is a gigantic organism
that harmonizes itself with the help of many invisible feedbacks.
Unfortunately, human activity causes environmental degradation,
which in its destructive effect on the ecosystem rapidly approaches
the level of the asteroid impact that ended the dinosaur era.
We had better stop destabilizing Gaia, because otherwise either
we succeed in this reckless enterprise and destroy the ecosystem along with
our own existence, or Gaia recognizes our malicious character and its
invisible feedbacks eliminate us as a destabilizing element. However,
in order to harmonize with Gaia, we must improve our understanding
of these invisible feedbacks.
\section{Solar activity and pandemics}
At first glance (and at the second too) there can be no connection
between pandemics and solar activity. However, this is exactly what
Alexander Chizhevsky discovered \citep{2} many years ago.
Independently, Hope-Simpson observed the same correlation
between influenza pandemics and sunspot maximums \citep{3}.
Hoyle and Wickramasinghe confirmed these findings \citep{4}
indicating that the two phenomena have kept in step over some
17 solar cycles.
Interestingly, the previous two Corona virus epidemics, the severe
acute respiratory syndrome SARS-CoV and the Middle-East
respiratory syndrome MERS-CoV both occurred at double peaks
in the sunspot cycle \citep{5}.
A more general result states that most pandemics in the past
occurred near the sunspot extrema (maxima or minima)
\citep{6,7}. In a sense, the present Covid-19 pandemic was
predicted based on this idea \citep{8,8A}.
Of course, such an outlandish idea should be taken with a grain of salt,
and not everybody believes in it \citep{9}. However, we think this strange
idea cannot be dismissed easily. The correlation in the data is so obvious
that different people have noticed it. We already mentioned Chizhevsky
and Hope-Simpson. It seems that the first person who linked the
sunspot cycle to the malaria epidemics, as early as in 1881, was
C. Meldrum \citep{10}. In 1936, Gill noted that the association of
malaria epidemics with the epochs of maximum and minimum
of sunspots is extremely close \citep{10}.
Solar activity can affect biological organisms in various ways.
Of particular interest is the influence mediated by geomagnetic and
extra-low frequency (ELF) electromagnetic fields \citep{11,12}.
The Schumann resonance with base frequency of about 8 Hz is a global
electromagnetic resonance excited by natural lightning activity within the
Earth-ionosphere waveguide \citep{13}. Life evolved on Earth under the
constant presence of Schumann resonances, and thus, we cannot
exclude that ELF electromagnetic fields played a role in biological
evolution.
Human activity has become a source of widespread electromagnetic
pollution, which raises concerns about the possible dangerous health
effects \citep{14}. Although studies on the possible biological effects of
ELF and other artificial electromagnetic fields remain largely
controversial, there is growing evidence that ELF fields cause
numerous types of changes in cells \citep{15,16,17}.
\section{Biological effects of ELF electromagnetic fields}
Modern life is not conceived without electricity with its power lines and
appliances, without telecommunication devices. A byproduct of this
technical revolution is an ever-growing number of sources of artificial
electromagnetic fields, both in ELF and radio frequency range, and this
circumstance, as already mentioned, elicits public health concerns
\citep{14}.
The scope of interactions of electromagnetic fields (EMF) with living matter
is rich and diverse, requiring simultaneously an unbiased, open-minded,
careful and cautious approach when studying the influence of EMF on
biological processes.
Despite a large body of literature devoted to biological effects of ELF
magnetic fields (in contrast to ELF electric fields, magnetic fields can
easily penetrate biological tissues), no coherent picture has emerged so far
regarding the plausibility of such effects, or regarding the interaction
mechanisms.
Epidemiological studies that have focused on the potential health hazards of
EMFs are largely controversial. About half of the studies found such effects,
but the other half failed to find them \citep{18}. The reason for these
conflicting results is unclear.
Nonetheless, ample and compelling evidence has accumulated indicating
that ELF electromagnetic fields have important effects on cell functioning
\citep{19,20,21,22,23,24,24A}. The nature of these effects is not entirely
clear. The problem is that such effects are observed for very weak magnetic
fields, so weak that any such effect is expected to be masked by thermal noise
\citep{25}.
Perhaps, the extreme sensitivity of living organisms to weak electromagnetic
fields is not completely unexpected from the point of view of evolutionary
biology, since life arose and evolved on Earth with the constant presence of
natural ELF electromagnetic fields, especially Schumann resonances
\citep{26,27}.
Remarkably, there is a striking similarity between natural ELF signals and
human brain electroencephalograms \citep{28}. Amazingly, many species exhibit,
irrespective of the size and complexity of their brain, essentially similar
low-frequency electrical activity \citep{29}, and it is possible that the
dominant frequencies of brain waves may be an evolutionary result of the
presence and influence of the resonant ELF electromagnetic background of
Schumann \citep{30}.
An interesting hypothesis is that ELF background fields played an important
role in the evolution of biological systems and are used by them as a means of
stochastic synchronization for various biorhythms \citep{27}. The Schumann
resonance frequencies are mainly controlled by the Earth's radius, which has
remained constant over billions of years. Therefore, these frequencies can
play a special role for the regulatory pathways of living organisms, the
Schumann resonances providing a synchronization reference signal, a Zeitgeber
(time giver) \citep{31}.
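For orientation, the sketch below evaluates the textbook formula for the modes of an idealized lossless Earth-ionosphere cavity, $f_n=\frac{c}{2\pi a}\sqrt{n(n+1)}$, which depend only on the Earth's radius $a$. This idealized formula overestimates the observed resonances (roughly 7.8, 14.3 and 20.8~Hz) because the real cavity is lossy, but it shows why the frequencies are so stable over geological time.

```python
import math

C = 299_792_458.0   # speed of light, m/s
R_EARTH = 6.371e6   # mean Earth radius, m

def schumann_ideal(n):
    """Mode-n frequency (Hz) of an idealized lossless Earth-ionosphere
    cavity: f_n = c / (2 * pi * a) * sqrt(n * (n + 1))."""
    return C / (2 * math.pi * R_EARTH) * math.sqrt(n * (n + 1))

for n in range(1, 4):
    print(f"n = {n}: {schumann_ideal(n):5.1f} Hz")
```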
This hypothesis, while attractive, has a serious drawback: it remains a mystery
how living cells can detect a Schumann resonance signal that is so weak (the
magnetic component is only a few $pT$) compared to the ubiquitous thermal
noise. A clue may be provided by spatially and temporally coherent
interactions of Schumann resonances with a large ensemble of components of the
system \citep{32}. For example, the human body contains about $10^{14}$ cells
and $10^{10}$ cortical neurons.
If the ELF electromagnetic fields, and in particular the Schumann resonances,
really play a regulatory role in biological processes, then the effect of
solar activity on living organisms will not look so mysterious, since solar
activity changes the geomagnetic field and can lead to geomagnetic storms, as
well as to changes in the parameters of the ionosphere, and, consequently, to
a change in the parameters of Schumann resonances and ELF radiation background.
There are some indications that abrupt changes in geomagnetic and solar
activity, as well as geomagnetic storms, can act as stressors that alter the
regulatory processes of organisms, blood pressure, immune, neurological,
cardiac and some other important life-supporting processes in living organisms
\citep{33}. There are studies that indicate that geomagnetic disturbances can
exacerbate existing diseases, can lead to cardiac arrhythmias, cardiovascular
disease, a significant increase in hospitalization rates for mental disorders,
depression, suicide attempts, homicide and traffic accidents (see \citep{33}
and references therein).
There are several hypotheses that could explain this strange connection
between solar activity and geomagnetic disturbances and mental health.
According to the visceral theory of sleep \citep{34}, the same cortical
neurons that process exteroceptive information in the waking state switch to
processing information from various internal organs during sleep. At the same
time, a violation of the switching process, when visceral information is
mistakenly interpreted by the brain as exteroceptive, can manifest itself as
a mental disorder \citep{35}. If these hypotheses are correct, and if
geomagnetic disturbances can influence the brain's switching mechanisms, then
the unexpected link between solar activity and mental disorders could be
explained.
\section{Concluding remarks}
Scientific progress has greatly expanded the ability of humankind to cause
large-scale changes in the environment. Unfortunately, we do not always
understand the subtle feedback loops that operate in the biosphere to predict
all consequences of such changes. An amusing example of the unexpected outcome
of a large-scale human intervention in nature is provided in \citep{36}.
For unknown reasons, fish in the Gulf of Mexico position themselves over
buried oil pipelines off the shore of Texas, orienting themselves directly
above the buried pipeline at a height of 1-3 meters above the seabed and
perpendicular to the axis of the pipeline. Presumably they are responding to
some electromagnetic stimuli, such as remnant magnetism in pipeline sections,
voltage gradient induced by corrosion protection devices, or transient signals
induced into the pipeline by remote lightnings or solar wind induced magnetic
storms \citep{36}.
Research on biological effects intensified around 1967 as part of an
evaluation of the environmental impact of a proposed ELF military antenna
(Project Sanguine) \citep{37}. Unfortunately, the presence of military and
commercial components in this research makes it politically very sensitive
\citep{26}. Nonetheless, in light of the results so far available, it would
be too irresponsible to dismiss such effects as being implausible \citep{32}.
On the practical side, if ELF fields cause biological effects, whatever
the unknown mechanism of this interaction, we can try to use these effects
in our fight against SARS-CoV-2 and similar infections. It is known that
bioelectric signals generated by the metabolic activity of cells lie in the
ELF range; therefore, by interfering with these signals with external
low-intensity ELF electromagnetic fields we can suppress microbial or
bacterial activity \citep{18a,19a,20a}.
We believe that under the burden of the Covid-19 pandemic, research in
this direction should be intensified. Studies of the antimicrobial effects
of ELF electromagnetic fields are not expected to be too expensive.
If successful, it promises a non-invasive, inexpensive, safe and fast
technique to fight infections \citep{19a,20a}.
Is the Covid-19 pandemic related to the deepest sunspot minimum in a century,
which we are experiencing now? During a sunspot minimum the solar magnetic
field weakens and, as a result, the flux of galactic cosmic rays entering the
Earth increases. There are some indications that an increase in the flux of cosmic
rays can lead to an increase in lightning activity on earth \citep{38,39} and
thus change the natural ELF electromagnetic background. As noted in
\citep{26}, changing the electromagnetic background poses a twofold challenge
to us: weakening the immune system due to constant stress and more severe
illnesses, since electromagnetic fields can stimulate bacterial growth and
increase their resistance to antibiotics.
Increased cosmic rays can also lead to the appearance of novel virion strains
due to induced mutation and genetic recombination events \citep{8A,40},
especially if viruses spread even beyond the tropopause (new bacteria
have been found in the stratosphere and even on the exterior of the
International Space Station, orbiting at an altitude of 400~km) \citep{40}.
Interestingly, the idea that the flux of galactic cosmic rays can affect
the ELF electromagnetic background can be tested using the Forbush effect
\cite{39A}. During solar flares, the flux of galactic cosmic rays decreases
rapidly (over a day or less) due to modification of the near-Earth
interplanetary magnetic field. This so-called Forbush decrease is transient
and is followed by a gradual recovery over several days \cite{39B,39C,39D}.
Based on the measurements in the Kola peninsula, it was demonstrated that
in all ten events of significant Forbush-decreases, the intensity of the
ELF-atmospherics decreased (down to their complete disappearance) \cite{39A}.
It was hypothesized that this phenomenon is caused by a decrease in the
intensity of discharges of a special type (sprites and jets) as a result of
a decrease in atmospheric ionization at altitudes of 10-30~km during the
Forbush decrease in the flux of galactic cosmic rays \cite{39A}.
Cosmic ray forcing of the climate acts simultaneously and with the same
sign throughout the entire globe and operates on all time scales from days
to hundreds of millions of years \cite{39E}. For this reason, even a
relatively small forcing can lead to a large climatic response over time
\cite{39E}. To unravel the anthropogenic contribution to the current climate
change and assess its danger, which is now the subject of much public
concern and controversy, we need to understand physical mechanisms underlying
the influence of solar and cosmic ray variability on climate and their impact
on the biosphere.
It has recently been shown that bats, like many other animals with highly
developed magnetosensory skills, use the magnetic field for orientation
and can sense even very weak magnetic fields \cite{40A,40B,40C}. Perhaps,
this magnetoreception is influenced by ELF electromagnetic background
\cite{40D,40E}. Another source of possible influence is the change in
cloudiness due to the increased flux of galactic cosmic rays, as bats have
been shown to calibrate their magnetic compasses with sunset cues \cite{40F}.
So, one can imagine the following scenario \footnote{Suggested by an
anonymous reviewer}. Changes in the ELF electromagnetic background, caused
by the increased flux of cosmic rays due to unusually deep sunspot minimum,
can cause abnormal movements of the population of bats and affect the time
of their arrival and departure. Delayed arrival or departure and longer travel
times can increase the population of bats in some areas, thereby increasing
competition for limited food supplies, and can also increase the likelihood
of interspecies transmission of the virus. Besides, an increased level of
irradiation raises genetic recombination rates, as was demonstrated
in laboratories during the 1950s and 1960s \cite{6}. Finally, under the
influence of these circumstances, the new coronavirus successfully recombined
and caused the Covid-19 pandemic.
We end this article with a funny observation from \citep{40}. The Italian word
\emph{influenza} means influence: in the case of the illness, the influence of
the stars. This etymology reflects the belief of our ancestors that
events in the sky and events on Earth are interconnected. It may well be that
they were right.
\section*{Acknowledgments}
The authors thank Olga Chashchina and an anonymous reviewer for critical
comments that helped to improve the manuscript.
\section*{Conflict of interest statement}
The authors declare that the research was conducted in the absence of any
commercial or financial relationships that could be construed as a potential
conflict of interest.
\section*{References}
\section{Introduction}
\label{sec:intro}
Degeneracy is an important concept discussed in most textbooks on quantum
mechanics\cite{CDL77} and quantum chemistry\cite{P68} and several textbooks
on mathematics show its relationship with symmetry\cite{H62,C90}. In recent
years there has been great interest in non-Hermitian quantum mechanics\cite
{B05,B07} (and references therein) that gives rise to the concept of
exceptional points\cite{HS90,H00,HH01,H04,GRS07}, also known as defective
points\cite{MF80}, that also play a relevant role in perturbation theory\cite
{K95}. Non-Hermitian quantum mechanics and exceptional points have become so
relevant nowadays that there have been several pedagogical papers published
recently on the subject\cite{BBJ03,DV18,F18,LXLL20}.
The effect of exceptional points is most dramatically illustrated by
parameter-dependent Hamiltonians. As the model parameter approaches an
exceptional point two (or sometimes more) real eigenvalues approach each
other and coalesce. They emerge on the other side of the exceptional point
as a pair of complex-conjugate numbers. This coalescence is different from
degeneracy because at the exceptional point there is only one linearly
independent eigenvector. However, it is sometimes called non-Hermitian
degeneracy as opposed to Hermitian degeneracy\cite{BO98}.
The purpose of this paper is to illustrate the difference between
coalescence and degeneracy by means of a simple, exactly solvable
one-parameter model. In section~\ref{sec:Model} we discuss the model, in
section~\ref{sec:symmetry} we discuss degeneracy from the point of view of
symmetry and, finally, in section~\ref{sec:conclusions}~we summarize the
main results of the paper and draw conclusions.
\section{The model}
\label{sec:Model}
In order to illustrate both degeneracy and coalescence of eigenvalues we
propose the non-symmetric matrix
\begin{equation}
\mathbf{H}(\beta )=\left(
\begin{array}{lll}
0 & 1 & 1 \\
1 & 0 & 1 \\
\beta & 1 & 0
\end{array}
\right) , \label{eq:H}
\end{equation}
that has the following eigenvalues
\begin{eqnarray}
E_{1} &=&-1,\;E_{2}=\frac{1}{2}\left( 1-\sqrt{4\beta +5}\right) ,\;E_{3}=
\frac{1}{2}\left( 1+\sqrt{4\beta +5}\right) ,\;\beta <1,  \nonumber \\
E_{1} &=&\frac{1}{2}\left( 1-\sqrt{4\beta +5}\right) ,\;E_{2}=-1,\;E_{3}=
\frac{1}{2}\left( 1+\sqrt{4\beta +5}\right) ,\;\beta >1,
\label{eq:Eigenvalues(beta)}
\end{eqnarray}
labelled so that $E_{1}\leq E_{2}\leq E_{3}$. The real and imaginary parts
of these eigenvalues are shown in figures \ref{Fig:ReE} and \ref{Fig:ImE},
respectively.
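These closed-form eigenvalues are easy to check numerically. The sketch below (Python with numpy; the function names are our own) compares them with a direct diagonalization of $\mathbf{H}(\beta )$ for several values of $\beta$; a loose tolerance is used because, at the exceptional point $\beta =-5/4$, the eigenvalues of the defective matrix are obtained numerically only to roughly the square root of machine precision.

```python
import numpy as np

def H(beta):
    # the non-symmetric matrix H(beta) of Eq. (eq:H)
    return np.array([[0.0, 1.0, 1.0],
                     [1.0, 0.0, 1.0],
                     [beta, 1.0, 0.0]])

def analytic_eigenvalues(beta):
    # closed-form eigenvalues: -1 and (1 +/- sqrt(4*beta + 5))/2
    s = np.sqrt(complex(4.0 * beta + 5.0))
    vals = [-1.0 + 0.0j, (1.0 - s) / 2.0, (1.0 + s) / 2.0]
    return sorted(vals, key=lambda z: (z.real, z.imag))

for beta in (-2.0, -1.25, 0.0, 1.0, 3.0):
    numeric = sorted(np.linalg.eigvals(H(beta)).astype(complex),
                     key=lambda z: (z.real, z.imag))
    # loose tolerance: defective eigenvalues are sqrt(eps)-sensitive
    assert np.allclose(numeric, analytic_eigenvalues(beta), atol=1e-6)
```

For $\beta <-5/4$ the numerical spectrum indeed contains the complex-conjugate pair shown in figure \ref{Fig:ImE}.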
We appreciate that $E_{1}$ and $E_{2}$ cross at $\beta =1$ and swap their
relative order. These eigenvalues become degenerate at $\beta =1$ and a set
of three orthonormal eigenvectors of $\mathbf{H}(1)$ are
\begin{eqnarray}
E_{1} &=&E_{2}=-1,\;\mathbf{v}_{1}=\frac{1}{\sqrt{6}}\left(
\begin{array}{c}
1 \\
1 \\
-2
\end{array}
\right) ,\;\mathbf{v}_{2}=\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
-1 \\
0
\end{array}
\right) , \nonumber \\
E_{3} &=&2,\;\mathbf{v}_{3}=\frac{1}{\sqrt{3}}\left(
\begin{array}{l}
1 \\
1 \\
1
\end{array}
\right) . \label{eq:Eigenval_eigenvect_beta=1}
\end{eqnarray}
The symmetric matrix $\mathbf{H}(1)$ can be diagonalized by means of the
orthogonal matrix $\mathbf{C}$ constructed from the eigenvectors (\ref
{eq:Eigenval_eigenvect_beta=1}) in the usual way\cite{P68}:
\begin{equation}
\mathbf{C}^{t}\mathbf{H}(1)\mathbf{C=}\left(
\begin{array}{ccc}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 2
\end{array}
\right) ,\;\mathbf{C}=\frac{1}{6}\left(
\begin{array}{ccc}
\sqrt{6} & 3\sqrt{2} & 2\sqrt{3} \\
\sqrt{6} & -3\sqrt{2} & 2\sqrt{3} \\
-2\sqrt{6} & 0 & 2\sqrt{3}
\end{array}
\right) . \label{eq:Diag_H(1)}
\end{equation}
Note that the matrix $\mathbf{H}(1)$ exhibits $3$ linearly independent
eigenvectors, two of them associated with the degenerate eigenvalue. Besides, $E_{1}$ and $E_{2}$ remain
real before and after the point $\beta =1$ as shown in figure \ref{Fig:ReE}.
This is the usual degeneracy commonly found in quantum mechanics\cite{CDL77}
and quantum chemistry\cite{P68}.
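This diagonalization can be verified numerically; the following numpy sketch (variable names are ours) checks that $\mathbf{C}$ is orthogonal, that $\mathbf{C}^{t}\mathbf{H}(1)\mathbf{C}$ equals the diagonal matrix above, and that the three eigenvectors are linearly independent, i.e. that the degeneracy is of the Hermitian kind.

```python
import numpy as np

H1 = np.array([[0.0, 1.0, 1.0],
               [1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0]])

# the three orthonormal eigenvectors given in the text
v1 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)
v2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
v3 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
C = np.column_stack([v1, v2, v3])

assert np.allclose(C.T @ C, np.eye(3))                  # C is orthogonal
assert np.allclose(C.T @ H1 @ C, np.diag([-1.0, -1.0, 2.0]))

# Hermitian degeneracy: two eigenvalues coincide, yet there are still
# three linearly independent eigenvectors, so H(1) is diagonalizable
assert np.linalg.matrix_rank(C) == 3
```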
On the other hand, the eigenvalues $E_{2}$ and $E_{3}$ coalesce at $\beta
=-5/4$ and become a pair of complex conjugate numbers when $\beta <-5/4$
(see figures \ref{Fig:ReE} and \ref{Fig:ImE}). The matrix $\mathbf{H}(-5/4)$
exhibits eigenvalues and eigenvectors
\begin{eqnarray}
E_{1} &=&-1,\;\mathbf{v}_{1}=\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
0 \\
1 \\
-1
\end{array}
\right) , \nonumber \\
E_{2} &=&E_{3}=\frac{1}{2},\;\mathbf{v}_{2}=\frac{1}{3}\left(
\begin{array}{c}
2 \\
2 \\
-1
\end{array}
\right) . \label{eq:Eigenvalues_beta=-5/4}
\end{eqnarray}
In this case there are only two linearly independent eigenvectors
and the matrix $\mathbf{H}(-5/4)$ is defective
(non-diagonalizable). One can obtain other suitable vectors by
means of a Jordan chain\cite{GRS07} (see
https://en.wikipedia.org/wiki/Generalized\_eigenvector\#Jordan\_chains
for examples). In the present case we obtain a third column vector
$\mathbf{v}_{3}$ with elements $c_{1}$, $c_{2}$ and $c_{3}$ from
\begin{equation}
\left[ \mathbf{H}\left( -\frac{5}{4}\right) -\frac{1}{2}\mathbf{I}\right]
\left(
\begin{array}{c}
c_{1} \\
c_{2} \\
c_{3}
\end{array}
\right) =\mathbf{v}_{2}, \label{eq:Jordan_chain}
\end{equation}
where $\mathbf{I}$ is the $3\times 3$ identity matrix. One possible solution
is
\begin{equation}
\mathbf{v}_{3}=\frac{1}{6}\left(
\begin{array}{c}
6 \\
6 \\
1
\end{array}
\right) . \label{eq:Jordan_vector}
\end{equation}
With these three vectors we can convert $\mathbf{H}\left( -\frac{5}{4}\right) $ into a Jordan matrix
\begin{equation}
\mathbf{S}^{-1}\mathbf{H}\left( -\frac{5}{4}\right) \mathbf{S}=\left(
\begin{array}{c|cc}
-1 & 0 & 0 \\ \hline
0 & \frac{1}{2} & 1 \\
0 & 0 & \frac{1}{2}
\end{array}
\right) ,\;\mathbf{S}=\frac{1}{6}\left(
\begin{array}{ccc}
0 & 4 & 6 \\
3\sqrt{2} & 4 & 6 \\
-3\sqrt{2} & -2 & 1
\end{array}
\right) , \label{eq:Jordan_matrix}
\end{equation}
where the two Jordan blocks are explicitly indicated.
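Both the Jordan chain and the similarity transformation can be checked numerically. The numpy sketch below verifies the chain condition (\ref{eq:Jordan_chain}), the rank deficiency that makes $\mathbf{H}(-5/4)$ defective, and the Jordan form (\ref{eq:Jordan_matrix}).

```python
import numpy as np

H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [-1.25, 1.0, 0.0]])          # H(-5/4)

v1 = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)
v2 = np.array([2.0, 2.0, -1.0]) / 3.0
v3 = np.array([6.0, 6.0, 1.0]) / 6.0       # generalized eigenvector

# Jordan-chain condition: (H - (1/2) I) v3 = v2
assert np.allclose((H - 0.5 * np.eye(3)) @ v3, v2)

# the eigenvalue 1/2 has geometric multiplicity 1: H(-5/4) is defective
assert np.linalg.matrix_rank(H - 0.5 * np.eye(3)) == 2

# S^{-1} H S reproduces the Jordan matrix with blocks {-1} and {1/2, 1}
S = np.column_stack([v1, v2, v3])
J = np.array([[-1.0, 0.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.5]])
assert np.allclose(np.linalg.inv(S) @ H @ S, J)
```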
\section{Symmetry}
\label{sec:symmetry}
The matrix $\mathbf{H}(1)$ can be thought of as some kind of description of
three identical objects. Therefore, the six orthogonal matrices
$\mathbf{U}_{i}$, $i=0,1,\ldots ,5$, that carry out the six permutations of
three objects $\left( c_{1}\;c_{2}\;c_{3}\right) $ should leave
$\mathbf{H}(1)$ invariant. In order to construct such matrices we proceed as
follows:
\begin{eqnarray}
\left(
\begin{array}{l}
c_{1}^{\prime } \\
c_{2}^{\prime } \\
\vdots \\
c_{N}^{\prime }
\end{array}
\right) &=&\mathbf{U}\left(
\begin{array}{l}
c_{1} \\
c_{2} \\
\vdots \\
c_{N}
\end{array}
\right) , \nonumber \\
c_{i}^{\prime } &=&\sum_{j=1}^{N}u_{ij}c_{j},\;u_{ij}=\frac{\partial
c_{i}^{\prime }}{\partial c_{j}}, \label{eq:U_construction}
\end{eqnarray}
where $u_{ij}$, $i,j=1,2,\ldots ,N$, are the matrix elements of $\mathbf{U}$.
As an example, consider the cyclic permutation
\begin{eqnarray}
\left(
\begin{array}{l}
c_{3} \\
c_{1} \\
c_{2}
\end{array}
\right) &=&\mathbf{U}_{1}\left(
\begin{array}{l}
c_{1} \\
c_{2} \\
c_{3}
\end{array}
\right) , \nonumber \\
c_{1}^{\prime } &=&c_{3},\;c_{2}^{\prime }=c_{1},\;c_{3}^{\prime }=c_{2}.
\label{eq:U_construction_example}
\end{eqnarray}
In this way we construct the group of matrices $\left\{ \mathbf{U}_{0},\,
\mathbf{U}_{1},\,\ldots ,\mathbf{U}_{5}\right\} $
\begin{eqnarray}
\mathbf{U}_{0} &=&\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}
\right) ,\;\mathbf{U}_{1}=\left(
\begin{array}{ccc}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array}
\right) ,\;\mathbf{U}_{2}=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}
\right) , \nonumber \\
\mathbf{U}_{3} &=&\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0
\end{array}
\right) ,\;\mathbf{U}_{4}=\left(
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0
\end{array}
\right) ,\;\mathbf{U}_{5}=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{array}
\right) , \label{eq:group_matrices}
\end{eqnarray}
that leave the matrix $\mathbf{H}(1)$ invariant: $\mathbf{U}_{i}^{t}\mathbf{H}(1)\mathbf{U}_{i}=\mathbf{H}(1)$. The group of the six permutations of
three objects (including the identity $\mathbf{U}_{0}$) is commonly known as
the symmetric group $S_{3}$\cite{H62} that is isomorphic to $D_{3}$ and
$C_{3v}$\cite{P68,C90}. When $\beta \neq 1$ the only matrix that leaves
$\mathbf{H}(\beta )$ invariant is $\mathbf{U}_{0}$.
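These invariance relations are straightforward to test numerically. The sketch below builds the six permutation matrices from rows of the identity (our own construction, equivalent to Eq. (\ref{eq:group_matrices}) up to ordering) and checks that all of them leave $\mathbf{H}(1)$ invariant, while for $\beta \neq 1$ only the identity survives.

```python
import numpy as np
from itertools import permutations

H1 = np.ones((3, 3)) - np.eye(3)                  # H(1)

# the six permutation matrices U_0 ... U_5 (rows of the identity permuted)
Us = [np.eye(3)[list(p)] for p in permutations(range(3))]

for U in Us:                                      # U^t H(1) U = H(1)
    assert np.allclose(U.T @ H1 @ U, H1)

# for beta != 1 only the identity leaves H(beta) invariant
def Hb(beta):
    M = np.ones((3, 3)) - np.eye(3)
    M[2, 0] = beta
    return M

survivors = [U for U in Us if np.allclose(U.T @ Hb(2.0) @ U, Hb(2.0))]
assert len(survivors) == 1 and np.allclose(survivors[0], np.eye(3))
```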
\section{Conclusions}
\label{sec:conclusions}
In this paper we compare two apparently similar concepts: degeneracy and
coalescence. Although such concepts have been discussed in the past, here we
propose a simple, exactly solvable model that exhibits both. In our opinion
this model is suitable for the discussion of these concepts in an
introductory course on quantum mechanics. In addition, this simple model is
also suitable for the illustration of the relationship between symmetry and
degeneracy. It is quite easy to construct the orthogonal matrices that
commute with the Hamiltonian matrix, which becomes symmetric at $\beta =1$
and exhibits the greatest degree of degeneracy. All the required algebraic
calculations can be more easily carried out by means of available computer
algebra software. For this reason this model is suitable for learning the
application of such tools.
\begin{center} \section{Introduction} \end{center}
\vspace{1.0cm}
Proteins are biomolecules that
perform and control almost all functions in all living organisms.
Their biological functions include catalysis (enzymes),
muscle contraction (titin), transport of oxygen (hemoglobin), transmission of information
between specific cells and organs (hormones), activities in the
immune system (antibodies), passage of molecules across cell membranes etc.
The long process of evolution has designed proteins in the natural world
in such a mysterious way that
under normal physiological conditions (pH $\approx$ 7, $T = 20-40\,^{\circ}$C,
atmospheric pressure) they acquire well-defined compact three-dimensional
shapes, known as the native conformations. Only in these conformations are
proteins biologically active. Proteins unfold to more extended conformations
if the conditions mentioned above are changed or upon application of
denaturant agents like urea or guanidinium chloride.
If the physiological conditions are restored, then most of
proteins refold spontaneously to their native states \cite{Anfinsen_Science73}.
Proteins can also change their shape, if they are subjected
to an
external mechanical force.
The protein folding theory deals with two main problems. One of them is
the prediction of the native conformation for a given sequence of amino acids.
This is referred to as the protein folding problem.
The other one is the design problem (inverse folding), where a target
conformation is known and one has to find what sequence would fold into
this conformation.
The understanding of folding mechanisms and protein design
have attracted intensive experimental and theoretical interest over
the past few decades as
they can provide insights into our knowledge
about living bodies. The ability to predict the folded form from its sequence would widen
the knowledge of genes.
The genetic code is a sequence of nucleotides in DNA that
determines amino acid sequences for protein synthesis.
Only after synthesis and completion of the folding process can proteins
perform their myriad functions.
Two major results have been achieved in the protein folding problem. From the kinetics perspective, it is
widely accepted that folding follows the funnel picture, i.e.
there exists a multitude of routes to the native state (NS) \cite{Leopold_PNAS92}.
The corresponding free energy landscape (FEL) looks like a funnel. This new point
of view is in sharp contrast with the picture \cite{Baldwin_Nature94}, which assumes
that the folding progresses along a single pathway. The funnel theory
resolved the so-called Levinthal paradox \cite{Levinthal_JCP68}, according to which folding times
would be astronomically large, while proteins {\em in vivo} fold within
experiment and theory showed that the folding is highly cooperative
\cite{Ptitsyn_book}. The transition from a denaturated state (DS) to the folded
one is first order. However, due to small
free energies of stability of the NS, relative to the
unfolded states ($5 - 20 k_BT$), the possibility of a marginally second
order transition is not excluded \cite{MSLi_PRL04}.
Recently Fernandez and coworkers \cite{Fernandez_Sci04} have carried out
force clamp experiments in which proteins are forced to refold under the
weak quenched force. Since the force increases the folding time
and initial
conformations can be controlled by the end-to-end distance, this technique
facilitates the study of protein folding mechanisms.
Moreover, by varying the external force
one can estimate the distance between the DS and transition state (TS) \cite{Fernandez_Sci04,MSLi_PNAS06} or,
in other words, the force clamp can serve as a complementary
tool for studying the FEL of biomolecules.
After the pioneering AFM experiment of Gaub {\em et al.}
\cite{Florin_Science94}, the study of the mechanical unfolding and
stability of biomolecules has flourished.
Proteins are pulled either by the constant force, or by force ramped
with a constant loading rate. An explanation for this rapidly
developing field is that
single-molecule force spectroscopy (SMFS) techniques have a number of advantages compared to conventional folding studies. First,
unlike ensemble measurements, it is possible to observe differences
in the nature of individual unfolding events of a single molecule.
Second, the end-to-end distance is a well-defined reaction coordinate
and it makes comparison of
theory with experiments easier. Remember that a choice of a good reaction
coordinate for describing folding remains elusive.
Third, the single molecule technique allows not only for
establishing the mechanical resistance but also for deciphering FEL of biomolecules. Fourth, SMFS is able to reveal the nature of atomic interactions.
It is worth noting that
studies of protein unfolding are not of academic interest only.
They are very relevant as the unfolding plays
a critically important role in several processes in cells
\cite{Matoushek_COSP03}. For example, unfolding occurs in the process of protein
translocation across some membranes. There is reversible
unfolding during the action of proteins such as titin. Full or partial
unfolding is a key step in amyloidosis.
Despite much progress in experiments and theory, many questions remain open.
What is the molecular mechanism of protein folding of some important proteins?
Can we use approximate theories for them?
Does the size of proteins matter for the cooperativity of the folding-unfolding
transition?
One of the drawbacks of the force clamp technique \cite{Fernandez_Sci04}
is that it fixes one end of a protein.
While thermodynamic quantities do not depend on what end is anchored,
folding pathways which are kinetic in nature may depend on it.
Then it is unclear if this technique
probes the same folding pathways as in the case when both termini are free.
Although in single-molecule experiments one does not know which end of
a biomolecule is attached to the surface, it would be interesting to
know the effect of end
fixation on unfolding pathways. Predictions from
this kind of simulations will be
useful at a later stage of development, when experimentalists can
exactly control
what end is pulled.
Recently, experiments \cite{Brockwell_NSB03,Dietz_PNAS06a} have shown that
the pulling geometry has a pronounced effect on the unfolding free
energy landscape. The question is whether one can describe this phenomenon
theoretically. The role of non-native interactions in mechanical unfolding of proteins remains largely unknown. It is well known that an external
force increases folding barriers, making the configuration sampling difficult.
A natural question is whether one can develop an efficient method
to overcome this problem. Such a method would be highly useful
for calculating thermodynamic quantities of a biomolecule subjected to an
external mechanical force.
In this thesis we address the following questions.
\begin{enumerate}
\item
We have
studied the folding mechanism of the protein domain hbSBD
(PDB ID: 1ZWV) of the mammalian
mitochondrial branched-chain $\alpha$-ketoacid dehydrogenase
(BCKD) complex in detail, using Langevin simulation and CD experiments.
Our results support its two-state behavior.
\item
The cooperativity of the denaturation transition of proteins was investigated
with the help of lattice as well as off-lattice models. Our studies
reveal that the sharpness of this transition increases as the number of amino acids grows. The corresponding scaling behavior is governed by a universal
critical exponent.
\item
It was shown that refolding pathways of single
$\alpha\beta$-protein ubiquitin (Ub) depend on what end is anchored to
the surface. Namely, the fixation of the N-terminal changes refolding
pathways but anchoring the C-terminal leaves them unchanged.
Interestingly, the end fixation has no effect on multi-domain Ub.
\item
The FEL of Ub and the fourth domain of {\em Dictyostelium discoideum}
filamin (DDFLN4) was deciphered.
We have studied the effect of pulling direction on the FEL of Ub.
In agreement with the experiments, pulling at Lys48 and C-terminal
increases the distance between the NS and TS about
two-fold compared to the case when the force is applied to two termini.
\item
A new force replica exchange (RE) method was developed for efficient configuration
sampling of biomolecules pulled by an external mechanical force. Contrary to
the standard temperature RE, the exchange is carried out between different
forces (replicas). Our method was successfully applied to study thermodynamics of
a three-domain Ub.
\item
Using the Go modeling and all-atom models with explicit water, we have studied
the mechanical unfolding mechanism of DDFLN4 in detail. We predict that, contrary to the experiments
of the Rief group \cite{Schwaiger_NSB04}, an additional unfolding peak
would occur at the end-to-end extension $\Delta R \approx 1.5$ nm in the
force-extension curve. Our study reveals the important role of non-native interactions, which are responsible for a peak located at $\Delta R \approx 22$ nm. This peak cannot be captured by the Go models in which the non-native
interactions are neglected. Our finding may stimulate further experimental and theoretical studies on this protein.
\end{enumerate}
My thesis is organized as follows:
Chapter 2 presents basic concepts about proteins.
Experimental and theoretical tools for studying protein folding and unfolding are discussed in Chapter 3.
Our theoretical results on the size dependence of the cooperativity
index which characterizes the sharpness of the melting transition are
provided in Chapter 4.
Chapter 5 is devoted to the simulation of the hbSBD domain using the
Go-modeling. Our new force RE and its application to a three-domain Ub
are presented in Chapter 6. In Chapters 7 and 8 I present results concerning refolding under quenched force and unfolding of ubiquitin and its trimer. Both mechanical and thermal unfolding pathways will be discussed.
The last Chapters 9 and 10 discuss the results of all-atom molecular dynamics
and Go simulations for mechanical unfolding of the protein DDFLN4.
The results presented in this thesis are based on the following works:
\begin{enumerate}
\item
M. Kouza, C.-F. Chang, S. Hayryan, T.-H. Yu, M. S. Li, T.-H. Huang,
and C.-K. Hu,
Biophysical Journal {\bf 89}, 3353 (2005).
\item
M. Kouza, M. S. Li, E. P. O'Brien Jr., C.-K. Hu, and D. Thirumalai,
Journal of Physical Chemistry A \textbf{110}, 671 (2006)
\item
M. S. Li, M. Kouza, and C.-K. Hu,
Biophysical Journal {\bf 92}, 547 (2007)
\item
M. Kouza, C.-K. Hu and M. S. Li,
Journal of Chemical Physics {\bf 128}, 045103 (2008).
\item
M. S. Li and M. Kouza,
Dependence of protein mechanical unfolding pathways on pulling speeds,
submitted for publication.
\item
M. Kouza, and M. S. Li, Protein mechanical unfolding: importance of non-native interactions,
submitted for publication.
\end{enumerate}
\newpage
\begin{center} \section{Basic concepts} \end{center}
\subsection {What is protein?}
The word ``protein'', which comes from Greek, means ``of primary importance''. As mentioned above,
proteins play a crucial role in living organisms.
Our muscles, organs, hormones, antibodies and enzymes are
made up of proteins. Proteins constitute about 50\% of the dry weight of cells.
Proteins act as mediators in the process
by which genetic information moves around the cell or, in other words, is transmitted
from parents to children (Fig. \ref{mediator}). Genes, composed of DNA, carry the genetic
code and are the basic units of heredity.
Our various characteristics, such as the color of hair,
eyes and skin, are determined through very complicated processes.
In brief, at first a linear strand of DNA
in a gene is transcribed to mRNA and this information is then ``translated'' into a
protein sequence. Afterwards proteins start to fold up into biologically functional
three-dimensional structures, such as various pigments, enzymes and hormones. One protein is responsible for skin color, another
one for hair color. Hemoglobin gives the color of our blood and carries out transport
functions, etc. Therefore, proteins perform a lot of diverse
functions, and understanding the mechanisms of their
folding/unfolding is essential to know how a living body works.
\begin{figure}[!hbtp]
\epsfxsize=4.5in
\vspace{5 mm}
\centerline{\epsffile{./Figs/Fig1.eps}}
\linespread{0.8}
\caption{The connection between genetic information, DNA and protein.
This image and the rest of molecular graphics in this dissertation were made using
VMD \cite{VMD}, xmgrace, xfig and gimp software.}
\label{mediator}
\vspace{5 mm}
\end{figure}
The number of proteins is huge. The protein data bank (http://www.rcsb.org) contains
about 54500 protein entries (as of November 2008) and this number keeps growing
rapidly.
Proteins are complex compounds that are typically constructed from one
set of 20 amino acids. Each amino
acid has an amino end ( $-NH_2$) and an acid end
(carboxylic group -COOH). In the middle of the amino acid there is an alpha carbon
to which a hydrogen atom and one of 20 different side groups are attached
(Fig. \ref{peptbond}a).
The structure of the side group determines which of the 20 amino acids we have.
The simplest amino acid is glycine, which has only a single
hydrogen atom in its side group. Other amino acids have more
complicated side groups, which can contain carbon, hydrogen, oxygen, nitrogen
or sulfur (e.g., Fig. \ref{peptbond}b).
Amino acids are denoted either by one letter or by three letters.
Phenylalanine, for example, is labeled as Phe or F.
There are several ways to classify amino acids. Here we divide them
into four groups based on their interactions
with water, their natural solvent.
These groups are:
\begin{enumerate}
\item \label{en1} Alanine (Ala/A), Isoleucine (Ile/I), Leucine (Leu/L), Methionine (Met/M),
Phenylalanine (Phe/F), Proline (Pro/P),
Tryptophan (Trp/W), Valine (Val/V).
\item \label{en2} Asparagine (Asn/N), Cysteine (Cys/C), Glutamine (Gln/Q), Glycine (Gly/G),
Serine (Ser/S), Threonine (Thr/T),
Tyrosine (Tyr/Y).
\item \label{en3} Arginine (Arg/R), Histidine (His/H), Lysine (Lys/K).
\item \label{en4} Aspartic acid (Asp/D), Glutamic acid (Glu/E).
\end{enumerate}
\begin{figure}[!hbtp]
\epsfxsize=4.2in
\vspace{5 mm}
\centerline{\epsffile{./Figs/peptide_bond2.eps}}
\linespread{0.8}
\caption{ (a) Components of an amino acid: C - central carbon
atom, H - hydrogen atom, $H_{3}N$ - amino group, $COO^{-}$ -
carboxyl group, R - radical group.
(b) Examples of three amino acids, which shows the differences in radical groups.
(c) Formation of a peptide bond. The carboxyl group of amino acid
1 is linked to the adjacent amino group of amino acid 2.} \label{peptbond}
\vspace{5 mm}
\end{figure}
Here the three- and one-letter notations of amino acids are given in brackets.
Group \ref{en1} is made of nonpolar hydrophobic residues. The three other groups are made of
hydrophilic residues. From an electrostatic point of view, groups
\ref{en2}, \ref{en3} and \ref{en4} contain polar neutral, positively charged and negatively charged residues,
respectively.
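For readers who wish to work with sequences programmatically, the classification above can be encoded as a small lookup table. The Python sketch below (names of our own choosing) uses the one-letter codes.

```python
# hydropathy classes from the four groups listed above (one-letter codes)
GROUPS = {
    "hydrophobic":        set("AILMFPWV"),   # group 1
    "polar neutral":      set("NCQGSTY"),    # group 2
    "positively charged": set("RHK"),        # group 3
    "negatively charged": set("DE"),         # group 4
}

def classify(code):
    """Return the class of a one-letter amino-acid code, e.g. 'F' -> hydrophobic."""
    for name, members in GROUPS.items():
        if code.upper() in members:
            return name
    raise ValueError("unknown residue code: %r" % code)

assert classify("F") == "hydrophobic"          # phenylalanine (Phe/F)
assert sum(len(m) for m in GROUPS.values()) == 20   # all 20 amino acids covered
```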
In order to make proteins, amino acids link together in long chains
by a chemical reaction in which a water molecule is released and a
peptide bond is created (Fig. \ref{peptbond}c).
Hence, a protein is a chain of amino acids connected via peptide bonds, with a free amino group at
one end and a carboxyl group at the other.
The sequence of linked amino acids is known
as the {\bf primary structure} of a protein (Fig. \ref{str}a). The structure is
stabilized by hydrogen
bonding between the amine and carboxyl groups.
Pauling and Corey\cite{Pauling_PNAS51a,Pauling_PNAS51b} theoretically predicted that proteins should
exhibit some local ordering, now known as {\bf secondary structures}.
Based on energy considerations, they showed that
there are certain
regular structures which maximize the number of hydrogen bonds (HBs)
between the C-O and the H-N groups of the backbone.
Depending on angles
between the carbon and the nitrogen, and the carbon and carboxylic group, the
secondary structures may be either alpha-helices or beta-sheets
(Fig. \ref{str}b). Helices are one-dimensional structures, where the HBs are aligned
with the helix axis. There are 3.6 amino acids per helix
turn, and the typical size of a helix is 5 turns. $\beta$-strands
are quasi two-dimensional structures. The H-bonds are
perpendicular to the strands. A typical $\beta$-sheet has a length of
8 amino acids, and consists of approximately 3 strands.
In addition to helices and beta strands, secondary structures may be
turns or loops.
The third type
of protein structure is called {\bf tertiary structure} (Fig. \ref{str}c). It is an overall
topology of the folded polypeptide chain. A variety of bonding interactions
between the side chains of the amino acids determines this structure.
Finally, the {\bf quaternary structure} (Fig. \ref{str}d) involves
multiple folded protein molecules as a multi-subunit complex.
\begin{figure}
\epsfxsize=5.5in
\vspace{5 mm}
\centerline{\epsffile{./Figs/protein_structures_.eps}}
\linespread{0.8}
\caption{Levels of protein structures. (a) An example of primary structures
or sequences. (b) Alpha helix and beta strand are main secondary structures. The green dashed lines shows HBs.
(c) Tertiary structure of protein (PDB ID: 2CGP).
(d) Quaternary structure from two domains (PDB ID: 1CGP).
}
\label{str}
\vspace{5 mm}
\end{figure}
\subsection{The possible states of proteins}
Although it was long believed that proteins are either denatured or native,
it now seems well established that they may exist in at least three different phases.
The following classification is widely accepted:
\begin{enumerate}
\item {\bf Native state}\\
In this state, the protein is said to be folded and has its full biological activity. The three-dimensional
native structure is well-defined and unique, having a compact and globular shape. Basically, the conformational entropy of the NS is zero.
\item {\bf Denatured states}\\
These states of proteins lack biological activity.
Depending on external conditions, there
exist at least two denatured phases:
\begin{enumerate}
\item {\it Coil state} \\
In this state, a denatured protein has no definite shape.
Although there might be local aggregation phenomena, it is fairly well
described as the swollen phase of a homopolymer in a good solvent.
The coil state has a large conformational entropy.
\item {\it Molten globule} \\
At low pH (acidic conditions), some proteins may exist in a compact
state, named ``molten globule'' \cite{Ptitsyn_book}. This state is compact
having a
globular shape, but it does not have a well defined structure and bears
strong resemblance to the collapsed phase of a homopolymer in a
bad solvent. It is slightly less compact
than the NS, and has finite conformational
entropy.
\end{enumerate}
\end{enumerate}
{\em In vitro}, the transition between the various phases is controlled by
temperature, pH, or denaturant agents such as urea or guanidinium chloride.
\subsection{Protein folding}
{\em Protein folding is a process in which a protein reaches the NS
starting from denatured states.}
Understanding this complicated process has attracted attention of researchers
for over forty years.
Although a number of issues remain unsolved, several universal features have been obtained.
Here we briefly discuss the state of the art of this field.
\subsubsection{Experimental techniques}
To determine protein structures one mainly uses
the X-ray crystallography \cite{Kendrew_Nature60} and NMR \cite{Bax_JBNMR97}.
About 85\% of the structures that have been deposited in the Protein Data Bank were determined
by the X-ray diffraction method.
NMR generally gives a
lower resolution than X-ray crystallography and it is limited to relatively
small biomolecules. However, this method has the advantage that it does not require crystallization
and permits the study of proteins in their natural environments.
Since proteins fold within a few microseconds to seconds, the folding process
can be studied using fluorescence, circular dichroism (CD), {\em etc.}
\cite{Nolting_book}. CD, which is directly related to this thesis,
is based on the differential absorption of left- and
right-handed circularly polarized light.
It allows for the determination of secondary structures and also of changes
in protein structure,
providing the possibility to observe the folding/unfolding transition
experimentally.
As the fraction of the folded conformation $f_N$ depends on the ellipticity
$\theta$ linearly (see Eq. \ref{theta_fN_eq} below),
one can obtain it as a function of
$T$ or chemical denaturant by measuring $\theta$.
\subsubsection{ Thermodynamics of folding}
Protein folding is a spontaneous process which obeys the main
thermodynamical principles. Considering a protein and solvent as an isolated
system, in accord with the second law of thermodynamics,
their total entropy has the tendency to increase,
$\Delta S_{prot}+\Delta S_{sol} \ge 0$. Here $\Delta S_{prot}$ and
$\Delta S_{sol}$ are the changes of the protein and solvent entropy.
If a protein absorbs heat $Q$ from the environment, then
$\Delta S_{sol}=-\frac{Q}{T}$ ($-Q$ is the heat obtained by the solvent
from the protein). Therefore, we have $Q - T\Delta S_{prot} \le 0$.
In the isobaric process, $\Delta H=Q$ as the system does not perform work,
where $H$ is the enthalpy. Defining $\Delta G = \Delta H - T\Delta S_{prot}$,
we obtain
\begin{equation}
\Delta G = \Delta H - T\Delta S_{prot} \le 0.
\label{folding_thermo_eq}
\end{equation}
In the isothermal process ($T$=const), $G$ in Eq. (\ref{folding_thermo_eq})
is the
Gibbs free energy of protein ($G=H-TS_{prot}$). Thus the folding
proceeds in such a way that the Gibbs free energy decreases.
This is reasonable because the system always tries to get a state with
minimal free energy.
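A toy numerical illustration may make the signs concrete. In the Python sketch below the values of $\Delta H$ and $\Delta S_{prot}$ are assumed, chosen only so that the resulting stability falls within the $5-20\,k_BT$ range quoted earlier; they are not measured quantities.

```python
# Illustrative (assumed) numbers for a small two-state protein; only the
# signs and relative magnitudes matter here.
T = 300.0              # K
dH = -220.0e3          # J/mol: enthalpy change on folding (assumed)
dS_prot = -600.0       # J/(mol K): conformational entropy change (assumed)

dG = dH - T * dS_prot  # Gibbs free energy change of folding
assert dG < 0          # the folding proceeds spontaneously, Delta G <= 0

R = 8.314              # J/(mol K)
stability_in_kT = -dG / (R * T)
print(round(stability_in_kT, 1))   # prints 16.0 -- inside the 5-20 k_B T window
```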
As the system progresses to the NS, $\Delta S_{prot}$ should decrease,
disfavoring the condition (\ref{folding_thermo_eq}).
However, this condition can be fulfilled, provided $\Delta H$ decreases.
One can show that this is the case
by taking into account the hydrophobic effect, which increases the solvent
entropy (or decreases $H$) by burying hydrophobic residues
in the core region \cite{Fersht_book}.
Thus, from the thermodynamics point of view the protein folding process
is governed by the interplay of two conflicting factors:
(a) the decrease of configurational entropy hampers the folding
and (b) the increase of the solvent entropy speeds it up.
\subsubsection{Levinthal's paradox and funnel picture of folding}
Let us consider a protein which has only 100 amino acids.
Using a trivial model where there are just two possible orientations per
residue, we obtain $2^{100}$
possible conformational states. If one assumes that a jump from
one conformation to another requires
100 picoseconds, then it would take about $4\times10^{12}$ years
to check all the conformations before acquiring
the NS. However, in reality, typical folding times range from
microseconds to seconds. It is quite surprising that
proteins are designed in such a way that they can find the correct NS in a very
short time. This puzzle is known as Levinthal's paradox \cite{Levinthal_JCP68}.
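The arithmetic behind this estimate takes only a few lines; the exact number of years depends, of course, on the assumed number of orientations per residue and on the jump time.

```python
# back-of-the-envelope arithmetic behind Levinthal's paradox:
# 2 orientations per residue, 100 residues, 100 ps per conformational jump
conformations = 2 ** 100                 # ~1.3e30 states
jump_time = 100e-12                      # seconds per jump
seconds_per_year = 3.15e7

blind_search_years = conformations * jump_time / seconds_per_year
print("%.1e" % blind_search_years)       # prints 4.0e+12 -- astronomically long
```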
\begin{wrapfigure}{r}{0.42\textwidth}
\includegraphics[width=0.40\textwidth]{./Figs/freeenergy_.eps}
\hfill\begin{minipage}{6.8 cm}
\linespread{0.8}
\caption{(a) Flat energy landscape, which
corresponds to blind search for the NS.
(b) Funnel-like FEL proposed by Wolynes and co-workers. \label{free_energy}}
\end{minipage}
\end{wrapfigure}
To resolve this paradox, Wolynes
and coworkers \cite{Leopold_PNAS92,Onuchic_COSB04} proposed
the theory based on the folding FEL.
According to their theory, Levinthal's scenario, or the {\em old view},
corresponds to a random search for the NS on a flat FEL (Fig. \ref{free_energy}a),
traveling along
a single deterministic pathway.
Such a blind search would lead to astronomically large folding times.
Instead of the old view, the {\em new view} states that the FEL
has a ``funnel''-like shape (Fig. \ref{free_energy}b) and that folding pathways are multiple. If some pathways get stuck
somewhere, then other pathways would lead to the NS. In the funnel one can
observe a bottleneck region which corresponds to an ensemble of conformations
of the TS. By whatever pathway a protein folds, it has to overcome
the TS (rate-limiting step). The folding on a rugged FEL is slower
than on the smooth one due to kinetic traps.
It should be noted that, very likely, the funnel FEL occurs only in
systems which satisfy the principle of {\em minimal frustration}
\cite{Bryngelson_PNAS87}. Presumably, Mother Nature selects only those sequences that
fulfill this principle.
By now, the funnel theory has been confirmed both theoretically
\cite{Clementi_JMB00,Koga_JMB01}
and experimentally \cite{Jin_Structure03}, and it is widely accepted
in the scientific community.
\subsubsection{Folding mechanisms}
The funnel theory gives a global picture about folding. In this section we
are interested in pathways navigated by an ensemble of denatured
states of a polypeptide chain en route to the native conformation. The quest to
answer this question has led to discovering diverse mechanisms by which
proteins fold.
\paragraph{Diffusion-collision mechanism.}
This is one of the earliest mechanisms, in which the folding
pathway is biased
\cite{Kim_ARB90}.
Local secondary structures are assumed to
form independently, then they diffuse until a collision in which a
native tertiary structure is formed.\\
\paragraph{Hydrophobic-collapse mechanism.}
Here one assumes that a protein collapses
quickly around hydrophobic residues forming an intermediate state (IS)
\cite{Ptitsyn_TBCS95}.
After
that, it rearranges in such a way that secondary structures gradually
appear.\\
\paragraph{Nucleation-collapse mechanism.}
This was suggested by Wetlaufer a long time ago \cite{Wetlaufer_PNAS73}
to explain the efficient folding of proteins.
In this mechanism several
neighboring residues are suggested to
form a secondary structure as a folding nucleus.
Starting from this nucleus, occurrence of secondary structures propagates
to remaining amino acids leading to formation of the native conformation.
In other words, after formation of a well-defined nucleus, a protein
collapses quickly to the NS. Thus, this mechanism with
a single nucleus probably
applies to those proteins which fold fast and without intermediates.
Contrary to the old picture of single nucleus
\cite{Wetlaufer_PNAS73,Shakhnovich_Nature96}, simulations \cite{Guo_FD97}
and experiments \cite{Viguera_NSB96}
showed that there are several nucleation regions.
The contacts between the residues in these regions occur with
varying probability in the TS. This observation allows one
to propose the multiple folding nuclei mechanism, which asserts that, in the
folding nuclei, there is a distribution of contacts, with some occurring
with higher probability than others \cite{Klimov_JMB98}.
The rationale for this mechanism is that sizes of nuclei are small
(typically 10--15 residues
\cite{Guo_Biopolymer95,Wolynes_PNAS97}) and the linear density of hydrophobic
amino acids along a chain is roughly constant.
The nucleation-collapse mechanism with multiple nuclei is also referred to
as the {\em nucleation-condensation} mechanism.\\
\paragraph{Kinetic partitioning mechanism.}
It should be noted that topological frustration is an inherent property
of all polypeptide chains. It is a direct consequence of the polymeric nature
of proteins, as well as of the competing interactions (hydrophobic residues,
which prefer the formation of compact structures, and hydrophilic residues,
which are better accommodated by extended conformations). It is for this reason
that an ideal protein, which has complete compatibility between local and
nonlocal interactions, does not exist, as was first recognized by Go
\cite{Go_ARBB83}.
The basic consequences of the complex free energy surface arising from
topological frustration lead naturally to the kinetic partitioning mechanism
\cite{Thirumalai_TCA97}. The main idea of this mechanism is as follows.
Imagine an ensemble of denatured molecules in search of the native conformation.
A fraction of the molecules, the partition factor $\Phi$, would reach the NS
rapidly without being trapped in the low energy minima.
The remaining fraction (1-$\Phi$) would be trapped in one or more minima
and reach the native basin by activated transitions on longer time scales
\cite{Thirumalai_JPI95}. Structures of trap-minima are intermediates
that slow the folding process. So, the fraction of molecules $\Phi$ that reaches
the native basin rapidly follows a two-state scenario without population
of any intermediates. A detailed kinetic analysis of the remaining
fraction of molecules (1-$\Phi$) showed that they reach the NS
through a three-stage multipathway mechanism \cite{Veitshans_FD97}.
Experiments on hen-egg lysozyme \cite{Thirumalai_TCA97}, e.g., seem to support
the kinetic partitioning mechanism, which is valid for
folding via intermediates.
\subsubsection{Two- and multi-state folding}
Folding pathways and rates are dictated by the functions of proteins.
They should not fold too fast, as this may harm cells which continuously
synthesize chains. Presumably, sequences were selected by evolution
in such a way that there is neither a universal
nor a most efficient mechanism for
all of them. Instead, the folding process may share features of
different mechanisms mentioned above. For example, the pool of molecules on
the fast track in the kinetic partitioning mechanism reaches the
native basin through the nucleation collapse mechanism.
Regardless of whether the folding mechanism is universal or not, it is useful to divide
proteins into two groups. One of them includes two-state
molecules that fold
without intermediates, i.e.
they get folded after crossing a single
TS. Proteins which fold via intermediates belong to the
other group. These multi-state proteins have more than one TS.
The list of two- and three-state folders
is available in Ref. \cite{Jackson_FD98}.
Recently, it was suggested that folding may proceed in a downhill manner
without any TS \cite{Munoz_Science02}. This problem is still under debate.
\subsection{Mechanical unfolding of protein}
The last ten years have witnessed intense activity in single-molecule force
spectroscopy (SMFS) experiments detecting inter- and
intramolecular forces of biological systems to understand
their functions and structures. Much of the research has been focused
on the elastic properties of proteins, DNA, and RNA, i.e., their response to an
external force, following the seminal papers by Rief {\em et al.} \cite{Rief_Science97},
and Tskhovrebova {\em et al.} \cite{Tskhovrebova_Nature97}.
The main advantage of the SMFS is its ability
to separate out the fluctuations of individual molecules from the
ensemble average behavior observed in traditional bulk biochemical
experiments. Thus, using the SMFS one can
measure detailed distributions, describing certain molecular properties
(for example, the distribution of unfolding forces of biomolecules
\cite{Rief_Science97}) and observe possible intermediates in
chemical reactions. This technique can be used to decipher the unfolding
FEL of biomolecules \cite{Bustamante_ARBiochem_04}.
The SMFS studies have
provided unexpected insights into the strength of forces driving biological
processes, as well as identified various biological interactions which lead
to the mechanical stability of biological structures.
\subsubsection{Atomic force microscopy}
There are a number of techniques for manipulating single molecules:
\begin{wrapfigure}{r}{0.48\textwidth}\centering
\includegraphics[width=0.42\textwidth]{./Figs/exp_.eps}
\hfill \begin{minipage}{7.4 cm}
\linespread{0.8}
\caption{ (a) Schematic representation of AFM technique. (b) Cartoon for the spring constant of the cantilever.\label{AFM_fig}}
\end{minipage}
\end{wrapfigure}
the atomic force microscopy (AFM) \cite{Binnig_PRL86},
the laser optical tweezer (LOT), magnetic tweezers,
the bio-membrane force probe, {\em etc}.
In this section we briefly discuss the AFM
which is used to probe the mechanical response of proteins
under external force.
In AFM, one terminus
of a biomolecule is anchored to a surface and the other one
to a force sensor (Fig. \ref{AFM_fig}a).
The molecule is stretched by increasing the distance between the surface
and the force sensor, which is a micron-sized cantilever.
The force measured in experiments
is proportional to the
displacement of the cantilever.
If the stiffness of the cantilever $k$ is known,
then a biomolecule experiences the force $f = k\delta x$,
where $\delta x$ is the cantilever bending, which is detected by a laser.
In general, the resulting force versus extension curve is used in combination
with theories for obtaining
mechanical properties of biomolecules.
The spring constant of AFM cantilever tip is typically
$k = 10 - 1000$ pN/nm. The value of $k$ and thermal
fluctuations define spatial and force resolution in AFM experiments
because when the cantilever is kept at a fixed position the force
acting on the tip and the distance between the substrate and the tip fluctuate. The respective fluctuations are
\begin{equation}
<\delta x^2> = k_BT/k,
\label{fluc_dx_eq}
\end{equation}
and
\begin{equation}
<\delta f^2> = kk_BT.
\label{fluc_f_eq}
\end{equation}
Here $k_B$ is the Boltzmann constant.
For $k=10$ pN/nm and the room temperature
$k_BT \approx 4$ pN nm we have $\sqrt{<\delta x^2>} \approx 0.6$ nm
and $\sqrt{<\delta f^2>} \approx 6$ pN. Thus, AFM can probe
unfolding of proteins which have unfolding force
of $\sim 100$ pN, but it is not precise enough for studying nucleic acids and molecular motors, as these biomolecules have lower mechanical resistance.
For these biomolecules one can use, e.g.,
the LOT, which has the resolution $\sqrt{<\delta f^2>} \sim 0.1$ pN.
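These resolution estimates follow directly from Eqs. (\ref{fluc_dx_eq}) and (\ref{fluc_f_eq}); a quick numerical check in pN and nm units, with $k_BT \approx 4$ pN\,nm at room temperature:

```python
import math

kBT = 4.0      # thermal energy at room temperature, pN*nm
k = 10.0       # cantilever stiffness, pN/nm

dx = math.sqrt(kBT / k)   # spatial resolution, Eq. (fluc_dx_eq), in nm
df = math.sqrt(k * kBT)   # force resolution, Eq. (fluc_f_eq), in pN
```

Note the trade-off: a stiffer cantilever improves the spatial resolution at the expense of the force resolution, which is why softer probes such as the LOT are preferred for weak biomolecules.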
\subsubsection{Mechanical resistance of proteins}
Proteins are pulled either by a constant force, $f$=const, or by
a force ramped linearly with time, $f=kvt$, where $k$ is the cantilever
stiffness, and $v$ is a pulling speed. In AFM experiments typical
$v \sim 100$ nm/s is used \cite{Rief_Science97}.
Remarkably, the force-extension curve obtained in the constant
rate pulling experiments has a saw-tooth shape due to domain-by-domain unfolding
(Fig. \ref{force_ext_Sci97_fig}a).
\begin{figure}[!htbp]
\vspace{7 mm}
\epsfxsize=6.3in
\centerline{\epsffile{./Figs/force_ext_Sci97_concept_.eps}}
\linespread{0.8}
\caption{(a) Force-extension curve obtained by stretching
of a Ig8 titin fragment. Each peak corresponds to unfolding of a
single domain. Smooth curves are fits to the worm-like chain model. Taken from
Ref. \cite{Rief_Science97}. (b) Sketch of dependence of the force on
the extension for a spring, polymer and proteins.
}
\label{force_ext_Sci97_fig}
\vspace{7 mm}
\end{figure}
Here each peak corresponds to unfolding
of one domain.
Grubmuller {\em et al.} \cite{Grubmuller_Science96} and Schulten {\em et al.}
\cite{Izrailev_BJ97} were the first to reproduce this remarkable
result by steered MD (SMD) simulations. The saw-tooth shape is not trivial
if we recall that
a simple spring displays
the linear dependence of $f$ on extension obeying Hooke's law, while for polymers one has
a monotonic dependence which may be fitted to the worm-like chain (WLC)
model \cite{Marko_Macromolecules95} (Fig. \ref{force_ext_Sci97_fig}b).
A non-monotonic behavior is clearly caused by the complexity of the
native topology of proteins.
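The WLC fits mentioned above are commonly done with the Marko-Siggia interpolation formula; the sketch below (the persistence length $P$ and contour length $L$ are illustrative values, not parameters taken from any fit in this work) reproduces the monotonic rise of Fig. \ref{force_ext_Sci97_fig}b:

```python
def wlc_force(x, L, P, kBT=4.1):
    """Marko-Siggia worm-like-chain interpolation formula.

    Returns the entropic force (pN) at end-to-end extension x (nm)
    for contour length L (nm) and persistence length P (nm);
    kBT is the thermal energy in pN*nm.
    """
    z = x / L
    return (kBT / P) * (0.25 / (1.0 - z) ** 2 - 0.25 + z)

# The force rises monotonically and diverges as x approaches L:
forces = [wlc_force(x, L=30.0, P=0.4) for x in (5.0, 15.0, 25.0)]
```

Fitting each rising segment of the experimental saw-tooth with this formula yields the contour-length increment released by each domain unfolding event.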
To characterize protein mechanical stability,
one uses the unfolding force $f_u$, which
is identified as the maximum force, $f_{max}$, in
the force-extension profile, $f_u \equiv f_{max}$. If this profile has
several local maxima, then we choose the largest one. Note that
$f_u$ depends on pulling speed logarithmically,
$f_u \sim \ln v$ \cite{Evans_BJ97}.
Most of the proteins studied so far display varying
degrees of mechanical resistance.
Accumulated experimental and
theoretical results \cite{Sulkowska_BJ08,MSLi_BJ07a}
have revealed a number of factors that govern mechanical resistance.
As a consequence of the local nature of applied force,
the type of secondary structural motif is thought to be important, with
$\beta$-sheet structures being more mechanically resistant than all
$\alpha$-helix ones \cite{MSLi_BJ07a}.
For example, $\beta$-protein I27 and
$\alpha/\beta$-protein Ub have $f_u \approx 200$ pN which is
considerably higher than $f_u \approx 30$ pN for the purely $\alpha$-helical spectrin
\cite{Rief_JMB99}.
Since the secondary
structure content is closely related to the contact order \cite{Plaxco_JMB98},
$f_u$ was shown to depend on the latter linearly \cite{MSLi_BJ07a}.
In addition to secondary structure, tertiary structure may influence
the mechanical resistance. The 24-domain ankyrin, e.g., is mechanically more
stable than single- or six-domain ones \cite{Lee_Nature06}.
The mechanical stability depends on pulling geometry
\cite{Dietz_PNAS06}.
The points of application of the force to a protein
and
the pulling direction do matter. If a force is applied parallel to the hydrogen bonds, HBs (unzipping), then
$\beta$-proteins are less stable than the case where the force direction
is orthogonal to them (shearing).
The mechanical stability
can be affected by ligand binding
\cite{Cao_PNAS07}
and disulphide bond formation
\cite{Wiita_Nature07}. Finally, note that the mechanical resistance of proteins
can be captured not only by all-atom SMD \cite{Sotomayor_Science07}, but
also by simple Go models \cite{Sulkowska_BJ08,MSLi_BJ07a}.
This is because the mechanical unfolding is mainly governed by the native
topology and native topology-based Go models suffice. However, in this thesis, we will show that in some cases non-native interactions cannot be neglected.
\subsubsection{Construction of unfolding free energy landscape by SMFS}
Deciphering FEL is a difficult task as it is a function of many variables.
Usually, one projects it into one- or two-dimensional space. The validity of
such an approximate mapping is not {\em a priori}
clear and experiments should be used
to justify this. In the mechanical unfolding case, however, the end-to-end
extension $\Delta R$ can serve as a good reaction coordinate and FEL can be mapped
into this dimension. Thus, considering FEL as a function of $\Delta R$,
one can estimate the distance between the NS and TS,
$x_u$, using either the dependencies of unfolding rates on the
external force \cite{MSLi_BJ07}
or the dependencies of $f$ on pulling speed $v$
\cite{Carrion-Vasquez_PNAS99}. Unfolding barriers may be also extracted
with the help of the non-linear kinetic theory
\cite{Dudko_PRL06} (see below).
Experiments and simulations \cite{MSLi_BJ07a} showed that $x_u$ varies
between 2--15 \AA, depending on the secondary structure content or
the contact order (CO). The smaller the CO, the larger is $x_u$. It is remarkable
that $x_u$ and unfolding force $f_u$ are mutually related. Namely,
using a simple network model, Dietz and Rief \cite{Dietz_PRL08} argued that
$x_uf_u$ $\approx$ 50 pN nm for many proteins.
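Taking the Dietz-Rief relation at face value gives a quick consistency check against the $x_u$ range quoted above, using the illustrative unfolding forces mentioned earlier in this section:

```python
# x_u * f_u ~ 50 pN*nm implies that mechanically strong proteins have short
# unfolding distances and weak ones have long distances.
x_u = {f_u: 10.0 * 50.0 / f_u for f_u in (200.0, 30.0)}   # x_u in Angstrom

# f_u ~ 200 pN (I27, Ub) gives x_u ~ 2.5 A, while f_u ~ 30 pN (spectrin)
# gives x_u ~ 17 A, roughly bracketing the 2-15 A range quoted above.
```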
\newpage
\begin{center} \section{Modeling, Computational tools and theoretical background} \end{center}
\subsection{Modeling of Proteins}
In this section we briefly discuss the main models used to study
protein dynamics.
\subsubsection{Lattice models}
Over roughly the last fifteen years, considerable insight into
the thermodynamics and kinetics of protein
folding has been gained due to simple lattice models
\cite{Dill_ProteinSci95, Kolinski_book96}.
Here
amino acids are
represented by single beads which are located at vertices
of a cubic lattice. The most important difference
from homopolymer models is that amino acid sequences and the role of
contacts should be taken into account. Due to the constraint that a contact
is formed if two residues are nearest neighbors, but not
successive in sequence, a contact between residues $i$ and $j$ is
allowed provided $|i-j| \ge 3$. In the simple Go modeling \cite{Go_ARBB83},
the interaction between two beads which form a native contact is
assumed to be attractive, while the non-native interaction is repulsive.
This energy choice guarantees that the native conformation has the lowest
energy.
In more realistic models specific interactions between amino acids are taken into account.
Several kinds of potentials
\cite{Miyazawa_Macromolecules85,Kolinski_JCP93,Betancourt_ProSci99} are used to
describe these interactions.
The next natural step to mimic more realistic features of
proteins such as a dense core packing
is to include the rotamer degrees of freedom \cite{Kolinski_Proteins96}. One of
the simplest models is a cubic lattice of a backbone sequence
of $N$ beads,
to which a side bead representing a side chain is attached
\cite{Bromberg_ProteinSci94} (Fig. \ref{models}).
The system has in total 2$N$ beads. Here we consider a Go model, where the energy of a conformation is \cite{Kouza_JPCA06}
\begin{eqnarray}
E \; = \; \epsilon _{bb} \sum_{i=1,j>i+1}^{N} \,
\delta _{r_{ij}^{bb},a}
+ \epsilon _{bs} \sum_{i=1,j\neq i}^{N} \, \delta _{r_{ij}^{bs},a}
+ \epsilon _{ss} \sum_{i=1,j>i}^{N} \, \delta _{r_{ij}^{ss},a} \; ,
\label{energy_eq_lattice}
\end{eqnarray}
where $\epsilon _{bb}, \epsilon _{bs}$ and $\epsilon _{ss}$ are
backbone-backbone(BB-BB), backbone-side chain
(BB-SC) and side chain-side chain (SC-SC) contact energies, respectively.
The distances $r_{ij}^{bb}, r_{ij}^{bs}$ and $r_{ij}^{ss}$ are those between
BB-BB, BB-SC and SC-SC beads, respectively.
The contact energies $\epsilon _{bb}, \epsilon _{bs}$
and $\epsilon _{ss}$ are taken to be $-1$ (in units of $k_BT$) for native
and 0 for non-native interactions. The neglect of interactions between residues
that are not in contact in the NS is the approximation used in the Go model.
In order to monitor protein dynamics one usually uses the standard move set
which includes the tail flip,
corner flip,
and crankshaft moves
for backbone beads.
The Metropolis
criterion is applied to accept or reject moves \cite{Kolinski_book96}.
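The Metropolis acceptance step mentioned above can be sketched generically; the energy difference $\Delta E$ of a trial move comes from whatever move set and contact energies the lattice model supplies:

```python
import math
import random

def metropolis_accept(dE, kBT, rng=random):
    """Accept a trial move with probability min(1, exp(-dE/kBT))."""
    if dE <= 0.0:
        return True          # downhill (or neutral) moves are always accepted
    return rng.random() < math.exp(-dE / kBT)
```

Uphill moves are accepted with Boltzmann probability, which guarantees detailed balance and hence sampling of the canonical ensemble.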
While lattice models have been widely used in the protein folding
problem \cite{Kolinski_book96}, they have attracted little attention in mechanical unfolding
simulations \cite{Socci_PNAS99}.
In the present thesis, we employed this model to study the cooperativity of
the folding-unfolding transition.
\begin{figure}
\epsfxsize=5.5in
\vspace{5 mm}
\centerline{\epsffile{./Figs/Go_.eps}}
\linespread{0.8}
\caption{Representation of protein conformation by lattice model with side chain (a), off-lattice C$_\alpha$-Go model (b) and all-atom model (c). }
\label{models}
\vspace{5 mm}
\end{figure}
\subsubsection{Off-lattice coarse-grained Go modeling}
The major shortcoming of lattice models is that beads are confined to
lattice vertices, which does not allow for describing the protein shape
accurately. This can be remedied with the help of off-lattice models in which
beads representing amino acids can occupy any positions (Fig. \ref{models}b).
A number of off-lattice coarse-grained models with realistic
interactions (not Go) between amino acids
have been developed to study the
mechanical resistance of proteins
\cite{Klimov_PNAS00,Kirmizialtin_JCP05}.
However, it is not an easy task to construct such models for
long proteins.
In the pioneering paper \cite{Go_ARBB83} Go introduced a very simple model
in which non-native interactions are ignored. This native topology-based
model turns out to be highly useful in predicting the folding
mechanisms and deciphering the free energy landscapes of two-state proteins
\cite{Takaga_PNAS99,Clementi_JMB00,Koga_JMB01}.
On the other hand, since in mechanical unfolding one stretches
a protein from its native conformation, the unfolding properties are mainly
governed by its native topology.
Therefore, the native-topology-based or Go modeling is suitable
for studying the mechanical unfolding. Various versions of Go models
\cite{Clementi_JMB00,Cieplak_Proteins02,Karanicolas_ProSci02,West_BJ06,Hyeon_Structure06,MSLi_BJ07} have been applied to this problem.
In this thesis we will focus on
the variant of Clementi {\em et al.}
\cite{Clementi_JMB00}.
Here one uses a coarse-grained continuum representation for a protein
in which only the positions of C$_{\alpha}$-carbons are retained.
The interactions between residues are assumed to be Go-like and
the energy of such a model is as follows \cite{Clementi_JMB00}
\begin{eqnarray}
E \; &=& \; \sum_{bonds} K_r (r_i - r_{0i})^2 + \sum_{angles}
K_{\theta} (\theta_i - \theta_{0i})^2 \nonumber \\
&+& \sum_{dihedral} \{ K_{\phi}^{(1)} [1 - \cos (\phi_i -
\phi_{0i})] + K_{\phi}^{(3)} [1 - \cos 3(\phi_i - \phi_{0i})] \}
\nonumber \\
& + &\sum_{i>j-3}^{NC} \epsilon_H \left[ 5\left(
\frac{r_{0ij}}{r_{ij}} \right)^{12} - 6 \left(
\frac{r_{0ij}}{r_{ij}}\right)^{10}\right] + \sum_{i>j-3}^{NNC}
\epsilon_H \left(\frac{C}{r_{ij}}\right)^{12} + E_f
. \label{Hamiltonian}
\end{eqnarray}
Here $\Delta \phi_i=\phi_i - \phi_{0i}$,
$r_{i,i+1}$ is the distance between beads $i$ and $i+1$, $\theta_i$
is the bond angle
between bonds $(i-1)$ and $i$,
and $\phi_i$ is the dihedral angle around the $i$th bond and
$r_{ij}$ is the distance between the $i$th and $j$th residues.
Subscripts ``0'', ``NC'' and ``NNC'' refer to the native
conformation, native contacts and non-native contacts,
respectively. Residues $i$ and $j$ are in native contact if
$r_{0ij}$ is less than a cutoff distance $d_c$ taken to be $d_c =
6.5$ \AA, where $r_{0ij}$ is the distance between the residues in
the native conformation.
The local interactions in Eq. (\ref{Hamiltonian}) comprise the first three terms.
The harmonic term
accounts for chain
connectivity (Fig. \ref{interactions}a), while the second term represents the bond angle potential (Fig. \ref{interactions}b).
The potential for the
dihedral angle degrees of freedom (Fig. \ref{interactions}c) is given by the third term in
Eq. (\ref{Hamiltonian}). The non-local interaction energy between residues that are
separated by at least 3 beads is given by the 10-12 Lennard-Jones potential (Fig. \ref{interactions}e).
A soft sphere repulsive potential
(the fifth term in Eq. \ref{Hamiltonian})
disfavors the formation of non-native contacts.
The last term accounts for the force applied to C and N termini
along the end-to-end
vector $\vec{R}$.
We choose $K_r =
100 \epsilon _H/\AA^2$, $K_{\theta} = 20 \epsilon _H/rad^2,
K_{\phi}^{(1)} = \epsilon _H$, and
$K_{\phi}^{(3)} = 0.5 \epsilon _H$, where $\epsilon_H$ is the
characteristic hydrogen bond energy and $C = 4$ \AA.
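For a single pair of residues, the native 10-12 term and the non-native repulsion of Eq. (\ref{Hamiltonian}) are easy to evaluate; a sketch with distances in \AA\ and energies in units of $\epsilon_H$:

```python
def native_contact_energy(r, r0, eps_H=1.0):
    """10-12 Lennard-Jones term of Eq. (Hamiltonian) for one native contact."""
    s = r0 / r
    return eps_H * (5.0 * s ** 12 - 6.0 * s ** 10)

def nonnative_energy(r, C=4.0, eps_H=1.0):
    """Soft-sphere repulsion for one non-native pair (C = 4 Angstrom)."""
    return eps_H * (C / r) ** 12
```

The native term has its minimum, $-\epsilon_H$, exactly at $r = r_{0ij}$, so the native conformation is the energy ground state by construction.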
\begin{figure}[!hbtp]
\epsfxsize=6.5in
\vspace{0.2in}
\centerline{\epsffile{./Figs/c_.eps}}
\caption{Schematic representation for covalent bonding (a), bond angle interactions (b), proper torsion potential (c), improper dihedral angles (d), long range Van der Waals (e) and electrostatic interactions (f).}
\label{interactions}
\end{figure}
In the constant force simulations the last term in Eq. (\ref{Hamiltonian})
is
\begin{equation}
E_f \; = \; -\vec{f}\cdot\vec{r},
\label{E_f_constant_eq}
\end{equation}
where $\vec{r}$ is the end-to-end vector and $\vec{f}$ is the force applied
either to both termini or to one of them.
In the constant velocity force simulation we fix the N-terminus and pull the
C-terminus by the force
\begin{equation}
f \; = \; k(vt -x),
\label{E_f_velocity_eq}
\end{equation}
where $x$ is the displacement of the pulled atom from its original position
\cite{Lu_BJ98}, and
the pulling direction was chosen along the vector from the fixed atom to the
pulled atom.
In order to mimic AFM experiments (see
section {\em Atomic force microscopy}), throughout this thesis we used $k = K_r = 100 \epsilon _H/\AA^2 \approx 100$ pN/nm, which is of the same order of magnitude as the cantilever stiffness.
\subsubsection{All-atom models}
The intensive theoretical study of
protein folding has been performed with the help of all-atom simulations
\cite{Isralewitz_COSB01,Gao_PCCP06,Sotomayor_Science07}.
All-atom models include the local interactions
and the non-bonded terms. The latter include
the (6-12) Lennard-Jones potential,
the electrostatic interaction,
and the interaction with the environment. The all-atom model with the CHARMM
force field \cite{Brooks_JCC83} and explicit TIP3P water \cite{Jorgenson_JCP83}
was first employed by Grubmuller {\em et al.}
\cite{Grubmuller_Science96} to compute the rupture force of the
streptavidin-biotin complex. Two years later a similar model was
successfully applied by
Schulten and coworkers \cite{Lu_BJ98} to the titin domain I27.
The NAMD software \cite{Phillips_JCC05} developed by this group is now widely
used for stretching biomolecules by the constant mechanical force and by the
force with constant loading rate (see
recent review \cite{Isralewitz_COSB01}
for more references).
NAMD works not only with CHARMM but also with AMBER potential parameters
\cite{Weiner_JCC81}
and file formats.
Recently, it has become possible to use the GROMACS software \cite{Gunstren_96}
for all-atom simulations of mechanical unfolding of proteins in explicit water.
As we will present results obtained for mechanical unfolding of DDFLN4
using the GROMACS software, we discuss it in more detail.
The GROMACS force field we use provides parameters for all atoms in a system, including water molecules and hydrogen atoms.
The general functional form of a force field consists of two terms:
\begin{equation}
E_{total}=E_{bonded} + E_{nonbonded}
\end{equation}
where $E_{bonded}$ is the bonded term, which is related to atoms that are linked by covalent bonds, and $E_{nonbonded}$ is
the nonbonded one, which describes the long-range electrostatic and van der Waals forces.
{\bf Bonded interactions}.
The potential function for bonded interactions can be subdivided into four parts: covalent bond-stretching, angle-bending, improper dihedrals and proper dihedrals. The bond stretching between two covalently bonded atoms $i$ and $j$ is represented by a harmonic potential
\begin{equation}
V_b(r_{ij})=\frac{1}{2}k_{ij}^b(r_{ij}-b_{ij})^2
\end{equation}
where $r_{ij}$ is the actual bond length, $b_{ij}$ the reference bond length, and $k_{ij}^b$ the bond stretching force constant. Both reference bond lengths and force constants are specific for each pair of bound atoms and they are usually extracted from experimental data or from quantum mechanical calculations.
The bond angle bending interactions between a triplet of atoms {\it i-j-k} are also represented by a harmonic potential on the angle $\theta_{ijk}$
\begin{equation}
V_a(\theta_{ijk})=\frac{1}{2}k_{ijk}^{\theta}(\theta_{ijk}-\theta_{ijk}^0)^2
\end{equation}
where $k_{ijk}^{\theta}$ is the angle bending force constant, $\theta_{ijk}$ and $\theta_{ijk}^0$ are the actual and reference angles, respectively. Values of $k_{ijk}^{\theta}$ and $\theta_{ijk}^0$ depend on chemical type of atoms.
Proper dihedral angles are defined according to the IUPAC/IUB convention (Fig. \ref{interactions}c), where $\phi$ is the angle between the {\it ijk} and the {\it jkl} planes, with zero corresponding to the {\it cis} configuration ({\it i} and {\it l} on the same side). To mimic rotation barriers around the bond, a periodic cosine potential is used:
\begin{equation}
V_d(\phi_{ijkl})=k_{\phi}(1+\cos(n\phi-\phi_s))
\end{equation}
where $k_{\phi}$ is the dihedral angle force constant, $\phi_s$ the phase angle (Fig. \ref{interactions}c), and $n=1,2,3$ the multiplicity (coefficient of symmetry).
Improper potential is used to maintain planarity in a molecular structure. The torsional angle definition is shown in Fig. \ref{interactions}d. The angle $\xi_{ijkl}$ still depends on the same two planes {\it ijk} and {\it jkl}, but with the atom {\it i} in the center instead of on one of the ends of the dihedral chain. Since this potential is used to maintain planarity, it has only one minimum and a harmonic potential can be used:
\begin{equation}
V_{id}(\xi_{ijkl})=\frac{1}{2}k_{\xi}(\xi_{ijkl}-\xi_0)^2
\end{equation}
where $k_{\xi}$ is the improper dihedral angle bending force constant, $\xi_{ijkl}$ the improper dihedral angle, and $\xi_0$ its reference value.
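The four bonded terms above can be collected into short routines (a sketch; real force-field parameters are atom-type specific and come from the topology files, not from the values used here):

```python
import math

def bond_energy(r, b0, kb):
    """Harmonic bond stretching, V_b."""
    return 0.5 * kb * (r - b0) ** 2

def angle_energy(theta, theta0, ka):
    """Harmonic angle bending, V_a."""
    return 0.5 * ka * (theta - theta0) ** 2

def proper_dihedral_energy(phi, k_phi, n, phi_s):
    """Periodic torsion potential, V_d."""
    return k_phi * (1.0 + math.cos(n * phi - phi_s))

def improper_dihedral_energy(xi, xi0, k_xi):
    """Harmonic improper dihedral, V_id, keeping planar groups planar."""
    return 0.5 * k_xi * (xi - xi0) ** 2
```

Each harmonic term vanishes at its reference geometry, so the bonded energy penalizes only deviations from the equilibrium covalent structure.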
{\bf Nonbonded interactions}. They act between atoms within the same protein as well as between different molecules in large protein complexes. Nonbonded interactions are divided into two parts: electrostatic (Fig. \ref{interactions}f) and Van der Waals (Fig. \ref{interactions}e) interactions.
The electrostatic interactions are modeled by the Coulomb potential:
\begin{equation}
V_c(r_{ij})=\frac{q_iq_j}{4\pi\epsilon_0r_{ij}}
\end{equation}
where $q_i$ and $q_j$ are atomic charges, $r_{ij}$ the distance between atoms {\it i} and {\it j}, and $\epsilon_0$ the electrical permittivity of free space.
The interactions between two uncharged atoms are described by the Lennard-Jones potential
\begin{equation}
V_{LJ}(r_{ij})=\frac{C_{ij}^{12}}{r_{ij}^{12}} - \frac{C_{ij}^6}{r_{ij}^6}
\end{equation}
where $C_{ij}^{12}$ and $C_{ij}^6$ are specific Lennard-Jones parameters which depend on pairs of atom types.
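Both nonbonded terms are simple pairwise functions; a sketch in GROMACS-style units (nm, kJ/mol, charges in units of $e$), where the Coulomb prefactor $1/4\pi\epsilon_0 \approx 138.935$ kJ\,mol$^{-1}$\,nm\,$e^{-2}$:

```python
KE = 138.935   # 1/(4 pi eps0) in kJ/mol * nm / e^2 (GROMACS-style units)

def coulomb_energy(qi, qj, r):
    """Coulomb term V_c for two point charges at separation r (nm)."""
    return KE * qi * qj / r

def lj_energy(r, c12, c6):
    """12-6 Lennard-Jones term with pair-specific C12 and C6 parameters."""
    return c12 / r ** 12 - c6 / r ** 6
```

Opposite charges give a negative (attractive) Coulomb energy, while the $C_6$ term supplies the attractive dispersion tail of the Lennard-Jones potential.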
{\bf SPC water model.} To calculate the interactions between molecules in solvent, we use a model of the individual water molecules that tells us where the charges reside. The GROMACS software uses the SPC (Simple Point Charge) model to represent water molecules. The water molecule has three centers of concentrated charge: the partial positive charges on the hydrogen atoms are balanced by an appropriately negative charge located on the oxygen atom. The oxygen atom also carries the Lennard-Jones parameters for computing intermolecular interactions between different molecules. Van der Waals interactions involving hydrogen atoms are not calculated.
\subsection{Molecular Dynamics}
One of the important tools that have been employed to study
biomolecules is the molecular dynamics (MD) simulation.
It was first introduced by Alder and Wainwright in 1957 to study
the interaction of hard spheres. In 1977, the first biomolecule,
the bovine pancreatic trypsin inhibitor (BPTI) protein, was simulated
using this technique.
Nowadays, the MD technique is quite common in the
study of biomolecules such as solvated proteins, protein-DNA
complexes, as well as lipid systems, addressing a variety of issues
including the thermodynamics of ligand binding and the folding and unfolding
of proteins.
It is important to note that biomolecules exhibit a wide range
of time scales over which specific processes take place. For
example, local motion which involves atomic fluctuation, side
chain motion, and loop motion occurs on the length scale of 0.01
to 5 {\AA}, and the time involved in such processes is of the order
of 10$^{-15}$ to 10$^{-12}$ s. The motion of a helix, protein domain
or subunit falls under the rigid body motion, whose typical length
scales are between 1 and 10 {\AA} and whose time scales
are between 10$^{-9}$ and $10^{-6}$ s.
Large-scale motion involves helix-coil
transitions or the folding-unfolding transition, which occurs on length scales of more than
5 {\AA} and times of about 10$^{-7}$ to 10$^{1}$ s.
Typical time scales for protein folding are 10$^{-6}$ to 10$^{1}$ s
\cite{Kubelka_COSB04}. In unfolding experiments, to stretch out
a protein of length $10^2$ nm, one needs time $\sim$ 1 s using
a pulling speed
$v \sim 10^2$ nm/s \cite{Rief_Science97}.
The steered MD (SMD) that combines the stretching condition with the
standard MD was initiated by Schulten and coworkers \cite{Isralewitz_COSB01}.
They simulated the force-unfolding of a number of proteins
showing atomic details of the molecular motion under force. The focus was on the
rupture events of HBs that stabilized the structures. The structural and energetic analysis enabled them to identify the origin of the free energy
barriers and intermediates during mechanical unfolding.
However, one has to notice that there is enormous difference
between the simulation condition used in SMD and real experiment.
In order to stretch out proteins within a reasonable amount of CPU time,
SMD simulations at constant pulling speed use pulling speeds eight to ten
orders of magnitude higher, and spring constants one to two orders of magnitude larger, than
those of AFM experiments. Therefore, the effective force acting on the molecule
is about three to four orders of magnitude higher. It is unlikely that the dynamics under
such extreme conditions can mimic real experiments, and one has to be very
careful about comparison of simulation results with experimental ones.
In literature the word "steered" also means MD at extreme conditions,
where constant force and constant pulling speed are chosen very high.
Excellent reviews on MD and its
use in biochemistry and biophysics are numerous (see, e.g.,
\cite{Adcock_ChemRev06} and references therein).
Below, we only focus on Brownian dynamics as
well as on the second-order Verlet
method for Langevin dynamics simulation, which have been intensively used
to obtain the main results presented in this thesis.
\subsubsection{Langevin dynamics simulation}
The Langevin equation is a stochastic differential equation which introduces
friction and noise
terms into Newton's second law to approximate effects of
temperature and environment:
\begin{equation}
m\frac{d^2 \vec{r}}{d t^2} = \vec{F}_c - \gamma \frac{d\vec{r}}{dt} + \vec{\Gamma} \equiv \vec{F},
\label{Langevin_eq}
\end{equation}
where $\vec{\Gamma}$ is a random force, $m$ the mass of a bead, $\gamma$ the friction
coefficient, and
$\vec{F}_c = -\partial E/\partial \vec{r}$. Here the configuration energy $E$ for the Go model,
for example, is given by Eq. (\ref{Hamiltonian}).
The random force $\Gamma$ is taken to be a Gaussian random
variable with white noise spectrum and is related to the friction coefficient by the
fluctuation-dissipation relation:
\begin{equation}
<\Gamma (t) \Gamma (t')> = 2\gamma k_BT\delta(t-t'),
\label{Noise}
\end{equation}
where $k_B$ is the Boltzmann constant, $T$ the temperature and $\delta(t-t')$ the Dirac delta function.
The friction term only influences kinetic but not thermodynamic properties.
In the low friction regime, where $\gamma < 25\frac{m}{\tau_L}$
(the time unit $\tau_L = (ma^2/\epsilon_H)^{1/2} \approx 3$ ps),
Eq. (\ref{Langevin_eq})
can be solved using the second-order
Velocity Verlet algorithm \cite{Swope_JCP82}:
\begin{eqnarray}
x(t+\Delta t) \; = \; x(t) + \dot{x}(t)\Delta t + \frac {1}{2m}F(t)(\Delta t)^2,
\end{eqnarray}
\begin{eqnarray}
\dot{x}(t+\Delta t) \; = \; \left(1-\frac{\gamma\Delta t}{2m}\right)\left[1 - \frac{\gamma\Delta t}{2m} +
\left(\frac{\gamma\Delta t}{2m}\right)^2\right]\dot{x}(t) + \qquad \nonumber\\
\left(1 - \frac{\gamma\Delta t}{2m} + \left(\frac{\gamma\Delta t}{2m}\right)^2\right)(F_c(t) + \Gamma (t) +
F_c(t+\Delta t) + \Gamma (t+\Delta t))\frac{\Delta t}{2m} + o(\Delta t^2),
\end{eqnarray}
with the time step $\Delta t = 0.005 \tau_L$.
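The update scheme above can be sketched in a few lines of Python. This is a minimal illustration in reduced units; the harmonic test force, the parameter values and the function names are our own choices for the sketch, not taken from the thesis:

```python
import numpy as np

def langevin_verlet_step(x, v, force, m, gamma, kT, dt, rng):
    # discretized random force; fluctuation-dissipation gives variance 2*gamma*kT/dt
    sigma = np.sqrt(2.0 * gamma * kT / dt)
    G_old = sigma * rng.standard_normal()
    Fc_old = force(x)
    F_old = Fc_old - gamma * v + G_old        # total force F in the Langevin equation
    x_new = x + v * dt + F_old * dt**2 / (2.0 * m)
    # second-order velocity update with the (1 - a + a^2) expansion, a = gamma*dt/(2m)
    a = gamma * dt / (2.0 * m)
    G_new = sigma * rng.standard_normal()
    Fc_new = force(x_new)
    v_new = ((1.0 - a) * (1.0 - a + a**2) * v
             + (1.0 - a + a**2) * (Fc_old + G_old + Fc_new + G_new) * dt / (2.0 * m))
    return x_new, v_new

# illustrative run: a single bead in a harmonic well E = x^2/2 (reduced units)
rng = np.random.default_rng(0)
x, v = 1.0, 0.0
for _ in range(10000):
    x, v = langevin_verlet_step(x, v, lambda y: -y, m=1.0, gamma=1.0,
                                kT=0.5, dt=0.005, rng=rng)
```

For a harmonic well at this temperature the bead fluctuates around the minimum with a standard deviation of order $\sqrt{k_BT}$, so the trajectory stays bounded.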
\subsubsection{Brownian dynamics}
In the overdamped limit ($\gamma > 25\frac{m}{\tau_L}$) the inertia term can be neglected, and we obtain a much simpler equation:
\begin{equation}
\frac{dr}{dt} = \frac{1}{\gamma}(F_c + \Gamma).
\label{overdamped_eq}
\end{equation}
This equation may be solved using the simple Euler method which gives
the position of a biomolecule at the time $t + \Delta t$ as follows:
\begin{equation}
x(\Delta t+t) = x(t) + \frac{\Delta t}{\gamma} (F_c + \Gamma).
\label{Euler}
\end{equation}
Due to the large value of $\gamma$ we can choose the time step
$\Delta t = 0.1 \tau_L$ which is 20-fold larger than the low viscosity
case. Since water has $\gamma \approx 50\frac{m}{\tau_L}$
\cite{Veitshans_FD97},
the Euler method is
valid for studying protein dynamics.
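The overdamped Euler scheme is equally compact. The sketch below (our own illustrative parameters; free diffusion as a test case) also checks the fluctuation-dissipation relation through the mean-square displacement $<x^2> = 2(k_BT/\gamma)t$:

```python
import numpy as np

def brownian_euler_step(x, force, gamma, kT, dt, rng):
    # Euler step of the overdamped equation: x(t+dt) = x(t) + dt/gamma * (F_c + Gamma)
    sigma = np.sqrt(2.0 * gamma * kT / dt)    # fluctuation-dissipation relation
    G = sigma * rng.standard_normal(np.shape(x))
    return x + (dt / gamma) * (force(x) + G)

# illustrative test: free diffusion of 2000 independent particles;
# expect <x^2> = 2*D*t with D = kT/gamma
rng = np.random.default_rng(1)
gamma, kT, dt, nstep = 50.0, 1.0, 0.1, 1000
xs = np.zeros(2000)
for _ in range(nstep):
    xs = brownian_euler_step(xs, lambda y: 0.0 * y, gamma, kT, dt, rng)
msd = np.mean(xs**2)    # t = nstep*dt = 100, so 2*D*t = 4.0 here
```

The measured mean-square displacement agrees with $2Dt = 4$ to within statistical error, confirming that the discretized noise amplitude is consistent with Eq. (\ref{Noise}).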
\subsection{Theoretical background}
In this section we present basic formulas used throughout this thesis.
\subsubsection{Cooperativity of folding-unfolding transition}
The sharpness of the folding-unfolding transition can be characterized
quantitatively
via the cooperativity index $\Omega _c$, which is defined as follows
\cite{MSLi_Polymer04}
\begin{equation}
\Omega_c=\frac{T_F^2}{\Delta T}
\biggl(\frac{df_N}{dT}\biggr)_{T=T_F},
\label{cooper_index_eq}
\end{equation}
where $\Delta T$ is the transition width and $f_N$ the probability of being
in the NS. The larger $\Omega _c$, the sharper is the transition.
$f_N$ is defined as the thermodynamic average of the
fraction of native contacts $\chi$, $f_N = <\chi >$. For off-lattice models, $\chi$ is \cite{Camacho_PNAS93}:
\begin{equation}
\chi \; = \frac{1}{Q_{total}} \sum_{i<j+1}^N \,\;
\theta (1.2r_{0ij} - r_{ij}) \Delta_{ij}
\label{chi_eq_Go}
\end{equation}
where $Q_{total}$ is the total number of native contacts,
$\Delta_{ij}$ is equal to 1 if residues $i$ and $j$ form a native
contact and 0 otherwise, and $\theta (x)$ is the Heaviside
function. The argument of this function guarantees that
a native contact between $i$ and $j$ is classified as formed
when $r_{ij}$ is shorter than 1.2$r_{0ij}$ \cite{Clementi_JMB00}.
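As a minimal illustration, the off-lattice definition of $\chi$ can be evaluated as follows. The coordinates, pair list and native distances below are hypothetical; normalizing by the number of native pairs is equivalent to dividing by $Q_{total}$:

```python
import numpy as np

def native_fraction(coords, native_pairs, r0, factor=1.2):
    # chi: a native pair (i, j) counts as formed when r_ij < 1.2 * r0_ij
    i, j = native_pairs[:, 0], native_pairs[:, 1]
    r = np.linalg.norm(coords[i] - coords[j], axis=1)
    return np.mean(r < factor * r0)

# hypothetical 4-bead conformation with two native pairs
coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [5.0, 5.0, 0.0]])
pairs = np.array([[0, 2], [1, 3]])
r0 = np.array([np.sqrt(2.0), 4.0])    # native distances of the two pairs
chi = native_fraction(coords, pairs, r0)   # pair (0,2) is formed, (1,3) is broken
```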
In the lattice model with side chain (LMSC) case, we have
\begin{eqnarray}
\chi \; = \; \frac{1}{2N^{2} - 3N + 1} \left[ \sum_{i<j} \,
\delta (r_{ij}^{ss} - r_{ij}^{ss,N})
+ \sum_{i<j+1} \, \delta (r_{ij}^{bb} - r_{ij}^{bb,N})
+ \sum_{i \neq j} \, \delta (r_{ij}^{bs} - r_{ij}^{bs,N}) \; \right].
\label{chi_eq_lattice}
\end{eqnarray}
Here $bb$, $bs$ and $ss$ refer to backbone-backbone, backbone-side chain
and side chain-side chain pairs, respectively.
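Given a sampled $f_N(T)$ curve, $\Omega_c$ can be estimated numerically. The sketch below uses a simple two-state toy model for $f_N(T)$ and takes the transition width $\Delta T$ as the full width at half maximum of $|df_N/dT|$; all parameter values are illustrative, in reduced units:

```python
import numpy as np

kB = 1.0   # reduced units

def two_state_fN(T, dH, Tm):
    # illustrative two-state model for the probability of being in the NS
    return 1.0 / (1.0 + np.exp(-(dH / kB) * (1.0 / T - 1.0 / Tm)))

def cooperativity_index(T, fN):
    # Omega_c = (T_F^2 / dT) * |df_N/dT| at T_F, with dT taken as the
    # full width at half maximum of |df_N/dT|
    dfdT = np.abs(np.gradient(fN, T))
    k = np.argmax(dfdT)
    TF, peak = T[k], dfdT[k]
    above = np.where(dfdT >= 0.5 * peak)[0]
    dT = T[above[-1]] - T[above[0]]
    return TF**2 / dT * peak

T = np.linspace(0.5, 1.5, 2001)
omega_small = cooperativity_index(T, two_state_fN(T, dH=10.0, Tm=1.0))
omega_large = cooperativity_index(T, two_state_fN(T, dH=50.0, Tm=1.0))
# the sharper transition (larger dH) gives a much larger Omega_c
```

This reproduces the statement above: the sharper the transition, the larger $\Omega_c$.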
\subsubsection{Kinetic theory for mechanical unfolding of biomolecules}
One of the notable aspects in force experiments on single biomolecules
is that the end-to-end extension $\Delta R$ is directly measurable or controlled by
instrumentation. $\Delta R$ becomes a natural reaction coordinate for describing
mechanical processes.
The theoretical framework for understanding the effect of external constant
force on
rupture rates was first discussed in the context
of cell-cell adhesion by Bell in 1978 \cite{Bell_Sci78}.
Evans and Ritchie have extended his theory to the case when the loading force
increases linearly with time
\cite{Evans_BJ97}. The phenomenological Bell theory is based on the assumption that the
TS does not move under stretching. Since this assumption is not true,
Dudko {\em et al.} \cite{Dudko_PRL06}
have developed a microscopic theory which is free from this shortcoming.
In this section we discuss the phenomenological as well as the microscopic
kinetic theory.
\paragraph{Bell theory for constant force case.}
Suppose the external constant force, $f$, is applied to the termini of a biomolecule.
The deformation of the FEL under force is schematically
shown in Fig. \ref{conceptual_TS_fig}.
Assuming that the force does not change
the distance between the NS and TS ($x_u(f)=x_u(0)$),
Bell \cite{Bell_Sci78} stated that
the activation energy is changed to
$\Delta G^{\ddag}_u(f) = \Delta G^{\ddag}_u(0)
- fx_u$, where $x_u =x_u(0)$. In general, the proportionality factor $x_u$
has the dimension of length
and may be viewed as the width of the potential.
Using the Arrhenius law, Bell obtained the following
formula for the unfolding/unbinding rate constant \cite{Kramers_Physica40}:
\begin{equation}
k_{u} (f) \; = \; k_{u}(0) \exp (fx_{u}/k_BT),
\label{Bell_Ku_eq}
\end{equation}
where $k_{u}(0)$ is the unfolding rate constant
in the absence of force. If a reaction takes place in the condensed phase,
then according to the Kramers theory $k_{u}(0)$ is given by
\begin{equation}
k_{u}(0) \; = \; \frac{\omega_0\omega_{ts}}{2\pi \gamma}\exp(-\Delta G^{\ddag}_u(0)/k_BT).
\label{Omega}
\end{equation}
Here $\gamma$ is the solvent viscosity, $\omega_0$ the angular frequency (curvature) at the reactant bottom, and $\omega_{ts}$ the curvature at the barrier top of
the effective reaction coordinate \cite{Kramers_Physica40}.
For biological reactions, which belong to the Kramers category,
$\frac{\omega_0\omega_{ts}}{2\pi \gamma} \sim 1 \, \mu$s$^{-1}$ \cite{Levinthal_JCP68}.
\begin{figure}
\vspace{5 mm}
\epsfxsize=4.2in
\centerline{\epsffile{./Figs/conceptual_TS.eps}}
\linespread{0.8}
\caption{Conceptual plot of the FEL without (blue) and
under (red) the external force. Under force the location of the TS
moves toward the NS, reducing $x_u$.}
\label{conceptual_TS_fig}
\vspace{5 mm}
\end{figure}
It is important to note that the unfolding rate grows exponentially
with the force. This is the hallmark of the Bell model.
Even though Eq. (\ref{Bell_Ku_eq}) is very simple, as we will
see below, it fits most experimental data very well.
Using Eq. (\ref{Bell_Ku_eq}),
one can extract the distance $x_u$, or the location of the TS.
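In practice, $x_u$ and $k_u(0)$ are obtained from a linear fit of $\ln k_u$ versus $f$. A sketch with synthetic Bell-model data (all parameter values are illustrative, chosen in typical AFM units):

```python
import numpy as np

# Bell model: ln k_u(f) = ln k_u(0) + f*x_u / (kB*T)
kBT = 4.1          # pN*nm at room temperature
x_u_true = 0.4     # nm, assumed TS location
k0_true = 1e-4     # 1/s, assumed zero-force unfolding rate

f = np.linspace(50.0, 200.0, 10)              # forces in pN
rates = k0_true * np.exp(f * x_u_true / kBT)  # synthetic unfolding rates

# linear fit of ln k_u vs f recovers x_u (from the slope) and k_u(0)
slope, intercept = np.polyfit(f, np.log(rates), 1)
x_u_fit = slope * kBT        # nm
k0_fit = np.exp(intercept)   # 1/s
```

With noiseless data the fit recovers the input parameters exactly; with experimental data the same fit yields the best estimates of $x_u$ and $k_u(0)$.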
\paragraph{Bell theory for force ramp case.}
Assuming that the force increases linearly with time at a loading rate $v$,
Evans and Ritchie,
in their seminal paper \cite{Evans_BJ97}, have shown that the
distribution of unfolding force $P(f)$ obeys the following equation:
\begin{equation}
P(f) \; = \; \frac{k_{u}(f)}{v} \exp \{ \frac{k_BT}{x_uv}
\left[ k_u(0)-k_u(f) \right]\},
\end{equation}
where $k_u(f)$ is given by Eq. (\ref{Bell_Ku_eq}).
Then, the most probable unbinding force or the maximum of force distribution
$f_{max}$, obtained from the condition $dP(f)/df|_{f=f_{max}}=0$,
is
\begin{equation}
f_{max}=\frac{k_BT}{x_u}\ln\frac{vx_u}{k_u(0)k_BT}.
\label{f_logV_eq}
\end{equation}
The logarithmic dependence of $f_{max}$ on the loading rate $v$ was
confirmed by numerous experiments and simulations
\cite{Klimov_PNAS99,Kouza_JCP08}.
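The most probable force can be checked numerically by locating the maximum of $P(f)$ on a grid and comparing it with the closed-form estimate $(k_BT/x_u)\ln[vx_u/(k_u(0)k_BT)]$, where $v$ denotes the loading rate; the parameter values below are illustrative:

```python
import numpy as np

kBT, x_u, k0 = 4.1, 0.4, 1e-4        # pN*nm, nm, 1/s (illustrative values)

def k_u(f):
    return k0 * np.exp(f * x_u / kBT)   # Bell unfolding rate

def P(f, v):
    # unfolding-force distribution for loading rate v (pN/s)
    return k_u(f) / v * np.exp(kBT / (x_u * v) * (k0 - k_u(f)))

f = np.linspace(0.0, 400.0, 400001)
loading_rates = (1e2, 1e3, 1e4)
f_num = [f[np.argmax(P(f, v))] for v in loading_rates]                   # grid maximum
f_an = [kBT / x_u * np.log(v * x_u / (k0 * kBT)) for v in loading_rates] # closed form
```

The two estimates agree to grid precision, and $f_{max}$ grows by the same increment for each decade of $v$, i.e.\ logarithmically.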
\paragraph{Beyond Bell approximation.}
The major shortcoming of the Bell approximation is
the assumption that $x_u$ does not depend on
the external force. Upon force application the location of TS
should move
closer to
the NS reducing $x_u$ (Fig \ref{conceptual_TS_fig}),
as postulated by Hammond in the context
of chemical reactions of small organic molecules
\cite{Hammond_JACS53}.
The Hammond behavior has been
observed in protein folding experiments
\cite{Matouschek_Biochemistry95}
and simulations \cite{Lacks_BJ05}.
Recently, assuming that $x_u$ depends on the external force
and using the Kramers theory,
several groups \cite{Schlierf_BJ06,Dudko_PRL06}
have tried to go beyond the Bell approximation. We follow
Dudko {\em et al.} who proposed
the following force dependence for the unfolding time \cite{Dudko_PRL06}:
\begin{eqnarray}
\tau _u \; = \; \tau _u^0
\left(1 - \frac{\nu x_uf}{\Delta G^{\ddagger}}\right)^{1-1/\nu}
\exp\lbrace -\frac{\Delta G^{\ddagger}}{k_BT}[1-(1-\nu x_uf/\Delta G^{\ddagger})^{1/\nu}]\rbrace.
\label{Dudko_eq}
\end{eqnarray}
Here, $\Delta G^{\ddagger}$ is the unfolding barrier, and $\nu = 1/2$ and 2/3
for the cusp \cite{Hummer_BJ03} and the
linear-cubic free energy surface \cite{Dudko_PNAS03}, respectively.
Note that
$\nu =1$ corresponds to the phenomenological
Bell theory (Eq. \ref{Bell_Ku_eq}), where $\tau_u=1/k_u$.
An important consequence following from
Eq. (\ref{Dudko_eq}), is that one can apply it to estimate not only
$x_u$, but also $\Delta G^{\ddagger}$, if $\nu \ne 1 $. Expressions
for the distribution
of unfolding forces and the $f_{max}$ for arbitrary $\nu$ may be found in
\cite{Dudko_PRL06}.
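Eq. (\ref{Dudko_eq}) is straightforward to evaluate numerically; the sketch below (with our own illustrative parameters) also verifies that $\nu = 1$ recovers the Bell result $\tau_u = \tau_u^0\exp(-fx_u/k_BT)$:

```python
import numpy as np

def tau_u(f, tau0, x_u, dG, nu, kBT):
    # unfolding time vs force; valid while nu*x_u*f < dG
    y = 1.0 - nu * x_u * f / dG
    return tau0 * y**(1.0 - 1.0 / nu) * np.exp(-dG / kBT * (1.0 - y**(1.0 / nu)))

kBT, x_u, dG, f = 4.1, 0.4, 82.0, 100.0    # pN*nm, nm, pN*nm, pN (illustrative)
taus = {nu: tau_u(f, 1.0, x_u, dG, nu, kBT) for nu in (0.5, 2.0 / 3.0, 1.0)}
bell = 1.0 * np.exp(-f * x_u / kBT)         # nu = 1 limit, tau_u = 1/k_u
```

For $\nu = 1$ the prefactor becomes unity and the exponent reduces to $-fx_u/k_BT$, so the Bell limit is recovered exactly; for $\nu = 1/2$ and $2/3$ the force dependence of the barrier is nonlinear.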
\subsubsection{Kinetic theory for refolding of biomolecules.}
In force-clamp experiments \cite{Fernandez_Sci04}, a protein refolds under
the quenched force. Then,
in the Bell approximation,
the external force increases the folding barrier
(see Fig. \ref{conceptual_TS_fig}) by amount
$\Delta G^{\ddag}_f = fx_f$, where $x_f = x_f(0)$ is
the distance between the DS
and the TS. Therefore, the refolding time
reads as
\begin{equation}
\tau_{f} (f) \; = \; \tau_{f}(0) \exp (fx_{f}/k_BT).
\label{Bell_Kf_eq}
\end{equation}
Using this equation and the force dependence of $\tau_{f} (f)$,
one can extract $x_f$
\cite{Fernandez_Sci04,MSLi_PNAS06,MSLi_BJ07}.
One can extend the nonlinear theory of Dudko {\em et al.} \cite{Dudko_PRL06}
to the refolding case by replacing $x_u \rightarrow -x_f$
in, e.g.,
Eq. (\ref{Dudko_eq}). Then the folding barriers can be estimated using the
microscopic theory with $\nu \neq 1$.
\subsection{Progressive variable}
In order to probe folding/refolding pathways, for $i$-th trajectory
we introduce the progressive variable
\begin{equation}
\delta _i =
t/\tau^i_{f}.
\label{progress_fold_eq}
\end{equation}
Here $\tau^i_{f}$ is the folding time, defined
as the time to reach the NS starting from the denatured state
for the $i$-th trajectory.
Then one
can average the fraction of native contacts over many trajectories
in a unique time window
$0 \le \delta _i \le 1$ and monitor the folding sequencing with
the help of the progressive variable $\delta$.
In the case of unfolding, the progressive variable is defined in a similar
way:
\begin{equation}
\delta _i =
t/\tau^i_{u}.
\label{progress_unfold_eq}
\end{equation}
Here $\tau^i_{u}$ is the unfolding time, defined
as the time to reach a rod conformation starting from the NS for the $i$-th
trajectory.
The unfolding time, $\tau_u$,
is the average of first passage times to reach a rod conformation.
Different trajectories start from the same native
conformation but with different random number seeds.
In order
to get a reasonable estimate for $\tau_u$,
for each case we have generated 30 -- 50 trajectories.
Unfolding pathways were probed by monitoring
the fraction of native contacts of secondary structures as a
function of progressive variable $\delta$.
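Averaging an observable over trajectories in the common window $0 \le \delta \le 1$ amounts to rescaling each time axis by $\tau^i$ before averaging. A sketch with two hypothetical trajectories (function name and test data are our own):

```python
import numpy as np

def average_over_delta(trajs, times, nbins=50):
    # average chi over trajectories on the common window 0 <= delta <= 1,
    # where delta = t / tau^i for trajectory i
    delta_grid = np.linspace(0.0, 1.0, nbins)
    acc = np.zeros(nbins)
    for chi, tau in zip(trajs, times):
        d = np.arange(len(chi)) / tau          # delta value of each sample
        acc += np.interp(delta_grid, d, chi)   # resample onto the common grid
    return delta_grid, acc / len(trajs)

# two hypothetical trajectories with different folding times but the same
# chi(delta) = delta^2 profile
t1 = np.linspace(0.0, 1.0, 100)
t2 = np.linspace(0.0, 1.0, 200)
delta, chi_avg = average_over_delta([t1**2, t2**2], times=[99.0, 199.0])
```

Although the two toy trajectories fold on different absolute time scales, their rescaled profiles coincide, so the average reproduces the common $\chi(\delta) = \delta^2$ curve.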
\clearpage
\begin{center}\section{Effect of finite size on cooperativity and rates of protein folding}\end{center}
\subsection{Introduction}
Single domain globular proteins are mesoscopic systems that self-assemble,
under folding conditions, to a compact state with definite topology. Given
that the folded states of proteins are only on the order of tens of
Angstroms
(the radius of gyration $R_g \approx 3 N^{\frac {1}{3}}$ \AA $~$
\cite{Dima_JPCB04} where $N$ is
the number of amino acids) it is surprising that they undergo highly
cooperative transitions from an ensemble of unfolded states to the NS
\cite{Privalov_APC79}.
There is also a wide spread in folding times: the rates of folding vary
by nearly nine orders of magnitude \cite{Galzitskaya_Proteins03}.
Some time ago it was shown theoretically that the folding time,
$\tau_F$, should depend on $N$
\cite{Finkelstein_FoldDes97}
but only recently has experimental
data confirmed this prediction
\cite{Galzitskaya_Proteins03,MSLi_Polymer04,Ivankov_PNAS04}.
It has been shown that $\tau_F$
can be approximately evaluated using $\tau_F \approx \tau_F^0
\exp(N^{\beta})$ where $1/2 \le \beta < 2/3$ with
the prefactor $\tau_F^0$ being on the order
of a $\mu s$.
Much less attention has been paid to finite size effects on the
cooperativity of transition from unfolded states to the
native basin of attraction (NBA). Because
$N$ is finite, large conformational fluctuations are possible but
require careful examination \cite{Klimov_JCC02,MSLi_Polymer04}.
For large enough $N$ it is likely that the
folding or melting temperature itself may not be unique
\cite{Holtzer_BJ97}.
Although substantial variations in $T_m$ are unlikely, it has already been shown that there
is a range of temperatures over which individual residues in a protein achieve their
NS ordering \cite{Holtzer_BJ97}.
On the other hand, the global cooperativity, as measured by the
dimensionless parameter $\Omega_c$ (Eq. \ref{cooper_index_eq}) has been
shown to scale as \cite{MSLi_PRL04}
\begin{equation}
\Omega_c \approx N^{\zeta}.
\label{omega}
\end{equation}
Using scaling arguments and an analogy with a magnetic system, it was shown that \cite{MSLi_PRL04}
\begin{equation}
\zeta= 1+ \gamma \approx 2.2
\label{2dot2}
\end{equation}
where the magnetic susceptibility exponent $\gamma \approx 1.2$. This result is not trivial because the protein melting transition is first order \cite{Privalov_APC79}, for which $\zeta=2$ \cite{Naganathan_JACS05}. Let us mention the main steps leading to Eq. (\ref{2dot2}).
The folding temperature can be identified with the peak in $d<\chi>/dT$
or in the fluctuations in $\chi$, namely, $\Delta \chi = <\chi^2> - <\chi>^2$. Using an analogy to magnetic systems, we identify $T(\partial <\chi>/\partial h)=\Delta \chi$, where $h$ is an ``ordering field'' that is conjugate to $\chi$. Since $\Delta \chi$ is dimensionless, we expect $h \approx T$ for proteins, and hence $T(\partial <\chi>/\partial T)$ behaves like a susceptibility. Hence, the scaling of $\Omega_c$ with $N$ should follow the way $(T_F/\Delta T) \Delta \chi$ changes with $N$ \cite{Kohn_PNAS04}.
For efficient folding in proteins $T_F \approx T_\Theta$ \cite{Klimov_PRL97}, where $T_\Theta$ is the temperature at which the coil-globule transition occurs. It has been argued that $T_F$ for proteins may well be a tricritical point, because the transition at $T_F$ is first-order while the collapse transition is (typically) second-order.
Then, as the temperature approaches $T_F$ from above, we expect that the characteristics of the polypeptide chain at $T_\Theta$ should manifest themselves in the folding cooperativity.
At or above $T_F$, the susceptibility $\Delta \chi$ should scale with $\Delta T$ as $\Delta \chi \sim \Delta T^{- \gamma}$, as predicted by the scaling theory for second order transitions \cite{Fisher_PRB82}. Therefore, $\Omega_c \sim \Delta T^{-(1+\gamma)}$. Taking into account that $\Delta T \sim N^{-1}$ \cite{Grosberg_Book94} we arrive at Eqs. (\ref{omega}) and (\ref{2dot2}).
In this chapter we use LMSC,
off-lattice Go models for 23 proteins and
experimental results for a number of proteins to further confirm the
theoretical predictions (Eq. \ref{omega} and \ref{2dot2}). Our results show that $\zeta \approx 2.22$ which
is \textit{distinct from the expected
result} ($\zeta = 2.0$) \textit{for a strong first order transition} \cite{Fisher_PRB82}.
Another goal of ours is to study the dependence of the folding time on the number of amino acids.
The larger data set of proteins for which folding rates are available
shows that the folding time scales as
\begin{equation}
\tau_F = \tau_0 \exp(cN^{\beta})
\end{equation}
with $c \approx 1.1$, $\beta = 1/2$ and $\tau_0 \approx 0.2 \mu s$.
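As a quick numerical illustration of this fit, the quoted values give the following orders of magnitude:

```python
import numpy as np

tau0_us, c, beta = 0.2, 1.1, 0.5   # fit parameters quoted in the text

def folding_time_us(N):
    # tau_F = tau0 * exp(c * N**beta), in microseconds
    return tau0_us * np.exp(c * N**beta)

times = {N: folding_time_us(N) for N in (16, 64, 150)}
# e.g. N = 64 gives roughly a millisecond; N = 150 roughly a tenth of a second
```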
The results presented in this chapter are taken from Ref. \cite{Kouza_JPCA06}.
\subsection{Models and methods}
The LMSC (Eq. \ref {energy_eq_lattice}) and coarse-grained off-lattice model (Eq. \ref{Hamiltonian}) \cite{Clementi_JMB00} were used.
For the LMSC we performed Monte Carlo simulations using
the previously well-tested move set MS3 \cite{Li_JPCB02}, which uses
single, double and triple bead moves \cite{Betancourt_JCP98} and ensures
that ergodicity is obtained efficiently even for $N=50$.
Following standard practice the thermodynamic properties are computed
using the multiple histogram method \cite{Ferrenberg_PRL89}. The kinetic simulations are carried out
by a quench from high temperature to a temperature at which the NBA
is preferentially populated. The folding times are calculated
from the distribution of first passage times.
For off-lattice models, we assume the dynamics of the polypeptide chain obeys the Langevin
equation. The equations of motion were integrated using the velocity form
of the Verlet algorithm with the time step $\Delta t = 0.005 \tau_L$,
where $\tau_L = (ma^2/\epsilon_H)^{1/2} \approx 3$ ps.
In order to calculate the thermodynamic quantities we collected
histograms for the energy and native contacts
at five or six different temperatures
(at each temperature 20 - 50 trajectories were generated depending on proteins).
As with the LMSC we used the multiple histogram method \cite{Ferrenberg_PRL89}
to obtain the thermodynamic parameters at all temperatures.
For off-lattice and LMSC models the probability of being in the NS is computed
using Eq. (\ref{chi_eq_Go}) and Eq. (\ref{chi_eq_lattice}), respectively.
The extent of cooperativity of the transition to the NBA from the ensemble of
unfolded states is measured using the dimensionless parameter $\Omega_c$ (Eq. \ref{cooper_index_eq}). Two points about $\Omega_c$ are noteworthy. (1) For
proteins that melt by a two-state transition it is trivial to show that
$\Delta H_{vH} = 4k_B\Delta T\Omega _c$, where $\Delta H_{vH}$ is the
van't Hoff enthalpy at $T_F$. For an infinitely sharp two-state transition
there is a latent heat release at $T_F$, at which $C_p$ can be approximated
by a delta-function. In this case $\Omega_c \rightarrow \infty$ which implies
that $\Delta H_{vH}$ and the calorimetric enthalpy $\Delta H_{cal}$
(obtained by integrating the temperature dependence of the specific heat
$C_p$) would coincide. It is logical to infer
that as $\Omega_c$ increases the ratio $\kappa = \Delta H_{vH}/\Delta H_{cal}$
should approach unity.
(2) Even for moderate sized proteins that undergo a two-state transition
$\kappa \approx 1$ \cite{Privalov_APC79}.
It is known that the extent of cooperativity depends on external
conditions as has been demonstrated for thermal denaturation of CI2 at
several values of pH \cite{Jackson_Biochemistry91}. The values of $\kappa$ for all
pH values are $\approx 1$.
However, the variations in the cooperativity of CI2 as pH varies are
reflected in the changes in $\Omega _c$ \cite{Klimov_FD98}.
Therefore, we believe that $\Omega _c$, which varies in the
range $0 < \Omega _c < \infty$, is a better descriptor of the extent of
cooperativity than $\kappa$. The latter merely tests the applicability
of the two-state approximation.
\vspace{5 mm}
\begin{figure}[!htbp]
\includegraphics[width=0.47\textwidth]{./Figs/fig1_new.eps}
\hfill
\linespread{0.8}
\parbox[b]{0.47\textwidth}{\caption{The temperature dependence of $f_N$ and $df_N/dT$ for $\beta$-hairpin
($N=16$) and CspB ($N=67$). The scale for $df_N/dT$ is given on the right.
(a): the experimental curves were
obtained using
$\Delta H = 11.6$ kcal/mol,
$T_m=297$ K and $\Delta H = 54.4$ kcal/mol and $T_m= 354.5$ K for
$\beta$-hairpin and CspB, respectively.
(b): the simulation results were calculated from $f_N = <\chi (T)>$.
The Go model gives only qualitatively reliable estimates of $f_N(T)$.\\\\\\\\\\\\}\label{hairpin_CspB_fig}}
\\
\end{figure}
\subsection{Results}
\subsubsection{Dependence of cooperativity $\Omega_c$ on number of amino acids $N$}
For the 23 Go proteins listed in Table \ref{scalling_table1}, we calculated $\Omega_c$ from
the temperature dependence of $f_N$.
In Fig. \ref{hairpin_CspB_fig}
we compare the temperature dependence of $f_N(T)$ and $df_N(T)/dT$ for
$\beta$-hairpin ($N=16$) and CspB from {\it Bacillus subtilis} ($N=67$).
It is clear that the transition width and the amplitudes of $df_N/dT$
obtained using Go models compare only qualitatively with experiments.
As pointed out by Kaya and Chan \cite{Kaya_SFG00,Kaya_JMB03,Chan_ME04,Kaya_PRL00},
the simple Go-like models consistently
underestimate the extent of cooperativity. Nevertheless, both the models and
experiments show that $\Omega_c$ increases dramatically as $N$ increases
(Fig. \ref{hairpin_CspB_fig}).
The variation of $\Omega_c$ with $N$ for the 23 proteins obtained from
the simulations of Go models is given in Fig. \ref{Scal_Omega_fig}.
From the ln$\Omega_c$-ln$N$ plot we obtain $\zeta = 2.40 \pm 0.20$
and $\zeta = 2.35 \pm 0.07$ for off-lattice models and LMSC, respectively. These
values of $\zeta$ deviate from the theoretical prediction
$\zeta \approx 2.22$.
We suspect that this is due to large fluctuations in the NS of
polypeptide chains that are represented using minimal models.
Nevertheless, the results for the minimal models rule out
the value of $\zeta = 2$ that is predicted for systems that undergo a first
order transition. The near coincidence of $\zeta$ for both models shows that
the details of interactions are not relevant.
\begin{wrapfigure}{r}{0.47\textwidth}
\includegraphics[width=0.40\textwidth]{./Figs/fig2_new.eps}
\hfill\begin{minipage}{7.7 cm}
\linespread{0.8}
\caption{Plot of ln$\Omega_c$ as a function of ln$N$.
The red line is a fit to the simulation data for the 23
off-lattice Go proteins from which we estimate
$\zeta =2.40 \pm 0.20$. The black line is a fit to the lattice models
with side chains ($N = 18, 24, 32, 40$ and 50) with
$\zeta = 2.35 \pm 0.07$.
The blue line is a fit to the experimental values of
$\Omega_c$ for 34 proteins (Table \ref{scalling_table2})
with $\zeta = 2.17 \pm 0.09$. The larger deviation in $\zeta$ for the minimal
models is due to lack of all the interactions that stabilize the NS. \label{Scal_Omega_fig}}
\end{minipage}
\\
\end{wrapfigure}
For the thirty four proteins (Table \ref{scalling_table2}) for which we could find thermal
denaturation data, we calculated $\Omega_c$ using $\Delta H$
and $T_F$ (referred to as the melting temperature $T_m$ in the experimental
literature).
From the plot of ln$\Omega_c$ versus ln$N$ we find that
$\zeta = 2.17 \pm 0.09$. The experimental value of $\zeta$, which also
deviates from $\zeta = 2$, is in much better agreement with the theoretical
prediction. The analysis of experimental data requires care because the
compiled results were obtained from a number of different laboratories around
the world. Each laboratory uses different methods to analyze the raw
experimental data which invariably lead to varying methods to
estimate errors in
$\Delta H$ and $T_m$. To estimate the error bar for $\zeta$ it is important
to consider the errors in the computation of $\Omega_c$.
Using the reported experimental errors in $T_m$ and
$\Delta H$ we calculated the variance $\delta^2\Omega_c$ using the standard
expression for the error propagation \cite{MSLi_PRL04}.
\subsubsection{Dependence of folding free energy barrier on number of amino acids $N$}
The simultaneous presence of stabilizing (between hydrophobic residues) and
destabilizing interactions involving polar and charged residues in
polypeptide chain renders the NS only marginally stable
\cite{Poland_book}.
The hydrophobic residues enable the formation of compact structures while
polar and charged residues, for which water is a good solvent, are better
accommodated by extended conformations. Thus, in the folded state the
average energy gain per residue (compared to expanded states) is
$-\epsilon _H$ ($\approx 1 - 2$ kcal/mol), whereas due to chain connectivity
and surface area burial the loss in free energy of exposed residues is
$\epsilon _P \approx \epsilon _H$. Because there is a large number of
solvent-mediated interactions that stabilize the NS,
even when $N$ is small, it follows from the
central limit theorem that the barrier height $\beta \Delta G^{\ddagger}$,
whose lower bound is the stabilizing free energy, should scale as
$\Delta G^{\ddagger} \sim k_BT\sqrt{N}$ \cite{Thirumalai_JPI95}.
\begin{wrapfigure}{r}{0.47\textwidth}
\includegraphics[width=0.40\textwidth]{./Figs/fig3_new.eps}
\hfill\begin{minipage}{7.5 cm}
\linespread{0.8}
\caption{Folding rate of 69 real proteins (squares) is plotted as
a function of $N^{1/2}$ (the straight line represents the fit
$y = 1.54 -1.10x$ with the correlation coefficient $R=0.74$).
The open circles represent the data obtained for 23
off-lattice Go proteins (see Table \ref{scalling_table1})
(the linear fit $y = 9.84 - x$ and $R=0.92$).
The triangles denote the data obtained for lattice models with side
chains ($N = 18, 24, 32, 40$ and 50, the linear fit
$y = -4.01 - 1.1x$ and $R=0.98$). For real proteins
and off-lattice Go proteins $k_F$ is measured in $\mu s^{-1}$, whereas
for the lattice models it is measured in MCS$^{-1}$
where MCS is Monte Carlo steps. \label{real_pro_fig}}
\end{minipage}
\end{wrapfigure}
A different physical picture has been used to argue that
$\Delta G^{\ddagger} \sim k_BTN^{2/3}$ \cite{Finkelstein_FoldDes97,Wolynes_PNAS97}.
Both the scenarios show that the barrier to folding rates
scales sublinearly with $N$.
The dependence of ln$k_F$ ($k_F = \tau_F^{-1}$) on $N$ using experimental
data for 69 proteins \cite{Naganathan_JACS05}
and the simulation results for the 23 proteins is
consistent with the predicted behavior that
$\Delta G^{\ddagger} = ck_BT\sqrt{N}$ with $c \approx 1$ (Fig. \ref{real_pro_fig}). The correlation
between the experimental results and the theoretical fit is 0.74
which is similar to the previous analysis using a set of 57
proteins \cite{MSLi_Polymer04}. It should be noted that the data can also be fit using
$\Delta G^{\ddagger} \sim k_BTN^{2/3}$.
The prefactor $\tau_F^0$ using the $N^{2/3}$ fit is over
an order of magnitude larger than
for the $N^{1/2}$ behavior. In the absence of accurate
measurements for a larger data set of proteins it is difficult to
distinguish between the two power laws for $\Delta G^{\ddagger}$.
\begin{table}[h]
\scriptsize{
\begin{tabular}{|c|c|c|c|c|}
\hline
Protein&$N$&PDB code$^{\rm a}$&$\Omega_c^{\rm b}$&$\delta \Omega_c^{\rm c}$ \\
\hline
$\beta$-hairpin&$16$&1PGB&2.29&0.02\\
\hline
$\alpha$-helix&$21$&no code&0.803&0.002\\
\hline
WW domain&$34$&1PIN&3.79&0.02\\
\hline
Villin headpiece&$36$&1VII&3.51&0.01\\
\hline
YAP65&$40$&1K5R&3.63&0.05\\
\hline
E3BD&$45$& &7.21&0.05 \\
\hline
hbSBD&$52$&1ZWV&51.4&0.2\\
\hline
Protein G&$56$&1PGB&16.98&0.89\\
\hline
SH3 domain ($\alpha$-spectrum)&$57$&1SHG&74.03&1.35\\
\hline
SH3 domain (fyn)&$59$&1SHF&103.95&5.06\\
\hline
IgG-binding domain of streptococcal protein L&$63$&1HZ6&21.18&0.39\\
\hline
Chymotrypsin Inhibitor 2 (CI-2)&$65$&2CI2&33.23&1.66\\
\hline
CspB (Bacillus subtilis)&$67$&1CSP&66.87&2.18\\
\hline
CspA&$69$&1MJC&117.23&13.33\\
\hline
Ubiquitin&$76$&1UBQ&117.8&11.1\\
\hline
Activation domain procarboxypeptidase A2&$80$&1AYE&73.7&3.1\\
\hline
His-containing phosphocarrier protein&$85$&1POH&74.52&4.2\\
\hline
hbLBD&$87$&1K8M&15.8&0.2\\
\hline
Tenascin (short form)&$89$&1TEN&39.11&1.14\\
\hline
Twitchin Ig repeat 27&$89$&1TIT&44.85&0.66\\
\hline
S6&$97$&1RIS&48.69&1.31\\
\hline
FKBP12&$107$&1FKB&95.52&3.85\\
\hline
Ribonuclease A&$124$&1A5P&69.05&2.84\\
\hline
\end{tabular}
\linespread{0.8}
\caption{List of 23 proteins used in the simulations.
(a) The NS for use in the Go model is obtained from the structures deposited in the Protein Data Bank. (b) $\Omega _c$ is calculated
using equation (\ref{cooper_index_eq}).
(c) $2\,\delta \Omega _c = |\Omega _c - \Omega _{c_1}| + |\Omega _c - \Omega _{c_2}|$, where $\Omega _{c_1}$ and $\Omega _{c_2}$ are
values of the cooperativity measure obtained by retaining only one-half the conformations used to compute $\Omega _c$.}
\label{scalling_table1}
\vspace{3 mm}
}
\end{table}
Previous studies \cite{Klimov_JCP98}
have shown that there is a correlation between folding
rates and $Z$-score which can be defined as
\begin{equation}
Z_G \; = \; \frac{G_N - <G_U>}{\sigma} ,
\label{Zscore_eq}
\end{equation}
where $G_N$ is the free energy of the NS, $<G_U>$ is the average free
energy of the unfolded states and $\sigma$ is the dispersion in the free
energy of the unfolded states. From the fluctuation formula it follows that
$\sigma = \sqrt{k_BT^2C_p}$ so that
\begin{equation}
Z_G \; = \; \frac{\Delta G}{\sqrt{k_BT^2C_p}} .
\label{Zscore1_eq}
\end{equation}
Since $\Delta G$ and $C_p$ are extensive it follows that $Z_G \sim N^{1/2}$.
This observation establishes an intrinsic connection between the
thermodynamics and kinetics of protein folding that involves formation and
rearrangement of non-covalent interactions. In an interesting
recent note \cite{Naganathan_JACS05}
it has been argued that the finding
$\Delta G^{\ddagger} \sim k_BT\sqrt{N}$ can be
interpreted in terms of $n_{\sigma}$ in which $\Delta G$ in
Eq. (\ref{Zscore1_eq}) is replaced by $\Delta H$. In either case, there
appears to be a thermodynamic rationale for the sublinear scaling
of the folding free energy barrier.
\begin{table}[!htbp]
{\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Protein&$N$&$\Omega_c^a$&$\delta \Omega_c^b$& &Protein&$N$&$\Omega_c^a$&$\delta \Omega_c^b$ \\
\cline{1-4} \cline{6-9}
BH8 $\beta$-hairpin \cite{Dyer}&12&12.9&0.5& &SS07d \cite{Knapp_JMB96}&64&555.2&56.2\\
\cline{1-4} \cline{6-9}
HP1 $\beta$-hairpin \cite{Xu_JACS03}&15&8.9&0.1& &CI2 \cite{Jackson_Biochemistry91}&65&691.2&17.0 \\
\cline{1-4} \cline{6-9}
MrH3a $\beta$-hairpin \cite{Dyer}&16&54.1&6.2& &CspTm \cite{Wassenberg_JMB99} &66&558.2&56.3 \\
\cline{1-4} \cline{6-9}
$\beta$-hairpin \cite{Honda_JMB00}&16&33.8&7.4& &Btk SH3 \cite{Knapp_Proteins98} &67&316.4&25.9\\
\cline{1-4} \cline{6-9}
Trp-cage protein \cite{Qui_JACS02}&20&24.8&5.1& &binary pattern protein \cite{Roy_Biochemistry00} &74&273.9&30.5 \\
\cline{1-4} \cline{6-9}
$\alpha$-helix \cite{Williams_Biochemistry96}&21&23.5&7.9& &ADA2h \cite{Villegas_Biochemistry95} &80&332.0&35.2\\
\cline{1-4} \cline{6-9}
villin headpeace \cite{Kubelka_JMB03}&35&112.2&9.6& &hbLBD \cite{Naik_ProtSc04} &87&903.1&11.1 \\
\cline{1-4} \cline{6-9}
FBP28 WW domain$^c$ \cite{Ferguson_PNAS01}&37&107.1&8.9& &tenascin Fn3 domain \cite{Clarke_JMB97} &91&842.4&56.6\\
\cline{1-4} \cline{6-9}
FBP28 W30A WW domain$^c$ \cite{Ferguson_PNAS01} &37&90.4&8.8& &Sa RNase \cite{Pace_JMB98} &96&1651.1&166.6 \\
\cline{1-4} \cline{6-9}
WW prototype$^c$ \cite{Ferguson_PNAS01}&38&93.8&8.4& &Sa3 RNase \cite{Pace_JMB98}&97&852.7&86.0\\
\cline{1-4} \cline{6-9}
YAP WW$^c$ \cite{Ferguson_PNAS01}&40&96.9&18.5& &HPr \cite{VanNuland_Biochemistry98}&98&975.6&61.9 \\
\cline{1-4} \cline{6-9}
BBL \cite{Ferguson_p}&47&128.2&18.0& &Sa2 RNase \cite{Pace_JMB98} &99&1535.0&156.9 \\
\cline{1-4} \cline{6-9}
PSBD domain \cite{Ferguson_p}&47&282.8&24.0& &barnase \cite{Martinez_Biochemistry_94}&110&2860.1&286.0 \\
\cline{1-4} \cline{6-9}
PSBD domain \cite{Ferguson_p}&50&176.2&13.0& &RNase A \cite{Arnold_Biochemistry97}&125&3038.5&42.6 \\
\cline{1-4} \cline{6-9}
hbSBD \cite{Kouza_BJ05} &52&71.8&6.3& &RNase B \cite{Arnold_Biochemistry97}&125&3038.4&87.5\\
\cline{1-4} \cline{6-9}
B1 domain of protein G \cite{Alexander_Biochemistry92} &56&525.7&12.5& &lysozyme \cite{Hirai_JPC99} &129&1014.1&187.3 \\
\cline{1-4} \cline{6-9}
B2 domain of protein G \cite{Alexander_Biochemistry92} &56&468.4&20.0& &interleukin-1$\beta$ \cite{Makhatadze_Biochemistry94} &153&1189.6&128.6\\\hline
\end{tabular}
}
\linespread{0.8}
\caption{List of 34 proteins for which $\Omega _c$ is calculated
using experimental data. The calculated $\Omega _c$ values from experiments
are significantly larger than those obtained using the Go models (see Table \ref{scalling_table1}).
a) $\Omega _c$ is computed at $T = T_F = T_m$ using the experimental values
of $\Delta H$ and $T_m$.
b) The error $\delta \Omega_c$ is computed using the procedure given in \cite{MSLi_PRL04,Gutin_PRL96}.
c) Data are averaged over two salt conditions at pH 7.0.
\vspace{5 mm}}
\label{scalling_table2}
\end{table}
\subsection{Conclusions}
We have reexamined the dependence of the extent of cooperativity on
$N$ using lattice models with side chains, off-lattice models, and experimental data on thermal denaturation.
The finding that $\Omega _c \sim N^{\zeta}$ at $T \approx T_F$ with $\zeta > 2$
provides additional support for the earlier theoretical predictions \cite{MSLi_PRL04}. More
importantly, the present work also shows that the theoretical value for
$\zeta$ is independent of the precise model used which implies that $\zeta$
is universal. It is surprising to find such general characteristics for
proteins for which specificity is often an important property. We should note
that accurate values of $\zeta$ and $\Omega _c$ can only be obtained using
more refined models that perhaps include a desolvation
penalty \cite{Kaya_JMB03,Cheung_PNAS02}.
In accord with a number of theoretical predictions
\cite{Thirumalai_JPI95,Finkelstein_FD97,Wolynes_PNAS97,Gutin_PRL96,Li_JPCB02,Koga_JMB01}
we found that the folding free energy barrier scales only sublinearly
with $N$. The relatively small barrier is in accord with the marginal stability
of proteins. Since the barrier to global unfolding is relatively small, it
follows that there must be large conformational fluctuations even when the
protein is in the NBA. Indeed, recent experiments show that such dynamical
fluctuations that are localized in various regions of a monomeric protein might
play an important functional role. These observations suggest that small barriers in proteins and RNA \cite{Hyeon_Biochemistry05}
might be an evolved characteristic of all natural sequences.
\newpage
\begin{center} \section{Folding of the protein hbSBD} \end{center}
\subsection{Introduction}
Understanding the dynamics and mechanism of protein
folding remains one of the most challenging problems in molecular
biology \cite{Dagget_Trends03}. Single-domain $\alpha$ proteins attract
much attention because most of them fold faster
than $\beta$ and $\alpha\beta$ proteins \cite{Jackson_FD98,Kubelka_COSB04}
due to their relatively simple energy landscapes, and one can, therefore,
use them to probe the main aspects of the funnel theory
\cite{Bryngelson_Proteins1995}. Recently, the study of this class of proteins
has become even more attractive because one-state or downhill
folding may occur in some small $\alpha$-proteins
\cite{Munoz_Science02}.
The mammalian mitochondrial branched-chain $\alpha$-ketoacid
dehydrogenase (BCKD) complex catalyzes the oxidative decarboxylation
of branched-chain $\alpha$-ketoacids derived from leucine, isoleucine
and valine to give rise to branched-chain acyl-CoAs.
In patients with inherited maple syrup urine disease,
the activity of the BCKD complex is deficient, which
is manifested by often fatal acidosis and mental retardation \cite{ccf1}.
The BCKD multi-enzyme complex (4,000 KDa in size) is organized about
a cubic 24-mer core of dihydrolipoyl transacylase (E2), with multiple
copies of hetero-tetrameric decarboxylase (E1), a homodimeric
dihydrogenase (E3), a kinase (BCK) and a phosphatase attached
through ionic interactions. The E2 chain of the human BCKD
complex, similar to other related multi-functional enzymes
\cite{ccf2}, consists of three domains: The amino-terminal
lipoyl-bearing domain (hbLBD, 1-84), the interim E1/E3
subunit-binding domain (hbSBD, 104-152) and the carboxy-terminal
inner-core domain. The structures of these domains serve as bases
for modeling interactions of the E2 component with other
components of $\alpha$-ketoacid dehydrogenase complexes. The
structure of hbSBD (Fig. \ref{hbSBD_12}a) has been determined by NMR
spectroscopy, and the main function of the hbSBD is to attach both
E1 and E3 to the E2 core \cite{ccf3}. The two-helix structure of
this domain is reminiscent of the small protein BBL
\cite{Ferguson_JMB04} which may be a good candidate for observation of
downhill folding \cite{Munoz_Science02,Munoz_JACS04}. So the study of hbSBD is
interesting not only because of the important biological role of
the BCKD complex in human metabolism but also for illuminating
folding mechanisms.
From the biological point of view, hbSBD could be less stable than
hbLBD, and one of our goals is, therefore, to check this by
CD experiments. In this chapter we study the
thermal folding-unfolding transition of hbSBD by the CD
technique in the absence of urea and at pH 7.5. Our thermodynamic
data do not show evidence for the downhill folding and they are
well fitted by the two-state model. We obtained folding
temperature $ T_F = 317.8 \pm 1.95$ K and the transition enthalpy
$\Delta H_G = 19.67 \pm 2.67$ kcal/mol. Comparison of such
thermodynamic parameters of hbSBD with those for hbLBD shows that
hbSBD is indeed less stable as required by its biological
function. However, the value of $\Delta H_G$ for hbSBD is still
higher than those of two-state $\alpha$-proteins reported in
\cite{Eaton_COSB04}, which indicates that the folding process in the
hbSBD domain is highly cooperative.
\begin{figure}
\epsfxsize=5.2in
\vspace{5 mm}
\centerline{\epsffile{./Figs/hbSBD_12.eps}}
\linespread{0.8}
\caption{ (a) Ribbon representation of the structure
of the hbSBD domain. The helix regions H$_{1}$ and H$_2$ include residues
Pro12 - Glu20 and Lys39 - Glu47, respectively. (b) Dependence of the mean residue molar ellipticity on the wavelength for 18 values of temperature between 278 and 363 K.}
\label{hbSBD_12}
\vspace{5 mm}
\end{figure}
From the theoretical point of view it is very interesting to
establish if the two-state foldability of hbSBD can be captured by
some model. The all-atom model would be the best choice for a
detailed description of the system but the study of hbSBD requires
very expensive CPU simulations. Therefore we employed the
off-lattice coarse-grained Go-like model \cite{Go_ARBB83,Clementi_JMB00}
which is simple and allows for a thorough characterization of
folding properties. In this model amino acids are represented by
point particles or beads located at positions of $C_{\alpha}$
atoms. The Go model is defined through the experimentally
determined native structure \cite{ccf3}, and it captures essential
aspects of the important role played by the native structure
\cite{Clementi_JMB00,Takada_PNAS1999}.
It should be noted that
the Go model by itself cannot be employed to ascertain the two-state behavior
of proteins.
However, one can use it in conjunction with experiments that establish
two-state folding, because this model does not {\it always} produce
two-state behavior, as has been clearly shown in the seminal work
of Clementi {\it et al.} \cite{Clementi_JMB00}. In fact,
the Go model correctly captures not only the two-state folding of
proteins CI2 and SH3 (more two-state Go folders may be found
in Ref. \cite{Koga_JMB01})
but also intermediates of the three-state folder
barnase, RNAse H and CheY \cite{Clementi_JMB00}.
The reason for this is that
the simple Go model ignores the energetic frustration but it still takes
the topological frustration into account.
Therefore, it can capture intermediates
that occur due to topological constraints but not those
emerging from the frustration of the contact interactions.
With the help of Langevin dynamics
simulations and the histogram method \cite{Ferrenberg_PRL89} we have
shown that, in agreement with our CD data, hbSBD is a two-state
folder with a well-defined TS in the free
energy landscape. The two helix regions were found to be
highly structured in the TS. The two-state behavior of hbSBD
is also supported by our kinetics study
showing that the folding kinetics follows the single exponential scenario.
The two-state folding obtained in our simulations suggests that for hbSBD
the topological frustration is more important than the energetic factor.
The dimensionless quantity, $\Omega _c$
\cite{Klimov_FD98}, which characterizes the structural
cooperativity of the thermal denaturation transition was computed
and the reasonable agreement between the CD experiments and Go
simulations was obtained. Incorporation of side chains may give a
better agreement \cite{Klimov_FD98,Li_Physica05} but this
problem is beyond the scope of the thesis.
The material presented in this chapter is based on our work \cite{Kouza_BJ05}.
\subsection{Materials and Methods}
\subsubsection{Sample Preparation}
hbSBD protein was purified from the BL21(DE3) strain of
\textit{E. coli }containing a plasmid that carried the gene of
hbLBD(1-84), a TEV cleavage site in the linker region, and hbSBD
(104-152), generously provided to us by Dr. D.T. Chuang of the
Southwestern Medical Center, University of Texas. There is an
extra glycine in front of Glu104 which is left over after TEV
cleavage, and extra leucine and
glutamic acid residues at the C-terminus before six histidine residues.
The protein was purified by Ni-NTA affinity chromatography, and the
purity of the protein was found to be better than 95\%,
based on the Coomassie blue-stained gel. The complete sequence of
$N=52$ residues for hbSBD is\\
(G)EIKGRKTLATPAVRRLAMENNIKLSEVVGSGKDGRILKEDILNYLEKQT(L)(E).
\subsubsection{Circular Dichroism}
CD measurements were carried out in an Aviv model 202 CD spectrometer
with temperature and stir control units, with spectra recorded
from 260 nm to 195 nm at a series of temperatures. All experiments
were carried out at a 1 nm bandwidth
in a 1.0 cm quartz square cuvette thermostated to $\pm 0.1^o$C.
The protein concentration ($\sim 50\ \mu$M) was determined by UV absorbance at 280 nm
using $\epsilon _{280nm}$=1280 M$^{-1}$cm$^{-1}$, in 50 mM phosphate buffer at pH 7.5.
Temperature control was achieved using a circulating water bath system.
The data were collected at 2 K increments in temperature, at a heating
rate of 10$^{o}$C/min and with an equilibration time of 3 minutes at
each temperature point. Volume changes due to thermal expansion as well
as evaporation of water were neglected.
\subsubsection{Fitting Procedure}
Assuming that the thermal denaturation is a two-state
transition, we can write the ellipticity as
\begin{equation}
\theta \; \, = \; \, \theta _D + (\theta _N - \theta _D)f_N \, ,
\label{theta_fN_eq}
\end{equation}
where $\theta _D$ and $\theta _N$ are the values for the denatured and folded states, respectively. The fraction of the folded conformation
$f_N$ is expressed as \cite{Privalov_APC79}
\begin{eqnarray}
f_N \; \, &=& \; \, \frac{1}{1 + \exp (-\Delta G_T/T)} \, ,\nonumber \\
\Delta G_T \; \, &=& \; \, \Delta H_T - T\Delta S_T \; = \;
\Delta H_G\left(1 - \frac{T}{T_G}\right) \nonumber \\
&+&\Delta C_p \left[(T-T_G) - T \ln \frac{T}{T_G}\right] \, .
\label{fN_twostate_eq}
\end{eqnarray}
Here $\Delta H_G$ and $\Delta C_p$ are jumps of the enthalpy
and heat capacity at the mid-point temperature $T_G$ (also known
as melting or
folding temperature) of thermal transition, respectively.
Other thermodynamic characterizations of stability,
such as the temperature of maximum stability ($T_S$), the
temperature of zero enthalpy ($T_H$), and the conformational
stability ($\Delta G_S$) at $T_S$, can be computed
from the results of regression analysis \cite{Becktel_Biopolymers87}:
\begin{eqnarray}
\ln \frac{T_G}{T_S} \; &=& \; \frac{\Delta H_G}{T_G\Delta C_p}, \\
T_H \; &=& \; T_G - \frac{\Delta H_G}{\Delta C_p}, \\
\Delta G_S \; &=& \; \Delta C_p (T_S - T_H).
\label{parameters_eq}
\end{eqnarray}
Using Eqs. (\ref{theta_fN_eq}) - (\ref{parameters_eq})
we can obtain all thermodynamic parameters from CD data.
It should be noted that the fitting of Eq. (\ref{fN_twostate_eq})
with $\Delta C_p > 0$ allows for an additional cold denaturation
\cite{Privalov_CRBMB90} at temperatures much lower than room
temperature. The temperature of such a transition, $T_G'$, may be
obtained by the same fitting procedure with an additional
constraint of $\Delta H_G <0$. Since the cold denaturation
transition is not seen in Go models, to compare the simulation
results to the experimental ones we also use the approximation in
which $\Delta C_p=0$.
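The relations above can be checked numerically. A minimal sketch (Python) with the gas constant $R$ written explicitly (the compact notation $\Delta G_T/T$ absorbs it), using the hbSBD fit parameters quoted later in the text:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol K)

def delta_G(T, dH_G, T_G, dCp=0.0):
    """Delta G_T from Eq. (fN_twostate_eq), in kcal/mol."""
    return dH_G * (1.0 - T / T_G) + dCp * ((T - T_G) - T * math.log(T / T_G))

def f_native(T, dH_G, T_G, dCp=0.0):
    """Two-state native fraction f_N(T); f_N = 1/2 at T = T_G."""
    return 1.0 / (1.0 + math.exp(-delta_G(T, dH_G, T_G, dCp) / (R * T)))

def stability_parameters(dH_G, T_G, dCp):
    """T_S, T_H and Delta G_S from the regression relations."""
    T_S = T_G * math.exp(-dH_G / (T_G * dCp))  # ln(T_G/T_S) = dH_G/(T_G dCp)
    T_H = T_G - dH_G / dCp
    dG_S = dCp * (T_S - T_H)
    return T_S, T_H, dG_S

# hbSBD fit: dH_G = 19.67 kcal/mol, T_G = 317.8 K, dCp = 0.387 kcal/(mol K);
# reproduces T_S ~ 271 K, T_H ~ 267 K, dG_S ~ 1.5 kcal/mol
T_S, T_H, dG_S = stability_parameters(19.67, 317.8, 0.387)
```

Plugging in the fitted hbSBD values recovers, to within rounding, the derived quantities listed in the results below.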
\subsubsection{Simulation}
We use a coarse-grained continuum representation of the hbSBD
protein, in which only the positions of the 52 C$_{\alpha}$ carbons
are retained. We adopt the off-lattice version of the Go model
\cite{Go_ARBB83}, where the interaction between residues forming native
contacts is assumed to be attractive and the non-native
interactions repulsive (Eq. \ref{Hamiltonian}).
The nativeness of any configuration is measured by the number of
native contacts $Q$. We define the $i$th and $j$th residues
to be in native contact if $r_{0ij}$ is smaller than a cutoff
distance $d_c$, taken to be $d_c = 7.5$ \AA,
where $r_{0ij}$ is the distance between the $i$th and $j$th residues in
the native conformation. Using this definition and the native
conformation of Ref. \onlinecite{ccf3}, we found that the total
number of native contacts $Q_{total} = 62$. To study the
probability of being in the NS we use the following
overlap function as in Eq. (\ref{chi_eq_Go}).
The overlap function $\chi$, which is one if the
conformation of the polypeptide chain coincides with the native
structure and zero for unfolded conformations, can serve as an
order parameter for the folding-unfolding transition. The
probability of being in the NS, $f_N$, which can be
measured by the CD and other experimental techniques, is defined
as $f_N = <\chi>$, where $<...>$ stands for a thermal average.
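For illustration, the native contact map and an overlap of this kind can be computed directly from coordinates. A minimal sketch (Python; the $|i-j|$ exclusion and the distance tolerance used in $\chi$ are assumptions of this sketch, not taken from the text):

```python
import itertools
import math

D_C = 7.5  # cutoff distance d_c in Angstrom

def native_contacts(native_coords, cutoff=D_C, min_sep=2):
    """Pairs (i, j) in native contact: C-alpha distance in the
    native structure below the cutoff, with |i - j| >= min_sep
    to exclude trivially bonded neighbours."""
    contacts = set()
    for i, j in itertools.combinations(range(len(native_coords)), 2):
        if j - i >= min_sep and math.dist(native_coords[i], native_coords[j]) < cutoff:
            contacts.add((i, j))
    return contacts

def overlap_chi(coords, native_set, cutoff=D_C, tol=1.2):
    """Fraction of native contacts present in a conformation:
    1 for the native structure, near 0 for unfolded chains."""
    hit = sum(1 for i, j in native_set
              if math.dist(coords[i], coords[j]) < tol * cutoff)
    return hit / len(native_set)
```

Evaluating `overlap_chi` on the native coordinates themselves returns 1 by construction, while an extended chain gives a value near zero.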
The dynamics of the system is obtained by integrating the following Langevin
equation \cite{Allen_book} (Eq. \ref{Langevin_eq}). The Verlet algorithm \cite{Swope_JCP82} was employed.
It should be noted that the folding thermodynamics
does not depend on the environment viscosity (or on $\zeta$)
but the folding kinetics depends
on it \cite{Klimov_PRL97}. We chose the dimensionless parameter
$\tilde{\zeta} = (\frac{a^2}{m\epsilon_H})^{1/2}\zeta = 8$, where
$m$ is the mass of a bead and $a$ is the bond length between successive beads.
One can show that this value of $\tilde{\zeta}$ belongs to the
interval of the viscosity where the folding
kinetics is fast. We have tried other values of $\tilde{\zeta}$,
but the results
remain qualitatively unchanged.
All thermodynamic quantities are obtained by the histogram method
\cite{Ferrenberg_PRL89}.
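A minimal sketch of one Langevin step for a single coordinate (Python). This is a simplified Euler-type update, not the Verlet scheme actually used; the random-force amplitude follows the standard fluctuation-dissipation relation $\langle\Gamma^2\rangle = 2\zeta k_BT/\Delta t$:

```python
import math
import random

def langevin_step(x, v, force, dt, m=1.0, zeta=8.0, kT=0.7, rng=random):
    """One Euler-type Langevin step:  m dv/dt = F(x) - zeta*v + Gamma,
    with Gaussian random force of variance 2*zeta*kT/dt."""
    gamma = rng.gauss(0.0, math.sqrt(2.0 * zeta * kT / dt))
    v = v + dt * (force(x) - zeta * v + gamma) / m
    x = x + dt * v
    return x, v

# deterministic check (kT = 0): a damped harmonic oscillator relaxes to x = 0
x, v = 1.0, 0.0
for _ in range(2000):
    x, v = langevin_step(x, v, lambda x: -x, dt=0.01, kT=0.0)
```

With the noise switched off the trajectory simply decays toward the minimum, which is a convenient sanity check before adding the thermal term.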
\subsection{Results}
\subsubsection{CD Experiments}
\noindent
The structure of hbSBD is shown in Figure \ref{hbSBD_12}a.
Its conformational stability is investigated
in the present study by analyzing the unfolding transition induced by
temperature as monitored
by CD, similar to that described previously \cite{Naik_FEBS02,Naik_ProtSc04}.
The reversibility of thermal denaturation was ascertained by monitoring
the return of the CD signal upon cooling from 95$^{o}$C to 22$^{o}$C
immediately after the conclusion of the thermal transition.
The transition was found to be more than 80\% reversible.
A greater loss of reversibility was observed on prolonged
exposure of the sample to higher temperatures.
This loss of reversibility is presumably due to irreversible
aggregation or decomposition. Figure \ref{hbSBD_12}b
shows the wavelength dependence of mean residue molar
ellipticity of hbSBD at various temperatures between 278K and 363K.
In a separate study, the thermal unfolding transition as monitored
by ellipticity at 228 nm was found to be independent of hbSBD
concentration in the range of 2 uM to 10 uM. It was also found to be
unaffected by changes in the heating rate between 2$^{o}$C/min and 20$^{o}$C/min.
These observations suggest the absence of stable intermediates in heat-induced
denaturation of hbSBD. A valley at around 220 nm,
characteristic of helical secondary structure, is evident for
hbSBD.
Figure \ref{hbSBD_345}a shows the temperature dependence of the
population of the native conformation, $f_N$, for wavelengths
$\lambda = 208, 212$ and 222 nm. We first try to fit these data to
Eq. (\ref{fN_twostate_eq}) with $\Delta C_p \ne 0$. The fitting
procedure gives slightly different values for the folding (or
melting) temperature and the enthalpy jump for three sets of
parameters. Averaging over three values, we obtain $ T_G = 317.8
\pm 1.95$ K and $\Delta H_G = 19.67 \pm 2.67$ kcal/mol. Other
thermodynamic quantities are shown on the first row of Table \ref{hbSBD_table1}.
The similar fit but with $\Delta C_p=0$ gives the
thermodynamic parameters shown on the second row of this table.
Since the experimental data are nicely fitted by the two-state
model, we expect that the downhill scenario does not apply to the
hbSBD domain.
\begin{figure}
\epsfxsize=5.2in
\vspace{5 mm}
\centerline{\epsffile{./Figs/hbSBD_345.eps}}
\linespread{0.8}
\caption{ (a) Temperature dependence of the fraction
of folded conformations $f_N$, obtained from the ellipticity
$\theta$ by Eq. (\ref{fN_twostate_eq}), for wave lengths $\lambda$
= 208 (blue circles), 212 (red squares) and 222 nm (green
diamonds). The solid lines correspond to the two-state fit given
by Eq. (\ref{fN_twostate_eq}) with $\Delta C_p \ne 0$. We obtained
$T_G=T_F= 317.8 \pm 1.9$ K, $\Delta H_G = 19.67 \pm 2.67$ kcal/mol
and $\Delta C_p = 0.387\pm0.054$ kcal/mol/K. (b) The temperature dependence of $f_N$ for various sets
of parameters. The blue and red curves correspond to the
thermodynamic parameters presented on the first and the second
rows of Table \ref{hbSBD_table1}, respectively. Open circles refer to simulation
results for the Go model. The solid black curve is the two-state
fit ($\Delta C_p=0$) which gives $\Delta H_G= 11.46$ kcal/mol and
$T_F=317.9$. (c) The upper part refers to the temperature
dependence of $df_N/dT$ obtained by the simulations (red) and the
CD experiments (blue). The experimental curve is plotted using
two-state parameters with $\Delta C_p = 0$ (see, the second row on
Table \ref{hbSBD_table1}). The temperature dependence of the heat capacity $C_V(T)$
is presented in the lower part. The dotted lines illustrate the
baseline subtraction. The results are averaged over 20 samples. }
\label{hbSBD_345}
\vspace{5 mm}
\end{figure}
For the experimentally studied temperature interval two types of
the two-state fit (\ref{fN_twostate_eq}) with $\Delta C_p=0$ and
$\Delta C_p \ne 0$ give almost the same values for $T_G$, $\Delta
H_G$ and $\Delta S_G$. However, pronouncedly different behaviors of
the population of the native basin, $f_N$, occur when we
extrapolate the results to the low-temperature region (Fig. \ref{hbSBD_345}b).
For the $\Delta C_p=0$ case, $f_N$ approaches
unity as $T \rightarrow 0$, but it goes down for $\Delta C_p
\ne 0$. This means that the $\Delta C_p \ne 0$ fit is valid if a
second, cold denaturation transition occurs at $T_G'$. This
phenomenon was observed in single domains as well as in
multi-domain globular proteins \cite{Privalov_CRBMB90}. We predict that
the cold denaturation of hbSBD may take place at $T_G' \approx
212$ K which is lower than $T_G' \approx 249.8$ K for hbLBD
shown on the 4th row of Table \ref{hbSBD_table1}.
It would be of great interest to carry out the cold denaturation
experiments in cryo-solvent to elucidate this issue.
To compare the stability of the hbSBD domain with the hbLBD domain
which has been studied in detail previously \cite{Naik_ProtSc04} we also
present the thermodynamic data of the latter on Table \ref{hbSBD_table1}. Clearly,
hbSBD is less stable than hbLBD by its smaller $\Delta G_S$ and
lower $T_G$ values. This is consistent with their respective
backbone dynamics as revealed by $^{15}$N-T$_1$, $^{15}$N-T$_2$,
and $^{15}$N-$^1$H NOE studies of these two domains using
uniformly $^{15}$N-labeled protein samples (Chang and Huang,
unpublished results). Biologically, hbSBD must bind to either E1
or E3 at different stages of the catalytic cycle, thus it needs to
be flexible to adapt to local environments of the active sites of
E1 and E3. On the other hand, the function of hbLBD is to permit
its Lys44 residue to channel acetyl group between donor and
acceptor molecules and only the Lys44 residue needs to be flexible
\cite{Chang_JBC02}. In addition, the NMR observation for the
longer fragment (comprising residues 1-168 of the E2 component)
also showed that the hbLBD region would remain structured after
several months while the hbSBD domain could degrade in a shorter
time.
\begin{center}\begin{table}[htbp]
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
&&\scriptsize$\Delta{H}_G$&\scriptsize$\Delta{C_p}$&\scriptsize$\Delta{S_G}$&&&\scriptsize$\Delta{G_S}$&\\
\scriptsize Domain&\scriptsize $T_G(K)$&\scriptsize (kcal/mol)&\scriptsize (kcal/mol/K)&\scriptsize(cal/mol/K)&\scriptsize$T_S(K)$&\scriptsize$T_H(K)$&\scriptsize(kcal/mol)&\scriptsize$T'_G(K)$\\
\hline
\scriptsize SBD(exp)&\scriptsize$317.8\pm1.9$&\scriptsize$19.67\pm2.67$&\scriptsize$0.387\pm0.054$&\scriptsize$61.64\pm7.36$&\scriptsize$270.9\pm2.0$&\scriptsize$267.0\pm2.1$&\scriptsize$1.4\pm0.1$&\scriptsize$212\pm2.5$\\
\hline
\scriptsize SBD(exp)&\scriptsize$317.9\pm2.2$&\scriptsize$20.02\pm3.11$&\scriptsize$0.0$&\scriptsize$62.96\pm9.92$&\scriptsize$-$&\scriptsize$-$&\scriptsize$-$&\scriptsize-\\
\hline
\scriptsize SBD(sim)&\scriptsize$317.9\pm7.95$&\scriptsize$11.46\pm0.29$&\scriptsize$0.0$&\scriptsize$36.05\pm1.85$&\scriptsize$-$&\scriptsize$-$&\scriptsize$-$&\scriptsize-\\
\hline
\scriptsize LBD(exp)&\scriptsize$344.0\pm0.2$&\scriptsize$78.96\pm1.28$&\scriptsize$1.51\pm0.04$&\scriptsize$229.5\pm3.7$&\scriptsize$295.7\pm3.7$&\scriptsize$291.9\pm1.3$&\scriptsize$5.7\pm0.2$&\scriptsize$249.8\pm1.1$\\
\hline
\end{tabular}
\linespread{0.8}
\caption{Thermodynamic parameters obtained from
the CD experiments and simulations for hbSBD domain. The results
shown on the first and fourth rows were obtained by fitting
experimental data to the two-state equation (\ref{fN_twostate_eq})
with $\Delta C_p \ne 0$. The second and third rows correspond
to the fit with $\Delta C_p = 0$. The results for hbLBD are taken
from Ref. \cite{Naik_ProtSc04} for comparison.}
\label{hbSBD_table1}
\end{table}
\end{center}
\subsubsection{Folding Thermodynamics from simulations}
In order to calculate the thermodynamic quantities we have collected
histograms for the energy and native contacts
at six values of temperature: $T = 0.4, 0.5, 0.6, 0.7,0.8$
and 1.0 $\epsilon_H/k_B$. For sampling,
at each temperature 30 trajectories
of $16\times 10^7$ time steps have been generated with initial
$4\times 10^7$ steps discarded for thermalization.
The reweighting histogram method \cite{Ferrenberg_PRL89} was used
to obtain the thermodynamics parameters at all temperatures.
Figure \ref{hbSBD_345}b (open circles) shows the temperature
dependence of population of the NS, defined as the
renormalized number of native contacts
for the Go model. Since there is no cold denaturation for this model,
to obtain the thermodynamic parameters we fit $f_N$ to the
two-state model (Eq. \ref{fN_twostate_eq}) with $\Delta C_p=0$.
The fit (black curve) works rather well around the transition
temperature but becomes worse at high $T$ due to the slow decay of
$f_N$, which is characteristic of almost all theoretical
models. In fitting we have chosen the hydrogen bond energy
$\epsilon_H = 0.91$ kcal/mol in Hamiltonian (\ref{Hamiltonian}) so
that $T_G = 0.7 \epsilon_H/k_B$ coincides with
the experimental value 317.8 K. From the
fit we obtain $\Delta H_G = 11.46$ kcal/mol which is smaller than
the experimental value indicating that the Go model is less
stable compared to the real hbSBD.
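The single-histogram reweighting underlying this analysis can be sketched in a few lines (Python; $k_B = 1$ and a log-sum-exp shift for numerical stability; a simplified illustration of the method, not the production code):

```python
import math

def reweight(energies, values, T0, T):
    """Single-histogram estimate of <A>_T from samples drawn at T0
    (Ferrenberg-Swendsen).  Weights exp[-(1/T - 1/T0) E] are shifted
    by their maximum before exponentiation for numerical stability."""
    dbeta = 1.0 / T - 1.0 / T0
    logw = [-dbeta * e for e in energies]
    shift = max(logw)
    w = [math.exp(lw - shift) for lw in logw]
    return sum(wi * a for wi, a in zip(w, values)) / sum(w)
```

At $T = T_0$ the weights are uniform and the estimate reduces to the plain sample mean; lowering $T$ shifts the weight toward low-energy samples.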
Figure \ref{hbSBD_345}c shows the temperature dependence of derivative of the
fraction of native contacts with respect to temperature $df_N/dT$
and the
specific heat $C_v$ obtained from the Go simulations. The collapse
temperature $T_{\theta}$, defined as the temperature at which
$C_v$ is maximal, almost coincides with the folding temperature
$T_F$ (at $T_F$ the structural susceptibility has maximum).
According to Klimov and Thirumalai \cite{Klimov_PRL96},
the dimensionless parameter $\sigma = \frac{|T_{\theta}-T_F|}{T_F}$
may serve as an indicator of the foldability of proteins. Namely,
sequences with $\sigma \leq 0.1$ fold much faster than
those which have the same number of residues but with $\sigma$
exceeding 0.5. From this perspective, having $\sigma \approx 0$, hbSBD
is supposed to be a
good folder {\it in silico}. However, one has to be cautious about
this conclusion because
the pronounced correlation between folding times $\tau _F$
and the equilibrium parameter
$\sigma$, observed for simple on- and off-lattice models
\cite{Klimov_PRL96,Veitshans_FD97} may not be valid for proteins in the laboratory
\cite{Gillepse_ARB04}. In our opinion, since the data collected from
theoretical and
experimental studies are limited, further studies are required to
clarify the relationship between $\tau _F$ and $\sigma$.
Using experimental values for
$T_G$ (as $T_F$) and $\Delta H_G$ and the two-state model with $\Delta C_p
=0$ (see Table \ref{hbSBD_table1}) we can obtain the temperature dependence of the
population of NS $f_N$ and, therefore, $df_N/dT$ for
hbSBD (Fig. \ref{hbSBD_345}c). Clearly, the folding-unfolding transition
{\it in vitro} is sharper than
in the Go modeling. One possible reason is that our Go
model ignores the side chains, which can enhance the cooperativity of
the denaturation transition \cite{Klimov_FD98}.
The sharpness of the folding-unfolding transition might be characterized
quantitatively via the cooperativity index $\Omega _c$ (Eq. \ref{cooper_index_eq}).
From Fig. \ref{hbSBD_345}c, we obtain
$\Omega_c = 51.6$ and 71.3 for the Go model and CD experiments,
respectively. Given the simplicity of the Go model used here the
agreement in $\Omega_c$ should be considered reasonable. We can
also estimate $\Omega _c$ from the scaling law suggested in Ref.
\onlinecite{MSLi_PRL04}, $\Omega _c = 0.0057 \times N^{\mu}$, where
the exponent $\mu$ is universal and expressed via the random walk
susceptibility exponent $\gamma$ as $\mu = 1+\gamma
\approx 2.22$ ($\gamma \approx 1.22$). Then we get $\Omega_c \approx 36.7$, which is lower
than the experimental as well as the simulation result. This means
that hbSBD {\em in vitro} is, on average, more cooperative than
other two-state folders.
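The scaling-law estimate quoted above is straightforward to evaluate; a one-line sketch (Python):

```python
def omega_c_scaling(N, amplitude=0.0057, mu=2.22):
    """Expected cooperativity Omega_c = 0.0057 * N**mu, mu = 1 + gamma."""
    return amplitude * N ** mu

# for hbSBD, N = 52 residues: reproduces Omega_c of approximately 36.7
omega_hbsbd = omega_c_scaling(52)
```

Comparing this baseline with the measured values (51.6 from the Go model, 71.3 from CD) quantifies how much more cooperative hbSBD is than the scaling law would suggest.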
Another measure for the cooperativity is
$\kappa _2$ which is defined as \cite{Kaya_PRL00}
$\kappa _2 = \Delta H_{vh}/\Delta H_{cal}$, where
$\Delta H_{vh} \; = \; 2T_{max}\sqrt{k_B C_V(T_{max})}$
and $\Delta H_{cal} \; = \; \int_0^{\infty} C_V(T)dT$,
are the van't Hoff and the calorimetric enthalpies, respectively,
and $C_V(T)$ is the specific heat. Without the baseline subtraction
in $C_V(T)$ \cite{Chan_ME04}, for the Go model of hbSBD we
obtained $\kappa _2 \approx 0.25$. Applying the baseline
subtraction
as shown in the lower part of Fig. \ref{hbSBD_345}c
we got $\kappa _2 \approx 0.5$ which is still much lower than
$\kappa _2 \approx 1$ for a truly all-or-none transition.
Since $\kappa _2$ is an extensive parameter, its low value
is due to the shortcomings of the off-lattice
Go models and not due to finite-size effects.
More rigid lattice models give better results for
the calorimetric cooperativity \cite{Li_Physica05}.
Thus, for the hbSBD domain the Go model gives
better agreement with our CD experiments for the
structural cooperativity
$\Omega_c$ than for the calorimetric measure $\kappa _2$.
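The calorimetric criterion is easy to evaluate on a discrete $C_V(T)$ curve. A minimal sketch (Python; trapezoidal integration over the sampled temperature window, no baseline subtraction, $k_B = 1$):

```python
import math

def kappa2(T, Cv, kB=1.0):
    """kappa_2 = dH_vH / dH_cal with
    dH_vH  = 2 T_max sqrt(kB Cv(T_max))   (van't Hoff)
    dH_cal = integral of Cv dT            (calorimetric, trapezoids)."""
    i = max(range(len(Cv)), key=Cv.__getitem__)
    dH_vh = 2.0 * T[i] * math.sqrt(kB * Cv[i])
    dH_cal = sum(0.5 * (Cv[k] + Cv[k + 1]) * (T[k + 1] - T[k])
                 for k in range(len(T) - 1))
    return dH_vh / dH_cal
```

Broad wings in $C_V(T)$ inflate $\Delta H_{cal}$ and lower $\kappa_2$, which is why subtracting the baseline raises the Go-model value from about 0.25 to about 0.5.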
\subsubsection{Free Energy Profile}
To get more evidence that hbSBD is a two-state folder we
study the free energy profile using some quantity as a reaction
coordinate. The precise reaction coordinate for a
multi-dimensional process such as protein folding is difficult to
ascertain. However, Onuchic and coworkers \cite{Nymeyer_PNAS98}
have argued that, for minimally frustrated systems such as Go
models, the number of native contacts $Q$ may be appropriate. Fig.
\ref{hbSBDfig6}a shows the dependence of the free energy on $Q$ for $T=T_F$. Since
there is only one local maximum, corresponding to the transition
state (TS), hbSBD is a two-state folder. This is not unexpected
for hbSBD, which contains only helices. The fact that the simple Go model
correctly captures the two-state behavior as was observed in the CD
experiments, suggests that the energetic frustration ignored in this model
plays a minor role compared to the topological frustration
\cite{Clementi_JMB00}.
\begin{figure}
\epsfxsize=4.2in
\centerline{\epsffile{./Figs/hbSBDfig6_.eps}}
\linespread{0.8}
\caption{ (a) The dependence of free energy on the
number of native contacts $Q$ at $T=T_F$. The typical structures
of the DS, TS and folded
state are also drawn. The helix regions
H$_1$ (green) and H$_2$ (orange) of the TS structure involve residues 13 - 19
and 39 - 48, respectively. For the folded state structure H$_2$ is the same as for
the TS structure but H$_1$ has two residues more (13 - 21).
(b) Distributions of RMSD for three
ensembles shown in (a). The average values of RMSD are equal to 9.8,
4.9 and 3.2 \AA$~$ for the DS, TS and folded state, respectively.}
\label{hbSBDfig6}
\vspace{3 mm}
\end{figure}
We have sorted out structures of the DS, TS
and the folded state at $T=T_F$, generating $10^4$
conformations in equilibrium. The distributions of the RMSD,
$P_{\rm RMSD}$,
of these states are
plotted in Fig. \ref{hbSBDfig6}b. As expected, $P_{\rm RMSD}$ for
the DS spreads out more than that for the TS and folded state. According to
the free energy profile in Fig. \ref{hbSBDfig6}a, the TS conformations
have 26 - 40 native contacts. We have found that the size (number
of folded residues) \cite{Bai_ProtSc04} of the TS is equal to 32. Comparing this size
with the total number of residues ($N=52$) we see that the fraction of
folded residues in the TS is higher than the typical value
for real two-state proteins
\cite{Bai_ProtSc04}. This is probably an artifact of Go models \cite{Kouza_BJ05}.
The TS conformations are relatively compact having
the ratio $<R_g^{TS}>/R_g^{NS} \approx 1.14$, where $<R_g^{TS}>$
is the average radius of gyration of the TS ensemble and $R_g^{NS}$
is the radius of gyration of the native conformation shown in Fig. \ref{hbSBD_12}a.
Since the RMSD, calculated only for the two helices, is about 0.8 \AA,
the structures of the two helices in the TS
are not much distorted. This is also evident from
the typical structure of the TS shown in Fig. \ref{hbSBDfig6}b, where
the helix regions H$_1$ and H$_2$ involve residues 13--19 and 39--48,
respectively (a residue is considered to be in the helix state if its
dihedral angle is about $60^{\circ}$).
Note that H$_1$ has two residues fewer than
H$_1$ in the native conformation (see the caption to Fig. \ref{hbSBD_12}a),
while H$_2$ has one bead more than its NS counterpart.
Overall, the average RMSD of the TS conformations from the
native conformation (Fig. \ref{hbSBD_12}a) is about
4.9 \AA$~$ indicating that the TS is not close to the native state. As seen
from Figs. \ref{hbSBDfig6}a and \ref{hbSBD_12}a, the main difference comes
from the tail parts. The most probable conformations of the folded state
(corresponding to the maximum of $P_{\rm RMSD}$ in Fig. \ref{hbSBDfig6}b)
have RMSD of about 2.5 \AA. This value is reasonable given
the experimental structure resolution.
\subsubsection{Folding Kinetics}
The two-state foldability obtained from the thermodynamics simulations
may also be probed by studying
the folding kinetics. For this purpose we monitored the
time dependence of the fraction of unfolded trajectories, $P_u(t)$, defined
as follows \cite{Klimov_COSB99}
\begin{equation}
P_u(t) \, = \, 1 - \int_0^t P^{\textstyle{N}}_{fp}(s)ds,
\label{Pu_eq}
\end{equation}
where $P^{\textstyle{N}}_{fp}$ is the distribution of first passage folding
times
\begin{equation}
P^{\textstyle{N}}_{fp}(s) \, = \, \frac{1}{M} \sum_{i=1}^{M}
\delta (s - \tau _{f,1i}).
\label{Pn_eq}
\end{equation}
Here $\tau _{f,1i}$ is the time for the $i$th trajectory
to reach the NS for the first time,
and $M$ is the total number of trajectories used in the simulations.
A trajectory is said to be folded if all native contacts are formed.
As seen from
Eqs. \ref{Pu_eq} and \ref{Pn_eq},
$P_u(t)$ is the fraction of trajectories which do not reach
the NS at time $t$.
In the two-state scenario the folding is triggered after
overcoming only one free energy barrier between the TS
and the denatured state.
Therefore, $P_u(t)$ should be a single exponential,
i.e. $P_u(t) \sim \exp(-t/\tau_F)$ (a multi-exponential
behavior occurs when the folding proceeds via intermediates)
\cite{Klimov_COSB99}.
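The extraction of $\tau_F$ can be illustrated with a short numerical sketch (our own Python fragment; the first passage times here are synthetic, drawn from an exponential distribution with $\tau_F = 0.1\,\mu$s, standing in for the simulated trajectories). It builds $P_u(t)$ from the first passage times and fits $\ln P_u(t) = -t/\tau_F$:

```python
import numpy as np

def unfolded_fraction(fp_times, t_grid):
    """P_u(t): fraction of trajectories that have not folded by time t."""
    fp = np.asarray(fp_times)
    return np.array([(fp > t).mean() for t in t_grid])

def fit_folding_time(fp_times):
    """Extract tau_F from a linear fit of ln P_u(t) = -t / tau_F."""
    fp = np.asarray(fp_times)
    t_grid = np.linspace(0.0, np.quantile(fp, 0.9), 50)
    pu = unfolded_fraction(fp, t_grid)
    mask = pu > 0.0  # avoid log(0) in the far tail
    slope = np.polyfit(t_grid[mask], np.log(pu[mask]), 1)[0]
    return -1.0 / slope

# synthetic two-state kinetics: exponential first passage times, tau_F = 0.1 us
rng = np.random.default_rng(0)
fp_times = rng.exponential(0.1, size=400)
tau_est = fit_folding_time(fp_times)  # recovers ~0.1 us
```

For genuinely two-state kinetics the fitted $\tau_F$ also agrees with the mean first passage time, which provides a quick consistency check.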
\begin{wrapfigure}{r}{0.48\textwidth}\centering
\includegraphics[width=0.45\textwidth]{./Figs/hbSBDfig7.eps}
\hfill \begin{minipage}{7 cm}
\linespread{0.8}
\caption{The semi-logarithmic plot of the time
dependence of the fraction of unfolded trajectories at $T=T_F$.
The distribution $P_u(t)$
was obtained from first passage times of 400 trajectories, which
start from random conformations.
The straight line corresponds to the fit $\ln P_{\rm u}(t) =
-t/\tau_F$, where $\tau _F = 0.1 \mu$s. \label{hbSBDfig7}}
\end{minipage}
\end{wrapfigure}
Since the function $P_u(t)$ can be measured directly by a number of experimental techniques \cite{Greene_Methods04,Dyson_ME2005},
the single exponential kinetics of two-state folders
is supported by a large body of experimental work (see, e.g., Ref.
\cite{Naik_FEBS02} and references therein).
Fig. \ref{hbSBDfig7} shows the semi-logarithmic plot for $P_u(t)$ at $T=T_F$ for
the Go model.
Since the single exponential fit works quite well, one can expect
that intermediates do not occur on the folding pathways.
Thus, together with the thermodynamic data, our kinetic study supports
the two-state behavior of the hbSBD domain as observed
in the CD experiments.
From the linear fit in Fig. \ref{hbSBDfig7} we obtain the folding time
$\tau_F \approx 0.1 \mu$s.
This value is consistent with the
estimate of the folding time defined as the average value of the
first passage times.
If we use the empirical formula for the folding time,
$\tau_F = \tau_F^0 \exp(1.1N^{1/2})$, where the prefactor
$\tau_F^0=0.4 \mu$s and $N$ is the number of amino acids \cite{MSLi_Polymer04},
then $\tau_F = 1.1\times 10^3 \mu$s for $N=52$. This value is
about four orders of magnitude larger than that obtained from the
Go model.
Thus the Go model can capture the two-state feature of
the denaturation transition of the hbSBD domain but not the folding
times.
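For completeness, the arithmetic behind this estimate (a plain evaluation of the empirical formula, not a simulation):

```python
import math

def empirical_folding_time(n_residues, tau0_us=0.4):
    """tau_F = tau_F^0 * exp(1.1 * N^(1/2)); all times in microseconds."""
    return tau0_us * math.exp(1.1 * math.sqrt(n_residues))

tau_emp = empirical_folding_time(52)  # ~1.1e3 us for hbSBD (N = 52)
gap = tau_emp / 0.1                   # ratio to the Go-model folding time, ~1e4
```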
\subsection{Discussion}
We have used CD technique and the Langevin dynamics to study the
mechanism of folding of hbSBD. Our results suggest that this
domain is a two-state folder. The CD experiments reveal that the
hbSBD domain is less stable than the hbLBD domain in the same
BCKD complex, but it is more stable and cooperative compared to
other fast folding $\alpha$ proteins.
Both the thermodynamics and
kinetics results, obtained from the Langevin dynamics
simulations, show that the simple Go model correctly captures
the two-state feature of folding.
It should be noted that the two-state behavior is not a natural
consequence of Go modeling, because the model still allows for folding
intermediates caused by the topological frustration. From this standpoint
it may be used to decipher the foldability of model proteins
for which the topological frustration dominates.
The reasonable agreement between
the results obtained by the Go modeling
and our CD experiments
suggests that the NS topology of hbSBD is more important
than the energetic factor.
The theoretical model gives reasonable
agreement with the CD experimental data for the structural
cooperativity $\Omega _c$. However, the calorimetric cooperativity
criterion $\kappa _2 \approx 1$ for two-state folders is hard to
fulfill within the Go model. From the $\Delta C_p \ne 0$ fitting
procedure we predict that the cold denaturation of hbSBD may occur
at $T \approx 212$ K, and it would be very interesting to verify
this prediction experimentally. We are using the package SMMP
\cite{Eisenmenger_CPC01} and a parallel algorithm \cite{Hayryan_JCC01} to
perform all-atom simulations of hbSBD to check these
results. \vskip 2 mm
\newpage
\begin{center}
\section{Force-Temperature phase diagram of single and three-domain ubiquitin. New force replica exchange method} \end{center}
\subsection{Introduction}
Protein Ub continues to attract the attention of researchers because
it plays a vital role in many processes in living systems.
Usually, Ub is present in the form of a polyubiquitin chain
that is conjugated to other proteins.
Different Ub linkages lead to different biological functions.
In the case of Lys48-C and N-C linkages the polyubiquitin chain serves as a signal
for protein degradation \cite{Thrower_EMBO2000, Kirisako_EMBO2006}, whereas
in the Lys63-C case it plays completely different roles, including
DNA repair,
polysome stability and endocytosis
\cite{Hofmann_Cell1999, Spence_Cell2000, Galan_EMBO1997}.
When one studies the thermodynamics of a large system like the multi-domain
Ub, the problem of slow dynamics occurs
due to the rough FEL.
This
problem might be remedied using
the standard RE method in
the temperature space in the absence of external force
\cite{Hukushima_JPSC96,Sugita_ChemPhysLett99,Phuong_Proteins05}
as well as in its presence \cite{Li_JPCB06}.
However,
if one wants to construct the force-temperature phase diagram,
this approach becomes inconvenient because one has to collect
data at different values of the force.
Moreover, the external force increases unfolding barriers and the system may
get trapped in local minima. In order to obtain better sampling for a system
subject to external force, we propose a new RE method \cite{Kouza_JCP08} in which
the exchange is carried out not in the temperature space but in the force space,
i.e.
between different force values. This procedure helps
the system escape from local minima efficiently.
In this chapter we address two topics. First, we develop
a new version of the RE method to study the thermodynamics of a
large system under force. The basic idea is that for
a given temperature we perform simulations at
different values of the force, and the exchange between them is carried out
according to the Metropolis rule.
This new approach has been employed to obtain the force-temperature phase
diagram of the three-domain Ub, which will be referred to as the trimer.
Within our choice
of force replicas it speeds up the computation about four times
compared to the conventional simulation.
Second, we construct the temperature-force $T-f$ phase diagram of Ub and its
trimer, which
allows us to determine the equilibrium critical force $f_c$ separating
the folded and unfolded regions.
This chapter is based on Ref. \cite{Kouza_JCP08}.
\subsection{Model}
Figure \ref{ubiquitin_struture_fig} shows the native conformations of single Ub and of the trimer. The native conformation of Ub is taken from the PDB (1UBQ); with the choice of cutoff distance $d_c=6.5$ \AA\ it has 99 native contacts. The NS of the three-domain Ub is not available yet, so we have to construct it for Go modeling. To do so, we translate one unit by the distance $a=3.82$ \AA\ and slightly rotate it, then translate and rotate one more unit so as to have nine interdomain contacts per interface (about 10\% of the intra-domain contacts). In total there are 18 inter- and 297 intra-domain native contacts.
\begin{figure}
\epsfxsize=4.5in
\vspace{0.2in}
\centerline{\epsffile{./Figs/NS_.eps}}
\linespread{0.8}
\caption{(a) NS conformation of Ub taken from the PDB
(PDB ID: 1ubq). There are five $\beta$-strands: S1 (2-6), S2 (12-16),
S3 (41-45), S4 (48-49) and S5 (65-71), and one helix A (23-34).
(b) Structures B, C, D and E consist of pairs of strands (S1,S2),
(S1,S5), (S3,S5) and (S3,S4), respectively. In the text we also refer to helix A as structure A. (c) The native conformation of the trimer was designed as described in Section 6.2. There are 18 inter- and 297 intra-domain native contacts.}
\label{ubiquitin_struture_fig}
\end{figure}
We use a coarse-grained continuum representation of Ub and the trimer in which only the positions of the $C_\alpha$-carbons are retained. The energy of the Go-type model \cite{Clementi_JMB00} is described by Eq. (\ref{Hamiltonian}).
In order to obtain the $T-f$ phase diagram, we use the fraction of native contacts, or
the overlap function, as in Eq. (\ref{chi_eq_Go}).
The $T-f$ phase diagram (a plot of $1-f_N$ as
a function of $f$ and $T$) and the thermodynamic quantities were
obtained by the multiple histogram method \cite{Ferrenberg_PRL89}
extended to the case when the external force is applied to the
termini \cite{Klimov_PNAS99,Klimov_JPCB01}. In this case the
reweighting is carried out not only in temperature but also in
force. We collected data for six values of $T$ at $f=0$ and for
five values of $f$ at a fixed value of $T$. The duration of the MD
runs for each trajectory was chosen to be long enough for the
system to fully equilibrate ($9\times 10^5 \tau_L$, of which
$1.5\times 10^5 \tau_L$ were spent on equilibration). For a given
value of $T$ and $f$ we generated 40 independent
trajectories for thermal averaging.
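The reweighting step at the core of this approach can be sketched in its simplest single-histogram form. The full Ferrenberg--Swendsen scheme combines histograms from all runs, but the one-run version below (our own illustration, with $k_B = 1$ and a two-level toy system as a check) already shows the mechanics:

```python
import numpy as np

def reweight(energies, observable, T0, T):
    """Single-histogram reweighting: estimate <A> at temperature T from
    samples generated at T0 (kB = 1). Subtracting the mean energy keeps
    the exponentials well conditioned."""
    d_beta = 1.0 / T - 1.0 / T0
    w = np.exp(-d_beta * (np.asarray(energies) - np.mean(energies)))
    return np.sum(w * np.asarray(observable)) / np.sum(w)

# toy check: two-level system with energies 0 and 1, sampled at T0 = 1
rng = np.random.default_rng(1)
T0 = 1.0
p1 = np.exp(-1.0 / T0) / (1.0 + np.exp(-1.0 / T0))
E = rng.choice([0.0, 1.0], p=[1.0 - p1, p1], size=100_000)
E_at_08 = reweight(E, E, T0, 0.8)  # exact value: exp(-1.25)/(1+exp(-1.25))
```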
\subsection{Force-Temperature diagram for single ubiquitin}
\begin{figure}
\includegraphics[width=6.3in]{./Figs/diagr_upd_.eps}
\caption{(a) The $T-f$ phase diagram obtained by the extended
histogram method. The force is applied to termini N and C.
The color code for $1-<\chi(T,f)>$ is given on the right.
The blue color corresponds to the state in the NBA, while the red
color indicates the unfolded states. The
vertical dashed line refers to $T=0.85 T_F \approx 285$ K at which
most of simulations have been performed. (b) The temperature
dependence
of $f_N$ (open circles) defined as the
renormalized number of native contacts.
The solid line refers to
the two-state fit to the simulation data.
The dashed line represents the experimental two-state curve with
$\Delta H_{\rm m}$ = 48.96 kcal/mol and $T_m = 332.5$K \cite{Thomas_PNAS01}.}
\label{diagram_fN_fig}
\end{figure}
The $T-f$ phase diagram, obtained by the extended
histogram method, is shown in
Fig. \ref{diagram_fN_fig}{\em a}. The folding-unfolding
transition, defined by the yellow region, is sharp in the low
temperature region but it becomes less cooperative (the fuzzy
transition region is wider) as $T$ increases.
A weak reentrancy (the critical force slightly increases with
$T$) occurs at low temperatures.
This seemingly strange phenomenon results from the
competition between the energy gain
and the entropy loss upon stretching.
A similar cold unzipping
transition was also observed in a number of models for
heteropolymers \cite{Shakhnovich_PRE02} and proteins
\cite{Klimov_PNAS99}, including the C$_{\alpha}$-Go model for I27
(MS Li, unpublished results). As follows from the phase diagram,
at $T=285$ K the critical force $f_c \approx 30$ pN, which is close
to $f_c \approx 25$ pN estimated from the experimental pulling
data. To estimate $f_c$ from the experimental pulling data
we use $f_{max} \approx f_c \ln(v/v_{min})$ \cite{Evans_BJ97} (see also
Eq. \ref{f_logV_eq}),
where $f_{max}$ is the maximal force needed to unfold a protein at
the pulling speed $v$. From the raw data in Fig. 3b of
Ref. \cite{Carrion-Vazquez_NSB03} we obtain $f_c \approx$ 25 pN.
Given the simplicity
of the model this agreement can be considered satisfactory, and it validates
the use of the Go model.
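Inverting $f_{max} \approx f_c \ln(v/v_{min})$ is a one-liner; the numbers below are purely illustrative placeholders (the actual $f_{max}$ and speed ratio must be read off the pulling data):

```python
import math

def critical_force(f_max_pN, speed_ratio):
    """Invert f_max ~ f_c * ln(v / v_min) for the equilibrium critical force."""
    return f_max_pN / math.log(speed_ratio)

# hypothetical values, chosen only to illustrate the order of magnitude
f_c = critical_force(200.0, 3000.0)  # ~25 pN
```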
Figure \ref{diagram_fN_fig}{\em b} shows the temperature
dependence of the population of the NS. Fitting to the
standard two-state curve $f_N = \frac{1}{1 + \exp[-\Delta
H_m(1-\frac{T}{T_m})/k_BT]}$, one can see that it works quite well
(solid curve) around the transition temperature but gets worse
at high $T$ due to the slow decay of $f_N$. Such behavior is
characteristic of almost all theoretical models
\cite{Kouza_BJ05}, including all-atom ones
\cite{Phuong_Proteins05}. In the fitting we chose the hydrogen
bond energy $\epsilon_H = 0.98$ kcal/mol in Hamiltonian
(\ref{Hamiltonian}) so that $T_F = T_m = 0.675 \epsilon_H/k_B$
coincides with the experimental value 332.5 K
\cite{Thomas_PNAS01}. From the fit we obtain $\Delta H_{\rm m} =
11.4$ kcal/mol, which is smaller than the experimental value 48.96
kcal/mol, indicating that the Go model is, as expected, less stable
than real Ub. Taking into account non-native contacts
and more realistic interactions between side chain atoms would be
expected to increase the stability of the system.
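The two-state curve is simple enough to evaluate directly. The sketch below (our own, with $k_B$ in kcal/(mol\,K)) contrasts the Go-model fit with the experimental parameters and makes the difference in sharpness explicit:

```python
import math

KB = 1.9872e-3  # Boltzmann constant, kcal/(mol K)

def f_native(T, dH_m, T_m):
    """Two-state native population: f_N = 1/(1 + exp[-dH_m (1 - T/T_m)/(kB T)])."""
    return 1.0 / (1.0 + math.exp(-dH_m * (1.0 - T / T_m) / (KB * T)))

Tm = 332.5                           # K, fixed to the experimental value
fN_go = f_native(285.0, 11.4, Tm)    # Go-model fit, dH_m = 11.4 kcal/mol
fN_exp = f_native(285.0, 48.96, Tm)  # experimental dH_m = 48.96 kcal/mol
# the larger dH_m gives a much sharper transition: fN_exp > fN_go at 285 K
```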
The cooperativity of the denaturation transition may be characterized by the cooperativity index, $\Omega_c$ given by Eq. (\ref{cooper_index_eq}).
From simulation data for $f_N$ presented
\begin{wrapfigure}{r}{0.47\textwidth}
\includegraphics[width=0.46\textwidth]{./Figs/free_Q_barrier_.eps}
\hfill\begin{minipage}{7.7 cm}
\linespread{0.8}
\caption{(a) The dependence of the free energy on
$Q$ for selected values of $f$ at $T=T_F$.(b) The
dependence of folding and unfolding barriers, obtained from the
free energy profiles, on $f$. The linear fits $y = 0.36 + 0.218x$
and $y=0.54 - 0.029x$ correspond to $\Delta F_{f}$ and $\Delta
F_{u}$, respectively. From these fits we obtain $x_f
\approx$ 1.0 nm and $x_{u} \approx$ 0.13 nm. \label{free_Q_barrier_fig}}
\end{minipage}
\end{wrapfigure}
in Fig. \ref{diagram_fN_fig}{\em b}, we obtain
$\Omega_c \approx 57$, which is
considerably lower than the experimental value $\Omega_c \approx
384$ obtained with the help of $\Delta H_{\rm m}$ = 48.96
kcal/mol and $T_m = 332.5$ K \cite{Thomas_PNAS01}.
The
underestimation of $\Omega _c$ in our simulations is not only a
shortcoming of the off-lattice Go model \cite{Kouza_JPCA06} but
also a common problem of much more sophisticated force fields in
all-atom models \cite{Phuong_Proteins05}.
Another measure of the cooperativity is the ratio between the
van't Hoff and the calorimetric enthalpy, $\kappa _2$
\cite{Kaya_PRL00}. For the Go Ub we obtained $\kappa _2 \approx
0.19$. Applying the baseline subtraction \cite{Chan_ME04} gives
$\kappa _2 \approx 0.42$, which is still much below $\kappa _2
\approx 1$ for a truly all-or-none transition. Since $\kappa
_2$ is an extensive parameter, its low value is due to the
shortcomings of the off-lattice Go models and not due to
finite size effects. More rigid lattice models give better results
for the calorimetric cooperativity $\kappa _2$
\cite{Li_Physica05}.
Figure \ref{free_Q_barrier_fig}{\em a} shows the free energy as a
function of $Q$ for several values of the force at $T=T_F$. Since
there are only two minima, our results support the two-state
picture of Ub \cite{Schlierf_PNAS04,Chung_PNAS05}. As expected,
the external
force increases the folding barrier, $\Delta F_{f}$
($\Delta F_{f} = F_{TS} - F_{DS}$), and lowers
the unfolding barrier, $\Delta F_{u}$ ($\Delta F_{u} = F_{TS} - F_{NS}$).
From the linear fits in
Fig. \ref{free_Q_barrier_fig}{\em b} we obtain $x_f = \Delta F_f/f \approx 1$ nm and $x_{u} = \Delta F_{u}/f \approx 0.13$ nm.
Note that $x_f$ is very
close to $x_f \approx$ 0.96 nm obtained from refolding
times at a slightly lower temperature, $T=285$ K (see Fig. \ref{refold_Ub_trimer} below).
However, $x_{u}$ is lower than the experimental value 0.24 nm \cite{Carrion-Vazquez_NSB03}.
This difference may be caused either by the sensitivity of $x_{u}$
to the temperature, or by the fact that the determination
of $x_{u}$ from the
approximate FEL as a function of the single
coordinate $Q$ is not sufficiently accurate.
In Chapter 8 we will show that a more accurate estimate of $x_u$
may be obtained from the dependence of unfolding times on the external force
(Eq. \ref{Bell_Ku_eq}).
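Extracting $x_f$ and $x_u$ from Fig. \ref{free_Q_barrier_fig}b amounts to a linear fit of the barrier height versus force plus a unit conversion. A self-contained sketch with synthetic barrier data (our own; the conversion factor 1 kcal/mol $\approx 6.95$ pN\,nm is the only physical input):

```python
import numpy as np

KCAL_PER_MOL_IN_PN_NM = 6.95  # 1 kcal/mol ~ 6.95 pN nm

def distance_to_ts(forces_pN, barriers_kcal):
    """x = |d(dF)/df|: slope of barrier height vs. force, converted to nm."""
    slope = np.polyfit(forces_pN, barriers_kcal, 1)[0]  # kcal/mol per pN
    return abs(slope) * KCAL_PER_MOL_IN_PN_NM

# synthetic unfolding barriers that decrease with x_u = 0.13 nm
f = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # pN
dF_u = 2.0 - (0.13 / KCAL_PER_MOL_IN_PN_NM) * f   # kcal/mol
x_u = distance_to_ts(f, dF_u)                     # recovers 0.13 nm
```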
We have also studied the FEL using $\Delta R$ as a reaction
coordinate.
The dependence of $F$ on $\Delta R$ was found to be smoother
(results not shown) than that
obtained by Kirmizialtin {\em et al.} \cite{Kirmizialtin_JCP05}
using a more elaborate model
\cite{Sorenson_Proteins02}
which includes the non-native interactions.
\subsection{New force replica exchange method}
The equilibration of long peptides at low temperatures is a computationally
expensive task. In order to speed up the computation of thermodynamic quantities
we extend the standard RE
method (with replicas at different temperatures)
developed for spin \cite{Hukushima_JPSC96} and peptide systems
\cite{Sugita_ChemPhysLett99} to the case when the RE is
performed between states with different values of
the external force $\lbrace f_i \rbrace$.
Suppose for a given temperature
we have $M$ replicas $\lbrace x_i, f_i\rbrace$, where
$\lbrace x_i \rbrace$ denotes coordinates and velocities of
residues. Then the statistical
sum of the extended ensemble is
\begin{eqnarray}
Z \; = \; \int \ldots \int dx_1 \ldots dx_M \exp\Big(- \beta \sum_{i=1}^M H(x_i,f_i)\Big) = \prod_{i=1}^MZ(f_i).
\label{Z_total_eq}
\end{eqnarray}
The total distribution function has the following form
\begin{eqnarray}
P(\lbrace x,f\rbrace) &=& \prod_{i=1}^M P_{eq}(x_i,f_i), \nonumber\\
P_{eq}(x,f) &=& Z^{-1}(f)\exp(-\beta H(x,f)).
\label{P_total_eq}
\end{eqnarray}
For a Markov process the detailed balance condition reads as:
\begin{eqnarray}
P(.\,\!.\,\!.\,\!, x_m f_m, .\,\!.\,\!.\,\!, x_n f_n, .\,\!.\,\!.\,\!) W(x_m f_m \vert x_n f_n)
\!=\! P(.\,\!.\,\!.\,\!, x_n f_m, .\,\!.\,\!.\,\!, x_m f_n, .\,\!.\,\!.\,\!) W(x_n f_m \vert x_m f_n),
\label{Markov_eq}
\end{eqnarray}
where $W(x_m f_m \vert x_n f_n)$ is the rate of transition
$\lbrace x_m, f_m \rbrace \rightarrow \lbrace x_n, f_n \rbrace$.
Using
\begin{eqnarray}
H(x,f) = H_0(x) - \vec{f}\vec{R} ,
\end{eqnarray}
and Eq. (\ref{Markov_eq})
we obtain
\begin{eqnarray}
\frac{W(x_m f_m \vert x_n f_n)}{W(x_n f_m \vert x_m f_n)} \; = \;
\frac{P(\ldots, x_n f_m, \ldots, x_m f_n, \ldots)}{P(\ldots, x_m f_m,
\ldots, x_n f_n, \ldots)} \; = \\ \nonumber \; \frac{\exp[-\beta(H_0(x_n) -
\vec{f}_m\vec{R}_n)
- \beta(H_0(x_m) - \vec{f}_n\vec{R}_m)]}{\exp[-\beta(H_0(x_m) -
\vec{f}_m\vec{R}_m)
- \beta(H_0(x_n) - \vec{f}_n\vec{R}_n)]} \; = \; \exp(-\Delta),
\end{eqnarray}
with
\begin{eqnarray}
\Delta &=& \beta (\vec{f}_m - \vec{f}_n) (\vec{R}_m - \vec{R}_n).
\label{Delta_eq}
\end{eqnarray}
This gives us the following Metropolis rule for accepting or rejecting
the exchange between replicas $f_n$ and $f_m$:
\begin{eqnarray}
W(x f_m | x' f_n) = \left\{ \begin{array}{ll}
1&, \qquad \mbox{$\Delta < 0$}\\
\exp(-\Delta)&, \qquad \mbox{$\Delta > 0$}\end{array} \right.
\label{Metropolis_eq}
\end{eqnarray}
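A minimal sketch of the exchange step defined by Eqs. (\ref{Delta_eq}) and (\ref{Metropolis_eq}); the MD propagation between exchange attempts is omitted and the variable names are ours. Swapping the force labels of two replicas is equivalent to swapping their conformations:

```python
import numpy as np

def attempt_force_swap(R_m, R_n, f_m, f_n, beta, rng):
    """Metropolis rule: Delta = beta (f_m - f_n) . (R_m - R_n);
    accept if Delta < 0, otherwise with probability exp(-Delta)."""
    delta = beta * np.dot(np.asarray(f_m) - np.asarray(f_n),
                          np.asarray(R_m) - np.asarray(R_n))
    return bool(delta < 0.0 or rng.random() < np.exp(-delta))

def sweep_force_replicas(end_to_end, forces, beta, rng):
    """One exchange sweep over neighboring force replicas;
    end_to_end[i] is the current end-to-end vector of replica i."""
    for i in range(len(forces) - 1):
        if attempt_force_swap(end_to_end[i], end_to_end[i + 1],
                              forces[i], forces[i + 1], beta, rng):
            forces[i], forces[i + 1] = forces[i + 1], forces[i]
    return forces
```

A swap is always accepted when the replica carrying the smaller force happens to be more extended, which is exactly the situation that helps a trapped replica escape.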
\subsection{Force-Temperature diagram for three-domain ubiquitin}
Since the three-domain Ub is a rather
long peptide (228 residues), we apply the RE method to
obtain its $T-f$ phase diagram.
We have performed two sets of RE simulations. In the first set we fixed
$f=0$ and the RE was carried out in the standard temperature replica space
\cite{Sugita_ChemPhysLett99}, where
12 values of $T$ were chosen in the interval $\left[0.46, 0.82\right]$
in such a way that the RE acceptance ratio was 15-33\%.
This procedure speeds up the equilibration of our system nearly
ten-fold compared to the standard computation without the use of RE.
In the second set, the RE simulation was performed in the force replica
space at $T=0.53$ using the Metropolis rule given by Eq. (\ref{Metropolis_eq}).
We again used 12 replicas with different values of $f$
in the interval $0 \leq f \leq 0.6$, giving
an acceptance ratio of about 12\%.
Even for this modest acceptance rate our new RE scheme accelerates
the equilibration
of the three-domain Ub about four-fold. One can expect better
performance by increasing the number of replicas.
However, within our computational facilities we were restricted to
parallel runs on 12 processors for 12 replicas.
The system was equilibrated
during the first 10$^5 \tau_L$, after which histograms of the energy, the native
contacts and the end-to-end distance were collected
for $4\times 10^5 \tau_L$.
For each replica, we generated 25 independent trajectories for
thermal averaging.
Using the data from two sets of the RE simulations and the
extended reweighting technique \cite{Ferrenberg_PRL89} in the
temperature and force space
\cite{Klimov_JPCB01} we obtained the $T-f$ phase diagram and the
thermodynamic quantities of the trimer.
\begin{figure}[!htbp]
\epsfxsize=6.1in
\vspace{0.2in}
\centerline{\epsffile{./Figs/fig2_r150_.eps}}
\linespread{0.8}
\caption{ (a) The $T-f$ phase diagram obtained by the extended
RE and
histogram method for trimer. The force is applied to termini N and C.
The color code for $1-f_N$ is given on the right.
Blue corresponds to the state in the NBA, while red
indicates the unfolded states. The
vertical dashed line denotes $T=0.85 T_F \approx 285$ K, at which
most of the simulations have been performed.
(b) Temperature dependence of the specific heat $C_V$ (right axis) and
$df_N/dT$ (left axis) at $f=0$. Their peaks coincide at $T=T_F$.
(c) The dependence of the free energy of the trimer on the total number
of native contacts
$Q$ at $T=T_F$.}
\label{diagram}
\end{figure}
The $T-f$ phase diagram
(Fig. \ref{diagram}a) was obtained by monitoring the probability
of being in the NS, $f_N$, as a function of $T$ and $f$.
The folding-unfolding
transition (the yellow region) is sharp in the low
temperature region, but it becomes less cooperative (the fuzzy
transition region is wider) as $T$ increases.
The folding temperature in the absence of force (the peak of $C_v$ or
$df_N/dT$ in Fig. \ref{diagram}b) is equal to $T_F=0.64 \epsilon_H/k_B$,
which is slightly lower than $T_F=0.67 \epsilon_H/k_B$
for the single Ub \cite{MSLi_BJ07}.
This reflects the fact that the folding of the trimer is less cooperative
than that of the monomer, due to the small number of native contacts between domains.
One can ascertain this by calculating
the cooperativity index, $\Omega_c$ \cite{Klimov_FD98,MSLi_PRL04}
for the denaturation transition.
From the simulation data for $df_N/dT$ presented in
Fig. \ref{diagram}b, we obtain
$\Omega_c \approx 40$, which is indeed lower than $\Omega_c \approx 57$ for
the single Ub \cite{MSLi_BJ07} obtained with the same Go model.
According to our previous estimate \cite{MSLi_BJ07},
the experimental value $\Omega_c \approx
384$ is considerably higher than the Go value.
Although the present Go model does not provide a realistic
estimate of the cooperativity, it still mimics the experimental
fact that
the folding of a multi-domain protein remains
cooperative, as observed not only for Ub but also for other proteins.
Fig. \ref{diagram}c shows the free energy as a function of the number of native
contacts at $T=T_F$. The folding/unfolding barrier is rather low ($\approx$ 1 kcal/mol)
and comparable to that of the single Ub \cite{MSLi_BJ07}.
The low barrier is probably an artifact of the simple Go modeling.
The double-minimum structure suggests that the trimer is a two-state folder.
\subsection{Conclusions}
We constructed the $T$-$f$ phase diagrams of single and three-domain
Ub and showed that both are two-state folders.
The standard temperature RE method was extended to the case when the
force replicas
are considered at a fixed temperature. One can extend the RE method to
cover both temperature and force replicas, as
has been done
for all-atom simulations \cite{Paschek_PRL04} where pressure
is used instead of force.
One caveat of the force RE method is that the acceptance rate depends on the
end-to-end distance (Eqs. \ref{Delta_eq} and \ref{Metropolis_eq})
and becomes inefficient for long proteins.
This can be overcome by increasing the number of replicas,
but that would increase the
CPU time substantially. Thus, the question of improving the force RE approach
for long biomolecules remains open.
\newpage
\begin{center}\section{Refolding of single and three-domain ubiquitin under quenched force}\end{center}
\subsection{Introduction}
Deciphering the folding and unfolding pathways and the FEL
of biomolecules remains a
challenge in molecular biology. Traditionally, folding and unfolding are
monitored by changing the temperature or the concentration of chemical denaturants.
In these experiments, due to thermal fluctuations of the
initial unfolded conformations, it is difficult to describe the folding
mechanisms in an unambiguous way
\cite{Fisher_TBS99,Fernandez_Sci04}.
Recently, Fernandez and coworkers \cite{Fernandez_Sci04} have applied
the force-clamp technique (Fig. \ref{force_clamp}) to probe the refolding of Ub
under a quench
force, $f_q$, which is smaller than the equilibrium critical force
separating
the folded and unfolded states.
Here, one can
control the starting conformations, which are well prepared by applying
a large initial
force of several hundred pN.
Monitoring folding events as
a function of the end-to-end distance ($R$), they have made the following
important observations:
\begin{enumerate}
\item Contrary to the standard folding from the
thermally denatured ensemble (TDE), the refolding under the quenched
force is a multi-step
process.
\item The force-quench refolding time obeys the Bell formula
\cite{Bell_Sci78}, $\tau_F \approx \tau_F^0
\exp(f_qx_f/k_BT)$, where $\tau_F^0$ is the folding time
in the absence of the quench
force and $x_f$ is the average location of the TS.
\end{enumerate}
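The Bell dependence in point 2 is straightforward to evaluate. In the sketch below (our own), $k_BT \approx 3.94$ pN\,nm at $T = 285$ K, and $x_f = 0.96$ nm is the value obtained for Ub later in this chapter:

```python
import math

KBT_285K = 3.94  # pN nm at T = 285 K

def bell_refolding_time(tau0, f_q_pN, x_f_nm=0.96, kBT=KBT_285K):
    """Bell formula: tau_F(f_q) = tau_F^0 * exp(f_q * x_f / kB T)."""
    return tau0 * math.exp(f_q_pN * x_f_nm / kBT)

# a quench force of 6.25 pN slows refolding by a factor of ~4.6
slowdown = bell_refolding_time(1.0, 6.25)
```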
Motivated by the experiments of Fernandez and Li \cite{Fernandez_Sci04},
Li {\em et al.} \cite{MSLi_PNAS06} have studied
the refolding of the domain I27 of the human muscle protein titin
using the C$_{\alpha}$-Go model \cite{Clementi_JMB00}
and the four-strand
$\beta$-barrel model sequence S1 \cite{Klimov_PNAS00}
(for this sequence the non-native interactions are also taken into account).
Basically, we have reproduced qualitatively
the major experimental findings listed above. In addition, we have
shown that the refolding is a two-state process in which the folding
to the NBA follows a quick collapse from the initial stretched
conformations with low entropy. The corresponding kinetics can be described by a
bi-exponential time dependence, contrary to the single exponential
behavior of the folding from the TDE with high entropy.
To make a direct comparison with the experiments of
Fernandez and Li \cite{Fernandez_Sci04}, in this chapter we
perform simulations for a single domain
Ub using the C$_{\alpha}$-Go model \cite{Clementi_JMB00}.
Because the study of the
refolding of the 76-residue Ub (Fig. \ref{ubiquitin_struture_fig}{\em a})
by all-atom simulations is beyond
present computational facilities, the Go modeling is an
appropriate choice. Most of the simulations have been carried out
at $T = 0.85T_F = 285$ K. Our present results for refolding upon
the force quench are in qualitative agreement with the
experimental findings of Fernandez and Li, and with those obtained
for I27 and S1 theoretically \cite{MSLi_PNAS06}. A number of
quantitative differences between I27 and Ub will also be
discussed. For Ub we have found the average location of the
TS to be $x_f \approx 0.96$ nm, which is in
reasonable agreement with the experimental value 0.8 nm
\cite{Fernandez_Sci04}.
\begin{figure}
\epsfxsize=3.8in
\centerline{\epsffile{./Figs/force_clamp_concept_.eps}}
\linespread{0.8}
\caption{Representation of an experimental protocol of force-clamp spectroscopy.
First a protein is stretched under a force of several hundred pN.
Then the external force is reduced to the quenched value
$f_q$, and this force is kept fixed during the refolding
process.}\label{force_clamp}
\end{figure}
Since the quench force slows down the folding process, it is easier to monitor refolding pathways.
However, this raises the important question of whether
force-clamp experiments with
one end of the protein anchored probe the same folding pathways as for a free-end
protein.
Recently, using a simple Go-like model, it has been shown that
fixing the N-terminus of Ub changes its
folding pathways
\cite{Szymczak_JCP06}. If this is so, the force-clamp technique
in which the N-terminus is anchored is not useful
for predicting the folding pathways of the free-end Ub.
Using the Go model \cite{Clementi_JMB00}
we have shown that,
in agreement with the earlier study \cite{Szymczak_JCP06},
fixing the
N-terminus of the single Ub changes its folding pathways. Our new finding
is that
anchoring the
C-terminus leaves them unchanged. More importantly,
we have found that for the three-domain Ub
with either end fixed,
each domain follows the same folding pathways as the free-end single
domain. Therefore,
to probe the folding pathways of Ub by the
force-clamp technique one can either use the single domain with the C-terminus
fixed, or several domains with either end fixed.
In order to check whether the effect of fixing one terminus also holds for other
proteins, we have studied the titin domain I27.
It turns out
that the fixation of one end of the polypeptide chain
does not change the refolding pathways of I27.
Therefore the
force-clamp can always predict the refolding pathways of the single as well as the
multi-domain I27. Our study suggests that the effect of the end fixation
is not universal for all proteins, and that force-clamp
spectroscopy should be applied with caution.
The material of this chapter was taken from Refs. \cite{MSLi_BJ07, Kouza_JCP08}.
\subsection{Refolding of single ubiquitin under quenched force}
As in the previous chapter, we used the C$_{\alpha}$-Go model
(Eq. \ref{Hamiltonian}) to study refolding.
Folding pathways were probed by monitoring the fractions of native
contacts of secondary structures as a function of
the progressive variable $\delta$
(Eq. \ref{progress_fold_eq}).
\subsubsection{Stepwise refolding of single Ubiquitin}
Our protocol for studying the refolding of Ub is identical to that used in
the experiments of Fernandez and Li \cite{Fernandez_Sci04}.
We first apply a force $f_I \approx 70$ pN to prepare the initial
conformations (the
protein is considered stretched if $R \ge 0.8 L$, where the contour length $L = 28.7$ nm).
Starting from the force-denatured ensemble (FDE), we quench the force
to $f_q < f_c$ and then monitor the refolding process by following
the time dependence of the number of native
contacts $Q(t)$, of $R(t)$ and of the radius of gyration
$R_g(t)$ for typically 50 independent trajectories.
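The monitored quantities $R(t)$ and $R_g(t)$ are simple functions of the $C_\alpha$ coordinates; a minimal sketch (our own; counting $Q(t)$ would additionally require the native contact map):

```python
import numpy as np

def gyration_radius(coords):
    """R_g = sqrt(<|r_i - r_cm|^2>) over the C-alpha beads, shape (N, 3)."""
    c = np.asarray(coords) - np.mean(coords, axis=0)
    return float(np.sqrt(np.mean(np.sum(c ** 2, axis=1))))

def end_to_end_distance(coords):
    """R: distance between the first (N-terminal) and last (C-terminal) bead."""
    coords = np.asarray(coords)
    return float(np.linalg.norm(coords[-1] - coords[0]))
```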
\begin{figure}[!htbp]
\includegraphics[width=0.62\textwidth]{./Figs/ContRgR_time.eps}
\hfill
\linespread{0.8}
\parbox[b]{0.35\textwidth}{\caption{(a) and (b) The time dependence of $Q$, $R$ and $R_g$
for two typical trajectories starting from FDE
($f_q=0$ and $T=285$ K).
The arrows 1, 2 and 3 in (a) correspond to time 3.1 ($R=10.9$ nm),
9.3 ($R=7.9$ nm) and 17.5 ns ($R=5$ nm).
The arrow 4 marks the folding time $\tau _F$ = 62 ns ($R=2.87$ nm)
when all of 99
native contacts are formed. (c) and (d) are the same as in (a) and (b) but
for $f_q$ = 6.25 pN. The corresponding arrows refer to $t=$
7.5 ($R=11.2$ nm), 32 ($R=9.4$ nm), 95 ns
($R=4.8$ nm) and $\tau _F = 175$ ns ($R=3.65$ nm).\\\\\\\\\\\\}\label{ContRgR_time_fig}}
\vspace{5 mm}
\\
\end{figure}
Figure \ref{ContRgR_time_fig} shows considerable diversity of
refolding pathways. In accord with experiments
\cite{Fernandez_Sci04} and simulations for I27 \cite{MSLi_PNAS06},
the reduction of $R$ occurs in a stepwise manner. In the $f_q=0$
case (Fig. \ref{ContRgR_time_fig}{\em a}) $R$ decreases
continuously from $\approx 18$ nm to 7.5 nm (stage 1) and
fluctuates around this value for about 3 ns (stage 2). A further
reduction to $R \approx 4.5$ nm (stage 3) then precedes the transition to
the NBA. The stepwise nature of the variation of $Q(t)$ is also
clearly visible, but it is less pronounced for $R_g(t)$. Although we
can interpret another trajectory for $f_q=0$ (Fig.
\ref{ContRgR_time_fig}b) in the same way, the time scales are
different. Thus, the refolding routes are highly heterogeneous.
The pathway diversity is also evident for $f_q >0$
(Fig. \ref{ContRgR_time_fig}{\em c}
and {\em d}). Although the picture remains qualitatively the same as in the
$f_q=0$ case, the time scales for the different steps become much larger.
For example, the molecule fluctuates around $R \approx 7$ nm
for $\approx 60$ ns (stage 2 in Fig. \ref{ContRgR_time_fig}{\em c}),
which is considerably longer
than $\approx 3$ ns in Fig. \ref{ContRgR_time_fig}{\em a}.
The variation of $R_g(t)$ becomes more drastic
compared to the $f_q=0$ case.
Figure \ref{ContRgR_time_av_fig} shows the time dependence of
$<R(t)>, <Q(t)>$ and $<R_g(t)>$, where $<...>$ stands for
averaging over 50 trajectories. The left and right panels
correspond to the long and short time windows, respectively. For
the TDE case (Fig. \ref{ContRgR_time_av_fig}{\em a} and {\em b})
the single exponential fit works pretty well for $<R(t)>$ for the
whole time interval. A slight departure from this behavior is seen
for $<Q(t)>$ and $<R_g(t)>$ for $t < 2$ ns (Fig.
\ref{ContRgR_time_av_fig}{\em b}). Contrary to the TDE case, even
for $f_q=0$ (Fig. \ref{ContRgR_time_av_fig}{\em c} and {\em d})
the difference between the single and bi-exponential fits is
evident not only for $<Q(t)>$ and $<R_g(t)>$ but also for
$<R(t)>$. The time scales above which the two fits eventually become
identical are slightly different for the three quantities (Fig.
\ref{ContRgR_time_av_fig}{\em d}). The failure of the single
exponential behavior becomes more and more evident with the
increase of $f_q$, as demonstrated in Figs.
\ref{ContRgR_time_av_fig}{\em e} and {\em f} for the FDE case with
$f_q = 6.25$ pN.
\begin{figure}[!htbp]
\includegraphics[width=0.68\textwidth]{./Figs/ContRgR_time_av_.eps}
\hfill
\linespread{0.8}
\parbox[b]{0.3\textwidth}{\caption{(a) The time dependence of $<Q(t)>$, $<R(t)>$ and $<R_g(t)>$
when the refolding starts from TDE.
(b) The same as in (a) but for the short time scale.
(c) and (d) The same as in (a) and (b) but for FDE with $f_q=0$.
(e) and (f) The same as in (c) and (d) but for $f_q$=6.25 pN.\\\\\\\\\\\\\\\\\\\\\\\\}\label{ContRgR_time_av_fig}}
\vspace{5 mm}
\\
\end{figure}
Thus, in agreement with our previous results obtained for I27 and
the sequence S1 \cite{MSLi_PNAS06},
the refolding kinetics starting from the FDE consists of a fast and a
slow phase. The characteristic time scales for these phases may be
obtained using a sum of two exponentials, $<A(t)> = A_0 + A_1
\exp(-t/\tau^A_1) + A_2 \exp(-t/\tau^A_2)$, where $A$ stands for
$R$, $R_g$ or $Q$. Here $\tau^A_1$ characterizes the burst-phase
(first stage) while $\tau^A_2$ may be either the collapse time
(for $R$ and $R_g$) or the folding time (for $Q$) ($\tau^A_1 <
\tau^A_2$). As in the case of I27 and S1 \cite{MSLi_PNAS06},
$\tau^R_1$ and $\tau^{R_g}_1$ are almost independent of $f_q$
(results not shown). We attribute this to the fact that the quench
force ($f_q^{max} \approx 9$ pN) is much lower than the entropic
force ($f_e$) needed to stretch the protein. At $T=285$ K, one has
to apply $f_e \approx 140$ pN for stretching Ub to 0.8 $L$. Since
$f_q^{max} \ll f_e$, the initial compaction of the chain that is
driven by $f_e$ is not sensitive to the small values of $f_q$.
Contrary to $\tau^A_1$, $\tau^A_2$ was found to increase with $f_q$
exponentially. Moreover,
$\tau^R_2 < \tau^{R_g}_2 < \tau _F$ implying that the chain compaction
occurs before the acquisition of the NS.
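In practice the two time scales can be extracted from $<A(t)>$ with any nonlinear least-squares routine (e.g. \texttt{scipy.optimize.curve\_fit}); a dependency-free alternative is the classical exponential "peeling" procedure, sketched below on noiseless synthetic data. The amplitudes and time scales here are illustrative, not the simulation values.

```python
import numpy as np

def peel_biexponential(t, a, t_split):
    """Estimate (A1, tau1, A2, tau2) for a = A1*exp(-t/tau1) + A2*exp(-t/tau2)
    with tau1 << tau2: fit the slow tail (t > t_split) on a log scale,
    subtract it, then fit the fast remainder at short times."""
    tail = t > t_split
    # slow component from the tail: log a ~ log A2 - t/tau2
    s2, i2 = np.polyfit(t[tail], np.log(a[tail]), 1)
    tau2, A2 = -1.0 / s2, np.exp(i2)
    # subtract the slow part; fit the fast remainder where it dominates
    rem = a - A2 * np.exp(-t / tau2)
    ok = (t < 10.0) & (rem > 0)
    s1, i1 = np.polyfit(t[ok], np.log(rem[ok]), 1)
    tau1, A1 = -1.0 / s1, np.exp(i1)
    return A1, tau1, A2, tau2

# synthetic relaxation curve with tau1 = 2 ns and tau2 = 50 ns
t = np.linspace(0.0, 300.0, 3001)
a = 6.0 * np.exp(-t / 2.0) + 4.0 * np.exp(-t / 50.0)
A1, tau1, A2, tau2 = peel_biexponential(t, a, t_split=100.0)
```

With noisy simulation data a full nonlinear fit is preferable, but the peeling construction makes the separation into a burst phase ($\tau_1$) and a slow phase ($\tau_2$) explicit.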
\subsubsection{Refolding pathways of single Ubiquitin}
In order to study refolding under small quenched force we follow the same
protocol as in the experiments \cite{Fernandez_Sci04}.
First, a large force ($\approx 130$ pN) is applied to both termini
to prepare the initial stretched conformations. This force is
then released, but a weak quench force, $f_q$, is applied to study the refolding process.
The refolding of a single Ub was studied
\cite{MSLi_BJ07,Szymczak_JCP06} in the presence or absence of
the quench force.
Fixing the N-terminal
was found to change the refolding pathways of the free-end
Ub \cite{Szymczak_JCP06}, but the effect of anchoring the C-terminal
has not been studied yet. Here we study this problem in detail, monitoring
the time dependence of native contacts of secondary structures
(Fig. \ref{single_ub_pathways_fig}).
Since the quench force increases the folding time but leaves
the folding pathways unchanged, we present only the results for $f_q=0$
(Fig. \ref{single_ub_pathways_fig}).
Interestingly, the fixed C-terminal and free-end cases have identical
folding sequencing
\begin{equation}
S2 \rightarrow S4 \rightarrow A \rightarrow
S1 \rightarrow (S3,S5).
\label{free-end_pathways_eq}
\end{equation}
\begin{figure}
\vspace{5mm}
\epsfxsize=6.3in
\vspace{0.2in}
\centerline{\epsffile{./Figs/fig3.eps}}
\linespread{0.8}
\caption{ The dependence of native contacts of $\beta$-strands
and the helix A on the progressive variable $\delta$ when the N-terminal
is fixed (a), both ends are free (b), and C-terminal is fixed
(c). The results are averaged over 200 trajectories.
(d) The probability of refolding pathways in the three cases.
Each value is written on top of the histograms.}
\label{single_ub_pathways_fig}
\end{figure}
This is the reverse of the unfolding pathway under thermal fluctuations
\cite{MSLi_BJ07}.
As discussed in detail by Li {\em et al.}
\cite{MSLi_BJ07}, Eq. (\ref{free-end_pathways_eq})
partially agrees with the folding \cite{Went_PEDS05}
and unfolding \cite{Cordier_JMB02} experiments, and simulations
\cite{Fernandez_JCP01,Fernandez_Proteins02,Sorenson_Proteins02}.
Our new finding here is that keeping the C-terminal fixed does not
change the folding pathways.
One should keep in mind that the dominant pathway given by
Eq. (\ref{free-end_pathways_eq})
is valid in the statistical sense.
It occurs in about 52\% and 58\% of events for the free end and C-anchored
cases (Fig. \ref{single_ub_pathways_fig}d), respectively.
The probability of observing an alternative pathway
($S2 \rightarrow S4 \rightarrow A \rightarrow
S3 \rightarrow S1 \rightarrow S5$) is $\approx 44$\% and 36\% for these
two cases
(Fig. \ref{single_ub_pathways_fig}d). The difference between these two pathways
is only in sequencing of S1 and S3. Other pathways, denoted in green,
are also possible
but they are rather minor.
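The classification of trajectories into pathways, as in Fig. \ref{single_ub_pathways_fig}d, can be done by ranking the secondary structures by the time at which each first becomes (nearly) fully formed. A minimal sketch, using made-up first-passage times rather than actual simulation output:

```python
from collections import Counter

def pathway(first_passage):
    """Folding pathway of one trajectory: secondary structures ordered by
    the time at which each first reaches, e.g., 80% of its native contacts."""
    return tuple(s for s, _ in sorted(first_passage.items(), key=lambda kv: kv[1]))

# illustrative first-passage times (ns) for three hypothetical trajectories
trajs = [
    {"S2": 10, "S4": 15, "A": 22, "S1": 30, "S3": 34, "S5": 35},
    {"S2": 12, "S4": 18, "A": 25, "S3": 31, "S1": 36, "S5": 40},
    {"S2": 9,  "S4": 14, "A": 20, "S1": 28, "S3": 33, "S5": 34},
]
counts = Counter(pathway(t) for t in trajs)
probs = {p: n / len(trajs) for p, n in counts.items()}
```

Applied to the 200 simulated trajectories, this tallying yields the pathway probabilities quoted in the text.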
In the case when the N-terminal is fixed (Fig. \ref{single_ub_pathways_fig})
we have the following sequencing
\begin{equation}
S4 \rightarrow S2 \rightarrow A \rightarrow S3 \rightarrow S1 \rightarrow S5
\label{fixedN_pathways_eq}
\end{equation}
which is, in agreement with Ref. \cite{Szymczak_JCP06},
different from the free-end case. We present
folding pathways as the sequencing of
secondary structures, making comparison with experiments easier
than an approach based on the time
evolution of individual contacts \cite{Szymczak_JCP06}.
The main pathway (Eq. \ref{fixedN_pathways_eq})
occurs in $\approx 68$ \% of events (Fig. \ref{single_ub_pathways_fig}d),
while the competing sequencing $S4 \rightarrow S2 \rightarrow A \rightarrow S1
\rightarrow (S3, S5)$ (28\%) and other minor pathways are also possible.
From Eqs. (\ref{free-end_pathways_eq}) and (\ref{fixedN_pathways_eq}) it follows
that the force-clamp technique can probe the folding pathways of Ub if one anchors
the C-terminal but not the N-terminal.
In order to check the robustness of our prediction for refolding pathways
(Eqs. \ref{free-end_pathways_eq} and \ref{fixedN_pathways_eq}),
obtained for the friction $\zeta = 2 \frac{m}{\tau_L}$, we have performed
simulations for the water friction $\zeta = 50 \frac{m}{\tau_L}$. Our
results (not shown) demonstrate that although the folding time
is about twenty times longer compared to the
$\zeta = 2 \frac{m}{\tau_L}$ case, the pathways remain the same.
Thus, within the framework of Go modeling,
the effect of the N-terminus fixation
on refolding pathways of Ub is not an artifact of fast dynamics,
as it occurs for both large and small friction.
It would be very interesting to verify our prediction
using more sophisticated models. This question is left for future studies.
\subsection{Refolding pathways of three-domain Ubiquitin}
\begin{figure}
\epsfxsize=6.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/fig4_r100_.eps}}
\linespread{0.8}
\caption{(a) The time dependence of $Q$, $R$ and $R_g$ at $T=285$ K for the free-end case of the trimer. (b) The same as in (a) but for the N-fixed case.
The red line is a bi-exponential fit $A(t) = A_0 + a_1\exp(-t/\tau_1)
+ a_2\exp(-t/\tau_2)$.
Results for the C-fixed case are similar to the
$N$-fixed case, and are not shown.}
\label{Q_Rnc_Rg_trimer_fig}
\end{figure}
The time dependence of the total number of native contacts, $Q$, $R$ and
the gyration radius, $R_g$, is presented in
Fig. \ref{Q_Rnc_Rg_trimer_fig} for the trimer.
The folding times are
$\tau _f \approx$ 553 ns and 936 ns for the free-end and N-fixed cases,
respectively. The fact that anchoring one
end slows down refolding by a factor of nearly 2
implies that diffusion-collision processes
\cite{Karplus_Nature76} play an important role in
the Ub folding. Namely, as follows from the diffusion-collision model,
the time required for the formation of contacts is inversely
proportional to the diffusion coefficient, $D$, of a pair of spherical
units. If one of them is immobilized, $D$ is halved and
the time needed to form contacts increases accordingly.
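The halving argument can be made explicit: in the diffusion-collision picture the contact-formation time scales as the inverse of the relative diffusion coefficient $D = D_A + D_B$ of the two units, so setting one unit's coefficient to zero halves $D$ and doubles the time. A one-line illustration in arbitrary units:

```python
def contact_time(D_A, D_B, D_unit=1.0):
    """Contact-formation time in the diffusion-collision model,
    normalized so that two freely diffusing units give 1.0.
    tau is taken proportional to 1/(D_A + D_B)."""
    return 2.0 * D_unit / (D_A + D_B)

tau_free = contact_time(1.0, 1.0)      # both units mobile
tau_anchored = contact_time(1.0, 0.0)  # one unit immobilized
```

The predicted factor of two is consistent with the roughly twofold slowdown (553 ns versus 936 ns) observed in the simulations.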
A similar effect for unfolding was observed in our recent
work \cite{MSLi_BJ07}.
From the bi-exponential fitting, we obtain two time scales
for collapsing ($\tau_1$) and compaction ($\tau_2$) where $\tau_1 < \tau_2$.
For $R$, e.g., $\tau_1^R \approx 2.4$ ns and $\tau_2^R \approx 52.3$ ns if
two ends are free, and $\tau_1^R \approx 8.8$ ns and $\tau_2^R \approx 148$ ns
for the fixed-N case. Similar results hold for the time evolution of
$R_g$. Thus, the collapse is much faster than the barrier
limited folding process.
Monitoring the time evolution of $\Delta R$ and of
the number of native contacts, one can show (results not shown)
that the refolding of the trimer is staircase-like as observed in the
simulations \cite{Best_Science05,MSLi_BJ07} and the experiments
\cite{Fernandez_Sci04}.
\begin{figure}[!hbtp]
\epsfxsize=5.6in
\vspace{0.2in}
\linespread{0.8}
\centerline{\epsffile{./Figs/fig5_.eps}}
\caption{The same as in Fig. \ref{single_ub_pathways_fig} but for the trimer.
The numbers 1, 2 and 3 refer to the first, second and third domain.
The last row represents the results averaged over three domains.
The fractions of native contacts of each secondary
structure are averaged over 100 trajectories.}
\label{trimer_pathways_detail_fig}
\end{figure}
\begin{wrapfigure}{r}{0.46\textwidth}
\includegraphics[width=0.44\textwidth]{./Figs/fig6.eps}
\hfill
\begin{minipage}{6.9 cm}
\linespread{0.8}
\caption{(a) The probability of different refolding
pathways for the trimer. Each value is shown on top of
the histograms. \label{trimer_Prfpw_fig}}
\end{minipage}
\end{wrapfigure}
Fig. \ref{trimer_pathways_detail_fig} shows the dependence of the number
of native contacts of the secondary structures of each domain on $\delta$
for three situations: both termini are free and one or the other
of them is fixed.
In each of these cases the folding pathways of three domains
are identical. Interestingly, they are the same
as given by Eq. (\ref{free-end_pathways_eq}), regardless
of whether one end is kept fixed or not.
As evident from Fig. \ref{trimer_Prfpw_fig},
although the dominant pathway is the same for the three cases, its
probability is different. It is equal to 68\%,
44\% and 43\% for the
C-fixed, free-end and N-fixed cases, respectively. For the last two cases,
the competing pathway
S$_2 \rightarrow$ S$_4 \rightarrow$ A $\rightarrow$ S$_3 \rightarrow$ S$_1 \rightarrow$ S$_5$
has a reasonably high
probability of $\approx$ 40\%.
The irrelevance of one-end fixation for refolding
pathways of a multi-domain Ub may be understood
as follows.
Recall that applying the low
quenched force to both termini does not change folding pathways of
single Ub \cite{MSLi_BJ07}. So in the three-domain case,
with the N-end of the first domain fixed,
both termini of the first and second domains are
effectively subjected to external force, and their pathways should remain the
same as in the free-end case. The N-terminal of the third domain is tethered
to the second domain but this would have much weaker effect compared to
the case when it is anchored to a surface. Thus this unit has almost free
ends and its pathways remain unchanged.
Overall, the ``boundary'' effect
gets weaker as the number of domains
increases. In order to check this argument, we have performed simulations for
the two-domain Ub. It turns out that the sequencing is roughly the same as
in Fig. \ref{trimer_pathways_detail_fig}, but the common tendency is less
pronounced (results not shown) compared to the trimer case.
Thus we predict that the force-clamp technique can probe
folding pathways of free Ub if one uses either the single domain with the C-terminus
anchored, or the multi-domain construction.
\begin{figure}[!htbp]
\epsfxsize=5.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/fig7_r100_.eps}}
\linespread{0.8}
\caption{The dependence of the total number of native
contacts on $\delta$ for the first (green), second (red) and third
(blue) domains. Typical snapshots of the initial, middle and final
conformations for the three cases when both ends are free or one
of them is fixed. The effect of anchoring one terminus
on the folding sequencing of domains is clearly evident.
At the bottom we show the probability of refolding pathways for the three
cases. Each value is written on top of the histograms.}
\label{trimer_pathways_overall_fig}
\vspace{5 mm}
\end{figure}
Although fixing one end of the trimer does not influence folding pathways
of individual secondary structures, it affects the folding sequencing
of individual domains (Fig. \ref{trimer_pathways_overall_fig}).
We have the following sequencing $(1,3) \rightarrow 2$,
$3 \rightarrow 2 \rightarrow 1$ and $1 \rightarrow 2 \rightarrow 3$
for the free-end, N-terminal fixed and C-terminal fixed, respectively.
These scenarios are supported by typical snapshots shown in
Fig. \ref{trimer_pathways_overall_fig}. It should be noted that the domain at
the free end folds first in all three cases in the statistical
sense (this is also true for the two-domain case).
As follows from the bottom of Fig. \ref{trimer_pathways_overall_fig}, if
two ends are free then each of them folds first in about 40 out of 100
observations. The middle unit may fold first, but with much lower probability
of about 15\%. This value remains almost unchanged when one of the ends
is anchored,
and the probability that
the non-fixed unit folds first increases to $\ge 80$\%.
\subsection{Is the effect of fixing one terminus on refolding pathways universal?}
We now ask whether the effect of fixing one end
on refolding pathways, observed for Ub, is also valid for other proteins.
To answer this question, we study the single domain I27 from
the muscle protein titin.
We choose this protein as a good candidate
from the conceptual point of view
because its $\beta$-sandwich structure
(see Fig. \ref{titin_str_ref_pathways_fig}a) is very
different from $\alpha/\beta$-structure of Ub.
\begin{wrapfigure}{r}{0.49\textwidth}
\includegraphics[width=0.46\textwidth]{./Figs/fig9.eps}
\hfill\begin{minipage}{8.0 cm}
\linespread{0.8}
\caption{(a) NS conformation of the Ig27 domain of titin (PDB ID: 1tit). There are 8 $\beta$-strands: A (4-7), A' (11-15),
B (18-25), C (32-36), D (47-52), E (55-61), F(69-75) and
G (78-88). The dependence of native contacts of different
$\beta$-strands on the progressive variable $\delta$ for the case
when two ends are free (b), the N-terminus is fixed (c) and the
C-terminal is anchored (d).
(e) The probability of observing refolding pathways for three
cases. Each value is written on top of the histograms. \label{titin_str_ref_pathways_fig}}
\end{minipage}
\end{wrapfigure}
Moreover, because I27 is subject to
mechanical stress under physiological conditions
\cite{Erickson_Science97}, it is instructive to study
refolding from extended conformations generated by force.
There have been extensive unfolding (see recent review \cite{Sotomayor_Science07}
for
references) and refolding \cite{MSLi_PNAS06} studies
on this system, but the effect of one-end fixation on the folding
sequencing of individual secondary structures has not been considered
either theoretically or experimentally.
As follows from Fig. \ref{titin_str_ref_pathways_fig}b,
if two ends are
free then strands A, B and E fold at nearly the same rate.
The pathways of the N-fixed and C-fixed cases are identical,
and they are almost the same as in the free end case
except that the strand A seems to fold after B and E.
Thus, keeping the N-terminus fixed has a much weaker effect on the folding
sequencing than for the single Ub.
Overall,
anchoring one terminus
has little effect on the refolding pathways of I27, and
we have the following common sequencing
\begin{equation}
D \rightarrow (B,E) \rightarrow (A,G,A') \rightarrow F \rightarrow C
\end{equation}
for all three cases.
The probability of observing this main pathway varies between 70 and 78\%
(Fig. \ref{titin_str_ref_pathways_fig}e). The second pathway,
D $\rightarrow$ (A,A',B,E,G) $\rightarrow$ (F,C), has a considerably lower
probability. Other minor routes to the NS are also possible.
Because the multi-domain construction weakens this effect, we expect that
the force-clamp spectroscopy can probe refolding pathways for single as well as
poly-I27. More importantly, our study reveals that the influence of fixation
on refolding
pathways may depend on the native topology of proteins.
\subsection{Free energy landscape}
Figure \ref{refold_Ub_trimer} shows the dependence of the folding
times on $f_q$. Using the Bell-type formula (Eq. \ref{Bell_Kf_eq}) and
the linear fit in Fig. \ref{refold_Ub_trimer}, we obtain $x_f = 0.96 \pm 0.15$
nm which is in acceptable agreement with the
experimental value $x_f \approx 0.8$ nm
\cite{Fernandez_Sci04}.
\begin{wrapfigure}{r}{0.47\textwidth}
\includegraphics[width=0.45\textwidth]{./Figs/Kf_vs_ffc_plus_trimer.eps}
\hfill\begin{minipage}{7.5 cm}
\linespread{0.8}
\caption{The dependence of folding times on the quench force at
$T=285$ K. $\tau_F$ was computed as the average of the first passage times
($\tau_F$ is the same as $\tau^Q_2$ extracted from the
bi-exponential fit for $<Q(t)>$). The result is averaged over 30 -
50 trajectories for each value of $f_q$. From the linear fits $y = 3.951
+ 0.267x$ and $y = 6.257 + 0.207x$ we obtain $x_f = 0.96 \pm 0.15$ nm for single Ub (black circles and curve) and $x_f = 0.74 \pm 0.07$ nm for trimer (red squares and curve), respectively.
\label{refold_Ub_trimer}}
\end{minipage}
\end{wrapfigure}
The linear growth of the free energy barrier to folding with $f_q$
is due to
the stabilization of the random coil states under the force.
Our estimate for Ub is higher than
$x_f \approx 0.6$ nm obtained for I27 \cite{MSLi_PNAS06}.
One possible reason for such a pronounced difference is that we used
the cutoff distances $d_c=0.65$ nm and 0.6 nm in the Go model for Ub and I27, respectively. The larger value of $d_c$ would make a protein
more stable (more native contacts) and it may change the FEL
leading to enhancement of $x_f$. This problem requires
further investigation.
From Fig. \ref{refold_Ub_trimer} we obtain $x_f = 0.74 \pm 0.07$ nm
for trimer. Within the error bars this value coincides with
$x_f = 0.96 \pm 0.15$ nm for Ub, and also with the experimental result $x_f \approx 0.80$ nm
\cite{Fernandez_Sci04}. Our results suggest that the multi-domain structure
leaves $x_f$ almost unchanged.
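The extraction of $x_f$ follows from the Bell-type formula (Eq. \ref{Bell_Kf_eq}), $\tau_F(f_q) = \tau_F(0)\exp(f_q x_f/k_BT)$: the slope of $\ln\tau_F$ versus $f_q$ equals $x_f/k_BT$. The sketch below recovers $x_f$ from synthetic Bell-type data; the zero-force folding time and force grid are illustrative, not the simulation output.

```python
import numpy as np

kT = 3.93        # k_B*T in pN*nm at T = 285 K
xf_true = 0.96   # nm, the value to be recovered
tau0 = 60.0      # zero-force folding time, ns (illustrative)

fq = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # quench forces, pN
tau_F = tau0 * np.exp(xf_true * fq / kT)   # Bell-type dependence

# linear fit of ln(tau_F) vs f_q; the slope equals x_f / (k_B*T)
slope, intercept = np.polyfit(fq, np.log(tau_F), 1)
x_f = slope * kT
```

With simulation data the same fit is applied to the average first passage times at each $f_q$, and the scatter of the points sets the error bar on $x_f$.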
\subsection{Conclusions}
We have shown that, in agreement with the experiments \cite{Fernandez_Sci04},
refolding of Ub under quenched force proceeds in a stepwise manner.
The effect of fixing one terminus on refolding pathways
depends on the individual protein, and it becomes weaker
for a multi-domain construction.
Our theoretical estimate of $x_f$ for single Ub
is close to the experimental one, and it remains almost the same for
the three-domain case.
\newpage
\begin{center}
\section{Mechanical and thermal unfolding of single and three domain Ubiquitin}
\end{center}
\subsection{Introduction}
Experimentally,
the unfolding of the poly-Ub has been studied by applying a constant
force \cite{Schlierf_PNAS04}.
The mechanical unfolding of Ub has previously been investigated using Go-like
\cite{West_BJ06} and all-atom models \cite{West_BJ06,Irback_PNAS05}.
In particular,
Irb\"ack {\em et al.} have explored mechanical
unfolding pathways
of structures A, B, C, D and E
(see the definition of these structures and the $\beta$-strands
in the caption to
Fig. \ref{ubiquitin_struture_fig}) and the existence of intermediates in detail.
We present our results on the mechanical unfolding of Ub
for the following five reasons.
\begin{enumerate}
\item The barrier to the mechanical unfolding has not been computed.
\item Experiments of Schlierf {\em et al.} \cite{Schlierf_PNAS04} have suggested that
cluster 1 (strands S1, S2 and the helix A) unfolds after cluster 2
(strands S3, S4 and S5). However, this observation has not yet
been studied theoretically.
\item Since the structure C, which consists of the strands S1 and S5,
unzips first, Irb\"ack {\em et al.} pointed out that either the strand S5 unfolds before S2, or the
terminal strands follow the unfolding pathway
S1 $\rightarrow$ S5 $\rightarrow$ S2. This conclusion may be incorrect because
it has been obtained from the breaking of the contacts within the structure C.
\item In pulling and force-clamp experiments the external force is applied
to one end of proteins whereas the other end is kept fixed. Therefore,
an important question that emerges is how fixing one terminus affects the unfolding
sequencing of Ub. This issue has not been addressed by Irb\"ack {\em et al.}
\cite{Irback_PNAS05}.
\item Using a simplified all-atom model it was shown \cite{Irback_PNAS05}
that mechanical intermediates occur more frequently than
in experiments \cite{Schlierf_PNAS04}. It is relevant to ask
if a C$_{\alpha}$-Go model can capture similar intermediates as this may shed
light on the role of non-native interactions.
\end{enumerate}
From the force dependence of mechanical unfolding times, we
estimated the distance between the NS and the TS to be $x _{u} \approx 0.24$ nm, which is close to the
experimental results of Carrion-Vazquez {\em et al.}
\cite{Carrion-Vazquez_NSB03} and Schlierf {\em et al.}
\cite{Schlierf_PNAS04}. In agreement with the experiments
\cite{Schlierf_PNAS04}, cluster 1 was found to unfold after cluster 2
in our simulations.
Applying the force to both termini,
we studied the mechanical unfolding pathways of the terminal strands
in detail and obtained the sequencing S1 $\rightarrow$ S2 $\rightarrow$ S5
which is different from the result of Irb\"ack {\em et al.}.
When the N-terminus is fixed and the C-terminus is pulled by a
constant force, the unfolding sequencing was found to be very different
from the previous case. The unzipping initiates, for example,
from the C-terminus
but not from the N-terminal. Anchoring the C-end is shown to have little effect
on unfolding pathways.
We have
demonstrated that the present C$_{\alpha}$-Go model does not capture rare
mechanical intermediates, presumably due to the lack of non-native interactions.
Nevertheless, it can correctly describe the two-state
unfolding of Ub \cite{Schlierf_PNAS04}.
It is well known that thermal unfolding pathways may be very different
from the mechanical ones, as has been shown for the
domain I27 \cite{Paci_PNAS00}.
This is because the force is applied locally to the termini, while
thermal fluctuations have a
global effect on the entire protein. In the force case unzipping should
propagate from the termini whereas under thermal fluctuations the most
unstable part of a polypeptide chain unfolds first.
The unfolding of Ub under thermal fluctuations was investigated experimentally
by Cordier and Grzesiek \cite{Cordier_JMB02} and by Chung {\em et al.}
\cite{Chung_PNAS05}. If one assumes that unfolding is the reverse of the
refolding process then one can infer information about the unfolding
pathways from the experimentally determined $\phi$-values \cite{Went_PEDS05}
and $\psi$-values \cite{Krantz_JMB04,Sosnick_ChemRev06}.
The most comprehensive $\phi$-value
analysis is that of Went and Jackson. They found that the
C-terminal region, which has very low $\phi$-values, unfolds first, and then the
strand S1 breaks before full unfolding of the $\alpha$-helix fragment A occurs.
However, the detailed unfolding sequencing of the other strands remains unknown.
Theoretically, the thermal unfolding of Ub at high temperatures has been
studied by all-atom
MD simulations by Alonso and Daggett
\cite{Alonso_ProSci98} and Larios {\em et al.} \cite{Larios_JMB04}. In
the latter work the unfolding pathways were not explored. Alonso and Daggett
have found that the $\alpha$-helix fragment A is the most resilient towards
temperature but the structure B breaks as early as the structure C.
The fact that B unfolds early contradicts not only the results for the
$\phi$-values obtained experimentally by Went and Jackson \cite{Went_PEDS05}
but also findings from a high resolution
NMR \cite{Cordier_JMB02}. Moreover, the
sequencing of unfolding events for the structures D and E was not studied.
What information about the thermal unfolding
pathways of Ub can be inferred from the folding
simulations of various coarse-grained models?
Using a semi-empirical approach Fernandez predicted \cite{Fernandez_JCP01}
that the nucleation site involves the $\beta$-strands S1 and S5. This
suggests that thermal fluctuations break
these strands last, but
what happens to the other parts of the protein remains unknown.
Furthermore, the late breaking of S5 contradicts the unfolding
\cite{Cordier_JMB02} and folding \cite{Went_PEDS05} experiments.
From later folding simulations of Fernandez {\em et al.}
\cite{Fernandez_Proteins02,Fernandez_PhysicaA02} one can infer
that the structures
A, B and C unzip late. Since this information is gained from $\phi$-values,
it is difficult to determine the sequencing of unfolding events even for these
fragments.
Using the results of Gilis and Rooman \cite{Gilis_Proteins01} we can
only expect that
the structures A and B unfold last. In addition,
with the help of a three-bead model it was found
\cite{Sorenson_Proteins02} that the C-terminal
loop structure is the last to fold in the folding process and most
likely plays a spectator role in the folding kinetics. This implies that
the strands S4, S5 and the second helix (residues 38-40) would unzip first
but again the full unfolding sequencing cannot be inferred from this study.
Thus, neither the direct MD \cite{Alonso_ProSci98} nor
indirect folding simulations \cite{Fernandez_JCP01,Fernandez_Proteins02,Fernandez_PhysicaA02,Gilis_Proteins01,Sorenson_Proteins02}
provide a complete picture of the thermal unfolding pathways for Ub.
One of our aims is to decipher the complete thermal unfolding sequencing
and compare
it with the mechanical one.
The mechanical and thermal routes to the DSs have been found
to be very different from each other.
Under force, e.g., the $\beta$-strand S1
unfolds first, while thermal fluctuations detach the strand S5 first.
The latter observation
is in good agreement with the NMR data of Cordier
and Grzesiek \cite{Cordier_JMB02}.
A detailed comparison with available experimental and simulation
data on the unfolding sequencing will be presented.
The free energy barrier to thermal
unfolding was also calculated.
Another part of this chapter was inspired by the recent
pulling experiments of Yang {\em et al.} \cite{Yang_RSI06}.
Performing the
experiments in the temperature interval between 278 and
318 K, they found that the unfolding
force (maximum force in the force-extension profile), $f_u$,
of Ub depends on temperature
linearly. In addition, the corresponding slopes of the linear behavior
have been found to be
independent of pulling velocities.
An interesting question that arises is whether the linear dependence
of $f_u$ on $T$ is valid only for this
particular region, or whether it holds over the whole temperature interval.
Using the same Go model \cite{Clementi_JMB00}, we can reproduce the
experimental results of Yang {\em et al.} \cite{Yang_RSI06}
at the quasi-quantitative level.
More importantly, we have shown that over the entire temperature
interval the dependence is not linear, because a protein is not an
entropic spring in the temperature regime studied.
We have
studied the effect of multi-domain construction
and linkage
on the location
of the TS along the end-to-end distance reaction
coordinate, $x_u$.
It is found that the multi-domain construction has a minor effect on $x_u$
but,
in agreement with the experiments
\cite{Carrion-Vazquez_NSB03}, the Lys48-C linkage has
a strong effect on it.
Using the microscopic theory for unfolding dynamics \cite{Dudko_PRL06},
we have determined the unfolding barrier for Ub.
This chapter is based on the results presented in Refs. \cite{MSLi_BJ07, Kouza_JCP08}.
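For reference, the theory of Dudko {\em et al.} \cite{Dudko_PRL06} gives the force-dependent escape rate $k(F) = k_0(1-\nu F x_u/\Delta G^\ddagger)^{1/\nu-1}\exp\{\beta\Delta G^\ddagger[1-(1-\nu F x_u/\Delta G^\ddagger)^{1/\nu}]\}$, which reduces to the phenomenological Bell formula for $\nu=1$. A sketch with illustrative parameters (not our fitted values):

```python
import math

def dudko_rate(F, k0, x_u, dG, kT, nu):
    """Unfolding rate k(F) in the Dudko-Hummer-Szabo theory.
    nu = 1/2 and 2/3 correspond to cusp and linear-cubic free
    energy surfaces; nu = 1 recovers Bell: k0*exp(F*x_u/kT)."""
    z = 1.0 - nu * F * x_u / dG
    if z <= 0.0:
        raise ValueError("force beyond the critical force of the model")
    return k0 * z ** (1.0 / nu - 1.0) * math.exp((dG / kT) * (1.0 - z ** (1.0 / nu)))

# illustrative parameters: x_u = 0.24 nm, kT = 3.93 pN*nm (T = 285 K)
bell = dudko_rate(100.0, k0=1e-4, x_u=0.24, dG=80.0, kT=3.93, nu=1.0)
cusp = dudko_rate(100.0, k0=1e-4, x_u=0.24, dG=80.0, kT=3.93, nu=0.5)
```

Fitting this expression to unfolding times measured at several forces yields not only $x_u$ but also the barrier height $\Delta G^\ddagger$, which the Bell formula alone cannot provide.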
\subsection{Materials and Methods}
We use the Go-like model (Eq. \ref{Hamiltonian})
for the single as well as multi-domain Ub.
It should be noted that the folding thermodynamics
does not depend on the environment viscosity (or on $\zeta$)
but the folding kinetics depends
on it. Most of our simulations (if not stated otherwise)
were performed at the friction
$\zeta = 2\frac{m}{\tau_L}$, where the folding is fast.
The
equations of motion
were integrated using the velocity form
of the Verlet algorithm \cite{Swope_JCP82}
with the time step $\Delta t = 0.005 \tau_L$
(Chapter 3). In order to check
the robustness
of our predictions for refolding pathways, limited computations
were carried out
for the friction $\zeta = 50\frac{m}{\tau_L}$, which is believed
to correspond to the viscosity of water \cite{Veitshans_FD97}.
In this overdamped limit we use the Euler method
(Eq. \ref{Euler}) for integration
and the time step $\Delta t = 0.1 \tau_L$.
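In this overdamped limit the Euler update of each coordinate reads $x \to x - (\Delta t/\zeta)\,\partial U/\partial x + \sqrt{2k_BT\Delta t/\zeta}\,\eta$ with Gaussian noise $\eta$. A minimal one-dimensional sketch in reduced units (a harmonic potential, not the protein simulation itself), which can be checked against the equilibrium variance $k_BT/k$:

```python
import numpy as np

def euler_overdamped(grad_U, x0, kT, zeta, dt, n_steps, rng):
    """Overdamped (Brownian dynamics) Euler integration of
    dx = -(1/zeta)*grad_U(x)*dt + sqrt(2*kT*dt/zeta)*eta."""
    x = x0
    traj = np.empty(n_steps)
    amp = np.sqrt(2.0 * kT * dt / zeta)
    for n in range(n_steps):
        x = x - (dt / zeta) * grad_U(x) + amp * rng.standard_normal()
        traj[n] = x
    return traj

# harmonic potential U = k*x^2/2: the stationary variance should be kT/k
k_spring = 2.0
rng = np.random.default_rng(0)
traj = euler_overdamped(lambda x: k_spring * x, x0=0.0, kT=1.0, zeta=1.0,
                        dt=0.01, n_steps=200_000, rng=rng)
var = traj[10_000:].var()   # discard the equilibration part
```

The larger time step $\Delta t = 0.1\tau_L$ is admissible here precisely because the inertial term is absent; the scheme only requires $\Delta t$ to be small compared to the relaxation time $\zeta/k$.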
The progressive variable $\delta$
(Eq. \ref{progress_unfold_eq}) was used to probe folding pathways.
In the constant velocity force simulation, we fix the N-terminal
and follow the procedure described in Section 3.1.2.
The pulling speeds are set to
$\nu = 3.6\times 10^7$ nm/s and $4.55 \times 10^8$ nm/s, which are
about 5 - 6 orders of magnitude faster than those used in experiments
\cite{Yang_RSI06}.
\subsection{Mechanical unfolding pathways}
\subsubsection{Absence of mechanical unfolding intermediates in C$_{\alpha}$-Go model}
In order to study the unfolding dynamics
of Ub, Schlierf {\em et al.} \cite{Schlierf_PNAS04} have
performed AFM experiments at constant forces $f = 100, 140$
and 200 pN. The unfolding intermediates were recorded in about $5
\%$ of 800 events at different forces. The typical distance
between the initial and intermediate states is $\Delta R = 8.1 \pm
0.7$ nm \cite{Schlierf_PNAS04}. However, the intermediates do not
affect the two-state behavior of the polypeptide chain. Using the
all-atom models, Irb\"ack {\em et al.} \cite{Irback_PNAS05} have also
observed intermediates in the region 6.7 nm $< R < 18.5$ nm.
Although the percentage of intermediates is higher than in the
experiments, the two-state unfolding events remain dominant.
To check the existence of force-induced intermediates in our model, we
have performed the unfolding simulations for $f=70, 100,
140$ and 200 pN. Because the results are qualitatively similar for
all values of force, we present only the $f=100$ pN case.
Figure \ref{uf100pN_long_fig}a shows the time dependence of $R(t)$
for fifteen runs starting from the native value $R_N \approx 3.9$ nm.
For all trajectories the plateau occurs at $R \approx 4.4$ nm. As
seen below, passing this plateau corresponds to breaking of intra-structure
native contacts of structure C. At this stage the chain ends get almost
stretched out, but the rest of the polypeptide chain remains
native-like. The plateau is washed out when we average over many
trajectories and $<R(t)>$ is well fitted by a single exponential
(Fig. \ref{uf100pN_long_fig}a), in accord
with the two-state behavior of Ub \cite{Schlierf_PNAS04}.
The existence of the plateau observed for individual unfolding
events in Fig. \ref{uf100pN_long_fig}a agrees with the all-atom
simulation results of Irb\"ack {\em et al.} \cite{Irback_PNAS05} who
have also recorded a similar plateau at $R \approx 4.6$ nm at
short time scales. However, unfolding intermediates at larger
extensions do not occur in our simulations. This is probably
related to the neglect of non-native interactions in the
C$_{\alpha}$-Go model. Nevertheless, this simple model provides
the correct two-state unfolding picture of Ub in the statistical
sense.
\subsubsection{Mechanical unfolding pathways: force is applied to both termini}
Here we focus on the
mechanical unfolding pathways by monitoring the number of native
contacts as a function of the end-to-end extension $\Delta R
\equiv R-R_{\rm eq}$, where $R_{\rm eq}$ is the
equilibrium value of $R$. For $T=285$ K, $R_{\rm eq} \approx 3.4$ nm.
Following Schlierf {\em et al.}
\cite{Schlierf_PNAS04}, we first divide Ub into two clusters.
Cluster 1 consists of strands S1, S2 and the helix A (42 native
contacts), and cluster 2 of strands S3, S4 and S5 (35 native
contacts). The dependence of the fraction of intra-cluster native
contacts on $\Delta R$ is shown in Fig. \ref{uf100pN_long_fig}b
for $f = 70$
and 200 pN (similar results for $f = 100$ and 140 pN are not shown). In
agreement with the experiments \cite{Schlierf_PNAS04} the cluster
2 unfolds first. The unfolding of these clusters becomes more and
more synchronous upon decreasing $f$. At $f = 70$ pN the competition
with thermal
fluctuations becomes so important that two clusters may unzip
almost simultaneously. Experiments at low forces are needed to
verify this observation.
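The cluster analysis above reduces, per snapshot, to counting which native contacts are still formed. A minimal sketch of this bookkeeping is given below; the 1.2 cutoff factor and the toy geometry in the test are illustrative assumptions, not parameters taken from the model.

```python
import numpy as np

def native_contact_fraction(coords, contacts, native_dist, tol=1.2):
    """Fraction of native contacts still intact: a contact (i, j) counts as
    formed when the current bead-bead distance is below tol times its native
    value. `contacts` is a list of (i, j) index pairs and `native_dist` the
    matching native distances; the tol = 1.2 factor is an illustrative choice."""
    if not contacts:
        return 0.0
    d = np.linalg.norm(coords[[i for i, _ in contacts]] -
                       coords[[j for _, j in contacts]], axis=1)
    return float(np.mean(d < tol * np.asarray(native_dist)))
```

Applying this separately to the contact lists of cluster 1 and cluster 2 along a trajectory yields the curves of the type shown in Fig. \ref{uf100pN_long_fig}b.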
\begin{figure}[!hbtp]
\epsfxsize=4.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/79_r50.eps}}
\linespread{0.8}
\caption{ (a) Time dependence of the end-to-end distance for $f=100$ pN.
The thin curves refer to fifteen representative trajectories. The average
over 200 trajectories, $<R(t)>$, is represented by the thick line. The dashed curve
is the single exponential fit $<R(t)> = 21.08 -16.81\exp(-t/\tau_{u})$,
where $\tau_{u}\approx 11.8$ ns. (b) The dependence of the fraction of native contacts on $\Delta R$
for cluster 1 (solid lines) and cluster 2 (dashed lines) at
$f=70$ pN and 200 pN.
The results are averaged over 200 independent
trajectories.
The arrow points to $\Delta R$ = 8.1 nm.}
\label{uf100pN_long_fig}
\end{figure}
The arrow in Fig. \ref{uf100pN_long_fig}b
marks the position
$\Delta R = 8.1$ nm,
where some intermediates were recorded in the experiments
\cite{Schlierf_PNAS04}. At this point
there is an intensive loss of native contacts of cluster 2, suggesting that
the intermediates observed in the experiments are conformations in which
most of the contacts of this
cluster are already broken but cluster 1 remains relatively
structured ($\approx 40\%$ of contacts). One can expect that cluster 1
is more ordered in the intermediate conformations if the side chains and
realistic interactions between amino acids are taken into account.
To compare the mechanical unfolding pathways of Ub with the
all-atom simulation results \cite{Irback_PNAS05} we discuss the
sequencing of helix A and structures B, C, D and E in more detail. We
monitor the intra-structure native contacts and all contacts
separately. The latter include not only the contacts within a given
structure but also the contacts between it and the rest of the
protein. It should be noted that Irb\"ack {\em et al.} have studied
the unfolding pathways based on the evolution of the
intra-structure contacts. Fig. \ref{dom_ext_100pN_fig}a shows the
dependence of the fraction of intra-structure contacts on $\Delta
R$ at $f=100$ pN. At $\Delta R \approx 1$ nm, which corresponds to
the plateau in Fig. \ref{uf100pN_long_fig}a, most of the contacts
of C are broken.
In agreement with the all-atom simulations
\cite{Irback_PNAS05}, the unzipping follows C $\rightarrow$ B
$\rightarrow$ D $\rightarrow$ E $\rightarrow$ A. Since C consists
of the terminal strands S1 and S5, it was suggested that these
fragments unfold first. However, this scenario may no
longer be valid if one considers not only intra-structure
contacts but also other possible ones (Fig.
\ref{dom_ext_100pN_fig}{\em b}). In this case the statistically
preferred sequencing is B $\rightarrow$ C $\rightarrow$ D
$\rightarrow$ E $\rightarrow$ A which holds not only for $f$=100
pN but also for other values of $f$. If this is true, then S2 unfolds
even before S5. To make this point more transparent,
we plot the fraction of contacts for S1, S2 and S5 as a
function of $\Delta R$ (Fig.
\ref{dom_ext_100pN_fig}{\em c})
for a typical trajectory.
Clearly, S5 detaches from the core part of the protein
after S2 (see also the snapshot
in Fig.
\ref{dom_ext_100pN_fig}{\em d}).
So, instead of the sequencing S1 $\rightarrow$ S5 $\rightarrow$ S2
proposed by Irb\"ack {\em et al.}, we obtain
S1 $\rightarrow$ S2 $\rightarrow$ S5.
\begin{figure}[!htbp]
\epsfxsize=6.3in
\vspace{0.2in}
\centerline{\epsffile{./Figs/cont_ext_vs_dom_ext_100pn.eps}}
\linespread{0.8}
\caption{(a) The dependence of fraction of the intra-structure native contacts
on $\Delta R$ for structures A, B, C, D and E at
$f=100$ pN. (b) The same as in a) but for all native contacts.
(c) The dependence of fraction of the native contacts on $\Delta R$
for strand S1, S2 and S5
($f=200$ pN). The vertical dashed line marks the position of the plateau
at $\Delta R \approx$ 1 nm. (d) The snapshot, chosen at the extension
marked by the arrow in c), shows that S2 unfolds before S5.
At this point all native contacts of S1 and S2 are already broken
while 50$\%$ of the native contacts of S5 are still present. (e) The dependence of the fraction of native contacts on extension
for A and all $\beta$-strands at
$f=70$ pN. (f) The same as in e) but for $f=200$ pN. The arrow points to
$\Delta R = 8.1$ nm where the intermediates are recorded in the experiments
\cite{Schlierf_PNAS04}. The results are averaged over 200 trajectories.}
\label{dom_ext_100pN_fig}
\end{figure}
The dependence of the fraction
of native contacts on $\Delta R$ for individual strands
is shown in Fig. \ref{dom_ext_100pN_fig}{\em e}
($f=70$ pN) and
Fig. \ref{dom_ext_100pN_fig}{\em f}
($f$=200 pN). At $\Delta R = 8.1$ nm contacts of S1, S2 and S5 are already broken
whereas S4 and A remain largely structured. In terms of $\beta$-strands and A
we can interpret the intermediates observed in the experiments of
Schlierf {\em et al.} \cite{Schlierf_PNAS04} as conformations with
well structured S4 and A, and low ordering of S3. This interpretation is
more precise compared to the above argument
based on unfolding of two clusters because if one considers the average
number of native contacts, then
cluster 2 is unstructured in the IS
(Fig. \ref{uf100pN_long_fig}b),
but its strand S4 remains highly structured
(Figs. \ref{dom_ext_100pN_fig}{\em e-f}).
From Figs. \ref{dom_ext_100pN_fig}{\em e-f}
we obtain the following mechanical unfolding
sequencing
\begin{equation}
{\rm S1} \rightarrow {\rm S2} \rightarrow {\rm S5}
\rightarrow {\rm S3} \rightarrow {\rm S4} \rightarrow {\rm A}.
\label{mechanical_sequencing}
\end{equation}
It should be noted that the sequencing
(\ref{mechanical_sequencing}) is valid in the statistical sense.
In some trajectories S5 unfolds even before S1 and S2 or the
native contacts of S1, S2 and S5 may be broken at the same time
scale (Table \ref{SimTime_trimer}).
\begin{table}
\begin{tabular}{c|c|c|c} \hline
Force (pN)~&~S1 $\rightarrow$ S2 $\rightarrow$ S5 ($\%$) &~S5
$\rightarrow$ S1 $\rightarrow$ S2 ($\%$)& (S1,S2,S5) ($\%$) \\\hline
70 & 81 & 8 & 11\\
100 & 76 & 10 & 14\\
140 & 53 & 23 & 24\\
200 & 49 & 26 & 25\\\hline
\end{tabular}
\linespread{0.8}
\caption{Dependence of unfolding pathways on the external
force. There are three possible scenarios: S1 $\rightarrow$
S2 $\rightarrow$ S5, S5 $\rightarrow$ S1 $\rightarrow$ S2, and three strands
unzip almost simultaneously (S1,S2,S5). The probabilities of
observing these events are given in percentage.\label{SimTime_trimer}}
\vspace{5 mm}
\end{table}
From Table \ref{SimTime_trimer}
it follows that the probability
that S1 unfolds first decreases with lowering $f$, but the main
trend, Eq. (\ref{mechanical_sequencing}), remains unchanged.
One has to stress again that the sequencing of the terminal strands
S1, S2 and S5 given by Eq. (\ref{mechanical_sequencing}) is different from
that proposed by Irb\"ack {\em et al.} based on the breaking of
the intra-structure contacts of C.
Unfortunately, there are no experimental data available for
comparison with our theoretical prediction.
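The classification underlying Table \ref{SimTime_trimer} can be sketched as follows; the breaking times and the simultaneity threshold are hypothetical inputs, used only to illustrate how individual trajectories would be binned into the three scenarios.

```python
def classify_pathway(t_break, tol=0.05):
    """Classify the unzipping order of strands S1, S2 and S5 from their
    contact-breaking times (a dict strand -> time). Breaks closer together
    than `tol` (in the same time units) count as simultaneous; the threshold
    is an illustrative choice, not taken from the simulations."""
    ordered = sorted(t_break.items(), key=lambda kv: kv[1])
    if ordered[-1][1] - ordered[0][1] < tol:
        return "(S1,S2,S5)"
    return " -> ".join(name for name, _ in ordered)
```

Counting the returned labels over 200 trajectories at each force would reproduce the percentages of the type listed in the table.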
\subsubsection{Mechanical unfolding pathways: One end is fixed}
{\em N-terminus is fixed}.
Here we adopt the same procedure as in the previous section,
except that the N-terminus is held fixed during the simulations. As in the process
where both of the termini are subjected to force, one can show that
the cluster 1
unfolds after the cluster 2 (results not shown).
From Fig. \ref{cont_snap_fixN_f200pN_fig}
we obtain the following unfolding pathways
\begin{subequations}
\begin{equation}
{\rm C} \rightarrow {\rm D} \rightarrow {\rm E} \rightarrow {\rm B} \rightarrow {\rm A},
\label{mechan_fixN_sequencing_struc}
\end{equation}
\begin{equation}
{\rm S5} \rightarrow {\rm S3} \rightarrow {\rm S4} \rightarrow {\rm S1} \rightarrow {\rm S2}
\rightarrow {\rm A},
\label{mechan_fixN_sequencing}
\end{equation}
\end{subequations}
which are also valid for the other values of force ($f$=70, 100 and 140 pN).
Similar to the case when the force is applied to both ends, the structure
C unravels first and the helix A remains the most stable. However, the
sequencing of B, D and E changes markedly compared to the result
obtained by Irb\"ack {\em et al} \cite{Irback_PNAS05}
(Fig. \ref{dom_ext_100pN_fig}a).
\begin{figure}
\epsfxsize=3.5in
\vspace{0.2in}
\centerline{\epsffile{./Figs/cont_snap_fixNC_f200pN_.eps}}
\linespread{0.8}
\caption{(a) The dependence of fraction of the
intra-structure native contacts on extension
for all structures at
$f=200 pN$. The N-terminus is fixed and the external force is applied via
the C-terminus. (b) The same as in (a) but for the native contacts of all
individual $\beta$-strands and helix A .
The results are averaged over 200 trajectories.
(c) A typical snapshot which shows that S$_5$ is fully detached from
the core while S$_1$
and S$_2$ still have $\approx 50\%$ and 100\% contacts, respectively.
(d) The same as in (b) but the C-end is anchored and N-end is pulled.
The strong drop in the fraction of native contacts of S$_4$ at $\Delta R \approx 7.5$ nm does not correspond
to the substantial change of structure as it has only 3 native
contacts in total.}
\label{cont_snap_fixN_f200pN_fig}
\end{figure}
As evident from Eqs. (\ref{mechanical_sequencing}) and (\ref{mechan_fixN_sequencing}), anchoring the N-terminus has a much more pronounced effect on the
unfolding pathways of individual strands. In particular,
unzipping commences from the
C-terminus instead of the N-terminus.
Fig. \ref{cont_snap_fixN_f200pN_fig}{\em c} shows a typical snapshot
where one can see clearly that S$_5$ detaches first. At first glance,
this fact may seem trivial because S$_5$ experiences the external force
directly.
However, our experience with unfolding pathways of the well-studied domain I27
from human cardiac titin shows that this may not be the case.
Namely, as follows from the pulling experiments \cite{Marszalek_Nature99}
and simulations \cite{Lu_Proteins99},
the strand A from the N-terminus
unravels first although this terminus
is kept fixed. From this point of view, which strand of Ub detaches first
is not {\em a priori} clear. In our opinion, it depends on the
interplay between the native topology and the speed of tension propagation.
The latter factor probably plays a more important role for Ub, while the
opposite situation happens with I27.
One possible reason is related to the high stability of the helix A,
which allows neither the N-terminus to unravel first nor
serial unfolding starting from the C-end.
{\em C-terminus is fixed}.
One can show that the unfolding pathways of structures A, B, C, D and E remain
exactly the same as when Ub is pulled from both termini
(see Fig. \ref{dom_ext_100pN_fig}{\em a-b}). Concerning the individual strands,
a slight difference is observed for S$_5$
(compare Fig. \ref{cont_snap_fixN_f200pN_fig}{\em d} and
Fig. \ref{dom_ext_100pN_fig}{\em e}). Most of the native contacts of this domain break
before S$_3$ and S$_4$, except
the long tail at extension $\Delta R \gtrsim$ 11 nm due to the high
mechanical stability of
a single
contact between residues 61 and 65 (the high resistance of
this pair is probably due to the fact that, among the 25 possible contacts
of S$_5$, it has the shortest sequence distance $|61-65|=4$).
This scenario holds in about 90\%
of trajectories whereas
S$_5$ unravels
completely earlier than S$_3$ and S$_4$ in the remaining trajectories.
Thus, anchoring the C-terminus has much less effect on the unfolding pathways
compared to the case when the N-end is immobile.
It is worth noting that the effect of
extension geometry on the
mechanical stability of Ub has been studied experimentally by fixing its C-terminus \cite{Carrion-Vazquez_NSB03}.
The greatest mechanical strength (the longest unfolding time) occurs when
the protein is extended between N- and C-termini. This result has been
supported by Monte Carlo \cite{Carrion-Vazquez_NSB03} as well as MD
\cite{West_BJ06} simulations. However the mechanical
unfolding sequencing has not
been studied yet. It would be interesting to check our results on the
effect of fixing one end on Ub mechanical unfolding pathways
by experiments.
\subsection{Free energy landscape}
In experiments one usually uses the Bell formula \cite{Bell_Sci78}
(Eq. \ref{Bell_Ku_eq})
to extract $x_u$ for two-state proteins from the force dependence
of unfolding times. This formula is valid if the location of
the TS does not move under external force.
However, under external force the TS moves toward the NS. In this case, one can
use Eq. (\ref{Dudko_eq})
to estimate not only
$x_u$ but also $G^{\ddagger}$ for $\nu = 1/2$ and 2/3.
This will be done in this section for the single Ub and the trimer.
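In the Bell approximation the fit is linear in $f$: $\ln\tau_u = \ln\tau_u^0 - f x_u/k_BT$, so the slope of $\ln\tau_u$ versus $f$ gives $x_u$ directly. The extraction can be sketched as follows; the synthetic data in the test are illustrative, not the simulation output.

```python
import numpy as np

def bell_fit(forces_pN, tau_ns, T=285.0):
    """Fit ln(tau_u) = ln(tau_u0) - f * x_u / (kB * T) and return
    (x_u in nm, tau_u0 in ns); kB = 0.0138 pN nm / K."""
    kBT = 0.0138 * T
    slope, intercept = np.polyfit(forces_pN, np.log(tau_ns), 1)
    return -slope * kBT, float(np.exp(intercept))
```

The same linearized fit applied to the low-force points of Fig. \ref{refold_unfold_vs_force_fig} yields the slopes quoted in the caption.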
\begin{figure}
\epsfxsize=4.1in
\centerline{\epsffile{./Figs/fig10.eps}}
\linespread{0.8}
\caption{The semi-log plot for the force dependence of unfolding times at $T=285$ K.
Crosses and squares refer to the single Ub and the trimer
with the force applied to the N- and C-termini, respectively.
Circles refer to the
single Ub with the force applied to Lys48 and the C-terminus.
Depending on $f$, 30-50 unfolding events
were used for averaging. In the Bell approximation,
if the N- and C-termini of the trimer are pulled, then
we have the linear fit $y = 10.448 - 0.066x$ (black line) and $x_u \approx$ 0.24 nm.
The same value of $x_u$ was obtained for the single Ub \cite{MSLi_BJ07}.
In the case when we pull at Lys48 and the C-terminus of the single Ub, the linear fit
(black line) at
low forces is $y = 11.963 - 0.168x$ and $x_u = 0.61$ nm.
The correlation level of fitting is about 0.99.
The red and blue curves correspond to the fits with $\nu =1/2$ and
$2/3$, respectively (Eq. \ref{Dudko_eq2}).}
\label{refold_unfold_vs_force_fig}
\end{figure}
\subsubsection{Single Ub}
Using the Bell approximation and Fig. \ref{refold_unfold_vs_force_fig},
we have $x_u \approx 2.4 \AA \,$
\cite{MSLi_BJ07,Kouza_JCP08} which is consistent
with the experimental data $x_u = 1.4 - 2.5$ \AA
\cite{Carrion-Vazquez_NSB03,Schlierf_PNAS04,Chyan_BJ04}.
With the help of an all-atom simulation Li {\em et al.} \cite{Li_JCP04}
have shown that $x_u$ does depend on $f$. At low forces,
where the Bell approximation
is valid \cite{MSLi_BJ07}, they
obtained $x_u = 10$ \AA , which is noticeably higher than our and
the experimental value. Presumably,
this is due to the fact
that these authors computed $x_u$ from equilibrium
data, but their sampling was not good enough for such a long protein as Ub.
We now use Eq. (\ref{Dudko_eq2}) with
$\nu = 2/3$ and $\nu = 1/2$ to compute $x_u$ and $\Delta G^{\ddagger}$.
The regions
where the $\nu = 2/3$ and $\nu = 1/2$ fits work well are wider than that for
the Bell scenario (Fig. \ref{refold_unfold_vs_force_fig}). However, these fits
cannot cover the entire
force interval. The values of $\tau _u^0, x_u$ and $\Delta G^{\ddagger}$ obtained from
the fitting procedure are listed in Table \ref{Dudkotable}.
According to Ref. \onlinecite{Dudko_PRL06},
all of these quantities increase with decreasing $\nu$.
In our opinion, the microscopic theory
($\nu = 2/3$ and $\nu = 1/2$) gives too high a value for
$x_u$ compared to its typical
experimental value \cite{Carrion-Vazquez_NSB03,Schlierf_PNAS04,Chyan_BJ04}.
However, the latter was calculated from fitting experimental
data to the Bell formula,
and it is not clear how much the microscopic theory would change the result.
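For reference, the microscopic lifetime formula fitted here has the form below, written from memory of Ref. \cite{Dudko_PRL06} and intended only as a consistency sketch (it should be checked against Eq. \ref{Dudko_eq2}). A useful internal check is that $\nu \rightarrow 1$ recovers the Bell formula.

```python
import numpy as np

def dudko_tau(f, tau0, xu, dG, nu, kBT):
    """Unfolding lifetime tau(f) in the Dudko-Hummer-Szabo theory;
    nu = 1/2 (cusp) and nu = 2/3 (linear-cubic) select the shape of the
    free-energy profile. All quantities in consistent units (pN, nm, pN nm)."""
    u = 1.0 - nu * f * xu / dG
    return tau0 * u ** (1.0 - 1.0 / nu) * np.exp(-(dG / kBT) * (1.0 - u ** (1.0 / nu)))
```

For $\nu = 1$ the prefactor $u^{1-1/\nu}$ is unity and the exponent reduces to $-f x_u/k_BT$, i.e. the phenomenological Bell limit used in the first row of the fits.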
\begin{center}\begin{table}[!htbp]
\begin{tabular}{|c|*{9}{c|}}
\hline
& \multicolumn{3}{c|}{Ub}& \multicolumn{3}{c|}{Lys48-C}& \multicolumn{3}{c|}{trimer}\\
\hline
$\nu $& 1/2& 2/3& 1& 1/2 &2/3 &1& 1/2& 2/3 & 1\\
$\quad\tau_U^0 (\mu s)\quad$&13200&1289&9.1&4627&2304&157&1814&756&47\\
$x_{u} (\AA)\quad$ &7.92&5.86&2.4&12.35&10.59&6.1&6.21&5.09&2.4\\
$\quad\Delta G^{\ddagger}(k_BT)\quad$&17.39&14.22&-&15.90&13.94&-&13.49&11.64&-\\
\hline
\end{tabular}
\linespread{0.8}
\caption{Dependence of $\tau_u^0$, $x_u$ and $\Delta G^{\ddagger}$ on the fitting procedure for the single Ub, the Lys48-C linkage and the trimer. $\nu =1$ corresponds to the phenomenological Bell approximation (Eq. \ref{Bell_Ku_eq}). $\nu = 1/2$ and 2/3 refer to the microscopic theory (Eq. \ref{Dudko_eq2}). For Ub and the trimer
the force is applied to both termini.\label{Dudkotable}}
\end{table}
\end{center}
In order to estimate the unfolding barrier of Ub from the available
experimental data
and compare it with our theoretical estimate, we use the
following formula
\begin{equation}
\Delta G^{\ddagger} = -k_BT\ln(\tau _A/\tau _u^0)
\label{UnfBarrier_eq}
\end{equation}
where $\tau _u^0$ denotes the unfolding time in the absence of force and
$\tau _A$ is a typical unfolding prefactor. Since $\tau _A$ for unfolding is
not known, we use the typical value for folding $\tau _A = 1 \mu$s
\cite{MSLi_Polymer04,Schuler_Nature02}.
Using $\tau _u^0 = 10^4/4$ s
\cite{Khorasanizadeh_Biochem93} and Eq. (\ref{UnfBarrier_eq}) we obtain
$\Delta G^{\ddagger} = 21.6 k_BT$ which is in reasonable agreement
with our result
$\Delta G^{\ddagger} \approx 17.4 k_BT$, followed from the microscopic fit
with $\nu = 1/2$.
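The arithmetic behind this estimate is elementary and can be checked directly: with $\tau_A = 1\,\mu$s and $\tau_u^0 = 10^4/4$ s, Eq. (\ref{UnfBarrier_eq}) gives the barrier in units of $k_BT$.

```python
import math

def unfolding_barrier_kBT(tau_A_s, tau_u0_s):
    """Delta G / (kB T) = -ln(tau_A / tau_u0), Eq. (UnfBarrier_eq);
    both times in seconds."""
    return -math.log(tau_A_s / tau_u0_s)
```

With the experimental numbers quoted above this returns $\approx 21.6$, as stated in the text.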
Using the GB/SA continuum solvation model \cite{Qiu_JPCA97} and the
CHARMM27 force
field \cite{MacKerell_JPCB98}
Li and Makarov \cite{Li_JCP04,Li_JPCB04}
obtained a much
higher
value $\Delta G^{\ddagger} = 29$ kcal/mol $\approx 48.6 k_BT$.
Again,
the large
departure from the experimental result may be related to
poor sampling or to the force field they used.
\subsubsection{The effect of linkage on $x_u$ for single Ub}
One of the most interesting experimental results of
Carrion-Vazquez {\em et al.}\cite{Carrion-Vazquez_NSB03}
is that pulling Ub at different positions changes $x_u$ drastically. Namely,
if the force is applied at the C-terminal and Lys48, then
in the Bell approximation $x_u \approx 6.3$ \AA ,
which is about two and a half times larger than in the case when the N- and C-termini
are pulled.
Using the all-atom model
Li and Makarov \cite{Li_JCP04} have shown
that $x_u$ is much larger than 10 \AA. Thus, a
reliable theoretical estimate for $x_u$ of Lys48-C Ub is not available.
Our aim is to compute $x_u$
employing the present Go-like model \cite{Clementi_JMB00} as
it is successful
in predicting $x_u$ for the N-C Ub.
Fig. \ref{refold_unfold_vs_force_fig} shows the force dependence of
unfolding time of the fragment Lys48-C when the force is
applied to Lys48 and C-terminus. The unfolding time is defined
as the averaged time to stretch this fragment. From the linear fit
($\nu =1$ in Fig. \ref{refold_unfold_vs_force_fig}) at
low forces we obtain $x_u \approx 0.61$ nm which is in good agreement
with the experiment \cite{Carrion-Vazquez_NSB03}.
The Go model is suitable for estimating $x_u$ not only for Ub
but also for other proteins \cite{MSLi_BJ07a} because
unfolding is mainly governed by the native topology.
The fact that $x_u$ for the linkage Lys48-C is larger than that of the N-C
Ub may be understood using our recent observation \cite{MSLi_BJ07a}
that it anti-correlates with the contact order (CO) \cite{Plaxco_JMB98}.
Defining contact formation between any two amino acids ($|i-j| \geq 1$)
as occurring when
the distance between the centers of mass
of side chains $d_{ij} \leq 6.0$ \AA
(see also \texttt{http://depts.washington.edu/bakerpg/contact\_order/}),
we obtain CO values of 0.075 and 0.15 for the Lys48-C and N-C Ub,
respectively. Thus, $x_u$ of the Lys48-C linkage is larger
than that of the
N-C case because its CO is smaller. This result suggests that
the anti-correlation between $x_u$ and CO may hold
not only when proteins are pulled at termini \cite{MSLi_BJ07a}, but also
when the force is applied to different positions.
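The contact-order values quoted above follow the definition of Plaxco {\em et al.} \cite{Plaxco_JMB98}: the mean sequence separation of the native contacts, normalized by the chain length. A minimal sketch is given below; the toy contact list in the test is illustrative, not the Ub contact map.

```python
def relative_contact_order(n_residues, contacts):
    """Relative contact order: CO = (1 / (L * N)) * sum over the N native
    contacts of the sequence separation |i - j|, with L the chain length.
    Returns 0.0 for an empty contact list."""
    if not contacts:
        return 0.0
    return sum(abs(i - j) for i, j in contacts) / (n_residues * len(contacts))
```

A contact map dominated by short-range contacts thus yields a small CO, consistent with the anti-correlation between $x_u$ and CO invoked in the text.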
Note that the linker (not linkage) effect on $x_u$ has been
studied for protein L \cite{West_PRE06}. It seems
that this effect is
less pronounced compared to the effect caused by changing the pulling direction
studied here.
We have carried out the microscopic fit for $\nu =1/2$ and $2/3$
(Fig. \ref{refold_unfold_vs_force_fig}). As in the N-C Ub case,
$x_u$ is larger than its Bell value.
However, the linkage at Lys48 has little effect on the activation energy
$\Delta G^{\ddagger}$ (Table \ref{Dudkotable}).
\subsubsection{Determination of $x_u$ for the three-domain ubiquitin}
Since the trimer is a two-state folder (Fig. \ref{diagram}c),
one can determine
its averaged distance between the NS and TS, $x_u$,
along the end-to-end distance reaction coordinate using kinetic
theory \cite{Bell_Sci78,Dudko_PRL06}.
We now ask if the multi-domain structure of Ub changes $x_u$.
As in the
single Ub case \cite{MSLi_BJ07}, there exists a critical force
$f_c \approx 120$ pN
separating the low force
and high force regimes (Fig. \ref{refold_unfold_vs_force_fig}).
In the high force region, where the
unfolding barrier disappears, the unfolding time depends on $f$ linearly
(fitting curve not shown) as predicted
theoretically by Evans and Ritchie \cite{Evans_BJ97}.
In the Bell approximation, from the linear fit
(Fig. \ref{refold_unfold_vs_force_fig}) we obtain
$x_u\approx$ 0.24 nm which is exactly
the same as for the single Ub \cite{MSLi_BJ07}.
The values of $\tau _U^0, x_u$ and $\Delta G^{\ddagger}$, extracted
from
the nonlinear fit (Fig. \ref{refold_unfold_vs_force_fig}), are presented
in Table \ref{Dudkotable}. For both $\nu = 1/2$ and $\nu = 2/3$,
$\Delta G^{\ddagger}$ is a bit lower than that
for the single Ub.
In the Bell approximation,
the value of $x_u$ is the same for the single and three-domain Ub, but
this no longer holds for the $\nu = 2/3$ and $\nu = 1/2$ cases.
It would be interesting to perform experiments to check this result and
to see the effect of multiple domain structure on the FEL.
\subsection{Thermal unfolding of Ubiquitin}
\subsubsection{Thermal unfolding pathways}
To study thermal unfolding, the simulations were started from the NS
conformation and terminated when all of the native contacts were broken.
Two hundred trajectories were generated with different random seed numbers. The fractions of
native contacts of helix A and five $\beta$-strands are averaged
over all trajectories for the time window $0 \le \delta \le 1$.
The unfolding routes are studied by monitoring these fractions as
a function of $\delta$. Above $T \approx 500$ K
the strong thermal fluctuations (entropy driven regime) make all
strands and helix A unfold almost simultaneously. Below this
temperature the statistical preference for the unfolding
sequencing is observed. We focus on $T=370$ and 425 K. As in the
case of the mechanical unfolding the cluster 2 unfolds before
cluster 1 (results not shown). However, the main departure from
the mechanical behavior is that the strong resistance to thermal
fluctuations of the cluster 1 is mainly due to the stability of
strand S2 but not of helix A (compare Fig.
\ref{cont_time_thermal_unfold_fig}{\em c} and {\em d} with Fig.
\ref{dom_ext_100pN_fig}{\em e-f}).
The unfolding of cluster 2 before cluster 1
is qualitatively consistent with the experimental
observation that
the C-terminal fragment (residues 36-76)
is largely unstructured while native-like structure persists in
the N-terminal fragment (residues 1-35)
\cite{Bofill_JMB05,Cox_JMB93,Jourdan_Biochem00}.
This is also consistent with the data from the folding simulations
\cite{Sorenson_Proteins02} as well as with the experiments of Went and Jackson
\cite{Went_PEDS05} who have shown that the $\phi$-values $\approx 0$ in the
C-terminal region. However, our finding is at odds with the high $\phi$-values
obtained for several residues in this region by all-atom simulations \cite{Marianayagam_BPC04}
and by a semi-empirical approach \cite{Fernandez_JCP01}.
One possible reason for
the high $\phi$-values
in the C-terminal region is the force fields used.
For example, Marianayagam
and Jackson have employed the GROMOS 96 force field \cite{Gunstren_96} within
the GROMACS software package \cite{Berendsen_CPC95}.
It would be useful to check if the other force fields give the same result or
not.
\begin{figure}[!hbtp]
\epsfxsize=4.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/cont_time_thermal_unfold.eps}}
\linespread{0.8}
\caption{ (a) The dependence of fraction of intra-structure
native contacts on
the progressive variable $\delta$ for all structures at $T$=425 K.
(b) The same as in (a) but for $T=370$ K. (c) The dependence of
the fraction of all native contacts of the $\beta$-strands and helix A on $\delta$ at
$T$=425 K. (d) The same as in (c) but for $T=370$ K.}
\label{cont_time_thermal_unfold_fig}
\end{figure}
The evolution of the fraction of intra-structure contacts of A, B, C, D and E
is shown in Fig. \ref{cont_time_thermal_unfold_fig}{\em a} ($T=425$ K)
and {\em b} ($T=$370 K).
Roughly we have the unfolding sequencing,
given by Eq. (\ref{thermal_sequencing_struc}),
which strongly differs
from the mechanical one. The large stability of the $\alpha$ helix fragment
A against thermal fluctuations is consistent with the all-atom unfolding simulations
\cite{Alonso_ProSci98} and the experiments \cite{Went_PEDS05}.
The N-terminal structure B unfolds even after the core
part E, and at $T=370$ K its stability is comparable with that of helix A.
The fact that B can withstand thermal fluctuations at high temperatures
agrees with the experimental results of Went and Jackson \cite{Went_PEDS05}
and of Cordier and Grzesiek
\cite{Cordier_JMB02} who used the notation $\beta _1/\beta _2$ instead of B.
This also agrees with the results of Gilis and Rooman \cite{Gilis_Proteins01}
who used a coarse-grained model but disagrees with results from
all-atom simulations
\cite{Alonso_ProSci98}. This disagreement is probably due to the fact that
Alonso and Daggett studied only two short trajectories and B did not
completely unfold \cite{Alonso_ProSci98}.
The early unzipping of the structure C (Eq. \ref{thermal_sequencing_struc}) is
consistent with the MD prediction \cite{Alonso_ProSci98}.
Thus, our thermal unfolding sequencing (Eq. \ref{thermal_sequencing_struc}) is more complete
than that of the all-atom simulations
and gives reasonable agreement with the experiments.
We now consider the thermal instability of individual $\beta$-strands
and helix A.
At $T$ = 370 K
(Fig. \ref{cont_time_thermal_unfold_fig}{\em d}) the trend that S2
unfolds after S4 is more evident compared to the $T=425$ K case
(Fig. \ref{cont_time_thermal_unfold_fig}{\em c}). Overall, the
simple Go model leads to the sequencing given by Eq. (\ref{thermal_sequencing}).
\begin{subequations}
\begin{equation}
{\rm (C,D)} \rightarrow {\rm E} \rightarrow {\rm B} \rightarrow {\rm A}
\label{thermal_sequencing_struc}
\end{equation}
\begin{equation}
{\rm S5} \rightarrow {\rm S3} \rightarrow {\rm S1} \rightarrow {\rm A}
\rightarrow {\rm (S4,S2)}.
\label{thermal_sequencing}
\end{equation}
\end{subequations}
From Eqs. (\ref{mechanical_sequencing}), (\ref{mechan_fixN_sequencing})
and (\ref{thermal_sequencing}) it is
obvious that the thermal unfolding pathways of individual strands
markedly differ from
the mechanical ones. This is not surprising because the force should unfold
the termini first while under thermal fluctuations the most unstable part
is expected to detach first.
Interestingly, for the structures the thermal and mechanical
pathways
(compare Eqs. (\ref{thermal_sequencing_struc})
and (\ref{mechan_fixN_sequencing_struc})) are almost identical except that
the sequencing of C and D is less pronounced in the former case.
This coincidence is probably accidental.
The fact that S5 unfolds first agrees with the high-resolution NMR
data of Cordier and Grzesiek \cite{Cordier_JMB02} who studied the
temperature dependence of HBs of Ub.
However, using the $\psi$-value analysis Krantz {\em et al} \cite{Krantz_JMB04}
have found that S5 (B3 in their notation) breaks even after S1 and S2.
One possible reason is that, as pointed out by Fersht \cite{Fersht_PNAS04},
if there is any plasticity in the TS which can accommodate the
crosslink between the metal and bi-histidines, then $\psi$-values would be
significantly greater than zero even for an unstructured region, leading to
an overestimation of structure in the TS.
In agreement with our results,
the $\phi$-value analysis \cite{Went_PEDS05} yields that S5 breaks before S1
and A but it fails to determine whether S5 breaks before S3.
By modeling the
amide I vibrations Chung {\em et al.} \cite{Chung_PNAS05}
argued that S1 and S2 are
more stable than S3, S4 and S5. Eq.
\ref{thermal_sequencing} shows that the thermal stability of S1
and S2 is indeed higher than that of S3 and S5, but S4 may be more stable
than S1. The reason for the only partial agreement between our results
and those of Chung {\em et al.} remains unclear. It may be caused
either by the simplicity of the Go model or by the model proposed
in Ref. \cite{Chung_PNAS05}.
The relatively high stability of S4 (Eq. \ref{thermal_sequencing}) is
supported by the $\psi$-value analysis \cite{Krantz_JMB04}.
\begin{figure}[!hbtp]
\epsfxsize=2.5in
\vspace{0.3in}
\centerline{\epsffile{./Figs/uftime_T_.eps}}
\linespread{0.8}
\caption{Dependence of thermal unfolding time $\tau _{u}$ on
$\epsilon _H/T$,
where $\epsilon _H$ is the hydrogen bond energy. The straight line is a fit
$y = -8.01 + 10.48x$. \label{uftime_T_fig}}
\end{figure}
\subsubsection{Thermal unfolding barrier}
Figure \ref{uftime_T_fig} shows the temperature dependence of the
unfolding time $\tau_{u}$, which depends on the thermal unfolding
barrier, $\Delta F^T_{u}$, exponentially: $\tau_{u} \approx
\tau_{u}^0 \exp(\Delta F^T_{u}/k_BT)$. From the linear fit in
Fig. \ref{uftime_T_fig} we obtain $\Delta F^T_{u} \approx 10.48
\epsilon_H \approx 10.3$ kcal/mol.
It is interesting to note that $\Delta F^T_{u}$ is comparable to
$\Delta H_m \approx 11.4$ kcal/mol obtained from the equilibrium data
(Fig. \ref{diagram_fN_fig}{\em b}). However, the latter is defined
by an equilibrium constant (the free energy difference
between the NS and DS) and not by a rate constant
(see, for example, Ref. \onlinecite{Noronha_BJ04}).
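The conversion of the Arrhenius slope into a barrier can be sketched numerically. The snippet below regenerates synthetic data from the quoted linear fit $y = -8.01 + 10.48x$ and recovers the slope by linear least squares; the value $\epsilon_H \approx 0.98$ kcal/mol used to convert the slope into kcal/mol is an assumption consistent with $10.48\,\epsilon_H \approx 10.3$ kcal/mol, not a number taken from this section.

```python
import numpy as np

# Arrhenius analysis: tau_u ~ tau_u^0 * exp(dF_u / k_B T), so
# ln(tau_u) is linear in eps_H / T and the slope gives dF_u in units of eps_H.
x = np.linspace(0.8, 1.6, 9)              # eps_H / T values (illustrative range)
ln_tau = -8.01 + 10.48 * x                # quoted linear fit from the figure

slope, intercept = np.polyfit(x, ln_tau, 1)

eps_H = 0.98                              # kcal/mol, assumed H-bond energy
dF_u = slope * eps_H                      # thermal unfolding barrier, kcal/mol
print(round(slope, 2), round(dF_u, 1))    # 10.48 and about 10.3
```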
\subsection{Dependence of the unfolding force of single ubiquitin on $T$}
Recently, using an improved temperature-control technique to perform
pulling experiments
on a single Ub, Yang {\em et al.} \cite{Yang_RSI06}
have found that the unfolding force
depends linearly on $T$
for 278 K $ \le T \le$ 318 K, and that the slope of this linear dependence
does not depend on the pulling speed.
Our goal is
to see if the present Go model
can reproduce this result at least qualitatively, and more importantly,
to check whether the linear dependence holds for the whole temperature
interval where $f_{max} > 0$.
The pulling simulations have been carried out at two speeds, following the
protocol described
in Chapter 3.
Fig. \ref{fmax_T_fig}a shows the force-extension profile of the single
Ub for $T=288$ and 318 K at the pulling speed $v= 4.55\times 10^8$ nm/s.
The peak is lowered as $T$ increases because thermal fluctuations promote
the unfolding of the system. In addition, the peak moves toward a
lower extension.
This is also understandable, because at higher $T$ a protein can
unfold
at lower extensions due to thermal fluctuations.
For $T=318$ K, e.g., the maximum force is located at the extension
of $\approx 0.6$ nm, which
corresponds to the plateau observed in the time dependence of
the end-to-end distance under constant force
\cite{Irback_PNAS05,MSLi_BJ07}.
One can show that, in agreement with Chyan {\em et al.}
\cite{Chyan_BJ04}, at this maximum the extension between
strands S1 and S5 is $\approx$ 0.25 nm. Beyond the
maximum, all of the
native contacts between strands S1 and S5 are broken.
At this stage, the chain ends are almost
stretched out, but the rest of the polypeptide chain remains
native-like.
The temperature dependence of the unfolding force, $f_{max}$,
is shown in Fig. \ref{fmax_T_fig}b
for 278 K $\le T \le$ 318 K, and for two pulling speeds.
The experimental results of Yang {\em et al.} are also presented
for comparison. Clearly,
in agreement with experiments \cite{Yang_RSI06}
linear behavior is observed and
the corresponding slopes do not depend on $v$.
Using the fit $f_{max} = f_{max}^0 - \gamma T$, we obtain the ratio
of the simulation slope to the experimental one,
$\gamma _{sim}/\gamma _{exp} \approx 0.56$.
Thus, the Go model gives
a weaker temperature dependence than the experiments.
Given the simplicity of this model, the agreement between theory and experiment
should be considered reasonable, but it would be interesting to check if
a fuller accounting of non-native contacts and environment can improve
our results.
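The slope ratio quoted above follows directly from the linear fits listed in the caption of Fig. \ref{fmax_T_fig}; averaging the two simulation slopes and the two experimental ones reproduces $\gamma_{sim}/\gamma_{exp} \approx 0.56$:

```python
# Slopes gamma (in pN/K) of the fits f_max = f_max^0 - gamma*T quoted in
# the caption of Fig. fmax_T_fig: two pulling speeds each for the
# simulations and for the experiments of Yang et al.
gamma_sim = (1.241 + 1.335) / 2.0   # average simulation slope
gamma_exp = (2.2 + 2.375) / 2.0     # average experimental slope

ratio = gamma_sim / gamma_exp
print(round(ratio, 2))              # 0.56
```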
\begin{figure}
\epsfxsize=5.2in
\centerline{\epsffile{./Figs/fig11.eps}}
\linespread{0.8}
\caption{(a) The force-extension profile obtained at
$T=285$ K (black) and 318 K (red) at the pulling speed
$v= 4.55\times 10^8$ nm/s. $f_{max}$ is located at the extension
$\approx 1$ nm and 0.6 nm for $T=285$ K and 318 K, respectively.
The results
are averaged over 50 independent trajectories.
(b) The dependence of $f_{max}$ on temperature for two values
of $v$. The experimental data are taken from
Ref. \onlinecite{Yang_RSI06} for comparison.
The linear fits for the simulations are
$y = 494.95 - 1.241x$ and $y = 580.69 - 1.335x$. For the experimental sets
we have $y = 811.6 - 2.2x$ and $y = 960.25 - 2.375x$.
(c) The temperature dependence of $f_{max}$ for the whole temperature
region and two values
of $v$. The arrow marks the crossover between two nearly linear regimes.}
\label{fmax_T_fig}
\end{figure}
As evident from Fig. \ref{fmax_T_fig}c,
the dependence of $f_{max}$ on $T$ ceases
to be linear for the whole temperature interval.
The nonlinear temperature dependence of $f_{max}$ may be understood
qualitatively using the simple theory of Evans and Ritchie
\cite{Evans_BJ97}. For an external force linearly ramped
with time,
the unfolding
force is given by Eq. (\ref{Bell_Ku_eq}).
(A more complicated microscopic
expression for $f_{max}$ is provided by Eq. \ref{Dudko_eq}).
Since $\tau_U^0$ is temperature dependent and $x_u$ also displays a weak
temperature dependence \cite{Imparato_PRL07}, the resulting $T$-dependence
should be nonlinear.
This result can also be understood by noting that the temperatures considered
here are low enough that we are not in the entropic limit,
where the linear dependence would hold for the worm-like chain model
\cite{Marko_Macromolecules95}.
The arrow in Fig. \ref{fmax_T_fig}c separates two regimes of the $T$-dependence
of $f_{max}$. The crossover takes place roughly in the temperature
interval where the temperature dependence of the equilibrium
critical force changes the slope (Fig. \ref{diagram_fN_fig}).
At low temperatures, thermal fluctuations are weak and
the temperature dependence of $f_{max}$
is weaker compared to the high temperature regime.
Thus the linear dependence observed in the experiments of Yang {\em et al.}
\cite{Yang_RSI06} is valid, but only in a narrow $T$-interval.
\subsection{Conclusions}
To summarize, in this chapter we have obtained the following novel results.
It was shown that the refolding of Ub is a two-stage process in which
the ``burst'' phase exists on very short time scales.
Using the
dependence of the refolding and unfolding times
on $f$, we computed $x_f$, $x_{u}$, and the unfolding barriers. Our results
for the FEL parameters are in acceptable agreement
with the experiments. It has been demonstrated
that fixing the N-terminus of Ub has a much
stronger effect on the mechanical unfolding pathways than anchoring
the C-terminus. In comparison with
previous studies, we provide a more
complete picture of the thermal unfolding pathways, which are very different
from the mechanical ones.
Mechanically strand S1 is the most unstable
whereas the thermal fluctuations break contacts of S5 first.
We have shown that, in agreement with the experiment of Carrion-Vazquez
{\em et al.}
\cite{Carrion-Vazquez_NSB03}, the Lys48-C linkage
changes $x_u$ drastically. From the point of view of
biological function,
the Lys63-C linkage is very important, but the study of its mechanical
properties is less interesting than that of Lys48-C because this fragment
is almost stretched out in the NS.
Finally, we have reproduced, at a semi-quantitative level, the
linear temperature dependence of the unfolding force of Ub observed
in experiment \cite{Yang_RSI06}.
Moreover, we have shown that for the whole
temperature region the dependence of $f_{max}$ on $T$ is nonlinear,
and the observed linear dependence is valid only for a narrow temperature
interval. This behavior should be common to all proteins because it
reflects the fact that
the entropic limit does not apply at all temperatures.
\newpage
\begin{center}
\section{Dependence of protein mechanical unfolding pathways on pulling speeds}
\end{center}
\subsection{Introduction}
As cytoskeletal proteins, large actin-binding proteins play key roles
in cell organization, mechanics and signalling \cite{Stossel_NRMCB01}.
During the process of permanent cytoskeleton reorganization, all
involved participants are subject to mechanical stress. One
of them is the DDFLN4 domain,
which binds different components of
the actin-binding protein. Therefore, understanding the mechanical response of
this domain to a stretching force is of great interest.
Recently, using AFM experiments,
Schwaiger {\em et al.} \cite{Schwaiger_NSMB04,Schwaiger_EMBO05} have obtained two major results for DDFLN4.
First, this domain (Fig. \ref{native_ddfln4_strands_fig})
unfolds via intermediates as the force-extension curve displays two peaks
centered at $\Delta R \approx 12$ nm and
$\Delta R \approx 22$ nm.
Second, with the help of loop mutations, it was suggested
that during the first unfolding event (first peak) strands A and B
unfold first,
so that strands C-G form a stable intermediate structure, which then
unfolds in the second unfolding event (second peak).
In addition, Schwaiger {\em et al.}
\cite{Schwaiger_EMBO05} have also determined the
FEL parameters of DDFLN4.
With the help of the C$_{\alpha}$-Go model \cite{Clementi_JMB00}, Li {\em et al.}
\cite{MSLi_JCP08}
have demonstrated that the mechanical unfolding of DDFLN4 does follow
the three-state
scenario but the full agreement between theory and experiments was not
obtained. The simulations \cite{MSLi_JCP08} showed
that two peaks in the force-extension profile occur
at $\Delta R \approx 1.5$ nm and 11 nm, i.e.,
the Go modeling does not detect the peak
at $\Delta R \approx 22$ nm. Instead, it predicts the existence of
a peak not far from the native
conformation. More importantly, theoretical unfolding pathways
\cite{MSLi_JCP08} are very different from the
experimental ones \cite{Schwaiger_NSMB04}:
the unfolding initiates from the C-terminus,
not from the N-terminus as shown by the experiments.
It should be noted that the pulling speed used in the previous simulations
is about five orders of magnitude larger than
the experimental value \cite{Schwaiger_NSMB04}.
Therefore, a natural
question
is whether the discrepancy between theory and experiments is due
to the huge difference in pulling speeds.
Motivated by this, we have carried out low-$v$ simulations using the Go
model \cite{Clementi_JMB00}.
Interestingly,
we found that the unfolding pathways of DDFLN4 depend on the pulling speed,
and only at
$v \sim 10^4$ nm/s does the theoretical unfolding sequencing coincide with
the experimental one \cite{Schwaiger_NSMB04}.
However, even at low loading rates,
the existence of the peak at $\Delta R \approx 1.5$ nm
remains robust
and the Go modeling does not capture the maximum at $\Delta R \approx 22$ nm.
In the previous work \cite{MSLi_JCP08},
using dependencies
of unfolding times on external forces,
the distance between the NS and the first transition state (TS1),
$x_{u1}$, and the distance between IS and the second
transition state (TS2), $x_{u2}$, of DDFLN4 have been estimated
(see Fig. \ref{free_3state_concept_fig}).
In the Bell approximation, the agreement between the theory and
experiments \cite{Schwaiger_EMBO05} was reasonable.
However, in the non-Bell approximation
\cite{Dudko_PRL06}, the theoretical values of $x_{u1}$, and $x_{u2}$
seem to be high \cite{MSLi_JCP08}.
In addition the unfolding barrier between the
TS1 and NS,
$\Delta G^{\ddagger}_1$, is clearly higher than its experimental
counterpart (Table \ref{DDFLN4_table}).
\begin{figure}
\epsfxsize=4.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/free_3state_concept.eps}}
\linespread{0.8}
\caption{Schematic plot of
the free energy landscape
for a three-state protein as a function of the end-to-end distance.
$x_{u1}$ and $x_{u2}$ refer to the distance between the
NS and
TS1
and the distance between IS and TS2.
The unfolding barrier
$\Delta G^{\ddagger}_1 = G_{TS1} - G_{NS}$ and
$\Delta G^{\ddagger}_2 = G_{TS2} - G_{IS}$.}
\label{free_3state_concept_fig}
\vspace{5 mm}
\end{figure}
In this chapter \cite{MSLi_JCP09}, assuming that the microscopic kinetic theory
\cite{Dudko_PRL06} holds for a three-state protein, we calculated
$x_{ui} (i=1,2)$ and unfolding barriers
by a different method
which is based on dependencies of peaks in the force-extension curve
on $v$. Our present estimates of
the unfolding FEL parameters are more reasonable
than the previous ones \cite{MSLi_JCP08}.
Finally, we have also studied thermal unfolding
pathways of DDFLN4 and shown
that the
mechanical unfolding pathways are different from the thermal ones.
This chapter is based on the results from Ref. \cite{MSLi_JCP09}.
\subsection{Method}
\begin{figure}
\includegraphics[scale=0.5]{./Figs/ab_.eps}
\linespread{0.8}
\caption{ (a) NS conformation of
DDFLN4 taken from the PDB
(PDB ID: 1ksr). There are seven $\beta$-strands: A (6-9), B (22-28),
C (43-48), D (57-59), E (64-69), F (75-83), and
G (94-97).
In the NS there are 15, 39, 23, 10, 27, 49, and 20 native contacts
formed by strands A, B, C, D, E, F, and G with
the rest of the protein, respectively.
The end-to-end distance in the NS $R_{NS}=40.2$ \AA.
(b) There are 7 pairs of strands, which have the nonzero number
of mutual native contacts
in the NS. These pairs are P$_{\textrm{AB}}$,
P$_{\textrm{AF}}$, P$_{\textrm{BE}}$,
P$_{\textrm{CD}}$, P$_{\textrm{CF}}$, P$_{\textrm{DE}}$, and P$_{\textrm{FG}}$.
The number of native contacts between them
are 11, 1, 13, 2, 16, 8, and 11,
respectively.}
\label{native_ddfln4_strands_fig}
\end{figure}
The native conformation
of DDFLN4, which has seven $\beta$-strands, enumerated
A to G,
was taken from the PDB (PDB ID: 1KSR,
Fig. \ref{native_ddfln4_strands_fig}a).
We assume that
residues $i$ and $j$ are in native contact if the distance
between them in the native conformation
is shorter than a cutoff distance $d_c =
6.5$ \AA.
With this choice of $d_c$, the molecule has 163 native contacts.
Native contacts exist between seven pairs
of $\beta$-strands
P$_{\textrm{AB}}$,
P$_{\textrm{AF}}$, P$_{\textrm{BE}}$,
P$_{\textrm{CD}}$, P$_{\textrm{CF}}$, P$_{\textrm{DE}}$, and P$_{\textrm{FG}}$
(Fig. \ref{native_ddfln4_strands_fig}b).
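The native-contact criterion above is straightforward to implement. The sketch below counts contacts between C$_\alpha$ beads closer than $d_c = 6.5$ \AA\ in a toy set of coordinates; excluding pairs with $|i-j| < 3$ mimics the usual Go-model convention of discarding near-neighbors along the chain, and is an assumption since this section does not state the minimal sequence separation used.

```python
import numpy as np

def native_contacts(coords, d_c=6.5, min_sep=3):
    """Return (i, j) pairs whose distance is below the cutoff d_c
    (in Angstrom), excluding pairs closer than min_sep along the chain."""
    n = len(coords)
    pairs = []
    for i in range(n):
        for j in range(i + min_sep, n):
            if np.linalg.norm(coords[i] - coords[j]) < d_c:
                pairs.append((i, j))
    return pairs

# Toy antiparallel "hairpin": two short strands 5 A apart, so cross-strand
# beads with small axial offset are in contact, while the near-neighbor
# pair at the turn is excluded by min_sep.
strand1 = np.array([[0.0, 0.0, 3.8 * k] for k in range(4)])
strand2 = np.array([[5.0, 0.0, 3.8 * k] for k in range(3, -1, -1)])
coords = np.vstack([strand1, strand2])

contacts = native_contacts(coords)
print(len(contacts))  # 7 cross-strand contacts for this toy geometry
```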
We used the C$_{\alpha}$-Go model \cite{Clementi_JMB00} for the molecule.
The corresponding parameters of this model are chosen as in Chapter 4.
The simulations were carried out in the over-damped limit
with the water viscosity $\zeta = 50\frac{m}{\tau_L}$.
The Brownian dynamics equation (Eq. \ref{overdamped_eq}) was numerically solved by the simple Euler method (Eq. \ref{Euler}).
Due to the large viscosity, we can choose a large time step,
$\Delta t = 0.1 \tau_L$, and this choice allows us to study unfolding at low
loading rates.
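The Euler integration of the overdamped Langevin equation takes a particularly simple form, $x_{t+\Delta t} = x_t + (F(x_t)/\zeta)\,\Delta t + \sqrt{2 k_B T \Delta t/\zeta}\,\eta$, with $\eta$ a standard Gaussian random number. A minimal one-dimensional sketch (harmonic force, reduced units; all parameter values are illustrative, not those of the DDFLN4 simulations) is:

```python
import math, random

def euler_overdamped(x0, k, zeta, kT, dt, n_steps, rng):
    """Overdamped Brownian dynamics of a bead in a harmonic well F = -k*x,
    integrated with the simple Euler scheme."""
    x = x0
    noise_amp = math.sqrt(2.0 * kT * dt / zeta)
    for _ in range(n_steps):
        force = -k * x
        x += force / zeta * dt + noise_amp * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(0)
# With kT = 0 the dynamics is deterministic: x_n = x0 * (1 - k*dt/zeta)**n
x_end = euler_overdamped(x0=1.0, k=1.0, zeta=50.0, kT=0.0, dt=0.1,
                         n_steps=100, rng=rng)
print(abs(x_end - (1.0 - 0.1 / 50.0) ** 100) < 1e-9)  # True
```

The large viscosity makes the deterministic drift per step small ($k\Delta t/\zeta = 0.002$ here), which is why the simple Euler scheme remains stable with a comparatively large time step.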
In the constant velocity force simulations, we follow the protocol described in
section 3.1.2.
The mechanical unfolding sequencing
was studied by
monitoring the fraction of native contacts of the $\beta$-strands
and of their seven pairs as a function of $\Delta R$,
which is accepted as a good reaction coordinate.
\subsection{Results}
\subsubsection{Robustness of peak at end-to-end extension $\Delta R \approx 1.5$ nm and absence
of maximum at $\Delta R \approx 22$ nm at low pulling speeds}
\begin{figure}
\epsfxsize=5.0in
\vspace{0.2in}
\centerline{\epsffile{./Figs/BJ_force_ext_traj_r100.eps}}
\linespread{0.8}
\caption{(a) Typical force-extension curves for
$v =7.2\times 10^6$ nm/s. (b) The same as in (a) but for $v=6.4\times 10^5$ nm/s. (c) The same as in (a) but for $v=5.8\times 10^4$ nm/s. The arrow roughly refers to locations of additional peaks for two trajectories (red and green).
(d) The same as in (c) but for $v=2.6\times 10^4$ nm/s.}
\label{force_ext_traj_fig}
\vspace{2 mm}
\end{figure}
In the previous high pulling speed
($v = 3.6\times 10^7$ nm/s) Go simulations
\cite{MSLi_JCP08}, the force-extension curve shows two
peaks at $\Delta R \approx 1.5$ nm and 10 nm, while the experiments
showed that peaks appear at $\Delta R \approx 12$ nm and 22 nm.
The question we ask is whether one can reproduce the experimental results at
low pulling speeds. Within our computational facilities, we were
able to perform simulations at the lowest speed $v = 2.6\times 10^4$ nm/s,
which is about three orders of magnitude lower than that used
before \cite{MSLi_JCP08}.
Fig. \ref{force_ext_traj_fig} shows
force-extension curves for four representative pulling speeds.
For the highest $v = 7.2\times 10^6$ nm/s
(Fig. \ref{force_ext_traj_fig}a), there are two peaks
located at extensions $\Delta R \approx 1.5$ nm and 9 nm.
As evident from Figs. \ref{force_ext_traj_fig}b, c and d,
the existence of the first peak remains robust against reduction of $v$.
Positions
of $f_{max1}$ weakly fluctuate over the range
$0.9 \lesssim \Delta R \lesssim 1.8$ nm for all values of $v$
(Fig. \ref{dist_fmax_pos_fig}). As $v$ is reduced, $f_{max1}$ decreases, but this peak does not
vanish if one extrapolates our results to the lowest pulling speed
$v_{exp} = 200$ nm/s
used in the experiments \cite{Schwaiger_NSMB04}
(see below).
\begin{wrapfigure}{l}{0.42\textwidth}
\includegraphics[width=0.40\textwidth]{./Figs/BJ_dist_fmax_pos_r100.eps}
\hfill\begin{minipage}{6.3 cm}
\linespread{0.8}
\caption{Distributions of positions of $f_{max1}$ and
$f_{max2}$ for $v =7.2\times 10^6$ (black), $6.4\times 10^5$ (red), $5.8\times 10^4$ (blue) and $2.6\times 10^4$ nm/s (green). \label{dist_fmax_pos_fig}}
\end{minipage}
\end{wrapfigure}
Thus, in contrast to the experiments, the first peak
occurs already at small end-to-end extensions.
We do not exclude the possibility that such a peak was
overlooked in the experiments,
as it happened with the titin domain
I27. Recall that, for this domain the first
AFM experiment \cite{Rief_Science97}
did not trace the hump which was observed in the later
simulations \cite{Lu_BJ98} and experiments \cite{Marszalek_Nature99}.
Positions of the second peak $f_{max2}$
are more scattered compared to $f_{max1}$, ranging
from about 8 nm to 12 nm (Fig. \ref{dist_fmax_pos_fig}). Overall, they
move toward higher values upon
reduction of $v$ (Fig. \ref{force_ext_traj_fig}). While at $v=6.4\times 10^5$ nm/s only about 15$\%$
of trajectories display $\Delta R_{max2} > 10$ nm, this percentage reaches
65$\%$ and 97\% for $v=5.8\times 10^4$ nm/s and $2.6\times 10^4$ nm/s,
respectively (Fig. \ref{dist_fmax_pos_fig}).
At low $v$, unfolding pathways show rich diversity.
For $v \gtrsim 6.4\times 10^5$ nm/s, the force-extension profile shows
only two peaks in all trajectories
studied (Fig. \ref{force_ext_traj_fig}a and \ref{force_ext_traj_fig}b), while
for the lower speeds $v = 5.8\times 10^4$ nm/s and $2.6\times 10^4$ nm/s,
about $4\%$ of trajectories display even three peaks
(Fig. \ref{force_ext_traj_fig}c and
\ref{force_ext_traj_fig}d), i.e., four-state behavior.
We do not observe any peak at $\Delta R \approx 22$ nm for all
loading rates (Fig. \ref{force_ext_traj_fig}),
and it is very unlikely that it will appear at lower
values of $v$.
Thus, the Go model, in which non-native interactions are neglected,
fails to reproduce this experimental observation.
Whether inclusion of non-native interactions would
cure this problem requires further studies.
\subsubsection{Dependence of mechanical pathways on loading rates}
The considerable fluctuations of the peak positions and the
occurrence of even three peaks already suggest that the unfolding
pathways, which are kinetic in nature, may change if $v$ is varied.
To clarify this point in more detail, we show $\Delta R$-dependencies
of native contacts of all $\beta$-strands
and their pairs for $v=7.2\times 10^6$ nm/s
(Figs. \ref{cont_ext_v15_fig}{\em a,b}) and $v=2.6\times 10^4$ nm/s
(Figs. \ref{cont_ext_v15_fig}{\em c,d}). For $v=7.2\times 10^6$ nm/s, one has the
following unfolding pathways:
\begin{subequations}
\begin{equation}
G \rightarrow F \rightarrow (C,E,D) \rightarrow B \rightarrow A,
\label{pathways_v6_strand_eq}
\end{equation}
\begin{equation}
P_{AF} \rightarrow P_{BE} \rightarrow (P_{FG}, P_{CF}) \rightarrow P_{CD}
\rightarrow P_{DE}
\rightarrow P_{AB}.
\label{pathways_v6_pair_eq}
\end{equation}
\end{subequations}
According to this scenario, the unfolding initiates from the C-terminal,
while the experiments \cite{Schwaiger_NSMB04} showed that
strands A and B unfold first.
For $v=2.6\times 10^4$ nm/s, Fig. \ref{cont_ext_v15_fig}{\em c} gives
the following sequencing
\begin{subequations}
\begin{equation}
(A,B) \rightarrow (C,D,E) \rightarrow (F,G),
\label{pathways_v15_strand_eq}
\end{equation}
\begin{equation}
P_{AF} \rightarrow (P_{BE},P_{AB}) \rightarrow P_{CF} \rightarrow
(P_{CD},P_{DE},P_{FG}).
\label{pathways_v15_pair_eq}
\end{equation}
\end{subequations}
\begin{figure}[!hbtp]
\epsfxsize=4.5in
\vspace{0.2in}
\centerline{\epsffile{./Figs/v6_v15_r100.eps}}
\linespread{0.8}
\caption{ (a) Dependences of averaged
fractions of native contacts
formed by seven strands on $\Delta R$ for $v = 7.2\times 10^6$ nm/s.
(b) The same as in (a) but for pairs of strands.
(c)-(d) The same as in a)-b) but
for $v=2.6\times 10^4$ nm/s.
Results were averaged over 50 trajectories.}
\label{cont_ext_v15_fig}
\end{figure}
We obtain the very interesting result that, at this low loading rate,
in agreement with the AFM experiments
\cite{Schwaiger_NSMB04}, the N-terminus detaches
from the protein first.
For both values of $v$,
the first peak
corresponds to breaking of native contacts between
strands A and F (Fig. \ref{cont_ext_v15_fig}{\em d} and Fig. \ref{cont_ext_v15_fig}{\em b}).
However, the structure of unfolding intermediates, which correspond to this
peak, depends on $v$.
For $v=7.2\times 10^6$ nm/s (Fig. \ref{cont_ext_v15_fig}{\em a,b}), at
$\Delta R \approx 1.5$ nm, native contacts between F and G are
broken and strand G has already
been unstructured (Fig. \ref{cont_ext_v15_fig}a). Therefore, for this
pulling speed, the intermediate consists of
six ordered strands A-F
(see Fig. \ref{snapshot_v6_v15_fig}a
for a typical snapshot).
In the $v=2.6\times 10^4$ nm/s case, just after the first peak,
none of the strands
unfolds completely (Fig. \ref{cont_ext_v15_fig}{\em c}),
although
the (A,F) and (B,E) contacts have already been broken (Fig. \ref{cont_ext_v15_fig}{\em d}).
Thus, the intermediate looks very different from that in the high-$v$ case, as it has
all secondary structures partially structured
(see Fig. \ref{snapshot_v6_v15_fig}b for a typical snapshot).
Since the experiments \cite{Schwaiger_NSMB04}
showed that intermediate structures contain five ordered strands
C-G, the intermediates predicted by simulations are more ordered than the
experimental ones. Nevertheless,
our low-loading-rate Go simulations provide the same pathways
as the experiments. The difference between theory and experiments
in the intermediate structures comes from
the different locations of the first peak.
It remains unclear if this is a shortcoming of Go models or of
the experiments, because it is hard to imagine that a $\beta$-protein like
DDFLN4
displays
the first peak at such a large extension, $\Delta R \approx 12$ nm
\cite{Schwaiger_NSMB04}. The force-extension curve of
the titin domain I27, which has a similar native topology, for example, displays the
first peak at $\Delta R \approx 0.8$ nm \cite{Marszalek_Nature99}.
From this perspective, the theoretical result is more plausible.
\begin{figure}
\epsfxsize=5.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/snapshot_v6_v15_r100.eps}}
\linespread{0.8}
\caption{ (a) Typical snapshot obtained at $\Delta R = 2$ nm
and $v= 7.2\times 10^6$ nm/s. A single contact between strand A (blue spheres)
and strand F (orange) was broken. Native contacts between F and G (red) are also
broken and G completely unfolds. (b) The same as in (a) but
for $v=2.6\times 10^4$ nm/s. Native contacts between A and F and between
B and E are broken, but all strands remain partially structured.
(c) Typical snapshot obtained at $\Delta R = 11$ nm
and $v= 7.2\times 10^6$ nm/s. Native contacts between pairs are broken except
those between strands A and B.
All 11 unbroken contacts are marked by solid lines. Strands A and B
do not unfold yet.
(d) The same as in (c) but for $v=2.6\times 10^4$ nm/s.
Two of the 11 native contacts between F and G are broken (dashed lines).
Contacts between other pairs are already broken, but F and G remain structured.
}
\label{snapshot_v6_v15_fig}
\end{figure}
The strong dependence of unfolding pathways on loading rates is
also clearly seen from structures around the second peak.
In the $v=7.2\times 10^6$ nm/s case,
at $\Delta R \approx 11$ nm,
strands A and B remain structured, while other strands detach
from a protein core (Fig. \ref{cont_ext_v15_fig}{\em a} and
Fig. \ref{snapshot_v6_v15_fig}c). This is entirely different from
the low loading case,
where A and B completely unfold
but F and G still survive (Fig. \ref{cont_ext_v15_fig}{\em c} and
Fig. \ref{snapshot_v6_v15_fig}d).
The result obtained for $v=2.6\times 10^4$ nm/s
is in full agreement with the experiments \cite{Schwaiger_NSMB04},
which show that at $\Delta R \approx 12$ nm A and B have detached from the core.
Note that the unfolding pathways given by Eqs. (\ref{pathways_v6_strand_eq}),
(\ref{pathways_v6_pair_eq}),
(\ref{pathways_v15_strand_eq}), and
(\ref{pathways_v15_pair_eq})
are valid in the statistical sense. In all 50 trajectories studied
for $v=7.2\times 10^6$ nm/s, strands A and B always unfold last, and F and G
unfold first (Eq. \ref{pathways_v6_strand_eq}), while the sequencing of
unfolding events for C, D and E depends on the individual trajectory.
At $v=2.6\times 10^4$ nm/s, most trajectories follow
the pathway given by Eq. (\ref{pathways_v15_strand_eq}), but
we have observed a few unusual pathways, as illustrated in
Fig. \ref{reentrance_FG_fig}. With three peaks in
the force-extension profile,
the evolution of the native contacts of
F and G displays atypical behavior.
At $\Delta R \approx 7$ nm, these strands fully unfold
(Fig. \ref{reentrance_FG_fig}c),
but they refold again at $\Delta R \approx 11$ nm (Fig. \ref{reentrance_FG_fig}b
and \ref{reentrance_FG_fig}d). Their final unfolding takes place
around $\Delta R \approx 16.5$ nm. As follows from
Fig. \ref{reentrance_FG_fig}b, the first peak in
Fig. \ref{reentrance_FG_fig}a corresponds to unfolding of G. Strands
A and B unfold after passing the second peak, while the third maximum occurs
due to unfolding of C-G , i.e. of a core part
shown in Fig. \ref{reentrance_FG_fig}d.
\begin{figure}
\epsfxsize=4.5in
\vspace{0.2in}
\centerline{\epsffile{./Figs/reentrance_FG_.eps}}
\linespread{0.8}
\caption{(a) Force-extension curve for
an atypical unfolding pathway at $v = 2.6\times 10^4$ nm/s. (b)
Dependence of fractions of native contacts of seven strands on $\Delta R$.
Snapshot at $\Delta R = 7.4$ nm (c) and $\Delta R = 11$ nm (d).
\label{reentrance_FG_fig}}
\end{figure}
The dependence of unfolding pathways on $v$ is understandable.
If a protein is pulled very fast, the perturbation, caused
by the external force, does not
have enough time to propagate to the fixed N-terminal before the C-terminal
unfolds. Therefore, at very high $v$, we have the pathway given by
Eq. (\ref{pathways_v6_strand_eq}). In the opposite limit,
it does not matter
which end is pulled, as the external force
is felt uniformly along the chain. Then a strand which has
a weaker
link with the core would unfold first.
\subsubsection{Computation of free energy landscape parameters}
As mentioned above, at low loading rates, for some trajectories,
the force-extension curve
shows not two but three peaks. However,
since the percentage of such trajectories is rather
small,
we will neglect them and consider DDFLN4 a three-state
protein.
Recently,
using dependencies of unfolding times on the
constant external force and
the non-linear kinetic theory \cite{Dudko_PRL06},
we obtained distances $x_{u1} \approx x_{u2} \approx 13 \AA$
\cite{MSLi_JCP08}.
These values seem too large for a $\beta$-protein like DDFLN4,
since $\beta$-proteins are supposed to have smaller $x_u$ than $\alpha/\beta$- and
$\alpha$-proteins \cite{MSLi_BJ07a}.
A clear difference between theory and experiments was also observed
for the unfolding barrier $\Delta G^{\ddagger}_1$.
In order to see if one can improve on our previous
results,
we extract the FEL parameters by a different approach.
Namely, assuming that all FEL parameters of the three-state DDFLN4,
including
the barrier between the second TS and
the IS $\Delta G^{\ddagger}_2$ (see
Ref. \onlinecite{MSLi_JCP08} for the definition),
can be determined from the dependencies of $f_{max1}$ and $f_{max2}$ on $v$,
we calculate them in the Bell-Evans-Ritchie (BER) approximation
as well as beyond this approximation.
\paragraph{Estimation of $x_{u1}$ and $x_{u2}$ in the BER approximation}
In this approximation,
$x_{u1}$ and $x_{u2}$ are related to $v$, $f_{max1}$ and
$f_{max2}$ by the following equation \cite{Evans_BJ97}:
\begin{equation}
f_{maxi} \; = \; \frac{k_BT}{x_{ui} }
\ln \left[ \frac{vx_{ui}}{k_{ui}(0)k_BT}\right], i = 1,2,
\label{f_logV_eq2}
\end{equation}
where $k_{ui}(0)$ are the unfolding rates at zero external force.
In the low-force regime ($v \lesssim 2\times 10^6$ nm/s), the
dependence of $f_{max}$ on $v$ is logarithmic, and
$x_{u1}$ and $x_{u2}$ are obtained from the
slopes of the linear fits in
Fig. \ref{fmax_Nfix_v_fig}. Their values are listed in Table \ref{DDFLN4_table}.
The estimate of $x_{u2}$
agrees very well with the experimental \cite{Schwaiger_EMBO05}
as well as with the previous theoretical result \cite{MSLi_JCP08}.
The present value of
$x_{u1}$ agrees with the experiments better than the old one
\cite{MSLi_JCP08}.
Presumably, this is because it has been estimated by the same procedure
as in the experiments \cite{Schwaiger_EMBO05}.
It is important to note that
the logarithmic behavior is observed only at low enough $v$. At high
loading rates, the dependence of $f_{max}$ on $v$ becomes a power law.
This explains why all-atom simulations, performed
at $v \sim 10^9$ nm/s for most proteins, are not able to provide
reasonable estimates of $x_u$.
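The BER extraction of $x_u$ from Eq. (\ref{f_logV_eq2}) amounts to a linear fit of $f_{max}$ against $\ln v$, with $x_u = k_BT/\mathrm{slope}$. The sketch below generates synthetic data from Eq. (\ref{f_logV_eq2}) with a known $x_u$ and recovers it; the numerical values of $k_BT$, $k_u(0)$ and $x_u$ are illustrative placeholders, not the fitted parameters of DDFLN4.

```python
import numpy as np

kT = 4.1          # pN*nm at room temperature (illustrative)
x_u = 0.55        # nm, "true" value to be recovered
k_u0 = 1.0e-3     # 1/s, unfolding rate at zero force (illustrative)

# Synthetic unfolding forces from the BER expression
# f_max = (kT/x_u) * ln( v*x_u / (k_u0*kT) )
v = np.logspace(3, 6, 10)                      # pulling speeds, nm/s
f_max = (kT / x_u) * np.log(v * x_u / (k_u0 * kT))

# Linear fit of f_max versus ln(v); the slope gives kT/x_u
slope, intercept = np.polyfit(np.log(v), f_max, 1)
x_u_fit = kT / slope
print(round(x_u_fit, 3))   # recovers 0.55
```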
Another interesting question is whether the peak at
$\Delta R \approx 1.5$ nm disappears at the loading rates used in the experiments
\cite{Schwaiger_EMBO05}. Assuming that the logarithmic dependence
in Fig. \ref{fmax_Nfix_v_fig} has the same slope at low $v$, we extrapolate
our results to $v_{exp} = 200$ nm/s and obtain
$f_{max1}(v_{exp}) \approx 40$ pN.
Thus, in the framework of the Go model, the existence of
the first peak is robust at experimental speeds.
\begin{figure}
\epsfxsize=4.2in
\vspace{0.2in}
\centerline{\epsffile{./Figs/fmax_Nfix_v.eps}}
\linespread{0.8}
\caption{Dependences of $f_{max1}$ (open circles)
and $f_{max2}$ (open squares) on $v$.
Results were obtained by using the Go model.
Straight lines are fits to the BER equation
($y = -20.33 + 11.424\ln x$ and $y= 11.54 + 6.528\ln x$ for $f_{max1}$
and $f_{max2}$, respectively). Here $f_{max}$ and $v$ are measured in pN
and nm/s, respectively.
From these fits we obtain
$x_{u1}=3.2 \AA\,$ and $x_{u2}=5.5 \AA$.
The solid circle and triangle correspond to $f_{max1} \approx 40$ pN
and $f_{max2} \approx 46$ pN,
obtained by extrapolation of the linear fits to the experimental
value $v = 200$ nm/s.
Fitting to the nonlinear microscopic theory (dashed lines) gives
$x_{u1}=7.0 \AA\, \Delta G^{\ddagger}_1 = 19.9 k_BT,
x_{u2} = 9.7 \AA\,$, and $\Delta G^{\ddagger}_2 = 20.9 k_BT$.}
\label{fmax_Nfix_v_fig}
\end{figure}
\paragraph{Beyond the BER approximation}
In the BER approximation, one assumes that the location
of the TS does not move under the action of an
external force.
Beyond this approximation, $x_u$ and unfolding barriers can be extracted,
using the following formula
\cite{Dudko_PRL06}:
\begin{equation}
f_{max} \, = \frac{\Delta G^{\ddagger}}{\nu x_u} \left\{ 1-
\left[\frac{k_BT}{\Delta G^{\ddagger}} \textrm{ln} \frac{k_BT k_u(0) e^{\Delta G^{\ddagger}/k_BT + \gamma}}{x_u v}\right]^{\nu} \right\}
\label{Dudko_eq2}
\end{equation}
Here, $\Delta G^{\ddagger}$ is the unfolding barrier, and $\nu = 1/2$ and $2/3$
correspond to the cusp \cite{Hummer_BJ03} and the
linear-cubic free energy surface \cite{Dudko_PNAS03}, respectively.
$\gamma \approx 0.577$ is the Euler-Mascheroni constant.
Note that
$\nu =1$ corresponds to the phenomenological
BER theory (Eq. \ref{f_logV_eq2}).
If $\nu \ne 1 $, then
Eq. (\ref{Dudko_eq2}) can be used to estimate not only
$x_u$, but also $\Delta G^{\ddagger}$.
Since the fitting with $\nu = 1/2$ is valid in a wider force
interval
compared to the $\nu = 2/3$ case, we
consider the former case only.
As expected, the region where the $\nu = 1/2$ fit works well is wider than that for
the Bell scenario (Fig. \ref{fmax_Nfix_v_fig}).
From the nonlinear fitting (Eq. \ref{Dudko_eq2}),
we obtain
$x_{u1}=7.0 \AA\,$ and
$x_{u2} = 9.7 \AA\,$, which
are about twice as large as the Bell estimates (Table \ref{DDFLN4_table}).
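For illustration, Eq. (\ref{Dudko_eq2}) can be sketched numerically as follows. The intrinsic rate $k_0$ and the value of $k_BT$ used below are illustrative assumptions, not fitted quantities from this work:

```python
import math

GAMMA = 0.5772156649  # Euler-Mascheroni constant

def f_max(v, dG, xu, k0, nu=0.5, kT=4.1):
    """Peak force vs pulling speed in the microscopic-theory form used in
    the text. dG and kT in pN*nm, xu in nm, v in nm/s; k0 is an (assumed)
    intrinsic rate prefactor. Setting nu = 1 recovers the phenomenological
    BER form, linear in ln(v) with slope kT/xu."""
    arg = kT * k0 * math.exp(dG / kT + GAMMA) / (xu * v)
    return dG / (nu * xu) * (1.0 - (kT / dG * math.log(arg)) ** nu)
```

For $\nu = 1$ the slope of $f_{max}$ with respect to $\ln v$ reduces exactly to $k_BT/x_u$, the BER limit used above, while $\nu = 1/2$ bends the curve downward at low speeds.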
Using AFM data, Schlierf and Rief \cite{Schlierf_BJ06}
have shown that beyond the BER approximation
$x_u \approx 11 \AA\,$. This value is close to our estimate for $x_{u2}$.
However, a full comparison with experiments is not possible, as
these authors did not consider $x_{u1}$ and $x_{u2}$ separately.
The present estimates of these quantities are
clearly lower than the previous ones \cite{MSLi_JCP08} (Table \ref{DDFLN4_table}).
The lower values
of $x_{u}$ are more plausible, because $x_u$ is expected to
be small for beta-rich proteins \cite{MSLi_BJ07a} like DDFLN4.
Thus, beyond the BER approximation,
the method based on Eq. (\ref{Dudko_eq2}) provides more reasonable
estimates of $x_{ui}$ than the method in which these
parameters are extracted
from unfolding rates \cite{MSLi_JCP08}. However,
deciding which method is better
requires more experimental studies.
The corresponding values of $\Delta G^{\ddagger}_1$ and $\Delta G^{\ddagger}_2$
are listed in Table \ref{DDFLN4_table}.
The experimental and previous theoretical
results \cite{MSLi_JCP08} are also shown for comparison.
The present estimates for both barriers agree with the
experimental data, while
the previous theoretical value of $\Delta G^{\ddagger}_1$
fits the experiments worse than
the current one.
\begin{table}
\begin{center}
\begin{tabular}{lll|lllr}
& \multicolumn{2}{c|}{BER approximation} &\multicolumn{4}{c}{Beyond BER approximation}\\ \cline{2-7}
& \; $x_{u1}(\AA)$ \;& \; $x_{u2}(\AA)$ \; & \; $x_{u1}(\AA)$ \;& \; $x_{u2}(\AA)$ \; & \; $\Delta G^{\ddagger}_1/k_BT \;$ & \; $\Delta G^{\ddagger}_2/k_BT \; $ \\
\hline
Theory \cite{MSLi_JCP08}&\; 6.3 $\pm$ 0.2 \; & \; 5.1 $\pm$ 0.2 \;&\; 13.1 \; &\; 12.6 \; & \; 25.8 \; & \; 18.7 \; \\
Theory (this work)&\; 3.2 $\pm$ 0.2 \; & \; 5.5 $\pm$ 0.2 \;&\; 7.0 \;& \; 9.7 \; & \; 19.9 \; & \; \; 20.9 \; \\
Exp. \cite{Schwaiger_EMBO05,Schlierf_BJ06} & \; 4.0 $\pm 0.4$ \; & \; 5.3 $\pm$ 0.4 \; & & &\; 17.4 \; & \; 17.2 \;\\
\hline
\end{tabular}
\end{center}
\linespread{0.8}
\caption{Parameters $x_{u1}$ and $x_{u2}$ obtained in the
Bell and beyond-Bell approximations. Theoretical values of the unfolding
barriers were extracted from the microscopic theory of Dudko {\em et al}
(Eq. \ref{Dudko_eq2})
with $\nu = 1/2$. The experimental estimates were taken from
Refs. \onlinecite{Schwaiger_EMBO05,Schlierf_BJ06}.\label{DDFLN4_table}}
\end{table}
\subsubsection{Thermal unfolding pathways}
In order to see if the thermal unfolding pathways are different
from the mechanical ones, we performed zero-force simulations
at $T=410$ K. The progress variable $\delta$ is used
as a reaction coordinate to monitor pathways (see Chapter 3).
From Fig. \ref{ther_unfold_pathways_snap_fig}, we have the following sequencing
for strands and their pairs:
\begin{subequations}
\begin{equation}
G \rightarrow (B, C, E) \rightarrow (A, F, D),
\label{thermal_pathway_str_eq}
\end{equation}
\begin{equation}
P_{AF} \rightarrow P_{BE} \rightarrow (P_{CD}, P_{CF}) \rightarrow
(P_{AB}, P_{FG}, P_{DE}).
\label{thermal_pathway_pair_eq}
\end{equation}
\end{subequations}
It should be noted that these are only the major pathways, as other pathways
are also possible. The pathway given by Eq. (\ref{thermal_pathway_pair_eq}),
e.g., occurs in 35\% of events.
About 20\% of trajectories follow the
$P_{AF} \rightarrow P_{CF} \rightarrow P_{BE} \rightarrow
(P_{CD},P_{AB}, P_{FG}, P_{DE})$ scenario. We have also observed the sequencings
$P_{AF} \rightarrow P_{BE} \rightarrow
(P_{CF},P_{AB}, P_{FG}, P_{DE}) \rightarrow P_{CD}$ and
$P_{BE} \rightarrow P_{AF} \rightarrow
(P_{CD},P_{CF}, P_{AB}, P_{FG},P_{DE})$ in 12\% and 10\% of runs, respectively.
Thus,
due to strong thermal fluctuations,
the thermal unfolding pathways are more diverse than the mechanical ones.
From Eqs. \ref{pathways_v6_strand_eq}, \ref{pathways_v6_pair_eq},
\ref{pathways_v15_strand_eq}, \ref{pathways_v15_pair_eq},
\ref{thermal_pathway_str_eq}, and \ref{thermal_pathway_pair_eq}, it is clear that thermal unfolding
pathways of DDFLN4 are different from the mechanical pathways.
This is also
illustrated in
Fig. \ref{ther_unfold_pathways_snap_fig}c.
As in the mechanical case
(Fig. \ref{snapshot_v6_v15_fig}a and \ref{snapshot_v6_v15_fig}b),
the contact between A and F is broken,
but the molecule is much
less compact at the same end-to-end distance.
Although 7 contacts ($\approx 64$\%) between strands
F and G still survive, all contacts of the pairs
$P_{AF}, P_{BE}$ and $P_{CD}$ are already broken.
\begin{figure}[!hbtp]
\epsfxsize=4.2in
\centerline{\epsffile{./Figs/ther_unfold_pathways_snap.eps}}
\linespread{0.8}
\caption{Thermal unfolding pathways. (a) Dependence of
native contact fractions of
seven strands on the progress variable $\delta$ at $T=410$ K.
(b) The same as in (a) but for seven strand pairs.
(c) A typical snapshot at
$\Delta R \approx 1.8$ nm. The contact between
strands A and F is broken, but 7 contacts between strands F and G
(solid lines) still
survive.}
\label{ther_unfold_pathways_snap_fig}
\end{figure}
The difference between mechanical and thermal unfolding pathways
is attributed to the fact
that thermal fluctuations have a global effect on the biomolecule,
while the force acts only on its termini. Such a difference was also observed
for other proteins like I27 \cite{Paci_PNAS00} and Ub
\cite{MSLi_BJ07,Mitternacht_Proteins06}.
We have also studied the folding pathways of DDFLN4 at $T=285$ K. It turns
out that they are the reverse of the thermal unfolding pathways given
by Eqs. \ref{thermal_pathway_str_eq} and \ref{thermal_pathway_pair_eq}.
It would be interesting to test our prediction on thermal folding/unfolding
of this domain experimentally.
\subsection{Conclusions}
The key result of this chapter is that
the mechanical unfolding pathways of DDFLN4 depend on the loading rate.
At large $v$ the C-terminus unfolds first, whereas the N-terminus
unfolds first at low $v \sim 10^4$ nm/s. Agreement with the
experiments \cite{Schwaiger_NSMB04}
is obtained only
in the low loading rate simulations.
The dependence of mechanical unfolding pathways on the loading rate
was also observed for I27 (M.S. Li, unpublished). On the other hand,
previous studies \cite{Irback_PNAS05,MSLi_BJ07} showed that the mechanical unfolding pathways
of the two-state Ub do not depend on the force strength.
Since DDFLN4 and I27 are three-state
proteins, one may speculate that
the change of unfolding pathways with pulling
speed
is universal
for proteins that unfold via intermediates.
A more comprehensive study is needed to verify this
interesting issue.
The dependence of unfolding forces on pulling speed has been widely used
to probe the FEL of two-state proteins \cite{Best_PNAS02}.
However, to the best of our knowledge,
here we have made a first attempt to apply this approach
to extract not only
$x_{ui}$, but also
$\Delta G^{\ddagger}_i$ ($i= 1$ and 2) for a three-state protein.
This allows us to improve our previous results \cite{MSLi_JCP08}.
More importantly, the better agreement with the experimental data
\cite{Schwaiger_EMBO05,Schlierf_BJ06} suggests that this method
is also applicable to
other multi-state biomolecules.
Our study clearly shows that the low loading
rate regime, where FEL parameters can be estimated, occurs at
$ v \leq 10^6$ nm/s, which is about two to three orders of magnitude
lower than the speeds used in all-atom simulations.
Therefore, at present, deciphering the unfolding FEL of long proteins by
all-atom simulations with explicit water is computationally prohibitive.
From this perspective, coarse-grained models are of great help.
We predict the existence of a peak at $\Delta R \sim 1.5$ nm even
at the pulling speeds used in present-day experimental setups.
This result should stimulate
new experiments on the mechanical properties of DDFLN4.
Capturing the experimentally observed peak at $\Delta R \sim 22$ nm
remains a challenge to theory.
\clearpage
\begin{center}
\section{Protein mechanical unfolding: importance of non-native interactions}
\end{center}
\subsection{Introduction}
In this chapter, we continue to study the mechanical unfolding of DDFLN4, using all-atom simulations. The motivation is that the Go model cannot explain some experimental results.
Namely, in the AFM force-extension curve, Schwaiger {\em et al.} \cite{Schwaiger_NSMB04,Schwaiger_EMBO05} observed two peaks at $\Delta R \approx 12$ and 22 nm.
However, using a Go model \cite{Clementi_JMB00}, Li {\em et al.}\cite{MSLi_JCP08} and Kouza and Li (chapter 9) also obtained two peaks, but located at
$\Delta R \approx 1.5$ and 11 nm. A natural question to ask is whether the disagreement
between experiment and theory is due to the oversimplification of
the Go modeling, in which non-native interactions between residues are omitted.
In order to answer this question, we have performed
all-atom
MD simulations,
using the GROMOS96 force field 43a1 \cite{Gunstren_96} and the SPC explicit water solvent \cite{Berendsen81}.
We have shown that
two peaks do appear at almost
the same positions as in the experiments \cite{Schwaiger_NSMB04,Schwaiger_EMBO05} and, more importantly, the peak at $\Delta R \approx 22$ nm comes from non-native interactions. This explains why it has not been seen in the previous Go simulations \cite{MSLi_JCP08}.
In our opinion, this result is very important, as it opposes the common belief
\cite{West_BJ06,MSLi_BJ07a} that mechanical unfolding properties are governed by the native topology.
In addition to two peaks at large $\Delta R$, in agreement with the Go results \cite{MSLi_JCP08},
we have also observed a maximum at $\Delta R \approx 2$ nm. Because such a peak
was not detected by the AFM experiments \cite{Schwaiger_NSMB04,Schwaiger_EMBO05},
further experimental and theoretical studies are required to clarify this point.
The results of this chapter are adapted from Ref. \cite{Kouza_JCP09}.
\subsection{Materials and Methods}
We used the GROMOS96 force field 43a1 \cite{Gunstren_96} to model
DDFLN4, which has 100 amino acids, and the SPC water model \cite{Berendsen81}
to describe the solvent (see also chapter 4). The Gromacs version 3.3.1 has been employed.
The protein was placed in an orthorhombic box with
edges of 4.0, 4.5 and 43 nm, containing 76000 - 78000 water molecules (Fig. \ref{native_ddfln4_strands_fig2}).
\begin{figure}[!htbp]
\epsfxsize=6.3in
\vspace{0.2in}
\centerline{\epsffile{./Figs/native_ddfln4_strands2_.eps}}
\linespread{0.8}
\caption{The solvated system in the orthorhombic box of water (cyan).
VMD software \cite{VMD}
was used for the plot.
\label{native_ddfln4_strands_fig2}}
\end{figure}
In all simulations, the GROMACS program suite \cite{Berendsen_CPC95,Lindahl01}
was employed. The equations of motion were integrated by
using a leap-frog algorithm with a time step of 2 fs.
The LINCS algorithm \cite{Hess_JCC97} was used to constrain bond lengths
with a relative geometric tolerance of $10^{-4}$.
We used the
particle-mesh Ewald method to treat the long-range electrostatic
interactions \cite{Darden93}.
The nonbonded interaction pair list was
updated every 10 fs, using a cutoff of 1.2 nm.
The protein was minimized using the steepest descent
method. Subsequently, unconstrained MD
simulation was performed to equilibrate the solvated system for 100 ps
at constant pressure (1 atm) and temperature $T=300$ K with the help of
the Berendsen coupling procedure \cite{Berendsen84}.
The system was then equilibrated further at
constant temperature $T$ = 300 K and constant volume.
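A run-parameter fragment consistent with the protocol described above might look as follows. This is only a sketch: the actual input files are not reproduced in the text, and any value not stated above (e.g. the coupling time constants and groups) is an assumption.

```ini
; Illustrative GROMACS run-parameter (.mdp) sketch for the setup above;
; values not stated in the text are assumptions
integrator           = md
dt                   = 0.002        ; 2 fs time step
constraints          = all-bonds
constraint_algorithm = lincs        ; LINCS bond constraints
nstlist              = 5            ; pair list updated every 10 fs
rlist                = 1.2          ; 1.2 nm cutoff
coulombtype          = PME          ; particle-mesh Ewald electrostatics
rcoulomb             = 1.2
rvdw                 = 1.2
tcoupl               = berendsen
tc_grps              = Protein SOL  ; assumed grouping
tau_t                = 0.1 0.1      ; assumed coupling times (ps)
ref_t                = 300 300
pcoupl               = berendsen    ; NPT equilibration stage only
ref_p                = 1.0
```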
Afterward, the N-terminus was kept fixed and the
force was applied to the C-terminus through a virtual cantilever moving
at the constant velocity $v$ along the longest ($z$) axis of the simulation box.
During the simulations, the spring constant was chosen
as $k=1000$ kJ/(mol $\times$ nm$^2$) $\approx 1700$ pN/nm, which is an upper
limit for the stiffness $k$ of a
cantilever used in AFM experiments.
Movement of the cantilever causes an extension of
the protein, and the total force can be measured as $F=kvt$.
The resulting force is computed at each time step to generate a
force-extension profile, whose peaks reveal the most mechanically
stable places in a protein.
Overall, the simulation procedure is similar to the experimental one,
except that pulling speeds in our simulations
are several orders of magnitude higher than those used in experiments.
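As a side note, the quoted conversion $k=1000$ kJ/(mol nm$^2$) $\approx 1700$ pN/nm and the force ramp $F=kvt$ can be verified with a few lines of unit bookkeeping (illustrative only):

```python
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def stiffness_pn_per_nm(k_kj_mol_nm2):
    """Convert a spring constant from kJ/(mol nm^2) to pN/nm.
    1 kJ/(mol nm^2) = 1e3 J / N_A per nm^2, and 1 J/nm^2 = 1e21 pN/nm."""
    return k_kj_mol_nm2 * 1e3 / N_A * 1e21

def pulling_force(k_pn_per_nm, v_nm_per_s, t_s):
    """Total force of the moving cantilever, F = k v t."""
    return k_pn_per_nm * v_nm_per_s * t_s
```

For $k = 1000$ kJ/(mol nm$^2$) this gives about 1660 pN/nm, consistent with the $\approx 1700$ pN/nm quoted above.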
We have performed simulations for
$v= 10^{6}, 5\times 10^{6}, 1.2\times 10^{7}$,
and $2.5\times 10^{7}$ nm/s, while in the AFM experiments one took
$v \sim 100 - 1000$ nm/s \cite{Schwaiger_NSMB04}.
For each value of $v$ we have generated 4 trajectories.
A backbone contact between amino acids $i$ and $j$ ($|i-j| > 3$)
is defined as formed if the distance between
two corresponding C$_{\alpha}$-atoms
is smaller than a cutoff distance $d_c=6.5$ \AA .
With this choice, the molecule has 163 native contacts.
A hydrogen bond is formed provided
the distance between donor D (or atom N) and
acceptor A (or atom O) $\leq 3.5 \AA \,$ and the angle D-H-A
$\ge 145^{\circ}$.
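For concreteness, the contact and hydrogen-bond criteria above can be sketched as follows (a minimal illustration on synthetic coordinates; not the analysis code actually used):

```python
import numpy as np

def backbone_contacts(ca, d_c=6.5, min_sep=4):
    """Count backbone contacts: residue pairs (i, j) with |i - j| > 3
    whose C-alpha distance is below d_c (in Angstrom).
    ca is an (N, 3) array of C-alpha coordinates."""
    d = np.linalg.norm(ca[:, None, :] - ca[None, :, :], axis=-1)
    i, j = np.triu_indices(len(ca), k=min_sep)
    return int(np.sum(d[i, j] < d_c))

def is_hydrogen_bond(donor, hydrogen, acceptor, d_max=3.5, ang_min=145.0):
    """Geometric HB criterion: donor-acceptor distance <= d_max (Angstrom)
    and D-H-A angle >= ang_min (degrees)."""
    if np.linalg.norm(acceptor - donor) > d_max:
        return False
    v1, v2 = donor - hydrogen, acceptor - hydrogen
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle >= ang_min
```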
The unfolding process was studied by monitoring the dependence
of numbers of backbone contacts and HBs
formed by seven $\beta$-strands enumerated as
A to G (Fig. \ref{native_ddfln4_strands_fig}a)
on the end-to-end extension.
In the NS, backbone contacts exist between seven pairs
of $\beta$-strands
P$_{\textrm{AB}}$,
P$_{\textrm{AF}}$, P$_{\textrm{BE}}$,
P$_{\textrm{CD}}$, P$_{\textrm{CF}}$, P$_{\textrm{DE}}$, and P$_{\textrm{FG}}$
as shown in Fig. \ref{native_ddfln4_strands_fig}b.
Additional information on unfolding pathways was also obtained
from the evolution of numbers of contacts
of these pairs.
\subsection{Results}
\subsubsection {Existence of three peaks in force-extension profile}
Since the results obtained for four pulling speeds ({\em Materials and Methods})
are qualitatively similar, we will focus on the smallest
$v=10^6$ nm/s case.
The force extension curve, obtained at
this speed, for the trajectory 1, can be divided into four regions (Fig. \ref{fe1}):
\begin{figure}[!htbp]
\epsfxsize=4.0in
\vspace{0.2in}
\centerline{\epsffile{./Figs/force-ext_traj1_.eps}}
\linespread{0.8}
\caption{Force-extension profile for trajectory 1 for $v=10^6$ nm/s.
Vertical dashed lines separate four unfolding regimes.
Shown are typical snapshots around three peaks.
Heights of peaks (from left) are $f_{max1}=695$ pN, $f_{max2}=704$ pN,
and $f_{max3}=626$ pN.
\label{fe1}}
\end{figure}
{\em Region I ($0 \lesssim \Delta R \lesssim 2.42$ nm)}.
Due to thermal fluctuations, the total force fluctuates a lot,
but, in general,
it increases and reaches the first maximum $f_{max1}=695$ pN
at $\Delta R \approx 2.42$ nm.
A typical snapshot before the first unfolding event (Fig. \ref{fe1})
shows that structures remain native-like.
During the first period, the N-terminal part is being extended,
but the protein maintains all $\beta$-sheet secondary structures
(Fig. \ref{2nm}b).
Although the unfolding starts from the N-terminal part (Fig. \ref{2nm}b),
after the first peak, strand G from
the C-terminus gets unfolded first (Fig. \ref{2nm}c and \ref{2nm}f).
In order to understand the nature of this peak
on the molecular level, we consider the evolution of HBs in detail.
As the molecule departs from the NS,
non-native HBs are created; at $\Delta R = 2.1$ nm, e.g.,
a non-native $\beta$-strand between amino acids 87 and
92 (Fig. \ref{2nm}b)
is formed. This increases
the number of HBs between F and G
from 4 (Fig. \ref{2nm}d) to 9 (Fig. \ref{2nm}e).
Structures with an enhanced number of HBs show strong resistance to
the external perturbation, and the first peak occurs due to
their unfolding (Fig. \ref{2nm}b).
It should be noted that this maximum was observed in
the Go simulations \cite{MSLi_JCP08,MSLi_JCP09},
but not in the experiments \cite{Schwaiger_NSMB04,Schwaiger_EMBO05}.
Both all-atom and Go simulations reveal that the unfolding
of G strand is responsible for its occurrence.
\begin{figure}
\vspace{5.5 mm}
\includegraphics[width=0.97\textwidth]{./Figs/concat2_.eps}
\linespread{0.8}
\caption{(a) The NS conformation is shown for comparison with
the other ones. (b) A typical conformation before the first unfolding
event takes place ($\Delta R \approx$ 2.1 nm).
The yellow arrow shows
a part of protein which starts to unfold. An additional
non-native $\beta$-strand between amino acids 87 and 92 is marked by
black color. (c)
A conformation after the first peak, at $\Delta R \approx$ 2.8 nm,
where strand G has already
detached from the core. (d) The same as in (a) but
4 HBs
(green color) between $\beta$-strands are displayed. (e) The same as in (b)
but all 9 HBs are shown.
(f) The same as in (c) but
broken HBs (purple) between F and G are displayed.
\label{2nm}}
\end{figure}
{\em Region II ($2.42$ nm $\lesssim \Delta R \lesssim 13.36$ nm):}
After the first peak, the force drops rapidly
from 695 to 300 pN and secondary structure elements begin to break down.
During this period, strands A, F and G unfold completely,
whereas B, C, D and E strands remain structured
(see Fig. \ref{fe1}
for a typical snapshot).
{\em Region III ($13.36$ nm $\lesssim \Delta R \lesssim 22.1$ nm):}
During the second and third stages, the complete unfolding of
strands D and E takes place. Strands
B and C undergo significant conformational changes,
losing their equilibrium HBs. Nevertheless, the core
formed by these
strands remains compact (see the bottom of Fig. \ref{fe1}
for a typical snapshot).
Below we will show in detail that
the third peak is associated with breaking
of non-native HBs between strands B and C.
{\em Region IV ($\Delta R \gtrsim 22.1$ nm):} After
the breaking of non-native HBs between B and C,
the polypeptide chain gradually reaches its rod-like state.
The existence of three pronounced peaks is robust, as they are observed
in all four studied trajectories
(similar results obtained in the other three runs are not shown).
This is also clearly evident from Fig. \ref{fea},
which displays the force-extension curve averaged over four
trajectories.
\begin{figure}[!htbp]
\epsfxsize=3.5in
\vspace{0.2in}
\centerline{\epsffile{./Figs/force-ext_av_.eps}}
\linespread{0.8}
\caption{The force-extension profile averaged over four trajectories.
$v=10^6$ nm/s.}
\label{fea}
\end{figure}
\subsubsection{Importance of non-native interactions}
As mentioned above, the third peak at $\Delta R \approx 22$ nm was observed in the experiments
but not in Go models \cite{MSLi_JCP08,MSLi_JCP09}, where non-native interactions are omitted.
In this section, we show, at molecular level, that these very interactions lead to its existence.
To this end, we plot the dependence of the number of native contacts formed by seven strands and their pairs on $\Delta R$.
The first peak corresponds to the unfolding of strand G (Fig. \ref{cont_ext_traj1_fig}a),
as all (A,F) and (F,G) contacts are broken just after passing it (Fig. \ref{cont_ext_traj1_fig}b).
Thus, the structure of the first intermediate state (IS1), which corresponds to this peak, consists of 6 ordered strands A-F
(see Fig. \ref{2nm}c for a typical snapshot).
The second unfolding event
is associated with the full unfolding of A and F and a drastic decrease of
the native contacts of B and C (Fig. \ref{cont_ext_traj1_fig}a).
After the second peak,
only the (B,E), (C,D) and (D,E) native contacts survive (Fig. \ref{cont_ext_traj1_fig}b).
The structure of the second intermediate state (IS2) contains partially
structured strands B, C, D and E. A typical snapshot is displayed
at the top of Fig. \ref{fe1}.
Remarkably, for $\Delta R \gtrsim 17$ nm, none of the native contacts exists,
except a very small fluctuation of a few contacts of strand
B around $\Delta R \approx 22.5$ nm (Fig. \ref{cont_ext_traj1_fig}a).
Such a fluctuation is negligible, as it
is not even manifested in the existence of native contacts
between the corresponding pairs (A,B) and (B,E) (Fig. \ref{cont_ext_traj1_fig}b).
Therefore, we come to the very interesting conclusion that the third peak,
centered at $\Delta R \approx 22.5$ nm, is not related to native interactions.
This explains why it was not detected by simulations \cite{MSLi_JCP08,MSLi_JCP09}
using the Go model \cite{Clementi_JMB00}.
\begin{figure}[!htbp]
\epsfxsize=3.8in
\vspace{0.2in}
\centerline{\epsffile{./Figs/cont_ext_traj1_.eps}}
\linespread{0.8}
\caption{(a) Dependence of the number of native backbone contacts
formed by individual strands on $\Delta R$. Arrows refer to
positions of three peaks in the force-extension curve. (b) The same
as in (a) but for pairs of strands. (c)
The same as in (a) but for all contacts (native and non-native).
(d) The same as in (c) but for HBs.
\label{cont_ext_traj1_fig}}
\end{figure}
The mechanism underlying the occurrence of the third peak may be revealed
using the results shown in Fig. \ref{cont_ext_traj1_fig}c, where
the number of all backbone contacts (native and non-native)
is plotted as a function of $\Delta R$. Since,
for $\Delta R \gtrsim 17$ nm, native contacts vanish, this peak
is associated with an abrupt decrease of non-native contacts between strands
B and C. Its nature may also be understood by monitoring
the dependence of HBs on $\Delta R$ (Fig. \ref{cont_ext_traj1_fig}d),
which shows that
the last maximum is caused by the loss of
HBs between these strands.
More precisely, five HBs between B and C, which were not present in
the native conformation, are broken (Fig. \ref{HB_pair_ext_traj1_fig}).
Interestingly, these bonds appear at $\Delta R \gtrsim$ 15 nm, i.e.
after the second unfolding event (Fig. \ref{HB_pair_ext_traj1_fig}).
Thus, our study can not only reproduce the experimentally observed
peak at $\Delta R \approx 22$ nm, but also shed light on its nature
on the molecular level.
From this perspective, all-atom simulations are superior to experiments.
\begin{figure}[!htbp]
\includegraphics[width=0.48\textwidth]{./Figs/HB_pair_ext_traj1_.eps}
\linespread{0.8}
{\caption {Dependence of the number of HBs between pairs of strands on $\Delta R$.
The red arrow marks the position where non-native HBs between strands
B and C start to appear. The rupture of these
bonds leads to the maximum centered at $\Delta R \approx 22.4$ nm.
The upper snapshot shows five HBs between B and C before the third
unfolding event.
The lower snapshot is a fragment after the third peak,
where all these HBs are already broken
(purple dotted lines).}
\label{HB_pair_ext_traj1_fig}}
\end{figure}
One corollary from Fig. \ref{cont_ext_traj1_fig}a-d is that one cannot
provide a complete description of the unfolding process based on the evolution
of native contacts alone.
This is because, as the molecule extends, its
secondary structures
change and new non-native secondary structures may occur.
Beyond an extension of 17-18 nm (see the snapshot at the bottom of Fig. \ref{fe1}),
e.g., the protein
has lost all native contacts, but it has not yet reached a fully
extended state without any structure.
Therefore, a full description of mechanical unfolding may be obtained
by monitoring either all backbone contacts or HBs,
as these two quantities give the same unfolding picture
(Fig. \ref{cont_ext_traj1_fig}c and \ref{cont_ext_traj1_fig}d).
\subsubsection{Unfolding pathways}
\begin{figure}
\includegraphics[width=5in]{./Figs/HB_pair_ext_traj2_4_.eps}
\linespread{0.8}
\caption {The $\Delta R$ dependence of the
number of HBs formed by
seven strands, for trajectories 2, 3 and 4.
$v=10^6$ nm/s.
\label{HB_pair_ext_traj2_4_fig}}
\end{figure}
To obtain sequencing of unfolding events,
we use dependencies of the number of HBs on $\Delta R$.
From Fig. \ref{cont_ext_traj1_fig}d
and Fig. \ref{HB_pair_ext_traj2_4_fig},
we have the following unfolding pathways for four trajectories:
\begin{eqnarray}
G \rightarrow F \rightarrow A \rightarrow (D,E) \rightarrow (B,C), \; \;
\textrm{Trajectory 1}, \nonumber\\
G \rightarrow F \rightarrow A \rightarrow B \rightarrow C \rightarrow (D,E),
\; \; \textrm{Trajectory 2}, \nonumber \\
G \rightarrow F \rightarrow A \rightarrow E \rightarrow B \rightarrow D \rightarrow
C, \; \; \textrm{Trajectory 3} \nonumber\\
G \rightarrow F \rightarrow A \rightarrow (D,E) \rightarrow C \rightarrow B,
\; \; \textrm{Trajectory 4}.
\label{pathways_eq}
\end{eqnarray}
Although the four pathways given by Eq. (\ref{pathways_eq}) are different,
they share a common feature: the C-terminus unfolds first.
This is consistent with the results obtained by Go simulations
at high pulling speeds $v \sim 10^6$ nm/s \cite{MSLi_JCP08},
but contradicts the experiments \cite{Schwaiger_NSMB04,Schwaiger_EMBO05},
which showed that strands A and B from the N-terminus unfold first.
On the other hand, our more recent Go simulations \cite{MSLi_JCP09}
have revealed that the agreement with the experimental results
is achieved if one performs simulations at relatively low
pulling speeds $v \sim 10^4$ nm/s.
Therefore, one can expect that the difference in the
sequencing of unfolding events between the present
all-atom results and the experimental ones is merely due to
the large values of $v$ we used. In order to check this,
one has to carry out all-atom simulations at least
at $v \sim 10^4$ nm/s, but such a task is
far beyond present computational facilities.
\subsubsection{Dependence of unfolding forces on the pulling speed}
The question we now ask is whether
the unfolding FEL of DDFLN4 can be probed by
all-atom simulations with
explicit water.
To this end, we performed simulations at various loading speeds
and monitor the dependence of
$f_{maxi} (i=1,2,$ and 3) on $v$
(Fig. \ref{force_extension_all_fig}).
\begin{figure}[!hbtp]
\epsfxsize=3.4in
\centerline{\epsffile{./Figs/force_extension_all_.eps}}
\linespread{0.8}
\caption{Force-extension profiles for four values of $v$ shown next to the
curves.
\label{force_extension_all_fig}}
\end{figure}
In accordance with theory \cite{Evans_BJ97},
the heights of the three peaks decrease as $v$ is lowered (Fig. \ref{force_extension_all_fig}).
Since the force-extension curve displays three peaks, within the framework
of all-atom models, the mechanical unfolding of DDFLN4 follows
a four-state scenario (Fig. \ref{fel_schem_fmax_logv_fig}a),
but not the three-state one as suggested by the experiments
\cite{Schwaiger_NSMB04,Schwaiger_EMBO05} and Go simulations \cite{MSLi_JCP08}.
The corresponding FEL should have three transition states
denoted by TS1, TS2 and TS3. Remember that the first
and second peaks in the force-extension profile correspond
to IS1 and IS2.
Assuming that the BER theory \cite{Bell_Sci78,Evans_BJ97}
holds for a four-state biomolecule, one can extract the distances
$x_{u1}$ (between NS and TS1), $x_{u2}$
(between IS1 and TS2),
and $x_{u3}$ (between IS2 and TS3) from Eq. (\ref{f_logV_eq2}).
From the linear fits (Fig. \ref{fel_schem_fmax_logv_fig}b),
we have $x_{u1} = 0.91 \AA , x_{u2} = 0.17 \AA \, ,$ and $x_{u3} = 0.18 \AA$.
These values are far below the typical $x_u \approx 5 \AA\,$, obtained in
the experiments \cite{Schwaiger_EMBO05}
as well as in the Go simulations \cite{MSLi_JCP08,MSLi_JCP09}.
This difference comes from the fact that the pulling speeds used in all-atom
simulations are too high (Fig. \ref{fel_schem_fmax_logv_fig}).
This clearly follows from Eq. (\ref{f_logV_eq2}), which shows that $x_u$ depends
on the interval of $v$ used: the larger the values of $f_{max}$, the smaller $x_u$.
Thus, to obtain $x_{ui}$ close to their experimental counterparts, one has to reduce
$v$ by several orders of magnitude, which is currently unfeasible.
It is also clear why present-day all-atom simulations
with explicit water cannot be used to reproduce the FEL
parameters obtained from experiments. From this point of view, coarse-grained
models are of great help \cite{MSLi_BJ07a,MSLi_JCP08}.
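For reference, the conversion from the fitted slopes to $x_u$ follows directly from the BER form $f_{max} = \textrm{const} + (k_BT/x_u)\ln v$; a quick numerical check (assuming $k_BT \approx 4.1$ pN nm at $T = 300$ K):

```python
KBT = 4.1  # pN*nm at T ~ 300 K (assumed)

def xu_from_ber_slope(slope_pn):
    """BER: f_max = const + (kBT/xu) ln v, hence xu = kBT / slope.
    slope in pN per unit of ln(v [nm/s]); returns xu in Angstrom."""
    return KBT / slope_pn * 10.0  # nm -> Angstrom
```

Applied to the fitted slopes 44, 235 and 227 pN, this gives $x_u \approx 0.9$, 0.17 and 0.18 \AA, consistent with the values quoted above.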
The kinetic microscopic theory \cite{Dudko_PRL06},
which is valid beyond the BER approximation, can be applied
to extract unfolding barriers $\Delta G^{\ddagger}_i (i=1,2,$ and 3). Their values are
not presented as we are far from the interval of pulling speeds used
in experiments.
Since the first peak was not observed in the experiments
\cite{Schwaiger_NSMB04,Schwaiger_EMBO05},
a natural question is whether it is an artifact of the high pulling speeds used in our simulations.
Except for the data at the highest value of $v$ (Fig. \ref{fel_schem_fmax_logv_fig}b),
the three maxima are compatible within error bars. Therefore,
the peak centered at $\Delta R \approx 2$ nm is expected to remain
at experimental loading rates \cite{Schwaiger_NSMB04}.
The force-extension curve of
the titin domain I27, which has a similar native topology, for example, displays the
first peak at $\Delta R \approx 0.8$ nm \cite{Marszalek_Nature99}.
\begin{figure}[!htbp]
\epsfxsize=3.5in
\centerline{\epsffile{./Figs/fel_schem_fmax_logv.eps}}
\linespread{0.8}
\caption{(a) Schematic plot for the free energy $G$ as a function of $\Delta R$. $\Delta G^{\ddagger}_i (i=1,2,$ and 3) refers to unfolding barriers.
The meaning of other notations is given in the text.
(b) Dependence of heights of three peaks on $v$. Results are averaged over
four trajectories for each value of $v$.
Straight lines refer to
linear fits by Eq. (\ref{f_logV_eq2}) ($y_1 = 163 + 44x, y_2 = -2692 + 235x$ and
$y_3 = -2630 + 227x$) through three low-$v$ data points.
These fits give $x_{u1} = 0.91 \AA, x_{u2} =
0.17 \AA \,$, and $x_{u3} =0.18 \AA$.
\label{fel_schem_fmax_logv_fig}}
\end{figure}
One possible reason why the experiments did not detect this
maximum is a strong linker effect, as
a single DDFLN4 domain is sandwiched between the Ig domains
I27-I30 and I31-I34 from titin \cite{Schwaiger_NSMB04}.
\subsection{Conclusions}
Using all-atom simulations, we have reproduced the experimental result
on the existence of two peaks located at $\Delta R \approx 12$ and 22 nm.
Our key result is that the latter maximum occurs
due to the breaking of five non-native HBs between strands B and C.
It cannot be captured by Go models, in which non-native
interactions are neglected \cite{MSLi_JCP08,MSLi_JCP09}.
Thus, our result points to the importance of these
interactions for the mechanical unfolding of DDFLN4.
The description of the elastic properties of other proteins may also be
incomplete if non-native interactions are ignored.
This conclusion is valuable, as unfolding by an external
force is widely believed
to be solely governed by the native topology of proteins.
Our all-atom simulation study supports the result obtained by the Go model \cite{MSLi_JCP08,MSLi_JCP09}
that an additional peak occurs at
$\Delta R \approx 2$ nm due to unfolding of strand G.
However, it was not observed in the AFM experiments
of Schwaiger {\em et al} \cite{Schwaiger_NSMB04,Schwaiger_EMBO05}.
In order to resolve this controversy, one has to carry out not only
simulations with
other force fields but also additional experiments.
\clearpage
\begin{center}\section*{CONCLUSIONS}\end{center}
In this thesis we have obtained the following new results.
By collecting experimental data and performing extensive on- and off-lattice
coarse-grained simulations, it was found that the scaling
exponent for the cooperativity of the folding-unfolding transition is
$\zeta \approx 2.2$.
This value is clearly higher than the value $\zeta = 2$ characteristic
of a first-order transition. Our result supports the previous
conjecture
\cite{MSLi_PRL04} that the melting point is a tricritical point, where
the first- and second-order transition lines meet.
Using the CD technique and Go simulations, we studied the folding of the protein
domain hbSBD in detail. Its thermodynamic parameters, such as $\Delta H_G,
\Delta C_p, \Delta S_G$, and $\Delta G_S$, were determined.
Both experiment and theory support the two-state behavior of hbSBD.
With the help of the Go modeling, we have constructed
the FEL for
single and three-domain Ub, and DDFLN4. Our estimates of $x_u$, $x_f$ and
$\Delta G^{\ddag}_u$ are in acceptable agreement with the experimental data.
The effect of pulling direction on FEL was also studied for single Ub.
Pulling at Lys48 and C-termini deforms the unfolding FEL
as it increases the distance between the NS and TS.
It has been shown that the unfolding pathways of Ub depend on which terminus
is kept fixed. But it remains unclear whether this is a real effect or
merely an artifact of the high pulling speeds used in simulations. This
problem requires further investigation.
It is commonly believed that protein unfolding is governed by the native
topology, with non-native interactions playing only a minor
role. However, having performed Gromacs all-atom simulations
for DDFLN4, we have demonstrated for the first
time that unfolding may depend on the non-native interactions.
Namely, they are responsible for the occurrence of a peak located
at $\Delta R \approx 22$ nm in the force-extension curve. This peak was not seen in Go models,
as they take into account only native interactions.
In addition, based on the Go as well as the all-atom simulations, we
predict that an additional peak should appear at $\Delta R \approx 1.5$ nm.
Since such a peak was not observed in the experiments, our results
are expected to draw the attention of experimentalists to this
fascinating problem.
Our new force RE method is interesting from the methodological point of view.
Its successful application to the construction of the $T-f$ phase diagram
of the three-domain Ub shows that it might be applied to other
biomolecules.
\newpage
\section{Introduction}
An extensive air-shower develops when a primary high-energy cosmic particle interacts with molecules in the atmosphere, generating a cascade of secondary particles.
The electrons and positrons in the shower produce coherent, broadband and impulsive electromagnetic signals, that can be detected in the tens to hundreds of MHz frequency range.
Radio impulses from air-showers have been known and measured since the 1960s, but it was only in the last decades, thanks to developments in digital signal processing, that this domain experienced a rebirth as a promising astroparticle detection technique~\cite{TimsReview,Schroder:2016hrv,Connolly16}. The results of, for example, AERA (Auger Engineering Radio Array) and Tunka-Rex (Tunka Radio Extension) on the energy reconstruction of the primary particle~\cite{AERAEnergy,2017EPJWC.13501003S}, and of Tunka-Rex and LOFAR (LOw Frequency ARray) on the measurement of the mass composition~\cite{LOFARMass,Bezyazeekov:2018yjw}, show that radio detection has become competitive with standard methods such as fluorescence light.
While LOFAR and AERA used a scintillator-based trigger, autonomous detection by a self-triggered radio set-up was successfully demonstrated by TREND~\cite{TREND}, and earlier by the ARIANNA~\cite{Barwick:2014pca} and ANITA~\cite{Gorham:2008yk} experiments; the AERA experiment also recorded self-triggered radio events identified as EAS signals by coincidence with the Auger Surface Detector~\cite{2012JInst...7P1023A}.
These successes are due to drastic technological advances, but also to the considerable progress made in the understanding and modeling of the radio emission mechanisms of air showers.
Both macroscopic and microscopic approaches can be found in the literature to model the radio-emission of air-showers. The former are mostly analytical and are based on the modeling or fitting of the global physical effects that contribute to the radio emission from extensive air-showers (e.g., \cite{Kahn66,Allan:1971,Falcke03,Scholten08, Scholten18}). The latter are numerical simulations that treat particles in the air-shower individually, and compute their electromagnetic radiation from first principles. Uncertainties then stem from the calculation of the air-shower itself. Several simulation codes exist on the market with different levels of complexity, e.g., SELFAS~\cite{SELFAS}, MGMR~\cite{Scholten08}, EVA~\cite{EVA}, CoREAS~\cite{CoREAS}, ZHAireS~\cite{Zhaires}. In the last years their results have started to converge, and were found to be consistent with radio-signal measurements taken under laboratory conditions~\cite{SLAC} and with air-shower arrays \cite{LOPES,Apel:2016gws}.
Macroscopic approaches, being analytical, are fast and give physical insight into the various features of the radio signal that are not yet fully understood. But they also have serious limitations, for example when one wishes to study the signatures expected for specific instrument layouts. Detailed spatial, spectral and temporal structures of the signal can easily be lost in the process of integrating over different contributions and due to simplifying geometrical assumptions. Besides, these formalisms contain free parameters to be tuned, such as the drift velocities, which strongly impact the predicted electric-field strengths \cite{Scholten08}.
On the other hand, running microscopic simulations for very high-energy particles and very large or dense arrays consisting of hundreds of antennas is highly time-consuming, so that one quickly reaches the limits of the available computational resources. Typically, simulating the electric-field traces over 200 radio antennas for one air-shower of primary energy $10^{17}\,\mathrm{eV}$ with the ZHAireS simulation costs about 2 hours on one node with a thinning level of $10^{-4}$. In the early phases of an experiment, when exploring the performance of particular layouts, a more portable and faster method is needed that still provides information as precise as the approaches studied so far.
This can be achieved by {\it Radio Morphing}, a novel method we present in this paper.
It consists in simulating the radio signal emitted by {\it one} generic air-shower and in {\it morphing} it in order to obtain the electric field from any primary particle, at any desired antenna position. Morphing is performed using mostly well-documented analytical formalisms, which enable one to account for the effect of each relevant primary-particle parameter, as well as the atmospheric conditions and detector positions.
Our approach reflects the complexity of the particle distributions in spite of the analytical layer, thanks to the use of the full initial simulation output. The morphing treatment enables fast calculations. For the example given above, once a single generic shower simulation has been generated, the response over 200 radio antennas can be computed in less than 2 minutes with Radio Morphing on one node: a gain of roughly 2 orders of magnitude in CPU time. The gain further increases for lower thinning levels.
We first recall in Section~\ref{section:setup} the basics of radio-emission. Section~\ref{section:method} details the physical principles behind the construction of the Radio Morphing method, and outlines the full Radio Morphing method. We demonstrate in Section~\ref{section:validation} the performances of the method in reproducing the key features of radio-emission from air-showers, and compare our results with microscopic simulation outputs for a set of horizontal events which are expected to be measured by the future GRAND detector~\cite{Alvarez-Muniz:2018bhp}. We discuss possible improvements and limitations originated from assumptions made for simplicity in Section~\ref{section:discussion}.\\
\\
The Radio Morphing method discussed in this paper has been implemented as a dedicated Python module~\cite{RM:GitHub}, freely available online under the LGPL-3.0 license. The \texttt{radiomorphing} module requires \texttt{numpy}~\cite{numpy} in order to speed up intensive numerical computations, e.g. matrix operations or Fourier transforms.
\section{Physics of radio emission}\label{section:setup}
A primary high-energy particle induces an extensive air-shower in the atmosphere of the Earth, {\it i.e.}, a cascade of high-energy, mostly leptonic, particles and electromagnetic radiation. Most of the particles are concentrated in a shower front that is typically a few centimeters to meters thick near the shower axis.
Time variation of the total charge or current in the relativistic shower front in combination with Cherenkov effects leads to coherent radio emission over the typical dimensions of the particle cascade. These radio pulses last typically tens of nanoseconds, with varying amplitudes of up to several hundreds of $\mu \mathrm{V/m}$. The signal can be interpreted by two main mechanisms (see figure~\ref{fig:GeoAskSketch}).
The so-called {\it Askaryan effect}~\cite{Askaryan1962,Askaryan1965} results from Compton scattering while the shower propagates through the Earth atmosphere. The resulting electrons are swept into the shower front.
In a non-absorptive, dielectric medium the number of electrons in the particle front, and therefore the net charge excess in the cascade, varies in time, which induces a coherent electromagnetic pulse.
The radio signal emitted by this mechanism is linearly polarized. The electric field vector is oriented radially around the shower axis, so the orientation of the electric field vector depends on the location of an observer with respect to the shower axis.
The second and main mechanism at play is the {\it geomagnetic effect}~\cite{Falcke03,TimsReview}.
In the shower front, the secondary electrons and positrons are being deflected towards opposite directions by the geomagnetic field, after which they are stopped by interactions with air molecules. In total, this leads to a net drift of the electrons and positrons in opposite directions as governed by the Lorentz force $\vec{F}=q\, \vec{v} \times \vec{B}$, where $q$ is the particle charge, $\vec{v}$ the velocity vector of the shower and $\vec{B}$ the geomagnetic field vector. As these ``transverse currents'' vary in time during the air shower development, they lead to the emission of electromagnetic radiation. The polarization of this signal is linear, with the electric field vector aligned with the Lorentz force (along $\vec{v}\times \vec{B}$).
Depending on the position of the observer and hence the orientation of the electric field vector for the two emission mechanisms, these contributions can add constructively or destructively, leading to an asymmetric ring structure: a {\it radio footprint}, the pattern of the radio signal on ground.
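To illustrate how the two polarization patterns superimpose in the shower plane, the following minimal Python sketch adds a constant $\vec{v}\times\vec{B}$-aligned geomagnetic term to a radial charge-excess term. The relative Askaryan strength \texttt{a} and the sign convention are illustrative assumptions, not values taken from this paper:

```python
import numpy as np

def total_polarization(x_vxB, x_vxvxB, a=0.14):
    """Unit-free electric-field direction at an observer offset
    (x_vxB, x_vxvxB) from the shower axis, in the shower plane.
    `a` is a hypothetical relative Askaryan amplitude."""
    # Geomagnetic term: linearly polarized along v x B everywhere.
    e_geo = np.array([1.0, 0.0])
    # Askaryan term: radially polarized around the shower axis
    # (sign convention assumed here for illustration).
    r = np.hypot(x_vxB, x_vxvxB)
    e_ask = -a * np.array([x_vxB, x_vxvxB]) / r if r > 0 else np.zeros(2)
    return e_geo + e_ask

# On one side of the axis the two terms add, on the other they partly cancel:
strong_side = total_polarization(-100.0, 0.0)
weak_side = total_polarization(+100.0, 0.0)
```

This reproduces the asymmetric footprint qualitatively: the total amplitude is larger where the two polarizations are parallel than where they are antiparallel.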
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./figures/GeoAskSketch.pdf}
\caption{\label{fig:GeoAskSketch} Main radio emission mechanisms in an extensive air-shower and the polarization of their
corresponding electric field in the shower plane. The Askaryan effect can be described as a variation of the net charge excess of the shower in time ($\dot{q}$) and the geomagnetic effect results from the time-variation of the transverse current in the shower ($\dot{I}$). Both mechanisms superimpose, resulting in a complex distribution of the electric field vector (bottom).}
\end{figure}
Since the refractive index of air is slightly larger than $1$, the radio waves travel slower through the air than the relativistically moving particle front. In addition to a strong forward-beaming of the emission, this leads to a so-called Cherenkov compression.
At particular observer positions on ground, a radio pulse is detected as being compressed in time since the radiation emitted by a significant part of the shower arrives simultaneously. The pulse becomes very narrow and coherent up to frequencies in the GHz region.
These observer positions can be found on the {\it Cherenkov ring} \cite{deVries11,Alvarez12,Nelles15}, given by $\cos\Theta_{\rm C}= (n\,\beta)^{-1}$ with $\Theta_{\rm C}$ defined as the Cherenkov angle, $n$ the refractive index of the medium that depends on the emission height, and $\beta$ the particle velocity.
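As a minimal numerical sketch, the Cherenkov angle and ring radius can be evaluated from an assumed exponential refractivity profile $n(h) = 1 + n_0\,e^{-h/h_0}$. The parameters below are typical literature values used here for illustration, not necessarily those implemented in any particular simulation code:

```python
import math

N0, H0 = 3.25e-4, 8.43e3  # assumed sea-level refractivity and scale height (m)

def refractive_index(h):
    """Exponential refractivity model n(h) = 1 + n0 * exp(-h / h0)."""
    return 1.0 + N0 * math.exp(-h / H0)

def cherenkov_angle(h, beta=1.0):
    """Theta_C in radians, from cos(Theta_C) = 1 / (n * beta)."""
    return math.acos(1.0 / (refractive_index(h) * beta))

def cherenkov_radius(h, distance_to_xmax):
    """Radius r_L = L * tan(Theta_C) of the ring at distance L from X_max."""
    return distance_to_xmax * math.tan(cherenkov_angle(h))
```

With these numbers the Cherenkov angle at sea level is of order $1.5^\circ$ and shrinks with altitude, as the text describes.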
One of the main observables that characterize the air-shower is the atmospheric depth $X_{\mathrm{max}}$ (also called the \textit{shower maximum} and given in $\mathrm{g\,cm}^{-2}$) at which the development of the cascade reaches its maximum particle number in the electromagnetic component~\cite{Xmax_plot}. Since the strength of the emitted radio signal scales linearly with the number of electrons and positrons, and since the signal is strongly beamed forward, $X_{\mathrm{max}}$ can be considered, at first approximation, as the position of a point source from which the maximum radiation originates. This approximation is only valid in the far field, which therefore restricts the method to that regime.
\begin{figure*}[tb]
\centering
\includegraphics[width=1.00\textwidth]{./figures/Sketch_Planes.pdf}
\caption{Sampling of the radio signal at several distances along the shower axis from the shower maximum $X_{\mathrm{max}}$. The antenna positions in the planes are arranged in a so-called star-shape pattern, defined by the shower direction $\vec{v}$ and the orientation of the Earth's magnetic field $\vec{B}$.} \label{fig:starshape_planes}
\end{figure*}
\section{The Radio Morphing method}\label{section:method}
Previous macroscopic studies have shown that the average radio emission properties from air-showers depends on a limited set of parameters, describing the energy and geometry of the shower~\cite{Kahn66,Allan:1971,Falcke03,Scholten08, Scholten18}. Following this observation, we have developed the Radio Morphing method, in which radio signals are rescaled from a single reference air-shower, via a series of simple mathematical operations.
This idea relies on the {\it universality} of the distribution of the electrons and positrons in extensive air-showers, that was pointed out by several works already~\cite{Giller05,Gora06}. Interestingly, this distribution was found to depend mainly on the depth of the shower maximum $X_{\rm max}$ and the number of particles in the cascade at that depth, that is, on the age of the shower. Based on this concept, Ref.~\cite{Universality} presented a parametrization of the air-shower pair distributions, that enables one to calculate the properties of any air-shower by a linear rescaling of a small number of parameters. The parametrization was later refined by, e.g., Refs.~\cite{Giller15,Smialkowski18}.
The distribution of these electrons and positrons in the shower front are directly responsible for the radio emission. Hence the associated radio signals are also expected to be universal.
Because the radio signal is integrated over the full shower evolution, shower-to-shower fluctuations are further smoothed out for observer positions in the far-field and this average universality can be seen as a robust estimate.
Here, we would like to mention that for electromagnetic showers with energies of about $10^{20}\,$eV, the Landau-Pomeranchuk-Migdal (LPM) effect, which leads to a suppression of bremsstrahlung and pair production, has to be taken into account~\cite{Cillis:1998hf}. Below this energy, the effect does not produce serious distortions in the shower development.
The strength of the measured radio emission is impacted by other external ingredients such as the geomagnetic angle, and various geometrical distance scales (shower zenith angle and altitude) that can modify the distance of the observer to the radio source and thus stretch the size of the radio footprint on ground. First we focus on the generation of a reference shower in Section~\ref{section:observers}. We will demonstrate in Section~\ref{section:amplitude} that, taking these mentioned processes into account, the radio emission properties can still be parametrized by only 4 parameters with a good level of precision at a fixed distance from the shower maximum $X_{\rm max}$, namely the primary particle energy ${\cal E}$, the shower zenith and azimuth angles $\theta$ and $\phi$ at injection, and the shower injection altitude $h$. The direction and position of the shower can then be re-adjusted using isometries (Section~\ref{section:isometries}).
The last step is the performance of two-point interpolations (Section~\ref{sec:interpolation}) of the electric field trace at the desired antenna position.
Hence, we are able to calculate, at any desired observer positions, the electric field $\vec{E}(\vec{x},t)$ emitted by any target shower from one generic simulated shower, acting as reference, by simple analytical operations.
The different steps of the Radio Morphing method are summarized in Fig.~\ref{fig:Recipe}. The corresponding code is publicly available, see \cite{RM:GitHub}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./figures/recipe_v4.pdf}
\caption{Recipe for Radio Morphing: These different steps are applied to a reference shower to receive the electric field traces for a target shower with desired parameters at the desired observer positions.}
\label{fig:Recipe}
\end{figure}
\subsection{Sampling positions for the simulated radio signal}\label{section:observers}
In the simulation of radio emission from air showers the signals are recorded at a set of observer positions.
To select these positions wisely, we profit from our knowledge about the emission mechanisms:
As mentioned in the previous section, the geomagnetic and the Askaryan effects are linearly polarized along $\vec{v}\times\vec{B}$ and radially around the shower axis, respectively. This naturally leads to a radiation profile that is not rotationally symmetric, and that can be adequately described in the shower-coordinate system defined by $(\vec{v}, \vec{v}\times\vec{B}, \vec{v}\times\vec{v}\times\vec{B})$. The advantage of the shower coordinates is that the radio emission can be fully described by a superposition of the two main emission mechanisms, whose contributions can be disentangled due to their different polarizations.
A correct sampling of the radio signals has to record the lateral distribution of the radio signal as well as the longitudinal distribution function along the shower axis defined by $\vec{v}$.
To guarantee this while minimizing the number of simulated observer positions, the positions are chosen so that, at several distances from $X_{\rm max}$ along the shower axis, they cover the locations where the interference between the two radiation components reaches its minima and maxima.
The strongest variations in signal strength occur along the $\vec{v}\times\vec{B}$-axis. We thus position a set of observers over a {\it star-shaped pattern} in the shower plane with eight arms, two of them aligned with the $\vec{v}\times\vec{B}$ axis and two with the $\vec{v}\times\vec{v}\times\vec{B}$-axis (see figure~\ref{fig:starshape_planes}) \cite{Buitink14}.
The extent of these star-shaped patterns at several distances from $X_{\rm max}$ is determined by the fact that the emission is strongly beamed forward and forms a cone with an opening angle of a few degrees, with $X_{\mathrm{max}}$ acting approximately as a point source (see Fig.~\ref{fig:starshape_planes}).
In the following, reference showers are produced via microscopic simulations of the radio signal, performed in Cartesian coordinates, with the $x$-component aligned along magnetic North-South (NS), the $y$-component along East-West (EW) and the $z$-component pointing upward (Up), while the scaling procedure is performed in the shower referential defined by ($\vec{v}$,$\vec{v}\times\vec{B}$,$\vec{v}\times\vec{v}\times\vec{B}$).
\subsection{Parametrizing the electric field}\label{section:amplitude}
The radio signal measured at a given position $\vec{x}$ and time $t$ can be formally described by the electric field vector $\vec{E}(\vec{x},t)$. Four parameters constrain the strength and polarization of the field at a given observer position: the primary's energy ${\cal E}$, the shower direction towards which the cascade is propagating\footnote{In the conventional definition, azimuth and zenith are defined to describe the direction from which the shower is coming.}, defined by zenith $\theta$ and azimuth $\phi$ at injection, and the altitude of injection $h$.
The key hypothesis in Radio Morphing is that, at a fixed distance to the shower maximum $X_{\rm max}$,
the electric field vector of any shower $B$ at the position $\vec{x}_B$ can be derived from that of a reference shower $A$ by a set of simple operations that are applied on the overall electric field $\vec{E}_A$ and on the position $\vec{x}_A$
\begin{equation}
\vec{E}_B(\vec{x}_B,t) = J_{AB}({{\cal E},\theta,\phi,h}) \,\vec{E}_A[k_{AB}({\theta,h})\,\vec{x}_A,t] \ ,
\end{equation}
where the scaling matrices $J_{AB}({{\cal E},\theta,\phi,h})$ and factors $k_{AB}({\theta,h})$, can be calculated as a function of the reference and target shower parameters $({\cal E},\theta,\phi,h)$, taking into account independent effects related to the primary energy, the geomagnetic field, the air density, and air refraction index. We detail in this section the dependency of the electric field on these various physical parameters at play in order to express $J_{AB}$ and $k_{AB}$.
We will derive their mathematical formulae by expressing their dependency on the primary energy ${\cal E}$, the geomagnetic angle $\alpha$, the density $\rho$ at the height of the shower maximum and the corresponding values for the Cherenkov angles $\Theta_{\rm C}$ of the target and the reference showers:
\begin{equation}
J_{AB}({{\cal E},\theta,\phi,h}) = j_{\cal E} j_{\rho} j_{\rm C} J_\alpha
\end{equation}
and
\begin{equation}
k_{AB}({\theta,h})= 1/j_{\rm C},
\end{equation}
where the scalars $j_{\cal E}$, $j_{\rho}$, $j_{\rm C}$ and the matrix $J_\alpha$ will be derived or explained below.
The electric-field and position vectors will be written in the shower coordinate system $(\vec{v}, \vec{v}\times\vec{B}, \vec{v}\times\vec{v}\times\vec{B})$.
\subsubsection{Scaling in primary energy}\label{section:energy}
Up to the frequencies corresponding to the typical thickness of the particle shower front (a few meters), secondary particles in the air-shower create a coherent radio emission: at a given frequency, the radiation emitted from several particles experiences negligible relative phase shifts during its propagation to the observer. This implies that the vectorial electric fields produced by each particle also add up coherently, and the total electric field amplitude scales linearly with the number of particles, which itself scales with ${\cal E}$. We can thus write $|\vec{E}(\vec{x},t)| \propto {\cal E}$.
This relationship is consistent at first order with the seminal expression of the electric field amplitude derived by \cite{Allan:1971} from pioneering measurements, and with the recent measurements performed by AERA and LOFAR \cite{AERAEnergy,LOFARMass}.
In practice, the electric-field amplitude of a generic shower $A$ with primary energy ${\cal E}_A$ can be scaled to that of a target shower $B$ with energy ${\cal E}_B$ by multiplying by the factor
\begin{equation}
j_{\cal E} = \frac{{\cal E}_B}{{\cal E}_A}\ .
\end{equation}
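In code, this coherent rescaling is a single multiplication applied to the whole field trace. The trace below is a synthetic placeholder, not actual simulation output:

```python
import numpy as np

def scale_energy(trace, e_ref, e_target):
    """Multiply the field amplitude by j_E = E_target / E_ref."""
    return (e_target / e_ref) * np.asarray(trace, dtype=float)

trace_A = np.array([0.0, 50.0, -120.0, 30.0])  # muV/m, illustrative values
trace_B = scale_energy(trace_A, 1e17, 5e17)    # 10^17 eV -> 5 x 10^17 eV
```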
Note that for frequencies typically higher than $\sim100\,$MHz, coherence in the emission is no longer guaranteed (except for positions on the Cherenkov cone, see Sec.~\ref{section:setup}), which can lead to uncertainties when applying this factor. More precisely, the effective thickness of the shower front that sets the coherence condition also depends on the observer angle.
\subsubsection{Scaling in geomagnetic angle}\label{section:geomagnetic}
The strength of the radiation emitted via the geomagnetic effect scales with the strength of the Lorentz force $\vec{F}=q \, \vec{v} \times \vec{B}$ experienced by each particle in the shower front, and that induces a transverse current. The magnitude of the emitted signal scales with
$|\vec{v}\times \vec{B}|=|\vec{v}| |\vec{B} | \sin\alpha$,
leading directly to a sinusoidal dependency of the electric field strength over the geomagnetic angle:
$|\vec{E}| \propto |\vec{v}\times \vec{B}| \propto \sin\alpha$.
Here, the geomagnetic angle is given by $\alpha= \measuredangle (\vec v, \vec B)$, introducing the dependency on the angles $\theta$ and $\phi$ of the shower.
Also, this dependency is consistent at first order with the seminal expression of the electric field amplitude derived by \cite{Allan:1971} from pioneering measurements, and is confirmed experimentally by recent measurements performed by CODALEMA \cite{Ardouin:2009zp} and later by AERA and LOFAR \cite{AERAEnergy,LOFARMass}.
We neglect the linear dependency on the local magnetic-field strength for the moment and choose a reference shower simulated for the target site.
The amplitudes of reference shower $A$ and target shower $B$ can be related via a scaling factor that takes into account the two geomagnetic angles $\alpha(\theta_A,\phi_A)$ and $\alpha(\theta_B,\phi_B)$ sensed by each shower: $j_\alpha=\sin\alpha(\theta_B,\phi_B)/\sin\alpha(\theta_A,\phi_A)$. This scaling was recently demonstrated experimentally by AERA \cite{AERAEnergy}.
Since the geomagnetic emission is linearly polarized along $\vec{v}\times \vec{B}$, this factor is multiplied to the ${\vec{v}\times\vec{B}}$ component of the electric field in shower coordinates.
Thus the scaling matrix can be expressed in the shower referential by
\begin{equation}
J_\alpha =
\begin{bmatrix}
1 & 0 & 0 \\
0 & j_\alpha & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\quad \mathrm{with} \quad
j_\alpha=\frac{\sin\alpha(\theta_B,\phi_B)}{\sin\alpha(\theta_A,\phi_A)}
\ .
\end{equation}
Here, we neglect the possible projection of the geomagnetic component onto the $\vec{v}$ component of the electric field, assuming that it is negligible with respect to our target accuracy. The phase shift between the Askaryan and geomagnetic emission, as observed by LOFAR~\cite{Scholten:2016gmj}, is included implicitly by using a microscopic simulation as reference. We assume that the phase shift depends solely on the observed frequency and on the position of the observer with respect to the Cherenkov cone. Therefore, a scaling of the amplitude will not change the time difference determined by the microscopic simulation.
For simplicity, we chose in the current version of Radio Morphing to scale the ${\vec{v}\times\vec{B}}$ component of the electric field without an a-priori decoupling of the contributions from the geomagnetic and Askaryan effects. The impact of the latter could be of the order of a few\,\%, depending on the measured polarization~\cite{14percent}.
To exclude this artificially induced uncertainty, one would have to decouple the two emission components completely, e.g.\ by comparing simulations with the magnetic field turned on and off. These effects, as well as a possible scaling with the magnetic field strength, will be included in a future version of the Radio Morphing code.
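A possible Python sketch of the factor $j_\alpha$ follows. The magnetic-field geometry (field pointing towards magnetic North with a $60^\circ$ downward inclination) is an illustrative placeholder for a real site model:

```python
import numpy as np

def shower_axis(theta_deg, phi_deg):
    """Unit vector v along the propagation direction at injection."""
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    return np.array([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)])

def sin_alpha(theta_deg, phi_deg, b_unit):
    """Sine of the geomagnetic angle alpha between v and B."""
    return np.linalg.norm(np.cross(shower_axis(theta_deg, phi_deg), b_unit))

# Assumed field geometry: magnetic North, 60 deg downward inclination.
B_UNIT = np.array([np.cos(np.radians(60.0)), 0.0, -np.sin(np.radians(60.0))])

def j_alpha(theta_A, phi_A, theta_B, phi_B, b_unit=B_UNIT):
    """Geomagnetic scaling factor sin(alpha_B) / sin(alpha_A)."""
    return sin_alpha(theta_B, phi_B, b_unit) / sin_alpha(theta_A, phi_A, b_unit)
```

In the Radio Morphing code this factor multiplies only the $\vec{v}\times\vec{B}$ component of the electric field, as expressed by the matrix $J_\alpha$ above.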
\subsubsection{Effects of air density}\label{sec:height}
The parametrization in terms of the injection altitude and zenith angle, and therefore of the air density, is poorly documented in the literature. Complicated ad-hoc fits have been invoked \cite{Glaser16,Scholten18}, and the zenith scaling is currently handled in the community as a ``distance to $X_{\rm max}$'' correction. We present here a more natural modeling of these effects, validated with ZHAireS simulations.
In practice, our input parameter is the initial injection height $h$ of the air shower, that corresponds to the altitude at which the shower starts developing in the atmosphere. Geometrically, this altitude is connected to the altitude of the shower maximum $h_{X_\mathrm{max}}$ by the following relation via the zenith angle of the air shower $\theta$ at injection:
\begin{equation}\label{eq:height}
h_{X_{\mathrm{max}}} = h + d_\mathrm{hor}/\tan \theta
\end{equation}
with $d_\mathrm{hor}$ the horizontal distance of the shower maximum from the injection point of the shower. Here, a flat-Earth approximation can be used since $d_\mathrm{hor} \ll R_\oplus$, the Earth radius.
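This geometric relation can be written down directly. Note that with the propagation-direction convention used here, a downward-going shower has $\theta > 90^\circ$, so $\tan\theta < 0$ and $h_{X_{\rm max}} < h$:

```python
import math

def xmax_altitude(h, theta_deg, d_hor):
    """Flat-Earth relation h_Xmax = h + d_hor / tan(theta).

    h        : injection altitude in m
    theta_deg: zenith angle of the propagation direction at injection
    d_hor    : horizontal distance from injection point to X_max in m
    """
    return h + d_hor / math.tan(math.radians(theta_deg))
```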
The dependency of the radiated energy on the density at $X_{\rm max}$ is complicated to estimate analytically, as it corresponds to integrated values over the full shower development. We investigated this effect numerically using ZHAireS \cite{Zhaires} simulations. We have produced sets of shower simulations with the parameters of the AERA \cite{AERAEnergy} site at the Pierre Auger Observatory.
The sets contain proton-induced showers with energy $10^{17}\,$eV arriving from the North with a zenith angle $\theta = 80^\circ$ in the conventional angular system used for cosmic rays. To achieve different densities at the position of the shower maximum, we changed the injection height of the shower artificially. For each event, we calculated the radiated energy measured at ground level.
The result is illustrated in Figure~\ref{fig:density_scaling}. The error bars are given by the RMS over 10 simulated air showers for each shower geometry.
Interestingly, we find that the radiated energy scales as $|\vec{E}|^2 \propto [\rho_{X_{\rm max}}(h,\theta)]^{-1}$ (for simplicity we use an exponent of $1$ instead of the exact fitting result), where $\rho_{X_{\rm max}}(h,\theta)$ is the air density at $X_{\rm max}$, related to the density at $h$ via Eq.~\ref{eq:height}. This effect might be understood as follows.
Each part of the footprint on the ground is most sensitive to a specific part of the longitudinal profile of the shower. Thus, if one ``separates'' each part of the footprint and links it to a specific depth $X$ in the profile, the electric field scales with the air density at $X$. Integrating the radio energy that reaches the ground then involves an intricate convolution of the number of particles at each depth $X$, the air density at that depth, and the position on the ground (each part being more sensitive to a certain $X$). Nevertheless, this effect is not yet fully understood and will be investigated further in a future study.
Even though we expect the charge excess to increase with the air density, this effect is found to be relatively small, and we therefore neglect it for the moment. In addition, the Askaryan emission is only a minor correction to the total measured peak amplitude for inclined showers: for the reference shower used in Sec.~\ref{sec:statvalid}, we found a relative contribution to the peak amplitude on the Cherenkov cone of about $14\%$. Therefore, we apply the scaling to all components of the electric field vector in the current version of Radio Morphing.
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth]{./figures/pnormal-Inj-Fitrho-comparison_cliipingcorrected.pdf}\\
\caption{Density scaling of the radio signal in the $30-80\,\mathrm{MHz}$ frequency band for proton-induced showers with energy $10^{17}\,$eV arriving from the North.
Red dots show the simulation outputs for an antenna array at the AERA site: the radiated energy in the radio signal $E_\mathrm{radio}$ as a function of the density ratio $r_\rho=\rho_{X_{\rm max}} / \rho_0$, with $\rho_{X_{\rm max}}$ the actual density at the height of the shower maximum and $\rho_0= 1.225\,\mathrm{kg/m}^3$ the density at sea level. The data points are well fit by a power-law function $E_\mathrm{radio}=E_0/{r_{\rho}}^p$, where the values of $E_0$ and $p$ are indicated in the label (dotted line).}
\label{fig:density_scaling}
\end{figure}
From these results, we can compute the scaling factor to obtain the amplitude of the target shower $B$ from the reference shower $A$
\begin{equation}
j_{\rho}= \left[\frac{\rho_{X_{\rm max}}(h_A,\theta_A)}{\rho_{X_{\rm max}}(h_B,\theta_B)} \right]^{1/2}
\end{equation}
with $\rho_{X_{\rm max}}(h,\theta)$ the air density at altitude $X_{\rm max}$ for reference and target showers $A$ and $B$, related to the altitude at shower injection via Eq.~\ref{eq:height}.
A comparison with the formula presented in Ref.~\cite{Glaser16} shows good agreement between the two formalisms for highly inclined showers with high densities at the shower maximum.
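A minimal sketch of $j_\rho$ follows, assuming an isothermal exponential atmosphere with the standard sea-level density and an assumed scale height (not the exact atmospheric parametrization used in the simulation codes):

```python
import math

RHO_0 = 1.225    # kg/m^3, air density at sea level
H_SCALE = 8.4e3  # m, assumed density scale height

def air_density(h):
    """Isothermal exponential model rho(h) = rho_0 * exp(-h / H)."""
    return RHO_0 * math.exp(-h / H_SCALE)

def j_rho(h_xmax_A, h_xmax_B):
    """Amplitude scaling [rho(h_Xmax,A) / rho(h_Xmax,B)]^(1/2)."""
    return math.sqrt(air_density(h_xmax_A) / air_density(h_xmax_B))
```

A target shower with a higher (less dense) $X_{\rm max}$ altitude than the reference thus gets its amplitude boosted, consistent with the $|\vec{E}|^2 \propto \rho_{X_{\rm max}}^{-1}$ scaling found above.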
\subsubsection{Stretching effect from the Cherenkov angle}\label{sec:cherenkov}
The injection altitude and zenith angle also impact the size of the radio footprint, which is set by the Cherenkov cone.
The opening angle of this cone depends on the altitude: the atmosphere becomes denser with decreasing height, leading to a larger refractive index and hence a larger Cherenkov cone. Indeed, the radius of the Cherenkov cone within which the radio signal is emitted is given by $r_L=L \tan \Theta_C$, where $L$ is the distance from $X_{\mathrm{max}}$ and $\Theta_C =\arccos[1/n(h)]$ is the Cherenkov angle \cite{deVries11,Alvarez12}. The refractive index $n_{X_{\rm max}}(h,\theta)$ is a function of the altitude $h$ and zenith angle $\theta$ and has to be evaluated at the altitude of the shower maximum $X_{\mathrm{max}}$ using Eq.~\ref{eq:height}. In ZHAireS, for instance, an exponential function is implemented~\cite{Zhaires}.
The scaling between two showers at different injection heights is given by the ratio between the Cherenkov radii. This leads to a stretching factor for the radio footprints, to be applied to the positions $\vec{x}_B = k_{\rm C} \vec{x}_A$ with
\begin{equation}
k_{\rm C} = \frac{\Theta_{{\rm C},B}}{\Theta_{{\rm C},A}} \sim \frac{\arccos[1/n_{X_{\rm max}}(h_B,\theta_B)]}{\arccos[1/n_{X_{\rm max}}(h_A,\theta_A)]}
\end{equation}
with $(h_A,\theta_A)$ and $(h_B,\theta_B)$ the parameters of reference and target showers $A$ and $B$. Here we have assumed that $\tan\Theta_{\rm C}\sim \Theta_{\rm C}$, as $\Theta_{\rm C}\ll 1$. This means that the distances between the simulated antenna positions along the star-shaped arms are corrected by the corresponding stretching factor $k_{\rm C}$.
By energy conservation, the total radiated energy over each area intersecting the Cherenkov cone should be constant: $\lvert \vec{E}(\vec{x},t) \rvert ^2/r^2 = {\rm constant}$, where $r$ is the radius of the intersected area. The stretching of the area by $r_B = k_{\rm C} r_A$ yields the following scaling factor between showers $A$ and $B$
\begin{equation}
j_{\rm C} = \frac{\lvert \vec{E}_B(\vec{x},t) \rvert}{\lvert \vec{E}_A(\vec{x},t) \rvert} = \frac{1}{k_{\rm C}} \ .
\end{equation}
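The stretching factor $k_{\rm C}$ and the associated amplitude factor $j_{\rm C}=1/k_{\rm C}$ can be sketched in a few lines. The exponential refractivity model below mirrors the kind of parametrization mentioned for ZHAireS, but its constants are illustrative assumptions, not the actual ZHAireS values.

```python
import math

# Exponential refractivity model n(h) = 1 + N0 * exp(-h/H0); the constants
# below are illustrative assumptions, not the exact ZHAireS parametrization.
N0 = 3.25e-4
H0 = 7.4e5  # scale height in cm

def refractive_index(h_cm):
    return 1.0 + N0 * math.exp(-h_cm / H0)

def cherenkov_angle(h_cm):
    """Theta_C = arccos(1/n), evaluated at the altitude of X_max."""
    return math.acos(1.0 / refractive_index(h_cm))

def stretch_and_scale(h_xmax_ref, h_xmax_target):
    """Return (k_C, j_C): position stretching and amplitude factor.

    k_C = Theta_C(target) / Theta_C(ref)   (small-angle approximation)
    j_C = 1 / k_C                          (energy conservation on the ring)
    """
    k_c = cherenkov_angle(h_xmax_target) / cherenkov_angle(h_xmax_ref)
    return k_c, 1.0 / k_c
```

A higher target $X_{\mathrm{max}}$ (smaller refractive index) yields $k_{\rm C}<1$, i.e. a narrower footprint and a correspondingly larger on-cone amplitude.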
Note that very deep showers can be affected by the clipping effect, when particles reach the ground before the shower has completely developed. This effect is not accounted for in the scaling of the electric-field amplitude.
\begin{figure}[tp]
\centering
\includegraphics[width=0.45\textwidth]{./figures/isometry_example.pdf}
\caption{\label{fig:scaling_positions} Illustration of the isometry operation: the simulated antenna positions for the reference shower (blue) are rotated and translated according to the new shower direction (red line) and the $X_\mathrm{max}$ position (yellow diamonds) of the target shower (red).}
\end{figure}
\subsection{Isometries of observer positions} \label{section:isometries}
Once the electric field vector of the reference shower $\vec{E}_A(\vec{x}_1,t)$ has been morphed to $\vec{E}_B(\vec{x}_2,t)$ according to the set of parameters $({\cal E},\theta,\phi,h)$ of the target shower, we rotate and translate the positions at which the signal was simulated according to the new shower direction. This can be done straightforwardly by a rotation and a translation of the observer positions in the star-shaped planes where the electric field of the reference shower was sampled (see Section~\ref{section:observers}), as illustrated in Fig.~\ref{fig:scaling_positions}.
The isometries performed on $\vec{E}_A(\vec{x}_A,t)$ should (by definition) conserve the distance of each star-shaped plane to the position of $X_{\rm max}$. This condition is required in order to ensure the validity of the morphing process performed in the first step to account for the parameters $({\cal E},\theta,\phi,h)$.
The physical location in space of the target's shower maximum $X_{\rm max}$ depends on the actual shower parameters, e.g.\ the primary energy, as well as on the primary-particle type. It can be obtained from dedicated simulations of the induced particle cascade, or computed as follows: one integrates the traversed air density along the shower axis from the point of shower injection until the average depth of the shower maximum for this specific primary type is reached, using an atmospheric density model (e.g., an isothermal model).
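A minimal sketch of this integration could look as follows. It assumes a flat-Earth geometry, a downward-going shower and an isothermal atmosphere, all simplifying assumptions; very inclined or up-going showers, as used later in the text, would require Earth-curvature corrections.

```python
import math

RHO_0 = 1.225e-3   # sea-level air density in g/cm^3 (isothermal model)
H_SCALE = 8.0e5    # scale height in cm (~8 km), an assumption

def air_density(h_cm):
    """Isothermal-atmosphere density at altitude h (cm above sea level)."""
    return RHO_0 * math.exp(-max(h_cm, 0.0) / H_SCALE)

def xmax_position(injection_h, zenith_deg, target_depth, step=1.0e4):
    """Walk along the shower axis from the injection point, accumulating
    slant depth in g/cm^2, until the mean X_max depth of the target
    primary is reached. Returns the distance along the axis (cm) and
    the altitude of X_max. Flat-Earth, downward-going geometry assumed.
    """
    cos_z = math.cos(math.radians(zenith_deg))
    depth, dist, h = 0.0, 0.0, injection_h
    while depth < target_depth and h > 0.0:
        depth += air_density(h) * step  # slant depth accumulated per step
        dist += step
        h -= cos_z * step               # altitude drops along the axis
    return dist, h
```

For a vertical shower the distance travelled equals the altitude drop, which provides a quick consistency check of the geometry.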
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./figures/interpolation_example_v2.pdf}
\caption{\label{fig:interpolation} Example of the interpolation of the phase (wrapped and unwrapped) and the amplitude for one antenna position: the red and blue markers represent the values at the antenna positions $a$ and $b$ which are used for the interpolation of the signal at the desired position. The resulting phase and amplitude are represented by the green markers.}
\end{figure}
\subsection{Interpolation of the electric field traces: $\vec{E}_A(x_i,t) \rightarrow \vec{E}_A(x,t)$}\label{sec:interpolation}
Once the electric field has been sampled at fixed antenna positions in the star-shaped planes, we interpolate the signal pulses at any desired position.
In the scripts provided for Radio Morphing, we implemented the method presented in \cite{EwaThesis} as an example of an interpolation of radio signals. Here, a Fourier transform into the frequency domain is performed, and the signal at a given location is obtained by a linear interpolation in the frequency domain between two generic positions.
The signal spectrum can be represented in polar coordinates as
\[f(r,\varphi)=r e^{i\varphi} \]
with $r$ as the signal amplitude and $\varphi$ the complex phase in the interval $\left[-\pi, \pi\right)$. We detail in the following the two-point interpolation of $r$ and $\varphi$ at a given observer position.
The spectrum phase is wrapped into the interval $\left[-\pi, \pi\right)$, which results in discontinuities at the interval limits and in sawtooth-shaped features (see Fig.~\ref{fig:interpolation}). Phases have to be unwrapped before interpolation, in order to account for data points sharing the same frequencies but with shifted phases that are not on the same sawtooth edge. To locate the discontinuities, the following conditions are scanned for $i=2,\ldots,n$:
\begin{eqnarray}
\varphi_i -\varphi_{i-1} &<& -\pi \quad \mbox{ for } \varphi_i <\varphi_{i-1} \\
\varphi_i -\varphi_{i-1} &>& +\pi \quad\mbox{ for } \varphi_i >\varphi_{i-1} \ .
\end{eqnarray}
For each discontinuity fulfilling one of these conditions, all following data points $\varphi_{i+m}$ are then corrected by a constant offset, with $l$ the number of preceding discontinuities:
\begin{equation}
\varphi_{i+m, {\rm new}}= \varphi_{i+m} +2\pi l \ .
\end{equation}
This algorithm requires a sufficient sampling rate of the spectrum since the phase difference between consecutive data points has to be smaller than $\pi$.
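The unwrapping step described above can be sketched as follows. The loop mirrors the described algorithm and handles jumps in both directions; `numpy.unwrap` implements the same idea and is used below only as a cross-check.

```python
import numpy as np

def unwrap_phase(phi):
    """Unwrap a wrapped phase array (values in [-pi, pi)) by adding a
    multiple of 2*pi after every detected discontinuity.

    Requires the phase step between consecutive frequency bins to stay
    below pi (i.e. a sufficient sampling of the spectrum)."""
    phi = np.asarray(phi, dtype=float)
    offset = 0.0
    out = [phi[0]]
    for i in range(1, len(phi)):
        d = phi[i] - phi[i - 1]
        if d > np.pi:        # wrapped downward across -pi: subtract 2*pi
            offset -= 2.0 * np.pi
        elif d < -np.pi:     # wrapped upward across +pi: add 2*pi
            offset += 2.0 * np.pi
        out.append(phi[i] + offset)
    return np.array(out)
```

Applied to a linearly decreasing phase (the typical behavior of a time-delayed pulse), the routine recovers the continuous phase exactly.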
The unwrapping results in continuous phases, which can then be interpolated linearly:
\begin{equation}\label{eq:phase}
\varphi(\vec{x})= c_a \varphi(\vec{x}_a) + c_b \varphi(\vec{x}_b)\ .
\end{equation}
Here, $\vec{x}$ is the observer position of the interpolated signal, and $\vec{x}_a$ and $\vec{x}_b$ are the actual simulated observer positions. The weighting coefficients $c_a$ and $c_b$ are defined as
\[ c_a= \frac{\left|\vec{x}_b-\vec{x}\right|}{\left|\vec{x}_b-\vec{x}_a\right|} \quad \mathrm{ and } \quad c_b= \frac{\left|\vec{x}_a-\vec{x}\right|}{\left|\vec{x}_b-\vec{x}_a\right|} \ ,
\]
so that the closer simulated position receives the larger weight.
Since the amplitude in the frequency domain is independent in each frequency bin, a linear interpolation within one bin is sufficient for $r$:
\begin{equation}
r(\vec{x})= c_a r(\vec{x}_a) + c_b r(\vec{x}_b)\ .
\end{equation}
The interpolated spectrum is then given by
\begin{equation}
f_{\rm int}(r,\varphi)=r(\vec{x}) \,e^{i\varphi(\vec{x})}
\end{equation}
from which the corresponding time series can be derived via inverse Fourier transformation. An example of the interpolation of the phase and the amplitude is shown in Fig.~\ref{fig:interpolation}.
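The full two-point interpolation, from the Fourier transform to the inverse transform, can be sketched as below. The function name is hypothetical, and standard linear-interpolation weights are used (the closer simulated position receives the larger weight).

```python
import numpy as np

def interpolate_trace(trace_a, trace_b, x_a, x_b, x):
    """Two-point interpolation of an electric-field time trace.

    Transforms both traces to the frequency domain, interpolates the
    unwrapped phase and the amplitude linearly, and transforms back.
    trace_a/trace_b: 1D real arrays of equal length;
    x_a, x_b, x: scalar positions along the interpolation axis.
    """
    c_a = abs(x_b - x) / abs(x_b - x_a)
    c_b = abs(x - x_a) / abs(x_b - x_a)
    spec_a = np.fft.rfft(trace_a)
    spec_b = np.fft.rfft(trace_b)
    # linear interpolation of amplitude and unwrapped phase per bin
    amp = c_a * np.abs(spec_a) + c_b * np.abs(spec_b)
    phase = c_a * np.unwrap(np.angle(spec_a)) + c_b * np.unwrap(np.angle(spec_b))
    return np.fft.irfft(amp * np.exp(1j * phase), n=len(trace_a))
```

At $\vec{x}=\vec{x}_a$ the routine returns the first trace unchanged, and for two in-phase traces the interpolated amplitude varies linearly between them.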
The linear interpolation of the phases (see Eq.~\ref{eq:phase}) implies a linear interpolation of the arrival time as long as the wave front can be approximated as a plane between two simulated observer positions, which is valid for a dense grid of observers. This is a simplification of the hyperbolic shape of the wave front that holds if the distance between the antennas is of the order of the wavelength.
In the example presented in Section~\ref{section:validation}, the distances between the simulated antenna positions of the reference shower are larger than the radio wavelengths considered. In that case, the phase gradient in the phase interpolation cannot correctly reproduce the arrival time of the signal at the considered antenna positions, while the signal structure itself is not affected. The correction of the arrival time of the signal at the observer position is part of foreseen developments of the Radio Morphing method.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{./figures/example_3D.pdf}
\caption{\label{fig:3Dinterpolation_new} Example of the interpolation in 3D: after finding the two star-shape planes (red dots) closest to the desired antenna position (blue circle), the projections of this position onto the planes along the line of sight are determined (green crosses). The four closest simulated antenna positions are identified (blue crosses). The yellow diamond represents the position of $X_{{\rm max}}$ for this example shower.}
\end{figure}
\subsubsection{3D interpolation}
The electric field time trace can be computed at any location inside the conical volume resulting from the isometry transformation. The process is the following, as also illustrated in Fig.~\ref{fig:3Dinterpolation_new}:
\begin{itemize}
\item First, we determine the intersections of the line linking the observer to the $X_{\mathrm{max}}$ position with the two star-shape planes surrounding the observer position.
\item Then, we compute the signal at each of these two intersection points from the closest neighbors' signals through a bilinear interpolation, using the method detailed above.
\item Finally, we interpolate between these two signals to compute the time trace at the desired observer position.
\end{itemize}
The underlying hypothesis of this treatment is that the radio emission is point-like, and emitted from $X_{\mathrm{max}}$.
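The geometric core of this point-source assumption can be sketched as follows. The helper names are hypothetical; only the line-of-sight projection and the plane-bracketing steps are shown, not the signal interpolation itself, and the observer is assumed to lie between the first and last star-shape planes.

```python
import numpy as np

def project_onto_plane(x_obs, x_max, plane_point, plane_normal):
    """Intersection of the line (X_max -> observer) with a star-shape
    plane, i.e. the point-source assumption of the 3D interpolation."""
    direction = x_obs - x_max
    t = np.dot(plane_point - x_max, plane_normal) / np.dot(direction, plane_normal)
    return x_max + t * direction

def bracket_planes(x_obs, x_max, plane_dists, axis_dir):
    """Indices of the two star-shape planes (sorted by distance to
    X_max along the shower axis) that enclose the observer."""
    d_obs = np.dot(x_obs - x_max, axis_dir)
    idx = np.searchsorted(plane_dists, d_obs)
    return idx - 1, idx
```

With $X_{\mathrm{max}}$ at the origin and the axis along $z$, an observer between two planes projects onto each plane along its line of sight, after which the bilinear and two-point interpolations described above can be applied.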
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth]{./figures/ant20_000001_50deg.pdf}
\caption{\label{fig:timetraces_spectra} Example signal traces for an air shower induced by an electron with an energy of $1.05\,$EeV, an azimuth angle of $50^\circ$ and a zenith angle of $89.5^\circ$ (slightly up-going shower), using Radio Morphing (solid) and ZHAireS (dashed lines) for comparison. The antenna positions are at a distance of $\sim 75\,$km along the shower axis from the shower maximum, while the Cherenkov ring is expected at an off-axis angle of $\sim 1.4^\circ$. Top: time traces of the West-East component of the electric field for antenna positions at different distances to the shower axis, given as off-axis angles. The time traces were shifted in time for better comparison. Bottom: the corresponding frequency spectra.}
\end{figure}
\section{Comparison to microscopic simulation and performances}\label{section:validation}
\subsection{Time traces and frequency spectra}
We validate the behavior of Radio-Morphed time traces and frequency spectra by a direct comparison to microscopic simulations for various antenna positions.
As a reference shower we used an electron-induced air shower with a primary energy of $0.1\,$EeV, an azimuth angle of $230^\circ$ and a zenith angle of $88.5^\circ$ (slightly up-going shower), injected at a height of $1700\,$m above sea level. The radio emission was sampled in $5\,$km steps at several distances from the shower maximum. We simulated the signal for $184$ observer positions per plane, arranged in the star-shape pattern so that a conical volume with a half-angle of 3$^{\circ}$ is covered. A thinning level of $10^{-4}$ and a sampling rate of $10\,$GHz were set for the simulation. The ground altitude is $2000\,$m above sea level. The magnetic field has an inclination of $63.18^\circ$ and a declination of $2.72^\circ$, with a field strength of $56.5\,\mu$T. For the atmosphere, we use the Linsley model with coefficients for the US standard atmosphere. These are the default options for all following ZHAireS simulations.
Figure~\ref{fig:timetraces_spectra} shows radio signals from a ZHAireS simulation (dashed lines) and a Radio Morphing computation (solid lines) of an example target shower induced by an electron with an energy of $1.05\,$EeV, a first interaction at a height of $2200\,$m above sea level, an azimuth angle of $50^\circ$ (propagating towards North-West) and a zenith angle of $89.5^\circ$ (slightly up-going shower). For the ZHAireS simulation, a thinning level of $10^{-4}$ was set.
The signal is calculated for positions at several distances to the shower axis, expressed as off-axis angles, located roughly $d=75\,$km from the injection point of the shower.
The bipolar characteristic of the signals is correctly reproduced in the time domain, and the signal amplitudes agree well with each other.
One can see that the time-compression of the signal at the Cherenkov angle is also preserved by Radio Morphing. The Cherenkov ring is estimated to be located at $\sim 1.4^\circ$ from the shower axis, although this value strongly depends on the model used to calculate the refractive index $n_{X_{\rm max}}$ at the altitude of $X_\mathrm{max}$.
As expected, the time traces and the frequency spectra both increase in amplitude at high frequencies when approaching the Cherenkov ring (red lines).
A slight mismatch in signal amplitudes is observed for the position closest to the Cherenkov ring. This is most likely due to statistical fluctuations in the shower development, which induce a difference between the $X_{\mathrm{max}}$ value of the simulated shower and that computed with Radio Morphing (see Section~\ref{section:isometries}). These different $X_{\mathrm{max}}$ values induce a (purely geometrical) variation of the Cherenkov ring radius at ground, only partially compensated by the different values of the Cherenkov angle at different $X_{\mathrm{max}}$ heights.
\subsection{Peak amplitude distributions}\label{sec:comp1}
Figure~\ref{fig:example_2D} shows the peak-amplitude distribution of received signals (West-East components) emitted by the same example target shower, simulated using ZHAireS (left) and calculated with Radio Morphing (right).
The observer positions are located at a distance of roughly $d=75\,$km from the injection point of the shower, on a slope slightly tilted by $5^\circ$ towards South. The antenna positions are arranged in a grid-like structure, as planned for the future GRAND experiment.
The direct comparison shows that, for an extended array, the predictions of the signal strength by Radio Morphing and ZHAireS are in good agreement. The Cherenkov-cone feature is clearly visible in both distributions.
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth]{./figures/50deg_ey_2D_.pdf}
\caption{\label{fig:example_2D} Comparison of the footprint detected by an antenna array tilted by $5^\circ$ towards South: the peak-amplitude distributions as simulated with ZHAireS (left) and as calculated by Radio Morphing (right) for the example target shower.}
\end{figure}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.8\textwidth]{./figures/Diff_Ey_2D_.pdf}
\caption{\label{fig:Diff_2D} The relative (top) and the absolute (bottom) differences in the West-East component of the signal distribution, defined as $(E_{\rm{RM}} - E_{\rm{ZHAireS}}) /E_{\rm{ZHAireS}}$ and $E_{\rm{RM}} - E_{\rm{ZHAireS}}$, for the full frequency band of $0-500\,$MHz (left) and for the frequency bands $30-80\,$MHz (center) and $50-200\,$MHz (right).}
\end{figure*}
Figure~\ref{fig:Diff_2D} presents the relative and absolute differences between the ZHAireS and Radio Morphed footprints for this same example shower, defined as $(E_{\mathrm{RM}} - E_{\mathrm{ZHAireS}}) /E_{\mathrm{ZHAireS}}$ (top) and $E_{\mathrm{RM}} - E_{\mathrm{ZHAireS}}$ (bottom), for the full frequency band of $0-500\,$MHz (left) and for the frequency bands $30-80\,$MHz (center) and $50-200\,$MHz (right). One can observe that the largest differences in the peak-amplitude distributions appear at the edges of the Cherenkov ring. This corresponds to the slight mismatch in the predicted position of the Cherenkov cone discussed earlier: since the signal strength drops sharply just off the cone angle, a small offset leads to a larger difference in the predicted signal strength.
One observes that the signal predicted by Radio Morphing is slightly underestimated for observer positions on the Cherenkov ring, while it is slightly overestimated outside. This is induced mainly by the choice of the reference shower in this specific example, since a low-energy air shower of $0.1\,$EeV with a flat lateral distribution function acts as the reference for a target shower with an energy of $1.05\,$EeV. The discrepancy in the signal strength decreases if the time traces are filtered, as done for the $30-80\,$MHz (center) and $50-200\,$MHz (right) bands. A first reason is the numerical noise at higher frequencies due to the thinning level. Besides, coherence conditions are not fulfilled for frequencies above $\sim 100\,$MHz, so that the linear scaling in energy induces larger uncertainties at higher frequencies.
\subsection{Statistical validation of Radio Morphing}\label{sec:statvalid}
In the following, a comparison is made between results from Radio Morphing and ZHAireS. The comparison is based on a set of $\sim300$ inclined air showers induced by high-energy electrons and pions propagating towards the North. The distributions in injection height above sea level, energy, zenith and azimuth are shown in Figure~\ref{fig:eventset}. Figure~\ref{fig:eventset_geo} shows the corresponding values of the geomagnetic angle and the determined contribution of the Askaryan emission to the peak amplitude at an observer position on the Cherenkov cone. For the reference shower, we can decouple the contributions from the geomagnetic and the Askaryan emission for the simulated observer positions along the positive $\vec{v}\times\vec{B}$-axis due to the polarisation of the two signal contributions. We found an Askaryan contribution of about $14$\% for a position on the Cherenkov cone. Based on the formula given in \cite{14percent}, we calculate the expected relative contribution to the signal strength at this position by scaling with the geomagnetic angle. The overall assumption that the impact of the Askaryan emission is minor is thus also valid for the event set used.
We computed the position of the shower maximum $X_{\mathrm{max}}$ of the target showers as described in Section~\ref{section:isometries}. Here, we obtained the value of $X_{\mathrm{max}}$ in $\mathrm{g}\,\mathrm{cm}^{-2}$ from \cite{Risse:2009ir}, while we determined the atmospheric column depth between the injection point and $X_{\mathrm{max}}$ of the electron (pion)-induced showers from the elongation rate of photon (proton)-induced ones.
The radio signals were computed at observer positions located inside the shower's radio footprint, typically $40-50\,$km away from the shower injection point. Only observer positions with off-axis angles smaller than $1.6^\circ$ were considered.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.65\textwidth]{./figures/leading_new.pdf}
\caption{\label{fig:eventset} Characterization of shower events in the set: distribution of injection heights, primary energies, azimuth and zenith angles of the pion (green) and electron (blue) primaries inducing the target air showers. The red line describes the distributions of all events in the set.}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.65\textwidth]{./figures/leading_new_alpha.pdf}
\caption{\label{fig:eventset_geo} Left: Determined geomagnetic angles for the events shown in Fig.~\ref{fig:eventset}. The vertical dashed line marks the value for the used reference shower. Right: The derived contribution of the Askaryan emission to the maximal amplitude for an observer position at the Cherenkov cone.}
\end{figure*}
The same reference shower as for the example above (see Sec. \ref{sec:comp1}) was used to compute the Radio Morphed signals.
Figure~\ref{fig:lead_amp} displays the calculated peak-to-peak amplitude for each antenna position in the events derived with Radio Morphing versus the peak-to-peak amplitude from ZHAireS simulations. The antenna positions are arranged in a grid-like structure, as also used in Fig.~\ref{fig:example_2D}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth]{./figures/leading_new_amplitude-log.pdf}
\caption{\label{fig:lead_amp} Comparison of the peak-to-peak amplitude in the East-West component of the signal detected by each single antenna included in the set, calculated with Radio Morphing and simulated with ZHAireS. The color-code represents the location of the observer position with respect to the shower axis, given as the corresponding off-axis angle. The green solid line marks equivalent amplitudes, the dashed line where the results are $25\%$ off, and the dashed-dotted line stands for $50\%$ discrepancy.}
\end{figure}
The green solid line marks equivalent amplitudes, the dashed and the dash-dotted lines $25\%$ and $50\%$ offsets, respectively.
The distribution follows a linear trend, clustering along the diagonal. For most of the positions, the peak-to-peak amplitudes deviate by less than $25\%$. This demonstrates that Radio Morphing reproduces the amplitudes at a level comparable to ZHAireS.
A histogram of the relative differences between the Radio Morphing and the ZHAireS peak-to-peak amplitudes at each antenna position is presented in Figure~\ref{fig:lead_histo} (top). When excluding the values with relative differences larger than 100\% --- only 1\% of the total --- the mean of the distribution, derived from a Gaussian fit, is $\mu=8.5\%$ with a standard deviation of $\sigma=27.2\%$. The Gaussian function with this mean and standard deviation is plotted on top for comparison. It appears clearly that the function cannot describe the distribution properly, due to a long asymmetric tail towards positive values. These entries arise mainly from the overestimation of the signal by Radio Morphing at antenna positions outside the Cherenkov ring, as mentioned before. The fit does not have a quantitative value; the plots aim to demonstrate qualitatively that Radio Morphing and ZHAireS agree within a reasonable margin. When the signals are filtered in the $30-80\,$MHz frequency range, the mean and standard deviation decrease to $\mu=6.2\%$ and $\sigma=17.0\%$, and to $\mu=7.8\%$ and $\sigma=24.8\%$ after filtering to $50-200\,$MHz, respectively. These behaviors are consistent with our observations in Figure~\ref{fig:Diff_2D}.
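The statistics quoted above can be reproduced schematically. The sketch below uses the mean and standard deviation of the clipped sample as a simple proxy for the Gaussian-fit parameters; the function name and the synthetic data are illustrative assumptions.

```python
import numpy as np

def amplitude_agreement(a_rm, a_zhaires, cut=1.0):
    """Relative peak-amplitude differences (RM - ZHAireS) / ZHAireS,
    excluding outliers with |difference| above `cut` (100% by default).
    Returns (mean, std, outlier_fraction); mean/std of the clipped
    sample are a proxy for the Gaussian-fit parameters in the text."""
    rel = (np.asarray(a_rm) - np.asarray(a_zhaires)) / np.asarray(a_zhaires)
    keep = np.abs(rel) < cut
    clipped = rel[keep]
    return clipped.mean(), clipped.std(), 1.0 - keep.mean()
```

On a synthetic sample with a known bias and spread, the routine recovers the injected values, which makes it easy to sanity-check before applying it to actual simulation output.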
\begin{figure}[tb]
\centering
\includegraphics[width=0.45\textwidth]{./figures/leading_new_Diff-Histo_EW.pdf}\\
\includegraphics[width=0.45\textwidth]{./figures/leading_new30-80MHz_Diff-Histo_EW.pdf}\\
\includegraphics[width=0.45\textwidth]{./figures/leading_new50-200MHz_Diff-Histo_EW.pdf}
\caption{\label{fig:lead_histo} Normalized histograms of the relative difference of the peak-to-peak amplitude at each antenna position in the event set, predicted by Radio Morphing and ZHAireS. For each position, the relative difference is calculated over the full frequency band (top), and over the $30-80\,$MHz (center) and $50-200\,$MHz (bottom) frequency bands. For each distribution a Gaussian function characterized by its mean $\mu$ and its standard deviation $\sigma$ is overlayed. To minimize the impact of extreme outliers, the Gaussian fit was performed only on the data with relative differences $<1$.}
\end{figure}
\section{Discussion}\label{section:discussion}
The previous section demonstrates the consistency between Radio Morphing and microscopic simulations. Given that the method is built on simple mathematical operations, this agreement is remarkable.
The major advantage of Radio Morphing compared to microscopic simulations lies in the huge gain in computation time.
While microscopic simulation programs such as ZHAireS require several minutes to calculate the signal for each antenna position (e.g.\ $\mathcal{O}(\rm mins)$ on one node for a thinning level of $10^{-4}$, ten times more for a thinning level of $10^{-5}$), Radio Morphing requires $\mathcal{O}(\rm s)$ to calculate the electric-field trace in the time domain at a desired position, once the reference shower is prepared. Here, the scaling of the amplitude and of the positions of the reference shower requires the largest fraction of the computing time, and scales linearly with the number of antenna positions included in the reference shower.
\subsection{Radio Morphing systematic uncertainties}
In this section we propose a qualitative discussion of the systematic bias associated with Radio Morphing, and of how it can be reduced. In the previous section, for the given example event set and reference shower, we found a $(8.5\pm27.2)\%$ difference in peak amplitude between signals computed with Radio Morphing and ZHAireS simulations. This discrepancy may impact results when using the radio simulation for a specific study, e.g.\ the trigger rate of a radio array on air showers. This systematic bias may essentially be reduced by two means:
\begin{itemize}
\item increase the number of simulated antenna positions in the reference shower. This effectively means a decrease of the distance between the simulated antenna positions along the arms of the star-shape pattern, and therefore a finer sampling of the shower. This reduces the uncertainty in the linear interpolation of the pulse shape based on the plane-wave approximation. In addition, a smaller distance between the star-shape planes leads to a more precise sampling of the reference shower, and therefore a better sampling of the changes in the field-strength distribution along the direction of propagation.
Note however that with a rising number of simulated antenna positions, not only does the simulation time required for the production of the reference shower increase, but the scaling operations within the Radio Morphing method also take longer.
\item use more than one reference shower. The uncertainty in the scaling rises with the difference in parameter values between the reference and the target showers. Depending on the parameter range to be covered, the application case and the desired accuracy, more than one reference shower may therefore have to be included in the target-shower computation. This would reduce the number of extreme outliers in the electric-field strength distribution (compare Fig.~\ref{fig:lead_histo}), and therefore decrease the spread in the offset with respect to results obtained with ZHAireS simulations.
\end{itemize}
Another systematic uncertainty of Radio Morphing is that the asymmetry of the signal footprint caused by the superposition of the geomagnetic and Askaryan emission is not yet included in the scaling. This means that the asymmetry in the signal distribution does not depend on the azimuth angle: only the asymmetry information contained in the reference shower is conserved, and it is not adjusted to the new target geometry. This effect can also be mitigated by using more than one reference shower in Radio Morphing. The scaling of the signal asymmetry due to the interference of the two main emission mechanisms can be implemented by disentangling the two components in shower coordinates and identifying the Askaryan contribution from reference simulations run with the magnetic field switched on and off. This will be included in a future version of the Radio Morphing code.
\section{Summary}
{\it Radio Morphing} is a newly developed universal tool to calculate the radio signal of an air shower, combining the precision of microscopic simulations with the speed of macroscopic approaches. It consists of simple mathematical operations performed on a reference shower, followed by a simple signal interpolation in the frequency domain. The mathematical operations are based on theoretical and measured parametrizations of the dependence of the radio signal on the characteristics of the primary particle. The computation speed is independent of the thinning level of the air shower, in contrast to microscopic simulations.
With Radio Morphing, it is possible to achieve an impressive gain in computation time while accurately reproducing all three electric-field components at any antenna position. In particular, features such as the Cherenkov cone and the signal strength at any observer position are correctly modeled. It is thus an ideal tool to perform fast simulations over non-flat topographies, as required, for example, in calculations of the performance of the GRAND project \cite{Alvarez-Muniz:2018bhp}.
A more systematic quantification of the relative errors compared to microscopic simulations requires testing on a specific layout and geographical assumptions. This is currently being explored within the framework of the GRAND project.
Other limitations of the method, such as the time interpolation of the signal, are also under investigation.
\subsection*{Acknowledgments}
This work is supported by the APACHE grant (ANR-16-CE31-0001) of the French Agence Nationale de la Recherche and by the grant \#2015/15735-1, São Paulo Research Foundation (FAPESP).
This work has made use of the Horizon Cluster hosted by Institut d'Astrophysique de Paris. Part of the simulations were performed using the computing resources at the CC-IN2P3 Computing Centre (Lyon/Villeurbanne – France), partnership between CNRS/IN2P3 and CEA/DSM/Irfu.
\section{Introduction}
Topological semimetals \cite{H.Weng2016,Y.B2016,C.K2016,A.Bansil2016,S.Rao} are one of the fastest-growing families at the frontier of materials science and condensed matter physics, owing to their unique density of states, transport properties and novel topological surface states, as well as their potential for applications in quantum computing, spintronics and novel physics. It is well known that topological semimetals host several main types of interesting fermions in crystalline solids, such as three-dimensional (3D) Dirac cones
\cite{Z.Wang2012,Cheng.X2014,Young2012,Liu.Z2014,Xu2015,
Z.W2013,Z.K2014,Neupane2014,Du.Y2015,J.Hul,B.J2014}, Weyl
nodes\cite{S.Murakami2007,X.Wan2011,G.Xu2011,S.Y2015,Shekhar2015,S.Y_02015,
H.Weng2015,S.M2014,B.Q2015,B.Q_02015,S.-Y2015,
L.Yang2015,Y.Zhang2016,A.A2015,Xu
SY2016,Chang2016,Yang2016,Singh2012,Ruan2016}, Dirac nodal
lines\cite{Fang.C2016,Ryu2002,Heikkila2011,Burkov.A2011,Ronghan2016,
Weng.H.M2015,Yu.R2015,Kim2015,Xie.L2015,M.G.Zeng2015,Mullen2015},
triply degenerate nodal
points\cite{B.Bradlyn2016,G.W2016,H.Weng_02016,H.Weng_12016,
Zhu2016,G.Chang2016,He.J2017,Ding.H2017,H.Yang2017,J.Yu2017}, and
even beyond these\cite{B.Bradlyn2016}. Their realization in
crystal solids is also important because it provides a way to study
elementary particles that have long been sought and predicted in
high-energy physics. Importantly, in analogy to the various fermionic
excitations of electrons, exciting progress has also been made for
bosons (vibrational phonons): topological vibrational states such as
Dirac, Weyl and nodal-line phonons have been predicted
\cite{Lu.L2013} or observed in the 3D momentum space of macroscopic
systems, such as photonic crystals and kHz-frequency acoustic and
mechanical metamaterials
\cite{Lu.L2013,Lu.L2015,Huber.S2016,Prodan.E2009,Chen.B2014,Yang.Z2015,
Wang.P2015,Xiao.M2015,Nash.L2015,Susstrunk2015,
Mousavi2015,Fleury2016,Rocklin2016,He2016,Susstrunk2016,Lifeng2017}
and, most recently, double-Weyl phonons have been theoretically
predicted in transition-metal monosilicides with atomic vibrations at
THz frequencies \cite{Zhang.T2017}. To date, however, no
three-component bosons have been reported, although three-component
fermions have been experimentally discovered in recent work on MoP
\cite{Ding.H2017}.
Three-component bosons could likewise occur in atomic solid
crystals, because three-fold degeneracy can be protected by lattice
symmetries, such as symmorphic rotations combined with mirror
symmetries and non-symmorphic symmetries, as already demonstrated
for triply degenerate points of electronic fermions in solid
crystals
\cite{B.Bradlyn2016,G.W2016,H.Weng_02016,H.Weng_12016,Zhu2016,
G.Chang2016,He.J2017,Ding.H2017}. In addition to the importance of
seeking this new type of three-component boson, topological
phononic states are extremely interesting because they could
enable materials to exhibit novel heat transfer, phonon
scattering and electron-phonon interactions, as well as other
properties related to vibrational modes, such as thermodynamics.
{First, in similarity to the topological properties of electrons,
topological effects of phonons can induce one-way edge
phonon states (topologically protected boundary states). These
states conduct phonons with little or no
scattering\cite{He2016,Mousavi2015}, highlighting possible
applications in the design of phononic circuits\cite{Liuduan2016}.
Utilizing the one-way edge phonon states, an ideal phonon diode
\cite{Liuduan2016} with 100\% efficiency becomes possible in a
multi-terminal transport system. Second, in contrast to electrons,
phonons are bosons and are not limited by the Pauli exclusion
principle, so the whole frequency range of the phonon spectrum can
be physically probed. It has even been theoretically demonstrated
that chiral phonons excited by polarized photons can be detected
through a valley phonon Hall effect in monolayer hexagonal lattices
\cite{Zhanglifa2015}. Within this context, through first-principles
calculations we report on the novel coexistence of triply
degenerate nodal points (TDNPs) and type-I and type-II Weyl nodes
(WPs) of phonons in three compounds, TiS, ZrSe and HfTe.
Interestingly, these three materials simultaneously exhibit
three-component fermions and two-component Weyl fermions in their
electronic structures. The coexistence of three-component bosons,
two-component Weyl bosons, three-component fermions and
two-component Weyl fermions provides attractive candidates to study
the interplay between topological phonons and topological fermions
in the same solid crystals.}
\section{Methods}
Within the framework of density functional theory (DFT)
\cite{Hohenberg.P1964,Kohn.W1965} and density functional
perturbation theory (DFPT) \cite{Baroni.S2001}, we have performed
structural optimizations and calculations of the electronic band
structures, phonon dispersions and electronic surface states. Both
DFT and DFPT calculations were performed with the Vienna \emph{ab
initio} Simulation Package (VASP)
\cite{G.Kresse1993,G.Kresse1994,G.Kresse1996}, using projector
augmented wave (PAW) pseudopotentials \cite{P.E1994,G.Kresse1999} and
the generalized gradient approximation (GGA) with the
Perdew-Burke-Ernzerhof (PBE) exchange-correlation
functional\cite{J.P1996}. The adopted PAW-PBE pseudopotentials of
all elements treat the semi-core states as valence electrons.
The structural parameters were accurately optimized by relaxing the
structures until the interionic forces fell below 0.0001
eV/\AA\,. The cut-off energy for the expansion of the wave functions
into plane waves was 500 eV. The Brillouin zone integrations
were performed on Monkhorst-Pack k-meshes
(21$\times$21$\times$23), sampled with a resolution of
2$\pi$ $\times$ 0.014\AA\,$^{-1}$. The band structures, both with
and without the inclusion of spin-orbit coupling (SOC), were
calculated using the Gaussian smearing method with a smearing width
of 0.01 eV. Furthermore, a tight-binding (TB) Hamiltonian combined
with the Green's function methodology \cite{M.P1985,Weng2014,Weng2015}
was used to investigate the surface states. The TB Hamiltonian was
constructed from maximally-localized Wannier functions (MLWFs)
\cite{N.Marzari1997,I.Souza2001} using the Wannier90 code
\cite{A.A2008}. To calculate the phonon dispersions, force constants
were generated by the finite-displacement method with
4$\times$4$\times$4 supercells using the VASP code, and the
dispersions were then derived with the Phonopy code
\cite{L.Chaput2011}. We also computed the phonon dispersions
including the SOC effect, which turned out to have no influence on
them. {Furthermore, the force constants were used as tight-binding
parameters to build the dynamical matrices. We determined the
topological charges of the WPs using the Wilson-loop method
\cite{Soluyanov2011,add1}. The surface phonon DOSs were obtained
using the iterative Green's function method \cite{M.P1985}.}
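The iterative Green's function idea can be illustrated with a minimal 1D sketch (hypothetical onsite energy and hopping for a semi-infinite nearest-neighbor chain; the production surface calculations instead use the quadratically convergent decimation scheme cited above). The surface Green's function solves $g = [E + i\eta - \epsilon - t^2 g]^{-1}$, which can be iterated to convergence and checked against the closed form:

```python
import math

def surface_g_iter(E, eps=0.0, t=1.0, eta=0.05, n_iter=2000):
    """Surface Green's function of a semi-infinite 1D chain by fixed-point
    iteration of g = 1 / (E + i*eta - eps - t^2 * g)."""
    z = complex(E, eta)
    g = 1.0 / z                       # starting guess: isolated surface site
    for _ in range(n_iter):
        g = 1.0 / (z - eps - t * t * g)
    return g

def surface_g_exact(E, eps=0.0, t=1.0, eta=0.05):
    """Closed form: retarded root of t^2 g^2 - (z - eps) g + 1 = 0 (Im g < 0)."""
    z = complex(E, eta) - eps
    root = (z * z - 4.0 * t * t) ** 0.5
    g = (z - root) / (2.0 * t * t)
    if g.imag > 0.0:                  # pick the branch with negative imaginary part
        g = (z + root) / (2.0 * t * t)
    return g

# surface density of states at the band centre (band spans [-2t, 2t])
dos = -surface_g_iter(0.0).imag / math.pi
```

The small imaginary part $\eta$ both regularizes the Green's function and makes the fixed-point iteration contractive; the decimation scheme removes the need for many iterations by doubling the number of eliminated layers at each step.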
\section{Results and Discussion}
\subsection{Crystal structure and structural stabilities of the
\emph{MX} compounds}
Recently, WC-type materials (Fig. \ref{fig1}(a)), including ZrTe,
TaN, MoP and WC, have been theoretically reported to host coexisting
TDNPs and WPs in their electronic structures. This coexistence of
electronic TDNPs and WPs has recently been confirmed in MoP
\cite{Ding.H2017}. We further extend this family by proposing eight
compounds (TiS, TiSe, TiTe, ZrS, ZrSe, HfS, HfSe and HfTe), which
are isoelectronic and isostructural to ZrTe. Among these compounds,
five (TiS, ZrS, ZrSe$_{0.90}$ and Hf$_{0.92}$Se, as well as ZrTe)
were experimentally reported to adopt the same WC-type structure
\cite{Hahn_01959,Harry1957,Steiger1970,Hahn1959,Schewe1994,
Sodeck1979,G.O2001,G.O2014}. {No experimental data are available
for the remaining four compounds, TiSe, TiTe, HfS and HfTe. Here,
in order to systematically investigate their electronic structures
and phonon spectra and to compare their differences, we consider
all nine compounds to crystallize in the same WC-type structure.}
For the five experimentally known compounds TiS, ZrS, ZrSe, ZrTe and
HfSe, our DFT calculations yield equilibrium lattice parameters in
good agreement with the experimental data (see supplementary Table
S1). Their enthalpies of formation, also given in supplementary
Table S1, indicate thermodynamic stability, and their phonon
dispersions show no imaginary frequencies, revealing dynamical
stability of the atomic vibrations.
\begin{figure}
\includegraphics[height=0.45\textwidth]{Figure1.eps}
\caption{{WC-type crystal structure and its Brillouin zone of
\emph{MX}}(\emph{M} = Ti, Zr, Hf; \emph{X} = S, Se, Te). These
materials crystallize in the simple hexagonal crystal structure with
the space group of $P\bar{6}m2$ (No. 187). $M$ occupies the 1$a$
Wyckoff site (0, 0, 0) and $X$ locates at the 1$d$ (1/3, 2/3, 1/2)
site. Panel (\textbf{a}) shows the phonon vertical vibrational mode
(Mode$_\perp^z$) along the $k_z$ direction at the boundary -- the
high-symmetry A (0, 0, $\pi$/2) point -- of the Brillouin zone
(BZ). Panel (\textbf{b}) denotes the phonon planar vibrational mode
(Mode$_{=}^{x}$) along the $k_x$ direction, which is two-fold
degenerate (Mode$_{=}^{x,y}$ = Mode$_{=}^{x}$ = Mode$_{=}^{y}$)
because of its C$_{3v}$ rotational symmetry. Panel (\textbf{c}): The
BZ, in which the closed loops around each $K$ point denote the Dirac
nodal lines (DNLs) of electrons around the Fermi level when SOC is
ignored. With the inclusion of SOC, each DNL is broken into two Weyl
points of opposite chirality, marked as blue (WP-) and red
(WP+) balls, which coexist with the triply degenerate nodal point
(TDNP) of the electronic structure (namely, the three-component
fermion). Panel (\textbf{d}) shows the triply degenerate nodal point
(TDNP) of the phonon dispersions (three-component boson) along the
$\Gamma$-A direction in the BZ.} \label{fig1}
\end{figure}
\begin{figure*}[!htp]
\vspace{0.5cm}
\includegraphics[height=0.37\textwidth]{Figure2.eps}
\caption{{Phonon spectra of ZrS, ZrSe and ZrTe.} Panels (\textbf{a},
\textbf{b}, and \textbf{c}): DFT-derived phonon dispersions of ZrS,
ZrSe and ZrTe, respectively. Panels (\textbf{d}, \textbf{e}, and
\textbf{f}): DFT-derived total and partial phonon densities of
states (PDOS) of ZrS, ZrSe, and ZrTe, respectively.} \label{fig2}
\end{figure*}
\subsection{Three-component fermions and two-component Weyl
fermions in the electronic structures}
{We have elucidated the electronic band structures of these nine
compounds. Interestingly, they are all similar to the case of ZrTe
in Ref. \onlinecite{H.Weng_12016}. As an example, the electronic
band structure of ZrSe is given in supplementary Fig. S1,
showing the coexisting fermions, TDNPs and WPs, whose coordinates
are compiled in Fig. \ref{fig1}c. Similar electronic behavior is
observed for the other compounds, but TiS is unique: because of its
rather weak spin-orbit coupling (SOC), TiS exhibits the coexistence
of six DNLs and two six-fold degenerate nodal points in its
electronic structure in the BZ. This is exactly the situation for
the other eight compounds when the SOC effect is ignored. Basically,
the appearance of these two types of fermions, TDNPs and WPs, in
this family shares the same physics as previously discussed for
ZrTe\cite{H.Weng_12016}. The details of their electronic structures
and their topologically protected non-trivial surface states are
given in supplementary Figs. S1, S2, S3 and S4, as well as the
corresponding supplementary text.}
\subsection{Triply degenerate nodal points (TDNPs) of the
phonons in TiS, ZrSe and HfTe}
We have found triply degenerate nodal points (TDNPs) of the phonons
in the three compounds TiS, ZrSe and HfTe after a systematic
analysis of their phonon dispersions (supplementary Fig. S5).
Because each primitive cell contains two atoms (Fig.
\ref{fig1}(a)), their phonon dispersions have six branches,
consisting of three acoustic and three optical ones. As shown by
the computed phonon dispersions in Fig. \ref{fig2}(a, b, and c) and
the phonon densities of states in Fig. \ref{fig2}(d, e, and f) for
the isoelectronic ZrS, ZrSe and ZrTe compounds, a well-separated
acoustic-optical gap is observed in both ZrS and ZrTe, with the
smallest direct gap at the A point (0, 0, $\pi$/2) on the boundary
of the BZ. A detailed analysis uncovered that, for both ZrS and
ZrTe, the top phonon band of the gap at the A point is composed of
the doubly degenerate vibrational mode in which the Zr and S (or
Te) atoms, oppositely and collinearly, displace along either the
$x$ or the $y$ direction (Mode$_{=}^{x,y}$, as marked in Fig.
\ref{fig1}(b)). The vibrational amplitude of the Mode$_{=}^{x,y}$
is contributed nearly 100\% by the Zr atom, rather than by the S
(or Te) atom. The bottom phonon band of the gap at the A point is a
singlet state originating from the vibrational mode in which the Zr
and S (or Te) atoms move collinearly along the $k_z$ direction
(Mode$_{\perp}^{z}$, as marked in Fig. \ref{fig1}(a)). The
amplitude of this Mode$_{\perp}^{z}$ is, however, almost fully
dominated by the displacement of the S (or Te) atoms.
In contrast to both ZrS and ZrTe in Fig. \ref{fig2}, ZrSe shows no
acoustic-optical gap (Fig. \ref{fig2}(b)), as illustrated by its
phonon density of states in Fig. \ref{fig2}(e). Notably, the planar
Mode$_{=}^{x,y}$ at the A point now lies lower in frequency than the
Mode$_{\perp}^{z}$. This corresponds to the occurrence of a phonon
band inversion at the A point: around the A point an optical phonon
band unusually inverts below the acoustic band, which should
normally have the lower frequency. Physically, within the
(quasi)harmonic approximation the vibrational frequency, $\omega$,
is proportional to $\sqrt{\beta/m}$ at the boundary of the BZ.
Here, $\beta$ is the second-order force constant -- the second
derivative of the energy along a given vibrational mode as a
function of the displacement -- and $m$ is the atomic mass.
Therefore, as seen in Fig. \ref{fig2}(b) for ZrSe, the occurrence
of the phonon band inversion at the boundary A point is determined
by both the $\beta$ and the $m$ associated with the planar
Mode$_{=}^{x,y}$ and the Mode$_{\perp}^{z}$ at the A point.
Following this consideration, we define the dimensionless ratio
$\tau$ as follows,
\begin{equation}
\tau = \frac{\sqrt{\beta_=/m_=}}{\sqrt{\beta_\perp/m_\perp}},
\end{equation}
where $\tau$ compares the frequencies of the Mode$_{=}^{x,y}$ and
Mode$_{\perp}^{z}$ modes. For $\tau > 1$ the material shows no band
inversion, and thus no TDNPs; $\tau < 1$ implies the appearance of
the phonon band inversion, with TDNPs in the acoustic-optical gap.
With this definition, we plot $\beta$ for the sequence ZrS, ZrSe
and ZrTe in Fig. \ref{fig3}(a). We find that the second-order force
constants $\beta_{=}$ and $\beta_{\perp}$ alone (Fig.
\ref{fig3}(a)) are not enough to induce the phonon band inversion,
consistent with Eq. (1), although the $\beta_{=}$-$\beta_{\perp}$
difference is smallest for ZrSe in Fig. \ref{fig3}(a). Furthermore,
for all nine compounds in this family we compiled the $\tau$ values
as a function of the ratio ($\delta$) of the atomic mass associated
with Mode$_{=}^{x,y}$ to that associated with Mode$_{\perp}^{z}$
(namely, $\delta$ =
$m$(Mode$_{=}^{x,y}$)/$m$(Mode$_{\perp}^{z}$)) in Fig.
\ref{fig3}(b). {With increasing mass ratio $\delta$, the $\tau$
value increases in a nearly linear manner. This implies that, if
the atomic masses of the constituents of a targeted material differ
strongly, the probability of finding TDNPs in the acoustic-optical
gap of its phonon dispersion is extremely low. However, if the
atomic masses are comparable, with $\delta$ close to 1, the
probability of TDNPs in the acoustic-optical gap is high. Following
this model, we further find that, because their $\tau$ values are
smaller than 1, both TiS and HfTe behave similarly to ZrSe (Fig.
\ref{fig3}(b)). The findings for TiS and HfTe are in accordance
with the DFT-derived phonon dispersions in supplementary Fig. S5,
whereas there is no TDNP in the acoustic-optical gap of the other
members. These facts imply that the mass difference between the
constituents of a compound plays a key role in inducing the phonon
band inversion and hence the TDNPs in the acoustic-optical gap, as
seen for TiS, ZrSe and HfTe, whose $\delta$ values are all close
to 1.}
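Eq. (1) is straightforward to evaluate. The sketch below uses the true atomic masses of Zr and Se but purely illustrative force constants (not our DFT values), chosen only to show how comparable masses can push $\tau$ below 1:

```python
import math

def tau(beta_planar, m_planar, beta_perp, m_perp):
    """Dimensionless ratio of Eq. (1): frequency of Mode_=^{x,y} over
    frequency of Mode_perp^z at the A point."""
    return math.sqrt(beta_planar / m_planar) / math.sqrt(beta_perp / m_perp)

# Atomic masses (u): Mode_= is dominated by the metal atom, Mode_perp by
# the chalcogen, so delta = m_Zr / m_Se is close to 1 for ZrSe.
m_Zr, m_Se = 91.224, 78.971
# Hypothetical force constants; only their ratio matters for the criterion.
t = tau(beta_planar=4.0, m_planar=m_Zr, beta_perp=4.6, m_perp=m_Se)
band_inverted = t < 1.0   # tau < 1: Mode_= drops below Mode_perp -> TDNPs in the gap
```

Swapping the mode assignments (or making the masses very different) drives $\tau$ back above 1, which is the no-inversion regime of ZrS and ZrTe.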
\begin{figure*}[!htp]
\includegraphics[height=0.37\textwidth]{Figure3.eps}
\caption{{{Second-order force constant at the A point and the
dimensionless ratio $\tau$ of \emph{MX}}. Panel (\textbf{a}):
DFT-derived second-order force constant at the A point for both the
two-fold degenerate planar vibrational Mode$_=^{x,y}$ and the
vibrational Mode$_{\perp}^z$. Panel (\textbf{b}): The derived
parameter $\tau$ from Eq. (1) as a function of the $\delta$ value,
as defined in the main text, for all nine compounds. Panels
(\textbf{c} and \textbf{e}): DFT-derived phonon dispersions to
elucidate phonon TDNPs of ZrSe along the X1 (-$\pi$/2, 0, 0) to X2
($\pi$/2, 0, 0) and $\Gamma$-A directions, respectively. Panels
(\textbf{d} and \textbf{f}): Zoom-in 3D visualization of phonon
TDNPs in the $k_z$ = 0 and $k_y$ = 0 planes, respectively.}}
\label{fig3}
\end{figure*}
Importantly, accompanying the occurrence of the phonon band
inversion, TDNPs, featuring a linear crossing of the frequencies of
the acoustic and optical bands, unavoidably appear at (0, 0, $k_z$ =
$\pm$0.40769) along the $\Gamma$-A direction in the BZ (Fig.
\ref{fig2}(b) and Fig. \ref{fig3}) for ZrSe. The appearance of the
TDNPs in the acoustic-optical gap is protected by the C$_{3z}$
rotation and mirror symmetries along the $\Gamma$-A direction,
because C$_{3z}$ allows the coexistence of the two-fold
(Mode$_{=}^{x,y}$) and one-fold (Mode$_{\perp}^{z}$)
representations, in analogy to the electronic band structures
discussed above. To elucidate the underlying mechanism of the
phonon TDNPs in the acoustic-optical gap, it should be emphasized
that, on the one hand, the rotation and mirror symmetries provide
the prerequisite for these two competing modes (the two-fold
Mode$_{=}^{x,y}$ and the one-fold Mode$_{\perp}^{z}$) and, on the
other hand, the comparable atomic masses of the constituent
elements are the additional ingredient that triggers the phonon
band inversion. At this TDNP, the planar Mode$_{=}^{x,y}$ and the
Mode$_{\perp}^{z}$ at (0, 0, $k_z$ = $\pm$0.40769) coincide at
exactly the same frequency of 183.9 cm$^{-1}$. The TDNPs are
located at (0, 0, $k_z$ = $\pm$0.40382) with a frequency of 293.4
cm$^{-1}$ for TiS and at (0, 0, $k_z$ = $\pm$0.43045) with a
frequency of 133.3 cm$^{-1}$ for HfTe. To elucidate the 3D shape of
the TDNP in ZrSe, we also plot the zoomed-in dispersions in both
the $k_{z}$ = 0 and $k_y$ = 0 planes of the BZ in Fig. \ref{fig3}.
From Fig. \ref{fig3}(c) and \ref{fig3}(d), in the $k_{z}$ = 0 plane
the TDNP in the acoustic-optical gap is clearly seen to have an
isotropic shape. However, in the $k_y$ = 0 plane the phonon bands
around the TDNP are highly complex, with a helicoid shape (Fig.
\ref{fig3}(e) and \ref{fig3}(f)).
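For comparison with other spectroscopies, the TDNP frequencies quoted above in cm$^{-1}$ can be converted to THz and meV with the standard factors (1 cm$^{-1}$ = 29.9792458 GHz $\approx$ 0.124 meV):

```python
# Conversion of the TDNP frequencies from cm^-1 to THz and meV.
C_CM1_TO_THZ = 0.0299792458   # 1 cm^-1 = 29.9792458 GHz (speed of light)
C_CM1_TO_MEV = 0.123984       # from 1 eV = 8065.54 cm^-1

tdnp = {"TiS": 293.4, "ZrSe": 183.9, "HfTe": 133.3}   # cm^-1, from the text
in_thz = {k: v * C_CM1_TO_THZ for k, v in tdnp.items()}
in_mev = {k: v * C_CM1_TO_MEV for k, v in tdnp.items()}
```

The ZrSe TDNP at 183.9 cm$^{-1}$ thus lies near 5.5 THz, i.e. about 23 meV, well within the reach of inelastic scattering probes.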
\subsection{Two-component Weyl phonons in TiS, ZrSe and HfTe}
{Besides the TDNPs in TiS, ZrSe and HfTe, the calculations revealed
the occurrence of two-component Weyl nodes (WPs) in their phonon
spectra. As evidenced in Fig. \ref{fig4}(a) for TiS, the phonon
bands exhibit five different band crossings (C1 to C5) at the
high-symmetry K point and one band crossing (C6) at the H point. In
particular, because these crossings are not constrained by any
mirror symmetry, they result in the appearance of six pairs of WPs
(Table \ref{tab1}). Among them, the band crossings C1 to C5 yield
the five pairs of type-I WPs, WP1 to WP5, and the C6 crossing gives
rise to the sixth pair, the type-II WP6. The phonon dispersions of
the type-I and type-II WPs are shown in Fig. \ref{fig4}h and Fig.
\ref{fig4}i, respectively. To identify their non-trivial
topological properties, we calculated the topological charge of
each Weyl node, defined by integrating the Berry curvature over a
closed surface surrounding the node, within the framework of the
Wilson-loop method \cite{Soluyanov2011,add1}. For instance, Fig.
\ref{fig4}(d and e) shows the Wannier center evolutions around WP3+
and WP2-, with positive and negative topological charges,
respectively. The corresponding Berry curvatures are shown in Fig.
\ref{fig4}(f and g), indicating that the positive and negative
charges, WP3+ and WP2-, have opposite winding directions of their
Berry curvatures. Furthermore, we determined the charges of all the
WPs of TiS in Table \ref{tab1}. Similarly, ZrSe hosts the same six
pairs of WPs (five type-I pairs and one type-II pair; Fig.
\ref{fig4}b), whereas HfTe has only four pairs of type-I WPs (Fig.
\ref{fig4}c); their coordinates are given in Table \ref{tab1}. This
difference arises mainly because in HfTe the phonon dispersions
from K to H lack two of the band crossings, C3 at K and C6 at H.}
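The charge determination can be illustrated on the minimal two-band Weyl Hamiltonian $H(\mathbf{k}) = \chi\,\mathbf{k}\cdot\boldsymbol{\sigma}$ (a generic model, not our phonon dynamical matrices): a lattice (Fukui-type) Berry-flux sum over a small sphere enclosing the node returns an integer charge whose sign flips with the chirality $\chi$. Since the overall sign depends on orientation conventions, only the magnitude and the sign flip should be read off:

```python
import math, cmath

def lower_eigvec(h11, h12, h22):
    """Normalized lower-band eigenvector of the 2x2 Hermitian matrix
    [[h11, h12], [conj(h12), h22]] (h11, h22 real)."""
    lam = 0.5 * (h11 + h22) - math.sqrt((0.5 * (h11 - h22)) ** 2 + abs(h12) ** 2)
    v0, v1 = h12, lam - h11
    if abs(v0) + abs(v1) < 1e-12:      # h12 = 0 and lam = h11: (1, 0) is the eigenvector
        v0, v1 = 1.0, 0.0
    n = math.sqrt(abs(v0) ** 2 + abs(v1) ** 2)
    return v0 / n, v1 / n

def weyl_charge(chirality, n_theta=48, n_phi=48, radius=0.1):
    """Lattice Berry-flux Chern number of the lower band of
    H = chirality * k . sigma on a sphere enclosing the node."""
    def state(i, j):
        th = math.pi * i / n_theta
        ph = 2.0 * math.pi * j / n_phi
        kx = radius * math.sin(th) * math.cos(ph)
        ky = radius * math.sin(th) * math.sin(ph)
        kz = radius * math.cos(th)
        c = chirality
        return lower_eigvec(c * kz, c * (kx - 1j * ky), -c * kz)

    def ov(u, v):                      # <u|v> link variable; normalization drops out
        return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

    flux = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            u1 = state(i, j)
            u2 = state(i + 1, j)
            u3 = state(i + 1, (j + 1) % n_phi)
            u4 = state(i, (j + 1) % n_phi)
            flux += cmath.phase(ov(u1, u2) * ov(u2, u3) * ov(u3, u4) * ov(u4, u1))
    return round(flux / (2.0 * math.pi))
```

The plaquette phases are individually gauge invariant, so the sum is quantized to an integer once the grid is fine enough that no single plaquette carries more than $\pi$ of flux.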
\begin{figure*}[!htp]
\includegraphics[height=0.7\textwidth]{Figure4.eps}
\caption{{{Topological phonons of TiS, ZrSe and HfTe}. Panels
(\textbf{a}, \textbf{b} and \textbf{c}): DFT-derived phonon
dispersions along K to H for TiS, ZrSe and HfTe, respectively.
Panels (\textbf{d} and \textbf{e}) show the Wannier center
evolutions around the positive-charge WP3+ and negative-charge WP2-
nodes of TiS, respectively. Panels (\textbf{f} and \textbf{g})
denote the Berry curvature distributions around the WP3+ and WP2-
Weyl nodes of TiS. Panels (\textbf{h} and \textbf{i}) show the
phonon dispersions around the type-I WP3+ and the type-II WP6+ Weyl
nodes of TiS, respectively. Note that the symbols WP1$\sim$WP6
denote the Weyl nodes, the symbols C1 to C6 refer to the six
different band crossings, and the signs + and - denote positive
and negative topological charges, respectively.}} \label{fig4}
\end{figure*}
\begin{figure*}
\includegraphics[width=1.0\textwidth]{Figure5.eps}
\caption{{{The surface phonon spectra and the surface phonon
densities of states (PDOSs) of the (10$\bar{1}$0) surface of TiS}.
Panels (a, b and c): the surface phonon spectra along the
high-symmetry lines (panel a) and along the defined
$\bar{K}$-$\bar{H}$ line (panels b and c) of the BZ of the
(10$\bar{1}$0) surface. Panels (d to i): the surface PDOSs at the
six frequencies of the six pairs of Weyl nodes; the projections of
these bulk WPs are marked as solid blue (positive topological
charge) and white (negative topological charge) circles in each
panel. The open surface arc states connecting two WPs with opposite
charges can be visualized in panels (d to i). Panels j and k: the
frequency-dependent evolution of the arc states connecting the
type-I WP1 and the type-II WP6 nodes on the (10$\bar{1}$0) surface
of TiS, respectively.}} \label{fig5}
\end{figure*}
\begin{figure*}
\includegraphics[width=1.0\textwidth]{Figure6.eps}
\caption{{{The surface phonon dispersion and the evolution of the
PDOSs on the (10$\bar{1}$0) surface of HfTe}. Panel (a): The
surface phonon dispersion on the (10$\bar{1}$0) surface of HfTe.
Panels (b to e): The surface phonon densities of states (PDOS) at
the frequencies of the four pairs of Weyl nodes (WP1, WP2, WP4 and
WP5 in HfTe). The projections (solid white circles: negative
charge; solid blue circles: positive charge) of the bulk WPs (and
their symmetric counterparts) on the (10$\bar{1}$0) surface are
indicated in each panel. The surface arcs connect two WPs with
opposite charges. Panel f: the frequency-dependent evolution of the
arc states connecting the type-I WP1 nodes on the (10$\bar{1}$0)
surface of HfTe.}} \label{fig6}
\end{figure*}
{The existence of these WPs gives rise to topologically protected
non-trivial surface states (TPSSs) in the surface phonon
dispersions. As shown in Fig. \ref{fig5}(a, b and c), we calculated
the surface phonon spectrum of the (10$\bar{1}$0) surface of TiS
along the high-symmetry momentum paths in the surface BZ. In
particular, in order to see the projections of all WPs on the
(10$\bar{1}$0) surface, we have plotted the surface phonon
dispersions (Fig. \ref{fig5}(b and c)) along the
$\bar{K}$-$\bar{H}$ direction, as defined in the (10$\bar{1}$0)
surface BZ (Fig. \ref{fig5}d). This $\bar{K}$-$\bar{H}$ direction
is indeed the projection of the K-H direction of the bulk BZ. As
evidenced in Fig. \ref{fig5}b, the three type-I WPs, WP1, WP2, and
WP3, are clearly visible, and the other two type-I nodes, WP4 and
WP5, as well as the type-II WP6, can be seen in Fig. \ref{fig5}c.
Accordingly, we observe the interesting TPSSs, which connect the
WPs in Fig. \ref{fig5}(b and c). We further plot 2D visualizations
of the phonon densities of states (PDOSs) in Fig. \ref{fig5}(d to
i) at the exact frequencies 361.22 cm$^{-1}$ (WP1), 349.10
cm$^{-1}$ (WP2), 289.21 cm$^{-1}$ (WP3), 242.05 cm$^{-1}$ (WP4),
238.37 cm$^{-1}$ (WP5), and 231.30 cm$^{-1}$ (WP6), respectively.
Interestingly, at each frequency on the (10$\bar{1}$0) surface in
Fig. \ref{fig5}(d to i), the TPSSs, featuring open surface arcs
connecting two WPs with opposite charges, can be clearly visualized
for WP1, WP2, WP3 and WP6. However, it is somewhat difficult to
observe the open arc states connecting WP4 and WP5 on the
(10$\bar{1}$0) surface because they heavily overlap with the
projections of the bulk phonon states. ZrSe exhibits quite similar
surface phonon arc states on its (10$\bar{1}$0) surface (not shown
here).}
{Compared with TiS and ZrSe, HfTe exhibits some differences. HfTe
has only four pairs of type-I WPs, as marked in Fig. \ref{fig4}c,
and no type-II WPs. Fig. \ref{fig6} shows its phonon spectrum on
the (10$\bar{1}$0) surface and the 2D visualizations of the PDOSs
at the frequencies of 172.95 cm$^{-1}$ (WP1), 169.95 cm$^{-1}$
(WP2), 101.67 cm$^{-1}$ (WP4), and 94.99 cm$^{-1}$ (WP5),
respectively. The bulk WPs are also projected onto the
(10$\bar{1}$0) surface. As shown in Fig. \ref{fig6}b, the open arc
states of the TPSSs clearly link the pair of WP1 nodes with
opposite topological charges; they are only partially visible for
WP2 and WP4 in Fig. \ref{fig6}(c and d), and not observable for
WP5 owing to its overlap with the projected states of the bulk
phonon dispersions in Fig. \ref{fig6}e. In addition, it should be
emphasized that the arc states can certainly be observed on other
planes parallel to the bulk K-H direction, such as the
(01$\bar{1}$0) plane. However, note that the arc states connecting
the Weyl nodes cannot be observed on the (0001) surface because, on
it, the projections of the K-H direction coincide at the same
surface momentum and the topological charges cancel each other.}
\begin{table*}[!t]
\begin{center}
\caption{Weyl points at $\mathbf{k} = (\frac{1}{3}, \frac{1}{3},
k_z)$ and their frequencies $\omega$, topological charges (+ or -)
and types (type-I or type-II) of TiS, ZrSe and HfTe. \label{tab1}}
\begin{ruledtabular}
\begin{tabular}{c|cccc|cccc|cccc}
 & \multicolumn{4}{c|}{TiS} & \multicolumn{4}{c|}{ZrSe} & \multicolumn{4}{c}{HfTe} \\
WPs & {$k_z$} & {$\omega$ (cm$^{-1}$)} & {Charge} & {Type} & {$k_z$} & {$\omega$ (cm$^{-1}$)} & {Charge} & {Type} & {$k_z$} & {$\omega$ (cm$^{-1}$)} & {Charge} & {Type} \\
\hline
WP1& 0.1919 & 361.22 & $+$ & I & 0.1485 & 227.76 & $+$ & I & 0.1739 & 172.95 & $+$ & I \\
WP2& 0.3628 & 349.07 & $-$ & I & 0.2600 & 221.91 & $-$ & I & 0.2798 & 169.95 & $-$ & I \\
WP3& 0.0517 & 289.21 & $+$ & I & 0.0371 & 194.93 & $+$ & I \\
WP4& 0.2741 & 242.05 & $-$ & I & 0.2569 & 150.52 & $-$ & I & 0.2803 & 101.67 & $-$ & I \\
WP5& 0.2533 & 238.37 & $+$ & I & 0.2486 & 149.22 & $+$ & I & 0.2205 & 94.99 & $+$ & I \\
WP6& 0.3265 & 231.30 & $+$ & II & 0.2977 & 142.84 & $+$ & II \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table*}
\section{Discussion}
{In light of the DFT-derived results, the three materials TiS, ZrSe
and HfTe are highly attractive because of the coexistence of TDNPs
and WPs. First, the TDNPs of their phonons are interesting because
(i) they provide a good platform to study the behavior of the basic
triply degenerate boson in real materials, (ii) they are highly
robust, being locked by the threefold rotational symmetry of the
hexagonal lattice, and (iii) they occur exactly within the
acoustic-optical gap and do not overlap with other phonon bands.
Thermally excited signals related to these TDNPs should therefore
not suffer interference from other vibrational modes, highlighting
viable routes to experimentally probe TDNP-related properties.}
{Second, it is well known that, in electronic structures, WPs and
their associated topological invariants enable the corresponding
materials to exhibit a variety of novel properties, such as robust
surface states and the chiral anomaly
\cite{S.Murakami2007, X.Wan2011, G.Xu2011, S.Y2015, Shekhar2015,
S.Y_02015, H.Weng2015, S.M2014, B.Q2015, B.Q_02015, S.-Y2015,
L.Yang2015, Y.Zhang2016, A.A2015, Xu SY2016, Chang2016, Yang2016,
Singh2012, Ruan2016}. In the present cases, the existence of the
bulk phononic WPs and their robust TPSSs makes these materials
appealing for possible applications, because these states cannot be
backscattered. In particular, as evidenced in Fig. \ref{fig5}j, the
open surface arc states connecting a pair of WP1 nodes in TiS
exhibit nearly one-way propagation. Their evolution further extends
and shifts towards the zone boundary with increasing frequency over
a relatively wide frequency range (Fig. \ref{fig5}j). Similarly,
the nearly one-way arc states connecting a pair of WP1 nodes in
HfTe are clearly visualized in Fig. \ref{fig6}f. However, the
evolution of the surface arc states connecting a pair of type-II
WP6 nodes in TiS cannot be fully visualized because most of them
overlap with the projections of the bulk phonon states in Fig.
\ref{fig5}k.}
\section{SUMMARY}
{In summary, through first-principles calculations we have revealed
that the three WC-type materials TiS, ZrSe and HfTe host both
three-component bosons, featured by TDNPs, and two-component Weyl
bosons, featured by WPs, in their phonon spectra. In both TiS and
ZrSe there exist six pairs of bulk WPs (five type-I nodes and one
type-II node) located on the K-H line of the BZ, whereas in HfTe
only four pairs of type-I WPs exist. We have demonstrated that the
phonon spectra of these three materials are topological in nature,
exhibiting topologically protected non-trivial surface arc states
of phonons. These non-trivial states directly connect WPs of
opposite chirality. Interestingly, these three materials also
exhibit three-component fermions, featured by TDNPs, and six pairs
of two-component Weyl fermions (WPs) in the electronic structures
of their bulk crystals. The novel coexistence of (\emph{i})
three-component bosons, (\emph{ii}) two-component Weyl bosons,
(\emph{iii}) three-component fermions, and (\emph{iv})
two-component Weyl fermions and, in particular, the presence of
both three-component bosons and three-component fermions at nearly
the same momentum along the $\Gamma$-A direction (Fig.
\ref{fig1}(c and d)) could allow these excitations to couple to
each other through electron-phonon interactions. These materials
hence provide an excellent platform to study the interplay between
different types of topological electronic excitations and
topological phonons at the atomic scale, with a view to potential
multi-functional quantum-mechanical properties.}
|
\section{Introduction}
OJ287 is a very special case among all quasars: its optical light curve of 120 yr length shows a double periodicity of 60 yrs and 12 yrs [1]. Another remarkable feature of this quasar is that while its radiation is dominated by synchrotron radiation at most times, during the major outbursts its optical to UV spectrum is bremsstrahlung at around $3\times10^5$ K [2]. These main outbursts occur in unpolarized light, as is expected of bremsstrahlung [3]. Further, it is the only known quasar showing bremsstrahlung outburst peaks.
Most of these facts were not known in 1995, when a detailed model of this source was constructed [4,5]. The model proposed that the major outbursts, which occur at roughly 12 yr intervals, are related to a 12 yr period in a black hole binary system. The detailed mechanism for generating the outbursts, which occur in pairs separated by 1 to 2 years, was presumed to be impacts of the secondary on the accretion disk of the primary. Hot bubbles of plasma pulled off the accretion disk then generate the bremsstrahlung bursts when the bubbles become optically thin.
By now, the model has been verified in many ways [6], and it has subsequently been
invoked to constrain the spin of the primary black hole and to explore the possibility of testing black hole no-hair
theorems [7,8]. It should be noted that the binary black hole model implies typical orbital velocity $v \sim 0.1\,c $ and it explains
why higher order post-Newtonian corrections are required to explain observed major outbursts and to make predictions
about future major outbursts. In contrast, binary pulsars have $v/c \sim 10^{-3}$, which is roughly a factor 10 higher
than orbital velocity of the Earth.
However, it is desirable to probe further observational implications arising from the presence of binary components in the model and
this is what is pursued below.
\section{ Dynamical modeling of the wobbling radio jet in OJ287}
Supermassive black holes in active galactic nuclei typically have bi-directional jets consisting of collimated beams of matter
that are expelled from the innermost regions of their accretion disks. Further, highly variable quasars like OJ287 are
expected to contain relativistic jets that are beamed towards the observers.
The jet from OJ287, usually observed at centimeter wavelengths, has exhibited prominent variations in its position angle in the sky (PA from here onwards) ever since the observations began in the early 1980s.
Our aim is to infer the presence of the orbital motion by following the temporal variations in the PA measurements of OJ287.
In Figure 1 we show the PA observations of the jet as a function of time. The data were collected by T. Savolainen and they are reported in [9]. In Figure 1 we plot also the expected variation in the PA of the jet if the jet is connected to the primary accretion disk and follows the wobble of the disk in a binary system [9]. Here the wobble is modeled by a doubly periodic sinusoidal function of time $t$ (in yr)
\begin{equation}
\Omega_1 = 1.5\times \sin(2\pi\times(t-1934.8-d)/120)
\end{equation}
\begin{equation}
\Omega_2 = 0.25\times \cos(2\pi\times(t-1934.8-d)/11.2)
\end{equation}
\begin{equation}
\Delta\Omega = \Omega_1+\Omega_2-\Omega_0.
\end{equation}
The nodal angle of the accretion disk, $\Omega=\Omega_1+\Omega_2$, is given in degrees. We assume that the inclination $i$ of the disk is constant, since its variation was found to be much smaller than the variation of $\Omega$ in numerical simulations. The actual inclination is close to $90^{\circ}$ relative to the binary plane; in the following and in Table 1 we use the quantity which is actually $i-\pi/2$. The mean viewing angle of the jet is given by $(\Omega_0,i_0)$, which specifies the direction of the observer. The jet PA (JPA, in degrees) is then at any moment of time
\begin{equation}
\Delta\phi = -70+{\rm atan}(\tan\Delta\Omega/\sin i_0).
\end{equation}
The mean JPA $\Delta\phi=-70^{\circ}$ is obtained by fitting to the data. In actual numerical simulations the maxima of $\Delta\Omega$ are not quite sinusoidal but more sharply peaked.
There are three free parameters in fitting this function to the data: the two components of the mean viewing angle $(\Omega_0,i_0)$ of the jet, and the time delay $d$ (in yr) of the response of the jet to the variations in the disk. The results of fitting are shown in Table 1. In column 1 we indicate the data set to be compared with: TS = cm-wave data, IA = 7 mm-wave data [10] (2 yr averages) and CV = optical polarization data in one yr bins, calculated by C. Villforth and reported in [9]. Columns 2 and 3 give the parameters $i_0$ and $\Omega_0$, respectively, and column 4 the delay time $d$ in years. The last column gives the rms error of the fit. These are the best fits found for the radio jet data as well as for the polarization data. In the latter case it is assumed that the electric vector arising from radiating electrons is parallel to the jet (i.e. the magnetic field is perpendicular to the jet, presumably due to a compression in the radiating knot). The opposite case, a magnetic field parallel to the jet, was reported in [9]; it gives a considerably worse fit to the present doubly sinusoidal model.
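As a quick numerical aside (not part of the original analysis), the wobble model above is straightforward to evaluate; the helper below is a hypothetical illustration, with parameter values taken from the first TS row of Table 1 ($i_0=-1.08^{\circ}$, $\Omega_0=1.91^{\circ}$, $d=16.9$ yr), and all angles in degrees.

```python
import numpy as np

# Hypothetical helper evaluating the doubly periodic wobble model and
# the resulting jet position angle (JPA); all angles are in degrees.
# Default parameters are the first TS fit of Table 1.
def jet_pa(t, i0=-1.08, omega0=1.91, d=16.9):
    om1 = 1.5 * np.sin(2.0 * np.pi * (t - 1934.8 - d) / 120.0)
    om2 = 0.25 * np.cos(2.0 * np.pi * (t - 1934.8 - d) / 11.2)
    d_om = om1 + om2 - omega0
    return -70.0 + np.degrees(np.arctan(np.tan(np.radians(d_om))
                                        / np.sin(np.radians(i0))))

t = np.linspace(1980.0, 2010.0, 61)
pa = jet_pa(t)  # model JPA curve to compare with Figure 1
```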
\begin{table}
\caption{\label{1}Model fits.}
\begin{center}
\begin{tabular}{lllll}
\br
Data&$i_0$&$\Omega_0$&$d$&rms\\
\mr
TS&-1.08&1.91&16.9&7.5\\
TS&-0.60&1.96&27.7&7.7\\
TS&0.99&-1.92&78.5&7.2\\
CV&1.36&0.48&0.05&23.3\\
CV&0.90&1.07&11.2&22.2\\
IA&0.35&0.15&11.2&16.9\\
IA&0.27&0.90&22.1&14.4\\
\br
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics [width=3in]{drawing1.eps}
\end{center}
\caption{\label{label}Observations of the jet position angle in OJ287 at cm wavelengths. The line represents the model described in the text.}
\end{figure}
In Figure 2 we show the average PA from the 7 mm observations by [10]. The observations at this wavelength are available only since the mid-1990s, but they have a better resolution than the cm-wave maps. Thus they refer to the jet orientation closer to the core than the cm-wave maps. An interesting feature is the major jump in the position angle of the jet around 2004. The line in the figure refers to a model where the jet responds to the changing disk, but now with a different viewing angle and time delay than in the case of the cm-wave fit.
\begin{figure}
\begin{center}
\includegraphics[width=3in]{drawing2.eps}
\end{center}
\caption{\label{label}Observations of the JPA in OJ287 at 7 mm wavelength binned at 2 yr intervals. The line refers to a model calculated in [9].}
\end{figure}
\section{Interpretation}
The variations in the orientation of the accretion disk, due to the binary influence, are transmitted to the central component in about 10 yr [9]. We may assume that this variation is communicated to the jet, starting from the near jet and proceeding outwards. We may now calculate the speed at which this happens; we call it the kink speed.
In [9] it was found that the optical polarization angle responds to the disk changes with a 13 yr time delay. The present sinusoidal model suggests an 11 yr delay, which is not significantly different. The source of optical emission is presumably in the jet, but closer to the central black hole than the resolved radio jet. The best fit (Figure 2) to the 7 mm data is obtained if the time delay is 22 yr, i.e. the orientation of the 7 mm jet lags behind the variations in the central component by about 11 yr. The best solution for the cm-wave jet is provided by the 79 yr time delay, i.e. its response is delayed by $\Delta t_{kink}$$\sim$ 68 years with respect to the optical core (Figure 1). A slightly worse solution is obtained with $\Delta t_{kink}$ $\sim$ 6 years.
However, since we are looking at a relativistic jet almost head-on (the viewing angle is $\sim$ $1^{\circ}$ to $2^{\circ}$), the observed time intervals $\Delta$$t$ are compressed depending on the speed of the outflow. The knots in the jet presumably take part in a flow with a speed close to that of light (Lorentz factor $\Gamma_{knot}\sim20$ [11]). They take about a year to come from the central component to the observed cm-wave jet. The kink in the jet proceeds more slowly, and reaches the 7 mm-wave jet before the cm-wave jet. Thus typically, the two parts of the jet are misaligned. Our fits show that the misalignment is about 1 degree.
Let us estimate the kink speed $\beta_{kink} $ (relative to the speed of light $c$). The apparent speed of the kink is
\begin{equation}
\beta_{akink} = \beta_{kink}\sin\theta/(1-\beta_{kink}\cos\theta)
\end{equation}
while for the knot a corresponding equation is valid.
Taking the ratio
\begin{equation}
\beta_{aknot}/\beta_{akink} = \Delta t_{kink}/\Delta t_{knot} = \beta_{knot}(1-\beta_{kink}\cos\theta)/\left(\beta_{kink}(1-\beta_{knot}\cos\theta)\right),
\end{equation}
we find, using the observed $\Delta t_{kink}/\Delta t_{knot}\sim50$ and $\theta\sim1^{\circ}$, the Lorentz factor $\Gamma_{kink}$. From
\begin{equation}
\beta_{kink}^2 = 1-1/\Gamma_{kink}^2,
\end{equation}
we get $\Gamma_{kink}\sim3$. For $\Delta t_{kink}/\Delta t_{knot}\sim5$ we find $\Gamma_{kink}\sim8$.
Thus the kink speed is lower than the knot speed, perhaps because the jet has a fast spine which carries the knots and a slower sheath responsible for the jet confinement.
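The estimate above can be checked numerically. The following sketch is my own back-of-the-envelope illustration (assuming $\Gamma_{knot}=20$ and $\theta=1^{\circ}$, as quoted in the text): it solves the apparent-speed ratio equation for $\beta_{kink}$ and evaluates the resulting Lorentz factor.

```python
import numpy as np

# Given the delay ratio Delta t_kink / Delta t_knot, Gamma_knot ~ 20 [11]
# and theta ~ 1 deg, solve the apparent-speed ratio equation for
# beta_kink and return the corresponding Lorentz factor Gamma_kink.
def gamma_kink(ratio, gamma_knot=20.0, theta_deg=1.0):
    beta_knot = np.sqrt(1.0 - 1.0 / gamma_knot**2)
    c = np.cos(np.radians(theta_deg))
    # Ratio equation rearranged for beta_kink:
    beta_kink = beta_knot / (ratio * (1.0 - beta_knot * c) + beta_knot * c)
    return 1.0 / np.sqrt(1.0 - beta_kink**2)

g50 = gamma_kink(50.0)  # delay ratio ~50: Gamma_kink ~ 3
g5 = gamma_kink(5.0)    # delay ratio ~5:  Gamma_kink ~ 8
```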
\medskip
\section{Introduction}
Consider the ordinary differential equation (ODE)%
\begin{equation}
\frac{d}{dt}X_{t}^{x}=b(t,X_{t}^{x}),\quad X_{0}^{x}=x,\quad 0\leq t\leq T
\label{ODE}
\end{equation}%
for a vector field $b:[0,T]\times \mathbb{R}^{d}\longrightarrow \mathbb{R}%
^{d}$.
It is well known that the ODE \eqref{ODE} admits a unique
solution $X_{t}$, $0\leq t\leq T$, if $b$ is a Lipschitz function of linear
growth, uniformly in time. Further, if in addition $b\in C^{k}([0,T]\times
\mathbb{R}^{d};\mathbb{R}^{d})$, $k\geq 1$, then the flow associated with
the ODE \eqref{ODE} inherits the regularity from the vector field, that is
\begin{align*}
(x\longmapsto X_{t}^{x})\in C^{k}(\mathbb{R}^{d};\mathbb{R}^{d}).
\end{align*}
However, well-posedness of the ODE \eqref{ODE} in the sense of existence,
uniqueness and regularity of solutions or of the flow may fail if the driving
vector field $b$ lacks regularity, that is, if $b$ is e.g. non-Lipschitz or
even discontinuous.
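A classical one-dimensional example of this failure, added here as a minimal numerical sketch of my own: for the continuous but non-Lipschitz drift $b(x)=\sqrt{|x|}$, the initial value problem at $x=0$ has more than one solution.

```python
import numpy as np

# Non-uniqueness for a non-Lipschitz drift: b(x) = sqrt(|x|) is
# continuous but not Lipschitz at 0, and both x(t) = 0 and
# x(t) = t**2/4 solve dx/dt = b(x) with x(0) = 0.
b = lambda x: np.sqrt(np.abs(x))
t = np.linspace(0.0, 1.0, 101)
x_trivial = np.zeros_like(t)
x_nontrivial = t**2 / 4.0

# Residuals of the ODE along each candidate solution:
res_trivial = np.max(np.abs(np.gradient(x_trivial, t) - b(x_trivial)))
res_nontrivial = np.max(np.abs(np.gradient(x_nontrivial, t) - b(x_nontrivial)))
```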
In this article we aim at studying the restoration of well-posedness of the
ODE \eqref{ODE} in the above sense by perturbing the equation via a specific
noise process $\mathbb{B}_{t}$, $0\leq t\leq T$; that is, we are interested
in analyzing strong solutions to the following stochastic differential
equation (SDE)
\begin{equation} \label{SDE}
X_{t}^{x}=x+\int_{0}^{t}b(t,X_{s}^{x})ds+\mathbb{B}_{t},\quad 0\leq t\leq T,
\end{equation}%
where the driving process $\mathbb{B}_{t}$, $0\leq t\leq T$, is a Gaussian
process with stationary increments and non-H\"{o}lder continuous paths given by
\begin{equation} \label{B}
\mathbb{B}_{t}=\sum_{n\geq 1}\lambda _{n}B_{t}^{H_{n},n}.
\end{equation}%
Here $B_{\cdot }^{H_{n},n}$, $n\geq 1$ are independent fractional Brownian
motions in $\mathbb{R}^{d}$ with Hurst parameters $H_{n}\in (0,\frac{1}{2}%
),n\geq 1$ such that
\begin{equation*}
H_{n}\searrow 0
\end{equation*}%
for $n\longrightarrow \infty $. Further, $\sum_{n\geq 1}\left\vert \lambda
_{n}\right\vert <\infty $ for $\lambda _{n}\in \mathbb{R},n\geq 1$.
On the other hand, the SDE (\ref{SDE}) can also be naturally recast
for $Y_{t}^{x}:=X_{t}^{x}-\mathbb{B}_{t}$ in terms of the ODE%
\begin{equation}
Y_{t}^{x}=x+\int_{0}^{t}b^{\ast }(t,Y_{s}^{x})ds, \label{RODE}
\end{equation}%
where $b^{\ast }(t,y):=b(t,y+\mathbb{B}_{t})$ is a "randomization" of the
input vector field $b$.
We recall (for $d=1$) that a fractional Brownian motion $B_{\cdot }^{H}$
with Hurst parameter $H\in (0,1)$ is a centered Gaussian process on some
probability space with a covariance structure $R_{H}(t,s)$ given by
\begin{equation*}
R_{H}(t,s)=E[B_{t}^{H}B_{s}^{H}]=\frac{1}{2}(s^{2H}+t^{2H}-\left\vert
t-s\right\vert ^{2H}),\quad t,s\geq 0.
\end{equation*}%
We mention that $B_{\cdot }^{H}$ has a version with H\"{o}lder continuous
paths with exponent strictly smaller than $H$. The fractional Brownian
motion coincides with the Brownian motion for $H=\frac{1}{2}$, but is
neither a semimartingale nor a Markov process, if $H\neq \frac{1}{2}$.
We also recall here that a fractional Brownian motion $B_{\cdot }^{H}$ has a
representation in terms of a stochastic integral as%
\begin{equation}
B_{t}^{H}=\int_{0}^{t}K_{H}(t,u)dW_{u}, \label{Rep}
\end{equation}%
where $W_{\cdot }$ is a Wiener process and where $K_{H}(t,\cdot )$ is an
integrable kernel. See e.g. \cite{Nua10} and the references therein for more
information about fractional Brownian motion.
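As a hedged aside, fBm paths can also be sampled numerically without the kernel representation \eqref{Rep}, by factorizing the covariance directly; the sketch below is my own illustration (not a construction used in this paper) via a Cholesky decomposition.

```python
import numpy as np

# Minimal fBm sampler: Cholesky factorization of the covariance
# R_H(t,s) = 0.5*(s**(2H) + t**(2H) - |t-s|**(2H)).  This uses the
# covariance directly rather than the Volterra kernel K_H, but
# produces a sample with the same law on the chosen grid.
def fbm(n, H, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t = T * np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(u - s)**(2 * H))
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

t, x = fbm(32, H=0.3, seed=1)  # one rough path with H < 1/2
```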
\bigskip
Using Malliavin calculus combined with integration-by-parts techniques based
on Fourier analysis, we want to show in this paper the existence of a unique
global strong solution $X_{\cdot }^{x}$ to \eqref{SDE} with a stochastic
flow which is \emph{smooth}, that is
\begin{equation} \label{Smooth}
(x\longmapsto X_{t}^{x})\in C^{\infty }(\mathbb{R}^{d};\mathbb{R}^{d})\quad %
\mbox{a.e. for all}\quad t,
\end{equation}%
when the driving vector field $b$ is \emph{singular}, that is more
precisely, when
\begin{equation*}
b\in \mathcal{L}_{2,p}^{q}:=L^{q}([0,T];L^{p}(\mathbb{R}^{d};\mathbb{R}%
^{d}))\cap L^{1}(\R^{d};L^{\infty }([0,T];\mathbb{R}^{d}))
\end{equation*}%
for $p,q\in (2,\infty ]$.
We think that the latter result is rather surprising since it seems to
contradict the paradigm in the theory of (stochastic) dynamical systems that
solutions to ODE's or SDE's inherit their regularity from the driving vector
fields.
Further, we expect that the regularizing effect of the noise in \eqref{SDE}
will also pay dividends in PDE theory and in the study of dynamical
systems with respect to singular SDE's:
For example, if $X_{\cdot }^{x}$ is a solution to the ODE \eqref{ODE} on $%
[0,\infty )$, then $X:[0,\infty )\times \mathbb{R}^{d}\longrightarrow
\mathbb{R}^{d}$ may have the interpretation of a flow of a fluid with
respect to the velocity field $u=b$ of an incompressible inviscid fluid,
which is described by a solution to an incompressible Euler equation%
\begin{align} \label{Euler}
u_{t}+(Du)u+\triangledown P=0,\text{ }\triangledown \cdot u=0,
\end{align}
where $P:[0,\infty )\times \mathbb{R}^{d}\longrightarrow \mathbb{R}$ is
the (scalar) pressure field.
Since solutions to \eqref{Euler} may be singular, a deeper analysis of the
regularity of such solutions also necessitates the study of ODE's \eqref{ODE}
with irregular vector fields. See e.g. Di Perna, Lions \cite{DPL89} or
Ambrosio \cite{ambrosio.04} in connection with the construction of
(generalized) flows associated with singular ODE's.
In the context of stochastic regularization of the ODE \eqref{ODE} in the
sense of \eqref{SDE}, however, the obtained results in this article
naturally give rise to the question of whether the constructed smooth
stochastic flow in \eqref{Smooth} may be used for the study of regular
solutions of a stochastic version of the Euler equation \eqref{Euler}.
Regarding applications to the theory of stochastic dynamical systems one may
study the behaviour of orbits with respect to solutions to SDE's \eqref{SDE}
with singular vector fields at sections on a $2$-dimensional sphere (Theorem
of Poincar\'{e}-Bendixson). Another application may pertain to stability
results in the sense of a modified version of the Theorem of Kupka-Smale
\cite{Smale.63}. We mention that well-posedness in the sense of existence
and uniqueness of strong solutions to \eqref{ODE} via regularization of
noise was first found by Zvonkin \cite{Zvon74} in the early 1970s in the
one-dimensional case for a driving process given by the Brownian motion,
when the vector field $b$ is merely bounded and measurable. Subsequently the
latter result, which can be considered a milestone in SDE theory, was
extended to the multidimensional case by Veretennikov \cite{Ver79}.
Other more recent results on this topic in the case of Brownian motion were
e.g. obtained by Krylov, R\"{o}ckner \cite{KR05}, where the authors
established existence and uniqueness of strong solutions under some
integrability conditions on $b$. See also the works of Gy\"{o}ngy, Krylov
\cite{GyK96} and Gy\"{o}ngy, Martinez \cite{GyM01}. As for a generalization
of the result of Zvonkin \cite{Zvon74} to the case of stochastic evolution
equations on a Hilbert space, we also mention the striking paper of Da
Prato, Flandoli, Priola, R\"{o}ckner \cite{DPFPR13}, who constructed strong
solutions for bounded and measurable drift coefficients by employing
solutions of infinite-dimensional Kolmogorov equations in connection with a
technique known as the "It\^{o}-Tanaka-Zvonkin trick".
The common general approach used by the above mentioned authors for the
construction of strong solutions is based on the so-called Yamada-Watanabe
principle \cite{YW71}: The authors prove the existence of a weak solution
(by means of e.g. Skorokhod's or Girsanov's theorem) and combine it with the
property of pathwise uniqueness of solutions, which is shown by using
solutions to (parabolic) PDE's, to eventually obtain strong uniqueness. As
for this approach in the case of certain classes of L\'{e}vy processes the
reader may consult Priola \cite{Priola12} or Zhang \cite{Zhang13} and the
references therein.
Let us comment here that the methods of the above authors, which are
essentially limited to equations with Markovian noise, cannot be directly
used in connection with our SDE \eqref{SDE}. The reason is that the
driving noise in \eqref{SDE} is not a Markov process. Furthermore, it is
not even a semimartingale, due to the properties of fractional Brownian
motion.
In addition, we point out that our approach is diametrically opposed to the
Yamada-Watanabe principle: We first construct a strong solution to %
\eqref{SDE} by using Malliavin calculus. Then we verify uniqueness in law
of solutions, which enables us to establish strong uniqueness, that is we
use the following principle:
\begin{align*}
\fbox{Strong existence}+\text{\fbox{Uniqueness in law}}\Rightarrow \text{
\fbox{Strong uniqueness}.}
\end{align*}
\bigskip
Finally, let us also mention some results in the literature on the existence
and uniqueness of strong solutions of singular SDE's driven by a
non-Markovian noise in the case of fractional Brownian motion:
The first results in this direction were obtained by Nualart, Ouknine \cite%
{nualart.ouknine.02,nualart.ouknine.03} for one-dimensional SDE's with
additive noise. For example, using the comparison theorem, the authors in
\cite{nualart.ouknine.02} are able to derive unique strong solutions to such
equations for locally unbounded drift coefficients and Hurst parameters $H<%
\frac{1}{2}$.
More recently, Catellier, Gubinelli \cite{CG} developed a construction
method for strong solutions of multi-dimensional singular SDE's with
additive fractional noise and $H\in (0,1)$ for vector fields $b$ in the
Besov-H\"{o}lder space $B_{\infty ,\infty }^{\alpha +1},\alpha \in \mathbb{R}
$. Here the solutions obtained are even \emph{path-by-path} in the sense of
Davie \cite{Da07} and the construction technique of the authors relies on the
Leray-Schauder-Tychonoff fixed point theorem and a comparison principle
based on an average translation operator.
Another recent result, based on Malliavin techniques very similar to
those of our paper, can be found in Ba\~{n}os, Nilssen, Proske \cite{BNP.17}. Here the
authors proved the existence of unique strong solutions for coefficients
\begin{align*}
b\in L_{\infty ,\infty }^{1,\infty }:=L^{1}(\R^d;L^{\infty }([0,T];\mathbb{R}%
^{d}))\cap L^{\infty }(\R^d;L^{\infty }([0,T];\mathbb{R}^{d}))
\end{align*}
for sufficiently small $H\in (0,\frac{1}{2})$.
The approach in \cite{BNP.17} is different from the above mentioned ones and
the results for vector fields $b\in L_{\infty ,\infty }^{1,\infty }$ are not
in the scope of the techniques in \cite{CG}. See also \cite{BOPP.17} in the
case of fractional-noise-driven SDE's with distributional drift.
\bigskip
Let us now turn to results in the literature on the well-posedness of
singular SDE's under the aspect of the regularity of stochastic flows:
If we assume that the vector field $b$ in the ODE \eqref{ODE} is not smooth,
but merely require that $b\in W^{1,p}$ and $\triangledown \cdot b\in
L^{\infty },$ then the existence of a unique generalized flow $X$
associated with the ODE \eqref{ODE} was shown in \cite{DPL89}. See also \cite%
{ambrosio.04} for a generalization of the latter result to the case of
vector fields of bounded variation.
On the other hand, if $b$ in the ODE \eqref{ODE} is less regular than
required in \cite{DPL89,ambrosio.04}, then a flow may not even exist in a generalized
sense.
However, the situation changes, if we regularize the ODE \eqref{ODE} by an
(additive) noise:
For example, if the driving noise in the SDE \eqref{SDE} is chosen to be a
Brownian noise, or more precisely if we consider the SDE
\begin{align*}
dX_{t}=b(t,X_{t})dt+dB_{t},\quad s,t\geq 0,\quad X_{s}=x\in \mathbb{R}^{d}
\end{align*}
with the associated stochastic flow $\varphi _{s,t}:\mathbb{R}%
^{d}\rightarrow \mathbb{R}^{d}$, the authors in \cite{MNP14} proved, for
merely bounded and measurable vector fields $b$, a regularizing effect of the
Brownian motion on the ODE \eqref{ODE}; that is, they showed that $\varphi
_{s,t}$ is a stochastic flow of Sobolev diffeomorphisms with
\begin{align*}
\varphi _{s,t},\varphi _{s,t}^{-1}\in L^{2}(\Omega ;W^{1,p}(\mathbb{R}%
^{d};w))\text{ }
\end{align*}
for all $s,t$ and $p\in (1,\infty)$, where $W^{1,p}(\mathbb{R}^{d};w)$ is a
weighted Sobolev space with weight function $w:\mathbb{R}^{d}\rightarrow
\lbrack 0,\infty )$. Further, as an application of the latter result, which
rests on techniques similar to those used in this paper, the authors also
study solutions of a singular stochastic transport equation with
multiplicative noise of Stratonovich type.
Another work in this direction with applications to Navier-Stokes equations,
which invokes similar techniques as introduced in \cite{MNP14}, deals with
globally integrable $u\in L^{r,q}$ for $r/d+2/q<1$ (here $r$ refers to the
spatial and $q$ to the temporal integrability exponent). In this context, we
also mention the paper \cite{FedFlan.13}, where the authors present an
alternative method to the above mentioned ones based on solutions to
backward Kolmogorov equations. See also \cite{FedFlan10}. We also refer to
\cite{Priola12} and \cite{Zhang13} in the case of $\alpha$-stable processes.
\bigskip
On the other hand if we consider a noise in the SDE \eqref{SDE}, which is
rougher than Brownian motion with respect to the path properties and given
by fractional Brownian motion for small Hurst parameters, one can even
observe a stronger regularization by noise effect on the ODE \eqref{ODE}:
For example, using Malliavin techniques very similar to those in our paper,
the authors in \cite{BNP.17} are able to show for vector fields $b\in
L_{\infty ,\infty }^{1,\infty }$ the existence of higher order Fr\'{e}chet
differentiable stochastic flows
\begin{align*}
(x\mapsto X_{t}^{x})\in C^{k}(\mathbb{R}^{d})\quad \text{a.e. for all} \quad
t,
\end{align*}%
provided $H=H(k)$ is sufficiently small.
Another work in connection with fractional Brownian motion is that of
Catellier, Gubinelli \cite{CG}, where the authors under certain conditions
obtain Lipschitz continuity of the associated stochastic flow for drift
coefficients $b$ in the Besov-H\"{o}lder space $B_{\infty,\infty }^{\alpha
+1},\alpha \in \mathbb{R}$.
\bigskip
We again stress that our approach for the construction of strong solutions
of singular SDE's \eqref{SDE} in connection with smooth stochastic flows is
not based on the Yamada-Watanabe principle or techniques from Markov or
semimartingale theory as commonly used in the literature. In fact, our
construction method has its roots in a series of papers \cite{MMNPZ10}, \cite%
{MBP06}, \cite{MBP10}, \cite{BNP.17}. See also \cite{HaaPros.14} in the case
of SDE's driven by L\'{e}vy processes, \cite{FNP.13}, \cite{MNP14} regarding
the study of singular stochastic partial differential equations or \cite%
{BOPP.17}, \cite{BHP.17} in the case of functional SDE's.
The method we aim at employing in this paper for the
construction of strong solutions rests on a compactness criterion for square
integrable functionals of a cylindrical Brownian motion from Malliavin
calculus, which is a generalization of that in \cite{DPMN92}, applied to
solutions $X_{\cdot }^{x,n}$
\begin{align*}
dX_{t}^{x,n}=b_{n}(t,X_{t}^{x,n})dt+d\mathbb{B}_{t},\quad
X_{0}^{x,n}=x,\quad n\geq 1,
\end{align*}
where $b_{n},n\geq 1$ are smooth vector fields converging to $b\in \mathcal{L%
}_{2,p}^{q}$. Then using variational techniques based on Fourier analysis,
we prove that $X_{t}^{x}$ as a solution to \eqref{SDE} is the strong $L^{2}-$%
limit of $X_{t}^{x,n}$ for all $t$.
To be more specific (in the case of time-homogeneous vector fields), we
"linearize" the problem of finding strong solutions by applying Malliavin
derivatives $D^{i}$ in the direction of Wiener processes $W^{i}$ with
respect to the corresponding representations of $B_{\cdot }^{H_{i},i}$ in (%
\ref{Rep}) in connection with (\ref{B}) and get the linear equation%
\begin{equation}
D_{t}^{i}X_{u}^{x,n}=\int_{t}^{u}b_{n}^{\shortmid
}(X_{s}^{x,n})D_{t}^{i}X_{s}^{x,n}ds+K_{H}(u,t)I_{d},0\leq t<u,n\geq 1,
\label{LinearD}
\end{equation}%
where $b_{n}^{\shortmid }$ denotes the spatial derivative of $b_{n}$, $K_{H}$
the kernel in (\ref{Rep}) and $I_{d}\in \mathbb{R}^{d\times d}$ the unit
matrix. Picard iteration then yields%
\begin{equation}
D_{t}^{i}X_{u}^{x,n}=K_{H}(u,t)I_{d}+\sum_{m\geq
1}\int_{t<s_{1}<...<s_{m}<u}b_{n}^{\shortmid
}(X_{s_{m}}^{x,n})...b_{n}^{\shortmid
}(X_{s_{1}}^{x,n})K_{H}(s_{1},t)I_{d}ds_{1}...ds_{m}. \label{MD}
\end{equation}%
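
In the scalar case, the structure of this series is easy to verify numerically: for $D(u)=K+\int_{t}^{u}a(s)D(s)\,ds$, the iterated (time-ordered) integrals sum to $K\exp(\int_{t}^{u}a(s)\,ds)$. The following toy check is my own illustration and not part of the paper's argument.

```python
import numpy as np

# Scalar toy analogue of the Picard expansion: the fixed point of
# D(u) = K + int_t^u a(s) D(s) ds equals K * exp(int_t^u a(s) ds).
a = lambda s: np.sin(s) + 0.5
K, t0, t1 = 1.3, 0.0, 2.0
s = np.linspace(t0, t1, 4001)
av = a(s)

D = np.full_like(s, K)                      # zeroth Picard iterate
for _ in range(40):                         # Picard iteration
    incr = 0.5 * (av[1:] * D[1:] + av[:-1] * D[:-1]) * np.diff(s)
    D = K + np.concatenate(([0.0], np.cumsum(incr)))

I = np.sum(0.5 * (av[1:] + av[:-1]) * np.diff(s))   # trapezoidal integral
exact = K * np.exp(I)                               # closed form at u = t1
```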
In the next step, in order to "get rid of" the derivatives of $b_{n}$ in (\ref%
{MD}), we use Girsanov's change of measure in connection with the following
"local time variational calculus" argument:%
\begin{equation}
\int_{0<s_{1}<...<s_{n}<t}\kappa (s)D^{\alpha }f(\mathbb{B}_{s})ds=\int_{%
\mathbb{R}^{dn}}D^{\alpha }f(z)L_{\kappa }^{n}(t,z)dz=(-1)^{\left\vert
\alpha \right\vert }\int_{\mathbb{R}^{dn}}f(z)D^{\alpha }L_{\kappa
}^{n}(t,z)dz, \label{localtime}
\end{equation}%
for $\mathbb{B}_{s}:=(\mathbb{B}_{s_{1}},...,\mathbb{B}_{s_{n}})$ and smooth
functions $f:\mathbb{R}^{dn}\longrightarrow \mathbb{R}$ with compact
support, where $D^{\alpha }$ stands for a partial derivative of order $%
\left\vert \alpha \right\vert $ for a multi-index $\alpha $. Here, $%
L_{\kappa }^{n}(t,z)$ is a spatially differentiable local time of $\mathbb{B}%
_{\cdot }$ on a simplex scaled by a non-negative integrable function $\kappa
(s)=$ $\kappa _{1}(s)...\kappa _{n}(s)$.
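For $n=1$ and $\kappa \equiv 1$, and with a standard Brownian path in place of $\mathbb{B}$, the occupation-time identity underlying this formula can be checked numerically; the sketch below is a simplified illustration of my own, approximating the local time by a histogram of the occupation measure.

```python
import numpy as np

# Occupation-time identity for n = 1, kappa = 1, Brownian path:
#   int_0^T f(B_s) ds  =  int f(z) L(T, z) dz,
# with the local time L(T, .) approximated by a histogram of the
# occupation measure of the simulated path.
rng = np.random.default_rng(1)
n, T = 200_000, 1.0
dt = T / n
B = np.cumsum(rng.standard_normal(n)) * np.sqrt(dt)
f = lambda z: np.exp(-z**2)

lhs = np.sum(f(B)) * dt                      # time integral along the path
hist, edges = np.histogram(B, bins=400)
centers = 0.5 * (edges[:-1] + edges[1:])
rhs = np.sum(f(centers) * hist) * dt         # space integral against L(T, .)
```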
Using the latter enables us to derive upper bounds based on Malliavin
derivatives $D^{i}$ of the solutions in terms of continuous functions of $%
\left\Vert b_{n}\right\Vert _{\mathcal{L}_{2,p}^{q}}$, which we can use in
connection with a compactness criterion for square integrable functionals of
a cylindrical Brownian motion to obtain the strong solution as an $L^{2}-$%
limit of approximating solutions.
Based on similar previous arguments we also verify that the flow associated
with \eqref{SDE} for $b\in \mathcal{L}_{2,p}^{q}$ is smooth by using an
estimate of the form
\begin{equation*}
\sup_{t}\sup_{x\in U}E\left[ \left\Vert \frac{\partial ^{k}}{\partial x^{k}}%
X_{t}^{x,n}\right\Vert ^{\alpha }\right] \leq C_{p,q,d,H,k,\alpha ,T}\left(
\left\Vert b_{n}\right\Vert _{\mathcal{L}_{2,p}^{q}}\right) ,n\geq 1
\end{equation*}%
for arbitrary $k\geq 1,$ where $C_{p,q,d,H,k,\alpha ,T}:[0,\infty
)\rightarrow \lbrack 0,\infty )$ is a continuous function, depending on $%
p,q,d,H=\{H_{n}\}_{n\geq 1},k,\alpha ,T$ for $\alpha \geq 1$ and $U\subset
\mathbb{R}^{d}$ a fixed bounded domain. See Theorem \ref{VI_derivative}.
We also mention that the method used in this article significantly differs
from that in \cite{BNP.17} and related works, since the underlying noise of $%
\mathbb{B}_{\cdot }$ in \eqref{SDE} is of infinite-dimensional nature, that
is a cylindrical Brownian motion. The latter, however, requires in this paper
the application of an infinite-dimensional version of the compactness
criterion in \cite{DPMN92} tailored to the driving noise $\mathbb{B}_{\cdot
} $.
It is crucial to note here that the above technique explained in the
case of perturbed ODE's of the form (\ref{SDE}) reveals or strongly hints at
a general principle, which could be used to study important classes of PDE's
in connection with conservation laws or fluid dynamics. In fact, we believe
that the following underlying principles may play a major role in the
analysis of solutions to\ PDE's:
\bigskip
\textbf{1.} \emph{Nash-Moser principle}: The idea of this principle, which
goes back to J. Nash \cite{Nash} and J. Moser \cite{Moser}, can be
(roughly) explained as follows:
Assume a function $\Phi $ of class $C^{k}$. Then the Nash-Moser technique
pertains to the study of solutions $u$ to the equation%
\begin{equation}
\Phi (u)=\Phi (u_{0})+f, \label{NMeq}
\end{equation}%
where $u_{0}\in C^{\infty }$ is given and where $f$ is a "small"
perturbation.
In the setting of our paper, the latter equation corresponds to the SDE (\ref%
{SDE}) with a (non-deterministic) perturbation given by $f=\mathbb{B}_{\cdot
}$ (or $\varepsilon \mathbb{B}_{\cdot }$ for small $\varepsilon >0$). Then,
using this principle, the problem of studying solutions to (\ref{NMeq}) is
"linearized" by analyzing solutions to the linear equation%
\begin{equation}
\Phi ^{\shortmid }(u)v=g, \label{NMlinear}
\end{equation}%
where $\Phi ^{\shortmid }$ stands for the Fr\'{e}chet derivative of $\Phi $.
The study of the latter problem, however, usually comes along with a "loss
of derivatives", which can be measured by "tame" estimates based on a
(decreasing) family of Banach spaces $E_{s},0\leq s<\infty $ with norms $%
\left\vert \cdot \right\vert _{s}$ such that $\cap _{s\geq 0}E_{s}=C^{\infty
}$. Typically, $E_{s}=C^{s}$ (H\"{o}lder spaces) or $E_{s}=H^{s}$ (Sobolev
spaces).
In our situation, equation (\ref{NMlinear}) has its analogon in (\ref%
{LinearD}) with respect to the (stochastic Sobolev) derivative $D^{i}$ (or
the Fr\'{e}chet derivative $D$ in connection with flows).
Roughly speaking, in the case of H\"{o}lder spaces, assume that
\begin{equation*}
\Phi ^{\shortmid }(u)\psi (u)=Id
\end{equation*}%
for a linear mapping $\psi (u)$, which satisfies the "tame" estimate:%
\begin{equation*}
\left\vert \psi (u)g\right\vert _{\alpha }\leq C(\left\vert g\right\vert
_{\alpha +\lambda }+\left\vert g\right\vert _{\lambda }(1+\left\vert
u\right\vert _{\alpha +r}))
\end{equation*}%
for numbers $\lambda ,r\geq 0$ and $\alpha \geq 0$. In addition, require a
similar estimate with respect to $\Phi ^{\shortmid \shortmid }(u)$. Then,
there exists a certain neighbourhood $W$ of the origin such that for $%
f\in W$ equation (\ref{NMeq}) has a solution $u(f)\in C^{\alpha }$. Solution
here means that there exists a sequence $u_{j},j\geq 1$ in $C^{\infty }$
such that for all $\varepsilon >0$, $u_{j}\longrightarrow u$ in $C^{\alpha
-\varepsilon }$ and $\Phi (u_{j})\longrightarrow \Phi (u_{0})+f$ in $%
C^{\alpha +\lambda -\varepsilon }$ for $j\longrightarrow \infty $. The proof
of the latter result rests on a Newton approximation scheme and results from
Littlewood-Paley theory. See also \cite{AlG} and the references therein.
\bigskip
\textbf{2.} \emph{Signature of higher order averaging operators along a
highly fractal stochastic curve}: Another principle, which to the best of our
knowledge is new and which comes into play in connection with our
technique for the study of perturbed ODE's, is the "extraction" of
information from "signatures" of \emph{higher order averaging operators}
along a highly irregular or fractal stochastic curve $\gamma _{t}=\mathbb{B}%
_{t}$ of the form%
\begin{eqnarray}
&&(T_{t}^{0,\gamma ,l_{1},...,l_{k}}(b)(x),T_{t}^{1,\gamma
,l_{1},...,l_{k}}(b)(x),T_{t}^{2,\gamma ,l_{1},...,l_{k}}(b)(x),...) \notag \\
&=&(I_{d},\int_{\mathbb{R}^{d}}b(x^{(1)}+z_{1})\Gamma _{\kappa
}^{1,l_{1},...,l_{k}}(z_{1})dz_{1}, \notag \\
&&\int_{\mathbb{R}^{2d}}b^{\otimes 2}(x^{(2)}+z_{2})\Gamma _{\kappa
}^{2,l_{1},...,l_{k}}(z_{2})dz_{2},\int_{\mathbb{R}^{3d}}b^{\otimes
3}(x^{(3)}+z_{3})\Gamma _{\kappa }^{3,l_{1},...,l_{k}}(z_{3})dz_{3},...)
\notag \\
&\in &\mathbb{R}^{d\times d}\times \mathbb{R}^{d}\times \mathbb{R}^{d\times d}\times...
\label{S}
\end{eqnarray}%
where $b:\mathbb{R}^{d}\longrightarrow \mathbb{R}^{d}$ is a "rough", that
is, a merely (locally integrable) Borel measurable, vector field and
\begin{equation*}
\Gamma _{\kappa }^{n,l_{1},...,l_{k}}(z_{n})=(D^{\alpha
^{j_{1},...,j_{n-1},j,l_{1},...,l_{k}}}L_{\kappa }^{n}(t,z_{n}))_{1\leq
j_{1},...,j_{n-1},j\leq d}
\end{equation*}%
for multi-indices $\alpha ^{j_{1},...,j_{n-1},j,l_{1},...,l_{k}}\in \mathbb{N%
}_{0}^{nd}$ of order $\left\vert \alpha
^{j_{1},...,j_{n-1},j,l_{1},...,l_{k}}\right\vert =n+k-1$ for all (fixed) $%
l_{1},...,l_{k}\in \{1,...,d\}$, $k\geq 0$ and $x^{(n)}:=(x,...,x)\in
\mathbb{R}^{nd}$. Here $L_{\kappa }^{n}$ is the local time from (\ref%
{localtime}) and the multiplication of $b^{\otimes n}(z_{n})$ and $\Gamma
_{\kappa }^{n,l_{1},...,l_{k}}(z_{n})$ in the above signature is defined via tensor
contraction as%
\begin{equation*}
(b^{\otimes n}(z_{n})\Gamma _{\kappa
}^{n,l_{1},...,l_{k}}(z_{n}))_{ij}=\sum_{j_{1},...,j_{n-1}=1}^{d}(b^{\otimes
n}(z_{n}))_{ij_{1},...,j_{n-1}}(\Gamma _{\kappa
}^{n,l_{1},...,l_{k}}(z_{n}))_{j_{1},...,j_{n-1}j}, n\geq 2\text{.}
\end{equation*}%
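To make the contraction above concrete, the following minimal sketch (our own illustration; the vector $b$ and the matrix $\Gamma$ are arbitrary toy data, not objects from the paper) implements it for $n=2$, where it reduces to an ordinary matrix product:

```python
# Tensor contraction (b^{(x)n} Gamma)_{ij} = sum_{j_1..j_{n-1}}
#   b_{i,j_1,..,j_{n-1}} * Gamma_{j_1,..,j_{n-1},j}, here for n = 2.
d = 3
b = [1.0, 2.0, -1.0]                                           # a toy vector in R^d
Gamma = [[float(i == j) for j in range(d)] for i in range(d)]  # identity as toy Gamma

b2 = [[b[i] * b[j1] for j1 in range(d)] for i in range(d)]     # b (x) b, indices (i, j_1)
contraction = [[sum(b2[i][j1] * Gamma[j1][j] for j1 in range(d))
                for j in range(d)] for i in range(d)]

# With Gamma = I_d the contraction reduces to the rank-one matrix b b^T.
assert contraction == [[b[i] * b[j] for j in range(d)] for i in range(d)]
```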
If $k=0$, we simply set%
\begin{equation*}
T_{t}^{n,\gamma ,l_{1},...,l_{k}}(b)(x)=T_{t}^{n,\gamma }(b)(x)=\int_{%
\mathbb{R}^{d}}b(z)L_{\kappa }^{1}(t,z)dz
\end{equation*}%
for all $n\geq 1$.
The motivation for the concept (\ref{S}) for rough vector fields $b$ comes
from the integration by parts formula (\ref{localtime}) applied to each
summand of (\ref{MD}) (under a change of measure), which can be written in terms
of $T_{u}^{n,\gamma ,l_{1},...,l_{k}}(b)(x)$ for $k=1$.
Higher order derivatives $(D^{i})^{k}$ (or alternatively Fr\'{e}chet
derivatives $D^{k}$ of order $k$) in connection with (\ref{MD}) give rise to
the definition of operators $T_{u}^{n,\gamma ,l_{1},...,l_{k}}(b)(x)$ for
general $k\geq 1$ (see Section $5$).
For example, if $n=1$, $k=2$, $\kappa \equiv 1$, then we have for (smooth) $b$
that
\begin{eqnarray}
\int_{0}^{t}b^{\prime \prime }(x+\gamma _{s})ds
&=&\int_{0}^{t}b^{\prime \prime }(x+\mathbb{B}_{s})ds \notag \\
&=&(\int_{\mathbb{R}^{d}}b(x^{(1)}+z_{1})(D^{2}L_{\kappa
}^{1}(t,z_{1}))_{l_{1},l_{2}}dz_{1})_{1\leq l_{1},l_{2}\leq d} \notag \\
&=&(\int_{\mathbb{R}^{d}}b(x^{(1)}+z_{1})\Gamma _{\kappa
}^{1,l_{1},l_{2}}(z_{1})dz_{1})_{1\leq l_{1},l_{2}\leq d} \notag \\
&=&(T_{t}^{1,\gamma ,l_{1},l_{2}}(b)(x))_{1\leq l_{1},l_{2}\leq d}\in
\mathbb{R}^{d}\otimes \mathbb{R}^{d}\text{.} \label{Example}
\end{eqnarray}%
In the case when $n=1$, $k=0$, $\kappa \equiv 1$ and $\gamma _{t}=B_{t}^{H}$ is
a fractional Brownian motion for $H<\frac{1}{2},$ the first order averaging
operator $T_{t}^{1,\gamma }$ along the curve $\gamma _{t}$ in (\ref{S})
coincides with that in Catellier, Gubinelli \cite{CG} given by%
\begin{equation*}
T_{t}^{\gamma }(b)(x)=\int_{0}^{t}b(x+B_{s}^{H})ds,
\end{equation*}%
which was used by the authors, as mentioned before, to study the
regularization effect of $\gamma _{t}$ on ODE's perturbed by such curves.
For example, if $b\in B_{\infty ,\infty }^{\alpha +1}$ (Besov-H\"{o}lder
space) with $\alpha >2-\frac{1}{2H}$, then the corresponding SDE (\ref{SDE})
driven by $B_{\cdot }^{H}$ admits a unique Lipschitz flow. The reason, and
this is important to mention here, why the latter authors "only" obtain
Lipschitz flows and not higher regularity is that they do not take into
account in their analysis information coming from higher order averaging
operators $T_{t}^{n,\gamma ,l_{1},...,l_{k}}$ for $n>1$, $k\geq 1$. Here in
this article, we rely in fact on the information based on such higher order
averaging operators to be able to study $C^{\infty }-$regularization effects
with respect to flows.
Let us also mention here that T. Tao and J. Wright \cite{TW} actually
introduced averaging operators of the type $T_{t}^{\gamma }$ along (smooth)
\emph{deterministic} curves $\gamma _{t}$ in order to improve $L^{p}$-bounds
of such operators along such curves. See also the recent work of
\cite{Gressman} and the references therein.
On the other hand, in view of the possibility of a geometric study of the
regularity of solutions to ODE's or PDE's, it would be natural (motivated by
(\ref{Example})) to replace the signatures in (\ref{S}) by the following
family of signatures for rough vector fields $b$:%
\begin{eqnarray*}
S_{t}^{n}(b)(x) &:&=(1,T_{t}^{n,\gamma }(b)(x),(T_{t}^{n,\gamma
,l_{1}}(b)(x))_{1\leq l_{1}\leq d},(T_{t}^{n,\gamma
,l_{1},l_{2}}(b)(x))_{1\leq l_{1},l_{2}\leq d},...) \\
&\in &T(\mathbb{R}^{d}):=\prod_{k\geq 0}(\otimes _{i=1}^{k}\mathbb{R}%
^{d}),n\geq 1,
\end{eqnarray*}%
where we use the convention $\otimes _{i=1}^{0}\mathbb{R}^{d}=\mathbb{R}$.
The space $T(\mathbb{R}^{d})$ becomes an associative algebra under tensor
multiplication. Then the regularity of solutions to ODE's or
PDE's can be analyzed by means of such signatures in connection with Lie
groups $\mathfrak{G}\subset T_{1}(\mathbb{R}^{d}):=\{(g_{0},g_{1},...)\in T(%
\mathbb{R}^{d}):g_{0}=1\}$.
In this context, it would be conceivable to derive a Chen-Strichartz
type of formula by means of $S_{t}^{n}(b)$ in connection with a
sub-Riemannian geometry for the study of flows. See \cite{Baudoin} and the
references therein.
\bigskip
\textbf{3.} \emph{Removal of a "thin" set of "worst case" input data via
noisy perturbation}: As explained before, well-posedness of the ODE (\ref{ODE}%
) can be restored by "randomization" or perturbation of the input vector
field $b$ in (\ref{RODE}). The latter suggests that this procedure
leads to the removal of a "thin" set of "worst case" input data, which do not
allow for regularization or the restoration of well-posedness. It would be
interesting here to develop methods for measuring the size of such
"thin" sets.
\bigskip
The organization of our article is as follows: In Section \ref{frameset} we
discuss the mathematical framework of this paper. Further, in Section \ref%
{monstersection} we derive important estimates via variational techniques
based on Fourier analysis, which are needed later on for the proofs of the
main results of this paper. Section \ref{strongsol} is devoted to the
construction of unique strong solutions to the SDE \eqref{SDE}. Finally, in
Section \ref{flowsection} we show $C^{\infty }-$regularization by noise $%
\mathbb{B}_{\cdot }$ of the singular ODE \eqref{ODE}.
\subsection{Notation}
Throughout the article, we will usually denote by $C$ a generic constant. If
$\pi$ is a collection of parameters then $C_{\pi}$ will denote a collection
of constants depending only on the collection $\pi$. Given differential
structures $M$ and $N$, we denote by $C_c^{\infty}(M;N)$ the space of
infinitely often continuously differentiable functions from $M$ to $N$
with compact support. For a complex number $z\in \mathbb{C}$, $\overline{z}$
denotes the conjugate of $z$ and $\boldsymbol{i}$ the imaginary unit. Given a
vector space $E$, we denote by $|x|$ the Euclidean norm of $x\in E$. For a
matrix $A$, we denote by $|A|$ its determinant and by $\|A\|_\infty$ its
maximum norm.
\section{Framework and Setting}
\label{frameset}
In this section we recollect some specifics on Fourier analysis, shuffle
products, fractional calculus and fractional Brownian motion which will be
extensively used throughout the article. The reader might consult \cite%
{Mall97}, \cite{Mall78} or \cite{DOP08} for a general theory on Malliavin
calculus for Brownian motion and \cite[Chapter 5]{Nua10} for fractional
Brownian motion. For more detailed theory on harmonic analysis and Fourier
transform the reader is referred to \cite{grafakos.08}.
\subsection{Fourier Transform}
In the course of the paper we will make use of the Fourier transform. There
are several definitions in the literature; in the present article we adopt
the following one: for $f\in L^1(\R^d)$ we define its \emph{Fourier
transform}, denoted by $\widehat{f}$, by
\begin{align} \label{Fourier}
\widehat{f}(\xi) = \int_{\R^d} f(x) e^{-2\pi \boldsymbol{i} \langle
x,\xi\rangle_{\R^d}} dx, \quad \xi \in \R^d.
\end{align}
The above definition can actually be extended to functions in $L^2(\R^d)$,
and it makes the operator $L^2(\R^d) \ni f \mapsto \widehat{f}\in L^2(\R^d)$
a linear isometry which, by polarization, implies
\begin{equation*}
\langle \widehat{f},\widehat{g}\rangle_{L^2(\R^d)} = \langle f,g\rangle_{L^2(%
\R^d)},\quad f,g\in L^2(\R^d),
\end{equation*}
where
\begin{equation*}
\langle f,g\rangle_{L^2(\R^d)} = \int_{\R^d} f(z)\overline{g(z)} dz,\quad
f,g\in L^2(\R^d).
\end{equation*}
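As a quick numerical sanity check of the convention \eqref{Fourier}, and purely as an illustration outside the paper's analysis, one can verify that the Gaussian $e^{-\pi x^{2}}$ is a fixed point of this Fourier transform in dimension $d=1$ (the truncation length and grid size below are ad hoc):

```python
import math, cmath

def fourier(f, xi, L=20.0, n=4000):
    # Riemann-sum approximation of the Fourier transform in convention (Fourier):
    #   f_hat(xi) = int_R f(x) exp(-2*pi*i*x*xi) dx   (here d = 1)
    h = 2 * L / n
    return sum(f(-L + k * h) * cmath.exp(-2j * math.pi * (-L + k * h) * xi) * h
               for k in range(n))

# With this normalization the Gaussian exp(-pi x^2) is its own transform.
f = lambda x: math.exp(-math.pi * x ** 2)
for xi in (0.0, 0.5, 1.0):
    assert abs(fourier(f, xi) - math.exp(-math.pi * xi ** 2)) < 1e-6
```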
\subsection{Shuffles}
\label{VI_shuffles}
Let $k\in \mathbb{N}$. For given $m_1,\dots, m_k\in \mathbb{N}$, denote
\begin{equation*}
m_{1:j} := \sum_{i=1}^j m_i,
\end{equation*}
e.g. $m_{1:k} = m_1+\cdots +m_k$, and set $m_0:=0$. Denote by $S_{m}$ the
set of permutations of $\{1,\dots, m\}$, $m \in \mathbb{N}$. Define the set
of \emph{shuffle permutations} of length $m_{1:k} = m_1+\cdots +m_k$ as
\begin{equation*}
S(m_1,\dots, m_k) := \{\sigma\in S_{m_{1:k}}: \, \sigma(m_{1:i} +1)<\cdots
<\sigma(m_{1:i+1}), \, i=0,\dots,k-1\},
\end{equation*}
and the $m$-dimensional simplex in $[0,T]^m$ as
\begin{equation*}
\Delta_{t_0,t}^m:=\{(s_1,\dots,s_m)\in [0,T]^m : \, t_0<s_1<\cdots <
s_m<t\}, \quad t_0,t\in [0,T], \quad t_0<t.
\end{equation*}
Let $f_i:[0,T] \rightarrow [0,\infty)$, $i=1,\dots,m_{1:k}$ be integrable
functions. Then, we have
\begin{align} \label{VI_shuffle}
\begin{split}
\prod_{i=0}^{k-1} \int_{\Delta_{t_0,t}^{m_i}} f_{m_{1:i}+1}(s_{m_{1:i}+1})
&\cdots f_{m_{1:i+1}}(s_{m_{1:i+1}}) ds_{m_{1:i}+1}\cdots ds_{m_{1:i+1}} \\
&= \sum_{\sigma^{-1}\in S(m_1,\dots, m_k)} \int_{\Delta_{t_0,t}^{m_{1:k}}}
\prod_{i=1}^{m_{1:k}} f_{\sigma(i)}(w_i) dw_1\cdots dw_{m_{1:k}}.
\end{split}%
\end{align}
The above is a trivial generalisation of the case $k=2$ where
\begin{align} \label{shuffleIntegral}
\begin{split}
\int_{\substack{ t_0<s_1\cdots <s_{m_1}<t \\ t_0<s_{m_1+1}<\cdots
<s_{m_1+m_2}<t}} &\prod_{i=1}^{m_1+m_2} f_i(s_i) \, ds_1 \cdots ds_{m_1+m_2}
\\
&\hspace{-1cm}= \sum_{\sigma^{-1}\in S(m_1,m_2)} \int_{t_0<w_1<\cdots
<w_{m_1+m_2}<t} \prod_{i=1}^{m_1+m_2} f_{\sigma(i)} (w_i) dw_1\cdots
dw_{m_1+m_2}
\end{split}%
,
\end{align}
which can, for instance, be found in \cite{LCL.04}.
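The identity \eqref{shuffleIntegral} can be checked numerically in the simplest case $m_1=m_2=1$, where the right-hand side runs over the two shuffles of two letters. The following sketch is purely illustrative; the test functions and grid size are chosen ad hoc:

```python
def interval_sum(f, t=1.0, n=400):
    # Midpoint Riemann sum of int_0^t f(s) ds.
    h = t / n
    return sum(f((i + 0.5) * h) * h for i in range(n))

def simplex_sum(f1, f2, t=1.0, n=400):
    # Riemann sum over the simplex {0 < w1 < w2 < t} of
    # f1(w1) f2(w2) + f2(w1) f1(w2), i.e. the two shuffles for m1 = m2 = 1.
    h = t / n
    total = 0.0
    for j in range(n):
        w2 = (j + 0.5) * h
        for i in range(j):
            w1 = (i + 0.5) * h
            total += (f1(w1) * f2(w2) + f2(w1) * f1(w2)) * h * h
    return total

f1 = lambda s: 1.0
f2 = lambda s: s
lhs = interval_sum(f1) * interval_sum(f2)   # product of the two iterated integrals
rhs = simplex_sum(f1, f2)                   # sum over the 2 shuffle permutations
assert abs(lhs - rhs) < 1e-2                # both sides equal 1/2 here
```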
We will also need the following formula. Let $j_1,\dots,
j_{k-1}\in \mathbb{N}$ be indices such that $1\leq j_i\leq m_{i+1}$, $i=1,\dots,k-1$,
and set $j_0:=m_1+1$. Introduce the subset $S_{j_1,\dots,j_{k-1}}(m_1,%
\dots,m_k)$ of $S(m_1,\dots, m_k)$ defined as
\begin{align*}
S_{j_1,\dots, j_{k-1}}(m_1,\dots,m_k):=& \, \Big\{\sigma \in
S(m_1,\dots,m_k): \, \sigma(m_{1:i}+1)<\cdots <\sigma(m_{1:i} + j_i -1), \\
&\sigma(l)=l, \, m_{1:i} + j_i \leq l \leq m_{1:i+1} , \, i=0,\dots,k-1 %
\Big\}.
\end{align*}
We have
\begin{align} \label{VI_shuffle2}
\begin{split}
&\int_{\Delta_{t_0,t}^{m_k} \times
\Delta_{t_0,s_{m_{1:k-1}+j_{k-1}}}^{m_{k-1}} \times \cdots \times
\Delta_{t_0, s_{m_1+j_1}}^{m_1}} \prod_{i=1}^{m_{1:k}} f_i (s_i) \,
ds_1\cdots ds_{m_{1:k}} \\
& \hspace{1cm} = \int_{\substack{ t_0< s_1<\cdots <s_{m_1}< s_{m_1+j_1} \\ %
t_0<s_{m_1+1}<\cdots < s_{m_1+m_2}< s_{m_1+m_2+j_2} \\ \vdots \\ %
t_0<s_{m_1+\cdots +m_{k-1}+1}<\cdots <s_{m_1+\cdots +m_k}< t}}
\prod_{i=1}^{m_{1:k}} f_i (s_i) \, ds_1\cdots ds_{m_{1:k}} \\
& \hspace{1cm} = \sum_{\sigma^{-1}\in S_{j_1,\dots, j_{k-1}}(m_1,\dots,
m_k)} \int_{t_0<w_1<\cdots <w_{m_{1:k}}<t} \prod_{i=1}^{m_{1:k}}
f_{\sigma(i)}(w_i) \, dw_1\cdots dw_{m_{1:k}}.
\end{split}%
\end{align}
Furthermore, one has
\begin{equation*}
\# S(m_1,\dots,m_k) = \frac{(m_1+\cdots+m_k)!}{m_1! \cdots m_k!},
\end{equation*}
where $\#$ denotes the number of elements in the given set. Then by using
Stirling's approximation, one can show that
\begin{equation*}
\# S(m_1,\dots,m_k) \leq C^{m_1+\cdots+m_k}
\end{equation*}
for a large enough constant $C>0$. Moreover,
\begin{equation*}
\# S_{j_1,\dots,j_{k-1}}(m_1,\dots,m_k) \leq \# S(m_1,\dots,m_k).
\end{equation*}
\subsection{Fractional Calculus}
\label{VI_fraccal} We review here some basic definitions and
properties of fractional calculus. The reader may consult \cite%
{samko.et.al.93} and \cite{lizorkin.01} for more information about this
subject.
Suppose $a,b\in \R$ with $a<b$. Further, let $f\in L^{p}([a,b])$ with $p\geq
1$ and $\alpha >0$. Introduce the \emph{left-} and \emph{right-sided
Riemann-Liouville fractional integrals} by
\begin{equation*}
I_{a^{+}}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int_{a}^{x}(x-y)^{\alpha
-1}f(y)dy
\end{equation*}%
and
\begin{equation*}
I_{b^{-}}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\int_{x}^{b}(y-x)^{\alpha
-1}f(y)dy
\end{equation*}%
for almost all $x\in \lbrack a,b]$, where $\Gamma $ stands for the Gamma
function.
Furthermore, for $p\geq 1$, denote by $I_{a^{+}}^{\alpha }(L^{p})$
(resp. $I_{b^{-}}^{\alpha }(L^{p})$) the image of $L^{p}([a,b])$ under the
operator $I_{a^{+}}^{\alpha }$ (resp. $I_{b^{-}}^{\alpha }$). If $f\in
I_{a^{+}}^{\alpha }(L^{p})$ (resp. $f\in I_{b^{-}}^{\alpha }(L^{p})$) and $%
0<\alpha <1$ then we define the \emph{left-} and \emph{right-sided
Riemann-Liouville fractional derivatives} by
\begin{equation*}
D_{a^{+}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\frac{\diff}{\diff x}%
\int_{a}^{x}\frac{f(y)}{(x-y)^{\alpha }}dy
\end{equation*}%
and
\begin{equation*}
D_{b^{-}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\frac{\diff}{\diff x}%
\int_{x}^{b}\frac{f(y)}{(y-x)^{\alpha }}dy.
\end{equation*}
The above left- and right-sided derivatives of $f$ can be represented as
follows:
\begin{equation*}
D_{a^{+}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\left( \frac{f(x)}{%
(x-a)^{\alpha }}+\alpha \int_{a}^{x}\frac{f(x)-f(y)}{(x-y)^{\alpha +1}}%
dy\right) ,
\end{equation*}
\begin{equation*}
D_{b^{-}}^{\alpha }f(x)=\frac{1}{\Gamma (1-\alpha )}\left( \frac{f(x)}{%
(b-x)^{\alpha }}+\alpha \int_{x}^{b}\frac{f(x)-f(y)}{(y-x)^{\alpha +1}}%
dy\right) .
\end{equation*}
By construction one also finds the relations
\begin{equation*}
I_{a^{+}}^{\alpha }(D_{a^{+}}^{\alpha }f)=f
\end{equation*}%
for all $f\in I_{a^{+}}^{\alpha }(L^{p})$ and
\begin{equation*}
D_{a^{+}}^{\alpha }(I_{a^{+}}^{\alpha }f)=f
\end{equation*}%
for all $f\in L^{p}([a,b])$ and similarly for $I_{b^{-}}^{\alpha }$ and $%
D_{b^{-}}^{\alpha }$.
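The definitions above can be illustrated numerically: for $f\equiv 1$ one has the classical closed form $I_{0^{+}}^{\alpha }1(x)=x^{\alpha }/\Gamma (\alpha +1)$, which even a crude midpoint rule (grid size ad hoc; a sketch, not part of the paper's arguments) reproduces despite the integrable singularity at $y=x$:

```python
import math

def riemann_liouville_I(f, alpha, x, n=20000):
    # Midpoint-rule approximation of the left-sided Riemann-Liouville integral
    #   I_{0+}^alpha f(x) = (1 / Gamma(alpha)) * int_0^x (x - y)^(alpha - 1) f(y) dy;
    # the singularity at y = x is integrable for alpha in (0, 1).
    h = x / n
    s = sum((x - (k + 0.5) * h) ** (alpha - 1) * f((k + 0.5) * h) * h
            for k in range(n))
    return s / math.gamma(alpha)

# Closed form for f == 1:  I_{0+}^alpha 1 (x) = x^alpha / Gamma(alpha + 1).
alpha, x = 0.5, 1.0
approx = riemann_liouville_I(lambda y: 1.0, alpha, x)
exact = x ** alpha / math.gamma(alpha + 1)
assert abs(approx - exact) < 1e-2
```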
\subsection{Fractional Brownian motion}
Consider $d$-dimensional \emph{fractional Brownian motion }$%
B_{t}^{H}=(B_{t}^{H,(1)},...,B_{t}^{H,(d)}),$ $0\leq t\leq T$ with Hurst
parameter $H\in (0,1/2)$. So $B_{\cdot }^{H}$ is a centered Gaussian process
with covariance structure
\begin{equation*}
(R_{H}(t,s))_{i,j}:=E[B_{t}^{H,(i)}B_{s}^{H,(j)}]=\delta _{i,j}\frac{1}{2}%
\left( t^{2H}+s^{2H}-|t-s|^{2H}\right) ,\quad i,j=1,\dots ,d,
\end{equation*}%
where $\delta _{i,j}=1$ if $i=j$ and $\delta _{i,j}=0$ otherwise.
One finds that $E[|B_{t}^{H}-B_{s}^{H}|^{2}]=d|t-s|^{2H}$. The latter
implies that $B_{\cdot }^{H}$ has stationary increments and H\"{o}lder
continuous trajectories of index $H-\varepsilon $ for all $\varepsilon \in
(0,H)$. In addition, one also checks that the increments of $B_{\cdot }^{H}$%
, $H\in (0,1/2)$, are not independent. This fact, however, complicates the
study of e.g. SDE's driven by such processes compared to the Wiener
setting. Another difficulty one is faced with in connection with such
processes is that they are not semimartingales, see e.g. \cite[Proposition
5.1.1]{Nua10}.
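The increment identity $E[|B_{t}^{H}-B_{s}^{H}|^{2}]=d|t-s|^{2H}$ follows from the covariance structure by expanding the square; a one-line numerical check (the parameter values below are arbitrary, for illustration only):

```python
def R_H(t, s, H):
    # Covariance function of a single component of fractional Brownian motion.
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

def increment_variance(t, s, H, d):
    # E[|B_t^H - B_s^H|^2] = d * (R(t,t) + R(s,s) - 2 R(t,s)) by expanding the square.
    return d * (R_H(t, t, H) + R_H(s, s, H) - 2 * R_H(t, s, H))

H, d = 0.3, 3
for (t, s) in [(1.0, 0.25), (0.8, 0.5)]:
    assert abs(increment_variance(t, s, H, d) - d * abs(t - s) ** (2 * H)) < 1e-12
```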
In what follows let us briefly discuss the construction of fractional
Brownian motion via an isometry. In fact, this construction can be done
componentwise. Therefore, for convenience we confine ourselves to the
one-dimensional case. We refer to \cite{Nua10} for further details.
Let us denote by $\mathcal{E}$ the set of step functions on $[0,T]$ and by $%
\mathcal{H}$ the Hilbert space, which is obtained by the closure of $%
\mathcal{E}$ with respect to the inner product
\begin{equation*}
\langle 1_{[0,t]},1_{[0,s]}\rangle _{\mathcal{H}}=R_{H}(t,s).
\end{equation*}%
The mapping $1_{[0,t]}\mapsto B_{t}^{H}$ has an extension to an isometry
between $\mathcal{H}$ and the Gaussian subspace of $L^{2}(\Omega )$
associated with $B^{H}$. We denote the isometry by $\varphi \mapsto
B^{H}(\varphi )$.
The following result, which can be found in \cite[Proposition 5.1.3]%
{Nua10}, provides an integral representation of $R_{H}(t,s)$, when $H<1/2$:
\begin{prop}
Let $H<1/2$. The kernel
\begin{equation*}
K_H(t,s)= c_H \left[\left( \frac{t}{s}\right)^{H- \frac{1}{2}} (t-s)^{H-
\frac{1}{2}} + \left( \frac{1}{2}-H\right) s^{\frac{1}{2}-H} \int_s^t u^{H-%
\frac{3}{2}} (u-s)^{H-\frac{1}{2}} du\right],
\end{equation*}
where $c_H = \sqrt{\frac{2H}{(1-2H) \beta(1-2H , H+1/2)}}$, $\beta$ being the
Beta function, satisfies
\begin{align} \label{VI_RH}
R_H(t,s) = \int_0^{t\wedge s} K_H(t,u)K_H(s,u)du.
\end{align}
\end{prop}
The kernel $K_{H}$ also has a representation in terms of a fractional
derivative as follows
\begin{equation*}
K_{H}(t,s)=c_{H}\Gamma \left( H+\frac{1}{2}\right) s^{\frac{1}{2}-H}\left(
D_{t^{-}}^{\frac{1}{2}-H}u^{H-\frac{1}{2}}\right) (s).
\end{equation*}
Let us now introduce a linear operator $K_{H}^{\ast }:\mathcal{E}\rightarrow
L^{2}([0,T])$ by
\begin{equation*}
(K_{H}^{\ast }\varphi )(s)=K_{H}(T,s)\varphi (s)+\int_{s}^{T}(\varphi
(t)-\varphi (s))\frac{\partial K_{H}}{\partial t}(t,s)dt
\end{equation*}%
for every $\varphi \in \mathcal{E}$. We see that $(K_{H}^{\ast
}1_{[0,t]})(s)=K_{H}(t,s)1_{[0,t]}(s)$. From this and \eqref{VI_RH} we
obtain that $K_{H}^{\ast }$ is an isometry between $\mathcal{E}$ and $%
L^{2}([0,T])$ which has an extension to the Hilbert space $\mathcal{H}$.
For $\varphi \in \mathcal{H}$ one can prove the following representations of
$K_{H}^{\ast }$:
\begin{equation*}
(K_{H}^{\ast }\varphi )(s)=c_{H}\Gamma \left( H+\frac{1}{2}\right) s^{\frac{1%
}{2}-H}\left( D_{T^{-}}^{\frac{1}{2}-H}u^{H-\frac{1}{2}}\varphi (u)\right)
(s),
\end{equation*}
\begin{align*}
(K_{H}^{\ast }\varphi )(s)=& \,c_{H}\Gamma \left( H+\frac{1}{2}\right)
\left( D_{T^{-}}^{\frac{1}{2}-H}\varphi (s)\right) (s) \\
& +c_{H}\left( \frac{1}{2}-H\right) \int_{s}^{T}\varphi (t)(t-s)^{H-\frac{3}{%
2}}\left( 1-\left( \frac{t}{s}\right) ^{H-\frac{1}{2}}\right) dt.
\end{align*}
On the other hand one also gets the relation $\mathcal{H}=I_{T^{-}}^{\frac{1%
}{2}-H}(L^{2})$ (see \cite{decreu.ustunel.98} and \cite[Proposition 6]%
{alos.mazet.nualart.01}).
Using the fact that $K_{H}^{\ast }$ is an isometry from $\mathcal{H}$ into $%
L^{2}([0,T])$, the $d$-dimensional process $W=\{W_{t},t\in \lbrack 0,T]\}$
given by
\begin{equation*}
W_{t}:=B^{H}((K_{H}^{\ast })^{-1}(1_{[0,t]}))
\end{equation*}%
is a Wiener process and the process $B^{H}$ can be represented as
\begin{equation}
B_{t}^{H}=\int_{0}^{t}K_{H}(t,s)dW_{s}\text{.}
\label{VI_BHW}
\end{equation}%
See \cite{alos.mazet.nualart.01}.
In the sequel, we denote by $W_{\cdot }$ a standard Wiener process on a
given probability space endowed with the natural filtration generated by $W$
augmented by all $P$-null sets. Further, $B_{\cdot }:=B_{\cdot }^{H}$ stands
for the fractional Brownian motion with Hurst parameter $H\in (0,1/2)$ given
by the representation \eqref{VI_BHW}.
In the following, we need a version of Girsanov's theorem for fractional
Brownian motion which goes back to \cite[Theorem 4.9]{decreu.ustunel.98}.
Here we state the version given in \cite[Theorem 3.1]{nualart.ouknine.02}.
In preparation of this, we introduce an isomorphism $K_{H}$ from $%
L^{2}([0,T])$ onto $I_{0+}^{H+\frac{1}{2}}(L^{2})$ associated with the
kernel $K_{H}(t,s)$ in terms of the fractional integrals as follows (see
\cite[Theorem 2.1]{decreu.ustunel.98}):
\begin{equation*}
(K_{H}\varphi )(s)=I_{0^{+}}^{2H}s^{\frac{1}{2}-H}I_{0^{+}}^{\frac{1}{2}%
-H}s^{H-\frac{1}{2}}\varphi ,\quad \varphi \in L^{2}([0,T]).
\end{equation*}
Using the latter and the properties of the Riemann-Liouville fractional
integrals and derivatives, one finds that the inverse of $K_{H}$ is given by
\begin{equation} \label{opK_H-1}
(K_{H}^{-1}\varphi )(s)=s^{\frac{1}{2}-H}D_{0^{+}}^{\frac{1}{2}-H}s^{H-\frac{%
1}{2}}D_{0^{+}}^{2H}\varphi (s),\quad \varphi \in I_{0+}^{H+\frac{1}{2}%
}(L^{2}).
\end{equation}
Hence, if $\varphi $ is absolutely continuous (see \cite{nualart.ouknine.02}%
), one can prove that
\begin{equation} \label{VI_inverseKH}
(K_{H}^{-1}\varphi )(s)=s^{H-\frac{1}{2}}I_{0^{+}}^{\frac{1}{2}-H}s^{\frac{1%
}{2}-H}\varphi ^{\prime }(s),\quad a.e.
\end{equation}
\begin{thm}[Girsanov's theorem for fBm]
\label{VI_girsanov} Let $u=\{u_t, t\in [0,T]\}$ be an $\mathcal{F}$-adapted
process with integrable trajectories and set $\widetilde{B}_t^H = B_t^H +
\int_0^t u_s ds, \quad t\in [0,T].$ Assume that
\begin{itemize}
\item[(i)] $\int_0^{\cdot} u_s ds \in I_{0+}^{H+\frac{1}{2}} (L^2 ([0,T]))$,
$P$-a.s.
\item[(ii)] $E[\xi_T]=1$ where
\begin{equation*}
\xi_T := \exp\left\{-\int_0^T K_H^{-1}\left( \int_0^{\cdot} u_r
dr\right)(s)dW_s - \frac{1}{2} \int_0^T K_H^{-1} \left( \int_0^{\cdot} u_r
dr \right)^2(s)ds \right\}.
\end{equation*}
\end{itemize}
Then the shifted process $\widetilde{B}^H$ is an $\mathcal{F}$-fractional
Brownian motion with Hurst parameter $H$ under the new probability $%
\widetilde{P}$ defined by $\frac{d\widetilde{P}}{dP}=\xi_T$.
\end{thm}
\begin{rem}
For the multidimensional case, define
\begin{equation*}
(K_H \varphi)(s):= ( (K_H \varphi^{(1)} )(s), \dots, (K_H
\varphi^{(d)})(s))^{\ast}, \quad \varphi \in L^2([0,T];\R^d),
\end{equation*}
where $\ast$ denotes transposition. Similarly for $K_H^{-1}$ and $K_H^{\ast}$%
.
\end{rem}
Finally, we mention a crucial property of the fractional Brownian motion
which was proven in \cite{pitt.78} for general Gaussian vector fields.
Let $m\in \mathbb{N}$ and $0=:t_0<t_1<\cdots <t_m<T$. Then for every $%
\xi_1,\dots, \xi_m\in \R^d$ there exists a positive finite constant $C>0$
(depending on $m$) such that
\begin{align} \label{VI_SLND}
Var\left[ \sum_{j=1}^m \langle\xi_j, B_{t_j}^H-B_{t_{j-1}}^H\rangle_{\R^d}%
\right] \geq C \sum_{j=1}^m |\xi_j|^2 E\left[|B_{t_j}^H-B_{t_{j-1}}^H|^2%
\right].
\end{align}
The above property is known as the \emph{local non-determinism} property of
the fractional Brownian motion. A stronger version of the local
non-determinism, which we want to make use of in this paper and which is
referred to as \emph{two sided strong local non-determinism} in the
literature, is also satisfied by the fractional Brownian motion: There
exists a constant $K>0$, depending only on $H$ and $T$, such that for any $%
t\in \lbrack 0,T]$, $0<r<t$,
\begin{equation}
Var\left[ B_{t}^{H}|\ \{B_{s}^{H}:|t-s|\geq r\}\right] \geq Kr^{2H}.
\label{2sided}
\end{equation}%
The reader may e.g. consult \cite{pitt.78} or \cite{xiao.11} for more
information on this property.
\section{A New Regularizing Process}
\label{monstersection}
Throughout this article we operate on a probability space $(\Omega ,%
\mathfrak{A},P)$ equipped with a filtration $\mathcal{F}:=\{\mathcal{F}%
_{t}\}_{t\in \lbrack 0,T]}$ where $T>0$ is fixed, generated by a process $%
\mathbb{B}_{\cdot }=\mathbb{B}_{\cdot }^{H}=\{\mathbb{B}_{t}^{H},t\in
\lbrack 0,T]\}$ to be defined later and here $\mathfrak{A}:=\mathcal{F}_{T}$.
Let $H=\{H_{n}\}_{n\geq 1}\subset (0,1/2)$ be a sequence of numbers such
that $H_{n}\searrow 0$ for $n\longrightarrow \infty $. Also, consider $%
\lambda =\{\lambda _{n}\}_{n\geq 1}\subset \R$ a sequence of real numbers
such that there exists a bijection
\begin{equation}
\{n:\lambda _{n}\neq 0\}\rightarrow \mathbb{N} \label{lambdacond}
\end{equation}%
and
\begin{equation}
\sum_{n=1}^{\infty }|\lambda _{n}|\in (0,\infty ). \label{lambdacond2}
\end{equation}
Let $\{W_{\cdot }^{n}\}_{n\geq 1}$ be a sequence of independent $d$%
-dimensional standard Brownian motions taking values in $\R^{d}$ and define
for every $n\geq 1$,
\begin{equation} \label{compfBm}
B_{t}^{H_{n},n}=\int_{0}^{t}K_{H_{n}}(t,s)dW_{s}^{n}=\left(
\int_{0}^{t}K_{H_{n}}(t,s)dW_{s}^{n,1},\dots
,\int_{0}^{t}K_{H_{n}}(t,s)dW_{s}^{n,d}\right) ^{\ast }.
\end{equation}
By construction, $B_{\cdot }^{H_{n},n}$, $n\geq 1$ are pairwise independent $%
d$-dimensional fractional Brownian motions with Hurst parameters $H_{n}$.
Observe that $W_{\cdot }^{n}$ and $B_{\cdot }^{H_{n},n}$ generate the same
filtrations, see \cite[Chapter 5, p. 280]{Nua10}. We will be interested in
the following stochastic process
\begin{equation}
\mathbb{B}_{t}^{H}=\sum_{n=1}^{\infty }\lambda _{n}B_{t}^{H_{n},n},\quad
t\in \lbrack 0,T]. \label{monster}
\end{equation}
Finally, we need another technical condition on the sequence $\lambda
=\{\lambda _{n}\}_{n\geq 1}$, which is used to ensure continuity of the
sample paths of $\mathbb{B}_{\cdot }^{H}$:
\begin{equation}
\sum_{n=1}^{\infty }|\lambda _{n}|E\left[ \sup_{0\leq s\leq
1}|B_{s}^{H_{n},n}|\right] <\infty , \label{contcond}
\end{equation}%
where indeed $\sup_{0\leq s\leq 1}|B_{s}^{H_{n},n}|\in L^{1}(\Omega )$; see
e.g. \cite{berman.89}.
The following theorem gives a precise definition of the process $\mathbb{B}%
_{\cdot }^{H}$ and some of its relevant properties.
\begin{thm}
\label{monsterprocess} Let $H=\{H_{n}\}_{n\geq 1}\subset (0,1/2)$ be a
sequence of real numbers such that $H_{n}\searrow 0$ for $n\longrightarrow
\infty $ and $\lambda =\{\lambda _{n}\}_{n\geq 1}\subset \R$ satisfying %
\eqref{lambdacond}, \eqref{lambdacond2} and \eqref{contcond}. Let $%
\{B_{\cdot }^{H_{n},n}\}_{n=1}^{\infty }$ be a sequence of $d$-dimensional
independent fractional Brownian motions with Hurst parameters $H_{n}$, $%
n\geq 1$, defined as in \eqref{compfBm}. Define the process
\begin{equation*}
\mathbb{B}_{t}^{H}:=\sum_{n=1}^{\infty }\lambda _{n}B_{t}^{H_{n},n},\quad
t\in \lbrack 0,T],
\end{equation*}%
where the convergence is $P$-a.s. and $\mathbb{B}_{t}^{H}$ is a well defined
object in $L^{2}(\Omega )$ for every $t\in \lbrack 0,T]$.
Moreover, $\mathbb{B}_t^H$ is normally distributed with zero mean and
covariance given by
\begin{equation*}
E[\mathbb{B}_t^H (\mathbb{B}_s^H)^\ast] = \sum_{n=1}^{\infty} \lambda_n^2
R_{H_n}(t,s)I_d,
\end{equation*}
where $\ast$ denotes transposition, $I_d$ is the $d$-dimensional identity
matrix and $R_{H_n}(t,s):= \frac{1}{2}\left(s^{2H_n}+t^{2H_n}-|t-s|^{2H_n}%
\right)$ denotes the covariance function of the components of the fractional
Brownian motions $B_t^{H_n,n}$.
The process $\mathbb{B}_{\cdot }^{H}$ has stationary increments. It does not
admit any version with H\"{o}lder continuous paths of any order. $\mathbb{B}%
_{\cdot }^{H}$ has no finite $p$-variation for any order $p>0$, hence $%
\mathbb{B}_{\cdot }^{H}$ is not a semimartingale. It is not a Markov process
and hence it does not possess independent increments.
Finally, under condition \eqref{contcond}, $\mathbb{B}_{\cdot }^{H}$ has $P$%
-a.s. continuous sample paths.
\end{thm}
\begin{proof}
One can verify, employing Kolmogorov's three series theorem, that the series
converges $P$-a.s. and we easily see that
\begin{equation*}
E[|\mathbb{B}_t^H|^2] = d\sum_{n=1}^{\infty}\lambda_n^2 t^{2H_n}\leq d(1+t)
\sum_{n=1}^{\infty}\lambda_n^2<\infty,
\end{equation*}
where we used that $x^{\alpha}\leq 1+x$ for all $x\geq 0$ and any $\alpha\in[%
0,1]$.
The Gaussianity of $\mathbb{B}_{t}^{H}$ follows simply by observing that for
every $\theta \in \R^{d}$,
\begin{equation*}
E\left[ \exp \left\{ \boldsymbol{i}\langle \theta ,\mathbb{B}_{t}^{H}\rangle
_{\R^{d}}\right\} \right] =e^{-\frac{1}{2}\sum_{n=1}^{\infty
}\sum_{j=1}^{d}\lambda _{n}^{2}t^{2H_{n}}\theta _{j}^{2}},
\end{equation*}%
where we used the independence of $B_{t}^{H_{n},n}$ for every $n\geq 1$. The
covariance formula follows easily again by independence of $B_{t}^{H_{n},n}$.
The stationarity follows by the fact that $B^{H_n,n}$ are independent and
stationary for all $n\geq 1$.
The process $\mathbb{B}_{\cdot }^{H}$ could \emph{a priori} be very
irregular. Since $\mathbb{B}_{\cdot }^{H}$ is a stochastically continuous
separable process with stationary increments, we know by \cite[Theorem 5.3.10%
]{MR.06} that either $\mathbb{B}^{H}$ has $P$-a.s. continuous sample paths
on all open subsets of $[0,T]$ or $\mathbb{B}^{H}$ is $P$-a.s. unbounded on
all open subsets of $[0,T]$. Under condition \eqref{contcond} and using the
self-similarity of the fractional Brownian motions we see that
\begin{align*}
E\left[ \sup_{s\in \lbrack 0,T]}|\mathbb{B}_{s}^{H}|\right] & \leq
\sum_{n=1}^{\infty }|\lambda _{n}|T^{H_{n}}E\left[ \sup_{s\in \lbrack
0,1]}|B_{s}^{H_{n},n}|\right] \\
& \hspace{2cm}\leq (1+T)\sum_{n=1}^{\infty }|\lambda _{n}|E\left[ \sup_{s\in
\lbrack 0,1]}|B_{s}^{H_{n},n}|\right] <\infty
\end{align*}%
and hence by Belyaev's dichotomy for separable stochastically continuous
processes with stationary increments (see e.g. \cite[Theorem 5.3.10]{MR.06})
there exists a version of $\mathbb{B}_{\cdot }^{H}$ with continuous sample
paths.
Trivially, $\mathbb{B}_{\cdot }^{H}$ is never H\"{o}lder continuous since
for arbitrarily small $\alpha >0$ there is always $n_{0}\geq 1$ such that $%
H_{n}<\alpha $ for all $n\geq n_{0}$, and since the sequence $\lambda $
satisfies \eqref{lambdacond}, cancellations are not possible. Further, one
also argues that $\mathbb{B}_{\cdot }^{H}$ is neither Markov nor has finite
variation of any order $p>0$, which then implies that $\mathbb{B}_{\cdot
}^{H} $ is not a semimartingale.
\end{proof}
We will refer to \eqref{monster} as a \emph{regularizing} cylindrical
fractional Brownian motion with associated Hurst sequence $H$ or simply a
regularizing fBm.
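To illustrate, here is a sketch with a hypothetical admissible choice of parameters, $\lambda _{n}=2^{-n}$ and $H_{n}=\frac{1}{2(n+1)}$ (these specific sequences are our own illustration, not taken from the paper; they satisfy \eqref{lambdacond} and \eqref{lambdacond2}), verifying the second-moment bound $E[|\mathbb{B}_{t}^{H}|^{2}]\leq d(1+t)\sum_{n}\lambda _{n}^{2}$ used in the proof above:

```python
def second_moment(t, d, N=200):
    # Truncated series E[|B_t^H|^2] = d * sum_{n>=1} lambda_n^2 t^(2 H_n)
    # for the hypothetical choice lambda_n = 2^(-n), H_n = 1/(2(n+1)),
    # so lambda_n^2 = 4^(-n) and t^(2 H_n) = t^(1/(n+1)).
    return d * sum(4.0 ** (-n) * t ** (1.0 / (n + 1)) for n in range(1, N + 1))

d = 2
for t in (0.1, 1.0, 3.0):
    # Bound from the proof: x^a <= 1 + x for a in [0,1] gives
    # E[|B_t^H|^2] <= d (1 + t) sum_n lambda_n^2.
    bound = d * (1 + t) * sum(4.0 ** (-n) for n in range(1, 201))
    assert second_moment(t, d) <= bound + 1e-12
```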
Next, we state a version of Girsanov's theorem which actually shows that
equation \eqref{maineq} admits a weak solution. Its proof is mainly based on
the classical Girsanov theorem for a standard Brownian motion in Theorem \ref%
{VI_girsanov}.
\begin{thm}[Girsanov]
\label{girsanov} Let $u:[0,T]\times \Omega \rightarrow \R^d$ be a (jointly
measurable) $\mathcal{F}$-adapted process with integrable trajectories such
that $t\mapsto \int_0^t u_s ds$ belongs to the domain of the operator $%
K_{H_{n_0}}^{-1}$ from \eqref{opK_H-1} for some $n_0\geq 1$.
Define the $\R^d$-valued process
\begin{equation*}
\widetilde{\mathbb{B}}_t^H := \mathbb{B}_t^H + \int_0^t u_s ds.
\end{equation*}
Define the probability $\widetilde{P}_{n_0}$ in terms of the Radon-Nikodym
derivative
\begin{equation*}
\frac{d\widetilde{P}_{n_0}}{dP_{n_0}}:=\xi_T^{n_0},
\end{equation*}
where
\begin{equation*}
\xi_T^{n_0} := \exp \left\{-\int_0^T K_{H_{n_0}}^{-1}\left( \frac{1}{%
\lambda_{n_0}} \int_0^{\cdot} u_s ds \right) (s) dW_s^{n_0} -\frac{1}{2}%
\int_0^T \left|K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}
\int_0^{\cdot}u_s ds \right) (s)\right|^2 ds\right\}.
\end{equation*}
If $E[\xi _{T}^{n_{0}}]=1$, then $\widetilde{\mathbb{B}}_{\cdot }^{H}$ is a
regularizing $\R^{d}$-valued cylindrical fractional Brownian motion with
respect to $\mathcal{F}$ under the new measure $\widetilde{P}_{n_{0}}$ with
Hurst sequence $H$.
\end{thm}
\begin{proof}
Indeed, write
\begin{align*}
\widetilde{\mathbb{B}}_t^H &= \int_0^t u_s ds +
\lambda_{n_0}B_t^{H_{n_0},n_0}+\sum_{n\neq n_0}^{\infty} \lambda_n
B_t^{H_n,n} \\
&= \lambda_{n_0}\left(\frac{1}{\lambda_{n_0}}\int_0^t u_s ds +
B_t^{H_{n_0},n_0}\right) + \sum_{n\neq n_0}^{\infty} \lambda_n B_t^{H_n,n} \\
&= \lambda_{n_0}\left(\frac{1}{\lambda_{n_0}}\int_0^t u_s ds + \int_0^t
K_{H_{n_0}}(t,s) dW_s^{n_0}\right) + \sum_{n\neq n_0}^{\infty} \lambda_n
B_t^{H_n,n} \\
&= \lambda_{n_0}\left(\int_0^t K_{H_{n_0}}(t,s) d\widetilde{W}%
_s^{n_0}\right) + \sum_{n\neq n_0}^{\infty} \lambda_n B_t^{H_n,n},
\end{align*}
where
\begin{equation*}
\widetilde{W}_t^{n_0} := W_t^{n_0} + \int_0^t K_{H_{n_0}}^{-1}\left(\frac{1}{%
\lambda_{n_0}} \int_0^{\cdot} u_r dr\right)(s)ds.
\end{equation*}
Then it follows from Theorem \ref{VI_girsanov} or \cite[Theorem 3.1]%
{nualart.ouknine.03} that
\begin{equation*}
\widetilde{B}_t^{H_{n_0},n_0}:= \int_0^t K_{H_{n_0}}(t,s) d\widetilde{W}%
_s^{n_0}
\end{equation*}
is a fractional Brownian motion with Hurst parameter $H_{n_0}$ under the
measure
\begin{equation*}
\frac{d\widetilde{P}_{n_0}}{dP_{n_0}} = \exp \left\{-\int_0^T
K_{H_{n_0}}^{-1}\left( \frac{1}{\lambda_{n_0}} \int_0^{\cdot} u_s ds \right)
(s) dW_s^{n_0} -\frac{1}{2}\int_0^T \left|K_{H_{n_0}}^{-1}\left(\frac{1}{%
\lambda_{n_0}} \int_0^{\cdot} u_s ds \right) (s)\right|^2 ds\right\}.
\end{equation*}
Hence,
\begin{equation*}
\widetilde{\mathbb{B}}_t^H = \sum_{n=1}^{\infty} \lambda_n\widetilde{B}%
_t^{H_n,n},
\end{equation*}
where
\begin{equation*}
\widetilde{B}_t^{H_n,n} =
\begin{cases}
B_t^{H_n,n} \quad \mbox{if}\quad n\neq n_0, \\
\widetilde{B}_t^{H_{n_0},n_0} \quad \mbox{if}\quad n= n_0%
\end{cases}%
,
\end{equation*}
defines a regularizing $\R^d$-valued cylindrical fractional Brownian motion
under $\widetilde{P}_{n_0}$.
\end{proof}
\begin{rem}
In the above Girsanov theorem we only modify the law of the drift plus one
selected fractional Brownian motion with Hurst parameter $H_{n_{0}}$. In our
proof later on, we show that $t\mapsto \int_0^t b(s,\mathbb{B}_s^H)ds$
actually belongs to the domain of the operator $K_{H_n}^{-1}$ for every
$n\geq 1$, but that Novikov's condition is only satisfied for sufficiently
large $n\geq 1$, for arbitrarily chosen values of $p,q\in (2,\infty]$.
\end{rem}
\bigskip
Consider now the following stochastic differential equation with the driving
noise $\mathbb{B}_{\cdot }^{H}$, introduced earlier:
\begin{equation} \label{eqsmooth}
X_{t}=x+\int_{0}^{t}b(s,X_{s})ds+\mathbb{B}_{t}^{H},\quad t\in \lbrack 0,T],
\end{equation}%
where $x\in \R^{d}$ and $b$ is regular.
The following result summarises the classical existence and uniqueness
theorem together with some properties of the solution. Existence and
uniqueness can be established using the classical argument of completeness
of $L^{2}([0,T]\times \Omega )$ in connection with Picard iteration.
\begin{thm}
Let $b:[0,T]\times \R^{d}\rightarrow \R^{d}$ be continuously differentiable
in $\R^{d}$ with bounded derivative uniformly in $t\in \lbrack 0,T]$ and
such that there exists a finite constant $C>0$ independent of $t$ such that $%
|b(t,x)|\leq C(1+|x|)$ for every $(t,x)\in \lbrack 0,T]\times \R^{d}$. Then
equation \eqref{eqsmooth} admits a unique global strong solution which is $P$%
-a.s. continuously differentiable in $x$ and Malliavin differentiable in
each direction $W^{i}$, $i\geq 1$ of $\mathbb{B}_{\cdot }^{H}$. Moreover,
the space derivative and Malliavin derivatives of $X$ satisfy the following
linear equations
\begin{equation*}
\frac{\partial }{\partial x}X_{t}=I_{d}+\int_{0}^{t}b^{\prime }(s,X_{s})%
\frac{\partial }{\partial x}X_{s}ds,\quad t\in \lbrack 0,T]
\end{equation*}%
and
\begin{equation*}
D_{t_{0}}^{i}X_{t}=\lambda
_{i}K_{H_{i}}(t,t_{0})I_{d}+\int_{t_{0}}^{t}b^{\prime
}(s,X_{s})D_{t_{0}}^{i}X_{s}ds,\quad i\geq 1,\quad t_{0},t\in \lbrack
0,T],\quad t_{0}<t,
\end{equation*}%
where $b^{\prime }$ denotes the space Jacobian matrix of $b$, $I_{d}$ the $d$%
-dimensional identity matrix and $D_{t_{0}}^{i}$ the Malliavin derivative
along $W^{i}$, $i\geq 1$. Here, the last identity is meant in the $L^{p}$%
-sense on $[0,T]$.
\end{thm}
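As a simple illustration of the first variation equation (an example of ours, not used in the sequel), take the linear drift $b(t,x)=Ax$ for a constant matrix $A\in \R^{d\times d}$. Then the equation for the space derivative reads
\begin{equation*}
\frac{\partial }{\partial x}X_{t}=I_{d}+\int_{0}^{t}A\frac{\partial }{%
\partial x}X_{s}ds,\quad t\in \lbrack 0,T],
\end{equation*}%
whose unique solution is the matrix exponential $\frac{\partial }{\partial x}%
X_{t}=e^{tA}$, in agreement with the explicit solution of the linear
equation obtained by variation of constants.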
\section{Construction of the Solution}
\label{strongsol}
We aim at constructing a Malliavin differentiable unique global $\mathcal{F}$%
-strong solution to the following equation
\begin{equation} \label{maineq}
dX_{t}=b(t,X_{t})dt+d\mathbb{B}_{t}^{H},\quad X_{0}=x\in \R^{d},\quad t\in
\lbrack 0,T],
\end{equation}%
where the differential is interpreted formally in such a way that if %
\eqref{maineq} admits a solution $X_{\cdot }$, then
\begin{equation*}
X_{t}=x+\int_{0}^{t}b(s,X_{s})ds+\mathbb{B}_{t}^{H},\quad t\in \lbrack 0,T],
\end{equation*}%
whenever it makes sense. Denote by $L_{p}^{q}:=L^{q}([0,T];L^{p}(\R^{d};\R%
^{d}))$, $p,q\in \lbrack 1,\infty ]$ the Banach space of integrable
functions such that
\begin{equation*}
\Vert f\Vert _{L_{p}^{q}}:=\left( \int_{0}^{T}\left( \int_{\R%
^{d}}|f(t,z)|^{p}dz\right) ^{q/p}dt\right) ^{1/q}<\infty ,
\end{equation*}%
where we take the essential supremum norm in the cases $p=\infty $ and $%
q=\infty $.
In this paper, we want to reach the class of discontinuous coefficients $%
b:[0,T]\times \R^{d}\rightarrow \R^{d}$ in the Banach space
\begin{equation*}
\mathcal{L}_{2,p}^{q}:=L^{q}([0,T];L^{p}(\R^{d};\R^{d}))\cap L^{1}(\R%
^{d};L^{\infty }([0,T];\mathbb{R}^{d})),\quad p,q\in (2,\infty ],
\end{equation*}%
of functions $f:[0,T]\times \R^{d}\rightarrow \R^{d}$ with the norm
\begin{equation*}
\Vert f\Vert _{\mathcal{L}_{2,p}^{q}}=\Vert f\Vert _{L_{p}^{q}}+\Vert f\Vert
_{L_{\infty }^{1}}
\end{equation*}%
for chosen $p,q\in (2,\infty ]$, where
\begin{equation*}
L_{\infty }^{1}:=L^{1}(\R^{d};L^{\infty }([0,T];\mathbb{R}^{d})).
\end{equation*}
Hence, our computations also show the result for uniformly bounded
coefficients that are square-integrable.
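To illustrate the class $\mathcal{L}_{2,p}^{q}$ (a simple example of ours,
not needed in the sequel), consider the discontinuous indicator-type drift
\begin{equation*}
b(t,x)=\mathbf{1}_{A}(x)\,v,\quad A\subset \R^{d}\mbox{ bounded and Borel}%
,\quad v\in \R^{d}.
\end{equation*}%
Then $\Vert b\Vert _{L_{p}^{q}}=T^{1/q}|A|^{1/p}|v|$ and $\Vert b\Vert
_{L_{\infty }^{1}}=|A|\,|v|$, where $|A|$ denotes the Lebesgue measure of $A$
(with the usual conventions when $p$ or $q$ equals $\infty $), so that $b\in
\mathcal{L}_{2,p}^{q}$ for all $p,q\in (2,\infty ]$.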
We will show existence and uniqueness of strong solutions of equation %
\eqref{maineq} driven by a $d$-dimensional regularizing fractional Brownian
motion with Hurst sequence $H$ with coefficients $b$ belonging to the class $%
\mathcal{L}_{2,p}^{q}$. Moreover, we will prove that such solution is
Malliavin differentiable and infinitely many times differentiable with
respect to the initial value $x$, where $d\geq 1$, $p,q\in (2,\infty ]$ are
arbitrary.
\begin{rem}
We would like to remark that with the method employed in the present
article, the existence of weak solutions and uniqueness in law hold for
drift coefficients in the space $L_{p}^{q}$. In fact, as we will see later
on, we need the additional space $L_{\infty }^{1}$ to obtain unique strong
solutions.
\end{rem}
This solution is neither a semimartingale nor a Markov process, and it has
very irregular paths. We show in this paper that the process $\mathbb{B}%
_{\cdot }^{H}$ is a suitable noise for producing infinitely classically
differentiable flows of \eqref{maineq} for highly irregular coefficients.
The key to constructing a solution is to approximate $b$ a.e. by a sequence
of smooth functions $b_n$ and, denoting by $X^n = \{X_t^n, t\in [0,T]\}$ the
corresponding approximating solutions, to use an \emph{ad hoc} compactness
argument showing that the set $\{X_t^n\}_{n\geq 1}\subset L^2(\Omega)$ is
relatively compact for fixed $t\in [0,T]$.
As for the regularity of the mapping $x\mapsto X_{t}^{x}$, we are interested
in proving that it is infinitely many times differentiable. It is known that
the SDE $dX_{t}=b(t,X_{t})dt+dB_{t}^{H}$, $X_{0}=x\in \R^{d}$ admits a
unique strong solution for irregular vector fields $b\in L_{\infty ,\infty
}^{1,\infty }$ and that the mapping $x\mapsto X_{t}^{x}$ belongs, $P$-a.s.,
to $C^{k}$ if $H=H(k,d)<1/2$ is small enough. Hence, by adding the noise $%
\mathbb{B}_{\cdot }^{H}$, we should expect the solution of \eqref{maineq} to
have a smooth flow.
Hereunder, we establish the following main result, which will be stated
later on in this Section in a more precise form\emph{\ }(see Theorem \ref%
{VI_mainthm}):
\bigskip
\emph{Let }$b\in \mathcal{L}_{2,p}^{q}$\emph{, }$p,q\in (2,\infty ]$\emph{\
and assume that }$\lambda =\{\lambda _{i}\}_{i\geq 1}$\emph{\ in (}\ref%
{monster}\emph{) satisfies certain growth conditions to be specified later
on. Then there exists a unique (global) strong solution }$X=\{X_{t},t\in
\lbrack 0,T]\}$\emph{\ of equation }\eqref{maineq}\emph{. Moreover, for
every }$t\in \lbrack 0,T]$\emph{, }$X_{t}$\emph{\ is Malliavin
differentiable in each direction of the Brownian motions }$W^{n}$\emph{, }$%
n\geq 1$\emph{\ in }\eqref{compfBm}.
\bigskip
The proof of Theorem \ref{VI_mainthm} consists of the following steps:
\begin{enumerate}
\item First, we give the construction of a weak solution $X_{\cdot }$ to %
\eqref{maineq} by means of Girsanov's theorem for the process $\mathbb{B}%
_{\cdot }^{H}$, that is we introduce a probability space $(\Omega ,\mathfrak{%
A},P)$, on which a regularizing fractional Brownian motion $\mathbb{B}%
_{\cdot }^{H}$ and a process $X_{\cdot }$ are defined, satisfying the SDE %
\eqref{maineq}. However, a priori $X_{\cdot }$ is not adapted to the natural
filtration $\mathcal{F}=\{\mathcal{F}_{t}\}_{t\in \lbrack 0,T]}$ with
respect to $\mathbb{B}_{\cdot }^{H}$.
\item In the next step, consider an approximation of the drift coefficient $%
b $ by a sequence of compactly supported and infinitely continuously
differentiable functions (which always exists by standard approximation
results) $b_{n}:[0,T]\times \R^{d}\rightarrow \R^{d}$, $n\geq 0$ such that $%
b_{n}(t,x)\rightarrow b(t,x)$ for a.e. $(t,x)\in \lbrack 0,T]\times \R^{d}$
and such that $\sup_{n\geq 0}\Vert b_{n}\Vert _{\mathcal{L}_{2,p}^{q}}\leq M$
for some finite constant $M>0$. Then by the previous Section we know that
for each smooth coefficient $b_{n}$, $n\geq 0$, there exists a unique strong
solution $X^{n}=\{X_{t}^{n},t\in \lbrack 0,T]\}$ to the SDE
\begin{equation} \label{VI_Xn}
dX_{t}^{n}=b_{n}(t,X_{t}^{n})dt+d\mathbb{B}_{t}^{H},\,\,0\leq t\leq
T,\,\,\,X_{0}^{n}=x\in \mathbb{R}^{d}\,.
\end{equation}%
Then we prove that for each $t\in \lbrack 0,T]$ the sequence $X_{t}^{n}$
converges weakly to the conditional expectation $E[X_{t}|\mathcal{F}_{t}]$
in the space $L^{2}(\Omega )$ of square integrable random variables.
\item By the previous Section we have that for each $t\in \lbrack 0,T]$ the
strong solution $X_{t}^{n}$, $n\geq 0$, is Malliavin differentiable, and
that the Malliavin derivatives $D_{s}^{i}X_{t}^{n}$, $i\geq 1$, $0\leq s\leq
t$, with respect to $W^{i}$ in \eqref{compfBm} satisfy
\begin{equation*}
D_{s}^{i}X_{t}^{n}=\lambda _{i}K_{H_{i}}(t,s)I_{d}+\int_{s}^{t}b_{n}^{\prime
}(u,X_{u}^{n})D_{s}^{i}X_{u}^{n}du,
\end{equation*}%
for every $i\geq 1$ where $b_{n}^{\prime }$ is the Jacobian of $b_{n}$ and $%
I_{d}$ the identity matrix in $\R^{d\times d}$. Then, we apply an
infinite-dimensional compactness criterion for square integrable functionals
of a cylindrical Wiener process based on Malliavin calculus to show that for
every $t\in \lbrack 0,T]$ the set of random variables $\{X_{t}^{n}\}_{n\geq
0}$ is relatively compact in $L^{2}(\Omega )$. The latter, however, enables
us to prove that $X_{t}^{n}$ converges strongly in $L^{2}(\Omega )$ to $%
E[X_{t}|\mathcal{F}_{t}]$. Further we find that $E[X_{t}|\mathcal{F}_{t}]$
is Malliavin differentiable as a consequence of the compactness criterion.
\item We verify that $E[X_{t}|\mathcal{F}_{t}]=X_{t}$. So it follows that $%
X_{t}$ is $\mathcal{F}_{t}$-measurable and thus a strong solution on our
specific probability space.
\item Uniqueness in law is enough to guarantee pathwise uniqueness.
\end{enumerate}
\bigskip
In view of the above scheme, we go ahead with step (1) by first providing
some preparatory lemmas in order to verify Novikov's condition for $\mathbb{B%
}_{\cdot }^{H}$. Consequently, a weak solution can be constructed via a
change of measure.
\begin{lem}
\label{interlemma} Let $\mathbb{B}_{\cdot }^{H}$ be a $d$-dimensional
regularizing fBm and $p,q\in \lbrack 1,\infty ]$. Then for every Borel
measurable function $h:[0,T]\times \R^{d}\rightarrow \lbrack 0,\infty )$ we
have
\begin{equation} \label{estimateh}
E\left[ \int_{0}^{T}h(t,\mathbb{B}_{t}^{H})dt\right] \leq C\Vert h\Vert
_{L_{p}^{q}},
\end{equation}%
where $C>0$ is a constant depending on $p$, $q$, $d$ and $H$. Also,
\begin{equation} \label{estimatehexp}
E\left[ \exp \left\{ \int_{0}^{T}h(t,\mathbb{B}_{t}^{H})dt\right\} \right]
\leq A(\Vert h\Vert _{L_{p}^{q}}),
\end{equation}%
where $A$ is an analytic function depending on $p$, $q$, $d$ and $H$.
\end{lem}
\begin{proof}
Let $\mathbb{B}_{\cdot }^{H}$ be a $d$-dimensional regularizing fBm, then
\begin{equation*}
\mathbb{B}_{t}^{H}-E\left[ \mathbb{B}_{t}^{H}|\mathcal{F}_{t_{0}}\right]
=\sum_{n=1}^{\infty }\lambda _{n}\int_{t_{0}}^{t}K_{H_{n}}(t,s)dW_{s}^{n}.
\end{equation*}%
So because of the independence of the increments of the Brownian motion, we
find that%
\begin{equation*}
Var\left[ \mathbb{B}_{t}^{H}|\mathcal{F}_{t_{0}}\right] =Var[\mathbb{B}%
_{t}^{H}-E\left[ \mathbb{B}_{t}^{H}|\mathcal{F}_{t_{0}}\right] ].
\end{equation*}%
On the other hand, the strong local non-determinism of the fractional
Brownian motion yields%
\begin{equation*}
Var[\mathbb{B}_{t}^{H}-E\left[ \mathbb{B}_{t}^{H}|\mathcal{F}_{t_{0}}\right]
]=Var\left[ \mathbb{B}_{t}^{H}|\mathcal{F}_{t_{0}}\right] \geq
\sum_{n=1}^{\infty }\lambda _{n}^{2}C_{n}(t-t_{0})^{2H_{n}},
\end{equation*}%
where $C_{n}$ are constants depending only on $H_{n}$.
Hence, by a conditioning argument it is easy to see that for every Borel
measurable function $h$ we have
\begin{align*}
E& \left[ \int_{t_{0}}^{T}h(t_{1},\mathbb{B}_{t_{1}}^{H})dt_{1}\bigg|%
\mathcal{F}_{t_{0}}\right] \\
& \leq \int_{t_{0}}^{T}\int_{\R^{d}}h(t_{1},Y+z)(2\pi )^{-d/2}\sigma
_{t_{0},t_{1}}^{-d}\exp \left( -\frac{|z|^{2}}{2\sigma _{t_{0},t_{1}}^{2}}%
\right) dzdt_{1}\bigg|_{Y=\sum_{n=1}^{\infty }\lambda
_{n}\int_{0}^{t_{0}}K_{H_{n}}(t,s)dW_{s}^{n}},
\end{align*}%
where
\begin{equation*}
\sigma _{t_{0},t_{1}}^{2}:=\sum_{n=1}^{\infty }\lambda
_{n}^{2}C_{n}|t_{1}-t_{0}|^{2H_{n}}.
\end{equation*}%
Applying H\"{o}lder's inequality, first w.r.t. $z$ and then w.r.t. $t_{1}$
we arrive at
\begin{align*}
E& \left[ \int_{t_{0}}^{T}h(t_{1},\mathbb{B}_{t_{1}}^{H})dt_{1}\bigg|%
\mathcal{F}_{t_{0}}\right] \\
& \leq C\left( \int_{t_{0}}^{T}\left( \int_{\R^{d}}h(t_{1},x_{1})^{p}dx_{1}%
\right) ^{q/p}dt_{1}\right) ^{1/q}\left( \int_{t_{0}}^{T}\left( \sigma
_{t_{0},t_{1}}^{2}\right) ^{-dq^{\prime }(p^{\prime }-1)/2p^{\prime
}}dt_{1}\right) ^{1/q^{\prime }},
\end{align*}%
for some finite constant $C>0$. The time integral is finite for arbitrary
values of $d,q^{\prime }$ and $p^{\prime }$. To see this, use the bound $%
\sum_{n}a_{n}\geq a_{n_{0}}$ for $a_{n}\geq 0$ and for all $n_{0}\geq 1$.
Hence,
\begin{align*}
\int_{t_{0}}^{T}& \left( \sum_{n=1}^{\infty }\lambda
_{n}^{2}C_{n}(t_{1}-t_{0})^{2H_{n}}\right) ^{-dq^{\prime }(p^{\prime
}-1)/2p^{\prime }}dt_{1} \\
& \leq \left( \lambda _{n_{0}}^{2}C_{n_{0}}\right) ^{-dq^{\prime }(p^{\prime
}-1)/2p^{\prime }}\int_{t_{0}}^{T}(t_{1}-t_{0})^{-H_{n_{0}}dq^{\prime
}(p^{\prime }-1)/p^{\prime }}dt_{1},
\end{align*}%
then for fixed $d,q^{\prime }$ and $p^{\prime }$ choose $n_{0}$ so that $%
H_{n_{0}}dq^{\prime }(p^{\prime }-1)/p^{\prime }<1$. Actually, the above
estimate already implies that all exponential moments are finite by \cite[%
Lemma 1.1]{Por90}. Here, though, we need to derive the explicit dependence on
the norm of $h$.
Altogether,
\begin{align} \label{conditionalest}
E\left[\int_{t_0}^T h(t_1,\mathbb{B}_{t_1}^H) dt_1\bigg| \mathcal{F}_{t_0} %
\right] \leq C \left(\int_{t_0}^T \left(\int_{\R^d} h(t_1,x_1)^p
dx_1\right)^{q/p} dt_1\right)^{1/q},
\end{align}
and setting $t_0=0$ this proves \eqref{estimateh}.
In order to prove \eqref{estimatehexp}, Taylor's expansion yields
\begin{equation*}
E\left[ \exp \left\{ \int_{0}^{T}h(t,\mathbb{B}_{t}^{H})dt\right\} \right]
=1+\sum_{m=1}^{\infty }E\left[ \int_{0}^{T}\int_{t_{1}}^{T}\cdots
\int_{t_{m-1}}^{T}\prod_{j=1}^{m}h(t_{j},\mathbb{B}_{t_{j}}^{H})dt_{m}\cdots
dt_{1}\right] .
\end{equation*}%
Using \eqref{conditionalest} iteratively we have
\begin{equation*}
E\left[ \exp \left\{ \int_{0}^{T}h(t,\mathbb{B}_{t}^{H})dt\right\} \right]
\leq 1+\sum_{m=1}^{\infty }\frac{C^{m}}{(m!)^{1/q}}\left( \int_{0}^{T}\left(
\int_{\R^{d}}h(t,x)^{p}dx\right) ^{q/p}dt\right) ^{m/q}=1+\sum_{m=1}^{\infty
}\frac{C^{m}\Vert h\Vert _{L_{p}^{q}}^{m}}{(m!)^{1/q}},
\end{equation*}%
and the result follows with $A(x):=1+\sum_{m=1}^{\infty }\frac{C^{m}}{%
(m!)^{1/q}}x^{m}$.
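Note that for $q<\infty $ the series defining $A$ is indeed entire: by the
root test,
\begin{equation*}
\limsup_{m\rightarrow \infty }\left( \frac{C^{m}}{(m!)^{1/q}}\right)
^{1/m}=C\lim_{m\rightarrow \infty }(m!)^{-1/(qm)}=0,
\end{equation*}%
so the power series has infinite radius of convergence.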
\end{proof}
\begin{lem}
\label{domainKH} Let $\mathbb{B}_{\cdot }^{H}$ be a $d$-dimensional
regularizing fBm and assume $b\in L_{p}^{q}$, $p,q\in \lbrack 2,\infty ]$.
Then for every $n\geq 1$,
\begin{equation*}
t\mapsto \int_{0}^{t}b(s,\mathbb{B}_{s}^{H})ds\in I_{0+}^{H_{n}+\frac{1}{2}%
}(L^{2}([0,T])),\quad P-a.s.,
\end{equation*}%
i.e. the process $t\mapsto \int_{0}^{t}b(s,\mathbb{B}_{s}^{H})ds$ belongs to
the domain of the operator $K_{H_{n}}^{-1}$ for every $n\geq 1$, $P$-a.s.
\end{lem}
\begin{proof}
Using the property that $D_{0^{+}}^{H+\frac{1}{2}}I_{0^{+}}^{H+\frac{1}{2}%
}(f)=f$ for $f\in L^{2}([0,T])$ we need to show that for every $n\geq 1$,
\begin{equation*}
D_{0^{+}}^{H_{n}+\frac{1}{2}}\int_{0}^{\cdot }|b(s,\mathbb{B}_{s}^{H})|ds\in
L^{2}([0,T]),\quad P-a.s.
\end{equation*}%
Indeed,
\begin{align*}
\left\vert D_{0^{+}}^{H_{n}+\frac{1}{2}}\left( \int_{0}^{\cdot }|b(s,\mathbb{%
B}_{s}^{H})|ds\right) (t)\right\vert =& \frac{1}{\Gamma \left( \frac{1}{2}%
-H_{n}\right) }\Bigg(\frac{1}{t^{H_{n}+\frac{1}{2}}}\int_{0}^{t}|b(u,\mathbb{%
B}_{u}^{H})|du \\
& +\,\left( H_{n}+\frac{1}{2}\right) \int_{0}^{t}(t-s)^{-H_{n}-\frac{3}{2}%
}\int_{s}^{t}|b(u,\mathbb{B}_{u}^{H})|duds\Bigg) \\
& \hspace{-4cm}\leq \frac{1}{\Gamma \left( \frac{1}{2}-H_{n}\right) }\Bigg(%
\frac{1}{t^{H_{n}+\frac{1}{2}}}+\,\left( H_{n}+\frac{1}{2}\right)
\int_{0}^{t}(t-s)^{-H_{n}-\frac{3}{2}}ds\Bigg)\int_{0}^{t}|b(u,\mathbb{B}%
_{u}^{H})|du.
\end{align*}%
Hence, for some finite constant $C_{H_{n},T}>0$ we have
\begin{equation*}
\left\vert D_{0^{+}}^{H_{n}+\frac{1}{2}}\left( \int_{0}^{\cdot }|b(s,\mathbb{%
B}_{s}^{H})|ds\right) (t)\right\vert ^{2}\leq
C_{H_{n},T}\int_{0}^{T}|b(u,\mathbb{B}_{u}^{H})|^{2}du
\end{equation*}%
and taking expectation the result follows by Lemma \ref{interlemma} applied
to $|b|^{2}$.
\end{proof}
\bigskip
We are now in a position to show that Novikov's condition is met if $n$ is
large enough.
\begin{prop}
\label{novikov} Let $\mathbb{B}_t^H$ be a $d$-dimensional regularizing
fractional Brownian motion with Hurst sequence $H$. Assume $b\in L_p^q$, $%
p,q\in (2,\infty]$. Then for every $\mu \in \R$, there exists $n_0$ with $%
H_{n}< \frac{1}{2}-\frac{1}{p}$ for every $n\geq n_0$ and such that for
every $n\geq n_0$ we have
\begin{equation*}
E\left[ \exp\left\{\mu \int_0^T \left|K_{H_n}^{-1}\left( \frac{1}{\lambda_n}%
\int_0^{\cdot} b(r,\mathbb{B}_r^H) dr\right) (s)\right|^2 ds\right\} \right]
\leq C_{\lambda_n,H_n,d,\mu,T}(\|b\|_{L_{p}^{q}})
\end{equation*}
for some real analytic function $C_{\lambda_n,H_n,d,\mu,T}$ depending only
on $\lambda_n$, $H_n$, $d$, $T$ and $\mu$.
In particular, there is also some real analytic function $\widetilde{C}%
_{\lambda_n,H_n,d,\mu,T}$ depending only on $\lambda_n$, $H_n$, $d$, $T$ and
$\mu$ such that
\begin{equation*}
E\left[ \mathcal{E}\left(\int_0^T K_{H_n}^{-1}\left(\frac{1}{\lambda_n}%
\int_0^{\cdot} b(r,\mathbb{B}_r^H) dr\right)^{\ast} (s) dW_s^n\right)^\mu %
\right] \leq \widetilde{C}_{\lambda_n,H_n,d,\mu,T}(\|b\|_{L_{p}^{q}}),
\end{equation*}
for every $\mu \in \R$.
\end{prop}
\begin{proof}
By Lemma \ref{domainKH} both random variables appearing in the statement are
well defined. Fix $n\geq n_{0}$, where $n_{0}$ is as in the statement, and
denote $\theta_s^n := K_{H_n}^{-1}\left(\frac{1}{\lambda_n}\int_0^{\cdot}
|b(r,\mathbb{B}_r^H)| dr\right) (s)$. Using relation \eqref{VI_inverseKH} we
have
\begin{align} \label{thetan}
|\theta_s^n| =& \left|\frac{1}{\lambda_n}s^{H_n-\frac{1}{2}} I_{0^+}^{\frac{1%
}{2}-H_n} s^{\frac{1}{2}-H_n} |b(s,\mathbb{B}_s^H)|\right| \notag \\
=&\frac{1/|\lambda_n|}{\Gamma \left(\frac{1}{2}-H_n\right)} s^{H_n- \frac{1}{%
2}} \int_0^s (s-r)^{-\frac{1}{2}-H_n} r^{\frac{1}{2}-H_n} |b(r,\mathbb{B}%
_r^H)|dr.
\end{align}
Observe that, since $H_n< \frac{1}{2}-\frac{1}{p}$ with $p\in (2,\infty]$, we may
take $\varepsilon\in [0,1)$ such that $H_n<\frac{1}{1+\varepsilon}-\frac{1}{2%
}$ and apply H\"{o}lder's inequality with exponents $1+\varepsilon$ and $%
\frac{1+\varepsilon}{\varepsilon}$, where the case $\varepsilon=0$
corresponds to the case where $b$ is bounded. Then we get
\begin{align} \label{thetabound}
|\theta_s^n| \leq C_{\varepsilon,\lambda_n,H_n} s^{\frac{1}{1+\varepsilon}%
-H_n-\frac{1}{2}} \left(\int_0^s |b(r,\mathbb{B}_r^H)|^{\frac{1+\varepsilon}{%
\varepsilon}}dr\right)^{\frac{\varepsilon}{1+\varepsilon}},
\end{align}
where
\begin{equation*}
C_{\varepsilon,\lambda_n, H_n}:=\frac{\Gamma\left(1-(1+\varepsilon)(H_n+1/2)%
\right)^{\frac{1}{1+\varepsilon}}\Gamma\left(1+(1+\varepsilon)(1/2-H_n)%
\right)^{\frac{1}{1+\varepsilon}} }{\lambda_n \Gamma \left(\frac{1}{2}%
-H_n\right) \Gamma \left(2(1-(1+\varepsilon)H_n)\right)^{\frac{1}{%
1+\varepsilon}}}.
\end{equation*}
Squaring both sides and enlarging the inner domain of integration to $[0,T]$
(the integrand being nonnegative), we obtain the estimate
\begin{align*}
|\theta_s^n|^2 \leq C_{\varepsilon,\lambda_n,H_n}^2 s^{\frac{2}{1+\varepsilon%
}-2H_n-1} \left(\int_0^T |b(r,\mathbb{B}_r^H)|^{\frac{1+\varepsilon}{%
\varepsilon}}dr\right)^{\frac{2\varepsilon}{1+\varepsilon}}, \quad P-a.s.
\end{align*}
Since $0<\frac{2\varepsilon}{1+\varepsilon}<1$ and $|x|^{\alpha}\leq \max
\{\alpha,1-\alpha\}(1+|x|)$ for any $x\in \R$ and $\alpha\in (0,1)$ we have
\begin{align} \label{VI_fracL2}
\int_0^T |\theta_s^n|^2 ds \leq C_{\varepsilon,\lambda_n,H_n,T} \left(1+
\int_0^T |b(r,\mathbb{B}_r^H)|^{\frac{1+\varepsilon}{\varepsilon}}dr\right),
\quad P-a.s.
\end{align}
for some constant $C_{\varepsilon,\lambda_n, H_n,T}>0$. Then estimate %
\eqref{estimatehexp} from Lemma \ref{interlemma} with $h =
C_{\varepsilon,\lambda_n,H_n,T} \ \mu \ b^{\frac{1+\varepsilon}{\varepsilon}%
} $ with $\varepsilon\in [0,1)$ arbitrarily close to one yields the result
for $p,q\in (2,\infty]$.
\end{proof}
Let $(\Omega ,\mathfrak{A},\widetilde{P})$ be some given probability space
which carries a regularizing fractional Brownian motion $\widetilde{\mathbb{B}}%
_{\cdot }^{H}$ with Hurst sequence $H=\{H_{n}\}_{n\geq 1}$ and set $%
X_{t}:=x+\widetilde{\mathbb{B}}_{t}^{H}$, \mbox{$t\in [0,T]$}, $x\in \R^{d}$%
. Set $\theta _{t}^{n_{0}}:=\left( K_{H_{n_{0}}}^{-1}\left( \frac{1}{\lambda
_{n_{0}}}\int_{0}^{\cdot }b(r,X_{r})dr\right) \right) (t)$ for some fixed $%
n_{0}\geq 1$ such that Proposition \ref{novikov} can be applied and consider
the new measure defined by
\begin{equation*}
\frac{dP_{n_{0}}}{d\widetilde{P}_{n_{0}}}=Z_{T}^{n_{0}},
\end{equation*}%
where
\begin{equation*}
Z_{t}^{n_{0}}:=\mathcal{E}\left( \theta _{\cdot
}^{n_{0}}\right) _{t}:=\exp \left\{ \int_{0}^{t}\left( \theta
_{s}^{n_{0}}\right) ^{\ast }dW_{s}^{n_{0}}-\frac{1}{2}\int_{0}^{t}|\theta
_{s}^{n_{0}}|^{2}ds\right\} ,\quad t\in \lbrack 0,T].
\end{equation*}
In view of Proposition \ref{novikov} the above random variable defines a new
probability measure and by Girsanov's theorem, see Theorem \ref{girsanov},
the process
\begin{equation} \label{VI_weak}
\mathbb{B}_{t}^{H}:=X_{t}-x-\int_{0}^{t}b(s,X_{s})ds,\quad t\in \lbrack 0,T]
\end{equation}%
is a regularizing fractional Brownian motion on $(\Omega ,\mathfrak{A}%
,P_{n_{0}})$ with Hurst sequence $H$. Hence, because of \eqref{VI_weak}, the
couple $(X,\mathbb{B}_{\cdot }^{H})$ is a weak solution of \eqref{maineq} on
$(\Omega ,\mathfrak{A},P_{n_{0}})$. Since $n_{0}\geq 1$ is fixed we will
omit the notation $P_{n_{0}}$ and simply write $P$.
Henceforth, we confine ourselves to the filtered probability space $(\Omega ,%
\mathfrak{A},P)$, $\mathcal{F}=\{\mathcal{F}_{t}\}_{t\in \lbrack 0,T]}$
which carries the weak solution $(X,\mathbb{B}_{\cdot }^{H})$ of %
\eqref{maineq}.
\begin{rem}
\label{VI_stochbasisrmk} In order to establish existence of a strong
solution, the main difficulty is to show that $X_{\cdot }$ is $\mathcal{F}$%
-adapted. In fact, in this case $X_{t}=F_{t}(\mathbb{B}_{\cdot }^{H})$ for
some family of progressively measurable functionals $F_{t}$, $t\in \lbrack
0,T]$ on $C([0,T];\R^{d})$ and for any other stochastic basis $(\hat{\Omega},%
\hat{\mathfrak{A}},\hat{P},\hat{\mathbb{B}})$ one gets that $X_{t}:=F_{t}(%
\hat{\mathbb{B}}_{\cdot })$, $t\in \lbrack 0,T]$, is a solution to SDE~%
\eqref{maineq}, which is adapted with respect to the natural filtration of $%
\hat{\mathbb{B}}_{\cdot }$. But this exactly gives the existence of a strong
solution to SDE~\eqref{maineq}.
\end{rem}
We take a weak solution $X_{\cdot }$ of \eqref{maineq} and consider $E[X_{t}|%
\mathcal{F}_{t}]$. The next result corresponds to step (2) of our program.
\begin{lem}
\label{VI_weakconv} Let $b_n:[0,T]\times \R^d\rightarrow \R^d$, $n\geq 1 $,
be a sequence of compactly supported smooth functions converging a.e. to $b$
such that $\sup_{n\geq 1} \|b_n\|_{L_p^q}<\infty$. Let $t\in [0,T]$ and $%
X_t^n$ denote the solution of \eqref{maineq} when we replace $b$ by $b_n$.
Then for every $t\in [0,T]$ and continuous function $\varphi:\R^d
\rightarrow \R$ of at most linear growth we have that
\begin{equation*}
\varphi(X_t^{n}) \xrightarrow{n \to \infty} E\left[ \varphi(X_t) |\mathcal{F}%
_t \right],
\end{equation*}
weakly in $L^2(\Omega)$.
\end{lem}
\begin{proof}
Let us assume, without loss of generality, that $x=0$. Throughout the proof,
for fixed $p,q\in (2,\infty ]$, we take $n_{0}\geq 1$ such that $H_{n_{0}}<%
\frac{1}{2}-\frac{1}{p}$, so that Proposition \ref{novikov} can be applied.
First we show that
\begin{align} \label{VI_doleansDadeConvergence}
\mathcal{E}\left(\int_0^t
K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot} b_n(r,\mathbb{B}%
^H_r)dr\right)^{\ast}(s) dW_s^{n_0} \right) \rightarrow \mathcal{E}\left(
\int_0^t K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot} b(r,%
\mathbb{B}^H_r)dr\right)^{\ast}(s) dW_s^{n_0}\right)
\end{align}
in $L^p(\Omega)$ for all $p \geq 1$. To see this, note that
\begin{equation*}
K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot} b_n(r,\mathbb{B}%
^H_r)dr\right)(s) \rightarrow K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}%
\int_0^{\cdot} b(r,\mathbb{B}^H_r)dr\right)(s)
\end{equation*}
in probability for all $s$. Indeed, from \eqref{thetabound} we have a
constant $C_{\varepsilon,\lambda_{n_0},H_{n_0}}>0$ such that
\begin{align*}
E\Bigg[\Big|& K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot}
b_n(r,\mathbb{B}^H_r)dr\right)(s) - K_{H_{n_0}}^{-1}\left(\frac{1}{%
\lambda_{n_0}}\int_0^{\cdot} b(r,\mathbb{B}^H_r)dr\right)(s)\Big| \Bigg] \\
&\leq C_{\varepsilon,\lambda_{n_0},H_{n_0}} s^{\frac{1}{1+\varepsilon}%
-H_{n_0}-\frac{1}{2}} E\left[\left(\int_0^s |b_n(r,\mathbb{B}_r^H) -b(r,\mathbb{B}%
_r^H) |^{\frac{1+\varepsilon}{\varepsilon}} dr\right)^{\frac{\varepsilon}{%
1+\varepsilon}}\right] \rightarrow 0
\end{align*}
as $n \rightarrow \infty$ by Lemma \ref{interlemma}.
Moreover, $\left\{ K_{H_{n_0}}^{-1}(\frac{1}{\lambda_{n_0}}\int_0^{\cdot}
b_n(r,\mathbb{B}_r^H)dr) \right\}_{n \geq 0}$ is bounded in $L^2([0,t]
\times \Omega; \mathbb{R}^d)$. This is directly seen from (\ref{VI_fracL2})
in Proposition \ref{novikov}.
Consequently
\begin{equation*}
\int_0^t K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot} b_n(r,%
\mathbb{B}_r^H)dr \right)^{\ast}(s) dW_s^{n_0} \rightarrow \int_0^t
K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot} b(r,\mathbb{B}%
_r^H)dr\right)^{\ast}(s) dW_s^{n_0}
\end{equation*}
and
\begin{equation*}
\int_0^t \left|K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot}
b_n(r,\mathbb{B}_r^H)dr\right)(s)\right|^2 ds \rightarrow \int_0^t
\left|K_{H_{n_0}}^{-1}\left(\frac{1}{\lambda_{n_0}}\int_0^{\cdot} b(r,%
\mathbb{B}_r^H)dr\right)(s)\right|^2 ds
\end{equation*}
in $L^2(\Omega)$ since the latter is bounded in $L^p(\Omega)$ for any $p \geq 1$%
, see Proposition \ref{novikov}.
By applying the estimate $|e^{x}-e^{y}|\leq (e^{x}+e^{y})|x-y|$, H\"{o}lder's
inequality and the bounds in Proposition \ref{novikov} in connection with
Lemma \ref{interlemma} we see that (\ref{VI_doleansDadeConvergence}) holds.
Similarly, one finds that
\begin{equation*}
\exp \left\{ \left\langle \alpha ,\int_{s}^{t}b_{n}(r,\mathbb{B}%
_{r}^{H})dr\right\rangle \right\} \rightarrow \exp \left\{ \left\langle
\alpha ,\int_{s}^{t}b(r,\mathbb{B}_{r}^{H})dr\right\rangle \right\}
\end{equation*}%
in $L^{p}(\Omega )$ for all $p\geq 1$, $0\leq s\leq t\leq T$, $\alpha \in \R%
^{d}$.
In order to complete the proof, we note that the set
\begin{equation*}
\Sigma _{t}:=\left\{ \exp \{\sum_{j=1}^{k}\langle \alpha _{j},\mathbb{B}%
_{t_{j}}^{H}-\mathbb{B}_{t_{j-1}}^{H}\rangle \}:\{\alpha
_{j}\}_{j=1}^{k}\subset \mathbb{R}^{d},0=t_{0}<\dots <t_{k}=t,k\geq 1\right\}
\end{equation*}%
is a total subset of $L^{2}(\Omega ,\mathcal{F}_{t},P)$ and therefore it
is sufficient to prove the convergence
\begin{equation*}
\lim_{n\rightarrow \infty }E\left[ \left( \varphi (X_{t}^{n})-E[\varphi
(X_{t})|\mathcal{F}_{t}]\right) \xi \right] =0
\end{equation*}%
for all $\xi \in \Sigma _{t}$. In doing so, we notice that $\varphi $ is of
linear growth and hence $\varphi (\mathbb{B}_{t}^{H})$ has all moments.
Thus, we obtain the following convergence
\begin{equation*}
E\left[ \varphi (X_{t}^{n})\exp \left\{ \sum_{j=1}^{k}\langle \alpha _{j},%
\mathbb{B}_{t_{j}}^{H}-\mathbb{B}_{t_{j-1}}^{H}\rangle \right\} \right]
\end{equation*}%
\begin{equation*}
=E\left[ \varphi (X_{t}^{n})\exp \left\{ \sum_{j=1}^{k}\langle \alpha
_{j},X_{t_{j}}^{n}-X_{t_{j-1}}^{n}-%
\int_{t_{j-1}}^{t_{j}}b_{n}(s,X_{s}^{n})ds\rangle \right\} \right]
\end{equation*}%
\begin{equation*}
=E[\varphi (\mathbb{B}_{t}^{H})\exp \{\sum_{j=1}^{k}\langle \alpha _{j},%
\mathbb{B}_{t_{j}}^{H}-\mathbb{B}_{t_{j-1}}^{H}-%
\int_{t_{j-1}}^{t_{j}}b_{n}(s,\mathbb{B}_{s}^{H})ds\rangle \}\mathcal{E}%
\left( \int_{0}^{t}K_{H_{n_{0}}}^{-1}\left( \frac{1}{\lambda _{n_{0}}}%
\int_{0}^{\cdot }b_{n}(r,\mathbb{B}_{r}^{H})dr\right) ^{\ast
}(s)dW_{s}^{n_{0}}\right) ]
\end{equation*}%
\begin{equation*}
\rightarrow E[\varphi (\mathbb{B}_{t}^{H})\exp \{\sum_{j=1}^{k}\langle
\alpha _{j},\mathbb{B}_{t_{j}}^{H}-\mathbb{B}_{t_{j-1}}^{H}-%
\int_{t_{j-1}}^{t_{j}}b(s,\mathbb{B}_{s}^{H})ds\rangle \}\mathcal{E}\left(
\int_{0}^{t}K_{H_{n_{0}}}^{-1}\left( \frac{1}{\lambda _{n_{0}}}%
\int_{0}^{\cdot }b(r,\mathbb{B}_{r}^{H})dr\right) ^{\ast
}(s)dW_{s}^{n_{0}}\right) ]
\end{equation*}%
\begin{equation*}
=E[\varphi (X_{t})\exp \{\sum_{j=1}^{k}\langle \alpha _{j},\mathbb{B}%
_{t_{j}}^{H}-\mathbb{B}_{t_{j-1}}^{H}\rangle \}]
\end{equation*}%
\begin{equation*}
=E[E[\varphi (X_{t})|\mathcal{F}_{t}]\exp \{\sum_{j=1}^{k}\langle \alpha
_{j},\mathbb{B}_{t_{j}}^{H}-\mathbb{B}_{t_{j-1}}^{H}\rangle \}].
\end{equation*}
\end{proof}
\bigskip

We now turn to step (3) of our program. For its completion we need to derive
some crucial estimates.

\bigskip In preparation for these estimates, we introduce some notation and
definitions:
Let $m$ be an integer and let the function $f:[0,T]^{m}\times (\R%
^{d})^{m}\rightarrow \R$ be of the form
\begin{equation}
f(s,z)=\prod_{j=1}^{m}f_{j}(s_{j},z_{j}),\quad s=(s_{1},\dots ,s_{m})\in
\lbrack 0,T]^{m},\quad z=(z_{1},\dots ,z_{m})\in (\R^{d})^{m}, \label{f}
\end{equation}%
where $f_{j}:[0,T]\times \R^{d}\rightarrow \R$, $j=1,\dots ,m$ are smooth
functions with compact support. Further, let $\varkappa
:[0,T]^{m}\rightarrow \R$ be a function of the form
\begin{equation}
\varkappa (s)=\prod_{j=1}^{m}\varkappa _{j}(s_{j}),\quad s\in \lbrack
0,T]^{m}, \label{kappa}
\end{equation}%
where $\varkappa _{j}:[0,T]\rightarrow \R$, $j=1,\dots ,m$ are integrable
functions.
Let $\alpha _{j}$ be a multi-index and denote by $D^{\alpha _{j}}$ its
corresponding differential operator. For $\alpha =(\alpha _{1},\dots ,\alpha
_{m})$ viewed as an element of $\mathbb{N}_{0}^{d\times m}$ we define $%
|\alpha |=\sum_{j=1}^{m}\sum_{l=1}^{d}\alpha _{j}^{(l)}$ and write
\begin{equation*}
D^{\alpha }f(s,z)=\prod_{j=1}^{m}D^{\alpha _{j}}f_{j}(s_{j},z_{j}).
\end{equation*}
The objective of this section is to establish an integration by parts
formula of the form
\begin{equation}
\int_{\Delta _{\theta ,t}^{m}}D^{\alpha }f(s,\mathbb{B}_{s})ds=\int_{(\R%
^{d})^{m}}\Lambda _{\alpha }^{f}(\theta ,t,z)dz, \label{ibp}
\end{equation}%
where $\mathbb{B}:=\mathbb{B}_{\cdot }^{H}$, for a random field $\Lambda
_{\alpha }^{f}$. In fact, we can choose $\Lambda _{\alpha }^{f}$ as
\begin{equation}
\Lambda _{\alpha }^{f}(\theta ,t,z)=(2\pi )^{-dm}\int_{(\R%
^{d})^{m}}\int_{\Delta _{\theta
,t}^{m}}\prod_{j=1}^{m}f_{j}(s_{j},z_{j})(-iu_{j})^{\alpha _{j}}\exp
\{-i\langle u_{j},\mathbb{B}_{s_{j}}-z_{j}\rangle \}dsdu. \label{LambdaDef}
\end{equation}
Let us start by \emph{defining} $\Lambda _{\alpha }^{f}(\theta ,t,z)$ as
above and show that it is a well-defined element of $L^{2}(\Omega )$.
We also need the following notation: Given $(s,z)=(s_{1},\dots
,s_{m},z_{1}\dots ,z_{m})\in \lbrack 0,T]^{m}\times (\R^{d})^{m}$ and a
shuffle $\sigma \in S(m,m)$ we define
\begin{equation*}
f_{\sigma }(s,z):=\prod_{j=1}^{2m}f_{[\sigma (j)]}(s_{j},z_{[\sigma (j)]})
\end{equation*}%
and
\begin{equation*}
\varkappa _{\sigma }(s):=\prod_{j=1}^{2m}\varkappa _{\lbrack \sigma
(j)]}(s_{j}),
\end{equation*}%
where $[j]$ is equal to $j$ if $1\leq j\leq m$ and to $j-m$ if $m+1\leq j\leq
2m$.
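For orientation, consider the simplest case $m=1$ (a direct check from the
definitions above): then $[\sigma (1)]=[\sigma (2)]=1$ for every shuffle $%
\sigma \in S(1,1)$, so that
\begin{equation*}
f_{\sigma }(s,z)=f_{1}(s_{1},z_{1})f_{1}(s_{2},z_{1}),\qquad \varkappa
_{\sigma }(s)=\varkappa _{1}(s_{1})\varkappa _{1}(s_{2}).
\end{equation*}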
For a multi-index $\alpha $, define
\begin{eqnarray*}
&&\Psi _{\alpha }^{f}(\theta ,t,z,H) \\
&:&=\prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}%
\sum_{\sigma \in S(m,m)}\int_{\Delta _{0,t}^{2m}}\left\vert f_{\sigma
}(s,z)\right\vert \prod_{j=1}^{2m}\frac{1}{\left\vert
s_{j}-s_{j-1}\right\vert ^{H(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma
(j)]}^{(l)})}}ds_{1}...ds_{2m}
\end{eqnarray*}
respectively,
\begin{eqnarray*}
&&\Psi _{\alpha }^{\varkappa }(\theta ,t,H) \\
&:&=\prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}%
\sum_{\sigma \in S(m,m)}\int_{\Delta _{0,t}^{2m}}\left\vert \varkappa
_{\sigma }(s)\right\vert \prod_{j=1}^{2m}\frac{1}{\left\vert
s_{j}-s_{j-1}\right\vert ^{H(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma
(j)]}^{(l)})}}ds_{1}...ds_{2m}.
\end{eqnarray*}
\begin{thm}
\label{mainthmlocaltime} Suppose that $\Psi _{\alpha }^{f}(\theta
,t,z,H_{r}),\Psi _{\alpha }^{\varkappa }(\theta ,t,H_{r})<\infty $ for some $%
r\geq r_{0}$. Then, $\Lambda _{\alpha }^{f}(\theta ,t,z)$ as in %
\eqref{LambdaDef} is a random variable in $L^{2}(\Omega )$.\ Further, there
exists a universal constant $C_{r}=C(T,H_{r},d)>0$ such that%
\begin{equation}
E[\left\vert \Lambda _{\alpha }^{f}(\theta ,t,z)\right\vert ^{2}]\leq \frac{1%
}{\lambda _{r}^{2md}}C_{r}^{m+\left\vert \alpha \right\vert }\Psi _{\alpha
}^{f}(\theta ,t,z,H_{r}). \label{supestL}
\end{equation}%
Moreover, we have%
\begin{equation}
\left\vert E[\int_{(\mathbb{R}^{d})^{m}}\Lambda _{\alpha }^{f}(\theta
,t,z)dz]\right\vert \leq \frac{1}{\lambda _{r}^{md}}C_{r}^{m/2+\left\vert
\alpha \right\vert /2}\prod_{j=1}^{m}\left\Vert f_{j}\right\Vert _{L^{1}(%
\mathbb{R}^{d};L^{\infty }([0,T]))}(\Psi _{\alpha }^{\varkappa }(\theta
,t,H_{r}))^{1/2}. \label{intestL}
\end{equation}
\end{thm}
\begin{proof}
For notational simplicity we consider $\theta =0$ and set $\mathbb{B}_{\cdot
}=\mathbb{B}_{\cdot }^{H}$, $\Lambda _{\alpha }^{f}(t,z)=\Lambda _{\alpha
}^{f}(0,t,z).$
For an integrable function $g:(\mathbb{R}^{d})^{m}\longrightarrow \mathbb{C}$
we get that%
\begin{eqnarray*}
&&\left\vert \int_{(\mathbb{R}^{d})^{m}}g(u_{1},...,u_{m})du_{1}...du_{m}%
\right\vert ^{2} \\
&=&\int_{(\mathbb{R}^{d})^{m}}g(u_{1},...,u_{m})du_{1}...du_{m}\int_{(%
\mathbb{R}^{d})^{m}}\overline{g(u_{m+1},...,u_{2m})}du_{m+1}...du_{2m} \\
&=&\int_{(\mathbb{R}^{d})^{m}}g(u_{1},...,u_{m})du_{1}...du_{m}(-1)^{dm}%
\int_{(\mathbb{R}^{d})^{m}}\overline{g(-u_{m+1},...,-u_{2m})}%
du_{m+1}...du_{2m},
\end{eqnarray*}%
where we employed the change of variables $(u_{m+1},...,u_{2m})\longmapsto
(-u_{m+1},...,-u_{2m})$ in the last equality.
This yields%
\begin{eqnarray*}
&&\left\vert \Lambda _{\alpha }^{f}(t,z)\right\vert ^{2} \\
&=&(2\pi )^{-2dm}(-1)^{dm}\int_{(\mathbb{R}^{d})^{2m}}\int_{\Delta
_{0,t}^{m}}\prod_{j=1}^{m}f_{j}(s_{j},z_{j})(-iu_{j})^{\alpha
_{j}}e^{-i\left\langle u_{j},\mathbb{B}_{s_{j}}-z_{j}\right\rangle
}ds_{1}...ds_{m} \\
&&\times \int_{\Delta
_{0,t}^{m}}\prod_{j=m+1}^{2m}f_{[j]}(s_{j},z_{[j]})(-iu_{j})^{\alpha
_{\lbrack j]}}e^{-i\left\langle u_{j},\mathbb{B}_{s_{j}}-z_{[j]}\right%
\rangle }ds_{m+1}...ds_{2m}du_{1}...du_{2m} \\
&=&(2\pi )^{-2dm}(-1)^{dm}\sum_{\sigma \in S(m,m)}\int_{(\mathbb{R}%
^{d})^{2m}}\left( \prod_{j=1}^{m}e^{-i\left\langle
z_{j},u_{j}+u_{j+m}\right\rangle }\right) \\
&&\times \int_{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod_{j=1}^{2m}u_{\sigma
(j)}^{\alpha _{\lbrack \sigma (j)]}}\exp \left\{
-i\sum_{j=1}^{2m}\left\langle u_{\sigma (j)},\mathbb{B}_{s_{j}}\right\rangle
\right\} ds_{1}...ds_{2m}du_{1}...du_{2m},
\end{eqnarray*}%
where we applied the shuffle decomposition from Section \ref{VI_shuffles} in
the last step.
Taking expectations on both sides and using the assumption that the
fractional Brownian motions $B_{\cdot }^{i,H_{i}},i\geq 1$ are
independent, we find that%
\begin{align}\label{Lambda}
\begin{split}
&E[\left| \Lambda _{\alpha }^{f}(t,z)\right|^{2}]
\\
&=(2\pi )^{-2dm}(-1)^{dm}\sum_{\sigma \in S(m,m)}\int_{(\mathbb{R}^{d})^{2m}}\left( \prod_{j=1}^{m}e^{-i\left\langle
z_{j},u_{j}+u_{j+m}\right\rangle }\right) \\
&\times \int_{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod_{j=1}^{2m}u_{\sigma (j)}^{\alpha _{\lbrack \sigma (j)]}}\exp \left\{ -\frac{1}{2}
Var[\sum_{j=1}^{2m}\left\langle u_{\sigma (j)},\mathbb{B}_{s_{j}}\right
\rangle ]\right\} ds_{1}...ds_{2m}du_{1}...du_{2m} \\
&=(2\pi )^{-2dm}(-1)^{dm}\sum_{\sigma \in S(m,m)}\int_{(\mathbb{R}
^{d})^{2m}}\left( \prod_{j=1}^{m}e^{-i\left\langle
z_{j},u_{j}+u_{j+m}\right\rangle }\right) \\
&\times \int_{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod_{j=1}^{2m}u_{\sigma (j)}^{\alpha _{\lbrack \sigma (j)]}}\exp \left\{ -\frac{1}{2}\sum_{n\geq 1}\lambda _{n}^{2}\sum_{l=1}^{d}Var[\sum_{j=1}^{2m}u_{\sigma
(j)}^{(l)}B_{s_{j}}^{(l),n,H_{n}}]\right\}
ds_{1}\dots ds_{2m}du_{1}^{(1)}\dots du_{2m}^{(1)} \\
&\dots du_{1}^{(d)}\dots du_{2m}^{(d)} \\
&=(2\pi )^{-2dm}(-1)^{dm}\sum_{\sigma \in S(m,m)}\int_{(\mathbb{R}^{d})^{2m}}\left( \prod_{j=1}^{m}e^{-i\left\langle
z_{j},u_{j}+u_{j+m}\right\rangle }\right)
\\
&\times \int_{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod_{j=1}^{2m}u_{\sigma(j)}^{\alpha _{\lbrack \sigma (j)]}}\prod_{n\geq
1}\prod_{l=1}^{d}\exp \left\{ -\frac{1}{2}\lambda _{n}^{2}((u_{\sigma
(j)}^{(l)})_{1\leq j\leq 2m})^{\ast }Q_{n}((u_{\sigma (j)}^{(l)})_{1\leq
j\leq 2m})\right\} ds_{1}\dots ds_{2m} \\
&du_{\sigma (1)}^{(1)}\dots du_{\sigma (2m)}^{(1)}\dots du_{\sigma
(1)}^{(d)}\dots du_{\sigma (2m)}^{(d)},
\end{split}
\end{align}
where $\ast $ stands for transposition and where
\begin{equation*}
Q_{n}=Q_{n}(s):=(E[B_{s_{i}}^{(1),n,H_{n}}B_{s_{j}}^{(1),n,H_{n}}])_{1\leq i,j\leq 2m}.
\end{equation*}%
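For the reader's convenience, we recall that each $B^{(1),n,H_{n}}$ is a standard one-dimensional fractional Brownian motion with Hurst parameter $H_{n}$, so the entries of $Q_{n}(s)$ are given explicitly by the usual fBm covariance:

```latex
% Entries of the covariance matrix Q_n(s) via the standard fBm covariance
\begin{equation*}
(Q_{n}(s))_{ij}=E[B_{s_{i}}^{(1),n,H_{n}}B_{s_{j}}^{(1),n,H_{n}}]=\frac{1}{2}%
\left( s_{i}^{2H_{n}}+s_{j}^{2H_{n}}-\left\vert s_{i}-s_{j}\right\vert
^{2H_{n}}\right) ,\quad 1\leq i,j\leq 2m.
\end{equation*}
```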
Further, we get that%
\begin{align}\label{Lambda2}
\begin{split}
&\int_{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \int_{(%
\mathbb{R}^{d})^{2m}}\prod_{j=1}^{2m}\prod_{l=1}^{d}\left| u_{\sigma
(j)}^{(l)}\right|^{\alpha _{\lbrack \sigma
(j)]}^{(l)}}\prod_{n\geq 1}\prod_{l=1}^{d}\exp \left\{ -\frac{1}{2}%
\lambda _{n}^{2}((u_{\sigma (j)}^{(l)})_{1\leq j\leq 2m})^{\ast
}Q_{n}((u_{\sigma (j)}^{(l)})_{1\leq j\leq 2m})\right\} \\
&du_{\sigma (1)}^{(1)}\dots du_{\sigma (2m)}^{(1)}\dots du_{\sigma
(1)}^{(d)}\dots du_{\sigma (2m)}^{(d)}ds_{1}\dots ds_{2m} \\
&\leq \int_{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right|
\int_{(\mathbb{R}^{d})^{2m}}\prod_{j=1}^{2m}\prod_{l=1}^{d}\left|
u_{j}^{(l)}\right|^{\alpha _{\lbrack \sigma (j)]}^{(l)}} \\
&\times \prod_{l=1}^{d}\exp \left\{ -\frac{1}{2}\lambda_{r}^{2}\left\langle Q_{r}u^{(l)},u^{(l)}\right\rangle \right\} \\
&du_{1}^{(1)}\dots du_{2m}^{(1)}\dots du_{1}^{(d)}\dots du_{2m}^{(d)}ds_{1}\dots ds_{2m} \\
&=\int_{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right|
\prod_{l=1}^{d}\int_{\mathbb{R}^{2m}}(\prod_{j=1}^{2m}\left|
u_{j}^{(l)}\right|^{\alpha _{\lbrack \sigma (j)]}^{(l)}})\exp \left\{-
\frac{1}{2}\lambda _{r}^{2}\left\langle Q_{r}u^{(l)},u^{(l)}\right\rangle
\right\} du_{1}^{(l)}\dots du_{2m}^{(l)}ds_{1}\dots ds_{2m},
\end{split}
\end{align}
where
\begin{equation*}
u^{(l)}:=(u_{j}^{(l)})_{1\leq j\leq 2m}.
\end{equation*}%
We obtain that%
\begin{eqnarray*}
&&\int_{\mathbb{R}^{2m}}(\prod_{j=1}^{2m}\left\vert u_{j}^{(l)}\right\vert
^{\alpha _{\lbrack \sigma (j)]}^{(l)}})\exp \left\{ -\frac{1}{2}\lambda
_{r}^{2}\left\langle Q_{r}u^{(l)},u^{(l)}\right\rangle \right\}
du_{1}^{(l)}...du_{2m}^{(l)} \\
&=&\frac{1}{\lambda _{r}^{2m}}\frac{1}{(\det Q_{r})^{1/2}}\int_{\mathbb{R}%
^{2m}}(\prod_{j=1}^{2m}\left\vert \left\langle
Q_{r}^{-1/2}u^{(l)},e_{j}\right\rangle \right\vert ^{\alpha _{\lbrack \sigma
(j)]}^{(l)}})\exp \left\{ -\frac{1}{2}\left\langle
u^{(l)},u^{(l)}\right\rangle \right\} du_{1}^{(l)}...du_{2m}^{(l)},
\end{eqnarray*}%
where $e_{i},i=1,...,2m$ denotes the standard orthonormal basis of $\mathbb{R}^{2m}$.
We also have that%
\begin{eqnarray*}
&&\int_{\mathbb{R}^{2m}}(\prod_{j=1}^{2m}\left\vert \left\langle
Q_{r}^{-1/2}u^{(l)},e_{j}\right\rangle \right\vert ^{\alpha _{\lbrack \sigma
(j)]}^{(l)}})\exp \left\{ -\frac{1}{2}\left\langle
u^{(l)},u^{(l)}\right\rangle \right\} du_{1}^{(l)}...du_{2m}^{(l)} \\
&=&(2\pi )^{m}E[\prod_{j=1}^{2m}\left\vert \left\langle
Q_{r}^{-1/2}Z,e_{j}\right\rangle \right\vert ^{\alpha _{\lbrack \sigma
(j)]}^{(l)}}],
\end{eqnarray*}%
where%
\begin{equation*}
Z\sim \mathcal{N}(0,I_{2m\times 2m}).
\end{equation*}%
On the other hand, it follows from Lemma \ref{LiWei}, which is a type of
Brascamp-Lieb inequality, that%
\begin{eqnarray*}
&&E[\prod_{j=1}^{2m}\left\vert \left\langle Q_{r}^{-1/2}Z,e_{j}\right\rangle
\right\vert ^{\alpha _{\lbrack \sigma (j)]}^{(l)}}] \\
&\leq &\sqrt{\mathrm{perm}(\Sigma )}=\sqrt{\sum_{\pi \in S_{2\left\vert \alpha
^{(l)}\right\vert }}\prod_{i=1}^{2\left\vert \alpha ^{(l)}\right\vert
}a_{i\pi (i)}},
\end{eqnarray*}%
where $\mathrm{perm}(\Sigma )$ is the permanent of the covariance matrix $\Sigma
=(a_{ij})$ of the Gaussian random vector%
\begin{equation*}
\underset{\alpha _{\lbrack \sigma (1)]}^{(l)}\text{ times}}{\underbrace{%
(\left\langle Q_{r}^{-1/2}Z,e_{1}\right\rangle ,...,\left\langle
Q_{r}^{-1/2}Z,e_{1}\right\rangle }},\underset{\alpha _{\lbrack \sigma (2)]}^{(l)}%
\text{ times}}{\underbrace{\left\langle Q_{r}^{-1/2}Z,e_{2}\right\rangle
,...,\left\langle Q_{r}^{-1/2}Z,e_{2}\right\rangle }},...,\underset{\alpha
_{\lbrack \sigma (2m)]}^{(l)}\text{ times}}{\underbrace{\left\langle
Q_{r}^{-1/2}Z,e_{2m}\right\rangle ,...,\left\langle
Q_{r}^{-1/2}Z,e_{2m}\right\rangle }}),
\end{equation*}%
$\left\vert \alpha ^{(l)}\right\vert :=\sum_{j=1}^{m}\alpha _{j}^{(l)}$ and
where $S_{n}$ denotes the permutation group of size $n$.
Furthermore, using an upper bound for the permanent of positive semidefinite
matrices (see \cite{AG}) or direct computations, we find that%
\begin{equation}
\mathrm{perm}(\Sigma )=\sum_{\pi \in S_{2\left\vert \alpha ^{(l)}\right\vert
}}\prod_{i=1}^{2\left\vert \alpha ^{(l)}\right\vert }a_{i\pi (i)}\leq
(2\left\vert \alpha ^{(l)}\right\vert )!\prod_{i=1}^{2\left\vert \alpha
^{(l)}\right\vert }a_{ii}. \label{PSD}
\end{equation}
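As a sanity check of \eqref{PSD} in the smallest nontrivial case $2\left\vert \alpha ^{(l)}\right\vert =2$: since the underlying matrix is a covariance matrix, $a_{12}=a_{21}$ and $a_{12}^{2}\leq a_{11}a_{22}$ by the Cauchy-Schwarz inequality, so that

```latex
% 2x2 sanity check: Cauchy-Schwarz gives a_{12}^{2} <= a_{11}a_{22}
\begin{equation*}
\mathrm{perm}\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}%
=a_{11}a_{22}+a_{12}a_{21}\leq 2a_{11}a_{22}=2!\,a_{11}a_{22}.
\end{equation*}
```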
Let now $i\in \lbrack \sum_{k=1}^{j-1}\alpha _{\lbrack \sigma
(k)]}^{(l)}+1,\sum_{k=1}^{j}\alpha _{\lbrack \sigma (k)]}^{(l)}]$ for some
arbitrary fixed $j\in \{1,...,2m\}$. Then%
\begin{equation*}
a_{ii}=E[\left\langle Q_{r}^{-1/2}Z,e_{j}\right\rangle \left\langle
Q_{r}^{-1/2}Z,e_{j}\right\rangle ].
\end{equation*}
Further, substitution yields%
\begin{eqnarray*}
&&E[\left\langle Q_{r}^{-1/2}Z,e_{j}\right\rangle \left\langle
Q_{r}^{-1/2}Z,e_{j}\right\rangle ] \\
&=&(\det Q_{r})^{1/2}\frac{1}{(2\pi )^{m}}\int_{\mathbb{R}^{2m}}\left\langle
u,e_{j}\right\rangle ^{2}\exp (-\frac{1}{2}\left\langle
Q_{r}u,u\right\rangle )du_{1}...du_{2m} \\
&=&(\det Q_{r})^{1/2}\frac{1}{(2\pi )^{m}}\int_{\mathbb{R}%
^{2m}}u_{j}^{2}\exp (-\frac{1}{2}\left\langle Q_{r}u,u\right\rangle
)du_{1}...du_{2m}.
\end{eqnarray*}
In the next step, we apply Lemma \ref{CD} and obtain that%
\begin{eqnarray*}
&&\int_{\mathbb{R}^{2m}}u_{j}^{2}\exp (-\frac{1}{2}\left\langle
Q_{r}u,u\right\rangle )du_{1}...du_{2m} \\
&=&\frac{(2\pi )^{(2m-1)/2}}{(\det Q_{r})^{1/2}}\int_{\mathbb{R}}v^{2}\exp (-%
\frac{1}{2}v^{2})dv\frac{1}{\sigma _{j}^{2}} \\
&=&\frac{(2\pi )^{m}}{(\det Q_{r})^{1/2}}\frac{1}{\sigma _{j}^{2}},
\end{eqnarray*}%
where $\sigma _{j}^{2}:=Var[B_{s_{j}}^{H_{r}}\left\vert
B_{s_{1}}^{H_{r}},...,B_{s_{2m}}^{H_{r}}\text{ without }B_{s_{j}}^{H_{r}}%
\right] .$
We now aim at using the strong local non-determinism property (see (\ref%
{2sided})): for all $t\in \lbrack 0,T],$ $0<r<t:$%
\begin{equation*}
Var[B_{t}^{H_{r}}\left\vert B_{s}^{H_{r}},\left\vert t-s\right\vert \geq r
\right] \geq Kr^{2H_{r}}
\end{equation*}%
for a constant $K$ depending on $H_{r}$ and $T$.
The latter entails that
\begin{equation*}
(\det Q_{r}(s))^{1/2}\geq K^{(2m-1)/2}\left\vert s_{1}\right\vert
^{H_{r}}\left\vert s_{2}-s_{1}\right\vert ^{H_{r}}...\left\vert
s_{2m}-s_{2m-1}\right\vert ^{H_{r}}
\end{equation*}%
as well as%
\begin{equation*}
\sigma _{j}^{2}\geq K\min \{\left\vert s_{j}-s_{j-1}\right\vert
^{2H_{r}},\left\vert s_{j+1}-s_{j}\right\vert ^{2H_{r}}\}.
\end{equation*}%
Hence%
\begin{eqnarray*}
\prod_{j=1}^{2m}\sigma _{j}^{-2\alpha _{\lbrack \sigma (j)]}^{(l)}} &\leq
&K^{-2m}\prod_{j=1}^{2m}\frac{1}{\min \{\left\vert s_{j}-s_{j-1}\right\vert
^{2H_{r}\alpha _{\lbrack \sigma (j)]}^{(l)}},\left\vert
s_{j+1}-s_{j}\right\vert ^{2H_{r}\alpha _{\lbrack \sigma (j)]}^{(l)}}\}} \\
&\leq &C^{m+\left\vert \alpha ^{(l)}\right\vert }\prod_{j=1}^{2m}\frac{1}{%
\left\vert s_{j}-s_{j-1}\right\vert ^{4H_{r}\alpha _{\lbrack \sigma
(j)]}^{(l)}}}
\end{eqnarray*}%
for a constant $C$ only depending on $H_{r}$ and $T$.
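We remark that the determinant lower bound above is a consequence of the standard conditional-variance factorization of the determinant of a Gaussian covariance matrix:

```latex
% Determinant of a Gaussian covariance matrix as a product of conditional variances
\begin{equation*}
\det Q_{r}(s)=Var[B_{s_{1}}^{H_{r}}]\prod_{j=2}^{2m}Var\left[ B_{s_{j}}^{H_{r}}%
\left\vert B_{s_{1}}^{H_{r}},...,B_{s_{j-1}}^{H_{r}}\right. \right] ,
\end{equation*}
```

so that every factor except the first can be estimated from below by means of the local non-determinism property.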
So we conclude from (\ref{PSD}) that%
\begin{eqnarray*}
\mathrm{perm}(\Sigma ) &\leq &(2\left\vert \alpha ^{(l)}\right\vert
)!\prod_{i=1}^{2\left\vert \alpha ^{(l)}\right\vert }a_{ii} \\
&\leq &(2\left\vert \alpha ^{(l)}\right\vert )!\prod_{j=1}^{2m}((\det
Q_{r})^{1/2}\frac{1}{(2\pi )^{m}}\frac{(2\pi )^{m}}{(\det Q_{r})^{1/2}}\frac{%
1}{\sigma _{j}^{2}})^{\alpha _{\lbrack \sigma (j)]}^{(l)}} \\
&\leq &(2\left\vert \alpha ^{(l)}\right\vert )!C^{m+\left\vert \alpha
^{(l)}\right\vert }\prod_{j=1}^{2m}\frac{1}{\left\vert
s_{j}-s_{j-1}\right\vert ^{4H_{r}\alpha _{\lbrack \sigma (j)]}^{(l)}}}.
\end{eqnarray*}%
Thus%
\begin{eqnarray*}
&&E[\prod_{j=1}^{2m}\left\vert \left\langle Q_{r}^{-1/2}Z,e_{j}\right\rangle
\right\vert ^{\alpha _{\lbrack \sigma (j)]}^{(l)}}]\leq \sqrt{\mathrm{perm}(\Sigma )} \\
&\leq &\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}C^{m+\left\vert \alpha
^{(l)}\right\vert }\prod_{j=1}^{2m}\frac{1}{\left\vert
s_{j}-s_{j-1}\right\vert ^{2H_{r}\alpha _{\lbrack \sigma (j)]}^{(l)}}}.
\end{eqnarray*}%
Therefore we see from (\ref{Lambda}) and (\ref{Lambda2}) that%
\begin{eqnarray*}
&&E[\left\vert \Lambda _{\alpha }^{f}(\theta ,t,z)\right\vert ^{2}] \\
&\leq &C^{m}\sum_{\sigma \in S(m,m)}\int_{\Delta _{0,t}^{2m}}\left\vert
f_{\sigma }(s,z)\right\vert \prod_{l=1}^{d}\int_{\mathbb{R}%
^{2m}}(\prod_{j=1}^{2m}\left\vert u_{j}^{(l)}\right\vert ^{\alpha _{\lbrack
\sigma (j)]}^{(l)}})\exp \left\{ -\frac{1}{2}\lambda _{r}^{2}\left\langle
Q_{r}u^{(l)},u^{(l)}\right\rangle \right\}
du_{1}^{(l)}...du_{2m}^{(l)}ds_{1}...ds_{2m} \\
&\leq &M^{m}\sum_{\sigma \in S(m,m)}\int_{\Delta _{0,t}^{2m}}\left\vert
f_{\sigma }(s,z)\right\vert \frac{1}{\lambda _{r}^{2md}}\frac{1}{(\det
Q_{r}(s))^{d/2}}\prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}%
C^{m+\left\vert \alpha ^{(l)}\right\vert }\prod_{j=1}^{2m}\frac{1}{%
\left\vert s_{j}-s_{j-1}\right\vert ^{2H_{r}\alpha _{\lbrack \sigma
(j)]}^{(l)}}}ds_{1}...ds_{2m} \\
&\leq &\frac{1}{\lambda _{r}^{2md}}M^{m}C^{md+\left\vert \alpha \right\vert
}\prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}\sum_{\sigma
\in S(m,m)}\int_{\Delta _{0,t}^{2m}}\left\vert f_{\sigma }(s,z)\right\vert
\prod_{j=1}^{2m}\frac{1}{\left\vert s_{j}-s_{j-1}\right\vert
^{H_{r}(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma (j)]}^{(l)})}}%
ds_{1}...ds_{2m}
\end{eqnarray*}%
for a constant $M$ depending on $d$.
In the final step, we want to prove estimate (\ref{intestL}). Using
the inequality (\ref{supestL}), we get that%
\begin{eqnarray*}
&&\left\vert E\left[ \int_{(\mathbb{R}^{d})^{m}}\Lambda _{\alpha
}^{\varkappa f}(\theta ,t,z)dz\right] \right\vert \\
&\leq &\int_{(\mathbb{R}^{d})^{m}}(E[\left\vert \Lambda _{\alpha
}^{\varkappa f}(\theta ,t,z)\right\vert ^{2}])^{1/2}dz\leq \frac{1}{\lambda
_{r}^{md}}C^{m/2+\left\vert \alpha \right\vert /2}\int_{(\mathbb{R}%
^{d})^{m}}(\Psi _{\alpha }^{\varkappa f}(\theta ,t,z,H_{r}))^{1/2}dz.
\end{eqnarray*}%
By taking the supremum over $[0,T]$ with respect to each function $f_{j}$,
i.e.%
\begin{equation*}
\left\vert f_{[\sigma (j)]}(s_{j},z_{[\sigma (j)]})\right\vert \leq
\sup_{s_{j}\in \lbrack 0,T]}\left\vert f_{[\sigma (j)]}(s_{j},z_{[\sigma
(j)]})\right\vert ,j=1,...,2m
\end{equation*}%
we find that%
\begin{eqnarray*}
&&\left\vert E\left[ \int_{(\mathbb{R}^{d})^{m}}\Lambda _{\alpha
}^{\varkappa f}(\theta ,t,z)dz\right] \right\vert \\
&\leq &\frac{1}{\lambda _{r}^{md}}C^{m/2+\left\vert \alpha \right\vert
/2}\max_{\sigma \in S(m,m)}\int_{(\mathbb{R}^{d})^{m}}\left(
\prod_{l=1}^{2m}\left\Vert f_{[\sigma (l)]}(\cdot ,z_{[\sigma
(l)]})\right\Vert _{L^{\infty }([0,T])}\right) ^{1/2}dz \\
&&\times (\prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}%
\sum_{\sigma \in S(m,m)}\int_{\Delta _{0,t}^{2m}}\left\vert \varkappa
_{\sigma }(s)\right\vert \prod_{j=1}^{2m}\frac{1}{\left\vert
s_{j}-s_{j-1}\right\vert ^{H(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma
(j)]}^{(l)})}}ds_{1}...ds_{2m})^{1/2} \\
&=&\frac{1}{\lambda _{r}^{md}}C^{m/2+\left\vert \alpha \right\vert
/2}\max_{\sigma \in S(m,m)}\int_{(\mathbb{R}^{d})^{m}}\left(
\prod_{l=1}^{2m}\left\Vert f_{[\sigma (l)]}(\cdot ,z_{[\sigma
(l)]})\right\Vert _{L^{\infty }([0,T])}\right) ^{1/2}dz\cdot (\Psi _{\alpha
}^{\varkappa }(\theta ,t,H_{r}))^{1/2} \\
&=&\frac{1}{\lambda _{r}^{md}}C^{m/2+\left\vert \alpha \right\vert /2}\int_{(%
\mathbb{R}^{d})^{m}}\prod_{j=1}^{m}\left\Vert f_{j}(\cdot ,z_{j})\right\Vert
_{L^{\infty }([0,T])}dz\cdot (\Psi _{\alpha }^{\varkappa }(\theta
,t,H_{r}))^{1/2} \\
&=&\frac{1}{\lambda _{r}^{md}}C^{m/2+\left\vert \alpha \right\vert
/2}\prod_{j=1}^{m}\left\Vert f_{j}\right\Vert _{L^{1}(\mathbb{R%
}^{d};L^{\infty }([0,T]))}\cdot (\Psi _{\alpha }^{\varkappa }(\theta
,t,H_{r}))^{1/2}.
\end{eqnarray*}
\end{proof}
Using Theorem \ref{mainthmlocaltime} we obtain the following crucial
estimate (compare \cite{BNP.17}, \cite{BOPP.17}, \cite{ABP} and \cite{ACHP}):
\begin{prop}
\label{mainestimate1} Let the functions $f$ and $\varkappa $ be as in (\ref%
{f}), respectively as in (\ref{kappa}). Further, let $\theta ,\theta ^{\prime
},t\in \lbrack 0,T],\theta ^{\prime }<\theta <t$ and%
\begin{equation*}
\varkappa _{j}(s)=(K_{H_{r_{0}}}(s,\theta )-K_{H_{r_{0}}}(s,\theta ^{\prime
}))^{\varepsilon _{j}},\theta <s<t
\end{equation*}%
for every $j=1,...,m$ with $(\varepsilon _{1},...,\varepsilon _{m})\in
\{0,1\}^{m}$. Let $\alpha \in (\mathbb{N}_{0}^{d})^{m}$ be a
multi-index. If for some $r\geq r_{0}$
multi-index. If for some $r\geq r_{0}$
\begin{equation*}
H_{r}<\frac{\frac{1}{2}-\gamma _{r_{0}}}{(d-1+2\sum_{l=1}^{d}\alpha
_{j}^{(l)})}
\end{equation*}%
holds for all $j$, where $\gamma _{r_{0}}\in (0,H_{r_{0}})$ is sufficiently
small, then there exists a universal constant $C_{r_{0}}$ (depending on $%
H_{r_{0}}$, $T$ and $d$, but independent of $m$, $\{f_{i}\}_{i=1,...,m}$ and
$\alpha $) such that for any $\theta ,t\in \lbrack 0,T]$ with $\theta <t$ we
have%
\begin{eqnarray*}
&&\left\vert E\int_{\Delta _{\theta ,t}^{m}}\left( \prod_{j=1}^{m}D^{\alpha
_{j}}f_{j}(s_{j},\mathbb{B}_{s_{j}})\varkappa _{j}(s_{j})\right)
ds\right\vert \\
&\leq &\frac{1}{\lambda _{r}^{md}}C_{r_{0}}^{m+\left\vert \alpha \right\vert
}\prod_{j=1}^{m}\left\Vert f_{j}\right\Vert _{L^{1}(\mathbb{R}%
^{d};L^{\infty }([0,T]))}\left( \frac{\theta -\theta ^{\prime }}{\theta \theta
^{\prime }}\right) ^{\gamma _{r_{0}}\sum_{j=1}^{m}\varepsilon _{j}}\theta
^{(H_{r_{0}}-\frac{1}{2}-\gamma _{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}} \\
&&\times \frac{(\prod_{l=1}^{d}(2\left\vert \alpha ^{(l)}\right\vert
)!)^{1/4}(t-\theta )^{-H_{r}(md+2\left\vert \alpha \right\vert )+(H_{r_{0}}-%
\frac{1}{2}-\gamma _{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}+m}}{\Gamma
(-H_{r}(2md+4\left\vert \alpha \right\vert )+2(H_{r_{0}}-\frac{1}{2}-\gamma
_{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}+2m)^{1/2}}.
\end{eqnarray*}
\end{prop}
\begin{proof}
From the definition of $\Lambda _{\alpha }^{\varkappa f}$ in (\ref{LambdaDef})
we see that the integral in our proposition can be expressed as%
\begin{equation*}
\int_{\Delta _{\theta ,t}^{m}}\left( \prod_{j=1}^{m}D^{\alpha
_{j}}f_{j}(s_{j},\mathbb{B}_{s_{j}})\varkappa _{j}(s_{j})\right) ds=\int_{%
\mathbb{R}^{dm}}\Lambda _{\alpha }^{\varkappa f}(\theta ,t,z)dz.
\end{equation*}%
By taking expectation and using Theorem \ref{mainthmlocaltime} we get that%
\begin{equation*}
\left\vert E\int_{\Delta _{\theta ,t}^{m}}\left( \prod_{j=1}^{m}D^{\alpha
_{j}}f_{j}(s_{j},\mathbb{B}_{s_{j}})\varkappa _{j}(s_{j})\right) ds\right\vert
\leq \frac{1}{\lambda _{r}^{md}}C_{r}^{m/2+\left\vert \alpha \right\vert
/2}\prod_{j=1}^{m}\left\Vert f_{j}\right\Vert _{L^{1}(\mathbb{R}%
^{d};L^{\infty }([0,T]))}\cdot (\Psi _{\alpha }^{\varkappa }(\theta
,t,H_{r}))^{1/2},
\end{equation*}%
where in this case
\begin{eqnarray*}
&&\Psi _{\alpha }^{\varkappa }(\theta ,t,H_{r}) \\
&:&=\prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}%
\sum_{\sigma \in S(m,m)}\int_{\Delta
_{0,t}^{2m}}\prod_{j=1}^{2m}(K_{H_{r_{0}}}(s_{j},\theta
)-K_{H_{r_{0}}}(s_{j},\theta ^{\prime }))^{\varepsilon _{\lbrack \sigma (j)]}} \\
&&\frac{1}{\left\vert s_{j}-s_{j-1}\right\vert
^{H_{r}(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma (j)]}^{(l)})}}%
ds_{1}...ds_{2m}.
\end{eqnarray*}%
We wish to use Lemma \ref{VI_iterativeInt}. For this purpose, we need that $%
-H_{r}(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma (j)]}^{(l)})+(H_{r_{0}}-%
\frac{1}{2}-\gamma _{r_{0}})\varepsilon _{\lbrack \sigma (j)]}>-1$ for all $%
j=1,...,2m.$ The worst case occurs when $\varepsilon _{\lbrack \sigma (j)]}=1$
for all $j$. So it suffices that $H_{r}<\frac{\frac{1}{2}-\gamma _{r_{0}}}{(d-1+2\sum_{l=1}^{d}%
\alpha _{\lbrack \sigma (j)]}^{(l)})}$ for all $j$, since $H_{r_{0}}\geq
H_{r}$. Therefore, we get that%
\begin{eqnarray*}
\Psi _{\alpha }^{\varkappa }(\theta ,t,H_{r}) &\leq
&C_{r_{0}}^{2m}\sum_{\sigma \in S(m,m)}\left( \frac{\theta -\theta ^{\prime }}{%
\theta \theta ^{\prime }}\right) ^{\gamma _{r_{0}}\sum_{j=1}^{2m}\varepsilon
_{\lbrack \sigma (j)]}}\theta ^{(H_{r_{0}}-\frac{1}{2}-\gamma
_{r_{0}})\sum_{j=1}^{2m}\varepsilon _{\lbrack \sigma (j)]}} \\
&&\times \prod_{l=1}^{d}\sqrt{(2\left\vert \alpha ^{(l)}\right\vert )!}\Pi
_{\gamma }(2m)(t-\theta )^{-H_{r}(2md+4\left\vert \alpha \right\vert
)+(H_{r_{0}}-\frac{1}{2}-\gamma _{r_{0}})\sum_{j=1}^{2m}\varepsilon _{\lbrack \sigma
(j)]}+2m},
\end{eqnarray*}%
where $\Pi _{\gamma }(m)$ is defined as in Lemma \ref{VI_iterativeInt} and
where $C_{r_{0}}$ is a constant, which only depends on $H_{r_{0}}$ and $T$.
The factor $\Pi _{\gamma }(m)$ has the following upper bound:
\begin{equation*}
\Pi _{\gamma }(2m)\leq \frac{\prod_{j=1}^{2m}\Gamma
(1-H_{r}(d+2\sum_{l=1}^{d}\alpha _{\lbrack \sigma (j)]}^{(l)}))}{\Gamma
(-H_{r}(2md+4\left\vert \alpha \right\vert )+(H_{r_{0}}-\frac{1}{2}-\gamma
_{r_{0}})\sum_{j=1}^{2m}\varepsilon _{\lbrack \sigma (j)]}+2m)}.
\end{equation*}%
Note that $\sum_{j=1}^{2m}\varepsilon _{\lbrack \sigma
(j)]}=2\sum_{j=1}^{m}\varepsilon _{j}.$ Hence, it follows that%
\begin{eqnarray*}
&&(\Psi _{\alpha }^{\varkappa }(\theta ,t,H_{r}))^{1/2} \\
&\leq &C_{r_{0}}^{m}\left( \frac{\theta -\theta ^{\prime }}{\theta \theta
^{\prime }}\right) ^{\gamma _{r_{0}}\sum_{j=1}^{m}\varepsilon _{j}}\theta
^{(H_{r_{0}}-\frac{1}{2}-\gamma _{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}} \\
&&\times \frac{(\prod_{l=1}^{d}(2\left\vert \alpha ^{(l)}\right\vert
)!)^{1/4}(t-\theta )^{-H_{r}(md+2\left\vert \alpha \right\vert )+(H_{r_{0}}-%
\frac{1}{2}-\gamma _{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}+m}}{\Gamma
(-H_{r}(2md+4\left\vert \alpha \right\vert )+2(H_{r_{0}}-\frac{1}{2}-\gamma
_{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}+2m)^{1/2}},
\end{eqnarray*}%
where we used $\prod_{j=1}^{2m}\Gamma (1-H_{r}(d+2\sum_{l=1}^{d}\alpha
_{\lbrack \sigma (j)]}^{(l)}))\leq K^{m}$ for a constant $K=K(\gamma
_{r_{0}})>0$ and $\sqrt{a_{1}+...+a_{m}}\leq \sqrt{a_{1}}+...+\sqrt{a_{m}}$
for arbitrary non-negative numbers $a_{1},...,a_{m}$.
\end{proof}
\begin{prop}
\label{mainestimate2} Let the functions $f$ and $\varkappa $ be as in (\ref%
{f}), respectively as in (\ref{kappa}). Let $\theta ,t\in \lbrack 0,T]$ with
$\theta <t$ and%
\begin{equation*}
\varkappa _{j}(s)=(K_{H_{r_{0}}}(s,\theta ))^{\varepsilon _{j}},\theta <s<t
\end{equation*}%
for every $j=1,...,m$ with $(\varepsilon _{1},...,\varepsilon _{m})\in
\{0,1\}^{m}$. Let $\alpha \in (\mathbb{N}_{0}^{d})^{m}$ be a multi-index. If
for some $r\geq r_{0}$
\begin{equation*}
H_{r}<\frac{\frac{1}{2}-\gamma _{r_{0}}}{(d-1+2\sum_{l=1}^{d}\alpha
_{j}^{(l)})}
\end{equation*}%
holds for all $j$, where $\gamma _{r_{0}}\in (0,H_{r_{0}})$ is sufficiently
small, then there exists a universal constant $C_{r_{0}}$ (depending on $%
H_{r_{0}}$, $T$ and $d$, but independent of $m$, $\{f_{i}\}_{i=1,...,m}$ and
$\alpha $) such that for any $\theta ,t\in \lbrack 0,T]$ with $\theta <t$ we
have%
\begin{eqnarray*}
&&\left\vert E\int_{\Delta _{\theta ,t}^{m}}\left( \prod_{j=1}^{m}D^{\alpha
_{j}}f_{j}(s_{j},\mathbb{B}_{s_{j}})\varkappa _{j}(s_{j})\right)
ds\right\vert \\
&\leq &\frac{1}{\lambda _{r}^{md}}C_{r_{0}}^{m+\left\vert \alpha \right\vert
}\prod_{j=1}^{m}\left\Vert f_{j}\right\Vert _{L^{1}(\mathbb{R}%
^{d};L^{\infty }([0,T]))}\theta ^{(H_{r_{0}}-\frac{1}{2})\sum_{j=1}^{m}%
\varepsilon _{j}} \\
&&\times \frac{(\prod_{l=1}^{d}(2\left\vert \alpha ^{(l)}\right\vert
)!)^{1/4}(t-\theta )^{-H_{r}(md+2\left\vert \alpha \right\vert )+(H_{r_{0}}-%
\frac{1}{2}-\gamma _{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}+m}}{\Gamma
(-H_{r}(2md+4\left\vert \alpha \right\vert )+2(H_{r_{0}}-\frac{1}{2}-\gamma
_{r_{0}})\sum_{j=1}^{m}\varepsilon _{j}+2m)^{1/2}}.
\end{eqnarray*}
\end{prop}
\begin{proof}
The proof is similar to that of the previous proposition.
\end{proof}
\begin{rem}
\label{Remark 3.4} We mention that%
\begin{equation*}
\prod_{l=1}^{d}(2\left\vert \alpha ^{(l)}\right\vert )!\leq (2\left\vert
\alpha \right\vert )!C^{\left\vert \alpha \right\vert }
\end{equation*}%
for a constant $C$ depending on $d$. Later on in the paper, when we deal
with the existence of strong solutions, we will consider the case%
\begin{equation*}
\alpha _{j}^{(l)}\in \{0,1\}\text{ for all }j,l
\end{equation*}%
with%
\begin{equation*}
\left\vert \alpha \right\vert =m.
\end{equation*}
\end{rem}
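To illustrate the bound in Remark \ref{Remark 3.4} in the case $d=2$ (the general case is analogous): writing $a=\left\vert \alpha ^{(1)}\right\vert $ and $b=\left\vert \alpha ^{(2)}\right\vert $ with $a+b=\left\vert \alpha \right\vert $, the elementary identity

```latex
% d=2: the binomial coefficient is at least 1, so C=1 already suffices here
\begin{equation*}
(2a)!\,(2b)!=\frac{(2a+2b)!}{\binom{2a+2b}{2a}}\leq (2(a+b))!=(2\left\vert
\alpha \right\vert )!
\end{equation*}
```

shows that one may even take $C=1$ in this case.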
\bigskip
The next proposition is a verification of the sufficient condition needed to
guarantee relative compactness of the approximating sequence $%
\{X_{t}^{n}\}_{n\geq 1}$.
\begin{prop}
\label{Holderintegral} Let $b_{n}:[0,T]\times \R^{d}\rightarrow \R^{d}$, $%
n\geq 1$, be a sequence of compactly supported smooth functions converging
a.e. to $b$ such that $\sup_{n\geq 1}\Vert b_{n}\Vert _{\mathcal{L}%
_{2,p}^{q}}<\infty $, $p,q\in (2,\infty ]$. Let $X_{\cdot }^{n}$ denote the
solution of \eqref{maineq} when we replace $b$ by $b_{n}$. Further, let $%
C_{i}$ for $r_{0}=i$ be the (same) constant (depending only on $H_{i}$, $T$
and $d$) in the estimates of Propositions \ref{mainestimate1} and \ref%
{mainestimate2}. Then there exist sequences $\{\alpha _{i}\}_{i=1}^{\infty }$%
, $\beta =\{\beta _{i}\}_{i=1}^{\infty }$ (depending only on $%
\{H_{i}\}_{i=1}^{\infty }$) with $0<\alpha _{i}<\beta _{i}<\frac{1}{2}$, $%
\delta =\{\delta _{i}\}_{i=1}^{\infty }$ as in Theorem \ref{compinf} and $%
\lambda =\{\lambda _{i}\}_{i=1}^{\infty }$ in (\ref{monster}), which
satisfies (\ref{lambdacond}), (\ref{lambdacond2}), (\ref{contcond}) and
which is of the form $\lambda _{i}=\phi _{i}\cdot \varphi (C_{i})$ being
independent of the size of $\sup_{n\geq 1}\Vert b_{n}\Vert _{\mathcal{L}%
_{2,p}^{q}}$ for a sequence $\{\phi _{i}\}_{i=1}^{\infty }$ and a bounded function $\varphi$, such that
\begin{equation}
\sum_{i=1}^{\infty }\frac{|\phi _{i}|^{2}}{1-2^{-2(\beta _{i}-\alpha
_{i})}\delta _{i}^{2}}<\infty , \label{Finite}
\end{equation}%
\begin{equation*}
\sup_{n\geq 1}E[\Vert X_{t}^{n}\Vert ^{2}]<\infty ,
\end{equation*}%
\begin{equation*}
\sup_{n\geq 1}\sum_{i=1}^{\infty }\frac{1}{\delta _{i}^{2}}%
\int_{0}^{t}E[\Vert D_{t_{0}}^{i}X_{t}^{n}\Vert ^{2}]dt_{0}\leq
C_{1}(\sup_{n\geq 1}\Vert b_{n}\Vert _{\mathcal{L}_{2,p}^{q}})<\infty ,
\end{equation*}%
and
\begin{eqnarray*}
&&\sup_{n\geq 1}\sum_{i=1}^{\infty }\frac{1}{(1-2^{-2(\beta _{i}-\alpha
_{i})})\delta _{i}^{2}}\int_{0}^{t}\int_{0}^{t}\frac{E[\Vert
D_{t_{0}}^{i}X_{t}^{n}-D_{t_{0}^{\prime }}^{i}X_{t}^{n}\Vert ^{2}]}{%
|t_{0}-t_{0}^{\prime }|^{1+2\beta _{i}}}dt_{0}dt_{0}^{\prime } \\
&\leq &C_{2}(\sup_{n\geq 1}\Vert b_{n}\Vert _{\mathcal{L}_{2,p}^{q}})<\infty
\end{eqnarray*}%
for all $t\in \lbrack 0,T]$, where $C_{j}:[0,\infty )\longrightarrow \lbrack
0,\infty ),$ $j=1,2$ are continuous functions depending on $%
\{H_{i}\}_{i=1}^{\infty }$, $p$, $q$, $d$, $T$ and where $D^{i}$ denotes the
Malliavin derivative in the direction of the standard Brownian motion $W^{i}$%
, $i\geq 1$. Here, $\Vert \cdot \Vert $ denotes any matrix norm.
\end{prop}
\begin{rem}
\label{Phi}The proof of Proposition \ref{Holderintegral} shows that one may for
example choose $\lambda _{i}=\phi _{i}\cdot \varphi (C_{i})$ in (\ref%
{monster}) for $\varphi (x)=\exp (-x^{100})$ and $\{\phi
_{i}\}_{i=1}^{\infty }$ satisfying (\ref{Finite}).
\end{rem}
\begin{proof}
The most challenging estimate is the last one; the other two can be proven
easily. Take $t_{0},t_{0}^{\prime }>0$ such that $0<t_{0}^{\prime }<t_{0}<t$%
. Using the chain rule for the Malliavin derivative, see \cite[Proposition
1.2.3]{Nua10}, we have
\begin{equation*}
D_{t_{0}}^{i}X_{t}^{n}=\lambda
_{i}K_{H_{i}}(t,t_{0})I_{d}+\int_{t_{0}}^{t}b_{n}^{\prime
}(t_{1},X_{t_{1}}^{n})D_{t_{0}}^{i}X_{t_{1}}^{n}dt_{1}
\end{equation*}%
$P$-a.s. for all $0\leq t_{0}\leq t$ where $b_{n}^{\prime }(t,z)=\left(
\frac{\partial }{\partial z_{j}}b_{n}^{(i)}(t,z)\right) _{i,j=1,\dots ,d}$
denotes the Jacobian matrix of $b_{n}$ at a point $(t,z)$ and $I_{d}$ the
identity matrix in $\R^{d\times d}$. Thus we have
\begin{align*}
D_{t_{0}}^{i}X_{t}^{n}-& D_{t_{0}^{\prime }}^{i}X_{t}^{n}=\lambda
_{i}(K_{H_{i}}(t,t_{0})I_{d}-K_{H_{i}}(t,t_{0}^{\prime })I_{d}) \\
& +\int_{t_{0}}^{t}b_{n}^{\prime
}(t_{1},X_{t_{1}}^{n})D_{t_{0}}^{i}X_{t_{1}}^{n}dt_{1}-\int_{t_{0}^{\prime
}}^{t}b_{n}^{\prime }(t_{1},X_{t_{1}}^{n})D_{t_{0}^{\prime
}}^{i}X_{t_{1}}^{n}dt_{1} \\
=& \lambda _{i}(K_{H_{i}}(t,t_{0})I_{d}-K_{H_{i}}(t,t_{0}^{\prime })I_{d}) \\
& -\int_{t_{0}^{\prime }}^{t_{0}}b_{n}^{\prime
}(t_{1},X_{t_{1}}^{n})D_{t_{0}^{\prime
}}^{i}X_{t_{1}}^{n}dt_{1}+\int_{t_{0}}^{t}b_{n}^{\prime
}(t_{1},X_{t_{1}}^{n})(D_{t_{0}}^{i}X_{t_{1}}^{n}-D_{t_{0}^{\prime
}}^{i}X_{t_{1}}^{n})dt_{1} \\
=& \lambda _{i}\mathcal{K}_{t_{0},t_{0}^{\prime
}}^{H_{i}}(t)I_{d}-(D_{t_{0}^{\prime }}^{i}X_{t_{0}}^{n}-\lambda
_{i}K_{H_{i}}(t_{0},t_{0}^{\prime })I_{d}) \\
& +\int_{t_{0}}^{t}b_{n}^{\prime
}(t_{1},X_{t_{1}}^{n})(D_{t_{0}}^{i}X_{t_{1}}^{n}-D_{t_{0}^{\prime
}}^{i}X_{t_{1}}^{n})dt_{1},
\end{align*}%
where as in Proposition \ref{mainestimate1} we define
\begin{equation*}
\mathcal{K}_{t_{0},t_{0}^{\prime
}}^{H_{i}}(t)=K_{H_{i}}(t,t_{0})-K_{H_{i}}(t,t_{0}^{\prime }).
\end{equation*}
Iterating the above equation we arrive at
\begin{align*}
D_{t_{0}}^{i}X_{t}^{n}-& D_{t_{0}^{\prime }}^{i}X_{t}^{n}=\lambda _{i}%
\mathcal{K}_{t_{0},t_{0}^{\prime }}^{H_{i}}(t)I_{d} \\
& +\lambda _{i}\sum_{m=1}^{\infty }\int_{\Delta
_{t_{0},t}^{m}}\prod_{j=1}^{m}b_{n}^{\prime }(t_{j},X_{t_{j}}^{n})\mathcal{K}%
_{t_{0},t_{0}^{\prime }}^{H_{i}}(t_{m})I_{d}dt_{m}\cdots dt_{1} \\
& -\left( I_{d}+\sum_{m=1}^{\infty }\int_{\Delta
_{t_{0},t}^{m}}\prod_{j=1}^{m}b_{n}^{\prime
}(t_{j},X_{t_{j}}^{n})dt_{m}\cdots dt_{1}\right) \left( D_{t_{0}^{\prime
}}^{i}X_{t_{0}}^{n}-\lambda _{i}K_{H_{i}}(t_{0},t_{0}^{\prime })I_{d}\right)
.
\end{align*}%
On the other hand, observe that one may again write
\begin{equation*}
D_{t_{0}^{\prime }}^{i}X_{t_{0}}^{n}-\lambda
_{i}K_{H_{i}}(t_{0},t_{0}^{\prime })I_{d}=\lambda _{i}\sum_{m=1}^{\infty
}\int_{\Delta _{t_{0}^{\prime },t_{0}}^{m}}\prod_{j=1}^{m}b_{n}^{\prime
}(t_{j},X_{t_{j}}^{n})(K_{H_{i}}(t_{m},t_{0}^{\prime })I_{d})\,dt_{m}\cdots
dt_{1}.
\end{equation*}%
In summary,
\begin{equation*}
D_{t_{0}}^{i}X_{t}^{n}-D_{t_{0}^{\prime }}^{i}X_{t}^{n}=\lambda
_{i}I_{1}(t_{0}^{\prime },t_{0})+\lambda _{i}I_{2}^{n}(t_{0}^{\prime
},t_{0})+\lambda _{i}I_{3}^{n}(t_{0}^{\prime },t_{0}),
\end{equation*}%
where
\begin{align*}
I_{1}(t_{0}^{\prime },t_{0}):=& \mathcal{K}_{t_{0},t_{0}^{\prime
}}^{H_{i}}(t)I_{d}=K_{H_{i}}(t,t_{0})I_{d}-K_{H_{i}}(t,t_{0}^{\prime })I_{d}
\\
I_{2}^{n}(t_{0}^{\prime },t_{0}):=& \sum_{m=1}^{\infty }\int_{\Delta
_{t_{0},t}^{m}}\prod_{j=1}^{m}b_{n}^{\prime }(t_{j},X_{t_{j}}^{n})\mathcal{K}%
_{t_{0},t_{0}^{\prime }}^{H_{i}}(t_{m})I_{d}\ dt_{m}\cdots dt_{1} \\
I_{3}^{n}(t_{0}^{\prime },t_{0}):=& -\left( I_{d}+\sum_{m=1}^{\infty
}\int_{\Delta _{t_{0},t}^{m}}\prod_{j=1}^{m}b_{n}^{\prime
}(t_{j},X_{t_{j}}^{n})dt_{m}\cdots dt_{1}\right) \\
& \times \left( \sum_{m=1}^{\infty }\int_{\Delta _{t_{0}^{\prime
},t_{0}}^{m}}\prod_{j=1}^{m}b_{n}^{\prime
}(t_{j},X_{t_{j}}^{n})(K_{H_{i}}(t_{m},t_{0}^{\prime })I_{d})dt_{m}\cdots
dt_{1}\right) .
\end{align*}
Hence,
\begin{equation*}
E[\Vert D_{t_{0}}^{i}X_{t}^{n}-D_{t_{0}^{\prime }}^{i}X_{t}^{n}\Vert
^{2}]\leq C\lambda _{i}^{2}\left( E[\Vert I_{1}(t_{0}^{\prime },t_{0})\Vert
^{2}]+E[\Vert I_{2}^{n}(t_{0}^{\prime },t_{0})\Vert ^{2}]+E[\Vert
I_{3}^{n}(t_{0}^{\prime },t_{0})\Vert ^{2}]\right) .
\end{equation*}
It follows from Lemma \ref{VI_doubleint} and condition \eqref{Finite} that
\begin{align*}
\sum_{i=1}^{\infty }\frac{\lambda _{i}^{2}}{1-2^{-2(\beta _{i}-\alpha
_{i})}\delta _{i}^{2}}& \int_{0}^{t}\int_{0}^{t}\frac{\Vert
I_{1}(t_{0}^{\prime },t_{0})\Vert _{L^{2}(\Omega )}^{2}}{|t_{0}-t_{0}^{%
\prime }|^{1+2\beta _{i}}}dt_{0}dt_{0}^{\prime } \\
& \leq \sum_{i=1}^{\infty }\frac{\lambda _{i}^{2}}{1-2^{-2(\beta _{i}-\alpha
_{i})}\delta _{i}^{2}}t^{4H_{i}-6\gamma _{i}-2\beta _{i}-1}<\infty
\end{align*}%
for a suitable choice of sequence $\{\beta _{i}\}_{i\geq 1}\subset (0,1/2)$.
Let us continue with the term $I_{2}^{n}(t_{0}^{\prime },t_{0})$. Theorem
\ref{girsanov}, the Cauchy-Schwarz inequality and Lemma \ref{novikov}
imply
\begin{align*}
E[& \Vert I_{2}^{n}(t_{0}^{\prime },t_{0})\Vert ^{2}] \\
& \leq C(\Vert b_{n}\Vert _{L_{p}^{q}})E\left[ \left\Vert \sum_{m=1}^{\infty
}\int_{\Delta _{t_{0},t}^{m}}\prod_{j=1}^{m}b_{n}^{\prime }(t_{j},x+\mathbb{B%
}_{t_{j}}^{H})\mathcal{K}_{t_{0},t_{0}^{\prime }}^{H_{i}}(t_{m})I_{d}\
dt_{m}\cdots dt_{1}\right\Vert ^{4}\right] ^{1/2},
\end{align*}%
where $C:[0,\infty )\rightarrow \lbrack 0,\infty )$ is the function from
Lemma \ref{novikov}. Taking the supremum over $n$ we have
\begin{equation*}
\sup_{n\geq 0}C(\Vert b_{n}\Vert _{L_{p}^{q}})=:C_{1}<\infty .
\end{equation*}
Let $\Vert \cdot \Vert $ from now on denote the matrix norm in $\R^{d\times
d}$ given by $\Vert A\Vert =\sum_{i,j=1}^{d}|a_{ij}|$ for a matrix $%
A=\{a_{ij}\}_{i,j=1,\dots ,d}$. Then we have
\begin{align}
& E[\Vert I_{2}^{n}(t_{0}^{\prime },t_{0})\Vert ^{2}]\leq C_{1}\Bigg(%
\sum_{m=1}^{\infty }\sum_{j,k=1}^{d}\sum_{l_{1},\dots ,l_{m-1}=1}^{d}\Bigg\|%
\int_{\Delta _{t_{0},t}^{m}}\frac{\partial }{\partial x_{l_{1}}}%
b_{n}^{(j)}(t_{1},x+\mathbb{B}_{t_{1}}^{H}) \notag \\
& \times \frac{\partial }{\partial x_{l_{2}}}b_{n}^{(l_{1})}(t_{2},x+\mathbb{%
B}_{t_{2}}^{H})\cdots \frac{\partial }{\partial x_{k}}%
b_{n}^{(l_{m-1})}(t_{m},x+\mathbb{B}_{t_{m}}^{H})\mathcal{K}%
_{t_{0},t_{0}^{\prime }}^{H_{i}}(t_{m})dt_{m}\cdots dt_{1}\Bigg\|%
_{L^{4}(\Omega ,\R)}\Bigg)^{2}. \label{I2}
\end{align}
Now, the aim is to shuffle the four integrals above. Denote
\begin{equation} \label{VI_I}
J_{2}^{n}(t_{0}^{\prime },t_{0}):=\int_{\Delta _{t_{0},t}^{m}}\frac{\partial
}{\partial x_{l_{1}}}b_{n}^{(j)}(t_{1},x+\mathbb{B}_{t_{1}}^{H})\cdots \frac{%
\partial }{\partial x_{k}}b_{n}^{(l_{m-1})}(t_{m},x+\mathbb{B}_{t_{m}}^{H})%
\mathcal{K}_{t_{0},t_{0}^{\prime }}^{H_{i}}(t_{m})dt_{m}\cdots dt_{1}.
\end{equation}
Then, shuffling $J_{2}^{n}(t_{0}^{\prime },t_{0})$ as shown in %
\eqref{shuffleIntegral}, one can write $(J_{2}^{n}(t_{0}^{\prime
},t_{0}))^{2}$ as a sum of at most $2^{2m}$ summands of length $2m$ of the
form
\begin{equation*}
\int_{\Delta _{t_{0},t}^{2m}}g_{1}^{n}(t_{1},x+\mathbb{B}_{t_{1}}^{H})\cdots
g_{2m}^{n}(t_{2m},x+\mathbb{B}_{t_{2m}}^{H})dt_{2m}\cdots dt_{1},
\end{equation*}%
where for each $l=1,\dots ,2m$,
\begin{equation*}
g_{l}^{n}(\cdot ,x+\mathbb{B}_{\cdot }^{H})\in \left\{ \frac{\partial }{%
\partial x_{k}}b_{n}^{(j)}(\cdot ,x+\mathbb{B}_{\cdot }^{H}),\frac{\partial
}{\partial x_{k}}b_{n}^{(j)}(\cdot ,x+\mathbb{B}_{\cdot }^{H})\mathcal{K}%
_{t_{0},t_{0}^{\prime }}^{H_{i}}(\cdot ),\,j,k=1,\dots ,d\right\} .
\end{equation*}
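For illustration, in the simplest case $m=1$ the shuffle identity used here
reads, for an integrable function $g$,
\begin{equation*}
\left( \int_{t_{0}}^{t}g(t_{1})dt_{1}\right) ^{2}=2\int_{\Delta
_{t_{0},t}^{2}}g(t_{1})g(t_{2})dt_{2}dt_{1},
\end{equation*}%
since the integral over the square $[t_{0},t]^{2}$ splits into the two
simplices $\{t_{2}<t_{1}\}$ and $\{t_{1}<t_{2}\}$; in general, shuffling two
simplex integrals of length $m$ produces at most $\binom{2m}{m}\leq 2^{2m}$
summands.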
Repeating this argument once again, we find that $J_{2}^{n}(t_{0}^{\prime
},t_{0})^{4}$ can be expressed as a sum of, at most, $2^{8m}$ summands of
length $4m$ of the form
\begin{equation} \label{VI_III}
\int_{\Delta _{t_{0},t}^{4m}}g_{1}^{n}(t_{1},x+\mathbb{B}_{t_{1}}^{H})\cdots
g_{4m}^{n}(t_{4m},x+\mathbb{B}_{t_{4m}}^{H})dt_{4m}\cdots dt_{1},
\end{equation}%
where for each $l=1,\dots ,4m$,
\begin{equation*}
g_{l}^{n}(\cdot ,x+\mathbb{B}_{\cdot }^{H})\in \left\{ \frac{\partial }{%
\partial x_{k}}b_{n}^{(j)}(\cdot ,x+\mathbb{B}_{\cdot }^{H}),\frac{\partial
}{\partial x_{k}}b_{n}^{(j)}(\cdot ,x+\mathbb{B}_{\cdot }^{H})\mathcal{K}%
_{t_{0},t_{0}^{\prime }}^{H_{i}}(\cdot ),\,j,k=1,\dots ,d\right\} .
\end{equation*}
It is important to note that the function $\mathcal{K}_{t_{0},t_{0}^{\prime
}}^{H_{i}}(\cdot )$ appears only once in term \eqref{VI_I} and hence only
four times in term \eqref{VI_III}. So there are indices $j_{1},\dots
,j_{4}\in \{1,\dots ,4m\}$ such that we can write \eqref{VI_III} as
\begin{equation*}
\int_{\Delta _{t_{0},t}^{4m}}\left( \prod_{j=1}^{4m}b_{j}^{n}(t_{j},x+%
\mathbb{B}_{t_{j}}^{H})\right) \prod_{l=1}^{4}\mathcal{K}_{t_{0},t_{0}^{%
\prime }}^{H_{i}}(t_{j_{l}})dt_{4m}\cdots dt_{1},
\end{equation*}%
where
\begin{equation*}
b_{l}^{n}(\cdot ,x+\mathbb{B}_{\cdot }^{H})\in \left\{ \frac{\partial }{%
\partial x_{k}}b_{n}^{(j)}(\cdot ,x+\mathbb{B}_{\cdot }^{H}),\,j,k=1,\dots
,d\right\} ,\quad l=1,\dots ,4m.
\end{equation*}
The latter enables us to use the estimate from Proposition \ref%
{mainestimate1} for $\sum_{r=1}^{4m}\varepsilon _{r}=4,$ $\left\vert \alpha
\right\vert =4m$, $\sum_{l=1}^{d}\alpha _{j}^{(l)}=1$ for all $j,$ $H_{r}<%
\frac{1}{2(d+2)}$ for some $r\geq i$ combined with Remark \ref{Remark 3.4}.
Thus we obtain that
\begin{eqnarray*}
\left( E\left[ (J_{2}^{n}(t_{0}^{\prime },t_{0}))^{4}\right] \right) ^{1/4}
&\leq & \\
&&\frac{1}{\lambda _{r}^{md}}C_{i}^{2m}\left\Vert b_{n}\right\Vert _{L^{1}(%
\mathbb{R}^{d};L^{\infty }([0,T]))}^{m}\left\vert \frac{t_{0}-t_{0}^{\prime }%
}{t_{0}t_{0}^{\prime }}\right\vert ^{\gamma _{i}}t_{0}^{(H_{i}-\frac{1}{2}%
-\gamma _{i})} \\
&&\times \frac{C(d)^{m}((8m)!)^{1/16}\left\vert t-t_{0}\right\vert
^{-H_{r}(md+2m)+(H_{i}-\frac{1}{2}-\gamma _{i})+m}}{\Gamma (-H_{r}(2\cdot
4md+4\cdot 4m)+2(H_{i}-\frac{1}{2}-\gamma _{i})+8m)^{1/8}}
\end{eqnarray*}
for a constant $C(d)$ depending only on $d.$
Then the series in (\ref{I2}) is summable over $j,k$, $l_{1},\dots ,l_{m-1}$
and $m$. Hence, we just need to verify that the double integral is finite
for suitable $\gamma _{i}$'s and $\beta _{i}$'s. Indeed,
\begin{equation*}
\int_{0}^{t}\int_{0}^{t}\frac{\left\vert t_{0}-t_{0}^{\prime }\right\vert
^{2\gamma _{i}-1-2\beta _{i}}}{\left\vert t_{0}t_{0}^{\prime }\right\vert
^{2\gamma _{i}}}t_{0}^{2\left( H_{i}-\frac{1}{2}-\gamma _{i}\right)
}|t-t_{0}|^{-2\left( H_{i}-\frac{1}{2}-\gamma _{i}\right)
}dt_{0}dt_{0}^{\prime }<\infty ,
\end{equation*}%
whenever $2\left( H_{i}-\frac{1}{2}-\gamma _{i}\right) >-1$, $2\gamma
_{i}-1-2\beta _{i}>-1$ and $2\left( H_{i}-\frac{1}{2}-\gamma _{i}\right)
-2\gamma _{i}>-1$ which is fulfilled if for instance $\gamma _{i}<H_{i}/4$
and $0<\beta _{i}<\gamma _{i}$.
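For instance, with the concrete choice $\gamma _{i}=H_{i}/8$ and $\beta
_{i}=H_{i}/16$ one checks directly that
\begin{equation*}
2\left( H_{i}-\tfrac{1}{2}-\gamma _{i}\right) =\tfrac{7}{4}H_{i}-1>-1,\quad
2\gamma _{i}-1-2\beta _{i}=\tfrac{1}{8}H_{i}-1>-1
\end{equation*}%
and $2\left( H_{i}-\tfrac{1}{2}-\gamma _{i}\right) -2\gamma _{i}=\tfrac{3}{2}%
H_{i}-1>-1$, since $H_{i}>0$.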
Now we may choose for example a function $\varphi $ with $\varphi (x)=\exp
(-x^{100})$. In this case, we find that%
\begin{equation*}
C_{i}^{2m}\lambda _{i}=\phi _{i}C_{i}^{2m}\varphi (C_{i})\leq \phi
_{i}\left( \frac{1}{50}\right) ^{\frac{m}{50}}m^{\frac{m}{50}}.
\end{equation*}%
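Indeed, the latter bound is obtained by maximizing $x\mapsto
x^{2m}e^{-x^{100}}$ over $x\geq 0$: the maximum is attained at $x^{100}=%
\frac{2m}{100}=\frac{m}{50}$, so that
\begin{equation*}
\sup_{x\geq 0}x^{2m}e^{-x^{100}}=\left( \frac{m}{50}\right) ^{\frac{m}{50}%
}e^{-\frac{m}{50}}\leq \left( \frac{1}{50}\right) ^{\frac{m}{50}}m^{\frac{m}{%
50}}.
\end{equation*}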
So, finally, if $H_{r}$ for a fixed $r\geq i$ is sufficiently small, the
sums over $i\geq 1$ also converge, since the $\phi _{i}$ satisfy condition %
\eqref{Finite}.
For the term $I_{3}^{n}$ we may use Theorem \ref{girsanov}, Cauchy-Schwarz
inequality twice and observe that the first factor of $I_{3}^{n}$ is bounded
uniformly in $t_{0},t\in \lbrack 0,T]$ by a simple application of
Proposition \ref{mainestimate2} with $\varepsilon _{j}=0$ for all $j$. Then,
the remaining estimate is fairly similar to the case of $I_{2}^{n}$, using
Proposition \ref{mainestimate2} again. The corresponding estimate for the
Malliavin derivative follows by analogous arguments.
\end{proof}
\bigskip
The following is a consequence of combining Lemma \ref{VI_weakconv} and
Proposition \ref{Holderintegral}.
\begin{cor}
\label{VI_L2conv} For every $t\in \lbrack 0,T]$ and continuous function $%
\varphi :\R^{d}\rightarrow \R$ with at most linear growth we have
\begin{equation*}
\varphi (X_{t}^{n})\xrightarrow{n\to \infty}\varphi (E[X_{t}|\mathcal{F}%
_{t}])
\end{equation*}%
strongly in $L^{2}(\Omega )$. In addition, $E[X_{t}|\mathcal{F}_{t}]$ is
Malliavin differentiable along any direction $W^{i}$, $i\geq 1$ of $\mathbb{B%
}_{\cdot }^{H}$. Moreover, the solution $X$ is $\mathcal{F}$-adapted, thus
being a strong solution.
\end{cor}
\begin{proof}
This is a direct consequence of the relative compactness from Theorem \ref%
{compinf} combined with Proposition \ref{Holderintegral}; by Lemma \ref%
{VI_weakconv} we can identify the limit as $E[X_{t}|\mathcal{F}_{t}]$. Then
the convergence holds for any bounded continuous function as well. The
Malliavin differentiability of $E[X_{t}|\mathcal{F}_{t}]$ is verified by
taking $\varphi =I_{d}$ and the second estimate in Proposition \ref%
{Holderintegral} in connection with \cite[Proposition 1.2.3]{Nua10}.
\end{proof}
\bigskip
Finally, we can complete step (4) of our scheme.
\begin{cor}
The constructed solution $X_{\cdot }$ of \eqref{maineq} is strong.
\end{cor}
\begin{proof}
We have to show that $X_{t}$ is $\mathcal{F}_{t}$-measurable for every $t\in
\lbrack 0,T]$; then, by Remark \ref{VI_stochbasisrmk}, there exists
a strong solution in the usual sense, which is Malliavin differentiable. In
proving this, let $\varphi $ be a globally Lipschitz continuous function.
Then it follows from Corollary \ref{VI_L2conv} that there exists a
subsequence $n_{k}$, $k\geq 0$, such that
\begin{equation*}
\varphi (X_{t}^{n_{k}})\rightarrow \varphi (E[X_{t}|\mathcal{F}_{t}]),\ \
P-a.s.
\end{equation*}%
as $k\rightarrow \infty $.
Further, by Lemma \ref{VI_weakconv} we also know that
\begin{equation*}
\varphi (X_{t}^{n})\rightarrow E\left[ \varphi (X_{t})|\mathcal{F}_{t}\right]
\end{equation*}%
weakly in $L^{2}(\Omega )$. By the uniqueness of the limit we immediately
obtain that
\begin{equation*}
\varphi \left( E[X_{t}|\mathcal{F}_{t}]\right) =E\left[ \varphi (X_{t})|%
\mathcal{F}_{t}\right] ,\ \ P-a.s.
\end{equation*}%
which implies that $X_{t}$ is $\mathcal{F}_{t}$-measurable for every $t\in
\lbrack 0,T]$.
\end{proof}
\bigskip
Finally, we turn to step (5) and complete this Section by showing pathwise
uniqueness. Following the same argument as in \cite[Chapter IX, Exercise
(1.20)]{RY2004} we see that strong existence and uniqueness in law implies
pathwise uniqueness. The argument does not rely on the process being a
semimartingale. Hence, uniqueness in law is enough. The following Lemma
actually implies the desired uniqueness by estimate \eqref{thetabound} in
connection with \cite[Theorem 7.7]{LS.77}.
\begin{lem}
Let $X$ be a strong solution of \eqref{maineq} where $b\in L_{p}^{q}$, $%
p,q\in (2,\infty ]$. Then the estimates \eqref{estimateh} and %
\eqref{estimatehexp} hold for $X$ in place of $\mathbb{B}_{\cdot }^{H}$. As
a consequence, uniqueness in law holds for equation \eqref{maineq} and,
since $X$ is strong, pathwise uniqueness follows.
\end{lem}
\begin{proof}
Assume first that $b$ is bounded. Fix any $n\geq 1$ and set
\begin{equation*}
\eta _{s}^{n}=K_{H_{n}}^{-1}\left( \frac{1}{\lambda _{n}}\int_{0}^{\cdot
}b(r,X_{r})dr\right) (s).
\end{equation*}%
Since $b$ is bounded, it is easy to see from \eqref{thetan}, replacing $%
\mathbb{B}_{\cdot }^{H}$ by $X$ and bounding $b$, that for every $\kappa
\in \R$,
\begin{equation} \label{exp1}
E_{\widetilde{P}}\left[ \exp \left\{ -2\kappa \int_{0}^{T}(\eta
_{s}^{n})^{\ast }dW_{s}^{n}-2\kappa ^{2}\int_{0}^{T}|\eta
_{s}^{n}|^{2}ds\right\} \right] =1,
\end{equation}%
where
\begin{equation*}
\frac{d\widetilde{P}}{dP}=\exp \left\{ -\int_{0}^{T}(\eta _{s}^{n})^{\ast
}dW_{s}^{n}-\frac{1}{2}\int_{0}^{T}|\eta _{s}^{n}|^{2}ds\right\} .
\end{equation*}%
Hence, $X_{t}-x$ is a regularizing fractional Brownian motion with Hurst
sequence $H$ under $\widetilde{P}$. Define
\begin{equation*}
\xi _{T}^{\kappa }:=\exp \left\{ -\kappa \int_{0}^{T}(\eta _{s}^{n})^{\ast
}dW_{s}^{n}-\frac{\kappa }{2}\int_{0}^{T}|\eta _{s}^{n}|^{2}ds\right\} .
\end{equation*}%
Then,
\begin{align*}
E_{\widetilde{P}}\left[ \xi _{T}^{\kappa }\right] & =E_{\widetilde{P}}\left[
\exp \left\{ -\kappa \int_{0}^{T}(\eta _{s}^{n})^{\ast }dW_{s}^{n}-\frac{%
\kappa }{2}\int_{0}^{T}|\eta _{s}^{n}|^{2}ds\right\} \right] \\
& =E_{\widetilde{P}}\left[ \exp \left\{ -\kappa \int_{0}^{T}(\eta
_{s}^{n})^{\ast }dW_{s}^{n}-\kappa ^{2}\int_{0}^{T}|\eta
_{s}^{n}|^{2}ds\right\} \exp \left\{ \left( \kappa ^{2}-\frac{\kappa }{2}%
\right) \int_{0}^{T}|\eta _{s}^{n}|^{2}ds\right\} \right] \\
& \leq \left( E_{\widetilde{P}}\left[ \exp \left\{ 2\left\vert \kappa ^{2}-%
\frac{\kappa }{2}\right\vert \int_{0}^{T}|\eta _{s}^{n}|^{2}ds\right\} %
\right] \right) ^{1/2}
\end{align*}%
in view of \eqref{exp1}.
On the other hand, using \eqref{VI_fracL2} with $X$ in place of $\mathbb{B}%
_{\cdot }^{H}$ we have
\begin{equation*}
\int_{0}^{T}|\eta _{s}^{n}|^{2}ds\leq C_{\varepsilon ,\lambda
_{n},H_{n},T}\left( 1+\int_{0}^{T}|b(r,X_{r})|^{\frac{1+\varepsilon }{%
\varepsilon }}dr\right) ,\quad P-a.s.
\end{equation*}%
for any $\varepsilon \in (0,1)$. Hence, applying Lemma \ref{interlemma} we
get
\begin{equation*}
E_{\widetilde{P}}\left[ \xi _{T}^{\kappa }\right] \leq e^{\left\vert \kappa
^{2}-\frac{\kappa }{2}\right\vert C_{\varepsilon ,\lambda
_{n},H_{n},T}}\left( A\left( C_{\varepsilon ,\lambda _{n},H_{n},T}\left\vert
\kappa ^{2}-\frac{\kappa }{2}\right\vert \Vert |b|^{\frac{1+\varepsilon }{%
\varepsilon }}\Vert _{L_{p}^{q}}\right) \right) ^{1/2},
\end{equation*}%
where $A$ is the analytic function from Lemma \ref{interlemma}.
Furthermore, observe that for every $\kappa\in \R$ we have
\begin{align} \label{sumexp}
E_P[\xi_T^{\kappa}] = E_{\widetilde{P}}[\xi_T^{\kappa-1}].
\end{align}
In fact, \eqref{sumexp} holds for any $b\in L_p^q$ by considering $b_n:=b%
\mathbf{1}_{\{|b|\leq n\}}$, $n\geq 1$ and then letting $n\to \infty$.
Finally, let $\delta \in (0,1)$ and apply H\"{o}lder's inequality in order
to get
\begin{equation*}
E_{P}\left[ \int_{0}^{T}h(t,X_{t})dt\right] \leq T^{\delta }\left( E_{%
\widetilde{P}}[(\xi _{T}^{1})^{\frac{1+\delta }{\delta }}]\right) ^{\frac{%
\delta }{1+\delta }}\left( E_{\widetilde{P}}\left[
\int_{0}^{T}h(t,X_{t})^{1+\delta }dt\right] \right) ^{\frac{1}{1+\delta }},
\end{equation*}%
and
\begin{equation*}
E_{P}\left[ \exp \left\{ \int_{0}^{T}h(t,X_{t})dt\right\} \right] \leq
T^{\delta }\left( E_{\widetilde{P}}[(\xi _{T}^{1})^{\frac{1+\delta }{\delta }%
}]\right) ^{\frac{\delta }{1+\delta }}\left( E_{\widetilde{P}}\left[ \exp
\left\{ (1+\delta )\int_{0}^{T}h(t,X_{t})dt\right\} \right] \right) ^{\frac{1%
}{1+\delta }},
\end{equation*}%
for every Borel measurable function $h$. Since we know that $X_{t}-x$ is a
regularizing fractional Brownian motion with Hurst sequence $H$ under $%
\widetilde{P}$, the result follows from Lemma \ref{interlemma} by choosing $%
\delta $ close enough to 0.
\end{proof}
\bigskip
Using all the previous intermediate results, we are now able to state
the main result of this Section:
\begin{thm}
\label{VI_mainthm} Retain the conditions for $\lambda =\{\lambda
_{i}\}_{i\geq 1}$ with respect to $\mathbb{B}_{\cdot }^{H}$ in Proposition %
\ref{Holderintegral}. Let $b\in \mathcal{L}_{2,p}^{q}$, $p,q\in (2,\infty ]$%
. Then there exists a unique (global) strong solution $X_{t},0\leq t\leq T$
of equation \eqref{maineq}. Moreover, for every $t\in \lbrack 0,T]$, $X_{t}$
is Malliavin differentiable in each direction of the Brownian motions $W^{n}$%
, $n\geq 1$ in \eqref{compfBm}.
\end{thm}
\section{Infinitely Differentiable Flows for Irregular Vector Fields}
\label{flowsection}
From now on, we denote by $X_{t}^{s,x}$ the solution to the following SDE
driven by a regularizing fractional Brownian motion $\mathbb{B}_{\cdot }^{H}$
with Hurst sequence $H$:
\begin{equation*}
dX_{t}^{s,x}=b(t,X_{t}^{s,x})dt+d\mathbb{B}_{t}^{H},\quad s,t\in \lbrack
0,T],\quad s\leq t,\quad X_{s}^{s,x}=x\in \R^{d}.
\end{equation*}
We will then assume the hypotheses from Theorem \ref{VI_mainthm} on $b$ and $%
H$.
The next estimate essentially tells us that the stochastic mapping $x\mapsto
X_{t}^{s,x}$ is $P$-a.s. infinitely many times continuously differentiable.
In particular, it shows that the strong solution constructed in the former
section, in addition to being Malliavin differentiable, is also smooth in $x$%
. Although we will not prove it explicitly here, it is also smooth in the
Malliavin sense; since H\"{o}rmander's condition is met, this implies that
the densities of the marginals are also smooth.
\begin{thm}
\label{VI_derivative} Let $b\in C_{c}^{\infty }((0,T)\times \R^{d})$. Fix
integers $p\geq 2$ and $k\geq 1$. Choose $r$ such that $H_{r}<\frac{1}{%
2(d-1+2k)}$. Then there exists a continuous function $C_{k,d,H_{r},p,%
\overline{p},\overline{q},T}:[0,\infty )^{2}\rightarrow \lbrack 0,\infty )$,
depending on $k,d,H_{r},p,\overline{p},\overline{q}$ and $T$, such that
\begin{equation*}
\sup_{s,t\in \lbrack 0,T]}\sup_{x\in \R^{d}}\text{\emph{E}}\left[ \left\Vert
\frac{\partial ^{k}}{\partial x^{k}}X_{t}^{s,x}\right\Vert ^{p}\right] \leq
C_{k,d,H_{r},p,\overline{p},\overline{q},T}(\Vert b\Vert _{L_{\overline{p}}^{%
\overline{q}}},\Vert b\Vert _{L_{\infty }^{1}}).
\end{equation*}
\end{thm}
\begin{proof}
For notational simplicity, let $s=0$, $\mathbb{B}_{\cdot }=\mathbb{B}_{\cdot
}^{H}$ and let $X_{t}^{x},$ $0\leq t\leq T$ be the solution with respect to
the vector field $b\in C_{c}^{\infty }((0,T)\times \mathbb{R}^{d})$. We know
that the stochastic flow associated with the smooth vector field $b$ is
smooth, too (compare to e.g. \cite{Kunita}).\ Hence, we get that%
\begin{equation}
\frac{\partial }{\partial x}X_{t}^{x}=I_{d}+\int_{0}^{t}Db(u,X_{u}^{x})\cdot
\frac{\partial }{\partial x}X_{u}^{x}du,
\end{equation}%
where $Db(u,\cdot ):\mathbb{R}^{d}\longrightarrow L(\mathbb{R}^{d},\mathbb{R}%
^{d})$ is the derivative of $b$ with respect to the space variable.
By using Picard iteration, we see that%
\begin{equation}
\frac{\partial }{\partial x}X_{t}^{x}=I_{d}+\sum_{m\geq 1}\int_{\Delta
_{0,t}^{m}}Db(u_{1},X_{u_{1}}^{x})...Db(u_{m},X_{u_{m}}^{x})du_{m}...du_{1},
\label{FirstOrder}
\end{equation}%
where%
\begin{equation*}
\Delta _{s,t}^{m}=\{(u_{m},...,u_{1})\in \lbrack
0,T]^{m}:s<u_{m}<...<u_{1}<t\}.
\end{equation*}
By applying dominated convergence, we can differentiate both sides with
respect to $x$ and find that%
\begin{equation*}
\frac{\partial ^{2}}{\partial x^{2}}X_{t}^{x}=\sum_{m\geq 1}\int_{\Delta
_{0,t}^{m}}\frac{\partial }{\partial x}%
[Db(u_{1},X_{u_{1}}^{x})...Db(u_{m},X_{u_{m}}^{x})]du_{m}...du_{1}.
\end{equation*}%
Further, the Leibniz and chain rule yield%
\begin{eqnarray*}
&&\frac{\partial }{\partial x}%
[Db(u_{1},X_{u_{1}}^{x})...Db(u_{m},X_{u_{m}}^{x})] \\
&=&\sum_{r=1}^{m}Db(u_{1},X_{u_{1}}^{x})...D^{2}b(u_{r},X_{u_{r}}^{x})\frac{%
\partial }{\partial x}X_{u_{r}}^{x}...Db(u_{m},X_{u_{m}}^{x}),
\end{eqnarray*}%
where $D^{2}b(u,\cdot )=D(Db(u,\cdot )):\mathbb{R}^{d}\longrightarrow L(%
\mathbb{R}^{d},L(\mathbb{R}^{d},\mathbb{R}^{d}))$.
Therefore (\ref{FirstOrder}) entails%
\begin{eqnarray}
\frac{\partial ^{2}}{\partial x^{2}}X_{t}^{x} &=&\sum_{m_{1}\geq
1}\int_{\Delta
_{0,t}^{m_{1}}}%
\sum_{r=1}^{m_{1}}Db(u_{1},X_{u_{1}}^{x})...D^{2}b(u_{r},X_{u_{r}}^{x})
\notag \\
&&\times \left( I_{d}+\sum_{m_{2}\geq 1}\int_{\Delta
_{0,u_{r}}^{m_{2}}}Db(v_{1},X_{v_{1}}^{x})...Db(v_{m_{2}},X_{v_{m_{2}}}^{x})dv_{m_{2}}...dv_{1}\right)
\notag \\
&&\times
Db(u_{r+1},X_{u_{r+1}}^{x})...Db(u_{m_{1}},X_{u_{m_{1}}}^{x})du_{m_{1}}...du_{1}
\notag \\
&=&\sum_{m_{1}\geq 1}\sum_{r=1}^{m_{1}}\int_{\Delta
_{0,t}^{m_{1}}}Db(u_{1},X_{u_{1}}^{x})...D^{2}b(u_{r},X_{u_{r}}^{x})...Db(u_{m_{1}},X_{u_{m_{1}}}^{x})du_{m_{1}}...du_{1}
\notag \\
&&+\sum_{m_{1}\geq 1}\sum_{r=1}^{m_{1}}\sum_{m_{2}\geq 1}\int_{\Delta
_{0,t}^{m_{1}}}\int_{\Delta
_{0,u_{r}}^{m_{2}}}Db(u_{1},X_{u_{1}}^{x})...D^{2}b(u_{r},X_{u_{r}}^{x})
\notag \\
&&\times
Db(v_{1},X_{v_{1}}^{x})...Db(v_{m_{2}},X_{v_{m_{2}}}^{x})Db(u_{r+1},X_{u_{r+1}}^{x})...Db(u_{m_{1}},X_{u_{m_{1}}}^{x})
\notag \\
&&dv_{m_{2}}...dv_{1}du_{m_{1}}...du_{1} \notag \\
&=:&I_{1}+I_{2}. \label{SecondOrder}
\end{eqnarray}
In the next step, we apply Lemma \ref{OrderDerivatives} (in
connection with shuffling in Section \ref{VI_shuffles}) to the term $I_{2}$
in (\ref{SecondOrder}) and get that%
\begin{equation}
I_{2}=\sum_{m_{1}\geq 1}\sum_{r=1}^{m_{1}}\sum_{m_{2}\geq 1}\int_{\Delta
_{0,t}^{m_{1}+m_{2}}}\mathcal{H}%
_{m_{1}+m_{2}}^{X}(u)du_{m_{1}+m_{2}}...du_{1} \label{l2}
\end{equation}%
for $u=(u_{1},...,u_{m_{1}+m_{2}}),$ where the integrand $\mathcal{H}%
_{m_{1}+m_{2}}^{X}(u)\in \mathbb{R}^{d}\otimes \mathbb{R}^{d}\otimes \mathbb{%
R}^{d}$ has entries given by sums of at most $C(d)^{m_{1}+m_{2}}$ terms,
which are products of length $m_{1}+m_{2}$ of functions being elements of
the set%
\begin{equation*}
\left\{ \frac{\partial ^{\gamma ^{(1)}+...+\gamma ^{(d)}}}{\partial ^{\gamma
^{(1)}}x_{1}...\partial ^{\gamma ^{(d)}}x_{d}}b^{(r)}(u,X_{u}^{x}),\text{ }%
r=1,...,d,\text{ }\gamma ^{(1)}+...+\gamma ^{(d)}\leq 2,\text{ }\gamma
^{(l)}\in \mathbb{N}_{0},\text{ }l=1,...,d\right\} .
\end{equation*}%
Here it is important to mention that second order derivatives of functions
in those products of functions on $\Delta _{0,t}^{m_{1}+m_{2}}$ in (\ref{l2}%
) only occur once. Hence the total order of derivatives $\left\vert \alpha
\right\vert $ of those products of functions in connection with Lemma \ref%
{OrderDerivatives} in the Appendix is%
\begin{equation}
\left\vert \alpha \right\vert =m_{1}+m_{2}+1.
\end{equation}%
Let us now choose $p,c,r\in \lbrack 1,\infty )$ such that $cp=2^{q}$ for
some integer $q$ and $\frac{1}{r}+\frac{1}{c}=1.$ Then we can employ H\"{o}%
lder's inequality and Girsanov's theorem (see Theorem \ref{VI_girsanov})
combined with Lemma \ref{novikov} and obtain that%
\begin{eqnarray}
&&E[\left\Vert I_{2}\right\Vert ^{p}] \notag \\
&\leq &C(\left\Vert b\right\Vert _{L_{\overline{p}}^{\overline{q}}})\left(
\sum_{m_{1}\geq 1}\sum_{r=1}^{m_{1}}\sum_{m_{2}\geq 1}\sum_{i\in
I}\left\Vert \int_{\Delta _{0,t}^{m_{1}+m_{2}}}\mathcal{H}_{i}^{\mathbb{B}%
}(u)du_{m_{1}+m_{2}}...du_{1}\right\Vert _{L^{2^{q}}(\Omega ;\mathbb{R}%
)}\right) ^{p}, \label{Lp}
\end{eqnarray}%
where $C:[0,\infty )\longrightarrow \lbrack 0,\infty )$ is a continuous
function depending on $p,\overline{p}$ and $\overline{q}$. Here $\#I\leq
K^{m_{1}+m_{2}}$ for a constant $K=K(d)$ and the integrands $\mathcal{H}%
_{i}^{\mathbb{B}}(u)$ are of the form
\begin{equation*}
\mathcal{H}_{i}^{\mathbb{B}}(u)=\prod_{l=1}^{m_{1}+m_{2}}h_{l}(u_{l}),\quad
h_{l}\in \Lambda ,\ l=1,...,m_{1}+m_{2}
\end{equation*}%
where
\begin{equation*}
\Lambda :=\left\{
\begin{array}{c}
\frac{\partial ^{\gamma ^{(1)}+...+\gamma ^{(d)}}}{\partial ^{\gamma
^{(1)}}x_{1}...\partial ^{\gamma ^{(d)}}x_{d}}b^{(r)}(u,x+\mathbb{B}_{u}),%
\text{ }r=1,...,d, \\
\gamma ^{(1)}+...+\gamma ^{(d)}\leq 2,\text{ }\gamma ^{(l)}\in \mathbb{N}%
_{0},\text{ }l=1,...,d%
\end{array}%
\right\} .
\end{equation*}%
As above we observe that functions with second order derivatives only occur
once in those products.
Let
\begin{equation*}
J=\left( \int_{\Delta _{0,t}^{m_{1}+m_{2}}}\mathcal{H}_{i}^{\mathbb{B}%
}(u)du_{m_{1}+m_{2}}...du_{1}\right) ^{2^{q}}.
\end{equation*}%
By using shuffling (see Section \ref{VI_shuffles}) once more, successively,
we find that $J$ has a representation as a sum of at most $%
K(q)^{m_{1}+m_{2}}$ summands of the form%
\begin{equation}
\int_{\Delta
_{0,t}^{2^{q}(m_{1}+m_{2})}}%
\prod_{l=1}^{2^{q}(m_{1}+m_{2})}f_{l}(u_{l})du_{2^{q}(m_{1}+m_{2})}...du_{1},
\label{f}
\end{equation}%
where $f_{l}\in \Lambda $ for all $l$.
Note that the number of factors $f_{l}$ in the above product, which have a
second order derivative, is exactly $2^{q}$. Hence the total order of the
derivatives in (\ref{f}) in connection with Lemma \ref{OrderDerivatives}
(where one in that Lemma formally replaces $X_{u}^{x}$ by $x+\mathbb{B}_{u}$
in the corresponding terms) is
\begin{equation}
\left\vert \alpha \right\vert =2^{q}(m_{1}+m_{2}+1). \label{alpha2}
\end{equation}
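This count can be verified directly: each of the $2^{q}$ factors carrying a
second order derivative contributes $2$ to the total order, while the
remaining $2^{q}(m_{1}+m_{2})-2^{q}$ factors contribute $1$ each, so that
\begin{equation*}
\left\vert \alpha \right\vert =2\cdot 2^{q}+\left(
2^{q}(m_{1}+m_{2})-2^{q}\right) =2^{q}(m_{1}+m_{2}+1).
\end{equation*}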
We now aim at using Proposition \ref{mainestimate2} for $m=2^{q}(m_{1}+m_{2})$
and $\varepsilon _{j}=0$ and find that%
\begin{eqnarray*}
&&\left\vert E\left[ \int_{\Delta
_{0,t}^{2^{q}(m_{1}+m_{2})}}%
\prod_{l=1}^{2^{q}(m_{1}+m_{2})}f_{l}(u_{l})du_{2^{q}(m_{1}+m_{2})}...du_{1}%
\right] \right\vert \\
&\leq &C^{m_{1}+m_{2}}(\left\Vert b\right\Vert _{L_{\infty
}^{1}})^{2^{q}(m_{1}+m_{2})} \\
&&\times \frac{((2\cdot 2^{q}(m_{1}+m_{2}+1))!)^{1/4}}{\Gamma (-H_{r}(2d\cdot
2^{q}(m_{1}+m_{2})+4\cdot 2^{q}(m_{1}+m_{2}+1))+2\cdot
2^{q}(m_{1}+m_{2}))^{1/2}}
\end{eqnarray*}%
for a constant $C$ depending on $H_{r},T,d$ and $q$.
Therefore the latter combined with (\ref{Lp}) implies that%
\begin{eqnarray*}
&&E[\left\Vert I_{2}\right\Vert ^{p}] \\
&\leq &C(\left\Vert b\right\Vert _{L_{\overline{p}}^{\overline{q}}})\left(
\sum_{m_{1}\geq 1}\sum_{m_{2}\geq 1}\Big( K^{m_{1}+m_{2}}(\left\Vert
b\right\Vert _{L_{\infty }^{1}})^{2^{q}(m_{1}+m_{2})}\right. \\
&&\left. \times \frac{((2\cdot 2^{q}(m_{1}+m_{2}+1))!)^{1/4}}{\Gamma
(-H_{r}(2d\cdot 2^{q}(m_{1}+m_{2})+4\cdot 2^{q}(m_{1}+m_{2}+1))+2\cdot
2^{q}(m_{1}+m_{2}))^{1/2}}\Big) ^{1/2^{q}}\right) ^{p}
\end{eqnarray*}%
for a constant $K$ depending on $H_{r},$ $T,$ $d,$ $p$ and $q$.
Since $\frac{1}{2(d+3)}\leq \frac{1}{2(d+2\frac{m_{1}+m_{2}+1}{m_{1}+m_{2}})}
$ for $m_{1},$ $m_{2}\geq 1$, one concludes that the above sum converges,
whenever $H_{r}<\frac{1}{2(d+3)}$.
Further, one gets an estimate for $E[\left\Vert I_{1}\right\Vert ^{p}]$ by
similar reasoning as above. In summary, we obtain the proof for $k=2$.
We now explain how to generalize the previous line of
reasoning to the case $k\geq 2$: In this case, we have that%
\begin{equation}
\frac{\partial ^{k}}{\partial x^{k}}X_{t}^{x}=I_{1}+...+I_{2^{k-1}},
\label{Ik}
\end{equation}%
where each $I_{i},$ $i=1,...,2^{k-1}$ is a sum of iterated integrals over
simplices of the form $\Delta _{0,u}^{m_{j}},$ $0<u<t,$ $j=1,...,k$ with
integrands having at most one product factor $D^{k}b$, while the other
factors are of the form $D^{j}b,j\leq k-1$.
In the following we need some notation: for multi-indices $%
m:=(m_{1},...,m_{k})$ and $r:=(r_{1},...,r_{k-1})$, set%
\begin{equation*}
m_{j}^{-}:=\sum_{i=1}^{j}m_{i}
\end{equation*}%
and%
\begin{equation*}
\sum_{\substack{ m\geq 1 \\ r_{l}\leq m_{l}^{-} \\ l=1,...,k-1}}%
:=\sum_{m_{1}\geq 1}\sum_{r_{1}=1}^{m_{1}}\sum_{m_{2}\geq
1}\sum_{r_{2}=1}^{m_{2}^{-}}...\sum_{r_{k-1}=1}^{m_{k-1}^{-}}\sum_{m_{k}\geq
1}.
\end{equation*}%
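For example, in the case $k=2$ this notation reduces to
\begin{equation*}
\sum_{\substack{ m\geq 1 \\ r_{1}\leq m_{1}^{-}}}=\sum_{m_{1}\geq
1}\sum_{r_{1}=1}^{m_{1}}\sum_{m_{2}\geq 1},
\end{equation*}%
which is exactly the triple sum appearing in the representation (\ref{l2})
of $I_{2}$.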
In what follows, without loss of generality we confine ourselves to deriving
an estimate with respect to the summand $I_{2^{k-1}}$ in (\ref{Ik}). Just as
in the case $k=2,$ we obtain by employing Lemma \ref{OrderDerivatives} (in
connection with shuffling in Section \ref{VI_shuffles}) that
\begin{equation}
I_{2^{k-1}}=\sum_{\substack{ m\geq 1 \\ r_{l}\leq m_{l}^{-} \\ l=1,...,k-1}}%
\int_{\Delta _{0,t}^{m_{1}+...+m_{k}}}\mathcal{H}%
_{m_{1}+...+m_{k}}^{X}(u)du_{m_{1}+...+m_{k}}...du_{1}
\end{equation}%
for $u=(u_{m_{1}+...+m_{k}},...,u_{1}),$ where the integrand $\mathcal{H}%
_{m_{1}+...+m_{k}}^{X}(u)\in \otimes _{j=1}^{k+1}\mathbb{R}^{d}$ has
entries, which are given by sums of at most $C(d)^{m_{1}+...+m_{k}}$ terms.
Those terms are given by products of length $m_{1}+...+m_{k}$ of functions,
which are elements of the set%
\begin{equation*}
\left\{
\begin{array}{c}
\frac{\partial ^{\gamma ^{(1)}+...+\gamma ^{(d)}}}{\partial ^{\gamma
^{(1)}}x_{1}...\partial ^{\gamma ^{(d)}}x_{d}}b^{(r)}(u,X_{u}^{x}),r=1,...,d,
\\
\gamma ^{(1)}+...+\gamma ^{(d)}\leq k,\gamma ^{(l)}\in \mathbb{N}%
_{0},l=1,...,d%
\end{array}%
\right\} .
\end{equation*}%
Exactly as in the case $k=2$ we can invoke Lemma \ref{OrderDerivatives} in
the Appendix and get that the total order of derivatives $\left\vert \alpha
\right\vert $ of those products of functions is
\begin{equation}
\left\vert \alpha \right\vert =m_{1}+...+m_{k}+k-1.
\end{equation}%
Then we can adopt the line of reasoning as before and choose $p,c,r\in
\lbrack 1,\infty )$ such that $cp=2^{q}$ for some integer $q$ and $\frac{1}{r%
}+\frac{1}{c}=1$ and find by applying H\"{o}lder's inequality and Girsanov's
theorem (see Theorem \ref{VI_girsanov}) combined with Lemma \ref{novikov}
that%
\begin{eqnarray}
&&E[\left\Vert I_{2^{k-1}}\right\Vert ^{p}] \notag \\
&\leq &C(\left\Vert b\right\Vert _{L_{\overline{p}}^{\overline{q}}})\left(
\sum_{\substack{ m\geq 1 \\ r_{l}\leq m_{l}^{-} \\ l=1,...,k-1}}\sum_{i\in
I}\left\Vert \int_{\Delta _{0,t}^{m_{1}+...+m_{k}}}\mathcal{H}_{i}^{\mathbb{B}%
}(u)du_{m_{1}+...+m_{k}}...du_{1}\right\Vert _{L^{2^{q}}(\Omega ;\mathbb{R}%
)}\right) ^{p}, \label{Lp2}
\end{eqnarray}%
where $C:[0,\infty )\longrightarrow \lbrack 0,\infty )$ is a continuous
function depending on $p,\overline{p}$ and $\overline{q}$. Here $\#I\leq
K^{m_{1}+...+m_{k}}$ for a constant $K=K(d)$ and the integrands $\mathcal{H}%
_{i}^{\mathbb{B}}(u)$ take the form
\begin{equation*}
\mathcal{H}_{i}^{\mathbb{B}}(u)=\prod_{l=1}^{m_{1}+...+m_{k}}h_{l}(u_{l}),%
\text{ }h_{l}\in \Lambda ,\text{ }l=1,...,m_{1}+...+m_{k},
\end{equation*}%
where
\begin{equation*}
\Lambda :=\left\{
\begin{array}{c}
\frac{\partial ^{\gamma ^{(1)}+...+\gamma ^{(d)}}}{\partial ^{\gamma
^{(1)}}x_{1}...\partial ^{\gamma ^{(d)}}x_{d}}b^{(r)}(u,x+\mathbb{B}_{u}),%
\text{ }r=1,...,d, \\
\gamma ^{(1)}+...+\gamma ^{(d)}\leq k,\text{ }\gamma ^{(l)}\in \mathbb{N}%
_{0},\text{ }l=1,...,d%
\end{array}%
\right\} .
\end{equation*}%
Define
\begin{equation*}
J=\left( \int_{\Delta _{0,t}^{m_{1}+...+m_{k}}}\mathcal{H}_{i}^{\mathbb{B}%
}(u)du_{m_{1}+...+m_{k}}...du_{1}\right) ^{2^{q}}.
\end{equation*}%
Once more, repeated shuffling (see Section \ref{VI_shuffles}) shows that $J$
can be represented as a sum of at most $K(q)^{m_{1}+...+m_{k}}$ summands
of the form%
\begin{equation}
\int_{\Delta
_{0,t}^{2^{q}(m_{1}+...+m_{k})}}%
\prod_{l=1}^{2^{q}(m_{1}+...+m_{k})}f_{l}(u_{l})du_{2^{q}(m_{1}+....+m_{k})}...du_{1},
\label{f2}
\end{equation}%
where $f_{l}\in \Lambda $ for all $l$.
By applying Lemma \ref{OrderDerivatives} again (where one in that Lemma
formally replaces $X_{u}^{x}$ by $x+\mathbb{B}_{u}$ in the corresponding
expressions) we obtain that the total order of the derivatives in the
products of functions in (\ref{f2}) is given by%
\begin{equation}
\left\vert \alpha \right\vert =2^{q}(m_{1}+...+m_{k}+k-1).
\end{equation}
Then Proposition \ref{mainestimate2} for $m=2^{q}(m_{1}+...+m_{k})$ and $%
\varepsilon _{j}=0$ yields that%
\begin{eqnarray*}
&&\left\vert E\left[ \int_{\Delta
_{0,t}^{2^{q}(m_{1}+...+m_{k})}}%
\prod_{l=1}^{2^{q}(m_{1}+...+m_{k})}f_{l}(u_{l})du_{2^{q}(m_{1}+...+m_{k})}...du_{1}%
\right] \right\vert \\
&\leq &C^{m_{1}+...+m_{k}}(\left\Vert b\right\Vert _{L_{\infty
}^{1}})^{2^{q}(m_{1}+...+m_{k})} \\
&&\times \frac{((2\cdot 2^{q}(m_{1}+...+m_{k}+k-1))!)^{1/4}}{\Gamma
(-H_{r}(2d\cdot 2^{q}(m_{1}+...+m_{k})+4\cdot 2^{q}(m_{1}+...+m_{k}+k-1))+2\cdot
2^{q}(m_{1}+...+m_{k}))^{1/2}}
\end{eqnarray*}%
for a constant $C$ depending on $H_{r},$ $T,$ $d$ and $q$.
Thus we can conclude from (\ref{Lp2}) that%
\begin{eqnarray*}
&&E[\left\Vert I_{2^{k-1}}\right\Vert ^{p}] \\
&\leq &C(\left\Vert b\right\Vert _{L_{\overline{p}}^{\overline{q}}})\left(
\sum_{m_{1}\geq 1}...\sum_{m_{k}\geq 1}\Big( K^{m_{1}+...+m_{k}}(\left\Vert
b\right\Vert _{L_{\infty }^{1}})^{2^{q}(m_{1}+...+m_{k})}\right. \\
&&\left. \times \frac{((2\cdot 2^{q}(m_{1}+...+m_{k}+k-1))!)^{1/4}}{\Gamma
(-H_{r}(2d\cdot 2^{q}(m_{1}+...+m_{k})+4\cdot 2^{q}(m_{1}+...+m_{k}+k-1))+2\cdot
2^{q}(m_{1}+...+m_{k}))^{1/2}}\Big) ^{1/2^{q}}\right) ^{p} \\
&\leq &C(\left\Vert b\right\Vert _{L_{\overline{p}}^{\overline{q}}})\left(
\sum_{m\geq 1}\sum_{\substack{ l_{1},...,l_{k}\geq 0: \\ l_{1}+...+l_{k}=m}}%
\Big( K^{m}(\left\Vert b\right\Vert _{L_{\infty }^{1}})^{2^{q}m}\right. \\
&&\left. \times \frac{((2\cdot 2^{q}(m+k-1))!)^{1/4}}{\Gamma
(-H_{r}(2d\cdot 2^{q}m+4\cdot 2^{q}(m+k-1))+2\cdot 2^{q}m)^{1/2}}\Big)
^{1/2^{q}}\right) ^{p}
\end{eqnarray*}%
for a constant $K$ depending on $H_{r},$ $T,$ $d,$ $p$ and $q$.
Since $H_{r}<\frac{1}{2(d-1+2k)}$ by assumption, we see that the above
sum converges. Hence the proof follows.
\end{proof}
\bigskip
\bigskip The following is the main result of this section. It shows that the
regularizing fractional Brownian motion $\mathbb{B}_{\cdot }^{H}$ ``produces''
an infinitely continuously differentiable stochastic flow $x\mapsto X_{t}^{x}$
when $b$ merely belongs to $\mathcal{L}_{2,p}^{q}$ for any $p,q\in
(2,\infty ]$.
\begin{thm}
Assume that the conditions for $\lambda =\{\lambda _{i}\}_{i=1}^{\infty }$
with respect to $\mathbb{B}_{\cdot }^{H}$ in Theorem \ref{VI_mainthm} hold.
Suppose that $b\in \mathcal{L}_{2,p}^{q}$, $p,q\in (2,\infty ]$. Let $%
U\subset \R^{d}$ be an open and bounded set and $X_{t},$ $0\leq t\leq T$ the
solution of \eqref{maineq}. Then for all $t\in \lbrack 0,T]$ we have that
\begin{equation*}
X_{t}^{\cdot }\in \bigcap_{k\geq 1}\bigcap_{\alpha >2}L^{2}(\Omega
,W^{k,\alpha }(U)).
\end{equation*}
\end{thm}
\begin{proof}
First, we approximate the irregular drift vector field $b$ by a sequence of
functions $b_{n}:[0,T]\times \R^{d}\rightarrow \R^{d}$, $n\geq 0$ in $%
C_{c}^{\infty }((0,T)\times \R^{d},\R^{d})$ in the sense of \eqref{VI_Xn}.
Let $X^{n,x}=\{X_{t}^{n,x},t\in \lbrack 0,T]\}$ be the solution to %
\eqref{maineq} with initial value $x\in \R^{d}$ associated with $b_{n}$.
We find that for any test function $\varphi \in C_{c}^{\infty }(U,\R^{d})$
and fixed $t\in \lbrack 0,T]$ the set of random variables
\begin{equation*}
\langle X_{t}^{n,\cdot },\varphi \rangle :=\int_{U}\langle
X_{t}^{n,x},\varphi (x)\rangle _{\R^{d}}dx,\quad n\geq 0
\end{equation*}%
is relatively compact in $L^{2}(\Omega )$. In proving this, we want to apply
the compactness criterion Theorem \ref{compinf} in terms of the Malliavin
derivative in the Appendix. Using the sequence $\{\delta
_{i}\}_{i=1}^{\infty }$ in Proposition \ref{Holderintegral}, we get that
\begin{align*}
\sum_{i=1}^{\infty }\frac{1}{\delta _{i}^{2}}E[\int_{0}^{T}|D_{s}^{i,(j)}%
\langle X_{t}^{n,\cdot },\varphi \rangle |^{2}ds]=& \sum_{l=1}^{d}\left(
\int_{U}E[D_{s}^{i,(j)}X_{t}^{n,x,(l)}]\varphi _{l}(x)dx\right) ^{2} \\
\leq & d\Vert \varphi \Vert _{L^{2}(\R^{d},\R^{d})}^{2}\lambda \{\mbox{supp }%
(\varphi )\}\sup_{x\in U}\sum_{i=1}^{\infty }\frac{1}{\delta _{i}^{2}}E\left[
\int_{0}^{T}\Vert D_{s}^{i}X_{t}^{n,x}\Vert ^{2}ds\right] ,
\end{align*}%
where $D^{i,(j)}$ denotes the Malliavin derivative in the direction of $%
W^{i,(j)}$ where $W^{i}$ is the $d$-dimensional standard Brownian motion
defining $B^{H_{i},i}$ and $W^{i,(j)}$ its $j$-th component, $\lambda $ the
Lebesgue measure on $\R^{d}$, $\mbox{supp }(\varphi )$ the support of $%
\varphi $ and $\Vert \cdot \Vert $ a matrix norm. So it follows from the
estimates in Proposition \ref{Holderintegral} that
\begin{equation*}
\sup_{n\geq 0}\sum_{i=1}^{\infty }\frac{1}{\delta _{i}^{2}}\Vert D_{\cdot
}^{i}\langle X_{t}^{n,\cdot },\varphi \rangle \Vert _{L^{2}(\Omega \times
\lbrack 0,T])}^{2}\leq C\Vert \varphi \Vert _{L^{2}(\R^{d},\R%
^{d})}^{2}\lambda \{\mbox{supp }(\varphi )\}.
\end{equation*}%
Similarly, we get that
\begin{equation*}
\sup_{n\geq 0}\sum_{i=1}^{\infty }\frac{1}{(1-2^{-2(\beta _{i}-\alpha
_{i})})\delta _{i}^{2}}\int_{0}^{T}\int_{0}^{T}\frac{E[\Vert D_{s^{\prime
}}^{i}\langle X_{t}^{n,\cdot },\varphi \rangle -D_{s}^{i}\langle
X_{t}^{n,\cdot },\varphi \rangle \Vert ^{2}]}{|s^{\prime }-s|^{1+2\beta _{i}}%
}\,ds\,ds^{\prime }<\infty
\end{equation*}%
for some sequences $\{\alpha _{i}\}_{i=1}^{\infty }$, $\{\beta
_{i}\}_{i=1}^{\infty }$ as in Proposition \ref{Holderintegral}. Hence $%
\langle X_{t}^{n,\cdot },\varphi \rangle $, $n\geq 0$ is relatively compact
in $L^{2}(\Omega )$. Denote by $Y_{t}(\varphi )$ its limit after taking (if
necessary) a subsequence.
By adopting the same reasoning as in Lemma \ref{VI_weakconv} one proves that
\begin{equation*}
\langle X_{t}^{n,\cdot },\varphi \rangle \xrightarrow{n \to \infty}\langle
X_{t}^{\cdot },\varphi \rangle
\end{equation*}%
weakly in $L^{2}(\Omega )$. Then by uniqueness of the limit we see that
\begin{equation*}
\langle X_{t}^{n,\cdot },\varphi \rangle \underset{n\longrightarrow \infty }{%
\longrightarrow }Y_{t}(\varphi )=\langle X_{t}^{\cdot },\varphi \rangle
\end{equation*}%
in $L^{2}(\Omega )$ for all $t$ (without using a subsequence).
We observe that the sequence $X_{t}^{n,\cdot }$, $n\geq 0,$ is uniformly bounded in the
norm of $L^{2}(\Omega ,W^{k,\alpha }(U))$ for each $k\geq 1$. Indeed,
from Proposition \ref{VI_derivative} it follows that
\begin{align*}
\sup_{n\geq 0}\Vert X_{t}^{n,\cdot }\Vert _{L^{2}(\Omega ,W^{k,\alpha
}(U))}^{2}=& \sup_{n\geq 0}\sum_{i=0}^{k}E\left[ \Vert \frac{\partial ^{i}}{%
\partial x^{i}}X_{t}^{n,\cdot }\Vert _{L^{\alpha }(U)}^{2}\right] \\
\leq & \sum_{i=0}^{k}\int_{U}\sup_{n\geq 0}E\left[ \Vert \frac{\partial ^{i}%
}{\partial x^{i}}X_{t}^{n,x}\Vert ^{\alpha }\right] ^{\frac{2}{\alpha }}dx \\
<& \infty .
\end{align*}
The space $L^{2}(\Omega ,W^{k,\alpha }(U))$, $\alpha \in (1,\infty )$ is
reflexive. So the set $\{X_{t}^{n,x}\}_{n\geq 0}$ is (relatively) weakly
compact in $L^{2}(\Omega ,W^{k,\alpha }(U))$ for every $k\geq 1$. Hence,
there exists a subsequence $n(j)$, $j\geq 0$ such that
\begin{equation*}
X_{t}^{n(j),\cdot }\xrightarrow[j\to \infty]{w}Y\in L^{2}(\Omega
,W^{k,\alpha }(U)).
\end{equation*}
We also know that $X_{t}^{n,x}\rightarrow X_{t}^{x}$ strongly in $%
L^{2}(\Omega )$ for all $t$.
So for all $A\in \mathcal{F}$ and $\varphi \in C_{0}^{\infty }(\R^{d},\R%
^{d}) $ we have for all multi-indices $\gamma $ with $\left\vert \gamma
\right\vert \leq k$ that
\begin{eqnarray*}
E[1_{A}\langle X_{t}^{\cdot },D^{\gamma }\varphi \rangle ]
&=&\lim_{j\rightarrow \infty }E[1_{A}\langle X_{t}^{n(j),\cdot },D^{\gamma
}\varphi \rangle ] \\
&=&\lim_{j\rightarrow \infty }(-1)^{\left\vert \gamma \right\vert
}E[1_{A}\langle D^{\gamma }X_{t}^{n(j),\cdot },\varphi \rangle
]=(-1)^{\left\vert \gamma \right\vert }E[1_{A}\langle D^{\gamma }Y,\varphi
\rangle ]
\end{eqnarray*}%
Using the latter, we can conclude that
\begin{equation*}
X_{t}^{\cdot }\in L^{2}(\Omega ,W^{k,\alpha }(U)),\ \ P\mbox{-a.s.}
\end{equation*}%
Since $k\geq 1$ is arbitrary, the proof follows.
\end{proof}
\section{Introduction}
Entanglement is a fascinating aspect of many-body quantum systems.~\cite{Amico2008:Entanglement} It describes how a many-body wave function cannot in general be written as a tensor product of individual single-particle wave functions. The degree to which a wave function fails to be written as a product state of two subsystem wave functions can be quantified in terms of the entanglement entropy (EE) between these two subsystems. In one dimension, gapped quantum systems have an EE that stays constant as the size of the subsystem is increased. This is consistent with the so-called ``area law,'' which states that the EE grows with the area of the boundary of the subsystem.\cite{Eisert2010:AreaLaws} However in gapless conformally invariant systems, the EE violates the area law and exhibits a logarithmic divergence with a prefactor given by the central charge. \cite{Holzhey1994:GeometricEntropyCFT,Calabrese2004:EEandQFT,Korepin2004:thermal} The logarithmic growth of entanglement has been observed in a number of quantum critical lattice models whose scaling limit is described by a CFT.
The entanglement entropy also has sub-leading terms that contain universal information beyond the central charge. Namely, one can find power law dependencies on the subsystem size with exponents given by the scaling dimensions of various conformal field theory (CFT) operators.\cite{Cardy2010:Unusual} Indeed, such universal contributions have been observed both numerically\cite{laflorencie2006boundary,Calabrese2010:ParityEffects} and analytically.\cite{Fagotti2011:UniversalParity} These sub-leading terms in the entanglement entropy can be used to characterize the operator content of the underlying CFT of various lattice models.\cite{Xavier2012:FiniteSizeCorrectionsEE, Xavier2011:ParitySpinS,Dalmonte2011:EstimatingOrder}
In this work we consider the R\'{e}nyi entanglement entropy (defined in Sec. \ref{sec:data}) in the context of SU($N$) Heisenberg antiferromagnetic spin chains. We are able to completely characterize the underlying CFT for this model based on the central charge from the leading log term, as well as the operator content contained in the sub-leading oscillatory terms, which decay as power laws. We have also measured the R\'{e}nyi entropy of a subsystem as a function of temperature in order to gain perspective on finite temperature effects and as a secondary means of extracting the central charge.
\section{The lattice Hamiltonian}
We consider the following Hamiltonian with the spin on each lattice site transforming under the fundamental representation of the SU($N$) algebra.
{\allowdisplaybreaks
\begin{equation}
H_{\Pi} = J\sum_{<ij>}\sum^{N-1}_{\alpha ,\beta =0} |\beta_{i}\alpha_{j}\rangle\langle\alpha_{i}\beta_{j}|
\label{eq:PermutationHamiltonian}
\end{equation}}
This model\cite{Sutherland1975:MulticompModel} consists of a sum of bond operators that permute nearest neighbor spins (colors) which are labeled by numbers 0 through $N-1$. $H_\Pi$ reduces to the spin-$1/2$ Heisenberg model when $N=2$, and provides a natural extension to larger $N$. We will consider the case when $J$ is positive (antiferromagnetic) and spins tend to anti-align with one another. Since it takes $N$ lattice sites to form an SU($N$) singlet, we expect and find that the ground state consists of equal numbers of each color and is non-degenerate if the chain is an integer multiple of $N$. Finally, and most importantly for the work that we present here, the ground state is described by the SU($N$)$_{1}$ WZW model with central charge $c=N-1$\cite{Affleck1986:CriticalExpSUN,Affleck1988:CriticalSUN} and $N-1$ primary fields with scaling dimensions $\Delta_{a}=a-a^{2}/N$ where $a \in [1,N-1]$.\cite{Bouwknegt1999:ExclusionCFTWZW,Assaraf1999:MetalInsulator}
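For small chains, the bond operators in Eq.~(\ref{eq:PermutationHamiltonian}) can be built explicitly as permutation matrices. The following sketch (an illustration in Python; the function names are ours and not part of any published code) constructs $H_{\Pi}$ this way and checks that for $N=2$ a single bond operator equals $(\mathbb{1}+\vec{\sigma}_{i}\cdot\vec{\sigma}_{j})/2$, so that $H_{\Pi}$ reduces to the spin-$1/2$ Heisenberg model up to a constant shift.

```python
import numpy as np

def bond_permutation(N, L, i, j):
    """Matrix of sum_{a,b} |b_i a_j><a_i b_j|: permutes the colors on sites i and j."""
    dim = N ** L
    P = np.zeros((dim, dim))
    for s in range(dim):
        colors = [(s // N ** k) % N for k in range(L)]   # color carried by each site
        colors[i], colors[j] = colors[j], colors[i]
        t = sum(c * N ** k for k, c in enumerate(colors))
        P[t, s] = 1.0
    return P

def H_pi(N, L, J=1.0):
    """Permutation Hamiltonian, Eq. (1), on an open chain of L sites."""
    return J * sum(bond_permutation(N, L, i, i + 1) for i in range(L - 1))

# N = 2 check: the two-site permutation equals (1 + sigma_i . sigma_j) / 2,
# so H_pi is the spin-1/2 Heisenberg model up to a constant shift.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
heisenberg_bond = 0.5 * (np.eye(4) + np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
print(np.allclose(bond_permutation(2, 2, 0, 1), heisenberg_bond.real))   # -> True
```

This brute-force construction is only practical at the small sizes used for exact-diagonalization checks.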
Models with SU($N$) symmetry are of both theoretical and experimental interest, since it has been shown that ultra cold atoms in optical traps can give rise to this symmetry.\cite{Gorshkov2010:TwoOrbitalSUN} In fact, the model we consider here can be obtained from the SU($N$) Hubbard model at 1/$N$ filling in the limit of large on-site repulsion (see [\onlinecite{Assaraf1999:MetalInsulator},\onlinecite{Manmana2011:SUNmagnetismUCA}] for a numerical study of the Mott transition).
Entanglement in this class of models has previously been studied using DMRG for $N\leq 4$,\cite{Fuhringer2008:DMRGSUN} though the universal sub-leading oscillations were not present in the von Neumann entanglement entropy under closed boundary conditions. Other studies\cite{Frischmuth1999:ThermoSU4,Messio2012:entropyDep} found oscillations in the spin-spin correlation function for different values of $N$, verifying that the periodicity is given by 2$\pi/N$. In fact, one can make a precise connection between entanglement and spin correlations in one dimension.\cite{Song2010:EntanglementFluctuations}
Here we will study in detail the scaling form of these oscillations which are induced in the R\'{e}nyi entanglement entropy under both open and closed boundaries. We will demonstrate quantitatively that the decay of these oscillations contain interesting information about the scaling dimensions of operators in the WZW field theory.
\begin{figure}[!t]
\centerline{\includegraphics[angle=0,width=0.95\columnwidth]{EDvsQMCsu2.pdf}}
\caption{Comparison of QMC and exact diagonalization for the ground state REE of a 14 site chain with $N$=2 under periodic boundary conditions. This is shown for different values of the R\'{e}nyi index $\alpha$, as indicated in the legend. Colored points are data from exact diagonalization and black circles connected by dashed lines are obtained from QMC for $\alpha=2,3,4,5$. Note that under periodic boundaries, oscillations in the REE appear when $\alpha>1$ (see the end of Appendix \ref{app_replica}).}
\label{fig:EDsu2}
\end{figure}
\section{Numerical results}
\label{sec:data}
We begin by first defining the R\'{e}nyi entanglement entropy (REE). Take a one-dimensional system of length $L$ and partition it into two segments of length $l_{A}$ and $l_{B}=L-l_{A}$. Construct the density matrix for the entire system ($\rho$), and compute the reduced density matrix ($\rho_{A}=\mathrm{Tr}_{B}\rho$) by tracing over the degrees of freedom in $l_{B}$. The R\'{e}nyi entanglement entropy is then given by
\begin{equation}
S_{\alpha}(\rho_{A})= \frac{1}{1-\alpha}\log(\text{Tr}\{ \rho^{\alpha}_{A} \}),
\label{eq:REE}
\end{equation}
where one obtains the von Neumann entanglement entropy $S(l_{A})=-\textrm{Tr}\{\rho_{A}\log(\rho_{A})\}$ by taking the limit $\alpha\to1$. Appendix \ref{app_replica} reviews the extended ensemble approach introduced in [\onlinecite{Humeniuk2012:ExtendedEnsemble}] that allows for efficient measurements of the REE with QMC. In Fig. \ref{fig:EDsu2} we compare our QMC results versus exact diagonalization for an SU(2) chain of length $L=14$ under periodic boundaries for several values of the R\'{e}nyi index ($\alpha$). The QMC delivers exact results within controllable statistical error bars. Similar agreement is found for $N>2$ (not shown).
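For reference, Eq.~(\ref{eq:REE}) can be evaluated directly for small systems from the spectrum of $\rho_{A}$; the sketch below (our illustration, with hypothetical helper names) also recovers the von Neumann entropy in the limit $\alpha\to 1$.

```python
import numpy as np

def reduced_density_matrix(psi, dim_A, dim_B):
    """rho_A = Tr_B |psi><psi| for a pure state on a bipartition A x B."""
    M = psi.reshape(dim_A, dim_B)
    return M @ M.conj().T

def renyi_entropy(rho_A, alpha):
    """Eq. (2); alpha = 1 returns the von Neumann entropy as the limiting case."""
    ev = np.linalg.eigvalsh(rho_A)
    ev = ev[ev > 1e-14]                 # drop numerical zeros
    if alpha == 1:
        return float(-np.sum(ev * np.log(ev)))
    return float(np.log(np.sum(ev ** alpha)) / (1.0 - alpha))

# Bell state: rho_A = identity/2, so S_alpha = log 2 for every Renyi index alpha.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_A = reduced_density_matrix(bell, 2, 2)
print(round(renyi_entropy(rho_A, 2), 4))   # -> 0.6931 (= log 2)
```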
\subsection{REE periodic boundaries}
\label{subsec:REEclosedBC}
Inspired by previous work on the XXZ spin chain we fit our numerical data to the following scaling form\cite{Calabrese2010:ParityEffects}
{\allowdisplaybreaks
\begin{equation}
S_{\alpha}(l_{A}) =S^{\mathrm{log}}_{\alpha}(l_{A})+S^{\mathrm{osc}}_{\alpha}(l_{A})+\tilde{c}_\alpha,
\label{eq:GenScalingForm}
\end{equation}}%
where
{\allowdisplaybreaks
\begin{equation}
S^{\mathrm{log}}_{\alpha}(l_{A}) =\frac{c}{6 \eta}\left(1+\frac{1}{\alpha}\right)\log\left[\left(\frac{\eta L}{\pi}\sin\left(\frac{\pi l_{A}}{L}\right)\right)\right]
\label{eq:Slog}
\end{equation}}%
and
{\allowdisplaybreaks
\begin{equation}
S^{\mathrm{osc}}_{\alpha}(l_{A}) =F_{\alpha}(l_{A}/L)\cos(2k_Fl_{A})\left|\frac{2\eta L}{\pi}\sin(\pi l_{A}/L)\right|^{-\frac{2\Delta_{1}}{\eta \alpha}},
\label{eq:Sosc}
\end{equation}}%
where $\eta=1,2$ is for periodic and open boundary conditions, respectively. The universal parameters are the central charge $c$, the Fermi momentum $k_{F}$, and the scaling dimension $\Delta_{1}$. $F_{\alpha}(l_{A}/L)$ is a universal scaling function (into which a factor of $|\sin(k_{F})|^{\frac{2\Delta_{1}}{\eta \alpha}}$ has been absorbed) and $\tilde{c}_\alpha$ is a non-universal constant. In the present study we find that approximating $F_{\alpha}(l_{A}/L)$ by a constant allows for a sufficiently accurate extraction of the parameters of interest. In the rest of this paper we take $k_{F}=\pi/N$, which is clearly seen in the data. All of our simulations are performed with $\beta J=N L$, which ensures that finite temperature effects are negligible.
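The fitting form is straightforward to implement; as a sketch (illustrative only, with $F_{\alpha}$ taken constant as in our fits, and with arbitrary parameter values), Eqs.~(\ref{eq:Slog}) and (\ref{eq:Sosc}) read:

```python
import numpy as np

def S_log(lA, L, c, alpha, eta=1):
    """Eq. (4): leading logarithmic term; eta = 1 (periodic), eta = 2 (open)."""
    chord = (eta * L / np.pi) * np.sin(np.pi * lA / L)
    return (c / (6.0 * eta)) * (1.0 + 1.0 / alpha) * np.log(chord)

def S_osc(lA, L, F, kF, Delta1, alpha, eta=1):
    """Eq. (5), with the universal scaling function F_alpha taken constant."""
    chord = np.abs((2.0 * eta * L / np.pi) * np.sin(np.pi * lA / L))
    return F * np.cos(2.0 * kF * lA) * chord ** (-2.0 * Delta1 / (eta * alpha))

# SU(3) at alpha = 2: c = 2, k_F = pi/3, Delta_1 = 1 - 1/3 = 2/3 (F is arbitrary here).
L = 72
lA = np.arange(1, L)
S = S_log(lA, L, c=2, alpha=2) + S_osc(lA, L, F=0.1, kF=np.pi / 3, Delta1=2 / 3, alpha=2)
# The form is symmetric under l_A -> L - l_A, as the entropy of a pure state must be.
```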
Fig. \ref{fig:REE} shows our data for the REE with periodic boundary condition for $2\leq N \leq 6$. When $N=2$, there is just one primary field with conformal weight $\Delta_{1}$. In the case of higher $N$, there are primary fields with less relevant scaling dimensions that contribute in addition to the oscillatory behavior. In the periodic case, oscillations are very small, and one oscillatory term from the most relevant primary field is sufficient to describe the data. However, precisely because the oscillations are small, we are unable to reliably extract the scaling dimensions from our numerical fits (though it must be included for reliable extraction of the central charge). Higher values of the R\'{e}nyi index indeed make it easier to extract exponents from the oscillations; however, this route is impractical since finite size effects become greater for larger $\alpha$. The benefit of considering periodic boundaries is that it leads to very accurate estimates of the central charge with minimal finite size effects, as shown in Fig. \ref{fig:REE} and Table \ref{tab:central}.
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{EE.pdf}}
\caption{REE as a function of the subsystem size ($l_{A}$) with periodic boundary conditions. We set the R\'{e}nyi index $\alpha = 2$ and the total chain lengths are integer multiples of $N$. Oscillations have Fermi momentum $k_{F}=\pi / N$. Solid lines are best fits to the CFT scaling form. In the right inset we have subtracted the oscillatory and constant pieces of the REE, and plotted it against $S_{\alpha}^{\mathrm{log}}(l_{A})/c$. This is plotted on top of lines with slope $N-1$. The best fit values for the central charges are given in Table \ref{tab:central}.}
\label{fig:REE}
\end{figure}
\begin{table}
\begin{tabular}{cccc} \hline
$N$ & $L$ & $c$ & $c_{\rm CFT}$ \\%[3pt]
\hline
$2$ & $64$ & $0.99(1)$ & $1$ \\%[3pt]
$3$ & $74$ & $2.01(1)$ & $2$ \\%[3pt]
$4$ & $64$ & $2.99(1)$ & $3$ \\%[3pt]
$5$ & $70$ & $3.99(1)$ & $4$ \\%[3pt]
$6$ & $72$ & $5.01(1)$ & $5$ \\%[3pt]
\hline
\end{tabular}
\caption{Best fit central charges corresponding to Fig. \ref{fig:REE}. Exact values are given by $c_{\rm CFT}=N-1$. These results are obtained by excluding the first few data points when fitting to the form Eq.~(\ref{eq:GenScalingForm}).}
\label{tab:central}
\end{table}
\subsection{REE open boundaries, $N<4$}
\label{subsec:REEopenBCNls4}
In order to efficiently extract the scaling dimensions from the REE, we consider open boundary conditions where oscillations are much more pronounced. We begin with $N=2,3$ where there is just one distinct scaling dimension. The second R\'{e}nyi entropy along with the best fit of $\Delta_{1}$ is given in Fig. \ref{fig:su2su3Kll}. We find that $\Delta_{1}$ in the SU(2) case is not fully converged due to the presence of logarithmic corrections to correlations that have not been accounted for.\cite{Affleck1999:LogCorrections} Interestingly, this seems to have less of an effect in the SU(3) case where the best fit in the region $l_{A}\gg1$ converges close to the analytical value in the thermodynamic limit (see Inset of Fig.~\ref{fig:su2su3Kll}).
In order to see the qualitative signature of the primary fields, we show in Fig. \ref{fig:EEfouriersu2su3} the discrete Fourier transform of the REE appearing in Fig. \ref{fig:su2su3Kll}. Before taking the Fourier transform we used the fact that $S_{\alpha}(l_{A},L)=S_{\alpha}(L-l_{A},L)$ to reconstruct the REE along the entire chain length. We then dropped $L/4$ points from each edge in order to minimize finite size effects coming from the boundary. The Fourier transform is given by
{\allowdisplaybreaks
\begin{equation}
\tilde{S}_{k} =\frac{1}{\sqrt{n}}\sum_{j=0}^{n-1}S_{j}e^{-2\pi i k j /n},
\label{eq:EEfourier}
\end{equation}}%
where $S_{j}:j\in\lbrack0,n-1\rbrack$ is the list of entries in $S_{\alpha}(l_{A})$ after the points have been dropped and $n$ is the total number of elements left. We have dropped the $\alpha$ index from the discrete Fourier transform for ease of notation.
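The post-processing pipeline around Eq.~(\ref{eq:EEfourier}) can be sketched on a synthetic signal of the expected CFT form (the amplitude $0.4$ and the sizes below are arbitrary choices for illustration); the Fourier peak then lands at the bin nearest $2k_{F}\,n/(2\pi)=n/N$:

```python
import numpy as np

N, L = 3, 120
kF = np.pi / N
lA = np.arange(1, L)                                  # l_A = 1, ..., L - 1 on the full chain
S = np.log((L / np.pi) * np.sin(np.pi * lA / L)) + 0.4 * np.cos(2 * kF * lA)

kept = S[L // 4 : -(L // 4)]                          # drop L/4 points from each edge
n = len(kept)
idx = np.arange(n)
S_k = np.array([abs(np.sum(kept * np.exp(-2j * np.pi * k * idx / n)))
                for k in range(n)]) / np.sqrt(n)      # modulus of Eq. (6)

peak = 1 + int(np.argmax(S_k[1 : n // 2 + 1]))        # skip the smooth k = 0 component
print(peak)                                           # bin nearest 2 k_F n / (2 pi) = n / N
```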
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{su2su3Kll.pdf}}
\caption{Second R\'{e}nyi entropy with open boundaries for $N=2,3$. In the inset we plot the scaling dimension ($\Delta_{1}$) as obtained by fitting the QMC data. The solid black lines are the exact values, and the QMC results are plotted as a function of the number of boundary points that are excluded from the fit.}
\label{fig:su2su3Kll}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{EE_fourierSu2Su3.pdf}}
\caption{Discrete Fourier transform Eq.~(\ref{eq:EEfourier}) of the REE data appearing in Fig. \ref{fig:su2su3Kll}. Here we used the fact that $S_{\alpha}(l_{A},L)=S_{\alpha}(L-l_{A},L)$ to reconstruct the REE along the entire chain length, then dropped $L/4$ points from each edge before taking the Fourier transform. Peaks in the Fourier transform appear at integer multiples of 2$k_{F}$.}
\label{fig:EEfouriersu2su3}
\end{figure}
\subsection{REE open boundaries, $N\geq4$}
\label{subsec:REEopenBCNgeq4}
We now move to $N=4$, which is the first $N$ for which there is more than one distinct scaling dimension. We hence have to generalize Eq.~(\ref{eq:Sosc}) to a sum of oscillating terms. We use the following form,
{\allowdisplaybreaks
\begin{equation}
S^{\mathrm{osc}}_{\alpha}(l_{A}) =\sum_{a=1}^{N-1} f_{\alpha}^{a}\cos(2ak_Fl_{A})\left|\frac{2\eta L}{\pi}\sin(\pi l_{A}/L)\right|^{-\frac{2\Delta_{a}}{\eta\alpha}},
\label{eq:Sosc2}
\end{equation}}%
where we achieve very good fits to our QMC data, again taking the universal scaling function to be a constant.
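As a small illustration (ours, not the production fitting code), the $N-1$ scaling dimensions $\Delta_{a}=a-a^{2}/N$ entering Eq.~(\ref{eq:Sosc2}) and the resulting sum of harmonics can be tabulated as:

```python
import numpy as np

def scaling_dims(N):
    """Delta_a = a - a^2/N, a = 1, ..., N-1; note the symmetry Delta_a = Delta_{N-a}."""
    a = np.arange(1, N)
    return a - a ** 2 / N

def S_osc_multi(lA, L, f, N, alpha=2, eta=2):
    """Eq. (7): one harmonic per primary field, constant scaling functions f[a-1]."""
    kF = np.pi / N
    chord = np.abs((2.0 * eta * L / np.pi) * np.sin(np.pi * lA / L))
    return sum(f[a - 1] * np.cos(2.0 * a * kF * lA) * chord ** (-2.0 * D / (eta * alpha))
               for a, D in zip(range(1, N), scaling_dims(N)))

print(scaling_dims(4))   # -> [0.75 1.   0.75]
```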
In Figs. \ref{fig:su4L32L64L128} and \ref{fig:su5L30L70L120} we have fit our data from $N=4$ and $N=5$ with the oscillatory piece Eq.~(\ref{eq:Sosc2}) and use it to extract the two distinct scaling dimensions. For $N=4,5$ we find it necessary to go to even larger system sizes in order to show convergence of the scaling dimensions to their CFT values. Strong finite size effects are apparent in the extraction of $\Delta_{2}$, however it is essential to include it as a fit parameter in order to obtain a reasonable estimate of $\Delta_{1}$.
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{su4Kll1Kll2.pdf}}
\caption{Second R\'{e}nyi entropy with open boundaries for $N=4$ and $L=32,64,128$. The inset is similar to Fig.~\ref{fig:su2su3Kll}, although now we fit two different primary field scaling dimensions. Strong finite size effects are apparent in the extraction of $\Delta_{2}$, the signature of which is much weaker than that of $\Delta_{1}$. Error bars indicate stochastic error and do not include the systematic error inherent in the fit.}
\label{fig:su4L32L64L128}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{su5Kll1Kll2.pdf}}
\caption{Second R\'{e}nyi entropy with open boundaries for $N=5$ and $L=30,70,120$. This figure is similar to Fig. \ref{fig:su4L32L64L128}.}
\label{fig:su5L30L70L120}
\end{figure}
Though it is difficult to obtain accurate quantitative estimates of $\Delta_{2}$ using this method, clear qualitative signatures of all primary fields are present in the Fourier spectrum of the REE shown in Figs. \ref{fig:EEfouriersu4} and \ref{fig:EEfouriersu5}.
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{EE_fourierSu4L32L64L128.pdf}}
\caption{Fourier transform of the second R\'{e}nyi entropy for $N=4$ given in Fig. \ref{fig:su4L32L64L128}.}
\label{fig:EEfouriersu4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{EE_fourierSu5L30L70L120.pdf}}
\caption{Fourier transform of the second R\'{e}nyi entropy for $N=5$ given in Fig. \ref{fig:su5L30L70L120}.}
\label{fig:EEfouriersu5}
\end{figure}
\subsection{Effect of finite temperature on R\'{e}nyi entropy}
\label{subsec:thermalRE}
Finally we consider the thermal R\'{e}nyi entropy, which is defined in the same way as Eq.~(\ref{eq:REE}), except that the density matrix is no longer pure (constructed from only the ground state) but rather it is mixed (constructed from a thermal distribution of excited states):
{\allowdisplaybreaks
\begin{equation}
\rho=\frac{e^{-\beta H}}{Z}.
\label{eq:Rhothermal}
\end{equation}}%
The thermal R\'{e}nyi entropy allows us to extract the central charge at finite temperature which in practice is much less computationally intensive.
Fig. \ref{fig:finiteTemergence} shows the emergence of a linear scaling region in the thermal R\'{e}nyi entropy that now captures the entanglement between subsystems as well as the thermal entropy of subsystem $A$. Since the entropy goes like the log of the number of states, and in the high temperature limit all of the $N^{l_{A}}$ states are equally probable, we naturally expect a linear scaling region to emerge at finite temperature.
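This expectation can be made precise in the strict infinite-temperature limit, where the reduced density matrix is maximally mixed:

```latex
\begin{equation*}
\rho_{A}=\frac{I}{N^{l_{A}}}\;\Longrightarrow\;
\mathrm{Tr}\{\rho_{A}^{\alpha}\}=N^{l_{A}(1-\alpha)}\;\Longrightarrow\;
S_{\alpha}(\rho_{A})=\frac{\log N^{l_{A}(1-\alpha)}}{1-\alpha}=l_{A}\log N,
\end{equation*}
```

independently of $\alpha$, so the slope of the linear region approaches $\log N$ as $\beta J\to 0$.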
We use the following scaling form to fit to our thermal R\'{e}nyi entropy data\cite{Calabrese2004:EEandQFT,Korepin2004:thermal}
{\allowdisplaybreaks
\begin{equation}
S_{\alpha}(\beta |l_{A}) \sim \left (1+\frac{1}{\alpha} \right ) \frac{\pi c l_{A}}{12 k_{F} \beta },
\label{eq:Sthermal}
\end{equation}}%
where we fix $k_{F}=\pi/N$. We tested that this formula gives the correct central charge in the linear scaling regime for different values of $N$ and $\alpha$ and for both open and closed boundary conditions. Here we only present data for $N=3$ and $\alpha=2$.
In Fig. \ref{fig:finiteTsu3} we use the scaling form Eq.~(\ref{eq:Sthermal}) to extract an effective central charge for $N=3$ chains at different values of $L$ and $\beta J$, which is given in the inset. We clearly see that the central charge flows to its analytical value in the limit $L \to \infty$ and $\beta J \gg 1$. In practice, we find the central charge approaches a value slightly higher than its analytical one. This is due to the fact that oscillations have been neglected by considering only troughs, leading to a linear fit with a slightly larger slope.
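Extracting the effective central charge from the linear regime amounts to a one-parameter slope fit; a minimal sketch (our illustration, applied to synthetic data generated from Eq.~(\ref{eq:Sthermal}) itself with $c=2$):

```python
import numpy as np

def fit_central_charge(lA, S, kF, beta, alpha=2):
    """Invert Eq. (8): the slope of S vs l_A in the linear regime gives c."""
    slope = np.polyfit(lA, S, 1)[0]
    return slope * 12.0 * kF * beta / ((1.0 + 1.0 / alpha) * np.pi)

# Synthetic check: data generated from Eq. (8) itself with c = 2 (SU(3)).
N, beta, alpha = 3, 15.0, 2
kF = np.pi / N
lA = np.arange(10, 40)
S = (1.0 + 1.0 / alpha) * np.pi * 2.0 * lA / (12.0 * kF * beta)
print(fit_central_charge(lA, S, kF, beta, alpha))   # recovers c = 2 up to roundoff
```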
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{12chain_QMCvsED_FiniteT_OBCandPBC.pdf}}
\caption{Thermal R\'{e}nyi entropy of a length 12 chain for SU(2) with both open (left) and closed (right) boundaries. Colored data points are obtained from exact diagonalization, and the black QMC data points are in perfect agreement. Oscillations still occur at finite temperature, but become drowned out at small enough values of $\beta J$ where the linear scaling regime emerges and open and closed chains take on the same scaling form.}
\label{fig:finiteTemergence}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=1.0\columnwidth]{su3_thermal_renyi2.pdf}}
\caption{Thermal R\'{e}nyi entropy at $\beta J = 15$ as a function of subsystem size plotted for several lengths on periodic SU(3) chains. The inset shows effective central charges extracted using the form Eq.~(\ref{eq:Sthermal}) for several values of total length and $\beta J$. The fits only include points that are well within the linear scaling regime.}
\label{fig:finiteTsu3}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have investigated the R\'{e}nyi entanglement entropy in the context of critical SU($N$) spin chains which are described by a WZW non-linear sigma model in the thermodynamic limit. We showed that signatures of all $N-1$ primary fields are present in the oscillations of the entanglement entropy. We further used the analytical form of the oscillations given by Eq.~(\ref{eq:Sosc2}) to extract the numerical values of the scaling dimensions, which are consistent with the results of CFT.
We considered both closed and open boundary conditions, where the former proves effective in extracting the central charge, while the latter is more suitable for extracting the scaling dimensions of primary fields. Finally, we demonstrated universal behavior of the thermal R\'{e}nyi entropy that allows for extraction of the central charge with less computational effort.
These results serve to illustrate the wealth of information contained in the entanglement entropy. By measuring this quantity alone, one determines all the parameters that make up the continuum description in terms of a CFT. One could extend this work by considering sub-leading (possibly oscillating) terms in the entanglement entropy for different representations of SU($N$). Such models are described by more general WZW CFTs. These have been studied numerically in Ref. [\onlinecite{Fuhringer2008:DMRGSUN}] and Ref. [\onlinecite{Rachel2009:SpinonConfinement}], though it would be interesting to see the structure of oscillations that one observes and whether it is possible for scaling dimensions to be extracted via some generalization of Eq.~(\ref{eq:Sosc2}).
Partial financial support was received through NSF DMR-1056536. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575; in particular, resources were used on the Trestles cluster housed at the San Diego Supercomputing Center (SDSC) allocated under TG-DMR130040. Part of this work was completed while one author (RKK) held
an adjunct faculty position at the TIFR.
\section{Introduction}
Stochastic rounding (SR) is an idea proposed in the 1950s by von Neumann and Goldstine~\cite{Neumann1947NumericalIO}. First, it can be used to estimate empirically the numerical error of computer programs: SR introduces random noise in each floating-point operation, and a statistical analysis of the set of sampled outputs then simulates the effect of rounding errors. To make this simulation available, various tools such as Verificarlo~\cite{verificarlo}, Verrou~\cite{verrou} and Cadna~\cite{cadna} have been developed. Second, SR can be used as a replacement for the default deterministic rounding mode in numerical simulations.
It has been demonstrated that in multiple domains such as neural networks, ODEs, PDEs, and quantum mechanics~\cite{survey}, SR provides better results than the IEEE-754 default rounding mode~\cite{norm}. Connolly et al.~\cite{theo21stocha} show that SR successfully prevents the phenomenon of stagnation that takes place in various applications such as neural networks, ODEs and PDEs. In particular, deep neural networks are prone to stagnation during the training phase~\cite{gupta}. For PDEs solved via Runge--Kutta finite difference methods in low precision, SR avoids stagnation in the computation of the heat equation solution~\cite{pde}.
Hardware units implementing stochastic rounding are still unavailable in most computers.
However, SR has been introduced in various specialized processors, such as the Graphcore IPU, which supports SR for binary32 and binary16 arithmetic~\cite{graph-c}, and the Intel neuromorphic chip Loihi~\cite{davies2018loihi}, where it improves the accuracy of biological neuron and synapse models. Also, AMD~\cite{amd}, NVIDIA~\cite{nvidia}, IBM~\cite{ibm1,ibm2}, and other computing companies~\cite{com1,com2,com3} own several related patents. These developments support the idea that hardware implementations of stochastic rounding will become more widely available in the future.
Most current hardware implements the IEEE-754 standard~\cite{norm}, which defines five rounding modes for floating-point arithmetic, all of them deterministic: round to nearest ties to even (the default), round to nearest ties away, round to zero, round to $+ \infty$ and round to $-\infty$.
SR, on the other hand, is a non-deterministic rounding mode: for a number that cannot be represented exactly in the working precision, it randomly chooses the next larger or smaller floating-point number.
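As a toy illustration of this rounding mode (ours; a uniform grid stands in for the floating-point lattice, and the sketch is not tied to any of the cited hardware implementations), SR-nearness can be written as:

```python
import numpy as np

def sr_round(x, grid, rng):
    """SR-nearness on a uniform grid of spacing `grid` (a stand-in for the
    floating-point lattice): round up with probability equal to the relative
    distance from x to the lower grid point, round down otherwise."""
    x = np.asarray(x, dtype=float)
    lo = np.floor(x / grid) * grid
    p = (x - lo) / grid                     # P(round up); p = 0 if x is on the grid
    return lo + grid * (rng.random(np.shape(x)) < p)

# Unbiasedness: E[sr_round(x)] = lo (1 - p) + (lo + grid) p = x exactly,
# whereas round-to-nearest would deterministically return 0.25 here.
rng = np.random.default_rng(1)
samples = sr_round(np.full(100_000, 0.3), grid=0.25, rng=rng)
print(samples.mean())    # close to 0.3, although only 0.25 and 0.5 ever occur
```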
In the literature, several properties and results of SR have been proven. Connolly et al.~\cite{theo21stocha} show that under SR-nearness the expected value coincides with the exact value for a large family of algorithms.
Based on the Azuma-Hoeffding inequality and the martingale theory, recent works on the inner product show that SR probabilistic bound of the forward error is proportional to $\sqrt{n}u$ rather than $nu$~\cite{ilse}. Moreover, the martingale central limit theorem implies that under certain conditions, the error converges in distribution to a normal distribution that is characterized by its mean and variance~\cite{dacunha1986probability}. This behaviour is often observed in practice.
In this case, the number of significant digits can be estimated by $-\log(\frac{\sigma}{\lvert \mu \rvert})$ where $\sigma$ is the standard deviation (the square root of the variance) and $\mu$ is the expected value~\cite{sohier2021confidence}.
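For instance, on a synthetic model of SR outputs (hypothetical uniform relative perturbations of size $u$, chosen merely to illustrate the estimator, not a simulation of an actual SR computation), the estimate reads:

```python
import numpy as np

rng = np.random.default_rng(0)
exact = np.pi
u = 2.0 ** -11                                   # unit roundoff of a toy low precision
# hypothetical SR outputs: exact value under uniform relative noise of size u
samples = exact * (1.0 + u * (rng.random(10_000) - 0.5))
mu, sigma = samples.mean(), samples.std()
digits = -np.log10(sigma / abs(mu))
print(digits)    # roughly -log10(u / sqrt(12)), i.e. about 3.85 decimal digits
```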
The variance also allows one to use several probabilistic tools such as concentration inequalities, which bound how much a random variable deviates from some value (typically, its expected value)~\cite{boucheron2013concentration}. To our knowledge, the variance analysis of an SR computation has not attracted any attention in the literature.
The purpose of this paper is to further the probabilistic investigation of SR with the following contributions:
\begin{enumerate}\addtocounter{enumi}{-1}
\item We review the works of M.~P. Connolly, N.~J. Higham and T.~Mary~\cite{theo21stocha} and of I.~C.~F. Ipsen and H.~Zhou~\cite{ilse}, which show that the forward error of the inner product is, with probability arbitrarily close to $1$, proportional to $\sqrt{n}u$ rather than to the deterministic bound of $nu$~\cite{ilse}.
\item Under stochastic rounding and without any additional assumption, we propose Lemma~\ref{model}, a general framework applicable to a wide class of algorithms that allows one to compute a variance bound. We choose the inner product and Horner algorithms as applications. Our bound is deterministic and depends on the condition number, the problem size $n$, and the unit roundoff $u$ of the floating-point arithmetic in use.
\item We extend the method proposed in~\cite{ilse} to derive a new forward error bound of the Horner algorithm in $O(\sqrt{n}u)$. This illustrates how these tools can be applied (with some work) to any algorithm based on a fixed sequence of sum and products.
\item We introduce a new approach to derive a probabilistic bound in $O(\sqrt{n}u)$, based on the variance calculation and the Bienaymé–Chebyshev inequality. This approach gives a tighter forward error bound than the existing one mentioned in item $2$ for probabilities at most $0.758$, and the bound remains the tighter one beyond a problem size $n$ that depends on $u$.
\end{enumerate}
Interestingly, the variance method yields a tight probabilistic error bound in low precision. In this regard, studying algorithms under stochastic rounding in low precision, especially bfloat16, is becoming increasingly attractive due to its higher speed and lower energy consumption. Recent works show that in various domains such as PDEs~\cite{pde}, ODEs~\cite{ode} and neural networks~\cite{gupta}, SR provides positive effects in this precision format compared to the deterministic IEEE-754~\cite{norm} default rounding mode.
Section~\ref{sec:back}
presents the background on floating-point arithmetic and more particularly SR-nearness, a stochastic rounding mode introduced in \cite[p.~34]{parker1997monte}, that has the important property of being unbiased. It also satisfies the mean independence property, an assumption weaker than independence yet powerful enough to yield important results by martingale theory.
Section~\ref{sec:var}
is articulated around Lemma~\ref{model} that bounds the variance of the numerical error for a wide class of algorithms. We apply this result to the inner product and Horner algorithms in Theorem~\ref{variance bound-inner} and Theorem~\ref{variance-bound-horner}, respectively.
Section~\ref{sec:pbBound}
shows that, under SR-nearness rounding, the numerical error of these two algorithms is probabilistically bounded in $O(\sqrt{n}u)$ instead of the deterministic bound in $O(nu)$. We first prove it with the Azuma–Hoeffding inequality and martingale theory:
we analyze the techniques used for the inner product in the works of Higham and Mary~\cite{theo21stocha, theo19} and of Ipsen and Zhou~\cite{ilse}, point out the differences between these two works, and adapt them to compute the relative error of the Horner method for polynomial evaluation. We then use the Bienaymé–Chebyshev inequality which, combined with the previous variance bound, leads to a probabilistic bound in $O(\sqrt{n}u)$.
The probabilistic bounds above depend on three parameters: the precision $u$, the problem size $n$, and the probability $p$ that a SR-nearness computation has an error greater than the bound. In Section~\ref{bound-analyze}, we analyze these probabilistic bounds and show that the one obtained by the Bienaymé-Chebyshev inequality is tighter in many cases; in particular, for any given $p$ and $u$, there exists a problem size $n$ above which the Bienaymé–Chebyshev bound is tighter.
Numerical experiments in Section~\ref{sec:exp} illustrate the quality of these bounds on the two aforementioned algorithms and compare them to deterministic rounding.
\section{Notations and definitions}\label{sec:back}
\subsection{Notation}
Throughout this paper, $E(X)$ denotes the expected value of $X$, $V(X)$ its variance and $\sigma(X)$ its standard deviation. The conditional expectation of $X$ given $Y$ is denoted $\mathbb{E}[X / Y]$.
\subsection{Floating-point background}\label{sec:FP_def}
A normal floating-point number in a format of basis $\beta$ and working precision $p$ is a number $x$ for which there exists a triple $(s, m, e)$ such that $x= (-1)^s m \times \beta^{e-p}$, where $s\in\{0,1\}$ is the sign, $e$ is the exponent, and $m$ is an integer (the significand) such that $\beta^{p-1} \leq m < \beta^p$.
We only consider normal floating-point numbers; detailed information on the floating-point format most generally in use in current computer systems is defined in the IEEE-754 norm~\cite{norm}.
Let us denote $\mathcal{F}\subset \mathbb{R}$ the set of normal floating-point numbers and $x\in \mathbb{R}$. Upward rounding $\lceil x \rceil $ and downward rounding $\lfloor x \rfloor$ are defined by:
$$ \lceil x \rceil=\min\{y\in \mathcal{F} : y \geq x\}, \quad \lfloor x \rfloor=\max\{y\in \mathcal{F} : y \leq x\},
$$
By definition, $\lfloor x \rfloor \leq x \leq \lceil x \rceil$, with equality if and only if $x \in\mathcal{F}$.
The floating-point approximation of a real number $x\ne 0$ is one of $\lfloor x\rfloor$ or $\lceil x\rceil$:
\begin{equation}
\fl(x) =x(1+\delta), \label{fl(x)}
\end{equation}
where $\delta = \frac{ \fl(x) - x}{x}$ is the relative error: $\lvert \delta \rvert \leq \beta^{1-p}$.
In the following, we note $u=\beta^{1-p}$. IEEE-754 mode RN (round to nearest, ties to even) has the stronger property that $\lvert \delta \rvert \leq\frac12\beta^{1-p}=\frac12u$\footnote{In many works focusing on IEEE-754 RN, $u$ is chosen to be $\frac12\beta^{1-p}$.}.
For $x, y\in\mathcal F$, the considered rounding modes verify $\fl(x\op y)\in\{\lfloor x\op y\rfloor, \lceil x\op y\rceil\}$ for $\op\in\{+, -, *, /\}$. Moreover, for IEEE-754 RN~\cite{norm} and stochastic rounding~\cite{theo21stocha} the error in one operation is bounded:
\begin{equation}
\fl(x \op y) = (x \op y)(1+\delta), \; \lvert \delta \rvert \leq u, \label{fl(xopy)}
\end{equation}
specifically for RN we have $\lvert \delta \rvert \leq \frac12 u$.
Let us assume that $x$ is a real that is not representable: $x\in \mathbb{R} \setminus \mathcal{F}$.
The machine epsilon at $x$, i.e., the distance between the two floating-point numbers enclosing $x$, is
$\epsilon(x) = \lceil x \rceil - \lfloor x \rfloor = \beta^{e-p}$. Since $\beta^{p-1} \leq m < \beta^p$, we have $\beta^{e-1} \leq \lvert x \rvert < \beta^e$ and
\begin{equation}\label{epsilon-bound}
\lvert \epsilon(x) \rvert = \beta^{e-1} u
\leq \lvert x \rvert u.
\end{equation}
The fraction of $\epsilon(x)$ rounded away, as shown in Figure~\ref{fig:theta}, is $\theta(x) = \frac{x - \lfloor x \rfloor}{\lceil x \rceil - \lfloor x \rfloor }.$
\begin{figure}
\centering
\begin{tikzpicture}[xscale=4]
\draw (0,0) -- (1,0);
\draw[shift={(0,0)},color=black] (0pt,0pt) -- (0pt, 2pt) node[below] {$\lfloor x \rfloor$};
\draw[shift={(1,0)},color=black] (0pt,0pt) -- (0pt, 2pt) node[below] {$\lceil x \rceil$};
\draw[shift={(.3,0)},color=black] (0pt,0pt) -- (0pt, 2pt) node[below] {$x$};
\draw[shift={(.5,0)},color=black] (0pt,0pt) -- (0pt, 2pt);
\draw[|->] (0, 30pt) -- (.5, 30pt) node[above] {$\frac{1}{2}\epsilon(x)$};
\draw[|->] (0, 5pt) -- (.3, 5pt) node[above] {$\theta(x)\epsilon(x)$};
\end{tikzpicture}
\caption{$\theta(x)$ is the fraction of $\epsilon(x)$ to be rounded away.}
\label{fig:theta}
\end{figure}
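For concreteness, $\lfloor x \rfloor$, $\lceil x \rceil$, $\epsilon(x)$ and $\theta(x)$ can be computed numerically. The sketch below (our own illustrative helper, not part of any standard library API) takes binary32 as the target format $\mathcal{F}$ and uses a binary64 value as a stand-in for the real number $x$:

```python
import numpy as np

def fp32_neighbors(x):
    """Downward and upward binary32 roundings of x (a binary64 value
    standing in for the real number to be rounded)."""
    r = np.float32(x)                        # round-to-nearest binary32
    if float(r) > x:
        return float(np.nextafter(r, np.float32(-np.inf))), float(r)
    if float(r) < x:
        return float(r), float(np.nextafter(r, np.float32(np.inf)))
    return float(r), float(r)                # x is exactly representable

x = 0.1                                      # not representable in binary32
lo, hi = fp32_neighbors(x)
eps = hi - lo                                # epsilon(x) = ceil(x) - floor(x)
theta = (x - lo) / eps                       # fraction of epsilon(x) rounded away
```

With the convention $u = \beta^{1-p}$ used here, binary32 ($\beta=2$, $p=24$) gives $u = 2^{-23}$, and one can check on this example that $\epsilon(x) \leq \lvert x \rvert u$, as in~(\ref{epsilon-bound}).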
We note $\llfloor x \rrfloor$ the integer part of $x$. The following lemma gives a useful property of downward rounding.
\begin{lemma}\label{integer part}
Let $x\in \mathbb{R} \setminus \mathcal{F}$. $\beta^{p-e}\lfloor x \rfloor = \llfloor \beta^{p-e} x \rrfloor$.
\end{lemma}
\begin{proof}
We know that $\beta^{p-e}\lfloor x \rfloor, \beta^{p-e} \lceil x \rceil \in \mathbb{Z}$ and
$\lfloor x \rfloor < x < \lceil x \rceil$; then $\beta^{p-e}\lfloor x \rfloor <\beta^{p-e} x < \beta^{p-e} \lceil x \rceil$. We thus have
$\beta^{p-e}\lfloor x \rfloor \leq \llfloor \beta^{p-e} x \rrfloor < \beta^{p-e} \lceil x \rceil.$
Since $\lceil x \rceil -\lfloor x \rfloor = \beta^{e-p}$, then $\beta^{p-e} \lceil x \rceil - \beta^{p-e} \lfloor x \rfloor =1$ and
$$\beta^{p-e}\lfloor x \rfloor \leq \llfloor \beta^{p-e} x \rrfloor < \beta^{p-e} \lfloor x \rfloor +1.
$$
\end{proof}
\subsection{Stochastic rounding definition}
Throughout this paper, $\fl(x)=\widehat x$ is the approximation of the real number $x$ under stochastic rounding.
For $x\in \mathbb{R}\setminus \mathcal{F}$, we consider the following stochastic rounding mode, called SR-nearness:
\begin{align*}
\fl(x) &= \left\{
\begin{array}{ccl}
\lceil x \rceil & \text{with probability} & \theta(x) =(x-\lfloor x \rfloor)/(\lceil x \rceil - \lfloor x \rfloor ), \\
\lfloor x \rfloor & \text{with probability} & 1-\theta(x).
\end{array}
\right.
\end{align*}
\begin{figure}
\centering
\begin{tikzpicture}[xscale=5]
\draw (0,0) -- (1,0);
\draw[shift={(0,0)},color=black] (0pt,0pt) -- (0pt, 2pt) node[below] {$\lfloor x \rfloor$};
\draw[shift={(1,0)},color=black] (0pt,0pt) -- (0pt, 2pt) node[below] {$\lceil x \rceil$};
\draw[shift={(.3,0)},color=black] (0pt,0pt) -- (0pt, 2pt) node[below] {$x$};
\draw (0,0) .. controls (.15,.4) .. (.3,0) (0.15,0.5) node {$1-\theta(x)$};
\draw (.3,0) .. controls (.65,.7) .. (1,0) (0.65,0.8) node {$\theta(x)$} ;
\end{tikzpicture}
\caption{\textbf{SR-nearness}.}
\end{figure}
SR-nearness mode is unbiased~\cite[p.~34]{parker1997monte}.
\begin{align*}
E(\widehat x) &= \theta(x)\lceil x \rceil +(1-\theta(x))\lfloor x \rfloor \\
&= \theta(x)(\lceil x \rceil - \lfloor x \rfloor) + \lfloor x \rfloor =x.
\end{align*}
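This unbiasedness can be observed empirically. The snippet below is a minimal sketch of SR-nearness (rounding binary64 values into binary32, the binary64 input playing the role of the exact number; the function name and setup are ours, not a library API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sr_nearness32(x):
    """SR-nearness of x into binary32: round up with probability theta(x)."""
    r = np.float32(x)
    if float(r) == x:                        # exactly representable: no error
        return float(r)
    if float(r) > x:
        lo, hi = float(np.nextafter(r, np.float32(-np.inf))), float(r)
    else:
        lo, hi = float(r), float(np.nextafter(r, np.float32(np.inf)))
    theta = (x - lo) / (hi - lo)
    return hi if rng.random() < theta else lo

# E(fl(x)) = x: the SR sample mean converges to x, whereas deterministic
# round-to-nearest keeps the same systematic offset at every call.
x = 0.1
sr_mean = np.mean([sr_nearness32(x) for _ in range(200_000)])
rn_offset = abs(float(np.float32(x)) - x)    # fixed RN error, about 1.5e-9
```

With $2\cdot 10^5$ samples, the SR sample mean is several orders of magnitude closer to $x$ than the fixed round-to-nearest offset.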
In the following, we focus on this stochastic rounding mode. In general and under SR-nearness, the error terms in algorithms appear as a sequence of random variables such that the independence property does not hold. However, a weaker, and yet fruitful, assumption, called mean independence, does.
\begin{defn}
A random variable $Y$ is said to be mean independent from a random variable $X$ if its conditional mean satisfies $\mathbb{E}[Y / X]=\mathbb{E}(Y)$. The random sequence $(X_1, X_2, \ldots)$ is mean independent if $\mathbb{E}[X_k / X_1, \ldots, X_{k-1}] = \mathbb{E}(X_k)$ for all $k$.
\end{defn}
\begin{pro}
Let $X$ and $Y$ be two real random variables:
\begin{enumerate}
\item If $X$ and $Y$ are independent, then $X$ is mean independent from $Y$.
\item If $X$ is mean independent from $Y$ then $X$ and $Y$ are uncorrelated.
\end{enumerate}
The converses of these two implications are false.
\end{pro}
For $x_1, x_2\in \mathcal{F}$, and $\widehat{c} \leftarrow x_1 \op x_2$ the result of an elementary operation $\op \in \{+, -, *, /\}$ obtained from SR-nearness, the relative error $\delta$ such that
$$ \widehat{c}= (x_1 \op x_2)(1+\delta),
$$
is a random variable verifying $\mathbb{E}(\delta)=0$ and $\lvert \delta\rvert\leq u$.
The following lemma has been proven in~\cite[Lem 5.2]{theo21stocha} and shows that SR-nearness satisfies the property of mean independence.
\begin{lemma}\label{meanindp}
Consider a sequence of elementary operations $x_k \leftarrow y_k \op_k z_k$, with $\delta_k$ the error of the $k$th operation (i.e., $\widehat x_k = x_k(1+\delta_k)$).
The $\delta_k$ are random variables with mean zero such that $\mathbb{E}[\delta_k / \delta_1,\ldots , \delta_{k-1}] = \mathbb{E}(\delta_k)= 0$.
\end{lemma}
\section{The variance of the error for stochastic rounding}
\label{sec:var}
We now turn to bounding the variance of the error in a computation.
If $\widehat x =x(1+\delta)$ is the result of an elementary operation rounded with SR-nearness, then
$E(\widehat x)= x$ and
\begin{align*}
V(\widehat x) &= E(\widehat x^2) - x^2 = \lceil x \rceil^2 \theta(x) + \lfloor x \rfloor^2 (1-\theta(x)) - x^2 \\
&= \theta(x) (\lceil x \rceil^2 - \lfloor x \rfloor^2) - (x^2 - \lfloor x \rfloor^2)\\
&= \theta(x) \epsilon(x) (\lceil x \rceil + \lfloor x \rfloor) - \theta(x) \epsilon(x)(x+ \lfloor x \rfloor)\\
&= \theta(x) \epsilon(x) (\lceil x \rceil - x )\\
&= \epsilon(x)^2 \theta(x) (1-\theta(x)).
\end{align*}
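This closed form for the variance of a single SR rounding can be checked numerically; the sketch below rounds a binary64 value into binary32, with the same illustrative conventions as before:

```python
import numpy as np

rng = np.random.default_rng(1)

x = 0.1                                      # not representable in binary32
r = np.float32(x)
if float(r) > x:
    lo, hi = float(np.nextafter(r, np.float32(-np.inf))), float(r)
else:
    lo, hi = float(r), float(np.nextafter(r, np.float32(np.inf)))
eps = hi - lo
theta = (x - lo) / eps

# Empirical variance of fl(x) versus eps(x)^2 theta(x) (1 - theta(x)).
samples = np.where(rng.random(500_000) < theta, hi, lo)
emp_var = samples.var()
exact_var = eps**2 * theta * (1.0 - theta)
u = 2.0**-23                                 # u = beta^(1-p) for binary32
```

The empirical variance matches $\epsilon(x)^2 \theta(x)(1-\theta(x))$ within sampling error, and both sit below the bound $x^2 u^2/4$.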
Using~(\ref{epsilon-bound}) leads to $V(\widehat x)\leq x^2 \frac{u^2}{4}$; in particular, $V(\widehat x)\leq x^2 u^2$. Lemma~\ref{model} below allows one to estimate the variance of the accumulated errors in a sequence of additions and multiplications.
Let $K$ be a subset of $\mathbb{N}$ of cardinality $n$, and assume that $\delta_1, \delta_2,\ldots$, in that order, are the random errors of elementary operations obtained from SR-nearness. Let us denote
$$ \psi_{K} = \prod_{k\in K} (1+\delta_k).
$$
Since $\lvert \delta_k \rvert \leq u$ for all $k \in K$ we have $\lvert \psi_{K} \rvert \leq (1+u)^n.$
Throughout this paper, let $\gamma_n(u)= (1+u)^{n}-1$ and $K\triangle K' = (K\cup K') \setminus (K\cap K')$. The following lemma gives some properties of $\psi$ that allow one to bound the variance of the errors in an algorithm consisting of a fixed sequence of sums and products.
\begin{lemma}\label{model}
Under SR-nearness $\psi_{K}$ satisfies
\begin{enumerate}
\item $E(\psi_{K}) = 1$.
\item Let $K' \subset \mathbb{N}$ such that ~$\lvert K\cap K'\rvert = m$, under the assumption that $\forall j\in K\triangle K', k\in K\cap K',j<k$ we have
$$ 0 \leq \mathrm{Cov}(\psi_{K},\psi_{K'}) \leq \gamma_{m}(u^2).
$$
\item $V(\psi_K) \leq \gamma_n(u^2) $,
\end{enumerate}
where $\gamma_n(u^2)= (1+u^2)^{n}-1 = nu^2 +O(u^3).$
\end{lemma}
\begin{proof}
The first point is an immediate consequence of~\cite[Lem.~6.1]{theo21stocha}. The third point is a particular case of the second with $K=K'$. Let us prove point $2$:
$$\mathrm{Cov}(\psi_{K},\psi_{K'}) = E(\psi_{K} \psi_{K'}) -E(\psi_{K})E(\psi_{K'}) = E(\psi_{K} \psi_{K'}) -1.
$$
Assume that $K\cap K' = \{k_1,...,k_m\}$. Let us denote
$$Q_m :=\psi_{K} \psi_{K'} = \prod_{j\in K\triangle K'} (1+\delta_j) \prod_{l=k_1}^{k_m} (1+\delta_l)^2, $$
such that $j < k_i$ for all $j\in K\triangle K'$ and $i \in\{1,...,m\}$.
We prove by induction over $m$ that $1 \leq E(Q_m ) \leq (1+u^2)^m$. For $m=0$, we have $K\cap K'=\emptyset$ and $Q_0 = \prod_{j\in K\triangle K'} (1+\delta_j)$, from the first point $E(Q_0)=1$.
Assume that the inequality holds for $Q_{m-1}$.
\begin{align*}
Q_m &= (1+\delta_{k_m})^2\prod_{l =k_1}^{k_{m-1}} (1+\delta_l)^2 \prod_{j\in K\triangle K'} (1+\delta_j) = (1+\delta_{k_m})^2 Q_{m-1}.
\end{align*}
Let us denote $\mathcal{S}_{K\triangle K'}=\{ \delta_j : j\in K\triangle K'\}$; using the law of total expectation $E(X)= E(E[X/Y])$ and Lemma~\ref{meanindp}, we have
\begin{align*}
E(Q_m) &= E\big( (1+\delta_{k_m})^2 Q_{m-1} \big) = E\big( E[ (1+\delta_{k_m})^2 Q_{m-1}/ \mathcal{S}_{K\triangle K'}, \delta_{k_1},...,\delta_{k_{m-1}}] \big)\\
&= E\big( Q_{m-1} E[(1+\delta_{k_m})^2/ \mathcal{S}_{K\triangle K'}, \delta_{k_1},...,\delta_{k_{m-1}}]\big)\\
&= E\big( Q_{m-1} E[1+\delta_{k_m}^2/\mathcal{S}_{K\triangle K'}, \delta_{k_1},...,\delta_{k_{m-1}}]\big).
\end{align*}
Since $Q_{m-1} \geq 0$ (because $\lvert \delta_k \rvert \leq u < 1$) and $0 \leq \delta_{k_m}^2 \leq u^2$, we have
\begin{align*}
E(Q_{m-1}) \leq E\big(Q_{m-1} E[1+\delta_{k_m}^2/\mathcal{S}_{K\triangle K'}, \delta_{k_1},...,\delta_{k_{m-1}}]\big) \leq E\big(Q_{m-1}(1+u^2)\big).
\end{align*}
Thus, $1 \leq E\big( Q_{m}\big) \leq (1+u^2)^m$ and, by induction, the claim is proven:
\begin{align*}
0\leq E\big( Q_{m}\big) -1 = \mathrm{Cov}(\psi_{K},\psi_{K'}) \leq \gamma_m(u^2).
\end{align*}
\end{proof}
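A quick Monte-Carlo sanity check of points 1 and 3 is possible under an idealized sketch of the error model: here the $\delta_k$ are drawn independently and uniformly in $[-u,u]$, a special case of mean-zero, mean-independent errors bounded by $u$ (actual SR errors are only mean independent, and their distribution differs):

```python
import numpy as np

rng = np.random.default_rng(2)

u, n, trials = 2.0**-8, 64, 200_000
deltas = rng.uniform(-u, u, size=(trials, n))   # |delta_k| <= u, E(delta_k) = 0
psi = np.prod(1.0 + deltas, axis=1)             # psi_K = prod_k (1 + delta_k)
gamma_n = (1.0 + u * u)**n - 1.0                # gamma_n(u^2) = (1 + u^2)^n - 1

emp_mean = psi.mean()                           # should be close to E(psi_K) = 1
emp_var = psi.var()                             # should satisfy V(psi_K) <= gamma_n(u^2)
```

With these parameters, the empirical mean of $\psi_K$ is close to $1$ and its empirical variance is below $\gamma_n(u^2)$, as the lemma predicts.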
Under SR-nearness, Lemma~\ref{model} can now be used to derive variance bounds for many algorithms, such as inner products, matrix-vector and matrix-matrix products, solutions of triangular systems, and the Horner algorithm. In the following, we choose the inner product and Horner algorithms as applications.
\subsection{Inner product}
Consider the inner product $s_n = y = a_1 b_1 + \ldots + a_n b_n$, evaluated from left to right, i.e., $s_i = s_{i-1} + a_i b_i$, starting with $s_1 = a_1b_1$. Let $\delta_0=0$; the computed $\widehat s_i$ satisfy $\widehat s_1 =a_1b_1(1+\delta_1)$ and
$$ \widehat s_i =(\widehat s_{i-1} +a_ib_i(1+\delta_{2i-2}))(1+\delta_{2i-1}), \quad \lvert \delta_{2i-2} \rvert, \lvert \delta_{2i-1} \rvert \leq u,
$$
for all $2\leq i \leq n$, where $\delta_{2i-2}$ and $\delta_{2i-1}$ represent the rounding errors from the products and additions, respectively. We thus have
\begin{equation*}
\widehat y = \widehat s_n = \sum_{i=1}^{n} a_ib_i(1+\delta_{2i-2}) \prod_{k=i}^n (1+\delta_{2k-1}).
\end{equation*}
\begin{theorem}\label{variance bound-inner}
Under SR-nearness, the computed $\widehat y$ satisfies $ E(\widehat y) = y$ and
\begin{equation}\label{innervar}
V(\widehat y) \leq y^2 \mathcal{K}_1^2 \gamma_n(u^2),
\end{equation}
where $\mathcal{K}_1 =\frac{\sum_{i=1}^{n} \lvert a_ib_i \rvert}{\lvert \sum_{i=1}^{n} a_ib_i \rvert}$ is the condition number for the computed $y=\sum_{i=1}^n a_ib_i$ using the 1-norm.
\end{theorem}
\begin{proof}
For all $1 \leq i \leq n$, we have
$$ \widehat y = \sum_{i=1}^{n} a_ib_i(1+\delta_{2i-2}) \prod_{k=i}^n (1+\delta_{2k-1}) = \sum_{i=1}^{n} a_ib_i \psi_{K_i},
$$
with $K_i = \{2i-2, 2i-1, 2i+1,\ldots,2n-1\}$.
Lemma~\ref{model} shows that $E(\psi_{K_i}) = 1$ for all $1\leq i \leq n$, hence
$$ E(\widehat y)= E\big(\sum_{i=1}^{n} a_ib_i \psi_{K_i} \big)= \sum_{i=1}^{n} a_ib_i E(\psi_{K_i})= y.
$$
For all $1\leq i < j \leq n$,
$K_j \cap K_i = \{2j-1, 2j+1,...,2n-1\}$ and $\mathrm{Card}(K_j \cap K_i) = n-j +1$.
\begin{align*}
V(\widehat y) &= V\big( \sum_{i=1}^{n} a_ib_i \psi_{K_i} \big)\\
&= \sum_{i=1}^{n} (a_ib_i)^2 V(\psi_{K_i}) + 2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} a_ib_i a_jb_j \mathrm{Cov}(\psi_{K_i}, \psi_{K_j})\\
&\leq \sum_{i=1}^{n} (a_ib_i)^2 V(\psi_{K_i}) + 2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} \lvert a_ib_i a_jb_j \rvert \mathrm{Cov}(\psi_{K_i}, \psi_{K_j})\\
&\leq \sum_{i=1}^{n} (a_ib_i)^2 \gamma_{n-i+1}(u^2) + 2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} \lvert a_ib_i a_jb_j \rvert \gamma_{n-j+1}(u^2) && \text{by Lemma~\ref{model}} \\
&\leq \gamma_n(u^2) (\sum_{i=1}^{n} \lvert a_ib_i \rvert)^2 \quad \text{since $\gamma_{n-i+1}(u^2) \leq \gamma_{n}(u^2)$} \\
&= y^2 \mathcal{K}_1^2 \gamma_n(u^2).
\end{align*}
\end{proof}
\begin{remark}
Because $E(\widehat y) = y$, under a normality assumption, the number of significant digits can be lower-bounded by
\begin{align*}
-\log\left( \frac{\sigma(\widehat y)}{\lvert E(\widehat y) \rvert}\right) & \geq -\log\left( \mathcal{K}_1 \sqrt{\gamma_n(u^2)}\right) \approx -\log (\mathcal{K}_1) - \log(u) -\frac12 \log(n).
\end{align*}
\end{remark}
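This behaviour can be illustrated by simulating the left-to-right inner product with every operation rounded by SR into binary32 (a sketch under the same conventions as before: binary64 intermediates stand in for exact arithmetic, and the data are chosen positive so that $\mathcal{K}_1 = 1$; the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def sr32(x):
    """SR-nearness of a binary64 value into binary32 (illustrative sketch)."""
    r = np.float32(x)
    if float(r) == x:
        return float(r)
    if float(r) > x:
        lo, hi = float(np.nextafter(r, np.float32(-np.inf))), float(r)
    else:
        lo, hi = float(r), float(np.nextafter(r, np.float32(np.inf)))
    return hi if rng.random() < (x - lo) / (hi - lo) else lo

def sr_dot(a, b):
    """Left-to-right inner product, each * and + rounded with SR-nearness."""
    s = sr32(float(a[0]) * float(b[0]))
    for i in range(1, len(a)):
        s = sr32(s + sr32(float(a[i]) * float(b[i])))
    return s

n, u = 1000, 2.0**-23
a = rng.uniform(0.1, 1.0, n).astype(np.float32).astype(float)
b = rng.uniform(0.1, 1.0, n).astype(np.float32).astype(float)
y = float(a @ b)                         # binary64 reference value
rel_errs = np.array([abs(sr_dot(a, b) - y) / abs(y) for _ in range(50)])
# Errors stay far below the deterministic O(n u) level and match
# the O(sqrt(n) u) scale suggested by the variance bound (K_1 = 1 here).
```

The inputs are rounded to binary32-representable values so that each binary64 product is exact; the only errors are the simulated SR roundings.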
\subsection{Horner algorithm}
The Horner algorithm is an efficient way of evaluating polynomials. When performed in floating-point arithmetic, it may suffer from catastrophic cancellations and yield a computed value less accurate than expected.
\begin{mode}\label{mod}
Let $P(x) = \sum_{i=0}^n a_i x^i$, Horner rule consists in writing this polynomial as
$$P(x)= (((a_nx +a_{n-1})x +a_{n-2})x \ldots +a_1)x +a_0.
$$
We define by induction the following sequence \\
\begin{center}
\begin{tabular}{|C{2.5cm}||L{5.5cm}|L{3.5cm}|}
\hline Operation & Floating-point arithmetic & Exact computation \\
\hline & $\widehat r_0 = a_n $ & $r_0 = a_n $ \\
$* $ & $\widehat{r}_{2k-1}=\widehat{r}_{2k-2} x (1+\delta_{2k-1}) $ & $r_{2k-1} = r_{2k-2}x $\\
$+ $ & $\widehat{r}_{2k} = (\widehat r_{2k-1} +a_{n-k})(1+\delta_{2k}) $ & $r_{2k} = r_{2k-1} +a_{n-k} $\\
\hline Output & $\widehat{r}_{2n}=\widehat{P}(x) $ & $r_{2n}= P(x) $\\
\hline
\end{tabular}
\end{center}
\end{mode}
for all $1 \leq k \leq n$, where $\delta_{2k-1}$ and $\delta_{2k}$ represent the rounding errors from the products and the additions, respectively. Let $\delta_0 =0$; we thus have
$$ \widehat r_{2n} = \sum_{i=0}^{n}a_i x^i \prod_{k=2(n -i)}^{2n} (1+\delta_k).
$$
\begin{theorem}\label{variance-bound-horner}
Using SR-nearness, the computed $\widehat r_{2n}$ satisfies $E(\widehat r_{2n}) = r_{2n}$ and
\begin{equation}\label{hor-var}
V(\widehat r_{2n}) \leq r_{2n}^2 cond_1(P,x)^2 \gamma_{2n}(u^2),
\end{equation}
where $cond_1(P,x) =\frac{\sum_{i=0}^{n} \lvert a_i x^i \rvert}{\lvert \sum_{i=0}^{n} a_i x^i \rvert}$ is the condition number for the computed $P(x)=\sum_{i=0}^n a_i x^i$ using the 1-norm.
\end{theorem}
\begin{proof}
We have
$$\widehat r_{2n} = \sum_{i=0}^{n}a_i x^i \prod_{k=2(n -i)}^{2n} (1+\delta_k)
= \sum_{i=0}^{n}a_i x^i \psi_{K_i},$$
with $K_i =\{ 2(n-i), 2(n-i)+1,...,2n\}$ for all $0 \leq i \leq n$.
Lemma~\ref{model} implies $E(\psi_{K_i}) = 1$, then
$E(\widehat r_{2n}) = E\big( \sum_{i=0}^{n}a_i x^i \psi_{K_i}\big)
= \sum_{i=0}^{n}a_i x^i E(\psi_{K_i}) = r_{2n}.$
For $0 \leq i < j \leq n$ we have $K_i \subset K_j$, so $K_i \cap K_j = K_i$ and the assumption of Lemma~\ref{model} holds with $m = \mathrm{Card}(K_i) = 2i+1$; moreover, since $\delta_0 = 0$, each $\psi_{K_i}$ involves at most $2n$ nontrivial factors, so $V(\psi_{K_i}) \leq \gamma_{2n}(u^2)$ and $0 \leq \mathrm{Cov}(\psi_{K_i},\psi_{K_j}) \leq \gamma_{2n}(u^2)$. We thus have
\begin{align*}
V(\widehat r_{2n}) &= V\big(\sum_{i=0}^{n}a_i x^i \psi_{K_i} \big) \\
&= \sum_{i=0}^{n} (a_i x^i)^2 V(\psi_{K_i}) + 2 \sum_{i=0}^{n} \sum_{j=i+1}^{n} a_i x^i a_j x^j \mathrm{Cov}(\psi_{K_i},\psi_{K_j})\\
&\leq \sum_{i=0}^{n} (a_i x^i)^2 \gamma_{2n}(u^2) + 2 \sum_{i=0}^{n} \sum_{j=i+1}^{n} \lvert a_i x^i a_j x^j \rvert \gamma_{2n}(u^2)\\
&= \gamma_{2n}(u^2) \big(\sum_{i=0}^{n} \lvert a_i x^i \rvert \big)^2 = r_{2n}^2 cond_1(P,x)^2 \gamma_{2n}(u^2).
\end{align*}
\end{proof}
\begin{remark}
Because $E(\widehat r_{2n}) = r_{2n}$, under a normality assumption, the number of significant digits can be lower-bounded by
\begin{align*}
-\log\left( \frac{\sigma(\widehat r_{2n})}{\lvert E(\widehat r_{2n}) \rvert}\right) & \geq -\log\left( cond_1(P,x) \sqrt{\gamma_{2n}(u^2)}\right)\\
&\approx -\log (cond_1(P,x)) - \log(u) -\frac12 \log(2n).
\end{align*}
\end{remark}
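The same experiment as for the inner product can be run for the Horner scheme (a sketch under the same assumptions; positive coefficients and $x>0$ give $cond_1(P,x)=1$, and the inputs are rounded to binary32-representable values so binary64 products are exact):

```python
import numpy as np

rng = np.random.default_rng(4)

def sr32(x):
    """SR-nearness of a binary64 value into binary32 (illustrative sketch)."""
    r = np.float32(x)
    if float(r) == x:
        return float(r)
    if float(r) > x:
        lo, hi = float(np.nextafter(r, np.float32(-np.inf))), float(r)
    else:
        lo, hi = float(r), float(np.nextafter(r, np.float32(np.inf)))
    return hi if rng.random() < (x - lo) / (hi - lo) else lo

def sr_horner(coeffs, x):
    """Evaluate a_n x^n + ... + a_0 by Horner, each * and + rounded with SR."""
    r = coeffs[0]                              # coeffs = [a_n, ..., a_0]
    for a in coeffs[1:]:
        r = sr32(sr32(r * x) + a)
    return r

n, u = 40, 2.0**-23
coeffs = rng.uniform(0.5, 1.0, n + 1).astype(np.float32).astype(float)
x = float(np.float32(0.9))
exact = 0.0
for a in coeffs:                               # binary64 reference Horner
    exact = exact * x + a
rel_errs = np.array([abs(sr_horner(coeffs, x) - exact) / abs(exact)
                     for _ in range(200)])
```

The observed relative errors sit well below the deterministic level $cond_1(P,x)\gamma_{2n}(u) \approx 2nu$ and are consistent with the $O(\sqrt{n}u)$ scale.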
\section{Probabilistic bounds of the error for stochastic rounding}
\label{sec:pbBound}
This section provides probabilistic bounds on the forward errors of the inner product and Horner methods in $O(\sqrt{n} u)$, compared to the deterministic bounds in $O(nu)$.
On the one hand, we start with the approach based on the Azuma-Hoeffding inequality and the martingale property (AH method in the following). In this context, we first give a rigorous review of the previous results on the inner product forward error by Higham and Mary~\cite{theo21stocha, theo19} and by Ipsen and Zhou~\cite{ilse}. We then extend these techniques to the Horner algorithm, which also yields a probabilistic bound proportional to $\sqrt{n}u$.
On the other hand, we present a new approach based on the Bienaymé–Chebyshev inequality and the previous variance estimations (BC method in the following); our bound is also in $O(\sqrt{n} u)$ and is lower than the AH bound in several situations for both the inner product and Horner algorithms.
\subsection{Azuma-Hoeffding method}\label{azuma-method}
Let us recall the concept of a martingale and the Azuma-Hoeffding inequality for a martingale~\cite{azum}.
\begin{defn}
\label{def:martingale}
A sequence of random variables $M_1, \ldots, M_n$ is a martingale with respect to the sequence $X_1, \ldots, X_n$ if, for all $k,$
\begin{itemize}
\item $M_k$ is a function of $X_1, \ldots, X_k$,
\item $\mathbb{E}(\lvert M_k \rvert ) < \infty,$ and
\item $\mathbb{E}[M_k / X_1, \ldots, X_{k-1}]=M_{k-1}$.
\end{itemize}
\end{defn}
\begin{lemma}(Azuma-Hoeffding inequality). Let $M_0, \ldots, M_n$ be a martingale with respect to a sequence $X_1, \ldots, X_n$. We assume that there exist $a_k<b_k$ such that $a_k \leq M_k - M_{k-1} \leq b_k$ for $k = 1,\ldots,n$. Then, for any $A > 0$
$$ \mathbb{P}(\lvert M_n - M_0 \rvert \geq A) \leq 2 \exp \left(
-\frac{2A^2}{\sum_{k=1}^n(b_k-a_k)^2}
\right).
$$
In the particular case $a_k=-b_k$ and $\lambda = 2 \exp \left(
-\frac{A^2}{2\sum_{k=1}^n b_k^2} \right) $ we have
$$ \mathbb{P}\left( \lvert M_n - M_0 \rvert \leq \sqrt{\sum_{k=1}^n b_k^2} \sqrt{2 \ln (2 / \lambda)} \right) \geq 1- \lambda,
$$
where $0< \lambda <1$.
\end{lemma}
\subsubsection{Inner product}\label{inner-product}
Under SR-nearness, the computed inner product $y=a^{\top}b$ of $a,b\in \mathbb{R}^n$ satisfies
$\widehat y = \widehat s_n =\sum_{i=1}^{n} a_ib_i(1+\delta_{2(i-1)}) \prod_{k=i}^n (1+\delta_{2k-1}).$ The worst-case forward error of the computed $\widehat y$ is in $O(n u)$. Wilkinson~\cite[sec 1.33]{wilk} had the intuition that the roundoff error accumulated in $n$ operations is proportional to $\sqrt{n} u$ rather than $n u$. Based on the mean independence of errors established in Lemma~\ref{meanindp}, Connolly, Higham and Mary~\cite{theo21stocha} and Ipsen and Zhou~\cite{ilse} have proved this result for SR-nearness. Both works build on the mean independence property of SR-nearness, which allows them to form a martingale and then to apply the Azuma-Hoeffding concentration inequality; the difference between the two works lies in the way the martingale is formed. In \cite[sec 3]{theo21stocha}, the martingale is built from the errors accumulated in the whole process, $\psi_{K_i}=(1+\delta_{2(i-1)}) \prod_{k=i}^n (1+\delta_{2k-1})$ for all $ 1\leq i \leq n$. The Azuma-Hoeffding inequality implies that $\lvert \psi_{K_i} - 1 \rvert \leq \tilde{\gamma}_n(\lambda)$ with probability at least $1-2\exp{\frac{-\lambda^2}{2}}$, where $\tilde{\gamma}_n(\lambda) = \exp{\frac{\lambda\sqrt{n}u + nu^2}{1-u}} -1.$ This approach then uses a union bound to extend the result to the summation, which yields a pessimistic factor $n$ in the probability. They prove
$$ \frac{\lvert \widehat y - y \rvert}{\lvert y \rvert} \leq \mathcal{K}_1 \tilde{\gamma}_n(\lambda),
$$
with probability at least $1-2n\exp{\frac{-\lambda^2}{2}}$. The factor $n$ in the probability disrupts the $\sqrt{n}u$ behaviour. Setting $\delta = 2n\exp{\frac{-\lambda^2}{2}}$ gives $\lambda = \sqrt{2\ln{(2n/\delta)}}$ and
\begin{equation}\label{mary-bound-inner}
\frac{\lvert \widehat y - y \rvert}{\lvert y \rvert} \leq \mathcal{K}_1 \tilde{\gamma}_n\big(\sqrt{2\ln{(2n/\delta)}} \big),
\end{equation}
with probability at least $1-\delta$, where
\begin{align*}
\tilde{\gamma}_n(\sqrt{2\ln{(2n/\delta)}}) &= \exp{\frac{\sqrt{2n\ln{(2n/\delta)}}u + nu^2}{1-u}} -1 \\
&= u\sqrt{2n\ln{(2n/\delta)}} + O(u^2)\\
&= u\sqrt{2n\ln{(2n)} -2n\ln{\delta}} + O(u^2)= O(u\sqrt{n\ln{n}}).
\end{align*}
On the other hand, \cite[sec 4]{ilse} forms the martingale by following step by step how the error accumulates in the recursive summation of the inner product. In particular, they distinguish between the multiplications and additions computed at each step and carefully track their mean independence. This approach leads to the following probabilistic bound
\begin{equation}\label{ilse-bound-inner}
\frac{\lvert \widehat y - y \rvert}{\lvert y \rvert} \leq \mathcal{K}_1 \sqrt{u \gamma_{2n}(u)} \sqrt{\ln (2 / \delta)},
\end{equation}
with probability at least $1-\delta$. This technique avoids the union bound over the partial sums and leads to the bound
$$ \sqrt{u \gamma_{2n}(u)} \sqrt{\ln (2 / \delta)} = u\sqrt{2n\ln{2} -2n \ln{\delta}} +O(u^2).
$$
Note that~(\ref{mary-bound-inner}) and~(\ref{ilse-bound-inner}) differ only by the factor $\sqrt{\ln{n}}$, which appears in~(\ref{mary-bound-inner}) due to the use of the martingale property on each partial sum. All in all, (\ref{ilse-bound-inner}) is proportional to $u\sqrt{n}$, while~(\ref{mary-bound-inner}) is proportional to $u\sqrt{n\ln{n}}$.
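The two bounds (with the condition number $\mathcal{K}_1$ factored out) can be compared directly; the sketch below evaluates both for binary32 ($u = 2^{-23}$) and failure probability $\delta = 0.01$ at moderate problem sizes (the function names are ours, chosen after the respective author groups):

```python
import numpy as np

u, delta = 2.0**-23, 0.01

def bound_chm(n):
    """gamma_tilde_n(sqrt(2 ln(2n/delta))): the bound of Connolly-Higham-Mary."""
    lam = np.sqrt(2.0 * np.log(2.0 * n / delta))
    return np.exp((lam * np.sqrt(n) * u + n * u * u) / (1.0 - u)) - 1.0

def bound_iz(n):
    """sqrt(u gamma_2n(u)) sqrt(ln(2/delta)): the bound of Ipsen-Zhou."""
    gamma_2n = (1.0 + u)**(2 * n) - 1.0
    return np.sqrt(u * gamma_2n) * np.sqrt(np.log(2.0 / delta))

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, bound_chm(n), bound_iz(n))     # the second bound is the smaller one
                                            # at these moderate sizes
```

For these values of $n$ the Ipsen-Zhou bound is the smaller of the two, and the ratio grows roughly like $\sqrt{\ln n}$, as the first-order comparison above predicts.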
\subsubsection{Horner algorithm}
In the following, we derive a probabilistic bound for the computed $\widehat{P}(x)$ based on the method previously applied to the inner product in~\cite[sec 4]{ilse}.
With the notations defined in Model~\ref{mod}, let us denote $Z_i :=\widehat r_i - r_i$ for all $1 \leq i \leq 2n$. The total forward error is $\lvert Z_{2n} \rvert = \lvert \widehat r_{2n} - r_{2n} \rvert = \lvert \widehat{P}(x) - P(x) \rvert$ and
\begin{align*}
\lvert \widehat{P}(x) - P(x) \rvert &= \lvert \sum_{i=0}^n a_i x^{i} (\prod_{k=2(n -i)}^{2n} (1+\delta_k)-1) \rvert
\leq \sum_{i=0}^n \lvert a_i x^i\rvert \gamma_{2n}(u).
\end{align*}
Finally
\begin{equation}\label{detbound}
\frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} \leq cond_1(P,x) \gamma_{2n}(u).
\end{equation}
The deterministic bound is proportional to $nu$. In the following, we prove a probabilistic bound in $O(\sqrt{n}u)$.
The partial sums forward error satisfy
\begin{align*}
Z_{2k-1} &= \widehat r_{2k-1} - r_{2k-1} = \widehat r_{2k-2}x (1+\delta_{2k-1}) - r_{2k-2}x\\
&= xZ_{2k-2} +\widehat r_{2k-2}x \delta_{2k-1}, \\
Z_{2k} &= \widehat r_{2k} - r_{2k} = (\widehat r_{2k-1} +a_{n-k})(1+\delta_{2k}) - r_{2k-1} - a_{n-k}\\
&= Z_{2k-1} +(\widehat r_{2k-1} +a_{n-k}) \delta_{2k},
\end{align*}
for all $1 \leq k \leq n$.
The sequence $Z_1,...,Z_{2n}$ does not form a martingale with respect to $\delta_1,...,\delta_{2n}$ because, due to the multiplication in odd steps, $$ E[Z_{2k-1}/\delta_1,...,\delta_{2k-2}]= xZ_{2k-2}.$$
In order to form a martingale and use the Azuma-Hoeffding inequality, we define the following change of variable (assuming $x \neq 0$)
$$ Y_i = \frac{Z_i}{x^{\llfloor (i+1)/2 \rrfloor}},$$
where $\llfloor (i+1)/2 \rrfloor$ is the integer part of $(i+1)/2$. We thus have
\begin{align}
\left\{
\begin{array}{ccl}
Y_{2k-1} &=& Y_{2k-2} + \frac{1}{x^{k-1}}\widehat r_{2k-2} \delta_{2k-1}, \\
Y_{2k} &=& Y_{2k-1} + \frac{1}{x^k}(\widehat r_{2k-1} + a_{n-k})\delta_{2k},
\end{array}
\right.
\label{eq2} \end{align}
for all $1 \leq k \leq n.$
\begin{theorem}
The sequence of random variables $Y_1,...,Y_{2n}$ is a martingale with respect to $\delta_1, ..., \delta_{2n}$.
\end{theorem}
\begin{proof}
We check that the three conditions of Definition~\ref{def:martingale} are satisfied. Throughout the proof, we denote the set $\mathbb{F}_k= \{\delta_1, \ldots, \delta_k\}$.
\begin{itemize}
\item The recursion in Model~\ref{mod} shows that $Y_i$ is a function of $\delta_1, ..., \delta_{i}$ for all $1 \leq i \leq 2n$.
\item $\mathbb{E}(\lvert Y_i \rvert ) $ is finite because $x$ and $a_k$ are finite for all $n-i \leq k \leq n$, and $\lvert \delta_j \rvert \leq u$ for all $1 \leq j \leq i$.
\item We prove that $\mathbb{E}[Y_i /\mathbb{F}_{i-1}] = Y_{i-1}$ by distinguishing the even and odd cases.
Firstly, using the mean independence of $\delta_1, \ldots, \delta_{2k-1}$ and equation (\ref{eq2}), we obtain
\begin{align*}
\mathbb{E}[Y_{2k-1}/\mathbb{F}_{2k-2}] &= \mathbb{E}[Y_{2k-2}/\mathbb{F}_{2k-2}]
+ \mathbb{E}[ \frac{1}{x^{k-1}}\widehat r_{2k-2} \delta_{2k-1}/\mathbb{F}_{2k-2}]\\
&= Y_{2k-2}
+ \frac{1}{x^{k-1}}\widehat r_{2k-2} \mathbb{E}[\delta_{2k-1}/\mathbb{F}_{2k-2}] = Y_{2k-2}.
\end{align*}
Secondly, using the mean independence of $\delta_1, \ldots, \delta_{2k}$ and equation (\ref{eq2}), we obtain
\begin{align*}
\mathbb{E}[Y_{2k}/\mathbb{F}_{2k-1}] &= \mathbb{E}[Y_{2k-1}/\mathbb{F}_{2k-1}] + \mathbb{E}[\frac{1}{x^k}(\widehat r_{2k-1} + a_{n-k})\delta_{2k}/\mathbb{F}_{2k-1}]\\
&= Y_{2k-1} + \frac{1}{x^k}(\widehat r_{2k-1} + a_{n-k})\mathbb{E}[\delta_{2k}/\mathbb{F}_{2k-1}]= Y_{2k-1}.
\end{align*}
\end{itemize}
\end{proof}
\begin{lemma}\label{cst-bound}
The above martingale $Y_1,..., Y_{2n}$ satisfies
$ \lvert Y_i - Y_{i-1} \rvert \leq C_i u$, for all $1\leq i \leq 2n,$
where
\begin{align*}
\left\{
\begin{array}{ccl}
C_{2k-1} &=& \lvert a_n \rvert (1+u)^{2k-2} + \sum_{j=1}^{k-1} \lvert a_{n-j} \rvert \lvert x \rvert^{-j}(1+u)^{2(k-j)-1},\\
C_{2k} &=& \lvert a_n \rvert (1+u)^{2k-1} + \sum_{j=1}^k \lvert a_{n-j} \rvert \lvert x \rvert^{-j}(1+u)^{2(k-j)},
\end{array}
\right.
\end{align*}
for all $1\leq k \leq n.$
\end{lemma}
\begin{proof}
Note that $Y_0=0$; then $\lvert Y_1 - Y_0 \rvert = \lvert Y_1 \rvert = \lvert a_n \delta_1 \rvert \leq \lvert a_n \rvert u = C_1 u$, so the bound holds for $C_1$. Using equation~(\ref{eq2}),
$$ \lvert Y_{2k-1} - Y_{2k-2} \rvert \leq \frac{1}{\lvert x \rvert^{k-1}} \lvert \widehat r_{2k-2} \rvert u.
$$
Moreover
\begin{align*}
\lvert \widehat r_{2k-2} \rvert &\leq \lvert \widehat r_{2k-3} \rvert (1+u) + \lvert a_{n-k+1}\rvert (1+u) \leq \lvert \widehat r_{2k-4} \rvert \lvert x \rvert (1+u)^2 + \lvert a_{n-k+1}\rvert (1+u),
\end{align*}
and by induction we obtain
$$ \lvert \widehat r_{2k-2} \rvert \leq \lvert a_n \rvert \lvert x \rvert^{k-1} (1+u)^{2k-2} + \sum_{j=1}^{k-1} \lvert a_{n-j} \rvert \lvert x \rvert^{k-1-j}(1+u)^{2(k-j)-1}.
$$
Dividing by $\lvert x \rvert^{k-1}$ completes the proof for $C_{2k-1}$ for all $1\leq k \leq n$. A similar argument proves the result for $C_{2k}$ for all $1\leq k \leq n$.
\end{proof}
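The closed forms for $C_{2k-1}$ and $C_{2k}$ can be cross-checked against the worst-case recurrence they are unrolled from. In the sketch below (the coefficients, $x$ and $u$ are arbitrary; `a[j]` plays the role of $a_{n-j}$), the recursively accumulated increment bounds agree with the closed forms up to floating-point noise:

```python
# hypothetical coefficients a_n, a_{n-1}, ..., a_0 and evaluation point x
n, u, x = 6, 2.0 ** -10, 1.7
a = [0.3, -1.2, 0.8, 2.5, -0.7, 1.1, 0.4]   # a[j] plays the role of a_{n-j}

rb = abs(a[0])                 # worst-case bound on |r_hat_0| = |a_n|
C = []                         # C_1, C_2, ..., C_{2n} from the recurrence
for k in range(1, n + 1):
    C.append(rb / abs(x) ** (k - 1))          # C_{2k-1} = bound / |x|^{k-1}
    rb = rb * abs(x) * (1 + u)                # odd step: multiply by x
    C.append((rb + abs(a[k])) / abs(x) ** k)  # C_{2k}
    rb = (rb + abs(a[k])) * (1 + u)           # even step: add a_{n-k}

def C_odd(k):   # closed form for C_{2k-1} from the lemma
    return abs(a[0]) * (1 + u) ** (2 * k - 2) + sum(
        abs(a[j]) * abs(x) ** (-j) * (1 + u) ** (2 * (k - j) - 1)
        for j in range(1, k))

def C_even(k):  # closed form for C_{2k} from the lemma
    return abs(a[0]) * (1 + u) ** (2 * k - 1) + sum(
        abs(a[j]) * abs(x) ** (-j) * (1 + u) ** (2 * (k - j))
        for j in range(1, k + 1))

for k in range(1, n + 1):
    assert abs(C[2 * k - 2] - C_odd(k)) < 1e-12 * C_odd(k)
    assert abs(C[2 * k - 1] - C_even(k)) < 1e-12 * C_even(k)
```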
We now have all the tools to state and demonstrate the main result of this section:
\begin{theorem}
Under SR-nearness, for all $0 < \lambda <1$ and with probability at least $1-\lambda$
\begin{equation}
\frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} \leq cond_1(P,x) \sqrt{u \gamma_{4n}} \sqrt{\ln (2 / \lambda)},
\end{equation}
where $cond_1(P,x) = \frac{\sum_{i=0}^n \lvert a_i x^i \rvert}{\lvert P(x) \rvert}$ is the condition number of the polynomial evaluation and $\gamma_{4n}=(1+u)^{4n} -1$.
\end{theorem}
\begin{proof}
Recall that $\lvert \widehat r_{2n} - r_{2n} \rvert = \lvert Z_{2n} \rvert = \lvert x^n \rvert \lvert Y_{2n} \rvert$. Moreover, $Y_1,\dots,Y_{2n}$ is a martingale with respect to $\delta_1, \dots, \delta_{2n}$, and Lemma~\ref{cst-bound} implies $ \lvert Y_i - Y_{i-1} \rvert \leq C_i u$ for all $1\leq i \leq 2n$.
Using the Azuma-Hoeffding inequality yields
$$\mathbb{P}\left( \lvert Y_{2n} \rvert \leq u \sqrt{\sum_{i=1}^{2n}C_i^2}\sqrt{2 \ln (2 / \lambda)}\right) \geq 1-\lambda,
$$
it follows that
$$ \lvert Z_{2n} \rvert \leq u \sqrt{\sum_{i=1}^{2n} (\lvert x \rvert^n C_i)^2}\sqrt{2 \ln (2 / \lambda)},
$$
with probability at least $1-\lambda$, where
\begin{align*}
\lvert x \rvert^n C_{2k} &= \lvert a_n \rvert \lvert x \rvert^n (1+u)^{2k-1} + \sum_{j=1}^k \lvert a_{n-j} x^{n-j} \rvert(1+u)^{2(k-j)}\\
&\leq(1+u)^{2k-1} \sum_{j=0}^k \lvert a_{n-j} x^{n-j} \rvert \leq(1+u)^{2k-1} \sum_{j=0}^n \lvert a_{j} x^j \rvert,
\end{align*}
for all $1\leq k \leq n.$
Hence, $$ (\lvert x \rvert^n C_{2k})^2 \leq (1+u)^{2(2k-1)} \big(\sum_{j=0}^n \lvert a_{j} x^j \rvert \big)^2.
$$
In a similar way
$$ (\lvert x \rvert^n C_{2k-1})^2 \leq (1+u)^{2(2k-2)} \big(\sum_{j=0}^n \lvert a_{j} x^j \rvert \big)^2.
$$
Thus
\begin{align*}
\sum_{i=1}^{2n} (\lvert x \rvert^n C_i)^2 &\leq \big(\sum_{j=0}^n \lvert a_{j} x^j \rvert \big)^2 \sum_{i=0}^{2n-1} ((1+u)^{2})^i\\
&= \big(\sum_{j=0}^n \lvert a_{j} x^j \rvert \big)^2 \frac{((1+u)^2)^{2n}-1}{(1+u)^2-1}= \big(\sum_{j=0}^n \lvert a_{j} x^j \rvert \big)^2 \frac{\gamma_{4n}}{u^2+2u}.
\end{align*}
As a result
$$ \lvert \widehat{P}(x) - P(x) \rvert = \lvert Z_{2n} \rvert \leq \sum_{j=0}^n \lvert a_{j} x^j \rvert \sqrt{\frac{u \gamma_{4n}}{2+u}} \sqrt{2 \ln (2 / \lambda)},
$$
with probability at least $1-\lambda$.
Finally
$$
\frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} \leq cond_1(P,x) \sqrt{u \gamma_{4n}} \sqrt{\ln (2 / \lambda)},
$$
with probability at least $1-\lambda$.
\end{proof}
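The theorem can be observed empirically. The sketch below simulates SR-nearness on a $p$-bit significand (a hypothetical low precision rather than binary32, so that errors are visible quickly) and checks that the relative error of an SR Horner evaluation stays below the Azuma-Hoeffding bound with frequency at least $1-\lambda$; the coefficients are arbitrary and well conditioned:

```python
import math, random

def sr(z, p=12):
    """Stochastic rounding of z to a p-bit significand (SR-nearness)."""
    if z == 0.0:
        return 0.0
    m, e = math.frexp(z)            # z = m * 2**e with 0.5 <= |m| < 1
    g = m * 2.0 ** p
    lo = math.floor(g)
    up = 1 if random.random() < g - lo else 0
    return (lo + up) * 2.0 ** (e - p)

def horner_sr(a, x, p=12):
    """Horner evaluation of a[0]*x^n + ... + a[n], one SR per operation."""
    r = a[0]
    for c in a[1:]:
        r = sr(r * x, p)            # odd step: multiply by x
        r = sr(r + c, p)            # even step: add the next coefficient
    return r

n = 20
a = [1.0] * (n + 1)                 # arbitrary well-conditioned coefficients
x, p = 0.9, 12
u = 2.0 ** (1 - p)                  # SR unit roundoff for a p-bit significand
lam = 0.1
exact = sum(x ** i for i in range(n + 1))
cond1 = sum(abs(x) ** i for i in range(n + 1)) / abs(exact)   # = 1 here
bound = cond1 * math.sqrt(u * ((1 + u) ** (4 * n) - 1)) \
              * math.sqrt(math.log(2 / lam))

random.seed(1)
trials = 2000
inside = sum(abs(horner_sr(a, x, p) - exact) / abs(exact) <= bound
             for _ in range(trials))
assert inside / trials >= 1 - lam   # AH bound holds with prob >= 1 - lam
```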
\subsection{Bienaymé–Chebyshev method}
Another way to obtain a probabilistic $O(\sqrt{n}u)$ bound is to use the Bienaymé–Chebyshev inequality. This method requires only information on the variance. Moreover, we will see in Section~\ref{bound-analyze} that for any probability $\lambda$ there exists $n$ such that this method yields a tighter probabilistic bound than the AH method.
\begin{lemma}(Bienaymé–Chebyshev inequality)\label{Bienaymé–Chebyshev-inequality}
Let $X$ be a random variable with finite expected value and finite non-zero variance. For any real number $\alpha > 0$,
$$ \mathbb{P}\big(\lvert X - E(X) \rvert \leq \alpha \sqrt{V(X)}\big) \geq 1- \frac{1}{\alpha^2}.
$$
\end{lemma}
For the two algorithms above, the computed $\widehat{y}$ satisfies $E(\widehat y)= y$, hence
$$\mathbb{P}\left(
\lvert \widehat{y} - y \rvert \leq \alpha \sqrt{V(\widehat y)}
\right) \geq 1-\frac{1}{\alpha^2},
$$
taking $\lambda = \frac{1}{\alpha^2}$ yields
$\lvert \widehat{y} - y \rvert \leq \sqrt{V(\widehat y)/\lambda}$ with probability at least $1-\lambda$.
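The guarantee above can be checked on a toy model: a sum of $m$ independent stochastic roundings, whose mean and variance are known in closed form (the grid spacing and fractional position below are arbitrary choices):

```python
import math, random

# y_hat: sum of m independent stochastic roundings of z on a hypothetical
# grid of spacing s, so E[y_hat] = m*z and V(y_hat) = m * s**2 * f * (1-f)
s, f, m = 2.0 ** -8, 0.37, 100
z = 5.0 + f * s                     # z sits between grid points 5.0 and 5.0+s
var = m * s ** 2 * f * (1 - f)

random.seed(2)
def y_hat():
    return sum(5.0 + s if random.random() < f else 5.0 for _ in range(m))

for lam in (0.5, 0.2, 0.05):
    radius = math.sqrt(var / lam)   # Bienaymé-Chebyshev radius
    hits = sum(abs(y_hat() - m * z) <= radius for _ in range(5000))
    assert hits / 5000 >= 1 - lam   # guaranteed frequency of at least 1-lam
```

As expected from Bienaymé–Chebyshev, the observed success frequency is well above the guaranteed $1-\lambda$: the inequality is distribution-free, hence conservative.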
\subsubsection{Inner product}\label{sub-inner}
From Theorem \ref{innervar} we have
$$ \frac{\sqrt{V(\widehat y)/\lambda}}{\lvert y \rvert} \leq \mathcal{K}_1 \sqrt{\gamma_{n}(u^2)/\lambda}.
$$
Thus,
$$ \frac{\lvert \widehat{y} - y \rvert}{\lvert y \rvert} \leq \frac{\sqrt{V(\widehat y)/\lambda}}{\lvert y \rvert} \leq \mathcal{K}_1 \sqrt{\gamma_{n}(u^2)/\lambda},
$$
and
\begin{equation}
\mathbb{P} \left( \frac{\lvert \widehat{y} - y \rvert}{\lvert y \rvert} \leq \mathcal{K}_1 \sqrt{\gamma_{n}(u^2)/\lambda}\right) \geq \mathbb{P} \left( \frac{\lvert \widehat{y} - y \rvert}{\lvert y \rvert} \leq \frac{\sqrt{V(\widehat y)/\lambda}}{\lvert y \rvert} \right) \geq 1-\lambda.
\end{equation}
\subsubsection{Horner algorithm}
From Theorem \ref{hor-var} we have
$$ \frac{\sqrt{V\big( \widehat{P}(x) \big)}}{\lvert P(x) \rvert} \leq cond_1(P,x) \sqrt{\gamma_{2n}(u^2)}.
$$
The reasoning of Subsection~\ref{sub-inner} leads to
\begin{equation}
\mathbb{P} \left( \frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} \leq cond_1(P,x) \sqrt{\gamma_{2n}(u^2)/\lambda} \right) \geq1-\lambda.
\end{equation}
\section{Bounds analysis}\label{bound-analyze}
In the following, we compare the various bounds of the two previous algorithms and analyze which bound is the tightest depending on the precision in use, the target probability and the number of operations.
\subsection{Inner product}\label{bound-compare}
Let us first recall all the bounds obtained for the inner product $y=a^{\top}b,$ where $a,b\in\mathbb{R}^n$:
\begin{align}
\frac{\lvert \widehat{y} - y \rvert}{\lvert y \rvert} &\leq \mathcal{K}_1 \gamma_{n}(u), \quad \label{det-inn-bound}\tag{Det-IP}\\
\frac{\lvert \widehat{y} - y \rvert}{\lvert y \rvert} &\leq \mathcal{K}_1 \sqrt{u \gamma_{2n}(u)} \sqrt{\ln (2 / \lambda)} && \text{with probability at least $1-\lambda$}, \label{azum-inn-bound}\tag{AH-IP} \\
\frac{\lvert \widehat{y} - y \rvert}{\lvert y \rvert} &\leq \mathcal{K}_1 \sqrt{ \gamma_{n}(u^2) } \sqrt{1/ \lambda} &&\text{with probability at least $1-\lambda$}. \label{cheb-inn-bound}\tag{BC-IP}
\end{align}
All bounds have the same condition number $\mathcal{K}_1$, but differ in the other factors: $\gamma_{n}(u)$ for~(\ref{det-inn-bound}), $ \sqrt{u \gamma_{2n}(u)} \sqrt{\ln (2 / \lambda)}$ for~(\ref{azum-inn-bound}) and $\sqrt{ \gamma_{n}(u^2)} \sqrt{1/ \lambda}$ for~(\ref{cheb-inn-bound}).
For $n$ such that $nu < 1$, \cite[Lemma 3.1]{higham2002} implies
$$
\gamma_{n}(u) \leq \frac{nu}{1-nu},
$$
it follows that for $2nu <1$,
\begin{align*}
\sqrt{u \gamma_{2n}(u)} &\leq \sqrt{ \frac{2nu^2}{1-2nu}} = u \sqrt{n} \sqrt{\frac{2}{1-2nu}},
\end{align*}
and for $nu^2 <1$
\begin{align*}
\sqrt{\gamma_{n}(u^2)} &\leq \sqrt{ \frac{nu^2}{1-nu^2}} = u \sqrt{n} \frac{1}{ \sqrt{ 1-nu^2}}.
\end{align*}
For small $u$, Taylor's formula gives $\gamma_n(u) = nu + O(u^2)$, hence
$\gamma_{n}(u) \approx nu$.
This approach cannot be applied directly to $\sqrt{u \gamma_{2n}(u)}$ because the ratio $\sqrt{u \gamma_{2n}(u)}/(\sqrt{n}u)$ is an indeterminate form at $u=0$. However,
$$\lim_{u \to 0} \frac{ \sqrt{u \gamma_{2n}(u)} }{\sqrt{n} u} = \sqrt{2} \Longleftrightarrow \sqrt{u \gamma_{2n}(u)} \approx \sqrt{2} \sqrt{n} u,
$$
and
$$\lim_{u \to 0} \frac{ \sqrt{\gamma_{n}(u^2)} }{\sqrt{n} u} = 1 \ \Longleftrightarrow \sqrt{\gamma_{n}(u^2)} \approx \sqrt{n} u.
$$
Interestingly, the AH and BC methods are asymptotically of the same order, and the probabilistic bounds for the inner product forward error are in $O(\sqrt{n} u)$ versus $O(nu)$ for the deterministic bound.
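These asymptotic equivalents are easy to confirm numerically; the sketch below evaluates the three factors for the binary32 unit roundoff and checks the ratios against their $nu$ and $\sqrt{n}u$ equivalents:

```python
import math

def gamma(n, u):
    return (1 + u) ** n - 1

u = 2.0 ** -23                           # binary32 unit roundoff
for n in (10, 1000, 100_000):
    det = gamma(n, u)                    # deterministic factor
    ah = math.sqrt(u * gamma(2 * n, u))  # Azuma-Hoeffding factor
    bc = math.sqrt(gamma(n, u * u))      # Bienaymé-Chebyshev factor
    assert abs(det / (n * u) - 1) < 1e-2                  # ~ n*u
    assert abs(ah / (math.sqrt(2 * n) * u) - 1) < 1e-2    # ~ sqrt(2n)*u
    assert abs(bc / (math.sqrt(n) * u) - 1) < 1e-2        # ~ sqrt(n)*u
```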
\subsection{Horner algorithm}
Let us recall all bounds obtained for the Horner algorithm
\begin{align}
\frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} &\leq cond_1(P,x) \gamma_{2n}(u), \quad \label{det-hor-bound}\tag{Det-H}\\
\frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} &\leq cond_1(P,x) \sqrt{u \gamma_{4n}(u)} \sqrt{\ln \frac2\lambda} && \text{with probability $\geq1-\lambda$}, \label{azum-hor-bound}\tag{AH-H}\\
\frac{\lvert \widehat{P}(x) - P(x) \rvert}{\lvert P(x) \rvert} &\leq cond_1(P,x) \sqrt{ \gamma_{2n}(u^2) } \sqrt{\frac1 \lambda} && \text{with probability $\geq1-\lambda$}. \label{cheb-hor-bound}\tag{BC-H}
\end{align}
A similar reasoning to Section~\ref{bound-compare} shows that the probabilistic bounds for the Horner algorithm forward error are in $O(\sqrt{n} u)$ versus $O(nu)$ for the deterministic bound.
In conclusion, these probabilistic approaches show that the roundoff error accumulated in $n$ operations is proportional to $\sqrt{n}u$ rather than $nu$. In the next section, we analyze these two probabilistic methods.
\subsection{Bienaymé–Chebyshev vs Azuma-Hoeffding}\label{cheb-vs-azum}
In the following, we focus on the inner product bounds (similar reasoning can be applied to the Horner algorithm with the same result).
The two probabilistic bounds have the same condition number $\mathcal{K}_1$. Thus, it is enough to compare $\sqrt{\frac{u}{2} \gamma_{2n}(u)} \sqrt{2\ln (2 / \lambda)}$ and $\sqrt{ \gamma_{n}(u^2) } \sqrt{1/ \lambda}$. These two bounds depend on $n$ and $\lambda$. Firstly, using the binomial theorem we have
\begin{align*}
\frac{u}{2} \gamma_{2n}(u) - \gamma_{n}(u^2) &= \frac{u}{2} \left( (1+u^2+2u)^{n} -1\right) - \left((1+u^2)^n -1\right)\\
&= \frac{u}{2} \sum_{k=1}^n \dbinom{n}{k} (u^2+2u)^{k} - \sum_{k=1}^n \dbinom{n}{k} (u^2)^{k}\\
&= \sum_{k=1}^n \dbinom{n}{k} \left[\frac{u}{2}(u^2+2u)^k -(u^2)^k\right]\\
&\geq \sum_{k=1}^2 \dbinom{n}{k} \left[\frac{u}{2}(u^2+2u)^k -(u^2)^k\right]\\
&\geq n(n-\frac12)u^3.
\end{align*}
We can conclude that
\begin{equation}\label{inequality}
\sqrt{ \gamma_{n}(u^2) } \leq \sqrt{\frac{u}{2} \gamma_{2n}(u)} \ \text{for all} \ n\geq 1.
\end{equation}
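Inequality~(\ref{inequality}) can also be verified exhaustively for moderate $n$ and several precisions:

```python
import math

# check sqrt(gamma_n(u^2)) <= sqrt((u/2) * gamma_2n(u)) for all n >= 1
for u in (2.0 ** -7, 2.0 ** -10, 2.0 ** -23):
    for n in range(1, 2001):
        lhs = math.sqrt((1 + u * u) ** n - 1)
        rhs = math.sqrt((u / 2) * ((1 + u) ** (2 * n) - 1))
        assert lhs <= rhs
```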
Now, let us compare $\sqrt{1/\lambda}$ and $\sqrt{2\ln(2/\lambda)}$ for $\lambda \in]0;1[$.
\begin{figure}
\centering
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.99\linewidth]{compare.pdf}
\caption{Illustration of $\sqrt{1/\lambda}$ and $\sqrt{2\ln(2/\lambda)}$ behaviour for all $\lambda \in]0;1[$.}
\label{compare lambda}
\end{minipage}%
\hfill
\begin{minipage}{.48\textwidth}
\centering
\includegraphics[width=.99\linewidth]{plot-cheb-azum.pdf}
\caption{AH bound vs BC bound with probability $0.95$ and $u=2^{-23}$ for the inner product.}
\label{fig:cheb-vs-azuma}
\end{minipage}
\end{figure}
Figure~\ref{compare lambda} and inequality~(\ref{inequality}) show that, for any problem size $n$ and for a probability of at most $\approx 0.758$, the BC method gives a tighter probabilistic bound than the AH method.
Figure~\ref{fig:cheb-vs-azuma} shows that with probability $0.95$ and for a large $n$, the AH bound grows rapidly (quadratically) compared to the BC bound (linear growth). The variance calculation and the mean independence make it possible to bound the error terms $(1+\delta)^2$ by $(1+u^2)$ and to avoid all $\delta$ terms of degree one because $E(\delta)=0$. In contrast, the AH method requires bounded increments, leading to terms $(1+u)^2$. As $n$ increases, the quantity $\sqrt{\frac{u}{2} \gamma_{2n}(u)}$ dominates, therefore the advantage of the Azuma-Hoeffding inequality for a probability near $1$ becomes negligible.
\begin{figure}
\centering
\begin{tabular}{|C{2.5cm}|C{2.5cm}|C{3.5cm}|C{2.5cm}|}
\hline Probability & $u$ & Precision format & $n \gtrsim $ \\
\hline $1- \lambda = 0.95$ & $2^{-7}$ & bfloat16 & $110$ \\
\cline{2-4}
& $2^{-10}$ & fp16 & $890$ \\
\cline{2-4}
& $2^{-23}$ & fp32 & $7.3 \times 10^{6}$ \\
\cline{2-4}
& $2^{-52}$ & fp64 & $3.9 \times 10^{15}$\\
\hline $1- \lambda = 0.99$ & $2^{-7}$ & bfloat16 & $220$ \\
\cline{2-4}
& $2^{-10}$ & fp16 & $1810$ \\
\cline{2-4}
& $2^{-23}$ & fp32 & $1.48 \times 10^{7}$\\
\cline{2-4}
& $2^{-52}$ & fp64 & $7.9 \times 10^{15}$\\
\hline
\end{tabular}
\caption{The smallest $n$ such that BC method gives a tighter probabilistic bound than AH method for the inner product.}
\label{table-n}
\end{figure}
Figure~\ref{table-n} illustrates how BC becomes tighter than AH as $n$ grows. The threshold on $n$ above which the BC bound is preferable to the AH bound depends on the precision format: the lower the precision, the lower the threshold. Using SR in low precision is of high interest in the areas of machine learning~\cite{gupta}, PDEs~\cite{pde} and ODEs~\cite{ode}, motivating the use of our improved BC method.
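These thresholds can be reproduced by a direct search for the smallest $n$ at which the BC factor drops below the AH factor. The sketch below checks the bfloat16 and fp16 rows only (the incremental search is far too slow for the fp32 and fp64 thresholds):

```python
import math

def ah(n, u, lam):      # Azuma-Hoeffding factor for the inner product
    return math.sqrt(u * ((1 + u) ** (2 * n) - 1) * math.log(2 / lam))

def bc(n, u, lam):      # Bienaymé-Chebyshev factor
    return math.sqrt(((1 + u * u) ** n - 1) / lam)

def crossover(u, lam):
    """Smallest n at which the BC factor drops below the AH factor."""
    n = 1
    while bc(n, u, lam) > ah(n, u, lam):
        n += 1
    return n

n_bf16 = crossover(2.0 ** -7, 0.05)     # bfloat16, probability 0.95
n_fp16 = crossover(2.0 ** -10, 0.05)    # fp16, probability 0.95
assert 80 <= n_bf16 <= 150              # of the order of the reported ~110
assert 800 <= n_fp16 <= 1000            # of the order of the reported ~890
```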
\section{Numerical experiments}\label{sec:exp}
This section presents numerical experiments that support and complete the theory presented previously. The various bounds are compared on two numerical applications: the inner product and the evaluation of the Chebyshev polynomial.
We show that the probabilistic bounds are tighter than the deterministic bound and faithfully capture the behavior of the SR-nearness forward error. For the inner product of large vectors, we show that the BC bound is smaller than the AH bound. All SR computations are repeated $30$ times with verificarlo~\cite{verificarlo}; we plot all samples as well as the forward error of the average of the $30$ SR instances.
\subsection{Horner algorithm}
We use Horner’s method to evaluate the polynomial $P(x)=T_{N}(x) = \sum_{i=0}^{\lfloor N/2 \rfloor} a_i (x^2)^i$, where $T_{N}$ is the Chebyshev polynomial of even degree $N=2n$. The previous error bounds~(\ref{det-hor-bound}), (\ref{azum-hor-bound}), and~(\ref{cheb-hor-bound}) apply to this computation.
\begin{figure}
\centering
\subfloat{
\hspace{-.6cm}\includegraphics[scale=0.41]{plot-20-0.5.pdf}
} \subfloat{
\hspace{-0.8cm}\includegraphics[scale=0.41]{plot-20-0.9.pdf}\hspace{-0.8cm}
}
\caption{Probabilistic error bounds with probability $1- \lambda =0.5$ (left) and $1- \lambda =0.9$ (right) vs deterministic bound for the Horner's evaluation of $T_{20}(x)$ and $u=2^{-23}$.
Triangles mark 30 instances of the SR-nearness evaluation in binary32 precision, a circle marks their average, and a star represents the IEEE RN-binary32 value.}
\label{fig:n=20}
\end{figure}
The Chebyshev polynomial is ill-conditioned near $1$, as shown in Figure~\ref{fig:n=20}, which evaluates $T_{20}(x)$ for $x\in[\frac{8}{64};1]$. Due to catastrophic cancellations among the polynomial terms, the condition number increases from $10^0$ to $10^7$ over the chosen $x$ interval, resulting in an increasing numerical error for both RN-binary32 and SR-nearness computations.
The left plot confirms that the Bienaymé–Chebyshev bound~(\ref{cheb-hor-bound}) is more accurate than the Azuma-Hoeffding bound~(\ref{azum-hor-bound}) for probability $1 - \lambda = 0.5$. With the higher probability $1 - \lambda = 0.9$ (right plot), since $N=20$ and $u=2^{-23}$, the Azuma-Hoeffding bound~(\ref{azum-hor-bound}) is tighter, as predicted by Figure~\ref{fig:cheb-vs-azuma}. Both probabilistic bounds are tighter than the deterministic bound.
For $N=20$, there is no significant difference between SR-nearness and RN-binary32. However, as expected, the average of the SR-nearness computations is more precise than the round-to-nearest evaluation for almost all values of $x$.
\begin{figure}
\centering
\subfloat{
\hspace{-.6cm}\includegraphics[scale=0.44]{plot24-26-0.5.pdf}
} \subfloat{
\hspace{-0.3cm}\includegraphics[scale=0.44]{plot24-26-0.9.pdf}\hspace{-0.9cm}
}
\caption{Normalized forward error ($\text{error}/cond(P,x)$) with probability $1- \lambda =0.5$ (left) and $1- \lambda =0.9$ (right) for Horner's evaluation of $T_{N}(24/26)$.
}
\label{fig:x=24/26}
\end{figure}
In Figure~\ref{fig:x=24/26}, the three previous bounds and the forward error are normalized by the condition number $cond(P,x)$. The evaluation at $x=24/26\approx0.923$ is plotted for various polynomial degrees $N$. As expected, when $N$ increases, the deterministic bound grows faster than the probabilistic bounds. The right plot shows that the Azuma-Hoeffding bound is tighter for a high probability and a small $n$.
Overall, the Chebyshev polynomial experiment illustrates the advantage of the probabilistic error bounds over the deterministic error bound.
However, for most of the evaluations in this experiment, RN-binary32 is more accurate than a single instance of SR-nearness. This result is unsurprising because the degree $n$ is small. To illustrate the behavior of these errors for a large $n$, we now turn to the inner product.
\subsection{Inner product}
To showcase the advantage of using BC method for large $n$, we present a numerical application of the inner product for vectors with positive floating-points chosen uniformly at random between $0$ and $1$.
\begin{figure}
\centering
\includegraphics[scale=0.45]{inner-product-plottt.pdf}
\caption{Probabilistic bounds with probability $1- \lambda =0.9$ vs deterministic bound of the computed forward errors of the inner product with $u=2^{-23}$.}
\label{fig:inner}
\end{figure}
For small $n$, the AH and BC bounds are comparable, with a slight advantage for~(\ref{azum-inn-bound}).
However, for large $n$, the AH bound grows exponentially faster than the BC bound. Asymptotically, the BC bound is therefore much tighter.
Interestingly, when $n$ increases, a single instance of SR-nearness in binary32 precision is more accurate than RN-binary32. Therefore, for large vectors, using stochastic rounding instead of the default round to nearest improves the computation accuracy of the inner product.
However, this experiment raises concerns regarding the use of SR as a model to estimate RN rounding errors~\cite{verificarlo,sohier2021confidence}, in particular for a large number of operations. Further studies are required to assess precisely the limits of this model and possibly give criteria to detect them.
\section{Conclusion}
For a wide range of applications, SR results in a smaller accumulated error, for example by avoiding stagnation effects. Moreover, SR errors satisfy the mean independence property, which makes it possible to derive tight probabilistic error bounds from either our variance bound or the martingale property.
For an inner product $y=a^{\top}b,$ Subsection~\ref{azuma-method} illustrates the benefits of constructing the martingale from the recursive summation of the inner product~\cite{ilse} rather than from the errors accumulated in the whole process at each product $a_ib_i$~\cite{theo21stocha, theo19}. In particular, the construction in~\cite{ilse} gives an $O(\sqrt{n}u)$ probabilistic bound, tighter than the $O\big(u\sqrt{n\ln{(n)}} \big)$ bound in~\cite{theo21stocha}.
We have presented an extension of the method of~\cite{ilse} to the Horner algorithm.
Unlike the inner product, the Horner algorithm does not directly satisfy the martingale property on its partial sums, requiring a change of variable before one can use the Azuma-Hoeffding inequality.
Lemma~\ref{model} is a variance bound for the family of algorithms whose error can be written as a product of error terms of the form $1+ \delta$. Based on the Bienaymé–Chebyshev inequality, a new method is proposed to obtain probabilistic error bounds. This method yields tighter probabilistic error bounds in various situations, such as computations with a large $n$. We demonstrate the strength of this new approach on two algorithms: the inner product, which has been previously studied, and Horner polynomial evaluation, for which no SR results were known beforehand.
The scripts for reproducing the numerical experiments in this paper are published in the repository \url{https://github.com/verificarlo/sr-variance-bounds/}.
\section*{Acknowledgment}
This research was supported by the French National Agency for Research (ANR) via the InterFLOP project (No. ANR-20-CE46-0009).
\bibliographystyle{siam}
\section*{Introduction}
Throughout this paper, by a \emph{Hankel matrix}, we mean a matrix of the form
\[
H\colonequals\begin{pmatrix}
x_1 & x_2 & x_3 & \cdots & x_s\\
x_2 & x_3 & \cdots & \cdots & x_{s+1}\\
x_3 & \cdots & \cdots & \cdots & x_{s+2}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
x_r & \cdots & \cdots & \cdots & x_{s+r-1}
\end{pmatrix},
\]
where $x_1,\dots,x_{s+r-1}$ are indeterminates over a field $\mathbb{F}$. By a \emph{Hankel determinantal ring} we mean a ring of the form
\[
\mathbb{F}[x_1,\dots,x_{s+r-1}]/I_t(H),
\]
where~$1\leqslant t\leqslant\min\{r,s\}$, and $I_t(H)$ is the ideal generated by the size $t$ minors of~$H$. These rings arise as homogeneous coordinate rings of higher order secant varieties of rational normal curves, see for example Room's 1938 study \cite[Chapter~11.7]{Room}.
We prove that Hankel determinantal rings over fields of characteristic zero have rational singularities, Theorem~\ref{theorem:ratsing}. In particular, higher order secant varieties of rational normal curves have rational singularities. Theorem~\ref{theorem:ratsing} may be compared with corresponding statements for generic determinantal rings, and those defined by minors of symmetric matrices of indeterminates or by pfaffians of skew-symmetric matrices of indeterminates: in characteristic zero, these are all invariant rings of linearly reductive classical groups acting on polynomial rings, and hence are pure subrings of polynomial rings. By Boutot's theorem~\cite{Boutot}, it then follows that they have rational singularities. We do not know if Hankel determinantal rings, in general, arise as invariant rings for group actions on polynomial rings, or if they are pure subrings of polynomial rings. However, for $t\geqslant 3$, we show that they are not pure subrings of the polynomial rings in which they are naturally embedded, see Proposition~\ref{proposition:not:pure:subrings}. Our proof of rational singularities is via reduction modulo~$p$ methods, using Smith's theorem \cite{Smith:ratsing} that rings of $F$-rational type have rational singularities.
We compute the divisor class groups of Hankel determinantal rings: the group is finite cyclic, in particular, the rings are $\mathbb{Q}$-Gorenstein, and we show that each divisor class group element corresponds to a rank one maximal Cohen-Macaulay module, see Theorem~\ref{theorem:class:group}.
We also prove that Hankel determinantal rings over fields of positive characteristic are~$F$-pure, Theorem~\ref{theorem:fpure}. Finally, for $R$ a Hankel determinantal ring with homogeneous maximal ideal $\mathfrak{m}_R$, we compute the $F$-pure threshold of~$\mathfrak{m}_R$ in $R$, and of its defining ideal~$I_t(H)$ in the ambient polynomial ring, see Theorems~\ref{theorem:fpt1} and~\ref{theorem:fpt2}.
\section{Generalities}
By a result of Gruson and Peskine,~\cite[Lemme~2.3]{Gruson-Peskine}, every Hankel determinantal ring is isomorphic to one where the defining ideal $I_t(H)$ is generated by the maximal sized minors of a Hankel matrix; alternatively see \cite[Proposition~7]{JWatanabe} or \cite[Corollary~2.2(b)]{Conca}. In view of this, we will henceforth work with Hankel determinantal rings of the form
\[
R\colonequals\mathbb{F}[x_1,\dots,x_{n+t-1}]/I_t(H),
\]
where $H$ is a $t\times n$ Hankel matrix and $t\leqslant n$; except where stated otherwise, $H$ will denote such a~matrix.
Consider the \emph{generic} determinantal ring
\[
B\colonequals\mathbb{F}[Y]/I_t(Y),
\]
where $Y$ is a~$t\times n$ matrix of indeterminates, and $I_t(Y)$ the ideal generated by its size $t$ minors. The $(t-1)(n-1)$ elements $Y_{i,j+1}-Y_{i+1,j}$ are readily seen to be part of a system of parameters for $B$, and specializing these to $0$ gives a ring isomorphic to~$R$. Since~$B$ is Cohen-Macaulay by \cite{Eagon-Northcott, Hochster-Eagon}, so is the ring $R$, and the Eagon-Northcott complex provides a minimal free resolution of $R$. It follows as well that
\[
\dim R\ =\ 2t-2,
\]
and hence that
\[
\operatorname{height}\, I_t(H)\ =\ n-t+1.
\]
The elements $x_1,\dots,x_{t-1},x_{n+1},\dots,x_{n+t-1}$ are a homogeneous system of parameters for~$R$, and the socle modulo this system of parameters is spanned by the degree $t-1$ monomials in~$x_t,\dots,x_n$. In particular, the ring $R$ has $a$-invariant
\[
a(R)\ =\ 1-t.
\]
The multiplicity of the ring $R$ is
\[
e(R)\ =\ \binom{n}{t-1},
\]
as may be seen directly from the above discussion, or obtained using the multiplicity of generic determinantal rings, e.g., \cite[Proposition~2.15]{Bruns-Vetter}.
The ring $R$ is a normal domain, see for example,~\cite[Proposition~8]{JWatanabe}; it is Gorenstein precisely when~$t=n$. The ideal $I_t(H)$ is a set-theoretic complete intersection by~\cite{Valla}. The singular locus of $R$ is defined by the image of $I_{t-1}(H)$, see \cite[Theorem~1.56]{Iarrobino-Kanev}. For secant varieties of smooth curves in general, we mention \cite{Vermeire} and the references therein.
\begin{notation}
Given a matrix $X$, we use ${[a_1\ \dots\ a_r\mid b_1\ \dots\ b_r]}_X$ to denote the determinant of the submatrix of $X$ with rows $a_1,\dots,a_r$ and columns $b_1,\dots,b_r$. We omit the subscript whenever the matrix is clear from the context.
\end{notation}
\section{Rational singularities}
In proving that Hankel determinantal rings of characteristic zero have rational singularities, we will use the following description: A $2\times n$ Hankel determinantal ring over a field~$\mathbb{F}$ is readily seen to be isomorphic to the $n$-th Veronese subring of a polynomial ring~$\mathbb{F}[u,v]$, where the Hankel matrix maps entrywise to
\[
\begin{pmatrix}
u^n & u^{n-1}v & u^{n-2}v^2 & \cdots & uv^{n-1}\\
u^{n-1}v & u^{n-2}v^2 & \cdots & \cdots & v^n
\end{pmatrix}.
\]
This is the homogeneous coordinate ring of the rational normal curve $C_n$ in $\mathbb{P}^n$; as it is a Veronese subring, it is a pure subring of $\mathbb{F}[u,v]$, independent of the characteristic of $\mathbb{F}$.
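The vanishing of the $2\times 2$ minors of this matrix can be checked mechanically: each entry is a single monomial in $u,v$, and a minor vanishes exactly when its two cross products are the same monomial. A short sketch (with $n=5$ as an arbitrary choice):

```python
from itertools import combinations

n = 5
# entry (r, j) of the 2 x n matrix above is the monomial u^(n-r-j) * v^(j+r)
def mono(r, j):
    return (n - r - j, j + r)       # exponent tuple (deg u, deg v)

def prod(m1, m2):                   # multiplying monomials adds exponents
    return (m1[0] + m2[0], m1[1] + m2[1])

# the 2x2 minor on columns i < j vanishes because both cross products
# equal u^(2n-1-i-j) * v^(i+j+1)
for i, j in combinations(range(n), 2):
    assert prod(mono(0, i), mono(1, j)) == prod(mono(0, j), mono(1, i))
```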
A $3\times n$ Hankel determinantal ring is the homogeneous coordinate ring of the secant variety of the rational normal curve $C_{n+1}$ in $\mathbb{P}^{n+1}$; it is isomorphic to the subring of the polynomial ring $\mathbb{F}[u_1,u_2,v_1,v_2]$, where the Hankel matrix maps entrywise to the matrix
\[
\begin{pmatrix}
u_1^{n+1}+u_2^{n+1} & u_1^nv_1^{\phantom.}+u_2^nv_2^{\phantom.} & u_1^{n-1}v_1^2+u_2^{n-1}v_2^2 & \cdots & u_1^2v_1^{n-1}+u_2^2v_2^{n-1}
\\[0.4em]
u_1^nv_1^{\phantom.}+u_2^nv_2^{\phantom.} & u_1^{n-1}v_1^2+u_2^{n-1}v_2^2 & \cdots & \cdots & u_1^{\phantom.}v_1^n+u_2^{\phantom.}v_2^n
\\[0.4em]
u_1^{n-1}v_1^2+u_2^{n-1}v_2^2 & \cdots & \cdots & \cdots & v_1^{n+1}+v_2^{n+1}
\end{pmatrix}.
\]
More generally, the Hankel determinantal ring $R\colonequals\mathbb{F}[x_1,\dots,x_{n+t-1}]/I_t(H)$ is the homogeneous coordinate ring of the order $t-2$ secant variety of the rational normal curve~$C_{n+t-2}$ in $\mathbb{P}^{n+t-2}$, see for example~\cite[Section~4]{Eisenbud}. Specifically, we claim that $R$ is isomorphic to the $\mathbb{F}$-subalgebra of the polynomial ring
\[
S\colonequals\mathbb{F}[u_1,\dots,u_{t-1},v_1,\dots,v_{t-1}]
\]
generated by the elements
\begin{equation}
\label{equation:secant}
h_i\ \colonequals\ u_1^{n+t-2-i}v_1^i\ +\ u_2^{n+t-2-i}v_2^i\ +\ \cdots\ +\ u_{t-1}^{n+t-2-i}v_{t-1}^i,
\quad\text{ for }\ 0\leqslant i\leqslant n+t-2.
\end{equation}
To see this, consider the $\mathbb{F}$-algebra homomorphism $\varphi\colon\mathbb{F}[x_1,\dots,x_{n+t-1}]\longrightarrow S$ defined by $\varphi(x_i)=h_{i-1}$ for each $i$. Note that $\varphi$ maps the Hankel matrix of indeterminates $H$ to a matrix $M$ that is Hankel in the elements~$h_i$. As $M$ is the sum of $t-1$ matrices of the form
\[
\begin{pmatrix}
u^{n+t-2} & u^{n+t-3}v & u^{n+t-4}v^2 & \cdots & u^{t-1}v^{n-1}\\
u^{n+t-3}v & u^{n+t-4}v^2 & \cdots & \cdots & u^{t-2}v^n\\
u^{n+t-4}v^2 & \cdots & \cdots & \cdots & u^{t-3}v^{n+1}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
u^{n-1}v^{t-1} & \cdots & \cdots & \cdots & v^{n-t+2}
\end{pmatrix},
\]
each having rank~$1$, it follows that the rank of $M$ is at most $t-1$, i.e., that $I_t(M)=0$. Hence~$\varphi$ induces a homomorphism $\widetilde{\varphi}\colon R\longrightarrow S$. Since $R$ is a domain of dimension $2t-2$, which is also the dimension of $S$, it follows that $\widetilde{\varphi}$ is injective.
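That $I_t(M)=0$ can be spot-checked in exact arithmetic: since $M$ is a sum of $t-1$ rank-one matrices, every size $t$ minor vanishes identically, and in particular evaluates to zero at random rational points. A sketch for $t=3$, $n=4$ (checking the consecutive-column minors only; illustrative, not a proof):

```python
from fractions import Fraction
from random import randint, seed

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h2, i = M[2]
    return a * (e * i - f * h2) - b * (d * i - f * g) + c * (d * h2 - e * g)

seed(3)
t, n = 3, 4                          # the 3 x 4 Hankel case
for _ in range(20):
    u1, u2, v1, v2 = (Fraction(randint(-9, 9)) for _ in range(4))
    h = [u1 ** (n + t - 2 - i) * v1 ** i + u2 ** (n + t - 2 - i) * v2 ** i
         for i in range(n + t - 1)]
    # M is a sum of two rank-one matrices, so every 3x3 minor vanishes
    for c0 in range(n - t + 1):
        M = [[h[r + c0 + c] for c in range(t)] for r in range(t)]
        assert det3(M) == 0
```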
\begin{theorem}
\label{theorem:ratsing}
Let $R=\mathbb{F}[x_1,\dots,x_{n+t-1}]/I_t(H)$, where $\mathbb{F}$ is a field, and $H$ is a $t\times n$ Hankel matrix. If $\mathbb{F}$ has characteristic zero, then $R$ has rational singularities. If $\mathbb{F}$ is a field of positive characteristic $p$, with $p\geqslant t$, then $R$ is $F$-rational.
\end{theorem}
It follows that if $\mathbb{F}$ has characteristic $p\geqslant t$, then the ring $R$ has rational singularities in the sense of \cite[Definition~1.3]{Kovacs}; see \cite[Corollary~1.12]{Kovacs}.
\begin{proof}
It suffices to prove the positive characteristic assertion in the theorem: it then follows that $R$ is of $F$-rational type for $\mathbb{F}$ of characteristic zero, and then by \cite[Theorem~4.3]{Smith:ratsing} that $R$ has rational singularities.
Let $\mathbb{F}$ be a field of characteristic $p\geqslant t$, and assume $t\geqslant 2$. Using \cite[Theorem~4.7]{HH:JAG} and the preceding remark in that paper, it suffices to prove that the ideal generated by one choice of a homogeneous system of parameters for~$R$ is tightly closed. Set
\[
S\colonequals\mathbb{F}[u_1,\dots,u_{t-1},v_1,\dots,v_{t-1}],
\]
i.e., $S$ is a polynomial ring in $2t-2$ indeterminates, and identify $R$ with the subring generated by the elements $h_0,\dots,h_{n+t-2}$ as in~\eqref{equation:secant}. The elements
\[
h_0,\dots,h_{t-2},h_n,\dots,h_{n+t-2}
\]
form a homogeneous system of parameters for~$R$. Let $\mathfrak{a}$ be the ideal of $R$ generated by these elements; it suffices to show that~$\mathfrak{a}$ is tightly closed. Note that $h_i$ belongs to the ideal~$(u_1^n,u_2^n,\dots,u_{t-1}^n)S$ for $0\leqslant i\leqslant t-2$, and to~$(v_1^n,v_2^n,\dots,v_{t-1}^n)S$ for $n\leqslant i\leqslant n+t-2$, so
\[
\mathfrak{a} S\ \subseteq\ (u_1^n,u_2^n,\dots,u_{t-1}^n,v_1^n,v_2^n,\dots,v_{t-1}^n)S.
\]
The socle of $R/\mathfrak{a}$ is the vector space spanned by the images of the elements
\[
h_{i_1}h_{i_2}\cdots h_{i_{t-1}}
\quad\text{ where }\ t-1\leqslant i_1\leqslant i_2\leqslant\cdots\leqslant i_{t-1}\leqslant n-1.
\]
Suppose that a linear combination of the above elements, say
\[
r\colonequals\sum\lambda_{i_1 i_2 \cdots i_{t-1}} h_{i_1}h_{i_2}\cdots h_{i_{t-1}}
\quad\text{ where }\ \lambda_{i_1 i_2 \cdots i_{t-1}}\in\mathbb{F},
\]
belongs to $\mathfrak{a}^*$, i.e., to the tight closure of $\mathfrak{a}$ in $R$. Since $R\subset S$ is an inclusion of domains, it then follows from the definition of tight closure that $r\in (\mathfrak{a} S)^*$. But $(\mathfrak{a} S)^*=\mathfrak{a} S$ since $S$ is regular, implying that $r\in\mathfrak{a} S$, and hence that
\[
r\ \in\ (u_1^n,u_2^n,\dots,u_{t-1}^n,v_1^n,v_2^n,\dots,v_{t-1}^n)S.
\]
We claim that this occurs only when each coefficient $\lambda_{i_1 i_2 \cdots i_{t-1}}$ equals $0$; it then follows that~$r=0$, i.e., that $\mathfrak{a}$ is tightly closed, as desired.
We first illustrate the proof of the claim when $t=3$. In this case, the ring $R$ may be identified with the $\mathbb{F}$-subalgebra of $S=\mathbb{F}[u_1,u_2,v_1,v_2]$ generated by the elements
\[
h_i\ =\ u_1^{n+1-i}v_1^i + u_2^{n+1-i}v_2^i
\quad\text{ where }\ 0\leqslant i\leqslant n+1.
\]
Suppose
\[
r\ =\sum_{2\leqslant i_1\leqslant i_2\leqslant n-1} \lambda_{i_1i_2} h_{i_1}h_{i_2}\ \in\ (u_1^n,u_2^n,v_1^n,v_2^n)S.
\]
Fix $k_1,k_2$ with $2\leqslant k_1\leqslant k_2\leqslant n-1$, and consider the coefficient of $u_1^{n+1-k_1}v_1^{k_1} u_2^{n+1-k_2}v_2^{k_2}$ in the expression above, i.e., in
\[
\sum\lambda_{i_1i_2} h_{i_1}h_{i_2}\ =\ \sum\lambda_{i_1i_2} (u_1^{n+1-i_1}v_1^{i_1} + u_2^{n+1-i_1}v_2^{i_1}) (u_1^{n+1-i_2}v_1^{i_2} + u_2^{n+1-i_2}v_2^{i_2}).
\]
This coefficient is $\lambda_{k_1k_2}$ if $k_1<k_2$, and it equals $2\lambda_{k_1k_1}$ if $k_1=k_2$. Since the characteristic of~$\mathbb{F}$ is~$p\geqslant 3$, and $r\in (u_1^n,u_2^n,v_1^n,v_2^n)S$, it follows that each coefficient must be $0$ as claimed.
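The coefficient extraction in the $t=3$ case is elementary enough to verify mechanically; the sketch below expands $h_{k_1}h_{k_2}$ as exponent tuples and confirms that the coefficient is $1$ for $k_1<k_2$ and $2$ for $k_1=k_2$ (here with $n=5$, an arbitrary choice):

```python
from collections import Counter
from itertools import product

n = 5       # h_i = u1^(n+1-i) v1^i + u2^(n+1-i) v2^i, the t = 3 case

def h(i):   # the two monomials of h_i as exponent tuples (u1, v1, u2, v2)
    return [(n + 1 - i, i, 0, 0), (0, 0, n + 1 - i, i)]

def coeff(k1, k2, target):
    c = Counter()
    for m1, m2 in product(h(k1), h(k2)):
        c[tuple(a + b for a, b in zip(m1, m2))] += 1
    return c[target]

# coefficient of u1^(n+1-k1) v1^k1 u2^(n+1-k2) v2^k2 in h_{k1} * h_{k2}
for k1 in range(2, n):
    for k2 in range(k1, n):
        target = (n + 1 - k1, k1, n + 1 - k2, k2)
        assert coeff(k1, k2, target) == (2 if k1 == k2 else 1)
```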
We now turn to the general case: suppose
\[
r\ =\sum\lambda_{i_1 i_2 \cdots i_{t-1}} h_{i_1}h_{i_2}\cdots h_{i_{t-1}}\ \in\ (u_1^n,u_2^n,\dots,u_{t-1}^n,v_1^n,v_2^n,\dots,v_{t-1}^n)S,
\]
where the sum is over indices with $t-1\leqslant i_1\leqslant i_2\leqslant\cdots\leqslant i_{t-1}\leqslant n-1$. Let $k_1,\dots,k_{t-1}$ be integers with
\[
t-1\leqslant k_1\leqslant k_2\leqslant\cdots\leqslant k_{t-1}\leqslant n-1.
\]
The coefficient of
\[
u_1^{n+t-2-k_1}v_1^{k_1} u_2^{n+t-2-k_2}v_2^{k_2} \cdots u_{t-1}^{n+t-2-k_{t-1}}v_{t-1}^{k_{t-1}}
\]
in $r$ is $c\lambda_{k_1 k_2 \cdots k_{t-1}}$ where $c$ is a product of positive integers, each less than $t$. Hence $c\neq0$ in~$\mathbb{F}$, and so it follows that each coefficient is $0$.
\end{proof}
While the description in terms of higher secant varieties shows that every Hankel determinantal ring is a subring of a polynomial ring, it is not in general a pure subring of that polynomial ring, as we show next; recall that a ring homomorphism $R\longrightarrow S$ is \emph{pure} if
\[
R\otimes_RM\longrightarrow S\otimes_RM
\]
is injective for each $R$-module $M$.
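For instance (a standard fact, recorded here only for orientation): if the inclusion $R\subseteq S$ splits, i.e., if there is an $R$-linear map $\rho\colon S\longrightarrow R$ restricting to the identity on $R$, then $R\subseteq S$ is pure, since for each $R$-module $M$ the composition
\[
M\ =\ R\otimes_RM\ \longrightarrow\ S\otimes_RM\ \xrightarrow{\ \rho\otimes 1\ }\ R\otimes_RM\ =\ M
\]
is the identity map, so the first map is injective.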
\begin{proposition}
\label{proposition:not:pure:subrings}
Let $R$ be a $t\times n$ Hankel determinantal ring, regarded as the $\mathbb{F}$-subalgebra of the polynomial ring $S=\mathbb{F}[u_1,\dots,u_{t-1},v_1,\dots,v_{t-1}]$, generated by the elements $h_i$ as in~\eqref{equation:secant}. If $t\geqslant 3$, then $R$ is not a pure subring of~$S$.
\end{proposition}
\begin{proof}
Let $\mathfrak{m}_R$ denote the homogeneous maximal ideal of $R$. The expansion of this ideal to $S$ is contained in the height $t$ ideal
\[
(u_1-v_1,\ \dots,\ u_{t-1}-v_{t-1},\ v_1^{n+t-2}+\cdots+v_{t-1}^{n+t-2})S.
\]
Since $\operatorname{height}\,\mathfrak{m}_RS\leqslant t<2t-2=\dim S$, the Hartshorne-Lichtenbaum Vanishing Theorem, for example \cite[Theorem~14.1]{24hours}, implies that
\[
H^{2t-2}_{\mathfrak{m}_R}(S)\ =\ H^{2t-2}_{\mathfrak{m}_RS}(S)\ =\ 0.
\]
If $R\longrightarrow S$ is pure, the injectivity of $H^{2t-2}_{\mathfrak{m}_R}(R)\longrightarrow H^{2t-2}_{\mathfrak{m}_R}(S)$ implies that $H^{2t-2}_{\mathfrak{m}_R}(R)=0$, which is a contradiction since $\dim R=2t-2$.
\end{proof}
\begin{remark}
\label{remark:not:pure:subrings}
Being a pure subring of a polynomial ring is a stronger property than having rational singularities, or even having $F$-regular type; the hypersurface in~\cite[Theorem~5.1]{Singh-Swanson} has $F$-regular type, but is not a pure subring of a polynomial ring.
\end{remark}
\begin{question}
\label{question:pure:subrings}
Is every Hankel determinantal ring a pure subring of a polynomial ring?
\end{question}
\section{The divisor class group}
Consider the Hankel determinantal ring $R=\mathbb{F}[x_1,\dots,x_{n+t-1}]/I_t(H)$, where $\mathbb{F}$ is a field. To avoid some trivialities, we assume throughout this section that $n\geqslant t\geqslant 2$. Set $\mathfrak{p}$ to be the ideal of $R$ generated by the maximal minors of the first $t-1$ rows of $H$, i.e.,
\begin{equation}
\label{equation:p}
\mathfrak{p}\colonequals I_{t-1}\begin{pmatrix}
x_1 & x_2 & x_3 & \cdots & x_n\\
x_2 & x_3 & \cdots & \cdots & x_{n+1}\\
x_3 & \cdots & \cdots & \cdots & x_{n+2}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
x_{t-1} & \cdots & \cdots & \cdots & x_{n+t-2}
\end{pmatrix}.
\end{equation}
The ring $R/\mathfrak{p}$ may be identified with the polynomial ring in the indeterminate $x_{n+t-1}$ over a size $(t-1)\times n$ Hankel determinantal ring; it follows that $R/\mathfrak{p}$ is an integral domain of dimension $2t-3$, and hence that $\mathfrak{p}$ is a prime ideal of height $1$.
For each integer $k$ with $1\leqslant k\leqslant n-t+2$, set $\mathfrak{p}^{\langle k\rangle}$ to be the ideal of $R$ as below,
\[
\mathfrak{p}^{\langle k\rangle}\colonequals I_{t-1}\begin{pmatrix}
x_1 & x_2 & x_3 & \cdots & x_{n-k+1}\\
x_2 & x_3 & \cdots & \cdots & x_{n-k+2}\\
x_3 & \cdots & \cdots & \cdots & x_{n-k+3}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
x_{t-1} & \cdots & \cdots & \cdots & x_{n-k+t-1}
\end{pmatrix}.
\]
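To illustrate with the smallest case (a direct check from the definitions, not needed in the sequel): when $t=2$, the matrix displayed above has a single row, so
\[
\mathfrak{p}^{\langle k\rangle}\ =\ (x_1,\ x_2,\ \dots,\ x_{n-k+1})R
\quad\text{ for }\ 1\leqslant k\leqslant n;
\]
in particular $\mathfrak{p}=(x_1,\dots,x_n)R$, and $\mathfrak{p}^{\langle n\rangle}=x_1R$ is principal.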
Note that $\mathfrak{p}^{\langle 1\rangle}=\mathfrak{p}$, and that the ideal $\mathfrak{p}^{\langle n-t+2\rangle}$ is principal. With this notation, we prove:
\begin{theorem}
\label{theorem:class:group}
Consider the Hankel determinantal ring $R\colonequals\mathbb{F}[x_1,\dots,x_{n+t-1}]/I_t(H)$, for~$\mathbb{F}$ a field, and $t\geqslant 2$. Then the divisor class group of $R$ is cyclic of order $n-t+2$, generated by the ideal $\mathfrak{p}$ as in~\eqref{equation:p}. The symbolic powers of $\mathfrak{p}$ are
\[
\mathfrak{p}^{(k)}=\mathfrak{p}^{\langle k\rangle}
\quad\text{ for }\ 1\leqslant k\leqslant n-t+2.
\]
Moreover, each of these is a maximal Cohen-Macaulay $R$-module.
\end{theorem}
We need a number of preliminary results.
\begin{lemma}
\label{lemma:identity}
Let $Y$ be an $m\times n$ matrix with entries in a commutative ring. Assume that $Y$ has rank less than $t$.
\begin{enumerate}[\ \rm(1)]
\item For every choice of row and column indices, one has
\begin{multline*}
[a_1\ \dots\ a_{t-1} \mid b_1\ \dots\ b_{t-1}] \times [c_1\ \dots\ c_{t-1} \mid d_1 \ \dots\ d_{t-1}] \\
=\
[a_1\ \dots\ a_{t-1} \mid d_1\ \dots\ d_{t-1}] \times [c_1\ \dots\ c_{t-1} \mid b_1 \ \dots\ b_{t-1}].
\end{multline*}
\item Let $Y(a,b)$ denote the submatrix of $Y$ with row indices $\leqslant a$, and column indices $\leqslant b$. Then, for all $a<m$ and $b<n$, one has
\[
I_{t-1}(Y(a,b+1))\ I_{t-1}(Y(a+1,b))\ =\ I_{t-1}(Y(a,b))\ I_{t-1}(Y(a+1,b+1)).
\]
\end{enumerate}
\end{lemma}
\begin{proof}
First note that (2) follows immediately from (1). To prove (1), we may assume right away that the underlying ring is $B\colonequals\mathbb{Z}[X]/I_t(X)$, with $X$ an $m\times n$ matrix of indeterminates, and that $Y$ is the image of $X$ in $B$. Since $B$ is a domain, it suffices to verify the displayed identity in the fraction field $\mathbb{K}$ of $B$. Consider the linear map
\[
\varphi\colon\mathbb{K}^{m}\longrightarrow\mathbb{K}^{n}
\]
given by the image of $Y$. The map $\varphi$ has rank less than $t$, so the exterior power
\[
\Lambda^{t-1}\varphi\colon\Lambda^{t-1} \mathbb{K}^{m}\longrightarrow\Lambda^{t-1} \mathbb{K}^{n}
\]
is a linear map of rank at most $1$. For rows $i_1,\dots,i_{t-1}$ and columns $j_1,\dots,j_{t-1}$ of $\varphi$, the corresponding matrix entry of $\Lambda^{t-1} \varphi$ is the determinant
\[
{[i_1\ \dots\ i_{t-1} \mid j_1\ \dots\ j_{t-1}]}_Y.
\]
The required identity is now immediate from the fact that the size $2$ minors of the matrix for $\Lambda^{t-1}\varphi$ are zero.
\end{proof}
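As a sanity check in the case $t=2$ (immediate from the definitions): the size $1$ minors are the entries of $Y$, and the identity in (1) reads $y_{a_1b_1}\,y_{c_1d_1}=y_{a_1d_1}\,y_{c_1b_1}$, i.e., the vanishing of the size $2$ minor of $Y$ with rows $a_1,c_1$ and columns $b_1,d_1$, which is exactly the hypothesis that $Y$ has rank less than $2$.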
\begin{lemma}
\label{lemma:valuation}
Let $\mathfrak{p}$ be as in~\eqref{equation:p}, and let $v$ denote the valuation of the discrete valuation ring~$R_\mathfrak{p}$. Then, for integers $1\leqslant i_1<i_2<\cdots<i_{t-1}\leqslant n$, the minors of $H$ satisfy
\[
v([1\ \dots\ t-1 \mid i_1\ \dots\ i_{t-1}])\ =\ n+1-i_{t-1}.
\]
Consequently for $\mathfrak{p}^{\langle k\rangle}$ as defined earlier, and $k$ with $1\leqslant k\leqslant n-t+2$, one has
\[
\mathfrak{p}^{\langle k\rangle}\subseteq\mathfrak{p}^{(k)}
\quad\text{and}\quad
\mathfrak{p}^{\langle k\rangle}R_\mathfrak{p}=\mathfrak{p}^{(k)}R_\mathfrak{p}.
\]
\end{lemma}
\begin{proof}
Set $\pi\colonequals [1\ \dots\ t-1\mid n-t+2\ \dots\ n]$. We will prove inductively that
\begin{equation}
\label{equation:val1}
v([1\ \dots\ t-1 \mid i_1\ \dots\ i_{t-1}])\ =\ (n+1-i_{t-1})v(\pi),
\end{equation}
with the base case for the induction being $i_{t-1}=n$. By Lemma~\ref{lemma:identity}~(1) one has
\begin{equation}
\label{equation:val2}
[1\ \dots\ t-1 \mid i_1\ \dots\ i_{t-1}] \times [2\ \dots\ t \mid n-t+2\ \dots\ n]
\ =\ \pi\times [2\ \dots\ t \mid i_1\ \dots\ i_{t-1}].
\end{equation}
We work in the ring $R_\mathfrak{p}$, where the minor
\[
[2\ \dots\ t \mid n-t+2\ \dots\ n]
\]
is a unit. If~$i_{t-1}=n$, then $[2\ \dots\ t \mid i_1\ \dots\ i_{t-1}]$ is a unit in $R_\mathfrak{p}$ as well, and it follows that
\[
v([1\ \dots\ t-1 \mid i_1\ \dots\ i_{t-1}])\ =\ v(\pi),
\]
which proves the base case. For the inductive step, assume that $i_{t-1}<n$ and that~\eqref{equation:val1} holds for larger values of $i_{t-1}$. Since
\[
[2\ \dots\ t \mid i_1\ \dots\ i_{t-1}]\ =\ [1\ \dots\ t-1 \mid i_1+1\ \dots\ i_{t-1}+1],
\]
the inductive hypothesis gives
\[
v([2\ \dots\ t \mid i_1\ \dots\ i_{t-1}])\ =\ (n-i_{t-1})v(\pi).
\]
Combining this with~\eqref{equation:val2}, it follows that
\[
v([1\ \dots\ t-1 \mid i_1\ \dots\ i_{t-1}])\ =\ v(\pi)+(n-i_{t-1})v(\pi)\ =\ (n+1-i_{t-1})v(\pi),
\]
which completes the proof of~\eqref{equation:val1}.
Since the valuation of each minor generating the ideal $\mathfrak{p}$ is a positive integer multiple of~$v(\pi)$, it follows that $\pi$ generates the maximal ideal of $R_\mathfrak{p}$, and that $v(\pi)=1$. Lastly, note that the minors that generate the ideal~$\mathfrak{p}^{\langle k\rangle}$ are precisely those with valuation at least $k$.
\end{proof}
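In the case $t=2$ these valuations can be seen directly (a routine check): the minors in question are $[1\mid i]=x_i$, so $v(x_i)=n+1-i$, the element $\pi=x_n$ generates the maximal ideal of $R_\mathfrak{p}$, and the generators $x_1,\dots,x_{n-k+1}$ of $\mathfrak{p}^{\langle k\rangle}$ are precisely the $x_i$ with
\[
v(x_i)\ =\ n+1-i\ \geqslant\ k.
\]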
The following is a slight modification of \cite[Lemma~4]{JWatanabe}, adapted to our notation, and with a shorter proof.
\begin{lemma}
\label{lemma:watanabe}
Let $R$ be a $t\times n$ Hankel determinantal ring over a field $\mathbb{F}$. Set
\[
\Delta\colonequals[1\ \dots\ t-1 \mid 1\ \dots\ t-1],
\]
viewed as an element of $R$. Then:
\begin{enumerate}[\ \rm(1)]
\item the ideal $\Delta R$ has radical $\mathfrak{p}$, for $\mathfrak{p}$ as in~\eqref{equation:p},
\item $R_\Delta = {\mathbb{F}[x_1,\ \dots,\ x_{2t-2}]}_\Delta$, and
\item the elements $x_1,\ \dots,\ x_{2t-2}$ of $R$ are algebraically independent over~$\mathbb{F}$.
\end{enumerate}
\end{lemma}
\begin{proof} (1) In the notation of Lemma~\ref{lemma:identity}, the ideal $\mathfrak{p}^{\langle k\rangle}$ is $I_{t-1}(Y(t-1,n-k+1))$, where~$Y$ is the image of the Hankel matrix $H$ in $R$. Since
\[
I_{t-1}(Y(t-1,n-k+1))\ =\ I_{t-1}(Y(t,n-k)),
\]
Lemma~\ref{lemma:identity}~(2) gives
\begin{multline*}
I_{t-1}(Y(t-1,n-k+1))^2\ =\ I_{t-1}(Y(t-1,n-k))\ I_{t-1}(Y(t,n-k+1))\\
\subset \ I_{t-1}(Y(t-1,n-k))
\end{multline*}
i.e.,
\[
(\mathfrak{p}^{\langle k\rangle})^2\ \subset\ \mathfrak{p}^{\langle k+1\rangle}.
\]
Since $\mathfrak{p}^{\langle 1\rangle}=\mathfrak{p}$ and $\mathfrak{p}^{\langle n-t+2\rangle}=\Delta R$, we are done.
(2) For each $a\geqslant t$, we have $[1\ \dots\ t \mid 1\ \dots\ t-1 \ a]=0$ in $R$, so
\[
x_{t+a-1}\Delta\ \in\ \mathbb{F}[x_1, \dots, x_{t+a-2}].
\]
Since $\Delta\in\mathbb{F}[x_1, \dots, x_{t+a-2}]$, it follows that
\[
{\mathbb{F}[x_1, \dots, x_{t+a-1}]}_\Delta\ =\ {\mathbb{F}[x_1, \dots, x_{t+a-2}]}_\Delta.
\]
Iterating the above display, one gets the desired result.
(3) By (2), the fraction field of $R$ equals $\mathbb{F}(x_1,\dots,x_{2t-2})$, and hence has transcendence degree $\dim R=2t-2$ over $\mathbb{F}$; it follows that the elements $x_1,\dots,x_{2t-2}$ are algebraically independent over~$\mathbb{F}$.
\end{proof}
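In the case $t=2$ the key relation in the proof of (2) is immediate from the size $2$ minors: one has $\Delta=x_1$, and $[1\ 2\mid 1\ a]=0$ reads $x_{a+1}x_1=x_2x_a$ for each $a\geqslant 2$, so inverting $x_1$ indeed yields $R_{x_1}={\mathbb{F}[x_1,x_2]}_{x_1}$.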
\begin{lemma}
\label{lemma:cm}
For each $k$ with $1\leqslant k\leqslant n-t+2$, the ring $R/\mathfrak{p}^{\langle k\rangle}$ is Cohen-Macaulay. Hence the ideal $\mathfrak{p}^{\langle k\rangle}$ is a maximal Cohen-Macaulay $R$-module; in particular, it is reflexive.
\end{lemma}
\begin{proof}
Since $R/\mathfrak{p}$ is a polynomial extension of a $(t-1)\times n$ Hankel determinantal ring, its multiplicity is
\[
e(R/\mathfrak{p})\ =\ \binom{n}{t-2}.
\]
Fix $k$ with $1\leqslant k\leqslant n-t+2$. Since $\Delta \in\mathfrak{p}^{\langle k\rangle}$, it follows from Lemma~\ref{lemma:watanabe} that $\mathfrak{p}^{\langle k\rangle}$ has radical~$\mathfrak{p}$. The associativity formula for multiplicities, \cite[Corollary~4.7.8]{Bruns-Herzog}, then gives the first equality in
\[
e(R/\mathfrak{p}^{\langle k\rangle})\ =\ \ell\left(\frac{R_\mathfrak{p}}{\mathfrak{p}^{\langle k\rangle}R_\mathfrak{p}}\right)e(R/\mathfrak{p})\ =\ k\binom{n}{t-2},
\]
while the second equality follows from Lemma~\ref{lemma:valuation}.
Let $A$ be the polynomial ring $\mathbb{F}[x_1,\dots,x_{n+t-1}]$, and let $P_k$ be the inverse image of $\mathfrak{p}^{\langle k\rangle}$ under the canonical surjection $A\longrightarrow R$. The images of the indeterminates
\[
{\boldsymbol{x}}\colonequals x_1, \dots, x_{t-2}, x_{n+1}, \dots, x_{n+t-1}
\]
are a homogeneous system of parameters for~$A/P_k = R/\mathfrak{p}^{\langle k\rangle}$. Set
\[
J\colonequals P_k+({\boldsymbol{x}})A.
\]
Using, for example, \cite[Corollary~4.7.11]{Bruns-Herzog}, one has
\begin{equation}
\label{equation:inequalities}
\ell(A/J)\ \geqslant\ e\big({\boldsymbol{x}},R/\mathfrak{p}^{\langle k\rangle}\big)\ \geqslant\ e\big(R/\mathfrak{p}^{\langle k\rangle}\big)\ =\ k\binom{n}{t-2}.
\end{equation}
We claim that
\[
\ell(A/J)\ \leqslant\ k\binom{n}{t-2}.
\]
Assuming the claim, all the terms in~\eqref{equation:inequalities} are equal, and then $R/\mathfrak{p}^{\langle k\rangle}$ is Cohen-Macaulay using, again, \cite[Corollary~4.7.11]{Bruns-Herzog}.
To prove the claim, consider the degrevlex order on $A$ induced by
\[
x_1>x_2>\cdots>x_{n+t-1}.
\]
Then the initial ideal of $J$ contains the ideal
\[
({\boldsymbol{x}})\ +\ (x_{t-1}, \dots, x_{n-k+1})^{t-1}\ +\ (x_t, \dots, x_n)^t,
\]
so it suffices to verify that the length of
\[
\frac{\mathbb{F}[x_{t-1},\dots,x_n]}{(x_{t-1}, \dots, x_{n-k+1})^{t-1} + (x_t, \dots, x_n)^t}
\]
is at most
\[
k\binom{n}{t-2}.
\]
This is immediate from the following lemma.
\end{proof}
\begin{lemma}
Let $\mathbb{F}$ be a field, and consider integers $t\geqslant 2$ and $1\leqslant r\leqslant s$. Then
\[
\ell\left(\frac{\mathbb{F}[y_1,\dots,y_s]}{(y_1, \dots, y_r)^{t-1} + (y_2, \dots, y_s)^t}\right)
\ =\ (s-r+1)\binom{s+t-2}{t-2}.
\]
\end{lemma}
\begin{proof}
When $r=1$, the length in question is that of
\[
\frac{\mathbb{F}[y_1]}{(y_1^{t-1})}\otimes_{\mathbb{F}}\frac{\mathbb{F}[y_2,\dots,y_s]}{(y_2, \dots, y_s)^t},
\]
which equals
\[
(t-1)\binom{s-1+t-1}{t-1}\ =\ s\binom{s+t-2}{t-2},
\]
so the asserted formula holds. Assume for the rest that $r\geqslant 2$.
The case when $t=2$ is readily checked as well; we proceed by induction on $t$ and $s$. Set
\[
V\colonequals\mathbb{F}[y_1,\dots,y_s]
\quad\text{and}\quad
I\colonequals (y_1, \dots, y_r)^{t-1} + (y_2, \dots, y_s)^t,
\]
and consider the exact sequence
\[
0\longrightarrow V/(I:y_2) \longrightarrow V/I \longrightarrow V/(I+y_2V) \longrightarrow 0.
\]
Since $(I:y_2) = (y_1, \dots, y_r)^{t-2} + (y_2, \dots, y_s)^{t-1}$, the inductive hypothesis gives
\begin{align*}
\ell(V/I) &\ =\ \ell(V/(I:y_2)) + \ell(V/(I+y_2V)) \\
&\ =\ (s-r+1)\binom{s+(t-1)-2}{(t-1)-2} + ((s-1)-(r-1)+1)\binom{(s-1)+t-2}{t-2} \\
&\ =\ (s-r+1)\binom{s+t-2}{t-2}.\qedhere
\end{align*}
\end{proof}
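As a quick numerical check of the formula (routine verifications): for $s=r=1$ the ideal is $(y_1^{t-1})$, of colength $t-1=(1-1+1)\binom{t-1}{t-2}$; and for $t=2$ and $r=s$ the ideal is $(y_1,\dots,y_s)+(y_2,\dots,y_s)^2=(y_1,\dots,y_s)$, of colength $1=(s-s+1)\binom{s}{0}$.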
\begin{proof}[Proof of Theorem~\ref{theorem:class:group}]
By Lemma~\ref{lemma:watanabe}, the ring $R_\Delta$ is a localization of a polynomial ring, and hence a UFD. Nagata's theorem, e.g.,~\cite[page~315]{Bruns-Herzog}, then implies that $\operatorname{Cl}(R)$ is generated by the height $1$ prime ideals of $R$ that contain $\Delta$, namely by the ideal $\mathfrak{p}$.
Fix $k$ with $1\leqslant k\leqslant n-t+2$. Then $\mathfrak{p}^{\langle k\rangle}$ has radical $\mathfrak{p}$, and is unmixed by Lemma~\ref{lemma:cm}; hence $\mathfrak{p}^{\langle k\rangle}$ is $\mathfrak{p}$-primary, and therefore equals $\mathfrak{p}^{(i)}$ for some $i$, since the nonzero ideals of the discrete valuation ring $R_\mathfrak{p}$ are the powers of its maximal ideal. The integer~$i$ can be computed after localization at $\mathfrak{p}$, and then Lemma~\ref{lemma:valuation} implies that $\mathfrak{p}^{\langle k\rangle}=\mathfrak{p}^{(k)}$, as claimed. Since the ideal $\mathfrak{p}^{\langle k\rangle}$ is principal precisely when $k=n-t+2$, the class of $\mathfrak{p}$ has order exactly $n-t+2$ in the divisor class group. Lastly, each $\mathfrak{p}^{\langle k\rangle}$ is a maximal Cohen-Macaulay module by Lemma~\ref{lemma:cm}.
\end{proof}
\begin{remark}
\label{remark:mcm}
Let $Y$ be a~$t\times n$ matrix of indeterminates over a field $\mathbb{F}$, and consider the generic determinantal ring
\[
B\colonequals\mathbb{F}[Y]/I_t(Y).
\]
Set $P$ to be the prime ideal of $B$ generated by the size $t-1$ minors of the first $t-1$ rows of~$Y$, and $Q$ to be the prime generated by the size $t-1$ minors of the first $t-1$ columns. By \cite[Example~9.27(d)]{Bruns-Vetter}, the following are maximal Cohen-Macaulay $B$-modules:
\[
B,\ P,\ Q,\ Q^2,\ \dots,\ Q^{n-t+1},
\]
and these are, in fact, the only rank one maximal Cohen-Macaulay $B$-modules up to isomorphism. The canonical module of $B$ is isomorphic to $Q^{n-t}$; see \cite[Theorem~8.8]{Bruns-Vetter}.
Since the Hankel determinantal ring $R$ may be obtained as the specialization of $B$ modulo a regular sequence, it follows that the images in $R$ of the modules displayed above are Cohen-Macaulay $R$-modules. Note that~$\mathfrak{p}=PR$, and set~$\mathfrak{q}\colonequals QR$, in which case
\[
R,\ \mathfrak{p},\ \mathfrak{q},\ \mathfrak{q}^2,\ \dots,\ \mathfrak{q}^{n-t+1}
\]
are Cohen-Macaulay $R$-modules. Due to the symmetry in a Hankel matrix, one has
\[
\mathfrak{q}\ =\ \mathfrak{p}^{\langle n-t+1\rangle}\ =\ \mathfrak{p}^{(n-t+1)}.
\]
Fix $i$ with $1\leqslant i\leqslant n-t+1$. Since $\mathfrak{q}^i$ is a maximal Cohen-Macaulay $R$-module, and hence a divisorial ideal, it follows that
\[
\mathfrak{q}^i\ =\ (\mathfrak{p}^{(n-t+1)})^i\ =\ \mathfrak{p}^{(i(n-t+1))}\ \cong\ \mathfrak{p}^{(n-t+2-i)}.
\]
In particular, $\mathfrak{q}^{n-t+1}\cong\mathfrak{p}$, and the $n-t+3$ rank one maximal Cohen-Macaulay $B$-modules specialize to the $n-t+2$ elements of the divisor class group of $R$.
The canonical module $Q^{n-t}$ of $B$ specializes to the canonical module
\[
\mathfrak{q}^{n-t}\ \cong\ \mathfrak{p}^{(2)}
\]
of $R$. Since the $a$-invariant of the ring $R$ is $1-t$, and $\mathfrak{p}^{(2)}$ is generated in degree $t-1$, it follows that the \emph{graded} canonical module of $R$ is
\[
\omega_R\colonequals\mathfrak{p}^{(2)}.
\]
Note that the number of generators of $\omega_R$ as an $R$-module is
\[
\binom{n-1}{t-1}.
\]
Since $\omega_R$ is a reflexive $R$-module of rank one, it corresponds to an element $[\omega_R]$ of $\operatorname{Cl}(R)$. The order of this element is
\[
\operatorname{ord}\,[\omega_R]\ =\ \begin{cases} n-t+2 & \text{ if $n-t+2$ is odd},\\ (n-t+2)/2 & \text{ if $n-t+2$ is even}. \end{cases}
\]
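In particular, the displayed formula shows that $\operatorname{ord}\,[\omega_R]=1$, i.e., that $[\omega_R]$ is trivial, precisely when $n-t+2=2$, that is, when $n=t$; this is consistent with the fact that for $n=t$ the ideal $I_t(H)$ is generated by the determinant of a square Hankel matrix, so that $R$ is a hypersurface, and in particular Gorenstein.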
\end{remark}
\section{\texorpdfstring{$F$}{F}-purity and the \texorpdfstring{$F$}{F}-pure threshold}
Following \cite[page~121]{Hochster-Roberts:purity}, a ring $R$ of positive prime characteristic is \emph{$F$-pure} if the Frobenius endomorphism $F\colon R\longrightarrow R$ is pure. We prove:
\begin{theorem}
\label{theorem:fpure}
Let $R$ be a Hankel determinantal ring over a field $\mathbb{F}$. If $\mathbb{F}$ has positive characteristic, then the ring $R$ is $F$-pure. If $\mathbb{F}$ has characteristic zero, then $R$ has log canonical singularities.
\end{theorem}
The proof uses the graded version of Fedder's criterion, \cite[Theorem~1.12]{Fedder}, and a result from \cite{Conca}; we record these below:
\begin{theorem}[Fedder's criterion]
\label{theorem:fedder}
Let $A$ be an $\mathbb{N}$-graded polynomial ring, where $A_0$ is a field of characteristic $p>0$. Let $I$ be a homogeneous ideal of~$A$, and set $R \colonequals A/I$. Let $\mathfrak{m}$ be the homogeneous maximal ideal of~$A$. Then $R$ is $F$-pure if and only if
\[
(I^{[p]}:_AI)\ \nsubseteq\ \mathfrak{m}^{[p]}.
\]
\end{theorem}
The following lemma can be seen as a special case of \cite[Theorem~3.12]{Conca}, which expresses the primary decomposition of a product of Hankel determinantal ideals in terms of symbolic powers and the so-called gamma functions. We present here a direct argument that is based only on \cite[Lemma~3.7]{Conca}.
\begin{lemma}
\label{lemma:symbolic}
Let $A\colonequals\mathbb{F}[x_1,\dots,x_{s+r-1}]$ be a polynomial ring over a field $\mathbb{F}$, and let $H$ be the~$r\times s$ Hankel matrix in the indeterminates $x_1,\dots,x_{s+r-1}$. Set $I\colonequals I_t(H)$, where $t$ is an integer with $1\leqslant t\leqslant\min\{r,s\}$. Let $d$ be a positive integer, and let $\delta_1,\dots,\delta_m$ be minors of~$H$ such that $m\leqslant d$ and $\sum_i\deg\delta_i\geqslant td$. Then
\[
\delta_1\cdots\delta_m\ \in\ I^d.
\]
\end{lemma}
\begin{proof} By adding factors of degree $0$ if needed, we may assume that $m=d$. For $u$ an integer, set~$I_u\colonequals I_u(H)$. If $\deg\delta_i\geqslant t$ for all $i=1,\dots,d$, then the assertion is obvious. If~$\deg \delta_i< t$ for some $i$, say $u=\deg \delta_1< t$, then, since $\sum_i\deg\delta_i\geqslant td$, there must be an index~$j$ such that $\deg\delta_j>t$, say $v=\deg\delta_2>t$. By \cite[Lemma 3.7]{Conca} one has
\[
I_uI_v\subseteq I_{u+1}I_{v-1}
\]
since~$u+1<v$. Hence we may replace $\delta_1\delta_2$ in the product with $\delta'_1\delta'_2$, where $\deg\delta'_1=u+1$ and $\deg \delta'_2=v-1$. Repeating the argument as needed, we obtain the desired assertion.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:fpure}]
The characteristic zero case follows from the positive characteristic assertion by \cite[Theorem~3.9]{Hara-Watanabe}; in view of this, let $\mathbb{F}$ be a field of characteristic $p>0$. Set $A\colonequals\mathbb{F}[x_1,\dots,x_{n+t-1}]$ and $I\colonequals I_t(H)$. By Fedder's criterion, it suffices to verify that
\[
(I^{[p]}:_AI)\ \nsubseteq\ \mathfrak{m}^{[p]},
\]
where $\mathfrak{m}$ is the homogeneous maximal ideal of $A$. We construct a polynomial $f$ with
\begin{equation}
\label{equation:symbolic}
f\ \in\ I^{(n-t+1)}
\end{equation}
such that, with respect to the lexicographic order $x_1>x_2> \cdots >x_{n+t-1}$, one has
\[
\operatorname{in_{lex}}(f)\ =\ x_1x_2\cdots x_{n+t-1}.
\]
Since the initial term of $f$ is squarefree, it follows that $f^{p-1}\notin \mathfrak{m}^{[p]}$. We claim that~\eqref{equation:symbolic} implies $f^{p-1} \in (I^{[p]}:_AI)$, i.e.,
\[
f^{p-1}I\ \subseteq\ I^{[p]}.
\]
By the flatness of the Frobenius endomorphism of $A$, the set of associated primes of $A/I^{[p]}$ equals that of~$A/I$, so it suffices to verify that the containment displayed above holds after localization at the prime ideal $I$. The ideal $I$ has height $n-t+1$, so $(A_I,IA_I)$ is a regular local ring of dimension $n-t+1$, and the pigeonhole principle gives
\[
I^{(n-t+1)(p-1)+1}A_I\ \subseteq\ I^{[p]}A_I.
\]
Using~\eqref{equation:symbolic}, it follows that
\[
f^{p-1}IA_I\ \subseteq\ I^{(n-t+1)(p-1)+1}A_I,
\]
which proves the claim. It remains to construct $f$ with the properties asserted above; the construction depends on whether $n+t-1$ is odd or even:
Suppose $n+t-1$ is odd, and set $k\colonequals (n+t)/2$. Then $I$ also equals the ideal generated by the size $t$ minors of the~$k \times k$ Hankel matrix
\[
H'\colonequals\begin{pmatrix}
x_1 & x_2 & x_3& \cdots & x_k\\
x_2 & x_3 & x_4 & \cdots & x_{k+1}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
x_k & x_{k+1} &\cdots & \cdots & x_{n+t-1}
\end{pmatrix}.
\]
Let $f$ be the product of $\delta_1\colonequals{[1\ \dots\ k \mid 1\ \dots\ k]}_{H'}$ and $\delta_2\colonequals{[1\ \dots\ k-1 \mid 2\ \dots\ k]}_{H'}$. Then
\[
\operatorname{in_{lex}}(f)\ =\ (x_1x_3 \cdots x_{n+t-1})(x_2x_4\cdots x_{n+t-2})\ =\ x_1x_2\cdots x_{n+t-1}
\]
as claimed. Let $\delta_3,\dots,\delta_{n-t+1}$ be size $t-1$ minors of $H'$. Then Lemma~\ref{lemma:symbolic} implies that
\[
\delta_1\cdots\delta_{n-t+1}\ \in\ I^{n-t+1}.
\]
Since $I$ is a prime ideal generated in degree $t$, and each of $\delta_3,\dots,\delta_{n-t+1}$ has degree $t-1$, it follows that $f=\delta_1\delta_2$ belongs to the symbolic power $I^{(n-t+1)}$, as claimed in~\eqref{equation:symbolic}.
When $n+t-1$ is even, set $k\colonequals (n+t-1)/2$, and consider the~$k \times (k+1)$ Hankel matrix
\[
H''\colonequals\begin{pmatrix}
x_1 & x_2 & \cdots & x_k & x_{k+1}\\
x_2 & x_3 & \cdots & x_{k+1} & x_{k+2}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
x_k & \cdots & \cdots & x_{n+t-2} & x_{n+t-1}
\end{pmatrix}.
\]
Then $I$ equals $I_t(H'')$. Take $f$ to be the product of the minors $\delta_1\colonequals{[1\ \dots\ k \mid 1\ \dots\ k]}_{H''}$ and~$\delta_2\colonequals{[1\ \dots\ k \mid 2\ \dots\ k+1]}_{H''}$, in which case
\[
\operatorname{in_{lex}}(f)\ =\ (x_1x_3 \cdots x_{n+t-2})(x_2x_4\cdots x_{n+t-1})\ =\ x_1x_2\cdots x_{n+t-1}.
\]
Choosing size $t-1$ minors $\delta_3,\dots,\delta_{n-t+1}$ of $H''$, Lemma~\ref{lemma:symbolic} gives
\[
\delta_1\cdots\delta_{n-t+1}\ \in\ I^{n-t+1}
\]
and hence $f\in I^{(n-t+1)}$, as in the previous case.
\end{proof}
The definition of $F$-pure thresholds is due to Takagi and Watanabe \cite{Takagi-Watanabe}, and provides a positive characteristic analogue of the log canonical threshold. We focus here on the $F$-pure threshold of a homogeneous ideal in a standard graded $F$-pure ring:
\begin{definition}
Let $A$ be a polynomial ring over an $F$-finite field of characteristic~$p>0$, and let $I$ be a homogeneous ideal such that $R \colonequals A/I$ is $F$-pure. Let $\mathfrak{a}$ be a homogeneous ideal of~$R$, and let $J$ be its preimage in $A$. Given $e \in \mathbb{N}$, set
\[
\nu_e(\mathfrak{a})\colonequals\max\left\{r \geqslant 0 \mid (I^{[q]}:_AI)J^r \not\subseteq \mathfrak{m}_A^{[q]} \right\},
\]
where $q=p^e$. Then the \emph{$F$-pure threshold} of $\mathfrak{a}\subset R$ is
\[
\operatorname{fpt}(\mathfrak{a}) \colonequals \lim_{e \longrightarrow \infty}\nu_e(\mathfrak{a})/p^e.
\]
\end{definition}
Suppose, in addition, that $R$ is normal; let $\omega_R$ be the graded canonical module of $R$. Taking $\mathfrak{a}$ to be $\mathfrak{m}_R$ in the above definition, \cite[Theorem~4.1]{STV} implies that $-\nu_e(\mathfrak{m}_R)$ equals the degree of a minimal generator of $\omega_R^{(1-q)}$. Using this, we obtain:
\begin{theorem}
\label{theorem:fpt1}
Let $R=\mathbb{F}[x_1,\dots,x_{n+t-1}]/I_t(H)$, where $\mathbb{F}$ is a field of characteristic $p>0$, and $H$ is a $t\times n$ Hankel matrix. Then the $F$-pure threshold of $\mathfrak{m}_R\subset R$ is
\[
\operatorname{fpt}(\mathfrak{m}_R)\ =\ \frac{2(t-1)}{n-t+2}.
\]
\end{theorem}
\begin{proof}
Recall from Remark~\ref{remark:mcm} that $\omega_R=\mathfrak{p}^{(2)}$. For an integer $q=p^e$, one then has
\[
\omega_R^{(1-q)}\ =\ \mathfrak{p}^{(2(1-q))}.
\]
Write
\[
2(q-1)=i(n-t+2)+j,
\quad\text{ where }\ 0\leqslant j\leqslant n-t+1.
\]
In view of the graded isomorphism
\[
\mathfrak{p}^{(n-t+2)}\ \cong\ R(-(t-1)),
\]
one then has
\[
\omega_R^{(1-q)}\ =\ \mathfrak{p}^{(2(1-q))}\ =\ \mathfrak{p}^{(-i(n-t+2))}\mathfrak{p}^{(-j)}\ \cong\ \mathfrak{p}^{(-j)}(i(t-1)).
\]
Since $0\leqslant j\leqslant n-t+1$, the module $\mathfrak{p}^{(-j)}$ has minimal generators in degree $0$, which then implies that $\omega_R^{(1-q)}$ has minimal generators in degree $-i(t-1)$, and hence that
\[
\nu_e(\mathfrak{m}_R)\ =\ i(t-1)\ =\ (t-1) \left\lfloor\frac{2(q-1)}{n-t+2}\right\rfloor.
\]
The calculation of $\operatorname{fpt}(\mathfrak{m}_R)$ follows immediately from this.
\end{proof}
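As direct instances of the formula: for $t=2$, where $R$ is the coordinate ring of the cone over a rational normal curve, one has $\operatorname{fpt}(\mathfrak{m}_R)=2/n$, while for $n=t$, where $R$ is the hypersurface defined by the determinant of a square Hankel matrix, one has $\operatorname{fpt}(\mathfrak{m}_R)=t-1$.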
Using the general theory developed in \cite{Henriques-Varbaro}, one can also compute the $F$-pure threshold of the ideal $I_t(H)$ in the polynomial ring $\mathbb{F}[x_1,\dots,x_{n+t-1}]$:
\begin{theorem}
\label{theorem:fpt2}
Let $H$ be a $t\times n$ Hankel matrix of indeterminates over a field $\mathbb{F}$ of positive characteristic. Then the $F$-pure threshold of $I_t(H)\subset \mathbb{F}[x_1,\dots,x_{n+t-1}]$ is
\[
\operatorname{fpt}(I_t(H))\ =\ \min\left\{\frac{n+t-2i+1}{t-i+1}\mid i=1,\dots,t\right\}.
\]
More precisely, if $\lambda\in \mathbb{R}_{>0}$, the generalized test ideal $\tau(\lambda \bullet I_t(H))$ is
\[
\tau(\lambda \bullet I_t(H))\ =\ \bigcap_{i=1}^tI_i(H)^{(\lfloor\lambda(t-i+1)\rfloor-n-t+2i)}.
\]
\end{theorem}
\begin{proof}
The powers of the ideal $I_t(H)$ are integrally closed by \cite[Theorem~4.5]{Conca}, and using~\cite[Theorem~3.12]{Conca} one has
\[
\bigcup_{s\geqslant 1}\operatorname{Ass} I_t(H)^s\ \subseteq\ \{I_1(H),\ I_2(H),\ \dots,\ I_t(H)\}.
\]
In the notation of \cite[\S\,3]{Henriques-Varbaro}, by \cite[Theorem~3.12]{Conca} we also infer that $I_t(H)$ satisfies condition ($\diamond$) and that
\[
e_{I_i(H)}(I_t(H))\ =\ t-i+1
\quad\text{ for }\ i=1,\dots,t.
\]
Recall that the polynomial $f$ constructed in the proof of Theorem~\ref{theorem:fpure} has a squarefree initial term. By an argument similar to the one used there for $i=t$, one sees that
\[
f\ \in \ I_i(H)^{(n+t-2i+1)}
\quad\text{ for }\ i=1,\dots,t.
\]
Since $I_i(H)$ is equal to the ideal generated by the size $i$ minors of an $i\times (t+n-i)$ Hankel matrix, it follows that
$\operatorname{height}\, I_i(H)=n+t-2i+1$, and that the ideal $I_t(H)$ satisfies the condition ($\diamond+$). The assertion now follows by \cite[Theorem~3.14]{Henriques-Varbaro}.
\end{proof}
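To make the minimum explicit in the smallest case (elementary arithmetic): for $t=2$ the candidates are $(n+1)/2$ for $i=1$ and $n-1$ for $i=2$, so
\[
\operatorname{fpt}(I_2(H))\ =\ \begin{cases} n-1 & \text{ if } n=2,\\ (n+1)/2 & \text{ if } n\geqslant 3. \end{cases}
\]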
\begin{remark}
A similar argument allows one to compute the $F$-pure threshold and the generalized test ideals (in positive characteristic), as well as the log canonical threshold and the multiplier ideals (in characteristic zero), of any product of ideals of minors of a Hankel matrix in a polynomial ring.
\end{remark}
We conclude with the following question; we prove in \cite{CMSV} that the answer is affirmative in a number of cases.
\begin{question}
Is every Hankel determinantal ring over a field of positive characteristic an~$F$-regular ring?
\end{question}
\section{Introduction}\label{sec:introduction}
Natural Language Processing (\textmc{nlp}\xspace) for economics and finance is a rapidly developing research area \cite{econlp-2018, econlp-2019, finnlp-2020, fnp-2020-joint}. While financial data are usually reported in tables, much valuable information also lies in text. A prominent source of such textual data is the Electronic Data Gathering, Analysis, and Retrieval system (\textmc{edgar}\xspace) from the \textmc{us}\xspace Securities and Exchange (\textmc{sec}\xspace) website that hosts filings of publicly traded companies.\footnote{See \url{https://www.sec.gov/edgar/searchedgar/companysearch.html} for more information.} In order to maintain transparency and regulate exchanges, \textmc{sec}\xspace requires all public companies to periodically upload various reports, describing their financial status, as well as important events like acquisitions and bankruptcy.
Financial documents from \textmc{edgar}\xspace have been useful in a variety of tasks such as stock price prediction \cite{related-lee-8k-1}, risk analysis \cite{related-kogan-10k-1}, financial distress prediction \cite{financial-distress}, and merger participants identification \cite{katsafados-merger-participants}. However, there has not been an open-source, efficient tool to retrieve textual information from \textmc{edgar}\xspace. Researchers interested in economics and \textmc{nlp}\xspace often rely on costly subscription services or try to build web crawlers from scratch, frequently without success. In the latter case, there are many technical challenges, especially when aiming to retrieve the annual reports of a specific firm for particular years or to extract the most relevant item sections from documents that may contain hundreds of pages. Thus, developing a web-crawling toolkit for \textmc{edgar}\xspace as well as releasing a large financial corpus in a clean, easy-to-use form would foster research in financial \textmc{nlp}\xspace.
\begin{table}[t]
\Large
\renewcommand{\arraystretch}{1.2}
\resizebox{\columnwidth}{!}
{
\centering
\begin{tabular}{l|c|c|c|c}
\toprule
\textbf{Corpora} & \textbf{Filings} & \textbf{Tokens} & \textbf{Companies} & \textbf{Years}\\
\midrule
\citet{related-joco} & Various & 242M & 270 & 2000-2015\\
\citet{related-cofif} & Various & 188M & 60 & 1995-2018 \\
\hline
\citet{related-lee-8k-1} & 8-K & 27.9M & 500 & 2002-2012 \\
\citet{related-kogan-10k-1} & 10-K & 247.7M & 10,492 & 1996-2006 \\
\citet{related-tsai-10k-2} & 10-K & 359M & 7,341 & 1996-2013\\
\textmc{edgar-corpus}\xspace (ours) & 10-K & \textbf{6.5B} & \textbf{38,009} & \textbf{1993-2020} \\
\bottomrule
\end{tabular}
}
\caption{Financial corpora derived from \textmc{sec}\xspace (lower part) and other sources (upper part).}
\label{tab:corpus-related-work}
\vspace*{-4mm}
\end{table}
In this paper, we release \textmc{edgar-corpus}\xspace, a novel financial corpus containing all the \textmc{us}\xspace annual reports (10-K filings) from 1993 to 2020.\footnote{\textmc{edgar-corpus}\xspace is available at: \url{https://zenodo.org/record/5528490}} Each report is provided in an easy-to-use \textmc{json}\xspace format containing all 20 sections and subsections (items) of a \textmc{sec}\xspace annual report; different items provide useful information for different tasks in financial \textmc{nlp}\xspace. To the best of our knowledge, \textmc{edgar-corpus}\xspace is the largest publicly available financial corpus (Table \ref{tab:corpus-related-work}). In addition, we use \textmc{edgar-corpus}\xspace to train and release \textmc{word}2\textmc{vec}\xspace embeddings, dubbed \textmc{edgar}-\textmc{w}2\textmc{v}\xspace. We experimentally show that the new embeddings are more useful for financial \textmc{nlp}\xspace tasks than generic \textmc{g}lo\textmc{v}e\xspace embeddings \citep{glove-2014} and other previously released financial \textmc{word}2\textmc{vec}\xspace embeddings \citep{related-tsai-10k-2}. Finally, to further facilitate future research in financial \textmc{nlp}\xspace, we open-source \textmc{edgar-crawler}\xspace, the Python toolkit we developed to download and extract the text from the annual reports of publicly traded companies available at \textmc{edgar}\xspace.\footnote{\textmc{edgar-crawler}\xspace is available at: \url{https://github.com/nlpaueb/edgar-crawler}}
\section{Related Work}\label{sec:related_work}
There are few textual financial resources in the \textmc{nlp}\xspace literature. \citet{related-joco} published \textmc{joc}o\xspace, a corpus of non-\textmc{sec}\xspace annual and social responsibility reports for the top 270 \textmc{us}\xspace, \textmc{uk}\xspace, and German companies. \citet{related-cofif} released \textmc{c}o\textmc{f}i\textmc{f}\xspace, the first financial corpus in the French language, comprising annual, semi-annual, quarterly, and reference business documents.
While some previous work has published document collections from \textmc{edgar}\xspace, those collections come with certain limitations. \citet{related-kogan-10k-1} published a collection of the Management’s Discussion and Analysis Sections (Item 7) for all \textmc{sec}\xspace company annual reports from 1996 to 2006. \citet{related-tsai-10k-2} updated that collection to include reports up to 2013 while also providing \textmc{word}2\textmc{vec}\xspace embeddings. Finally, \citet{related-lee-8k-1} released a collection of 8-K reports from \textmc{edgar}\xspace, which announce significant firm events such as acquisitions or director resignations, from 2002 until 2012.
Compared to previous work, \textmc{edgar-corpus}\xspace contains all 20 items of the annual reports from all publicly traded companies in the \textmc{us}\xspace, covering a time period from 1993 to 2020. We believe that releasing the whole annual reports (with all 20 items) will facilitate several research directions in financial \textmc{nlp}\xspace \cite{loughran2016textual}. Also, \textmc{edgar-corpus}\xspace is much larger than previously published financial corpora in terms of tokens, number of companies, and year range (Table \ref{tab:corpus-related-work}).
\section{Creating \textmc{edgar-corpus}\xspace}\label{sec:corpus}
\subsection{Data and toolkit}\label{sub:data}
Publicly listed companies are required to submit 10-K filings (annual reports) every year.
Each 10-K filing is a complete description of the company's economic activity during the corresponding fiscal year. Such reports also provide a full outline of risks, liabilities, corporate agreements, and operations. Furthermore, the documents provide an extensive analysis of the relevant sector industry and the marketplace as a whole.
A 10-K report is organized in 4 parts and 20 different items (Table \ref{tab:10k_item_sections}). Extracting specific items from documents with hundreds of pages usually requires manual work, which is time- and resource-intensive. To promote research in all possible directions, we extracted all available items using an extensive pre-processing and extraction pipeline.
\begin{table}[t]
\resizebox{\columnwidth}{!}
{
\begin{tabular}{l|l|p{0.8\linewidth}}
\toprule
& \textbf{Item} & \textbf{Section Name} \\
\midrule
\textbf{Part I} & Item 1 & Business \\
& Item 1A & Risk Factors \\
& Item 1B & Unresolved Staff Comments \\
& Item 2 & Properties \\
& Item 3 & Legal Proceedings \\
& Item 4 & Mine Safety Disclosures \\
\hline
\textbf{Part II} & Item 5 & Market \\
& Item 6 & Consolidated Financial Data \\
& Item 7 & Management's Discussion and Analysis \\%of Financial Condition and Results of Operations \\
& Item 7A & Quantitative and Qualitative Disclosures about Market Risks \\
& Item 8 & Financial Statements \\
& Item 9 & Changes in and Disagreements With\break Accountants \\% on Accounting and Financial Disclosure \\
& Item 9A & Controls and Procedures \\
& Item 9B & Other Information \\
\hline
\textbf{Part III} & Item 10 & Directors, Executive Officers and\break Corporate Governance \\
& Item 11 & Executive Compensation \\
& Item 12 & Security Ownership of Certain Beneficial Owners \\%and Management and Related Stockholder Matters \\
& Item 13 & Certain Relationships and Related\break Transactions \\%, and Director Independence \\
& Item 14 & Principal Accounting Fees and Services\\
\hline
\textbf{Part IV} & Item 15 & Exhibits and Financial Statement\break Schedules Signatures \\
\bottomrule
\end{tabular}
}
\caption{The 20 different items of a 10-K report.}
\label{tab:10k_item_sections}
\vspace*{-4mm}
\end{table}
In more detail, we developed \textmc{edgar-crawler}\xspace, which we used to download the 10-K reports of all publicly traded companies in the \textmc{us}\xspace between the years 1993 and 2020. We then removed all tables to keep only the textual data, which were \textmc{html}\xspace{-stripped},\footnote{We use Beautiful Soup (\url{https://beautiful-soup-4.readthedocs.io/en/latest}).} cleaned and split into the different items by using regular expressions. The resulting dataset is \textmc{edgar-corpus}\xspace.
While there exist toolkits to download annual filings from \textmc{edgar}\xspace, they do not support the extraction of specific item sections.\footnote{For example, \href{https://github.com/sec-edgar/sec-edgar}{sec-edgar} can download complete \textmc{html}\xspace reports (with images and tables), but it does not produce clean item-specific text.} This is particularly important since researchers often rely on certain items in their experimental setup. For example, \citet{fraud-item7}, \citet{mda-deception-item7}, and \citet{sentiment-fraud-item7} perform textual analysis on Item 7 to detect corporate fraud. \citet{katsafados-ipo-underprice-detection} combine Item 7 and Item 1A to detect Initial Public Offering (\textmc{ipo}\xspace) underpricing, while \citet{moriarty-item1-item7-m-and-a} combine Item 1 with Item 7 to predict mergers and acquisitions.
Apart from \textmc{edgar-corpus}\xspace, we also release \textmc{edgar-crawler}\xspace, the toolkit we developed to create \textmc{edgar-corpus}\xspace, to facilitate future research based on textual data from \textmc{edgar}\xspace. \textmc{edgar-crawler}\xspace consists of two Python modules that support its main functions:\vspace{2mm}
\noindent\textbf{\code{edgar\_crawler.py}} is used to download 10-K reports in batch or for specific companies that are of interest to the user.\vspace{2mm}
\noindent\textbf{\code{extract\_items.py}} extracts the text of all or particular items from 10-K reports. Each item's text becomes a separate \textmc{json}\xspace key-value pair (Figure \ref{fig:json_structure}).\vspace{2mm}
\begin{figure}[h]
\includegraphics[width=\columnwidth]{images/json_structure.png}
\caption{An example of a 10-K report in \textmc{json}\xspace format as downloaded and extracted by \textmc{edgar-crawler}\xspace; \textit{filename} is the downloaded 10-K file; \textit{cik} is the Central Index Key; \textit{year} is the corresponding Fiscal Year.}
\label{fig:json_structure}
\vspace*{-4mm}
\end{figure}
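Since each report is a plain \textmc{json}\xspace record, it can be consumed with the standard tooling of any language. The sketch below parses one made-up record whose field names (\textit{filename}, \textit{cik}, \textit{year}, \textit{item\_1}, \ldots) follow the structure of Figure~\ref{fig:json_structure}; the exact key spelling and values should be checked against the released files.

```python
import json

# A hypothetical EDGAR-CORPUS record, inlined for illustration only.
# The field names follow the structure sketched in the figure above;
# the value of "filename" and the item texts are made up.
record = json.loads("""
{
  "filename": "320193_2020.htm",
  "cik": "320193",
  "year": "2020",
  "item_1": "Business. ...",
  "item_1A": "Risk Factors. ...",
  "item_7": "Management's Discussion and Analysis. ..."
}
""")

# Select the items relevant to a given task, e.g. Item 7 for fraud
# detection or Item 1A for risk analysis.
mda = record.get("item_7", "")
print(record["cik"], record["year"], mda[:30])
```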
\subsection{Word embeddings}\label{sub:demonstration}
To facilitate financial \textmc{nlp}\xspace research, we used \textmc{edgar-corpus}\xspace to train \textmc{word}2\textmc{vec}\xspace embeddings (\textmc{edgar}-\textmc{w}2\textmc{v}\xspace), which can be used for downstream tasks, such as financial text classification or summarization. We used \textmc{word}2\textmc{vec}\xspace's skip-gram model \cite{word-embeddings-1,word-embeddings-2} with default parameters as implemented by \textmc{gensim}\xspace \citep{gensim} to generate 200-dimensional \textmc{word}2\textmc{vec}\xspace embeddings for a vocabulary of 100K words. The word tokens are generated using \textmc{spa}C\textmc{y}\xspace \cite{spacy}. We also release \textmc{edgar}-\textmc{w}2\textmc{v}\xspace.\footnote{The \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings are available at: \url{https://zenodo.org/record/5524358}}
To illustrate the quality of \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings, in Figure \ref{fig:umap_viz} we visualize sampled words from seven different entity types, i.e., \textit{location}, \textit{industry}, \textit{company}, \textit{year}, \textit{month}, \textit{number}, and \textit{financial term}, after applying dimensionality reduction with the \textmc{umap}\xspace algorithm \cite{mcinnes2018umap-software}. The financial terms are randomly sampled from the Investopedia Financial Terms Dictionary.\footnote{\url{https://www.investopedia.com/financial-term-dictionary-4769738}.} In addition, companies and industries are randomly sampled from well-known industry sectors and publicly traded stocks. Finally, the words belonging to the remaining entity types are randomly sampled from gazetteers. Figure \ref{fig:umap_viz} shows that words belonging to the same entity type form clear clusters in the 2-dimensional space, indicating that \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings manage to capture the underlying financial semantics of the vocabulary.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{images/bestbyfar6.png}
\caption{Visualization of the \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings. Different colors indicate different entity types.}
\label{fig:umap_viz}
\vspace*{-4mm}
\end{figure}
\begin{table}[b]
\Large
\resizebox{\columnwidth}{!}
{
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\toprule
\textit{economy} & \textit{competitor} & \textit{market} & \textit{national} & \textit{investor} \\
\midrule
\midrule
downturn & competitive & marketplace & association & institutional \\
recession & competing & industry & regional & shareholder \\
slowdown & dominant & prices & nationwide & relations\\
sluggish & advantages & illiquidity & american & purchaser \\
stagnant & competition & prevailing & zions & creditor\\
\bottomrule
\end{tabular}
}
\caption{Sample words from \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings (top row) and their corresponding nearest neighbors (columns) based on cosine similarity.}
\label{tab:embeddings_nearest_neighbors}
\vspace*{-2mm}
\end{table}
To further highlight the semantics captured by \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings, we retrieved the 5 nearest neighbors, according to cosine similarity, for commonly used financial terms (Table~\ref{tab:embeddings_nearest_neighbors}).\footnote{We exclude obvious top-scoring neighbors of singular/plural pairs such as \textit{market/markets or investor/investors}.} As shown, all the nearest neighbors are highly related to the corresponding term. For instance, the word \textit{economy} is correctly associated with terms indicating the economic slowdown of the past few years, e.g., \textit{downturn}, \textit{recession}, or \textit{slowdown}. Also, \textit{market} is correctly related to words such as \textit{marketplace}, \textit{industry}, and \textit{prices}.
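The retrieval step behind Table~\ref{tab:embeddings_nearest_neighbors} amounts to ranking vocabulary words by cosine similarity to the query vector. A minimal, self-contained sketch with made-up 3-dimensional toy vectors (the released \textmc{edgar}-\textmc{w}2\textmc{v}\xspace vectors are 200-dimensional over a 100K-word vocabulary):

```python
from math import sqrt

# Toy nearest-neighbour retrieval by cosine similarity. The vectors
# below are invented for illustration; they are not the released
# EDGAR-W2V embeddings.
emb = {
    "economy":   [0.9, 0.1, 0.0],
    "downturn":  [0.8, 0.2, 0.1],
    "recession": [0.7, 0.3, 0.0],
    "stock":     [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def nearest(word, k=2):
    others = (w for w in emb if w != word)
    return sorted(others, key=lambda w: cosine(emb[word], emb[w]),
                  reverse=True)[:k]

print(nearest("economy"))  # ['downturn', 'recession']
```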
\section{Experiments on financial NLP tasks}
We also compare \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings against generic \textmc{g}lo\textmc{v}e\xspace \citep{glove-2014} embeddings\footnote{We use the 200-dimensional \textmc{g}lo\textmc{v}e\xspace embeddings from \url{https://nlp.stanford.edu/data/glove.6B.zip}.} and the financial embeddings of \citet{related-tsai-10k-2} in three financial \textmc{nlp}\xspace tasks. For each task, we use the same model and only alter the embeddings component. In addition, we apply the same pre-processing when building the vocabulary of each set of embeddings.\vspace{1.5mm}
\noindent\textbf{\textmc{f}in\textmc{s}im-3\xspace} \cite{finsim-3-proceedings} provides a set of business and economic terms and the task is to classify them into the most relevant hypernym from a set of 17 possible hypernyms from the Financial Industry Business Ontology (\textmc{fibo}\xspace).\footnote{\url{https://spec.edmcouncil.org/fibo/}.} Example hypernyms include \textit{Credit Index}, \textit{Bonds}, and \textit{Stocks}. We tackle the problem with a multinomial logistic regression model, which, given the embedding of an economic term, classifies the term to one of the 17 possible hypernyms. Since \textmc{f}in\textmc{s}im-3\xspace is a recently completed challenge, the true labels for the test data were not available. Therefore, we use a stratified 10-fold cross-validation. We report accuracy and the average rank of the correct hypernym as evaluation measures. For the latter, the 17 hypernyms are sorted according to the model's probabilities. A perfect model, i.e., one that would always rank the correct hypernym first, would have an average rank of 1.\vspace{1.5mm}
\noindent\textbf{Financial tagging (\textmc{f}in\textmc{t}\xspace)} is an in-house sequence labeling problem for financial documents. The task is to annotate financial reports with word-level tags from an accounting taxonomy. To tackle the problem, we use a \textmc{bilstm}\xspace encoder operating over word embeddings with a shared multinomial logistic regression that predicts the correct tag for each word from the corresponding \textmc{bilstm}\xspace state. We report the F1 score micro-averaged across all tags.\vspace{1.5mm}
\noindent\textbf{\textmc{f}i\textmc{qa}\xspace Open Challenge} \cite{fiqa} is a sentiment analysis regression challenge over financial texts. It contains financial tweets annotated by domain experts with a sentiment score in the range [-1, 1], with 1 denoting the most positive score. For this problem, we employ a \textmc{bilstm}\xspace encoder which operates over word embeddings, and a linear regressor operating over the last hidden state of the \textmc{bilstm}\xspace. Since we do not have access to the test set in this task, we use a 10-fold cross-validation. We evaluate the results using Mean Squared Error (MSE) and R-squared ($R^2$).\vspace{1.5mm}
\noindent Across all tasks, \textmc{edgar}-\textmc{w}2\textmc{v}\xspace outperforms \textmc{g}lo\textmc{v}e\xspace, showing that in-domain knowledge is critical in financial \textmc{nlp}\xspace problems (Table \ref{tab:experiments}). The gains are more substantial in \textmc{f}in\textmc{s}im-3\xspace and \textmc{f}in\textmc{t}\xspace, which rely to a larger extent on understanding highly technical economics discourse.
Interestingly, the in-domain embeddings of \citet{related-tsai-10k-2} are comparable to the generic \textmc{g}lo\textmc{v}e\xspace embeddings in two of the three tasks. One possible reason is that \citet{related-tsai-10k-2} employed stemming during the creation of the embeddings vocabulary, which might have contributed noise to the models due to loss of information.
\begin{table}[h]
\Huge
\renewcommand{\arraystretch}{1.1}
\resizebox{\columnwidth}{!}
{
\centering
\begin{tabular}{l|cc|c|cc}
\toprule
\toprule
\multirow{2}{*}{} &
\multicolumn{2}{c|}{\textmc{f}in\textmc{s}im-3\xspace} & \textmc{f}in\textmc{t}\xspace & \multicolumn{2}{c}{\textmc{f}i\textmc{qa}\xspace} \\
& {Acc. $\uparrow$} & {Rank $\downarrow$} & {F1 $\uparrow$} & {MSE $\downarrow$} & {$R^2$ $\uparrow$}\\
\midrule
\textmc{g}lo\textmc{v}e\xspace & 85.3 & 1.26 & 75.8 & 0.151 & 0.119 \\
\citet{related-tsai-10k-2} & 84.9 & 1.27 & 75.3 & 0.142 & 0.169\\
\textmc{edgar}-\textmc{w}2\textmc{v}\xspace (ours) & \textbf{87.9} & \textbf{1.21} & \textbf{77.3} & \textbf{0.141} & \textbf{0.176}\\
\bottomrule
\end{tabular}
}
\caption{Results across financial \textmc{nlp}\xspace tasks, with different word embeddings. We report averages over 3 runs with different random seeds. The standard deviations were very small and are omitted for brevity.}
\label{tab:experiments}
\vspace*{-4mm}
\end{table}
\section{Conclusions and Future Work}\label{sec:conclusions_and_future_work}
We introduced and released \textmc{edgar-corpus}\xspace, a novel \textmc{nlp}\xspace corpus for the financial domain. To the best of our knowledge, \textmc{edgar-corpus}\xspace is the largest financial corpus available. It contains textual data from annual reports published in \textmc{edgar}\xspace, the repository for all \textmc{us}\xspace publicly traded companies, covering a period of more than 25 years. All the reports are split into their corresponding items (sections) and are provided in a clean, easy-to-use \textmc{json}\xspace format. We also released \textmc{edgar-crawler}\xspace, a toolkit for downloading and extracting the reports. To showcase the impact of \textmc{edgar-corpus}\xspace, we used it to train and release \textmc{edgar}-\textmc{w}2\textmc{v}\xspace, which are financial \textmc{word}2\textmc{vec}\xspace embeddings. After illustrating the quality of \textmc{edgar}-\textmc{w}2\textmc{v}\xspace embeddings, we also showed their usefulness in three financial \textmc{nlp}\xspace tasks, where they outperformed generic \textmc{g}lo\textmc{v}e\xspace embeddings and other financial embeddings.
In future work, we plan to extend \textmc{edgar-crawler}\xspace to support additional types of documents (e.g., 10-Q, 8-K) and to leverage \textmc{edgar-corpus}\xspace to explore transfer learning for the financial domain, which is vastly understudied.
\section*{Disclaimer}
This publication contains information in summary form and is therefore intended for general guidance only. It is not intended to be a substitute for detailed research or the exercise of professional judgment. Member firms of the global EY organization cannot accept responsibility for loss to any person relying on this article.
\section{Introduction}
In \cite{fighting-fish}, Duchi, Guerrini, Rinaldi and Schaeffer introduced a new class of combinatorial objects called \emph{fighting fish}, which can be seen as a generalization of directed convex polyominoes. They found that the number of fighting fish with $n+1$ lower free edges is given by
\begin{equation} \label{eq:enum}
\frac{2}{(n+1)(3n+1)} \binom{3n+1}{n}.
\end{equation}
This formula also counts various other objects, such as two-stack sortable permutations \cite{west-tsp-def,zeilberger}, non-separable planar maps \cite{tutte-census,brown-nsp}, left ternary trees \cite{left-ternary-tree,nsp-bij-census} and generalized Tamari intervals \cite{tam-def,tam-bij}. In \cite{fighting-fish-enum}, the same authors also proved some refined equi-enumeration results on fighting fish and left ternary trees. However, their proofs used generating functions and are thus combinatorially unsatisfactory. The authors then conjectured a still more refined enumerative correspondence between fighting fish and left ternary trees, involving more statistics. They also called for a bijective proof of their conjecture, which is still open to the author's knowledge.
Indeed, unlike the previously mentioned classes of objects, which are linked by a net of bijections, we still lack a combinatorial understanding of fighting fish. The present article is meant to fill this gap by providing a recursive bijection between fighting fish and two-stack sortable permutations. More precisely, our main result is the following (related definitions will be given later).
\begin{thm} \label{thm:main}
There is a bijection $\phi$ from two-stack sortable permutations to fighting fish satisfying the following conditions. Given a two-stack sortable permutation $\pi$, let $\S(\pi)$ be the result of sorting $\pi$ once through a stack. Suppose that $\pi$ is of length $n$, with $i$ ascents and $j$ descents in $\pi$, and $k$ left-to-right maxima and $\ell$ elements $a$ that precede $a-1$ in $\S(\pi)$. Then $\phi(\pi)$ is a fighting fish with $n+1$ lower free edges, of which $i+1$ are left and $j+1$ are right, and with fin-length $k+1$ and $\ell+1$ tails.
\end{thm}
This result echoes the conjecture at the end of \cite{fighting-fish-enum}, which calls for a bijection from fighting fish to other objects such as left ternary trees. To prove our result, we first give a new recursive decomposition of two-stack sortable permutations. Then we observe that this new decomposition is isomorphic to a decomposition of fighting fish given in \cite{fighting-fish-enum}, which gives the recursive bijection $\phi$. We finally observe that various statistics are also carried over by $\phi$. Our result can thus be regarded as an equi-enumeration result refined by all related statistics, which can be understood combinatorially. When restricted to a subset of these statistics, it gives a combinatorial explanation of the refined enumeration results in \cite{fighting-fish-enum}. As a side product, we also prove the algebraicity of a refined generating function of two-stack sortable permutations similar to that in \cite{multistat}, using a simpler functional equation due to the new decomposition.
By providing a bijection between the newly introduced fighting fish and the relatively well-known two-stack sortable permutations, we in fact capture fighting fish in the net of bijections between objects counted by \eqref{eq:enum} we mentioned above. As a result, we could go further in the study of not only fighting fish but also other equi-enumerated objects, such as non-separable planar maps, by looking at structures transferred by our bijection, and natural compositions of our bijection with existing ones.
\section{Preliminaries and previous work}
Given two sequences $A$ and $B$, we denote by $A \cdot B$ their concatenation. The empty sequence (thus also the empty permutation) is denoted by $\epsilon$. We denote by $\operatorname{len}(A)$ the length of a sequence. We now adapt the setting in \cite{multistat}. Let $A = (a_1, a_2, \ldots, a_\ell)$ be a non-empty sequence of distinct integers, with $n$ its largest element. We can write $A$ as $A_L \cdot (n) \cdot A_R$, with $A_L$ (resp. $A_R$) the part of $A$ before (resp. after) $n$. We now define the \tdef{stack-sorting operator}, denoted by $\S$, recursively as
\begin{equation} \label{eq:S-def}
\S(\epsilon) = \epsilon, \quad \S(A) = \S(A_L) \cdot \S(A_R) \cdot (n).
\end{equation}
For example, $\S(0, -1, 7, 9, 3) = (-1,0,7,3,9)$ and $\S(6,4,3,2,7,1,5) = (2,3,4,6,1,5,7)$.
An equivalent way of thinking about $\S$ is that it corresponds to a pass over a ``lazy stack'' $LS$, as described in \cite{west-tsp-def,multistat}. To get $\S(A)$, we start with $LS$ empty, and we push elements of $A$ one by one onto $LS$, while maintaining an increasing order of elements in $LS$ from top to bottom. To this end, each time before we push an element $a_i$ onto $LS$, we pop every element smaller than $a_i$ in $LS$. After all elements are treated, we pop out every element in $LS$, and the overall output sequence is $\S(A)$.
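The two descriptions of $\S$ are easy to compare in code. The sketch below implements both the recursive definition \eqref{eq:S-def} and the lazy-stack pass, and reproduces the examples above.

```python
# Two implementations of the stack-sorting operator S: the recursive
# definition and the lazy-stack pass. Sequences are Python lists.

def S_rec(a):
    if not a:
        return []
    m = a.index(max(a))                  # position of the largest element
    return S_rec(a[:m]) + S_rec(a[m + 1:]) + [a[m]]

def S_stack(a):
    stack, out = [], []
    for x in a:
        # keep the stack increasing from top to bottom:
        # pop every element smaller than x before pushing x
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    while stack:                         # final flush
        out.append(stack.pop())
    return out

print(S_rec([0, -1, 7, 9, 3]))           # [-1, 0, 7, 3, 9]
print(S_stack([6, 4, 3, 2, 7, 1, 5]))    # [2, 3, 4, 6, 1, 5, 7]
```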
Given a permutation $\sigma$ in the symmetric group $\mathfrak{S}_n$ viewed as a sequence, we say that $\sigma$ is \tdef{stack-sortable} if $\S(\sigma)$ is the identity $\textrm{id}_n$ of $\mathfrak{S}_n$. In \cite{taocp}, the following well-known result, expressed using pattern avoidance, was proved by Knuth.
\begin{prop} \label{prop:av231}
A permutation $\sigma$ is stack-sortable if and only if it avoids the pattern $231$, that is, there are no indices $i<j<k$ such that $\sigma(k) < \sigma(i) < \sigma(j)$.
\end{prop}
We say that $\sigma \in \mathfrak{S}_n$ is a \tdef{two-stack sortable permutation} (or \tdef{2SSP}) if $\S(\S(\sigma)) = \textrm{id}_n$. We denote by $\mathcal{T}_n$ the set of 2SSPs of length $n$, and $\mathcal{T} = \cup_{n \geq 1} \mathcal{T}_n$ the set of all 2SSPs. We take the convention that the empty permutation $\epsilon$ is not a 2SSP for later compatibility with fighting fish. A characterization of 2SSPs with pattern avoidance can be found in \cite{west-tsp-def}.
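For small lengths, the counting sequence of 2SSPs can be checked by brute force against \eqref{eq:enum}:

```python
from itertools import permutations
from math import comb

# Count two-stack sortable permutations of length n for n = 1..5 and
# compare with the formula 2/((n+1)(3n+1)) * C(3n+1, n).

def S(a):
    if not a:
        return []
    m = a.index(max(a))
    return S(a[:m]) + S(a[m + 1:]) + [a[m]]

def is_2ssp(p):
    p = list(p)
    return S(S(p)) == sorted(p)

counts = [sum(1 for p in permutations(range(1, n + 1)) if is_2ssp(p))
          for n in range(1, 6)]
formula = [2 * comb(3 * n + 1, n) // ((n + 1) * (3 * n + 1))
           for n in range(1, 6)]
print(counts)   # [1, 2, 6, 22, 91]
print(formula)  # [1, 2, 6, 22, 91]
```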
It was first conjectured by West \cite{west-tsp-def} that the number of 2SSPs of length $n$ is given by \eqref{eq:enum}. Zeilberger provided a proof in \cite{zeilberger} using generating functions. A refined enumeration including various statistics was given by Bousquet-M\'elou in \cite{multistat}. West also observed that \eqref{eq:enum} also counts the number of non-separable planar maps with $n+1$ edges studied by Tutte and Brown \cite{tutte-census,brown-nsp}. A combinatorial proof of West's observation was first given by Dulucq, Gire and Guibert in \cite{eight-bij-refined}, using a sequence of 8 bijections from 2SSPs to a certain family of permutations encoding non-separable planar maps. Then Goulden and West found in \cite{goulden-west} a recursive bijection directly between 2SSPs and non-separable planar maps. They showed that, under specific recursive decompositions, the two classes of objects share the same set of decomposition trees, later called \emph{description trees} in \cite{nsp-bij-census}. Though nice, all these bijections give no direct proof of the enumeration formula. It was in \cite{nsp-bij-census} that Jacquard and Schaeffer finally gave a combinatorial proof of \eqref{eq:enum} by relating description trees to the so-called \emph{left ternary trees}, first studied in \cite{left-ternary-tree}. More recent advances on 2SSPs and related families of permutations defined by sorting through devices like stacks and queues can be found in \cite{bona-survey}.
We now turn to fighting fish defined and studied by Duchi, Guerrini, Rinaldi and Schaeffer in \cite{fighting-fish,fighting-fish-enum}, which can be seen as a generalization of directed convex polyominoes. In the construction, we use \emph{cells}, which are unit squares rotated by 45 degrees. An edge of a cell is \tdef{free} if it is adjacent to only one cell. A \tdef{fighting fish} is constructed by starting with an initial cell called the \emph{head}, then adding cells successively as illustrated on the left side of Figure~\ref{fig:ff}. More precisely, there are three ways to add a new cell (the gray one): (a) we attach it to a free upper right edge of a cell; (b) we attach it to a free lower right edge of a cell; (c) if there is a cell $a$ with two cells $b$ and $c$ attached to its upper right and lower right edge, and such that $b$ (resp. $c$) has a free lower right (resp. upper right) edge, then we attach the new cell to both $b$ and $c$.
\begin{figure}
\begin{center}
\includegraphics[scale=1,page=2]{figures.pdf}
\end{center}
\caption{Adding a cell to a fighting fish, and an example of a fighting fish}
\label{fig:ff}
\end{figure}
We also need some statistics on fighting fish defined in \cite{fighting-fish-enum}. If a cell has both its right edges free, then its right vertex is called a \tdef{tail}. A fighting fish may have several tails, but it has only one \tdef{nose}, which is the left vertex of its head. The \tdef{fin} of a fighting fish is the path from the nose to the first tail met by following free edges counter-clockwise.
The enumerative properties of fighting fish are studied in \cite{fighting-fish-enum}. It turns out that fighting fish with $n+1$ lower free edges are also counted by \eqref{eq:enum}. Moreover, we have the following refinement.
\begin{prop}[Theorem~2 of \cite{fighting-fish-enum}] \label{prop:enum-refined}
The number of fighting fish with $i+1$ left lower free edges and $j+1$ right lower free edges is
\begin{equation}
\frac{1}{(i+1)(j+1)}\binom{2i+j+1}{j}\binom{i+2j+1}{i}.
\end{equation}
\end{prop}
Again, this result was proved using generating functions. The same formula already appeared in \cite{nsp-refined} as the number of non-separable planar maps with $i$ vertices and $j$ faces, and also in \cite{goulden-west,nsp-bij-census} as the number of two-stack sortable permutations with $i$ descents and $j$ ascents. Later we will see a combinatorial explanation via our bijection.
\section{A decomposition of two-stack sortable permutations}
We first lay down some definitions. Given a sequence $A = (a_1, a_2, \ldots, a_n)$ of distinct integers, we define $P(A)$ as the standardization of $A$, that is, the permutation reflecting the order of elements in $A$. For instance, with $A = (0,4,1,9,5,6)$, we have $P(A) = (1,3,2,6,4,5)$. For a permutation $\sigma$, we denote by $\sigma^{+k}$ the sequence obtained by adding $k$ to each element of $\sigma$, and by $\sigma^{+(k_1, m, k_2)}$ with $k_1 < k_2$ the sequence obtained from $\sigma$ by adding $k_1$ to each element strictly smaller than $m$, and adding $k_2$ to other elements. For example, with $\sigma = (6,2,4,1,5,3)$, we have:
\[
\sigma^{+3} = (9,5,7,4,8,6), \quad \sigma^{+(1,3,3)} = (9,3,7,2,8,6).
\]
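These operations are straightforward to implement; the sketch below reproduces the worked examples above.

```python
# Standardization P and the two shift operations, checked on the examples.

def P(a):
    order = sorted(a)
    return [order.index(x) + 1 for x in a]   # rank of each entry

def shift(a, k):
    # the operation written sigma^{+k} in the text
    return [x + k for x in a]

def split_shift(a, k1, m, k2):
    # the operation written sigma^{+(k1, m, k2)}: add k1 to entries
    # smaller than m, and k2 to the others (k1 < k2)
    return [x + (k1 if x < m else k2) for x in a]

print(P([0, 4, 1, 9, 5, 6]))                      # [1, 3, 2, 6, 4, 5]
print(shift([6, 2, 4, 1, 5, 3], 3))               # [9, 5, 7, 4, 8, 6]
print(split_shift([6, 2, 4, 1, 5, 3], 1, 3, 3))   # [9, 3, 7, 2, 8, 6]
```

In particular, standardizing either shifted sequence recovers $\sigma$, as noted below.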
We observe that, for any permutation $\sigma$ and any values of $k$, $m$ and $k_1 < k_2$, we have $P(\sigma^{+k}) = P(\sigma^{+(k_1, m, k_2)}) = \sigma$. The following statement about $\S$ commuting with these operations is immediate.
\begin{prop} \label{prop:S-commute}
For any $\sigma \in \mathfrak{S}_n$, we have $\S(\sigma^{+k}) = \S(\sigma)^{+k}$ for any $k \in \mathbb{N}$, and we also have $\S(\sigma^{+(k_1, m, k_2)}) = \S(\sigma)^{+(k_1, m, k_2)}$ for any $0 \leq k_1 < k_2$ and $0 \leq m \leq n$.
\end{prop}
\begin{proof}
We observe that the operation of $\S$ only depends on the relative order of elements, which is the same in $\sigma^{+k}$ and $\sigma^{+(k_1, m, k_2)}$ as in $\sigma$.
\end{proof}
We now present a recursive decomposition of 2SSPs. Let $\pi \in \mathcal{T}_n$ be a 2SSP of size $n$. We suppose that $\pi = \pi_\ell \cdot n \cdot \pi_r$ with $\pi_\ell$ of length $k$. We define $\pi_1 = P(\pi_\ell)$ and $\pi_2 = P(\pi_r)$, and the decomposition is written as $D(\pi) = (\pi_1, \pi_2)$. Here, $\pi_1, \pi_2$ may be empty. The following proposition shows that $D$ is indeed a recursive decomposition.
\begin{prop} \label{prop:valid-decomp}
For $\pi \in \mathcal{T}_n$ with $n \geq 1$ and $D(\pi) = (\pi_1, \pi_2)$, we have $\pi_1, \pi_2 \in \{\epsilon\} \cup \mathcal{T}$.
\end{prop}
\begin{proof}
Since $\pi$ is two-stack sortable, $\S(\pi)$ is stack-sortable, hence avoids the pattern $231$ by Proposition~\ref{prop:av231}. By \eqref{eq:S-def}, $\S(\pi) = \S(\pi_\ell) \cdot \S(\pi_r) \cdot (n)$, so $\S(\pi_\ell)$ and $\S(\pi_r)$ also avoid $231$; as $\S$ commutes with standardization, so do $\S(\pi_1)$ and $\S(\pi_2)$. We thus conclude that $\pi_1$ and $\pi_2$ are either empty or in $\mathcal{T}$.
\end{proof}
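Proposition~\ref{prop:valid-decomp} can also be verified exhaustively for small sizes:

```python
from itertools import permutations

# Brute-force check of the decomposition D on lengths up to 5: for each
# two-stack sortable permutation, both parts of D are empty or
# two-stack sortable themselves.

def S(a):
    if not a:
        return []
    m = a.index(max(a))
    return S(a[:m]) + S(a[m + 1:]) + [a[m]]

def P(a):
    order = sorted(a)
    return [order.index(x) + 1 for x in a]

def is_2ssp(p):
    p = list(p)
    return bool(p) and S(S(p)) == sorted(p)

def D(p):
    m = p.index(max(p))
    return P(p[:m]), P(p[m + 1:])

ok = True
for n in range(1, 6):
    for p in permutations(range(1, n + 1)):
        if is_2ssp(p):
            p1, p2 = D(list(p))
            ok = ok and (not p1 or is_2ssp(p1)) and (not p2 or is_2ssp(p2))
print(ok)  # True
```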
Now, given $\pi_1, \pi_2$, we exhibit some (in fact, all, \textit{cf.} Proposition~\ref{prop:inverse-2}) possibilities of $\pi \in \mathcal{T}$ such that $D(\pi)=(\pi_1, \pi_2)$, using a new statistic on 2SSPs. Given $\pi \in \mathcal{T}$, we denote by $\operatorname{slmax}(\pi)$ the number of left-to-right maxima in $\S(\pi)$, \textit{i.e.}, the number of indices $i$ such that for all $j<i$ we have $\S(\pi)(i) > \S(\pi)(j)$. For example, with $\pi = (3,1,2,5,7,6,4)$, we have $\S(\pi) = (1,2,3,5,4,6,7)$, giving $\operatorname{slmax}(\pi) = 6$. We define $\operatorname{slmax}(\epsilon)=0$. Now suppose that $\pi_1 \in \mathcal{T}_k$ and $\pi_2 \in \mathcal{T}_\ell$. Let $t = \operatorname{slmax}(\pi_2)$, and $a_1, a_2, \ldots, a_t$ be the $t$ left-to-right maxima of $\S(\pi_2)$. We can construct elements in $\mathcal{T}_{k+\ell+1}$ in the following ways:
\begin{itemize}
\item $C_1(\pi_1, \pi_2) = \pi_1 \cdot (k+\ell+1) \cdot \pi_2^{+k}$;
\item $C_2(\pi_1, \pi_2, i) = \pi_1^{+(0,k,a_i)} \cdot (k + \ell + 1) \cdot \pi_2^{+(k-1,a_i+1,k)}$ for $1 \leq i \leq t$.
\end{itemize}
Both constructions are illustrated in Figure~\ref{fig:construction}. In $C_1(\pi_1, \pi_2)$, we allow $\pi_1$ and/or $\pi_2$ to be empty. In $C_2(\pi_1, \pi_2, i)$, both $\pi_1$ and $\pi_2$ must be non-empty. We now prove that our constructions are valid.
\begin{figure}
\begin{center}
\includegraphics[page=1, width=\textwidth]{figures.pdf}
\end{center}
\caption{Constructions $C_1$ and $C_2$}
\label{fig:construction}
\end{figure}
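On a small example, the constructions and the statistic $\operatorname{slmax}$ can be checked directly. With $\pi_1 = (1)$ and $\pi_2 = (1,2)$ (so $k=1$, $\ell=2$, $t=2$, $a_1=1$, $a_2=2$), the sketch below produces $C_1$ and both $C_2$ outputs, and confirms the $\operatorname{slmax}$ values predicted by Propositions~\ref{prop:c1-valid} and~\ref{prop:c2-valid}, together with the example value $\operatorname{slmax}(3,1,2,5,7,6,4)=6$.

```python
# The constructions C1 and C2 and the statistic slmax, on lists.

def S(a):
    if not a:
        return []
    m = a.index(max(a))
    return S(a[:m]) + S(a[m + 1:]) + [a[m]]

def slmax(p):
    count, best = 0, float("-inf")
    for x in S(list(p)):            # left-to-right maxima of S(p)
        if x > best:
            best, count = x, count + 1
    return count

def split_shift(a, k1, m, k2):
    return [x + (k1 if x < m else k2) for x in a]

def C1(p1, p2):
    k, l = len(p1), len(p2)
    return list(p1) + [k + l + 1] + [x + k for x in p2]

def C2(p1, p2, i):
    k, l = len(p1), len(p2)
    s2 = S(list(p2))
    lr_max = [x for j, x in enumerate(s2) if all(y < x for y in s2[:j])]
    a_i = lr_max[i - 1]             # the i-th left-to-right maximum
    return (split_shift(p1, 0, k, a_i) + [k + l + 1]
            + split_shift(p2, k - 1, a_i + 1, k))

pi1, pi2 = [1], [1, 2]
print(C1(pi1, pi2), slmax(C1(pi1, pi2)))         # [1, 4, 2, 3] 4
print(C2(pi1, pi2, 1), slmax(C2(pi1, pi2, 1)))   # [2, 4, 1, 3] 3
print(C2(pi1, pi2, 2), slmax(C2(pi1, pi2, 2)))   # [3, 4, 1, 2] 2
print(slmax([3, 1, 2, 5, 7, 6, 4]))              # 6
```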
\begin{prop} \label{prop:c1-valid}
Given $k, \ell \geq 0$, for any $\pi_1 \in \mathcal{T}_k$ and $\pi_2 \in \mathcal{T}_\ell$, let $\pi = C_1(\pi_1, \pi_2)$. Here we take the convention that $\mathcal{T}_0 = \{ \epsilon \}$. We have $\pi \in \mathcal{T}_{k+\ell+1}$. Furthermore, $\operatorname{slmax}(\pi) = \operatorname{slmax}(\pi_1) + \operatorname{slmax}(\pi_2) + 1$.
\end{prop}
\begin{proof}
We first observe that $\pi \in \mathfrak{S}_{k+\ell+1}$, since $\pi_1$ covers integers from $1$ to $k$, and $\pi_2^{+k}$ covers integers from $k+1$ to $k+\ell$. With Proposition~\ref{prop:S-commute}, and the fact that $\S(A \cdot B) = \S(A) \cdot \S(B)$ if every element of $A$ is smaller than all elements in $B$, we have
\begin{align*}
\S(\pi) &= \S(\pi_1) \cdot \S(\pi_2)^{+k} \cdot (k+\ell+1) \\
\S(\S(\pi)) &= \S(\S(\pi_1)) \cdot \S(\S(\pi_2))^{+k} \cdot (k+\ell+1) = \textrm{id}_{k+\ell+1}.
\end{align*}
Moreover, the left-to-right maxima of $\S(\pi)$ are exactly the left-to-right maxima of $\S(\pi_1)$, those of $\S(\pi_2)^{+k}$ (every element of which exceeds all elements of $\S(\pi_1)$), and the final element $k+\ell+1$, which gives $\operatorname{slmax}(\pi) = \operatorname{slmax}(\pi_1) + \operatorname{slmax}(\pi_2) + 1$. In the proof above, since we never specify any element in $\pi_1$ and $\pi_2$, the reasoning also works for $\pi_1$ and/or $\pi_2$ empty.
\end{proof}
\begin{prop} \label{prop:c2-valid}
Given $k, \ell > 0$, $\pi_1 \in \mathcal{T}_k$ and $\pi_2 \in \mathcal{T}_\ell$, let $t = \operatorname{slmax}(\pi_2)$, and $i$ be an integer between $1$ and $t$. Suppose that $a_i$ is the $i^{\rm th}$ left-to-right maximum of $\S(\pi_2)$. Then we have $\pi = C_2(\pi_1, \pi_2, i) \in \mathcal{T}_{k+\ell+1}$. Furthermore, $\operatorname{slmax}(\pi) = \operatorname{slmax}(\pi_1) +\operatorname{slmax}(\pi_2) - i + 1$.
\end{prop}
\begin{proof}
We first check that $\pi = \pi_1^{+(0,k,a_i)} \cdot (k+\ell+1) \cdot \pi_2^{+(k-1, a_i+1, k)}$ is in $\mathfrak{S}_{k+\ell+1}$. We see that the set of elements in $\pi_1^{+(0,k,a_i)}$ is $\{ j \mid 1 \leq j \leq k-1 \} \cup \{k+a_i\}$, and that of $\pi_2^{+(k-1, a_i+1, k)}$ is $\{ j \mid k \leq j \leq k+\ell, j \neq k+a_i\}$. We thus know that $\pi$ is indeed in $\mathfrak{S}_{k+\ell+1}$.
We now check that $\pi$ is in $\mathcal{T}_{k+\ell+1}$. With Proposition~\ref{prop:S-commute}, we have
\[
\S(\pi) = \S(\pi_1)^{+(0,k,a_i)} \cdot \S(\pi_2)^{+(k-1,a_i+1,k)} \cdot (k+\ell+1).
\]
Now we prove that $\tau = \S(\pi_1)^{+(0,k,a_i)} \cdot \S(\pi_2)^{+(k-1,a_i+1,k)}$ avoids the pattern $231$. Since $\pi_1, \pi_2 \in \mathcal{T}$, both $\S(\pi_1)$ and $\S(\pi_2)$ are stack-sortable, thus avoid $231$, and we only need to prove that there is no pattern $231$ across both parts. By construction, the first part $\S(\pi_1)^{+(0,k,a_i)}$ only has one element $k+a_i$ that is larger than some element in the second part $\S(\pi_2)^{+(k-1,a_i+1,k)}$. Therefore, it suffices to rule out triples of elements $b_3 < b_1 < b_2$ with $b_1$ in $\S(\pi_1)^{+(0,k,a_i)}$ and $b_2$ followed by $b_3$ in $\S(\pi_2)^{+(k-1, a_i+1, k)}$. By construction, we must have $b_1 = k+a_i$. But now, since $a_i$ is a left-to-right maximum of $\S(\pi_2)$, the element $b_2$ (thus $b_3$) must occur after $k-1+a_i$ in $\S(\pi_2)^{+(k-1, a_i+1, k)}$. If such elements $b_2, b_3$ exist, then $k-1+a_i, b_2, b_3$ is a pattern $231$ in $\S(\pi_2)^{+(k-1, a_i+1, k)}$, which is impossible. Therefore, $\tau$ avoids $231$, and hence so does $\S(\pi)$, which means $\pi \in \mathcal{T}_{k+\ell+1}$.
For the equality on $\operatorname{slmax}$, we observe that $\S(\pi_1)^{+(0,k,a_i)}$ contains $k+a_i$, which is larger than the first $i$ left-to-right maxima ($k - 1 + a_j$ for $j \leq i$) in $\S(\pi_2)^{+(k-1, a_i+1, k)}$.
\end{proof}
We now show that the constructions $C_1, C_2$ are the inverse of the decomposition $D$.
\begin{prop} \label{prop:inverse-1}
Given two permutations $\pi_1, \pi_2$ in $\mathcal{T}$, we have $D(C_1(\pi_1, \pi_2)) = (\pi_1, \pi_2)$, and $D(C_2(\pi_1, \pi_2, i)) = (\pi_1, \pi_2)$ for any $1 \leq i \leq \operatorname{slmax}(\pi_2)$.
\end{prop}
\begin{proof}
It is clear from the constructions of $C_1, C_2$ and $D$, with the fact that, for any permutation $\sigma$, we have $P(\sigma^{+k}) = P(\sigma^{+(k_1, m, k_2)}) = \sigma$.
\end{proof}
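The round-trip identity $D(C_\cdot(\pi_1, \pi_2)) = (\pi_1, \pi_2)$ can also be checked mechanically. The Python sketch below is an illustration only; it assumes that $\sigma^{+(k_1,m,k_2)}$ shifts the entries of $\sigma$ smaller than $m$ by $k_1$ and the remaining entries by $k_2$ (a reading consistent with the element sets computed in the proof of Proposition~\ref{prop:c2-valid}), and that $P$ is the standardization map:

```python
def stack_sort(pi):
    # one pass of the stack-sorting map S
    stack, out = [], []
    for x in pi:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    return tuple(out + stack[::-1])

def shift(pi, k):                      # sigma^{+k}
    return tuple(x + k for x in pi)

def split_shift(pi, k1, m, k2):        # sigma^{+(k1,m,k2)} (assumed semantics)
    return tuple(x + k1 if x < m else x + k2 for x in pi)

def pattern(seq):                      # standardization map P
    rank = {v: r + 1 for r, v in enumerate(sorted(seq))}
    return tuple(rank[v] for v in seq)

def C1(p1, p2):
    k, l = len(p1), len(p2)
    return p1 + (k + l + 1,) + shift(p2, k)

def C2(p1, p2, i):
    k, l = len(p1), len(p2)
    # a_i: the i-th left-to-right maximum of S(p2)
    best, maxima = 0, []
    for x in stack_sort(p2):
        if x > best:
            best = x
            maxima.append(x)
    a = maxima[i - 1]
    return (split_shift(p1, 0, k, a) + (k + l + 1,)
            + split_shift(p2, k - 1, a + 1, k))

def D(pi):                             # decomposition at the maximum entry
    j = pi.index(len(pi))
    return pattern(pi[:j]), pattern(pi[j + 1:])
```

For instance, with $\pi_1 = (2,1,3)$ and $\pi_2 = (1,2)$, `C2(p1, p2, 2)` gives `(2, 1, 5, 6, 3, 4)`, and `D` recovers $(\pi_1, \pi_2)$ from both `C1(p1, p2)` and `C2(p1, p2, 2)`.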
\begin{prop} \label{prop:inverse-2}
Let $\pi$ be a permutation in $\mathcal{T}$. Suppose that $D(\pi) = (\pi_1, \pi_2)$. Then either $\pi = C_1(\pi_1, \pi_2)$, or $\pi = C_2(\pi_1, \pi_2, i)$ for some $1 \leq i \leq \operatorname{slmax}(\pi_2)$.
\end{prop}
\begin{proof}
Let $n$ be the length of $\pi$. We have $\pi = \pi_\ell \cdot n \cdot \pi_r$, and $\S(\pi) = \S(\pi_\ell) \cdot \S(\pi_r) \cdot n$. We also have $\pi_1 = P(\pi_\ell)$ and $\pi_2 = P(\pi_r)$. We now consider elements in $\pi_\ell$ that are larger than the minimum of $\pi_r$. There may be zero, one or more such elements.
Suppose that no element in $\pi_\ell$ is larger than the minimum of $\pi_r$. In this case, $\pi = C_1(\pi_1, \pi_2)$, and $\pi_1$ and $\pi_2$ can be empty.
Now suppose that there is exactly one element $m$ in $\pi_\ell$ larger than the minimum of $\pi_r$. In this case, neither $\pi_\ell$ nor $\pi_r$ can be empty. It is clear that $m$ is the largest element in $\pi_\ell$. Let $R_-$ (resp. $R_+$) be the set of elements in $\pi_r$ that are smaller (resp. larger) than $m$. We know that $\S(\pi_\ell)$ ends in $m$, and we write $\S(\pi_\ell)$ as $\tau_1' \cdot m$. We now consider $\S(\pi)$ as
\[
\S(\pi) = \tau_1' \cdot m \cdot \S(\pi_r) \cdot n.
\]
Since $\S(\pi)$ is stack-sortable, it avoids the pattern $231$. But if an element $r_- \in R_-$ is preceded by an element $r_+ \in R_+$, then $m, r_+, r_-$ is a $231$ pattern. Therefore, we can write $\S(\pi_r) = \tau_r^- \tau_r^+$, where $\tau_r^-$ (resp. $\tau_r^+$) is composed of elements in $R_-$ (resp. $R_+$). The maximum element $m'$ in $\tau_r^-$ must be a left-to-right maximum of $\S(\pi_r)$. Suppose that $m'$ is the $i^{\rm th}$ left-to-right maximum of $\S(\pi_r)$. By the definition of $R_-$ and $R_+$, $m$ is strictly larger than all elements in $\tau_r^-$ and strictly smaller than those in $\tau_r^+$. Therefore, $\S(\pi_r)$ is of the form $\S(\pi_2)^{+(k-1,a_i+1,k)}$, where $k$ is the length of $\pi_\ell$ and $a_i = m'-k+1$ is the $i^{\rm th}$ left-to-right maximum of $\S(\pi_2)$. Since $\pi_2 = P(\pi_r)$, we thus have $\pi_r = \pi_2^{+(k-1,a_i+1,k)}$, which means $\pi = C_2(\pi_1, \pi_2, i)$.
In the case where there are at least two elements $m_1, m_2$ in $\pi_\ell$ larger than the minimum $m_3$ of $\pi_r$, we can take $m_2$ the maximum of $\pi_\ell$, and we must have the order $m_1, m_2, m_3$ in $\S(\pi)$, which is an impossible $231$ pattern. We thus conclude the case analysis.
\end{proof}
From the propositions above, under the recursive decomposition $D$, we can build all 2SSPs in a unique way using $\epsilon$ and the constructions $C_1, C_2$. We now study statistics on 2SSPs under these constructions. We first define several statistics, some of which were also studied in \cite{multistat}. Let $\sigma$ be a permutation. We denote by $\operatorname{lmax}(\sigma)$ (resp. $\operatorname{rmax}(\sigma)$) the number of left-to-right (resp. right-to-left) maxima of $\sigma$, \textit{i.e.}, the number of indices $i$ such that for all $j<i$ (resp. $j>i$), we have $\sigma(i) > \sigma(j)$. We also denote by $\operatorname{asc}(\sigma)$ (resp. $\operatorname{des}(\sigma)$) the number of \tdef{ascents} (resp. \tdef{descents}) in $\sigma$, \textit{i.e.}, the number of indices $i$ such that $\sigma(i) < \sigma(i+1)$ (resp. $\sigma(i) > \sigma(i+1)$). Finally, we denote by $\operatorname{sldes}(\sigma)$ the number of \tdef{left descents} in $\S(\sigma)$, \textit{i.e.}, elements $a$ preceding $a-1$ in $\S(\sigma)$. We take the convention that $\operatorname{lmax}(\epsilon) = \operatorname{rmax}(\epsilon) = \operatorname{asc}(\epsilon) = \operatorname{des}(\epsilon) = \operatorname{sldes}(\epsilon) = 0$. We also recall that $\operatorname{len}(\sigma)$ is the length of $\sigma$ as a sequence. The following proposition follows directly from the constructions.
\begin{prop} \label{prop:tsp-stats}
Given two non-empty permutations $\pi_1, \pi_2$, for any $i$ from $1$ to $\operatorname{slmax}(\pi_2)$, we have
\begin{align*}
\operatorname{lmax}(C_1(\pi_1, \pi_2)) = \operatorname{lmax}(C_2(\pi_1, \pi_2, i)) &= \operatorname{lmax}(\pi_1) + 1, \\
\operatorname{rmax}(C_1(\pi_1, \pi_2)) = \operatorname{rmax}(C_2(\pi_1, \pi_2, i)) &= 1 + \operatorname{rmax}(\pi_2), \\
\operatorname{asc}(C_1(\pi_1, \pi_2)) = \operatorname{asc}(C_2(\pi_1, \pi_2, i)) &= \operatorname{asc}(\pi_1) + 1 + \operatorname{asc}(\pi_2), \\
\operatorname{des}(C_1(\pi_1, \pi_2)) = \operatorname{des}(C_2(\pi_1, \pi_2, i)) &= \operatorname{des}(\pi_1) + 1 + \operatorname{des}(\pi_2), \\
\operatorname{len}(C_1(\pi_1, \pi_2)) = \operatorname{len}(C_2(\pi_1, \pi_2, i)) &= \operatorname{len}(\pi_1) + 1 + \operatorname{len}(\pi_2), \\
\operatorname{sldes}(C_1(\pi_1, \pi_2)) &= \operatorname{sldes}(\pi_1) + \operatorname{sldes}(\pi_2), \\
\operatorname{sldes}(C_2(\pi_1, \pi_2, i)) &= \operatorname{sldes}(\pi_1) + \operatorname{sldes}(\pi_2) + 1.
\end{align*}
Furthermore, when one of $\pi_1, \pi_2$ is empty, the formulas for $C_1(\pi_1, \pi_2)$ still hold, except that $\operatorname{asc}(C_1(\epsilon, \pi_2)) = \operatorname{asc}(\pi_2)$, and $\operatorname{des}(C_1(\pi_1, \epsilon)) = \operatorname{des}(\pi_1)$.
\end{prop}
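The statistics above are straightforward to compute; the following Python sketch (for illustration only) can be used to verify the formulas of Proposition~\ref{prop:tsp-stats} on concrete examples:

```python
def stack_sort(pi):
    # one pass of the stack-sorting map S
    stack, out = [], []
    for x in pi:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    return tuple(out + stack[::-1])

def lmax(s):
    # number of left-to-right maxima
    best, count = 0, 0
    for x in s:
        if x > best:
            best, count = x, count + 1
    return count

def rmax(s):
    # right-to-left maxima are left-to-right maxima of the reversal
    return lmax(s[::-1])

def asc(s):
    return sum(s[i] < s[i + 1] for i in range(len(s) - 1))

def des(s):
    return sum(s[i] > s[i + 1] for i in range(len(s) - 1))

def sldes(pi):
    # left descents: elements a preceding a-1 in S(pi)
    s = stack_sort(pi)
    pos = {v: i for i, v in enumerate(s)}
    return sum(pos[a] < pos[a - 1] for a in range(2, len(s) + 1))
```

For $\pi = (3,1,2,5,7,6,4)$ this gives $\operatorname{lmax}(\pi) = \operatorname{rmax}(\pi) = 3$, $\operatorname{asc}(\pi) = \operatorname{des}(\pi) = 3$, and $\operatorname{sldes}(\pi) = 1$ (the element $5$ precedes $4$ in $\S(\pi)$).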
\section{Bijection with fighting fish}
In \cite{fighting-fish-enum}, there is a recursive construction of fighting fish called the \emph{wasp-waist decomposition}, which we briefly describe here (and illustrate in Figure~\ref{fig:ff-construct}) for the sake of self-containment. Readers are referred to \cite{fighting-fish-enum} for a detailed definition.
\begin{figure}
\begin{center}
\includegraphics[scale=1, page=3]{figures.pdf}
\end{center}
\caption{Constructions $C^\bullet_1$ and $C^\bullet_2$ for fighting fish}
\label{fig:ff-construct}
\end{figure}
Given two non-empty fighting fish $P_1$ and $P_2$, we build a new fighting fish $C^\bullet_1(P_1, P_2)$ as illustrated in the upper half of Figure~\ref{fig:ff-construct}, by gluing the upper left edge of the head of $P_1$ to the last edge of the fin of $P_2$, and then adding a new cell to each lower left free edge on the fin. We can also define $C^\bullet_1(P_1, P_2)$ when $P_1$ or $P_2$ is empty (the empty fish being denoted by $\epsilon^\bullet$): $C^\bullet_1(\epsilon^\bullet,P_2)$ is obtained from $P_2$ by adding a new cell to each lower left free edge on the fin; $C^\bullet_1(P_1, \epsilon^\bullet)$ is $P_1$ with a new cell added to the upper left edge of its head; $C^\bullet_1(\epsilon^\bullet, \epsilon^\bullet)$ is the fighting fish with only the head. Now, suppose again that $P_1$ and $P_2$ are non-empty, and $P_2$ has fin-length $k+1$. We observe that $k \geq 1$, since the fin of a fighting fish has length at least $2$. We build $C^\bullet_2(P_1, P_2, i)$ with $1 \leq i \leq k$ as illustrated in the lower half of Figure~\ref{fig:ff-construct}. We first add a new cell to each lower left free edge among the first $k-i+1$ edges on the fin of $P_2$. Then, if the $(k-i+1)$-th edge $e$ is a lower right edge, we glue the head of $P_1$ to $e$, otherwise we glue the head of $P_1$ to the lower right edge of the new cell added to $e$.
It was proved in \cite{fighting-fish-enum} that every fighting fish can be uniquely constructed from $\epsilon^\bullet$ using the constructions $C^\bullet_1, C^\bullet_2$. We now look at some statistics on fighting fish. Given a fighting fish $P$, we denote by $\operatorname{fin}(P)$ the fin-length of $P$, by $\operatorname{size}(P)$ the number of lower free edges in $P$, by $\operatorname{lsize}(P)$ (resp. $\operatorname{rsize}(P)$) the number of left (resp. right) lower free edges in $P$, and by $\operatorname{tails}(P)$ the number of tails in $P$. We take the conventions that $\operatorname{fin}(\epsilon^\bullet)=\operatorname{lsize}(\epsilon^\bullet)=\operatorname{rsize}(\epsilon^\bullet)=\operatorname{size}(\epsilon^\bullet)=\operatorname{tails}(\epsilon^\bullet)=1$. We have the following observation from the definitions of $C^\bullet_1$ and $C^\bullet_2$.
\begin{prop} \label{prop:ff-stats}
Given two non-empty fighting fish $P_1, P_2$, for any $i$ from $1$ to $\operatorname{fin}(P_2)-1$, we have
\begin{align*}
\operatorname{fin}(C^\bullet_1(P_1, P_2)) &= \operatorname{fin}(P_1) + \operatorname{fin}(P_2), \\
\operatorname{fin}(C^\bullet_2(P_1, P_2, i)) &= \operatorname{fin}(P_1) + \operatorname{fin}(P_2) - i, \\
\operatorname{lsize}(C^\bullet_1(P_1, P_2)) = \operatorname{lsize}(C^\bullet_2(P_1, P_2, i)) &= \operatorname{lsize}(P_1) + \operatorname{lsize}(P_2), \\
\operatorname{rsize}(C^\bullet_1(P_1, P_2)) = \operatorname{rsize}(C^\bullet_2(P_1, P_2, i)) &= \operatorname{rsize}(P_1) + \operatorname{rsize}(P_2), \\
\operatorname{size}(C^\bullet_1(P_1, P_2)) = \operatorname{size}(C^\bullet_2(P_1, P_2, i)) &= \operatorname{size}(P_1) + \operatorname{size}(P_2), \\
\operatorname{tails}(C^\bullet_1(P_1, P_2)) &= \operatorname{tails}(P_1) - 1 + \operatorname{tails}(P_2), \\
\operatorname{tails}(C^\bullet_2(P_1, P_2,i)) &= \operatorname{tails}(P_1) + \operatorname{tails}(P_2).
\end{align*}
Furthermore, the formulas for $C^\bullet_1(P_1, P_2)$ hold for $P_1$ or $P_2$ empty, except that $\operatorname{lsize}(C^\bullet_1(\epsilon^\bullet, P_2)) = \operatorname{lsize}(P_2)$, and $\operatorname{rsize}(C^\bullet_1(P_1, \epsilon^\bullet)) = \operatorname{rsize}(P_1)$.
\end{prop}
Now we define our bijection $\phi$ recursively as follows, using both recursive decompositions of 2SSPs and fighting fish:
\begin{align}
\begin{split} \label{eq:phi-def}
\phi(\epsilon) &= \epsilon^\bullet, \\
\phi(C_1(\pi_1, \pi_2)) &= C_1^\bullet(\phi(\pi_1), \phi(\pi_2)), \\
\phi(C_2(\pi_1, \pi_2, i)) &= C_2^\bullet(\phi(\pi_1), \phi(\pi_2), i).
\end{split}
\end{align}
We can now prove our main result.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We will prove the required properties of $\phi$ as a map between the set of 2SSPs with $\epsilon$ added and the set of fighting fish with the ``empty fish'' $\epsilon^\bullet$ added.
We first prove by induction on $\operatorname{len}(\pi)$ that $\phi(\pi)$ is well-defined, with $\operatorname{slmax}(\pi) = \operatorname{fin}(\phi(\pi))-1$. The base case $\pi=\epsilon$ is clear. Now suppose that $\pi$ is not empty, and for every element $\pi' \in \mathcal{T}$ with $\operatorname{len}(\pi') < \operatorname{len}(\pi)$, we have $\phi(\pi')$ well-defined and $\operatorname{slmax}(\pi') = \operatorname{fin}(\phi(\pi'))-1$. When $\pi = C_1(\pi_1, \pi_2)$, we see that $\phi(\pi)$ is well-defined. For the case $\pi = C_2(\pi_1, \pi_2, i)$, by the induction hypothesis, we have $1 \leq i \leq \operatorname{slmax}(\pi_2) = \operatorname{fin}(\phi(\pi_2))-1$. Therefore, $\phi(\pi) = C^\bullet_2(\phi(\pi_1), \phi(\pi_2), i)$ is also well-defined. The equality $\operatorname{slmax}(\pi) = \operatorname{fin}(\phi(\pi))-1$ in both cases comes directly from Propositions~\ref{prop:c1-valid},~\ref{prop:c2-valid}~and~\ref{prop:ff-stats}. We thus conclude the induction. We note that, in the case $\pi = C_2(\pi_1, \pi_2, i)$ in the argument above, since $\operatorname{slmax}(\pi_2) = \operatorname{fin}(\phi(\pi_2))-1$, every possible value of $i$ in $C^\bullet_2(\phi(\pi_1), \phi(\pi_2), i)$ can be covered by some $\pi$. Therefore, combining with the fact that $C_1, C_2$ (resp. $C_1^\bullet, C_2^\bullet$) give a unique construction of 2SSPs (resp. fighting fish), we conclude that $\phi$ is a bijection.
To prove the correspondences of statistics $\operatorname{len}(\pi) + 1 = \operatorname{size}(\phi(\pi))$, $\operatorname{asc}(\pi) + 1 = \operatorname{lsize}(\phi(\pi))$, $\operatorname{des}(\pi) + 1= \operatorname{rsize}(\phi(\pi))$ and $\operatorname{sldes}(\pi) + 1 = \operatorname{tails}(\phi(\pi))$, we also proceed by induction on the length of $\pi$. We first check that all these agree with the (strange) conventions of 2SSPs and fighting fish. Then we conclude by comparing Proposition~\ref{prop:tsp-stats} against Proposition~\ref{prop:ff-stats}. Details are left to readers.
\end{proof}
Using our bijection, we also recover Proposition~\ref{prop:enum-refined} in a bijective way from known enumeration results on non-separable planar maps with $i$ vertices and $j$ faces in \cite{nsp-refined}. More precisely, these planar maps are sent to 2SSPs with $i$ descents and $j$ ascents by the bijection in \cite{goulden-west}, and then to fighting fish with $i$ right lower free edges and $j$ left lower free edges by our bijection $\phi$. The two statistics can be exchanged with map duality on non-separable planar maps, providing a combinatorial explanation of the symmetry.
\section{Generating function}
We now analyze the generating function of 2SSPs enriched with all the statistics we mentioned before, therefore also that of fighting fish. Let $T(t,x,u,v) \equiv T(t,x,u,v;p,q,s)$ be the generating function defined by
\[
T(t,x,u,v;p,q,s) = \sum_{n \geq 1} \sum_{\pi \in \mathcal{T}_n} t^{n} x^{\operatorname{slmax}(\pi)} u^{\operatorname{lmax}(\pi)} v^{\operatorname{rmax}(\pi)} p^{\operatorname{asc}(\pi)} q^{\operatorname{des}(\pi)} s^{\operatorname{sldes}(\pi)}.
\]
With the symbolic method, from Proposition~\ref{prop:tsp-stats} we have the following equation:
\begin{align}
\begin{split} \label{eq:multi-stat}
T(t,x,u,v) &= t x u v (1+qT(t,x,u,1))(1+pT(t,x,1,v)) \\
&\hspace{3em} + t x u v p q s T(t,x,u,1)\frac{T(t,x,1,v) - T(t,1,1,v)}{x-1}.
\end{split}
\end{align}
We notice that \eqref{eq:multi-stat} is similar to (2.1) in \cite{fighting-fish-enum}. We have the following result.
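Setting $x=u=v=p=q=s=1$ in $T$ yields the plain counting series of 2SSPs, whose coefficients $1, 2, 6, 22, 91, \ldots$ are given by West's formula $|\mathcal{T}_n| = \frac{2(3n)!}{(n+1)!\,(2n+1)!}$. The first few values can be checked by brute force (illustrative Python, feasible only for small $n$):

```python
from itertools import permutations

def stack_sort(pi):
    # one pass of the stack-sorting map S
    stack, out = [], []
    for x in pi:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    return tuple(out + stack[::-1])

def count_2ss(n):
    """Number of two-stack-sortable permutations of length n."""
    ident = tuple(range(1, n + 1))
    return sum(stack_sort(stack_sort(p)) == ident
               for p in permutations(ident))
```

Here `[count_2ss(n) for n in range(1, 5)]` returns `[1, 2, 6, 22]`, matching the closed formula.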
\begin{prop} \label{prop:algebraic}
The generating function $T(t,x,u,v;p,q,s)$ is algebraic in its variables.
\end{prop}
\begin{proof}
We solve (\ref{eq:multi-stat}) with the quadratic method in a way similar to that in \cite{multistat}, without giving computational details. We denote by $T_{abc}$ with $a,b,c \in \{0,1\}$ the specialization of $T(t,x,u,v)$ where $a=1$ (resp. $b=1$, $c=1$) stands for $x$ (resp. $u$, $v$) specialized to $1$. For instance, $T_{101}$ stands for $T(t,1,u,1)$. We now use this notation to construct the following system for $T$:
\begin{align}
T_{000} &= txuv(1+qT_{001})(1+pT_{010}) + txuvpqsT_{001} \frac{T_{010}-T_{110}}{x-1}, \label{eq:000} \\
T_{010} &= txv(1+qT_{011})(1+pT_{010}) + txvpqsT_{011} \frac{T_{010}-T_{110}}{x-1}, \label{eq:010} \\
T_{001} &= txu(1+qT_{001})(1+pT_{011}) + txupqsT_{001} \frac{T_{011}-T_{111}}{x-1}, \label{eq:001} \\
T_{011} &= tx(1+qT_{011})(1+pT_{011}) + txpqsT_{011} \frac{T_{011}-T_{111}}{x-1}. \label{eq:011}
\end{align}
Equation~(\ref{eq:011}) is quadratic in $T_{011}$, with the catalytic variable $x$. Therefore, $T_{111}$ and $T_{011}$ are algebraic in the related variables (see \cite{BMJ}), and can in particular be solved using the quadratic method. Then, Equation~(\ref{eq:001}) is linear in $T_{001}$, and depends further only on the known series $T_{111}$ and $T_{011}$. Therefore, $T_{001}$ is also algebraic in the related variables. Equation~(\ref{eq:010}) is linear in $T_{010}$, with the catalytic variable $x$, and depends further only on $T_{011}$, which is known to be algebraic. Therefore, $T_{110}$ and $T_{010}$ are both algebraic in all the related variables. Finally, from Equation~(\ref{eq:000}) we know that $T_{000}$ is a polynomial in all the variables and the algebraic series $T_{001}, T_{010}$ and $T_{110}$, and is therefore also algebraic itself.
\end{proof}
As a remark, the solution of \eqref{eq:multi-stat} is arguably simpler than that in \cite{multistat}, as there is only one divided difference.
\section{Discussion}
Our bijection $\phi$ is a first step towards a further combinatorial study of fighting fish and two-stack sortable permutations, whose properties are far from being well understood. For instance, flipping along the horizontal axis is an involution on fighting fish. Is this involution related to other involutions on related objects, such as map duality on non-separable planar maps, in a way similar to the case of $\beta$-(1,0) trees and synchronized intervals treated in \cite{duality}? How do all these involutions act on two-stack sortable permutations, which have no apparent involutive structure? We may also ask for recursive decompositions similar to the ones we have studied on other related objects. The conjecture at the end of \cite{fighting-fish-enum} also goes in this direction. As a final question, is there a non-recursive description or variant of the recursive bijection presented here? Such a direct variant would be useful in the structural study of related objects.
\section*{Acknowledgements}
The author thanks Mireille Bousquet-Mélou, Guillaume Chapuy and Gilles Schaeffer for their inspiring discussions and useful comments. The author also thanks the anonymous referees for their valuable comments.
\bibliographystyle{alpha}
\section{Introduction}
\IEEEPARstart{D}{ata} clustering is an unsupervised learning technique that aims to partition a set of data objects (i.e., data points) into a certain number of homogeneous groups \cite{frey07_ap,das08_tsmca,meap13,svstream13,yang15,wang16_tkde,Chen18_tsmcs,Zhang18_tsmcs,He18_tsmcs,wu17_Euler}. It is a fundamental yet very challenging topic in the field of data mining and machine learning, and has been successfully applied in a wide variety of areas, such as image processing \cite{jm00_ncut,Huang16_neucom}, community discovery \cite{Wang14_tsmcs,neiwalk14_tkde}, recommender systems \cite{rafa13_tsmcs,symeon16_tsmcs,zhao18_tsmcs} and text mining \cite{rajp14_tsmcs}. In the past few decades, a large number of clustering algorithms have been developed by exploiting various techniques \cite{jain10_survey}. Different algorithms may lead to very different clustering performances for a specific dataset. Each clustering algorithm has its own advantages as well as weaknesses. However, there is no single algorithm that is suitable for all data distributions and applications. Given a clustering task, it is generally not easy to choose a proper clustering algorithm for it, especially without prior knowledge. Even if a specific algorithm is given, it may still be very difficult to decide the optimal parameters for the clustering task.
Unlike the conventional practice that typically uses a single algorithm to produce a single clustering result, ensemble clustering has recently emerged as a powerful tool whose purpose is to combine multiple different clustering results (generated by different algorithms or the same algorithm with different parameter settings) into a probably better and more robust consensus clustering \cite{Fred05_EAC}. Ensemble clustering has been gaining increasing attention, and many ensemble clustering algorithms have been proposed in recent years \cite{huang14_weac,Yu14_pr,wu15_TKDE,huang15_ecfg,Huang16_TKDE,huang17_tcyb,Yu16_tkde_incremental,Kang16_kbs,huang17_iconip,liu17_tkde,Yu17_tkde,yu17_tcyb}. Despite its significant progress, there are still two challenging issues in the current research. First, most of the existing ensemble clustering algorithms investigate the ensemble information at the object-level, and often fail to explore the higher-level information in the ensemble of multiple base clusterings. Second, they mostly focus on the direct relationship in ensembles, such as direct intersections and pair-wise co-occurrence, but generally neglect the multi-scale indirect connections in the base clusterings, which may exhibit a negative influence on the robustness of their consensus clustering performances.
\begin{figure}[!t]
\begin{center}
{\subfigure[]
{\includegraphics[width=0.3\linewidth]{Figures_exampleFig_ensemble1}\label{fig:exampleEnsemble1}}}
{\subfigure[]
{\includegraphics[width=0.3\linewidth]{Figures_exampleFig_ensemble2}\label{fig:exampleEnsemble2}}}
{\subfigure[]
{\includegraphics[width=0.3\linewidth]{Figures_exampleFig_ensemble3}\label{fig:exampleEnsemble3}}}
\caption{The relationship between two objects $x_i$ and $x_j$ (a) if they appear in the same cluster, (b) if they appear in two different but intersected clusters, and (c) if they appear in two different clusters that are indirectly connected by some other clusters.}
\label{fig:exampleEnsemble}
\end{center}
\end{figure}
\begin{figure}[!t]
\begin{center}
{\subfigure[]
{\includegraphics[width=0.298\linewidth]{Figures_cmpMcSize_1}\label{fig:cmpMcSize1}}}
{\subfigure[]
{\includegraphics[width=0.336\linewidth]{Figures_cmpMcSize_2}\label{fig:cmpMcSize2}}}
{\subfigure[]
{\includegraphics[width=0.336\linewidth]{Figures_cmpMcSize_3}\label{fig:cmpMcSize3}}}
\caption{Statistics of the intersection fragments on the \emph{LR} datasets. (a) Numbers of data objects, clusters, and fragments as the ensemble size $M$ increases from 0 to 50. (b) Number of fragments in each size interval (with $M=20$). (c) Total size of fragments in each size interval (with $M=20$).}
\label{fig:cmpMcSize}
\end{center}
\end{figure}
In ensemble clustering, the direct co-occurrence relationship between objects is the most basic information. Fred and Jain \cite{Fred05_EAC} captured the co-occurrence relationship by presenting the concept of co-association matrix, which reflects how many times two objects occur in the \emph{same} cluster among the multiple base clusterings. The drawback of the conventional co-association matrix lies in that it only considers the direct co-occurrence relationship, yet lacks the ability to take into consideration the rich information of indirect connections in ensembles. As shown in Fig.~\ref{fig:exampleEnsemble}, if two objects occur in the same cluster in a base clustering, then we say these two objects are directly connected. If two objects are in two different clusters \emph{and} the two clusters are directly or indirectly related to each other, then we say these two objects are indirectly connected. The challenge here is two-fold: (i) how to improve the object-wise relationship by exploiting the higher-level (e.g., cluster-level) connections; (ii) how to explore the direct and indirect structural relationship in a unified model. To partially address this, Iam-On et al. \cite{iam_on11_linkbased} proposed the weighted connected triple (WCT) method to incorporate the common neighborhood information between clusters into the conventional co-association matrix, which exploits the direct neighborhood information between clusters but cannot utilize their indirect neighboring connections. Further, Iam-On et al. \cite{iamon08_icds} took advantage of the SimRank similarity (SRS) to investigate the indirect connections for refining the co-association matrix, which, however, suffers from its very high computational cost and is not feasible for large datasets. More recently, Huang et al. \cite{Huang16_TKDE} investigated the ensemble information by performing the random walk on a set of data fragments (also known as microclusters). 
Specifically, the set of data fragments are generated by intersecting the cluster boundaries of multiple base clusterings, and can be used as a set of basic operating units in the consensus process \cite{Huang16_TKDE}. Although using data fragments instead of original objects may provide better computational efficiency, the approach in \cite{Huang16_TKDE} still suffers from two limitations. In one aspect, when the ensemble size (i.e., the number of base clusterings) grows larger, the number of the generated fragments may increase dramatically (as shown in Fig.~\ref{fig:cmpMcSize1}), which eventually leads to a rapidly increasing computational burden. In another aspect, by intersecting the cluster boundaries of multiple base clusterings, the generated fragments may be associated with very \emph{imbalanced} sizes. As an example, we use twenty base clusterings on the \emph{Letter Recognition} (\emph{LR}) dataset to generate a set of data fragments. Different intersection fragments may have very different sizes, i.e., they may consist of very different numbers of data objects. The number of the fragments in each size interval is illustrated in Fig.~\ref{fig:cmpMcSize2}, while the total size of fragments in each size interval is shown in Fig.~\ref{fig:cmpMcSize3}. It can be observed that over 80 percent of the fragments have a very small size (of 1 or 2), whereas only 1.72 percent of the total fragments have a size greater than 20. However, the 1.72 percent of these large fragments surprisingly amounts to as large as 36.45 percent of the entire set of objects, which shows the heavy imbalance of the fragment sizes and places an unstable factor on the overall consensus process. 
Despite the efforts that these algorithms have made \cite{Huang16_TKDE,iam_on11_linkbased,iamon08_icds}, it remains an open problem how to effectively and efficiently investigate the higher-level ensemble information as well as incorporate multi-scale direct and indirect connections in ensembles for enhancing the consensus clustering performance.
To address the aforementioned challenging issues, in this paper, we propose a new ensemble clustering approach based on fast propagation of cluster-wise similarities via random walks. Different from the existing techniques that work at the object-level \cite{Fred05_EAC} or the fragment-level \cite{Huang16_TKDE}, in this paper, we explore the rich information of the ensembles at the base-cluster-level with multi-scale integration and cluster-object mapping. Specifically, a cluster similarity graph is first constructed by treating the base clusters as graph nodes and using the Jaccard coefficient to build the weighted edges. By defining a transition probability matrix, the random walk process is then performed to explore the multi-scale structural information in the cluster similarity graph. Thereafter, a new cluster-wise similarity matrix can be derived by utilizing the random walk trajectories starting from different nodes in the original graph. Further, an enhanced co-association (ECA) matrix is constructed by mapping the newly obtained cluster-wise similarity back to the object-level. Finally, by performing the partitioning process at the object-level and at the cluster-level, respectively, two novel consensus functions are therefore proposed, i.e., ensemble clustering by propagating cluster-wise similarities with hierarchical consensus function (ECPCS-HC) and ensemble clustering by propagating cluster-wise similarities with meta-cluster based consensus function (ECPCS-MC). Extensive experiments have been conducted on a variety of real-world datasets, which demonstrate the effectiveness and efficiency of our ensemble clustering approach when compared to the state-of-the-art approaches.
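To make the pipeline concrete, the following minimal Python sketch builds the Jaccard-weighted cluster similarity graph and iterates the random walk on it. It is an illustration only: the number of walk steps and the final aggregation of the walk trajectories into a cluster-wise similarity are simplified here and do not reproduce the exact design choices of the proposed approach.

```python
def jaccard(a, b):
    """Jaccard coefficient between two clusters (sets of object indices)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def base_clusters(base_clusterings):
    """Collect every cluster of every base clustering as a set of objects.
    Each base clustering is a list of labels over the same objects."""
    clusters = []
    for labels in base_clusterings:
        for c in sorted(set(labels)):
            clusters.append({i for i, l in enumerate(labels) if l == c})
    return clusters

def transition_matrix(clusters):
    """Row-normalized Jaccard similarities between distinct clusters."""
    n = len(clusters)
    W = [[0.0 if i == j else jaccard(clusters[i], clusters[j])
          for j in range(n)] for i in range(n)]
    return [[w / sum(row) if sum(row) > 0 else 0.0 for w in row]
            for row in W]

def multi_step(P, steps):
    """Distribution of a 'steps'-step random walk from every cluster node."""
    n = len(P)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(steps):
        R = [[sum(R[i][k] * P[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return R
```

The multi-step distributions returned by `multi_step` encode direct as well as indirect connections between base clusters at multiple scales; comparing the trajectories of two cluster nodes then gives the refined cluster-wise similarity, which is finally mapped back to object pairs to form the ECA matrix.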
For clarity, the main contributions of this paper are summarized as follows:
\begin{itemize}
\item A new cluster-wise similarity measure is derived, which captures the higher-level ensemble information and incorporates the multi-scale indirect connections by means of the random walk process starting from each cluster node.
\item An enhanced co-association matrix is presented based on the cluster-object mapping, which simultaneously reflects the object-wise co-occurrence relationship as well as the cluster-wise structural information.
\item Two novel consensus functions are devised, namely, ECPCS-HC and ECPCS-MC, which perform the partitioning process at the object-level and at the cluster-level, respectively, to obtain the final consensus clustering.
\item Experiments on multiple datasets have shown the superiority of the proposed approach over the existing ensemble clustering approaches.
\end{itemize}
The remainder of the paper is organized as follows. Section~\ref{sec:related_work} reviews the related work on ensemble clustering. Section~\ref{sec:formulation} provides the formulation of the ensemble clustering problem. Section~\ref{sec:propagation_of_CSG} describes the construction of the cluster similarity graph and the random walk propagation for multi-scale integration. Section~\ref{sec:mapping_similarity} presents the enhanced co-association matrix by mapping cluster-wise similarities to object-wise similarities. Section~\ref{sec:consensus_function} proposes two novel consensus functions in our cluster-wise similarity propagation framework. Section~\ref{sec:experiments} reports the experimental results. Finally, Section~\ref{sec:conclusion} concludes this paper.
\section{Related Work}
\label{sec:related_work}
Ensemble clustering aims to combine a set of multiple base clusterings into a better and more robust consensus clustering result \cite{Fred05_EAC}. In the past decade, many ensemble clustering algorithms have been proposed \cite{huang15_ecfg,Huang16_TKDE,huang17_tcyb,huang17_iconip,liu17_tkde,Yu17_tkde,strehl02,fern04_bipartite,topchy05,li07,Li_WCC08,Mimaroglu11_pr,yi_icdm12,franek13_pr,Zhong15_pr,liu17_bioinformatics}, which can be classified into three main categories, namely, the pair-wise co-occurrence based algorithms \cite{Fred05_EAC,li07,yi_icdm12}, the graph partitioning based algorithms \cite{strehl02,fern04_bipartite,Mimaroglu11_pr}, and the median partition based algorithms \cite{huang15_ecfg,topchy05,Li_WCC08,franek13_pr}.
The pair-wise co-occurrence based algorithms \cite{Fred05_EAC,li07,yi_icdm12} typically build a co-association matrix by considering the frequency that two objects occur in the same cluster among the multiple base clusterings. By treating the co-association matrix as the similarity matrix, the hierarchical agglomerative clustering algorithms \cite{jain10_survey} can be used to obtain the consensus result. Fred and Jain \cite{Fred05_EAC} for the first time presented the concept of co-association matrix and designed the evidence accumulation clustering (EAC) method. Then, Li et al. \cite{li07} extended the EAC method by presenting a new hierarchical agglomerative clustering algorithm that takes the sizes of clusters into consideration via normalized edges. Yi et al. \cite{yi_icdm12} dealt with the uncertain entries in the co-association matrix by exploiting the matrix completion technique to improve the robustness of the consensus clustering.
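As a concrete illustration, the co-association matrix of \cite{Fred05_EAC} can be computed in a few lines (a plain-Python sketch; each base clustering is given as a list of cluster labels over the same set of objects):

```python
def co_association(base_clusterings):
    """Entry (i, j) is the fraction of base clusterings in which
    objects i and j are assigned to the same cluster."""
    M, n = len(base_clusterings), len(base_clusterings[0])
    counts = [[0] * n for _ in range(n)]
    for labels in base_clusterings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    counts[i][j] += 1
    return [[c / M for c in row] for row in counts]
```

Treating the resulting matrix as a pairwise similarity, a hierarchical agglomerative clustering then yields the consensus partition, as in the EAC method.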
The graph partitioning based algorithms \cite{strehl02,fern04_bipartite,Mimaroglu11_pr} formulate the clustering ensemble into a graph model and obtain the consensus clustering by segmenting the graph into a certain number of subsets. Strehl and Ghosh \cite{strehl02} treated each cluster in the set of base clusterings as a hyper-edge and proposed three graph partitioning based ensemble clustering algorithms. Fern and Brodley \cite{fern04_bipartite} built a bipartite graph by treating both clusters and objects as graph nodes, which is then partitioned via the METIS algorithm to obtain the consensus clustering. Mimaroglu and Erdil \cite{Mimaroglu11_pr} constructed a similarity graph between data objects and partitioned the graph by finding pivots and growing clusters.
The median partition based algorithms cast the ensemble clustering problem into an optimization problem which aims to find a median partition (or clustering) such that the similarity between this clustering and the set of base clusterings is maximized \cite{huang15_ecfg,topchy05,Li_WCC08,franek13_pr}. To deal with the median partition problem, which is NP-hard \cite{topchy05}, Topchy et al. \cite{topchy05} utilized the EM algorithm to find an approximate solution for it. Li et al. \cite{Li_WCC08} formulated the ensemble clustering problem into a nonnegative matrix factorization (NMF) problem and proposed the weighted consensus clustering (WCC) method. Franek et al. \cite{franek13_pr} cast the ensemble clustering problem into an Euclidean median problem and obtained an approximate solution via the Weiszfeld algorithm \cite{Weiszfeld09}. Huang et al. \cite{huang15_ecfg} formulated the ensemble clustering problem into a binary linear programming problem and solved it via the factor graph model \cite{Kschischang_FG_SPA:01}.
Despite the fact that significant progress has been made in the ensemble clustering research in recent years \cite{huang15_ecfg,Huang16_TKDE,huang17_tcyb,strehl02,fern04_bipartite,topchy05,li07,Li_WCC08,Mimaroglu11_pr,yi_icdm12,franek13_pr,ren13_icdm,yu15_tkde,yu15_tcbb,Ren17_kais}, there are still two challenging issues in most of the existing algorithms. First, they mostly investigate the ensemble information at the object-level, but often fail to go beyond the object-level to explore the information at higher levels of granularity in the ensemble. Second, many of them only consider the direct connections in ensembles and lack the ability to incorporate the multi-scale (indirect) connections for improving the consensus robustness. To (partially) address this, Iam-On et al. \cite{iam_on11_linkbased} proposed to refine the co-association matrix by considering the common neighborhood information between clusters, which in fact exploits the one-step indirect connections yet still neglects the multi-step (or multi-scale) indirect connections in ensembles. Further, Iam-On et al. \cite{iamon08_icds} exploited the SimRank similarity (SRS) to incorporate the multi-scale neighborhood information in ensembles, which unfortunately suffers from a very high computational cost and is not feasible for large datasets. Huang et al. \cite{Huang16_TKDE} proposed to explore the structural information in ensembles by conducting random walks on the data fragments that are generated by intersecting the cluster boundaries of multiple base clusterings. However, on one hand, the number of fragments increases dramatically as the number of base clusterings grows, which may impose a very heavy computational burden \cite{Huang16_TKDE}. On the other hand, the potentially imbalanced nature of the fragments (as shown in Figs.~\ref{fig:cmpMcSize2} and \ref{fig:cmpMcSize3}) also undermines the robustness of the overall consensus clustering process.
Moreover, while working at the fragment-level, the approach in \cite{Huang16_TKDE} still lacks the desired ability to investigate the multi-scale cluster-wise relationship in ensembles. Although considerable efforts have been made \cite{Huang16_TKDE,iam_on11_linkbased,iamon08_icds}, it remains very challenging to tackle the aforementioned two issues both effectively and efficiently for the ensemble clustering problem.
\section{Problem Formulation}
\label{sec:formulation}
Ensemble clustering is the process of combining multiple base clusterings into a better consensus clustering result. Let $\mathcal{X}=\{x_1,\cdots,x_N\}$ denote a dataset with $N$ objects, where $x_i$ is the $i$-th object. Let $\Pi=\{\pi^1,\cdots,\pi^M\}$ denote a set of $M$ base clusterings for the dataset, where $\pi^m=\{C_1^m,\cdots,C_{n^m}^m\} $
is the $m$-th base clustering, $C_i^m$ is the $i$-th cluster in $\pi^m$, and $n^m$ is the number of clusters in $\pi^m$.
For clarity, the set of all clusters in the clustering ensemble $\Pi$ is denoted as $\mathcal{C} = \{C_1,\cdots,C_{N_c}\}$,
where $C_i$ is the $i$-th cluster and $N_c$ is the total number of clusters in $\Pi$. Obviously, it holds that $N_c=\sum_{m=1}^Mn^m$.
Formally, the objective of ensemble clustering is to integrate the information of the ensemble of multiple base clusterings in $\Pi$ to build a better clustering result $\pi^*$.
\section{Propagation of Cluster-wise Similarities}
\label{sec:propagation_of_CSG}
In ensemble clustering, each base clustering consists of a certain number of base clusters. To capture the base cluster information, a commonly used strategy is to map the base cluster labels to the object-level \cite{Fred05_EAC} (or fragment-level \cite{Huang16_TKDE}) by building a co-association matrix, which reflects how many times two objects (or two fragments) are grouped in the same cluster among the multiple base clusterings. The straightforward mapping from the base cluster labels to the object-wise (or fragment-wise) co-association matrix implicitly assumes that different clusters are independent of each other, but fails to consider the potentially rich information hidden in the relationship between different clusters. In light of this, we aim to effectively and efficiently investigate the multi-scale direct and indirect relationship between base clusters in the ensemble, so as to achieve better and more robust consensus clustering results. Toward this end, two sub-problems should first be solved, i.e., (i) how to define the initial similarity between clusters and (ii) how to incorporate the multi-scale information to construct more robust cluster-wise similarity.
Since a cluster is a set of data objects, the initial relationship between clusters can be investigated by the Jaccard coefficient \cite{Levandowsky1971}, which measures the similarity between two sets by considering their intersection size and union size. Formally, the Jaccard coefficient between two clusters (or sets), say, $C_i$ and $C_j$, is computed as \cite{Levandowsky1971}
\begin{equation}
\label{eq:jaccard}
Jaccard(C_i,C_j)=\frac{|C_i\bigcap C_j|}{|C_i\bigcup C_j|},
\end{equation}
where $\bigcap$ denotes the intersection of two sets, $\bigcup$ denotes the union of two sets, and $|\cdot|$ denotes the number of objects in a set. By adopting the Jaccard coefficient as the similarity measure between clusters, an initial cluster similarity graph is constructed for the ensemble with each cluster treated as a graph node. That is
\begin{equation}
\label{eq:cls_graph}
\mathcal{G}=\{\mathcal{V}, \mathcal{E}\},
\end{equation}
where $\mathcal{V}=\mathcal{C}$ is the node set and $\mathcal{E}$ is the edge set in the graph $\mathcal{G}$. The weight of an edge between two nodes $C_i,C_j\in\mathcal{V}$ is computed as
\begin{equation}
\label{eq:eij}
e_{ij}=Jaccard(C_i,C_j).
\end{equation}
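For concreteness, the construction of the initial cluster similarity graph can be sketched in Python (the helper names below are ours, not part of the formal description), with each cluster represented as a set of object indices:

```python
import numpy as np

def jaccard(ci, cj):
    """Jaccard coefficient between two clusters given as sets of object indices."""
    inter = len(ci & cj)
    if inter == 0:
        return 0.0
    return inter / len(ci | cj)

def cluster_similarity_graph(clusters):
    """Edge-weight matrix E = {e_ij} over all N_c clusters in the ensemble."""
    n_c = len(clusters)
    E = np.zeros((n_c, n_c))            # zero diagonal, consistent with p_ii = 0 later
    for i in range(n_c):
        for j in range(i + 1, n_c):
            E[i, j] = E[j, i] = jaccard(clusters[i], clusters[j])
    return E
```

Here \texttt{clusters} would gather all base clusters from the $M$ base clusterings in $\Pi$.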
With the initial similarity graph constructed, the next step is to incorporate the multi-scale information in the graph to enhance the cluster-wise similarity. In particular, the random walk process is performed on the graph, which is a dynamic process that transits from a node to one of its neighbors at each step with a certain probability \cite{neiwalk14_tkde,Huang16_TKDE,lovasz1993random,newman04,pon05_rw,lai_PRE10}. A crucial task in random walk is to construct the transition probability matrix, which determines the probability of the random walker transiting from one node to another. In this paper, the transition probability matrix $P=\{p_{ij}\}_{N_c\times N_c}$ on the graph is computed as follows:
\begin{align}
\label{eq:pij}
p_{ij} = \begin{cases}\frac{e_{ij}}{\sum_{C_k\neq C_i}e_{ik}},&\text{if~} i\neq j,\\
0, &\text{if~} i=j,
\end{cases}
\end{align}
where $p_{ij}$ is the probability that a random walker transits from node $C_i$ to node $C_j$ in one step, which is proportional to the edge weight between them. Based on the one-step transition probability matrix, we can obtain the multi-step transition probability matrix $P^{(t)}=\{p^{(t)}_{ij}\}_{N_c\times N_c}$ for the random walkers on the graph. That is
\begin{align}
P^{(t)}=\begin{cases}P,&\text{if~}t=1,\\
P^{(t-1)}\cdot P, &\text{if~}t>1.
\end{cases}
\end{align}
Note that the $(i,j)$-th entry in $P^{(t)}$, i.e., $p^{(t)}_{ij}$, denotes the probability of a random walker transiting from node $C_i$ to node $C_j$ in $t$ steps. We denote the $i$-th row in $P^{(t)}$ as $P^{(t)}_{i:}=\{p^{(t)}_{i1},p^{(t)}_{i2},\cdots,p^{(t)}_{iN_c}\}$, which represents the probability distribution of a random walker transiting from $C_i$ to all the other nodes in $t$ steps. As different step-lengths of random walkers can reflect the graph structure information at different scales \cite{Huang16_TKDE,lai_PRE10}, to capture the multi-scale information in the graph $\mathcal{G}$, the random walk trajectories at different steps are exploited here to refine the cluster-wise similarity.
Formally, for the random walker starting from a node $C_i$, its random walk trajectory from step 1 to step $t$ is denoted as
$P^{(1:t)}_{i:}=\{P^{(1)}_{i:},P^{(2)}_{i:},\cdots,P^{(t)}_{i:}\}$.
Obviously, the $t$-step random walk trajectory (i.e., $P^{(1:t)}_{i:}$), starting from node $C_i$ and having a step-length $t$, is an $N_c\cdot t$-tuple, which captures the multi-scale (or multi-step) structural information in the neighborhood of $C_i$. With the random walk trajectory of each node obtained, a new similarity measure can thereby be derived for every two nodes by considering the similarity of their random walk trajectories. Specifically, the new similarity matrix between all of the clusters in $\Pi$ is represented as
\begin{equation}
Z=\{z_{ij}\}_{N_c\times N_c},
\end{equation}
where
\begin{equation}
\label{eq:PTS}
z_{ij}=Sim(P^{(1:t)}_{i:},P^{(1:t)}_{j:})
\end{equation}
denotes the new similarity between two clusters $C_i$ and $C_j$. Note that $Sim(\cdot,\cdot)$ can be any similarity measure between two vectors. In this paper, the cosine similarity \cite{tan2005introduction} is adopted. Thus, the new similarity measure between $C_i$ and $C_j$ can be computed as
\begin{align}
\label{eq:PTS_cos}
z_{ij}=\frac{<P^{(1:t)}_{i:},P^{(1:t)}_{j:}>}{\sqrt{<P^{(1:t)}_{i:},P^{(1:t)}_{i:}>\cdot <P^{(1:t)}_{j:},P^{(1:t)}_{j:}>}},
\end{align}
where $<\cdot,\cdot>$ outputs the dot product of two vectors. Since the entries in the transition probability matrix are always non-negative, it holds that $z_{ij}\in [0,1]$ for any two clusters $C_i$ and $C_j$ in $\Pi$.
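The propagation steps above (one-step transitions, multi-step powers, trajectory stacking, and cosine similarity) can be sketched as follows; the function name and the guard for isolated nodes are our own assumptions:

```python
import numpy as np

def trajectory_similarity(E, t=20):
    """Cluster-wise similarity Z from t-step random-walk trajectories on graph E.

    E is the (N_c x N_c) symmetric edge-weight matrix with zero diagonal, so
    the resulting one-step transition matrix satisfies p_ii = 0 as required.
    """
    row_sums = E.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0       # guard for isolated nodes (assumption)
    P = E / row_sums                    # one-step transition matrix
    Pt = P.copy()
    traj = [Pt]
    for _ in range(t - 1):
        Pt = Pt @ P                     # P^(t) = P^(t-1) . P
        traj.append(Pt)
    T = np.hstack(traj)                 # row i is the (N_c * t)-tuple P_i:^(1:t)
    norms = np.linalg.norm(T, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Tn = T / norms
    return Tn @ Tn.T                    # z_ij = cosine similarity of trajectories
```

Since all transition probabilities are non-negative, the returned entries lie in $[0,1]$, matching the discussion above.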
\section{Enhanced Co-association Matrix Based on Similarity Mapping}
\label{sec:mapping_similarity}
Having obtained the new cluster-wise similarity matrix $Z$, we proceed to map the new similarity matrix from the cluster-level to the object-level, and describe the enhanced co-association representation in this section.
The conventional co-association matrix \cite{Fred05_EAC} is a widely used data structure to capture the object-wise similarity in the ensemble clustering problem. Given the clustering ensemble $\Pi$, the (direct) pair-wise relationship in the $m$-th base clustering (i.e., $\pi^m$) can be represented by a connectivity matrix, which is computed as follows:
\begin{align}
A^m &=\{a^m_{ij}\}_{N\times N},\\
a^m_{ij} &= \begin{cases} 1, &\text{if~}Cls^m(x_i)=Cls^m(x_j),\\
0, &\text{otherwise,}
\end{cases}
\end{align}
where $Cls^m(x_i)$ denotes the cluster in $\pi^m$ that contains the object $x_i$. Obviously, if $C_j\in\pi^m$ and $x_i\in C_j$, then $Cls^m(x_i)=C_j$. Then, the conventional co-association matrix $A=\{a_{ij}\}_{N\times N}$ for the entire ensemble is computed as follows:
\begin{align}
A = \frac{1}{M}\sum_{m=1}^{M}A^m.
\end{align}
The conventional co-association matrix reflects the number of times that two objects appear in the same cluster among the multiple base clusterings. Although it is able to exploit the (direct) cluster-level information by investigating the object-wise co-occurrence relationship, it inherently treats each cluster as an independent entity and thus neglects the potential relationship between \emph{different} clusters, which may provide rich information for further refining the object-wise connections. In light of this, with the multi-scale cluster-wise relationship explored by random walks in Section~\ref{sec:propagation_of_CSG}, the key problem in this section is how to map the multi-scale cluster-wise relationship back to the object-level.
In particular, we present an enhanced co-association (ECA) matrix to simultaneously capture the object-wise co-occurrence relationship and the multi-scale cluster-wise similarity. Before the construction of the ECA matrix for the entire ensemble, we first take advantage of the newly designed cluster-wise similarity matrix $Z$ to build the enhanced connectivity matrix for a single base clustering, say, $\pi^m$. That is
\begin{align}
B^m &=\{b^m_{ij}\}_{N\times N},\\
b^m_{ij} &= \begin{cases} 1, &\text{if~}Cls^m(x_i)=Cls^m(x_j),\\
z_{uv}, &\text{if~}Cls^m(x_i)\neq Cls^m(x_j),\\
\end{cases}
\end{align}
with
\begin{align}
Cls^m(x_i)&=C^m_u, Cls^m(x_j)=C^m_v.
\end{align}
Note that the $(i,j)$-th entry in $B^m$ and the $(i,j)$-th entry in $A^m$ will be the same \emph{only when} $x_i$ and $x_j$ occur in the same cluster in $\pi^m$. The difference between $B^m$ and $A^m$ arises when two objects belong to different clusters in a base clustering, in which situation the conventional connectivity matrix $A^m$ lacks the ability to go beyond the direct co-occurrence relationship to exploit further cluster-wise connections. Different from the conventional connectivity matrix, when two objects belong to two different clusters in a base clustering, the enhanced connectivity matrix $B^m$ is still able to capture their indirect relationship by investigating the correlation between the two clusters that these two objects respectively belong to.
With the enhanced connectivity matrix for each base clustering constructed, the ECA matrix, denoted as $B=\{b_{ij}\}_{N\times N}$, for the entire ensemble $\Pi$ can be computed as follows:
\begin{align}
B = \frac{1}{M}\sum_{m=1}^{M}B^m.
\end{align}
With $z_{ij}\in [0,1]$, it is obvious that all entries in the ECA matrix are in the range of $[0,1]$. By the construction of the ECA matrix, the cluster-wise similarity in $Z$ is mapped from the cluster-level to the object-level. It is noteworthy that the ECA matrix can be utilized in any co-association matrix based consensus functions. In particular, two new consensus functions will be designed in the next section.
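A minimal sketch of the ECA construction, assuming each base clustering is given as a length-$N$ label vector and that local cluster indices are mapped to global row/column indices of $Z$ via per-clustering offsets (these conventions are ours):

```python
import numpy as np

def enhanced_coassociation(base_labels, Z, offsets):
    """Enhanced co-association matrix B, averaged over the M base clusterings.

    base_labels[m][i] is the local cluster index of object i in clustering m;
    offsets[m] + local index gives the global index into Z.
    """
    M = len(base_labels)
    N = len(base_labels[0])
    B = np.zeros((N, N))
    for m, labels in enumerate(base_labels):
        labels = np.asarray(labels)
        g = offsets[m] + labels                    # global cluster index per object
        Bm = Z[np.ix_(g, g)].copy()                # z_uv for the clusters of x_i, x_j
        Bm[np.equal.outer(labels, labels)] = 1.0   # b_ij = 1 if co-clustered
        B += Bm
    return B / M
```

The resulting matrix can be plugged into any co-association based consensus function, as noted above.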
\section{Two Types of Consensus Functions}
\label{sec:consensus_function}
In this section, we propose two consensus functions to obtain the final consensus clustering in the proposed ensemble clustering by propagating cluster-wise similarities (ECPCS) framework. The first consensus function is described in Section~\ref{sec:ECPCS_HC}, which is based on hierarchical clustering and performs the partitioning process at the object-level, while the second consensus function is described in Section~\ref{sec:ECPCS_MC}, which is based on meta-clustering and performs the partitioning process at the cluster-level.
\subsection{ECPCS-HC}
\label{sec:ECPCS_HC}
In this section, we describe our first consensus function termed ECPCS-HC, short for ECPCS with hierarchical consensus function. By treating the ECA matrix as the new object-wise similarity matrix, the hierarchical agglomerative clustering can be performed to obtain the consensus clustering in an iterative region merging fashion. The original objects are viewed as the set of initial regions, that is
\begin{align}
\mathcal{R}^{(0)}=\{R^{(0)}_1, \cdots, R^{(0)}_N\},
\end{align}
where $R^{(0)}_i=\{x_i\}$ denotes the $i$-th initial region that contains exactly one object $x_i$. The similarity matrix for the set of initial regions is defined as
\begin{align}
S^{(0)}&=\{s^{(0)}_{ij}\}_{N\times N},\\
s^{(0)}_{ij}&=b_{ij}.
\end{align}
With the initial region set and its similarity matrix obtained, the region merging process is then performed iteratively. In each iteration, the two regions with the highest similarity are merged into a new and larger region, which will be followed by the update of the region set and the corresponding similarity matrix. Specifically, the updated region set after the $q$-th iteration is denoted as
\begin{align}
\mathcal{R}^{(q)}=\{R^{(q)}_1, \cdots, R^{(q)}_{|\mathcal{R}^{(q)}|}\},
\end{align}
where $R^{(q)}_i$ is the $i$-th region and $|\mathcal{R}^{(q)}|$ is the number of regions in $\mathcal{R}^{(q)}$.
The similarity matrix after the $q$-th iteration is updated according to the average-link. That is
\begin{align}
S^{(q)}&=\{s^{(q)}_{ij}\}_{|\mathcal{R}^{(q)}|\times |\mathcal{R}^{(q)}|},\\
s^{(q)}_{ij}&=\frac{1}{|R^{(q)}_i|\cdot |R^{(q)}_j|}\sum_{x_u\in R^{(q)}_i,x_v\in R^{(q)}_j} b_{uv},
\end{align}
where $|R^{(q)}_i|$ denotes the number of objects in $R^{(q)}_i$.
Note that in each iteration the number of regions decreases by one, i.e., $|\mathcal{R}^{(q+1)}|=|\mathcal{R}^{(q)}|-1$. Since the number of initial regions is $N$, it is obvious that all objects will be merged into a root region after a total of $N-1$ iterations. As a result of the region merging process, a dendrogram (i.e., a hierarchical clustering tree) is iteratively constructed. Each level of the dendrogram corresponds to a clustering result with a certain number of clusters. By choosing a level in the dendrogram, the final consensus clustering can thereby be obtained.
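The region-merging procedure can be sketched as a naive implementation for clarity (an efficient variant would maintain a priority queue of region pairs); stopping the merging at $k$ regions corresponds to cutting the dendrogram at the level with $k$ clusters:

```python
import numpy as np

def ecpcs_hc(B, k):
    """Average-link agglomerative merging on the ECA matrix B down to k regions."""
    N = B.shape[0]
    regions = [[i] for i in range(N)]           # initial singleton regions
    while len(regions) > k:
        best, bi, bj = -1.0, 0, 1
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                s = B[np.ix_(regions[i], regions[j])].mean()  # average-link
                if s > best:
                    best, bi, bj = s, i, j
        regions[bi] = regions[bi] + regions[bj]  # merge the most similar pair
        del regions[bj]
    labels = np.empty(N, dtype=int)
    for c, r in enumerate(regions):
        labels[r] = c
    return labels
```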
\subsection{ECPCS-MC}
\label{sec:ECPCS_MC}
In this section, we describe our second consensus function termed ECPCS-MC, short for ECPCS with meta-cluster based consensus function. Different from ECPCS-HC, the ECPCS-MC method performs the partitioning process at the cluster-level, which takes advantage of the enhanced cluster-wise similarity matrix $Z$ and groups all the clusters in the ensemble into several subsets. Each subset of clusters is referred to as a meta-cluster. Then, each data object is assigned to one of the meta-clusters by majority voting to construct the final consensus clustering.
Specifically, by treating the clusters in the ensemble as graph nodes and using the cluster-wise similarity matrix $Z$ to define the edge weights between them, a new cluster similarity graph can be constructed. That is
\begin{equation}
\label{eq:cls_graph_mc}
\tilde{\mathcal{G}}=\{\mathcal{V}, \tilde{\mathcal{E}}\},
\end{equation}
where $\mathcal{V}=\mathcal{C}$ is the node set and $\tilde{\mathcal{E}}$ is the edge set. The edge weights in the graph $\tilde{\mathcal{G}}$ are decided by the enhanced cluster-wise similarity matrix $Z$. Given two clusters $C_i$ and $C_j$, the weight between them is defined as
\begin{equation}
\tilde{e}_{ij}=z_{ij}.
\end{equation}
Then, the normalized cut (Ncut) algorithm \cite{jm00_ncut} can be used to partition the new graph into a certain number of meta-clusters, that is
\begin{align}
\mathcal{MC}=\{MC_1,MC_2,\cdots,MC_k\},
\end{align}
where $MC_i$ is the $i$-th meta-cluster and $k$ is the number of meta-clusters.
Note that a meta-cluster consists of a certain number of clusters. Given an object $x_i$ and a meta-cluster $MC_j$, the object $x_i$ may appear in \emph{zero or more} clusters inside $MC_j$. Specifically, the voting score of $x_i$ w.r.t. the meta-cluster $MC_j$ can be defined as the proportion of the clusters in $MC_j$ that contain $x_i$. That is
\begin{align}
Score(x_i,MC_j)&=\frac{1}{|MC_j|}\sum_{C_l\in MC_j}\textbf{1}(x_i\in C_l),\\
\textbf{1}(statement)&=\begin{cases}1,&\text{if~$statement$~is~true},\\
0,&\text{otherwise},\nonumber
\end{cases}
\end{align}
where $|MC_j|$ denotes the number of clusters in $MC_j$.
Then, by majority voting, each object is assigned to the meta-cluster in which it appears most frequently (i.e., with the highest voting score). That is
\begin{align}
MetaCls(x_i)={\arg\max}_{MC_j\in\mathcal{MC}}Score(x_i,MC_j).
\end{align}
If an object obtains the same highest voting score from two or more different meta-clusters (which in practice rarely happens), then the object will be randomly assigned to one of the winning meta-clusters. By assigning each object to a meta-cluster via majority voting and treating the objects in the same meta-cluster as a consensus cluster, the final consensus clustering result can therefore be obtained.
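Assuming the meta-clusters have already been obtained (e.g., by Ncut on $\tilde{\mathcal{G}}$), the voting and assignment steps can be sketched as follows; unlike the random tie-breaking described above, ties are broken deterministically here:

```python
import numpy as np

def assign_by_majority_voting(clusters, meta_labels, N):
    """Assign each object to the meta-cluster with the highest voting score.

    clusters[l] is a set of object indices; meta_labels[l] is the meta-cluster
    that cluster l belongs to; N is the number of objects.
    """
    k = int(np.max(meta_labels)) + 1
    votes = np.zeros((N, k))
    sizes = np.zeros(k)
    for l, C in enumerate(clusters):
        mc = meta_labels[l]
        sizes[mc] += 1                      # |MC_j|: number of clusters in MC_j
        for x in C:
            votes[x, mc] += 1               # clusters of MC_j that contain x
    scores = votes / sizes                  # Score(x_i, MC_j)
    return np.argmax(scores, axis=1)        # deterministic tie-breaking
```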
\section{Experiments}
\label{sec:experiments}
In this section, we conduct experiments on a variety of benchmark datasets to evaluate the performance of the proposed ECPCS-HC and ECPCS-MC algorithms against several state-of-the-art ensemble clustering algorithms.
\subsection{Datasets and Evaluation Measures}
In our experiments, ten benchmark datasets are used, i.e., \emph{Breast Cancer} (\emph{BC}), \emph{Cardiotocography} (\emph{CTG}), \emph{Ecoli}, \emph{Gisette}, \emph{Letter Recognition} (\emph{LR}), \emph{Landsat Satellite} (\emph{LS}), \emph{MNIST}, \emph{Pen Digits} (\emph{PD}), \emph{Wine}, and \emph{Yeast}. The \emph{MNIST} dataset is from \cite{lecun98}, while the other nine datasets are from the UCI machine learning repository \cite{Bache+Lichman:2013}. The detailed information of the benchmark datasets is given in Table~\ref{table:dataset}.
To quantitatively evaluate the clustering results, two widely used evaluation measures are adopted, namely, normalized mutual information (NMI) \cite{strehl02} and adjusted Rand index (ARI) \cite{vinh2010_ARI}. Note that large values of NMI and ARI indicate better clustering results.
The NMI evaluates the similarity between two clusterings from an information theory perspective \cite{strehl02}. Let $\pi'$ be a test clustering and $\pi^G$ be the ground-truth clustering. The NMI between $\pi'$ and $\pi^G$ is computed as follows \cite{strehl02}:
\begin{equation}
\label{eq:nmi}
NMI(\pi', \pi^G)=\frac{\sum_{i=1}^{n'}\sum_{j=1}^{n^G}n_{ij}\log\frac{n_{ij}n}{n_i'n_j^G}}{\sqrt{\sum_{i=1}^{n'}n_i'\log\frac{n_i'}{n}\sum_{j=1}^{n^G}n_j^G\log\frac{n_j^G}{n}}},
\end{equation}
where $n'$ is the cluster number in $\pi'$, $n^G$ is the cluster number in $\pi^G$, $n_i'$ is the number of objects in the cluster $i$ of $\pi'$, $n_j^G$ is the number of objects in the cluster $j$ of $\pi^G$, and $n_{ij}$ is the size of the intersection of the cluster $i$ of $\pi'$ and the cluster $j$ of $\pi^G$.
The ARI is an evaluation measure that takes into consideration the number of object-pairs upon which two clusterings agree (or disagree) \cite{vinh2010_ARI}. Formally, the ARI between two clusterings $\pi'$ and $\pi^G$ is computed as follows \cite{vinh2010_ARI}:
\begin{align}
&ARI(\pi', \pi^G)=\nonumber\\
& \frac{2(N_{00}N_{11}-N_{01}N_{10})}{(N_{00}+N_{01})(N_{01}+N_{11})+(N_{00}+N_{10})(N_{10}+N_{11})},
\end{align}
where $N_{11}$ is the number of object-pairs that belong to the same cluster in both $\pi'$ and $\pi^G$, $N_{00}$ is the number of object-pairs that belong to different clusters in both $\pi'$ and $\pi^G$, $N_{10}$ is the number of object-pairs that belong to the same cluster in $\pi'$ while belonging to different clusters in $\pi^G$, and $N_{01}$ is the number of object-pairs that belong to different clusters in $\pi'$ while belonging to the same cluster in $\pi^G$.
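As a sanity check, the NMI of Eq.~(\ref{eq:nmi}) can be computed directly from the contingency table of the two clusterings (a numpy-only sketch; library implementations such as scikit-learn's may use a different normalization by default):

```python
import numpy as np

def nmi(labels_a, labels_b):
    """NMI between two clusterings (geometric-mean normalization, as in the text)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = a.size
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    cont = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(cont, (ia, ib), 1)            # n_ij: objects shared by clusters i, j
    ni, nj = cont.sum(axis=1), cont.sum(axis=0)
    num = 0.0
    for i in range(cont.shape[0]):
        for j in range(cont.shape[1]):
            if cont[i, j] > 0:
                num += cont[i, j] * np.log(cont[i, j] * n / (ni[i] * nj[j]))
    den = np.sqrt(np.sum(ni * np.log(ni / n)) * np.sum(nj * np.log(nj / n)))
    return num / den if den > 0 else 0.0
```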
\begin{table}[!t]\footnotesize
\centering
\caption{Description of the benchmark datasets.}
\label{table:dataset}
\begin{tabular}{m{1.99cm}<{\centering}|m{1.371cm}<{\centering}m{1.371cm}<{\centering}m{1.371cm}<{\centering}}
\toprule
Dataset &\#Object &\#Class &Dimension\\
\midrule
\emph{BC} &683 &2 &9\\
\emph{CTG} &2,126 &10 &21\\
\emph{Ecoli} &336 &8 &7\\
\emph{Gisette} &7,000 &2 &5,000\\
\emph{LR} &20,000 &26 &16\\
\emph{LS} &$6,435$ &6 &36\\
\emph{MNIST} &$5,000$ &10 &784\\
\emph{PD} &$10,992$ &10 &16\\
\emph{Wine} &178 &3 &13\\
\emph{Yeast} &1,484 &10 &8\\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}[!t]
\centering
\caption{Average NMI($\%$) scores (over 20 runs) by different ensemble clustering methods. The best three scores in each comparison are highlighted in \textbf{bold}, while the best one in \textbf{[bold and brackets]}.}
\label{table:compare_ensembles_nmi}
\begin{tabular}{|m{1.16cm}<{\centering}|m{0.78cm}<{\centering}|m{1.09cm}<{\centering}m{1.15cm}<{\centering}m{1.09cm}<{\centering}m{1.09cm}<{\centering}m{1.09cm}<{\centering}m{1.15cm}<{\centering}m{1.09cm}<{\centering}m{1.25cm}<{\centering}|m{1.39065cm}<{\centering}m{1.344cm}<{\centering}|}
\hline
\multicolumn{1}{|c}{\emph{Dataset}} &\multicolumn{1}{c|}{} &EAC &MCLA &SRS &WCT &KCC &PTGP &ECC &SEC &ECPCS-MC &ECPCS-HC\\
\hline
\multirow{2}{*}{\emph{BC}} &True-$k$ &73.00$_{\pm6.57}$ &77.63$_{\pm2.39}$ &72.46$_{\pm5.32}$ &76.32$_{\pm5.43}$ &76.18$_{\pm10.22}$ &76.09$_{\pm4.14}$ &\textbf{79.07}$_{\pm1.86}$ &45.26$_{\pm26.26}$ &\textbf{77.89}$_{\pm1.47}$ &[\textbf{79.46}$_{\pm3.47}$]\\
\cline{2-12}
&Best-$k$ &73.34$_{\pm5.64}$ &77.63$_{\pm2.39}$ &72.56$_{\pm5.01}$ &76.33$_{\pm5.42}$ &76.59$_{\pm8.20}$ &76.09$_{\pm4.14}$ &\textbf{79.07}$_{\pm1.86}$ &54.58$_{\pm16.93}$ &\textbf{77.89}$_{\pm1.47}$ &[\textbf{79.46}$_{\pm3.47}$]\\
\hline
\multirow{2}{*}{\emph{CTG}} &True-$k$ &\textbf{26.16}$_{\pm0.85}$ &24.71$_{\pm0.92}$ &25.85$_{\pm0.77}$ &26.15$_{\pm1.00}$ &23.28$_{\pm1.73}$ &25.15$_{\pm1.11}$ &23.58$_{\pm1.27}$ &24.41$_{\pm1.80}$ &[\textbf{26.87}$_{\pm0.91}$] &\textbf{26.42}$_{\pm1.42}$\\
\cline{2-12}
&Best-$k$ &26.47$_{\pm0.80}$ &25.63$_{\pm0.66}$ &26.27$_{\pm0.81}$ &\textbf{26.66}$_{\pm0.85}$ &25.07$_{\pm0.83}$ &26.38$_{\pm0.81}$ &24.78$_{\pm0.65}$ &25.44$_{\pm0.85}$ &[\textbf{27.60}$_{\pm0.76}$] &\textbf{26.99}$_{\pm0.87}$\\
\hline
\multirow{2}{*}{\emph{Ecoli}} &True-$k$ &58.33$_{\pm2.91}$ &49.17$_{\pm2.92}$ &56.79$_{\pm2.42}$ &\textbf{62.20}$_{\pm3.80}$ &49.64$_{\pm2.84}$ &50.28$_{\pm2.23}$ &50.61$_{\pm1.97}$ &51.76$_{\pm3.81}$ &\textbf{59.54}$_{\pm1.75}$ &[\textbf{70.48}$_{\pm2.51}$]\\
\cline{2-12}
&Best-$k$ &68.02$_{\pm3.36}$ &52.68$_{\pm1.58}$ &67.40$_{\pm2.79}$ &\textbf{70.98}$_{\pm1.86}$ &54.68$_{\pm3.19}$ &60.63$_{\pm2.99}$ &57.63$_{\pm2.30}$ &55.01$_{\pm3.73}$ &\textbf{69.93}$_{\pm2.01}$ &[\textbf{71.45}$_{\pm0.96}$]\\
\hline
\multirow{2}{*}{\emph{Gisette}} &True-$k$ &27.02$_{\pm13.60}$ &\textbf{41.69}$_{\pm12.52}$ &35.09$_{\pm9.52}$ &37.79$_{\pm8.35}$ &17.26$_{\pm12.74}$ &[\textbf{47.13}$_{\pm1.94}$] &29.15$_{\pm10.08}$ &12.10$_{\pm7.97}$ &\textbf{47.01}$_{\pm2.23}$ &40.42$_{\pm8.25}$\\
\cline{2-12}
&Best-$k$ &31.18$_{\pm8.78}$ &\textbf{43.13}$_{\pm8.37}$ &35.77$_{\pm8.39}$ &38.37$_{\pm7.17}$ &23.13$_{\pm7.57}$ &[\textbf{47.13}$_{\pm1.94}$] &30.41$_{\pm7.64}$ &17.83$_{\pm5.70}$ &\textbf{47.01}$_{\pm2.23}$ &41.00$_{\pm7.05}$\\
\hline
\multirow{2}{*}{\emph{LR}} &True-$k$ &38.30$_{\pm0.90}$ &38.60$_{\pm1.17}$ &38.40$_{\pm1.10}$ &38.48$_{\pm1.09}$ &34.87$_{\pm0.95}$ &\textbf{39.16}$_{\pm1.17}$ &35.72$_{\pm0.83}$ &33.13$_{\pm1.44}$ &[\textbf{39.30}$_{\pm0.74}$] &\textbf{38.73}$_{\pm1.44}$\\
\cline{2-12}
&Best-$k$ &41.64$_{\pm0.54}$ &40.53$_{\pm0.62}$ &42.14$_{\pm0.62}$ &\textbf{42.49}$_{\pm0.58}$ &38.78$_{\pm0.66}$ &41.85$_{\pm0.60}$ &39.22$_{\pm0.69}$ &38.88$_{\pm0.76}$ &[\textbf{42.85}$_{\pm0.55}$] &[\textbf{42.85}$_{\pm0.84}$]\\
\hline
\multirow{2}{*}{\emph{LS}} &True-$k$ &60.86$_{\pm3.73}$ &53.58$_{\pm3.16}$ &62.00$_{\pm3.80}$ &62.13$_{\pm2.59}$ &48.46$_{\pm3.67}$ &\textbf{62.45}$_{\pm1.33}$ &52.39$_{\pm4.21}$ &43.57$_{\pm6.97}$ &[\textbf{63.90}$_{\pm2.36}$] &\textbf{63.18}$_{\pm2.55}$\\
\cline{2-12}
&Best-$k$ &62.17$_{\pm2.17}$ &54.50$_{\pm2.30}$ &62.96$_{\pm1.59}$ &\textbf{63.79}$_{\pm1.61}$ &51.35$_{\pm2.41}$ &63.09$_{\pm1.18}$ &53.55$_{\pm3.33}$ &49.75$_{\pm3.58}$ &[\textbf{65.02}$_{\pm1.86}$] &\textbf{64.86}$_{\pm1.34}$\\
\hline
\multirow{2}{*}{\emph{MNIST}} &True-$k$ &61.94$_{\pm1.81}$ &58.26$_{\pm3.53}$ &\textbf{62.63}$_{\pm1.82}$ &62.44$_{\pm1.73}$ &50.90$_{\pm2.77}$ &\textbf{63.59}$_{\pm2.51}$ &50.02$_{\pm2.68}$ &45.74$_{\pm4.32}$ &[\textbf{63.81}$_{\pm2.15}$] &60.26$_{\pm1.62}$\\
\cline{2-12}
&Best-$k$ &62.47$_{\pm1.84}$ &58.60$_{\pm3.11}$ &62.99$_{\pm1.84}$ &63.73$_{\pm1.71}$ &54.13$_{\pm1.90}$ &\textbf{64.84}$_{\pm1.94}$ &53.54$_{\pm1.33}$ &55.12$_{\pm1.89}$ &\textbf{64.40}$_{\pm1.81}$ &[\textbf{65.00}$_{\pm1.36}$]\\
\hline
\multirow{2}{*}{\emph{PD}} &True-$k$ &73.63$_{\pm2.16}$ &70.20$_{\pm3.37}$ &74.57$_{\pm2.67}$ &74.61$_{\pm3.13}$ &60.77$_{\pm3.74}$ &\textbf{74.80}$_{\pm3.38}$ &62.36$_{\pm2.67}$ &51.75$_{\pm7.58}$ &[\textbf{76.41}$_{\pm2.28}$] &\textbf{74.91}$_{\pm3.24}$\\
\cline{2-12}
&Best-$k$ &76.87$_{\pm1.24}$ &71.01$_{\pm2.75}$ &77.75$_{\pm1.38}$ &78.51$_{\pm1.72}$ &67.63$_{\pm1.94}$ &\textbf{79.11}$_{\pm1.54}$ &67.55$_{\pm1.50}$ &67.44$_{\pm2.11}$ &[\textbf{79.79}$_{\pm1.24}$] &\textbf{78.84}$_{\pm1.65}$\\
\hline
\multirow{2}{*}{\emph{Wine}} &True-$k$ &86.34$_{\pm2.60}$ &82.25$_{\pm3.16}$ &\textbf{88.05}$_{\pm2.88}$ &87.33$_{\pm3.11}$ &86.01$_{\pm3.69}$ &86.85$_{\pm2.51}$ &83.29$_{\pm7.10}$ &86.10$_{\pm4.13}$ &\textbf{87.85}$_{\pm2.36}$ &[\textbf{88.82}$_{\pm2.82}$]\\
\cline{2-12}
&Best-$k$ &86.34$_{\pm2.60}$ &82.25$_{\pm3.16}$ &\textbf{88.05}$_{\pm2.88}$ &87.33$_{\pm3.11}$ &86.06$_{\pm3.41}$ &86.85$_{\pm2.51}$ &83.68$_{\pm6.19}$ &86.13$_{\pm4.01}$ &\textbf{87.85}$_{\pm2.36}$ &[\textbf{88.84}$_{\pm2.79}$]\\
\hline
\multirow{2}{*}{\emph{Yeast}} &True-$k$ &26.21$_{\pm1.33}$ &22.12$_{\pm1.15}$ &26.03$_{\pm1.04}$ &\textbf{28.36}$_{\pm1.39}$ &21.54$_{\pm2.59}$ &23.42$_{\pm1.21}$ &19.53$_{\pm0.72}$ &22.02$_{\pm2.01}$ &\textbf{27.71}$_{\pm1.09}$ &[\textbf{29.62}$_{\pm1.21}$]\\
\cline{2-12}
&Best-$k$ &28.44$_{\pm1.64}$ &23.15$_{\pm1.01}$ &28.05$_{\pm1.66}$ &\textbf{29.83}$_{\pm1.09}$ &23.37$_{\pm0.98}$ &27.76$_{\pm1.36}$ &24.30$_{\pm0.91}$ &23.49$_{\pm1.01}$ &\textbf{29.30}$_{\pm0.87}$ &[\textbf{30.05}$_{\pm0.97}$]\\
\hline
\hline
\multirow{2}{*}{Avg. score} &True-$k$ &53.18 &51.82 &54.19 &55.58 &46.89 &54.89 &48.57 &41.58 &57.03 &57.23\\
\cline{2-12}
&Best-$k$ &55.69 &52.91 &56.39 &57.80 &50.08 &57.37 &51.37 &47.37 &59.16 &58.93\\
\hline
\hline
\multirow{2}{*}{Avg. rank} &True-$k$ &5.70 &6.60 &5.10 &3.90 &8.50 &4.30 &7.90 &8.80 &1.90 &2.30\\
\cline{2-12}
&Best-$k$ &5.70 &7.20 &5.20 &3.60 &8.50 &4.30 &7.90 &8.70 &2.10 &1.70\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[!t]
\begin{center}
{\subfigure[]
{\includegraphics[width=0.895\columnwidth]{Figures_rankTop1_nmi}\label{fig:rankTop1Top3_nmi_1}}}
{\subfigure[]
{\includegraphics[width=0.895\columnwidth]{Figures_rankTop3_nmi}\label{fig:rankTop1Top3_nmi_2}}}
\caption{The number of times that each method is ranked (a) in the first position and (b) in the top three w.r.t. Table~\ref{table:compare_ensembles_nmi}.}
\label{fig:rankTop1Top3_nmi}
\end{center}
\end{figure*}
\subsection{Baseline Methods and Experimental Settings}
Our proposed ECPCS-HC and ECPCS-MC methods will be compared against eight baseline ensemble clustering methods, which are listed as follows:
\begin{enumerate}
\item \textbf{EAC} \cite{Fred05_EAC}: evidence accumulation clustering.
\item \textbf{MCLA} \cite{strehl02}: meta-clustering algorithm.
\item \textbf{SRS} \cite{iamon08_icds}: SimRank similarity based method.
\item \textbf{WCT} \cite{iam_on11_linkbased}: weighted connected triple method.
\item \textbf{KCC} \cite{wu15_TKDE}: $k$-means based consensus clustering.
\item \textbf{PTGP} \cite{Huang16_TKDE}: probability trajectory based graph partitioning.
\item \textbf{ECC} \cite{liu17_bioinformatics}: entropy based consensus clustering.
\item \textbf{SEC} \cite{liu17_tkde}: spectral ensemble clustering.
\end{enumerate}
The parameters in the baseline methods are set as suggested by their corresponding papers \cite{Fred05_EAC,wu15_TKDE,Huang16_TKDE,liu17_tkde,iam_on11_linkbased,iamon08_icds,strehl02,liu17_bioinformatics}. The step-length parameter $t$ in the proposed methods is set to 20 for the experiments on all datasets; its sensitivity will be further evaluated in Section~\ref{sec:sensitivity_t}.
To provide a fair comparison, we run each of the test methods twenty times on each dataset, and report their average NMI and ARI scores over multiple runs. At each run, an ensemble of $M=20$ base clusterings is constructed by $k$-means clustering with randomly initialized cluster centers and with the number of clusters in each base clustering randomly selected in the range of $[K, \min(\sqrt{N},100)]$, where $K$ is the number of classes and $N$ is the number of objects in the dataset. Moreover, the performances of these test methods using different ensemble sizes $M$ will be further evaluated in Section~\ref{sec:comp_Msize}.
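As an illustration of this protocol, the ensemble construction can be sketched in a few lines. The minimal Lloyd-style $k$-means below and the function names are ours (the actual experiments use standard $k$-means implementations), so treat it as a schematic rather than the exact experimental code:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means over a list of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centers[c])))
        # update step: move each non-empty cluster's center to the member mean
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels

def build_ensemble(points, K, M=20, seed=0):
    """M base clusterings; cluster numbers drawn uniformly from [K, min(sqrt(N), 100)]."""
    rng = random.Random(seed)
    upper = max(K, min(int(math.sqrt(len(points))), 100))
    return [kmeans(points, rng.randint(K, upper), seed=m) for m in range(M)]
```

Each call randomizes both the cluster number and the initialization, which is what gives the ensemble its diversity.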
\subsection{Comparison with Other Ensemble Clustering Methods}
In this section, we compare the proposed ECPCS-HC and ECPCS-MC methods against the baseline ensemble clustering methods on the ten benchmark datasets. For the experiment on each benchmark dataset, two criteria are adopted in terms of the number of clusters, that is, true-$k$ and best-$k$. In the true-$k$ criterion, the true number of classes in a dataset is used as the cluster number for all the test methods. In the best-$k$ criterion, the cluster number that leads to the best performance is used for each test method.
Table~\ref{table:compare_ensembles_nmi} reports the average NMI scores (over 20 runs) by different ensemble clustering methods. As shown in Table~\ref{table:compare_ensembles_nmi}, our ECPCS-HC method obtains the best performance w.r.t. NMI in terms of both true-$k$ and best-$k$ on the \emph{BC}, \emph{Ecoli}, \emph{Wine}, and \emph{Yeast} datasets, whereas ECPCS-MC achieves the best NMI scores in terms of both true-$k$ and best-$k$ on the \emph{CTG}, \emph{LR}, \emph{LS}, and \emph{PD} datasets. Note that, with two comparisons (w.r.t. true-$k$ and best-$k$, respectively) on each of the ten datasets, there are twenty comparisons in total in Table~\ref{table:compare_ensembles_nmi}. As shown in Fig.~\ref{fig:rankTop1Top3_nmi_1}, our ECPCS-HC and ECPCS-MC methods are ranked in the first position in ten and nine of the twenty comparisons, respectively, while the third best method (i.e., PTGP) is ranked in the first position in only two comparisons. Similarly, as shown in Fig.~\ref{fig:rankTop1Top3_nmi_2}, ECPCS-HC and ECPCS-MC are ranked in the top three in seventeen and twenty of the twenty comparisons, respectively, while the third best method PTGP is ranked in the top three in only eight comparisons.
Table~\ref{table:compare_ensembles_ari} reports the average ARI scores (over 20 runs) by different ensemble clustering methods. As shown in Table~\ref{table:compare_ensembles_ari}, the highest ARI scores are achieved by either ECPCS-HC or ECPCS-MC in sixteen of the twenty comparisons. Specifically, as shown in Fig.~\ref{fig:rankTop1Top3_ari_1}, w.r.t. the average ARI scores, ECPCS-HC and ECPCS-MC are ranked in the first position in nine and seven of the twenty comparisons, respectively, while the third best method is ranked in the first position in only two comparisons. As shown in Fig.~\ref{fig:rankTop1Top3_ari_2}, both ECPCS-HC and ECPCS-MC are ranked in the top three in seventeen of the twenty comparisons, while the third best method WCT is ranked in the top three in only nine comparisons.
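Both evaluation measures can be computed directly from a pair of labelings. The sketch below is ours and assumes the arithmetic-mean normalization for NMI (the paper does not specify which normalization variant its evaluation scripts use):

```python
import math
from collections import Counter

def nmi(la, lb):
    """Normalized mutual information, normalized by the mean of the two entropies."""
    n = len(la)
    ca, cb, cab = Counter(la), Counter(lb), Counter(zip(la, lb))
    mi = sum(nij / n * math.log(n * nij / (ca[a] * cb[b])) for (a, b), nij in cab.items())
    ha = -sum(c / n * math.log(c / n) for c in ca.values())
    hb = -sum(c / n * math.log(c / n) for c in cb.values())
    return mi / ((ha + hb) / 2) if ha and hb else float(la == lb)

def ari(la, lb):
    """Adjusted Rand index from pairwise co-membership counts."""
    comb2 = lambda x: x * (x - 1) // 2
    n = len(la)
    ca, cb, cab = Counter(la), Counter(lb), Counter(zip(la, lb))
    s_ab = sum(comb2(v) for v in cab.values())
    s_a = sum(comb2(v) for v in ca.values())
    s_b = sum(comb2(v) for v in cb.values())
    expected = s_a * s_b / comb2(n)
    return (s_ab - expected) / ((s_a + s_b) / 2 - expected)
```

Both scores are invariant to relabeling of the clusters, which is why they are suitable for comparing a consensus clustering against the ground-truth classes.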
\begin{table*}[!th]
\centering
\caption{Average ARI($\%$) scores (over 20 runs) by different ensemble clustering methods. The best three scores in each comparison are highlighted in \textbf{bold}, while the best one in \textbf{[bold and brackets]}.}
\label{table:compare_ensembles_ari}
\begin{tabular}{|m{1.16cm}<{\centering}|m{0.78cm}<{\centering}|m{1.09cm}<{\centering}m{1.15cm}<{\centering}m{1.09cm}<{\centering}m{1.09cm}<{\centering}m{1.09cm}<{\centering}m{1.15cm}<{\centering}m{1.09cm}<{\centering}m{1.25cm}<{\centering}|m{1.39065cm}<{\centering}m{1.344cm}<{\centering}|}
\hline
\multicolumn{1}{|c}{\emph{Dataset}} &\multicolumn{1}{c|}{} &EAC &MCLA &SRS &WCT &KCC &PTGP &ECC &SEC &ECPCS-MC &ECPCS-HC\\
\hline
\multirow{2}{*}{\emph{BC}} &True-$k$ &83.11$_{\pm5.29}$ &87.09$_{\pm1.57}$ &82.92$_{\pm4.19}$ &85.55$_{\pm3.73}$ &84.42$_{\pm14.33}$ &85.70$_{\pm2.90}$ &\textbf{87.68}$_{\pm1.27}$ &46.48$_{\pm35.22}$ &\textbf{87.20}$_{\pm1.06}$ &[\textbf{87.81}$_{\pm2.25}$]\\
\cline{2-12}
&Best-$k$ &84.95$_{\pm3.36}$ &87.09$_{\pm1.57}$ &83.19$_{\pm3.67}$ &\textbf{87.58}$_{\pm1.38}$ &85.50$_{\pm8.99}$ &85.70$_{\pm2.90}$ &\textbf{87.68}$_{\pm1.27}$ &62.21$_{\pm19.35}$ &87.20$_{\pm1.06}$ &[\textbf{88.55}$_{\pm0.84}$]\\
\hline
\multirow{2}{*}{\emph{CTG}} &True-$k$ &12.23$_{\pm0.81}$ &11.55$_{\pm0.74}$ &12.45$_{\pm0.76}$ &\textbf{12.66}$_{\pm1.02}$ &10.64$_{\pm1.39}$ &11.93$_{\pm0.96}$ &11.10$_{\pm1.06}$ &11.58$_{\pm1.54}$ &[\textbf{13.05}$_{\pm0.90}$] &\textbf{12.68}$_{\pm1.19}$\\
\cline{2-12}
&Best-$k$ &12.95$_{\pm1.20}$ &13.18$_{\pm0.68}$ &13.13$_{\pm1.17}$ &13.40$_{\pm1.12}$ &11.69$_{\pm0.73}$ &\textbf{13.97}$_{\pm1.11}$ &11.69$_{\pm0.75}$ &12.24$_{\pm0.95}$ &[\textbf{15.58}$_{\pm0.99}$] &\textbf{13.79}$_{\pm0.94}$\\
\hline
\multirow{2}{*}{\emph{Ecoli}} &True-$k$ &49.15$_{\pm5.73}$ &35.37$_{\pm4.00}$ &45.58$_{\pm5.24}$ &\textbf{57.39}$_{\pm8.79}$ &34.87$_{\pm3.89}$ &35.94$_{\pm4.16}$ &37.60$_{\pm4.16}$ &39.94$_{\pm7.64}$ &\textbf{51.44}$_{\pm2.94}$ &[\textbf{75.75}$_{\pm5.35}$]\\
\cline{2-12}
&Best-$k$ &74.43$_{\pm2.52}$ &45.08$_{\pm2.01}$ &74.55$_{\pm1.89}$ &\textbf{76.78}$_{\pm1.65}$ &46.88$_{\pm8.17}$ &69.81$_{\pm3.38}$ &52.93$_{\pm8.89}$ &46.53$_{\pm7.87}$ &\textbf{75.44}$_{\pm1.64}$ &[\textbf{77.43}$_{\pm1.10}$]\\
\hline
\multirow{2}{*}{\emph{Gisette}} &True-$k$ &28.10$_{\pm17.20}$ &51.31$_{\pm15.00}$ &38.99$_{\pm13.63}$ &43.65$_{\pm11.32}$ &19.54$_{\pm14.41}$ &[\textbf{58.56}$_{\pm2.09}$] &34.63$_{\pm11.49}$ &10.98$_{\pm8.86}$ &\textbf{57.61}$_{\pm2.38}$ &\textbf{47.75}$_{\pm9.47}$\\
\cline{2-12}
&Best-$k$ &35.42$_{\pm10.97}$ &52.95$_{\pm10.03}$ &42.01$_{\pm9.78}$ &45.15$_{\pm8.56}$ &25.58$_{\pm9.78}$ &[\textbf{58.56}$_{\pm2.09}$] &36.14$_{\pm8.57}$ &18.73$_{\pm9.25}$ &\textbf{57.61}$_{\pm2.38}$ &\textbf{48.79}$_{\pm7.30}$\\
\hline
\multirow{2}{*}{\emph{LR}} &True-$k$ &14.97$_{\pm0.76}$ &[\textbf{17.71}$_{\pm1.27}$] &15.22$_{\pm1.04}$ &14.69$_{\pm0.90}$ &14.02$_{\pm1.04}$ &\textbf{16.16}$_{\pm1.36}$ &14.29$_{\pm0.66}$ &11.74$_{\pm1.79}$ &\textbf{15.44}$_{\pm0.73}$ &15.28$_{\pm0.78}$\\
\cline{2-12}
&Best-$k$ &16.73$_{\pm0.62}$ &[\textbf{18.44}$_{\pm0.88}$] &\textbf{17.85}$_{\pm0.70}$ &17.07$_{\pm0.55}$ &16.49$_{\pm0.80}$ &17.55$_{\pm0.73}$ &16.88$_{\pm0.72}$ &16.12$_{\pm1.29}$ &16.77$_{\pm0.39}$ &\textbf{17.68}$_{\pm0.81}$\\
\hline
\multirow{2}{*}{\emph{LS}} &True-$k$ &56.07$_{\pm6.52}$ &46.27$_{\pm4.90}$ &57.09$_{\pm5.95}$ &\textbf{60.07}$_{\pm5.68}$ &36.28$_{\pm5.60}$ &52.68$_{\pm2.88}$ &40.24$_{\pm5.89}$ &26.57$_{\pm10.16}$ &\textbf{61.49}$_{\pm5.25}$ &[\textbf{61.62}$_{\pm5.11}$]\\
\cline{2-12}
&Best-$k$ &60.72$_{\pm4.17}$ &52.23$_{\pm5.37}$ &62.40$_{\pm3.61}$ &\textbf{63.07}$_{\pm3.42}$ &41.29$_{\pm3.76}$ &60.76$_{\pm3.28}$ &45.87$_{\pm4.79}$ &37.10$_{\pm5.62}$ &[\textbf{65.43}$_{\pm2.60}$] &\textbf{64.77}$_{\pm3.02}$\\
\hline
\multirow{2}{*}{\emph{MNIST}} &True-$k$ &49.53$_{\pm2.73}$ &46.54$_{\pm5.24}$ &\textbf{51.62}$_{\pm2.45}$ &51.17$_{\pm2.06}$ &36.23$_{\pm4.11}$ &\textbf{52.88}$_{\pm4.27}$ &35.27$_{\pm3.73}$ &26.99$_{\pm6.46}$ &[\textbf{53.17}$_{\pm3.13}$] &49.61$_{\pm1.58}$\\
\cline{2-12}
&Best-$k$ &51.55$_{\pm2.53}$ &47.64$_{\pm4.30}$ &\textbf{54.01}$_{\pm2.12}$ &52.81$_{\pm2.30}$ &42.08$_{\pm2.61}$ &\textbf{55.43}$_{\pm3.01}$ &41.56$_{\pm1.78}$ &41.37$_{\pm2.89}$ &[\textbf{55.59}$_{\pm2.66}$] &53.08$_{\pm2.30}$\\
\hline
\multirow{2}{*}{\emph{PD}} &True-$k$ &62.21$_{\pm3.75}$ &58.29$_{\pm5.71}$ &63.23$_{\pm4.22}$ &62.85$_{\pm5.21}$ &43.79$_{\pm6.21}$ &\textbf{63.38}$_{\pm5.39}$ &45.09$_{\pm5.11}$ &29.90$_{\pm9.99}$ &[\textbf{65.42}$_{\pm4.23}$] &\textbf{63.54}$_{\pm5.22}$\\
\cline{2-12}
&Best-$k$ &71.13$_{\pm2.06}$ &60.72$_{\pm4.20}$ &\textbf{73.80}$_{\pm0.87}$ &\textbf{73.68}$_{\pm1.09}$ &56.88$_{\pm3.24}$ &72.40$_{\pm2.24}$ &56.53$_{\pm3.05}$ &54.77$_{\pm3.78}$ &73.10$_{\pm0.99}$ &[\textbf{74.61}$_{\pm0.98}$]\\
\hline
\multirow{2}{*}{\emph{Wine}} &True-$k$ &89.56$_{\pm2.40}$ &84.50$_{\pm3.35}$ &\textbf{90.78}$_{\pm2.82}$ &90.26$_{\pm3.04}$ &88.18$_{\pm4.09}$ &90.02$_{\pm2.37}$ &84.76$_{\pm10.11}$ &88.47$_{\pm5.75}$ &\textbf{90.62}$_{\pm2.25}$ &[\textbf{91.29}$_{\pm2.96}$]\\
\cline{2-12}
&Best-$k$ &89.76$_{\pm2.19}$ &84.50$_{\pm3.35}$ &\textbf{90.96}$_{\pm2.60}$ &90.59$_{\pm2.72}$ &88.28$_{\pm3.56}$ &90.02$_{\pm2.37}$ &86.52$_{\pm5.99}$ &88.83$_{\pm4.31}$ &\textbf{90.66}$_{\pm2.21}$ &[\textbf{91.70}$_{\pm2.53}$]\\
\hline
\multirow{2}{*}{\emph{Yeast}} &True-$k$ &16.38$_{\pm1.58}$ &12.32$_{\pm1.03}$ &16.32$_{\pm1.31}$ &\textbf{19.01}$_{\pm1.74}$ &11.89$_{\pm2.59}$ &13.36$_{\pm1.40}$ &9.89$_{\pm0.75}$ &11.90$_{\pm2.40}$ &\textbf{16.94}$_{\pm1.33}$ &[\textbf{20.62}$_{\pm1.51}$]\\
\cline{2-12}
&Best-$k$ &20.46$_{\pm2.53}$ &13.77$_{\pm1.12}$ &20.21$_{\pm2.77}$ &\textbf{21.40}$_{\pm1.57}$ &13.44$_{\pm1.25}$ &18.82$_{\pm1.70}$ &14.53$_{\pm0.75}$ &14.17$_{\pm1.67}$ &[\textbf{21.57}$_{\pm1.66}$] &\textbf{21.48}$_{\pm1.19}$\\
\hline
\hline
\multirow{2}{*}{Avg. score} &True-$k$ &46.13 &45.10 &47.42 &49.73 &37.99 &48.06 &40.05 &30.45 &51.24 &52.60\\
\cline{2-12}
&Best-$k$ &51.81 &47.56 &53.21 &54.15 &42.81 &54.30 &45.03 &39.21 &55.90 &55.19\\
\hline
\hline
\multirow{2}{*}{Avg. rank} &True-$k$ &5.80 &6.30 &4.70 &4.10 &8.70 &4.40 &7.90 &8.70 &2.20 &2.20\\
\cline{2-12}
&Best-$k$ &6.40 &6.40 &4.30 &3.70 &8.50 &4.20 &7.40 &9.10 &2.70 &2.20\\
\hline
\end{tabular}
\end{table*}
Additionally, the summary statistics (i.e., average score and average rank) of the experimental results are provided in the bottom rows of Tables~\ref{table:compare_ensembles_nmi} and \ref{table:compare_ensembles_ari}. The average score is computed by averaging the NMI (or ARI) scores of each method across the ten benchmark datasets, whereas the average rank is obtained by averaging the ranking positions of each method across the ten benchmark datasets. As shown in Table~\ref{table:compare_ensembles_nmi}, in terms of true-$k$, our ECPCS-HC method achieves the highest average NMI($\%$) score of 57.23 across the ten datasets, while ECPCS-MC achieves the second highest average score of 57.03. In terms of best-$k$, the highest two average NMI scores across the ten datasets are also obtained by the proposed ECPCS-MC and ECPCS-HC methods, respectively. When considering the average rank, ECPCS-MC and ECPCS-HC achieve the best and the second best average ranks of 1.90 and 2.30, respectively, in terms of true-$k$, which are significantly better than that of the third best method (i.e., WCT), whose average rank in terms of true-$k$ is 3.90. In terms of best-$k$, ECPCS-HC and ECPCS-MC are also the best two methods w.r.t. the average rank across the ten datasets. Besides the performance w.r.t. NMI, similar advantages can also be observed in terms of the average score and average rank w.r.t. ARI (as shown in Table~\ref{table:compare_ensembles_ari}). Moreover, it is instructive to compare ECPCS-HC against EAC and ECPCS-MC against MCLA. Since EAC is a classical method based on the conventional co-association matrix, the comparison between the proposed ECPCS-HC method (which incorporates the multi-scale cluster-level information via the ECA matrix) and the EAC method provides a straightforward view of how the proposed ECA matrix improves the consensus performance when compared to the original co-association matrix.
Specifically, the average NMI($\%$) and ARI($\%$) scores (in terms of true-$k$) of EAC are 53.18 and 46.13, respectively, whereas those of ECPCS-HC are 57.23 and 52.60. Similar improvements can also be observed in the best-$k$ situation (as shown in Tables~\ref{table:compare_ensembles_nmi} and \ref{table:compare_ensembles_ari}). Besides ECPCS-HC versus EAC, the ECPCS-MC versus MCLA comparison also provides a view of the influence that the multi-scale cluster-level information has upon the conventional meta-cluster based method. Since both ECPCS-MC and MCLA are meta-cluster based methods, the integration of cluster-wise similarity propagation brings significant improvements to ECPCS-MC when compared to the classical MCLA method, as shown by their average scores and average ranks across the ten datasets.
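To make the EAC side of this comparison concrete, the conventional co-association matrix simply records, for every object pair, the fraction of base clusterings that place the two objects in the same cluster. The sketch below uses our own function name; the ECA matrix additionally weights these entries by the propagated cluster-wise similarities:

```python
def co_association(base_clusterings):
    """N x N matrix: fraction of base clusterings placing objects i and j together."""
    M = len(base_clusterings)
    N = len(base_clusterings[0])
    ca = [[0.0] * N for _ in range(N)]
    for labels in base_clusterings:
        for i in range(N):
            for j in range(N):
                if labels[i] == labels[j]:
                    ca[i][j] += 1.0 / M
    return ca
```

EAC then runs hierarchical agglomerative clustering on this matrix, treating the entries as pairwise similarities.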
To summarize, as shown in Tables~\ref{table:compare_ensembles_nmi} and \ref{table:compare_ensembles_ari} and Figs.~\ref{fig:rankTop1Top3_nmi} and \ref{fig:rankTop1Top3_ari}, the proposed ECPCS-HC and ECPCS-MC methods exhibit overall better performances (w.r.t. NMI and ARI) than the baseline methods on the benchmark datasets.
\begin{figure*}[!t]
\begin{center}
{\subfigure[]
{\includegraphics[width=0.895\columnwidth]{Figures_rankTop1_ari}\label{fig:rankTop1Top3_ari_1}}}
{\subfigure[]
{\includegraphics[width=0.895\columnwidth]{Figures_rankTop3_ari}\label{fig:rankTop1Top3_ari_2}}}
\caption{The number of times that each method is ranked (a) in the first position and (b) in the top three w.r.t. Table~\ref{table:compare_ensembles_ari}.}
\label{fig:rankTop1Top3_ari}
\end{center}
\end{figure*}
\subsection{Robustness to Ensemble Size $M$}
\label{sec:comp_Msize}
\begin{figure*}[!th]
\begin{center}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi3}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi5}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi2}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi8}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi10}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi7}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi6}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi9}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi1}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_nmi4}}}
{\subfigure
{\includegraphics[width=1.4\columnwidth]{Figures_cmpEnSize_legend}}}
\caption{Average performances w.r.t. NMI($\%$) over 20 runs by different ensemble clustering methods with varying ensemble sizes $M$. (a) \emph{BC}. (b) \emph{CTG}. (c) \emph{Ecoli}. (d) \emph{Gisette}. (e) \emph{LR}. (f) \emph{LS}. (g) \emph{MNIST}. (h) \emph{PD}. (i) \emph{Wine}. (j) \emph{Yeast}.}
\label{fig:comp_Msize_nmi}
\end{center}
\end{figure*}
\begin{figure*}[!th]
\begin{center}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari3}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari5}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari2}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari8}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari10}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari7}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari6}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari9}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari1}}}
{\subfigure[]
{\includegraphics[width=0.395\columnwidth]{Figures_cmpEnSize_ari4}}}
{\subfigure
{\includegraphics[width=1.4\columnwidth]{Figures_cmpEnSize_legend}}}
\caption{Average performances w.r.t. ARI($\%$) over 20 runs by different ensemble clustering methods with varying ensemble sizes $M$. (a) \emph{BC}. (b) \emph{CTG}. (c) \emph{Ecoli}. (d) \emph{Gisette}. (e) \emph{LR}. (f) \emph{LS}. (g) \emph{MNIST}. (h) \emph{PD}. (i) \emph{Wine}. (j) \emph{Yeast}.}
\label{fig:comp_Msize_ari}
\end{center}
\end{figure*}
In this section, we evaluate the performances of the proposed methods and the baseline methods using different ensemble sizes $M$. As shown in Fig.~\ref{fig:comp_Msize_nmi}, ECPCS-HC obtains the best performance w.r.t. NMI on the \emph{BC}, \emph{Ecoli}, \emph{LR}, \emph{Wine}, and \emph{Yeast} datasets, whereas ECPCS-MC obtains the best performance on the \emph{CTG} and \emph{PD} datasets, as the ensemble size goes from 10 to 50. Similarly, as shown in Fig.~\ref{fig:comp_Msize_ari}, ECPCS-HC obtains the best performance (w.r.t. ARI) on the \emph{BC}, \emph{Ecoli}, \emph{PD}, and \emph{Wine} datasets, whereas ECPCS-MC obtains the best performance (w.r.t. ARI) on the \emph{CTG} and \emph{MNIST} datasets, with varying ensemble sizes $M$. Although the MCLA method shows better ARI scores than the proposed methods on the \emph{LR} dataset, on all of the other nine datasets our methods consistently outperform MCLA with different ensemble sizes. As can be seen in Figs.~\ref{fig:comp_Msize_nmi} and \ref{fig:comp_Msize_ari}, the proposed ECPCS-HC and ECPCS-MC methods exhibit overall the best performance w.r.t. NMI and ARI on the benchmark datasets.
\subsection{Sensitivity of Parameter $t$}
\label{sec:sensitivity_t}
In this section, we evaluate the performances of the proposed ECPCS-HC and ECPCS-MC methods with varying parameter $t$.
Table~\ref{table:cmpPara_NMI} reports the average NMI scores (over 20 runs) of ECPCS-HC and ECPCS-MC when the parameter $t$ takes different values. Note that the parameter $t$ controls the number of steps of the random walkers during the propagation of cluster-wise similarities (as described in Section~\ref{sec:propagation_of_CSG}). As shown in Table~\ref{table:cmpPara_NMI}, the proposed methods yield consistently good performances (w.r.t. NMI) with varying parameter $t$. Generally, using a larger parameter $t$ (e.g., larger than 10) can lead to better clustering results than using a very small one, which is probably due to the fact that a random walker with an adequate number of steps can better capture the multi-scale structure information of the graph. The performances (w.r.t. ARI) of the proposed ECPCS-HC and ECPCS-MC methods are reported in Table~\ref{table:cmpPara_ARI}. From the experimental results in Tables~\ref{table:cmpPara_NMI} and \ref{table:cmpPara_ARI}, it can be observed that the proposed methods exhibit robust consensus clustering performances with different values of the parameter $t$.
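The role of $t$ can be illustrated with a toy propagation on a small cluster similarity graph: row-normalize the similarities into transition probabilities and aggregate the first $t$ step distributions. The plain averaging used below is our simplification of the trajectory-based propagation, not the paper's exact formulation:

```python
def transition_matrix(W):
    """Row-normalize a nonnegative similarity matrix into transition probabilities."""
    return [[w / s if s else 0.0 for w in row] for row, s in [(r, sum(r)) for r in W]]

def matmul(A, B):
    """Plain dense matrix product for small matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def propagate(W, t):
    """Average of the step-1..t transition matrices, mixing multi-scale graph structure."""
    P = transition_matrix(W)
    acc, Pk = [row[:] for row in P], P
    for _ in range(t - 1):
        Pk = matmul(Pk, P)
        acc = [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(acc, Pk)]
    return [[x / t for x in row] for row in acc]
```

With small $t$ the result stays close to the one-step similarities, while larger $t$ blends in longer-range connectivity, which is the intuition behind the reported robustness for $t$ beyond roughly 10.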
\begin{table*}[!t]
\centering
\caption{Average performance w.r.t. NMI($\%$) over 20 runs by our ECPCS-HC and ECPCS-MC methods using varying parameter $t$.}
\label{table:cmpPara_NMI}
\begin{tabular}{|m{0.99cm}<{\centering}|m{0.801cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.721cm}<{\centering}|m{0.801cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.721cm}<{\centering}|}
\hline
\multirow{2}{*}{Dataset} &\multicolumn{7}{c|}{ECPCS-HC} &\multicolumn{7}{c|}{ECPCS-MC}\\
\cline{2-15}
&$t=1$ &2 &4 &8 &16 &32 &64 &$t=1$ &2 &4 &8 &16 &32 &64\\
\hline
\emph{BC} &75.30 &76.28 &76.86 &78.23 &79.16 &79.69 &79.99 &77.19 &77.48 &77.80 &77.82 &77.80 &77.81 &77.74\\
\hline
\emph{CTG} &26.65 &26.69 &26.82 &26.90 &26.99 &26.97 &27.10 &27.45 &27.63 &27.63 &27.66 &27.66 &27.69 &27.73\\
\hline
\emph{Ecoli} &70.61 &70.98 &71.44 &71.66 &71.49 &71.34 &71.26 &69.63 &69.75 &70.11 &70.13 &70.09 &70.04 &70.08\\
\hline
\emph{Gisette} &37.58 &38.60 &40.34 &41.01 &40.95 &41.00 &39.49 &46.51 &46.77 &46.99 &47.04 &46.97 &46.88 &46.78\\
\hline
\emph{LR} &42.42 &42.35 &42.45 &42.50 &42.75 &43.20 &43.56 &42.36 &42.75 &42.96 &42.90 &42.85 &42.78 &42.77\\
\hline
\emph{LS} &63.43 &63.78 &64.22 &64.50 &64.95 &64.93 &65.00 &65.19 &65.31 &65.32 &65.29 &65.18 &65.16 &65.03\\
\hline
\emph{MNIST} &63.36 &63.43 &63.94 &64.32 &64.88 &64.82 &64.50 &63.63 &63.75 &63.90 &64.01 &64.30 &64.34 &64.22\\
\hline
\emph{PD} &77.98 &78.35 &78.53 &78.80 &78.73 &78.78 &78.49 &79.66 &79.81 &79.80 &79.84 &79.77 &79.70 &79.80\\
\hline
\emph{Wine} &87.22 &88.24 &88.52 &88.65 &88.98 &88.95 &89.06 &87.41 &87.69 &87.91 &87.87 &87.91 &87.91 &87.86\\
\hline
\emph{Yeast} &29.25 &29.35 &29.48 &29.91 &30.04 &30.15 &29.97 &28.73 &28.89 &29.03 &29.20 &29.22 &29.36 &29.44\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Average performance w.r.t. ARI($\%$) over 20 runs by our ECPCS-HC and ECPCS-MC methods using varying parameter $t$.}
\label{table:cmpPara_ARI}
\begin{tabular}{|m{0.99cm}<{\centering}|m{0.801cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.721cm}<{\centering}|m{0.801cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.701cm}<{\centering}m{0.721cm}<{\centering}|}
\hline
\multirow{2}{*}{Dataset} &\multicolumn{7}{c|}{ECPCS-HC} &\multicolumn{7}{c|}{ECPCS-MC}\\
\cline{2-15}
&$t=1$ &2 &4 &8 &16 &32 &64 &$t=1$ &2 &4 &8 &16 &32 &64\\
\hline
\emph{BC} &86.75 &87.34 &87.56 &88.12 &88.52 &88.67 &88.74 &87.03 &87.12 &87.14 &87.15 &87.14 &87.14 &87.09\\
\hline
\emph{CTG} &13.39 &13.47 &13.65 &13.65 &13.49 &13.49 &13.61 &15.52 &15.56 &15.73 &15.66 &15.62 &15.65 &15.62\\
\hline
\emph{Ecoli} &76.40 &76.75 &77.17 &77.41 &77.41 &77.35 &77.33 &75.34 &75.44 &75.67 &75.59 &75.57 &75.53 &75.63\\
\hline
\emph{Gisette} &43.75 &45.15 &47.22 &48.81 &48.76 &48.82 &47.18 &57.09 &57.36 &57.58 &57.63 &57.57 &57.47 &57.37\\
\hline
\emph{LR} &17.05 &17.02 &17.07 &17.24 &17.43 &17.35 &16.93 &16.55 &16.59 &16.63 &16.62 &16.65 &16.75 &16.75\\
\hline
\emph{LS} &62.89 &63.11 &64.16 &64.51 &64.82 &64.54 &63.94 &65.89 &66.06 &66.29 &66.04 &65.74 &65.23 &64.89\\
\hline
\emph{MNIST} &52.37 &52.61 &53.12 &53.08 &53.20 &52.67 &52.16 &53.63 &54.00 &54.46 &54.83 &55.45 &55.45 &55.34\\
\hline
\emph{PD} &73.03 &73.69 &74.11 &74.47 &74.59 &74.68 &74.82 &73.03 &73.09 &73.17 &73.12 &73.11 &73.20 &73.20\\
\hline
\emph{Wine} &90.57 &91.36 &91.54 &91.63 &91.81 &91.78 &91.99 &90.69 &90.69 &90.70 &90.67 &90.70 &90.69 &90.66\\
\hline
\emph{Yeast} &20.99 &21.15 &21.46 &21.72 &21.61 &21.41 &21.04 &21.12 &21.19 &21.29 &21.34 &21.39 &21.70 &21.89\\
\hline
\end{tabular}
\end{table*}
\subsection{Execution Time}
In this section, we evaluate the execution times of different ensemble clustering methods. The experiments are conducted on the \emph{LR} dataset with the data size varying from 0 to 20,000. As shown in Fig.~\ref{fig:compTimeAll}, ECPCS-MC is the fastest method, requiring 1.28s to process the entire \emph{LR} dataset with 20,000 objects, while SEC and MCLA are the second and third fastest, requiring 1.56s and 2.00s, respectively, to process the entire \emph{LR} dataset. The time efficiency of ECPCS-HC is comparable to that of the ECC method, and is better than that of the PTGP, WCT, and SRS methods. To summarize, the proposed ECPCS-MC and ECPCS-HC methods consistently outperform the baseline methods in terms of clustering quality (as shown in Tables~\ref{table:compare_ensembles_nmi} and \ref{table:compare_ensembles_ari} and Figs.~\ref{fig:rankTop1Top3_nmi}, \ref{fig:rankTop1Top3_ari}, \ref{fig:comp_Msize_nmi}, and \ref{fig:comp_Msize_ari}) while exhibiting competitive time efficiency (as shown in Fig.~\ref{fig:compTimeAll}).
All experiments were conducted in MATLAB 2016b on a PC with an Intel i7-6700K CPU and 64GB of RAM.
\section{Conclusion}
\label{sec:conclusion}
\begin{figure}[!t]
\begin{center}
{
{\includegraphics[width=0.91\columnwidth]{Figures_cmpTime}}}
\caption{Execution times of different ensemble clustering methods on the \emph{LR} dataset with the data size varying from 0 to 20,000.}
\label{fig:compTimeAll}
\end{center}
\end{figure}
In this paper, we propose a new ensemble clustering approach based on fast propagation of cluster-wise similarities via random walks. By treating the base clusters as nodes and using the Jaccard coefficient to build weighted edges, a cluster similarity graph is first constructed. With a new transition probability matrix defined on the graph, a random walk process is performed with each node treated as a starting node. A new cluster-wise similarity matrix is then derived from the original graph by investigating the propagating trajectories of the random walkers starting from different nodes (i.e., clusters). Next, we construct an ECA matrix by mapping the new cluster-wise similarity from the cluster level to the object level, and propose two novel consensus functions, i.e., ECPCS-HC and ECPCS-MC, to obtain the final consensus clustering result. Extensive experiments conducted on ten real-world datasets have shown the advantage of our ensemble clustering approach over the state-of-the-art in terms of both clustering quality and efficiency.
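The first step of this pipeline, weighting the edges between base clusters, can be sketched as follows, with clusters represented as sets of object indices (a simplification of ours):

```python
def jaccard(c1, c2):
    """Jaccard coefficient between two clusters represented as sets of object indices."""
    c1, c2 = set(c1), set(c2)
    union = len(c1 | c2)
    return len(c1 & c2) / union if union else 0.0

def cluster_graph(clusters):
    """Weighted adjacency of the cluster similarity graph over all base clusters."""
    n = len(clusters)
    return [[jaccard(clusters[i], clusters[j]) if i != j else 0.0 for j in range(n)]
            for i in range(n)]
```

The resulting symmetric adjacency matrix is what the random walk propagation then operates on.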
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The authors would like to thank the anonymous reviewers for their constructive comments and suggestions, which have helped to enhance the paper significantly.
Our MATLAB source code is available for download at: www.researchgate.net/publication/328581758.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
\section{Introduction}
Quantum phase transitions \cite{sachdev,vojta} are an interesting topic for both theoretical and
experimental condensed matter research.
This phase transition is found at zero temperature upon variation of a non-thermal control parameter such as magnetic field or hole doping. A quantum phase transition (QPT) occurs
at the quantum critical point, where quantum fluctuations destroy the long range order of
the model at absolute zero temperature.
One of the novel species of quantum phase transition is the field-induced magnetic phase transition, which
can occur in insulating antiferromagnetic systems \cite{gia} such as transition metal oxides and local spin systems.
This kind of QPT has been observed in the copper halide Cs$_{2}$CuCl$_{4}$, which is an insulator
in which each Cu$^{2+}$ carries a spin of $1/2$.
Cs$_{2}$CuCl$_{4}$ can be described as a quasi-two-dimensional spin-1/2 antiferromagnet on a triangular lattice
({\it bc} plane) weakly coupled along the crystallographic {\it a} direction.\cite{coldea}
Crystal field effects quench the orbital angular momentum; however, the anisotropy still has a significant effect on the phase transition.
The layered crystal structure confines the main superexchange routes to neighboring spins in the {\it bc} plane.
According to the above facts, the magnetic properties of this material can be described by the
antiferromagnetic Heisenberg model.
The magnetic field is applied perpendicular to the {\it bc} plane, which
adds a Zeeman term to the model.
For magnetic fields ($B$) close to the critical field ($B_{c}$), the Zeeman term competes with the spin exchange
interaction and the system enters a field-induced ferromagnetic state \cite{viktor1,nikuni}.
The field-induced ferromagnetic phase ($B>B_c$) has gapped quasi-particles, namely gapped magnons.
The field-induced gap vanishes at $B_{c}$ when the magnetic field is reduced, and
the magnetic ordering of the transverse spin components sets in.
This latter state (for $B<B_c$) is called the spiral long range order.
In this work we have studied the mentioned quantum phase transition based
on the Bose-Einstein condensation of magnons via a bosonic gas model\cite{matsubara}.
Bloch\cite{bloch} applied Bose-Einstein quantum statistics to the excitations in solids,
which gives the basic notion relating the Bose-Einstein condensation of magnons to the magnetic ordering
in the original spin model\cite{nikuni,viktor2}.
The isotropic Heisenberg model with a longitudinal applied magnetic field
has been studied by
theoretical and numerical methods.
The isotropic model on the cubic lattice has been investigated by the numerical quantum Monte Carlo method
at finite temperature, which gives the phase boundary between the spiral order and the induced ferromagnetic state\cite{nohadani}. A theoretical approach based on the Bose condensation of magnons
for the isotropic model on the triangular lattice has been studied in Ref.[\onlinecite{viktor1}].
Furthermore, the experimental data
for the specific heat indicate that a $\lambda$-like anomaly peak appears in the behavior of the specific heat versus temperature for $B<B_{c}(T=0)$\cite{viktor2}.
In the general case, the spin model Hamiltonian can include spatial anisotropies
in the exchange coupling between nearest neighbor spins. This property is related to the
existence of easy axes of magnetization due to
the crystalline electric field and spin-orbit coupling.
The Dzyaloshinskii-Moriya (DM) interaction with a DM vector in a specific direction
establishes an easy-plane spin anisotropy in Cs$_{2}$CuCl$_{4}$\cite{moriya}.
Anisotropy due to the DM interaction violates the SU(2) symmetry of the isotropic Hamiltonian, although the U(1) symmetry corresponding to spin rotation
around the DM vector is still present. However, the spin-orbit coupling may induce anisotropy
in the {\it bc} easy plane, which reduces the U(1) symmetry to Z(2).
Therefore, the Goldstone theorem \cite{auerbach} cannot be applied in such cases, because it
applies only to Hamiltonians with a continuous symmetry.
In other words, the excitations of this model are not Goldstone modes.
In this paper, we intend to find the effect of in-plane anisotropy on the
critical point and the transverse spin structure factor close to the field-induced QPT.
We have considered the fully anisotropic spin-1/2 Heisenberg model in the presence of
a longitudinal field on a cubic lattice. We anticipate that the general behavior
for the cubic lattice is also valid for the triangular one. Moreover, the study on the cubic lattice
reduces the complexity of the calculations, which will pave the route to investigating the triangular case.
In addition, the results on the cubic lattice can be applied to the field-induced magnetic
phase transition of TlCuCl$_3$.
We have implemented the hard-core boson transformation for the spin operators, which gives the
excitation spectrum in terms of many body calculations for a bosonic gas \cite{gorkov}.
We have used the Brueckner approach\cite{fetter}
to find the bosonic self energy and obtain the magnon dispersion relation.
The quantum critical point is approached where the magnon spectrum becomes gapless.
We have found an analytic expression for the gap exponent in terms of the anisotropy parameter.
Our results show that a small amount of in-plane anisotropy changes the gap exponent drastically,
which is a witness for the change in the universal behavior. We have also found the dependence
of the critical magnetic field on the anisotropy parameter, which is also justified by the
divergence of the transverse structure factor at the antiferromagnetic wave vector.
The divergence of the in-plane magnetic susceptibility obeys an algebraic power law
with an exponent equal to the negative of the gap exponent as the magnetic field approaches
the critical one. Moreover, the vanishing of the staggered magnetization is given
by the exponent $\beta=0.5$ in the mean field approximation.
\section{Anisotropic spin Hamiltonian}
The most general effective Hamiltonian describing insulating magnetic materials due to the exchange interaction between the spins of localized electrons can be written as
\begin{eqnarray}
\mathcal H=\frac{1}{2}\sum_{\langle ij\rangle}\sum_{\alpha=x,y,z}J_{ij}^{\alpha}S^{\alpha}_{i}S^{\alpha}_{j}-g\mu_{B}B\sum_{i}S_{i}^{z},
\label{e1}
\end{eqnarray}
where $g\simeq 2.2$, $\mu_B$ is the Bohr magneton and $B$ is the magnetic field.
The localized spins sit on a cubic lattice with nearest-neighbor exchange interactions.
The effect of spin-orbit coupling
is generally encoded in the fully anisotropic exchange couplings $J_{ij}^{\alpha}$.
The exchange anisotropy is parametrized by $\nu$ through the relations
\begin{eqnarray}
J^{x}=J(1+\nu)\;\;,\;\;J^{y}=J(1-\nu)\;\;,\;\;J^z=J,
\label{e2}
\end{eqnarray}
where the scale of energy $J$ is set to one.
The above type of exchange interaction implies anisotropy in both the in-plane and axial directions,
reducing the symmetry to $Z_2$.
\section{Hard core representation of spin Hamiltonian and bosonic Green's functions}
As mentioned in the introduction we intend to describe the field induced QPT in terms of
Bose-Einstein condensation of magnons. Our approach is similar to what we have implemented
in Ref.\onlinecite{rezania2008} to study the quantum critical properties of the Kondo-necklace
model.
In the first step the spin Hamiltonian is
mapped to a bosonic model.
This is done by the hard core boson representation given by: $S^{+}_{i}\longrightarrow a_{i},\;S^{-}_{i}\longrightarrow a^{\dag}_{i}$ and $S^{z}_{i}=1/2-a^{\dag}_{i}a_{i}$,
where $a_i$ and $a_i^{\dagger}$ are boson annihilation and creation operators, respectively.
The SU(2) algebra of the spin operators is retrieved from the bosonic algebra of the $a_i$ and $a_i^{\dagger}$ operators with the hard-core constraint, i.e., at most one boson can occupy a single lattice site.
The constraint is added to the Hamiltonian by an on-site infinite repulsion among bosonic particles. The
resulting Hamiltonian in terms of bilinear ($\mathcal H_{bil}$) and interacting ($\mathcal H_{int}$)
parts is given by
\begin{eqnarray}
\mathcal H_{bil}=\frac{1}{2}\sum_{i,j}\{J [a_{i}^{\dag} a_{j} +\frac{1}{2}\nu
(a_{i}^{\dag} a_{j}^{\dag}+a_{i} a_{j})]-J^{z} a_{i}^{\dag} a_{i}\}
+g\mu_{B}B\sum_{i}a_{i}^{\dag}a_{i},
\label{a3}
\end{eqnarray}
\begin{eqnarray}
\mathcal H_{int}=\mathcal U\sum_{i}a_{i}^{\dag}a_{i}^{\dag}a_{i}a_{i} +
\frac{1}{2}\sum_{ij}J^z a_{i}^{\dag}a_{i}a_{j}^{\dag}a_{j}.
\label{a4}
\end{eqnarray}
The bilinear Hamiltonian in the Fourier space representation is
\begin{eqnarray}
\mathcal{H}_{bil}=\sum_{\bf k}\{A_{\bf k}a_{\bf k}^{+} a_{\bf k}+
\frac{B_{\bf k}}{2}(a_{\bf k}^{+} a_{\bf -k}^{+} + a_{\bf -k}a_{\bf k} )\}, \label{e6}
\end{eqnarray}
\begin{eqnarray}
A_{\bf k}&=&[J_{\bf k} - J_0^z +g\mu _{B}B],\nonumber\\
B_{\bf k}&=&\nu J_{\bf k}.
\end{eqnarray}
Here $J_{\bf k}=J \sum_{\alpha=x,y,z} \cos(k_{\alpha})$ and $J_0^z=3J$;
the wave vectors $k_{\alpha}$ lie in the first Brillouin zone.
The hard-core repulsion ($\mathcal U \rightarrow \infty$)
in the interacting Hamiltonian of Eq.(\ref{a4}) dominates
the second quartic term. Thus, it is sufficient to take into account the effect of the
hard-core repulsion on the magnon spectrum and neglect the second quartic term.
The interacting part of Hamiltonian in terms of Fourier transformation of bosonic operators is given by
\begin{eqnarray}
\mathcal{H}_{int}=\mathcal{U}\sum_{k,k',q}a^{\dag}_{
k+q}a^{\dag}_{k'-q }a_{k'}a_{k} \;.
\label{e4}
\end{eqnarray}
The bilinear Hamiltonian is diagonalized by a unitary Bogoliubov transformation
to the new bosonic quasiparticle operators $\alpha_{\bf k}$ and $\alpha_{\bf k}^{\dag}$
[\onlinecite{rezania2008}],
which gives
\begin{eqnarray}
\mathcal {H}_{bil}&=&\sum_{\bf k}\omega_{\bf k}\left( \alpha_{\bf k}^{\dag}\alpha_{\bf k} +1/2\right),\nonumber\\
\omega^{2}_{\bf k}&=&A_{\bf k}^2-B_{\bf k}^2,
\end{eqnarray}
and the Bogoliubov coefficients are
\begin{eqnarray}
u^{2}_{\bf k}=\frac{A_{\bf k}}{2\omega_{\bf k}}+\frac{1}{2},\qquad
v^{2}_{\bf k}=\frac{A_{\bf k}}{2\omega_{\bf k}}-\frac{1}{2}.
\label{e64}
\end{eqnarray}
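As a quick numerical illustration (our own sketch, not part of the analytic derivation), the bare dispersion $\omega_{\bf k}=\sqrt{A_{\bf k}^2-B_{\bf k}^2}$ can be scanned over the Brillouin zone to check that its minimum, the magnon gap, sits at the antiferromagnetic wave vector $(\pi,\pi,\pi)$. The values $J=1$, $\nu=0.1$ and the field strength below are illustrative choices, with $g\mu_B$ absorbed into the field.

```python
import numpy as np

# Sketch: bare magnon dispersion omega_k = sqrt(A_k^2 - B_k^2) on the cubic
# lattice. Units: J = 1, and h = g*mu_B*B absorbs the prefactor of the field.
J, nu, h = 1.0, 0.1, 8.0   # h chosen above the bare critical value 6 + 3*nu

def dispersion(k, J=J, nu=nu, h=h):
    Jk = J * np.cos(k).sum(axis=-1)          # J_k = J (cos kx + cos ky + cos kz)
    A = Jk - 3.0 * J + h                     # A_k = J_k - J_0^z + g mu_B B
    Bk = nu * Jk                             # B_k = nu J_k
    return np.sqrt(A**2 - Bk**2)

# Scan the Brillouin zone: the minimum (the gap) sits at Q_AF = (pi, pi, pi).
ks = np.linspace(-np.pi, np.pi, 21)
grid = np.stack(np.meshgrid(ks, ks, ks, indexing="ij"), axis=-1)
omega = dispersion(grid.reshape(-1, 3))
Q_AF = np.array([np.pi, np.pi, np.pi])
gap = dispersion(Q_AF[None, :])[0]
print(gap, omega.min())                      # the gap is the global minimum
```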
Although the bilinear part of the Hamiltonian is diagonal in the new bosonic
($\alpha_{\bf k}, \alpha_{\bf k}^{\dag}$) representation, to avoid complicating the
treatment of the hard-core repulsion term, the Green's functions will be
calculated in terms of the original boson operators ($a_k, a_k^{\dagger}$).
In the original boson representation, $\mathcal {H}_{bil}$ includes the pairing term between magnons
which requires
both anomalous and normal Green's functions to be considered.
More explanations of the detailed calculations can be found in
Ref.\onlinecite{rezania2008}.
Finally, the self-energy is expanded in the low energy limit which gives the single particle part of Green's function ($G_{n}^{sp}$),
\begin{equation}
G_{n}^{sp}(k,\omega)=
\frac{Z_{k}U_{k}^{2}}{\omega-\Omega_{k}+i\eta}-\frac{Z_{k}V_{k}^{2}}{\omega+\Omega_{k}-i\eta},
\label{e330}
\end{equation}
where the renormalized triplet spectrum ($\Omega_{k}$), the renormalized single-particle
weight constant ($Z_{k}$) and the renormalized Bogoliubov coefficients ($U_{k}, V_{k}$) are given by
\begin{eqnarray}
\Omega_{k}&=&Z_{k}\sqrt{[A_{k}+\Sigma_{n}(k,0)]^{2}-[B_{k}+\Sigma_{a}(k,0)]^{2}},
\nonumber\\
&&Z_{k}^{-1}=1-\Big(\frac{\partial \Sigma_{n}}{\partial \omega}\Big)_{\omega=0},\nonumber\\
&&U_{k}^{2}=\frac{Z_{k}[A_{k}+\Sigma_{n}(k,0)]}{2\Omega_{k}}+\frac{1}{2},\qquad
V_{k}^{2}=\frac{Z_{k}[A_{k}+\Sigma_{n}(k,0)]}{2\Omega_{k}}-\frac{1}{2}.
\label{e340}
\end{eqnarray}
The renormalized weight constant is the residue of the single particle pole in the
Green's function. In the next step we will take into account the effect of hard core
repulsion on the magnon spectrum.
\section{Effect of hard core repulsion on the magnon spectrum}
The density of the magnons is obtained from the normal Green's functions
\begin{eqnarray}
n_{i}=\langle a^{\dag}_{i}a_{
i}\rangle
=\frac{1}{N}\sum_{k}v^{2}_{k},
\label{e131}
\end{eqnarray}
where $N$ is the number of the spins in the cubic lattice.
In the vicinity of the critical field ($B^{0}_{c}$) and at zero temperature, the density of excited magnons is negligible\cite{nikuni}.
Since the Hamiltonian $\mathcal{H}_{int}$ in Eq.(\ref{e4}) is short ranged and $\mathcal{U}$ is large,
the Brueckner approach (ladder diagram summation)\cite{gorkov,fetter} can be applied
for the low density limit of magnons.
The interacting normal Green's function is obtained by imposing the hard core boson
repulsion, $\mathcal{U}\rightarrow \infty$.
First, the scattering amplitude (t-matrix) $\Gamma(k_{1},k_{2};k_{3},k_{4})$
of magnons is introduced, where $k_{i}\equiv(\textbf{k},k_{0})_{i}$.
The basic approximation made in the derivation of $\Gamma(K)$ is
that we neglect all anomalous scattering vertices, which are
present in the theory due to the existence of anomalous Green's functions.
According to the Feynman rules\cite{fetter} in momentum space at zero temperature,
the scattering amplitude is calculated
(see Fig.1 of Ref.\onlinecite{rezania2008}).
By replacing the noninteracting normal Green's function
in the Bethe-Salpeter equation
and taking the
limit $\mathcal{U}\longrightarrow\infty$ we obtain
the scattering matrix in the form
\begin{eqnarray}
\Gamma(\textbf{K},\omega)=-\Big(\frac{1}
{(2\pi)^{3}}
\int d^{3}Q\frac{u^{2}_{\textbf{Q}}u^{2}_{\textbf{K}-\textbf{Q}}}
{\omega-\omega_{\textbf{Q}}-\omega_{\textbf{K}-\textbf{Q}}}
-\frac{v^{2}_{\textbf{Q}}v^{2}_{\textbf{K}-\textbf{Q}}}
{\omega+\omega_{\textbf{Q}}+\omega_{\textbf{K}-\textbf{Q}}}
\Big)^{-1}.
\label{e178}
\end{eqnarray}
According to Fig.2 of Ref.\onlinecite{rezania2008} and after
some calculations the normal self-energy is obtained
in the following form
\begin{eqnarray}
\Sigma^{\mathcal{U}}_n(\textbf{k},\omega)&=&\frac{2}{N}\sum_{p} v_{\textbf{p}}^{2}\Gamma(\textbf{p}+\textbf{k},\omega-\omega_{\textbf{p}}).
\label{e211}
\end{eqnarray}
In the dilute gas approximation there are other diagrams which are formally at most linear in
the density of magnons. However, the magnon densities are very small and the contributions of
such terms are numerically smaller than Eq.~(\ref{e211}).
In principle, the anomalous self-energy related to $H_{\mathcal{U}}$, which enters through
the vertex function, should also be considered;
however, its contribution vanishes.
\section{The gap exponent}
Close to the quantum critical point, the excitation gap ($\Delta$) in the field-induced ferromagnetic phase vanishes according to the following power law behavior
\begin{eqnarray}
\Delta\sim |B-B_{c}|^{\phi},
\label{gapscaling}
\end{eqnarray}
where $B_{c}$ is the critical magnetic field and $\phi$ is the gap exponent which is related to universality class of the quantum critical point.
The quantum critical point corresponds to the vanishing of the magnon spectrum at the
antiferromagnetic wave vector $Q_{AF}=(\pi, \pi, \pi)$. The magnon spectrum close to
$Q_{AF}$ is approximated by
\begin{eqnarray}
\omega_{k}=\sqrt{\Delta^{2}+c^{2}(k-Q_{AF})^{2}},
\end{eqnarray}
where $c$ is the spin-wave velocity, which is obtained numerically from the excitation spectrum.
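A minimal sketch of this numerical extraction, using the bare dispersion (self-energy corrections neglected) and illustrative parameter values: $\Delta$ and $c$ follow from a linear fit of $\omega_q^2$ against $q^2$ for small $q=k-Q_{AF}$.

```python
import numpy as np

# Illustration with the bare dispersion (assumed parameters, units J = 1,
# h = g*mu_B*B): fit omega_q^2 = Delta^2 + c^2 q^2 near Q_AF.
J, nu, h = 1.0, 0.1, 8.0

def omega(q):                        # q measured from Q_AF along (1, 0, 0)
    Jk = J * (2.0 * np.cos(np.pi) + np.cos(np.pi + q))
    A = Jk - 3.0 * J + h
    return np.sqrt(A**2 - (nu * Jk)**2)

qs = np.linspace(1e-3, 5e-2, 20)
# linear fit of omega^2 against q^2: slope = c^2, intercept = Delta^2
c2, Delta2 = np.polyfit(qs**2, omega(qs)**2, 1)
c, Delta = np.sqrt(c2), np.sqrt(Delta2)
# small-q expansion of the bare dispersion gives c^2 = (h - 6J) + 3 nu^2 here
print(c, Delta)
```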
In the first step, we calculate the variation of the self-energy related to
$H_{\mathcal U}$ which is given by
\begin{eqnarray}
\delta \Sigma^{\mathcal U}(Q_{AF})
&=&\frac{2}{N}\sum_{k}\delta v_{k}^{2}\Gamma(k+Q_{AF},-\omega_{k})+\frac{2}{N}\sum_{k}v_{k}^{2}\delta \Gamma(k+Q_{AF},-\omega_{k}).
\label{e750}
\end{eqnarray}
The main contribution to the first integral in Eq.~(\ref{e750}) comes from the
small momenta $q\sim \Delta/c\ll1$ where $q\equiv k-Q_{AF}$ since
\begin{eqnarray}
\delta v_{k}^{2}=\frac{1}{2} \Big(\frac{\delta A_{k}}{\omega_{q}}+A_{k}\delta [\frac{1}{\omega_{k}}]\Big)\approx -\frac{A_{Q_{AF}}^{c}\Delta^{2}}{2(\Delta^{2}+c^{2}q^{2})^{3/2}}.
\label{e800}
\end{eqnarray}
Taking into account the first correction to the magnon density, the vertex function for small $q$ can be written as (see Ref.~[\onlinecite{shevchenko}])
\begin{eqnarray}
\Gamma(q,-\omega_{q})\approx\Gamma^{c}_0 [1+\frac{{\Gamma^{c}_0}A_{k=0}^{c}}{4\pi c^{2}}\ln q],
\label{e900}
\end{eqnarray}
where $\Gamma^{c}_0\equiv\Gamma^c(k=0)$ and the superscript $c$ denotes the value of a quantity at the critical point, $X^{c}\equiv X(B=B_{c})$.
Substituting Eq.(\ref{e900}) into Eq.(\ref{e750}) and replacing
$q\simeq \Delta/J$ in the first integral of Eq.(\ref{e750}),
we find that
\begin{eqnarray}
\delta \Sigma^{\mathcal U}(Q_{AF})=-\frac{A_{Q_{AF}}^{c}\Delta^{2}}{8\pi^{2} c^{3}}\Gamma^{c}_0[1+\frac{{\Gamma^{c}_0}A_{k=0}^{c}}{4\pi c^{2}}\ln\frac{\Delta}{J}]+\Gamma^{\prime}n_{b}\delta B,
\label{e950}
\end{eqnarray}
where $\Gamma^{\prime}=\frac{\delta \Gamma(q,-\omega_{q})}{\delta B}$ and
$n_{b} (=\frac{1}{N}\sum_i \langle a^{\dagger}_i a_i\rangle$) is the density of magnons at the critical point.
Let us define the following expressions
\begin{eqnarray}
&&\lambda \equiv \frac{A^{c}_{Q_{AF}}\Gamma^{c}_0}{8\pi^{2} c^{3}},\nonumber\\
&&\sigma \equiv \Gamma^{\prime}n_{b}.
\label{e1040}
\end{eqnarray}
After some calculations we finally arrive at the relation
\begin{eqnarray}
\Delta^{2}=\frac{(g\mu_{B}+\sigma)\delta B}{\lambda}\Big(1-\frac{ A^{c}_{Q_{AF}}\Gamma^{c}_0}{4\pi c^{2}}\ln\frac{\delta B}{J}\Big).
\label{e1050}
\end{eqnarray}
The gap exponent $\phi$ is obtained by substituting $\Delta=|\delta B|^{\phi}$ into the
above equation, which yields
\begin{eqnarray}
\phi=\frac{1}{2}-\frac{A^{c}_{Q_{AF}}\Gamma^{c}_0}{8\pi c^{2}}.
\label{e1060}
\end{eqnarray}
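The numerical extraction of such an exponent from gap-versus-field data can be sketched as follows. The data here are synthetic, generated with $\phi=0.40$ and $B_c=12.26$ (the $\nu=0$ values of Table \ref{t1}), and $\phi$ is recovered as the slope of a log-log fit.

```python
import numpy as np

# Illustrative sketch: read off a gap exponent from gap-vs-field data.
# Synthetic data obeying Delta = |B - B_c|^phi with phi = 0.40, B_c = 12.26
# (the nu = 0 entries of Table I); phi is the slope of a log-log fit.
phi_true, B_c = 0.40, 12.26
dB = np.logspace(-4, -1, 40)          # distance |B - B_c| to the critical field
Delta = dB**phi_true                  # synthetic gap data
phi_fit = np.polyfit(np.log(dB), np.log(Delta), 1)[0]
print(phi_fit)                        # recovers phi = 0.40
```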
The numerical results for the gap exponent in terms of the anisotropy parameter are presented
in Sec.~\ref{summary}.
\section{Staggered magnetization for $B\lesssim B_c$}
At the quantum critical point the magnons condense at $q=Q_{AF}$, which marks the onset of
long-range antiferromagnetic (AF) order in the model. The system remains in the
AF ordered state as long as $B<B_c$. In the AF phase
we have applied the Hartree-Fock-Popov mean-field approach \cite{shi,nikuni} by
taking into account the condensation of magnons in the interacting Hamiltonian at the wave vector $Q_{AF}$.
The effective interparticle interaction is defined by the following Hamiltonian
\begin{eqnarray}
\mathcal{H}_{eff}=\Gamma^{c}(Q_{AF})\sum_{k,k',q}a^{\dag}_{
k+q}a^{\dag}_{k'-q }a_{k'}a_{k} \;,
\label{e1070}
\end{eqnarray}
where $\Gamma^{c}(Q_{AF})$ is the interaction parameter at the critical point ($B=B_{c}$).
Below the critical point ($B< B_c$), the AF order parameter becomes nonzero, which can be interpreted as
a nonzero mean-field value of the creation operator of magnons at $Q_{AF}$.
Let us define $\langle a_{Q_{AF}}\rangle=\langle a^{\dagger}_{Q_{AF}}\rangle=\sqrt{N_c}$
for the condensate phase, where $N_c$ is the number of condensed magnons.
The staggered magnetization in the x-y plane,
which represents the long-range AF order, is denoted by
$m_{\perp}\equiv m_{x}+im_{y}=g\mu_{B}\sqrt{n_{c}}$,
where $n_c=N_c/N$ is the condensed magnon density and $N$ is the total number of sites.
The effective Hamiltonian in Eq.(\ref{e1070}) can be written in the following form where
the contribution from the condensate phase has been denoted by $H_{\mathcal{U}}^{0}$,
\begin{eqnarray}
H_{\mathcal{U}}&=&H_{\mathcal{U}}^{0}+H_{\mathcal{U}}^{2}+H_{\mathcal{U}}^{3}+H_{\mathcal{U}}^{4},\nonumber\\
H_{\mathcal{U}}^{0}&=&\frac{\Gamma^{c}(Q_{AF})N_{c}^{2}}{2N},\nonumber\\
H_{\mathcal{U}}^{2}&=&\frac{\Gamma^{c}(Q_{AF})N_{c}}{N}\sum_{q}'\Big[\frac{1}{2}(a_{q}a_{-q}+a^{\dag}_{q}a^{\dag}_{-q})+2a^{\dag}_{q}a_{q}\Big],\nonumber\\
H_{\mathcal{U}}^{3}&=&\frac{\Gamma^{c}(Q_{AF})\sqrt{N_{c}}}{N}\sum_{k,q}'\Big(a^{\dag}_{k}a_{k+q}a_{-q}+h.c.\Big),\nonumber\\
H_{\mathcal{U}}^{4}&=&\frac{\Gamma^{c}(Q_{AF})}{2N}\sum_{k,q,k^{'}}'a^{\dag}_{k+q}a^{\dag}_{k'-q }a_{k'}a_{k}.
\label{e1080}
\end{eqnarray}
In the above equations, $\sum^{'}$ implies that the terms with creation and annihilation operators
at the antiferromagnetic wave vector ($Q_{AF}$) are excluded.
In the mean-field approximation the contribution from $H_{\mathcal{U}}^{3}$ vanishes since it
contains an odd number of boson operators.
Taking into account the hard-core repulsion, which prevents the pairing of magnons, and considering
all other contractions, the mean-field representation of $H_{\mathcal{U}}^{4}$ is
\begin{eqnarray}
H_{\mathcal{U}}^{4}=2(1-n_{c})\Gamma^{c}(Q_{AF})\sum_{k}'a^{\dag}_{k}a_{k}.
\label{e1090}
\end{eqnarray}
After adding the bilinear part, Eq.(\ref{e6}), to the mean-field (MF) interacting one, the
Hamiltonian is given by the following expression, up to a constant term which is omitted here,
\begin{eqnarray}
H^{MF}=\sum_{k}'(A_{k}+2\Gamma^{c}(Q_{AF}))a^{\dag}_{k}a_{k}+\sum_{k}'\frac{\nu J_{k}+\Gamma^{c}(Q_{AF})n_{c}}{2}(a_{k}a_{-k}+a^{\dag}_{k}a^{\dag}_{-k}).
\label{e1100}
\end{eqnarray}
The mean field Hamiltonian is diagonalized by the unitary Bogoliubov transformation,
\begin{eqnarray}
H^{MF}&=&\sum_{k}'\Omega_{k}(\phi^{\dag}_{k}\phi_{k}),\nonumber\\
\Omega_{k}&=&\sqrt{(A_{k}+2\Gamma^{c}(Q_{AF}))^{2}-(\nu J_{k}+\Gamma^{c}(Q_{AF})n_{c})^{2}},\nonumber\\
a_{k}&=&d_{k}\phi_{k}-f_{k}\phi^{\dag}_{-k},
\end{eqnarray}
where $\Omega_{k}$ gives the excitation spectrum of the new bosonic quasiparticles created by
$\phi^{\dag}_{k}$, and $d_{k}$, $f_{k}$ are the Bogoliubov coefficients
\begin{eqnarray}
d_{k}=\sqrt{\frac{A_{k}+2\Gamma^{c}(Q_{AF})}{2\Omega_{k}}+\frac{1}{2}},\qquad f_{k}=\sqrt{\frac{A_{k}+2\Gamma^{c}(Q_{AF})}{2\Omega_{k}}-\frac{1}{2}}.
\end{eqnarray}
The condensation of magnons at the AF wave vector implies that the excitation spectrum
must be gapless at $Q_{AF}$, which gives the following relation
\begin{eqnarray}
J_{\bf k=Q_{AF}} - J_0^z +g\mu _{B}B+2\Gamma^{c}(Q_{AF})=-\nu J_{{\bf k}=Q_{AF}}-\Gamma^{c}(Q_{AF})n_{c}.
\label{e1150}
\end{eqnarray}
Therefore, the transverse order parameter for $B\leq B_{c}$ is given by
\begin{eqnarray}
m_{\perp}=g\mu_{B}\sqrt{n_{c}}=g\mu_{B}\sqrt{\frac{-J_{\bf k=Q_{AF}}+J_0^z -g\mu _{B}B-2\Gamma^{c}(Q_{AF})-\nu J_{Q_{AF}}}{\Gamma^{c}(Q_{AF})}}.
\label{1151}
\end{eqnarray}
The critical field is reached when $n_{c}=0$ in Eq.(\ref{e1150}), which leads
to the following expression,
\begin{eqnarray}
B_{c}=\frac{J_{0}^{z}-J_{\bf Q_{AF}}-2\Gamma^{c}(Q_{AF})-\nu J_{\bf Q_{AF}}}{g\mu_{B}}.
\end{eqnarray}
Therefore the transverse staggered magnetization has the following expression in the
mean field approximation
\begin{eqnarray}
m_{\perp}=g\mu_{B}\sqrt{n_{c}}=g\mu_{B}\sqrt{\frac{g\mu _{B}(B_{c}-B)}{\Gamma^{c}(Q_{AF})}}.
\label{tm}
\end{eqnarray}
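A small numerical sketch of this mean-field scaling, with illustrative (assumed) values $g\mu_B=1$ and $\Gamma^{c}(Q_{AF})=1$: the order parameter of Eq.(\ref{tm}) vanishes with slope $1/2$ on a log-log plot.

```python
import numpy as np

# Sketch of the mean-field order parameter, Eq. (tm), with assumed values
# g*mu_B = 1 and Gamma^c(Q_AF) = 1: m_perp ~ |B_c - B|^beta with beta = 1/2.
Gamma_c, gmu = 1.0, 1.0
dB = np.logspace(-4, -1, 40)                 # distance B_c - B below B_c
m_perp = gmu * np.sqrt(gmu * dB / Gamma_c)   # Eq. (tm)
beta_fit = np.polyfit(np.log(dB), np.log(m_perp), 1)[0]
print(beta_fit)                              # slope of the log-log fit, 0.5
```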
The scaling behavior of the transverse order parameter close to the critical field is characterized
by the exponent $\beta$ via $m_{\perp} \sim |B_c-B|^{\beta}$, which gives $\beta=0.5$ in the mean-field approximation.
\section{Summary, results and discussions \label{summary}}
In this article we have studied the effect of in-plane anisotropy on the quantum critical
properties of the spin 1/2 Heisenberg model in the presence of a longitudinal field ($B$) on a
cubic lattice. The in-plane anisotropy breaks the U(1) symmetry of the model (around the direction
of the magnetic field) and changes the quantum critical properties of the model which is discussed
in this section. Moreover, we have analyzed the field-induced phase transition in this model
in terms of Bose-Einstein condensation of magnons.
The original spin model has been represented by a bosonic model with a hard-core repulsion,
which avoids double occupancy of bosons at each lattice site and thereby preserves the SU(2) algebra
of the spin model.
In the limit $B/J\longrightarrow\infty$, the ground state is a field-induced ferromagnetic
state, and a finite energy gap separates it from the lowest excitation branch, the magnon spectrum.
Decreasing the magnetic field lowers the excitation gap, which eventually vanishes at
the critical magnetic field ($B_c$). This point corresponds to the
condensation of magnons, which marks the onset of long-range antiferromagnetic order in the spin model.
We have implemented the Green's function approach to obtain the effect of interaction
on the diagonal part of the bosonic Hamiltonian using Brueckner formalism close to the quantum
critical point where the magnon density is small.
The magnon spectrum has been calculated from
Eqs.(\ref{e330}, \ref{e340}, \ref{e211})
self-consistently.
The procedure starts with an initial guess for $Z_{k}, \Sigma_{n}(k,0)$ and $\Sigma_{a}(k,0)$;
then, using Eq.(\ref{e340}), we find the renormalized excitation energy and the
renormalized Bogoliubov coefficients. The procedure is repeated until convergence is reached.
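The self-consistency loop just described can be sketched schematically. The real calculation iterates $Z_k$, $\Sigma_n(k,0)$ and $\Sigma_a(k,0)$ over the whole Brillouin zone; here it is reduced to a toy scalar fixed-point problem with a made-up map, solved by the same strategy.

```python
import math

# Toy sketch of the self-consistency loop: iterate x -> F(x) until the
# update falls below a tolerance (or fail after itmax iterations).
def solve_self_consistently(F, x0, tol=1e-12, itmax=1000):
    x = x0
    for _ in range(itmax):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# a made-up toy "gap equation" x = sqrt(1 + 0.1 x), iterated to convergence
x = solve_self_consistently(lambda x: math.sqrt(1.0 + 0.1 * x), 1.0)
print(x)
```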
Using the final values for energy gap, renormalization constants and Bogoliubov coefficients,
we have obtained the quantum critical point
for different anisotropy parameters in Table \ref{t1}. Our data show that a small amount
of anisotropy changes the critical magnetic field considerably. We have also plotted
the magnon gap versus the magnetic field in Fig.\ref{fig1} for different values of the anisotropy.
It is obvious from Fig.\ref{fig1} that the magnon gap vanishes as the magnetic field approaches
the critical value for $B\gtrsim B_c$.
Moreover, the scaling behavior of gap close to $B_c$ which is characterized by the
gap exponent ($\phi$) defined
in Eq.(\ref{gapscaling}) depends
on the anisotropy parameter. We have presented the gap exponent ($\phi$)
for different anisotropies in Table.\ref{t1}. The dependence of $\phi$ on
$\nu$ shows that the in-plane anisotropy changes the universal behavior of the model.
A drastic change of $\phi$ from $0.40$ for $\nu=0$ to $0.2$ for $\nu=0.1$ manifests
the change of universality class due to the explicit breaking of the symmetry by the anisotropy.
At $\nu=0$ the model has U(1) symmetry, while for $\nu \neq 0$ the symmetry is reduced to $Z_2$.
Although the calculated gap exponent may vary slightly among the different nonzero anisotropies,
these variations are of the order of the error bar, which mainly originates from the
uncertainty inherited from the value of $B_c(\nu)$; hence, nonzero values of $\nu$
do not exhibit distinct universal behavior.
\begin{center}
\begin{table}[ht]
\caption{\label{t1} The critical magnetic field ($B_{c}$) and gap exponent ($\phi$) for different values
of anisotropies. The error bar for all data is $\pm 0.05$.
}\vspace{0.3cm}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
$\nu$ &0.0 &0.1&0.2&0.3&0.4\\
\hline
$B_c$ (critical field)&12.26&13.03&13.88&14.72&15.53\\
$\phi$ (gap exponent)&0.40&0.2&0.2&0.2&0.2\\
\end{tabular}
\end{ruledtabular}
\end{table}
\end{center}
\begin{figure}
\vspace{1cm}
\includegraphics[width=10cm]{fig1.eps}
\caption{\label{fig1} (Color online) The energy gap versus the magnetic field for various anisotropy parameters. The change
of the critical magnetic field (where the gap vanishes) for various anisotropies is remarkable.}
\end{figure}
The long-range ordering can be deduced from the behavior of the spin susceptibility.
The magnetic ordering occurs for the transverse spin components; therefore, the static spin structure factor for the transverse component diverges at the antiferromagnetic wave vector at the quantum critical
point.
The x-component static spin structure factor at momentum $q$ is defined by
\begin{eqnarray}
\chi^{xx}(q)=\langle s^{x}(q)s^{x}(-q)\rangle,
\label{211.7}
\end{eqnarray}
which is given by the following expression,
\begin{eqnarray}
\chi^{xx}(q)=\frac{\pi}{2}[2\sqrt{\frac{A_{q}^{2}}{4\omega_{q}^{2}}-1}+\frac{A_{q}}{\omega_{q}}].
\label{susceptibility}
\end{eqnarray}
Close to the quantum critical point, where the magnon spectrum
vanishes, the dominant contribution is found to be
\begin{equation}
\chi^{xx}(Q_{AF})\approx \pi A^{c}_{Q_{AF}} |B-B_{c}|^{-\phi},
\end{equation}
which shows that the divergence of the magnetic susceptibility follows a scaling relation
with the exponent $\phi$.
The numerical results for $\chi^{xx}(Q_{AF})$ versus the magnetic field have been plotted
in Fig.\ref{fig2}. This plot confirms the presence of antiferromagnetic order at the
critical magnetic field. The divergence of the static structure factor happens at the
different critical fields for various anisotropies ($\nu$) which justifies the previous
results on energy gap.
Both the results on the energy gap and on the transverse structure factor confirm that a small amount of
in-plane anisotropy changes the critical magnetic field considerably. Moreover, the explicit
breaking of symmetry due to the in-plane anisotropy shows up in the gap exponent. Our result
on the scaling behavior of the magnetic order parameter ($\beta$) is limited to the mean-field
approximation, which does not capture its dependence on the anisotropy. However, we expect that
a dependence of $\beta$ on the anisotropy ($\nu$) should appear if the calculation goes
beyond the mean-field approach.
Since the model has two control parameters, $\nu$ and $B$, the universal
behavior should be fixed by the two exponents $\phi$ and $\beta$. In other words, any other
exponent governing a scaling behavior close to the critical field at zero temperature can be expressed
in terms of the obtained exponents ($\phi, \beta$).
\begin{figure}
\vspace{1cm}
\includegraphics[width=10cm]{fig2.eps}
\caption{\label{fig2} (Color online) The (x-component) transverse structure factor at the antiferromagnetic
wave vector versus the magnetic field for different anisotropies. The divergence at the critical
magnetic field justifies the onset of magnetic order.}
\end{figure}
\section{Acknowledgment}
We would like to express our deep gratitude to P. Thalmeier and V. Yushankhai
who originally suggested
this problem and also for their valuable comments and fruitful discussions.
The authors would also like to thank the Physics Department of the Institute for Research
in Fundamental Sciences (IPM) for its hospitality during part of this collaboration.
This work was supported in part by the Center of Excellence in
Complex Systems and Condensed Matter (www.cscm.ir).
\section*{References}
\section{Introduction and Preliminaries}
We denote the set of all bounded linear operators on a Hilbert space $\mathcal{H}$ by $\mathcal{B}\left( \mathcal{H} \right)$. An operator $A\in \mathcal{B}\left( \mathcal{H} \right)$ is said to be positive (denoted by $A\ge 0$) if $\left\langle Ax,x \right\rangle \ge 0$ for all $x\in \mathcal{H}$. If a positive operator is invertible, it is said to be strictly positive and we write $A>0.$
The axiomatic theory of connections and means for pairs of positive matrices has been studied by Kubo and Ando \cite{kubo}. A binary operation $\sigma$ defined on the cone of strictly positive operators is called an operator mean if for $A,B>0,$
\begin{itemize}
\item[(i)] $I\sigma I=I$, where $I$ is the identity operator;
\item[(ii)] ${{C}^{*}}\left( A\sigma B \right)C\le \left( {{C}^{*}}AC \right)\sigma \left( {{C}^{*}}BC \right)$, $\forall C\in\mathcal{B}(\mathcal{H})$;
\item[(iii)] $A_{n}\downarrow A$ and $B_{n}\downarrow B$ imply $A_{n}\sigma B_{n}\downarrow A\sigma B$, where ${{A}_{n}}\downarrow A$ means that ${{A}_{1}}\ge {{A}_{2}}\ldots $ and ${{A}_{n}}\to A$ as $n\to \infty $ in the strong operator topology;
\item[(iv)]
\begin{equation}\label{13}
A\le B\quad \& \quad C\le D\quad\text{ }\Rightarrow\quad \text{ }A\sigma C\le B\sigma D, \forall C,D>0.
\end{equation}
\end{itemize}
For a symmetric operator mean $\sigma $ (in the sense that $A\sigma B=B\sigma A$), a parametrized operator mean ${{\sigma }_{\alpha }}$ ($\alpha \in \left[ 0,1 \right]$) is called an interpolational path for $\sigma $ (or Uhlmann's interpolation for $\sigma $) if it satisfies
\begin{itemize}
\item[(c1)] $A{{\sigma }_{0}}B=A$ (here we recall the convention ${{T}^{0}}=I$ for any positive operator $T$), $A{{\sigma }_{1}}B=B$, and $A{{\sigma }_{\frac{1}{2}}}B=A\sigma B$;
\item[(c2)] $\left( A{{\sigma }_{\alpha }}B \right)\sigma \left( A{{\sigma }_{\beta }}B \right)=A{{\sigma }_{\frac{\alpha +\beta }{2}}}B$ for all $\alpha ,\beta \in \left[ 0,1 \right]$;
\item[(c3)] the map $\alpha \in \left[ 0,1 \right]\mapsto A{{\sigma }_{\alpha }}B$ is norm continuous for each $A$ and $B$.
\end{itemize}
It is straightforward to see that the set of all $\gamma \in \left[ 0,1 \right]$ satisfying
\begin{equation}\label{20}
\left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\gamma }}\left( A{{\sigma }_{\beta }}B \right)=A{{\sigma }_{\left( 1-\gamma \right)\alpha +\gamma \beta }}B
\end{equation}
for all $\alpha ,\beta $ is a convex subset of $\left[ 0,1 \right]$ including 0 and 1. Therefore \eqref{20} is valid for all $\alpha ,\beta ,\gamma \in \left[ 0,1 \right]$ (see \cite[Lemma 1]{fujii}).
Typical interpolational means are so-called power means
\[A{{m}_{\upsilon }}B={{A}^{\frac{1}{2}}}{{\left( \frac{1}{2}\left( I+{{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right)}^{\upsilon }} \right) \right)}^{\frac{1}{\upsilon }}}{{A}^{\frac{1}{2}}},\quad\text{ }-1\le \upsilon \le 1\]
and their interpolational paths are \cite[Theorem 5.24]{mond-pecaric},
\[A{{m}_{\upsilon ,\alpha }}B={{A}^{\frac{1}{2}}}{{\left( \left( 1-\alpha \right)I+\alpha {{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right)}^{\upsilon }} \right)}^{\frac{1}{\upsilon }}}{{A}^{\frac{1}{2}}},\quad\text{ }0\le \alpha \le 1.\]
In particular, we have
\[A{{m}_{1,\alpha }}B=A{{\nabla }_{\alpha }}B=\left( 1-\alpha \right)A+\alpha B,\]
\[A{{m}_{0,\alpha }}B=A{{\sharp}_{\alpha }}B={{A}^{\frac{1}{2}}}{{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right)}^{\alpha }}{{A}^{\frac{1}{2}}},\]
\[A{{m}_{-1,\alpha }}B=A{{!}_{\alpha }}B={{\left( {{A}^{-1}}{{\nabla }_{\alpha }}B \right)}^{-1}}.\]
They are called the weighted arithmetic, weighted geometric, and weighted harmonic interpolations, respectively. It is well-known that
\begin{equation}\label{15}
A{{!}_{\alpha }}B\le A{{\sharp}_{\alpha }}B\le A{{\nabla }_{\alpha }}B,\quad\text{ }0\le \alpha \le 1.
\end{equation}
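A numerical sanity check of \eqref{15} (an illustration only, not needed for the proofs) can be carried out with random positive definite matrices; the fractional matrix powers are computed by eigendecomposition.

```python
import numpy as np

# Sanity check of the weighted harmonic-geometric-arithmetic mean
# inequalities for random positive definite matrices.
rng = np.random.default_rng(0)

def mpow(S, t):
    """Fractional power S^t of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def nabla(A, B, a):                  # weighted arithmetic mean A nabla_a B
    return (1 - a) * A + a * B

def sharp(A, B, a):                  # weighted geometric mean A #_a B
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, a) @ Ah

def bang(A, B, a):                   # weighted harmonic mean A !_a B
    return np.linalg.inv(nabla(np.linalg.inv(A), np.linalg.inv(B), a))

X = rng.standard_normal((3, 3)); A = X @ X.T + np.eye(3)
Y = rng.standard_normal((3, 3)); B = Y @ Y.T + np.eye(3)
a = 0.3
H, G, M = bang(A, B, a), sharp(A, B, a), nabla(A, B, a)
# both differences should be positive semidefinite
print(np.linalg.eigvalsh(G - H).min(), np.linalg.eigvalsh(M - G).min())
```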
In \cite{aujla}, Aujla et al. introduced the notion of operator log-convex functions in the following way: A continuous real function $f:\left( 0,\infty \right)\to \left( 0,\infty \right)$ is called operator log-convex if
\begin{equation}\label{22}
f\left( A{{\nabla }_{\alpha }}B \right)\le f\left( A \right){{\sharp}_{\alpha }}f\left( B \right),\quad\text{ }0\le \alpha \le 1
\end{equation}
for all positive operators $A$ and $B$. After that, Ando and Hiai \cite{1} gave the following characterization of operator monotone decreasing functions: Let $f$ be a continuous non-negative function on $\left( 0,\infty \right)$. Then the following conditions are equivalent:
\begin{itemize}
\item[(a)] $f$ is operator monotone decreasing;
\item[(b)] $f$ is operator log-convex;
\item[(c)] $f\left( A\nabla B \right)\le f\left( A \right)\sigma f\left( B \right)$ for all positive operators $A$, $B$
and for all symmetric operator means $\sigma $.
\end{itemize}
In Theorem \ref{theorem01} below, we provide a more precise estimate than \eqref{22} for operator log-convex functions. As a by-product, we improve both inequalities in \eqref{15}. Additionally, we give a refinement and two reverse inequalities for the triangle inequality.
Our main application of Theorem \ref{theorem01} is a subadditive behavior of operator monotone decreasing functions. Recall that a concave function (not necessarily operator concave) $f:(0,\infty)\to [0,\infty)$ enjoys the subadditive inequality
\begin{equation}\label{concave_subadditive_intro}
f(a+b)\leq f(a)+f(b),\qquad a,b>0.
\end{equation}
Operator concave functions (equivalently, operator monotone) do not enjoy the same subadditive behavior. However, in \cite{ando}
it was shown that an operator concave function $f:(0,\infty)\to (0,\infty)$ satisfies the norm version of \eqref{concave_subadditive_intro} as follows
\begin{equation}\label{subadditive_ando}
|||f(A+B)|||\leq |||f(A)+f(B)|||,
\end{equation}
for positive definite matrices $A,B$ and any unitarily invariant norm $|||\cdot|||$. Later, the authors in \cite{bourin} showed that \eqref{subadditive_ando} is still valid for concave functions $f:(0,\infty)\to (0,\infty)$ (not necessarily operator concave).
We emphasize that \eqref{subadditive_ando} does not hold without the norm. In \cite{aujla1}, it is shown that an operator monotone decreasing function $f:(0,\infty)\to (0,\infty)$ satisfies the subadditive inequality
\begin{equation}\label{aujla_sub_intro}
f(A+B)\leq f(A)\nabla f(B),
\end{equation}
for positive matrices $A,B$.
In Corollary \ref{subadditive_coro}, we present multiple refinements of \eqref{aujla_sub_intro}.
The celebrated Ando's inequality asserts that if $\Phi $ is a positive linear map and $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ are positive operators, then
\begin{equation}\label{23}
\Phi \left( A{{\sharp}_{\alpha }}B \right)\le \Phi \left( A \right){{\sharp}_{\alpha }}\Phi \left( B \right),\quad\text{ }0\le \alpha \le 1.
\end{equation}
Recall that a linear map $\Phi $ is positive if $\Phi \left( A \right)$ is positive whenever $A$ is positive. We improve and extend this result to Uhlmann's interpolation ${{\sigma }_{\alpha \beta }}$ ($0\le \alpha ,\beta \le 1$). Precisely speaking, we prove that
\[\begin{aligned}
\Phi \left( A{{\sigma }_{\alpha \beta }}B \right)&\le \Phi \left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{0}}B \right) \right){{\sigma }_{\alpha }}\Phi \left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{1}}B \right) \right) \\
& \le \Phi \left( A \right){{\sigma }_{\alpha \beta }}\Phi \left( B \right).
\end{aligned}\]
This result is included in Section \ref{s2}.
\section{On the operator log-convexity}\label{s1}
Our first main result in this paper reads as follows.
\begin{theorem}\label{theorem01}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators and $0\le \alpha \le 1$. If $f$ is a non-negative operator monotone decreasing function, then
\begin{equation}\label{19}
f\left( A{{\nabla }_{\alpha }}B \right)\le f\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}A \right){{\sharp}_{\alpha }}f\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}B \right)\le f\left( A \right){{\sharp}_{\alpha }}f\left( B \right)
\end{equation}
for any $0\le \beta \le 1$.
\end{theorem}
\begin{proof}
Assume $f$ is operator monotone decreasing. We start with the useful identity
\begin{equation}\label{12}
A{{\nabla }_{\alpha }}B=\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }} A\right){{\nabla }_{\alpha }}\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}B\right),
\end{equation}
which follows from \eqref{20} with $A=A\nabla_0B$ and $B=A\nabla_1B$. Then we have
\begin{align}
f\left( A{{\nabla }_{\alpha }}B \right)&=f\left( \left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}A\right){{\nabla }_{\alpha }}\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}B\right) \right) \nonumber\\
& \le f\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}A\right){{\sharp}_{\alpha }}f\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}B \right) \label{3}\\
& \le \left( f\left( A{{\nabla }_{\alpha }}B \right){{\sharp}_{\beta }}f(A) \right){{\sharp}_{\alpha }}\left( f\left( A{{\nabla }_{\alpha }}B \right){{\sharp}_{\beta }}f(B) \right) \label{4}\\
& \le \left(\left( f\left( A \right){{\sharp}_{\alpha }}f\left( B \right) \right) {{\sharp}_{\beta }}f(A)\right){{\sharp}_{\alpha }}\left( \left( f\left( A \right){{\sharp}_{\alpha }}f\left( B \right) \right) {{\sharp}_{\beta }}f(B)\right) \label{5}\\
& =\left(\left( f\left( A \right){{\sharp}_{\alpha }}f\left( B \right) \right) {{\sharp}_{\beta}}\left(f(A)\sharp_0f(B)\right)\right){{\sharp}_{\alpha }}\left( \left( f\left( A \right){{\sharp}_{\alpha }}f\left( B \right) \right) {{\sharp}_{\beta }}\left(f(A)\sharp_1f(B)\right)\right) \label{6}\\
& =f\left( A \right){{\sharp}_{(1-\beta)\alpha +\beta\alpha}}f\left( B \right) \label{7}\\
& =f\left( A \right){{\sharp}_{\alpha }}f\left( B \right) \nonumber
\end{align}
where the inequalities \eqref{3}, \eqref{4} and \eqref{5} follow directly from the log-convexity assumption on $f$ together with \eqref{13}, the equalities \eqref{6} and \eqref{7} are obtained from the property (c1) and \eqref{20}, respectively.
This completes the proof.
\end{proof}
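As a quick numerical sanity check of the chain \eqref{19} (illustrative only; the helper names below are our own), one can take the operator monotone decreasing function $f(x)=x^{-1}$ and random positive definite matrices, testing the Loewner order via the smallest eigenvalue of the difference:

```python
import numpy as np

def mpow(X, t):
    # Spectral power X^t of a symmetric positive definite matrix
    w, V = np.linalg.eigh(X)
    return (V * w**t) @ V.T

def sharp(A, B, t):
    # Weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, t) @ Ah

def nabla(A, B, t):
    # Weighted arithmetic mean A nabla_t B
    return (1 - t) * A + t * B

def leq(X, Y, tol=1e-8):
    # Loewner order: X <= Y iff Y - X is positive semidefinite
    return bool(np.min(np.linalg.eigvalsh(Y - X)) >= -tol)

rng = np.random.default_rng(1)
n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + np.eye(n)   # random positive definite operators
B = M2 @ M2.T + np.eye(n)

alpha, beta = 0.3, 0.7
f = np.linalg.inv            # f(x) = 1/x is operator monotone decreasing

lhs = f(nabla(A, B, alpha))
mid = sharp(f(nabla(nabla(A, B, alpha), A, beta)),
            f(nabla(nabla(A, B, alpha), B, beta)), alpha)
rhs = sharp(f(A), f(B), alpha)
```

Such a check does not replace the proof, but it catches transcription errors in the weights and the order of the mean's arguments.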
As promised in the introduction, we present the following refinement of Aujla's inequality \eqref{aujla_sub_intro}, as a main application of Theorem \ref{theorem01}.
\begin{corollary}\label{subadditive_coro}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators. If $f$ is a non-negative operator monotone decreasing function, then
\begin{align*}
f(A+B)&\leq f(3A\nabla B)\sharp f(A\nabla 3B)\\
&\leq f(2A)\sharp f(2B)\\
&\leq f(2A)\nabla f(2B)\\
&\leq f(A)\nabla f(B).
\end{align*}
\end{corollary}
\begin{proof}
In Theorem \ref{theorem01}, let $\alpha=\beta=\frac{1}{2}$ and replace $(A,B)$ by $(2A,2B).$ This implies the first and second inequalities immediately. The third inequality follows from the second inequality in \eqref{15}, while the last inequality follows from properties of operator means and the fact that $f$ is operator monotone decreasing.
\end{proof}
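The whole corollary chain can be spot-checked numerically with $f(x)=x^{-1}$ (an illustrative sketch; the helper names are ours). Note that $3A\nabla B=(3A+B)/2$ and $A\nabla 3B=(A+3B)/2$:

```python
import numpy as np

def mpow(X, t):
    # Spectral power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(X)
    return (V * w**t) @ V.T

def sharp(A, B):
    # (Unweighted) geometric mean A # B
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, 0.5) @ Ah

def leq(X, Y, tol=1e-8):
    # Loewner order: X <= Y iff Y - X is positive semidefinite
    return bool(np.min(np.linalg.eigvalsh(Y - X)) >= -tol)

rng = np.random.default_rng(2)
n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + np.eye(n)
B = M2 @ M2.T + np.eye(n)
inv = np.linalg.inv          # f(x) = 1/x, operator monotone decreasing

t1 = inv(A + B)
t2 = sharp(inv((3 * A + B) / 2), inv((A + 3 * B) / 2))  # f(3A nabla B) # f(A nabla 3B)
t3 = sharp(inv(2 * A), inv(2 * B))
t4 = (inv(2 * A) + inv(2 * B)) / 2
t5 = (inv(A) + inv(B)) / 2
```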
\begin{remark}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators and $0\le \alpha \le 1$. If $f$ is a function satisfying
\begin{equation}\label{9}
f\left( A{{\nabla }_{\alpha }}B \right)\le f\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }} A\right){{\sharp}_{\alpha }}f\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}B\right),
\end{equation}
for $0\leq \beta\leq 1,$ then $f$ is operator monotone decreasing. This follows by taking $\beta =1$ in \eqref{9} and the equivalence between (a) and (b) above.
\end{remark}
\begin{corollary}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators. If $g$ is a non-negative operator monotone increasing function, then
\[g\left( A{{\nabla }_{\alpha }}B \right)\ge g\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}A \right){{\sharp}_{\alpha }}g\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}B \right)\ge g\left( A \right){{\sharp}_{\alpha }}g\left( B \right)\]
for any $0\le \alpha ,\beta \le 1$.
\end{corollary}
\begin{proof}
It was shown in \cite{1} that operator monotonicity of $g$ is equivalent to operator log-concavity (i.e., $g\left( A{{\nabla }_{\alpha }}B \right)\ge g\left( A \right){{\sharp}_{\alpha }}g\left( B \right)$). The proof goes in a similar way to the proof of Theorem \ref{theorem01}.
\end{proof}
\begin{remark}
In \cite[Remark 2.6]{1}, we have, for any non-negative operator monotone decreasing function $f$, any operator mean $\sigma$ and $A,B>0,$
\begin{equation}\label{16}
f(A\nabla_{\alpha}B)\le f(A)!_{\alpha}f(B)\le f(A) \sigma f(B),\; 0\leq \alpha\leq 1.
\end{equation}
Better estimates than \eqref{16} may be obtained as follows, where $0\leq \alpha, \beta\leq 1,$
\begin{align}
f\left( A{{\nabla }_{\alpha }}B \right)&=f\left( \left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}A\right){{\nabla }_{\alpha }}\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}B\right) \right) \nonumber\\
& \le f\left( \left( A{{\nabla }_{\alpha }}B \right) {{\nabla }_{\beta }}A\right){{!}_{\alpha }}f\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}B \right) \nonumber\\
& \le \left( f\left( A{{\nabla }_{\alpha }}B \right){{!}_{\beta }}f(A) \right){{!}_{\alpha }}\left( f\left( A{{\nabla }_{\alpha }}B \right){{!}_{\beta }}f(B) \right) \nonumber\\
& \le \left(\left( f\left( A \right){{!}_{\alpha }}f\left( B \right) \right) {{!}_{\beta }}f(A)\right){{!}_{\alpha }}\left( \left( f\left( A \right){{!}_{\alpha }}f\left( B \right) \right) {{!}_{\beta }}f(B)\right) \nonumber\\
& =\left(\left( f\left( A \right){{!}_{\alpha }}f\left( B \right) \right) {{!}_{\beta}}\left(f(A)!_0f(B)\right)\right){{!}_{\alpha }}\left( \left( f\left( A \right){{!}_{\alpha }}f\left( B \right) \right) {{!}_{\beta }}\left(f(A)!_1f(B)\right)\right) \nonumber\\
& =f\left( A \right){{!}_{(1-\beta)\alpha +\beta\alpha}}f\left( B \right) \nonumber\\
& =f\left( A \right){{!}_{\alpha }}f\left( B \right) \nonumber\\
& \le f\left( A \right){{\sigma}}f\left( B \right) \nonumber
\end{align}
\end{remark}
In the following we improve the well-known weighted operator arithmetic-geometric-harmonic mean inequalities \eqref{15}.
\begin{theorem}\label{amgm_ref}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators. Then
\[\begin{aligned}
A{{!}_{\alpha }}B&\le \left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}A \right){{!}_{\alpha }}\left( \left( A{{\sharp}_{\alpha }}B \right) {{\sharp}_{\beta }}B\right) \\
& \le A{{\sharp}_{\alpha }}B \\
& \le \left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}A \right){{\nabla }_{\alpha }}\left( \left( A{{\sharp}_{\alpha }}B \right) {{\sharp}_{\beta }}B\right) \\
& \le A{{\nabla }_{\alpha }}B
\end{aligned}\]
for $0\le \alpha ,\beta \le 1$.
\end{theorem}
\begin{proof}
It follows from the proof of Theorem \ref{theorem01} that
\begin{equation}\label{21}
A{{\sharp}_{\alpha }}B=\left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}A \right){{\sharp}_{\alpha }}\left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}B \right),\quad\text{ }0\le \alpha ,\beta \le 1.
\end{equation}
Thus, we have
\begin{align}
A{{\sharp}_{\alpha }}B&=\left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}A \right){{\sharp}_{\alpha }}\left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}B \right) \nonumber\\
& \le \left( \left( A{{\sharp}_{\alpha }}B \right) {{\sharp}_{\beta }}A\right){{\nabla }_{\alpha }}\left( \left( A{{\sharp}_{\alpha }}B \right) {{\sharp}_{\beta }}B\right) \label{10}\\
& \le \left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla}_{\beta }} A \right) \nabla_{\alpha }\left( \left( A{{\nabla}_{\alpha }}B \right){{\nabla }_{\beta }}B \right) \label{11}\\
& =\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla}_{\beta }} \left(A\nabla_0B\right) \right) \nabla_{\alpha }\left( \left( A{{\nabla}_{\alpha }}B \right){{\nabla }_{\beta }}\left(A\nabla_1B\right) \right) \nonumber \\
& =A{{\nabla }_{\alpha }}B \label{11final}
\end{align}
where in the inequalities \eqref{10} and \eqref{11} we used the weighted operator arithmetic-geometric mean inequality and the equality \eqref{11final} follows from \eqref{20}. This proves the third and fourth inequalities.
As for the first and second inequalities, replace $A$ and $B$ by $A^{-1}$ and $B^{-1}$, respectively, in the third and fourth inequalities
$$
A{{\sharp}_{\alpha }}B\le \left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}A \right){{\nabla }_{\alpha }}\left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}B \right)\le A{{\nabla }_{\alpha }}B
$$
which we have just shown. Then take the inverse to obtain the required results (thanks to the identity $A^{-1}\sharp_{\alpha}B^{-1}=(A\sharp_{\alpha}B)^{-1}$). This completes the proof.
\end{proof}
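The four inequalities of Theorem \ref{amgm_ref} admit a direct numerical check (illustrative only; the helper names are our own), with the weighted harmonic mean computed from inverses:

```python
import numpy as np

def mpow(X, t):
    w, V = np.linalg.eigh(X)
    return (V * w**t) @ V.T

def sharp(A, B, t):
    # Weighted geometric mean A #_t B
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, t) @ Ah

def har(A, B, t):
    # Weighted harmonic mean A !_t B = ((1-t)A^{-1} + t B^{-1})^{-1}
    return np.linalg.inv((1 - t) * np.linalg.inv(A) + t * np.linalg.inv(B))

def nabla(A, B, t):
    return (1 - t) * A + t * B

def leq(X, Y, tol=1e-8):
    # Loewner order via the smallest eigenvalue of the difference
    return bool(np.min(np.linalg.eigvalsh(Y - X)) >= -tol)

rng = np.random.default_rng(3)
n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + np.eye(n)
B = M2 @ M2.T + np.eye(n)

alpha, beta = 0.6, 0.25
G = sharp(A, B, alpha)
X, Y = sharp(G, A, beta), sharp(G, B, beta)   # (A #_a B) #_b A and (A #_a B) #_b B

# The chain A !_a B <= X !_a Y <= A #_a B <= X nabla_a Y <= A nabla_a B
chain = [har(A, B, alpha), har(X, Y, alpha), G, nabla(X, Y, alpha), nabla(A, B, alpha)]
```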
\begin{remark}
We notice that similar inequalities may be obtained for any symmetric mean $\sigma$, as follows. First, observe that if $\sigma,\tau$ are two symmetric means such that $\sigma\leq \tau$, then the set $T=\{t:0\leq t\leq 1\;{\text{and}}\;\sigma_t\leq \tau_t\}$ is convex. Indeed, assume $t_1,t_2\in T$. Then for the positive operators $A,B$, we have
\begin{align*}
A\sigma_{\frac{t_1+t_2}{2}}B&=(A\sigma_{t_1}B)\sigma (A\sigma_{t_2}B)\\
&\leq (A\tau_{t_1}B)\tau (A\tau_{t_2}B)\\
&=A\tau_{\frac{t_1+t_2}{2}}B,
\end{align*}
where we have used the assumptions $\sigma\leq \tau$ and $t_1,t_2\in T.$ This proves that $T$ is convex, and hence $T=[0,1]$ since $0,1\in T$, trivially. Thus, we have shown that if $\sigma\leq \tau$ then $\sigma_{\alpha}\leq \tau_{\alpha},$ for all $0\leq \alpha\leq 1.$ Now noting that
$$A\sigma_{\alpha} B= \left((A\sigma_{\alpha}B)\sigma_{\beta}A\right)\sigma_{\alpha}\left((A\sigma_{\alpha}B)\sigma_{\beta}B\right),$$ and proceeding as in Theorem \ref{theorem01}, we obtain
\begin{equation}
f\left( A{{\nabla }_{\alpha }}B \right)\le f\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}A \right){{\sigma}_{\alpha }}f\left( \left( A{{\nabla }_{\alpha }}B \right){{\nabla }_{\beta }}B \right)\le f\left( A \right){{\sigma}_{\alpha }}f\left( B \right)
\end{equation}
for any $0\le \beta \le 1$ and the operator log-convex function $f$. This provides a more precise estimate than $(c)$ above.
On the other hand, proceeding as in Theorem \ref{amgm_ref}, we obtain
\begin{equation}
A\sigma_{\alpha} B\leq \left((A\sigma_{\alpha}B)\sigma_{\beta}A\right)\nabla_{\alpha}\left((A\sigma_{\alpha}B)\sigma_{\beta}B\right)\leq A\nabla_{\alpha}B,
\end{equation}
observing that $\sigma_{\alpha}\leq \nabla_{\alpha}.$ This provides a refinement of the latter basic inequality.
\end{remark}
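The argument above can be spot-checked for a concrete symmetric mean; the sketch below (illustrative, with helper names of our own) takes $\sigma$ to be the harmonic mean, whose weighted version is explicit, and also verifies the interpolation identity used in the proof:

```python
import numpy as np

def har(A, B, t):
    # Weighted harmonic mean A !_t B = ((1-t)A^{-1} + t B^{-1})^{-1}
    return np.linalg.inv((1 - t) * np.linalg.inv(A) + t * np.linalg.inv(B))

def nabla(A, B, t):
    return (1 - t) * A + t * B

def leq(X, Y, tol=1e-8):
    # Loewner order: X <= Y iff Y - X is positive semidefinite
    return bool(np.min(np.linalg.eigvalsh(Y - X)) >= -tol)

rng = np.random.default_rng(4)
n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = M1 @ M1.T + np.eye(n)
B = M2 @ M2.T + np.eye(n)

alpha, beta = 0.35, 0.8
m = har(A, B, alpha)                    # A sigma_alpha B with sigma = !
X, Y = har(m, A, beta), har(m, B, beta)

# Interpolation identity: (X) sigma_alpha (Y) recovers A sigma_alpha B
identity_holds = np.allclose(har(X, Y, alpha), m)
refined = nabla(X, Y, alpha)            # the middle term of the refinement
```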
Taking into account \eqref{12}, it follows that
\begin{equation*}
A+B=\alpha A+\left( 1-\alpha \right)\left( A\nabla B \right)+\alpha B+\left( 1-\alpha \right)\left( A\nabla B \right).
\end{equation*}
As a consequence of this identity, we have the following refinement of the well-known triangle inequality
\[\left\| A+B \right\|\le \left\| A \right\|+\left\| B \right\|.\]
\begin{corollary}\label{triangle_ineq01}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$. Then, for $0\le\alpha\le 1$,
\[\left\| A+B \right\|\le \left\| \alpha A+\left( 1-\alpha \right)\left( A\nabla B \right) \right\|+\left\| \alpha B+\left( 1-\alpha \right)\left( A\nabla B \right) \right\|\le \left\| A \right\|+\left\| B \right\|.\]
\end{corollary}
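A numerical illustration with the spectral norm (restricted here to $0\le\alpha\le1$, for which the right-hand bound certainly holds; the matrices and names below are our own) confirms both the exact splitting of $A+B$ and the two norm inequalities:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))   # arbitrary (not necessarily positive) operators
B = rng.standard_normal((n, n))
norm = lambda X: np.linalg.norm(X, 2)   # operator (spectral) norm

alpha = 0.4
P = alpha * A + (1 - alpha) * (A + B) / 2   # the two middle terms
Q = alpha * B + (1 - alpha) * (A + B) / 2

split_exact = np.allclose(P + Q, A + B)     # the underlying identity
lower = norm(A + B)
middle = norm(P) + norm(Q)
upper = norm(A) + norm(B)
```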
\begin{remark}
Using Corollary \ref{triangle_ineq01}, we obtain the reverse triangle inequalities
$$
\left\| A \right\| -\left\| B \right\| \le\frac{1}{2}\left(\left\| A \nabla_{-\alpha}(2B)\right\| +\left\| A \nabla_{\alpha}(2B)\right\|-2\left\| B \right\| \right)\le \left\| A-B \right\|
$$
and
$$
\left\| B \right\| -\left\| A \right\| \le \frac{1}{2}\left(\left\| B \nabla_{-\alpha}(2A)\right\| +\left\| B \nabla_{\alpha}(2A)\right\|-2\left\| A \right\| \right)\le \left\| A-B \right\|,
$$
where $0\le\alpha\le 1.$
\end{remark}
\section{A glimpse at Ando's inequality}\label{s2}
In this section, we present some versions and improvements of Ando's inequality \eqref{23}.
\begin{theorem}\label{2}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators and $\Phi $ be a positive linear map. Then for any $0\le \alpha ,\beta \le 1$,
\begin{equation}\label{17}
\Phi \left( A{{\sharp}_{\alpha }}B \right)\le \Phi \left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}A \right){{\sharp}_{\alpha }}\Phi \left( \left( A{{\sharp}_{\alpha }}B \right){{\sharp}_{\beta }}B \right)\le \Phi \left( A \right){{\sharp}_{\alpha }}\Phi \left( B \right).
\end{equation}
In particular,
\begin{equation}\label{18}
\begin{aligned}
\sum\limits_{j=1}^{m}{{{A}_{j}}{{\sharp}_{\alpha }}{{B}_{j}}}&\le \left( \sum\limits_{j=1}^{m}{\left( {{A}_{j}}{{\sharp}_{\alpha }}{{B}_{j}} \right){{\sharp}_{\beta }} {{A}_{j}} } \right){{\sharp}_{\alpha }}\left( \sum\limits_{j=1}^{m}{ \left( {{A}_{j}}{{\sharp}_{\alpha }}{{B}_{j}} \right){{\sharp}_{\beta }} {{B}_{j}} } \right) \\
& \le \left( \sum\limits_{j=1}^{m}{{{A}_{j}}} \right){{\sharp}_{\alpha }}\left( \sum\limits_{j=1}^{m}{{{B}_{j}}} \right).
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
We omit the proof of \eqref{17} because it is proved in a way similar to that of \eqref{19} in Theorem \ref{theorem01}. Now, if in \eqref{17} we take $\Phi :{{M}_{mk}}\left( \mathbb{C} \right)\to {{M}_{k}}\left( \mathbb{C} \right)$ defined by
\[\Phi \left( \left( \begin{matrix}
{{X}_{1,1}} & {} & {} \\
{} & \ddots & {} \\
{} & {} & {{X}_{m,m}} \\
\end{matrix} \right) \right)={{X}_{1,1}}+\ldots +{{X}_{m,m}}\]
and apply $\Phi $ to $A={\text{diag}}\left( {{A}_{1}},\ldots ,{{A}_{m}} \right)$ and $B={\text{diag}}\left( {{B}_{1}},\ldots ,{{B}_{m}} \right)$, we get \eqref{18}.
\end{proof}
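The block-diagonal specialization \eqref{18} is easy to test numerically; the sketch below (helper names ours, purely illustrative) checks it for a few random positive definite pairs:

```python
import numpy as np

def mpow(X, t):
    w, V = np.linalg.eigh(X)
    return (V * w**t) @ V.T

def sharp(A, B, t):
    # Weighted geometric mean A #_t B
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, t) @ Ah

def leq(X, Y, tol=1e-8):
    # Loewner order: X <= Y iff Y - X is positive semidefinite
    return bool(np.min(np.linalg.eigvalsh(Y - X)) >= -tol)

rng = np.random.default_rng(6)
n, m = 3, 4              # matrix size and number of pairs
As, Bs = [], []
for _ in range(m):
    M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    As.append(M1 @ M1.T + np.eye(n))
    Bs.append(M2 @ M2.T + np.eye(n))

alpha, beta = 0.45, 0.3
Gs = [sharp(Aj, Bj, alpha) for Aj, Bj in zip(As, Bs)]

lhs = sum(Gs)
mid = sharp(sum(sharp(Gj, Aj, beta) for Gj, Aj in zip(Gs, As)),
            sum(sharp(Gj, Bj, beta) for Gj, Bj in zip(Gs, Bs)), alpha)
rhs = sharp(sum(As), sum(Bs), alpha)
```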
In the following, a more general form of \eqref{17} will be shown.
\begin{theorem}
Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$ be positive operators and $\Phi $ be any positive linear map. Then we have the following inequalities for Uhlmann's interpolation ${{\sigma }_{\alpha \beta }}$ and $0\le \alpha ,\beta \le 1$,
\begin{equation}\label{theorem3.2}
\begin{aligned}
\Phi \left( A{{\sigma }_{\alpha \beta }}B \right)&\le \Phi \left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{0}}B \right) \right){{\sigma }_{\alpha }}\Phi \left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{1}}B \right) \right) \\
& \le \Phi \left( A \right){{\sigma }_{\alpha \beta }}\Phi \left( B \right).
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Thanks to \eqref{20}, we obviously have
\[\begin{aligned}
&\left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{0}}B \right) \right){{\sigma }_{\alpha }}\left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{1}}B \right) \right)\\
& =\left( A{{\sigma }_{\alpha \left( 1-\beta \right)}}B \right){{\sigma }_{\alpha }}\left( A{{\sigma }_{\alpha \left( 1-\beta \right)+\beta }}B \right) \\
&= A{{\sigma }_{\alpha \beta }}B.
\end{aligned}\]
Now, the desired result follows directly from the above identities.
\end{proof}
\begin{remark}
From simple calculations, we have the following inequalities for positive operators $A,B\in \mathcal{B}\left( \mathcal{H} \right)$, any positive linear map $\Phi$ and $0\le \alpha ,\beta,\gamma,\delta \le 1$,
\begin{equation}\label{14}
\begin{aligned}
\Phi \left( A{{\sigma }_{\alpha \left( 1-\beta \right)+\beta \left( \left( 1-\alpha \right)\gamma +\alpha \delta \right)}}B \right)&\le \Phi \left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{\gamma }}B \right) \right){{\sigma }_{\alpha }}\Phi \left( \left( A{{\sigma }_{\alpha }}B \right){{\sigma }_{\beta }}\left( A{{\sigma }_{\delta }}B \right) \right) \\
& \le \Phi \left( A \right){{\sigma }_{\alpha \left( 1-\beta \right)+\beta \left( \left( 1-\alpha \right)\gamma +\alpha \delta \right)}}\Phi \left( B \right).
\end{aligned}
\end{equation}
Evidently, \eqref{14} reduces to \eqref{theorem3.2} when $\gamma =0$ and $\delta =1$.
\end{remark}
\section{Introduction}
Dynamo theory as an explanation for the magnetic fields in galaxies, including the Milky Way, has a long history both observationally and theoretically. Recent observational reviews and discussions of the relevant theory may be found in \cite{Beck2015} and \cite{Kr2015}. The textbook \cite{KF2015} discusses the theory and observations with abundant references. \cite{Black2015} discusses modern theoretical developments.
During the past few years the members of the CHANGES survey \cite{WI2015} have systematically observed the halos of edge-on galaxies using the scaled configurations of the JVLA (Jansky Very Large Array). These observations have extended our knowledge of scale heights, cosmic rays and magnetic fields in the halos of spiral galaxies. The discovery of lagging halos (e.g. \citet{R2000},\citet{H2007}) has also led to the suggestion that the halo magnetic field is coupled to the intergalactic medium \cite{HI2016}. That model is in fact a type of $\alpha-\omega$ dynamo with diffusivity, but the $\alpha^2$ dynamo was omitted in that paper. The fields of the present paper can fill that gap.
Over the same period increased sophistication in decoding observed rotation measures (RM) has become common, based on rotation measure synthesis \cite{BdeB2005}, \cite{Heald2009},\cite{DamSegov2016}. This has led to the study of rather complicated magnetic geometry in order to connect the disc and halo magnetic fields of a spiral galaxy (see e.g. \citet{FT2014} and references, for empirical fits ).
In this paper and the following we undertake a semi-analytic study of the magnetic fields in the discs and halos of spiral galaxies. We use the classic mean field dynamo theory based on the `$\alpha-\omega$' and the ` $\alpha^2$' terms. This is of course far from innovative, but by assuming the steady state, and scale invariance (and in this paper axial symmetry), we are able to simplify the calculations that yield the mean magnetic field. This leads to definite global qualitative and quantitative predictions that may be compared to observational data, such as the well-established `X' polarization pattern \cite{Kr2009}.
By `halo' we mean the halo as defined by cosmic ray particles and magnetic fields. Our approximate solutions require us to be within several kiloparsecs of the central galactic plane, although the height increases with galactic radius. Thus at a radius of $10$ kiloparsecs, our approximations should be comfortably qualitatively correct in the region $1-3$ kiloparsecs. We do find some analytic `toy' models that are valid at greater heights. These serve mainly to justify our approximations, including the critical rotation measure sign change in each quadrant. This effect is related to the `mixed parity' solutions of earlier work (see discussion and references).
Even with the steady state assumption in axial symmetry, the full dynamo problem coupled to realistic dynamics remains a formidable theoretical problem.
In this paper we simplify the problem further by seeking scale invariant solutions. We use a particular method that systematizes the procedure (e.g. \citet{CH1991}, \citet{Hen2015}), but this is not necessary. In addition we allow the scale invariance to determine the spatial variation of the velocity field. We do not solve independently for the equations of motion so that the velocity amplitudes remain simple parameters. This allows a conveniently brief survey of the effects of various flows on the basic $\alpha^2$ and $\alpha-\omega$ turbulent dynamos and their observational consequences.
The assumption of scale invariance is made essentially for simplicity. However it is well known (e.g. \citet{Hen2015}) that scale invariance arises asymptotically in many physical systems, once away from initial conditions and boundaries. This is likely to apply to the disc of a spiral galaxy beyond the bulge. Moreover the implied power law radial dependences are quite natural and flexible, if various scale invariant `classes' and different scale heights parallel and perpendicular to the galactic plane are considered. Different `classes' are physically motivated by global conservation laws. For example when the class $a=2$ is considered, there is a global integral of Dimension equal to that of specific angular momentum (angular momentum in a galaxy of fixed mass). Should we choose the class $a=3$, then there is a global constant of Dimension equal to that of magnetic flux. This does not automatically exclude local sources of mean magnetic field.
In the case of the pure $\alpha^2$ dynamo when ${\bf v}={\bf 0}$ in the pattern frame, our results are strictly independent of the dynamics. This is also the case should ${\bf v}\parallel {\bf B}$, where both ${\bf v}$ and ${\bf B}$ are mean fields. Such a limit allows direct contact with the halo lag model of \cite{HI2016}, wherein the only $\alpha-\omega$ effect is produced by the differential boundary conditions between the disc and infinity.
The form of the helicity and diffusivity coefficients produced by the sub-scale turbulence is also normally fixed by the assumption of scale invariance, although we suggest a generalization for subsequent investigation. We are forced to make a `near disc' approximation when dealing with the dependence on height above the plane in our general treatment. Fortunately this is a region of observational interest. Moreover the key result is confirmed by an analytic solution that holds at all heights.
This paper examines axially symmetric solutions (i.e. $m=0$ in a modal analysis), but a companion paper \cite{Hen2017} uses the same methods on higher order modes based on logarithmic spirals. The total halo magnetic field must in general be a combination of the axially symmetric mode with one or more spiral modes, which introduce their own (probably weaker) quadrantal sign changes.
\section{ Formulation of Scale Invariant Equations}
We represent the magnetic field in terms of a vector potential. We ignore any conservative electric field which would require charge separation in the plasma. Then the traditional (for a steady state, we need not worry about quenching) dynamo equation for the vector potential reduces to
\begin{equation}
\alpha_d\nabla\wedge{\bf A}-\eta\nabla\wedge\nabla\wedge{\bf A}+{\bf v}\wedge \nabla\wedge {\bf A}={\bf 0},\label{eq:dynamoI}
\end{equation}
where a scaled magnetic field is given by
\begin{equation}
{\bf b}=\nabla\wedge {\bf A},\label{eq:B}
\end{equation}
and $\alpha_d$, $\eta$ are the sub-scale `helicity' and diffusion coefficients respectively.
For Dimensional simplicity the scaled magnetic field is taken as
\begin{equation}
{\bf b}\equiv \frac{{\bf B}}{\sqrt{4\pi\rho}},\label{eq:b}
\end{equation}
where $\rho$ is some fiducial density. This gives ${\bf b}$ the Dimension of velocity and so ${\bf A}$ has the Dimension of specific angular momentum.
The formula for the vector potential requires neglecting an electrostatic field that would result from large scale charge separation. It seems unlikely that a large scale electrostatic field should be important. Working with this once-integrated form has the additional benefit of allowing the `helicity' $\alpha_d$ and the diffusion coefficient $\eta$ to vary with position without adding extra terms.
Because the vector potential enters equation (\ref{eq:dynamoI}) only as the curl, it is evident that these equations may also be written directly in terms of ${\bf b}$. However the magnetic field equations may be written `ab initio' (see Appendix B) without making the vector potential substitution. The solenoidal condition then becomes a second constraint on the mean magnetic field.
If we suppose that either ${\bf v} \parallel {\bf B}$ or that ${\bf v}={\bf 0}$, or that the spatial dependence of ${\bf v}$ is to be given by the scale invariance (to within arbitrary factors that appear as parameters), then these equations are complete for the magnetic field given suitable boundary conditions. Nevertheless we discuss examples in which the presence of ${\bf v}$ is essential to the existence of a solution.
The scale invariance hypothesis requires (e.g. \citet{Hen2015}) writing the invariants\footnote{These are $Z$, which fixes the conical symmetry, plus the barred quantities, although $Z$ varies from cone to cone as do the quantities that depend on it.} and the variables $\{r,z\}$ as
\begin{equation}
\delta r =e^{\delta R},\qquad \delta z=Ze^{\delta R},\label{eq:rz}
\end{equation}
and
\begin{eqnarray}
{\bf A}&=&\bar {\bf A}(Z)e^{(2\delta-\alpha)R},\nonumber\\
{\bf v}&=&\bar{\bf v}(Z)e^{(\delta-\alpha)R},\nonumber\\
\alpha_d&=&\bar\alpha_d(Z)e^{(\delta-\alpha)R},\label{eq:Scinv}\\
\eta&=& \bar\eta(Z)e^{(2\delta-\alpha)R},\nonumber\\
\Delta&\equiv& \frac{\bar\alpha_d}{\bar\eta\delta}.\nonumber
\end{eqnarray}
The parameter $\Delta$ is a kind of sub-scale Reynolds number. It is the only parameter that appears in the equations beyond the velocity field and the similarity `class' as introduced below.
The scale invariance is imposed by assuming that the barred quantities are independent of $R$.
The quantities $\alpha$ and $\delta$ are arbitrary reciprocal temporal and spatial Units, and the similarity class (e.g. \citet{Hen2015}) is
\begin{equation}
a\equiv \frac{\alpha}{\delta}.\label{eq:a}
\end{equation}
In terms of the cylindrical variables $\{z,r\}$ the invariant $Z$ has the value
\begin{equation}
Z=\frac{z}{r},\label{eq:Z}
\end{equation}
which is also the tangent of the angle between the radius to any point in the system and the equatorial plane.
Proceeding with the scale-invariant analysis of equation (\ref{eq:dynamoI}) , we let
\begin{equation}
y\equiv \bar A'_r-(2-a)\bar A_z+Z\bar A'_z=\frac{\bar b_\phi}{\delta},\label{eq:y}
\end{equation}
for notational convenience. The magnetic field is given by ($\delta$ is the average reciprocal scale if different vertical and radial scales are used)
\begin{equation}
{\bf b}={\bar{\bf b}}e^{(\delta-\alpha)R},\label{eq:barb}
\end{equation}
and the components of $\bar{\bf b}$ in addition to $b_\phi$ are
\begin{eqnarray}
\frac{\bar b_r}{\delta}&=&-\partial_Z\bar A_\phi,\nonumber\\
\frac{\bar b_z}{\delta}&=& (3-a)\bar A_\phi-Z\bar A'_\phi.\label{eq:bfield}
\end{eqnarray}
The equations (\ref{eq:dynamoI}) are now reduced to three ordinary linear equations of the form
\begin{eqnarray}
0&=&\hskip-0.75em-\bar A'_\phi\Delta +\bar u_\phi((3-a)\bar A_\phi-Z\bar A'_\phi)-y\bar u_z+y',\label{eq:Ar}\\
0&= &\hskip -0.75em (1+Z^2)\bar A''_\phi-\bar u_z\bar A'_\phi+\left((1-a)-\bar u_r\right)(3-a)\bar A_\phi+y\Delta+\bar u_rZ\bar A'_\phi-(3-2a)Z\bar A'_\phi,\label{eq:Aphi}\\
0&=&\hskip -0.75em\bar u_\phi\bar A'_\phi+((3-a)\bar A_\phi-Z\bar A'_\phi)\Delta+y(\bar u_r-(2-a))+Zy',\label{eq:Az}
\end{eqnarray}
where we have set
\begin{equation}
{\bf\bar u}=\frac{{\bf \bar v}}{\bar \eta\delta},\label{eq:u}
\end{equation}
and the prime denotes differentiation with respect to $Z\equiv z/r$. The only change in these equations if different scales in the radial and vertical directions are used is that $a\equiv \alpha/\delta=2\alpha/(\delta_\perp+\delta_\parallel)$.
These equations are reduced to ordinary equations in $Z$ because of the assumption of scale invariance. They are not commonly available so we give them explicitly here. They have the peculiarity of being overdetermined due to the scale invariance unless the sub-scale `Reynolds number' $\Delta$ (essentially the dynamo number, \citet{B2014}) is a function of $Z$. They then cease to be linear equations, so we leave them to other work. The overdetermination must be resolved by imposing conditions on the parameters such that two of the equations become identical. It is important to realize that even when a non-trivial solution may be found for an appropriate choice of parameters, the solution is not generally unique because of the overdetermination. Other restrictions on the parameters lead to different solutions, as we show in our examples.
It is also useful to have the scale invariant equations in terms of the reduced magnetic field ${\bf b}$ (e.g. Appendix B). Certainly if ${\bf A}$ is solved for from the three equations (\ref{eq:dynamoI}) and the mean field is calculated from the curl, then the divergence of the field will be zero. This is the procedure we follow normally in this paper. However we will use these equations to find exact solutions with and without zero divergence to demonstrate the sign change at large $Z$. The physical significance of the non-solenoidal examples is speculative (see Appendix B and the examples). Interestingly there is one example solution of these equations ($\bar u_\phi\ne 0$) where the non-solenoidal mean magnetic field is due to the $\alpha^2$ dynamo action itself. That is, when $\Delta\ne 0$, one encounters local sources of the mean magnetic field.
Explicitly these scale invariant, steady, mean magnetic field equations are:
\begin{eqnarray}
0&=& \bar b_r\Delta+\bar u_\phi\bar b_z-\bar u_z\bar b_\phi+\bar b'_\phi,\nonumber\\
0&=& -(1+Z^2)\bar b'_r+\bar u_z\bar b_r-\bar u_r\bar b_z+\bar b_\phi\Delta+(1-a)\bar b_z+(2-a)Z\bar b_r,\label{eq:beqs}\\
0&=& Z\bar b'_\phi-(2-a)\bar b_\phi+\bar u_r\bar b_\phi-\bar u_\phi\bar b_r+\bar b_z\Delta.\nonumber
\end{eqnarray}
These equations for the magnetic field are exact given our assumptions. It is important to note that they are {\it not} overdetermined even with $\Delta$ constant, because of the absence of the solenoidal condition. They may therefore be readily studied numerically. If this condition is added on physical grounds, then as in the equations for the vector potential (which yield an exactly solenoidal magnetic field) they become overdetermined, unless $\Delta$ is a variable.
At this stage we remark that $\alpha_d$ and $\bar\eta$ may each be the same arbitrary function of $Z$, since the equations contain them only as $\Delta$. This allows for the same gradients of diffusivity and helicity on cones. They could be different functions if $\Delta$ were to be a function of $Z$. In that case the equations are well posed for $A_\phi(Z)$, $y(Z)$ and $\Delta(Z)$. This is an intriguing possibility, but we will normally hold $\Delta$ constant in this paper because otherwise the equations are non-linear. The reduced and scaled velocity components $\bar{\bf u}$ are also held constant, but the scaled velocity may have the same $Z$ dependence as $\bar\eta$ (see e.g. equation \ref{eq:u}). We note from the definition that the dependence on $Z$ is actually a dependence on $z/r$, so that all quantities are constant on cones but for the appropriate power law scaling factors (e.g. see equation \ref{eq:Scinv}).
The equations for the vector potential are difficult to solve exactly. We adopt additional simplifying assumptions at this stage in an effort to avoid a numerical study in this exploratory paper. A simple exponential solution is possible if the terms explicitly dependent on $Z=z/r$ are neglected. This requires $Z$ to be small, so that the solution extends over cones relatively close to the galactic plane, although the height can easily reach a kiloparsec.
We do this everywhere $Z$ appears in these equations, so we require both $Z$ to be small and the variation in $Z$ to be slow. Such a region `close' (say within $0.2$ of the radius of the disc) to the disc of an edge-on spiral galaxy is easily accessible observationally. However this approximation casts doubt on the physical reality of our sign change well above the plane.
We shall nevertheless call attention to this behaviour, based in part on a smooth continuation to larger $Z$ from the small $Z$ region, and on confirmation (e.g. see figure \ref{fig:analytvphi}) provided by the solutions that are valid for all $Z$ found in terms of the magnetic field equations (\ref{eq:beqs}). Similar behaviour has also been suggested and glimpsed in earlier work (\citet{SS1990}, \citet{BDMSST92}, \citet{MS2008}) based on time dependent numerical evolution. It is referred to as `mixed' or `intermediate' parity in those and related studies.
Our general approach to solving for the vector potential proceeds, after neglecting the terms in $Z$, by solving for $y$ from equation (\ref{eq:Az}) and substituting into equations (\ref{eq:Aphi}) and (\ref{eq:Ar}) to obtain two equations for $\bar A_\phi$. An alternative approach involves solving for $y$ from equation (\ref{eq:Aphi}) and substituting into the other two equations. This yields a less general set of parameters for a solution, and we do not find it to be physically applicable to the observations, so we omit it in the interests of brevity. In either case the two equations for $\bar A_\phi$ must be reconciled by requiring the coefficients of $\bar A_\phi$ and its derivatives in the two equations to be identical. This imposes conditions on $\Delta$ and on the velocity components in order that a non-trivial solution exist, much as does setting the determinant of the coefficients of a set of homogeneous linear equations to zero.
Proceeding in this way the general case is reduced to one equation for $\bar A_\phi$ namely
\begin{equation}
\bar A''_\phi-(\bar u_z+\frac{\bar u_\phi}{\tilde u_r}\Delta)\bar A'_\phi-(3-a)(\tilde u_r+\frac{\bar u_z}{\bar u_\phi}\Delta)\bar A_\phi=0,\label{eq:Aeq}
\end{equation}
where
\begin{equation}
\tilde u_r\equiv \bar u_r-(2-a).\label{eq:tildeu}
\end{equation}
This allows simple analytic solutions.
The conditions required to obtain a non-trivial solution by equating the coefficients of the first and second members of the scaled vector potential equations are ($\bar u_\phi\ne 0$)
\begin{eqnarray}
\tilde u_r^2+(3-a)\tilde u_r+\bar u_\phi^2&=&0,\label{eq:C1}\\
\Delta^2-\frac{\tilde u_r\bar u_z}{\bar u_\phi}\Delta+\tilde u_r&=&0.\label{eq:C2}
\end{eqnarray}
The solution for the scaled magnetic field is completed by the equations (\ref{eq:bfield}) and
\begin{equation}
y=-\frac{\bar u_\phi\bar A'_\phi+(3-a)\bar A_\phi\Delta}{\tilde u_r}\equiv \frac{\bar b_\phi}{\delta}.\label{eq:bphi}
\end{equation}
We see from equation (\ref{eq:C1}) that $\tilde u_r<0$ (assuming $a<3$ as is normally the case in this section) for a real solution. In fact this equation requires for real $\bar u_\phi$ that
\begin{equation}
-(3-a)<\tilde u_r<0,\label{eq:radcon}
\end{equation}
which is a useful condition on the radial velocity. The analysis of various cases requires some care, and two specific examples are discussed in the appropriate section.
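As a concrete check, the reality window (\ref{eq:radcon}) follows directly from condition (\ref{eq:C1}): the short numerical scan below (a sketch in Python; the similarity class $a=1$ is chosen purely for illustration) confirms that $\bar u_\phi^2=-\tilde u_r^2-(3-a)\tilde u_r$ is positive precisely on the open interval $-(3-a)<\tilde u_r<0$.

```python
import numpy as np

a = 1.0                                   # similarity class, chosen for illustration
tur = np.linspace(-3.0, 1.0, 401)         # trial values of tilde{u}_r
uphi_sq = -tur**2 - (3.0 - a) * tur       # condition (C1) solved for u_phi^2

window = tur[uphi_sq > 0.0]               # values admitting a real u_phi
print(window.min(), window.max())         # endpoints approach -(3-a) and 0
```

The same scan with any other $a<3$ reproduces the corresponding window, as condition (\ref{eq:radcon}) asserts.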
\section{Examples}
\subsection{Exact solutions for the mean magnetic field}
We turn to analytic solutions based on equations (\ref{eq:beqs}). The first example is an exact solution to the dynamo equations for all $Z$ and it is solenoidal when the $\alpha^2$ dynamo action is absent ($\Delta=0$).
This first example follows from equations (\ref{eq:beqs}) when only $\bar u_\phi\ne 0$. One finds that
\begin{eqnarray}
\bar b_r&=&\frac{Z\bar u_\phi-\Delta}{\Delta^2+\bar u_\phi^2}\bar b'_\phi,\nonumber\\
\bar b_z&=&-\frac{\bar u_\phi+Z\Delta}{\Delta^2+\bar u_\phi^2}\bar b'_\phi,\nonumber\\
0&=& (1+Z^2)(\Delta-Z\bar u_\phi)\bar b''_\phi+(\Delta-Z\bar u_\phi)Z\bar b'_\phi+\Delta(\Delta^2+\bar u_\phi^2)\bar b_\phi.\label{eq:buphi}
\end{eqnarray}
The equation for $\bar b_\phi$ is more difficult in this case, so we do not study it. Our purpose is to note that the solenoidal condition now reduces to
\begin{equation}
-\frac{1}{\Delta^2+\bar u_\phi^2}(\bar u_\phi(1+Z^2)\bar b''_\phi+(\Delta+Z\bar u_\phi)\bar b'_\phi)=0.
\end{equation}
This will only hold (given a solution of the third of equations (\ref{eq:buphi})) if $\Delta =0$, that is, when there is no $\alpha^2$ dynamo action. In this example it seems that the sub-scale magnetic field generation requires a source in the mean magnetic field.
The solution with $\Delta=0$ is a pure $\alpha-\omega$, solenoidal, mean magnetic dynamo (in the pattern frame) and has the solution
\begin{eqnarray}
\bar b_r&=&\frac{Z\bar b'_\phi}{\bar u_\phi},\nonumber\\
\bar b_\phi&=& C\sinh^{-1}(Z)+\bar b_\phi(0),\label{eq:solbuphi}\\
\bar b_z&=&-\frac{\bar b'_\phi}{\bar u_\phi},\nonumber
\end{eqnarray}
where $\sinh^{-1}(Z)$ may be expressed as $\sinh^{-1}{Z}=\ln{|Z+\mathrm{sgn}(Z)\sqrt{1+Z^2}|}$ for application on both sides of the disc. The projected linear polarization is as usual, with the angle $\lambda$ to the plane given by
\begin{equation}
\tan{\lambda} =\frac{b_z}{b_r}=-\frac{r}{z},\label{eq:XF1}
\end{equation}
that is
\begin{equation}
\tan{(\pi/2-\lambda)}=Z.\label{eq:XF2}
\end{equation}
This yields projected magnetic field lines at a fixed angle to the axis above the plane, rather like the observed fields (e.g. \cite{Kr2015}). This pure $\alpha-\omega$ dynamo also shows the sign-changing (or parity-changing) effect that represents the main content of our approximate solutions below, so we study it in some detail.
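The claimed solution (\ref{eq:solbuphi}) is easily verified symbolically. With $\Delta=0$, the third of equations (\ref{eq:buphi}) reduces, after dividing out $-Z\bar u_\phi$, to $(1+Z^2)\bar b''_\phi+Z\bar b'_\phi=0$. The short sympy sketch below confirms that $\bar b_\phi=C\sinh^{-1}(Z)+\bar b_\phi(0)$ satisfies it and recovers the polarization ratio $b_z/b_r=-1/Z$ used above:

```python
import sympy as sp

Z, C, b0, uphi = sp.symbols('Z C b0 u_phi')
bphi = C * sp.asinh(Z) + b0                    # proposed azimuthal field (solbuphi)

# the Delta = 0 azimuthal equation, after dividing out -Z*u_phi
ode = (1 + Z**2) * sp.diff(bphi, Z, 2) + Z * sp.diff(bphi, Z)
print(sp.simplify(ode))                        # -> 0

# remaining components from (solbuphi) and the polarization ratio
br = Z * sp.diff(bphi, Z) / uphi
bz = -sp.diff(bphi, Z) / uphi
print(sp.simplify(bz / br))                    # -> -1/Z
```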
\begin{figure}
\begin{tabular}{cc}
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{analytvphifieldplot.eps}}}&
\rotatebox{0}{\scalebox{0.6}
{\includegraphics{analytvphifieldplot3d.jpg}}}\\
{\rotatebox{0}{\scalebox{0.4}
{\includegraphics{analytvphispacecurve.jpg}}}}&
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{analytvphiRM.jpg}}}
\end{tabular}
\caption{At upper left we show a cut through the pure $\alpha-\omega$ exact solenoidal solution at $z=0.35$. We have set $C=-\bar b_\phi(0)=-1$. The radius runs from $0.15$ to $1.0$. At upper right there is a 3D plot of the same solution, wherein each of $r$ and $z$ runs from $0.15$ to $0.75$. At lower left we show a field line from the same solution that originates in the disc at $r=0.5$, $\phi=0$ and $z=0.05$ and continues to positive $z$. It spirals into and crosses $r=0$ near $z=0.5$ and is then reproduced on the other side of the axis, forming a closed twisted loop. At lower right we show the Faraday screen rotation measure in the first quadrant for this solution. We take $\bar u_\phi=1$ in all images except at upper right, where $\bar u_\phi=2$. }
\label{fig:analytvphi}
\end{figure}
In figure (\ref{fig:analytvphi}) we show at upper left a cut through the solution at $z=0.35$. The sign change in the azimuthal component of the field as the radius runs over the disc is evident. This is emphasized at upper right in three dimensions. We see the sign change there at the same radius but at different heights. We have also increased $\bar u_\phi$ to $2$ in this image to show the consequent increase in the azimuthal field relative to the radial field.
At lower left a field line is shown spirally converging to the axis. This field line would cross the axis and be reproduced on the other side, thus forming a large (it reaches $z=0.5$) loop over the galactic centre. Because of the axial symmetry, this will appear at all azimuthal angles. The off axis twisting of the field line is the origin of the sign changing effect that we find in our approximate solutions.
At lower right we show the Faraday screen (see the general discussion below) rotation measure in the first quadrant. The reddish to reddish-blue colour is generally positive, while the dark blue to green to yellow or orange is generally negative. Of course the signs may be reversed, as there is an arbitrary constant amplitude. It is also not a numerical fit to observational data, but indicates the qualitative behaviour that may have been found observationally (\cite{CMP2016}).
The solution has the azimuthal field increasing logarithmically, but indefinitely, with height, while the radial field becomes constant and the vertical field vanishes. As such the solution cannot be continued into the intergalactic medium, but may indicate increased strength of the field in the halo.
The general importance of this example is mainly to confirm at large $Z$ the existence of the sign-changing effect that we infer in our approximate solutions.
In order to demonstrate that non-solenoidal solutions also follow from equations (\ref{eq:beqs}), we proceed to give an elegant example. Its physical significance is not clear (see a brief speculation in Appendix B and below), but it exhibits many of the same characteristics found in our solenoidal examples.
We assume either ${\bf v}={\bf 0}$ or ${\bf v}\parallel {\bf b}$ and choose the scale invariant class $a=2$.
This choice implies a globally conserved angular momentum in a galaxy of fixed mass. The velocity and magnetic fields vary in cylindrical radius $r$ as $r^{-1}$. The helicity and diffusivity are not necessarily constant on cones so long as $\Delta$ and ${\bf u}$ are.
Then equations (\ref{eq:beqs}) yield
\begin{eqnarray}
\bar b_r&=&-\frac{\bar b'_\phi}{\Delta},\nonumber\\
\bar b_z&=& -\frac{Z\bar b'_\phi}{\Delta},\label{eq:balt}\\
0&=&(1+Z^2)\bar b''_\phi+Z\bar b'_\phi+\Delta^2\bar b_\phi,\nonumber
\end{eqnarray}
and the last equation has the solution
\begin{equation}
\bar b_\phi=C_1\sin{(\Delta\ln{|Z+\mathrm{sgn}(Z)\sqrt{1+Z^2}|})}+C_2\cos{(\Delta\ln{|Z+\mathrm{sgn}(Z)\sqrt{1+Z^2}|})},\label{eq:bphialt}
\end{equation}
where we have again written $\sinh^{-1}{Z}=\ln{|Z+\mathrm{sgn}(Z)\sqrt{1+Z^2}|}$.
One can note that the expected projected linear polarization in this type of solution is at an angle $\lambda =\arctan {(b_z/b_r)}$ to the galactic disc. In the solution above this is simply equal to $\arctan{(Z)}$, which makes the projected polarization X-type (i.e. diverging from the galactic plane). The magnetic field oscillates slowly but indefinitely at large $Z$, and thus, like the solenoidal solution, cannot be continued into the intergalactic medium. The oscillation is however reminiscent of the temporal azimuthal field oscillations found in numerical work (e.g. \citet{MS2008}).
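The solution (\ref{eq:bphialt}) can be checked directly: the substitution $t=\sinh^{-1}Z$ turns the last of equations (\ref{eq:balt}) into the harmonic oscillator $\ddot{\bar b}_\phi+\Delta^2\bar b_\phi=0$, which is the origin of the slow oscillation. A sympy sketch verifying the original form:

```python
import sympy as sp

Z, D, C1, C2 = sp.symbols('Z Delta C_1 C_2')
# solution (bphialt), written with asinh(Z) = ln|Z + sgn(Z) sqrt(1+Z^2)|
bphi = C1 * sp.sin(D * sp.asinh(Z)) + C2 * sp.cos(D * sp.asinh(Z))

# last of equations (balt)
ode = (1 + Z**2) * sp.diff(bphi, Z, 2) + Z * sp.diff(bphi, Z) + D**2 * bphi
print(sp.simplify(ode))   # -> 0
```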
This is an exact solution to the axially symmetric $\alpha^2$ dynamo equations (possibly with parallel velocity and magnetic fields), which also contains a possible sign changing effect, but the divergence is not zero. In our variables zero divergence requires
\begin{equation}
-Z\bar b'_r+\bar b'_z=0.\label{eq:div1}
\end{equation}
Inserting the values for $\bar b_r$ and $\bar b_z$ from equations (\ref{eq:balt}) shows that the last expression becomes equal to $\bar b_r\equiv rb_r$.
We recall that we solve for a mean field, which we speculate (see end of Appendix B) may physically have a non-zero divergence if the mesoscale averaging volume varies with position. We further speculate that this may happen near a physical boundary such as the galactic disc, because the turbulent intensity can vary rapidly and anisotropically there. This would change the volume over which a constant $\alpha_d$ may be calculated.
Our final example using equations (\ref{eq:beqs}) illustrates another choice of similarity class, in which the magnetic field is solenoidal. We used $a=2$ above and will use $a=1$ below. Here we take $a=3$, which implies a global constant with the dimensions of magnetic flux\footnote{$\ell^2 b$ has the dimensions $\ell^3/t$, hence $\alpha=3\delta$ or $a=3$.}. The solenoidal condition now becomes
\begin{equation}
-Z\bar b'_r+\bar b'_z-\bar b_r=0.\label{eq:div2}
\end{equation}
Using equations (\ref{eq:beqs}) we see by inspection that if we take $\bar u_r=-1$, then we have
\begin{eqnarray}
\bar b_r&=&-\frac{\bar b'_\phi}{\Delta},\nonumber\\
\bar b_z&=&-\frac{Z\bar b'_\phi}{\Delta},\label{eq:solbur}\\
0&=& (1+Z^2)\bar b''_\phi+2Z\bar b'_\phi+\Delta^2\bar b_\phi,\nonumber
\end{eqnarray}
and equation (\ref{eq:div2}) is satisfied. The projected linear polarization is X-type with $\lambda=\arctan{(Z)}$. The solution to the equation for $\bar b_\phi$ is
\begin{equation}
\bar b_\phi=C_1P_{\mathrm{arg}(\Delta)}(iZ)+C_2Q_{\mathrm{arg}(\Delta)}(iZ),\label{eq:solbphiflux}
\end{equation}
where
\begin{equation}
\mathrm{arg}(\Delta)=\frac{\sqrt{1-4\Delta^2}-1}{2},\label{eq:arg}
\end{equation}
and $P$ and $Q$ refer to the Legendre functions of the first and second kinds. We recall that the magnetic and velocity fields will have the above forms multiplied by $r^{-2}$ in this case.
We indicate only two aspects of this solution in figure (\ref{fig:fluxfig}). On the left we show a field vector plot of one example of this solution for comparison with the similar plot in figure (\ref{fig:analytvphi}). The similarity is striking. On the right of the figure we show a case where the sign of the azimuthal field changes dramatically with increasing $Z$ in each quadrant. There must be off-axis spirals, as we indeed find below. It is one of our main observable results.
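The sign reversal with height seen on the right of figure (\ref{fig:fluxfig}) can be reproduced without special functions by integrating the last of equations (\ref{eq:solbur}) numerically. In the Python sketch below the initial data $\bar b_\phi(0)=1$, $\bar b'_\phi(0)=0$ and the value $\Delta=1$ are illustrative choices only:

```python
import numpy as np
from scipy.integrate import solve_ivp

Delta = 1.0                      # illustrative sub-scale dynamo number

def rhs(Z, y):
    # last of equations (solbur): (1+Z^2) b'' + 2 Z b' + Delta^2 b = 0
    b, db = y
    return [db, -(2.0 * Z * db + Delta**2 * b) / (1.0 + Z**2)]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)
Z = np.linspace(0.0, 100.0, 4001)
b = sol.sol(Z)[0]
flips = Z[1:][np.sign(b[1:]) != np.sign(b[:-1])]
print(flips)                     # heights Z at which bphi reverses sign
```

The reversals are spaced roughly uniformly in $\sinh^{-1}Z$, consistent with the oscillatory Legendre behaviour implied by (\ref{eq:arg}) when $\Delta>1/2$.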
\begin{figure}[h]
\begin{tabular}{cc}
\rotatebox{0}{\scalebox{0.6}
{\includegraphics{fluxconserved.jpg}}}&
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{fluxconservedbphi}}}
\end{tabular}
\caption{ On the left we show a vector plot of the $a=3$ flux conserved solution. We have set $\Delta=0.5$, $C=0$ and $\bar u_r=-1$. The radius of the disc is one unit, which is also the unit in $z$. On the right we have plotted $\bar b_\phi$ for the case $\Delta=1.0$, $C=1.0$ and $\bar u_r=-1$. There is a substantial sign reversal in $\bar b_\phi$ as a function of $Z$. This leads to a reversal in the rotation measure on the same side of the minor axis. }
\label{fig:fluxfig}
\end{figure}
\subsection{General Resolution of the Over-Determined System}
In this section we pursue the general approximation with $\Delta$ constant as summarized in equations (\ref{eq:Aeq}),(\ref{eq:bfield}),(\ref{eq:bphi}) and (\ref{eq:C2}),(\ref{eq:C1}).
We solve the linear equation (\ref{eq:Aeq}) in the general form
\begin{equation}
\bar A_\phi=C^+_\phi e^{p_+Z}+C^-_\phi e^{p_-Z},\label{eq:Asolexp}
\end{equation}
where $p$ and $C_\phi$ are complex in general. The equation for the two values of $p$ becomes
\begin{equation}
p=\frac{1}{2}(\bar u_z+\frac{\bar u_\phi}{\tilde u_r}\Delta)\pm \frac{1}{2}\sqrt{(\bar u_z+\frac{\bar u_\phi}{\tilde u_r}\Delta)^2+4(3-a)(\tilde u_r+\frac{\bar u_z}{\bar u_\phi}\Delta)}.\label{eq:pval}
\end{equation}
The conditions (\ref{eq:C1}) and (\ref{eq:C2}) are usefully written as
\begin{equation}
\bar u_\phi=\pm\sqrt{-\tilde u_r^2-(3-a)\tilde u_r},\label{eq:C11}
\end{equation}
and
\begin{equation}
\Delta=\frac{1}{2}\frac{\tilde u_r\bar u_z}{\bar u_\phi}\pm\frac{1}{2}\sqrt{(\frac{\tilde u_r\bar u_z}{\bar u_\phi})^2-4\tilde u_r},\label{eq:C22}
\end{equation}
particularly when we remember condition (\ref{eq:radcon}).
Whenever $p$ is complex we write the solution for $\bar A_\phi$ in the form
\begin{equation}
\bar A_\phi=C_\phi \exp{(Re(p)Z)}\cos{(Im(p)Z+\Phi)},\label{eq:Asolosc}
\end{equation}
rather than the single or double exponential that applies when one or both values of $p$ are real and negative ($Z>0$).
One can in general calculate a simple expression for the angle of the projected linear polarization, namely $\arctan{(\bar b_z/\bar b_r)}$, for these solutions. In the case of complex $p$ this becomes, from equations (\ref{eq:bfield}),
\begin{equation}
\arctan{\left(\frac{\bar b_z}{\bar b_r}\right)}=\arctan{\left(Z-\frac{3-a}{Re(p)-Im(p)\tan{(Im(p)Z+\Phi)}}\right)}.\label{eq:projpol}
\end{equation}
It becomes $\arctan{(Z)}$ for all cases if $a=3$ as in our last example of the previous section. In general, $a$ and the various velocities would have to be fit to the observations.
Although the inferred fields are more complicated than either a pure dipole or a pure quadrupole, it is convenient to label a magnetic field with zero vertical field at the disc ($Z=0$) as `quadrupolar', while that with a non-zero vertical field may be said to be `dipolar'. This is not quite the same as these designations in \cite{KF2015}, where even symmetry on crossing the disc is quadrupolar and odd symmetry is dipolar. Our solutions show mainly mixed symmetry in the tangential field components on crossing the disc.
Equation (\ref{eq:bfield}) together with equation (\ref{eq:Asolosc}) shows that the quadrupolar boundary condition is applied when $\Phi=\pi/2$. A numerical example requires the choice of $\tilde u_r$, $\bar u_z$, $\Phi$, $C_\phi$ and $a$ as parameters. Until we are concerned with observed quantities, we may set $C_\phi=\pm 1$. Moreover the similarity class $a$ is normally fixed by some physical quantity that we wish to conserve under the rescaling operation. For example if we work in the systemic reference frame where there is a constant disc velocity, then we require the homothetic similarity class with $a=1$. Consequently, since $\Phi$ determines essentially the boundary condition, we are left with $\tilde u_r$ and $\bar u_z$ as principal physical parameters, with $\tilde u_r$ nevertheless restricted by the condition (\ref{eq:radcon}).
In addition there are three `switches' $s1$, $s2$, $s3$, each having the values $\pm 1$, corresponding to the signs chosen for the radicals in the expressions for $\bar u_\phi$, $\Delta$ and $p$ respectively. It transpires that these switches play an important role in allowing $\bar u_z$ to change sign across the disc while maintaining the same numerical solution on each side. We proceed in the next section with an example of this type.
\subsection{The Halo Magnetic Field}
We choose $\tilde u_r=-0.9$ and $\bar u_z=\pm 0.3$ at positive or negative $z$ respectively. We will suppose a constant disc velocity in the systemic reference system, so that a consistent similarity class is $a=1$. We can drop the bars over the velocity and magnetic field components as a result, since this implies $\alpha=\delta$. We are proceeding with the isotropic scaling rather than the anisotropic possibility discussed in the Appendix. This choice of $\tilde u_r$ gives $u_r=0.1$. We have chosen $s1=-1$, which gives $u_\phi=-0.995$, so that the disc is rotating according to the left-hand rule (positive $z$ axis up). This is rather arbitrary relative to the right/left line-of-sight disc velocity, since it may be reversed by inverting the solution. The number being close to unity suggests that our velocity scale is set by the disc rotational velocity. In fact the velocity scale is set through $\bar\eta\delta\equiv \eta/r$ (see equation \ref{eq:u}), which requires in this case $\eta/r\approx 200$ km/sec.
The vertical outflow is thus $30\%$ of the value set by $\bar\eta\delta$, while the disc velocity is $99\%$ of this velocity. The switch $s2$ is set to $-1$ for $z>0$ and $u_z>0$, but becomes $+1$ below the plane. The switch $s3$ also changes sign on crossing the plane. We take $C_\phi=1$, but this is arbitrary. These choices guarantee that $\bar b_z$ and $\bar b_\phi$ are continuous across the plane, although $\bar b_r$ changes sign. There is no change in the handedness of the field on crossing the plane.
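The quoted numbers are easy to reproduce from conditions (\ref{eq:C11}) and (\ref{eq:C22}) together with equation (\ref{eq:pval}). The sketch below (Python; values above the plane, with switches $s1=s2=-1$, $s3=+1$) recovers $u_\phi\approx-0.995$ and $\Delta\approx-0.82$, and shows that $p$ is complex, so that the oscillatory form (\ref{eq:Asolosc}) applies:

```python
import numpy as np

a, tur, uz = 1.0, -0.9, 0.3          # similarity class and velocities above the plane
s1, s2, s3 = -1, -1, 1               # switch settings for z > 0

uphi = s1 * np.sqrt(-tur**2 - (3.0 - a) * tur)                     # condition (C11)
Delta = (0.5 * tur * uz / uphi
         + s2 * 0.5 * np.sqrt((tur * uz / uphi)**2 - 4.0 * tur))   # condition (C22)

# exponent p of the vector potential, equation (pval)
B = uz + (uphi / tur) * Delta
p = 0.5 * B + s3 * 0.5 * np.sqrt(complex(B**2 + 4.0 * (3.0 - a)
                                         * (tur + (uz / uphi) * Delta)))

print(round(uphi, 3), round(Delta, 3), p)   # -0.995, -0.823, complex p
```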
When $\Phi=\pi/2$ the vertical field at the disc is zero, which is the quadrupolar special case.
There is a complementary boundary condition, found by changing the sign of $C_\phi$ on crossing the disc while leaving the switch changes the same. Then $b_\phi$ changes sign but $b_r$ does not. The handedness of a field line is changed in that case.
In figure (\ref{fig:halofields}) the image at upper left shows the magnetic vectors in a cut at $z=0.05$. There is no sign reversal in each galactic quadrant. A typical field line is shown at lower left for this `quadrupolar' case. It descends rapidly to become parallel to the plane. At upper right we show a cut through a `dipolar' example at $z=0.05$ with the radius running from $0.05$ to $1$. It is very similar to the corresponding cut for the exact dynamo solution in figure (\ref{fig:analytvphi}). A sign reversal in the first quadrant occurs at $r\sim 0.1$, where $Z=0.5$. The field lines at lower right are meant to show how this may occur. Each field line is part of an {\it off axis} rising conical helix that is slowly turning. A field line initially directed towards us turns away from the line of sight at large height. We have shown an accretion case $\bar u_z=-0.3$ in order to show the effect within our height limit. The sub-scale Reynolds number $\Delta$ is $-0.822$ above the plane and $+0.822$ below the plane in our example.
\begin{figure}
\begin{tabular}{cc}
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{halofieldplotquad.eps}}}&
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{halofieldplotdip.eps}}}\\
{\rotatebox{0}{\scalebox{0.44}
{\includegraphics{halofieldspacequad.jpg}}}}&
\rotatebox{0}{\scalebox{0.44}
{\includegraphics{halofieldspacedip.jpg}}}
\end{tabular}
\caption{At upper left we show the magnetic field vectors in a cut at $z=0.05$. Using a description parameter vector $[\tilde u_r,\bar u_z,s1,s2,s3,a,r,z,C_\phi,\Phi]$, this is the case $[-0.9,0.3,-1,-1,1,1,r,0.05,1,\pi/2]$. At upper right we show the same cut for the same parameter vector except that $\Phi=1$. The radius runs from $0.1$ to $1$ at upper left and from $0.05$ to $1$ at upper right. At lower left we show a field line starting at $[r,\phi,z]=[0.5,\pi,0.1]$ for the same parameter vector as at upper left. At lower right we show two field lines originating at $[r,\phi,z]=[0.2,\pi,0.1]$ (solid) and
$[r,\phi,z]=[0.2,0,0.1]$ (dashed). These have the same parameters as at upper right except that we have set $\bar u_z=-0.3$ (accretion) so as to remain inside our height limit and to accentuate the curvature. }
\label{fig:halofields}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{Q1.jpg}
\caption{ Using a parameter description vector $[\tilde u_r,\bar u_z, s1,s2,s3,a,C_\phi,\Phi]$, this case is given by $[ -0.9,0.3,-1,-1,1,1,-1,1]$ as at upper right in figure (\ref{fig:halofields}). In addition the radius runs over the range $[.05,1]$ and the height runs over the range $[0.01,0.3]$. The figure shows the magnetic field vectors in three dimensions. }
\label{fig:dipfield3d}
\end{figure}
Figure (\ref{fig:dipfield3d}) illustrates the poloidal field associated with the toroidal magnetic field illustrated at upper right in figure (\ref{fig:halofields}). This field is pursued below for its rotation measure properties, but here it is clear that in projection it contains the `X' form of magnetic field and hence of polarization.
As an amusing analogy, it is difficult not to remark on the similarity of these galactic magnetic fields to the magnetic `skyrmions' discovered in condensed matter physics (e.g. \cite{NT2013}). These are particularly prominent in thin films, and the spiral or vortical spin skyrmions resemble those above. Of course the spatial scale is in nanometres rather than in kiloparsecs! Moreover the field there is the actual `magnetic field' in the medium rather than the `magnetic induction' field used here.
\section{Rotation Measure}
In the light of the recent discovery of an oscillating rotation measure above and below the disc, found by rotation measure synthesis in the CHANG-ES galaxy NGC 4631 (\cite{CMP2017}; \cite{CMP2016}; \cite{SPK2016}), we compute the corresponding rotation measure in the magnetic field of this example. We take the halo to have a cylindrical geometry with the galactic disc defining a section of the cylinder. The height is left undefined.
For these axially symmetric magnetic fields the rotation measure, integrated along the line of sight through the halo of a strictly edge-on galaxy, depends only on the azimuthal component of the field. For a galaxy inclination significantly different from $90^\circ$, the other field components will in general contribute. For simplicity here, where we are only introducing a possible oscillating effect in the rotation measure in each quadrant, we proceed with perpendicular inclination. Moreover we treat the electron density $n_e$ as either constant or as an exponential in $Z=z/r$. There is an arbitrary amplitude constant $C_\phi$ stemming from the magnetic field amplitude.
Under these conditions the RM, integrated along a line of sight (los) through a section of the cylindrical halo, is given (up to an overall sign; we actually compute the negative of the conventional quantity) by
\begin{equation}
RM=C_\phi r_\perp \int_{-\sqrt{1-r_\perp^2/R^2}}^{\sqrt{1-r_\perp^2/R^2}}~n_e(r_\perp/\sqrt{1-x^2},z)\left(\frac{b_\phi(r_\perp/\sqrt{1-x^2},z)}{1-x^2}\right)~dx,\label{eq:RMInt}
\end{equation}
where $R$ is the radius of the disc and $x=\cos{\phi}$. The angle $\phi$ is between the los lying in a cylindrical section of the halo (the los has the impact parameter $r_\perp$ relative to the minor axis of the galaxy) and a line drawn to a point on the los from the intersection of the galactic minor axis with the cylindrical cross section.
Our calculation is really the parallel magnetic field component integrated along the line of sight. As such it is the negative of the Faraday rotation that would be produced by a Faraday screen. One needs a model for the emitting electrons in the halo in order to calculate the RM quantitatively using radiative transfer, but we are looking at qualitative possibilities here, so we hold the electron density constant. The RM is weighted towards $x=0$ ($\phi=\pi/2$), so that its value reflects somewhat the actual RM of a region near $r=r_\perp$.
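The integral (\ref{eq:RMInt}) is straightforward to evaluate numerically. The sketch below (Python with scipy) uses a constant $n_e$ and a purely illustrative toroidal profile $b_\phi(r,z)=\sin(\pi r)\,e^{-z/r}$ standing in for one of our dynamo solutions; any of the fields above may be substituted for it:

```python
import numpy as np
from scipy.integrate import quad

R, C_phi = 1.0, 1.0              # disc radius and arbitrary amplitude

def bphi(r, z):
    # illustrative stand-in for a dynamo solution, NOT one of the fields above
    return np.sin(np.pi * r) * np.exp(-z / r)

def RM(r_perp, z, n_e=1.0):
    # equation (RMInt): los integral at impact parameter r_perp, height z
    xmax = np.sqrt(1.0 - (r_perp / R)**2)
    integrand = lambda x: n_e * bphi(r_perp / np.sqrt(1.0 - x**2), z) / (1.0 - x**2)
    val, _ = quad(integrand, -xmax, xmax)
    return C_phi * r_perp * val

print(RM(0.5, 0.1))              # single value on a grid of (r_perp, z) points
```

Evaluating this on a grid of $(r_\perp,z)$ points generates maps like those in the figures below.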
In figure (\ref{fig:RMquadrupolar}) we show a gray scale measure of this RM in the first quadrant for the quadrupolar field shown on the left hand side of figure (\ref{fig:halofields}). The value varies from $+1.29$ at grid point $[25,30]$ to $+0.89$ at grid point $[25,5]$. There is smaller Faraday rotation near the axis, and the sign is the same everywhere in the quadrant. It would be the negative of this value on the other side of the galactic minor axis, although of course the signs may be interchanged. The region between the two lines on the figure is subject to quantitative error, but it seems qualitatively correct based on continuity.
\begin{figure}
\centering
\includegraphics[width=4in]{grayquadpol.jpg}
\caption{The figure shows the Faraday screen RM in the first quadrant of the quadrupolar field $[-0.9,0.3,-1,-1,1,1,r,z,1,\pi/2]$ as in figure (\ref{fig:halofields}). The solid line shows on this grid the line $Z(r,z)=1$, because two units on the ordinate are equal to one unit on the abscissa . The dashed line is the locus of $Z(r,z)=0.5$. }
\label{fig:RMquadrupolar}
\end{figure}
\newpage
In figure (\ref{fig:RMquad1}) we show the RM displayed over the halo for an edge-on galaxy having the parameter vector $[-0.9,0.3,-1,-1,1,1,r,z,1,1]$, so that it is dipolar. The straight lines in the first quadrant are as in figure (\ref{fig:RMquadrupolar}). The region between the straight lines is again subject to quantitative error. However qualitatively it shows the change in sign associated with the characteristic field lines at lower right in figure (\ref{fig:halofields}). The curve along which the RM changes sign is shown relative to the two limiting straight lines in figure (\ref{fig:Limits}). These lines are drawn in $\{r,z\}$ space so that $z=0.5$ corresponds to the top of figure (\ref{fig:RMquad1}). We see that the sign change region falls just within our height restriction.
The axes of figure (\ref{fig:RMquad1}) should be interpreted as $\mathrm{grid~number}/50$ for the abscissa and $\mathrm{grid~number}/100$ for the ordinate. The radius of the galaxy has been taken to be $50$ in grid units, that is $1$ in $\{r,z\}$ units. The lower quadrants are readily formed by rotation from the first and second quadrants. However the fourth quadrant may be calculated directly, given the first quadrant, under the transformation $\{u,w,s1,s2,s3,a,r,z,C_\phi,\Phi\}\leftarrow\{u,-w,s1,-s2,-s3,a,r,z-0.501,C_\phi,\Phi\}$. The number in the transformation implies calculation on a $50$ by $50$ grid in steps of $0.02$ in radius and $0.01$ in height. A similar transformation yields the third quadrant from the second by calculation, on using $r\leftarrow r-1.02$ and $z\leftarrow 0.501-z$ in addition to the other transformations.
We have chosen to illustrate the case where $\bar b_\phi$ is symmetric on crossing the plane. If we had allowed $\bar b_\phi$ to change sign on crossing the plane, then the symmetry would be `diagonal' rather than `vertical'. This would be accomplished by changing the sign of $C_\phi$. A `butterfly' symmetry that would be horizontal is not physically reasonable in axial symmetry. The orange colour is negative according to our calculation, which would be measured as positive. Blue to red would then be measured as positive.
There is a conspicuous `X' type pattern to the RM distribution. This reflects the pattern expected in the linear polarization, and in fact known for some time (e.g. \cite{Kr2009}). The coexistence of these two properties may have been observed (\cite{CMP2016}; \cite{SPK2016}).
Figure (\ref{fig:cquad}) shows the RM distribution in the first quadrant with contours overlaid. The zero contour is shown as a dashed line. The inclined nature of the RM distribution is apparent.
\begin{figure}
\begin{tabular}{cc}
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{Qquad42.jpg}}}&
\rotatebox{0}{\scalebox{0.4}
{\includegraphics{Qquad41.jpg}}}\\
{\rotatebox{0}{\scalebox{0.44}
{\includegraphics{Qquad43.jpg}}}}&
\rotatebox{0}{\scalebox{0.44}
{\includegraphics{Qquad44.jpg}}}
\end{tabular}
\caption{ We have calculated the Faraday screen rotation measure for the magnetic fields having the parameter vector $[-0.9,0.3,-1,-1,1,1,r,z,-1,1]$. The radius of the disc is equal to $1$. We have chosen vertical symmetry for the display, but diagonal symmetry is achievable by changing the sign of $C_\phi$ on crossing the plane. The upper line on the first quadrant is the locus $Z(r,z)=1$ and the lower line is the locus of $Z(r,z)=0.5$. There is an arbitrary amplitude constant, but the contrast between positive and negative RM is meaningful. In the first quadrant the orange peak at $[30,5]$ is negative at $\sim -0.67$ while the positive red peak at $[10,40]$ has the value $\sim +0.65$. The green shading at $[30,25]$ is at $\sim -0.26$ while the blue shading at $[25,35]$ is at $\sim +0.3$. }
\label{fig:RMquad1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{limits.eps}
\caption{The two solid lines correspond to $Z(r,z)=1$ and $Z(r,z)=0.5$. The curved dashed line shows the locus of the sign change in the RM as calculated for the distribution of figure (\ref{fig:RMquad1}). The vertical dashed line is the edge of the disc.}
\label{fig:Limits}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{cquad.jpg}
\caption{Contours are overlaid on the RM distribution for the first quadrant in figure (\ref{fig:RMquad1}) of this section. The dashed line shows the zero contour. There are sixteen contours equally spaced between about $-0.67$ and $0.65$. The right edge is the edge of the disc. The positive measured RM (the negative of our values) peaks at about $0.6$ disc radii, with the negative peak higher and closer to the axis. The upper solid line is the locus of $Z(r,z)=1$ while the lower solid line is that of $Z(r,z)=0.5$. }
\label{fig:cquad}
\end{figure}
\newpage
As one might expect, the RM distribution is very sensitive to the assumed electron density distribution. We have taken it constant until now to emphasize the sign changes in the RM {\it in each galactic quadrant } due to this dynamo magnetic field. For the illustrative parameters, the sign change occurs at considerable height in the halo. This makes the visibility of the sign change very sensitive to an exponential cut-off in the thermal electron density with height. The contrast with the region near the disc becomes extreme, even if the absolute number may still be detectable.
\begin{figure}
\centering
\includegraphics[width=3in]{neexp.jpg}
\caption{ We show contours on the RM distribution of figure (\ref{fig:RMquad1}) in the first quadrant. The parameter vector is again $[-0.9,0.3,-1,-1,1,1,r,z,-1,1]$, but the electron density has been taken as $\propto \exp{(-z/r)}$. The strength of the positive (measured) RM region is reduced. The upper solid line is the locus of $Z(r,z)=1$ while the lower solid line is that of $Z(r,z)=0.5$.}
\label{fig:Qdip5}
\end{figure}
Applying density dependences in $r$ and $z$ separately will break the scale invariance. One density variation that does not break the symmetry is
\begin{equation}
n_e=n_e(0)\exp{(- k Z)}\equiv n_e(0)\exp{(-kz/r)},\label{eq:SSdens}
\end{equation}
where in the integration along the los we use $r=r_\perp/\sqrt{1-x^2}$. This gives a variable scale height equal to $r/k$. We show in figure (\ref{fig:Qdip5}) the case $k=1$ for the same dynamo field as in figure (\ref{fig:RMquad1}). The dashed line is the zero contour and the solid lines are the loci of $Z(r,z)=1$ and $Z(r,z)=0.5$. Further discussion along these lines must await a more extensive parameter search motivated by data.
\newpage
\section{Discussion and Conclusions}
One can hardly imagine a simpler origin of a galactic magnetic field than a scale-free, axially symmetric, steady, mean field dynamo as presented here\footnote{Fossil magnetic fields have been suggested and were considered already in the sixties and seventies as involved in galaxy formation and activity--see e.g. \cite{HR1977}}. The number of parameters is reduced to a bare minimum.
Clearly it has not been my intention to advance dynamo theory itself, but rather to apply classical theory to a developing set of observations (CHANG-ES, \citet{WI2015}). Mean field dynamo theory, particularly in numerical simulations (\cite{B2014}; \cite{KF2015}), has indeed advanced well beyond the classical theory. However the `no z' approximation (e.g. \citet{B2014}; \cite{M2015}) has led to the dynamo structure above the spiral disc being somewhat neglected recently.
However a referee has quite properly called my attention to earlier work that is relevant to the results of this paper. This concerns the extended discussion in the literature of the possibility of different `parity' relative to the galactic equator between the equatorial regions and the halo. This suggestion appears to be found originally in \cite{SS1990}, wherein the authors observe that this may be true due to different geometries. This suggestion was confirmed in \cite{BDMSST92}. {\it Such a parity contrast implies the `sign change' in the azimuthal magnetic field that we have found here}. The calculation was time dependent and assumed rather particular (however physically motivated) variations in diffusivity and helicity. Their results also exhibited sensitivity to poorly known initial conditions, and a steady state was not easily attained.
In the present study, the assumption of scale invariance and the steady state has allowed us to dispense with particular assumptions about the diffusivity and helicity, so long as they are proportional on cones. Nevertheless, this steady state effect could be described as a `parity oscillation' in each quadrant as in the earlier papers.
Motivated by observations of the Milky Way suggesting this effect \cite{sun2008}, the possibility was studied again in a generic fashion in \cite{MS2008}. These authors studied once again time dependent solutions of the classic dynamo equations with particular choices of diffusivity and helicity. They found the desired effect (their model 134b) {\it only} in an oscillatory model for a careful choice of parameters. For this reason their principal conclusion was negative, although they do remark (without elaboration) that this effect may be found in steady solutions.
Our intention in this paper was to produce a semi-analytic model of likely steady magnetic dynamo structure above the disc that may be easily reproduced and compared to observations. Our results depend on the self-similar assumption and the steady state, whereas earlier work introduced time dependence and particular distributions of diffusivity and helicity. To obtain a steady state after temporal evolution some form of `quenching' is required. Such an effect is potentially present in our approach since a certain velocity structure is generally required for each of our solutions. Nevertheless, despite the marked differences in approach, the apparent convergence of this work with earlier work lends support to both techniques.
This paper presents the axially symmetric model (the same symmetry as in the earlier studies referred to above) and a companion paper \cite{Hen2017} discusses magnetic spiral arms. The detailed comparison with observation must await a subsequent paper.
The most relevant insight from this work is that the dynamo with globally constant sub-scale Reynolds number (this is essentially the `Dynamo number' as defined in \citet{B2014}) can produce helical magnetic fields on cones that are `off axis'. That is, the axis of the helical cone does not coincide with the minor axis of the galaxy. This behaviour is not sensitive to variation in the helicity or the diffusivity on cones, so long as these vary proportionally on cones. The solutions thus contain in principle a general form of gradients in the helicity and diffusivity (see e.g. \citet{M2015}). The twist of these fields varies with radius at a given height or vice versa, due to the scale invariant symmetry. This leads to sign
reversals in the azimuthal magnetic field and consequently of the RM in each quadrant of the edge-on galaxy in the sky plane. This is not possible in axial symmetry if the axes of the field lines coincide with the galactic axis.
This single quadrant sign reversal of the azimuthal magnetic field (i.e. `parity oscillation') is reflected in the Faraday screen Rotation Measure. Indications of such behaviour have recently been discovered (\cite{CMP2016}; \cite{SPK2016}; \cite{CMP2017}) in the CHANG-ES survey galaxies, but detailed comparisons will be necessary. These must include the addition of asymmetric modes. Ultimately radiative transfer through a realistic model halo must be carried out to interpret the results of rotation measure synthesis (e.g. \citet{HF2014}).
The model discussed mostly here is always solenoidal (as it follows from the equations for the vector potential) but it is limited in accuracy in the vertical direction, especially at small radius. This has been discussed at length in section 4. However even with $Z=0.2$, we are justified in reaching one kiloparsec into the halo at a radius of five kiloparsecs. This is well into the `halo' as defined by the edge-on galaxy observers. Although subject to quantitative error, it does seem that the sign change effect is real and comprehensible.
We have studied the equations (\ref{eq:beqs}) for simple (i.e. `toy' model) solutions that apply at all heights in the galactic halo in order to justify the approximate treatment. {\it Both solenoidal and non-solenoidal solutions of these classic mean field dynamo equations exist}. These solutions hold at all heights in the halo and their behaviour is similar to that studied approximately. These solutions have allowed us to explore other scale invariant `classes' such as $a=2$ and $a=3$ and to discuss their implications. They all contain the sign-changing effect in one quadrant of an edge-on galaxy.
One should recall equations (\ref{eq:Scinv}) for the radial dependences of fields, helicity and diffusivity. Appendix A shows how these may be generalized by the addition of parallel and perpendicular scale lengths. When $a=3$ the solenoidal solution of equations (\ref{eq:beqs}) exhibits very similar sign changing behaviour to that shown in our approximate cases. However it is not so readily calculated as either the non solenoidal example or our approximate solution (being dependent on Legendre functions), so that we display the simpler cases.
The pure $\alpha-\omega$ dynamo example ($\Delta=0$) is solenoidal. It does not have the correct X type projection, but it does include the principal result of this paper for any value of $Z$. That is, that axially symmetric steady dynamos may have different signs of rotation measure in each galactic quadrant. It is not however a solution that we would fit to observational data. Either our general explicit forms or numerical solutions to the self-similar equations will permit this in a subsequent study.
The question as to the physical significance of the non-solenoidal equations (\ref{eq:beqs}) remains. We have allowed ourselves a speculation that is now presented in an appendix. Briefly we suggest that a non solenoidal mean field implies a different averaging volume over different regions of the sub-scale magnetic field. The steady state mean magnetic field equations do not incorporate the solenoidal requirement directly (e.g. \citet{M1978}). Even the time dependent equations must start from this initial condition and then maintain it rigorously in order to arrive at a solenoidal state. It seems to us that different regions of a turbulent dynamo may have to be averaged over different volumes in order to reflect a constant helicity and diffusivity on cones. Different sized regions may even evolve to a steady state at different times. In any case, this speculation is not essential to the main result of this paper.
A demonstration that equations (\ref{eq:beqs}) may be derived independently of the solenoidal condition is also given in that Appendix, which is quite independent of the speculation.
At present only very simple electron density distributions (constant or exponential in $z/r$) have been considered. Moreover the halo geometry is cylindrical and the inclination of the galaxy is taken as $90^\circ$. Modifying these assumptions adds parameters that should be considered in making comparisons with the observations. For simplicity while presenting our main effect qualitatively we have avoided these in the present paper.
It is already clear that the field can produce Rotation Measure distributions that are either symmetric about the galactic plane or are diagonally symmetric through the galactic centre while always maintaining an X type pattern in each quadrant. A distribution with `butterfly symmetry', that is, horizontal symmetry, cannot plausibly be produced, as that would require changing the field sign across the galactic minor axis. In order that RM distributions may reveal the galactic plane magnetic field, the handedness of the magnetic field should not change on crossing the plane. That is, the azimuthal field should be constant through the plane.
Our parametric method avoids the difficult problem of solving simultaneously for the dynamics and the dynamo magnetic field, either by using the scaled velocity as a parameter or by assuming parallel velocity and magnetic fields \cite{HI2016}. However the combination of scale invariance and axial symmetry leads to an over determined problem for the vector potential since the solution is also constrained to be solenoidal. This can be resolved through restrictions on the parameters, but the solution is not unique. It is possible to let $\alpha_d$, $\bar\eta$ and $\bar v$ all vary as the same arbitrary function of $Z$, and still obtain our results. These freedoms do not change the solution for the reduced (constant) velocity $u$ and the scaled magnetic field, but they do increase the generality of the physical velocity field and allow the dynamo parameters to vary across cones (cf \citet{M2015}). Should we allow our sub-scale Reynolds number parameter (i.e. essentially the Dynamo number $\Delta$) to be a function of $Z$ to be found as part of the solution, the problem ceases to be overdetermined but becomes strongly non-linear. Nevertheless this provides a method to proceed numerically.
The scale invariant equations (\ref{eq:beqs}) are not over determined since they are not constrained to be solenoidal. They are exact and could be studied at length numerically. We have been content to find some exact analytic solutions, both solenoidal and non-solenoidal.
A more numerical study will be important when observational data are to be fitted, although trends should be clear from our results. The necessary data are still accumulating \cite{CMP2017}. A sign changing effect is of course to be expected if the magnetic spirals \cite{Beck2015} persist into the halo. A companion paper \cite{Hen2017} discusses that possibility using the same methods. Nevertheless it is important to have such an effect present in axial symmetry since this may be the ultimate form of the magnetic field if it is subject to differential rotation \cite{B2014}. This is the same concern that appears when discussing the material arms.
Despite the limitations of our exploratory study, the magnetic fields shown in our figures contain many observed properties. One can reproduce the `X-type' polarization structure, horizontal or vertical RM sign changes, and hints of RM sign variation in each quadrant in the halo of an edge-on galaxy. The magnetic field behaviour is rather similar to that suggested for the Milky Way \cite{Gfarr2015} and inferred empirically \cite{FT2014}.
\section {Acknowledgements}
This study was achieved while the author was enjoying the hospitality of the Astronomy group at the Ruhr-Universit\"at Bochum. The stay was made possible by a generous award from the Alexander von Humboldt foundation together with the welcome provided by Professor Ralf-Juergen Dettmar and members of his group. I wish to thank Reiner Beck for discussion and helpful criticism of an earlier draft of this paper. Marita Krause, Carolina Mora, Philip Schmidt and Arpad Miskolczi are to be thanked for discussions. Dr. Judith Irwin always advises wisely. A referee has helped to sharpen the arguments.
\section{Appendix A: Generalized Power law behaviour}
The multiplicative radial dependences of our solutions are always a simple power law, depending on the class parameter $a\equiv \alpha/\delta$. We have used a mean isotropic spatial scaling $\delta$ in the text. It is possible to generalize the radial dependence slightly in terms of distinct parallel $\delta_\parallel$ and perpendicular $\delta_\perp$ reciprocal scales, a device that yields slightly more flexible radial power laws ($1/\delta_\perp$ would be related to a scale height). In that case the form of the scale invariance becomes ($R$ and $Z$ are as in the text)
\begin{eqnarray}
\bf {A}&=&{\bf \bar A}(Z) e^{(2\delta-\alpha)R},\nonumber \\
\bf{v}&=&{\bf \bar v}(Z) e^{(\delta_\perp-\alpha)R},\label{eq:altScinv}\\
\alpha_d&=& \bar\alpha_d(Z) e^{(\delta_\perp-\alpha)R},\nonumber\\
\eta &=& \bar\eta(Z) e^{(\delta+\delta_\perp -\alpha)R},\nonumber
\end{eqnarray}
and now
\begin{equation}
\delta=\frac{\delta_\parallel+\delta_\perp}{2}.\label{eq:avgdelta}
\end{equation}
This treatment of scale invariance follows from the analysis advocated in \cite{CH1991} and in detail in \cite{Hen2015}. It is similar in spirit to the two dimensional scaling used in the Blasius solution for viscous flow over a flat plate. It allows us to use different scalings in the directions perpendicular and parallel to the plane.
We have scaled the sub-scale helicity and the mean velocity (in whatever pattern frame) in terms of the perpendicular scale height, because this corresponds to rotating gas rising from the galactic plane. We scaled the vector potential in terms of the isotropic reciprocal scale, assuming it to have similar characteristic scales in both directions.
The innovative scaling is that of the diffusion coefficient. It must have a net spatial scaling of dimension two (the dependence through $\delta$ as in the text equation (\ref{eq:altScinv})) and a temporal scaling of dimension one (the dependence through $\alpha$). We have achieved this by using one perpendicular reciprocal scale plus the average reciprocal scale. This choice is necessary to create scale invariance given our other choices. However we can in fact expect the diffusion to involve both perpendicular and parallel scales, because it is likely turbulence dependent. We note that this requires the two spatial reciprocal scales to be divided according to $3\delta_\perp/2+\delta_\parallel/2$, so that the perpendicular scale is weighted more heavily. However, it is possible to exchange $\delta_\perp$ for $\delta_\parallel$ throughout, if this were believed to be more physically reasonable.
We are free to define the `similarity class' once again as $a\equiv \alpha/\delta$. Hence the radial dependences of the velocity, the sub-scale helicity and the diffusion coefficient become (recall the text equation (\ref{eq:rz}) for $\delta r$)
\begin{eqnarray}
\bf{v},\alpha_d &\propto& (\delta r)^{(\frac{2\delta_\perp}{\delta_\parallel+\delta_\perp}-a)} ,\nonumber\\
\eta&\propto&(\delta r)^{(\frac{3\delta_\perp}{2\delta}+\frac{\delta_\parallel}{2\delta}-a)}.\label{eq:radial}
\end{eqnarray}
The standard procedure sets $\delta_\perp=\delta_\parallel=\delta$, so that these dependences become $(1-a)$ and $(2-a)$ respectively, as in equations (\ref{eq:Scinv}) of the text. However, these exponents now depend on the relative size of the two directional reciprocal scales. For example if $\delta_\perp\gg\delta_\parallel$ then the powers tend to $2-a$ and $3-a$ respectively. If $\delta_\parallel\gg\delta_\perp$ then we obtain $-a$ and $1-a$ for the respective powers. The class $a$ may also change depending on how a governing integral acts. {\it Because of the integrated form of equation (\ref{eq:dynamoI}), the equations for the vector potential are not different from the standard case obtained when the scales are equal, as in the text}. We can therefore invoke this optional scaling as desired.
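The limiting behaviour quoted above can be checked directly. The following short sketch transcribes the exponents of equation (\ref{eq:radial}), with $\delta=(\delta_\parallel+\delta_\perp)/2$, and evaluates the isotropic case and the two anisotropic limits:

```python
# Radial power-law exponents of the velocity/helicity and of the diffusivity
# (equation eq:radial), as functions of the reciprocal scales and the class a.
def exponents(delta_perp, delta_par, a):
    delta = 0.5 * (delta_par + delta_perp)   # mean reciprocal scale
    p_v = 2.0 * delta_perp / (delta_par + delta_perp) - a
    p_eta = (1.5 * delta_perp + 0.5 * delta_par) / delta - a
    return p_v, p_eta

a = 1.0
print(exponents(1.0, 1.0, a))   # isotropic scales: (1-a, 2-a)
print(exponents(1e9, 1.0, a))   # delta_perp >> delta_par: tends to (2-a, 3-a)
print(exponents(1.0, 1e9, a))   # delta_par >> delta_perp: tends to (-a, 1-a)
```

The three printed pairs reproduce the $(1-a,\,2-a)$, $(2-a,\,3-a)$ and $(-a,\,1-a)$ limits stated in the text.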
\section{Appendix B: Steady State dynamo; Solenoidal Mean Magnetic field?}
The solenoidal/non-solenoidal ambiguity appears to arise because the `ab initio' {\it steady state} dynamo theory is ambiguous in this regard. Thus consider that we expect from Ohm's law that the total {\it steady} electric field ${\bf E'}$ in the medium moving with velocity ${\bf v}$ is
\begin{equation}
c{\bf E'} =\frac{{\bf j}}{\sigma},
\end{equation}
where ${\bf j}$ is the current density and $\sigma$ is the effective conductivity. Consequently from Faraday's law and Amp\`ere's law (and $v\ll c$) in the absence of a conservative electric field, we have in the systemic frame the total steady electric field as
\begin{equation}
c{\bf E}=-{\bf v}\wedge {\bf B}+\eta\nabla\wedge {\bf B}={\bf 0},\label{eq:Ezero}
\end{equation}
where ${\bf B}$ and ${\bf v}$ are total quantities, $\eta$ is the sub scale resistivity, and the derivatives are over a sub-scale region.
Normally one averages this equation over a constant mesoscale volume after writing the velocity and magnetic fields in terms of mean and fluctuating parts. Making the usual assumptions about the sub-scale behaviour (\cite{M1978}), gives {\it directly} for the mean magnetic field ${\bf b}$
\begin{equation}
\alpha_d {\bf b}-\eta\nabla\wedge {\bf b}+{\bf v}\wedge {\bf b}={\bf 0}.\label{eq:steadyBdynamo}
\end{equation}
This becomes our equations (\ref{eq:beqs}) after introducing self-similarity and writing out the components. We do not require from the formalism that the mean field be solenoidal to this point.
As a speculation concerning the relevance of non solenoidal mean magnetic dynamo fields, we offer the following.
The mesoscale averaging volume is usually assumed to be constant (ensuring that it commutes with differential operators) and hence the mean magnetic field should be solenoidal if the sub-scale magnetic field is solenoidal. However, provided that the mesoscale average over the sub-scale curl of the mean magnetic field may be taken equal to the macroscopic curl of the mean magnetic field, one obtains the same equation for the mean field ${\bf B}$ even if the mesoscale averaging volume is not constant. The solenoidal constraint is not then necessarily present.
This debatable condition should be examined. It takes the explicit form
\begin{equation}
\bar\nabla\wedge {\bf b}=\big<\nabla\wedge {\bf b}\big >,
\end{equation}
where $\bar\nabla$ refers to macroscopic derivatives, $\nabla$ refers to sub scale derivatives and ${\bf b}$ is the mesoscale averaged magnetic field. The indicated average is the mesoscale average. If expressed in Cartesian component form we have
\begin{equation}
D_jb_k=\big<\partial_jb_k\big>,
\end{equation}
where $D_j$ is the macroscopic derivative and $\partial_j$ is the sub-scale derivative. Because there is no other derivative in equation (\ref{eq:Ezero}), this may actually be used as the meaning of the macroscopic derivative, when acting on any function that is averaged over randomly fluctuating values.
\newpage
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{Hydrate reservoir model}
\label{sec:hydrateReservoirSimulator}
\input{hydrateReservoirSimulator}
\section{Material and Methods}
\label{sec:experimentDetails}
\input{experimentDetails}
\section{Numerical Simulation}
\label{sec:numericalSimulation}
\input{numericalSimulation}
\section{Discussion and Outlook}
\label{sec:discussionAndOutlook}
\input{discussionAndOutlook}
\newpage
\input{mathematicalModel}
\input{materialPropertiesTable}
\newpage
\paragraph{Acknowledgements}
We gratefully acknowledge the support for the first author by the German Research Foundation (DFG), through project no. WO 671/11-1.
This work was further funded by the German Federal Ministries of Economy (BMWi) and Education and Research (BMBF) through the SUGAR project (grant No. 03SX250, 03SX320A \& 03G0856A),
and the EU-FP7 project MIDAS (grant agreement no. 603418).
The reported experimental data is attached as supplemental material with this article.
\bibliographystyle{plain}
\section{Introduction \label{sec:1}}
Topology is originally a mathematical concept to discuss the properties of a geometric object, but has been extended to a variety of research fields in recent decades~\cite{Mermin1979, Nakahara2003, Braun2012, Xiao2010}.
In condensed matter physics, the electronic states of solids have been discussed by topology of the electronic band structures in momentum space, which led to the discovery of topological states of matter, e.g., the quantum Hall state~\cite{Ando1974, Klitzing1980, Laughlin1981, Thouless1982} and the topological insulator~\cite{Kane2005, Bernevig2006, Fu2007TI3D, Moore2007, Fu2007TIIS, Roy2009, Hasan2010, Ando2013}.
Topology appears also in geometric structures of the spin textures in magnets.
The typical examples are swirling noncoplanar spin textures, such as magnetic skyrmions~\cite{Bogdanov1989, Bogdanov1994, Bogdanov1995, Roessler2006} and Bloch points~\cite{Feldtkeller1965, Doring1968, Kotiuga1989} (or equivalently, magnetic hedgehogs~\cite{Volovik1987, Kanazawa2016, Fujishiro2019}).
These topological spin textures are characterized by an integer called the topological invariant: for instance, the skyrmion number for skyrmions~\cite{Rajaraman1987, Braun2012,Nagaosa2013} and the monopole charge for hedgehogs~\cite{Volovik1987, Braun2012}.
The topological invariant is robust against perturbation, which ensures the topological protection of the spin textures.
Moreover, the noncoplanar spin structures can generate
the so-called emergent electromagnetic fields through the Berry phase mechanism~\cite{Berry1984,Volovik1987,Xiao2010,Nagaosa2012-1,Nagaosa2012-2,Nagaosa2013}.
They are fictitious electromagnetic fields acting on electrons coupled to the spin textures, and thus, give rise to unusual
quantum transport and optical phenomena, such as the topological Hall effect~\cite{Loss1992, Ye1999, Bruno2004, Onoda2004, Binz2008, Nakazawa2019}, the Nernst effect~\cite{Shiomi2013, Mizuta2016, Hirschberger2020TNE}, the magneto-optical Kerr effect~\cite{Feng2020,Hayashi2021}, and the emergent inductance~\cite{Nagaosa2019, Yokouchi2020, Kurebayashi2021, Ieda2021, Kitaori2021emergent}.
Owing to these distinguishing properties, the topological spin textures have attracted a lot of attention for not only fundamental physics but also applications to next-generation electronic devices.
In magnetic materials, the topological spin textures often appear in the form of a periodic array of the topological objects.
For instance, the magnetic skyrmions appear by forming a periodic lattice called the
skyrmion lattice (SkL)~\cite{Muhlbauer2009, Yu2010, Yu2011, Munzer2010, Seki2012, Adams2012}, and the magnetic hedgehogs (and the antihedgehogs) appear as the hedgehog lattice (HL)~\cite{Tanigaki2015, Kanazawa2016, Yang2016, Fujishiro2019, Ishiwata2020, Okumura2020, Aoyama2021}.
These periodic structures can be represented by superpositions of multiple spin density waves, and hence, called multiple-$Q$ spin textures.
An example is shown in Fig.~\ref{fig:phase_schematic}(a), where a SkL is given by a superposition of three proper screws and called the $3Q$-SkL.
As such superpositions yield superstructures as the interference patterns, the topological spin textures can be viewed as ``spin moir\'e''~\cite{Shimizu2021moire}.
Analogous to moir\'e fringes in optics, there are many ways to modulate the spin moir\'e, such as the number of superposed waves~\cite{Binz2006-1,Binz2006-2,Park2011},
the amplitudes of each spin density waves~\cite{Shimizu2021anisotropy},
and the angles between the propagating directions of the superposed waves~\cite{Shimizu2021moire}.
Such modulations bring about various topological phases with different topological invariants and topological phase transitions between them.
\begin{figure}[tb]
\includegraphics[width=0.95\columnwidth]{3q_phase_schematic.pdf}
\caption{
\label{fig:phase_schematic}
Variations of spin textures while changing the phase in the superpositions of three proper screws with the wave vectors ${\bf q}_1$, ${\bf q}_2$, and ${\bf q}_3$.
(b) and (c) are obtained from (a) by phase shifts of $\frac{\pi}{2}$ and $\pi$, respectively.
The left panels display the schematic pictures of the superposed waves
and the right panels show the spin textures obtained by the superpositions.
The color of the arrows in the left panels represents the out-of-plane component of spins, as indicated in the inset of (c).
The skyrmion number changes from (a) $N_{\rm sk}=1$ to (b) $N_{\rm sk}=0$ and (c) $N_{\rm sk}=-1$, while the rotational symmetry of the spin texture changes from (a) sixfold to (b) threefold and back to (c) sixfold.
The black rhombus represents the magnetic unit cell.
See Sec.~\ref{sec:3.2} for the details.
}
\end{figure}
Among such parameters in spin moir\'e, it was recently pointed out that
the phase degree of freedom in the superposed waves is an important parameter
to control not only the spin textures but also their symmetry and topological properties~\cite{Kurumaji2019,Hayami2021phase}.
The situation is illustrated for superpositions of three proper screws in Fig.~\ref{fig:phase_schematic}.
Figure~\ref{fig:phase_schematic}(a) shows a SkL by a superposition of three proper screws running in the $120^\circ$ directions of ${\bf q}_1$, ${\bf q}_2$, and ${\bf q}_3$.
The spin texture comprises a hexagonal array of skyrmions with the skyrmion number $N_{\rm sk}=1$ per magnetic unit cell
and has sixfold rotational symmetry; see Sec.~\ref{sec:3.2} for the details.
Let us consider a phase shift in the ${\bf q}_1$ component from this state.
The results obtained by $\frac{\pi}{2}$ and $\pi$ shifts are shown in Figs.~\ref{fig:phase_schematic}(b) and \ref{fig:phase_schematic}(c), respectively.
The symmetry is reduced to threefold for the $\frac{\pi}{2}$ shift, but recovered to sixfold for the $\pi$ shift.
Accordingly, the topological property is also changed:
The $\frac{\pi}{2}$ shift gives a periodic array of half skyrmions called merons and antimerons, leading to $N_{\rm sk}=0$, while the $\pi$ shift leads to a SkL with $N_{\rm sk}=-1$.
Thus, the phases of the superposed waves are relevant degrees of freedom, but their impact has not been fully elucidated thus far, not only for SkLs but also for other topological spin textures like HLs.
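The phase dependence of the skyrmion number sketched in Fig.~\ref{fig:phase_schematic} can be reproduced numerically. The sketch below superposes three proper screws on one magnetic unit cell and evaluates $N_{\rm sk}$ by summing lattice solid angles (the Berg--L\"uscher method); the screw polarization vectors and phase conventions here are one plausible illustrative choice and need not match the figure exactly.

```python
import numpy as np

# Skyrmion number of a 3Q superposition of proper screws, evaluated with the
# lattice solid-angle (Berg-Luescher) method on one magnetic unit cell.
# Conventions (screw polarizations, phase choices) are illustrative.
def skyrmion_number(phases, m, N=96):
    s, t = np.meshgrid(np.arange(N) / N, np.arange(N) / N, indexing="ij")
    # With q1 + q2 + q3 = 0 for three wave vectors at 120 degrees, the
    # arguments Q_eta on the unit cell (s, t) reduce to:
    Q = [2 * np.pi * s + phases[0],
         2 * np.pi * t + phases[1],
         -2 * np.pi * (s + t) + phases[2]]
    S = np.zeros(s.shape + (3,))
    for eta, theta in enumerate((0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
        e1 = np.array([-np.sin(theta), np.cos(theta), 0.0])  # in-plane, _|_ q
        S += np.cos(Q[eta])[..., None] * e1
        S[..., 2] += np.sin(Q[eta])                          # screw z part
    S[..., 2] += m
    S /= np.linalg.norm(S, axis=-1, keepdims=True)           # |S(r)| = 1

    def omega(a, b, c):  # signed solid angle of the spherical triangle (a,b,c)
        num = np.einsum("...i,...i->...", a, np.cross(b, c))
        den = (1 + np.einsum("...i,...i->...", a, b)
                 + np.einsum("...i,...i->...", b, c)
                 + np.einsum("...i,...i->...", c, a))
        return 2 * np.arctan2(num, den)

    S1, S2 = np.roll(S, -1, axis=0), np.roll(S, -1, axis=1)  # periodic cell
    S12 = np.roll(S1, -1, axis=1)
    return (omega(S, S1, S12) + omega(S, S12, S2)).sum() / (4 * np.pi)

nsk_0  = skyrmion_number([np.pi / 2, 0.0, 0.0], m=0.0)      # generic phase
nsk_pi = skyrmion_number([3 * np.pi / 2, 0.0, 0.0], m=0.0)  # pi-shifted q1
print(round(nsk_0), round(nsk_pi))
```

On the lattice the charge is quantized to an integer, and the $\pi$ shift in the ${\bf q}_1$ phase reverses its sign, consistent with the change between panels (a) and (c) of the figure.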
In this paper, we systematically clarify the effect of phase shifts on the typical multiple-$Q$ spin textures, two-dimensional (2D) SkLs and three-dimensional (3D) HLs, focusing on their topological properties and the emergent magnetic fields.
We first establish a generic framework to deal with the phase shift by introducing the hyperspace with an additional dimension corresponding to the phase degree of freedom,
inspired by the description of the phason degree of freedom in quasicrystals~\cite{Levine1984, Levine1986, Socolar1986, Steinhardt1987}.
In the hyperspace representation, the 2D SkLs composed of the three spin density waves with the phase degree of freedom are mapped to 3D HLs in which the Dirac strings connecting the hedgehogs and antihedgehogs correspond to the skyrmion and antiskyrmion cores in the original 2D SkLs.
Similarly, the 3D HLs composed of four spin density waves are mapped to four-dimensional (4D) loop lattices; the intersections of the membranes defined by the loops, which we call ``the Dirac planes'', with 3D hyperplanes give hedgehog-antihedgehog pairs connected by the Dirac strings in the original 3D HLs.
Analyzing the topological objects in the hyperspace representation, we systematically elucidate the evolution of the multiple-$Q$ spin structures for the phase shift as well as the magnetization change.
In the 2D case, considering the superpositions of three proper screws or sinusoidal waves, we obtain various $3Q$-SkLs with $N_{\rm sk}$ ranging from $-2$ to $2$ depending on the phase and magnetization.
We find that the phase diagram is dominated by the SkLs with $N_{\rm sk}=\pm 1$ in the case of the proper screw superpositions, whereas the $N_{\rm sk}=\pm 2$ regions become wider in the sinusoidal case.
Interestingly, at zero magnetization, we always obtain the $N_{\rm sk}=\pm 1$ ($\pm 2$) SkLs for any phase shifts in the screw (sinusoidal) case; namely, the $N_{\rm sk}=\pm 2$ ($\pm 1$) SkLs are obtained only with nonzero magnetization in the screw (sinusoidal) case.
On the other hand, in the 3D case, we clarify the topological phase diagrams for the superpositions of four proper screws or sinusoidal waves.
We find various $4Q$-HLs classified by the number of the hedgehogs and antihedgehogs per unit cube, $N_{\rm m}$: $N_{\rm m}=8$, $16$, $32$, and $48$ for the screw case, and $N_{\rm m}=8$, $16$, $24$, $32$, and $48$ for the sinusoidal case.
For the former case, the emergent magnetic field is always negative, while for the latter, it takes both positive and negative values.
Notably, we find unusual Dirac strings running on the horizontal planes perpendicular to the magnetization direction.
In the case of the screw superpositions, they give rise to pair creation of the hedgehogs and antihedgehogs and accordingly the increase of $N_{\rm m}$ from $16$ to $48$ while increasing the magnetization.
This is highly unusual since the increase of the magnetization usually results in pair annihilation and the reduction of $N_{\rm m}$.
In this screw case, $N_{\rm m}$ is always $16$ for any phases at zero magnetization, and the HLs with larger $N_{\rm m}$ appear only for nonzero magnetization.
In contrast, in the case of the sinusoidal superpositions, the zero magnetization state has always the largest $N_{\rm m}=48$, and $N_{\rm m}$ decreases monotonically while increasing the magnetization.
We also show that, in both cases, the amplitude of the emergent magnetic field is maximally enhanced by fusion of three hedgehogs and antihedgehogs on the horizontal Dirac strings where $N_{\rm m}$ changes from $48$ to $16$.
Finally, we study how the phases evolve in the actual multiple-$Q$ spin textures in microscopic models.
Specifically, analyzing the numerical data for the 2D Kondo lattice model~\cite{Ozawa2017} and the 3D effective spin model~\cite{Okumura2020}, we extract the sum of phases in the superposed waves by fitting the spin configurations obtained by the numerical simulations.
We show that phase shifts indeed take place in both cases around the topological phase transitions caused by an external magnetic field:
For the SkL, the sum of phases jumps from $\sim 0$ to $\sim \frac{\pi}{4}$ accompanied by the reduction of $|N_{\rm sk}|$ from $2$ to $1$, while for the HLs, it rapidly decreases from $\sim \frac{\pi}{3}$ to $\sim 0$ accompanied by the reduction of $N_{\rm m}$ from $16$ to $8$.
Our results establish the generic and systematic way to investigate the effect of phase shifts in the multiple-$Q$ spin textures. Moreover, they open a way for unexplored topological magnetic states and phase transitions, which may bring about nontrivial electronic structures and quantum transport properties through the emergent electromagnetic fields.
Thus, our findings would shed light on the engineering of the multiple-$Q$ spin textures and related physics through the phase degree of freedom which has been overlooked thus far.
The rest of the paper is organized as follows.
In Sec.~\ref{sec:2}, we introduce the hyperspace representation for general multiple-$Q$ spin textures.
In Sec.~\ref{sec:3}, applying the framework to 2D $3Q$ states (Sec.~\ref{sec:3.1}),
we elucidate the effect of phase shifts and magnetization changes on the spin textures, the symmetry, and
the topological properties of the $3Q$ states composed of three proper screws (Sec.~\ref{sec:3.2}) and
three sinusoidal waves (Sec.~\ref{sec:3.3}).
In Sec.~\ref{sec:4}, we present the results for the 3D $4Q$ states: the hyperspace representation (Sec.~\ref{sec:4.1}), and the effect of phase shifts and magnetization changes on the $4Q$ states composed four proper screws (Sec.~\ref{sec:4.2}) and four sinusoidal waves (Sec.~\ref{sec:4.3}).
In Sec.~\ref{sec:5}, we present the analysis of the actual numerical data for the 2D Kondo lattice model (Sec.~\ref{sec:5.1}) and the 3D effective spin model (Sec.~\ref{sec:5.2}).
We discuss the results in Sec.~\ref{sec:6}.
Section~\ref{sec:7} is devoted to the summary of this paper.
\section{Phase degree of freedom and hyperspace representation \label{sec:2}}
In this section, we propose a theoretical framework to systematically analyze the phase degree
of freedom in multiple-$Q$ spin structures.
We consider a generic form of the multiple-$Q$ spin structures in $d$-dimensional continuous space,
which is given by the function of the real-space position ${\bf r}=(x_1, \ldots, x_d)$ as
\begin{eqnarray}
{\bf S}({\bf r}) \propto
\sum_{\eta=1}^{N_{Q}}~
\left(
\psi_{\eta}^{\rm c}{\bf e}_{\eta}^1\cos\mathcal{Q}_{\eta}
+\psi_{\eta}^{\rm s} {\bf e}_{\eta}^2\sin\mathcal{Q}_{\eta}
\right)
+ m\hat{\bf z},
\label{eq:general_ansatz}
\end{eqnarray}
where $N_{Q}$ is the number of superposed waves, $\psi_{\eta}^{\rm c}$ and $\psi_{\eta}^{\rm s}$ are the amplitudes of cosinusoidal and
sinusoidal waves with the wave vector ${\bf q}_{\eta}=(q_{\eta}^1, \ldots, q_{\eta}^{d})$, respectively, ${\bf e}_{\eta}^1$ and ${\bf e}_{\eta}^2$ are unit vectors,
$\mathcal{Q}_{\eta}={\bf q}_{\eta}\cdot{\bf r}+\varphi_{\eta}$, $m$ represents the uniform magnetization, and $\hat{\bf z}$ is the unit vector along the $z$ direction.
The spin length is normalized as $|{\bf S}({\bf r})|=1$ for any ${\bf r}$.
Note that $m$ is not the net magnetization because of the normalization.
\subsection{Phase degree of freedom \label{sec:2.1}}
Let us consider how the spin structures are modulated by changing the phases $\varphi_{\eta}$.
When $N_Q \leq d$ and ${\bf q}_{\eta}$ are linearly independent, the periodic spin textures are described by the set of $N_Q$ linearly-independent magnetic translation vectors ${\bf a}_{\eta}$,
which satisfy
\begin{eqnarray}
{\bf a}_{\eta}\cdot{\bf q}_{\eta'}=2\pi\delta_{\eta\eta'},
\label{eq:orthogonal_relation}
\end{eqnarray}
where $\delta_{\eta\eta'}$ is the Kronecker delta.
In this case, a phase shift from $\varphi_\eta$ to $\varphi_\eta+\Delta\varphi_{\eta}$ is reduced to a spatial translation from ${\bf r}$ to ${\bf r}+\Delta{\bf r}$ with
\begin{eqnarray}
\Delta{\bf r} = \sum_{\eta=1}^{N_Q} \frac{\Delta\varphi_{\eta}}{2\pi} {\bf a}_{\eta},
\label{eq:phasishift_lin_indep}
\end{eqnarray}
since the following relation holds:
\begin{eqnarray}
{\bf q}_{\eta}\cdot\Delta{\bf r}
=\sum_{\eta'}\frac{\Delta\varphi_{\eta'}}{2\pi}{\bf q}_{\eta}\cdot{\bf a}_{\eta'}
=\Delta\varphi_{\eta}.
\end{eqnarray}
Hence, the phase degree of freedom is irrelevant when $N_Q \leq d$~\cite{2-1-1_note}.
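This reduction can be checked numerically. The following pure-Python sketch uses arbitrarily chosen wave vectors for $N_Q=d=2$ (the numerical values are illustrative and not tied to any model in this paper): it constructs ${\bf a}_{\eta}$ from Eq.~(\ref{eq:orthogonal_relation}) and verifies that the translation of Eq.~(\ref{eq:phasishift_lin_indep}) reproduces every phase shift, ${\bf q}_{\eta}\cdot\Delta{\bf r}=\Delta\varphi_{\eta}$.

```python
import math

# Illustrative example with N_Q = d = 2: two linearly independent
# wave vectors q_1, q_2 in the plane (arbitrary values).
q1 = (1.0, 0.2)
q2 = (-0.3, 1.1)

# Magnetic translation vectors a_eta satisfying a_eta . q_eta' = 2 pi delta,
# obtained from the inverse of the 2x2 matrix with rows q_1, q_2.
det = q1[0] * q2[1] - q1[1] * q2[0]
a1 = (2 * math.pi * q2[1] / det, -2 * math.pi * q2[0] / det)
a2 = (-2 * math.pi * q1[1] / det, 2 * math.pi * q1[0] / det)

def translation(dphi1, dphi2):
    """Spatial translation Delta r equivalent to the phase shift (dphi1, dphi2)."""
    return tuple(dphi1 / (2 * math.pi) * a1[i] + dphi2 / (2 * math.pi) * a2[i]
                 for i in range(2))

dphi = (0.7, -1.9)  # an arbitrary phase shift
dr = translation(*dphi)
# q_eta . Delta r should reproduce dphi_eta for each eta
shifts = (q1[0] * dr[0] + q1[1] * dr[1],
          q2[0] * dr[0] + q2[1] * dr[1])
```

Since a single $\Delta{\bf r}$ absorbs all $\Delta\varphi_{\eta}$ simultaneously, the shifted texture is a rigid translation of the original one.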
The situation is, however, different for $N_Q>d$.
In this case, the wave vectors ${\bf q}_{\eta}$ are not linearly independent of each other, as exemplified for $N_Q=3$ and $d=2$ in
Fig.~\ref{fig:phase_schematic}.
This means that a phase shift cannot in general be reduced to a spatial translation, since the ${\bf a}_{\eta}$ defined
by Eq.~(\ref{eq:orthogonal_relation}) for any $d$ out of the $N_Q$ wave vectors ${\bf q}_{\eta}$ do not satisfy
the relation in Eq.~(\ref{eq:phasishift_lin_indep}).
\subsection{Hyperspace representation \label{sec:2.2}}
To discuss the effect of the phase shift in the case of $N_Q>d$ systematically, it is convenient to introduce
$N_Q$-dimensional ($N_Q$D) hyperspace.
In the hyperspace, we can introduce a position vector ${\bf R}=(X_1,X_2, \ldots, X_{N_Q})$ and $N_Q$ linearly-independent wave vectors
${\bf Q}_{\eta}=(Q_{\eta}^{1}, Q_{\eta}^{2}, \ldots, Q_{\eta}^{N_Q})$ ($\eta=1,2,\ldots,N_Q$) so that they satisfy the relations~\cite{22_note}
\begin{eqnarray}
\mathcal{Q}_{\eta}={\bf q}_{\eta}\cdot{\bf r}+\varphi_{\eta}={\bf Q}_{\eta}\cdot{\bf R}.
\label{eq:Q_r_R}
\end{eqnarray}
In this hyperspace, we can define the $N_Q$ linearly-independent magnetic translation vectors ${\bf A}_{\eta}$ ($\eta=1,2,\ldots, N_Q$) satisfying
\begin{eqnarray}
{\bf A}_{\eta}\cdot{\bf Q}_{\eta'} = 2\pi\delta_{\eta\eta'}.
\label{eq:orthogonal_hyper}
\end{eqnarray}
By using Eqs.~(\ref{eq:Q_r_R}) and (\ref{eq:orthogonal_hyper}), the hyperspace position ${\bf R}$ and
the real-space one ${\bf r}$ are related with each other as
\begin{eqnarray}
\left(\begin{array}{c}
X_1 \\ X_2 \\ \vdots \\ X_{N_Q}
\end{array}\right)
=
M_Q^{-1}M_q
\left(\begin{array}{c}
x_1 \\ x_2 \\ \vdots \\ x_d \\ 1
\end{array}\right),
\label{eq:r2R}
\end{eqnarray}
where
\begin{eqnarray}
M_Q
&=&
\left(\begin{array}{c}
{\bf Q}_1 \\ {\bf Q}_2 \\ \vdots \\ {\bf Q}_{N_Q}
\end{array}\right)
=
\left(\begin{array}{cccc}
Q_1^{1} & Q_1^{2} & \ldots & Q_1^{N_Q} \\
Q_2^{1} & Q_2^{2} & \ldots & Q_2^{N_Q} \\
\vdots & \vdots & \ddots & \vdots \\
Q_{N_Q}^{1} & Q_{N_Q}^{2} & \ldots & Q_{N_Q}^{N_Q}
\end{array}\right), \\
M_q
&=&
\left(\begin{array}{c | c}
{\bf q}_1 & \varphi_1 \\
{\bf q}_2 & \varphi_2 \\
\vdots & \vdots \\
{\bf q}_{N_Q} & \varphi_{N_Q}
\end{array}\right)
=
\left(\begin{array}{cccc}
q_1^{1} & \ldots & q_1^{d} & \varphi_1 \\
q_2^{1} & \ldots & q_2^{d} & \varphi_2 \\
\vdots & \ddots & \vdots & \vdots \\
q_{N_Q}^{1} & \ldots & q_{N_Q}^{d} & \varphi_{N_Q} \\
\end{array}\right).
\end{eqnarray}
Note that $M_Q^{-1}$ is given as
\begin{eqnarray}
M_Q^{-1}
&=&
\frac{1}{2\pi}
\left(\begin{array}{cccc}
{\bf A}_1 & {\bf A}_2 & \ldots & {\bf A}_{N_Q}
\end{array}\right) \notag \\
&=&
\frac{1}{2\pi}
\left(\begin{array}{cccc}
A_1^{1} & A_2^{1} & \ldots & A_{N_Q}^{1} \\
A_1^{2} & A_2^{2} & \ldots & A_{N_Q}^{2} \\
\vdots & \vdots & \ddots & \vdots \\
A_1^{N_Q} & A_2^{{N_Q}} & \ldots & A_{N_Q}^{{N_Q}}
\end{array}\right).
\end{eqnarray}
The relation in Eq.~(\ref{eq:r2R}) can be regarded as a surjective mapping from the set of the real-space position ${\bf r}$
and the phases $\varphi_{\eta}$ onto the hyperspace position ${\bf R}$~\cite{1_note}.
In this setting, Eq.~(\ref{eq:Q_r_R}) gives a one-to-one correspondence between a multiple-$Q$ spin configuration
with the phase variables $\varphi_{\eta}$ in the original $d$-dimensional real space and that in the $N_Q$D hyperspace with fixed phases
(in other words, without the phase degrees of freedom).
In this representation, a phase shift by $\Delta\varphi_{\eta}$ in the original real space corresponds to a translation in the hyperspace by
\begin{eqnarray}
\Delta{\bf R} = \sum_{\eta=1}^{N_Q} \frac{\Delta\varphi_{\eta}}{2\pi} {\bf A}_{\eta}.
\label{eq:phase_translation_3D}
\end{eqnarray}
Consequently, the hyperspace representation enables us to treat the phase degree of freedom as additional coordinates in the hyperspace.
We note that the situation is analogous to the hyperspace introduced to understand the structures of quasiperiodic crystals, where the number of translation vectors is in general larger than the system dimension and
the quasicrystals are obtained by a ``slice'' of a periodic structure in the hyperspace with additional dimensions spanned by the same number of vectors~\cite{Levine1984, Levine1986, Socolar1986, Steinhardt1987}.
In the quasicrystals, the additional variables in the hyperspace are called phasons~\cite{Levine1985, Bak1985, Kalugin1985, Hu2000},
which also supports the analogy.
\section{$3Q$ skyrmion lattices \label{sec:3}}
In this section, we discuss the effect of phase shifts on the spin structures composed of three wave vectors
in two dimensions, i.e., $N_{Q}=3$ and $d=2$, by using the hyperspace representation introduced in Sec.~\ref{sec:2}.
Specifically, we consider Eq.~(\ref{eq:general_ansatz}) with three ${\bf q}_{\eta}$ given by
\begin{align}
&&{\bf q}_1 = \left( q, 0 \right),\
{\bf q}_2 = \left( -\frac{q}{2}, \frac{\sqrt{3}q}{2} \right),\
{\bf q}_3 = \left( -\frac{q}{2}, -\frac{\sqrt{3}q}{2} \right),
\label{eq:3Q_q_eta}
\end{align}
where $|{\bf q}_{\eta}|=q$.
For this $3Q$ spin structure, the 2D magnetic translation vectors are defined as
\begin{align}
{\bf a}_1 = \frac{2\pi}{q}\left(1,\frac{1}{\sqrt{3}}\right),\ \
{\bf a}_2 = \frac{2\pi}{q}\left(0,\frac{2}{\sqrt{3}}\right).
\label{eq:3Q_a_eta}
\end{align}
In the following, we focus on two types of $3Q$ spin structures.
One is the superposition of proper screws, which includes a $3Q$-SkL found in a wide range of materials, as introduced in Sec.~\ref{sec:1}.
We call this type the screw $3Q$ state; see Sec.~\ref{sec:3.2}.
The other type is the superposition of sinusoidal waves, which includes a $3Q$-SkL with the skyrmion number of two found in the Kondo lattice system on the
triangular lattice~\cite{Ozawa2017} and its effective spin model~\cite{Hayami2017}.
We call this type the sinusoidal $3Q$ state; see Sec.~\ref{sec:3.3}.
Before going into the analyses, we present the hyperspace representation in Sec.~\ref{sec:3.1}, which is commonly used for these two types.
\subsection{Hyperspace representation of the $3Q$ states \label{sec:3.1}}
\begin{figure}[tb]
\includegraphics[width=1.0\columnwidth]{3q_setup.pdf}
\caption{
\label{fig:3Q_setup}
(a) Schematic pictures of the relations between the wave vectors in the 3D reciprocal hyperspace, ${\bf Q}_{\eta}$,
and those in the 2D reciprocal space, ${\bf q}_{\eta}$.
(b) Corresponding relations between the magnetic translation vectors in the 3D hyperspace,
${\bf A}_{\eta}$, and those in the 2D real space, ${\bf a}_{\eta}$.
The projections of ${\bf A}_{\eta}$ onto the $xy$ plane denoted by $\tilde{\bf a}_{\eta}$ are also shown.
}
\end{figure}
Using the framework introduced in Sec.~\ref{sec:2.2}, we construct the hyperspace representation of the $3Q$ spin structures.
First, we set the 3D wave vectors ${\bf Q}_{\eta}=(Q_{\eta}^X, Q_{\eta}^Y, Q_{\eta}^Z)$ in Eq.~(\ref{eq:Q_r_R}) as
\begin{eqnarray}
&&{\bf Q}_1=q\left( 1 , 0 , \frac{1}{\sqrt{2}} \right),
\label{eq:Q_1}\\
&&{\bf Q}_2=q\left( -\frac{1}{2} , \frac{\sqrt{3}}{2} , \frac{1}{\sqrt{2}} \right),
\label{eq:Q_2}\\
&&{\bf Q}_3=q\left( -\frac{1}{2}, -\frac{\sqrt{3}}{2}, \frac{1}{\sqrt{2}} \right),
\label{eq:Q_3}
\end{eqnarray}
so that the projection of ${\bf Q}_\eta$ onto the $q^xq^y$ plane is ${\bf q}_{\eta}$,
and ${\bf Q}_1$, ${\bf Q}_2$, and ${\bf Q}_3$ are orthogonal to each other as shown in Fig.~\ref{fig:3Q_setup}(a), without loss of generality.
Then, we obtain the corresponding magnetic translation vectors ${\bf A}_{\eta}$ in the 3D hyperspace as
\begin{eqnarray}
&&{\bf A}_1=\frac{4\pi}{3q}\left( 1 , 0 , \frac{1}{\sqrt{2}} \right),
\label{eq:A_1}\\
&&{\bf A}_2=\frac{4\pi}{3q}
\left( -\frac{1}{2} , \frac{\sqrt{3}}{2}, \frac{1}{\sqrt{2}} \right),
\label{eq:A_2}\\
&&{\bf A}_3=\frac{4\pi}{3q}
\left( -\frac{1}{2}, -\frac{\sqrt{3}}{2}, \frac{1}{\sqrt{2}} \right).
\label{eq:A_3}
\end{eqnarray}
Figure~\ref{fig:3Q_setup}(b) illustrates the relation between ${\bf A}_{\eta}$ and ${\bf a}_{\eta}$.
Note that ${\bf a}_{\eta}$ is given by ${\bf a}_{\eta}=\tilde{\bf a}_{\eta}-\tilde{\bf a}_3$, where $\tilde{\bf a}_{\eta}$ is the projection of ${\bf A}_{\eta}$ onto the $xy$ plane.
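These relations can be verified directly. The following pure-Python sketch (with $q=1$ as an arbitrary unit) checks the duality ${\bf A}_{\eta}\cdot{\bf Q}_{\eta'}=2\pi\delta_{\eta\eta'}$, the mutual orthogonality of the ${\bf Q}_{\eta}$, their projections onto the $q^xq^y$ plane, and the relation ${\bf a}_{\eta}=\tilde{\bf a}_{\eta}-\tilde{\bf a}_{3}$.

```python
import math

q = 1.0  # wave number (arbitrary units)
s3, s2 = math.sqrt(3), math.sqrt(2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# 2D wave vectors q_eta of Eq. (3Q_q_eta)
qv = [(q, 0.0), (-q / 2, s3 * q / 2), (-q / 2, -s3 * q / 2)]

# 3D hyperspace wave vectors Q_eta of Eqs. (Q_1)-(Q_3)
Q = [(q, 0.0, q / s2),
     (-q / 2, s3 * q / 2, q / s2),
     (-q / 2, -s3 * q / 2, q / s2)]

# magnetic translation vectors A_eta of Eqs. (A_1)-(A_3)
c = 4 * math.pi / (3 * q)
A = [(c, 0.0, c / s2),
     (-c / 2, s3 * c / 2, c / s2),
     (-c / 2, -s3 * c / 2, c / s2)]

# projections of A_eta onto the xy plane
a_tilde = [(v[0], v[1]) for v in A]
# a_eta = a_tilde_eta - a_tilde_3 should reproduce Eq. (3Q_a_eta)
a = [(a_tilde[e][0] - a_tilde[2][0], a_tilde[e][1] - a_tilde[2][1])
     for e in range(2)]
```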
By using Eq.~(\ref{eq:r2R}), the hyperspace positions are related with the real-space positions as
\begin{eqnarray}
\left(\begin{array}{c}
X \\ Y \\ Z
\end{array}\right)
=
V_{3}
\left(\begin{array}{c}
x \\ y \\ 1
\end{array}\right),
\label{eq:r2R_3Q}
\end{eqnarray}
where the matrix $V_{3}$ contains the phases as
\begin{align}
V_{3}=
\left(\begin{array}{ccc}
{\bf v}_1 & {\bf v}_2 & {\bf v}_3
\end{array}\right)
=
\left(\begin{array}{ccc}
1 & 0 & \frac{2}{3q}\left( \varphi_1 - \frac{1}{2}(\varphi_2+\varphi_3) \right) \\
0 & 1 & \frac{1}{\sqrt{3}q}\left(\varphi_2 - \varphi_3 \right) \\
0 & 0 & \frac{\sqrt{2}}{3q}\left( \varphi_1 + \varphi_2+\varphi_3 \right)
\end{array}\right).
\end{align}
It is worth noting that Eq.~(\ref{eq:r2R_3Q}) can be rewritten as
\begin{equation}
{\bf R} = x{\bf v}_1 + y{\bf v}_2 + {\bf v}_3.
\end{equation}
This means that the spin configuration on the original 2D $xy$ plane is the same as the one on a slice of the hyperspace spin configuration spanned by
${\bf v}_1$ and ${\bf v}_2$ including the point at ${\bf v}_3$, namely, the horizontal plane with
\begin{equation}
Z=\frac{\sqrt{2}}{3q}\tilde{\varphi},
\label{eq:Z_cubic}
\end{equation}
where
\begin{align}
\tilde{\varphi} = \sum_{\eta} \varphi_\eta = \varphi_1 + \varphi_2 + \varphi_3.
\label{eq:3q_phitilde}
\end{align}
Thus, only the summation of the phases $\varphi_\eta$ is relevant for the present $3Q$ spin textures, instead of each value of $\varphi_\eta$.
Note that $\tilde{\varphi}$ has $2\pi$ periodicity; namely, the spin configuration in the 3D hyperspace repeats with period $\frac{2\sqrt{2}\pi}{3q}$ in the $Z$ direction.
In the original 2D space, the $2\pi$ phase shift in $\tilde{\varphi}$ ($\Delta\tilde{\varphi} = \sum_{\eta} \Delta\varphi_{\eta} = 2\pi$)
corresponds to a spatial translation by
\begin{eqnarray}
\Delta{\bf r}_{\eta}=\tilde{\bf a}_{\eta}-\sum_{\eta'=1}^{3}\frac{\Delta\varphi_{\eta'}}{2\pi}\tilde{\bf a}_{\eta'},
\label{eq:3q_2pi_translation}
\end{eqnarray}
where $\eta$ may take any of $1$, $2$, and $3$.
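The structure of Eqs.~(\ref{eq:r2R_3Q})--(\ref{eq:3q_phitilde}) is easy to confirm numerically. The sketch below (pure Python, with $q=1$) uses the dual-basis form ${\bf R}=\sum_{\eta}({\bf q}_{\eta}\cdot{\bf r}+\varphi_{\eta}){\bf A}_{\eta}/(2\pi)$, an equivalent rewriting of Eq.~(\ref{eq:r2R}), and checks that $Z$ depends only on $\tilde{\varphi}$ through Eq.~(\ref{eq:Z_cubic}), while $(X,Y)$ equal $(x,y)$ up to the constant shifts in the third column of $V_3$; the phase values are arbitrary illustrative choices.

```python
import math

q = 1.0  # wave number (arbitrary units)
s3, s2 = math.sqrt(3), math.sqrt(2)
qv = [(q, 0.0), (-q / 2, s3 * q / 2), (-q / 2, -s3 * q / 2)]
Q = [(q, 0.0, q / s2),
     (-q / 2, s3 * q / 2, q / s2),
     (-q / 2, -s3 * q / 2, q / s2)]
c = 4 * math.pi / (3 * q)
A = [(c, 0.0, c / s2),
     (-c / 2, s3 * c / 2, c / s2),
     (-c / 2, -s3 * c / 2, c / s2)]

def to_hyperspace(x, y, phi):
    """R = sum_eta (q_eta . r + phi_eta) A_eta / (2 pi),
    equivalent to the mapping of Eq. (r2R)."""
    R = [0.0, 0.0, 0.0]
    for eta in range(3):
        Qcal = qv[eta][0] * x + qv[eta][1] * y + phi[eta]
        for i in range(3):
            R[i] += Qcal / (2 * math.pi) * A[eta][i]
    return tuple(R)
```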
In the following, we apply the above hyperspace representation to analyze the effect of phase shifts on the magnetic and topological properties of two types of 2D $3Q$ states.
\subsection{Screw $3Q$ state \label{sec:3.2}}
In this subsection, we analyze the effect of phase shifts on the $3Q$ state
composed of three proper screws.
The spin texture is given by Eq.~(\ref{eq:general_ansatz}) with $N_Q=3$ and
\begin{eqnarray}
&&\psi_{\eta}^{\rm c}=\psi_{\eta}^{\rm s}=\frac{1}{\sqrt{3}}, \\
&&{\bf e}_{\eta}^1=\hat{\bf z}, \
{\bf e}_{\eta}^2={\bf e}_{\eta}^0\times{\bf e}_{\eta}^1, \
{\bf e}_{\eta}^0=\left(\frac{q^x_{\eta}}{q}, \frac{q^y_{\eta}}{q}, 0 \right)^{\mathsf{T}},
\end{eqnarray}
where $\mathsf{T}$ denotes the transpose of the vector. The explicit form is given as
\begin{eqnarray}
{\bf S}({\bf r})
&\propto&
\left(\begin{array}{c}
\frac{\sqrt{3}}{2}(\sin\mathcal{Q}_{2} - \sin\mathcal{Q}_{3}) \\
-\sin\mathcal{Q}_{1}+\frac{1}{2}(\sin\mathcal{Q}_{2} + \sin\mathcal{Q}_{3}) \\
\cos\mathcal{Q}_1+\cos\mathcal{Q}_2+\cos\mathcal{Q}_3+\sqrt{3}m
\end{array} \right).
\label{eq:chiral_3Q_ansatz}
\end{eqnarray}
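As a consistency check, one can confirm numerically that the explicit form in Eq.~(\ref{eq:chiral_3Q_ansatz}) coincides, after normalization, with the generic ansatz of Eq.~(\ref{eq:general_ansatz}) for this choice of $\psi_{\eta}^{\rm c,s}$ and ${\bf e}_{\eta}^{1,2}$. The following pure-Python sketch does this for arbitrarily chosen $m$ and phases (the numerical values are illustrative only).

```python
import math

q, m = 1.0, 0.4          # wave number and magnetization (arbitrary choices)
s3 = math.sqrt(3)
qv = [(q, 0.0), (-q / 2, s3 * q / 2), (-q / 2, -s3 * q / 2)]
phi = (0.5, -0.2, 1.1)   # arbitrary phases

def Qcal(eta, x, y):
    return qv[eta][0] * x + qv[eta][1] * y + phi[eta]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def spin_explicit(x, y):
    """Eq. (chiral_3Q_ansatz), normalized."""
    Q1, Q2, Q3 = (Qcal(e, x, y) for e in range(3))
    return normalize((s3 / 2 * (math.sin(Q2) - math.sin(Q3)),
                      -math.sin(Q1) + 0.5 * (math.sin(Q2) + math.sin(Q3)),
                      math.cos(Q1) + math.cos(Q2) + math.cos(Q3) + s3 * m))

def spin_generic(x, y):
    """Eq. (general_ansatz) with psi^c = psi^s = 1/sqrt(3), e^1 = z_hat,
    and e^2 = e^0 x e^1 = (q_eta^y/q, -q_eta^x/q, 0), normalized."""
    v = [0.0, 0.0, m]
    for eta in range(3):
        Qe = Qcal(eta, x, y)
        v[0] += (qv[eta][1] / q) * math.sin(Qe) / s3
        v[1] += (-qv[eta][0] / q) * math.sin(Qe) / s3
        v[2] += math.cos(Qe) / s3
    return normalize(v)
```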
Before going into the analyses of the topological properties, let us discuss the symmetry of the screw $3Q$ state in Eq.~(\ref{eq:chiral_3Q_ansatz}).
Table~\ref{tab:3q_scr_sym} summarizes the symmetry operations on the spin texture, together with the decomposition of each operation into the change of the sum of phases $\tilde{\varphi}$, spatial translation, and magnetization change.
We here consider the point group operations and time-reversal operation that do not change the wave vectors of the proper screws.
Other symmetry operations are expressed by the combinations of those in Table~\ref{tab:3q_scr_sym}, e.g., $C_{2y}=C_{2z}C_{2x}=C_{6z}^3C_{2x}$.
From Table~\ref{tab:3q_scr_sym}, we can identify the symmetry operations which change neither $\tilde{\varphi}$ nor $m$, namely, which leave the spin texture unchanged.
Specifically, we find that the screw $3Q$ state is symmetric under $C_{6z}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=0$ and $\pi$, and under
$C_{3z}$, $\mathcal{T}C_{2x}$, and their combinations otherwise.
We note that the spatial translation combined with the $C_{6z}$ rotation can be represented by shifting the rotation axis.
\begin{table}
\caption{\label{tab:3q_scr_sym}
Symmetry operations for the screw $3Q$ state in Eq.~(\ref{eq:chiral_3Q_ansatz}) and their decompositions into the change in the sum of phases, spatial translation in the original 2D space, and the magnetization change:
$C_{n\alpha}$ represents an $n$-fold rotation about the axis in the $\alpha$ direction and $\mathcal{T}$ represents the time-reversal operation.
}
\begin{ruledtabular}
\begin{tabular}{c|ccc}
operation & sum of phases & translation & magnetization \\
\hline
$C_{6z}$ & $\tilde{\varphi} \rightarrow 2\pi - \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow m$ \\
$C_{2x}$ & $\tilde{\varphi} \rightarrow \pi + \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow -m$ \\
$\mathcal{T}$ & $\tilde{\varphi} \rightarrow \pi + \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow -m$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[tb]
\includegraphics[width=1.0\columnwidth]{3qscr_spin_hyper.pdf}
\caption{
\label{fig:3qscr_spin_hyper}
(a) Schematic of the three proper screws in the 3D reciprocal hyperspace, whose wave vectors ${\bf Q}_\eta$ are defined by Eqs.~(\ref{eq:Q_1}), (\ref{eq:Q_2}), and (\ref{eq:Q_3}) for the screw $3Q$ state in Eq.~(\ref{eq:chiral_3Q_ansatz}).
(b) Corresponding cubic MUC in the hyperspace and
spin structures on the three horizontal planes with $Z=\frac{\sqrt{2}}{3q}(2n+1)\pi$ ($n=0,1,2$).
The right panel in (b) shows the 2D spin texture on the three slices, which corresponds to
the spin structure in Eq.~(\ref{eq:chiral_3Q_ansatz}) with $\tilde{\varphi}=\varphi_1+\varphi_2+\varphi_3=(2n+1)\pi$,
where $n$ is an integer.
The rhombus consisting of the two triangles and one hexagon denotes the 2D MUC in the original 2D plane.
}
\end{figure}
\subsubsection{Hedgehogs in hyperspace \label{sec:3.2.1}}
To discuss the effect of phase shifts on the screw $3Q$ state in Eq.~(\ref{eq:chiral_3Q_ansatz}),
we study the corresponding spin texture in the 3D hyperspace whose reciprocal space is spanned by the wave vectors ${\bf Q}_\eta$
in Eqs.~(\ref{eq:Q_1}), (\ref{eq:Q_2}), and (\ref{eq:Q_3}).
The schematic picture is shown in Fig.~\ref{fig:3qscr_spin_hyper}(a).
As discussed in the previous subsection, a real-space spin configuration for a given phase summation $\tilde{\varphi}$ on the original 2D plane corresponds to
a hyperspace spin configuration on the horizontal plane with $Z=\frac{\sqrt{2}}{3q}\tilde{\varphi}$ [see Eq.~(\ref{eq:Z_cubic})].
Such a correspondence is exemplified in Fig.~\ref{fig:3qscr_spin_hyper}(b) for $\tilde{\varphi}=(2n+1)\pi$.
In this case, the intersections of the 3D cubic magnetic unit cell (MUC) and the horizontal planes with $Z=\frac{\sqrt{2}}{3q}(2n+1)\pi$ $(n=0,1,2)$ give the hexagon and the two triangles, which comprise the rhombus MUC in the original 2D plane.
In the 3D hyperspace, the spin structure composed of three proper screws may comprise a 3D topological spin structure called
$3Q$-HL~\cite{Kanazawa2016, Zhang2016, Okumura2020, Shimizu2021moire}.
It has a periodic array of topological defects called the hedgehogs and antihedgehogs, whose cores are the singular points where the spin length vanishes (see Sec.~\ref{sec:3.2.2}).
Indeed, by solving the equation ${\bf S}({\bf r})=0$ for Eq.~(\ref{eq:chiral_3Q_ansatz}), we obtain the following eight solutions:
\begin{eqnarray}
(\mathcal{Q}_1^{*},\mathcal{Q}_2^{*},\mathcal{Q}_3^{*}) &=&
\left(\pi+p_1^{\rm scr}(m), \pi+p_1^{\rm scr}(m), \pi+p_1^{\rm scr}(m) \right), \notag \\
&&
\left(\pi-p_1^{\rm scr}(m), \pi-p_1^{\rm scr}(m), \pi-p_1^{\rm scr}(m) \right), \notag \\
&&
\left(\pi-p_2^{\rm scr}(m), \pi-p_2^{\rm scr}(m), p_2^{\rm scr}(m) \right) \notag \\
&&\qquad\qquad\quad
\mbox{and cyclic permutations}, \notag \\
&&
\left(\pi+p_2^{\rm scr}(m), \pi+p_2^{\rm scr}(m), 2\pi-p_2^{\rm scr}(m) \right) \notag \\
&&\qquad\qquad\quad
\mbox{and cyclic permutations},
\label{eq:chiral_3Q_sol}
\end{eqnarray}
where
\begin{eqnarray}
p_1^{\rm scr}(m) = \arccos\left(\frac{m}{\sqrt{3}}\right), \quad
p_2^{\rm scr}(m) = \arccos\left(\sqrt{3}m\right).
\end{eqnarray}
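The eight solutions can be verified by direct substitution. The sketch below (pure Python, for an arbitrarily chosen $0<m<1/\sqrt{3}$ so that both $p_1^{\rm scr}$ and $p_2^{\rm scr}$ are real) checks that the unnormalized spin vector of Eq.~(\ref{eq:chiral_3Q_ansatz}) vanishes at all of them.

```python
import math

m = 0.3  # arbitrary value with 0 < m < 1/sqrt(3)
s3 = math.sqrt(3)
pi = math.pi
p1 = math.acos(m / s3)
p2 = math.acos(s3 * m)

def S(Q1, Q2, Q3):
    """Unnormalized spin vector of Eq. (chiral_3Q_ansatz) as a function of Q_eta."""
    return (s3 / 2 * (math.sin(Q2) - math.sin(Q3)),
            -math.sin(Q1) + 0.5 * (math.sin(Q2) + math.sin(Q3)),
            math.cos(Q1) + math.cos(Q2) + math.cos(Q3) + s3 * m)

def cyclic(t):
    a, b, c = t
    return [(a, b, c), (c, a, b), (b, c, a)]

# the eight zeros of Eq. (chiral_3Q_sol)
zeros = ([(pi + p1,) * 3, (pi - p1,) * 3]
         + cyclic((pi - p2, pi - p2, p2))
         + cyclic((pi + p2, pi + p2, 2 * pi - p2)))
norms = [math.sqrt(sum(c * c for c in S(*z))) for z in zeros]
```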
By using the relation
\begin{eqnarray}
{\bf R}^{*} = \sum_{\eta} \frac{\mathcal{Q}_{\eta}^{*}}{2\pi} {\bf A}_{\eta},
\label{eq:R^*}
\end{eqnarray}
we obtain the positions of the eight singular points in the hyperspace as
\begin{eqnarray}
&&
(X^{*}, Y^{*}, Z^{*}) = \notag \\
&&\quad
\frac{\sqrt{2}}{3q}\left(0, 0, 3(\pi+p_1^{\rm scr}(m)) \right),
\
\frac{\sqrt{2}}{3q}\left(0, 0, 3(\pi-p_1^{\rm scr}(m)) \right), \notag \\
&& \quad
\left( \frac{\pi-2p_2^{\rm scr}(m)}{3q}, \frac{\pi-2p_2^{\rm scr}(m)}{\sqrt{3}q}, \frac{\sqrt{2}(2\pi-p_2^{\rm scr}(m))}{3q} \right)\notag \\
&&\quad
\qquad \qquad \qquad \qquad \mbox{ and $C_3^{Z}$ symmetric points}, \notag \\
&&\quad
\left( -\frac{\pi-2p_2^{\rm scr}(m)}{3q}, -\frac{\pi-2p_2^{\rm scr}(m)}{\sqrt{3}q}, \frac{\sqrt{2}(4\pi+p_2^{\rm scr}(m))}{3q} \right)\notag \\
&& \quad
\qquad \qquad \qquad \qquad \mbox{ and $C_3^{Z}$ symmetric points},
\label{eq:chiral_3Q_sol_R}
\end{eqnarray}
where the $C_3^{Z}$ symmetric points are obtained by $2\pi/3$ and $4\pi/3$ rotations about the $Z$ axis.
\subsubsection{Topological transition in 3D hyperspace \label{sec:3.2.2}}
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth]{3qscr_defect.pdf}
\caption{
\label{fig:3qscr_defect}
Change of the positions of the hedgehogs and antihedgehogs, and the Dirac strings connecting them in the spin structure in the 3D hyperspace corresponding to the screw $3Q$ state in Eq.~(\ref{eq:chiral_3Q_ansatz}) while changing $m$: (a) $m=0$, (b) $m=0.4$, (c) $m=0.8$, and (d) $m=1.6$.
The red, blue, magenta, and cyan spheres are the topological defects with
$Q_{\rm m}=2$, $-2$, $1$, and $-1$, respectively.
The green and purple lines denote the Dirac strings with the vorticity $\zeta=+1$ and $-1$, respectively.
}
\end{figure}
Figure~\ref{fig:3qscr_defect} illustrates the systematic change of the topological defects while changing
the magnetization along the $Z$ direction, $m$.
When $m=0$, four out of the eight solutions in Eq.~(\ref{eq:chiral_3Q_sol_R}) merge at
$\frac{\sqrt{2}}{3q} \left( 0,0,\frac{9\pi}{2} \right)$ and the other four merge at $\frac{\sqrt{2}}{3q} \left( 0,0,\frac{3\pi}{2} \right)$,
since $p_1^{\rm scr}(0)=p_2^{\rm scr}(0)=\frac{\pi}{2}$.
Hence, there are only two topological defects located on the $Z$ axis, as shown in Fig.~\ref{fig:3qscr_defect}(a).
Following the arguments in Refs.~\cite{Park2011,Zhang2016,Kanazawa2016,Okumura2020,Shimizu2021moire}, we compute
the monopole charge for these defects, which is defined by
\begin{eqnarray}
Q_{\rm m}=\frac{1}{4\pi}\int d\tilde{{\bf S}} \cdot {\bf b}({\bf R}),
\label{eq:monopole_charge}
\end{eqnarray}
where ${\bf b}({\bf R})=(b_X({\bf R}), b_Y({\bf R}), b_Z({\bf R}))$ is the scalar spin chirality defined in the hyperspace as
\begin{eqnarray}
b_{i}({\bf R})=\frac{1}{2}\varepsilon^{ijk} {\bf S}({\bf R})\cdot\left(\frac{\partial {\bf S}({\bf R})}{\partial X_j} \times \frac{\partial {\bf S}({\bf R})}{\partial X_k} \right),
\end{eqnarray}
where $\varepsilon^{ijk}$ is the Levi-Civita symbol and ${\bf S}({\bf R})$ is the spin at ${\bf R}$ in the 3D hyperspace; the integral in Eq.~(\ref{eq:monopole_charge}) is taken on a closed surface surrounding the defect in the hyperspace.
Here, $Q_{\rm m}$ takes an integer value, which defines the topological nature of the defect;
a defect with positive (negative) $Q_{\rm m}$ is a hedgehog (an antihedgehog), which is regarded as a source (sink) of
the emergent magnetic fields~\cite{Zhang2016,Kanazawa2016,Okumura2020,Shimizu2021moire}.
We find that the topological defect at $\frac{\sqrt{2}}{3q} \left( 0,0,\frac{3\pi}{2} \right)$ in Fig.~\ref{fig:3qscr_defect}(a) is a hedgehog with $Q_{\rm m}=+2$ (red sphere)
and the other one at $\frac{\sqrt{2}}{3q} \left( 0,0,\frac{9\pi}{2} \right)$ is an antihedgehog with $Q_{\rm m}=-2$ (blue sphere).
Following the procedures in Ref.~\cite{Shimizu2021moire}, we also identify four Dirac strings, which are the lines connecting a hedgehog and an antihedgehog along which the spins point downward (antiparallel to the direction of the magnetization).
The Dirac strings are distinguished by their vorticity given by
\begin{eqnarray}
\zeta = \frac{1}{2\pi}\oint d{\bf l} \cdot \boldsymbol{\nabla}\phi({\bf R}),
\label{eq:vorticity}
\end{eqnarray}
where $\phi({\bf R})$ is the azimuthal angle of ${\bf S}({\bf R})$ and the integral is taken along a closed path surrounding
the string at ${\bf R}$ on a plane perpendicular to the $Z$ axis~\cite{Tatara2019, Shimizu2021moire}.
We find four Dirac strings, as shown in Fig.~\ref{fig:3qscr_defect}(a):
One is the line along the $Z$ axis (green line) and the other three run through the
MUC boundaries and connect the topological defects in the neighboring MUCs (purple lines).
The former has the vorticity of $\zeta=+1$, while the latter three have $\zeta=-1$.
When introducing $m$, each topological defect splits into four;
the hedgehog with $Q_{\rm m}=+2$ splits into three hedgehogs with $Q_{\rm m}=+1$ (magenta spheres) and
one antihedgehog with $Q_{\rm m}=-1$ (cyan sphere), while the antihedgehog with $Q_{\rm m}=-2$ splits into
three antihedgehogs with $Q_{\rm m}=-1$ and one hedgehog with $Q_{\rm m}=+1$, as shown in Fig.~\ref{fig:3qscr_defect}(b).
Note that the total monopole charge is conserved in each splitting.
Thus, by introducing $m$, we obtain in total four hedgehogs with $Q_{\rm m}=+1$ and
four antihedgehogs with $Q_{\rm m}=-1$.
With a further increase of $m$, the three pairs connected by the Dirac strings with $\zeta=-1$ disappear through pair annihilation
at $m=1/\sqrt{3}$, leaving one pair connected by the Dirac string with $\zeta=+1$,
as shown in Fig.~\ref{fig:3qscr_defect}(c).
The remaining hedgehog and antihedgehog move toward each other while further increasing $m$, as shown in Fig.~\ref{fig:3qscr_defect}(d), and they also vanish through pair annihilation at $m=\sqrt{3}$.
Consequently, while increasing $m$, we find two topological transitions caused by the pair annihilation of the hedgehogs and antihedgehogs, at $m=1/\sqrt{3}$ and $m=\sqrt{3}$.
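The monopole charges quoted above can be reproduced numerically by evaluating the degree of the normalized spin field on a small closed surface around each defect, discretizing Eq.~(\ref{eq:monopole_charge}) as a sum of solid angles over a triangulated cube (a standard lattice solid-angle construction). The following pure-Python sketch does this for the two merged defects at $m=0$, with $q=1$; the cube half-width and mesh size are arbitrary numerical choices, and the overall sign of the result depends on the orientation convention, so only the magnitude and the relative sign are meaningful here.

```python
import math

q, m = 1.0, 0.0
s3, s2 = math.sqrt(3), math.sqrt(2)
Qvec = [(q, 0.0, q / s2),
        (-q / 2, s3 * q / 2, q / s2),
        (-q / 2, -s3 * q / 2, q / s2)]

def spin(R):
    """Normalized hyperspace spin field of Eq. (chiral_3Q_ansatz), Q_eta = Q_eta . R."""
    Q1, Q2, Q3 = (sum(Qvec[e][i] * R[i] for i in range(3)) for e in range(3))
    v = (s3 / 2 * (math.sin(Q2) - math.sin(Q3)),
         -math.sin(Q1) + 0.5 * (math.sin(Q2) + math.sin(Q3)),
         math.cos(Q1) + math.cos(Q2) + math.cos(Q3) + s3 * m)
    n = math.sqrt(sum(c * c for c in v))
    return (v[0] / n, v[1] / n, v[2] / n)

def solid_angle(n1, n2, n3):
    """Oriented solid angle of the spherical triangle (n1, n2, n3)."""
    cr = (n2[1] * n3[2] - n2[2] * n3[1],
          n2[2] * n3[0] - n2[0] * n3[2],
          n2[0] * n3[1] - n2[1] * n3[0])
    num = sum(n1[i] * cr[i] for i in range(3))
    den = (1.0 + sum(n1[i] * n2[i] for i in range(3))
               + sum(n2[i] * n3[i] for i in range(3))
               + sum(n3[i] * n1[i] for i in range(3)))
    return 2.0 * math.atan2(num, den)

def monopole_charge(center, h=0.4, N=48):
    """Degree of the spin map on the surface of a cube of half-width h,
    i.e., a discretization of Eq. (monopole_charge)."""
    total = 0.0
    for axis in range(3):
        u, v = [(1, 2), (2, 0), (0, 1)][axis]  # in-plane axes with u x v = +axis
        for sgn in (1.0, -1.0):
            def pt(ii, jj):
                P = list(center)
                P[axis] += sgn * h
                P[u] += -h + 2 * h * ii / N
                P[v] += sgn * (-h + 2 * h * jj / N)  # flip v for outward normal
                return spin(P)
            for i in range(N):
                for j in range(N):
                    n00, n10 = pt(i, j), pt(i + 1, j)
                    n01, n11 = pt(i, j + 1), pt(i + 1, j + 1)
                    total += solid_angle(n00, n10, n11) + solid_angle(n00, n11, n01)
    return total / (4.0 * math.pi)

# the two merged defects at m = 0; expected |Q_m| = 2 with opposite signs
Qm_lower = monopole_charge((0.0, 0.0, math.pi / s2))
Qm_upper = monopole_charge((0.0, 0.0, 3 * math.pi / s2))
```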
\subsubsection{Topological transition on 2D plane \label{sec:3.2.3}}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{3qscr_intersection.pdf}
\caption{
\label{fig:3qscr_intersection}
(a) Hedgehogs and antihedgehogs, and Dirac strings for $m=0$ in the 3D hyperspace.
The pale colors are used for the objects in the neighboring MUCs denoted by the dashed cubes.
The gray plane represents the 2D MUC on the intersection of the plane with $Z=\frac{\sqrt{2}}{3q}\frac{9}{4}\pi$.
(b) The spin configuration on the gray plane in (a), which corresponds to the spin structure in
Eq.~(\ref{eq:chiral_3Q_ansatz}) with $\tilde{\varphi}=\frac{9}{4}\pi$.
The green and purple circles denote the intersections of the Dirac strings with $\zeta=+1$ and $-1$, respectively.
The right panel displays the 2D real-space spin configuration by repeating the MUC.
Similar figures for (c),(d) $m=0$ and $\tilde{\varphi}=3\pi$, and (e),(f) $m=0.9$ and $\tilde{\varphi}=\frac{9}{4}\pi$.
}
\end{figure}
Using the results on the topological defects and the Dirac strings in the 3D hyperspace, we can discuss in a systematic way
the topological properties of the 2D spin texture in Eq.~(\ref{eq:chiral_3Q_ansatz}) while changing $\tilde{\varphi}$ and $m$.
Figure~\ref{fig:3qscr_intersection} displays the relation between the horizontal slices in the 3D hyperspace at $Z=\frac{\sqrt{2}}{3q}\tilde{\varphi}$ and the 2D spin textures with $\tilde{\varphi}$.
In Fig.~\ref{fig:3qscr_intersection}(a), we show the configurations of the hedgehogs and antihedgehogs, and the Dirac strings in the hyperspace at $m=0$, and the horizontal plane with $Z=\frac{\sqrt{2}}{3q}\frac{9}{4}\pi$.
The spin configuration on the plane is shown in Fig.~\ref{fig:3qscr_intersection}(b), which corresponds to
the 2D spin texture in Eq.~(\ref{eq:chiral_3Q_ansatz}) with $\tilde{\varphi}=\frac94\pi$.
The topological property of the 2D spin texture is characterized by the skyrmion number~\cite{Rajaraman1987, Braun2012,Nagaosa2013}
\begin{eqnarray}
N_{\rm sk} = \frac{1}{4\pi} \int_{\rm 2D~MUC} dXdY~b_Z({\bf R}),
\label{eq:Nsk}
\end{eqnarray}
where the integral is taken within the 2D rhombic MUC.
The skyrmion number is also obtained by counting the vorticities of
the Dirac strings as~\cite{Shimizu2021moire}
\begin{eqnarray}
N_{\rm sk} = -\sum_{k} \zeta_k(Z),
\label{eq:Nsk_vorticity}
\end{eqnarray}
where $\zeta_k(Z)$ denotes the vorticity of the $k$th Dirac string intersecting the 2D MUC~\cite{3_note}.
For example, in the case of Fig.~\ref{fig:3qscr_intersection}(b), there are two intersections by the Dirac strings
with $\zeta=+1$ and three with $\zeta=-1$, and hence, $N_{\rm sk}=+1$.
Thus, this simple counting in the hyperspace representation enables us to identify the 2D spin texture at
$m=0$ with $\tilde{\varphi}=\frac94\pi$ as the $3Q$-SkL with $N_{\rm sk}=+1$;
the direct integration in Eq.~(\ref{eq:Nsk}) gives the same result.
Figures~\ref{fig:3qscr_intersection}(c) and \ref{fig:3qscr_intersection}(d) illustrate the situations with $\tilde{\varphi}=3\pi$ at $m=0$.
In this case, the 2D slice has a single intersection by the Dirac string with $\zeta=+1$, and hence, the spin texture with $\tilde{\varphi}=3\pi$ is the $3Q$-SkL with $N_{\rm sk}=-1$.
This demonstrates a switching of the topological property by the phase shift.
In the hyperspace representation, such topological transitions occur when the gray horizontal plane crosses
the hedgehogs or antihedgehogs at the end points of the Dirac strings.
Since the topological defects change their positions with $m$, as shown in Fig.~\ref{fig:3qscr_defect},
the topological properties of the 2D spin structures also change with $m$.
A demonstration is shown in Figs.~\ref{fig:3qscr_intersection}(e) and \ref{fig:3qscr_intersection}(f) for $m=0.9$.
At this value of $m$, three pairs of the hedgehogs and antihedgehogs already vanish by pair annihilation, and only a single pair remains on the $Z$ axis, as shown in Fig.~\ref{fig:3qscr_intersection}(e).
In this case, when the 2D slice intersects the Dirac string connecting the hedgehog-antihedgehog pair, the 2D spin structure becomes a $3Q$-SkL with $N_{\rm sk}=-1$, as exemplified in Fig.~\ref{fig:3qscr_intersection}(f) for $\tilde{\varphi}=\frac94\pi$.
In this way, we can systematically investigate the changes of the magnetic textures and the topological properties
of the 2D $3Q$ spin structures while changing $\tilde{\varphi}$ and $m$.
The procedure is summarized as follows:
(i) Define the 3D spin texture in the hyperspace by using Eqs.~(\ref{eq:Q_r_R}) and (\ref{eq:orthogonal_hyper}),
in the present case, Eqs.~(\ref{eq:Q_1})-(\ref{eq:A_3}),
(ii) identify the hedgehogs and antihedgehogs, and the Dirac strings connecting them for the 3D spin structure in the hyperspace,
(iii) consider the 2D slice of the 3D spin structure at the horizontal plane with $Z=\frac{\sqrt{2}}{3q}\tilde{\varphi}$, which gives the 2D spin texture with the phase summation $\tilde{\varphi}$, and
(iv) take the sum of the vorticities of the Dirac strings intersecting the plane, which gives the skyrmion number $N_{\rm sk}$ of the 2D spin structure through Eq.~(\ref{eq:Nsk_vorticity}).
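This procedure can be cross-checked against the direct integration in Eq.~(\ref{eq:Nsk}). The following pure-Python sketch discretizes the 2D MUC spanned by ${\bf a}_1$ and ${\bf a}_2$ and accumulates the solid angles of neighboring spin triples, a lattice version of Eq.~(\ref{eq:Nsk}); since only the phase sum matters, the phases are distributed equally as $\varphi_{\eta}=\tilde{\varphi}/3$. The values of $(m,\tilde{\varphi})$ below correspond to the examples in Figs.~\ref{fig:3qscr_intersection}(b) and \ref{fig:3qscr_spin}(a), with expected $|N_{\rm sk}|=1$ and $2$; the grid size and $q=1$ are arbitrary numerical choices, and the overall sign depends on the orientation convention.

```python
import math

q = 1.0
s3 = math.sqrt(3)
qv = [(q, 0.0), (-q / 2, s3 * q / 2), (-q / 2, -s3 * q / 2)]
# magnetic translation vectors of Eq. (3Q_a_eta)
a1 = (2 * math.pi / q, 2 * math.pi / (s3 * q))
a2 = (0.0, 4 * math.pi / (s3 * q))

def spin(x, y, m, phis):
    """Normalized spin of Eq. (chiral_3Q_ansatz)."""
    Q = [qv[e][0] * x + qv[e][1] * y + phis[e] for e in range(3)]
    v = (s3 / 2 * (math.sin(Q[1]) - math.sin(Q[2])),
         -math.sin(Q[0]) + 0.5 * (math.sin(Q[1]) + math.sin(Q[2])),
         math.cos(Q[0]) + math.cos(Q[1]) + math.cos(Q[2]) + s3 * m)
    n = math.sqrt(sum(c * c for c in v))
    return (v[0] / n, v[1] / n, v[2] / n)

def solid_angle(n1, n2, n3):
    """Oriented solid angle of the spherical triangle (n1, n2, n3)."""
    cr = (n2[1] * n3[2] - n2[2] * n3[1],
          n2[2] * n3[0] - n2[0] * n3[2],
          n2[0] * n3[1] - n2[1] * n3[0])
    num = sum(n1[i] * cr[i] for i in range(3))
    den = (1.0 + sum(n1[i] * n2[i] for i in range(3))
               + sum(n2[i] * n3[i] for i in range(3))
               + sum(n3[i] * n1[i] for i in range(3)))
    return 2.0 * math.atan2(num, den)

def skyrmion_number(m, phi_sum, N=72):
    """Solid-angle sum over the rhombic MUC spanned by a_1 and a_2 (periodic)."""
    phis = (phi_sum / 3.0,) * 3
    S = [[spin(i / N * a1[0] + j / N * a2[0],
               i / N * a1[1] + j / N * a2[1], m, phis)
          for j in range(N)] for i in range(N)]
    total = 0.0
    for i in range(N):
        for j in range(N):
            n00, n10 = S[i][j], S[(i + 1) % N][j]
            n01, n11 = S[i][(j + 1) % N], S[(i + 1) % N][(j + 1) % N]
            total += solid_angle(n00, n10, n11) + solid_angle(n00, n11, n01)
    return total / (4.0 * math.pi)

nsk_a = skyrmion_number(0.0, 9 * math.pi / 4)  # expected |N_sk| = 1
nsk_b = skyrmion_number(0.7, 0.0)              # expected |N_sk| = 2
```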
\subsubsection{Topological phase diagram \label{sec:3.2.4}}
\begin{figure}[tb]
\includegraphics[width=1.0\columnwidth]{3qscr_Nsk.pdf}
\caption{
\label{fig:3qscr_Nsk}
Topological phase diagram for the screw $3Q$ state in Eq.~(\ref{eq:chiral_3Q_ansatz})
determined by the skyrmion number $N_{\rm sk}$ while changing $m$ and $\tilde{\varphi}$.
The colored regions are topologically nontrivial phases with nonzero $N_{\rm sk}$, and
the white areas in the left and right hand sides denote topologically trivial phases with $N_{\rm sk}=0$.
The black dots represent the pair annihilation of the hedgehogs and antihedgehogs in the 3D hyperspace, which are located at $(m,\tilde{\varphi}) = (-\sqrt{3}, 0)$, $(-1/\sqrt{3}, \pi)$, $(1/\sqrt{3}, 0 )$, and $(\sqrt{3}, \pi)$.
The phase diagram is periodic in the $\tilde{\varphi}$ direction with period of $2\pi$.
}
\end{figure}
\begin{figure}[tbh]
\centering
\includegraphics[width=0.9\columnwidth]{3qscr_spin.pdf}
\caption{
\label{fig:3qscr_spin}
Real-space spin configurations of the screw $3Q$ states:
(a) $m=0.7$ and $\tilde{\varphi}=0$, (b) $m=0.7$ and $\tilde{\varphi}=\pi$,
(c) $m=-0.7$ and $\tilde{\varphi}=0$, and (d) $m=-0.7$ and $\tilde{\varphi}=\pi$.
Each spin configuration is topologically nontrivial with
(a) $N_{\rm sk}=-2$, (b) $N_{\rm sk}=-1$, (c) $N_{\rm sk}=1$, and (d) $N_{\rm sk}=2$.
The notations are common to those in Fig.~\ref{fig:3qscr_intersection}.
}
\end{figure}
Figure~\ref{fig:3qscr_Nsk} summarizes the topological phase diagram on the plane of $m$ and $\tilde{\varphi}$ for Eq.~(\ref{eq:chiral_3Q_ansatz}) obtained by the above procedure.
The result is periodic in the $\tilde{\varphi}$ direction with a period of $2\pi$ and is symmetric with respect to $\tilde{\varphi}=\pi$.
Note that the phase diagram for $m<0$ is obtained by mirroring that for $m>0$ with a $\pi$ shift of $\tilde{\varphi}$ and
the sign inversion of $N_{\rm sk}$, since the spin texture with ($\varphi_\eta, m$) is obtained by the time-reversal operation on that with ($\varphi_\eta+\pi, -m$).
We find four topologically nontrivial phases with $N_{\rm sk}=-2$, $-1$, $1$, and $2$.
The major portions of the phase diagram are occupied by the SkLs with $N_{\rm sk}=\pm1$,
while the SkLs with $N_{\rm sk}=\pm2$ appear only in the small areas in between them for $m\neq 0$; the SkL at $m=0$ always has $N_{\rm sk}=\pm 1$.
The black dots appearing at the ends of the $N_{\rm sk}=\pm 1$ domes at $m=\pm 1/\sqrt{3}$ and $m=\pm \sqrt{3}$ correspond to the topological transitions by
the pair annihilation of the hedgehogs and antihedgehogs in the hyperspace; see Sec.~\ref{sec:3.2.2}.
Typical spin configurations of the screw $3Q$-SkLs with different $N_{\rm sk}$ are shown in Fig.~\ref{fig:3qscr_spin}.
Figure~\ref{fig:3qscr_spin}(a) shows the spin configuration of the SkL with $N_{\rm sk}=-2$ at $m=0.7$ and $\tilde{\varphi}=0$.
In this state, two small skyrmions exist in the MUC and constitute a honeycomb lattice structure.
There are two Dirac strings with $\zeta=1$ crossing the 2D plane, resulting in $N_{\rm sk}=-2$.
Figure~\ref{fig:3qscr_spin}(b) is for the SkL with $N_{\rm sk}=-1$ at $m=0.7$ and $\tilde{\varphi}=\pi$.
In this case, a single skyrmion exists at the center of the MUC.
The Dirac string with $\zeta=1$ through the skyrmion core leads to $N_{\rm sk}=-1$.
Figures~\ref{fig:3qscr_spin}(c) and \ref{fig:3qscr_spin}(d) show the spin configurations for $m=-0.7$ at $\tilde{\varphi}=0$ and
$\tilde{\varphi}=\pi$, respectively.
These are obtained by flipping all the spins in Figs.~\ref{fig:3qscr_spin}(b) and \ref{fig:3qscr_spin}(a),
and hence, $N_{\rm sk}=1$ and $2$, respectively.
We note that the spin configurations with $\tilde{\varphi}=n\pi$ ($n$ is an integer) have sixfold rotational symmetry,
while the others with $\tilde{\varphi}\neq n\pi$ have only threefold symmetry, consistent with the symmetry arguments in Table~\ref{tab:3q_scr_sym}.
Let us conclude this section by discussing some implications of our topological phase diagram to the phase control.
In the previous experimental and theoretical studies for the chiral magnets~\cite{Binz2006-1, Muhlbauer2009, Yu2010, Yu2011, Han2010, Buhrandt2013},
the SkLs with $N_{\rm sk}=-1$ ($+1$) were observed in an external magnetic field applied to the ($-$)$\hat{\bf z}$ direction.
This corresponds to the state with $\tilde{\varphi}\sim\pi$ for $m>0$
and $\tilde{\varphi}\sim 0$ for $m<0$ in the phase diagram in Fig.~\ref{fig:3qscr_Nsk}.
The other SkLs with $N_{\rm sk}=\pm2$, however, have not been reported thus far.
Our topological phase diagram indicates that a phase shift of $\simeq \pi$ in a magnetic field is necessary for reaching the $N_{\rm sk}=\pm2$ states.
This is an interesting issue to be addressed since the emergent magnetic field in the $N_{\rm sk}=\pm2$ states becomes twice as large as that in the $N_{\rm sk}=\pm1$ ones.
In addition, in the previous studies, the SkLs with $N_{\rm sk}=\pm1$ turn into a $1Q$ conical state or a uniformly polarized state
as the magnetic field increases, rather than into the topologically trivial $3Q$ state with $N_{\rm sk}=0$ shown in the phase diagram in Fig.~\ref{fig:3qscr_Nsk}.
This suggests that it is difficult to access the points where the hedgehogs and antihedgehogs cause pair annihilation in the hyperspace (the black dots in Fig.~\ref{fig:3qscr_Nsk}).
Once one can avoid the transition to the $1Q$ conical state, it might be possible to find novel topological phenomena arising from the singularity in the emergent electromagnetic fields due to the pair annihilation.
It is worth noting that some possible ways to control the phase degree of freedom were recently proposed~\cite{Hayami2021phase}.
We will discuss this issue in Sec.~\ref{sec:6}.
\subsection{ Sinusoidal 3$Q$ state \label{sec:3.3}}
We next discuss the phase degree of freedom for the sinusoidal $3Q$ state given by
\begin{align}
{\bf S}({\bf r})\propto
\left(\begin{array}{c}
\frac{\sqrt{3}}{2}\sin\theta(-\cos\mathcal{Q}_2 + \cos \mathcal{Q}_3 ) \\
(-1)^{\Gamma}\frac{1}{2}\sin\theta\left( 2 \cos\mathcal{Q}_1 - \cos \mathcal{Q}_2 - \cos \mathcal{Q}_3 \right) \\
\cos\theta(\cos\mathcal{Q}_1 + \cos\mathcal{Q}_2 + \cos\mathcal{Q}_3 + 3\tilde{m})
\end{array} \right),
\label{eq:nonchiral_3Q_ansatz}
\end{align}
which is obtained from Eq.~(\ref{eq:general_ansatz}) by taking $N_Q=3$ and
\begin{eqnarray}
&&\psi_{\eta}^{\rm c} = \frac{1}{\sqrt{3}}
, \ \ \psi_{\eta}^{\rm s} = 0, \\
&&{\bf e}_1^1=
\left( 0 , (-1)^{\Gamma} \sin\theta , \cos\theta \right)^{\mathsf{T}},
\ \ {\bf e}_2^1 =
R_{\Gamma}{\bf e}_{1}^1,
\ \ {\bf e}_3^1=
R_{\Gamma}^2{\bf e}_{1}^1,
\label{eq:evecs_nonchiral3Q} \\
&&
\tilde{m} = \frac{m}{\sqrt{3}\cos\theta},
\label{eq:mtilde}
\end{eqnarray}
where $\Gamma$ takes 0 or 1, $0<\theta<\frac{\pi}{2}$, and $R_{\Gamma}$ represents a $(-1)^{\Gamma}\frac{2\pi}{3}$ rotation about the $z$ axis given by
\begin{eqnarray}
R_{\Gamma}=\left(\begin{array}{ccc}
\cos\left((-1)^{\Gamma}\frac{2\pi}{3}\right) & -\sin\left((-1)^{\Gamma}\frac{2\pi}{3}\right) & 0 \\
\sin\left((-1)^{\Gamma}\frac{2\pi}{3}\right) & \cos\left((-1)^{\Gamma}\frac{2\pi}{3}\right) & 0 \\
0 & 0 & 1
\end{array}\right).
\end{eqnarray}
In Eq.~(\ref{eq:nonchiral_3Q_ansatz}), $\Gamma$ is a parameter to describe the chirality of the spin texture; the spin texture with $\Gamma=1$ is obtained by flipping the $S_y$ component of that with $\Gamma=0$.
Note that the spin texture with $\Gamma=0$ has threefold rotational symmetry, whereas that with $\Gamma=1$ does not.
Meanwhile, $\theta$ describes the angle of the sinusoidal plane in the constituent waves.
In the following, we mainly focus on the spin textures with $\Gamma=0$, while we touch on those with $\Gamma=1$ in Sec.~\ref{sec:5.1}.
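As a concrete illustration, the threefold rotational symmetry of the $\Gamma=0$ texture can be verified numerically. The sketch below assumes a standard in-plane triple-$q$ set (${\bf q}_1$ along $x$ and ${\bf q}_{2,3}$ rotated by $\pm120^\circ$; the explicit ${\bf q}_\eta$ are not restated in this section, so this choice is our assumption), and checks ${\bf S}(R_{2\pi/3}{\bf r})=R_{2\pi/3}{\bf S}({\bf r})$ at $\tilde{\varphi}=0$:

```python
import numpy as np

q = 1.0
# assumed in-plane triple-q set (q_1 along x; q_2, q_3 rotated by +-120 deg)
qvecs = q * np.array([[1.0, 0.0],
                      [-0.5,  np.sqrt(3)/2],
                      [-0.5, -np.sqrt(3)/2]])

def S_sin3Q(r, phases, m_t, theta=np.arccos(1/np.sqrt(3)), Gamma=0):
    """Unnormalized sinusoidal 3Q spin of Eq. (nonchiral_3Q_ansatz)."""
    Q = qvecs @ r + phases              # Q_eta = q_eta . r + phi_eta
    c = np.cos(Q)
    return np.array([np.sqrt(3)/2 * np.sin(theta) * (-c[1] + c[2]),
                     (-1)**Gamma * 0.5 * np.sin(theta) * (2*c[0] - c[1] - c[2]),
                     np.cos(theta) * (c.sum() + 3*m_t)])

def Rz(a):                              # rotation by angle a about the z axis
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# C_3z check for Gamma = 0 at phi_tilde = 0: S(R r) = R S(r)
r = np.array([0.37, -1.21])
lhs = S_sin3Q(Rz(2*np.pi/3)[:2, :2] @ r, np.zeros(3), m_t=0.2)
rhs = Rz(2*np.pi/3) @ S_sin3Q(r, np.zeros(3), m_t=0.2)
print(np.allclose(lhs, rhs))  # True
```

Under the rotation the arguments permute cyclically, $(\mathcal{Q}_1,\mathcal{Q}_2,\mathcal{Q}_3)\to(\mathcal{Q}_3,\mathcal{Q}_1,\mathcal{Q}_2)$, which is exactly compensated by rotating the spin components.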
Following the arguments in Sec.~\ref{sec:3.2}, we summarize the symmetry operations and their decompositions
for the sinusoidal $3Q$ state in Eq.~(\ref{eq:nonchiral_3Q_ansatz})
with $\Gamma=0$ and $1$ in Tables~\ref{tab:3q_sin_sym} and \ref{tab:3q_sin_sym_G=1}, respectively.
Other symmetry operations are expressed by the combinations of those in the tables.
From Table~\ref{tab:3q_sin_sym}, we find that the sinusoidal $3Q$ state with $\Gamma=0$
is symmetric under $C_{3z}$, $\mathcal{I}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=0$ and $\pi$, and otherwise under $C_{3z}$, $\mathcal{T}C_{2x}$, and their combinations.
Meanwhile, from Table~\ref{tab:3q_sin_sym_G=1}, we find that the state with $\Gamma=1$ is symmetric under $\mathcal{I}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=0$ and $\pi$, and otherwise only under $\mathcal{T}C_{2x}$.
\begin{table}
\caption{\label{tab:3q_sin_sym}
Similar table to Table~\ref{tab:3q_scr_sym} for the sinusoidal $3Q$ state with $\Gamma=0$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz}):
$\mathcal{I}$ represents the spatial-inversion operation and the other notations are common to those in Table~\ref{tab:3q_scr_sym}.
}
\begin{ruledtabular}
\begin{tabular}{c|ccc}
operation & sum of phases & translation & magnetization \\
\hline
$C_{3z}$ & $\tilde{\varphi} \rightarrow \tilde{\varphi}$ & $0$ & $m \rightarrow m$ \\
$C_{2x}$ & $\tilde{\varphi} \rightarrow \pi + \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow -m$ \\
$\mathcal{I}$ & $\tilde{\varphi} \rightarrow 2\pi - \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow m$ \\
$\mathcal{T}$ & $\tilde{\varphi} \rightarrow \pi + \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow -m$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:3q_sin_sym_G=1}
Similar table for the sinusoidal $3Q$ state with $\Gamma=1$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz}).
}
\begin{ruledtabular}
\begin{tabular}{c|ccc}
operation & sum of phases & translation & magnetization \\
\hline
$C_{2x}$ & $\tilde{\varphi} \rightarrow \pi + \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow -m$ \\
$\mathcal{I}$ & $\tilde{\varphi} \rightarrow 2\pi - \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow m$ \\
$\mathcal{T}$ & $\tilde{\varphi} \rightarrow \pi + \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow -m$
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Hedgehogs in hyperspace \label{sec:3.3.1}}
Following the procedure in Sec.~\ref{sec:3.2.1}, we can identify the hyperspace positions of the topological defects
in the 3D spin texture corresponding to Eq.~(\ref{eq:nonchiral_3Q_ansatz}).
Solving ${\bf S}({\bf r})=0$, we obtain the following eight solutions:
\begin{eqnarray}
(\mathcal{Q}_1^{*},\mathcal{Q}_2^{*},\mathcal{Q}_3^{*}) &=&
\left(\pi+p^{\rm sin}(\tilde{m}), \pi+p^{\rm sin}(\tilde{m}), \pi+p^{\rm sin}(\tilde{m}) \right), \notag \\
&&
\left(\pi-p^{\rm sin}(\tilde{m}), \pi-p^{\rm sin}(\tilde{m}), \pi-p^{\rm sin}(\tilde{m}) \right), \notag \\
&&
\left(\pi+p^{\rm sin}(\tilde{m}), \pi-p^{\rm sin}(\tilde{m}), \pi-p^{\rm sin}(\tilde{m}) \right) \notag \\
&&\qquad\qquad\quad
\mbox{and cyclic permutations}, \notag \\
&&
\left(\pi-p^{\rm sin}(\tilde{m}), \pi+p^{\rm sin}(\tilde{m}), \pi+p^{\rm sin}(\tilde{m}) \right) \notag \\
&&\qquad\qquad\quad
\mbox{and cyclic permutations},
\label{eq:nonchiral_3Q_sol}
\end{eqnarray}
where
\begin{eqnarray}
p^{\rm sin}(\tilde{m})=\arccos(\tilde{m}).
\end{eqnarray}
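The eight solutions can be checked numerically: since $\cos(\pi\pm p^{\rm sin}(\tilde m))=-\tilde m$, every sign choice $\mathcal{Q}_\eta^*=\pi\pm p^{\rm sin}(\tilde m)$ makes all three components of Eq.~(\ref{eq:nonchiral_3Q_ansatz}) vanish. A quick sketch (with an arbitrarily chosen $\tilde m=0.4$; helper names ours):

```python
import numpy as np
from itertools import product

m_t = 0.4                                # arbitrary |m~| < 1
p = np.arccos(m_t)                       # p^sin(m~)

def S_of_Q(Q, m_t, theta=np.arccos(1/np.sqrt(3))):
    """Spin of Eq. (nonchiral_3Q_ansatz) as a function of (Q_1, Q_2, Q_3)."""
    c = np.cos(Q)
    return np.array([np.sqrt(3)/2 * np.sin(theta) * (-c[1] + c[2]),
                     0.5 * np.sin(theta) * (2*c[0] - c[1] - c[2]),
                     np.cos(theta) * (c.sum() + 3*m_t)])

# the eight listed zeros are all sign choices Q_eta^* = pi +- p^sin(m~)
zeros = [np.pi + p * np.array(s) for s in product([1, -1], repeat=3)]
print(all(np.allclose(S_of_Q(Q, m_t), 0.0) for Q in zeros))  # True
```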
By using the relation in Eq.~(\ref{eq:R^*}), we obtain the positions of the eight singular points as
\begin{eqnarray}
(X^{*}, Y^{*}, Z^{*}) &=&
\frac{\sqrt{2}}{3q}\left(0, 0, 3(\pi+p^{\rm sin}(\tilde{m})) \right), \notag \\
&&
\frac{\sqrt{2}}{3q}\left(0, 0, 3(\pi-p^{\rm sin}(\tilde{m})) \right), \notag \\
&&
\left( \frac{4p^{\rm sin}(\tilde{m})}{3q}, 0, \frac{\sqrt{2}}{3q}(3\pi-p^{\rm sin}(\tilde{m})) \right) \notag \\
&&\qquad\qquad\quad
\mbox{and $C_3^{Z}$ symmetric points}, \notag \\
&&
\left( -\frac{4p^{\rm sin}(\tilde{m})}{3q}, 0, \frac{\sqrt{2}}{3q}(3\pi+p^{\rm sin}(\tilde{m})) \right) \notag \\
&&\qquad\qquad\quad
\mbox{and $C_3^{Z}$ symmetric points}.
\label{eq:nonchiral_3Q_sol_R}
\end{eqnarray}
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\columnwidth]{3qsin_defect.pdf}
\caption{
\label{fig:3qsin_defect}
Hedgehogs and antihedgehogs, and Dirac strings in the spin structure in the 3D hyperspace corresponding to the sinusoidal $3Q$ state in Eq.~(\ref{eq:nonchiral_3Q_ansatz}) for (a) $\tilde{m}=0$ and (b) $\tilde{m}=0.8$.
The notations are common to those in Fig.~\ref{fig:3qscr_defect}.
The results are obtained for $\Gamma=0$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz}); in the case of $\Gamma=1$, all the monopole charges of the hedgehogs and antihedgehogs, and the vorticities of the Dirac strings reverse their signs.
}
\end{figure}
\subsubsection{Topological transition in 3D hyperspace \label{sec:3.3.2}}
Figure~\ref{fig:3qsin_defect} illustrates the evolution of the topological defects and
the Dirac strings in the hyperspace while changing $\tilde{m}$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz}) with $\Gamma=0$.
The monopole charges, the Dirac strings, and their vorticities are obtained by the same procedure as
in Sec.~\ref{sec:3.2.2}; see Eqs.~(\ref{eq:monopole_charge}) and (\ref{eq:vorticity}).
Note that $\theta$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz}) does not affect the positions of the topological objects in the hyperspace.
In the absence of the magnetization ($\tilde{m}=0$), four out of the eight defects in Eq.~(\ref{eq:nonchiral_3Q_sol_R})
are classified into hedgehogs with $Q_{\rm m}=1$ and the other four are antihedgehogs with $Q_{\rm m}=-1$.
The eight topological defects form the NaCl-like structure, as shown in Fig.~\ref{fig:3qsin_defect}(a).
We find that four pairs of the hedgehogs and antihedgehogs are connected by
four Dirac strings with $\zeta=\pm 1$, which cross each other at the center of the MUC.
When introducing $\tilde{m}$, the hedgehog and antihedgehog pairs move toward each other along the Dirac strings,
and the cube defined by the eight defects shrinks, as shown in Fig.~\ref{fig:3qsin_defect}(b).
All eight defects arrive at the center of the MUC and vanish through simultaneous pair annihilation at
$\tilde{m}=1$.
In the case of $\Gamma=1$, the positions of the topological defects remain the same, but the signs of
all $Q_{\rm m}$ and $\zeta$ are reversed.
\begin{figure}[tb]
\includegraphics[width=1.0\columnwidth]{3qsin_intersection.pdf}
\caption{
\label{fig:3qsin_intersection}
(a) Hedgehogs and antihedgehogs, and Dirac strings for $\tilde{m}=0$
in the 3D hyperspace for the sinusoidal $3Q$ state with $\Gamma$=0.
The gray plane represents the 2D slice at $Z=\frac{\sqrt{2}}{3q}3\pi$ ($\tilde{\varphi}=3\pi$).
(b) The spin configuration on the gray plane in (a), which corresponds to the spin structure in
Eq.~(\ref{eq:nonchiral_3Q_ansatz}) with $\tilde{\varphi}=3\pi$ and $\Gamma=0$.
We take $\theta=\arccos\frac{1}{\sqrt{3}}$ in Eq.~(\ref{eq:evecs_nonchiral3Q}).
The vorticity at the black circle takes $\zeta=-2$ (see the text for details).
(c) and (d) Similar figures for $\tilde{m}=0$ and $\tilde{\varphi}=2\pi$.
The notations are common to those in Fig.~\ref{fig:3qscr_intersection}.
}
\end{figure}
\subsubsection{Topological transition on 2D plane \label{sec:3.3.3}}
As in Sec.~\ref{sec:3.2.3}, the 2D spin structure with phase $\tilde{\varphi}$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz}) is obtained as the slice of the 3D hedgehog lattice at $Z=\frac{\sqrt{2}}{3q}\tilde{\varphi}$, and the skyrmion number $N_{\rm sk}$ is given by
the sum of the vorticities of the Dirac strings, as in Eq.~(\ref{eq:Nsk_vorticity}).
Figure~\ref{fig:3qsin_intersection}(a) shows the configurations of the hedgehogs, antihedgehogs, and the Dirac strings
for the spin structure of Eq.~(\ref{eq:nonchiral_3Q_ansatz}) with $\Gamma=0$
in the hyperspace at $\tilde{m}=0$, and the slice at $Z=\frac{\sqrt{2}}{3q}3\pi$.
The spin configuration on the slice is shown in Fig.~\ref{fig:3qsin_intersection}(b), which corresponds to the 2D spin texture in Eq.~(\ref{eq:nonchiral_3Q_ansatz}) with $\tilde{\varphi}=(2n+1)\pi$ ($n$ is an integer) and $\Gamma=0$.
While the topological objects in the hyperspace are independent of the value of $\theta$ as mentioned in Sec.~\ref{sec:3.3.2}, the spin configuration in the original 2D plane depends on $\theta$; we take $\theta=\arccos\frac{1}{\sqrt{3}}$ in Fig.~\ref{fig:3qsin_intersection}(b).
In the hyperspace, the four Dirac strings cross each other at the center of the MUC; since the slice includes the crossing point,
the sum of the vorticities at the intersection is $\zeta=3\times(-1)+1=-2$, as depicted by the black circles in the figure.
Thus, we can identify the 2D spin texture at $\tilde{m}=0$ and $\tilde{\varphi}=3\pi$ as the $3Q$-SkL with $N_{\rm sk}=2$.
Figures~\ref{fig:3qsin_intersection}(c) and \ref{fig:3qsin_intersection}(d) illustrate the situation with $\tilde{\varphi}=2\pi$ at $\tilde{m}=0$.
In this case, the 2D slice has two intersections of the Dirac strings with $\zeta=+1$, and hence, the spin texture
with $\tilde{\varphi}=2\pi$ is the $3Q$-SkL with $N_{\rm sk}=-2$, which is a time-reversal counterpart of the spin texture in Fig.~\ref{fig:3qsin_intersection}(b).
Thus, similar to the screw $3Q$ spin structures in Sec.~\ref{sec:3.2.3}, the phase shift can cause topological transitions in the sinusoidal
ones.
\subsubsection{Topological phase diagram \label{sec:3.3.4}}
\begin{figure}[tb]
\includegraphics[width=1.0\columnwidth]{3qsin_Nsk.pdf}
\caption{
\label{fig:3qsin_Nsk}
Topological phase diagram for the sinusoidal $3Q$ state in Eq.~(\ref{eq:nonchiral_3Q_ansatz})
determined by the skyrmion number $N_{\rm sk}$ on the plane of $\tilde{m}$ and $\tilde{\varphi}$.
The upper (lower) signs of $N_{\rm sk}$ are for $\Gamma=0$ ($1$).
The notations are common to those in Fig.~\ref{fig:3qscr_Nsk}.
The black points represent the simultaneous annihilation of the four hedgehogs and the four antihedgehogs in the hyperspace, which are located at $(\tilde{m},\tilde{\varphi})=(-1, 0 )$ and $(1, \pi)$.
}
\end{figure}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.9\columnwidth]{3qsin_spin2.pdf}
\caption{
\label{fig:3qsin_spin}
Real-space spin configurations of the sinusoidal $3Q$ state in Eq.~(\ref{eq:nonchiral_3Q_ansatz})
with (a)-(d) $\Gamma=0$ and (e)-(h) $\Gamma=1$, and $\theta=\arccos\frac{1}{\sqrt{3}}$:
(a)(e) $\tilde{m}=0.5$ and $\tilde{\varphi}=\frac{\pi}{2}$, (b)(f) $\tilde{m}=0.5$ and $\tilde{\varphi}=\pi$,
(c)(g) $\tilde{m}=-0.5$ and $\tilde{\varphi}=0$, (d)(h) $\tilde{m}=-0.5$ and $\tilde{\varphi}=\frac{\pi}{2}$.
Each spin configuration is topologically nontrivial and has
(a) $N_{\rm sk}=-1$, (b) $N_{\rm sk}=2$, (c) $N_{\rm sk}=-2$, (d) $N_{\rm sk}=1$,
(e) $N_{\rm sk}=1$, (f) $N_{\rm sk}=-2$, (g) $N_{\rm sk}=2$, and (h) $N_{\rm sk}=-1$.
The white circles in (f) denote the vorticity $\zeta=2$.
Other notations are common to those in Fig.~\ref{fig:3qsin_intersection}.
In (b) and (f), the four Dirac strings cross at the center of the MUC; see Fig.~\ref{fig:3qsin_defect} and the text for details.
}
\end{figure*}
Figure~\ref{fig:3qsin_Nsk} summarizes the topological phase diagram on the plane of
$\tilde{m}$ and $\tilde{\varphi}$ for the 2D spin texture in Eq.~(\ref{eq:nonchiral_3Q_ansatz}).
The result is common to $\Gamma=0$ and $1$, while the sign of the skyrmion number $N_{\rm sk}$ in each phase is opposite: The upper (lower) signs are for $\Gamma=0$ ($1$).
Similar to Fig.~\ref{fig:3qscr_Nsk}, the phase diagram has $2\pi$ periodicity and is symmetric with respect to $\tilde{\varphi}=\pi$, and the result for $\tilde{m}<0$ is obtained by mirroring that for $\tilde{m}>0$ with a $\pi$ shift of $\tilde{\varphi}$ and the sign inversion of $N_{\rm sk}$.
We find four topologically nontrivial phases with $N_{\rm sk}=-2$, $-1$, $1$, and $2$, as in the proper screw case in Fig.~\ref{fig:3qscr_Nsk}, but with different distributions of each phase.
In the present sinusoidal case, the large portions of the phase diagram are occupied by the SkLs with $N_{\rm sk}=\pm2$,
while the SkLs with $N_{\rm sk}=\pm1$ appear in between them only for $\tilde{m}\neq 0$; the state at $\tilde{m}=0$ always has $N_{\rm sk}=\pm 2$, in contrast to Fig.~\ref{fig:3qscr_Nsk}.
Both $N_{\rm sk}=\pm2$ ($\mp2$) and $\mp1$ ($\pm1$) regions end at $\tilde{m}=1$ ($-1$) and $\tilde{\varphi}=\pi$ ($0$) with the simultaneous
pair annihilation of all the hedgehogs and antihedgehogs in the hyperspace,
which are denoted by the black dots in Fig.~\ref{fig:3qsin_Nsk}.
Figure~\ref{fig:3qsin_spin} showcases typical spin configurations of the sinusoidal $3Q$ state with $\theta=\arccos\frac{1}{\sqrt{3}}$.
Here, we take $\Gamma=0$ in Figs.~\ref{fig:3qsin_spin}(a)-\ref{fig:3qsin_spin}(d) and $\Gamma=1$ in Figs.~\ref{fig:3qsin_spin}(e)-\ref{fig:3qsin_spin}(h).
Figure~\ref{fig:3qsin_spin}(a) shows the spin configuration of the $N_{\rm sk}=-1$ state at $\tilde{m}=0.5$ and
$\tilde{\varphi}=\frac{\pi}{2}$.
In this state, there is a single Bloch-type skyrmion with $N_{\rm sk}=-1$ per MUC.
The Dirac string with $\zeta=1$ through the skyrmion core contributes to $N_{\rm sk}=-1$.
Figure~\ref{fig:3qsin_spin}(b) is for the $N_{\rm sk}=2$ state at $\tilde{m}=0.5$ and $\tilde{\varphi}=\pi$.
This spin structure has a skyrmion with $N_{\rm sk}=2$ at the center of the MUC.
In this state, the 2D plane in the hyperspace intersects the crossing point of the four Dirac strings,
which gives the total vorticity as $-2$; see Sec.~\ref{sec:3.3.3}.
Figures~\ref{fig:3qsin_spin}(c) and \ref{fig:3qsin_spin}(d) show the spin configurations with $\tilde{m}=-0.5$ at $\tilde{\varphi}=0$
and $\tilde{\varphi}=\frac{\pi}{2}$; the former is obtained by time-reversal operation on Fig.~\ref{fig:3qsin_spin}(b),
while the latter is obtained by time-reversal operation combined with sixfold rotation operation about the $z$ axis on Fig.~\ref{fig:3qsin_spin}(a).
The corresponding results for $\Gamma=1$ are shown in Figs.~\ref{fig:3qsin_spin}(e)-\ref{fig:3qsin_spin}(h).
In contrast to the screw $3Q$ case in Sec.~\ref{sec:3.2.4}, all these sinusoidal $3Q$ cases
with $\Gamma=0$ have threefold rotational symmetry independent of $\tilde{\varphi}$ and $\tilde{m}$,
while those with $\Gamma=1$ do not; see Sec.~\ref{sec:3.3}.
In addition, the spin textures with $\tilde{\varphi}=n\pi$ ($n$ is an integer) have inversion symmetry independent of $\Gamma$, which is consistent with the symmetry arguments in Tables~\ref{tab:3q_sin_sym} and \ref{tab:3q_sin_sym_G=1}.
The phase diagram in Fig.~\ref{fig:3qsin_Nsk} indicates that the system undergoes a topological phase transition from $N_{\rm sk}=\pm 2$ to $\pm 1$, and finally to $N_{\rm sk}=0$ while increasing $\tilde{m}$.
Such transitions were found in the previous numerical study of the Kondo lattice model on a triangular lattice while increasing the magnetic field~\cite{Ozawa2016}.
Since the $N_{\rm sk}=\pm 2$ and $\pm 1$ phases appear predominantly in the different $\tilde{\varphi}$ regions in our phase diagram, the topological phase transition between them might be accompanied by a phase shift, but the phase degree of freedom was not studied in the previous study.
We will discuss this issue by analyzing the phases in the spin structures obtained by the previous study in Sec.~\ref{sec:5.1}.
\section{$4Q$ hedgehog lattices \label{sec:4} }
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{4q_setup.pdf}
\caption{
\label{fig:4q_setup}
(a) Schematic picture of the wave vectors in 3D reciprocal space, ${\bf q}_{\eta}$.
(b) Corresponding magnetic translation vectors in the 3D real space, ${\bf a}_{\eta}$, represented by orange arrows.
The brown arrows represent the projections of ${\bf A}_{\eta}$ onto the $xyz$ space denoted by $\tilde{\bf a}_{\eta}$.
The gray rhombohedron is the MUC, and the dashed cube with the side length of
$L=\frac{2\sqrt{3}\pi}{q}$ includes four MUCs.
The small gray cube has the side length of $\frac{L}{2}$.
}
\end{figure}
In this section, we elucidate the effect of phase shifts on the spin textures composed of four wave vectors
in three dimensions, i.e., $N_{Q}=4$ and $d=3$, by using the hyperspace representation.
Specifically, we consider Eq.~(\ref{eq:general_ansatz}) with four ${\bf q}_{\eta}$ given by
\begin{eqnarray}
&&{\bf q}_1 = \frac{q}{\sqrt{3}}\left( 1, 1, 1 \right),\ \
{\bf q}_2 = \frac{q}{\sqrt{3}}\left( -1, -1, 1 \right), \nonumber \\
&&
{\bf q}_3 = \frac{q}{\sqrt{3}}\left( -1, 1,- 1 \right),\ \
{\bf q}_4 = \frac{q}{\sqrt{3}}\left( 1, -1, -1 \right).
\label{eq:4Q_q_eta}
\end{eqnarray}
For this $4Q$ spin structure, the 3D magnetic translation vectors are defined as
\begin{eqnarray}
&&
{\bf a}_1 = \frac{\sqrt{3}\pi}{q}\left(1, 0, 1\right),\ \
{\bf a}_2 = \frac{\sqrt{3}\pi}{q}\left(1, 1, 0\right), \nonumber \\
&&
{\bf a}_3 = \frac{\sqrt{3}\pi}{q}\left(0, 1, 1\right).
\label{eq:4Q_a_eta}
\end{eqnarray}
The four wave vectors and the three magnetic translation vectors are depicted in
Figs.~\ref{fig:4q_setup}(a) and \ref{fig:4q_setup}(b), respectively.
While the 3D MUC is given by the gray rhombohedron in Fig.~\ref{fig:4q_setup}(b),
we compute the topological properties for the dashed cube with the side length of $L=\frac{2\sqrt{3}\pi}{q}$,
which includes four MUCs, in the following analyses.
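That ${\bf a}_{1,2,3}$ are indeed magnetic translation vectors can be confirmed by checking that ${\bf q}_\eta\cdot{\bf a}_\nu$ is an integer multiple of $2\pi$ for all pairs, so that every constituent wave is invariant under the translations. A minimal sketch (with $q$ set to unity; array names ours):

```python
import numpy as np

q = 1.0
qv = q/np.sqrt(3) * np.array([[1, 1, 1], [-1, -1, 1],
                              [-1, 1, -1], [1, -1, -1]])   # Eq. (4Q_q_eta)
av = np.sqrt(3)*np.pi/q * np.array([[1, 0, 1],
                                    [1, 1, 0],
                                    [0, 1, 1]])            # Eq. (4Q_a_eta)

# each q_eta . a_nu must be an integer multiple of 2*pi so that every
# constituent wave is invariant under the magnetic translations
dots = qv @ av.T / (2*np.pi)
print(np.allclose(dots, np.round(dots)))  # True
```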
In parallel with the arguments in Sec.~\ref{sec:3}, we focus on the two types of the $4Q$ spin structures in the following:
the superposition of four proper screws and that of four sinusoidal waves.
The former screw $4Q$ state breaks spatial-inversion symmetry and possesses chirality;
see Sec.~\ref{sec:4.2}.
This type of spin structure is found in the noncentrosymmetric material MnSi$_{1-x}$Ge$_{x}$ and the centrosymmetric material SrFeO$_3$, as introduced in Sec.~\ref{sec:1}.
On the other hand, the latter sinusoidal $4Q$ state retains spatial-inversion symmetry or $S_4$ improper rotational symmetry about the $z$ axis; see Sec.~\ref{sec:4.3}.
In the next subsection, we present the hyperspace representation applicable to these two types of spin configurations.
\subsection{Hyperspace representation of the $4Q$ states \label{sec:4.1}}
Following the same procedure as for the $3Q$ states in Sec.~\ref{sec:3.1},
we construct the hyperspace representation of the $4Q$ spin structures.
Considering 4D reciprocal hyperspace, we set the wave vectors
${\bf Q}_{\eta}=( Q_{\eta}^{X}, Q_{\eta}^{Y}, Q_{\eta}^{Z}, Q_{\eta}^{W})$ in Eq.~(\ref{eq:Q_r_R}) without loss of generality:
\begin{eqnarray}
&&
{\bf Q}_1=q\left(
\frac{1}{\sqrt{3}} , \frac{1}{\sqrt{3}} , \frac{1}{\sqrt{3}} , \frac{1}{q}
\right), \label{eq:4Q_Q1}\\
&&
{\bf Q}_2=q\left(
-\frac{1}{\sqrt{3}} , -\frac{1}{\sqrt{3}} , \frac{1}{\sqrt{3}} , \frac{1}{q}
\right),\\
&&
{\bf Q}_3=q\left(
-\frac{1}{\sqrt{3}} , \frac{1}{\sqrt{3}} , -\frac{1}{\sqrt{3}} , \frac{1}{q}
\right),\\
&&
{\bf Q}_4=q\left(
\frac{1}{\sqrt{3}} , -\frac{1}{\sqrt{3}} , -\frac{1}{\sqrt{3}} , \frac{1}{q}
\right),
\label{eq:4Q_Q4}
\end{eqnarray}
where $Q_{\eta}^{X}$, $Q_{\eta}^{Y}$, and $Q_{\eta}^{Z}$ are taken to be proportional to $q_{\eta}^{x}$, $q_{\eta}^{y}$, and $q_{\eta}^{z}$ in Eq.~(\ref{eq:4Q_q_eta}), respectively.
In this setting, ${\bf q}_{\eta}$ is a projection of ${\bf Q}_{\eta}$ onto the $q^x q^y q^z$ space.
Then, we obtain the corresponding magnetic translation vectors ${\bf A}_{\eta}$ in the 4D hyperspace as
\begin{eqnarray}
&&
{\bf A}_1=\frac{\pi}{2q}\left(
\sqrt{3} , \sqrt{3} , \sqrt{3} , q
\right),\\
&&
{\bf A}_2=\frac{\pi}{2q}\left(
-\sqrt{3} , -\sqrt{3} , \sqrt{3} , q
\right),\\
&&
{\bf A}_3=\frac{\pi}{2q}\left(
-\sqrt{3} , \sqrt{3} , -\sqrt{3} , q
\right),\\
&&
{\bf A}_4=\frac{\pi}{2q}\left(
\sqrt{3} , -\sqrt{3} , -\sqrt{3} , q
\right).
\end{eqnarray}
Note that ${\bf a}_1$, ${\bf a}_2$, and ${\bf a}_3$ are given by $\tilde{\bf a}_1-\tilde{\bf a}_3$, $\tilde{\bf a}_1-\tilde{\bf a}_2$, and $\tilde{\bf a}_1-\tilde{\bf a}_4$, respectively, where $\tilde{\bf a}_{\eta}$ are the projections of ${\bf A}_{\eta}$ onto the $xyz$ space;
see Fig.~\ref{fig:4q_setup}(b).
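These relations can be confirmed numerically: the 4D vectors satisfy the duality ${\bf Q}_\eta\cdot{\bf A}_\nu=2\pi\delta_{\eta\nu}$, and the $xyz$ projections $\tilde{\bf a}_\eta$ reproduce ${\bf a}_{1,2,3}$ through the differences quoted above. A minimal sketch (with $q$ set to unity; array names ours):

```python
import numpy as np

q = 1.0
s3 = np.sqrt(3)
signs = np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]])
Qv = np.hstack([q/s3 * signs, np.ones((4, 1))])         # Eqs. (4Q_Q1)-(4Q_Q4)
Av = np.pi/(2*q) * np.hstack([s3 * signs, q*np.ones((4, 1))])

# duality of the 4D wave vectors and magnetic translation vectors
print(np.allclose(Qv @ Av.T, 2*np.pi*np.eye(4)))  # True

# the xyz projections a~_eta reproduce a_1 = a~_1 - a~_3, etc.
at = Av[:, :3]
a123 = np.array([at[0] - at[2], at[0] - at[1], at[0] - at[3]])
print(np.allclose(a123, s3*np.pi/q * np.array([[1, 0, 1],
                                               [1, 1, 0],
                                               [0, 1, 1]])))  # True
```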
By using Eq.~(\ref{eq:r2R}), the hyperspace positions ${\bf R}=(X, Y, Z, W)$ are related with the real-space positions ${\bf r}$ as
\begin{eqnarray}
\left(\begin{array}{c}
X \\ Y \\ Z \\ W
\end{array}\right)
=
V_{4}
\left(\begin{array}{c}
x \\ y \\ z \\ 1
\end{array}\right),
\label{eq:r2R_4Q}
\end{eqnarray}
where $V_{4}$ includes the phases as
\begin{eqnarray}
V_{4}=
\left(\begin{array}{cccc}
1 & 0 & 0 & \frac{\sqrt{3}}{4q}\left( \varphi_1 - \varphi_2 - \varphi_3 + \varphi_4 \right) \\
0 & 1 & 0 & \frac{\sqrt{3}}{4q}\left( \varphi_1 - \varphi_2 + \varphi_3 - \varphi_4 \right) \\
0 & 0 & 1 & \frac{\sqrt{3}}{4q}\left( \varphi_1 + \varphi_2 - \varphi_3 - \varphi_4 \right) \\
0 & 0 & 0 & \frac{1}{4}\left( \varphi_1 + \varphi_2 + \varphi_3 + \varphi_4 \right)
\end{array}\right).
\end{eqnarray}
Equation~(\ref{eq:r2R_4Q}) indicates that the spin configuration in the original 3D $xyz$ space is the same as the one on a hyperplane in the 4D hyperspace with
\begin{eqnarray}
W=\frac{1}{4}\tilde{\varphi},
\end{eqnarray}
where
\begin{eqnarray}
\tilde{\varphi} = \sum_{\eta} \varphi_{\eta} = \varphi_1 + \varphi_2 + \varphi_3 + \varphi_4.
\end{eqnarray}
Similar to the $3Q$ case in Sec.~\ref{sec:3.1}, only the sum of the phases $\varphi_{\eta}$ is relevant,
rather than each individual value of $\varphi_{\eta}$, and $\tilde{\varphi}$ has $2\pi$ periodicity.
Due to this periodicity, a $2\pi$ phase shift in $\tilde{\varphi}$ ($\Delta\tilde{\varphi}=\sum_{\eta} \Delta\varphi_{\eta} = 2\pi$) corresponds to a spatial translation by
\begin{eqnarray}
\Delta{\bf r}_{\eta}=\tilde{\bf a}_{\eta}-\sum_{\eta'=1}^{4}\frac{\Delta\varphi_{\eta'}}{2\pi}\tilde{\bf a}_{\eta'},
\end{eqnarray}
where $\eta$ may take any of 1, 2, 3, and 4.
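The correspondence between the $2\pi$ phase shift and the translation $\Delta{\bf r}_\eta$ can be verified at the level of the arguments $\mathcal{Q}_\nu={\bf q}_\nu\cdot{\bf r}+\varphi_\nu$: under the combined shift and translation, each $\mathcal{Q}_\nu$ changes by an integer multiple of $2\pi$, so any superposition built from the four waves is unchanged. A sketch with a randomly distributed $\Delta\varphi_\eta$ summing to $2\pi$ (with $q$ set to unity; array names ours):

```python
import numpy as np

q = 1.0
s3 = np.sqrt(3)
signs = np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]])
qv = q/s3 * signs                       # Eq. (4Q_q_eta)
at = s3*np.pi/(2*q) * signs             # xyz projections a~_eta of A_eta

rng = np.random.default_rng(1)
dphi = rng.random(4)
dphi *= 2*np.pi / dphi.sum()            # arbitrary shifts with sum 2*pi

for eta in range(4):
    dr = at[eta] - (dphi / (2*np.pi)) @ at      # Delta r_eta
    # every Q_nu = q_nu . r + phi_nu changes by a multiple of 2*pi,
    # so the spin texture is mapped onto itself
    change = (qv @ dr + dphi) / (2*np.pi)
    assert np.allclose(change, np.round(change))
print("OK")
```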
\subsection{Screw $4Q$ state \label{sec:4.2}}
In this subsection, we analyze the effect of phase shifts on the screw $4Q$ state
composed of four proper screws given by
\begin{widetext}
\begin{eqnarray}
{\bf S}({\bf r})
&\propto&
\left(
\begin{array}{c}
\frac{1}{\sqrt{2}}\left(
(-\cos {\mathcal {\mathcal Q}}_1+\cos {\mathcal Q}_2-\cos {\mathcal Q}_3+\cos {\mathcal Q}_4)
+\frac{1}{\sqrt{3}}(-\sin {\mathcal Q}_1+\sin {\mathcal Q}_2-\sin {\mathcal Q}_3+\sin {\mathcal Q}_4)
\right) \\
\frac{1}{\sqrt{2}}\left(
(\cos {\mathcal Q}_1-\cos {\mathcal Q}_2-\cos {\mathcal Q}_3+\cos {\mathcal Q}_4)
-\frac{1}{\sqrt{3}}(\sin {\mathcal Q}_1-\sin {\mathcal Q}_2-\sin {\mathcal Q}_3+\sin {\mathcal Q}_4)
\right) \\
\sqrt{\frac{2}{3}}(\sin {\mathcal Q}_1+\sin {\mathcal Q}_2+\sin {\mathcal Q}_3+\sin {\mathcal Q}_4) + 2m
\end{array}
\right),
\label{eq:4qchiral_ansatz}
\end{eqnarray}
\end{widetext}
which is obtained from Eq.~(\ref{eq:general_ansatz}) by taking $N_Q=4$ and
\begin{eqnarray}
&&\psi_{\eta}^{\rm c}=\psi_{\eta}^{\rm s}=\frac{1}{2}, \\
&&{\bf e}_{\eta}^1=\frac{\hat{\bf z}\times{\bf e}_{\eta}^0}{|\hat{\bf z}\times{\bf e}_{\eta}^0|}, \
{\bf e}_{\eta}^2={\bf e}_{\eta}^0\times{\bf e}_{\eta}^1, \
{\bf e}_{\eta}^0=\frac{{\bf q}_{\eta}}{q}.
\end{eqnarray}
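The explicit form of Eq.~(\ref{eq:4qchiral_ansatz}) can be checked against the generic superposition of four proper screws built from these frames: evaluating $\sum_\eta({\bf e}^1_\eta\cos\mathcal{Q}_\eta+{\bf e}^2_\eta\sin\mathcal{Q}_\eta)+2m\hat{\bf z}$ at random points reproduces the long expression term by term. A sketch (with $q$ set to unity; function names ours, and the overall normalization fixed by hand since the ansatz is defined only up to a prefactor):

```python
import numpy as np

q, m = 1.0, 0.3
s3 = np.sqrt(3)
qv = q/s3 * np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]])
zhat = np.array([0.0, 0.0, 1.0])

def frame(qe):
    """Right-handed frame (e^1, e^2) with e^0 = q/|q|."""
    e0 = qe / np.linalg.norm(qe)
    e1 = np.cross(zhat, e0)
    e1 /= np.linalg.norm(e1)
    return e1, np.cross(e0, e1)

def S_sum(r, phases):
    """Generic superposition of four proper screws plus 2m z-hat."""
    Q = qv @ r + phases
    out = 2*m * zhat
    for eta in range(4):
        e1, e2 = frame(qv[eta])
        out = out + e1*np.cos(Q[eta]) + e2*np.sin(Q[eta])
    return out

def S_explicit(r, phases):
    """Right-hand side of Eq. (4qchiral_ansatz)."""
    Q = qv @ r + phases
    c, s = np.cos(Q), np.sin(Q)
    sx = ((-c[0]+c[1]-c[2]+c[3]) + (-s[0]+s[1]-s[2]+s[3])/s3) / np.sqrt(2)
    sy = (( c[0]-c[1]-c[2]+c[3]) - ( s[0]-s[1]-s[2]+s[3])/s3) / np.sqrt(2)
    sz = np.sqrt(2/3) * s.sum() + 2*m
    return np.array([sx, sy, sz])

rng = np.random.default_rng(0)
r, ph = rng.standard_normal(3), rng.standard_normal(4)
print(np.allclose(S_sum(r, ph), S_explicit(r, ph)))  # True
```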
In the same manner as in Tables~\ref{tab:3q_scr_sym}--\ref{tab:3q_sin_sym_G=1}, we summarize the symmetry operations and their decompositions for the screw $4Q$ state in Table~\ref{tab:4q_scr_sym}.
Similar to the previous arguments, from Table~\ref{tab:4q_scr_sym}, we can obtain the symmetry operations which do not change the spin texture.
In the 3D system, however, some of the symmetry operations are nonsymmorphic.
For instance, the $C_{4z}$ operation at $\tilde{\varphi}=\pi$ is reduced to the spatial translation by $\tilde{\bf a}_{\eta}$ as shown in Table~\ref{tab:4q_scr_sym}, and hence, the operation $\{C_{4z}|-\tilde{\bf a}_{\eta}\}$ does not change the spin texture, where $\{\mathcal{O}|{\bf t}\}$ denotes the translation by ${\bf t}$ after operating $\mathcal{O}$.
This corresponds to a screw operation.
Note that the situation is different from the $3Q$ cases, where a translation combined with rotation can be represented by a shift of the rotation axis.
By considering such relations, we find that the screw $4Q$ state is symmetric under
$C_{4z}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=0$;
under $\{C_{4z}|-\tilde{\bf a}_{\eta}\}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=\pi$;
and under $C_{2z}$, $\mathcal{T}C_{2x}$, and their combinations otherwise.
\begin{table}
\caption{\label{tab:4q_scr_sym}
Similar table to Tables~\ref{tab:3q_scr_sym}--\ref{tab:3q_sin_sym_G=1} for the screw $4Q$ state in Eq.~(\ref{eq:4qchiral_ansatz}).
The notations are common to those in Table~\ref{tab:3q_scr_sym}.
We take $\varphi_{\eta}=\frac{\tilde{\varphi}}{4}$ without loss of generality.
}
\begin{ruledtabular}
\begin{tabular}{c|ccc}
operation & sum of phases & translation & magnetization \\
\hline
$C_{4z}$ & $\tilde{\varphi} \rightarrow 2\pi - \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow m$ \\
$C_{2x}$ & $\tilde{\varphi} \rightarrow \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow -m$ \\
$\mathcal{T}$ & $\tilde{\varphi} \rightarrow \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow -m$
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Hedgehogs in hyperspace \label{sec:4.2.1}}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\textwidth]{4qscr_hedgehogs.pdf}
\caption{
\label{fig:4qchiral_hedgehogs}
Real-space distribution of the hedgehogs and antihedgehogs, and the Dirac strings
within the $L^3$ cube [see Fig.~\ref{fig:4q_setup}(b)] for the screw $4Q$ state in Eq.~(\ref{eq:4qchiral_ansatz})
while changing $m$ and $\tilde{\varphi}$:
(a) $m=0$, (b) $m=0.3$, and (c) $m=0.9$ at $\tilde{\varphi}=\pi/3$, and
(d) $m=0$, (e) $m=0.7$, and (f) $m=0.9$ at $\tilde{\varphi}=\pi$.
The notations are common to those in Fig.~\ref{fig:3qscr_defect}, except for the white lines, which denote Dirac strings running on the horizontal planes, for which the vorticity is ill-defined.
}
\end{figure*}
To discuss the phase shift in the screw $4Q$ state in Eq.~(\ref{eq:4qchiral_ansatz}), we study the corresponding spin texture
in the 4D hyperspace whose reciprocal space is
spanned by the wave vectors ${\bf Q}_{\eta}$ in Eqs.~(\ref{eq:4Q_Q1})--(\ref{eq:4Q_Q4}).
As discussed in the previous subsection, a real-space spin configuration for a given phase summation $\tilde{\varphi}$ in the original 3D space corresponds to a hyperspace spin configuration on the hyperplane with $W=\frac{\tilde{\varphi}}{4}$.
Following the procedure in Sec.~\ref{sec:3.2.1}, we first compute the positions of the topological defects in the 4D hyperspace.
In the present case, however, the solutions of ${\bf S}({\bf R})=0$ are given by lines rather than points in the 4D hyperspace.
When $|m|<\frac{4}{\sqrt{6}}\sin W$, we obtain two solutions analytically:
\begin{eqnarray}
(X^*, Y^*, Z^*)=
L\left( 0, 0, \frac12 \pm \frac{1}{2\pi}r_1^{\rm scr}(m, W) \right),
\label{eq:4qchiral_sol1}
\end{eqnarray}
where
\begin{eqnarray}
r_1^{\rm scr}(m, W)=\arccos\left(\frac{\sqrt{6}m}{4\sin W}\right).
\end{eqnarray}
Here, we show the solutions within $0 \leq X,Y < \frac{L}{2}$ and $0 \leq Z < L$, but their spatial translations by ${\bf a}_{\eta}$ in Eq.~(\ref{eq:4Q_a_eta}) also satisfy ${\bf S}({\bf R})=0$.
Meanwhile, when $|m|<\frac{4}{\sqrt{6}}\cos W$, we obtain two other solutions:
\begin{eqnarray}
(X^*, Y^*, Z^*)&=&
L\left( \frac14, \frac14, \frac{1}{2\pi}r_2^{\rm scr}(m, W) \right), \notag\\
&&
L\left( \frac14, \frac14, \frac12 - \frac{1}{2\pi}r_2^{\rm scr}(m, W) \right),
\label{eq:4qchiral_sol2}
\end{eqnarray}
where
\begin{eqnarray}
r_2^{\rm scr}(m, W)=\arcsin\left(\frac{\sqrt{6}m}{4\cos W}\right).
\end{eqnarray}
The spatial translations also apply to this case.
These solutions do not change their $XY$ coordinates with $m$ and $\tilde{\varphi}$.
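As a cross-check (a minimal numerical sketch, not part of the derivation), the analytical zeros in Eqs.~(\ref{eq:4qchiral_sol1}) and (\ref{eq:4qchiral_sol2}) can be substituted into the reduced $S_z({\bf R})=0$ condition, Eq.~(\ref{eq:4qchiral_sol3_3}); the normalization $\frac{q}{\sqrt{3}}L=2\pi$, implied by the fractional coordinates above, is assumed here:

```python
import math

L = 1.0  # cube edge; assumed normalization (q/sqrt(3)) * L = 2*pi

def ang(x):
    # (q/sqrt(3)) * x under the assumed normalization
    return 2.0 * math.pi * x / L

def sz_lhs(X, Y, Z, m, W):
    # Left-hand side of the reduced S_z(R) = 0 condition
    return (math.cos(ang(X)) * math.cos(ang(Y)) * math.cos(ang(Z)) * math.sin(W)
            - math.sin(ang(X)) * math.sin(ang(Y)) * math.sin(ang(Z)) * math.cos(W)
            + math.sqrt(6.0) * m / 4.0)

def r1_scr(m, W):
    return math.acos(math.sqrt(6.0) * m / (4.0 * math.sin(W)))

def r2_scr(m, W):
    return math.asin(math.sqrt(6.0) * m / (4.0 * math.cos(W)))

m, W = 0.3, math.pi / 12.0  # phi_tilde = pi/3, within both existence windows
# First family: zeros on the X = Y = 0 axis
for Z in (L * (0.5 + r1_scr(m, W) / (2.0 * math.pi)),
          L * (0.5 - r1_scr(m, W) / (2.0 * math.pi))):
    assert abs(sz_lhs(0.0, 0.0, Z, m, W)) < 1e-12
# Second family: zeros on the X = Y = L/4 axis
for Z in (L * r2_scr(m, W) / (2.0 * math.pi),
          L * (0.5 - r2_scr(m, W) / (2.0 * math.pi))):
    assert abs(sz_lhs(L / 4.0, L / 4.0, Z, m, W)) < 1e-12
```

Both families vanish by construction of $r_1^{\rm scr}$ and $r_2^{\rm scr}$; the check fails once $|m|$ exceeds the corresponding threshold, where the arccos/arcsin arguments leave $[-1,1]$.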
In addition to the above analytical solutions, we find further solutions by numerically solving the equation
${\bf S}({\bf R})=0$.
The two conditions $S_x({\bf R})=0$ and $S_y({\bf R})=0$ can be reduced to
\begin{eqnarray}
\tan\left(\frac{q}{\sqrt{3}}Z^*\right) = \pm\sqrt{\frac{3\tan^2W-1}{3-\tan^2W}}
\label{eq:4qchiral_sol3_1}
\end{eqnarray}
and
\begin{align}
\tan \left(\frac{q}{\sqrt{3}}
Y^{*}\right)=\mp\sqrt{
\frac{\left(\sqrt{3}\tan W+1\right)\left(\sqrt{3}+\tan W\right)}
{\left(\sqrt{3}\tan W-1\right)\left(\sqrt{3}-\tan W\right)}
}\tan \left(\frac{q}{\sqrt{3}}X^{*}\right).
\label{eq:4qchiral_sol3_2}
\end{align}
Meanwhile, $S_z({\bf R})=0$ is reduced to
\begin{eqnarray}
&&\cos \left(\frac{q}{\sqrt{3}}X^{*}\right)\cos \left(\frac{q}{\sqrt{3}}Y^{*}\right)\cos \left(\frac{q}{\sqrt{3}}Z^{*}\right)\sin W \notag \\
&&- \sin \left(\frac{q}{\sqrt{3}}X^{*}\right)\sin \left(\frac{q}{\sqrt{3}}Y^{*}\right)\sin \left(\frac{q}{\sqrt{3}}Z^{*}\right)\cos W \notag \\
&&+\frac{\sqrt{6}}{4}m=0.
\label{eq:4qchiral_sol3_3}
\end{eqnarray}
We find that the numerical solutions exist when $\frac{1}{\sqrt{3}}<\tan W<\sqrt{3}$, namely $\frac{2\pi}{3}<\tilde{\varphi}<\frac{4\pi}{3}$.
We note that the solutions in Eqs.~(\ref{eq:4qchiral_sol3_1})--(\ref{eq:4qchiral_sol3_3}) were not mentioned in the previous study of the screw $4Q$ state with $\tilde{\varphi}=\pi$~\cite{Park2011}.
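The existence window quoted above can be checked directly (a numerical sketch, not part of the manuscript): the radicand in Eq.~(\ref{eq:4qchiral_sol3_1}) is non-negative exactly when $\frac{1}{\sqrt{3}}\leq\tan W\leq\sqrt{3}$.

```python
import math

def radicand(W):
    # Argument of the square root in the reduced S_x = S_y = 0 condition;
    # real numerical solutions require it to be non-negative
    t2 = math.tan(W) ** 2
    return (3.0 * t2 - 1.0) / (3.0 - t2)

# W = phi_tilde/4 maps 2*pi/3 < phi_tilde < 4*pi/3 onto pi/6 < W < pi/3
assert radicand(math.pi / 5.0) > 0.0    # inside the window
assert radicand(math.pi / 4.0) > 0.0    # inside (phi_tilde = pi)
assert radicand(math.pi / 12.0) < 0.0   # below the window
assert radicand(0.4 * math.pi) < 0.0    # above the window
```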
While the above solutions are points in the original 3D space, line solutions in the original 3D space are also obtained, but only for $m=0$.
When $W=0$, $\frac{\pi}{6}$, and $\frac{\pi}{3}$, we find the analytical solutions
\begin{eqnarray}
(X^*,Y^*,Z^*) = \left(0,0,*\right), \left(*, \frac{L}{4}, 0\right), \left(0, *, \frac{L}{4}\right),
\label{eq:4qchiral_line_node}
\end{eqnarray}
where $*$ takes an arbitrary value; the three solutions correspond to $W=0$, $\frac{\pi}{6}$, and $\frac{\pi}{3}$, respectively.
In the 4D hyperspace, the topological objects defined by the above solutions of ${\bf S}({\bf R})=0$, except for Eq.~(\ref{eq:4qchiral_line_node}), form closed loops, and
the intersection of the loops with the hyperplane at $W=\frac{\tilde{\varphi}}{4}$ gives topological point defects in the original 3D space,
which correspond to the hedgehogs and antihedgehogs discussed below.
A hedgehog and an antihedgehog always appear as a pair for each closed loop.
The Dirac string connecting the hedgehog-antihedgehog pair in the 3D space is derived from the intersection of a 2D membrane in the 4D hyperspace whose edge and surface are defined by ${\bf S}({\bf R})=0$ and ${\bf S}({\bf R})=-\hat{\bf z}$, respectively.
The 2D membrane can be regarded as an extension of the Dirac string to higher dimensions, and hence, we may call it the Dirac plane.
Thus, the hyperspace representation of the $4Q$ state is given by a 4D lattice composed of such closed loops, and the $4Q$ spin texture
with the phase degree of freedom is defined as a 3D intersection of the 4D loop lattice.
Since it is difficult to visualize the 4D hyperspace, we present the topological objects in the original 3D space which are derived from the hyperspace representation above.
Figure~\ref{fig:4qchiral_hedgehogs} shows the systematic change of the topological defects in the 3D space while changing $m$ in Eq.~(\ref{eq:4qchiral_ansatz}) with $\tilde{\varphi}=\frac{\pi}{3}$ and $\pi$.
Following the procedure in Sec.~\ref{sec:3.2.2}, we compute the monopole charge for the topological defects, $Q_{\rm m}$ in Eq.~(\ref{eq:monopole_charge}), and the vorticity of the Dirac strings which connect the hedgehogs and antihedgehogs, $\zeta$ in Eq.~(\ref{eq:vorticity}), by replacing ${\bf R}$ with ${\bf r}$.
In order to distinguish the different topological phases, we also compute the total number of the topological defects,
the hedgehogs and antihedgehogs, within the cube
shown in Fig.~\ref{fig:4q_setup}(b), denoted by $N_{\rm m}$; the total number per MUC is given by $\frac{N_{\rm m}}{4}$.
First, we discuss the case of $\tilde{\varphi}=\frac{\pi}{3}$ shown in Figs.~\ref{fig:4qchiral_hedgehogs}(a),
\ref{fig:4qchiral_hedgehogs}(b), and \ref{fig:4qchiral_hedgehogs}(c).
For $m=0$, there are 16 topological defects in total and half of them are hedgehogs with $Q_{\rm m}=+1$ and the others are antihedgehogs with $Q_{\rm m}=-1$, as shown in Fig.~\ref{fig:4qchiral_hedgehogs}(a).
The hedgehogs and antihedgehogs derived from Eq.~(\ref{eq:4qchiral_sol1}) are connected by the Dirac strings with $\zeta=-1$,
while the other topological defects from Eq.~(\ref{eq:4qchiral_sol2}) are connected by the Dirac strings with $\zeta=+1$.
All the Dirac strings run along the $z$ axis and have the same length of $\frac{L}{2}$.
When introducing $m$, the hedgehogs and antihedgehogs move toward their counterparts along the Dirac strings,
as exemplified in Fig.~\ref{fig:4qchiral_hedgehogs}(b) for $m=0.3$.
The Dirac strings with $\zeta=-1$ become shorter than those with $\zeta=+1$: The lengths change as
$L\left(\frac{1}{2}-\frac{1}{\pi}r_2^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)\right)$ and
$\frac{L}{\pi}r_1^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)$ for $\zeta=+1$ and $-1$, respectively.
This differentiation gives rise to a net emergent magnetic field, as will be discussed in Sec.~\ref{sec:4.2.2}.
By further increasing $m$, the hedgehogs and antihedgehogs connected by the Dirac strings with $\zeta=-1$ disappear
with pair annihilation at $m=\frac{4}{\sqrt{6}}\sin\frac{\tilde{\varphi}}{4}$; namely $N_{\rm m}$ is reduced to 8, which defines a
topological transition between the phases with different $N_{\rm m}$.
The remaining hedgehogs and antihedgehogs move toward each other while further increasing $m$,
as shown in Fig.~\ref{fig:4qchiral_hedgehogs}(c).
They also vanish with pair annihilation at $m=\frac{4}{\sqrt{6}}\cos\frac{\tilde{\varphi}}{4}$, which defines the other topological transition into a topologically trivial state with $N_{\rm m}=0$.
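The two annihilation fields quoted above can be evaluated numerically as a quick sanity check (a sketch under the same conventions as the text, not part of the derivation); the values of $m$ used in Figs.~\ref{fig:4qchiral_hedgehogs}(b) and \ref{fig:4qchiral_hedgehogs}(c) indeed bracket the first transition:

```python
import math

def m_ann_minus(phi):
    # Field at which the zeta = -1 pairs annihilate
    return 4.0 / math.sqrt(6.0) * math.sin(phi / 4.0)

def m_ann_plus(phi):
    # Field at which the remaining zeta = +1 pairs annihilate
    return 4.0 / math.sqrt(6.0) * math.cos(phi / 4.0)

phi = math.pi / 3.0
assert m_ann_minus(phi) < m_ann_plus(phi)   # two distinct transitions
assert 0.3 < m_ann_minus(phi) < 0.9         # panels (b) and (c) bracket it
assert m_ann_plus(phi) > 0.9                # panel (c) still has N_m = 8
```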
Next, we discuss the case of $\tilde{\varphi}=\pi$ shown in Figs.~\ref{fig:4qchiral_hedgehogs}(d),
\ref{fig:4qchiral_hedgehogs}(e), and \ref{fig:4qchiral_hedgehogs}(f).
When $m=0$, as shown in Fig.~\ref{fig:4qchiral_hedgehogs}(d), the system has eight pairs of the hedgehogs and antihedgehogs, similar to the case of $\tilde{\varphi}=\frac{\pi}{3}$.
However, while the positions of the topological defects as well as their total number are the same as those in Fig.~\ref{fig:4qchiral_hedgehogs}(a), half of the hedgehog and antihedgehog pairs are exchanged;
all the Dirac strings have the hedgehogs at their lower edges in Fig.~\ref{fig:4qchiral_hedgehogs}(d), whereas only half of them do in Fig.~\ref{fig:4qchiral_hedgehogs}(a).
Moreover, we find additional Dirac strings running on the planes perpendicular to the $z$ axis,
which are denoted by the white lines in the figure.
Note that the vorticity in Eq.~(\ref{eq:vorticity}) is ill-defined for these horizontal Dirac strings.
As shown in Fig.~\ref{fig:4qchiral_hedgehogs}(d), they intersect with
the Dirac strings running along the $z$ axis, whose vorticities $\zeta$ change their signs at the crossing points.
By introducing $m$, the hedgehogs and antihedgehogs move toward each other along the vertical Dirac strings as in the case of $\tilde{\varphi}=\frac{\pi}{3}$, while the horizontal Dirac strings are intact.
When $m$ exceeds $\frac23$, however, $N_{\rm m}$ increases from 16 to 48, as depicted in Fig.~\ref{fig:4qchiral_hedgehogs}(e).
This is caused by a peculiar topological transition with the increase of $N_{\rm m}$ discussed in detail below.
The additional defects are obtained from the numerical solutions with Eqs.~(\ref{eq:4qchiral_sol3_1}), (\ref{eq:4qchiral_sol3_2}), and (\ref{eq:4qchiral_sol3_3}) for $m>\frac23$, which
appear in pairs on the horizontal Dirac strings; the horizontal Dirac strings are cut into pieces, both ends of which form hedgehogs or antihedgehogs.
In other words, a cut results in pair creation of the hedgehog and antihedgehog.
In this region, three hedgehogs and three antihedgehogs (a pair of a hedgehog and an antihedgehog connected by the vertical Dirac string, and a hedgehog pair and an antihedgehog pair on the horizontal Dirac strings) form a cluster like a twisted two-barred cross.
While further increasing $m$, the hedgehogs and antihedgehogs move along the Dirac strings, and
$N_{\rm m}$ decreases from 48 to 16 at $m=\sqrt{\frac{2}{3}}$.
Here, two hedgehogs and one antihedgehog (or one hedgehog and two antihedgehogs) collide with each other at the same time in each cluster,
leaving one (anti)hedgehog (see below).
After this topological transition, all the Dirac strings have the hedgehogs at their upper edges, and hence, the vorticities are $\zeta=+1$,
as shown in Fig.~\ref{fig:4qchiral_hedgehogs}(f).
Finally, the remaining hedgehogs and antihedgehogs move toward their counterparts along the Dirac strings, and cause pair annihilation at
$m=\frac{2}{\sqrt{3}}$, by which the system enters into a topologically trivial phase with $N_{\rm m}=0$.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{4qscr_sign_change.pdf}
\caption{
\label{fig:4qchiral_sign_change}
Enlarged figures for the systematic evolution of the hedgehogs and antihedgehogs while changing $m$ with $\tilde{\varphi}=\pi$:
(a) $m=0$, (b) $m=0.7$, (c) $m=0.8$, and (d) $m=0.9$.
The hedgehogs and antihedgehogs enclosed with the black dotted lines in (b) are generated by pair creation on the horizontal Dirac strings.
On the other hand, the three topological defects enclosed with the black dotted lines in (c) collide
with each other simultaneously and cause a fusion into a hedgehog or an antihedgehog in (d).
Only a part of the topological defects and the Dirac strings is depicted for better visibility.
The notations are the same as those in Fig.~\ref{fig:4qchiral_hedgehogs}.
}
\end{figure}
The evolution of the topological objects for the case of $\tilde{\varphi}=\pi$ includes at least two striking features.
One is the increase of the total number of the hedgehogs and antihedgehogs while increasing $m$.
This is highly nontrivial since usually the hedgehogs and antihedgehogs move toward each other along the Dirac string and cause pair annihilation, and hence, their number does not increase while increasing $m$~\cite{Binz2006-1, Park2011, Zhang2016, Kanazawa2016, Shimizu2021moire}.
It is worth noting that the increase is caused by pair creation on the horizontal Dirac strings, whose vorticities are ill-defined.
The process is detailed in Figs.~\ref{fig:4qchiral_sign_change}(a) and \ref{fig:4qchiral_sign_change}(b).
The other striking feature is the simultaneous fusion of three defects,
as depicted in Figs.~\ref{fig:4qchiral_sign_change}(c) and \ref{fig:4qchiral_sign_change}(d).
In this process, two hedgehogs (antihedgehogs) on the horizontal Dirac string and one antihedgehog (hedgehog) on the vertical Dirac string move toward the crossing point of the vertical and horizontal Dirac strings, and cause the fusion into a single hedgehog (antihedgehog).
Note that the total monopole charge is conserved through the fusion.
Hence, both striking features originate from the peculiar horizontal Dirac strings.
Thus far, we have shown the results only for $\tilde{\varphi}=\pi/3$ and $\pi$, but we note that the other cases qualitatively fall into one of these two behaviors.
Specifically, the cases for $\frac{2\pi}{3}<\tilde{\varphi}<\frac{4\pi}{3}$ belong to the latter, while the others to the former.
In the latter class, we find the additional numerical solutions as described above, which lead to the peculiar pair creation and fusion of the topological defects.
We note that the fusion occurs simultaneously for all the created pairs when $\tilde{\varphi}=\pi$, while it takes place successively, half by half, for the other values in
$\frac{2\pi}{3}<\tilde{\varphi}<\frac{4\pi}{3}$, as discussed below.
\subsubsection{Topological phase diagram \label{sec:4.2.2}}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{4qscr_phasediagram.pdf}
\caption{
\label{fig:4qch_pd}
Topological phase diagram for the screw $4Q$ state on the plane of $m$ and $\tilde{\varphi}$, determined by the number of
hedgehogs and antihedgehogs within the cube, $N_{\rm m}$ [see Fig.~\ref{fig:4q_setup}(b)]. The contour plot indicates the emergent magnetic field
$-\bar{b}_z$; the white lines denote the contours drawn every $0.1$, and the black dashed lines denote the boundary of the region with $\bar{b}_z=0$.
The gray solid lines denote pair annihilation of hedgehogs and antihedgehogs.
The yellow line denotes pair creation of hedgehogs and antihedgehogs, and the orange lines denote fusion of three topological defects, both while increasing $m$.
}
\end{figure}
\begin{figure*}[tb]
\centering
\includegraphics[width=2.0\columnwidth]{4qscr_spin.pdf}
\caption{
\label{fig:4qch_spin}
Real-space spin configurations on the isosurfaces with $S_z({\bf r})=-0.9$ within the $L^3$ cube for the screw 4$Q$ state in Eq.~(\ref{eq:4qchiral_ansatz}) for different topological phases in Fig.~\ref{fig:4qch_pd}:
(a) $m=0$ and $\tilde{\varphi}=\frac{\pi}{3}$ ($N_{\rm m}=16$), (b) $m=0$ and $\tilde{\varphi}=\pi$ ($N_{\rm m}=16$), (c) $m=0.4$ and $\tilde{\varphi}=0$ ($N_{\rm m}=8$), (d) $m=0.7$ and $\tilde{\varphi}=\frac{5\pi}{6}$ ($N_{\rm m}=32$), (e) $m=0.7$ and $\tilde{\varphi}=\pi$ ($N_{\rm m}=48$), and (f) $m=0.9$ and $\tilde{\varphi}=\pi$ ($N_{\rm m}=16$).
The spin configurations are also shown on the bottom plane of the cube.
The color of the arrows and the isosurfaces denote the $xyz$ and $xy$ components of ${\bf S}({\bf r})$, respectively; see the inset in (a).
In each figure, the right panel shows the top view.
}
\end{figure*}
Figure~\ref{fig:4qch_pd} summarizes the topological phase diagram on the plane of $m$ and $\tilde{\varphi}$
for the screw $4Q$ state in Eq.~(\ref{eq:4qchiral_ansatz}), determined by $N_{\rm m}$.
The result is periodic in the $\tilde{\varphi}$ direction with a period of $2\pi$ and symmetric with respect to $\tilde{\varphi}=\pi$.
We also plot the emergent magnetic field $\bar{b}_z$, which is defined as
\begin{eqnarray}
\bar{b}_z =
\frac{1}{4\pi L}\int d\mathcal{V} b_z({\bf r}),
\label{eq:bbar}
\end{eqnarray}
where the volume integration is taken within the $L^3$ cube in Fig.~\ref{fig:4q_setup}(b).
Following the arguments in Refs.~\cite{Park2011, Zhang2016, Kanazawa2016, Shimizu2021moire},
$\bar{b}_z$ is rewritten into
\begin{eqnarray}
\bar{b}_z = -\frac{1}{L}\sum_{k}\left( l_k^{+} - l_{k}^{-} \right),
\label{eq:bbar2}
\end{eqnarray}
where $l_{k}^{+}$ and $l_{k}^{-}$ denote the length of the $k$th Dirac strings with $\zeta=+1$ and $\zeta=-1$
projected onto the $z$ axis, respectively, and the sum is taken for all the Dirac strings involved in the cube.
Note that the values of $-\bar{b}_z$ are plotted as the contours in Fig.~\ref{fig:4qch_pd}.
The phase diagram for $m<0$ is obtained in the same form but with the sign of $-\bar{b}_z$ reversed, since the spin texture with $(\varphi_{\eta}, m)$ is obtained by the time-reversal operation on that with $(\varphi_{\eta}+\pi, -m)$.
In the phase diagram, we find the topological phases with $N_{\rm m}=8$, $16$, $32$, and $48$,
in addition to the trivial phase with $N_{\rm m}=0$ in the large $m$ region.
When $0 < \tilde{\varphi} < \frac{2\pi}{3}$ or $\frac{4\pi}{3} < \tilde{\varphi} < 2\pi$, the phases with $N_{\rm m}=8$ and 16 appear.
In the phase with $N_{\rm m}=16$ for small $m$, 8 hedgehogs and 8 antihedgehogs are connected by the Dirac strings with $\zeta=+1$ and $-1$, and the Dirac strings with $\zeta=-1$ are shorter than those with $\zeta=+1$, as exemplified in Fig.~\ref{fig:4qchiral_hedgehogs}(b).
While increasing $m$, the length difference between the long and short Dirac strings increases, leading to the increase of $-\bar{b}_z$.
Specifically, the value of $-\bar{b}_z$ is given by
\begin{equation}
-\bar{b}_z = \pm2\left[ 1-\frac{2}{\pi}\left(
r_1^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right) + r_2^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)
\right) \right],
\end{equation}
where the upper (lower) sign is for $0 < \tilde{\varphi} < \frac{2\pi}{3}$ ($\frac{2\pi}{3} < \tilde{\varphi} < 2\pi$).
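As a consistency check of Eq.~(\ref{eq:bbar2}) against this closed form, the following numerical sketch (not part of the manuscript; it assumes four Dirac strings of each vorticity per $L^3$ cube, with the string lengths quoted in Sec.~\ref{sec:4.2.1}) compares the string-length sum with the expression above for the upper sign:

```python
import math

def r1_scr(m, W):
    return math.acos(math.sqrt(6.0) * m / (4.0 * math.sin(W)))

def r2_scr(m, W):
    return math.asin(math.sqrt(6.0) * m / (4.0 * math.cos(W)))

def minus_bbar_closed(m, phi):
    # Closed form for the N_m = 16 phase at small m
    # (upper sign, 0 < phi_tilde < 2*pi/3)
    W = phi / 4.0
    return 2.0 * (1.0 - (2.0 / math.pi) * (r1_scr(m, W) + r2_scr(m, W)))

def minus_bbar_strings(m, phi, L=1.0):
    # Direct evaluation of -(bbar_z) = (1/L) sum_k (l_k^+ - l_k^-):
    # four zeta = -1 strings of length (L/pi) r1 and four zeta = +1
    # strings of length L(1/2 - r2/pi) per L^3 cube (assumed counts)
    W = phi / 4.0
    l_minus = (L / math.pi) * r1_scr(m, W)
    l_plus = L * (0.5 - r2_scr(m, W) / math.pi)
    return 4.0 * (l_plus - l_minus) / L

m, phi = 0.3, math.pi / 3.0
assert abs(minus_bbar_closed(m, phi) - minus_bbar_strings(m, phi)) < 1e-12
assert 0.0 < minus_bbar_closed(m, phi) < 2.0
```

The two expressions agree identically, since the closed form is obtained by summing the eight projected string lengths.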
At $m=\frac{4}{\sqrt{6}}\sin\frac{\tilde{\varphi}}{4}$ for $0<\tilde{\varphi}<\frac{2\pi}{3}$ and $m=\frac{4}{\sqrt{6}}\cos\frac{\tilde{\varphi}}{4}$ for
$\frac{4\pi}{3}<\tilde{\varphi}<2\pi$, half of the topological defects connected by the Dirac strings with $\zeta=-1$ disappear with pair annihilation, leaving the defects connected by the
Dirac strings with $\zeta=+1$, as exemplified in Fig.~\ref{fig:4qchiral_hedgehogs}(c).
The pair annihilation occurs at smaller $m$ as $\tilde{\varphi}$ approaches $0$ or $2\pi$, and $-\bar{b}_z$ is enhanced toward $-\bar{b}_z\to 2$ as $m\to 0$ at $\tilde{\varphi}=0$ or $2\pi$
(in this limit, there are only four Dirac strings with $\zeta=+1$, each of length $\frac{L}{2}$).
Meanwhile, in the phase with $N_{\rm m}=8$, the increase of $m$ reduces the length of the Dirac strings with $\zeta=+1$, leading to the decrease of $-\bar{b}_z$ as
\begin{equation}
-\bar{b}_z =
\begin{cases}
2\left[ 1-\frac{2}{\pi}r_2^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right) \right]
&
{\rm for}
\ \ 0 < \tilde{\varphi} <
\frac{2\pi}{3} \\
\frac{4}{\pi}r_1^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)
&
{\rm for}
\ \ \frac{4\pi}{3} < \tilde{\varphi} < 2\pi.
\end{cases}
\end{equation}
Finally, $-\bar{b}_z$ vanishes by the pair annihilation of the remaining hedgehogs and antihedgehogs at
$m=\frac{4}{\sqrt{6}}\cos\frac{\tilde{\varphi}}{4}$ for $0<\tilde{\varphi}<\frac{2\pi}{3}$ and $m=\frac{4}{\sqrt{6}}\sin\frac{\tilde{\varphi}}{4}$ for
$\frac{4\pi}{3}<\tilde{\varphi}<2\pi$, and the system enters into the topologically trivial phase with $N_{\rm m}=0$ for larger $m$.
On the other hand, when $\frac{2\pi}{3} < \tilde{\varphi} < \frac{4\pi}{3}$, the topological phases with $N_{\rm m}=32$ and 48 appear additionally
in the intermediate region of $m$, as shown in Fig.~\ref{fig:4qch_pd}.
In this range of $\tilde{\varphi}$, while increasing $m$ from the phase with $N_{\rm m}=16$, pair creation of the hedgehogs and antihedgehogs
occurs on the phase boundary to the phase with $N_{\rm m}=48$, which is denoted by the yellow line in the figure.
No anomaly is found in $-\bar{b}_z$ at the topological transition since the pair-created topological objects move along the horizontal Dirac strings and this evolution does not contribute to $l_{k}^{\pm}$ in Eq.~(\ref{eq:bbar2}); see Fig.~\ref{fig:4qchiral_hedgehogs}(e).
In these phases, however, the increase of $m$ reduces the length of the Dirac strings with $\zeta=-1$, leading to the increase of
$-\bar{b}_z$ as
\begin{equation}
-\bar{b}_z = 2\left[ 1 - \frac{2}{\pi}\left(
r_1^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right) - r_2^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)
\right) \right].
\end{equation}
While increasing $m$, half of the pair-created topological defects disappear through the fusion, which causes the topological transition from $N_{\rm m}=48$ to $32$ (orange lines in the figure).
In the $N_{\rm m}=32$ region, the value of $-\bar{b}_z$ is given by
\begin{eqnarray}
-\bar{b}_z=&&2\left[ 1 \mp \frac{2}{\pi}\left(
r_1^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right) + r_2^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)
\right) \right. \notag \\
&& \qquad \left. \pm \frac{4}{\pi}\arccos\left(\sqrt{2\cos^2\frac{\tilde{\varphi}}{4}-\frac{1}{2}}\right) \right],
\end{eqnarray}
where the upper (lower) signs are for $\frac{2\pi}{3} < \tilde{\varphi} \leq \pi$ ($\pi \leq \tilde{\varphi} < \frac{4\pi}{3}$).
With a further increase of $m$, the remaining half of the pair-created topological defects undergo the fusion, and $N_{\rm m}$ is reduced from $32$ to $16$.
In the $N_{\rm m}=16$ phase, all of the topological defects are connected by the Dirac strings with
$\zeta=+1$, as exemplified in Fig.~\ref{fig:4qchiral_hedgehogs}(f), where $-\bar{b}_z$ is given as
\begin{equation}
-\bar{b}_z=2\left[ 1 + \frac{2}{\pi}\left(
r_1^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right) - r_2^{\rm scr}\left(m, \frac{\tilde{\varphi}}{4}\right)
\right) \right].
\end{equation}
Note that $-\bar{b}_z$ takes the maximum value of $-\bar{b}_z=2$ at $(m, \tilde{\varphi}) = \left(\sqrt{\frac{2}{3}}, \pi \right)$,
where all the pair-created defects cause the fusion simultaneously.
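The maximum can be confirmed directly from the closed form above (a numerical sketch, not part of the derivation; $r_1^{\rm scr}$ and $r_2^{\rm scr}$ are as defined in Sec.~\ref{sec:4.2.1}):

```python
import math

def r1_scr(m, W):
    return math.acos(math.sqrt(6.0) * m / (4.0 * math.sin(W)))

def r2_scr(m, W):
    return math.asin(math.sqrt(6.0) * m / (4.0 * math.cos(W)))

def minus_bbar_large_m(m, phi):
    # N_m = 16 phase at larger m, where all strings carry zeta = +1
    W = phi / 4.0
    return 2.0 * (1.0 + (2.0 / math.pi) * (r1_scr(m, W) - r2_scr(m, W)))

m_star, phi = math.sqrt(2.0 / 3.0), math.pi
# At (m, phi_tilde) = (sqrt(2/3), pi): r1 = r2 = pi/4, so -bbar_z attains 2
assert abs(r1_scr(m_star, phi / 4.0) - math.pi / 4.0) < 1e-12
assert abs(r2_scr(m_star, phi / 4.0) - math.pi / 4.0) < 1e-12
assert abs(minus_bbar_large_m(m_star, phi) - 2.0) < 1e-12
assert minus_bbar_large_m(0.9, phi) < 2.0   # away from the maximum
```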
Figure~\ref{fig:4qch_spin} showcases typical spin configurations of the screw $4Q$ states for all the topological phases in Fig.~\ref{fig:4qch_pd}, together with the hedgehogs and antihedgehogs, and the Dirac strings.
The spin configurations are shown on the isosurfaces with $S_z({\bf r})=-0.9$ as well as the bottom plane of the $L^3$ cube.
The isosurfaces, by definition, extend from the hedgehogs and antihedgehogs, and contain the Dirac strings inside.
Figure~\ref{fig:4qch_spin}(a) is for the $N_{\rm m}=16$ state at $m=0$ and $\tilde{\varphi}=\frac{\pi}{3}$.
From the spin configurations on the isosurfaces, it is observed that the helicity of the spin texture gradually increases or decreases with the $z$ coordinate,
which leads to a $\pm \pi$ difference between the top and bottom of each Dirac string.
Figure~\ref{fig:4qch_spin}(b) is for the different $N_{\rm m}=16$ state at $m=0$ and $\tilde{\varphi}=\pi$.
The result demonstrates that the phase shift drastically changes the spin configurations as well as the real-space distributions of the topological objects,
while $N_{\rm m}$ is the same as in Fig.~\ref{fig:4qch_spin}(a).
In this state, the isosurfaces have complicated 3D networks due to the existence of the Dirac strings running on the horizontal planes.
Figure~\ref{fig:4qch_spin}(c) is for the $N_{\rm m}=8$ phase at $m=0.4$ and $\tilde{\varphi}=0$.
In this state, the isosurfaces become much simpler.
Figures~\ref{fig:4qch_spin}(d) and \ref{fig:4qch_spin}(e) are for the $N_{\rm m}=32$ state at $m=0.7$ and $\tilde{\varphi}=\frac{5\pi}{6}$ and $N_{\rm m}=48$ states at $m=0.7$ and $\tilde{\varphi}=\pi$, respectively.
In both states, the spin configuration on a horizontal $xy$ plane comprises a SkL with Bloch-type skyrmions, as exemplified on the bottom plane of the cube in the figure.
We note that the skyrmions are deformed and elongated along the direction of the horizontal Dirac strings.
Figure~\ref{fig:4qch_spin}(f) is for the $N_{\rm m}=16$ state at $m=0.9$ and $\tilde{\varphi}=\pi$.
In this state, while $N_{\rm m}$ takes the same value as in Figs.~\ref{fig:4qch_spin}(a) and \ref{fig:4qch_spin}(b), the spin configuration is completely different from those.
In all the cases, the spin configurations have twofold rotational symmetry about each vertical Dirac string.
We note that the spin configurations with $\tilde{\varphi}=0$ have fourfold rotational symmetry about each Dirac string, as exemplified in Fig.~\ref{fig:4qch_spin}(c), while those with $\tilde{\varphi}=\pi$ are symmetric for the screw operation $\{C_{4z}|-\tilde{\bf a}_{\eta}\}$, as exemplified in Figs.~\ref{fig:4qch_spin}(b), \ref{fig:4qch_spin}(e), and \ref{fig:4qch_spin}(f).
These results are consistent with the symmetry arguments in Table~\ref{tab:4q_scr_sym}.
Let us briefly discuss the present results in comparison with the previous studies. A $4Q$ HL was experimentally discovered in MnSi$_{1-x}$Ge$_x$~\cite{Fujishiro2018}. In that study, the spin texture at zero magnetic field was interpreted as the screw $4Q$ state with four hedgehogs and four antihedgehogs ($N_{\rm m}=8$), which corresponds to $\tilde{\varphi}=0$ in our results, although the value of $\tilde{\varphi}$ was not examined experimentally. Meanwhile, the topological properties and the emergent magnetic field of the screw $4Q$ state were theoretically studied at $\tilde{\varphi}=\pi$ while changing the external magnetic field~\cite{Park2011}, but that study was limited to the $N_{\rm m}=16$ state, as the solutions in Eqs.~(\ref{eq:4qchiral_sol3_1})--(\ref{eq:4qchiral_sol3_3}) were not included. Recently, some of the authors studied the evolution of the screw $4Q$ state on a 3D simple cubic lattice by variational calculations and simulated annealing, and found various types of topological transitions depending on the direction of the magnetic field~\cite{Okumura2020}. We will discuss one of them, paying attention to the phase shift, in Sec.~\ref{sec:5.2}.
\subsection{Sinusoidal $4Q$ state \label{sec:4.3}}
Next, we analyze the phase shift in the sinusoidal $4Q$ state given by
\begin{eqnarray}
{\bf S}({\bf r})
\propto
\left(
\begin{array}{c}
\cos\mathcal{Q}_1 - \cos\mathcal{Q}_2 - \cos\mathcal{Q}_3 + \cos\mathcal{Q}_4 \\
\cos\mathcal{Q}_1 - \cos\mathcal{Q}_2 + \cos\mathcal{Q}_3 - \cos\mathcal{Q}_4 \\
\cos\mathcal{Q}_1 + \cos\mathcal{Q}_2 - \cos\mathcal{Q}_3 - \cos\mathcal{Q}_4 + 2\sqrt{3}m
\end{array}
\right),\notag \\
\label{eq:4qnc_ansatz}
\end{eqnarray}
which is obtained from Eq.~(\ref{eq:general_ansatz}) by taking $N_Q=4$ and
\begin{eqnarray}
&&\psi_{\eta}^{\rm c}=\frac{1}{2}
,\ \ \psi_{\eta}^{\rm s}=0, \ \
{\bf e}_{\eta}^1 = \frac{{\bf q}_{\eta}}{q}.
\end{eqnarray}
Following the arguments in Sec.~\ref{sec:4.2}, we summarize the symmetry operations for the sinusoidal $4Q$ state in Table~\ref{tab:4q_sin_sym}.
We find that the spin texture is unchanged under $C_{4z}$, $\mathcal{I}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=0$;
under $\{C_{4z}|-\tilde{\bf a}_{\eta}\}$, $\mathcal{I}$, $\mathcal{T}C_{2x}$, and their combinations for $\tilde{\varphi}=\pi$;
and under $C_{2z}$, $S_{4z}$, $\mathcal{T}C_{2x}$, and their combinations otherwise, where $S_{4z}$ represents the fourfold improper rotation operation about the $z$ axis.
\begin{table}
\caption{\label{tab:4q_sin_sym}
Similar table to Tables~\ref{tab:3q_scr_sym}--\ref{tab:4q_scr_sym} for the sinusoidal $4Q$ state in Eq.~(\ref{eq:4qnc_ansatz}).
}
\begin{ruledtabular}
\begin{tabular}{c|ccc}
operation & sum of phases & translation & magnetization\\
\hline
$C_{4z}$ & $\tilde{\varphi} \rightarrow 2\pi - \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}$ & $m \rightarrow m$ \\
$C_{2x}$ & $\tilde{\varphi} \rightarrow \tilde{\varphi}$ & $0$ & $m \rightarrow -m$ \\
$\mathcal{I}$ & $\tilde{\varphi} \rightarrow 2\pi - \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow m$ \\
$\mathcal{T}$ & $\tilde{\varphi} \rightarrow \tilde{\varphi}$ & $\tilde{\bf a}_{\eta}+\tilde{\bf a}_{\eta'}$ & $m \rightarrow -m$
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Hedgehogs in hyperspace \label{sec:4.3.1}}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\textwidth]{4qsin_hedgehogs.pdf}
\caption{
\label{fig:4qnc_hedgehogs}
Real-space distribution of the hedgehogs and antihedgehogs, and the Dirac strings
within the $L^3$ cube [see Fig.~\ref{fig:4q_setup}(b)] for the sinusoidal $4Q$ state in Eq.~(\ref{eq:4qnc_ansatz})
while changing $m$ and $\tilde{\varphi}$:
(a) $m=0$, (b) $m=0.1$, and (c) $m=0.7$ at $\tilde{\varphi}=\pi/3$, and
(d) $m=0$, (e) $m=0.5$, and (f) $m=0.7$ at $\tilde{\varphi}=\pi$.
The notations are common to those in Fig.~\ref{fig:4qchiral_hedgehogs}.
}
\end{figure*}
Following the same procedure as in Sec.~\ref{sec:4.2.1}, we compute the positions of the topological defects in the 4D hyperspace
for the spin texture corresponding to Eq.~(\ref{eq:4qnc_ansatz}).
Here, we show the solutions of ${\bf S}({\bf R})=0$ within $0 \leq X,Y < \frac{L}{2}$ and $0 \leq Z < L$; their spatial translations
by ${\bf a}_{\eta}$ in Eq.~(\ref{eq:4Q_a_eta}) also satisfy ${\bf S}({\bf R})=0$.
When $|m| < \frac{2}{\sqrt{3}} \sin W$, we obtain two solutions analytically:
\begin{eqnarray}
(X^*, Y^*, Z^*)=L\left( 0, 0, \frac14 \pm \frac{1}{2\pi}r_1^{\rm sin}(m, W) \right),
\label{eq:4qnc_sol1}
\end{eqnarray}
where
\begin{eqnarray}
&&r_1^{\rm sin}(m,W)=\arccos\left(\frac{\sqrt{3}m}{2\sin W}\right).
\end{eqnarray}
Meanwhile, when $|m| < \frac{2}{\sqrt{3}}\cos W$, we obtain two other analytical solutions:
\begin{eqnarray}
(X^*, Y^*, Z^*)&=&
L\left( \frac14, \frac14, \frac34 + \frac{1}{2\pi}r_2^{\rm sin}(m, W) \right), \notag\\
&&
L\left( \frac14, \frac14, \frac14 - \frac{1}{2\pi}r_2^{\rm sin}(m, W) \right),
\label{eq:4qnc_sol2}
\end{eqnarray}
where
\begin{eqnarray}
&&r_2^{\rm sin}(m,W)=\arcsin\left(\frac{\sqrt{3}m}{2\cos W}\right).
\end{eqnarray}
The solutions in Eqs.~(\ref{eq:4qnc_sol1}) and (\ref{eq:4qnc_sol2}) do not change their $XY$ coordinates
with $m$ and $\tilde{\varphi}$, similarly to Eqs.~(\ref{eq:4qchiral_sol1}) and (\ref{eq:4qchiral_sol2}) for the screw $4Q$ case.
On the other hand, when $-\frac{2}{\sqrt{3}} \cos^2 W < m < \frac{2}{\sqrt{3}}\sin^2 W $, we obtain four solutions:
\begin{eqnarray}
(X^*, Y^*, Z^*)&=&
L\left( \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac12 - \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac12 + \frac{W}{2\pi} \right), \notag\\
&&
L\left( \frac12 - \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac12 + \frac{W}{2\pi} \right), \notag\\
&&
L\left( \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac12 - \frac{W}{2\pi} \right), \notag\\
&&
L\left( \frac12 - \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac12 - \frac{1}{2\pi}r_{3+}^{\rm sin}(m,W), \frac12 - \frac{W}{2\pi} \right), \notag \\
\label{eq:4qnc_sol3}
\end{eqnarray}
where
\begin{eqnarray}
r_{3+}^{\rm sin}(m,W)=\frac{1}{2}\arccos\left( \cos \left(2W \right) + \sqrt{3}m \right).
\end{eqnarray}
Similarly, for $-\frac{2}{\sqrt{3}} \sin^2 W < m < \frac{2}{\sqrt{3}}\cos^2 W $, we obtain four more solutions:
\begin{eqnarray}
(X^*, Y^*, Z^*)&=&
L\left( \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), \frac12 - \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), \frac{W}{2\pi} \right), \notag\\
&&
L\left( \frac12 - \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), \frac{W}{2\pi} \right), \notag\\
&&
L\left( \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), 1 - \frac{W}{2\pi} \right), \notag\\
&&
L\left( \frac12 - \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), \frac12 - \frac{1}{2\pi}r_{3-}^{\rm sin}(m,W), 1 - \frac{W}{2\pi} \right), \notag \\
\label{eq:4qnc_sol4}
\end{eqnarray}
where
\begin{eqnarray}
&&r_{3-}^{\rm sin}(m,W)=\frac{1}{2}\arccos\left( \cos \left( 2W \right) - \sqrt{3}m \right).
\end{eqnarray}
In contrast to Eqs.~(\ref{eq:4qnc_sol1}) and (\ref{eq:4qnc_sol2}), the solutions in Eqs.~(\ref{eq:4qnc_sol3}) and (\ref{eq:4qnc_sol4})
change their $XY$ coordinates with $m$ and $\tilde{\varphi}$, while their $Z$ coordinates remain fixed.
These solutions correspond to those obtained numerically in the screw $4Q$ case.
Figure~\ref{fig:4qnc_hedgehogs} showcases the systematic change of the topological defects in the original 3D space
while changing $m$ in Eq.~(\ref{eq:4qnc_ansatz}) with $\tilde{\varphi}=\frac{\pi}{3}$ and $\pi$.
The monopole charge of the hedgehogs and antihedgehogs, $Q_{\rm m}$, and the vorticity of the Dirac strings, $\zeta$, are calculated
by the same procedure as in Sec.~\ref{sec:4.2.1}.
First, we discuss the case of $\tilde{\varphi}=\frac{\pi}{3}$ shown in Figs.~\ref{fig:4qnc_hedgehogs}(a),
\ref{fig:4qnc_hedgehogs}(b), and \ref{fig:4qnc_hedgehogs}(c).
When $m=0$, there are 48 defects in total; half of them are hedgehogs with $Q_{\rm m}=+1$ and the other half are antihedgehogs with $Q_{\rm m}=-1$,
as shown in Fig.~\ref{fig:4qnc_hedgehogs}(a).
These topological defects are derived from Eqs.~(\ref{eq:4qnc_sol1}), (\ref{eq:4qnc_sol2}), (\ref{eq:4qnc_sol3}), and (\ref{eq:4qnc_sol4}).
There are two types of Dirac strings connecting the hedgehogs and antihedgehogs:
vertical strings connecting hedgehog-antihedgehog pairs, and horizontal ones connecting hedgehog-hedgehog or antihedgehog-antihedgehog pairs.
As in the screw $4Q$ case, the vorticity is ill defined for the horizontal strings; they intersect the vertical ones, whose vorticities change sign at the crossing points, so that
three hedgehogs and three antihedgehogs form a cluster shaped like a twisted two-barred cross.
The horizontal strings here are straight, however, in contrast to the curved ones in the screw $4Q$ case (see Fig.~\ref{fig:4qchiral_hedgehogs}).
Moreover, they are found for all $\tilde{\varphi}$, while they appear only for $\frac{2\pi}{3}<\tilde{\varphi}<\frac{4\pi}{3}$ in the screw $4Q$ case.
All the vertical Dirac strings have a length of $\frac{L}{2}$ at $m=0$, while the horizontal ones come in a long and a short variety, resulting in two types of clusters:
the hedgehogs and antihedgehogs in the four clusters with longer horizontal strings are given by Eqs.~(\ref{eq:4qnc_sol2}) and (\ref{eq:4qnc_sol4}),
while those in the remaining four with shorter ones are given by Eqs.~(\ref{eq:4qnc_sol1}) and (\ref{eq:4qnc_sol3}).
By introducing $m$, the topological defects move along the Dirac strings, and two hedgehogs and one antihedgehog
(or one hedgehog and two antihedgehogs) collide with each other, leaving one (anti)hedgehog, at
$m=\frac{2}{\sqrt{3}}\sin^2\frac{\tilde{\varphi}}{4}$ in the four clusters with shorter horizontal Dirac strings.
The fusion process is similar to those in Fig.~\ref{fig:4qchiral_sign_change}.
At the topological transition by the fusion, $N_{\rm m}$ decreases from $48$ to $32$, and for larger $m$, the vertical Dirac strings left by the fusion, which have $\zeta=-1$, coexist with the clusters remaining, as shown in Fig.~\ref{fig:4qnc_hedgehogs}(b).
By further increasing $m$, the hedgehogs and antihedgehogs connected by the vertical Dirac strings cause the pair annihilation
at $m=\frac{2}{\sqrt{3}}\sin\frac{\tilde{\varphi}}{4}$, leaving four clusters derived from Eqs.~(\ref{eq:4qnc_sol2}) and (\ref{eq:4qnc_sol4}), as shown in Fig.~\ref{fig:4qnc_hedgehogs}(c).
At the topological transition by the pair annihilation, $N_{\rm m}$ is further reduced from $32$ to $24$.
When $m=\frac{2}{\sqrt{3}}\cos^2\frac{\tilde{\varphi}}{4}$, the fusion takes place in the remaining four clusters, which reduces $N_{\rm m}$
from 24 to 8, and leaves four vertical Dirac strings with $\zeta=-1$.
Finally, the hedgehogs and antihedgehogs on the Dirac strings
pair annihilate at $m=\frac{2}{\sqrt{3}}\cos\frac{\tilde{\varphi}}{4}$, where the system becomes topologically trivial with $N_{\rm m}=0$.
Next, we discuss the case of $\tilde{\varphi}=\pi$ shown in Figs.~\ref{fig:4qnc_hedgehogs}(d),
\ref{fig:4qnc_hedgehogs}(e), and \ref{fig:4qnc_hedgehogs}(f).
When $m=0$, $N_{\rm m}$ is 48 as in the case of $\tilde{\varphi}=\frac{\pi}{3}$; the positions of the hedgehogs and antihedgehogs
connected by the vertical Dirac strings are the same, but those connected by the horizontal Dirac strings are different, as shown in Fig.~\ref{fig:4qnc_hedgehogs}(d).
By introducing $m$, the topological defects move along the Dirac strings as shown in Fig.~\ref{fig:4qnc_hedgehogs}(e), and
the fusion occurs at $m=\frac{1}{\sqrt{3}}$ simultaneously in all the clusters.
This leaves eight pairs of the hedgehogs and antihedgehogs connected by the vertical Dirac strings, as shown in
Fig.~\ref{fig:4qnc_hedgehogs}(f); $N_{\rm m}$ decreases from $48$ to $16$.
In this state, all the Dirac strings have the length of $\frac{L}{\pi}r_1^{\rm sin}\left(m,\frac{\tilde{\varphi}}{4}\right)=L\left(\frac{1}{2}-\frac{1}{\pi}r_2^{\rm sin}\left(m,\frac{\tilde{\varphi}}{4}\right)\right)$ and the vorticity $\zeta=-1$.
By further increasing $m$, the remaining pairs annihilate at $m=\sqrt{\frac{2}{3}}$.
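As a quick consistency check, not part of the original derivation, the transition fields quoted for $\tilde{\varphi}=\pi$ and the Dirac-string length identity in the $N_{\rm m}=16$ state follow directly from $r_1^{\rm sin}$ and $r_2^{\rm sin}$; a minimal numerical sketch (the helper names are ours):

```python
import math

def r1_sin(m, W):
    # r1^sin(m, W) = arccos(sqrt(3) m / (2 sin W))
    return math.acos(math.sqrt(3) * m / (2 * math.sin(W)))

def r2_sin(m, W):
    # r2^sin(m, W) = arcsin(sqrt(3) m / (2 cos W))
    return math.asin(math.sqrt(3) * m / (2 * math.cos(W)))

# Transition fields at phi_tilde = pi (W = phi_tilde/4 = pi/4)
phi = math.pi
m_fusion = 2 / math.sqrt(3) * math.sin(phi / 4) ** 2   # simultaneous fusion
m_pair = 2 / math.sqrt(3) * math.sin(phi / 4)          # final pair annihilation
print(m_fusion)   # 1/sqrt(3) ~ 0.577
print(m_pair)     # sqrt(2/3) ~ 0.816

# Dirac-string length identity in the N_m = 16 state at phi_tilde = pi:
# (L/pi) r1 = L(1/2 - r2/pi), i.e., r1 + r2 = pi/2 (arccos x + arcsin x = pi/2)
W = phi / 4
for m in (0.6, 0.7, 0.8):
    assert abs(r1_sin(m, W) + r2_sin(m, W) - math.pi / 2) < 1e-12
```

At $\tilde{\varphi}=\pi$ one has $\sin W=\cos W$, so $r_1^{\rm sin}$ and $r_2^{\rm sin}$ share the same argument and the identity reduces to $\arccos x + \arcsin x = \frac{\pi}{2}$.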
\subsubsection{Topological phase diagram \label{sec:4.3.2}}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{4qsin_phasediagram.pdf}
\caption{
\label{fig:4qnc_pd}
Topological phase diagram for the sinusoidal $4Q$ state determined by $N_{\rm m}$, with the contour plot of $-\bar{b}_z$, on the plane of $m$ and $\tilde{\varphi}$.
$N_{\rm m}$ is indicated in each phase.
The white lines denote the contours drawn every 0.1, and the black dashed lines denote $\bar{b}_z=0$.
The gray and orange lines are the phase boundaries between different $N_{\rm m}$ phases: The former denotes the pair annihilation of hedgehogs and antihedgehogs, while the latter denotes the fusion of three topological defects.
}
\end{figure}
Performing calculations similar to those in Sec.~\ref{sec:4.2.2} while changing $\tilde{\varphi}$ and $m$, we construct the phase diagram shown in Fig.~\ref{fig:4qnc_pd}.
We also show $-\bar{b}_z$ as a contour plot, as in Sec.~\ref{sec:4.2.2}.
Similar to Fig.~\ref{fig:4qch_pd}, the result is again symmetric with respect to $\tilde{\varphi}=\pi$ and has $2\pi$ periodicity, and the phase diagram for $m<0$ is obtained in the same form with sign inversion of $-\bar{b}_z$.
We find the topological phases with $N_{\rm m}=8$, 16, 24, 32, and 48 in the phase diagram.
There are two interesting features, in comparison with the result for the screw $4Q$ case in Fig.~\ref{fig:4qch_pd}.
One is that the topological phases with large values of $N_{\rm m}$, such as $N_{\rm m}=24$, 32, and 48, occupy large portions of the phase diagram.
In particular, the system has $N_{\rm m}=48$ for all $\tilde{\varphi}$ in the small $m$ limit, and turns into the $N_{\rm m}=32$ state by the fusion on the orange lines in the phase diagram.
Near $\tilde{\varphi}=0$ or $2\pi$, the $N_{\rm m}=24$ regions extend widely in the intermediate $m$ region, and the $N_{\rm m}=32$ states appear in between.
The value of $-\bar{b}_z$ in the phase with $N_{\rm m}=48$ is given by
\begin{equation}
-\bar{b}_z=2\left[ 1+\frac{2}{\pi}\left(
r_1^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right) - r_2^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right)
\right) \right].
\end{equation}
Meanwhile, the values of $-\bar{b}_z$ in the $N_{\rm m}=32$ and $24$ phases are given as
\begin{equation}
-\bar{b}_z = 2\left[ 1 - \frac{2}{\pi}\left(
r_1^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right) + r_2^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right)
\right) - \frac{\tilde{\varphi}}{\pi} \right],
\label{eq:bz_Nm=32}
\end{equation}
and
\begin{equation}
-\bar{b}_z = 2\left[ 1 - \frac{2}{\pi}r_2^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right) - \frac{\tilde{\varphi}}{\pi} \right],
\label{eq:bz_Nm=24}
\end{equation}
respectively, when $0 \leq \tilde{\varphi} \leq \pi$;
$-\bar{b}_z$ for $\pi \leq \tilde{\varphi} \leq 2\pi$ is obtained by the symmetry with respect to $\tilde{\varphi}=\pi$.
These are in stark contrast to the screw $4Q$ case where the topological phases
with $N_{\rm m}=32$ and 48 appear only in the limited region for $\frac{2\pi}{3} < \tilde{\varphi} < \frac{4\pi}{3}$ and $N_{\rm m}=24$ is not found.
The other interesting feature is that, unlike the screw $4Q$ case, the sign of $-\bar{b}_z$ is not limited to positive; it can become
negative as $m$ and $\tilde{\varphi}$ vary in the phases with $N_{\rm m}=32$ and $24$.
The sign changes are caused by the competition between the lengths of the Dirac strings with $\zeta=+1$ and $-1$; see also Eqs.~(\ref{eq:bz_Nm=32}) and (\ref{eq:bz_Nm=24}).
We find that $-\bar{b}_z$ takes the minimum value of $-2$ at $(m,\tilde{\varphi})=\left(\frac{1}{\sqrt{3}}, \pi \right)$, where all the defects connected by the horizontal Dirac strings cause the fusion simultaneously.
This state has the hedgehog-antihedgehog pairs given by the analytical solutions in Eqs.~(\ref{eq:4qnc_sol1}) and (\ref{eq:4qnc_sol2}) connected by the Dirac strings whose vorticities and lengths are all $\zeta=-1$ and $\frac{L}{4}$, respectively.
On the other hand, $-\bar{b}_z$ takes the maximum value of $2$ in the limit of $m\to 0$ at $\tilde{\varphi}=0$ ($2\pi$).
This is understood as follows.
When $\tilde{\varphi}=0$, the system has the hedgehogs and antihedgehogs given by Eqs.~(\ref{eq:4qnc_sol2}) and (\ref{eq:4qnc_sol4}) connected by the Dirac strings with the vorticities $\zeta=1$ and the lengths $L\left(\frac{1}{2} - \frac{1}{\pi}r_2^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right)\right)$.
This leads to
\begin{equation}
-\bar{b}_z=
2\left[
1 - \frac{2}{\pi}r_2^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right)\right].
\label{eq:4qnc_bbar_tvp=0}
\end{equation}
Meanwhile, when $\tilde{\varphi}=2\pi$, the hedgehogs and antihedgehogs are given by
Eqs.~(\ref{eq:4qnc_sol1}) and (\ref{eq:4qnc_sol3}), and $-\bar{b}_z$ is given by
\begin{eqnarray}
-\bar{b}_z=
\frac{4}{\pi}r_1^{\rm sin}\left(m, \frac{\tilde{\varphi}}{4}\right).
\label{eq:4qnc_bbar_tvp=2pi}
\end{eqnarray}
Note that Eqs.~(\ref{eq:4qnc_bbar_tvp=0}) and (\ref{eq:4qnc_bbar_tvp=2pi}) are related to each other by the symmetry with respect to $\tilde{\varphi}=\pi$.
Hence, $-\bar{b}_z$ goes to $2$ in the limit of $m \rightarrow 0$ for both $\tilde{\varphi}=0$ and $2\pi$.
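The extreme values of $-\bar{b}_z$ quoted above can be verified numerically from Eqs.~(\ref{eq:bz_Nm=32}), (\ref{eq:4qnc_bbar_tvp=0}), and (\ref{eq:4qnc_bbar_tvp=2pi}); the sketch below is ours, with the minimum obtained by evaluating Eq.~(\ref{eq:bz_Nm=32}) at the fusion point by continuity:

```python
import math

def r1_sin(m, W):
    return math.acos(math.sqrt(3) * m / (2 * math.sin(W)))

def r2_sin(m, W):
    return math.asin(math.sqrt(3) * m / (2 * math.cos(W)))

def bbar_Nm32(m, phi):
    # Eq. (bz_Nm=32): -bbar_z = 2[1 - (2/pi)(r1 + r2) - phi/pi], for 0 <= phi <= pi
    W = phi / 4
    return 2 * (1 - 2 / math.pi * (r1_sin(m, W) + r2_sin(m, W)) - phi / math.pi)

def bbar_phi0(m):
    # Eq. (4qnc_bbar_tvp=0) at phi_tilde = 0 (W = 0)
    return 2 * (1 - 2 / math.pi * math.asin(math.sqrt(3) * m / 2))

def bbar_phi2pi(m):
    # Eq. (4qnc_bbar_tvp=2pi) at phi_tilde = 2*pi (W = pi/2)
    return 4 / math.pi * math.acos(math.sqrt(3) * m / 2)

# Minimum of -2 at (m, phi_tilde) = (1/sqrt(3), pi)
print(bbar_Nm32(1 / math.sqrt(3), math.pi))   # ~ -2
# Maximum of 2 in the limit m -> 0 at phi_tilde = 0 and 2*pi
print(bbar_phi0(1e-9), bbar_phi2pi(1e-9))     # both ~ 2
```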
\begin{figure*}[tb]
\centering
\includegraphics[width=2.0\columnwidth]{4qsin_spin.pdf}
\caption{
\label{fig:4qnc_spin}
Real-space spin configurations on the isosurfaces with $S_z({\bf r})=-0.9$ for the sinusoidal 4$Q$ state in Eq.~(\ref{eq:4qnc_ansatz}) for different topological phases in Fig.~\ref{fig:4qnc_pd}:
(a) $m=0$ and $\tilde{\varphi}=\pi$ ($N_{\rm m}=48$), (b) $m=0.2$ and $\tilde{\varphi}=\frac{\pi}{3}$ ($N_{\rm m}=32$), (c) $m=0.6$ and $\tilde{\varphi}=\frac{\pi}{3}$ ($N_{\rm m}=24$), (d) $m=0.6$ and $\tilde{\varphi}=\pi$ ($N_{\rm m}=16$), and (e) $m=0.8$ and $\tilde{\varphi}=\frac{5\pi}{6}$ ($N_{\rm m}=8$).
The notations are common to those in Fig.~\ref{fig:4qch_spin}.
}
\end{figure*}
Figure~\ref{fig:4qnc_spin} showcases typical spin configurations of the sinusoidal $4Q$ states for all the topological phases in Fig.~\ref{fig:4qnc_pd}, in a similar manner to Fig.~\ref{fig:4qch_spin}.
Figure~\ref{fig:4qnc_spin}(a) is for the $N_{\rm m}=48$ state at $m=0$ and $\tilde{\varphi}=\pi$.
In this state, all the isosurfaces have the same shape and volume.
Figure~\ref{fig:4qnc_spin}(b) is for the $N_{\rm m}=32$ state at $m=0.2$ and $\tilde{\varphi}=\frac{\pi}{3}$.
The isosurfaces enclosing the hedgehog-antihedgehog pairs connected by the Dirac strings with $\zeta=-1$ are small (hardly visible in the figure) compared to those enclosing the twisted two-barred crosses,
and they disappear by pair annihilation while increasing $m$, as shown in Fig.~\ref{fig:4qnc_spin}(c) for the $N_{\rm m}=24$ state at $m=0.6$ and $\tilde{\varphi}=\frac{\pi}{3}$.
In these states, any horizontal $xy$ plane intersecting the Dirac strings with $\zeta=-1$ gives a SkL with antiskyrmions, as exemplified on the bottom planes of the cubes.
Figure~\ref{fig:4qnc_spin}(d) is for the $N_{\rm m}=16$ state at $m=0.6$ and $\tilde{\varphi}=\pi$.
In this state, the horizontal Dirac strings disappear, and all the isosurfaces become small, while they retain knoblike features as the remnant of the horizontal Dirac strings.
In this case too, the horizontal $xy$ plane hosts a SkL with antiskyrmions, which appear on the $xy$ planes at almost all $z$ coordinates.
Figure~\ref{fig:4qnc_spin}(e) is for the $N_{\rm m}=8$ state at $m=0.8$ and $\tilde{\varphi}=\frac{5\pi}{6}$.
In this state, the number of Dirac strings is halved, and both the Dirac strings and the surrounding isosurfaces shrink.
Hence, the antiskyrmions are found on the horizontal planes with limited $z$ coordinates, in contrast to the case in Fig.~\ref{fig:4qnc_spin}(d).
In all the above cases, the spin configurations have fourfold improper rotational symmetry about each vertical Dirac string, as shown in Fig.~\ref{fig:4qnc_spin}.
Moreover, the spin configurations with $\tilde{\varphi}=\pi$ are symmetric for the screw operation $\{C_{4z}|-\tilde{\bf a}_{\eta}\}$ and the spatial-inversion operation, as exemplified in Figs.~\ref{fig:4qnc_spin}(a) and \ref{fig:4qnc_spin}(d).
These results are consistent with the symmetry arguments in Table~\ref{tab:4q_sin_sym}.
\section{Numerical analysis of phase shift \label{sec:5}}
Thus far, we have elucidated the topological properties of 2D $3Q$-SkLs and 3D $4Q$-HLs by systematically changing the phases of their constituent waves as well as the magnetization.
In this section, we study how the actual phases of the superposed waves evolve with an increasing external magnetic field, based on specific model Hamiltonians.
For the 2D SkLs, we analyze the numerical data of the real-space spin configurations obtained for the Kondo lattice model in Ref.~\cite{Ozawa2017}, and extract the phases for two types of SkLs with $|N_{\rm sk}|=1$ and $2$ in Sec.~\ref{sec:5.1}.
In Sec.~\ref{sec:5.2}, we apply similar analysis to the 3D HLs obtained for an effective spin model
in Ref.~\cite{Okumura2020} to extract the phases for two types of HLs with $N_{\rm m}=8$ and $16$.
\subsection{Phase shift in the sinusoidal 3$Q$ state \label{sec:5.1}}
First, using the numerical data obtained for the Kondo lattice model on a 2D triangular lattice in the previous study~\cite{Ozawa2017}, we extract the phases for the $3Q$ states, and discuss their magnetic field dependences in comparison with our result in Sec.~\ref{sec:3.3}.
The Kondo lattice model is a fundamental model for systems in which itinerant electrons are coupled to localized spins; its
Hamiltonian is given by
\begin{eqnarray}
\mathcal{H}&=&-\sum_{{\bf r}_l,{\bf r}_{l'},\sigma}t_{{\bf r}_l {\bf r}_{l'}}
\hat{c}^{\dag}_{{\bf r}_{l}\sigma}\hat{c}_{{\bf r}_{l'}\sigma}
-J\sum_{{\bf r}_{l},\sigma,\sigma'} {\bf S}_{{\bf r}_l}\cdot \hat{c}^{\dag}_{{\bf r}_l \sigma}
\boldsymbol{\sigma}_{\sigma\sigma'} \hat{c}_{{\bf r}_{l} \sigma'} \notag \\
&&
-h\sum_{{\bf r}_{l}} S_{{\bf r}_{l}}^z,
\label{eq:KLmodel}
\end{eqnarray}
where the operator $\hat{c}^{\dag}_{{\bf r}_l \sigma}$ $(\hat{c}_{{\bf r}_l \sigma})$ creates (annihilates) an electron with spin index
$\sigma=\pm$ at site ${\bf r}_l$.
The first term represents the kinetic energy of the itinerant electrons and $t_{{\bf r}_l{\bf r}_{l'}}$ denotes the hopping integral
between the sites ${\bf r}_l$ and ${\bf r}_{l'}$.
The second term represents the spin-charge coupling with the coefficient $J$, where $\boldsymbol{\sigma}$ is the vector
of Pauli matrices.
The localized spins ${\bf S}_{{\bf r}_l}$ are treated as classical vectors with $|{\bf S}_{{\bf r}_l}|=1$.
The last term denotes the Zeeman coupling to the external magnetic field $h$, which is taken into account only for the localized spins for simplicity.
In the previous study, the ground state of the model in Eq.~(\ref{eq:KLmodel}) with the nearest-neighbor hopping $t_1=1$, the third-neighbor hopping $t_3=-0.85$, and $J=0.5$ was studied by using the numerical method based on the kernel polynomial method (KPM) and the Langevin dynamics (LD), which is called
the modified KPM-LD method~\cite{Barros2013,Ozawa2017,Ozawa2017shape}.
The ground state was obtained by minimizing the grand potential $\Omega$ given by $\Omega=\braket{\mathcal{H}}/N-\mu n_{\rm e}$, where $N$ is the number of sites ($N=96^2$), $\mu=-3.5$ is the chemical potential, and $n_{\rm e}$ is the electron density defined by $n_{\rm e}=\sum_{{\bf r}_l,\sigma} \braket{\hat{c}^{\dag}_{{\bf r}_l \sigma}\hat{c}_{{\bf r}_l \sigma}}/N$.
The model was found to stabilize $3Q$ states whose wave vectors are dictated by the Fermi surfaces and given by Eq.~(\ref{eq:3Q_q_eta}) with $q=\frac{\pi}{3}$.
When $0 \leq h \lesssim 0.00325$, the ground state obtained by the numerical simulation is a $3Q$-SkL with $|N_{\rm sk}|=2$, which is well described by a superposition of three sinusoidal waves as in Eq.~(\ref{eq:nonchiral_3Q_ansatz}).
On the other hand, it turns into a different $3Q$-SkL with $|N_{\rm sk}|=1$ for $h \gtrsim 0.00325$, and finally, to yet another $3Q$ state with $N_{\rm sk}=0$ for $h \gtrsim 0.0065$.
In the following, we focus on the region for $0 \leq h \leq 0.006$ since the $3Q$ state with $N_{\rm sk}=0$ for $h \gtrsim 0.0065$ cannot be well represented by Eq.~(\ref{eq:nonchiral_3Q_ansatz}).
From the spin configurations obtained in the previous study, we extract the phases of the constituent waves in the two SkLs by assuming Eq.~(\ref{eq:nonchiral_3Q_ansatz}).
For this purpose, we estimate the cost function defined by
\begin{eqnarray}
&&U(\{\varphi_\eta\}, m, \theta, \Gamma; \hat{\bf n}, \xi)=\notag \\
&&\quad\frac{1}{N}\sum_{{\bf r}_l}
\left(1-{\bf S}^{\rm KPM-LD}_{{\bf r}_l} \cdot \tilde{{\bf S}}^{{\rm sin}3Q}_{{\bf r}_l}(\{\varphi_\eta\}, m, \theta, \Gamma; \hat{\bf n}, \xi) \right),
\label{eq:3q_cost}
\end{eqnarray}
where ${\bf S}^{\rm KPM-LD}_{{\bf r}_l}$ is the spin configuration obtained by the modified KPM-LD simulation and $\tilde{{\bf S}}^{{\rm sin}3Q}_{{\bf r}_l}$ is that generated from Eq.~(\ref{eq:nonchiral_3Q_ansatz}) as
\begin{align}
\tilde{{\bf S}}^{{\rm sin3Q}}_{{\bf r}_l}(\{\varphi_\eta\}, m, \theta, \Gamma; \hat{\bf n}, \xi)=
R\left(\hat{\bf n}, \xi\right){\bf S}^{{\rm sin}3Q}_{{\bf r}_l}(\{\varphi_\eta\}, m, \theta, \Gamma).
\label{eq:3q_fit_ansatz}
\end{align}
Here, ${\bf S}^{{\rm sin}3Q}_{{\bf r}_l}(\{\varphi_\eta\}, m, \theta, \Gamma)$ is given by Eq.~(\ref{eq:nonchiral_3Q_ansatz})
with $q=\frac{\pi}{3}$, and $R\left(\hat{\bf n}, \xi\right)$ denotes the 3D rotation matrix about the unit vector
$\hat{\bf n}$ with the angle $\xi$.
We note that the model in Eq.~(\ref{eq:KLmodel}) does not change the energy by flipping all the $y$ components of
the localized spins, which corresponds to the change of $\Gamma$ between $0$ and $1$ in Eq.~(\ref{eq:nonchiral_3Q_ansatz});
hence, the ground states with $\Gamma=0$ and $1$ are energetically degenerate.
By minimizing $U$ in Eq.~(\ref{eq:3q_cost}) for the numerical data $\{{\bf S}^{\rm KPM-LD}_{{\bf r}_l}\}$ at each value of $h$, we obtain the optimal values of the phases,
$\varphi_{\eta}^{*}$, as well as the other parameters.
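The optimization of $U$ can be sketched schematically as follows; this is our illustrative toy (a single-phase one-dimensional "ansatz" standing in for the full multi-parameter $\tilde{\bf S}^{{\rm sin}3Q}$ of Eq.~(\ref{eq:3q_fit_ansatz})), not the actual fitting code:

```python
import numpy as np
from scipy.optimize import minimize

def cost(params, spins_ref, ansatz):
    """U = (1/N) sum_l (1 - S_ref . S_ansatz(params)), cf. Eq. (3q_cost).
    spins_ref and ansatz(params) are arrays of unit vectors, shape (N, 3)."""
    s = ansatz(params)
    return 1.0 - np.mean(np.sum(spins_ref * s, axis=1))

# Toy single-Q ansatz with one phase parameter (purely illustrative)
xs = np.arange(96)
def ansatz(params):
    phi = params[0]
    sx = np.sin(np.pi / 3 * xs + phi)
    sz = np.cos(np.pi / 3 * xs + phi)
    return np.stack([sx, np.zeros(len(xs)), sz], axis=1)

target = ansatz([0.7])   # stand-in for the "numerical data", phase 0.7
res = minimize(lambda p: cost(p, target, ansatz), x0=[0.0])
print(res.x[0])          # recovers the phase ~0.7
```

In the actual analysis, the minimization runs over $\{\varphi_\eta\}$, $m$, $\theta$, $\Gamma$, and the global rotation $(\hat{\bf n},\xi)$ at each $h$.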
For comparison with $\tilde{\varphi}$ in Sec.~\ref{sec:3.3},
we define the sum of $\varphi_{\eta}^{*}$ in the form of
\begin{eqnarray}
\tilde{\varphi}^{*} = \pi - \left|\pi - {\rm Mod}\left[\sum_{\eta=1}^{3} \varphi^{*}_{\eta}, 2\pi\right]\right|,
\label{eq:phase_sum_3q}
\end{eqnarray}
paying attention to the sixfold rotational symmetry of the model and the symmetry under the transformation $\varphi_{\eta} \rightarrow -\varphi_{\eta}$, in addition to the $2\pi$ periodicity.
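The folding in Eq.~(\ref{eq:phase_sum_3q}) can be written as a small helper (the function name is ours):

```python
import math

def folded_phase_sum(phases):
    """Fold the total phase following Eq. (phase_sum_3q):
    phi_tilde* = pi - |pi - Mod(sum_eta phi_eta*, 2*pi)|,
    mapping equivalent sums (2*pi periodicity, phi -> -phi) into [0, pi]."""
    s = sum(phases) % (2 * math.pi)
    return math.pi - abs(math.pi - s)

# Equivalent phase sums give the same representative:
print(folded_phase_sum([math.pi / 2, 0.0, 0.0]))                 # pi/2
print(folded_phase_sum([-math.pi / 2, 0.0, 0.0]))                # pi/2 (phi -> -phi)
print(folded_phase_sum([math.pi / 2 + 2 * math.pi, 0.0, 0.0]))   # pi/2 (periodicity)
```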
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{3q_fit_cost.pdf}
\caption{
\label{fig:3q_fit_cost}
Magnetic field dependences of (top) the cost function $U$ in Eq.~(\ref{eq:3q_cost}) and (bottom) the grand potential calculated from $\{{\bf S}_{{\bf r}_l}^{\rm KPM-LD}\}$ and $\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q}\}^{*}$, and the difference of the grand potential, $\Delta\Omega$.
}
\end{figure}
We plot the $h$ dependence of the optimal values of $U$ in the upper panel of Fig.~\ref{fig:3q_fit_cost}.
We obtain sufficiently small values less than $4\times 10^{-4}$ for $h < 0.00325$.
On the other hand, $U$ suddenly increases to $\simeq 4\times 10^{-2}$ at $h = 0.0035$ and then increases gradually with $h$.
The sudden increase of $U$ is related to the topological phase transition at $h \simeq 0.00325$, as discussed below.
The values for $h> 0.00325$ are, however, still very small, at the level of a few percent.
The results indicate that the optimal spin state $\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q}\}^{*}$ reproduces well $\{{\bf S}_{{\bf r}_l}^{\rm KPM-LD}\}$ for all $h$.
In the lower panel of Fig.~\ref{fig:3q_fit_cost}, we show the $h$ dependences of the grand potential calculated from the numerically obtained ground state $\{{\bf S}_{{\bf r}_l}^{\rm KPM-LD}\}$ and the optimal state $\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q}\}^{*}$.
We also plot the difference of the grand potential $\Delta \Omega=\Omega^{{\rm sin}3Q} - \Omega^{{\rm KPM-LD}}$.
The results indicate that the optimal states $\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q}\}^{*}$ almost perfectly reproduce $\{{\bf S}_{{\bf r}_l}^{\rm KPM-LD}\}$ for $h < 0.00325$ ($\Delta\Omega < 9.0 \times 10^{-6}$, whose relative error is less than $3.2\times10^{-3}$~\%), while slight modifications could improve the results for $h > 0.00325$ ($2.0 \times 10^{-4} < \Delta\Omega < 4.0 \times 10^{-4}$, whose relative error is less than $0.14$~\%).
We return to this point at the end of this subsection.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{3q_fit_phase.pdf}
\caption{
\label{fig:3q_fit_phase}
(a) Magnetic field dependences of $\tilde{\varphi}^{*}$ and $|N_{\rm sk}|$ calculated from the optimal spin textures
$\{\tilde{{\bf S}}_{{\bf r}_l}^{{\rm sin}3Q}\}^*$.
For comparison, $|N_{\rm sk}|$ obtained by the modified KPM-LD simulations is plotted by the orange points connected by the dashed line.
At $h=0$, we plot the results of $\tilde{\varphi}^*$ at both $\tilde{\varphi}^*\simeq 0$ and $\pi$ because of the time-reversal symmetry; the solid and dotted lines are the guides for the eye (see the text).
(b) Evolution of $\tilde{m}^*$ and $\tilde{\varphi}^{*}$ obtained by the optimization.
The background is a part of the phase diagram in Fig.~\ref{fig:3qsin_Nsk}.
The orange and green circles represent that the data are for $|N_{\rm sk}|=2$ and $|N_{\rm sk}|=1$, respectively.
}
\end{figure}
Figure~\ref{fig:3q_fit_phase}(a) shows the $h$ dependences of $\tilde{\varphi}^{*}$ and $|N_{\rm sk}|$ calculated from $\{\tilde{{\bf S}}_{{\bf r}_l}^{{\rm sin}3Q} \}^*$.
For comparison, we plot $|N_{\rm sk}|$ obtained by the modified KPM-LD simulations.
For the $|N_{\rm sk}|=2$ state at $h=0$, we obtain $\tilde{\varphi}^{*}\simeq\pi$, but $\tilde{\varphi}^{*}$ is equivalent to $\tilde{\varphi}^{*} +\pi$ at $h=0$
because of time-reversal symmetry; to show this explicitly, we plot the data points at both $\tilde{\varphi}^{*}\simeq0$ and $\pi$.
By increasing $h$, we find that $\tilde{\varphi}^{*}\simeq0$ tends to be favored in the region with $|N_{\rm sk}|=2$,
whereas $\tilde{\varphi}^{*}$ becomes $\simeq \pi$ only at $h=0.001$.
This is presumably due to the failure of the modified KPM-LD simulation in the small $h$ region where the states with $\tilde{\varphi}^{*}=0$ and $\pi$ have a small energy difference.
When $h$ is increased above $0.003$, $\tilde{\varphi}^{*}$ is shifted to $\sim \frac{\pi}{4}$,
and becomes almost constant in the region with $|N_{\rm sk}|=1$.
This clearly shows that the topological transition with the change of $|N_{\rm sk}|$ caused by the magnetic field is accompanied by the phase shift.
We note that $|N_{\rm sk}|$ for the optimal spin configuration remains $2$ at $h=0.0035$ where the phase is shifted already,
while $|N_{\rm sk}|$ is reduced to $1$ for the modified KPM-LD result; see below.
Figure~\ref{fig:3q_fit_phase}(b) shows the evolution
of $\tilde{m}^*$ and $\tilde{\varphi}^{*}$ obtained for
$\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q} \}^*$, plotted on the phase diagram in Fig.~\ref{fig:3qsin_Nsk}.
Note that $\tilde{m}^*$ obtained by the fitting does not correspond to the magnetization in the simulation data; see also Eq. (\ref{eq:mtilde}).
The orange and green circles represent $|N_{\rm sk}|=2$ and 1, respectively.
The result indicates that $|N_{\rm sk}|$ calculated from $\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q} \}^*$ is consistent with the phase diagram calculated in Sec.~\ref{sec:3.3.2} and the topological transition from $|N_{\rm sk}|=2$ to 1 is associated with the phase shift, except for $h = 0.0035$.
When $h = 0.0035$, $\{\tilde{\bf S}_{{\bf r}_l}^{{\rm sin}3Q} \}^*$ gives $|N_{\rm sk}|=2$, although the values of $\tilde{m}^*$ and $\tilde{\varphi}^*$ are in the $|N_{\rm sk}|=1$ region,
as shown in Fig.~\ref{fig:3q_fit_phase}(b).
The discrepancy at $h = 0.0035$ is presumably due to the insufficient approximation by Eq.~(\ref{eq:3q_fit_ansatz}).
Throughout the above analysis, we use Eq.~(\ref{eq:3q_fit_ansatz}) for two different topological phases with different $|N_{\rm sk}|$.
However, it was recently pointed out that the spin state with $|N_{\rm sk}|=1$ is better described by a different form of a superposition of sinusoidal and cosinusoidal waves~\cite{Hayami2021locking}.
An extension including such a superposition reconciles the discrepancy of $N_{\rm sk}$ at $h=0.0035$ in Fig.~\ref{fig:3q_fit_phase}, and at the same time, suppresses the increases of $U$ and $\Delta\Omega$ in the $|N_{\rm sk}|=1$ region~\cite{Shimizu2021fit}.
We here, however, restrict ourselves to Eq.~(\ref{eq:3q_fit_ansatz}) to discuss the phase shift within the same form of the constituent waves as in Sec.~\ref{sec:3.3}.
\subsection{Phase shift in the screw 4$Q$ state \label{sec:5.2}}
Next, we perform similar analysis for the 3D $4Q$ states on a 3D simple cubic lattice obtained in the previous study~\cite{Okumura2020}
for an effective spin model derived from the Kondo lattice model with an antisymmetric spin-orbit coupling~\cite{Hayami2018, Okumura2020}.
The Hamiltonian reads
\begin{eqnarray}
\mathcal{H}&=&2\sum_{\eta}\left[
-J{\bf S}_{{\bf q}_{\eta}}\cdot{\bf S}_{-{\bf q}_{\eta}}
+\frac{K}{N}({\bf S}_{{\bf q}_{\eta}}\cdot{\bf S}_{-{\bf q}_{\eta}})^2 \right. \notag \\
&& \qquad \ \left.
-iD\frac{{\bf q}_{\eta}}{|{\bf q}_{\eta}|}\cdot({\bf S}_{{\bf q}_{\eta}}\times{\bf S}_{-{\bf q}_{\eta}})
\right]
-h\sum_{l}S_{{\bf r}_l}^z,
\label{eq:4q_ham}
\end{eqnarray}
where ${\bf S}_{{\bf q}_{\eta}}=\frac{1}{\sqrt{N}}\sum_l {\bf S}_{{\bf r}_l}e^{-i{\bf q}_{\eta}\cdot{\bf r}_l}$;
the first, second, and third terms represent the bilinear, biquadratic, and DM-type interactions in momentum space, while the last term is the Zeeman coupling.
The parameters are set as $J=1$, $K=0.6$, $D=0.3$, and $N=16^3$.
The first sum in Eq.~(\ref{eq:4q_ham}) is taken for a particular set of the wave vectors ${\bf q}_{\eta}$ shown in Fig.~\ref{fig:4q_setup}(a)
and $q=|{\bf q}_{\eta}|=\frac{\sqrt{3}\pi}{4}$, i.e., $L=8$.
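As a hedged sketch of how Eq.~(\ref{eq:4q_ham}) can be evaluated for a real-space spin configuration, the snippet below assumes the four ${\bf q}_{\eta}$ lie along the cubic body diagonals with $|{\bf q}_{\eta}|=\frac{\sqrt{3}\pi}{4}$ (consistent with $L=8$; the actual set is specified in Fig.~\ref{fig:4q_setup}(a)), and the value of $h$ and the helper names are our choices:

```python
import numpy as np

# Model parameters of Eq. (4q_ham); h = 0.5 is an arbitrary choice here
L = 8
N = L ** 3
J, K, D, h = 1.0, 0.6, 0.3, 0.5

# Assumed wave vectors along the cubic body diagonals, |q_eta| = sqrt(3)*pi/4
qs = (np.pi / 4) * np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]])

def S_q(spins, q):
    """S_q = N^{-1/2} sum_l S_{r_l} e^{-i q.r_l}; spins has shape (L, L, L, 3)."""
    x, y, z = np.meshgrid(np.arange(L), np.arange(L), np.arange(L), indexing="ij")
    phase = np.exp(-1j * (q[0] * x + q[1] * y + q[2] * z))
    return np.tensordot(phase, spins, axes=([0, 1, 2], [0, 1, 2])) / np.sqrt(N)

def energy_per_site(spins):
    """Evaluate Eq. (4q_ham) per site for a real-space configuration."""
    E = 0.0
    for q in qs:
        Sq, Smq = S_q(spins, q), S_q(spins, -q)
        SS = np.real(np.dot(Sq, Smq))   # S_q . S_-q, real for real spin fields
        # Re[-i q_hat.(S_q x S_-q)] = Im[q_hat.(S_q x S_-q)]
        dm = np.imag(np.dot(q / np.linalg.norm(q), np.cross(Sq, Smq)))
        E += 2 * (-J * SS + K / N * SS ** 2 + D * dm)
    E -= h * spins[..., 2].sum()        # Zeeman term
    return E / N

# Sanity check: a fully polarized state has S_{q_eta} = 0, so E/N = -h
ferro = np.zeros((L, L, L, 3)); ferro[..., 2] = 1.0
print(energy_per_site(ferro))
```

For a fully polarized configuration the finite-${\bf q}$ Fourier components vanish, so only the Zeeman term survives.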
The model was found to stabilize $4Q$ states
for $0 \leq h \lesssim 1.395$, by using the variational calculation for $h=0$ and the simulated annealing for $h\geq 0$~\cite{Okumura2020}.
When $h=0$, the ground state is given by a superposition of four proper screws, similar to the one discussed in Sec.~\ref{sec:4.2}.
When $0 \leq h \lesssim 0.575$, the ground state obtained by the simulated annealing is a $4Q$-HL with $N_{\rm m}=16$.
By increasing $h$, half of the hedgehog-antihedgehog pairs cause pair annihilation at $h \simeq 0.575$, which changes the spin state to a different $4Q$-HL with $N_{\rm m}=8$.
The remaining hedgehogs and antihedgehogs pair annihilate at $h \simeq 1.395$ and the system turns into a topologically trivial $4Q$ state with $N_{\rm m}=0$, and finally, into the forced ferromagnetic state for $h \gtrsim 1.395$.
In the following, we focus on the region for $0\leq h\leq 1.2$ where the spin configurations can be well described by Eq.~(\ref{eq:4qchiral_ansatz}).
Following the procedure in Sec.~\ref{sec:5.1}, we extract the phases of the constituent waves in the two $4Q$-HLs by assuming that the ground state is well approximated by
a superposition of proper screws as in Eq.~(\ref{eq:4qchiral_ansatz}).
In this case, we estimate the cost function defined by
\begin{eqnarray}
&&U(\varphi_1, \varphi_2, \varphi_3, \varphi_4, m)=\notag \\
&&\quad\frac{1}{N}\sum_{{\bf r}_l}
\left(1-{\bf S}^{\rm SA}_{{\bf r}_l} \cdot {\bf S}^{{\rm scr}4Q}_{{\bf r}_l}(\varphi_1, \varphi_2, \varphi_3, \varphi_4, m) \right),
\label{eq:4q_cost}
\end{eqnarray}
where ${\bf S}^{\rm SA}_{{\bf r}_l}$ is the spin configuration obtained by the simulated annealing
and ${\bf S}^{{\rm scr}4Q}_{{\bf r}_l}$ is that generated from Eq.~(\ref{eq:4qchiral_ansatz}) with ${\bf e}_{\eta}^2 \rightarrow -{\bf e}_{\eta}^2$
[in the previous study~\cite{Okumura2020}, $D$ was taken to be positive in Eq.~(\ref{eq:4q_ham}), which prefers left-handed screws].
We define the sum of $\varphi_{\eta}^*$ in the same form as in Eq.~(\ref{eq:phase_sum_3q}), with the summation of $\eta$ from 1 to 4.
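The cost function in Eq.~(\ref{eq:4q_cost}) is simply the site-averaged misalignment between two unit-spin configurations: it vanishes for identical textures and reaches 2 for antiparallel ones. A minimal numerical sketch (in Python, with hypothetical random spin arrays standing in for the simulated-annealing and ansatz configurations):

```python
import numpy as np

def cost_function(S_sa, S_ansatz):
    """Cost function U of the form above: mean misalignment between the
    simulated-annealing configuration S_sa and the ansatz configuration
    S_ansatz; both are (N, 3) arrays of unit vectors."""
    # 1 - S_sa . S_ansatz, averaged over all N lattice sites
    return np.mean(1.0 - np.sum(S_sa * S_ansatz, axis=1))

# Identical configurations give U ~ 0; antiparallel ones give U ~ 2.
rng = np.random.default_rng(0)
S = rng.normal(size=(16**3, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # normalize to unit spins
print(round(cost_function(S, S), 12))    # 0.0
print(round(cost_function(S, -S), 12))   # 2.0
```

Minimizing such a quantity over the phases $(\varphi_1,\dots,\varphi_4)$ and $m$ with any standard optimizer yields the optimal parameters discussed in the text.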
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{4q_fit_cost.pdf}
\caption{
\label{fig:4q_fit_cost}
Magnetic field dependences of (top) the cost function $U$ in Eq.~(\ref{eq:4q_cost})
and (bottom) the energy per site calculated from $\{{\bf S}_{{\bf r}_l}^{\rm SA}\}$ and $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$,
and the difference of the energy, $\Delta E$.
}
\end{figure}
The upper panel of Fig.~\ref{fig:4q_fit_cost} shows the $h$ dependence of the optimal values of $U$.
In the whole range of $h$, the optimal $U$ is less than $0.021$, indicating that the optimal $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$ reproduces well the numerically obtained ground state $\{{\bf S}_{{\bf r}_l}^{\rm SA}\}$.
The lower panel of Fig.~\ref{fig:4q_fit_cost} shows the $h$ dependences of the energy per site calculated from $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$ and $\{{\bf S}_{{\bf r}_l}^{\rm SA}\}$, $E^{{\rm scr}4Q}$ and $E^{{\rm SA}}$, respectively, and their difference, $\Delta E=E^{{\rm scr}4Q}-E^{{\rm SA}}$.
We find that the energy is well reproduced for all $h$ (the relative error is less than $1.9$\%).
Both $U$ and $\Delta E$ show rapid changes when approaching $h \sim 0.6$, which
is related to the topological transition discussed below.
We note that $U$ as well as $\Delta E$ shows a hump at $h \simeq 0.58$; we return to this point at the end of this subsection.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{4q_fit_phase.pdf}
\caption{
\label{fig:4q_fit_phase}
(a) Magnetic field dependences of $\tilde{\varphi}^{*}$ and $N_{\rm m}$ calculated from the optimal spin textures $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$, which are represented by blue and red points, respectively.
For comparison, $N_{\rm m}$ obtained by the simulated annealing is plotted by the orange points.
(b) Evolution of $m^*$ and $\tilde{\varphi}^{*}$ on the phase diagram in Fig.~\ref{fig:4qch_pd}.
The background is an enlarged figure of the phase diagram in Fig.~\ref{fig:4qch_pd}.
The orange and green circles for each data point represent $N_{\rm m}=16$ and $N_{\rm m}=8$, respectively.
}
\end{figure}
Figure~\ref{fig:4q_fit_phase}(a) shows the $h$ dependences of $\tilde{\varphi}^{*}$ and $N_{\rm m}$ calculated from the optimal $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$.
The results of $N_{\rm m}$ well reproduce those for $\{{\bf S}_{{\bf r}_l}^{\rm SA}\}$ plotted by the orange points in the figure.
For the $N_{\rm m}=16$ state at $h=0$, we obtain $\tilde{\varphi}^{*} \simeq \frac{\pi}{3}$, which is equivalent to $\tilde{\varphi}^{*}\simeq\pi$ and $\frac{4\pi}{3}$
because of the threefold rotational symmetry about the [111] axis.
By increasing $h$, $\tilde{\varphi}^{*}$ gradually decreases from $ \simeq \frac{\pi}{3}$, but rapidly reduces to $\simeq 0$ when approaching $h \simeq 0.585$; $\tilde{\varphi}^{*} \simeq 0$ for $h \gtrsim 0.585$.
The rapid change of $\tilde{\varphi}^{*}$ appears to occur as a precursor of the topological change from $N_{\rm m}=16$ to $8$.
Figure~\ref{fig:4q_fit_phase}(b) shows the evolution of $m^{*}$ and $\tilde{\varphi}^{*}$ obtained from the optimal $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$, plotted on the phase diagram in Fig.~\ref{fig:4qch_pd}.
The orange and green circles represent the phases with $N_{\rm m}=16$ and 8 in Fig.~\ref{fig:4q_fit_phase}(a), respectively.
The result indicates that $N_{\rm m}$ calculated from $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$ is almost consistent with the phase diagram
calculated in Sec.~\ref{sec:4.2.2} and the topological transition from $N_{\rm m}=16$ to 8 is correlated with the rapid phase shift to $\simeq 0$.
We note that two orange points with $N_{\rm m}=16$ are obtained in the $N_{\rm m}=8$ region of the phase diagram.
As in the $3Q$ case in Sec.~\ref{sec:5.1}, we speculate that the discrepancy is presumably due to the insufficiency of Eq.~(\ref{eq:4qchiral_ansatz}).
Another possible reason is that the phase diagram in Fig.~\ref{fig:4qch_pd} is calculated in continuous space, while the analysis in this section is done for the discrete lattice system; the phase boundary between $N_{\rm m}=16$ and $8$ may be shifted by the discretization.
Near the topological transition, we note that $m^*$ increases monotonically up to $h=0.58$, but slightly decreases at $h \simeq 0.59$ after $\tilde{\varphi}^*$ reduces to $\simeq 0$, and increases again for larger $h$, as shown in Fig.~\ref{fig:4q_fit_phase}(b).
In the previous study, it was pointed out that the system shows the first-order phase transition at $h \simeq 0.595$ accompanied by a jump of the net magnetization~\cite{Okumura2020}.
Hence, the nonmonotonic behavior of $m^*$ implies that $\{{\bf S}_{{\bf r}_l}^{{\rm scr}4Q}\}^*$ is insufficient to approximate the sudden change of the spin texture through the first-order transition. We speculate that the hump in $U$ in the top panel of Fig.~\ref{fig:4q_fit_cost} is related to this issue.
\section{Discussion \label{sec:6}}
Through this study, we clarified the effect of the phase shift on the spin textures, the symmetry, the topological properties, and the emergent magnetic field of the 2D $3Q$-SkLs and the 3D $4Q$-HLs, by developing a systematic way to deal with the phase degree of freedom, the hyperspace representation.
Our complete phase diagrams in terms of the sum of phases of the constituent waves, $\tilde{\varphi}$, and the magnetization $m$ provide a ``guiding map'' for searching topologically nontrivial phases and
novel topological phase transitions, which would shed light on the engineering of the topological properties of the multiple-$Q$ spin states.
In the case of the 2D $3Q$-SkLs, we unveiled the parameter regions for the SkLs with a high skyrmion number $|N_{\rm sk}|=2$, in addition to those for the conventional ones with $|N_{\rm sk}|=1$.
Different values of $N_{\rm sk}$ bring about different emergent electromagnetic phenomena, e.g., in the topological Hall effect~\cite{Loss1992, Ye1999, Bruno2004, Onoda2004, Binz2008, Nakazawa2019}
and the anomalous Nernst effect~\cite{Mizuta2016,Hirschberger2020TNE}, since $N_{\rm sk}$ is related to the scalar spin chirality by Eq.~(\ref{eq:Nsk}).
Moreover, the dynamics of the skyrmions also shows different aspects depending on $N_{\rm sk}$, as discussed in Refs.~\cite{Thiele1973, Everschor2011, Schulz2012, Seki2016skyrmions, Zhang2017}.
While it is usually difficult to directly measure the phase degree of freedom in experiments, our results indicate that the information of the phases
can be obtained by such transport and optical responses.
In the 3D $4Q$-HLs, our results also revealed that the phase shift gives rise to a variety of topological phases with different number of the hedgehogs and antihedgehogs, $N_{\rm m}$, ranging from $8$ to $48$ per unit cube.
In this case also, different values of $N_{\rm m}$ and different distributions of the hedgehogs and antihedgehogs affect the emergent electromagnetic phenomena, such as the topological Hall effect~\cite{Kanazawa2012, Kanazawa2016} and the thermoelectric effect~\cite{Shiomi2013, Fujishiro2018}.
This means that such responses can be good probes of the phase shifts in the $4Q$-HLs.
Most interestingly, we discovered the appearance of the horizontal Dirac strings, which give rise to unconventional pair creation and fusion of the hedgehogs and antihedgehogs.
Our results indicate that both pair creation and fusion do not affect the emergent magnetic field.
It would be interesting to explore the emergent electromagnetic phenomena specific to the hidden topological objects.
A crucial question is how to control the phase degree of freedom.
Once one can establish a systematic way to cause the phase shift, it is possible to generate intriguing emergent electromagnetic phenomena associated with the topological changes, which would lead to new functionalities of the multiple-$Q$ spin textures.
In the present study, by analyzing the previous numerical data, we demonstrated that the external magnetic field causes characteristic changes in the phase degree of freedom associated with the topological transitions.
The phase shifts are, however, limited to narrow regions of the topological phase diagrams, and there remain wide interesting parameter regions, e.g., the topological changes in the 2D $3Q$-SkLs caused by the pair annihilation of the hedgehogs and antihedgehogs in the 3D hyperspace (black dots in Figs.~\ref{fig:3qscr_Nsk} and \ref{fig:3qsin_Nsk}), and the maxima of the emergent magnetic field by the fusion of the hedgehogs and antihedgehogs in the 3D $4Q$-HLs (crossing points of the orange
lines in Figs.~\ref{fig:4qch_pd} and \ref{fig:4qnc_pd}).
It was recently pointed out that the sinusoidal $3Q$ state changes the phase from $\tilde{\varphi}=0$ (or $\pi$) to $\tilde{\varphi}=\frac{\pi}{2}$ by effective six-spin interactions arising from the entropic contribution or the spin-charge coupling~\cite{Hayami2021phase}.
In general, the interactions which can modulate the sum of phases in an $N_Q Q$ spin texture are given by $N_Q$ ($2N_Q$) multiple-spin interactions when $N_Q$ is even (odd).
Such higher-order multiple spin interactions have been discussed as an origin of noncoplaner spin textures~\cite{Akagi2012, Muhlbauer2009, Binz2006-1, Binz2006-2, Park2011, Hayami2017, Grytsiuk2020, Okumura2020}.
Further studies on the higher-order interactions are desired as the key ingredients to control the phase degree of freedom.
While we have considered the 2D 3$Q$ and 3D 4$Q$ states as typical examples of the multiple-$Q$ spin textures with the phase degree of freedom in this study, one can extend the current analysis to other spin textures, such as the 2D sextuple-$Q$ state~\cite{Okada2018} and the 3D sextuple-$Q$ states~\cite{Binz2006-1, Binz2006-2, Ritz2013}.
In such general cases, the number of phase degrees of freedom can be more than one; for instance, for the 3D sextuple-$Q$ states,
the number of additional degrees of freedom is $N_Q-d=6-3=3$.
Such extensions would bring further intriguing topological phenomena beyond the present cases with only one phase degree of freedom.
\section{Summary \label{sec:7}}
To summarize, we have theoretically studied the effect of phase shifts on the magnetic and topological properties of the multiple-$Q$ spin textures.
We established the generic framework to systematically deal with the phase shifts by introducing a hyperspace with the additional dimension representing the phase degree of freedom in Sec.~\ref{sec:2}.
In this framework, we can regard a multiple-$Q$ spin texture with phase variables as the one on an intersection of the corresponding spin texture in the hyperspace.
Topological objects in the hyperspace characterize the topological defects in the original spin textures: For instance, the Dirac strings in the 3D hyperspace define the cores of 2D skyrmions and antiskyrmions, and the closed loops composed of the singularities and the membranes of the downward spins (Dirac planes) in the 4D hyperspace define the hedgehogs and antihedgehogs, and the Dirac strings connecting them, respectively, in the 3D HLs.
Thus, we can discuss not only the magnetic textures but also the topological properties from the configuration of topological objects in the hyperspace.
In Sec.~\ref{sec:3}, we have elucidated the effect of phase shifts on the 2D $3Q$ states composed of three proper screws or three sinusoidal waves, by analyzing the 3D $3Q$ states obtained by the hyperspace representation.
The 3D $3Q$ states involve the hedgehogs and antihedgehogs, and the Dirac strings connecting them, whose configurations in the hyperspace change with the magnetization $m$.
We elucidated that the topological defects evolve in a different way between the screw and sinusoidal cases, which leads to distinct topological phase diagrams for the 2D $3Q$ states while changing the sum of phases, $\tilde{\varphi}$.
For the screw case, we clarified that the major portions of the phase diagram are occupied by the SkLs with $|N_{\rm sk}|=1$, whose structures are ubiquitously found in the chiral magnets under the magnetic field, and the remaining small regions with nonzero magnetization realize the SkLs with high topological numbers, $|N_{\rm sk}|=2$.
In contrast, we discovered that the regions of the SkLs with $|N_{\rm sk}|=2$ are extended in the sinusoidal case, including all the states with zero magnetization.
The results indicate that the types of the superposed waves crucially affect the topology of the multiple-$Q$ spin textures through the phase degree of freedom.
In Sec.~\ref{sec:4}, we have studied the 3D $4Q$ states.
In this case, the hyperspace representation is given by the 4D loop lattices, whose patterns evolve with $m$.
In the case of the $4Q$ states composed of the proper screws, we found the topological phases with the number of hedgehogs and antihedgehogs per unit cube $N_{\rm m}=8$, 16, 32, and 48 while changing $\tilde{\varphi}$ and $m$.
On the other hand, for the $4Q$ states composed of the sinusoidal waves, we obtained the topological phases with $N_{\rm m}=8$, 16, 24, 32, and 48.
In the former, the large portions of the phase diagram as a function of $\tilde{\varphi}$ and $m$ are occupied by the phases with $N_{\rm m}=8$ and $16$, while for the latter, the major portions are occupied by the phases with larger $N_{\rm m}$.
The emergent magnetic field $\bar{b}_z$ is always negative for the former, but it changes the sign depending on $\tilde{\varphi}$ and $m$ for the latter.
Interestingly, we discovered that unusual Dirac strings appear on the planes perpendicular to the magnetization direction, and their evolution leads to unconventional topological phenomena:
pair creation of hedgehogs and antihedgehogs, which increases $N_{\rm m}$ with $m$, in the screw case,
and fusion of three hedgehogs and antihedgehogs, which maximizes the amplitude of $\bar{b}_z$, in both screw and sinusoidal cases.
These topological phenomena caused by the horizontal Dirac strings have not been found in the 3D HLs~\cite{Kanazawa2016, Zhang2016, Shimizu2021moire}.
Our finding indicates the importance of the phases for such unexplored topological transitions.
Finally, in Sec.~\ref{sec:5},
we have studied how the phases evolve with an external magnetic field,
by fitting the spin configurations obtained by the numerical simulations for the microscopic models.
For the sinusoidal $3Q$ states, by analyzing the results for the Kondo lattice model on the triangular lattice,
we found that $\tilde{\varphi}^*$ is shifted from $\sim 0$ to $\sim \frac{\pi}{4}$ while increasing the magnetic field, accompanied by the topological transition with reduction of $|N_{\rm sk}|$ from 2 to 1.
Meanwhile, for the screw $4Q$ states, from the results obtained for the effective spin model on the 3D cubic lattice,
we elucidated that $\tilde{\varphi}^{*}$ is shifted from $\sim\frac{\pi}{3}$ to $\sim0$ accompanied by the topological transition with reduction of $N_{\rm m}$ from $16$ to $8$.
These results not only demonstrate that the phase shifts can be caused by the external magnetic field but also suggest further variety of phase shifts depending on the situations.
Our results have unveiled the unconventional topological phases and topological transitions by the comprehensive study of the phase degree of freedom in the multiple-$Q$ spin textures.
In order to access such interesting physics, it is crucial to establish the way of controlling the phase variables.
Once one can control the phase degree of freedom, it is possible to flexibly change the symmetry, the topological properties, and the emergent magnetic field.
Such changes by the phase shifts are important for not only the magnetic properties including the spin excitation spectra~\cite{Kato2021}
but also the electronic quantum transport phenomena, as the electronic band structure of conduction electrons is modulated by the spin texture.
Furthermore, dynamics related to the phase degree of freedom is also an interesting issue, since the dynamical change of the spin texture gives rise to not only the emergent magnetic field but also the emergent electric field.
Such dynamical control would produce electromagnetic phenomena beyond the conventional electromagnetism.
Our findings provide a guiding map for the future studies.
\begin{acknowledgments}
The authors thank R. Ozawa for providing the numerical data, and Y. Fujishiro, M. Hirschberger, S. Hayami, N. Kanazawa, K. Nakazawa, and R. Yambe for fruitful discussions.
This research was supported by Grant-in-Aid for Scientific Research Grants (Nos. JP18K03447, JP19H05822, JP19H05825, and JP21J20812), JST CREST (Nos. JP-MJCR18T2 and JP-MJCR19T3), and the Chirality Research Center in Hiroshima University and JSPS Core-to-Core Program, Advanced Research Networks. K.S. was supported by the Program for Leading Graduate Schools (MERIT-WINGS). Parts of the numerical calculations were performed in the supercomputing systems in ISSP, the University of Tokyo.
\end{acknowledgments}
\section{Introduction}
High order harmonic generation (HHG) is a nonlinear atomic
process which can be described using a simple classical
picture \cite{Co94,hhg1,hb93}. Driven by a strong
electromagnetic (EM) field, the atomic electron emerges into
the continuum with zero velocity at some particular moment
of time. At a later time, the classical electron trajectory
returns to the nucleus, where the electron can recombine and
emit a photon. The frequency of the emitted photon is
determined by the amount of energy acquired by the electron
and the atomic ionization potential (assuming that the
electron recombines to the ground state).
The classical analysis shows that, for the monochromatic EM field
of the form $F_0\cos{\Omega t}$,
the kinetic energy
of the electron returning to the nucleus
cannot exceed the value of $3.17 U_{\rm p}$,
where $\displaystyle U_{\rm p}=F_0^2/(4\Omega^2)$
is the ponderomotive potential.
This leads to the
well-known $I_{\rm p}+3.17 U_{\rm p}$ cut-off rule for the
maximum harmonic order. Here $I_{\rm p}$
is the atomic ionization potential.
The quantum counterpart of the classical model \cite{hhgd} assumes that the released electron moves only under the action of the EM field, neglecting the influence of the atomic potential (the so-called strong-field approximation, SFA \cite{Keldysh64,hhgd}).
The SFA employs the analytical Volkov states, which
makes the problem tractable. The classical returning
trajectories emerge as extrema in the saddle-point
analysis of the quantum-mechanical amplitudes computed
within the SFA \cite{hhgd}.
The aforementioned classical and quantum-mechanical results
correspond to the pure cosine form of the driving EM
field. Available pulse-shaping techniques \cite{pshape} make
it possible to modify the HHG characteristics by suitably
tailoring the driving EM field. This problem belongs to a
rapidly developing field of the quantum optimal control
\cite{tutorial}.
Several aspects of the optimal control of the HHG
process were addressed in the literature. In the paper
\cite{wavelet1}, the emission intensity of a given harmonic order
was optimized by tailoring the laser pulse. In
Ref.~\cite{opt2}, the emphasis was placed on optimizing the
particular high order harmonics from which single attosecond
pulses could be synthesized. Both these works employed the
so-called genetic algorithm, which mimicked the natural
selection process by introducing a mutation procedure and a suitable fitness function emphasizing the desired properties of the target state. Numerically, this procedure requires multiple solutions of the time-dependent Schr\"odinger equation (TDSE), which may constitute a considerable computational task if one is interested in the formation of HHG in real atomic systems.
If the desired goal is to increase the harmonics cut-off
order, there is a possibility to find the optimum field
parameters using a purely classical approach based on the
electron trajectory analysis \cite{kinsler,hhgmcol2}.
In the paper by \citet{kinsler}, such an analysis, supplemented by a quantum calculation relying on the genetic algorithm, was used to show that the optimum waveform maximizing the recollision energy is a linear ramp with a DC offset, $\displaystyle F(t)=\alpha t+\beta$ for $t\in (0,T)$, $T$ being the period of oscillations.
Such a form has been shown to provide an
absolute maximum of the kinetic energy of the electron at
the moment of its return to the nucleus. This energy was
approximately 3 times larger than the corresponding energy
for the pure cosine wave with the same period and field
intensity \cite{hhgmcol2}. To
avoid using a strong DC field in practice, it was suggested in
\cite{hhgmcol2} that it could be replaced by an AC field
of a lesser $\Omega/2$ frequency, while the linear ramp could
be replaced by a combination of the harmonics with
frequencies $n\Omega$. Here $\Omega$ is the frequency
corresponding to the oscillation period $T$, $n$ is integer.
The overall pulse has thus a period of $2T$, rather than
$T$. The weights corresponding to different harmonics
constituting the pulse were found by means of the genetic
algorithm. Results of the quantum calculation relying on
the SFA reported in \cite{hhgmcol2} confirmed that this waveform allowed a considerable increase in the cut-off position.
In the present work, we address a related question: what gain in the HHG cut-off can be achieved if we use a driving EM field composed of harmonics with the multiple frequencies $n\Omega$? In other words, we demand the driving EM pulse to be strictly $T$-periodic and such that its integral over a period is zero (i.e., no DC component is present). It turns out that a moderate increase in the cut-off position is possible in this case. A simpler case of adding the second harmonic $2\Omega$ to the waveform was considered in earlier works \cite{mauritsson:013001,zeng:203901}.
We supplement the classical trajectory analysis by a
quantum mechanical TDSE calculation of the HHG process in
the lithium atom.
The choice of this particular target was motivated by the
experiments on the
laser field ionization of magneto-optically trapped (MOT) Li
atoms \cite{Steinmann07}.
Numerical solution of the TDSE takes full account of the effect of the atomic potential. Such a calculation ensures that the effect of the extended HHG cut-off, which we report below, is not an artifact of a simplified treatment.
We shall consider below EM fields for which the field amplitude
$F(t)$ is a periodic function of time, having a
fixed period $T$. The field intensity for such EM fields can be expressed
as $\displaystyle W={c\over 4\pi T}\int\limits_0^T F(t)^2\ dt$,
where $c$ is the speed of light.
For
the monochromatic EM field $F(t)=F_0 \cos{\Omega t}$ this gives
the well-known relation
$\displaystyle W={F_0^2c\over 8\pi}$.
Throughout the paper we shall use the
atomic units. The unit of the EM field intensity
corresponding to the
unit field strength $F_0 = 1~{\rm a.u.} = 5\times 10^9$ V cm$^{-1}$
is $3.51\times 10^{16}$ W cm$^{-2}$ \cite{shreview}.
The field intensity of the
monochromatic wave $F_0 \cos{\Omega t}$ can thus be expressed as
$W=3.5\times 10^{16} F_0^2$, if
field intensity is measured in W/cm$^2$ and the field
strength is expressed in atomic units. From the expressions above it is clear that $T$-periodic EM fields with different $F(t)$ but equal values of $\int\limits_0^T F(t)^2\ dt$ will have the same intensity. In particular, an EM field $F(t)$ will have the same intensity as the monochromatic wave $F_0 \cos{\Omega t}$ of the same period if $\int\limits_0^T F(t)^2\ dt=T F_0^2/2$.
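This intensity equivalence is easy to verify numerically: any $T$-periodic waveform rescaled so that its cycle-averaged $F(t)^2$ equals $F_0^2/2$ carries the same intensity as the cosine wave. A small Python sketch (the two-color waveform and its mixing coefficient are arbitrary illustrative choices, not the optimized fields discussed below):

```python
import numpy as np

F0, Omega = 0.0053, 0.0068          # a.u. (0.185 eV corresponds to ~0.0068 a.u.)
T = 2 * np.pi / Omega
t = np.linspace(0.0, T, 4096, endpoint=False)   # one full period, uniform grid

def cycle_avg_F2(F):
    # (1/T) * integral of F(t)^2 over one period, proportional to the intensity
    return np.mean(F**2)

F_cos = F0 * np.cos(Omega * t)
# hypothetical two-color field, rescaled to the same cycle-averaged F^2
F_two = F0 * (np.cos(Omega * t) + 0.4 * np.cos(2 * Omega * t))
F_two *= np.sqrt(cycle_avg_F2(F_cos) / cycle_avg_F2(F_two))

print(np.isclose(cycle_avg_F2(F_cos), F0**2 / 2))           # True
print(np.isclose(cycle_avg_F2(F_two), cycle_avg_F2(F_cos)))  # True
```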
\section{Theory}
\subsection{Classical approach}
We begin with a purely classical problem of finding returning
trajectories of an electron moving in a periodic EM field
with a given period $T$, corresponding frequency $\Omega=2\pi/T$,
and which does not contain a DC component:
\begin{equation}
F(t)=2{\rm Re} \sum\limits_{k=1}^{K} a_k e^{i k \Omega t}
\label{comp}
\end{equation}
The field is assumed to be linearly polarized along the $z$-axis.
Our task is to find the set of coefficients $a_k$ in \Eref{comp} for which the electron returning to the nucleus possesses the highest possible kinetic energy. For this problem to be well-defined, we must impose some restrictions on the possible choice of this set. A natural requirement is that only fields $F(t)$ of the same intensity are to be considered. This implies that $4\sum\limits_{k=1}^{K} |a_k|^2=F_0^2$, where $F_0$ is the amplitude of the monochromatic waveform $F_0 \cos{\Omega t}$ having the same intensity.
As it is customarily done in the classical 3-step model of
HHG, we neglect the influence of the atomic core on the
electron motion. We solve the classical equations of motion
of electron in the EM field
with the initial conditions $z(t_0)=0$,
$\dot z(t_0)=0$. Here $t_0$ is the moment of time when the atomic
ionization event occurs.
In the classical calculation, we do not introduce any envelope function to describe the EM field; i.e., as the driving force in the classical equations of motion we use a flat-envelope pulse of infinite duration. This is permissible since, in the quantum calculation presented in the next section, we shall use a pulse long enough that all transient effects, as well as all effects due to the finite duration of the pulse (such as the dependence on the carrier phase), become unimportant. The results of both calculations can, therefore, be legitimately compared and, as we shall see, give qualitatively similar results.
We are interested only in the
returning trajectories for which $z(t_1)$=0 for some
$t_1$. For such trajectories, we compute the kinetic energy
$E$ at the moment of return.
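For the pure cosine field, this procedure can be sketched numerically: the equation of motion $\ddot z=-F_0\cos{\Omega t}$ with $z(t_0)=\dot z(t_0)=0$ has an analytic solution, and scanning the release time $t_0$ over one cycle while detecting the first return to $z=0$ recovers the classical $3.17\,U_{\rm p}$ cut-off. The following Python sketch is illustrative only (it is not the code used for the calculations reported here):

```python
import numpy as np

F0, Omega = 0.0053, 0.0068          # field amplitude and frequency, a.u.
Up = F0**2 / (4 * Omega**2)         # ponderomotive potential
T = 2 * np.pi / Omega

def return_energy(t0, n=20000):
    """Kinetic energy at the first return to z = 0 for release time t0,
    from the analytic solution of z'' = -F0*cos(Omega*t), z(t0)=0, v(t0)=0."""
    t = np.linspace(t0 + 1e-6 * T, t0 + T, n)
    z = (F0 / Omega**2) * (np.cos(Omega * t) - np.cos(Omega * t0)) \
        + (F0 / Omega) * np.sin(Omega * t0) * (t - t0)
    crossings = np.where(np.diff(np.sign(z)) != 0)[0]   # sign changes => returns
    if len(crossings) == 0:
        return None                                     # trajectory never returns
    t1 = t[crossings[0]]
    v = (F0 / Omega) * (np.sin(Omega * t0) - np.sin(Omega * t1))
    return 0.5 * v**2

energies = [return_energy(t0) for t0 in np.linspace(0.0, T, 400)]
E_max = max(E for E in energies if E is not None)
print(E_max / Up)   # close to the classical cut-off coefficient 3.17
```

For a general waveform of the form \Eref{comp}, the same scan can be done with a numerical integrator in place of the analytic solution.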
We use the following field parameters: $I=10^{12}$ W/cm$^2$,
$F_0=0.0053$~a.u., $\Omega=0.185$ eV (6.705 $\mu$m). In this and the
subsequent section we consider the case of the Li atom with
the ionization potential $I_p=0.196$ a.u. For this set of the
field and atomic parameters, the value of the Keldysh
parameter $\gamma=\sqrt{I_p/2U_p}$=0.8.
Our choice of the field
parameters was motivated, primarily, by the following reason.
We need to choose a combination of $F_0$, $\Omega$, and
$I_p$ such that the picture of the HHG process \cite{hhgd},
which establishes the connection of HHG with returning classical
trajectories remains valid. Among the conditions of validity of this picture are the requirements that depletion of the ground state can be ignored, and that the value of the Keldysh parameter should be less than one \cite{hhgd}.
For the lithium atom, with its small ionization
potential, we have a rather narrow corridor of the field parameters,
which satisfy both these requirements.
For the field parameters thus defined we have the
value $F_0/\Omega^2\approx 115$ a.u. for the excursion radius
of electron motion in the EM field of the cosine form
$F_0\cos{\Omega t}$. Similar values
for the excursion radius are obtained for all EM fields given
by \Eref{comp} we consider below. Thus, the electron
moves predominantly far from the nucleus, and neglect of the
Coulomb potential in the classical equations of motion is
legitimate.
For the pure cosine form of the EM field, the classical procedure
described above
leads to the typical dependence of the kinetic energy at the
moment of return on the time of release shown in
\Fref{fig1} by the solid (red) line.
For convenience, in \Fref{fig1} we plot not just the kinetic
energy itself, but the quantity $ N=(E+I_P)/\Omega$, which
gives us the order of the harmonic corresponding to given kinetic
energy $E$.
The solid curve in \Fref{fig1} shows that, for the
parameters we chose, the maximum harmonic order
is approximately $N_{\rm cut-off}\approx 100$, which is a
visualization of the well-known $I_{\rm p}+3.17 U_{\rm p}$
cut-off rule.
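These numbers follow directly from the quoted parameters (a quick consistency check, taking 1 a.u. of energy to be 27.2114 eV):

```python
import numpy as np

F0 = 0.0053                 # field strength, a.u.
Omega = 0.185 / 27.2114     # 0.185 eV converted to a.u.
Ip = 0.196                  # Li ionization potential, a.u.

Up = F0**2 / (4 * Omega**2)           # ponderomotive potential
gamma = np.sqrt(Ip / (2 * Up))        # Keldysh parameter
N_cutoff = (Ip + 3.17 * Up) / Omega   # classical cut-off harmonic order

print(round(gamma, 2))      # 0.8
print(round(N_cutoff))      # 100
```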
For sets of parameters in \Eref{comp} defining EM fields different from the pure cosine wave, we proceed as follows. For each set of parameters in \Eref{comp}, subject to the fixed-intensity constraint (so that the field intensity has the same value as in the case of the pure cosine wave), we compute the maximum kinetic energy of the returning electron. This gives us a function defined on the set of the parameters $a_k$ in \Eref{comp}.
We look for the maximum of this function using the gradient ascent method, giving as starting values of the independent variables some particular set of the parameters in \Eref{comp} satisfying the fixed-intensity constraint. This procedure is guaranteed to converge to a local maximum. Since the convergence is generally quite fast and requires only a modest computational effort, we can repeat the procedure many times with different starting values, until we can be reasonably sure that we have found the global maximum.
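The constrained maximization can be sketched as projected gradient ascent: take a gradient step in the coefficients, rescale back onto the fixed-intensity sphere $4\sum_k|a_k|^2=F_0^2$, and restart from several random points. In the illustrative Python sketch below, a simple quadratic form (whose constrained maximum is known analytically) stands in for the far more expensive return-energy functional; it demonstrates the method, not the actual objective used here:

```python
import numpy as np

def project(a, F0):
    """Rescale coefficients so that 4 * sum |a_k|^2 = F0^2 (fixed intensity)."""
    return a * (F0 / 2) / np.linalg.norm(a)

def ascend(f, grad, a0, F0, lr=0.05, steps=20000):
    """Projected gradient ascent on the fixed-intensity sphere."""
    a = project(a0, F0)
    for _ in range(steps):
        a = project(a + lr * grad(a), F0)
    return a

# Toy stand-in for the return-energy functional: a quadratic form a^T M a,
# whose constrained maximum is lam_max * (F0/2)^2 (top eigenvector of M).
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)); M = M + M.T
f = lambda a: a @ M @ a
grad = lambda a: 2 * M @ a

F0 = 0.0053
best = max((ascend(f, grad, rng.normal(size=4), F0) for _ in range(10)), key=f)
lam_max = np.linalg.eigvalsh(M)[-1]
dev = abs(f(best) - lam_max * (F0 / 2)**2)
print(dev / abs(lam_max * (F0 / 2)**2))   # relative deviation from the analytic optimum
```

Replacing the toy objective by the maximum return energy computed from the classical trajectories (with its gradient estimated, e.g., by finite differences) gives the actual optimization used in this section.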
We perform two calculations of this kind. In the first, we
impose an additional restriction that
only the terms with odd $k$-values
are to be present in \Eref{comp}. This ensures that the
resulting HHG spectrum contains only odd harmonics of the main
frequency. In the
second calculation, we retain the terms
with both odd and even $k$-values
in the expansion \eref{comp}.
In this case, the resulting HHG spectrum contains even harmonics as well: the symmetry of the Hamiltonian which, when only odd harmonics are present in \eref{comp}, leads to only odd harmonics in the HHG spectrum, is broken by the superposition of fields of $\Omega$ and $2\Omega$ frequencies \cite{evenh}.
The first calculation
was performed with $K=7$, while in the second we chose
$K=5$. The resulting sets of coefficients $a_k$ for which
the maximum of the highest kinetic energy of the returning
electron is attained, are presented in Table~\ref{tab1}. Also
presented is the set consisting of only $a_1$, which defines
the pure cosine wave for the field parameters considered
above.
The degree to which this procedure increases the highest
energy of the returning electron is illustrated in
\Fref{fig1}. Resulting shapes of the driving field $F(t)$,
corresponding to the three cases considered above are
visualized in \Fref{fig2}.
\begin{table}[h]
\begin{tabular} {c crr}
& cosine wave & odd harmonics & odd and even harmonics \\
k & $ a_k\cdot 10^3$ & $ a_{2k-1}\cdot 10^3$ & $ a_{k}\cdot 10^3$ \\
\hline\hline
$1$ & 2.665 & $ 2.503-0.076i$ & $2.123-1.033i$ \\
$2$ & 0 & $-0.443-0.566i$ & $0.403+0.754i$ \\
$3$ & 0 & $ 0.061-0.385i$ & $-0.558+0.271i$ \\
$4$ & 0 & $ 0.138-0.264i$ & $-0.302-0.358i$ \\
$5$ & 0 & 0 & $0.224-0.248i$ \\
\end{tabular}
\caption{Coefficients in \Eref{comp} for which
the highest kinetic energy of the returning electron is
maximized. The second column: pure cosine
wave; the third column: odd harmonics with $K=7$;
the fourth column: odd and even harmonics with $K=5$
\label{tab1}}
\end{table}
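As a consistency check, each coefficient set in Table~\ref{tab1} should satisfy the fixed-intensity constraint $4\sum_k |a_k|^2=F_0^2$ up to the rounding of the tabulated values:

```python
import numpy as np

F0 = 0.0053   # a.u.

# Coefficient sets from Table I (in units of 10^-3 a.u.)
odd = np.array([2.503 - 0.076j, -0.443 - 0.566j,
                0.061 - 0.385j, 0.138 - 0.264j]) * 1e-3
odd_even = np.array([2.123 - 1.033j, 0.403 + 0.754j, -0.558 + 0.271j,
                     -0.302 - 0.358j, 0.224 - 0.248j]) * 1e-3

for a in (odd, odd_even):
    # fixed-intensity constraint: 4 * sum |a_k|^2 = F0^2
    print(np.isclose(4 * np.sum(np.abs(a)**2), F0**2, rtol=0.02))  # True
```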
\begin{figure}[h]
\epsfxsize=10cm
\epsffile{fig2.eps}
\caption{(Color online)
Classical dependence of the quantity $(E+I_{\rm P})/\Omega$ on the time of electron release within an optical cycle ($E$ is the electron energy at the moment of return to the nucleus, and $I_{\rm P}=0.196$~a.u. is the ionization potential of the Li atom). The three sets of curves correspond, respectively, to the pure cosine wave -- solid (red) line; odd harmonics in Table~\ref{tab1} -- dashed (green) line; and odd and even harmonics in Table~\ref{tab1} -- short (blue) dash. }
\label{fig1}
\end{figure}
\begin{figure}[h]
\epsfxsize=10cm
\epsffile{fig1.eps}
\caption{(Color online)
EM fields corresponding to the coefficients $a_k$ listed in
Table~\ref{tab1}. The pure cosine wave -- (red) solid line; odd
harmonics in \Eref{comp} with $K=7$ -- (green) dashed line; odd
and even harmonics in
\Eref{comp} with $K=5$ -- (blue) short dash. }
\label{fig2}
\end{figure}
As one can see, the set of parameters corresponding to
only odd harmonics present in \Eref{comp} allows one to achieve
a 10\% gain in the position of the cut-off. The curve
representing the dependence of the kinetic energy on the time of
release remains symmetric with respect to the translation
$t\to t+T/2$, as in the case of the pure cosine wave. This
is, in fact, a general property exhibited by the classical
solutions of the equations of motion in the EM field with
only odd harmonics present in \Eref{comp}, which leads to
essentially the same structure of the classical returning
electron trajectories as in the case of the cosine wave.
There are two such trajectories per every half cycle of the
EM field (the so-called ``long'' and ``short'' trajectories)
for the plateau region, i.e. for the kinetic energies below
the apex of the corresponding curves in \Fref{fig1}.
There is one trajectory per half cycle with the
kinetic energy of the returning electron near the apex of
the curves (the cut-off harmonics).
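This classical analysis is straightforward to reproduce numerically. The sketch below (a minimal illustration, not the optimization code used to produce Table~\ref{tab1}; the field amplitude and frequency in the test are arbitrary placeholders in atomic units) releases the electron at rest at the origin at a time $t_0$, integrates Newton's equation in the field $F(t)={\rm Re}\sum_k a_k e^{ik\Omega t}$, and records the kinetic energy at the first return to the origin; scanning $t_0$ over one period yields the classical cut-off.

```python
import numpy as np

def max_return_energy(coeffs, omega, n_t0=400, n_steps=4000, n_cycles=2):
    """Scan release times t0 over one optical period of the field
    F(t) = Re sum_k a_k exp(i k omega t), k = 1..K (atomic units),
    and return the largest kinetic energy of a classical electron at
    its first return to the origin (3-step model: zero initial
    velocity, dipole approximation, x'' = -F(t))."""
    T = 2.0*np.pi/omega
    best = 0.0
    for t0 in np.linspace(0.0, T, n_t0, endpoint=False):
        t = np.linspace(t0, t0 + n_cycles*T, n_steps)
        dt = t[1] - t[0]
        F = np.real(sum(a*np.exp(1j*(k + 1)*omega*t)
                        for k, a in enumerate(coeffs)))
        # trapezoidal integration of v' = -F and x' = v, with v(t0)=x(t0)=0
        v = -dt*np.concatenate(([0.0], np.cumsum(0.5*(F[1:] + F[:-1]))))
        x = dt*np.concatenate(([0.0], np.cumsum(0.5*(v[1:] + v[:-1]))))
        # the first sign change of x after release marks the return
        crossings = np.where(x[1:-1]*x[2:] < 0.0)[0]
        if crossings.size:
            i = crossings[0] + 2
            best = max(best, 0.5*v[i]**2)
    return best
```

For a pure cosine wave this recovers the familiar cut-off of $\approx 3.17\,U_p$ with $U_p=F_0^2/(4\Omega^2)$; feeding in the complex coefficients of Table~\ref{tab1} instead reproduces the $\sim$10\% and $\sim$20\% gains discussed in the text.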
The situation is different when both even and odd harmonics are present
in \Eref{comp}. The kinetic energy curves are no longer
symmetric with respect to the half cycle translation
$t\to t+T/2$.
One should note that the increase in the cut-off position shows
very little sensitivity to a further increase of the number of
terms in \Eref{comp}. If, for example, we used $K=9$ instead
of $K=7$ in the case of only odd harmonics included in
\Eref{comp}, we would have gained an additional increase in the
cut-off position of the order of 1\%. A similar observation
applies to the case of even and odd harmonics in
\Eref{comp}. This indicates that the low-order harmonics
in the series (\ref{comp}) are primarily responsible for the
increase in the cut-off position, and the pulses composed
using the coefficients in Table~\ref{tab1} are optimal
in the sense that no further significant increase in the
cut-off position is possible as long as we rely on the
expansion \eref{comp} for the waveform.
The discussion presented so far was purely classical and
constituted a simple generalization of the 3-step model to the
case of the EM field given by \Eref{comp}.
A quantum calculation
is needed to confirm the classical results.
Such a calculation is presented in the next section.
\subsection{Quantum calculation}
In this section, we present results of the HHG calculation
for the Li atom for the set of coefficients $a_k$ given in
Table~\ref{tab1}. We use the procedure we developed
recently in Ref.~\cite{hhresn} for the solution of the TDSE for
realistic atomic targets that can be described within the
single-active-electron approximation. For completeness, the most
essential features of this procedure are outlined below.
The field-free atom in the ground state is described by
solving a set of self-consistent Hartree-Fock equations
\cite{CCR76}. The field-free Hamiltonian ${\hat H}_{\rm atom}$
in this model is thus a non-local integro-differential
operator.
The EM field is chosen to be linearly polarized along the
$z$-axis. We describe the atom-EM field interaction using
the length gauge: $\hat H_{\rm {int}}=zF_z(t)$, where
$F_z(t)=f(t)F(t)$. The function $F(t)$ is given by \Eref{comp},
where we use one of the three sets of coefficients from
Table~\ref{tab1}. The switching function $f(t)$ smoothly
grows from 0 to 1 on a switching interval $0<t<T_1$, and
is constant for $t>T_1$. The switching time is $T_1=5T$.
We represent the solution of the TDSE in the form of an expansion over
a set of so-called pseudostates:
\begin{equation}
\Psi({\bm r},t)=\sum\limits_{j}
b_j(t) f_j({\bm r})
\label{exp}
\end{equation}
This set is obtained by diagonalizing the field-free atomic
Hamiltonian on a suitable square integrable basis
\cite{B94,bstel}:
\begin{equation}
\langle f^N_{i}|{\hat H}_{\rm atom}| f^N_{j}\rangle=E_{i}
\delta_{ij}
\ .
\label{pseud}
\end{equation}
Here the index $j$ comprises the principal $n$ and orbital $l$
quantum numbers, $E_{j}$ is the energy of a
pseudostate and $N$ is the size of the basis.
To construct the set of pseudostates satisfying \Eref{pseud}, we
use either the Laguerre basis or a set of
B-splines (for angular momenta $l>15$), confined to a box of
size $R_{\rm max}=200$ a.u.
B-splines of the order $k=7$ with the knots located at the
sequence of points lying in $[0,R_{\rm max}]$ are employed.
All the knots $t_i$
are simple, except for the knots located at the origin and the
outer boundary $R=R_{\rm max}$ of the box. These knots have
multiplicity $k=7$. The simple knots were distributed in $(0,R_{\rm
max})$ according to the rule $t_{i+1}=\alpha t_i+\beta$. The parameter
$\alpha$ was close to 1, so that the resulting distribution of the
knots was almost equidistant. For each value of the angular momentum
$l$, the first $l+1$ B-splines and the last B-spline resulting from
this sequence of knots were discarded. Any
B-spline in the set thus vanishes at least as fast as $r^{l+1}$
at the origin and assumes zero value at the outer boundary.
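The knot construction described above can be sketched as follows (an illustration only; the number of interior knots and the value of $\alpha$ are assumed here and are not the values used in the actual calculation):

```python
import numpy as np

def knot_sequence(r_max=200.0, order=7, n_interior=60, alpha=1.02):
    """Knot sequence on [0, r_max] with endpoint knots of multiplicity
    `order` and interior knots generated by t_{i+1} = alpha*t_i + beta
    (nearly equidistant for alpha close to 1).  beta is fixed so that
    the (n_interior+1)-th point of the recurrence lands on r_max,
    using t_n = beta*(alpha**n - 1)/(alpha - 1)."""
    beta = r_max*(alpha - 1.0)/(alpha**(n_interior + 1) - 1.0)
    t, pts = beta, []
    for _ in range(n_interior):
        pts.append(t)
        t = alpha*t + beta          # after the loop, t == r_max
    return np.concatenate([np.zeros(order), pts, np.full(order, r_max)])
```

For each angular momentum $l$, the first $l+1$ splines and the last spline built on this sequence are then discarded, which enforces the $r^{l+1}$ behaviour at the origin and a zero at the box boundary.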
In the present calculation, the system is confined within a
box of finite size, which may lead to the appearance of
spurious harmonics in the spectrum due to the reflection of the
wavepackets from the boundaries of the box \cite{hhg1}. One
can minimize this effect by using a mask function or an
absorbing potential. We use the absorbing potential
$-iW({\bm r})$ which is a smooth function, zero for $r\leq
180$ a.u. and continuously growing to a constant $-iW_0$
with $W_0=2$ a.u. outside this region.
For the EM field parameters which we employed in the
classical treatment of the previous section, the maximum
harmonic order is of
the order of a hundred. This implies that to describe
accurately the formation of all harmonics, we have to retain
pseudostates with correspondingly high angular momenta. In
the calculation we present below, the pseudostates with
angular momenta $l<120$ were retained in \Eref{exp}.
With the total Hamiltonian and basis set thus defined, the
TDSE can be rewritten as a system of differential
equations for the coefficients $b_j(t)$ in \Eref{exp}. This
system is solved on the time interval $(0,30T)$, where $T$
is the period of the EM field, using the Crank-Nicolson method
\cite{crank}.
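A minimal sketch of one step of this propagation scheme, written for a toy Hermitian matrix rather than the full pseudostate Hamiltonian used in the actual calculation:

```python
import numpy as np

def crank_nicolson_step(b, H, dt):
    """One Crank-Nicolson step for i db/dt = H b (a.u., hbar = 1):
    (1 + i*dt/2 * H) b_new = (1 - i*dt/2 * H) b_old.
    The scheme is second-order accurate in dt and exactly unitary
    for Hermitian H, so the norm of b is conserved."""
    eye = np.eye(H.shape[0])
    return np.linalg.solve(eye + 0.5j*dt*H, (eye - 0.5j*dt*H) @ b)
```

Since the Cayley form $(1+\mathrm{i}\,\delta t H/2)^{-1}(1-\mathrm{i}\,\delta t H/2)$ is unitary for Hermitian $H$, the norm of the coefficient vector is conserved exactly, which is one reason the Crank-Nicolson scheme is a standard choice for the TDSE.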
Finally, the harmonics spectrum is computed as \cite{hhg1}:
\begin{equation}
|d(\omega)|^2=
\left |{1\over t_2-t_1}
\int\limits_{t_1}^{t_2}e^{-i\omega t}d(t)\ dt \right|^2
\ .
\label{hhg}
\end{equation}
Here $\displaystyle d(t)=\langle \Psi(t)|z|\Psi(t)\rangle$
is the expectation value of the dipole moment; the limits of
integration $t_1$ and $t_2$ are chosen large enough to
minimize transient effects (we use the last 10 cycles of the
pulse duration, i.e., $t_1=20T$, $t_2=30T$).
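A discretized version of this formula for a uniformly sampled dipole signal can be sketched as follows (the test signal is a placeholder, not the computed Li dipole):

```python
import numpy as np

def harmonic_spectrum(d, t, omegas):
    """Discretized Eq. (hhg):
    |d(w)|^2 = |1/(t2-t1) * int_{t1}^{t2} exp(-i w t) d(t) dt|^2
    for a dipole signal d sampled on the uniform grid t."""
    dt = t[1] - t[0]
    span = dt*len(t)                 # total integration window t2 - t1
    amp = (np.exp(-1j*np.outer(omegas, t)) @ d)*dt/span
    return np.abs(amp)**2
```

For $d(t)=\cos(3t)$ sampled over an integer number of periods, the spectrum evaluates to $1/4$ at $\omega=3$ and vanishes at other commensurate frequencies.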
\section{Results}
In Figures~\ref{fig3a}--\ref{fig3c}
we show the harmonics spectra resulting from
the TDSE calculation for the three choices of the EM field
coefficients listed in Table~\ref{tab1}. We remind the
reader that in all three cases the field intensity is equal to
$W=10^{12}$~W/cm$^2$.
\begin{figure}[h]
\epsfxsize=10cm
\epsffile{fig3a.eps}
\caption{(Color online)
Harmonics spectra of Li for the EM fields from Table
\ref{tab1}. Pure cosine wave ((red) solid line),
classical cut-off position
marked with the (green) dashed line.}
\label{fig3a}
\end{figure}
\begin{figure}[h]
\epsfxsize=10cm
\epsffile{fig3b.eps}
\caption{(Color online)
Harmonics spectra of Li for the EM fields from Table
\ref{tab1}.
Odd
harmonics in \Eref{comp} with $K=7$
((red) solid line),
classical cut-off position
marked with the (green) dashed line.}
\label{fig3b}
\end{figure}
\begin{figure}[h]
\epsfxsize=10cm
\epsffile{fig3c.eps}
\caption{(Color online)
Harmonics spectra of Li for the EM fields from Table
\ref{tab1}.
Odd and even
harmonics in \Eref{comp} with $K=5$
((red) solid line),
classical cut-off position
marked with the (green) dashed line.}
\label{fig3c}
\end{figure}
The general appearance of these spectra agrees with the
expectations based on the classical results of the previous
section. One can observe the increase in the cut-off
position for the pulse shaped according to the recipe from
the third column of Table~\ref{tab1} (only odd harmonics
with $K=7$ in \Eref{comp}), compared to the cut-off
position for a pure cosine wave of the same intensity.
The cut-off position increases further for the pulse
constructed using the set of the coefficients from the
fourth column of Table~\ref{tab1} (odd and even harmonics with
$K=5$ in
\Eref{comp}). In this case, the spectrum contains
harmonics of both odd and even orders. A magnified fragment
of the spectrum illustrating this fact is shown in
\Fref{fig4}.
\begin{figure}[h]
\epsfxsize=10cm
\epsffile{fig4.eps}
\caption{(Color online)
Part of the spectrum of Li for
odd and even harmonics in \Eref{comp} with $K=5$.
}
\label{fig4}
\end{figure}
Quantitatively, the TDSE results for the cut-off positions
are in good agreement with the classical predictions,
summarized in \Fref{fig1}. Use of the pulse constructed
from all harmonics with $K=5$ in \Eref{comp} allows one to
increase the cut-off position by about 20\%, in agreement
with the classical analysis given above.
We can, in fact, establish a closer correspondence between
classical and quantum results by performing the
time-frequency analysis of our data. The techniques used
for this purpose, the wavelet transform
\cite{wavelet4,wavelet3,wavelet2,wavelet1}, or the closely
related Gabor transform \cite{wavelet4,hhres}, offer the
possibility of tracking the process of harmonics formation in
time, combining both the frequency and temporal resolution of
a signal. By using these techniques, we can try to find, in
the quantum domain, the traces left by the classical
trajectories. The fact that such traces may be present
follows from the quantum-mechanical treatment of the HHG
process given in \cite{hhgd}, where the classical
trajectories naturally appear in the saddle-point
analysis. Such a manifestation of the classical trajectories
in the HHG spectra was demonstrated, for example, for the
hydrogen atom \cite{wavelet2}.
We perform our analysis of the HHG process by applying the wavelet
transform to the dipole $d(t)$ in
\Eref{hhg}. This transform
is defined as \cite{wavelet}
\begin{equation}
T_{\Psi}(\omega,\tau)=\int d(t)\sqrt{|\omega|}
\Psi^*(\omega t-\omega \tau)\ dt\ .
\label{wav1}
\end{equation}
The transform is generated by the Morlet wavelet
$\displaystyle \Psi(x)= x_0^{-1} \exp(-ix) \exp[-x^2/(
2x_0^2)]$.
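For a sampled dipole signal, \Eref{wav1} can be evaluated directly on the time grid (a sketch; the value $x_0=10$ and the test signal are illustrative assumptions):

```python
import numpy as np

def wavelet_transform(d, t, omega, tau, x0=10.0):
    """Eq. (wav1) with the Morlet wavelet
    Psi(x) = exp(-i x) exp(-x^2/(2 x0^2)) / x0,
    evaluated for one (omega, tau) pair on the uniform grid t."""
    x = omega*(t - tau)
    psi = np.exp(-1j*x)*np.exp(-x**2/(2.0*x0**2))/x0
    dt = t[1] - t[0]
    return np.sqrt(abs(omega))*np.sum(d*np.conj(psi))*dt
```

The Gaussian envelope of width $x_0/\omega$ in time is what provides the combined temporal and frequency resolution: a monochromatic signal produces a large response only when the probe frequency $\omega$ matches its frequency.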
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\resizebox{50mm}{!}{\epsffile{fig5a.eps}} &
\resizebox{50mm}{!}{\epsffile{fig5b.eps}} \\
\end{tabular}
\caption
{(Color online)
Wavelet time-spectrum of Li for the 61st (left panel) and
101st (right panel) harmonics for the pure cosine wave
in \Eref{comp}.
}
\label{fig5}
\end{center}
\end{figure}
\Fref{fig5} presents a well-known picture of the harmonics
formation in time \cite{wavelet2}. For the plateau
harmonics, the amplitude of the wavelet transform has four
maxima per cycle, corresponding to the two pairs of the
so-called long and short trajectories for the harmonics at
the plateau. For the near-cut-off 101st harmonic, two
maxima per cycle are present. These features agree
completely with the classical picture shown in
\Fref{fig1}.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\resizebox{50mm}{!}{\epsffile{fig6a.eps}} &
\resizebox{50mm}{!}{\epsffile{fig6b.eps}} \\
\end{tabular}
\caption
{(Color online)
Wavelet time-spectrum of Li for the 65th (left panel) and
99th (right panel) harmonics for the pulse
with only odd harmonics in \Eref{comp} ($K=7$).
}
\label{fig6}
\end{center}
\end{figure}
For the pulse containing only
odd harmonics in \Eref{comp}, the classical picture of
the dependence of the kinetic energy on the time of release, presented
in \Fref{fig1}, is very similar to the curve for the pure
cosine wave. We can therefore expect the
results of the wavelet transform in this case to be
qualitatively similar to those shown in \Fref{fig5}.
That this is indeed the
case can be observed in \Fref{fig6}.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\resizebox{50mm}{!}{\epsffile{fig7a.eps}} &
\resizebox{50mm}{!}{\epsffile{fig7b.eps}} \\
\resizebox{50mm}{!}{\epsffile{fig7c.eps}} &
\resizebox{50mm}{!}{\epsffile{fig7d.eps}} \\
\end{tabular}
\caption
{(Color online) Wavelet time-spectrum of Li for the 65th,
75th, 97th, and 111th harmonics (from left to right and
top to bottom). The driving field contains terms with both odd and
even $k$-values in \Eref{comp} with $K=5$. }
\label{fig7}
\end{center}
\end{figure}
For the field waveform containing both odd and even harmonics
in \Eref{comp}, the classical analysis reveals
a somewhat different picture. As one can see from
\Fref{fig1}, there are two pairs of
classical trajectories per cycle for which the kinetic energy of
the returning electron is such that less than approximately
60 harmonics can be formed. When the harmonic order
increases and reaches the value of
approximately 75 (the cut-off region of the smaller maximum of
the corresponding curve in \Fref{fig1}), there are three
returning trajectories per cycle. For higher energies, there
remain only two classical trajectories which can
participate in the formation of the harmonics. For still
higher energies, a single such trajectory exists.
As can be observed from \Fref{fig7}, the quantum calculation
apparently confirms these classical considerations. The wavelet
spectra demonstrate that the number of maxima per cycle
progressively decreases with increasing harmonic
order.
\section{Conclusion}
We demonstrated an increase of the cut-off value for the HHG
process when a superposition of several harmonics of a given
frequency is used to build a waveform of the driving EM
field. We analyzed the classical returning electron
trajectories for the fields thus constructed. Such an
analysis shows that a spectral composition of the field can be
found for which a 20\% increase in the value
of the maximum classical kinetic energy of the recombining
electron is achieved as compared to the case of a cosine wave
of the same intensity.
A TDSE calculation of the HHG spectrum for such a driving
field, performed for the Li atom, confirms the classical
result, demonstrating an increase in the cut-off
value of the order of 20\%.
This value represents the maximum increase which
can be achieved if we restrict the trial waveform to that
given by \Eref{comp} at fixed
intensity. Indeed, the classical calculation shows that no
further noticeable increase of the maximum classical kinetic
energy of the recombining electron can be achieved by adding
higher-order harmonic terms to the expansion
\eref{comp}.
Our result thus presents an upper limit on the increase of the HHG
cutoff achievable for the class of waveforms given by
\Eref{comp}, i.e., for waveforms which are periodic with
a given period $T$ and do not contain a DC component.
This suggests that to achieve a more substantial increase
in the HHG cutoff, one should use waveforms which
cannot be described by \Eref{comp}. Such are the ideal waveform
proposed in \cite{kinsler},
for which the term with $k=0$ must be added
to the sum in \Eref{comp}, or the field configurations containing
subharmonic fields with frequencies $\Omega/2$, such as those used in
\cite{hhgmcol2,subharmonic}. As the results of these works indicate, a
considerably larger gain in the cutoff energy can be achieved
for such waveforms. These results, and the result obtained in the
present work, allow us to draw the following conclusion: the strategy
based on low-frequency (subharmonic) modifications of the waveform
may be more efficient than the strategy relying on introducing
multiple-frequency components into the trial waveform as in
\Eref{comp}. This may provide a useful guide for problems related
to modification of the high-frequency part of the HHG spectrum.
The time-frequency analysis of the results of the TDSE
calculation illustrates the role which the classical
trajectories play in the formation of the harmonics. The
usual picture of HHG rendered by this technique exhibits
traces of four (for the plateau harmonics) or two (harmonics
near cut-off) trajectories per optical cycle, which
participate in forming a particular harmonic. In the case
of the waveform constructed from the terms of odd and even order in
\Eref{comp}, the picture revealed by the wavelet analysis is
different. The number of contributing trajectories in this case
varies with energy in agreement with the classical picture
of \Fref{fig1}. Depending on the harmonics order, there may
be four, three, two or just a single such trajectory.
For a single atom, each
such trajectory leads to the formation of a short burst of EM
radiation, producing a pulse train. In the case of the HHG driven by
the single-color $T$-periodic EM field, such a train is a $T/2$-periodic
sequence of bursts, with two bursts on each interval of length
$T/2$, corresponding to the short and long trajectories within
a half cycle. For each harmonic order, the contributions of these two
trajectories interfere strongly, leading to the random distribution
of the phases of different harmonics in the plateau region.
This situation is changed \cite{trains} if propagation
effects are taken into account. Depending on the particular
propagation geometry, one of the contributions (of either short
or long trajectories) is suppressed, the propagated harmonic
components become locked in phase, and the macroscopic signal
is a train with one pulse per half cycle.
For the case of
the waveform with only odd harmonics in \Eref{comp},
propagation should have exactly the same effect as for the
single-color field. For this waveform, the classical curve in \Fref{fig1}
has exactly the same form as in the single-color case, giving rise
to the same set of long and short trajectories per half cycle
of the laser field. The analysis given in \cite{trains} shows that
propagation effects reduce the contribution of one of the trajectories, since
the phases of the two trajectories change differently with laser intensity and hence
behave differently in
the nonlinear medium. Depending on the particular geometry,
the contribution of either trajectory can thus be reduced.
The curve in \Fref{fig1} suggests that in the case of
the waveform with only odd harmonics in \Eref{comp} we should
have an analogous situation.
For the case of the waveform with odd and even
harmonics in \Eref{comp}, the pulse train produced by a
single atom is no longer $T/2$-periodic, but $T$-periodic.
This is clearly seen from \Fref{fig1}. It is, of course, also
obvious from the fact that even harmonics are present in the HHG
spectrum in this case: the separation of the harmonics is not $2\Omega$
but $\Omega$, and consequently
the signal is a $T$-periodic function. On each
interval of length $T$ we have, depending on the number of the
classical trajectories, four, three, two, or a single pulse of
different intensities. For harmonics with orders $N>80$, when,
as seen from \Fref{fig1}, there are only two trajectories to consider,
propagation should produce essentially the same effect as
in the case of the single-color field. These two trajectories interfere,
their phases depending differently
on the laser intensity. Thus, as in the single-color case,
propagation may reduce the contribution of one of these trajectories, making
the harmonics phase-locked.
The macroscopic signal will in this case be a train
with one pulse per cycle.
For the lower-order harmonics, when all four trajectories contribute
with different amplitudes and phases, the situation is more complicated.
It can hardly be expected that propagation effects would suppress
the contributions of all but one trajectory, and thus completely eliminate the
interference between the contributions of the different trajectories.
The harmonics, therefore,
may not be locked in phase in this case.
\section{Acknowledgements}
The authors acknowledge support of the Australian Research Council in
the form of the Discovery grant DP0771312. Resources of the National
Computational Infrastructure (NCI) Facility were
employed. One of the authors (ASK) wishes to thank the Kavli
Institute for Theoretical Physics for hospitality.
This work was supported in part by the NSF Grant No.~PHY05-51164
\section{Omnipresence of filamentary structures in the interstellar medium}
While molecular clouds were already known to exhibit filamentary structures \citep[e.g.,][]{Schneider1979,Abergel1994},
the omnipresence of filaments in the interstellar medium (ISM) and molecular clouds
has only recently been revealed thanks to the high resolution and the high dynamic range of \textit{Herschel}\ observations of the submillimeter (submm) dust emission \citep[e.g.,][and Fig.\,\ref{Herschel_Mosaic}]{Andre2010,Molinari2010}.
Furthermore, the all-sky maps of dust submm emission observed by \planck\ in total intensity, as well as in polarized intensity,
emphasize the large-scale, hierarchical filamentary texture of the Galactic ISM \citep{planck2015-XIX,planck2016-XXXII}.
\textit{Herschel}\ and \planck\ data show that interstellar matter is organized in web-like networks of filaments,
which appear to be formed as a natural result of the physics at play in the ISM.
The presence of interstellar filaments in both star-forming regions and quiescent, non-star-forming clouds points to a filament formation process that precedes any star-forming activity \citep{Andre2010}. The spatial distribution of prestellar cores and protostars extracted from \textit{Herschel}\ images, which are observed mainly along the densest filaments \citep{Konyves2015}, indicates that the properties of interstellar filaments may be a key element defining the initial conditions required for the onset of star formation \citep{Andre2014}.
\begin{figure*}
\hspace{-0.5cm}
\begin{minipage}{1\linewidth}
\centering
\resizebox{18.cm}{!}{\includegraphics[angle=0]{Arzoumanian_Fig1.pdf}}
\end{minipage}
\caption{Column density, $N_{\rm H_2}$ [cm$^{-2}$], maps derived from \textit{Herschel}\ five-wavelength images [from 70 to 500\,$\mu$m] observed as part of the \textit{Herschel}\ Gould Belt survey \citep{Andre2010}.
These seven molecular clouds are framed with colours reflecting their star forming activity:
from actively star forming (blue) to mostly quiescent (red).
}
\label{Herschel_Mosaic}
\end{figure*}
\begin{figure*}[ht]
\hspace{-0.1cm}
\begin{minipage}{1\linewidth}
\centering
\resizebox{17.cm}{!}{\includegraphics[angle=0]{Arzoumanian_Fig2.pdf}}
\end{minipage}
\caption{
{\bf Left:} Distribution of deconvolved FWHM widths for 278 filaments observed in 8 different regions (black solid histogram, filled in orange), with a
median value of $0.09\pm0.04$\,pc.
For comparison, the blue dashed histogram represents the distribution of central (thermal) Jeans lengths of the filaments.
{\bf Right:} Radial column density profile averaged along the length of the B211/13 filament in Taurus \citep{Palmeirim2013}. The median absolute deviation of the radial profiles along the filament length is shown in yellow. The profile is well fitted with a {\it Plummer-like} function (red dashed curve) where the density decreases as $r^{-2}$ at large radii \citep[][]{Arzoumanian2011,Palmeirim2013}.
}
\label{ProfWidth}
\end{figure*}
Hence, characterizing the observed filament properties in detail, combining tracers of gas and dust in total and polarized intensities is essential to make progress in our understanding of the physical processes involved in the formation and evolution of interstellar filaments and their role in the star formation process.
In the following, I discuss the main results on the properties of interstellar filaments derived from \textit{Herschel}, \planck, and molecular line observations.
\textit{Herschel}\ continuum observations are essential to describe the filament (column) density distribution. These data are complemented with ground-based molecular line observations to access the kinematics of the filamentary structures, while \planck\ dust polarization observations give unprecedented information on the structure of the magnetic field and its connection with interstellar matter.
These results are presented in the context of a new paradigm of star formation, which is closely linked to the formation and fragmentation of self-gravitating filaments.
\section{Filament properties as derived from \textit{Herschel}\ observations of nearby clouds}\label{profiles}
Statistical analysis of nearby interstellar filaments has been possible thanks to the \textit{Herschel}\ Gould Belt survey observations \citep{Andre2010}, which are ideal to characterize the filament properties, providing the resolution, the sensitivity, and the statistics needed for such studies. \textit{Herschel}\ observations of a large number of clouds with different star formation activities and environments have been analysed \citep[e.g.,][and Fig.\,\ref{Herschel_Mosaic}]{Men'shchikov2010,Peretto2012,Schneider2013}.
Detailed measurements of the radial column density profiles (see Fig.\,\ref{ProfWidth}-Right) derived from \textit{Herschel}\ column density maps show that interstellar filaments are characterized by a narrow distribution of central widths, around 0.1 pc (Fig.\,\ref{ProfWidth}-Left), while they span almost three orders of magnitude in central column density as can be seen on the left hand side of Fig.\,\ref{VelDisp_coldens}\,\citep[][and Arzoumanian et al., in prep.]{Arzoumanian2011}.
This characteristic filament width \citep[cf.,][for an independent analysis]{Koch2015} is well resolved by \textit{Herschel}\ observations of the Gould Belt clouds as can be seen on the radial profile shown in Fig.\,\ref{ProfWidth}-Right and Fig.\,\ref{VelDisp_coldens}-Left, where the measurements of the filament widths lie above the horizontal lines corresponding to the resolution limits.
This typical filament width of 0.1\,pc is also in contrast with the much broader distribution of central Jeans lengths, $\lambda_{\rm J} \propto c_{\rm s}^2/(GN_{\rm H_{2}}^0)$,
from 0.02 to 1.3\,pc (for $T=10$\,K), implying that these filaments are not in hydrostatic equilibrium.
These filaments also span a wide range in mass per unit length ($M_{\rm line}$), estimated from their column density profiles.
The mass per unit length of a filament is a very important parameter, which defines its ``stability'': a filament is subcritical (unbound) when its mass per unit length is smaller than the critical value $M_{\rm line, crit} = 2\, c_s^2/G \sim 16\, M_\odot$/pc, where $c_{\rm s} \sim 0.2$~km/s is the isothermal sound speed for $T \sim 10$~K, and $G$ is the gravitational constant \citep[][]{Ostriker1964}. A filament is supercritical (unstable to radial collapse and fragmentation) when $M_{\rm line}>M_{\rm line, crit}$ \citep{Inutsuka1997}.
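The numerical value of $M_{\rm line, crit}$ quoted here follows directly from the definition (a back-of-the-envelope sketch; the mean molecular weight $\mu=2.33$ is a standard assumption for molecular gas including helium):

```python
import numpy as np

# SI constants
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
m_H = 1.673e-27    # hydrogen mass, kg
M_sun = 1.989e30   # solar mass, kg
pc = 3.086e16      # parsec, m

def m_line_crit(T=10.0, mu=2.33):
    """Critical mass per unit length M_line,crit = 2 c_s^2 / G, in
    M_sun/pc, for an isothermal gas at temperature T (K) with mean
    molecular weight mu."""
    c_s = np.sqrt(k_B*T/(mu*m_H))   # isothermal sound speed, ~0.19 km/s at 10 K
    return 2.0*c_s**2/G*pc/M_sun
```

This gives $M_{\rm line, crit}\approx16\,M_\odot$/pc at $T=10$\,K, with $c_{\rm s}\approx0.19$\,km/s, and scales linearly with the gas temperature.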
\section{Internal velocity dispersions of interstellar filaments derived from molecular line observations}
The total velocity dispersions ($\sigma_{\rm tot}$) of selected positions towards a sample of 46 filaments, derived from ($^{13}$CO, C$^{18}$O, and N$_2$H$^+$)
molecular line observations, are presented in Fig.\,\ref{VelDisp_coldens}--Right. Thermally subcritical and nearly critical filaments have
transonic velocity dispersions ($c_{\rm s} \lesssim \sigma_{\rm tot} < 2c_{\rm s}$) independent of column density and are gravitationally unbound.
The velocity dispersion of thermally supercritical filaments increases as a function of their column density (roughly as $ \sigma_{\rm tot} \propto {N_{\rm H_2}}^{0.5} $).
These measurements confirm that there is a critical threshold in $M_{\rm line}$ above which filaments are self-gravitating and below which they are unbound. The position of this threshold is consistent within a factor of two with the critical value $M_{\rm line,crit} \sim$ 16~M$_{\odot}$/pc for T=10~K, equivalent to a column density of $8\times10^{21}$~cm$^{-2}$ \citep[][]{Arzoumanian2013}.
These observations show that the mass per unit length of supercritical filaments is close to their virial mass per unit length $M_{\rm line,vir}=2\sigma_{\rm tot}^2/G$ \citep{Fiege2000} where $\sigma_{\rm tot}$ is the observed total velocity dispersion (instead of the thermal sound speed used in the expression of $M_{\rm line,crit}$).
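The two line-mass estimates compared here can be checked with a back-of-the-envelope computation (the filament width $W=0.1$\,pc and the mean molecular weight per H$_2$ molecule, $\mu_{\rm H_2}=2.8$, are assumptions of this sketch):

```python
# SI constants
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.673e-27    # hydrogen mass, kg
M_sun = 1.989e30   # solar mass, kg
pc = 3.086e16      # parsec, m

def m_line_vir(sigma_tot_kms):
    """Virial mass per unit length M_line,vir = 2 sigma_tot^2 / G,
    in M_sun/pc, for a total velocity dispersion given in km/s."""
    s = sigma_tot_kms*1.0e3                 # km/s -> m/s
    return 2.0*s**2/G*pc/M_sun

def m_line_from_column(n_H2_cm2, width_pc=0.1, mu_H2=2.8):
    """Line mass of a filament of width W and central column density
    N_H2, M_line ~ mu_H2 m_H N_H2 W, in M_sun/pc (mu_H2 = 2.8 is the
    assumed mean molecular weight per H2 molecule, including He)."""
    surf = mu_H2*m_H*n_H2_cm2*1.0e4         # surface density, kg/m^2
    return surf*(width_pc*pc)*pc/M_sun
```

With these assumptions, $N_{\rm H_2}=8\times10^{21}$\,cm$^{-2}$ and $W=0.1$\,pc give $M_{\rm line}\approx18\,M_\odot$/pc, and $\sigma_{\rm tot}=0.2$\,km/s gives $M_{\rm line,vir}\approx19\,M_\odot$/pc, both consistent with the $\sim16\,M_\odot$/pc threshold to within the uncertainty of the adopted molecular weight.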
\begin{figure*}
\hspace{-0.05cm}
\begin{minipage}{1\linewidth}
\centering
\resizebox{8.cm}{!}{\includegraphics[angle=0]{Arzoumanian_Fig3a.pdf}}
\hspace{0.05cm}
\resizebox{8.cm}{!}{\includegraphics[angle=0]{Arzoumanian_Fig3b.pdf}}
\hspace{0.05cm}
\end{minipage}
\caption{
{\bf Left:}
Mean deconvolved width versus background subtracted central column density for the filament sample analysed in 8 regions (indicated on the plot). The spatial resolutions of the column density maps
are marked by the horizontal dotted lines. The solid line running from top left to bottom right shows the central Jeans length as a function of central column density. The upper $x$-axis scale is an estimate of the filament mass per unit length in units of the thermal critical value $M_{\rm line,crit} = 2c^2_{\rm s} /G$, where $M_{\rm line}\propto W N_{\rm H_{2}}^0$ with $W = 0.1$~pc \citep[][]{Arzoumanian2011}.
{\bf Right:}
Filament total velocity dispersion versus observed central column density.
The vertical dashed line marks the boundary between thermally subcritical and thermally supercritical filaments where the estimated mass per unit length $M_{\rm line}$ is approximately equal to the critical value $M_{\rm line,crit} \sim$ 16~M$_{\odot}$/pc for T=10~K, equivalent to a column density of $8\times10^{21}$~cm$^{-2}$. The grey band shows a dispersion of a factor of 3 around this nominal value. The dotted line running from the bottom left to the top right corresponds to $ \sigma_{\rm tot} \propto {N_{\rm H_2}}^{0.5} $ \citep[][]{Arzoumanian2013}.
}
\label{VelDisp_coldens}
\end{figure*}
We suggest that the large velocity dispersions of supercritical filaments are not a result of supersonic interstellar turbulence but may be driven by gravitational contraction and accretion \citep{Arzoumanian2013}.
Mass accretion is indirectly suggested by transverse velocity gradients observed across a self-gravitating filament in Taurus.
The systemic velocities \citep{Goldsmith2008} of the observed emission in the north and south sides of the filament are
redshifted and blueshifted, respectively, with respect to the velocity of the emission observed towards the B211/13 filament \citep{Palmeirim2013}.
Such a velocity field pattern
may indicate convergence of matter onto the densest parts of the supercritical filament.
This is also compatible with theoretical models for the evolution of supercritical filaments \citep{HennebelleAndre2013,Heitsch2013}. Such models put forward the role of continuous accretion, which may be a physical reason to explain the observed properties of supercritical filaments.
\section{Magnetic field structure as derived from \planck\ dust polarization observations}
Dust polarization observations are essential to infer the orientation of the magnetic field ($\vec{B}$) component projected on the plane of the sky (POS).
While the observed polarization fraction ($p$) depends on several parameters \citep[dust polarization properties, grain alignment efficiency, and $\vec{B}$-field structure, e.g.,][]{Hildebrand1983}, the observed
polarization angle ($\psi$) derived from dust polarized emission
is perpendicular to the orientation of the $\vec{B}$-field component on the POS ($\vec{B}_{\rm POS}$) averaged along the line of sight (LOS).
\planck\ observations at 353\,GHz provide the first fully sampled maps of the polarized dust emission towards interstellar filaments and their backgrounds, providing unprecedented insight into the $\vec{B}$-field structure.
The first striking result is the impressively ordered structure of $\vec{B}_{\rm POS}$ from the largest Galactic scales down to the smallest scales probed by \planck\ observations, $\sim$0.2\,pc in nearby molecular clouds \citep[Fig.\,\ref{planckMaps}, and][]{planck2015-XIX,planck2016-XXXV}.
To quantify the polarized intensity observed towards the filaments,
we derive radial profiles perpendicular to their crests and
averaged along their length to increase the signal-to-noise ratio in polarization, while keeping the highest resolution (4\parcm8) of the \planck\ data.
We describe the observations as a two-component model and separate the emission of the filaments from that of their background (surrounding cloud).
This allows us to characterize and compare the polarization properties of each emission component: the filament and its background.
This is an essential step in measuring the intrinsic polarization fraction ($p$) and polarization angle ($\psi$) of each emission component.
The analyses show that both the polarization angle and fraction measured at the intensity peak of the filaments (before background subtraction) differ from their intrinsic values (after background subtraction), as described in \citet{planck2016-XXXIII}.
%
The left panel of Fig.\,\ref{PolarParam} shows the profile of $\psi$ across the Musca, B211, and L1506 filaments.
In all three cases, we measure variations in the polarization angle intrinsic to the filaments ($\psi_{\rm fil}$) with respect to that of their
backgrounds ($\psi_{\rm bg}$). These variations are found to be coherent along the pc-scale length of the filaments.
The differences between $\psi_{\rm fil}$ and $\psi_{\rm bg}$ for two of the three filaments are larger
than the dispersion of the polarization angles across and along the filaments \citep[see Table\,2 in][]{planck2016-XXXIII}. Hence, these differences are not random fluctuations and they indicate a change in the orientation
of the POS component of the magnetic field between the filaments and their backgrounds.
The observations show a decrease in the polarization fraction $p$ from the background to the Musca filament (Fig.\,\ref{PolarParam}--right), as well as towards the Taurus B211 and L1506 filaments \citep[see Fig.\,10 of][]{planck2016-XXXIII}.
A decrease in $p$ with the total column density $\NH$ (i.e., from the backgrounds to the filament crests) has already been shown in previous studies.
This decrease has been usually interpreted as due to the turbulent component of the field and/or variations of dust alignment efficiency with increasing column density \citep[e.g.,][and references therein]{Whittet2008,Jones2015}.
In our study, the bulk of the drop in $p$ within the filaments cannot be explained by random fluctuations of the orientation of the magnetic field, because these fluctuations are too small ($\sigma_{\psi}<10^\circ$).
We argue that the observed drop in $p$ towards the filaments may be due to the 3D structure of the magnetic field: both its orientation in the POS and with respect to the POS \citep{planck2016-XXXIII}.
Indeed, the observed changes of $\psi$ are direct evidence of variations of the orientation of the POS projection of the magnetic field.
The systematic variations of $\psi$ suggest changes of the angle of the magnetic field with respect to the POS. This angle must statistically vary as much as $\psi$, contributing to the observed decrease of $p$ in the filaments.
The observed variation of $\psi$ between the filaments and their backgrounds always depolarizes the total emission, due to the integration of
the emission along the LOS of two emission components where the angle of $\vec{B}_{\rm POS}$ varies \citep[see Fig.\,\ref{PolarParam} and][]{planck2016-XXXIII}.
The inner structure of the filaments \citep[as seen, e.g., with \textit{Herschel}, cf.,][]{Palmeirim2013,Cox2016}, is not resolved, but at the smallest scales accessible with \planck\ ($\sim$0.2\,pc towards the nearby clouds), the observed changes of $\psi$ and $p$ (derived from \planck\ polarization data at the resolution of 4\parcm8) hold some information on the magnetic field structure within filaments \citep{planck2016-XXXIII}. They show that both the mean field and its fluctuations in the filaments are different from those in the background clouds, which points to
a coupling between the matter and the $\vec{B}$-field in the filament formation process.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{Arzoumanian_Fig4.pdf}
\caption{
\planck\ 353\,GHz (850\,$\mu$m) total dust intensity (Stokes $I$) maps at a resolution of 4\parcm8, towards the Taurus B211/13 and L1506 filaments {\bf (Left)} and the Musca filament {\bf (Right)}. The maps are in the Galactic coordinate system. The blue contours show the levels of 3 and 6 MJy\,sr$^{-1}$.
The black segments show the $\vec{B}_{\rm POS}$-field orientation ($\psi$+90$^{\circ}$). The length of the pseudo-vectors is proportional to the polarization fraction. The polarization angles and fractions are computed at a resolution of 9\parcm6 (indicated by the white filled circles on the left hand side map) for increased S/N \citep{planck2016-XXXIII}.
}
\label{planckMaps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{Arzoumanian_Fig5.pdf}
\caption{{\bf Left:}
Filament intrinsic polarization angle (tracing the angle of $\vec{B}_{\rm POS}$) across the crests of the B211, L1506, and Musca filaments. The $x$-axis shows the radial distance from the filament crest.
The crosses are data points computed from $Q$ and $U$ background subtracted maps.
The dashed line represents the background polarization angle.
The difference between the polarization angle of the filament and that of the background is indicated on the plots \citep[taken from Table\,2 in][]{planck2016-XXXIII}.
{\bf Right:}
Observed profile (in black) of the polarization fraction ($p$) perpendicular to the crest of the Musca filament.
The dashed blue curve shows the polarization fraction of the background. It is derived from the Stokes $I$, $Q$, and $U$ parameters of the background.
}
\label{PolarParam}
\end{figure*}
\section{Summary and conclusions}
The ubiquity of filaments in both quiescent clouds and active star-forming regions, where they are associated with the presence of prestellar cores and protostars \citep{Konyves2015}, supports the view
that filaments form first in the ISM
and the densest of them fragment into star-forming cores \citep{Andre2014}.
The observational finding of a uniform $\sim$0.1\,pc filament width \citep{Arzoumanian2011} sets strong constraints on the physics at play in the ISM. This result has recently been confirmed, at higher resolution, by ground-based observations of regions farther away than the Gould Belt \citep{Hill2012Artemis,Andre2016}.
Interestingly, 0.1\,pc corresponds to the sonic scale below which interstellar turbulence becomes subsonic in diffuse, non-star-forming gas. This coincidence, along with the observed thermal velocity dispersion of low column density filaments, suggests that large-scale turbulence may be a main player in the formation of the filamentary web observed in molecular clouds \citep{Arzoumanian2011}.
On the other hand, the increase of the non-thermal
velocity dispersion of supercritical, self-gravitating filaments, with column density, may indicate the generation of internal turbulence due to gravitational accretion.
This may be an explanation for the observed constant width of self-gravitating collapsing filaments \citep{Arzoumanian2013}.
While the dissipation of interstellar turbulence provides a plausible mechanism for filament formation, the observed organization between the magnetic field lines and the intensity structures, derived from the analysis of \planck\ data, indicates that the $\vec{B}$-field plays a dynamically important role in shaping the interstellar matter
\citep{planck2016-XXXIII,planck2016-XXXV}. In particular, the magnetic field may be a key element in channelling mass flows in the ISM.
The fact that most prestellar cores lie in dense, self-gravitating filaments \citep{Konyves2015} suggests that gravity is a major driver in the evolution of supercritical filaments and their fragmentation into star-forming prestellar cores.
The combination of these observational results, derived from dust and gas tracers in total and polarized intensity, gives strong constraints on our understanding of the formation and evolution of filaments in the ISM, and provides important clues to the initial conditions of the star formation process along supercritical filaments. Higher resolution dust polarization observations and large-scale molecular line mapping are nevertheless required to investigate in more detail the internal velocity and magnetic field structures of interstellar filaments.
\section*{Acknowledgments}
DA has received support from the European Research Council grant ORISTARS (No.\,291294)
and is currently an International Research Fellow of the Japan Society for the Promotion of Science (FY2016).
\section{Introduction}
An important consideration on the path towards general AI is to minimize the amount of prior knowledge needed to set up learning systems. Ideally, we would like to identify principles
that transfer to a wide variety of different problems without the need for manual tuning and problem-specific adjustments.
In recent years, we have witnessed substantial progress in the ability of reinforcement learning algorithms to solve difficult control problems
from first principles, often directly from raw sensor signals such as camera images, without the need for carefully handcrafted features that would require human understanding of the particular environment or task \cite{mnih2015humanlevel, SilverHuangEtAl16nature}.
One of the remaining challenges is the definition of reward schemes that appropriately indicate task success, facilitate exploration without biasing the solution in undesirable ways, and that can be implemented on real robotics systems without expensive instrumentation.
In this paper we focus on situations where the external task is given by a sparse reward signal that is `$1$' if and only if the task is solved. Such reward functions are often easy to define and, by focusing solely on task success, strongly mitigate bias on the final solution. The associated challenge is, however, that starting from scratch with a naive exploration strategy will most likely never lead to task success and the agent will hardly receive any learning signal.
The Scheduled Auxiliary Control (SAC-X) \cite{Riedmiller2018_Learning} framework tackles this problem by introducing a set of auxiliary rewards, that help the agent to explore the environment. For each auxiliary reward an auxiliary policy (`intention') is learned and executed to collect data into a shared replay-buffer. This diverse data facilitates learning of the main task.
The important insight of \cite{Riedmiller2018_Learning} is that
the exact definition of auxiliary tasks can vary, as long as they jointly lead to an adequate exploration strategy that allows to collect rich enough data such that learning of the main task can proceed.
However, in the original work the auxiliary tasks are still defined with some semantic understanding of the environment in mind, e.g. `move an object' or `place objects close to each other'. This requires both task-specific knowledge for the definition of the auxiliaries, as well as the technical prerequisites to estimate the relevant quantities required for the computation of the rewards
- e.g. camera calibration, object detection and object pose estimation. In this work we want to make a step towards a more generic approach for defining auxiliary tasks, that reduces the need for task-specific semantic interpretation of environment sensors, in particular of camera images.
\begin{figure}
\centering
\centerline{
\includegraphics[width=.45\linewidth]{assets/sawyer_manipulation.jpg}
\includegraphics[width=.45\linewidth]{assets/sawyer_ball_in_cup.jpg}
}
\caption{Manipulation setup with a Rethink Sawyer robotic arm and a Robotiq 2F-85 parallel gripper (left). Ball-in-a-cup task setup with a Rethink Sawyer robotic arm and a custom made Ball-and-Cup attachment (right).}
\label{fig:experimental_setup}
\end{figure}
A fundamental principle to enable exploration in sparse reward scenarios is to learn auxiliary behaviours that deliberately change sensor responses. While variants of this idea have already been suggested, e.g. \cite{Sutton2011horde, Jaderberg2017Unreal}, we here introduce a generic way to implement this concept into the SAC-X framework: `simple sensor intentions' (SSIs) encourage the agent to explore the environment and help with collecting meaningful data for solving the sparsely rewarded main task. Being largely task agnostic, SSIs can be reused across tasks with no or only minimal adjustments. Further, SSIs are based on rewards defined on the deliberate change of scalar sensor responses that are derived from raw sensor values. We propose two ways to effect change, namely (a) to attain certain set points (like e.g. the minimum or maximum of a sensor response), or (b) to reward an increase or decrease of a sensor response.
However, not all sensory inputs can be directly mapped to scalar values, like e.g. raw camera images. As an exemplary procedure, we suggest a concrete pre-processing for mapping raw images into simple sensor responses by computing and evaluating basic statistics, such as the spatial mean of color-filtered images. While SSIs propose a general way to deal with all kinds of sensor values available in a robotic system (like touch sensors, joint angle sensors, position sensors, ...), we mostly investigate pixel inputs here as an example of a sensor type that is widely used in robotics.
As a proof of concept, we show that these simple sensor intentions (SSIs) can be applied on a variety of interesting robotic domains, both in simulation and on real robots. Most notably, we show that with SSIs, SAC-X is capable of learning to play the Ball-in-a-Cup game from scratch, purely from pixel and proprioceptive inputs - for both as observation and for computing the auxiliary rewards given to the agent.
\medskip
\section{Preliminaries}
We consider a reinforcement learning setting with an agent operating in a Markov Decision Process (MDP) consisting of the state space $\mathcal{S}$, the action space $\mathcal{A}$ and the transition probability $p(s_{t+1}|s_t, a_t)$ of reaching state $s_{t+1}$ from state $s_t$ when executing action $a_t$ at the previous time step $t$. The goal of an agent in this setting is to succeed at learning a given task $k$ out of a set of possible tasks $\mathcal{K}$.
The actions are assumed to be drawn from a probability distribution over actions $\pi_k(a|s)$ referred to as the agent's policy for task $k$. After executing an
action $a_t$ in state $s_t$ and reaching state $s_{t+1}$ the agent receives a task-dependent, scalar reward $r_k(s_t, s_{t+1})$. Given a target task $g$ we define the expected return (or value) when following the task-conditioned policy $\pi_g$, starting from state $s$, as
\begin{equation*}
V^{\pi_g}(s) = \mathbb{E}_{\pi_g} [ \: \sum_{t = 0}^{\infty} \gamma^t r_g(s_t, s_{t+1}) \: | \: s_0 = s \: ],
\end{equation*}
with $a_t \sim \pi_g(\cdot | s_t)$ and $s_{t+1} \sim p(\cdot | s_t, a_t)$ for all $s \in \mathcal{S}$.
The goal of Reinforcement Learning then is to find the policy $\pi^*_g$ that maximizes the value. The auxiliary intentions, defined in the following based on simple sensor rewards -- that is task $k \in \mathcal{K}, k \neq g$ -- give rise to their own values and policies and serve as means for efficiently exploring the MDP.
\medskip
\section{Simple sensor intentions}
\label{sec:SSI}
The key idea behind simple sensor intentions can be summarized by the following principle: in the absence of an external reward signal, a sensible exploration strategy can be formed by learning policies that deliberately cause an effect on the observed sensor values, e.g. by driving the sensor responses to their extrema or by controlling the sensor readings at desirable set-points.
Clearly, policies that achieve the above can learn to cover large parts of the observable state-space, even without external rewards, and should thus be useful for finding `interesting' regions in the state-space. This idea is distinct from, but reminiscent of, existing work on intrinsic motivation for reinforcement learning \citep{gregor2016variational} which often defines some form of curiosity (or coverage) signal that is added to the external reward during RL. Our goal here is to learn separate exploration policies from general auxiliary tasks, that can collect good data for the main learning task at hand.
As a motivating example, an analogy to this idea can be found by considering the exploration process of an infant: in the absence of `knowing what the world is about', a baby will often move its body in a seemingly `pointless', but goal directed, way until it detects a new stimulus in its sensors, e.g. a new sense of touch at the fingertips or a detected movement in a toy.
Simple sensor intentions (SSIs) propose a generic way to implement this principle in the multi-task agent framework SAC-X. SSIs are a set of auxiliary tasks, defined by standardized rewards over a set of scalar sensor responses. While many sensors in robotics naturally fit into this scheme (like e.g. a binary touch sensor or a sensor for a joint position), other sensors like raw camera images may need some transformation first in order to provide a scalar sensor signal that can be used to define a reasonable simple sensor intention.
In general, SSIs are derived from raw sensor observations in two steps:
\medskip
\noindent\emph{First step:} In the first step we map the available observations to scalar sensor responses that we want to control. Each observation $o \in s$ is a vector of sensor values coming from different sensory sources like e.g. proprioceptive sensors, haptic sensors or raw images.
We define a scalar (virtual) sensor response by mapping an observation $o$ to a scalar value $z$, i.e.
$$
z = f(o), \text{where } o \in s,
$$
and where $f$ is a simple transformation of the observation. For scalar sensors, $f$ can be the identity function, while other sensors might require some pre-processing (like e.g. raw camera images, for which we describe a simple transformation in more detail in section \ref{sec:I2SSI}).
In addition -- as a consequence of choosing the transformation -- for each derived scalar sensor value $z$ we can also assume to know the maximum and minimum attainable value $[z_\text{min}, z_\text{max}]$.
\medskip
\noindent\emph{Second step:} In the second step we calculate a reward following one of two different schemes (described in detail below):
\begin{enumerate}
\item[a)] rewarding the agent for reaching a specific target response, or
\item[b)] rewarding the agent for incurring a specific change in response.
\end{enumerate}
Importantly, these schemes do not require a detailed semantic understanding of the environment: changes in the environment may have an a-priori unknown effect on sensor responses.
Regardless of this relationship between environment and sensor values, we follow the hypothesis outlined earlier: a change in a sensor response indicates some change in the environment, and by learning a policy that deliberately triggers this change (a simple sensor intention, SSI) we obtain a natural way of encouraging diverse exploration of the environment.
\medskip
\subsection{Target Response Reward} \label{sec:TRR}
Let $z_t \in \{z_t \in \mathbb{R} \, | \, z_{\textrm{min}} \leq z_t \leq z_{\textrm{max}} \}$ be a sensor response $z_t = f(o)$ for observation $o \in s_t$. We define the reward at time step $t$ for controlling the response $z$ towards a desired set point $\hat{z}$ as
\begin{equation*}
r^{\hat{z}}(s_t) \coloneqq 1 - \frac{\abs{z_t - \hat{z}}}{z_{\textrm{max}} - z_{\textrm{min}}},
\end{equation*}
\noindent where $\hat{z}$ is the set point to be reached. The set point could be chosen arbitrarily in the range $[z_{\textrm{min}}, z_{\textrm{max}}]$; a sensible choice, which we employ in our experiments, is to use the minimum and maximum response values as set points to encourage coverage of the sensor response space. We denote the two corresponding rewards as \emph{`minimize z'} and \emph{`maximize z'}, respectively, in our experiments.
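The set-point reward above can be sketched as follows (an illustrative re-implementation of the formula, not released code):

```python
def target_response_reward(z, z_hat, z_min, z_max):
    """Set-point reward r = 1 - |z - z_hat| / (z_max - z_min).

    The 'minimize z' and 'maximize z' intentions correspond to
    z_hat = z_min and z_hat = z_max, respectively. The reward is 1
    when the response sits exactly at the set point and decays
    linearly to 0 at the opposite end of the response range.
    """
    return 1.0 - abs(z - z_hat) / (z_max - z_min)
```

For instance, with responses in $[0, 1]$ and set point $\hat{z} = z_{\textrm{max}} = 1$ (a `maximize z' intention), a response of $1$ yields reward $1$ and a response of $0$ yields reward $0$.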
\medskip
\subsection{Response Change Reward} \label{sec:DRR}
While set point rewards encourage controlling sensor values to a specific value, response change rewards encourage the policy to incur a signed change of the response. Let $\Delta_t^z \coloneqq z_t - z_{t-1}$ be the temporal difference between consecutive responses. We define the reward for \emph{changing} a sensor value $z$ as
\begin{equation*}
r^\Delta(s_t) \coloneqq \frac{\alpha \Delta_t^z}{z_{\textrm{max}} - z_{\textrm{min}}}.
\end{equation*}
where $\alpha \in \{1, -1\}$ serves to distinguish between \emph{increasing} and \emph{decreasing} sensor responses.
In both cases, undesired changes are penalized by our definition. This ensures that a successful policy moves the response consistently in the desired direction, instead of exploiting positive rewards by developing a cyclic behaviour \citep{randlov1998learning, ng1999policy}.
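The response change reward can likewise be sketched as (again an illustrative re-implementation):

```python
def response_change_reward(z_t, z_prev, z_min, z_max, alpha=1):
    """Signed change reward r = alpha * (z_t - z_prev) / (z_max - z_min).

    alpha = +1 rewards increasing the sensor response, alpha = -1
    rewards decreasing it. Movement in the undesired direction yields
    a negative reward, which discourages cyclic reward exploitation.
    """
    return alpha * (z_t - z_prev) / (z_max - z_min)
```

Note that, unlike the set-point reward, this reward can be negative, which is what penalizes undesired changes.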
\medskip
\section{Transforming Images to Simple Sensor Intentions \label{sec:I2SSI}}
In the following, we describe the transformation we use to obtain scalar sensor responses from raw camera images.
Cameras are an especially versatile and rich type of sensor for observing an agent's surroundings and are thus particularly interesting for our purposes.
Cameras typically deliver two dimensional pixel intensity arrays as sensor values. There are numerous ways to map these pixel arrays to one dimensional sensor responses, which can then be used as part of the simple sensor reward schemes.
\begin{figure}
\centering
\includegraphics[width=.92\linewidth]{assets/color_masks.png}
\caption{The transformation used for deriving one dimensional sensor responses from camera images. We compute a binary mask and its spatial distribution along the image axes and select the resulting distribution's mean as a sensor response.}
\label{fig:histogram}
\end{figure}
In principle, one could treat every pixel channel as an individual response, or calculate averages in regions of the image (similar to `pixel control' in \citet{Jaderberg2017Unreal}), and subsequently define sensor rewards for each of the regions. However, our goal is to learn a sensor intention policy for e.g. maximizing / minimizing each sensor response, that can then be executed to collect data for a target task. In such a setting, having a smaller number of efficient exploration policies is preferable (as opposed to the large number of policies a reward based on single pixel values would mandate). We therefore propose to transform images into a small number of sensor responses by aggregating statistics of an image's spatial color distribution. As illustrated in Figure \ref{fig:histogram}, we first threshold the image to retain only a given color (resulting in a binary mask) and then calculate the mean location of the mask along each of the image's axes, which we use as the sensor value. Formally we can define the two corresponding sensor values for each camera image as
\begin{equation*}
\begin{aligned}
z^{c_\text{range}}_x &= \frac{1}{W} \sum_{x=0}^W x \max_y [\mathbf{1}_{c_\text{range}}(o_\text{image})[y, x]] \\
z^{c_\text{range}}_y &= \frac{1}{H} \sum_{y=0}^H y \max_x [\mathbf{1}_{c_\text{range}}(o_\text{image})[y, x]],
\end{aligned}
\end{equation*}
where $H$ denotes the image height and $W$ the width, and $c_\text{range} = [c_\text{min}$, $c_\text{max}]$ correspond to color ranges that should be filtered.
Combined with the reward scheme above, these simple sensor responses can result in intentions that try to color given positions of an image in a given color. Perhaps surprisingly, we find such a simple reward to be sufficient for encouraging meaningful exploration in a range of tasks in the experiments.
We note that the reward formulation outlined in Section \ref{sec:SSI} mandates a sensor response to be available at each time step $t$. To avoid issues in case no pixel in the image matches the defined color range, we set any reward based on $z^\text{image}$ to zero at time $t=0$ and subsequently always fall back to the last known value for the reward if no pixel response matches.
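The two-step transformation (color thresholding, then spatial aggregation) can be sketched as follows. The aggregation mirrors the formulas above; the function name and the per-channel color bounds are illustrative assumptions, not released code:

```python
import numpy as np

def color_mask_responses(image, c_min, c_max):
    """Map an (H, W, 3) image to the two scalar responses defined above.

    Sketch of the paper's transformation: threshold the image to a
    binary mask for the color range [c_min, c_max], then aggregate the
    mask along each axis as in the formulas (max over the other axis,
    weighted sum of indices, normalized by the image extent).
    """
    # Binary mask: pixel matches if all three channels lie in range.
    mask = np.all((image >= c_min) & (image <= c_max), axis=-1)
    H, W = mask.shape
    col_any = mask.max(axis=0)  # max over y: column x contains a match
    row_any = mask.max(axis=1)  # max over x: row y contains a match
    z_x = float((np.arange(W) * col_any).sum()) / W
    z_y = float((np.arange(H) * row_any).sum()) / H
    return z_x, z_y
```

With a single matching pixel at image location $(y, x) = (1, 2)$ in a $4 \times 4$ image, this yields $z_x = 2/4$ and $z_y = 1/4$.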
The choice of the `right' color range $c_\text{range}$, for a task of interest, is a design decision that needs to be made manually. In practice, we define a set of color ranges (and corresponding sensor values) from rough estimates of the color of objects of interest in the scene.
Alternatively, the color filters could be defined very broadly, which increases generality and transfer. For example, similar to a baby's preference for bright or vivid colors, one might use a color filter that matches a broad range of hues but only in a narrow saturation range.
Furthermore, if the number of interesting color ranges is large (or one chooses them randomly) one can also define aggregate intentions, where sensor rewards are averaged over several sensor responses for various color channels, such that the resulting intention has the goal of changing an arbitrary color channel's mean instead of a specific one. In addition to manually defining color ranges, we as well conduct experiments with this aggregate approach.
\medskip
\section{Learning Simple Sensor Intentions for Active Exploration} \label{sec:sac}
In general, exploration policies based on the sensor rewards described in the previous section could be learned with any multi-task Reinforcement Learning algorithm or, alternatively, added as an exploration bonus for a Reinforcement Learning algorithm that optimizes the expected reward for the target task $g$.
In this work we demonstrate how simple sensor intentions can be used in the context of a multi-task RL algorithm to facilitate exploration. We make use of ideas from the recent literature on data-efficient multi-task RL with auxiliary tasks (in our case defined via the simple sensory rewards). Concretely, we follow the setup from Scheduled Auxiliary Control (SAC-X) \citep{Riedmiller2018_Learning} and define the following policy optimization problem over $\mathcal{K}$ tasks:
\begin{equation*}
\arg \max_{\pi} \mathbb{E}_{s \in \mathcal{B}} \Big[ \sum_{k=1}^K \mathbb{E}_{a \sim \pi_k(\cdot | s)} [Q^k_\phi(s, a)] \Big],
\end{equation*}
where $\pi_k(a | s)$ is a task-conditioned policy and $Q_\phi(s, a, k)$ is a task-conditional Q-function (with parameters $\phi$) that is learned alongside the policy by minimizing the squared temporal difference error:
\begin{equation*}
\min_{\phi} \mathop{\mathbb{E}}_{(s, a, s') \in \mathcal{B}} \Big[ \sum_{k=1}^K \big(r_k(s, a) + \gamma \mathbb{E}_{\pi_k} [Q^k_{\hat{\phi}}(s', a')] - Q^k_\phi(s, a) \big)^2 \Big],
\end{equation*}
where
$\hat{\phi}$ are the periodically updated parameters of a target network. We refer to the appendix for a detailed description of the neural networks used to represent $\pi_k(a | s)$ and $Q_\phi(s, a, k)$.
The set of tasks $\mathcal{K}$ is given by the reward functions for each of the SSIs that we want to learn, as well as the externally defined goal reward $r_g$. The transition distribution, for which the policy and Q-function are learned, is obtained from the replay buffer $\mathcal{B}$, which is filled by \emph{executing both the policy for the target task $\pi_g$ as well as all other available exploration SSIs $\pi_k \in \mathcal{K}, k \neq g$}. In SAC-X, each episode is divided into multiple sequences and a policy to execute is chosen for each of the sequences. The decision which policy to execute is either made at random (referred to as Scheduled Auxiliary Control with uniform sampling or SAC-U) or based on a learned scheduler that maximizes the likelihood of observing the sparse task reward $r_g$ (referred to as SAC-Q). More details on the Reinforcement Learning procedure can be found in the Appendix as well as in the original Scheduled Auxiliary Control publication \citep{Riedmiller2018_Learning}.
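The per-task squared TD objective above can be sketched with plain arrays. In the actual agent both $Q$ and $\pi$ are neural networks and the expectation over $a'$ is estimated by sampling; the sketch below only illustrates the shape of the loss:

```python
import numpy as np

def multitask_td_loss(r, q_next, q_pred, gamma=0.99):
    """Squared TD error summed over the K tasks (illustrative sketch).

    r:      (B, K) per-task rewards r_k(s, a) for a batch from the buffer
    q_next: (B, K) target-network estimates E_{a' ~ pi_k}[Q_k(s', a')]
    q_pred: (B, K) online estimates Q_k(s, a)
    Returns the batch-mean of the per-task squared TD errors, summed
    over tasks, matching the objective above.
    """
    targets = r + gamma * q_next  # bootstrapped per-task targets
    return float(np.mean(np.sum((targets - q_pred) ** 2, axis=-1)))
```

In practice the targets are treated as constants (no gradient flows through the target network's parameters $\hat{\phi}$).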
\medskip
\section{Experiments}
In the following, simple sensor intentions (SSIs) based on basic sensors (like e.g. touch)
and more complex sensors, like raw camera images, are applied to several
robotic experiments in simulation and on a real robot. We show, that by using the concept of SSIs,
several complex manipulation tasks can be solved: grasping and lifting an object, stacking
two objects and solving a ball-in-cup task end-to-end from raw pixels. In all experiments, we assume the final task reward to be given in form of a sparse (i.e. binary) external reward signal.
\medskip
\subsection{Experimental Setup}
In all following experiments we employ a Rethink Sawyer robotic arm, with either a Robotiq 2F-85 parallel gripper as end-effector or, in the case of the Ball-in-a-Cup task, with a custom made cup attachment. In the manipulation setups, the robot faces a 20cm x 20cm basket, containing a single colored block, or - in the case of the stack task - two differently colored blocks. The basket is equipped with three cameras, that are used as the only exteroceptive inputs in all manipulation experiments (Figure \ref{fig:experimental_setup}, left). For the Ball-in-a-Cup task, the robotic arm is mounted on a stand as shown in Figure \ref{fig:experimental_setup} (right). The Ball-in-a-Cup cell is equipped with two cameras positioned orthogonally -- and both facing the robot -- as well as with a Vicon Vero setup. The Vicon system is solely used for computing a sparse external ``catch'' reward.
In all experiments we use Scheduled Auxiliary Control (SAC-X) with a scheduler choosing a sequence of 3 intentions per episode. Since the environment is initialized such that the initial responses are uniformly distributed, the expected return of the `increase' and `decrease' rewards following the optimal policy $\pi^*_k$ for a respective sensor reward is
\begin{equation*}
V^{\pi^*_k}(s) \leq \frac{|z^k_{\textrm{max}} - z^k_{\textrm{min}}|}{2} \quad \forall \: s \in \mathcal{S}.
\end{equation*}
\noindent Accordingly, we scale the increase and decrease rewards by a constant factor $\frac{2\sigma}{|z_{\textrm{max}} - z_{\textrm{min}}|}$ in all experiments and choose $\sigma = 200$ (which corresponds to the number of steps an SSI is executed in an episode) to achieve comparable reward scales across the reward schemes.
In all simulated setups we use uniform scheduling (SAC-U), where intentions are chosen by uniform random sampling \cite{Riedmiller2018_Learning}. Also, we use a multi-actor setup with 64 concurrent actors that collect data in parallel. This approach optimizes for wall-clock time and sacrifices data-efficiency, which we accepted, since these experiments were primarily intended to answer the general question whether SSIs allow us to learn sparse tasks. The question of data-efficiency is relevant, however, in the real robot experiments and we thus used a jointly learned scheduler to select between policies to execute (SAC-Q, \cite{Riedmiller2018_Learning}) for improved data-efficiency. All real world experiments were conducted using a single robot (corresponding to a single-actor setup in simulation).
In all cases, the agent is provided with an observation that comprises proprioceptive information - joint positions, velocities and torques - as well as a wrist sensor's force and torque readings in the manipulation setups. Additionally, the agent receives the camera images as exteroceptive inputs. Thus, all tasks have to be learned from proprioceptive sensors and raw pixel inputs. The action space in the manipulation setups is five dimensional and continuous and consists of the three Cartesian translational velocities, the angular velocity of the wrist around the vertical axis and the speed of the gripper’s fingers. The action space in the Ball-in-a-Cup setup is four dimensional and continuous and consists of the raw target joint velocities \cite{schwab19simultaneously}. In all cases, the robot is controlled at 20 Hz.
\medskip
\subsection{Learning to grasp and lift in simulation}
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{assets/inc_dec_lift_pixels_sim.png}
\caption{`Lift' learned from pixels in the simulated manipulation setup with the `increase' and `decrease' rewards used as auxiliary intentions.}
\label{fig:inc_dec_lift_pixels_sim}
\end{figure}
Grasping and lifting an object with a robotic arm is a challenging task, in particular when learned from scratch and purely from pixels: the agent must learn to recognize the object, approach it, find the right position for grasping, and eventually close the fingers and lift the object. Learning this from scratch, when only a sparse final reward is given, is very unlikely to succeed.
We assume the external target reward for `Lift' is given by a binary signal: the external reward is 1 if the touch sensor is triggered and the gripper is at least 15\,cm above the basket.
In a first experiment, we use the `delta response reward' SSIs, which give reward for pushing the distribution of pixels of a selected color channel in a certain direction. In this experiment, we select the color channel to roughly match the color of the block; we discuss in Section \ref{sec::ablations} how this assumption can be removed or generalized. This selection of SSIs results in four auxiliary intentions overall: two for increasing the mean of the pixel image in the x- or y-direction, and two for decreasing it. The learning curves for the four auxiliaries and the final lift reward (violet line) are shown in Figure \ref{fig:inc_dec_lift_pixels_sim}. After about 500 episodes (times 64 actors), the agent sees some reward for moving the mean in various directions. This results in first, small interactions of the arm with the object. After about 1000 episodes, interactions get stronger, until after about 2000 episodes the agent learns to lift the object deliberately. Final performance is reached after about 4000 episodes. We note that the reward curve for the `decrease y' intention is lower than the other curves because the block typically rests at the bottom of the basket, so the mean of the color channel's distribution is already quite low and `decrease y' cannot earn as much reward as the other auxiliaries. This is an example of an intention that is potentially not very useful, but that nevertheless does not prevent the agent from finally learning the goal task.
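The delta response reward computation can be sketched as follows (a simplified illustration; `channel_mask_fn` is a hypothetical color-channel predicate and not the authors' actual implementation):

```python
import numpy as np

def channel_mean(image, channel_mask_fn, axis=0):
    """Mean pixel coordinate (along the given axis) of the pixels matching
    the selected color channel; None if no pixel matches."""
    mask = channel_mask_fn(image)
    coords = np.argwhere(mask)
    if coords.size == 0:
        return None
    return coords[:, axis].mean()

def delta_response_reward(prev_mean, curr_mean, direction=+1):
    """Signed change of the mean: positive when it moves in the intended
    direction (direction=+1 for `increase', -1 for `decrease')."""
    if prev_mean is None or curr_mean is None:
        return 0.0
    return direction * (curr_mean - prev_mean)
```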
If no auxiliary rewards are given, the agent does not learn to lift the object at all. This is shown by the flat blue learning curve in Figure \ref{fig:lift_pixels_sim}. This figure also shows the learning curve for the alternative `target response reward' SSI formulation, which tries to minimize / maximize the value of the mean (green line), in comparison to the `delta response reward' SSI formulation (red line). For the lift experiment, not much difference in the learning curves can be seen; both SSI formulations work successfully in this setting. For further reference, we also conducted a typical learning experiment using a dedicated shaping reward for reaching and grasping the block (orange line). As expected, the shaped approach learns faster, but at the cost of considerable effort in specifying the shaping reward and the instrumentation required for computing the position of the object (which would require camera calibration, object detection, and pose estimation on a real system).
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{assets/lift_pixels_sim.png}
\caption{Comparison of the different reward schemes for learning `Lift' from pixels in the simulated manipulation setup.}
\label{fig:lift_pixels_sim}
\end{figure}
\subsection{Ablation studies \label{sec::ablations}}
We conducted several ablation studies and summarize our findings below:
\medskip
\subsubsection{Learning success does not depend on a particular camera pose for reward giving}
We investigated this by varying the perspective of the camera that is used for reward computation in various ways (see Figure \ref{fig:camera_positions}). As shown by the learning curves in Figure \ref{fig:lift_reward_cameras}, the concrete pose of the camera has an influence on learning speed, but in all cases the SSIs were powerful enough to eventually learn the final lifting task.
\medskip
\subsubsection{The SSI color channel does not necessarily need to specify a single object}
We investigated this in two experiments: one with two blocks of the same color (see Figure \ref{fig:color_mask_ablations}, left) and one where part of the background had the same color as the block (see Figure \ref{fig:color_mask_ablations}, right). In both cases `Lift' could be learned successfully using the standard pixel based SSIs. If we increase the proportion of background pixels with the same color as the object, at some point the agent fails to learn to interact with the brick and instead `exploits' the reward scheme by trying to move the mean by hiding pixels with the arm and gripper. It is possible to filter the background from the scene; however, this is beyond the scope of this paper and therefore left for future investigations.
\begin{figure}
\centering
\centerline{
\includegraphics[width=.25\linewidth]{assets/reward_camera_0.png}
\includegraphics[width=.25\linewidth]{assets/reward_camera_1.png}
\includegraphics[width=.25\linewidth]{assets/reward_camera_3.png}
}
\caption{The three camera angles used for computing the SSIs for the experiments shown in Figure \ref{fig:lift_reward_cameras}.}
\label{fig:camera_positions}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{assets/lift_reward_cameras.png}
\caption{Comparison of different camera angles for computing SSIs when learning the `Lift' task. In all experiments the `increase' and `decrease' rewards are used as auxiliary intentions.}
\label{fig:lift_reward_cameras}
\end{figure}
\begin{figure}[b!]
\centering
\centerline{
\includegraphics[width=.36\linewidth]{assets/two_blue.png}
\includegraphics[width=.36\linewidth]{assets/distractor_blue.png}
}
\caption{The setups used for demonstrating robustness. On the left, both objects in the scene have the same color. On the right a non-moveable part of the basket has the same color as the object.}
\label{fig:color_mask_ablations}
\end{figure}
\medskip
\subsubsection{One can use a much more general selection of the color channel}
To demonstrate this, we used the `aggregate' SSI formulation suggested above: we compute rewards (here: delta rewards) for a potentially large set of different color channels and add them up in one so-called `aggregate' SSI. This clearly solves the single block lift task as described above, as long as at least one of the color channels matches the color of the block. However, it also works in a setting where blocks with different colors are used throughout the experiment. Figure \ref{fig:lift_any_pixels_sim} shows the learning curve for the `Lift Any' task (red line), where we randomly changed the color of the block in every episode (see Figure \ref{fig:multi_color_bricks} for a subset of the colored blocks used in the experiment).
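A minimal sketch of the `aggregate' SSI (illustrative only; the per-channel means are assumed to be computed as for the single-channel case):

```python
def aggregate_delta_reward(prev_means, curr_means, direction=+1):
    """Sum delta rewards over several color channels; channels with no
    matching pixels (None means) contribute zero."""
    total = 0.0
    for prev, curr in zip(prev_means, curr_means):
        if prev is not None and curr is not None:
            total += direction * (curr - prev)
    return total
```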
A similar experiment was also conducted on the real robot. The results for the real world experiment are shown in Figure \ref{fig:lift_any_pixels_real}.
\medskip
\subsubsection{The SSI method is not restricted to pixels, but works with a general set of (robot) sensors}
In particular, we conducted experiments in a setup with two blocks in the scene, where we applied SSIs to basic sensors, like the touch sensor and the joint angles, but also used the SSIs described before on camera images, resulting in a total of 22 auxiliary intentions (minimize / maximize touch, minimize / maximize joint angles of arm and gripper joints, minimize / maximize the mean of the color distribution along the x and y axes). To deal efficiently with the extended set of intentions, the agent used a Q-based scheduler (SAC-Q), which successfully learned the `Lift' task.
\begin{figure}[t]
\centering
\includegraphics[width=.36\linewidth]{assets/multi_color_bricks.png}
\includegraphics[width=.36\linewidth]{assets/lift_any_eval.jpg}
\caption{Left: A subset of the colored blocks used in the simulated `Lift Any' experiments. In each episode one of the available blocks is selected. Right: Evaluation of the `Lift Any' experiment showing that the robot is able to grasp and lift a non-rigid, multi-colored baby toy.}
\label{fig:multi_color_bricks}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{assets/inc_dec_lift_any_pixels_sim.png}
\caption{`Lift Any' learned from pixels in the simulated manipulation setup with the `increase' and `decrease' rewards used as auxiliary intentions. Here, the `increase' and `decrease' rewards are aggregated over color channels and the block's color is changed every episode.}
\label{fig:lift_any_pixels_sim}
\end{figure}
\medskip
\subsubsection{It is mandatory to have a penalty term for moving the sensor response in the opposite direction}
In learning experiments where the agent only received a positive reward if the response moved in the intended direction, the agent quickly learned to cleverly exploit the reward, e.g.\ by moving the gripper back and forth in front of the camera, hiding and revealing the block and thereby collecting reward for moving the response (see Sections \ref{sec:TRR} and \ref{sec:DRR}).
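The effect of the penalty term can be illustrated with a minimal sketch (function names are ours): without the penalty, oscillating the response back and forth still accrues positive reward on every favorable step, while the signed variant nets out to zero:

```python
def reward_with_penalty(delta, direction=+1):
    # Signed reward: moving against the intended direction is penalized.
    return direction * delta

def reward_without_penalty(delta, direction=+1):
    # Clipped at zero: moving back and forth still accrues reward on
    # every favorable step -- the exploit described in the text.
    return max(0.0, direction * delta)
```

For an oscillating response with deltas $+1, -1, +1, -1$, the signed rewards sum to 0, while the clipped variant sums to 2.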
\medskip
\subsection{Learning to stack}
Learning to stack a block on another block poses additional challenges compared to the `grasp-and-lift' scenario described above: the scene is more complex since there are two objects now, reward is given only if the object is placed above the target object, and the target object can move. The external task reward for `Stack' is 1, if and only if block 1 is properly placed on block 2 and the gripper is open.
We use the minimize / maximize SSI approach in combination with a SAC-U agent: the SSI auxiliary rewards are computed from raw pixels, and the input observations for the agent comprise raw pixels and proprioceptive information only.
Figure \ref{fig:stack_pixels_sim} shows the learning curves for the auxiliaries and the final `Stack' reward. This is a much more difficult task to learn from pure pixels, but after about 20,000 episodes (times 64 actors) the agent has figured out how to reliably solve the task, purely from pixels. Not surprisingly, without SSI auxiliaries the agent is not able to learn the task. The minimize / maximize auxiliary rewards are position based rewards; therefore the learning curves are offset by roughly the average reward of 100, which corresponds to the usual average position of the pixel mean.
Although the amount of data used to learn this task (20,000 episodes times 64 actors) is huge, it is still surprising that learning such a complex task is possible from raw sensor information and an external task reward only. We assume this is possible because the simple sensor intentions encourage rich play within the scene, and the additional sequencing of these intentions as performed by SAC-X increases the probability of also seeing more complex configurations, like stacking in this case. From seeing external rewards for these occasional stacks, learning can finally take off.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{assets/min_max_stack_pixels_sim.png}
\caption{`Stack' learned from pixels in the simulated manipulation setup with the `minimize' and `maximize' rewards used as auxiliary intentions.}
\label{fig:stack_pixels_sim}
\end{figure}
\medskip
\subsection{Learning to grasp and lift on a real robot}
To investigate the behaviour of SSI based agents in a real world robotic setup, we apply the agent to learn to grasp and lift objects (Figure \ref{fig:experimental_setup}, left). As in the simulation experiments, the agent's input is based on raw proprioceptive sensory information and the raw images of two cameras placed around the basket. Real world experiments add the additional challenge of sensor and actuator noise, with which the agent has to cope. Also, the approach naturally has to work in a single actor setup, since only one robot is available to collect the data.
For the real robot experiment, we use 6 SSIs based on the touch sensor as well as on the camera images: increase / decrease of the color distribution's mean in the x- and y-direction of the raw camera images (4 intentions) plus minimize / maximize of the touch sensor value (`on' / `off', 2 intentions). The task reward for `Lift' is given sparsely, if and only if the touch sensor is activated while the gripper is 15\,cm above the table. To make the SSI reward signal from pixels more robust, we first computed the SSI values from each camera image and then aggregated the rewards into one single accumulated reward. This means that an auxiliary task receives the maximum reward only if it manages to move the color distribution's mean in both cameras in the desired direction.
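The aggregation over the two cameras can be sketched as follows (an illustration under the assumption that the per-camera rewards are averaged; the exact aggregation used on the robot may differ):

```python
def multi_camera_reward(per_camera_rewards):
    """Average the per-camera SSI rewards into one scalar; the maximum is
    only reached when the color distribution's mean moves in the desired
    direction in every camera view."""
    return sum(per_camera_rewards) / len(per_camera_rewards)
```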
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{assets/inc_dec_lift_pixels_real.png}
\caption{`Lift' learned from pixels on the real robot with the `increase' and `decrease' rewards used as auxiliary intentions. In this setup, the `increase' and `decrease' rewards are aggregated over two perpendicular camera angles.}
\label{fig:lift_pixels_real}
\end{figure}
The learning agent is again based on Scheduled Auxiliary Control, and we apply a Q-table based scheduler (SAC-Q) \cite{Riedmiller2018_Learning} to make learning as data-efficient as possible. We also add a special exploration scheme based on `multi-step' actions \cite{SR02, SR03:NCAF}: when selecting an action, we additionally determine how often to repeat its application by drawing a number of action repeats uniformly in the range $[1, 20]$. Also, for safety reasons, external forces are measured at the wrist sensor, and the episode is terminated if a threshold of 20\,N on any of the three principal axes is exceeded for more than three time steps in a row. If this happens, the episode is terminated with zero reward, encouraging the agent to avoid these failure situations in the future.
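The `multi-step' action exploration can be sketched as follows (illustrative; the policy interface is hypothetical):

```python
import random

def sample_action_with_repeat(policy_action_fn, observation, max_repeat=20):
    """Draw an action from the policy and a number of repeats uniformly
    in [1, max_repeat]; the action is then applied that many times."""
    action = policy_action_fn(observation)
    n_repeat = random.randint(1, max_repeat)
    return action, n_repeat
```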
We find that the agent successfully learns to lift the block as illustrated in Figure \ref{fig:lift_pixels_real}. After an initial phase of playful, pushing-style interaction, the agent discovers a policy for moving the block to the sides of the basket and after exploring the first successful touch rewards, it quickly learns to grasp and lift the block. After about 9000 episodes the agent shows a reliable lifting behaviour. This corresponds to roughly 6 days of training on the real robot.
In a further experiment, we replaced the above SSIs for a single color channel with SSIs that aggregate rewards over multiple color channels, allowing the agent to learn with objects of any color. We tested this by starting to learn with a single object until the robot started to lift, and then replacing the object with another object of a different color and/or a different shape. The learning curve is shown in Figure \ref{fig:lift_any_pixels_real}. The drops in the learning curve indicate that when the object is replaced, the agent does not yet know how to lift it. After some time of adaptation, it manages to lift again. Continuing this, we saw the robot learn to lift a wide variety of different objects, all from raw sensor information.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{assets/inc_dec_lift_any_pixels_real.png}
\caption{`Lift Any' learned from pixels on a real robot with the `increase' and `decrease' rewards used as auxiliary intentions. The block is replaced at various points throughout the experiment.}
\label{fig:lift_any_pixels_real}
\end{figure}
\medskip
\subsection{Ball in a Cup}
An important aspect of simple sensor intentions is that SSIs constitute a basic concept with some generality: the same set of SSIs can be helpful for different external target tasks. To illustrate how the same set of SSIs can be employed to master a completely different control problem, we show results on the dynamic Ball-in-a-Cup task \cite{schwab19simultaneously}: the task is to swing up a ball attached to a robot arm and to catch it with a cup mounted as the end effector of the arm. The agent only receives a binary positive reward if the ball is caught in the cup (see Figure \ref{fig:experimental_setup}, right).
Dynamic tasks in general exhibit additional difficulties compared to static tasks, e.g.\ the importance of timing and the difficulty of reaching (and staying in) possibly unstable regimes of the robot's configuration space. As a result, learning to catch the ball purely from pixels is out of reach for learning setups that only employ the sparse catch reward.
To show the versatility of simple sensor intentions, we choose the standard increase / decrease SSIs, resulting in 4 auxiliary tasks plus the binary task reward for `Catch'. The learning agent used is a SAC-Q agent.
As shown in Figure \ref{fig:catch_real}, the agent is able to learn this dynamic task purely from the standard observations: pixels and proprioceptive inputs. In order to cope with the dynamic nature of the task, 2 consecutive pixel frames were stacked for the controller's input.
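Stacking two consecutive frames can be sketched as follows (a common implementation choice; the channel-wise concatenation shown here is our assumption, not necessarily the authors' exact layout):

```python
import numpy as np

def stack_frames(frame_t, frame_tm1):
    """Concatenate the current and previous frame along the channel axis,
    giving the controller access to first-order motion information."""
    return np.concatenate([frame_tm1, frame_t], axis=-1)
```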
The simple sensor intentions encourage moving the colored pixels in the image, which results in learning to deliberately move the ball. The sequential combination of different auxiliary tasks, as enforced by the SAC-X agent, leads to a broad exploration of different movements of ball and robot. Eventually, first catches are observed, and once this happens, the agent quickly learns the `Catch' task. For the roughly 4000 episodes needed, the learning took about 3 days in real time.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{assets/catch_pixels_real.png}
\caption{`Catch' learned from pixels on a real robot.}
\label{fig:catch_real}
\end{figure}
\section{Related Work}
Transfer via additional tasks has a long-standing history in reinforcement learning to address exploration challenges and accelerate learning \citep{torrey2010transfer,pan2010survey}.
Conceptually, we can distinguish two main categories: auxiliary tasks \citep{taylor2009transfer,Jaderberg2017Unreal}, which are used to accelerate training on a final task, and multitask learning \citep{caruana1997multitask}, which focuses on the performance of all involved tasks.
Early work on multitask learning \citep{Dayan93} introduced the prediction of expected sums of future values for multiple tasks of interest.
Many successive approaches have extended modelling of multiple task rewards and investigated further types of transfer across tasks \citep{Sutton2011horde, Schaul15, Barreto2017}.
Similarly, work on the options framework investigates directions for decomposition of task solutions \citep{Dietterich98, Bacon17,daniel2016hierarchical, wulfmeier2019regularized}.
Auxiliary tasks have been investigated as manually chosen to help in specific domains \citep{Riedmiller2018_Learning, Dosovitskiy2017, Jaderberg2017Unreal,Mirowski16,cabi2017} and as based on agent behaviour \citep{andrychowicz2017hindsight}. As manual task design presents a significant burden for human operators, these methods demonstrate that even simpler sets of tasks can provide considerable benefits. In comparison to methods that use auxiliary tasks mostly for representation shaping by sharing a subset of network parameters across tasks \citep{Jaderberg2017Unreal, Mirowski16}, SSI shares data between tasks and thereby directly uses additional tasks for exploration.
For both directions, success considerably relies on the source of tasks and correlated reward functions, which this work focuses on.
Multiple agent behaviour based sources for tasks and additional objectives have been considered in prior work including diversity of behaviours
\citep{sharma2019dynamics,grimm2019disentangled, eysenbach2018diversity},
intrinsic motivation \citep{Chentanez04,Singh09,blaes2019control, Ngo2012, Kulkarni2016},
and empowerment of the agent \citep{klyubin2005all, mohamed2015variational,houthooft2016vime}.
Additionally, representation learning and the automatic identification of independently controllable features have provided another perspective on identifying tasks for transfer and improving exploration \citep{grimm2019disentangled, bengio2017independently, blaes2019control}.
Recent work on diversity has in particular demonstrated the importance of the space used for skill discovery \citep{sharma2019dynamics}. SSI provides a valuable perspective on determining useful task spaces with limited human effort.
Given large sets of tasks, the higher-level problem of choosing which task to learn creates a novel challenge similar to exploration within a single task. Often, random sampling over all possible tasks provides a strong baseline \citep{graves2017automated}. However, to accelerate learning, different perspectives build on curricula \citep{bengio2009curriculum, heess2017emergence, Oudeyer17} and iterative task generation \citep{Schmidhuber2013PowerPlayTA,wang2019paired}. In this work, we rely on task scheduling similar to \citet{Riedmiller2018_Learning} in order to optimize the use of training time.
\section{Conclusion and future work}
Learning to change sensor responses deliberately is a promising exploration principle in settings where it is difficult or impossible to experience an external task reward purely by chance. We introduce the concept of simple sensor intentions (SSIs) that implements the above principle in a generic way within the SAC-X framework. While the general concept of SSIs applies to any robotic sensor, the application to more complex sensors, like camera images, is not straightforward. We provide one concrete way to implement the SSI idea for camera images, which first need to be mapped to scalar values to fit into the proposed reward scheme. We argue that our approach requires less prior knowledge than the broadly used shaping reward formulations, which typically rely on task insight for their definition and on state estimation for their computation.
In several case studies we demonstrated the successful application to various robotic domains, both in simulation and on real robots. The SSIs we experimented with were mostly based on pixels, but touch and joint angle based SSIs were used as well. The definition of the SSIs was straightforward, with no or only minor adaptation between domains.
Future work will concentrate on extending this concept in various directions: e.g., improving the scheduling of intentions to deal with a large number of auxiliary tasks will enable the automatic generation of rewards and reward combinations.
\section*{Acknowledgments}
The authors would like to thank Konstantinos Bousmalis and Patrick Pilarski
and many others of the DeepMind team for their help and numerous useful discussions and
feedback throughout the preparation of this manuscript. In addition, we would like to particularly thank
Thomas Lampe and Michael Neunert and the robot lab team under the lead of Francesco Nori
for their continuous support with the real robot experiments.
\bibliographystyle{plainnat}
\section{Introduction and Background}
\label{sec:intro}
At least since 1800 BC, attempts have been made to measure the seabed. While bathymetry is currently employed primarily to measure ocean depth, many other use cases exist for lakes, dams, rivers, and other freshwater basins \cite{hare2008small}. Mapping of hydroelectric power plants, whose infrastructure must be inspected regularly, is a typical example of such a use case. The total volume and depth distribution, particularly around the dam discharge and the submerged spillway equipment, are of significant importance \cite{bourgeois1999autonomous}. Monitoring changes in ecologically sensitive water bodies that are under anthropogenic threat is another such use case. While considerable research has been done for deep water bodies, not much attention has been paid to the efficient mapping of shallow water bodies using cost-effective means.
Remote sensing is the acquisition, without physical contact, of information about a phenomenon or an object. It is used in several areas, such as military intelligence, planning, geographical surveys, and ecological studies \cite{campbell2011introduction}. Remote sensing helps to collect rapid and cost-effective data over large areas without disrupting ecology or geology. The phrase ``remote sensing'' refers primarily to satellite or aerial sensing technology used to detect or classify terrestrial objects based on electromagnetic signals. Orbital platforms gather and transmit electromagnetic spectrum data that provide researchers with adequate information to follow patterns such as catastrophes, natural calamities, and other occurrences. However, the major problem of such approaches is the poor resolution of the data in terms of space, spectrum, and radiometry. On the other hand, traditional manual surveys are time-consuming, labor-intensive, and costly. We thus require new ways or tools to measure water bodies carefully \cite{hudson2014underway}. Unmanned and autonomous vehicles can help solve these problems and allow scientists and researchers to understand and minimize the consequences of a constantly changing environment.
To be autonomous, a vehicle must function without external help or control to navigate and perform its intended operations. For the current prototypes of autonomous aquatic vehicles, there are numerous parameters, and the design varies considerably depending on the application. A typical unit has a hull, a propulsion system, a navigation system, and a data collection and transmission system \cite{bourgeois1999autonomous}. An ASV (autonomous surface vehicle) differs from a USV (unmanned surface vehicle) in that the former does not have a remote driver who operates the vehicle. ASVs can perform their functions, such as navigation and data collection, without a remote pilot, and can either transmit the data to a home base or store the data on board.
For side-scan sonar surveys of shallow-water bodies, the use of a non-ferrous vessel with the sensors attached to the ship's hull is more practical. Hare et al. \cite{hare2008small} illustrate different adjustments of traditional mid-size and small survey boats to shallow water applications. However, complicated ship designs have limited utility in shallow waters and in restricted places near the shores. The use of surface vehicles, as opposed to a hydrographic vessel capable of taking measurements in shallow seas, is an alternate method that is being employed more because of continuous technological advancements. Specht et al. \cite{specht2017application} describe the idea of bathymetric measurements for shallow seas using an independent, unmanned survey vessel. Due to their relevance for the safety of navigation and transit, the focus is on developing bathymetric charts, especially in the coastal area.
Suhari et al. \cite{suhari2017small} outline the transition from the bathymetric survey to a bathymetric vehicle used for the examination of the seafloor topography with hydrographic measurements; the focus is on inland water and lake surveys. Similarly, the gathering of data on bathymetry and environmental variables in shallow seas through the use of several sensors in small surface vehicles is demonstrated by Giordano et al. \cite{giordano2016integrating}. This article provides several advances in the guidance, navigation, and control of USVs so that autonomous survey campaigns can be carried out in shallow seas.
We developed a cost-effective and autonomous solution for mapping shallow-water bodies using a side-scan sonar. We describe its design and the data collection procedure.
\section{Materials and Method} \label{sec:format}
\subsection{Design of the vehicle}
Catamaran structures have higher stability due to their wider beams, lower power requirements due to smaller hydrodynamic resistance, and a shallower draught, which helps in shallower areas \cite{stanghellini2020openswap}. Since the acoustic and ultrasonic signal generated by the sonar transducer should be perfectly coupled with the water and the sensor should also be shielded from the effect of turbulence, the design of the components, propellants, and sensor placements was planned accordingly, as shown in Figures~\ref{fig_datasets} and~\ref{fig_datasets2}.
\begin{figure}
\centering
\begin{subfigure}[t]{.45\linewidth}
\centering\includegraphics[width=.99\linewidth]{images/sonar-install.JPG}
\caption{Sonar transducer fitted in hull}
\label{fig_datasets:sub1}
\end{subfigure}
\medskip
\begin{subfigure}[t]{.45\linewidth}
\centering\includegraphics[width=.99\linewidth]{images/sonarHead.JPG}
\caption{Sonar head in hull}
\label{fig_datasets:sub2}
\end{subfigure}
\begin{subfigure}[t]{.45\linewidth}
\centering\includegraphics[width=.99\linewidth]{images/IMG-4444.JPG}
\caption{Electronics in testing stage}
\label{fig_datasets:sub3}
\end{subfigure}
\caption{ASV in testing stages}
\label{fig_datasets}
\end{figure}
To ease the implementation, we selected a readily available catamaran hull, and 3D printed the required structure on them to safely incorporate the electronic components.
\subsection{Electronic Components}
The main electronic components used in this study are listed in Table~\ref{tab:Electronic-Component}.
\begin{table}[]
\caption{Main Electronic Components}
\label{tab:Electronic-Component}
\begin{tabular}{ll}
\toprule
\textbf{Component} & \textbf{Specification} \\
\midrule
Sonar & CHIRP SI GPS G2 \\
Controller & PixHawk PX4 \\
Positioning System & Ublox Neo-M8N GNSS \\
Motor with ESC & 3100/3930KV Brushless \\
Power Supply & 8000mAh Lipo \& 18650 Li-ion \\
RC Transmitter Receiver & 2.4GHz 6CH AFHDS \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Humminbird Side Imaging (SI) Sonar}
This study used the Humminbird Helix 5 CHIRP SI GPS G2 to record the bathymetry data. The instrument comes with a transducer and a control head that have three options for underwater imaging: down imaging, dual beam, and side imaging. It is capable of 3 frequencies: 83, 200, and 455 kHz.
For dual beam, the transducer projects two concentric conical sonar beams into the water directly under the transducer, with opening angles of $20^{\circ}$ (200 kHz) and $60^{\circ}$ (83 kHz). The down imaging uses a razor-thin, high-frequency down-looking sonar beam to create a 2D profile of the waterbed. For side imaging, the sonar beams are thin-shaped, with angles of $86^{\circ}$, and can reach a range of 73 meters side to side.
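For intuition, the footprint of a conical beam on a flat bed follows from simple geometry (a sketch under the assumption of a downward-pointing transducer; not taken from the instrument's documentation):

```python
import math

def beam_footprint_diameter(depth_m, beam_angle_deg):
    """Diameter of a conical sonar beam's footprint on a flat bed:
    d = 2 * depth * tan(beam_angle / 2)."""
    half_angle = math.radians(beam_angle_deg / 2.0)
    return 2.0 * depth_m * math.tan(half_angle)
```

At 10 m depth, the $60^{\circ}$ beam covers a circle of roughly 11.5 m diameter, while the $20^{\circ}$ beam covers about 3.5 m.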
CHIRP technology allows the sonar pulse to be modulated in a range of frequencies. For 2D profiling, frequency ranges of 75-95 kHz and 175-225 kHz were used. For down imaging and side imaging, modulation of 420-520 kHz was used. The unit has a maximum depth capability of 457 m. The unit is equipped with an internal precision GPS module for geolocation tagging.
\begin{figure}
\centering
\begin{subfigure}[t]{.45\linewidth}
\centering\includegraphics[height=3cm]{images/IMG-6619.jpg}
\caption{Inside View}
\label{fig_datasets2:sub1}
\end{subfigure}
\medskip
\begin{subfigure}[t]{.45\linewidth}
\centering\includegraphics[height=3cm]{images/IMG-6618.jpg}
\caption{Outside View}
\label{fig_datasets2:sub2}
\end{subfigure}
\caption{3D printed part with component placement}
\label{fig_datasets2}
\end{figure}
\subsubsection{Controller: PixHawk}
The PixHawk internal software architecture consists of two main layers: the flight stack and the middleware. The flight stack is an autonomous control and estimation system, and the middleware is a general robotics layer that can support any autonomous robot, providing internal-external communication and hardware integration.
An overview of the flight stack is shown in Figure~\ref{fig:flight-stack}. The estimator combines one or more sensor inputs (such as GPS, IMU, etc.) and computes the vehicle state. The position controller takes the inputs from the sensors (through the estimator), the navigator (such as the autonomous flight controller), and the remote controller (RC), makes decisions, and drives the actuators, such as motors or servo controllers. The controller's goal is to adjust the estimated state such that it matches the position setpoints (desired positions) from the navigator or RC; the controller's output is a correction to eventually reach that setpoint. The mixer takes the commands (such as turning right or left) from the controller and translates them into individual motor commands. Tuning the variables of the controller, such as P, I, D, maximum throttle, angle, cruise speed, and acceleration, ensures that the vehicle works properly in real-world conditions.
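The PID correction described above can be sketched as follows (a generic textbook formulation for illustration, not PixHawk's actual code):

```python
class PIDController:
    """Minimal PID controller sketch: drives the estimated state toward
    the setpoint, as in the flight-stack description above."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```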
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centering\includegraphics[width=.96\linewidth]{images/ArduBoat.png}
\end{minipage}
\caption{Overview of the flight stack.}
\label{fig:flight-stack}
\end{figure}
\subsubsection{Positioning system}
To provide the ASV's position during the survey, a separate GNSS module, the Ublox Neo-M8N, together with a digital compass (HMC5883L), was used in conjunction with the PixHawk. This GNSS module can track the satellite constellations GPS, GLONASS, Galileo, BeiDou, QZSS, and SBAS.
\subsection{Sensor calibration}
We calibrated the sonar and the controller in the controlled environment of a swimming pool. The depth error was estimated against the known depth of the pool and applied as an offset, as shown below:
\begin{equation}
\text{error} = \text{Depth}_{\text{sonar}} - \text{Depth}_{\text{known}}
\end{equation}
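Concretely, the calibration reduces to estimating a constant offset from repeated pool readings and subtracting it from field measurements (a sketch with made-up numbers, not our actual calibration values):

```python
def calibrate_offset(sonar_readings, known_depth):
    """Mean error of the raw sonar depths against the known pool depth."""
    return sum(d - known_depth for d in sonar_readings) / len(sonar_readings)

def correct(raw_depth, offset):
    """Apply the calibration offset to a field reading."""
    return raw_depth - offset

# e.g. a 2.00 m pool read consistently ~0.15 m too deep
offset = calibrate_offset([2.14, 2.16, 2.15, 2.15], known_depth=2.00)
depth = correct(2.45, offset)   # corrected field reading
```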
Using the Mission Planner software, the parameters for steering rate, steering modes, speed, throttle, motors, and navigation were tuned, including the P, I, D values, turn radius, and maximum acceleration.
\begin{figure}
\centering
\begin{subfigure}[t]{.49\linewidth}
\centering\includegraphics[height=3cm]{images/swimmingpool-test.JPG}
\caption{ASV in Swimming Pool}
\label{fig_datasets:sub1}
\end{subfigure}
\medskip
\begin{subfigure}[t]{.49\linewidth}
\centering\includegraphics[height=3cm]{images/testing-powai.jpg}
\caption{ASV in Powai Lake}
\label{fig_datasets:sub2}
\end{subfigure}
\caption{ASV in data collection}
\label{fig_datasets3}
\end{figure}
\section{Data Acquisition}
\subsection{Study Area}
The study area was Powai Lake, an artificial freshwater lake located in Mumbai, India. The lake spreads over around 2 square kilometers, from N019.08.140, E072.53.740 in the North-West to N019.07.176, E072.54.829 in the South-East.
Data acquisition was carried out in summer, during the month of March. The presence of aquatic weeds such as water hyacinth and water lettuce at the time of the survey prevented us from covering the lake in its entirety.
\begin{figure}
\centering
\begin{subfigure}[t]{.49\linewidth}
\centering\includegraphics[height=3cm]{images/powai-lake-study.png}
\caption{Powai Lake}
\label{fig_study:sub1}
\end{subfigure}
\medskip
\begin{subfigure}[t]{.49\linewidth}
\centering\includegraphics[height=3cm]{images/path-plan-powai.PNG}
\caption{Path Planned}
\label{fig_study:sub2}
\end{subfigure}
\caption{Powai Lake: Study Area}
\label{fig:powai-study-area}
\end{figure}
\section{Results and Discussions}
The sonar files, recorded in the *.SON, *.IDX, and *.DAT formats, were viewed and interpreted using the ``ReefMaster'' software. A final map, shown in Figure~\ref{fig:powai-depth-map}, was obtained by interpolating the data points with a TIN (triangulated irregular network): the data points were triangulated, and the values between them were interpolated along the slopes of the connecting triangles.
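The TIN interpolation step can be illustrated on a single triangle: three soundings determine a plane, and any interior point is interpolated barycentrically (a self-contained sketch of the principle; ReefMaster itself triangulates the whole point set):

```python
def tin_interpolate(p, a, b, c):
    """Linearly interpolate the depth at point p inside triangle abc.

    a, b, c are (x, y, depth) soundings; p is an (x, y) position."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2          # barycentric weights sum to 1
    return w1 * z1 + w2 * z2 + w3 * z3

# the depth at the centroid is the mean of the three soundings
z = tin_interpolate((1 / 3, 1 / 3), (0, 0, 3.0), (1, 0, 4.0), (0, 1, 5.0))
```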
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centering\includegraphics[width=.96\linewidth]{images/powai-depth-with-legend.png}
\end{minipage}
\caption{Depth Map Generated}
\label{fig:powai-depth-map}
\end{figure}
\begin{table}
\caption{Water Volume}
\label{tab:waterVolume}
\begin{tabular}{llll}
\toprule
\textbf{Lower} (m) & \textbf{Upper} (m) & \textbf{Volume ($m^3$)} & \textbf{Area ($m^2$)} \\
\midrule
0.00 & 1.00 & 1651393.27 & 1765845.00 \\
1.00 & 2.00 & 1143354.21 & 1407421.00 \\
2.00 & 3.00 & 657685.43 & 857950.00 \\
3.00 & 4.00 & 290903.70 & 485206.00 \\
4.00 & 5.00 & 37717.81 & 91768.00 \\
5.00 & 6.00 & 1911.85 & 12448.00 \\
\bottomrule
\end{tabular}
\end{table}
Table \ref{tab:waterVolume} shows the volume and surface area covered by each depth range at 1-meter intervals. For example, an area of 12448 $m^2$ lies within the depth range of 5 to 6 meters, and that range holds a volume of 1911.85 $m^3$.
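Such a hypsometric table can be reproduced from the interpolated grid by summing, for each 1 m depth slab, the area of the cells deeper than the slab's lower bound and the volume of water contained in that slab (a schematic of the computation with toy numbers, not the survey data):

```python
def hypsometry(cells, max_depth, bin_size=1.0):
    """cells: (depth_m, cell_area_m2) pairs from the interpolated grid.

    Returns a list of ((lower, upper), slab_volume, area), where the volume
    counts only the water lying between the depths `lower` and `upper`."""
    slabs = []
    lower = 0.0
    while lower < max_depth:
        upper = lower + bin_size
        wet = [(d, a) for d, a in cells if d > lower]
        area = sum(a for _, a in wet)
        volume = sum(a * (min(d, upper) - lower) for d, a in wet)
        slabs.append(((lower, upper), volume, area))
        lower = upper
    return slabs

# toy grid: three 10 m^2 cells at depths 0.5 m, 1.5 m and 2.5 m
slabs = hypsometry([(0.5, 10.0), (1.5, 10.0), (2.5, 10.0)], max_depth=3.0)
```

The per-slab volumes add up to the total water volume, mirroring how the volume column of Table \ref{tab:waterVolume} sums to the total reported below.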
The following findings were obtained from the survey of Powai Lake, after interpolating the depth data:
\begin{itemize}
\item Total Volume: 3782966 $m^3$
\item Total Mapped Area: 1765845 $m^2$
\item Average Depth: 2.1 m
\item Maximum Depth: 5.83 m
\end{itemize}
\section{Conclusion and Future Work}
\label{sec:typestyle}
The results of the survey show that this technique is suitable for monitoring shallow areas. Such areas change rapidly due to erosion and deposition, especially during storms, as well as due to human activity. Because of the difficulty of navigating in low water and the dense spatial sampling required, such water bodies present a tough problem for standard ship sounding. This catamaran-based autonomous surface vehicle can assist in investigating the implications of a changing environment for such shallow water bodies.
The autonomous system can remain on duty as long as it has sufficient energy, and with advanced mission planning and decision-making capabilities on board it can adapt on site to environmental changes or to new survey tasks. The modest size of this model reduces wind loading, since the upper half is aerodynamic and has little surface presence. The stability and streamlined shape also help the model cope with difficult environmental fluctuations and enhance the manageability of the autonomous surface vehicle.
This research paper introduced a low-cost, robust autonomous surface vehicle for bathymetry collection. It should also be noted that although such systems can collect data independently, a considerable amount of human labor and resources is still required to manage and deploy them. New sensors and technologies can be integrated into the system to improve the vehicle's independence and efficiency.
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:intro} Let $y_0=y_0(t)$ be a solution of the linear differential
equation
\begin{equation}\label{aa}
a_0(t) y^{(n)} + a_1(t) y^{(n-1)} + \dots + a_n(t) y = 0
\end{equation}
where $a_i\in
k=\mathbb{C}(t)$ are functions, rational in the independent variable $t$. We
are interested in determining the equation of minimal degree $d\leq n$
\begin{equation}\label{bb}
b_0(t) y^{(d)} + b_1(t) y^{(d-1)} + \dots + b_d(t) y = 0
\end{equation}
such that
\begin{itemize}
\item $y_0$ is a solution
\item the coefficients $b_i$ are algebraic functions in $t$
\end{itemize}
Recall that a function $b(t)$ is said to be algebraic in $t$ if there exists a
polynomial $P$ with coefficients in $k=\mathbb{C}(t)$, such that $P(b(t))\equiv
0$.
We shall suppose that, more generally, $k$ is an arbitrary differential field
of characteristic zero with algebraically closed field of constants, $k^a$ is
its algebraic closure, and $a_i\in k$, $b_j\in k^a$. To find the equation
$(\ref{bb})$ we consider the differential Galois group $G$ of $(\ref{aa})$ and
its connected subgroup $G^0$ containing the unit element of $G$. Our first
result, Theorem \ref{main}, says that the orbit of $y_0$ under the action of
$G^0$ spans the solution space of $(\ref{bb})$.
A particular attention is given further to the case in which (\ref{aa}) is of
Fuchs or Picard-Fuchs type. Theorem \ref{main} is re-formulated in terms of the
action of the corresponding monodromy groups in Theorem \ref{th:main1} and
Theorem \ref{th:algeq}.
In the last part of the paper, section \ref{sec:ex}, we apply the general
theory to some Abelian integrals appearing in the study of perturbations of the
Lotka-Volterra system. These integrals have the form
$$
I(t) = \int_{\gamma(t)} \omega
$$
where
$$\gamma(t)\subset \{ (x,y)\in\mathbb{C}^2 : F(x,y)=t\}$$
is a continuous family of ovals,
$$
F(x,y)= x^py^p(1-x-y) \mbox{ or } F(x,y)= x^p (y^2+x-1)^q,\;\; p,q\in
\mathbb{N}
$$
and $\omega$ is a suitable rational one-form on $\mathbb{C}^2$. In the first
case the Abelian integral satisfies a Picard-Fuchs equation of order $2p+2$. It
has been shown by van Gils and Horozov \cite{hvg}, that $I(t)$ satisfies also a
second order differential equation whose coefficients are functions algebraic
in $t$. This makes it possible to count the zeros of $I(t)$ (by the usual Rolle's theorem
for differential equations) and finally, to estimate the number of limit cycles
of the perturbed plane foliation defined by
$$
dF +\varepsilon \widetilde{\omega}=0
$$
where $\widetilde{\omega}$ is a real polynomial one-form on $\mathbb{R}^2$. By
making use of Theorem \ref{main} we provide the theoretical explanation of
the phenomenon observed first in \cite{hvg}, see section
\ref{sec:hvg}. Another interesting case studied in the paper is when $F(x,y)=
x^p (y^2+x-1)^q$ ($p,q$ relatively prime). The Abelian integral $I(t)$
satisfies a Picard-Fuchs equation of order $p+q+1$, which is the dimension of
the first homology group of the generic fiber $F^{-1}(t)$. We show that the
minimal order of the equation (\ref{bb}) is $p+q+1$ or $p+q$ or $p+q-1$, and
that the coefficients $b_i(t)$ are rational in $t$, see section
\ref{sec:parab}. The meaning of this is that the differential Galois group of
the Picard-Fuchs equation is connected and, in contrast to \cite{hvg}, there is
no reduction of the degree, which may only drop by one or two, depending on
whether $\omega$ has or has not residues ``at infinity''.
\begin{center}
\emph{Acknowledgements}\end{center}
Part of the paper was written while the first author was
visiting the University Paul Sabatier of Toulouse. He is obliged for the
hospitality.
\section{Statement of the result}
\label{sec:gen}
Let $k$ be a differential field of characteristic zero with algebraically
closed field of constants $C$, $E\supset k$ be a Picard-Vessiot extension for
the homogeneous monic linear differential operator $L$
\begin{equation}\label{l}
L(y)= y^{(n)} + a_{n-1}y^{(n-1)}+ \dots + a_1 y^{(1)} + a_0 y, a_i \in k
\end{equation}
and $y_0\in E$ a solution, $L(y_0)=0$. We denote by $k^a \supset k$ the
algebraic closure of $k$ which is also a differential field.
\begin{definition}
\emph{A homogeneous monic linear differential operator $\widetilde{L}$ with
coefficients in $k^a$ is said to be annihilator of $y_0$, provided that
$\widetilde{L}(y_0)=0$. The annihilator $\widetilde{L}$ is said to be minimal,
provided that its degree is minimal.}
\end{definition}
The
definition has a sense, because the algebraic closure $E^a$ of $E$ is a
differential field which contains $E$ and $k^a$ as differential subfields. The
minimal annihilator obviously exists and is unique, its degree is bounded by
the degree of $L$ which is an annihilator of $y_0$.
We are interested in the following question:
\emph{For a given solution $y_0$ as above, find the corresponding minimal
annihilator $\widetilde{L}$.}
To answer, consider the differential Galois group $G=Gal(E/k)$, which is the
group of differential automorphisms of $E$ fixing $k$. Recall that $G$ is an
algebraic group over $C$, and let
$G^0$ be the connected component of $G$, containing the unit element (the identity). The intermediate
field $\widetilde{k}=E^{G^0}$, $k\subset \widetilde{k} \subset E$, of elements
invariant under $G^0$ is then a finite algebraic extension of $k$.
Let $y_0, y_1, \dots , y_{d-1}$ be a basis of the $C$-vector space spanned by
the orbit
$$
G^0y_0 = \{g(y_0): g\in G^0\}\subset E
$$
and consider the Wronskian determinant in $s$ variables
$$
W(y_1,y_2,\dots,y_s) = \det \left(%
\begin{array}{cccc}
y_1 & y_2 & \dots & y_s \\
y_1' & y_2' & \dots & y_s' \\
\vdots & \vdots & \ddots & \vdots \\
y_1^{(s-1)} & y_2^{(s-1)} & \ldots & y_s^{(s-1)} \\
\end{array}%
\right) .
$$
$y_0$ satisfies the differential equation
\begin{equation}
W(y,y_0, y_1, \dots , y_{d-1})=0
\end{equation}
and because of the $C$-linear independence of $y_i$
$$
W(y_0,y_1,\dots,y_{d-1})\neq 0 .
$$
Let $\widetilde{L}$ be the monic linear differential operator defined by
\begin{equation}\label{minimal}
\widetilde{L} (y) = \frac{W(y,y_0,y_1,\dots,y_{d-1})}{W(y_0,y_1,\dots,y_{d-1})}
.
\end{equation}
Its coefficients are invariant under the action of $G^0$, and hence they belong
to the differential field $ \widetilde{k}=E^{G^0} . $
Our first result is the following
\begin{theorem}
\label{main} The differential operator $\widetilde{L}$ (\ref{minimal}) is the
minimal annihilator of the solution $y_0$.
\end{theorem}
\begin{proof}
Let $L_{min}$ be the unique differential operator of minimal degree with
coefficients in some algebraic extension $k_{min}$ of $k$, such that
$L_{min}(y_0)=0$. Denote by $E_{min}$ the Picard-Vessiot extension for
$L_{min}$.
As a first step, we shall show that $E_{min}$ can be identified with a
differential subfield of the Picard-Vessiot extension $E$ for $L$. The algebraic
closure $E^a$ of $E$ is a differential field which contains $E$ and every
algebraic extension of $k$ (hence it contains $k_{min}$). Therefore the
compositum $k_{min} E$ of $k_{min}$ and $E$, that is to say the smallest
field containing $k_{min}$ and $E$, is well defined \cite{lang}. The
differential automorphisms group $Gal(k_{min} E/k)$ acts on the compositum
$k_{min} E$ and leaves $E$ invariant. Therefore $Gal(k_{min} E/k_{min})\subset
Gal(k_{min} E/k)$ leaves $E$ invariant too, and the orbit $Gal(k_{min}
E/k_{min}) y_0$ is contained in $E$. Let $y_0, y_1, \dots , y_{m-1}$ be a basis
of the $C$-vector space spanned by this orbit. Then $y_0$ satisfies the
differential equation
\begin{equation}\label{wronskian}
\frac{W(y,y_0, y_1, \dots , y_{m-1})}{W(y_0, y_1, \dots , y_{m-1})}=0
\end{equation}
and the coefficients of the corresponding monic linear homogeneous differential
operator belong to $k_{min}$.
Consider the ring of differential polynomials
$$
k_{min}\{Y\}= k_{min}[Y^{(i)}: i=0,1,2,\dots]
$$
in formal variables $Y^{(i)}$. Identifying differential operators with
coefficients in $k_{min}$ with polynomials (the derivatives $y^{(i)}$ correspond to the
variables $Y^{(i)}$), we may consider the ideal $I$ generated by the homogeneous
linear differential operators with coefficients in $k_{min}$ which annihilate
$y_0$. This is obviously a linear ideal which, according to the general theory
(see \cite[Proposition 1.8]{magid} ), is principal in the following sense.
There exists a linear differential operator with coefficients in $k_{min}$,
which generates $I$. Clearly the generator of $I$ is the operator $L_{min}$
defined above. It follows that the solution space of $L_{min}$ can be
identified to a $C$-vector subspace of the solution space of the operator
defined by (\ref{wronskian}), which implies
\begin{equation}
\label{kkee} k \subset k_{min} \subset E_{min} \subset E .
\end{equation}
(the first two inclusions hold by definition).
At the second step of the proof we shall show that $\deg L_{min} = \deg
\widetilde{L}$. Indeed, the automorphism group $G^0$ fixes the elements
of $E$ which are algebraic over $k$. In particular, the elements of $k_{min}$ are
fixed by $G^0$ and hence $G^0$ induces differential automorphisms of the
Picard-Vessiot extension $E_{min}$. This shows that the solution space of
$L_{min}$ contains the solution space of $\widetilde{L}$ and
$$
\deg L_{min} \geq \deg \widetilde{L} .
$$
Reciprocally, if we consider (by the construction above) the ideal in
$\widetilde{k}\{Y\}$ generated by all linear homogeneous differential operators
with coefficients in $\widetilde{k}$, which annihilate $y_0$, then this ideal
is linear and principal. The generator of the ideal corresponds to the operator
$L_{min}$, and hence
$$
\deg L_{min} \leq \deg \widetilde{L} .
$$
Theorem \ref{main} is proved.
\end{proof}
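As a simple illustration of Theorem \ref{main} (an example of ours, not taken from the applications below), let $k=\mathbb{C}(t)$ and
$$
L(y)= y'' + \frac{1}{2t}\, y' - \frac{1}{4t}\, y,
$$
whose solution space is spanned by $e^{\sqrt t}$ and $e^{-\sqrt t}$. The monodromy around $t=0$ exchanges these two solutions, so $G$ is not connected, while $G^0$ preserves each of the lines $\mathbb{C}e^{\pm\sqrt t}$. For $y_0=e^{\sqrt t}$ the orbit $G^0 y_0$ spans a one-dimensional space, $\widetilde{k}=k(\sqrt t)$, and formula (\ref{minimal}) gives the minimal annihilator
$$
\widetilde{L}(y) = y' - \frac{y_0'}{y_0}\, y = y' - \frac{1}{2\sqrt t}\, y,
$$
of degree one, with coefficient algebraic over $k$.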
To the end of this section we apply Theorem \ref{main} to Fuchs and
Picard-Fuchs differential operators. The minimal annihilator of a solution is
described in terms of the action of the monodromy group.
Let $L$ be a Fuchsian differential operator of order $n$ on the Riemann sphere
$\mathbb{P}^1$, $\Delta=\{ t_1,\ldots,t_s,\infty \}$ be the set of its
singular points. The field of constants is $C=\mathbb{C}$, the coefficients of
$L$ belong to the field of rational functions $k=\mathbb{C}(t)$. Denote by $S\cong
\mathbb{C}^n$ the complex vector space of solutions of $L=0$. The monodromy
group $\mathcal{M}$ of $L$ is the image of the homomorphism (monodromy representation)
$$
\pi_1(\mathbb{P}^1 \setminus \Delta,*) \rightarrow GL(S) .
$$
The Zariski closure of $\mathcal{M}$ in $GL(S)$ is the differential Galois group $G$ of
$L$:
$$\overline {\mathcal{M}}=G .$$
A vector subspace $V \subset S$ is invariant under the action of $G$ if and
only if it is invariant under the action of $\mathcal{M}$. A subspace $V \subset S$ is
said to be \emph{virtually invariant}, provided that it is invariant under the
action of identity component $G^0$ of $G$, or equivalently, under the action of
$\mathcal{M}\bigcap G^0$. For an automorphism $g\in G$ the set $g(V)\subset S$ is a
vector subspace of the same dimension. Thus $G$ acts on the Grassmannian space
$Gr(d,S)$
$$
G \times Gr(d,S) \rightarrow Gr(d,S): (g,V) \mapsto g(V) .
$$
and for every
plane $V\in Gr(d,S)$ the orbit
$$
G(V)= \{ g(V): g\in G\}\subset Gr(d,S)
$$
is well defined.
\begin{lemma}
\label{virtual} A plane $V\in Gr(d,S)$ is virtually invariant, if and only if
the orbit $G(V)\subset Gr(d,S)$ is finite.
\end{lemma}
\begin{proof}
We have
$$
\overline{\mathcal{M}(V)} = \overline{\mathcal{M}}(V) = G(V) \supset G^{0} (V)
.
$$
If the orbit $\mathcal{M}(V)$ is finite, then $\mathcal{M}(V)=
\overline{\mathcal{M}(V)}$ and hence $G^{0}(V)$ is finite. As $G^0$ is a
connected Lie group, then $G^{0}(V)= V$ and $V$ is virtually invariant.
Suppose that $V$ is virtually invariant. As $G/G^0$ is a finite group, then
$G^{0}(V)= V$ implies that the orbit $\overline{\mathcal{M}}(V) = G(V)\subset
Gr(d,S)$ is finite and hence $\mathcal{M}(V) \subset \overline{\mathcal{M}}(V)$
is finite too.
\end{proof}
Let $L$ be a Fuchsian differential operator as above, and $y_0$ a solution,
$L(y_0)=0$. The minimal annihilator of $y_0$ is a differential operator
$\widetilde{L}$ of minimal degree with coefficients in some algebraic extension
of $\mathbb{C}(t)$. Thus $\widetilde{L}$ is a Fuchsian operator too, but on a
suitable compact Riemann surface realized as a finite covering of
$\mathbb{P}^1$. Let $V_1, V_2\subset S$ be two virtually invariant planes
containing the solution $y_0$. Then $V_1\cap V_2$ is a virtually invariant
plane containing $y_0$. This shows the existence of a unique virtually
invariant plane $V$ of minimal dimension, containing $y_0$. We call such a
plane minimal. According to Lemma \ref{virtual} and Theorem \ref{main} the
minimal annihilator of $y_0$ is constructed as follows. Let
$y_0,y_1,\dots,y_{d-1}$ be a basis of the minimal virtually invariant plane $V$
containing $y_0$. Consider the Fuchsian differential operator defined as in
formula (\ref{minimal}).
\begin{theorem}
\label{th:main1} The differential operator $\widetilde{L}$ is the minimal
annihilator of the solution $y_0$. The degree of $\widetilde{L}$ equals the
dimension of the minimal virtually invariant plane containing $y_0$.
\end{theorem}
Suppose finally that $L$ is a linear differential operator of Picard-Fuchs (and
hence of Fuchs) type. We shall adapt Theorem \ref{th:main1} to this particular
setting.
Let $F: \mathbb{C}^2 \rightarrow \mathbb{C}$ be a bivariate non-constant polynomial. It is
known that there is a finite number of atypical points $\Delta=\{
t_1,\ldots,t_n \}$, such that the fibration defined by $F$
\begin{equation}\label{milnor}
F:\mathbb{C}^2\setminus F^{-1}(\Delta) \rightarrow \mathbb{C}\setminus \Delta
\end{equation}
is locally trivial. The fibers $F^{-1}(t)$, $t\not\in \Delta$ are open Riemann
surfaces, homotopy equivalent to a bouquet of a finite number of circles.
Consider also the associated homology and co-homology bundles with fibers
$H_1(F^{-1}(t),\mathbb{C})$ and $H^1(F^{-1}(t),\mathbb{C})$ respectively. Both
of these vector bundles carry a canonical flat connection. Choose a locally
constant section $\gamma(t)\in H_1(F^{-1}(t),\mathbb{C})$ and consider the
Abelian integral
\begin{equation}\label{abelian}
I(t)= \int_{\gamma(t)}\omega
\end{equation}
where $\omega$ is a meromorphic one-form on $\mathbb{C}^2$ which restricts to a
holomorphic one-form on the complement $\mathbb{C}^2\setminus F^{-1}(\Delta)$.
The Milnor fibration (\ref{milnor}) induces a representation
\begin{equation}\label{hmilnor}
\pi_1(\mathbb{C}\setminus\{\Delta\},*)\rightarrow
Aut(H_1(F^{-1}(t),\mathbb{C}))
\end{equation}
which induces the monodromy representation of the Abelian integral $I(t)$.
Let $V_t\subset H_1(F^{-1}(t),\mathbb{C})$ be a continuous family of complex
vector spaces obtained by a parallel transport. The space $V_t$ can be seen as
a point of the Grassmannian variety $Gr(d,H_1(F^{-1}(t),\mathbb{C}))$.
Therefore the representation (\ref{hmilnor}) induces an action of the
fundamental group $\pi_1(\mathbb{C}\setminus\{\Delta\},*) $ on
$Gr(d,H_1(F^{-1}(t),\mathbb{C}))$.
\begin{definition}
We say that a complex vector space $V_t\subset H_1(F^{-1}(t),\mathbb{C})$ of
dimension $d$ is virtually invariant, provided that its orbit in the
Grassmannian $Gr(d,H_1(F^{-1}(t),\mathbb{C}))$ under the action of
$\pi_1(\mathbb{C}\setminus\{\Delta\},*) $ is finite. A virtually invariant
space $V_t$ is said to be irreducible, if it does not contain non-trivial
proper virtually invariant subspaces.
\end{definition}
Let $\gamma(t)$ be a locally constant section of the homology bundle defined by
$F$. As intersection of virtually invariant vector spaces $V_t\subset
H_1(F^{-1}(t),\mathbb{C})$ containing $\gamma(t)$ is virtually invariant again,
then such an intersection is the minimal virtually invariant space containing
$\gamma(t)$. Clearly a virtually invariant minimal space containing $\gamma(t)$
need not be irreducible : it might contain a virtually invariant subspace
subspace not containing $ \gamma(t)$.
Consider the Abelian integral $I(t)= \int_{\gamma(t)}\omega$, where $\gamma(t)$
is a locally constant section of the homology bundle and $ \omega$ is a
meromorphic one-form as above. Denote by $V_t$ the minimal virtually invariant
vector space containing $\gamma(t)$.
\begin{theorem}
\label{th:algeq} If $V_t$ is irreducible, then either the Abelian integral
$I(t)$ vanishes identically, or its minimal annihilator is a linear
differential operator of degree $d=\dim V_t$.
\end{theorem}
\begin{proof}
Let $S_t$ be the complex vector space of germs of analytic functions in a
neighborhood of $t$, obtained from $I(t)$ by analytic continuation along a
closed path in $\mathbb{C}\setminus \Delta$. It suffices to check that $V_t$ is
isomorphic to $S_t$. Equivalently, for every locally constant section
$\delta(t)\in V_t$ we must show that $\int_{\delta(t)} \omega\not\equiv 0$.
Indeed, the vector space of all locally constant sections $\delta(t)$ with
$\int_{\delta(t)} \omega\equiv 0$ is an invariant subspace of $V_t$. As $V_t$
is supposed to be irreducible, then this space is trivial. Theorem
\ref{th:algeq} follows from Theorem \ref{th:main1}.
\end{proof}
The above theorem is easily generalized. For instance, the coefficients of the
minimal annihilator of $I$ are rational functions of $t$ if and only if the
minimal virtually invariant space $V_t$ containing $\gamma$ is monodromy
invariant, i.e.\ its orbit in the Grassmannian consists of a single point.
Further, it might happen that $V_t$ is reducible. Let $V_t^0$ be a proper
virtually invariant subspace of $V_t$. If the factor space $V_t/V_t^0$ is
irreducible (does not contain proper virtually invariant subspaces), then
Theorem \ref{th:algeq} still holds true, but the minimal annihilator of $I(t)$
is of order equal to $\dim V_t - \dim V_t^0$. Multidimensional Abelian
integrals (along $k$-cycles) are studied in a similar way.
\section{Examples of Abelian integrals related to perturbation of the Lotka-Volterra system}
\label{sec:ex} Let $F$ be a real polynomial and $\omega=Pdx + Qdy$ a real
polynomial differential one-form in $\mathbb{R}^2$. Consider the perturbed real
foliation in $\mathbb{R}^2$ defined by
\begin{equation}\label{16th}
dF + \varepsilon \omega = 0 .
\end{equation}
The infinitesimal 16th Hilbert problem asks for the maximal number of limit
cycles of (\ref{16th}) when $\varepsilon \sim 0$ as a function of the degrees
of $F, P, Q$. Let $\gamma(t)\subset F^{-1}(t)$ be a continuous family of closed
orbits of the unperturbed foliation $dF=0$. The zeros of the Abelian integral $I(t)=
\int_{\gamma(t)}\omega$ approximate limit cycles (at least far from the
atypical points of $F$) in the following sense. If $I(t_0)=0, I'(t_0)\neq 0$,
then a limit cycle of (\ref{16th}) tends to the oval $\gamma(t_0)$ as
$\varepsilon$ tends to $0$. The question of explicitly computing the number of
zeros of Abelian integrals remains open (although substantial progress was
recently achieved, see \cite{ilya02,bny} and the references therein).
Generically an Abelian integral satisfies a Picard-Fuchs differential equation
$$
I^{(d)} + a_1 I^{(d-1)}+\dots + a_d I = 0, \qquad a_i\in \mathbb{R}(t)
$$
of order equal to the dimension of the homology group of the typical fiber
$F^{-1}(t)$. We are interested in the possibility of reducing the degree of
this equation, assuming that the coefficients of the equation are algebraic in
$t$, $a_i\in \mathbb{C}(t)^a$. Indeed, the zeros of the solutions of a second
order equation are easily studied (by Rolle's theorem).
In this section we study Abelian integrals which appear in the perturbations of
foliations $dF=0$ with $F = x^p (y^2+x-1)^q$ and $F(x,y)=(xy)^p(x+y-1)$, where
$p,q$ are positive integers. The corresponding foliation $dF=0$ is a special
Lotka-Volterra system.
\subsection{Toy example $F=x^p y^q$}
Consider first the fibration defined by the polynomial $F=x^p y^q$. We assume
that $p,q$ are \emph{relatively prime}. The base of the fibration is the
punctured plane $B=\mathbb{C}\setminus \{0\}$. Each fiber is a sphere with two points
removed. The homology bundle is one-dimensional with trivial monodromy
representation. We investigate the monodromy representation on the
\emph{relative homology} bundle. It will be a basic ingredient of the monodromy
investigation in more complicated cases.
Consider a set of marked points $B_t$ on the complex fibre $F^{-1}(t)$
\[
B_t=(F^{-1}(t)\cap \{x=L\})\cup (F^{-1}(t)\cap \{y=L\}),
\]
where $L$ is a real positive number. The relative homology $H_1(F^{-1}(t),B_t)$ is a free group with $p+q$ generators. A convenient model for the pair $(F^{-1}(t),B_t)$ consists of a cylinder with some strips attached; marked points are located at the ends of these strips.
Note that there exists a pair of integers $(m,n)$ satisfying the
following relation
\begin{equation}
\label{pqmn}
p\, m+q\, n =1,\qquad |m|<q, \quad |n|<p.
\end{equation}
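Such a pair $(m,n)$ is produced by the extended Euclidean algorithm; the sketch below (our own illustration) normalizes $m$ into $(0,q)$, which forces $n\leq 0$, the sign convention used in section \ref{sec:parab}:

```python
def bezout_pair(p, q):
    """For coprime p, q (q > 1) return (m, n) with p*m + q*n == 1,
    0 < m < q and |n| < p."""
    old_r, r = p, q
    old_s, s = 1, 0
    while r:                        # extended Euclidean algorithm
        k = old_r // r
        old_r, r = r, old_r - k * r
        old_s, s = s, old_s - k * s
    # old_s satisfies p*old_s == 1 (mod q); bring m into (0, q)
    m = old_s % q
    n = (1 - p * m) // q
    return m, n

# e.g. p = 3, q = 5: 3*2 + 5*(-1) = 1
m, n = bezout_pair(3, 5)
```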
Let $S\subset \mathbb{C}$ be a strip in the complex plane around the real segment $[1,L]$. Let $C(r,R)\subset\mathbb{C}$ be the annulus (cylinder) whose radii $r$ and $R$ satisfy $L^{-1}<r<1<R<L$. The model $M$ is a surface constructed from three charts $U_x$, $U_y$, $U_c$:
\begin{equation}
\begin{split}
U_x &= \{(x,\nu):\ x\in S,\ \nu\in \mathbb{Z}/q\},\\
U_y &= \{(y,\mu):\ y\in S,\ \mu\in \mathbb{Z}/p\},\\
U_c &= \{u\in C(r,R) \}
\end{split}
\end{equation}
with the following transition functions (strips $U_x$ are attached to the external circle of radius $R$ and strips $U_y$ are attached to the internal boundary of $U_c$)
\begin{equation}
\label{trfn}
u(x,\nu)= x^{1/q} e^{2\pi i/q\, (-\nu m)},\qquad u(y,\mu)= y^{-1/p} e^{2\pi i/p\, (\mu n)}.
\end{equation}
The marked points are $\{x=L\}$ and $\{y=L\}$ at the end of strips. To construct a map $\psi_t$ we will use a bump function $\varphi\in C^\infty([0,1])$ which is 0 near $s=0$ and 1 near $s=1$. The map $\psi_t:M\rightarrow \mathbb{C}^2$ reads
\begin{equation}
\label{psidef}
\psi_t :
\left\{
\begin{aligned}
\psi_t(x,\nu) &= (x,\ t^{1/q} x^{-p/q} e^{2\pi i/q\,\nu})\\
\psi_t(y,\mu) &= (t^{1/p} y^{-q/p} e^{2\pi i/p\,\mu},\ y)\\
\psi_t(u) &= \big(u^q\, \exp(\tfrac {\log t}{p}\, \varphi(\tfrac{|u|-r}{R-r})),\ t^{1/q} u^{-p} \exp(-\tfrac{\log t}{q}\, \varphi(\tfrac{|u|-r}{R-r}) ) \big).\\
\end{aligned}\right.
\end{equation}
\begin{lemma}
\label{lem:toy}
The surface $M$ and the map $\psi_t$ provide a model of the fiber of the fibration defined by $F=x^p y^q$.
The monodromy transformation
$\mathcal{M}on :M\rightarrow M$ around $t_0=0$ reads
\begin{equation}
\label{toymon}
\mathcal{M}on:
\left\{
\begin{aligned}
\mathcal{M}on(x,\nu) &= (x,\nu+1)\\
\mathcal{M}on(y,\mu) &= (y,\mu+1)\\
\mathcal{M}on(u) &= u\, \exp\big(2\pi i(-\tfrac{m}{q}+\tfrac{1}{pq}\varphi (\tfrac{|u|-r}{R-r}))\big)
\end{aligned}\right.
\end{equation}
\end{lemma}
The surface $M$ and its monodromy transformation described in the above lemma are drawn in figure \ref{fig:toymon3d}.
\begin{figure}[htpb]
\input{figs/toy3d.pspdftex}
\caption{Monodromy transformation of the model surface $M$ and a relative cycle $\gamma$.}
\label{fig:toymon3d}
\end{figure}
\begin{proof}
The complex level curves $F^{-1}(t)$ intersect the line at infinity in two points: $[1:0:0]$ and $[0:1:0]$. The neighborhood of each of them is a punctured disc. Thus, there exists an isotopy of the level curve $F^{-1}(t)$ shrinking it to the region $\{|x|\leq R,\ |y|\leq R\}$ for sufficiently big $R$.
We will assume that $t$ is sufficiently close to $0$. The intersection of $F^{-1}(t)$ with the neighborhood $\{|x|\leq r,\ |y|\leq r\}$ of $(0,0)$ is a cylinder parametrized by the formula
\begin{equation}
\label{toyzerdisk}
u\mapsto (g^q\, u^q,\ g^{-p}\,u^{-p}t^{1/q}),
\end{equation}
where $g(t,u)$ is a function which will be fixed later.
The intersection of $F^{-1}(t)$ with the set $\{|x|\leq R,\ |y|\leq R\}\setminus \{|x|\leq r,\ |y|\leq r\}$ decomposes into two connected components $V_x$ and $V_y$, located close to the $x$-plane and to the $y$-plane respectively. The component $V_x$ is the graph of a multi-valued ($q$-valued) function $y=t^{1/q}x^{-p/q}$ defined over the ring $\{r \leq|x|\leq R\}$. Marked points are the images of the point $x=L$ on the real axis. We deform this domain by an isotopy to the strip $S$ along the real line -- see figure \ref{fig:toymod}.
\begin{figure}[htpb]
\input{figs/toydeform.pspdftex}
\caption{Deformation of domain to the strip $S$}
\label{fig:toymod}
\end{figure}
The values (leaves) of the function $x^{-p/q}$ are numbered by $\nu\in\mathbb{Z}/q$. Thus, the domain $U_x$ and the map $\psi_t$ are defined as in the lemma.
The model of $V_y$ is constructed in an analogous way.
To glue the above map together with the parametrization \eqref{toyzerdisk} of the disk around zero, we use the auxiliary function $g$. It must equal $1$ near the internal circle of the ring $C(r,R)$ (i.e.\ $|u|=r$) and $t^{1/pq}$ near the exterior boundary ($|u|=R$). It is easy to check that $g=\exp(\tfrac{1}{pq}\log t\, \varphi(\tfrac{|u|-r}{R-r}))$ solves the problem.
The formula \eqref{toymon} for the monodromy around $t=0$ is a direct consequence of the formula \eqref{psidef}.
\end{proof}
A 2-dimensional version of figure \ref{fig:toymon3d} presenting the model surface $M$ is drawn below. It is obtained from figure \ref{fig:toymon3d} by cutting the cylinder along a vertical line. We will use this planar style of drawing models in the subsequent, more complicated cases.
The segments that are identified are marked with arrows. Strips $U_x$ and $U_y$ are enumerated by the integers $\tfrac{q}{2\pi}\arg u$ and $\tfrac{p}{2\pi}\arg u$ respectively; the argument $\arg u$ is calculated at the point $u\in U_c$ which is glued to the point $1\in S$ according to the relations \eqref{trfn}. Generators of the relative homology of $M$ are also marked.
\begin{figure}[htpb]
\input{figs/toy2d.pspdftex}
\caption{The model surface $M$ with generators of the relative homology.}
\label{fig:toy2d}
\end{figure}
\begin{proposition}
\label{prop:toyhom}
The relative homology $H_1(F^{-1}(t),B_t)$ of the complex fiber $F^{-1}(t)$ has dimension $p+q$. It is generated by the cycles
\[
\gamma,\Delta,Q_0,\ldots,Q_{q-1},P_0,\ldots,P_{p-1}
\]
with the relations:
\[
Q_0+\cdots+Q_{q-1}=-\Delta,\qquad P_0+\cdots+P_{p-1}=\Delta.
\]
The monodromy representation on the relative homology space reads:
\begin{equation}
\label{toymonhom}
\begin{split}
\mathcal{M}on Q_j=Q_{j-m},\qquad \mathcal{M}on P_k=P_{k+n},\qquad \mathcal{M}on \Delta = \Delta\\
\mathcal{M}on \gamma = \gamma+Q_0+\cdots+Q_{-m+1}+P_0+\cdots +P_{n-1}.
\end{split}
\end{equation}
\end{proposition}
The proposition is a direct consequence of lemma \ref{lem:toy}.
\vskip 1cm \subsection{The parabolic case} \label{sec:parab}
Consider the fibration given by the polynomial $F=x^p (y^2+x-1)^q$, where $p,q$ are a pair of positive, relatively prime integers. Thus, they satisfy the relation \eqref{pqmn}
with a pair of integers $m,n$. These must be of opposite signs; we assume $m>0$ and so $n\leq 0$.
The base of the locally trivial fibration in this case is a plane with two points removed, $B=\mathbb{C}\setminus\{0,c\}$,
where $c=(\tfrac{p}{p+q})^p(\tfrac{-q}{p+q})^q$ corresponds to the center $(\tfrac{p}{p+q},0)$ of the Hamiltonian vector field $X_F$. The cycle $\gamma_t$ for $t\in (0,c)$ is an oval (compact component) of the real level curve $F^{-1}(t)$.
The model of the complex fiber is presented in figure \ref{fig:parab}. It consists of two cylinders and $p+q$ strips glued together as shown in the figure. Cylinders are drawn as rectangles, with horizontal edges identified. To simplify the combinatorial structure, the two cylinders are drawn with opposite orientations. Vertical dotted lines mark another identification.
\begin{figure}[htpb]
\input{figs/parab.pspdftex}
\caption{Model of the level curve $F^{-1}(h)$ for $F=x^p(y^2 + x-1)^q$.}
\label{fig:parab}
\end{figure}
\begin{lemma}
\label{lem:parabmodel}
The surface shown in figure \ref{fig:parab} provides a model $M$ for the complex fiber $F^{-1}(t)$. The homology group $H_1(M)$ has dimension $p+q+1$ and is generated by the cycles $\gamma,\Delta_1,\Delta_2,Q_0,\ldots,Q_{q-1},P_0,\ldots,P_{p-1}$ with the following relations
\begin{equation}
\label{parel}
Q_0+\cdots+Q_{q-1} = \Delta_2-\Delta_1, \qquad P_0+\cdots+P_{p-1}= \Delta_1-\Delta_2.
\end{equation}
The intersection indices of $\gamma$ with the other generators of the homology group read
\begin{equation}
\label{parints}
\begin{split}
\gamma\cdot Q_0=-1, \quad \gamma\cdot Q_{q-1}=-1,\quad \gamma\cdot Q_j=0, \ \text{for}\ j=1,\ldots,q-2,\\
\gamma\cdot P_0=+1, \quad \gamma\cdot P_{p-1}=+1,\quad \gamma\cdot P_j=0, \ \text{for}\ j=1,\ldots,p-2,\\
\gamma\cdot\Delta_1 = +1,\qquad \gamma\cdot \Delta_2 = -1.
\end{split}
\end{equation}
The monodromy around zero takes the form:
\begin{equation}
\label{parmon}
\begin{split}
\mathcal{M}on_0 Q_j = Q_{j+m},\qquad \mathcal{M}on_0 P_k = P_{k-n}, \qquad \mathcal{M}on_0 \Delta_j = \Delta_j \\
\mathcal{M}on_0 \gamma = \gamma+Q_0+\cdots+Q_{m-1}+P_0+\cdots +P_{-n-1}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
The idea of the proof is similar to that of lemma \ref{lem:toy}. We shrink the level curve $F^{-1}(t)$
by an isotopy to the region $\{|x|\leq R,\ |y|\leq R\}$. We take the value of $t$ sufficiently close to $0$. The intersections of $F^{-1}(t)$ with neighborhoods of the saddle points $(0,1),\; (0,-1)$ are cylinders; we parametrize them by formulas similar to \eqref{toyzerdisk}. The remaining part of the fiber $F^{-1}(t)$ splits into two pieces: $V_l$ and $V_p$, located close to the line $x=0$ and close
to the parabola $y^2+x-1=0$ respectively. The part $V_l$ is the graph of the $p$-valued function
$x=t^{1/p} (y^2+x-1)^{-q/p}$ defined over the disc of radius $R$ with small discs around the points $y=\pm 1$ removed:
\[
U_l = \{y:\quad |y|\leq R,\ |y-1|\geq r,\ |y+1|\geq r\}.
\]
We deform the domain $U_l$ to the strip $S$ along the real segment -- see figure \ref{fig:pardeform}.
\begin{figure}[htpb]
\input{figs/parabdeform.pspdftex}
\caption{Deformation of domain $U_l$ to the strip $S$}
\label{fig:pardeform}
\end{figure}
Leaves of the function over the strip $S$ are numbered by $\mu\in\mathbb{Z}/p$. In an analogous way we deform $V_p$ to the graph of the $q$-valued function defined over the strip along the real segment of
the parabola $\{y^2+x-1=0\}$. Leaves of the function are numbered by $\nu\in\mathbb{Z}/q$.
We glue both collections of strips together with the two cylinders in a way analogous to the toy example. Indeed,
in a sufficiently small neighborhood of the point $(0,1)$ the pair of functions $(x,y^2+x-1)$ defines a holomorphic
chart. In this chart the function $F$ takes the same form as in the toy example. The same is true for the other
saddle $(0,-1)$. Both cylinders are glued by $p$ strips going along the line $x=0$ and $q$ strips going along the
parabola $y^2+x-1=0$. The monodromy around zero permutes the strips according to the rule
$\nu\mapsto \nu+1$, $\mu\mapsto \mu+1$, which is compatible with the formula \eqref{toymon} for the monodromy in the
toy example. Thus, the monodromy acts on both cylinders around the saddles as in the toy example.
The surface shown in figure \ref{fig:parab} provides a model for the complex fibre $F^{-1}(t)$. Formulas
\eqref{parmon} follow from the respective formulas \eqref{toymonhom} in the toy example. The relations
\eqref{parel} and the intersection indices \eqref{parints} can be read off from figure \ref{fig:parab}.\\
\end{proof}
\begin{corollary}
\[
\mathcal{M}on^{pq}_0 \gamma = \gamma + \Delta_2-\Delta_1.
\]
\end{corollary}
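For small $p,q$ the corollary can be checked mechanically by iterating formula \eqref{parmon} on $\gamma$ in the free span of the generators and applying the relations \eqref{parel} only at the end. The sketch below is an illustration, not part of the proof; it assumes that \eqref{pqmn} is the B\'ezout-type relation $pm+qn=1$ with $m>0$, $n\leq 0$.

```python
def monodromy_iterate(p, q, m, n, reps):
    """Iterate Mon_0 of formula (parmon) in the free span of the
    generators: 'g' = gamma, ('Q', j) = Q_j, ('P', k) = P_k."""
    def add(d, key, c):
        d[key] = d.get(key, 0) + c

    v = {'g': 1}
    for _ in range(reps):
        w = {}
        for key, c in v.items():
            if key == 'g':
                # Mon_0 gamma = gamma + Q_0+...+Q_{m-1} + P_0+...+P_{-n-1}
                add(w, 'g', c)
                for j in range(m):
                    add(w, ('Q', j % q), c)
                for k in range(-n):
                    add(w, ('P', k % p), c)
            elif key[0] == 'Q':               # Mon_0 Q_j = Q_{j+m}
                add(w, ('Q', (key[1] + m) % q), c)
            else:                             # Mon_0 P_k = P_{k-n}
                add(w, ('P', (key[1] - n) % p), c)
        v = w
    return v

# p=2, q=3 with 2*2 + 3*(-1) = 1 (m=2, n=-1): after pq = 6 turns every Q_j
# carries the coefficient mp = 4 and every P_k the coefficient -nq = 3.
v = monodromy_iterate(2, 3, 2, -1, 6)
assert v['g'] == 1
assert all(v[('Q', j)] == 4 for j in range(3))
assert all(v[('P', k)] == 3 for k in range(2))
```

After $pq$ iterations every $Q_j$ carries the coefficient $mp$ and every $P_k$ the coefficient $-nq$; the relations \eqref{parel} then reduce the result to $\gamma+(mp+nq)(\Delta_2-\Delta_1)=\gamma+\Delta_2-\Delta_1$.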
The critical value $t=c$ corresponds to a Morse critical point of $F$. The
monodromy operator $\mathcal{M}on_c$ around $c$ is therefore described by the usual
Picard-Lefschetz formula. Let $\gamma=\gamma(t)$ be the continuous family of
cycles vanishing at $c$.
\begin{corollary}
\label{cor:parmoncen}
\begin{equation}
\label{parcenmon}
\begin{split}
\mathcal{M}on_c Q_0=Q_0 - \gamma, \qquad \mathcal{M}on_c Q_{q-1}=Q_{q-1}-\gamma,\\
\mathcal{M}on_c Q_j=Q_j, \ \text{for}\ j=1,\ldots,q-2,\\
\mathcal{M}on_c P_0=P_0+\gamma, \qquad \mathcal{M}on_c P_{p-1}=P_{p-1}+\gamma,\\
\mathcal{M}on_c P_j=P_j, \ \text{for}\ j=1,\ldots,p-2,\\
\mathcal{M}on_c\Delta_1 = \Delta_1+\gamma,\qquad \mathcal{M}on_c \Delta_2 = \Delta_2-\gamma.
\end{split}
\end{equation}
\end{corollary}
\begin{theorem}
\label{th:parab} The related Abelian integral $I=\int_\gamma \omega$ is either
identically zero, or it does not satisfy any differential equation with
algebraic coefficients of order $k< p+q-1$.
\end{theorem}
\begin{proof}
The proof is based on theorem \ref{th:algeq}. Let $H$ be a $k$-dimensional subspace of the (complex) homology space $H_1=H_1(F^{-1}(t),\mathbb{C})$ with $\gamma\in H$. Assume that the monodromy orbit of $H$ in the Grassmannian $G_k(H_1)$ is finite. We show that the dimension of $H$ satisfies $\dim H \geq p+q$.
Let $\mathcal{M}on_0$ be the operator of monodromy around $t=0$ (i.e.\ along a loop winding once around $t=0$); let $\mathcal{M}on_c$ be the monodromy around the center $t=c$.
It follows from formulas \eqref{parcenmon} that $\mathcal{M}on_c - \mathrm{Id}$ is a nilpotent operator whose image is one-dimensional, generated by $\gamma$. The homology space $H_1$ splits into a 2-dimensional $\mathcal{M}on_c$-invariant subspace $N$ and a $(\dim H_1 -2)$-dimensional complement; the monodromy $\mathcal{M}on_c$ restricted to the latter is the identity. The matrix of the restricted monodromy operator $\mathcal{M}on_c|_N$ in a basis $(\gamma,\delta)$ has the form
\[
[\mathcal{M}on_c|_N]_{(\gamma,\delta)}=\left(\begin{smallmatrix}1&1\\ 0&1\\ \end{smallmatrix}\right).
\]
Note that the subspace $N$ is not defined uniquely. It is spanned by $\gamma$ and any element $\delta\in H_1$ such that $\gamma\cdot \delta \neq 0$.
Consider the intersection $H\cap N$. The property that $H$ has a finite $\pi_1$ orbit (see theorem \ref{th:algeq}) implies that the $\mathcal{M}on_c$-orbit of $\mathcal{M}on_0^k H$, $k\in\mathbb{Z}$, is finite. Thus, the intersection $HN_k=(\mathcal{M}on_0^k H)\cap N$ also has a finite $\mathcal{M}on_c$-orbit in $N$. The form of $\mathcal{M}on_c|_N$ implies that there are only 3 subspaces with a finite orbit:
\begin{equation}
\label{ninv}
HN_k=\{0\}, \qquad HN_k =\mathbb{C}\; \gamma, \qquad HN_k= N.
\end{equation}
Note that all these subspaces are $\mathcal{M}on_c$-invariant.
\begin{lemma}
\label{lem:ninvcnd}
Assume that the monodromy orbit of $H$ in $G_k(H_1)$ is finite. If $u\cdot \gamma \neq 0$ for an element
$u\in\mathcal{M}on^l_0 H$, $l\in\mathbb{Z}$, then $\gamma\in\mathcal{M}on^l_0 H$.
\end{lemma}
\begin{proof}
Take $\delta = u$ and consider the 2-dimensional, $\mathcal{M}on_c$-invariant space $N$ spanned by $\gamma$ and $\delta$.
The $\mathcal{M}on_c$-orbit of the space $\mathcal{M}on_0^l H$ is finite, so the intersection $HN_l=N\cap\mathcal{M}on_0^l H$ has one of the three
forms listed in \eqref{ninv}. Since $\delta \in HN_l$, we get $HN_l=N$ and so $\gamma\in \mathcal{M}on_0^l H$.\\
\end{proof}
Consider the $\mathcal{M}on_0$-orbit of $\gamma\in H$. Lemma \ref{lem:ninvcnd} (for $l=-m$) and the fact that the
intersection number is preserved by the monodromy imply the following condition
\begin{equation}
\gamma\cdot \mathcal{M}on_0^m \gamma \neq 0 \Rightarrow \mathcal{M}on_0^m \gamma\in H.
\end{equation}
Consider the element $\mathcal{M}on_0^{pq} \gamma = \gamma+(\Delta_2-\Delta_1)$. Since $\gamma\cdot (\Delta_2-\Delta_1)=-2$, we get $(\Delta_2-\Delta_1)\in H$. Consider the elements $\mathcal{M}on_0^l \gamma$ for $l=1,\ldots,pq$. The intersection indices are bounded: $| \gamma\cdot(\mathcal{M}on_0^l \gamma) |\leq N$ for some integer $N$. So, the cycles
\begin{equation}
\label{monkq}
\mathcal{M}on_0^{pq\, N+ q\, l}\gamma = \gamma + (N+lm)\,(\Delta_2-\Delta_1) + \sum_{j=0}^{p-1} a_j(l) P_j, \qquad l=1,\ldots,p
\end{equation}
have nonzero intersection indices with $\gamma$. Since $p,q$ are relatively prime, the space spanned by the sums $\sum_{j=0}^{p-1} a_j(l) P_j, \ l=1,\ldots,p$, coincides with the space generated by $(P_0+\cdots+P_{-n-1})$, $(P_{-n}+\cdots+P_{-2 n-1})$, \ldots; the latter is the full space generated by $P_0,\ldots,P_{p-1}$. Both claims follow from the fact that $p$ and $n$ are also relatively prime (see \eqref{pqmn}) and the following observation
\begin{lemma}
\label{linalg}
Let $V$ be a vector space of dimension $p$ and let $q$ be an integer. Assume that $p,q$ are relatively prime. Let $e_0,\ldots,e_{p-1}$ be a basis of $V$. Then the following sums
\begin{equation}
\label{qsums}
(e_0+\cdots+e_{q-1}),\ (e_{q}+\cdots+e_{2q-1}),\ \ldots (e_{(p-1)q}+\cdots+e_{pq-1})
\end{equation}
(all indices $\mod p$ assumed) generate the whole space $V$.
\end{lemma}
The proof of the lemma is based on the following observations. Since $p,q$ are relatively prime, every sum of
length $q$ appears in the sequence \eqref{qsums}. The difference of two sums with consecutive starting points has the form
$e_j - e_{j+q}$, $j=0,\ldots,p-1$; these differences generate the hyperplane orthogonal to the vector $e_0+\cdots+e_{p-1}$.
Since the scalar product $(e_0+\cdots+e_{p-1})\cdot(e_0+\cdots+e_{q-1})=q\neq 0$, the space generated by the
vectors \eqref{qsums} is the whole of $V$.
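Lemma \ref{linalg} is also easy to confirm numerically: write the sums \eqref{qsums} as the rows of a $p\times p$ integer matrix and compute its rank. The following sketch is only a sanity check of the statement, not a replacement for the proof.

```python
import numpy as np
from math import gcd

def block_sum_rank(p, q):
    # Row i encodes the sum e_{iq} + ... + e_{iq+q-1}, all indices mod p.
    A = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(q):
            A[i, (i * q + j) % p] += 1
    return np.linalg.matrix_rank(A)

# Coprime pairs: the p sums of length q span the whole space V.
assert all(block_sum_rank(p, q) == p for p, q in [(3, 2), (5, 3), (7, 4), (4, 9)])
# A non-coprime pair shows the hypothesis gcd(p, q) = 1 is necessary.
assert gcd(4, 2) > 1 and block_sum_rank(4, 2) < 4
```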
Thus, it is proved that the subspace $H$ must contain the subspace generated by
$P_0,\ldots,P_{p-1}$. In a similar way we show that $H$ contains the subspace
generated by $Q_0,\ldots,Q_{q-1}$.
We have shown that a subspace of the homology group containing $\gamma$ and having a finite $\pi_1$-orbit
must necessarily be a $\pi_1$-invariant hyperplane in the homology space $H_1$. This proves the theorem
for a \emph{generic} 1-form $\omega$ (when the zero subspace $Z_\omega=\{0\}$). To finish the proof
we show that either $\dim Z_\omega\leq 1$ or $Z_\omega = H$.
Consider an element
\[
H\cap Z_\omega\ni v = a\, \gamma + \sum_{j=0}^{p-1} \alpha_j P_j + \sum_{i=0}^{q-1} \beta_i Q_i
\]
and its images under the monodromy around $t=0$: $\mathcal{M}on_0^l v$. Since $Z_\omega$ is monodromy invariant, all elements $\mathcal{M}on_0^l v\in Z_\omega$. If some intersection index $\gamma\cdot\mathcal{M}on_0^{l_0}v\neq 0$, then the monodromy around the center $t=c$ adds a multiple of $\gamma$, so $\gamma\in Z_\omega$. Then, it follows from the previous analysis that $Z_\omega=H$. Assume now that all intersection indices $\gamma\cdot \mathcal{M}on_0^l v=0$. The coefficient $a$ must then vanish; otherwise $\mathcal{M}on_0^{pq}$ adds the cycle $\Delta_2-\Delta_1$, which realizes intersection index $-2$. Consider the monodromies $\mathcal{M}on_0^{q l}v$, $l=0,\ldots,p-1$. These preserve the expression $\sum_{i=0}^{q-1} \beta_i Q_i$. Vanishing of the intersection indices $\gamma\cdot \mathcal{M}on_0^{q l}v$ implies the equations
\begin{equation}
\label{intsind}
\alpha_j + \alpha_{j+1} = \beta_0+\beta_{q-1}, \qquad j=0,1,\ldots,p-1.
\end{equation}
The solution of \eqref{intsind} depends on the parity of $p$. If $p$ is odd then all coefficients $\alpha_j$ are equal: $\alpha_j=\alpha=\tfrac12 (\beta_0+\beta_{q-1})$. If $p$ is even the solution of \eqref{intsind} reads:
\[
\alpha_{2 l} = \alpha_0,\quad \alpha_{2l+1}=\alpha_1,\quad \alpha_0+\alpha_1=\beta_0+\beta_{q-1}.
\]
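The parity dichotomy can be seen as a rank statement: the coefficient matrix of the cyclic system \eqref{intsind} has full rank for odd $p$ and a one-dimensional kernel for even $p$. A quick numerical check (illustrative only):

```python
import numpy as np

def cyclic_sum_rank(p):
    # Coefficient matrix of the cyclic system alpha_j + alpha_{j+1} = const.
    A = np.zeros((p, p), dtype=int)
    for j in range(p):
        A[j, j] = 1
        A[j, (j + 1) % p] = 1
    return np.linalg.matrix_rank(A)

# Odd p: full rank, hence a unique solution with all alpha_j equal.
assert all(cyclic_sum_rank(p) == p for p in (3, 5, 7))
# Even p: a one-dimensional kernel, i.e. one free parameter (alpha_0, alpha_1).
assert all(cyclic_sum_rank(p) == p - 1 for p in (4, 6, 8))
```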
We then repeat the analogous analysis with iterations of $\mathcal{M}on_0^p$, and obtain the following form of $Z_\omega$:
\begin{equation}
Z_\omega \cap H \subset
\begin{cases}
\{0\} & \text{for}\ p,q \text{ odd} \\
\mathrm{Span}(2 \sum_{j=1}^{p/2} P_{2j} + (\Delta_2-\Delta_1)) & \text{for $p$ even and $q$ odd}\\
\mathrm{Span}(2 \sum_{j=1}^{q/2} Q_{2j} - (\Delta_2-\Delta_1)) & \text{for $q$ even and $p$ odd}\\
\end{cases}
\end{equation}
Thus, $\dim (Z_\omega \cap H) \leq 1$ and so the theorem is proved.\\
\end{proof}
\begin{corollary}
We have actually shown that the Abelian integral does not satisfy any
differential equation with algebraic coefficients of order lower than that of the Fuchs-type
equation with rational coefficients which follows from the general theory.
\end{corollary}
\vskip 1cm \subsection{The special Lotka-Volterra case} \label{sec:hvg}
Consider the fibration given by the polynomial $F(x,y)=(xy)^p(x+y-1)$. It defines a locally trivial fibration over the plane with two points removed, $B=\mathbb{C}\setminus \{0,c\}$, where $c=F(\tfrac{p}{1+2 p},\tfrac{p}{1+2 p})$ corresponds to a center. The cycle $\gamma_t$ for $t\in (0,c)$ is an oval (compact component) of the real level curve $F^{-1}(t)$. Note that the fibration has a Morse-type singularity at $t=c$ and $\gamma_t$ is the cycle vanishing at the center.
Below we investigate the fibration and the monodromy representation in a sufficiently small neighborhood of $t=0$: $|t|<\varepsilon_0$. The monodromy around the center $t=c$ follows the Picard-Lefschetz formula. Thus, to determine the monodromy representation it is enough to investigate the monodromy around $t=0$ and the intersection indices with the cycle $\gamma$.
The model of the complex fiber is presented in figure \ref{fig:hvg}, which should be understood as follows. Each rectangle represents a cylinder, with edges identified according to the arrows. Another identification is assumed along the vertical dotted lines.
\begin{figure}[htpb]
\input{figs/hvg.pspdftex}
\caption{Model of the level curve $F^{-1}(h)$ for $F=(xy)^p(x+y-1)$.}
\label{fig:hvg}
\end{figure}
\begin{lemma}
\label{lem:hvgmodel}
The complex level curve $F^{-1}(t)$ is a surface of genus $p-1$ with $3$ points removed (the intersection with the line at infinity). The surface shown in figure \ref{fig:hvg} provides a model $M$ for $F^{-1}(t)$. The homology group $H_1(M)$ has dimension $2p+2$; it is generated by the cycles $\gamma,\Delta_1,\Delta_2,P_0,\ldots,P_{p-1},\delta_0,\ldots,\delta_{p-1}$ with the following relation
\begin{equation*}
P_0+\cdots+P_{p-1} = \Delta_1-\Delta_2 + \delta_0.
\end{equation*}
The intersection indices of $\gamma$ with the other generators of the homology group read
\begin{equation}
\label{hvgints}
\begin{gathered}
\gamma\cdot P_{p-1}=-1, \qquad \gamma\cdot P_j=0, \quad \text{for}\ j=0,\ldots,p-2,\\
\gamma\cdot \delta_0=-1, \qquad \gamma\cdot \delta_j=0, \quad \text{for}\ j=1,\ldots,p-1,\\
\gamma\cdot\Delta_1 = -1,\qquad \gamma\cdot \Delta_2 = -1.
\end{gathered}
\end{equation}
The monodromy representation takes the form (monodromy around zero is assumed)
\begin{equation}
\label{hvgmon}
\begin{gathered}
\mathcal{M}on \Delta_j = \Delta_j, \qquad \mathcal{M}on \delta_j=\delta_{j+1},\qquad \mathcal{M}on \gamma = \gamma+P_0,\\
\mathcal{M}on P_j = P_{j+1}, \quad \text{for}\ j=0,\ldots,p-2,\qquad\\
\mathcal{M}on P_{p-1} = P_0+\delta_1-\delta_0.\\
\end{gathered}
\end{equation}
\end{lemma}
\begin{proof}
The proof is analogous to the proofs of lemmas \ref{lem:toy} and \ref{lem:parabmodel}. We modify the level curve $F^{-1}(t)$ by an isotopy to the part contained in the compact region $|x|\leq R,\ |y|\leq R$. We consider points $t$ sufficiently close to $0$. We cut the level curve $F^{-1}(t)$ into pieces lying close to the lines $x=0$, $y=0$, $x+y-1=0$ and close to the saddles $(0,0),\, (1,0),\, (0,1)$. The analysis of the pieces of the level curve $F^{-1}(t)$ close to the saddles $(1,0)$ and $(0,1)$ and close to the line $x+y-1=0$ is completely analogous to the parabolic case -- see the proof of lemma \ref{lem:parabmodel}. The model of this part of the level curve consists of two cylinders joined by a single strip.
Now we consider the region which stays at a finite distance from the line $x+y-1=0$. There the level curve $F^{-1}(t)$ splits into $p$ components defined by the equation:
\[
xy\, (x+y-1)^{1/p}=t^{1/p} \varepsilon_p^\nu,\qquad \nu=0,1,\ldots,p-1.
\]
Each of these components coincides with the toy example for $p=q=1$. Thus, it is isotopic to a cylinder with two strips attached. As $t$ winds around zero, $t\mapsto e^{2\pi i} t$, the components are rotated according to the rule $\nu\mapsto \nu+1 \mod p$. The $p$-th power of the monodromy (winding $p$ times around zero), $\mathcal{M}on^p$, corresponds to the usual monodromy in the toy example; it follows from formula \eqref{toymonhom} (for $p=q=1$) that $\mathcal{M}on^p$ adds the generator $\delta_j$.
This proves that the combinatorial structure of the model of the level curve $F^{-1}(t)$, defining how the cylinders and strips are glued, must be as shown in figure \ref{fig:hvg}.\\
\end{proof}
\begin{proposition}
\label{pr:hvg}
Let $H$ be the following 2-dimensional subspace
\begin{equation*}
H=\mathrm{Span} (\gamma,\; \Delta_1-\Delta_2+\delta_0)
\end{equation*}
of the (complex) homology space $H_1(F^{-1}(t))$. The orbit of $H$ under the monodromy representation $\pi_1\cdot H$ in Grassmannian $Gr_2(H_1)$ consists of $p$ elements and so is finite.
\end{proposition}
\begin{proof}
Denote by $\mathcal{M}on$ and $\mathcal{M}on_c$ the monodromy around $t=0$ and around the center critical value $t=c$. By lemma \ref{lem:hvgmodel}, we have
\begin{equation}
\label{hvgorbit}
\mathcal{M}on^k H = \begin{cases}
\mathrm{Span} \big( \gamma+P_0+\cdots+P_{k-1}, \; \Delta_1-\Delta_2+\delta_{k}\big) &\text{for}\ k=1,\ldots,p-1\\
\mathrm{Span} \big( \gamma+\Delta_1-\Delta_2+\delta_{0}, \; \Delta_1-\Delta_2+\delta_{0}\big)=H &\text{for}\ k=p.\\
\end{cases}
\end{equation}
The crucial observation is that subspaces $\mathcal{M}on^k H$ for $k\in\mathbb{Z}$ are $\mathcal{M}on_c$-invariant. Indeed, the subspace $H$ is $\mathcal{M}on_c$-invariant since $\gamma\in H$ is a vanishing cycle corresponding to the center critical value $t=c$. We calculate (using formulas \eqref{hvgints}) the intersection indices of $\gamma$ and generators of $\mathcal{M}on^k H$ for $k=1,\ldots,p-1$:
\[
\gamma\cdot \big( \gamma+(P_0+\cdots+P_{k-1})\big)=0,\qquad \gamma \cdot \big(\Delta_1-\Delta_2+\delta_{k}\big)=0.
\]
Thus, both generators are $\mathcal{M}on_c$-invariant. This proves that the orbit $\pi_1\cdot H$ in the Grassmannian $Gr_2(H_1)$ consists of the $p$ subspaces given in formula \eqref{hvgorbit}.\\
\end{proof}
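The orbit formula \eqref{hvgorbit} can likewise be checked mechanically from \eqref{hvgmon}. The sketch below iterates the monodromy on $\gamma$ in the free span of the generators (assuming the indices of $\delta_j$ are taken mod $p$) and confirms that $\mathcal{M}on^p\gamma=\gamma+P_0+\cdots+P_{p-1}$, which by the relation of lemma \ref{lem:hvgmodel} equals $\gamma+\Delta_1-\Delta_2+\delta_0$, so that $\mathcal{M}on^p H=H$.

```python
def hvg_monodromy(v, p):
    # One application of Mon from formula (hvgmon); delta-indices taken mod p.
    w = {}
    def add(key, c):
        w[key] = w.get(key, 0) + c
        if w[key] == 0:
            del w[key]
    for key, c in v.items():
        if key == 'g':                      # Mon gamma = gamma + P_0
            add('g', c)
            add(('P', 0), c)
        elif key in ('D1', 'D2'):           # Mon Delta_j = Delta_j
            add(key, c)
        elif key[0] == 'd':                 # Mon delta_j = delta_{j+1}
            add(('d', (key[1] + 1) % p), c)
        elif key[1] < p - 1:                # Mon P_j = P_{j+1}, j < p-1
            add(('P', key[1] + 1), c)
        else:                               # Mon P_{p-1} = P_0 + delta_1 - delta_0
            add(('P', 0), c)
            add(('d', 1), c)
            add(('d', 0), -c)
    return w

p = 4
v = {'g': 1}
for _ in range(p):
    v = hvg_monodromy(v, p)
# Mon^p gamma = gamma + P_0 + ... + P_{p-1}  (= gamma + Delta_1 - Delta_2 + delta_0).
assert v == {'g': 1, **{('P', j): 1 for j in range(p)}}
```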
\begin{corollary}
Proposition \ref{pr:hvg} provides a geometric explanation of the phenomenon described in \cite{hvg}. According to the general theory given in theorem \ref{th:algeq} and the calculations of the monodromy given in lemma \ref{lem:hvgmodel} and proposition \ref{pr:hvg}, the Abelian integral along the cycle $\gamma$ satisfies a linear second-order equation with algebraic coefficients in the variable $t$.
\end{corollary}
\vskip 1cm
\section{Introduction} \label{1-introduction}
As an important data type to describe 3D scene, point cloud has received considerable attention \cite{charles_pointnet_cvpr_2017,wang_dgcnn_tog_2019,li_tcsvt_multi_2021}. More importantly, 3D point cloud registration, as a key problem in 3D computer vision community, has been adopted in various applications, such as 3D reconstruction \cite{deschaud_imls_icra_2018,zhang_loam_rss_2014}, autonomous driving \cite{yang_robust_iros_2018,wan_robustlocalization_icra_2018}, simultaneous localization and mapping (SLAM) \cite{ding_deepmapping_cvpr_2019}, locating 3d object \cite{pahwa_locate_tcsvt_2018}, point cloud code \cite{Mekuria_codec_tcsvt_2017}.
The point cloud registration task aims at solving the relative pose of 6 degrees of freedom to optimally align the two input point clouds, \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot \textit{source} and \textit{target}, which has been studied for many years.
Many traditional approaches have achieved remarkable performance. For example, \cite{besl_icp_pami_1992,huang_ctf_tcsvt_2018} advocate using a coarse-to-fine strategy to solve for accurate 3D registration.
Recently, benefiting from the rise of deep learning technique, learning-based 3D point cloud registration has become a new hot spot, where correspondences-free methods (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot \cite{aoki_ptlk_cvpr_2019,huang_featuremetric_cvpr_2020}) and correspondences-based methods (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot \cite{lu_deepvcp_iccv_2019,3dlocal_Deng_CVPR_19}) are developed depending on whether the correspondences are explicitly built or not.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{Figure/Fig1.pdf}\vspace{-1.5mm}
\caption{Illustration of our VRNet. \ding{172} \textit{source} and \ding{175} \textit{target} have different poses and different shapes (broken tail and wing in \textit{source} and \textit{target} respectively). The existing methods will learn degenerated VCPs indicated by the pink in \ding{173} (\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot \figref{Fig:comparison}). Conversely, our VRNet devotes to learning the RCPs indicated by \ding{174}, which maintain the same shape as the \textit{source} and the same pose as the \textit{target}, by unfolding VCPs and rectifying the partiality of the wing. Hence, the reliable correspondences of these consistent point clouds, \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot \textit{source} and RCPs, can be obtained easily since the influence of outliers has been eliminated. Further, the relative pose between \textit{source} and RCPs can be solved accurately, which is the same as the relative pose between \textit{source} and \textit{target}.}
\label{Fig:rcps}
\vspace{-5mm}
\end{figure}
However, the widespread presence of outliers, \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot the points without corresponding points in the paired point clouds, has always been a significant challenge for both correspondences-free and correspondences-based point cloud registration methods.
Note that the essence of the correspondences-free methods is to estimate the relative pose by comparing the global representations of two input point clouds \cite{aoki_ptlk_cvpr_2019,sarode_pcrnet_arxiv_2019,huang_featuremetric_cvpr_2020}. Thus, the outliers are destructive for these correspondences-free methods because the difference between their global representations can no longer indicate their pose difference (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot the relative pose) accurately.
In other words, the shape differences due to the outliers, \emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot, the head of rabbit only exists in the \textit{source} without corresponding points as illustrated in \figref{Fig:refinement}, also contribute to the difference in their global representations.
As a result, the correspondences-based methods are gaining more and more attention, which advocate going further to deal with the disturbance of the outliers by building some accurate correspondences from the contaminated input point clouds.
To this end, a virtual point-based strategy is employed \cite{wang_dcp_iccv_2019,lu_deepvcp_iccv_2019,yew_rpmnet_cvpr_2020}. It advocates the use of \underline{v}irtual \underline{c}orresponding \underline{p}oints (VCPs), which are constructed by performing weighted average on the \textit{target}, instead of the real points in the \textit{target}.
However, as shown in \figref{Fig:comparison}, the correspondences brought by this strategy are not reliable because the learned VCPs exhibit serious collapse degeneration and lose the shape and geometry structure, which has been proved in \cite{hpnet_arxiv_2021}.
Two reasons exist for this degeneration: 1) the existing supervision usually focuses on the relative pose only, which is insufficient since more than one feasible solution exists; 2) the distribution of the virtual points is limited to the \textit{target} due to the weighted average operation, as depicted in \figref{Fig:refinement}.
Nevertheless, it is worth mentioning that because of the uniform treatment of the \textit{source} points without the complicated distinguishing process, the virtual point-based approaches usually own a high time-efficiency.
Meanwhile, real point-based approaches have received more and more attention recently, which build reliable correspondences on real points. To this end, a natural idea is to identify the inliers and then build correspondences on these inliers only.
PRNet \cite{wang_prnet_nips_2019} proposes to select the points with more obvious feature as the inliers, however, this operation is neither interpretable nor persuasive.
\cite{predator_Huang_2021_CVPR} utilizes the attention mechanism to recognize inliers, but this is operationally complex as well as time-inefficient.
Real point-based approaches also devote to selecting reliable correspondences from the constructed initial correspondences. In \cite{pais_3dregnet_cvpr_2020,choy_dgr_cvpr_2020,probst_consensusMax_cvpr_2019}, the correspondences are selected based on the learned reliability weight of each correspondence. RANSAC is also widely adopted to select consistent correspondences \cite{3dlocal_Deng_CVPR_19,choy_dgr_cvpr_2020}. However, these real point-based methods often struggle to achieve both high computational efficiency and reliable correspondences.
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\linewidth]{Figure/Fig2.pdf}
\caption{The degeneration of the learned corresponding points. The results are generated from the consistent input point clouds for a clear comparison of loss function, where the effect of the distribution limitation is excluded naturally. Red and blue represent the \textit{source} and the \textit{target} respectively. Pink indicates the learned corresponding points. The matching lines connect the \textit{source} points and the corresponding points.
Due to insufficient supervision, the learned corresponding points of DCP \cite{wang_dcp_iccv_2019} and RPMNet \cite{yew_rpmnet_cvpr_2020} degenerate seriously. Our VRNet achieves much better performance in which the learned corresponding points maintain the original shape and geometry structure due to the proposed hybrid loss function.}
\label{Fig:comparison}
\vspace{-\baselineskip}
\end{figure}
Due to the respective limitations of both virtual point-based and real point-based approaches, we point out that constructing the reliable corresponding points of all the \textit{source} points uniformly, without distinguishing the inliers and the outliers, can effectively combine their advantages. In this way, high time-efficiency and high accuracy can be achieved at the same time. For this goal, we propose to learn a new type of virtual points called \underline{r}ectified virtual \underline{c}orresponding \underline{p}oints (RCPs), which are defined as the point set with the same shape as the \textit{source} and with the same pose as the \textit{target}, as shown in \figref{Fig:rcps}.
Therefore, a pair of consistent point clouds, \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot \textit{source} and RCPs, can be formed to eliminate the influence of outliers via rectifying \underline{V}CPs to \underline{R}CPs (VRNet). Then one can easily yield reliable correspondences to solve for the relative pose between the \textit{source} and RCPs, \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot the relative pose between the \textit{source} and \textit{target}.
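Once a consistent pair (\textit{source}, RCPs) is available, the relative pose can be recovered in closed form by the standard SVD-based (Kabsch) solution for point-to-point correspondences. The following numpy sketch illustrates this step only; the variable names are ours and do not refer to VRNet's actual modules.

```python
import numpy as np

def solve_rigid(src, dst):
    """Least-squares R, t such that R @ src_i + t ~= dst_i (Kabsch/SVD)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Sanity check: recover a known pose from synthetic correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.7
R_gt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0, 0.0, 1.0]])
t_gt = np.array([0.3, -0.2, 0.5])
dst = src @ R_gt.T + t_gt
R, t = solve_rigid(src, dst)
assert np.allclose(R, R_gt, atol=1e-6) and np.allclose(t, t_gt, atol=1e-6)
```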
Our VRNet consists of two main steps. Firstly, we construct the initial VCPs by using a soft matching matrix to perform the weighted average on the \textit{target} point cloud. Secondly, we propose a correction-walk module to learn an offset to rectify VCPs to RCPs, which breaks the inherent distribution limitation of original VCPs. Besides, a novel hybrid loss function is proposed to enhance the consistency of shape and geometric structure between the learned RCPs and the \textit{source} point cloud.
The proposed hybrid loss function consists of \textit{corresponding point supervision}, \textit{local motion consensus}, \textit{geometry structure supervision}, and \textit{amendment offset supervision}. It is proved to be effective to supervise the entire network from the perspectives of the inliers distribution, the consistency of local and global motions, the geometry structure, \emph{etc}\onedot} \def\vs{\emph{vs}\onedot.
Finally, we evaluate the proposed VRNet through extensive experiments on synthetic and real data, achieving state-of-the-art registration performance.
Furthermore, our method is time-efficient since it circumvents the complicated processes of inliers determination and correspondences selection.
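As a concrete illustration of the first step, the soft matching matrix can be obtained by a softmax over feature similarities, and each VCP is then a convex combination of \textit{target} points; this makes the convex-hull limitation of \figref{Fig:refinement} explicit. The feature dimensions and the temperature below are placeholder assumptions, not VRNet's actual design.

```python
import numpy as np

def virtual_corresponding_points(feat_src, feat_tgt, tgt, temperature=0.1):
    """Soft matching: row-stochastic weights over the target yield VCPs."""
    sim = feat_src @ feat_tgt.T / temperature       # (N, M) feature similarity
    sim -= sim.max(axis=1, keepdims=True)           # numerical stability
    M = np.exp(sim)
    M /= M.sum(axis=1, keepdims=True)               # softmax per source point
    return M @ tgt, M                               # VCPs are convex combinations

rng = np.random.default_rng(1)
feat_src = rng.normal(size=(50, 32))
feat_tgt = rng.normal(size=(60, 32))
tgt = rng.normal(size=(60, 3))
vcp, M = virtual_corresponding_points(feat_src, feat_tgt, tgt)
# Rows are convex weights, so every VCP lies in the convex hull of the target:
assert np.all(M >= 0) and np.allclose(M.sum(axis=1), 1.0)
assert np.all(vcp.min(0) >= tgt.min(0) - 1e-9)
assert np.all(vcp.max(0) <= tgt.max(0) + 1e-9)
```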
\begin{figure}[!t]
\centering
\includegraphics[width=0.65\linewidth]{Figure/Fig3.pdf}\vspace{-1.5mm}
\caption{Illustration of the distribution limitation of VCPs. The red and green represent the \textit{source} and the \textit{target} respectively. In this case, only a part of the corresponding points can be fitted by the VCPs, which are generated by performing the weighted average on the \textit{target}. And the corresponding points of the \textit{source} points marked by the box can never be fitted since the distribution of the VCPs is limited to the convex hull of the \textit{target}.}
\label{Fig:refinement}
\vspace{-5mm}
\end{figure}
Our contributions can be summarized as follows:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item[1).] We propose a point cloud registration method named VRNet to guarantee high accuracy and high time-efficiency. We present a new type of virtual points called RCPs, which maintain the same shape as the \textit{source} and the same pose as the \textit{target}, to help build reliable correspondences.
\item[2).] We design a novel correction-walk module in our VRNet to learn an offset to break the distribution limitation of the initial VCPs. Besides, a hybrid loss function is proposed to enhance the rigidity and geometric structure consistency between the learned RCPs and the \textit{source}.
\item[3).] Remarkable results on benchmark datasets validate the superiority and effectiveness of our proposed method for robust 3D point cloud registration.
\end{itemize}
\section{Related work} \label{sec:relatedwork}
Employing the deep learning technique in the 3D point cloud registration task has received widespread attention recently. In this section, we provide a brief review of learning-based point cloud registration methods. A detailed summary of traditional point cloud registration methods is provided in \cite{ruslinkiewicz_efficientVariantsIcp_3DDIM_2001,Fran_prc_2015}.
\subsection{Correspondences-free methods}
Deep learning technique provides a new perspective for the 3D point cloud registration task, \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot solving the rigid transformation by comparing the holistic representations of the \textit{source} point cloud and the \textit{target} point cloud. This kind of method is usually called the correspondences-free method and consists of two main steps: global feature extraction and rigid motion solving.
PointNetLK \cite{aoki_ptlk_cvpr_2019} represents a pioneer, which uses PointNet \cite{charles_pointnet_cvpr_2017} to extract the global features of the \textit{source} and \textit{target}. And then a modified LK algorithm is designed to solve the rigid transformation from the difference between these two global features.
A similar work, PCRNet \cite{sarode_pcrnet_arxiv_2019}, replaces the modified LK algorithm with a regression strategy, which yields more accurate registration results. Huang \emph{et al.} \cite{huang_featuremetric_cvpr_2020} propose a more effective global feature extractor inspired by reconstruction methods \cite{zhao_capsule_cvpr_2019,yang_foldingnet_cvpr_2018}, in which an encoder-decoder network is designed to learn a more comprehensive global representation.
However, when outliers exist, there are significant differences in shape and geometric structure between the \textit{source} and the \textit{target} in addition to their poses, so correspondences-free methods usually fail to obtain accurate registration results.
\subsection{Correspondences-based methods}
Correspondences-based methods are built upon correspondences and consist of two main steps: correspondences building and rigid transformation estimation.
Compared with point-to-plane correspondences, plane-to-plane correspondences, \emph{etc.}, point-to-point correspondences are the most common in correspondences-based methods. Among them, feature extractors and reliable correspondences building modules have been deeply explored for more accurate registration.
\noindent\textbf{Feature extractors.}
A number of effective point feature learning methods are employed in the point cloud registration task to obtain more accurate and suitable descriptors for more accurate alignment.
3DFeat-Net \cite{yew_3dfeatnet_eccv_2018} utilizes a set abstraction module proposed by \cite{charles_pointnet2_nips_2017} to summarize the local geometric structure.
DCP \cite{wang_dcp_iccv_2019} uses DGCNN \cite{wang_dgcnn_tog_2019} and Transformer \cite{vaswani_attention_nips_2017} to learn the task-specific features.
Different from the above methods, RPMNet \cite{yew_rpmnet_cvpr_2020} proposes to use a 10D hybrid feature representation, where the normal is additionally used besides the 3D coordinate.
To handle large-scale scene data, DGR \cite{choy_dgr_cvpr_2020} uses a fully convolutional network \cite{choy_fcgf_iccv_2019} to extract features.
Deng \emph{et al.} propose to learn a globally informed 3D local feature in \cite{PPFNet_Deng_2018_CVPR}.
PPF-FoldingNet \cite{PPFFolding_Deng_2018_ECCV} designs a rotation-invariant 3D local descriptor via an unsupervised learning network.
In \cite{perfect_Gojcic_cvpr19}, a rotation-invariant feature, the voxelized smoothed density value, is used for point matching. D3Feat \cite{D3Feat_Bai_2020_CVPR} leverages KPConv \cite{thomas_kpconv_iccv_19} to predict both a detection score and a feature for each 3D point, which helps to detect keypoints and extract effective features at the same time. With the advent of deep learning techniques, learning-based feature extractors have become standard, easy-to-integrate modules.
\begin{figure*}[!t]
\centerline{\includegraphics[width=0.9\linewidth]{Figure/Fig4.pdf}}
\caption{The network architecture of our proposed VRNet. Given the \textit{source} and \textit{target}, DGCNN and Transformer are applied to extract point features. Then, a soft matching matrix is achieved based on the constructed similarity matrix. Virtual corresponding points and corresponding point features are obtained by using the matching matrix to perform the weighted average on the \textit{target} point cloud and the \textit{target} point features respectively. To break the distribution limitation, a correction-walk module is proposed to learn the offset to amend the VCPs to the desired RCPs. Finally, the rigid transformation is solved by the Procrustes algorithm. The network is supervised by the proposed hybrid loss function, which enforces the rigidity and geometry structure consistency between the learned RCPs and the \textit{source} point cloud.}
\label{Fig:network}
\end{figure*}
\noindent\textbf{Correspondences building.}
Since the accuracy of correspondences matters more than their number in the registration task, constructing a set of reliable correspondences for robust registration has become a widely accepted strategy.
To this end, some methods propose to distinguish inliers and outliers, and then solely build correspondences on the identified inliers.
For example, Predator \cite{predator_Huang_2021_CVPR} proposes an overlap attention module to recognize the inliers. PRNet \cite{wang_prnet_nips_2019} selects the points with more obvious features as inliers.
Besides, selecting reliable correspondences from the constructed initial correspondences is also an effective method.
3DRegNet \cite{pais_3dregnet_cvpr_2020}, DGR \cite{choy_dgr_cvpr_2020}, and consensus maximization method \cite{probst_consensusMax_cvpr_2019} concentrate on estimating the reliability weight of each correspondence.
3DRegNet estimates the weight of each pair of points individually, while DGR considers neighbor information via high-dimensional convolution.
The consensus maximization method is unsupervised, relying on the principle of maximizing the number of consistent correspondences. Moreover, consistent correspondences can also be selected based on RANSAC \cite{3dlocal_Deng_CVPR_19}.
However, these operations are complicated and time-consuming.
Thus, an alternative virtual point-based strategy has been proposed, which constructs correspondences for all \textit{source} points uniformly, without distinguishing inliers and outliers, using virtual points. In DeepVCP \cite{lu_deepvcp_iccv_2019}, these virtual corresponding points are constructed by a weighted average over points generated from a prior transformation. In contrast, all the points in the \textit{target} point cloud are used in DCP \cite{wang_dcp_iccv_2019}. However, this virtual point-based outlier processing strategy suffers from serious degeneration for two reasons, \emph{i.e.}, insufficient supervision and the distribution limitation of the learned virtual corresponding points.
\noindent\textbf{Rigid transformation estimation}. The Procrustes algorithm \cite{gower_procrustes_1975} is the most common strategy to solve the rigid transformation \cite{wang_dcp_iccv_2019, wang_prnet_nips_2019,lu_deepvcp_iccv_2019,yew_rpmnet_cvpr_2020}, and it has been proved optimal given correct correspondences. In addition, direct regression of the motion parameters has also received widespread attention in recent years \cite{pais_3dregnet_cvpr_2020,sarode_pcrnet_arxiv_2019,3dlocal_Deng_CVPR_19}.
\section{Proposed Method} \label{4-method}
\subsection{Preliminaries}
3D point cloud registration aims to estimate the rigid transformation that best aligns two given point clouds $\mathbf{X} = [{\mathbf{x}_i}] \in \mathbb{R}^{3\times N_\mathbf{X}}$ and $\mathbf{Y} = [\mathbf{y}_j] \in \mathbb{R}^{3\times N_\mathbf{Y}}$, where ${\mathbf{x}_i} \in \mathbb{R}^3$, $\mathbf{y}_j \in \mathbb{R}^3$, and $N_\mathbf{X}$ and $N_\mathbf{Y}$ are the numbers of points in $\mathbf{X}$ and $\mathbf{Y}$ respectively; $N_\mathbf{X}$ and $N_\mathbf{Y}$ need not be equal. Usually, $\mathbf{X}$ and $\mathbf{Y}$ are called the \textit{source} and the \textit{target} point cloud, respectively. In this paper, we model the rigid transformation by the rotation matrix $\mathbf{R} \in SO(3)$ and the translation vector $\mathbf{t} \in \mathbb{R}^3$.
Furthermore, point matching is a key problem in the 3D point cloud registration task, which is usually tackled by solving a binary matching matrix $\mathbf{M}=[m_{ij}]_{{N}_\mathbf{X} \times {N}_\mathbf{Y}}$, where $m_{ij} \in \{0,1\}$, \emph{i.e.},
\begin{equation}
m_{ij} =
\begin{cases}
1 & \text{if $\mathbf{x}_i$ and $\mathbf{y}_j$ are matched,} \\
0 & \text{otherwise.}
\end{cases}
\label{Eq:matching_matrix}
\end{equation}
Within the virtual point-based methods, the matching matrix $\mathbf{M}$ is relaxed to $[0,1]^{{N}_\mathbf{X} \times {N}_\mathbf{Y}}$, where $m_{ij}$ represents the matching probability between the point $\mathbf{x}_i$ and the point $\mathbf{y}_j$, and $\mathbf{M}$ is called the soft matching matrix.
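To make this relaxation concrete, the following minimal NumPy sketch (toy similarity scores, not the learned network) shows how a row-wise softmax turns arbitrary scores into a soft matching matrix, and how a binary matrix can be recovered from it by a row-wise argmax:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(4, 5))              # toy similarity scores, N_X x N_Y

# Row-wise softmax relaxes the binary matching matrix to a soft one in [0,1].
M = np.exp(S - S.max(axis=1, keepdims=True))
M /= M.sum(axis=1, keepdims=True)
assert np.allclose(M.sum(axis=1), 1.0)   # each source row is a probability vector

# A hard (binary) matching matrix can be recovered by the row-wise argmax.
M_hard = np.zeros_like(M)
M_hard[np.arange(M.shape[0]), M.argmax(axis=1)] = 1.0
```

Each row of the soft matrix sums to one, so every \textit{source} point receives a full probability distribution over \textit{target} points.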
\subsection{VRNet architecture} \label{3-1-pipeline}
We advocate constructing the reliable corresponding points of all the \textit{source} points uniformly without distinguishing the inliers and the outliers to ensure both high time-efficiency and high accuracy.
To this end, we propose to learn a new type of virtual points called RCPs, defined as the point set with the same shape as the \textit{source} and the same pose as the \textit{target}. In this way, a pair of consistent point clouds, \emph{i.e.}, the \textit{source} and the RCPs, is produced to eliminate the influence of outliers. Meanwhile, rectifying \underline{V}CPs to \underline{R}CPs facilitates generating reliable correspondences to solve for the relative pose between the \textit{source} and the RCPs, which is the same as the relative pose between the \textit{source} and the \textit{target}.
The entire architecture of our VRNet is illustrated in \figref{Fig:network}. First, the initial VCPs are constructed. Then, the RCPs are achieved by learning a rectified offset in the correction-walk module. Finally, the rigid transformation is estimated by the Procrustes algorithm. We introduce these procedures in detail as follows.
\noindent\textbf{VCPs construction.}
Inspired by DCP \cite{wang_dcp_iccv_2019}, we construct the virtual corresponding points by using the matching matrix $\mathbf{M}$ to perform a weighted average on the \textit{target} point cloud $\mathbf{Y}$. To this end, we first apply ``DGCNN + Transformer'' as our feature extractor. Specifically, a shared DGCNN \cite{wang_dgcnn_tog_2019} is employed to compute the initial point features for the two input point clouds because it achieves an informative representation by summarizing neighbor information through the edge convolution operation. Besides, inspired by the recent success of attention mechanisms, the Transformer module \cite{vaswani_attention_nips_2017} is also used to learn co-contextual information of the \textit{source} and \textit{target} point clouds. Formally, the DGCNN feature extraction can be summarized as,
\begin{equation}
\mathbf{F}_{\mathbf{x}_i}^\ell = \text{maxpool}(\textbf{MLP}_\alpha(\text{cat}(\mathbf{F}_{\mathbf{x}_i}^{\ell-1}, \mathbf{F}_{\mathbf{x}_{ik}}^{\ell-1}))),\ \mathbf{F}_{\mathbf{x}_{ik}}\in \mathbb{N}({\mathbf{F}_{\mathbf{x}_i}}),
\label{equ:dgcnn}
\end{equation}
where $\ell$ represents the $\ell\text{-th}$ layer of edge convolution. $\mathbb{N}({\mathbf{F}_{\mathbf{x}_i}})$ denotes the K-nearest neighbors of ${\mathbf{F}_{\mathbf{x}_i}}$ in the feature space with the pre-defined parameter $K$, \emph{i.e.}, $k\in [1,K]$. The initial point feature is the original 3D coordinate. $\textbf{MLP}_\alpha$ is a multi-layer perceptron (MLP) network parameterized by $\alpha$. $\text{cat}(\cdot,\cdot)$ represents the concatenation operation and $\text{maxpool}(\cdot)$ represents the max-pooling operation.
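A single edge convolution layer from \equref{equ:dgcnn} can be sketched as below. This is a minimal NumPy illustration in which one random linear map plus ReLU stands in for the full $\textbf{MLP}_\alpha$; the actual network stacks several such layers with learned weights:

```python
import numpy as np

def edge_conv(F, W, K=3):
    """One EdgeConv layer: cat(F_i, F_ik) over K feature-space neighbours,
    a shared linear map + ReLU standing in for MLP_alpha, then max-pooling."""
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)   # pairwise distances
    np.fill_diagonal(d2, np.inf)                          # exclude the point itself
    knn = np.argsort(d2, axis=1)[:, :K]                   # (N, K) neighbour indices
    edge = np.concatenate([np.repeat(F[:, None, :], K, axis=1), F[knn]], axis=-1)
    h = np.maximum(edge @ W, 0.0)                         # (N, K, c_out)
    return h.max(axis=1)                                  # max-pool over neighbours

rng = np.random.default_rng(1)
F0 = rng.normal(size=(8, 3))      # initial features = 3D coordinates
W0 = rng.normal(size=(6, 16))     # illustrative random weights
F1 = edge_conv(F0, W0)
```

The output keeps one feature vector per point, with each vector summarizing its local neighborhood.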
After several edge convolutions, point features are achieved and denoted as $\mathbf{F}_\mathbf{X}=[\mathbf{F}_{\mathbf{x}_i}]\in \mathbb{R}^{N_\mathbf{X}\times c}$, $\mathbf{F}_\mathbf{Y}=[\mathbf{F}_{\mathbf{y}_j}]\in \mathbb{R}^{N_\mathbf{Y}\times c}$ where $c$ is the pre-defined feature dimension. Then, the Transformer module is applied as,
\begin{equation}
\left\{ {\begin{aligned}
\Phi_\mathbf{X} &= \mathbf{F}_\mathbf{X} + \eta_1(\mathbf{F}_\mathbf{X}, \mathbf{F}_\mathbf{Y})\\
\Phi_\mathbf{Y} &= \mathbf{F}_\mathbf{Y} + \eta_2(\mathbf{F}_\mathbf{Y}, \mathbf{F}_\mathbf{X})
\end{aligned}} \right. ,
\end{equation}
where $\eta_1: \mathbb{R}^{N_\mathbf{X}\times c} \times \mathbb{R}^{N_\mathbf{Y}\times c} \to \mathbb{R}^{N_\mathbf{X} \times c}$ and $\eta_2: \mathbb{R}^{N_\mathbf{Y}\times c} \times \mathbb{R}^{N_\mathbf{X}\times c} \to \mathbb{R}^{N_\mathbf{Y} \times c}$ represent the Transformer function. The features of $\mathbf{x}_i$ and $\mathbf{y}_j$ are denoted as $\Phi_{\mathbf{x}_i}$ and $\Phi_{\mathbf{y}_j}$, so $\Phi_\mathbf{X}\in \mathbb{R}^{N_\mathbf{X}\times c}$ and $\Phi_\mathbf{Y}\in \mathbb{R}^{N_\mathbf{Y}\times c}$ indicate the final point features of all \textit{source} points and \textit{target} points.
Then, we take the scaled dot product attention metric to calculate the similarity matrix $\mathbf{S}=[s_{ij}]_{N_{\mathbf{X}} \times N_{\mathbf{Y}}}$, where
\begin{equation}
s_{ij} = \Phi_{\mathbf{x}_i} \Phi_{\mathbf{y}_j}^\text{T} / \sqrt{c}.
\end{equation}
Next, a row-wise $\text{softmax}$ operation is employed to obtain the final soft matching matrix $\mathbf{M} = \text{softmax}(\mathbf{S})$. Then, the VCPs of the \textit{source} point cloud are obtained as $\mathbf{Y}^{\prime} = \mathbf{Y}\mathbf{M}^\mathrm{T}$, $\mathbf{Y}^{\prime} \in \mathbb{R}^{3\times N_\mathbf{X}}$.
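The VCP construction above can be sketched as follows, with random features standing in for the learned $\Phi_\mathbf{X}$ and $\Phi_\mathbf{Y}$. Since each VCP is a convex combination of \textit{target} points, every coordinate stays within the \textit{target}'s coordinate-wise bounds, which is precisely the distribution limitation of VCPs:

```python
import numpy as np

rng = np.random.default_rng(2)
c, NX, NY = 16, 5, 7
Phi_X = rng.normal(size=(NX, c))      # stand-ins for learned source features
Phi_Y = rng.normal(size=(NY, c))      # stand-ins for learned target features
Y = rng.normal(size=(3, NY))          # target point cloud

S = Phi_X @ Phi_Y.T / np.sqrt(c)      # scaled dot-product similarity
M = np.exp(S - S.max(axis=1, keepdims=True))
M /= M.sum(axis=1, keepdims=True)     # row-wise softmax -> soft matching matrix

Y_vcp = Y @ M.T                       # VCPs: one virtual point per source point

# Each VCP coordinate is a convex combination of target coordinates, so it
# cannot leave the target's per-axis range (up to floating-point tolerance).
assert (Y_vcp.min(axis=1) >= Y.min(axis=1) - 1e-9).all()
assert (Y_vcp.max(axis=1) <= Y.max(axis=1) + 1e-9).all()
```
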
\noindent\textbf{RCPs construction by correction-walk.}
Constructing the VCPs to fit the real corresponding points is the fundamental principle of existing virtual point-based methods. This idea makes sense when the real corresponding points are surrounded or overlapped by the \textit{target} points. In this case, these points can be fitted by learning a soft matching matrix $\mathbf{M}\in [0,1]^{N_\mathbf{X} \times N_\mathbf{Y}}$.
However, a weighted average over the \textit{target} using the soft matching matrix $\mathbf{M}$ can only cover a convex set in 3D space, so the real corresponding points of some outliers cannot be fitted since they lie outside this convex set. A typical example is presented in \figref{Fig:refinement}. This shortcoming is common in practice and results in many wrong correspondences. To break this distribution limitation of the VCPs, we propose a correction module, called correction-walk, to learn offsets that rectify the VCPs to the RCPs.
For VCPs $\mathbf{Y}^{\prime}$, we construct the corresponding virtual point features analogously, \emph{i.e.}, $\Phi_{\mathbf{Y}^{\prime}}= \Phi_\mathbf{Y} \mathbf{M}^\mathrm{T}$.
Then, we formulate the seeds $\mathbf{E}=\text{cat}(\Phi_{\mathbf{X}},\Phi_{\mathbf{Y}^{\prime}}) \in \mathbb{R}^{N_\mathbf{X}\times 2c }$, where $\text{cat}(\cdot,\cdot)$ denotes the concatenation operation. Since we advocate rectifying the degenerated VCPs to the RCPs, whose shape is defined to be the same as that of the \textit{source}, the offsets from the VCPs to the RCPs are expected to be generated from $\mathbf{E}$ according to the feature differences between the VCPs and the \textit{source}, \emph{i.e.}, between $\Phi_{\mathbf{Y}^{\prime}}$ and $\Phi_{\mathbf{X}}$.
The proposed correction-walk module learns the correction displacement from the seeds. Specifically, it is implemented by another MLP network, which consumes the seeds and outputs the Euclidean-space offset $\Delta \mathbf{t}_{\mathbf{X}} \in \mathbb{R}^{3\times N_\mathbf{X}}$, \emph{i.e.},
\begin{equation}
\Delta \mathbf{t}_{\mathbf{X}} = \mathbf{MLP}_\beta(\mathbf{E}).
\end{equation}
Thus, the final learned RCPs are produced by adding the learned offsets to the VCPs, \emph{i.e.}, $\mathbf{Y}^{\prime\prime} = \mathbf{Y}^{\prime} + \Delta \mathbf{t}_{\mathbf{X}}$, $\mathbf{Y}^{\prime\prime} \in \mathbb{R}^{3\times N_\mathbf{X}}$.
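A minimal sketch of the correction-walk step follows, with a tiny two-layer MLP of random weights standing in for the trained $\mathbf{MLP}_\beta$. Unlike the convex combination that produces the VCPs, the additive offset is unconstrained, so the resulting RCPs can leave the \textit{target}'s convex set:

```python
import numpy as np

rng = np.random.default_rng(3)
c, NX = 16, 5
Phi_X  = rng.normal(size=(NX, c))     # stand-in source point features
Phi_Yp = rng.normal(size=(NX, c))     # stand-in VCP features (Phi_Y M^T)
Y_vcp  = rng.normal(size=(3, NX))     # stand-in virtual corresponding points

E = np.concatenate([Phi_X, Phi_Yp], axis=1)        # seeds, (N_X, 2c)

# Correction-walk sketched as a two-layer MLP with illustrative random weights.
W1 = rng.normal(size=(2 * c, 8))
W2 = rng.normal(size=(8, 3))
delta_t = (np.maximum(E @ W1, 0.0) @ W2).T         # offset, (3, N_X)

Y_rcp = Y_vcp + delta_t                            # rectified corresponding points
```
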
\noindent\textbf{Rigid transformation estimation.}
After matching the points in $\mathbf{X}$ with the points in $\mathbf{Y}^{\prime\prime}$, the rigid transformation between the \textit{source} and the RCPs can be solved in closed form by the Procrustes algorithm \cite{gower_procrustes_1975}. Because the pose of the RCPs is the same as that of the \textit{target}, the desired final relative pose is obtained naturally. Specifically, $\mathbf{H} = \sum_{i = 1}^{N_\mathbf{X}} (\mathbf{x}_{i} -\mathbf{\bar{x}} ) (\mathbf{y}_{i}^{\prime\prime} - \mathbf{\bar{y^{\prime\prime}}})^\mathrm{T}$, where $\mathbf{ \bar{x}}$ and $\mathbf{\bar{ y^{\prime\prime}}}$ are the centers of $\mathbf{X}$ and $\mathbf{{Y}^{\prime\prime}}$. Then, by using the singular value decomposition (SVD) to decompose $\mathbf{H} = \mathbf{UDV}^\mathrm{T}$, we obtain the final rigid transformation as,
\begin{equation}
\left\{
\begin{aligned}
\mathbf{R} &= \mathbf{V}\mathbf{U}^\mathrm{T} \\
\mathbf{t} &= \mathbf{-R} \mathbf{\bar x} + \mathbf{\bar{y^{\prime\prime}}}
\end{aligned}
\right. .
\end{equation}
Note that the Procrustes algorithm is valid only given correct correspondences. Hence, this rigid transformation estimation solver would be problematic or even invalid if the learned virtual corresponding points degenerate. Unfortunately, this trap has generally been ignored. In this paper, we propose the RCPs, which rectify this inherent drawback of the original VCPs to guarantee the reliability of the constructed correspondences for the final relative pose estimation.
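The closed-form solve above can be sketched as follows. As an assumption beyond the equations in the text, we include the determinant correction that Kabsch/Procrustes implementations commonly insert between $\mathbf{V}$ and $\mathbf{U}^\mathrm{T}$ to rule out reflections:

```python
import numpy as np

def procrustes(X, Ypp):
    """Closed-form R, t aligning X (3,N) onto Y'' (3,N) via SVD of H.
    The diagonal sign matrix D guards against reflection solutions."""
    xb, yb = X.mean(axis=1, keepdims=True), Ypp.mean(axis=1, keepdims=True)
    H = (X - xb) @ (Ypp - yb).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = yb - R @ xb          # t = -R x_bar + y''_bar
    return R, t

# Sanity check: recover a known rigid motion from exact correspondences.
rng = np.random.default_rng(4)
X = rng.normal(size=(3, 30))
a = 0.3
R_gt = np.array([[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]])
t_gt = np.array([[0.5], [-1.0], [2.0]])
R, t = procrustes(X, R_gt @ X + t_gt)
```

With correct correspondences the recovery is exact up to floating-point precision, which is why degenerated VCPs are so damaging to this solver.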
\subsection{Loss function} \label{3-2-loss}
In existing virtual point-based point cloud registration methods, \emph{e.g.}, DCP \cite{wang_dcp_iccv_2019}, the loss function usually only supervises the final rigid transformation. Due to this insufficient constraint, the distribution of the learned corresponding points degenerates, as shown in \figref{Fig:comparison}.
To solve this problem, we propose a novel hybrid loss function, which drives the learned RCPs and the \textit{source} point cloud to stay consistent in terms of rigidity and geometric structure.
\noindent\textbf{Corresponding point supervision.}
This supervised loss function concentrates on the predicted matching matrix $\mathbf{M}$. Although $\mathbf{M}$ is a soft probability matrix, it is enforced to approach the ground-truth binary matching matrix to maintain rigidity. Herein, we design the loss function as:
\begin{equation}
\mathcal{L}_0 = - \frac{{\sum_{i=1}^{N_\mathbf{X}} \sum_{j=1}^{N_\mathbf{Y}} \left( {{m}_{ij}^\text{pred} {m}_{ij}^\text{gt}} \right)}}{{\sum_{i=1}^{N_\mathbf{X}} \sum_{j=1}^{N_\mathbf{Y}} \left( {{m}_{ij}^\text{gt}} \right)}},
\label{Eq:single_loss}
\end{equation}
where the superscripts ``$\text{pred}$'' and ``$\text{gt}$'' represent the prediction and the ground truth respectively, and $m_{ij}$ is an entry of the matching matrix $\mathbf{M}$. However, if $\mathbf{x}_i$ is an outlier, $m_{ij}^\text{gt}=0$ for all $j=1,...,N_{\mathbf{Y}}$. In this case, $m_{ij}^\text{pred}$ is left unconstrained, \emph{i.e.}, $\mathcal{L}_0$ can only supervise the inliers and ignores the outliers. \textit{By corresponding point supervision, we emphasize the distribution of inliers. }
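Concretely, $\mathcal{L}_0$ in \equref{Eq:single_loss} reduces to the negative mean predicted probability at the ground-truth matches; outlier rows (all-zero in $\mathbf{M}^\text{gt}$) contribute nothing. A sketch with toy matrices:

```python
import numpy as np

def loss_L0(M_pred, M_gt):
    """Negative mean predicted probability at ground-truth matches (Eq. L0)."""
    return -(M_pred * M_gt).sum() / M_gt.sum()

M_gt = np.array([[1., 0., 0.],
                 [0., 0., 1.],
                 [0., 0., 0.]])       # third source point is an outlier
M_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.3, 0.3, 0.4]])  # outlier row is ignored by the loss
L0 = loss_L0(M_pred, M_gt)            # -(0.8 + 0.6) / 2 = -0.7
```
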
\noindent\textbf{Local motion consensus.}
To guarantee that the predicted corresponding points preserve rigidity, the rigid motion estimated from all correspondences and the rigid motions estimated from subsets of the correspondences should be the same; we call this \textit{local motion consensus}. Specifically, because the \textit{source} point cloud $\mathbf{X}$ and the predicted RCPs $\mathbf{Y}^{\prime\prime}$ are matched, the correspondence set is $\Omega = \{(\mathbf{x}_i,\mathbf{y}_i^{\prime\prime})\,|\,i \in [1,N_\mathbf{X}], \mathbf{x}_i \in \mathbf{X}, \mathbf{y}_i^{\prime\prime} \in \mathbf{Y}^{\prime\prime}\}$. The globally optimal rigid motion, denoted $\mathbf{R}$, $\mathbf{t}$, can be solved by the Procrustes algorithm \cite{gower_procrustes_1975} based on $\Omega$. Then, we randomly select $G$ subsets of correspondences, \emph{i.e.}, $\Omega_g \subset \Omega$, $g\in [1,G]$, each of size $|\Omega_g| \geq 3$. The local rigid motion $\mathbf{R}_g,\mathbf{t}_g$ can then be solved from $\Omega_g$ by the Procrustes algorithm. Ideally, $\mathbf{R}_g,\mathbf{t}_g$ should equal $\mathbf{R},\mathbf{t}$. Based on this observation, we define an unsupervised loss function as:
\begin{equation}
\mathcal{L}_1 = \frac{1}{G}\sum_{g=1}^{G} (\text{rmse}(\textbf{R}_g^\textrm{T}\textbf{R}, \mathbf{I}_3)+\text{rmse}(\textbf{t}_g,\textbf{t})),
\label{Eq:local_loss}
\end{equation}
where $\mathbf{I}_3$ is a 3-order identity matrix, $\text{rmse}(\cdot,\cdot)$ is the root mean squared error. $\mathcal{L}_1$ should converge to 0 ideally. \textit{By the local motion consensus, we drive the motion of each local part to be consistent with the global motion}.
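The local motion consensus term can be sketched as follows (subset size and random seeds are illustrative choices, not the paper's settings); for a perfectly rigid pair of matched clouds, every subset yields the global motion and $\mathcal{L}_1$ vanishes:

```python
import numpy as np

def procrustes(X, Y):
    xb, yb = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((X - xb) @ (Y - yb).T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, yb - R @ xb

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def loss_L1(X, Ypp, G=10, m=4, rng=None):
    """Average deviation of G random subset motions from the global motion."""
    if rng is None:
        rng = np.random.default_rng(0)
    R, t = procrustes(X, Ypp)                    # global motion over all pairs
    total = 0.0
    for _ in range(G):
        idx = rng.choice(X.shape[1], size=m, replace=False)  # |Omega_g| >= 3
        Rg, tg = procrustes(X[:, idx], Ypp[:, idx])
        total += rmse(Rg.T @ R, np.eye(3)) + rmse(tg, t)
    return total / G

rng = np.random.default_rng(5)
X = rng.normal(size=(3, 40))
a = 0.7
R_gt = np.array([[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]])
Ypp = R_gt @ X + np.array([[1.0], [2.0], [3.0]])
L1 = loss_L1(X, Ypp)    # ~0 for a perfectly rigid correspondence set
```
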
\noindent\textbf{Geometry structure supervision.}
In this part, the \textit{source} point cloud $\mathbf{X}$ and the predicted RCPs $\mathbf{Y}^{\prime\prime}$ are each formulated as a graph: every point is a node, and the distance between any two points is an edge. Obviously, the edge between $\mathbf{x}_i, \mathbf{x}_j \in \mathbf{X}$ should equal the edge between the corresponding two points $\mathbf{y}_i^{\prime\prime}, \mathbf{y}_j^{\prime\prime} \in \mathbf{Y}^{\prime\prime}$.
Here, we denote the edge matrix of the \textit{source} as $\mathbf{D}$,
\begin{equation}
\mathbf{D} = \left[ {\begin{array}{*{20}{c}}
0&{d_{1,2}}&{\cdots}&{d_{1,N_\mathbf{X}}}\\
{d_{2,1}}&0&{\cdots}&{d_{2,N_\mathbf{X}}}\\
{\vdots}&{\vdots}& \ddots & \vdots \\
{d_{N_\mathbf{X},1}}&{d_{N_\mathbf{X},2}}& \cdots &0
\end{array}} \right],
\end{equation}
where $d_{ij}$ is the Euclidean distance between points $\mathbf{x}_i$ and $\mathbf{x}_j$, and $\mathbf{D} \in \mathbb{R}^{N_\mathbf{X} \times N_\mathbf{X}}$. Analogously, we obtain the edge matrix of the learned RCPs $\mathbf{Y}^{\prime\prime}$ as $\mathbf{D}^{\prime\prime} \in \mathbb{R}^{N_\mathbf{X} \times N_\mathbf{X}}$. Because $\mathbf{X}$ and $\mathbf{Y}^{\prime\prime}$ are matched sequentially,
we propose to supervise these two edge matrices by defining the loss function as:
\begin{equation}
\mathcal{L}_2 = \text{rmse}(\mathbf{D}, \mathbf{D}^{\prime\prime}).
\label{Eq:double_loss}
\end{equation}
In addition to the edge constraint, we also supervise the node.
Here, we constrain the node distribution by defining the unsupervised loss function as follows:
\begin{equation}
\mathcal{L}_3 = \text{rmse}(\textbf{R}^\text{pred}\mathbf{X}+\mathbf{t}^\text{pred}, \mathbf{Y}^{\prime\prime}),
\label{Eq:global_loss}
\end{equation}
where $\textbf{R}^\text{pred}$ and $\textbf{t}^\text{pred}$ are the predicted rigid motion.
\textit{By the geometry structure supervision, we emphasize the geometry structure consistency of the two point clouds.}
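Both geometry-structure terms can be sketched as follows; for a truly rigid pair of matched clouds the edge matrices coincide and $\mathcal{L}_2 = \mathcal{L}_3 = 0$ (up to floating point):

```python
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def loss_L2(X, Ypp):
    """Edge consistency: pairwise-distance matrices of matched clouds agree."""
    D  = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)     # (N, N)
    Dp = np.linalg.norm(Ypp[:, :, None] - Ypp[:, None, :], axis=0)
    return rmse(D, Dp)

def loss_L3(X, Ypp, R_pred, t_pred):
    """Node consistency: the predicted motion maps the source onto the RCPs."""
    return rmse(R_pred @ X + t_pred, Ypp)

rng = np.random.default_rng(6)
X = rng.normal(size=(3, 10))
a = 0.2
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
t = np.array([[1.0], [0.0], [-1.0]])
Ypp = R @ X + t                        # perfectly rigid RCPs for illustration
L2, L3 = loss_L2(X, Ypp), loss_L3(X, Ypp, R, t)
```
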
\noindent\textbf{Amendment offset supervision.}
In addition to emphasizing the rigidity and geometry structure, a special supervised loss function is proposed here to supervise the amendment offset explicitly, which is defined as:
\begin{equation}
\mathcal{L}_4 = \text{rmse}(\textbf{R}^\text{gt}\mathbf{X}+\mathbf{t}^\text{gt}- \mathbf{Y}^{\prime}, \Delta \mathbf{t}_\mathbf{X}),
\label{Eq:finetune_loss}
\end{equation}
where $\textbf{R}^\text{gt}$ and $\textbf{t}^\text{gt}$ represent the ground truth rigid motion. \textit{By amendment offset supervision, we enforce the correction-walk module to learn the offset accurately.}
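In other words, $\mathcal{L}_4$ compares the learned walk with the residual between the true corresponding points $\textbf{R}^\text{gt}\mathbf{X}+\mathbf{t}^\text{gt}$ and the VCPs; a minimal sketch:

```python
import numpy as np

def loss_L4(X, Y_vcp, delta_t, R_gt, t_gt):
    """Offset supervision: the learned walk should equal the gap between the
    true corresponding points (R_gt X + t_gt) and the VCPs."""
    target_offset = R_gt @ X + t_gt - Y_vcp
    return np.sqrt(np.mean((target_offset - delta_t) ** 2))

rng = np.random.default_rng(7)
X = rng.normal(size=(3, 6))
R_gt, t_gt = np.eye(3), np.array([[0.1], [0.2], [0.3]])
Y_vcp = rng.normal(size=(3, 6))                 # toy VCPs
perfect_walk = R_gt @ X + t_gt - Y_vcp          # the ideal correction-walk output
L4 = loss_L4(X, Y_vcp, perfect_walk, R_gt, t_gt)   # 0 for a perfect walk
```
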
In our implementation, we first train the feature extractor with $\mathcal{L}_0$. Then, we freeze this part and train the correction-walk module with $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3$ and $\mathcal{L}_4$ combined as follows, where $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ are trade-off parameters:
\begin{equation}
\mathcal{L} = \lambda_1 \mathcal{L}_1+ \lambda_2 \mathcal{L}_2+ \lambda_3 \mathcal{L}_3 + \lambda_4 \mathcal{L}_4.
\label{Eq:loss}
\end{equation}
Note that $\mathcal{L}_4$ is a supervised loss function and $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3$ are unsupervised loss functions.
\subsection{Implementation details}
\noindent\textbf{Network architecture details.} The framework is shown in \figref{Fig:network}, where two deep learning modules exist, \emph{i.e.}, the feature extractor and the correction-walk. The feature extractor module consists of the DGCNN and the Transformer. The correction-walk module is an MLP network.
In the DGCNN part, five edge convolution layers are used, with the numbers of filters set to $[64,64,128,256,512]$. In each edge convolution layer, BatchNorm and ReLU are applied. The parameter $K$ in \equref{equ:dgcnn} is set to 20. In the Transformer part, one encoder and one decoder are applied with 4 heads, and the embedding dimension $c$ is set to 512.
The layer dimensions of the correction-walk MLP are set to $[512,256,512,256,128,16,3]$. Except for the final layer, BatchNorm and ReLU are applied. The final 3D output represents the ``walk'' on the XYZ coordinates.
\noindent\textbf{Training the network.}
At first, we train the network using $\mathcal{L}_0$ with the Adam optimizer, an initial learning rate of 1e-3, and a batch size of 28 for 100 epochs. We implement the network in PyTorch and train it on a GTX 1080Ti. Then, we freeze the feature extractor and fine-tune the network for 100 epochs, resetting the initial learning rate to 1e-4. Here, we use the loss $\mathcal{L}$ and empirically set $\lambda_1=\lambda_2=\lambda_3=1.0$ and $\lambda_4=100$ in \equref{Eq:loss}. We set $G=10$ in \equref{Eq:local_loss}.
\section{Experiments and evaluation}
\label{4_experiments}
In this section, to demonstrate the superiority of our proposed method, we perform extensive experiments and comparisons on several benchmark datasets, including the synthetic dataset ModelNet40 \cite{wu_modelnet40_cvpr_2015}, the real indoor datasets SUN3D \cite{xiao_sun3d_iccv_2013} and 3DMatch \cite{zeng_3dmatch_cvpr_17}, and the real outdoor dataset KITTI \cite{geiger_kitti_rr_13}.
\begin{table*}[ht]
\renewcommand\arraystretch{1.0}
\caption{Evaluation on the consistent point clouds. The boldface indicates the best performance and the underline indicates the second-best. The green numbers indicate the improvement of our method over the second-best results; if ours is not the best, they indicate the gap to the best results. All tables follow this protocol.}
\vspace{-0.4cm}
\begin{center}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{lcccccccccccc}
\toprule
\multirow{2}*{\textbf{Method}}&\multicolumn{3}{c}{\textbf{RMSE(R)}}&\multicolumn{3}{c}{\textbf{MAE(R)}}&\multicolumn{3}{c}{\textbf{RMSE(t)}\ ($\times 10^{-4}$)}&\multicolumn{3}{c}{\textbf{MAE(t)}\ ($\times 10^{-4}$)} \\
\cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
~&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}}&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}}&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}}&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}} \\
\midrule
\textbf{ICP} &12.282 &12.707 &11.971 &4.613 &5.075 &4.497 &477.44 &485.32 &483.20 & 22.80 &23.55 & \underline{43.35}\\
\textbf{FGR} &20.054 &21.323 &18.359 &7.146 &8.077 &6.367 &441.21 &457.77 &391.01 &164.20 &\underline{18.07} &144.87\\
\textbf{PTLK} &13.751 &15.901 &15.692 &3.893 &4.032 &3.992 &199.00 &261.15 &239.58 & 44.52 &62.13 & 56.37\\
\textbf{DCP-v2} & \underline{1.094} & 3.256 & 8.417 &0.752 &2.102 &5.685 &\underline{17.17} & \underline{63.17} &318.37 & \underline{11.73} &46.29 &233.70\\
\textbf{PRNet} & 1.722 & \underline{3.060} & \underline{3.218} &\underline{0.665} &\underline{1.326} &\underline{1.446} &63.72 &100.95 &\underline{111.78} & 46.52 &75.89 & 83.78\\
\midrule
\textbf{VRNet}
&\makecell[c]{\textbf{0.091}\\\scriptsize{\color{SpringGreen}+91.68\%}}
&\makecell[c]{\textbf{0.209}\\\scriptsize{\color{SpringGreen}+93.17\%}}
&\makecell[c]{\textbf{2.558}\\\scriptsize{\color{SpringGreen}+20.51\%}}
&\makecell[c]{\textbf{0.012}\\\scriptsize{\color{SpringGreen}+98.20\%}}
&\makecell[c]{\textbf{0.028}\\\scriptsize{\color{SpringGreen}+97.89\%}}
&\makecell[c]{\textbf{1.016}\\\scriptsize{\color{SpringGreen}+29.74\%}}
&\makecell[c]{\textbf{2.97} \\\scriptsize{\color{SpringGreen}+82.70\%}}
&\makecell[c]{\textbf{7.83} \\\scriptsize{\color{SpringGreen}+87.60\%}}
&\makecell[c]{\textbf{57.02}\\\scriptsize{\color{SpringGreen}+48.99\%}}
&\makecell[c]{\textbf{0.47} \\\scriptsize{\color{SpringGreen}+95.99\%}}
&\makecell[c]{\textbf{0.99} \\\scriptsize{\color{SpringGreen}+94.52\%}}
&\makecell[c]{\textbf{28.97}\\\scriptsize{\color{SpringGreen}+33.17\%}} \\
\bottomrule
\end{tabular}}
\label{Tab:consistent_registration}
\end{center}
\vspace{-0.4cm}
\end{table*}
\begin{table*}[t]
\renewcommand\arraystretch{1.0}
\caption{Evaluation on point clouds processed by partial-view, random sample, partial-view \& random sample strategy.}
\vspace{-0.4cm}
\begin{center}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{lccccccccccccc}
\toprule
\multirow{2}*{\textbf{Method}}&\multicolumn{3}{c}{\textbf{RMSE(R)}}&\multicolumn{3}{c}{\textbf{MAE(R)}}&\multicolumn{3}{c}{\textbf{RMSE(t)}}&\multicolumn{3}{c}{\textbf{MAE(t)}} &\\
\cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13}
~&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}}&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}}&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}}&\multicolumn{1}{c}{\textit{UPC}}&\multicolumn{1}{c}{\textit{UC}}&\multicolumn{1}{c}{\textit{ND}} &\\
\midrule
\textbf{ICP} &33.683 &34.894 &35.067 &25.045 &25.455 &25.564 &0.293 &0.293 &0.294 &0.250 &0.251 &0.250 &\multirow{7}{*}{\rotatebox{90}{PV}}\\
\textbf{FGR} &11.238 & 9.932 &27.653 &2.832 &\underline{1.952} &13.794 &0.030 &0.038 &0.070 &\underline{0.008} &\underline{0.007} &0.039&\\
\textbf{PTLK} &16.735 &22.943 &19.939 &7.550 &9.655 &9.076 &0.045 &0.061 &0.057 &0.025 &0.033 &0.032&\\
\textbf{DCP-v2} & 6.709 & 9.769 & 6.883 &4.448 &6.954 &4.534 &0.027 &0.034 &0.028 &0.020 &0.025 &0.021&\\
\textbf{PRNet} & \underline{3.199} & \underline{4.986} & \underline{4.323} &\underline{1.454} &2.329 &\underline{2.051} &\underline{0.016} &\underline{0.021} &\underline{0.017} &0.010 &0.015 &\underline{0.012}&\\
\cmidrule(r){1-13}
\textbf{VRNet}
&\makecell[c]{\textbf{0.982}\\\scriptsize{\color{SpringGreen}+69.30\%}}
&\makecell[c]{\textbf{2.121}\\\scriptsize{\color{SpringGreen}+57.46\%}}
&\makecell[c]{\textbf{3.615}\\\scriptsize{\color{SpringGreen}+16.38\%}}
&\makecell[c]{\textbf{0.496}\\\scriptsize{\color{SpringGreen}+65.89\%}}
&\makecell[c]{\textbf{0.585}\\\scriptsize{\color{SpringGreen}+70.03\%}}
&\makecell[c]{\textbf{1.637}\\\scriptsize{\color{SpringGreen}+20.19\%}}
&\makecell[c]{\textbf{0.0061}\\\scriptsize{\color{SpringGreen}+61.88\%}}
&\makecell[c]{\textbf{0.0063}\\\scriptsize{\color{SpringGreen}+70.00\%}}
&\makecell[c]{\textbf{0.0101}\\\scriptsize{\color{SpringGreen}+40.59\%}}
&\makecell[c]{\textbf{0.0039}\\\scriptsize{\color{SpringGreen}+51.25\%}}
&\makecell[c]{\textbf{0.0039}\\\scriptsize{\color{SpringGreen}+44.29\%}}
&\makecell[c]{\textbf{0.0063}\\\scriptsize{\color{SpringGreen}+47.50\%}}& \\
\midrule \midrule
\textbf{ICP} &11.247 &12.723 &11.472 &4.531 &5.289 &4.752 &0.0421 &0.0454 &0.0430 &0.0232 &0.0231 &0.0249&\multirow{7}{*}{\rotatebox{90}{RS}}\\
\textbf{FGR} &19.293 &19.62 &37.452 &7.054 &7.566 &23.230 &0.0414 &0.0433 &360.6727 &0.0157 &0.0167 &6.0463&\\
\textbf{DCP-v2} & 5.018 & 6.015 & 5.536 &2.921 &3.964 &3.162 &\underline{0.0116} &0.0147 &\underline{0.0127} &\underline{0.0087} &0.0112 &\underline{0.0096}&\\
\textbf{PRNet} & \underline{4.851} & \textbf{3.484} & \underline{3.824} &\underline{2.429} &\textbf{1.764} &\underline{1.781} &0.0174 &\underline{0.0129} &0.0128 &0.0134 &\underline{0.0100} &0.0099&\\
\cmidrule(r){1-13}
\textbf{VRNet}
&\makecell[c]{\textbf{1.496}\\\scriptsize{\color{SpringGreen}+69.16\%}}
&\makecell[c]{\underline{5.651}\\\scriptsize{\color{SpringGreen}-62.20\%}}
&\makecell[c]{\textbf{3.099}\\\scriptsize{\color{SpringGreen}+18.96\%}}
&\makecell[c]{\textbf{0.593}\\\scriptsize{\color{SpringGreen}+75.59\%}}
&\makecell[c]{\underline{1.971}\\\scriptsize{\color{SpringGreen}-11.73\%}}
&\makecell[c]{\textbf{1.476}\\\scriptsize{\color{SpringGreen}+17.13\%}}
&\makecell[c]{\textbf{0.0025}\\\scriptsize{\color{SpringGreen}+78.45\%}}
&\makecell[c]{\textbf{0.0041}\\\scriptsize{\color{SpringGreen}+68.22\%}}
&\makecell[c]{\textbf{0.0077}\\\scriptsize{\color{SpringGreen}+39.37\%}}
&\makecell[c]{\textbf{0.0016}\\\scriptsize{\color{SpringGreen}+81.61\%}}
&\makecell[c]{\textbf{0.0071}\\\scriptsize{\color{SpringGreen}+36.61\%}}
&\makecell[c]{\textbf{0.0057}\\\scriptsize{\color{SpringGreen}+40.63\%}}& \\
\midrule \midrule
\textbf{ICP} &11.971 &13.669 &12.215 &5.298 &6.018 &5.624 &0.0499 &0.0521 &0.0502 &0.0296 &0.0287 &0.0309&\multirow{7}{*}{\rotatebox{90}{PV \& RS}}\\
\textbf{FGR} &7.837 &9.187 &32.491 &\underline{2.076} &2.017 &14.680 &0.0167 &0.0168 &0.0604 &\underline{0.0063} &\underline{0.0065} &0.0360&\\
\textbf{DCP-v2} & 5.818 &7.059 & 6.286 &3.399 &4.564 &3.844 &0.0184 &0.0176 &0.0222 &0.0138 &0.0131 &0.0168&\\
\textbf{PRNet} & \underline{4.924} & \underline{3.836} & \underline{4.519} &2.573 &\underline{1.901} &\underline{1.925} &\underline{0.0162} &\underline{0.0163} &\underline{0.0139} &0.0112 &0.0112 &\underline{0.0099}&\\
\cmidrule(r){1-13}
\textbf{VRNet}
&\makecell[c]{\textbf{1.109}\\\scriptsize{\color{SpringGreen}+77.48\%}}
&\makecell[c]{\textbf{1.842}\\\scriptsize{\color{SpringGreen}+51.98\%}}
&\makecell[c]{\textbf{2.411}\\\scriptsize{\color{SpringGreen}+46.65\%}}
&\makecell[c]{\textbf{0.513}\\\scriptsize{\color{SpringGreen}+75.29\%}}
&\makecell[c]{\textbf{0.702}\\\scriptsize{\color{SpringGreen}+63.07\%}}
&\makecell[c]{\textbf{1.020}\\\scriptsize{\color{SpringGreen}+47.01\%}}
&\makecell[c]{\textbf{0.0037}\\\scriptsize{\color{SpringGreen}+77.16\%}}
&\makecell[c]{\textbf{0.0045}\\\scriptsize{\color{SpringGreen}+72.39\%}}
&\makecell[c]{\textbf{0.0072}\\\scriptsize{\color{SpringGreen}+48.20\%}}
&\makecell[c]{\textbf{0.0023}\\\scriptsize{\color{SpringGreen}+63.49\%}}
&\makecell[c]{\textbf{0.0026}\\\scriptsize{\color{SpringGreen}+60.00\%}}
&\makecell[c]{\textbf{0.0050}\\\scriptsize{\color{SpringGreen}+49.49\%}}& \\
\bottomrule
\end{tabular}}
\label{Tab:partial}
\end{center}
\vspace{-0.4cm}
\end{table*}
\subsection{Evaluation on synthetic dataset: ModelNet40} \label{sec:exp:modelnet40}
\noindent\textbf{Dataset and processing.}
We first evaluate our method on ModelNet40 \cite{wu_modelnet40_cvpr_2015}, a synthetic dataset consisting of 3D CAD models from 40 categories. ModelNet40 is a widely used benchmark in point cloud registration evaluation \cite{aoki_ptlk_cvpr_2019,wang_dcp_iccv_2019,wang_prnet_nips_2019,yew_rpmnet_cvpr_2020,huang_featuremetric_cvpr_2020}. Following \cite{wang_dcp_iccv_2019,wang_prnet_nips_2019,yew_rpmnet_cvpr_2020}, we randomly sample 1024 points as the \textit{source} point cloud $\mathbf X$. Then, $\mathbf{X}$ is rigidly transformed to generate the \textit{target} point cloud $\mathbf{Y}$. Because the dataset is synthetic and the \textit{target} is generated from the \textit{source}, the correspondences are obtained naturally. The applied rotation and translation are uniformly sampled in $[0^ \circ, 45^ \circ]$ and $[-0.5, 0.5]$ respectively along each axis. Both the \textit{source} and \textit{target} point clouds are shuffled. These settings are widely adopted in the community for a fair comparison.
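For concreteness, the pair-generation protocol above can be sketched as follows. This is a minimal NumPy illustration; the function names and the Z-Y-X Euler convention are our assumptions, not part of the compared implementations.

```python
import numpy as np

def euler_to_matrix(angles_deg):
    """Rotation matrix from per-axis Euler angles in degrees (Z-Y-X order assumed)."""
    ax, ay, az = np.deg2rad(angles_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def make_pair(model_points, n=1024, rng=None):
    """Sample a source cloud, then rigidly transform and shuffle it into the target."""
    rng = np.random.default_rng(rng)
    source = model_points[rng.choice(len(model_points), size=n, replace=False)]
    R = euler_to_matrix(rng.uniform(0.0, 45.0, size=3))   # per-axis angle in [0, 45] deg
    t = rng.uniform(-0.5, 0.5, size=3)                    # per-axis shift in [-0.5, 0.5]
    target = source @ R.T + t
    rng.shuffle(target)                                   # destroy the point ordering
    return source, target, R, t
```

Because the \textit{target} is a rigid transform of the \textit{source}, the ground-truth correspondence is the sampling permutation itself.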
We test the proposed VRNet on ModelNet40 with/without outliers. Here, four data processing settings are provided as follows. Note that each of the consistent input point clouds consists of 1024 points as mentioned above.
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item \underline{Co}nsistent point clouds (\textbf{\textit{CO}}). The \textit{source} and \textit{target} point clouds are exactly the same except for the pose, \emph{i.e.,} each point has a corresponding point in the paired point cloud.
\item \underline{P}artial-\underline{v}iew (\textbf{\textit{PV}}). Following PRNet \cite{wang_prnet_nips_2019}, given a random point in 3D space, we select its nearest 768 points from the original consistent input point clouds. However, although this strategy is widely adopted, it results in the same distribution of the overlapping parts, so only limited partiality is obtained, with a low outlier ratio.
\item \underline{R}andom-\underline{s}ample (\textbf{\textit{RS}}). We randomly select 768 points from each consistent point cloud, leading to a random distribution of outliers and a high outlier ratio.
\item \underline{P}artial-\underline{v}iew \& \underline{R}andom-\underline{s}ample (\textbf{\textit{PV+RS}}). A more challenging setting is obtained by combining the above partial-view and random-sample strategies. Specifically, we first select 896 points randomly from each consistent point cloud, and then the nearest 768 points are selected from these 896 sampled points.
\end{itemize}
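The three outlier-producing settings above can be sketched as follows. The helper names are ours; drawing the PV viewpoint on the unit sphere is one plausible choice, since the papers only specify "a random point in 3D space".

```python
import numpy as np

def partial_view(points, keep=768, rng=None):
    """PV: keep the `keep` nearest neighbors of a random viewpoint in 3D space."""
    rng = np.random.default_rng(rng)
    view = rng.standard_normal(3)
    view /= np.linalg.norm(view)                 # a random direction on the unit sphere
    order = np.argsort(np.linalg.norm(points - view, axis=1))
    return points[order[:keep]]

def random_sample(points, keep=768, rng=None):
    """RS: keep a uniformly random subset, so outliers are randomly distributed."""
    rng = np.random.default_rng(rng)
    return points[rng.choice(len(points), size=keep, replace=False)]

def pv_rs(points, mid=896, keep=768, rng=None):
    """PV+RS: random-sample down to 896 points, then partial-view down to 768."""
    rng = np.random.default_rng(rng)
    return partial_view(random_sample(points, mid, rng), keep, rng)
```

Applied independently to the \textit{source} and \textit{target}, each helper turns the points without a counterpart in the other cloud into outliers.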
\noindent\textbf{Dataset split setting.}
Following \cite{wang_dcp_iccv_2019,wang_prnet_nips_2019}, three dataset split settings are applied here for a comprehensive evaluation.
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item {\underline{U}nseen \underline{P}oint \underline{C}louds} (\textbf{\textit{UPC}}). ModelNet40 is divided into training and testing sets with the official split.
\item {\underline{U}nseen \underline{C}ategories} (\textbf{\textit{UC}}). To test the generalization ability to the unseen point cloud categories, we divide ModelNet40 according to the object category. The first 20 categories are selected for training and the rest are used for testing. This setting is consistent with \cite{wang_dcp_iccv_2019, wang_prnet_nips_2019, yew_rpmnet_cvpr_2020}.
\item {\underline{N}oisy \underline{D}ata} (\textbf{\textit{ND}}). To test the robustness, random Gaussian noise (\emph{i.e.,} $\mathcal{N}(0,0.01)$; sampled noise outside the range $[-0.05,0.05]$ is clipped) is added to each point. The dataset split is the same as \textit{UPC}.
\end{itemize}
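A minimal sketch of the \textit{ND} jitter, assuming $\mathcal{N}(0,0.01)$ denotes a standard deviation of $0.01$ (the usual convention in this benchmark):

```python
import numpy as np

def add_noise(points, std=0.01, clip=0.05, rng=None):
    """ND: per-coordinate Gaussian jitter, clipped to [-clip, clip]."""
    rng = np.random.default_rng(rng)
    noise = np.clip(rng.normal(0.0, std, size=points.shape), -clip, clip)
    return points + noise
```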
\noindent\textbf{Evaluation metrics.}
Following \cite{wang_dcp_iccv_2019, wang_prnet_nips_2019}, the root mean square error (RMSE) and mean absolute error (MAE) between the ground truth and the prediction, in Euler angles and in the translation vector, are used as our evaluation metrics, denoted {RMSE(R)}, {MAE(R)}, {RMSE(t)} and {MAE(t)} respectively.
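These four metrics can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def registration_metrics(euler_pred, euler_gt, t_pred, t_gt):
    """RMSE/MAE over the three Euler angles (deg) and the translation components."""
    r_err = np.asarray(euler_pred, dtype=float) - np.asarray(euler_gt, dtype=float)
    t_err = np.asarray(t_pred, dtype=float) - np.asarray(t_gt, dtype=float)
    return {
        "RMSE(R)": float(np.sqrt(np.mean(r_err ** 2))),
        "MAE(R)":  float(np.mean(np.abs(r_err))),
        "RMSE(t)": float(np.sqrt(np.mean(t_err ** 2))),
        "MAE(t)":  float(np.mean(np.abs(t_err))),
    }
```

In the tables, these errors are averaged over all test pairs.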
\noindent\textbf{Performance evaluation.} Herein, we present the rigid motion estimation results in the mentioned settings for a comprehensive comparison. Meanwhile, we also provide the time-efficiency to validate the proposed VRNet.
$\bullet$ \textbf{Consistent point clouds:}
Following the protocol of DCP \cite{wang_dcp_iccv_2019}, we take consistent point clouds as our input. The results are reported in \tabref{Tab:consistent_registration}.
In the \textbf{\textit{UPC}} setting, among all baselines, PRNet \cite{wang_prnet_nips_2019} achieves the best performance in MAE(R) and DCP-v2 obtains the best results in RMSE(R), RMSE(t) and MAE(t). However, our proposed VRNet is better than all these baselines.
In the \textbf{\textit{UC}} setting, VRNet maintains the best performance, while PRNet achieves the second-best results for rotation estimation, and DCP-v2 and FGR obtain the second-best results in RMSE(t) and MAE(t) respectively.
In the \textbf{\textit{ND}} setting, our proposed VRNet improves the performance to a large extent in all evaluation metrics compared with all baselines, which further validates the robustness of our method.
$\bullet$ \textbf{Partial-view:} Following PRNet \cite{wang_prnet_nips_2019}, we test the performance of the proposed method using partial-view input point clouds. We report the registration results in \tabref{Tab:partial}, where the proposed method achieves the best results in all evaluation metrics, including RMSE(R), MAE(R), RMSE(t) and MAE(t), in all dataset split settings, including \textbf{\textit{UPC}}, \textbf{\textit{UC}} and \textbf{\textit{ND}}. Besides, PRNet, which is designed specifically for the partial-view setting, achieves the second-best performance in most evaluations.
$\bullet$ \textbf{Random-sample:} Due to the random-sample strategy, the outliers are distributed randomly. The registration performance is reported in \tabref{Tab:partial}. Our VRNet achieves the best performance in \textbf{\textit{UPC}} and \textbf{\textit{ND}}. In \textbf{\textit{UC}}, VRNet obtains the best translation estimation and the second-best rotation estimation results.
$\bullet$ \textbf{Partial-view \& Random-sample:} In this part, we combine the above two data processing strategies to evaluate our VRNet, and the results are presented in \tabref{Tab:partial}. Obviously, our VRNet achieves the best performance in all evaluation metrics including RMSE(R), MAE(R), RMSE(t) and MAE(t) in all dataset split settings including \textbf{\textit{UPC}}, \textbf{\textit{UC}}, and \textbf{\textit{ND}}.
$\bullet$ \textbf{Time-efficiency:}
We count the average inference time of all learning-based methods in the \textbf{partial-view} setting using a Xeon E5-2640 [email protected] CPU and a GTX 1080 Ti GPU, where each input point cloud contains 768 points. \tabref{Tab:times} shows that ours achieves competitive time-efficiency, meeting real-time requirements, since the complicated processes of inlier recognition and reliable correspondence selection are avoided.
\begin{table}[!h]
\renewcommand\arraystretch{1.0}
\caption{Inference time comparison on ModelNet40.}
\vspace{-0.2cm}
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{lcccccc}
\toprule
\textbf{Methods} & \textbf{PTLK} &\textbf{DCP} & \textbf{PRNet} & \textbf{RPMNet} & \textbf{VRNet} \\
\midrule
\textbf{Time[ms]} &47.2 &\textbf{17.66} & 37.48 & 54.75 & \underline{19.92}\\
\bottomrule
\end{tabular}}
\label{Tab:times}
\vspace{-0.5cm}
\end{table}
\subsection{Evaluation on real indoor dataset: SUN3D, 3DMatch} \label{sec:exp:indoor}
\noindent\textbf{Dataset.} Besides the synthetic dataset, we also conduct evaluations on real indoor scene datasets, SUN3D \cite{xiao_sun3d_iccv_2013} and 3DMatch \cite{zeng_3dmatch_cvpr_17}. SUN3D is composed of 13 randomly selected scenes; the version processed by 3DRegNet \cite{pais_3dregnet_cvpr_2020} is used here, which is a sparse dataset with around 3000 points in each point cloud. 3DMatch is a hybrid indoor dataset, and the input has been voxelized with a voxel size of 5cm following \cite{choy_dgr_cvpr_2020}. This is a dense, large-scale dataset in which each point cloud contains around 50000 points. To train the network, we construct the ground truth correspondences manually. Specifically, we first transform the \textit{source} point cloud with the ground truth transformation matrix. Then, nearest-neighbor search is applied to find the corresponding points. Notably, if the distance between the transformed point and the searched corresponding point is greater than a predefined threshold (\emph{e.g.,} 3cm for SUN3D and 5cm for 3DMatch in our implementation), the match is discarded.
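The ground-truth correspondence construction can be sketched as follows. This is a minimal illustration with a brute-force nearest-neighbor search; the function name is ours, and real clouds of this size would use a KD-tree.

```python
import numpy as np

def build_gt_correspondences(source, target, R_gt, t_gt, threshold=0.05):
    """Warp the source with the ground-truth pose, then keep nearest-neighbor
    matches closer than `threshold` (3 cm for SUN3D, 5 cm for 3DMatch)."""
    warped = source @ R_gt.T + t_gt
    # Brute-force nearest neighbor for clarity; a KD-tree is used for real clouds.
    d2 = np.sum((warped[:, None, :] - target[None, :, :]) ** 2, axis=-1)
    nn = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(source)), nn])
    keep = dist < threshold
    return np.stack([np.nonzero(keep)[0], nn[keep]], axis=1)   # (M, 2) index pairs
```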
\noindent\textbf{Evaluation metrics.} For a fair comparison, we follow the evaluation metrics of 3DRegNet \cite{pais_3dregnet_cvpr_2020} and DGR \cite{choy_dgr_cvpr_2020} respectively.
For SUN3D, we report the mean and median \underline{r}otation \underline{e}rror (RE), the mean and median \underline{t}ranslation \underline{e}rror (TE), and the time-efficiency (Time), where RE and TE are calculated by
\begin{equation}
\left\{
\begin{aligned}
\text{RE}&=\arccos\left(\frac{\textrm{trace}(\mathbf{R}^{-1}\mathbf{R}^\text{gt})-1}{2}\right)\frac{180}{\pi} \\
\text{TE}&=\|\mathbf{t}-\mathbf{t}^\text{gt}\|_2
\end{aligned}
\right. ,
\end{equation}
where the superscript ``$\text{gt}$'' indicates the ground truth. For 3DMatch, besides the mean RE, mean TE and time-efficiency, we also report the recall following \cite{choy_dgr_cvpr_2020}, which is the ratio of successful pairwise registrations. Here, a pairwise registration is deemed successful if the rotation error and translation error are smaller than predefined thresholds (\emph{i.e.,} 15 deg and 30cm). It is worth mentioning that the mean RE and mean TE reported in \tabref{Tab:3dmatch} are computed only on the successfully registered pairs, since the relative poses returned from failed registrations can be drastically different from the ground truth, making the error metrics unreliable.
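The two errors above can be sketched as follows. Note that $\mathbf{R}^{-1}=\mathbf{R}^{\top}$ for a rotation matrix, and TE is taken here as the plain L2 norm, consistent with the metre units reported in the tables; the clipping of the cosine guards against floating-point rounding.

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic rotation error in degrees; R^{-1} = R^T for a rotation matrix."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))  # clip vs. rounding

def translation_error(t_pred, t_gt):
    """Euclidean (L2) translation error, matching the metre units of the tables."""
    return float(np.linalg.norm(np.asarray(t_pred, float) - np.asarray(t_gt, float)))
```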
\begin{table}[h]
\renewcommand\arraystretch{1.0}
\caption{Comparison of registration results on SUN3D following the metrics of 3DRegNet \cite{pais_3dregnet_cvpr_2020}.}
\vspace{-0.4cm}
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{lccccc}
\toprule
\multirow{2}*{\textbf{Methods}}&\multicolumn{1}{c}{\textbf{RE}[deg]}
&\multicolumn{1}{c}{\textbf{RE}[deg]}
&\multicolumn{1}{c}{\textbf{TE}[m]}
&\multicolumn{1}{c}{\textbf{TE}[m]}
&\multirow{2}*{\textbf{Time}[ms]}\\
\cmidrule(r){2-3} \cmidrule(r){4-5}
~&\textbf{Mean}&\textbf{Median}&\textbf{Mean}&\textbf{Median}\\
\midrule
\textbf{FGR} & 2.57 & 1.92 & 0.121 & \underline{0.067} &44.34\\
\textbf{ICP} & 3.18 & \underline{1.50} & 0.146 & 0.079 &\underline{32.17}\\
\textbf{RANSAC} & 3.00 &1.73 & 0.148 & 0.074 &170.2\\
\midrule
\textbf{3DRegNet} &\underline{1.84} &1.69 &\underline{0.087} &0.078 &166.7\\
\midrule
\textbf{VRNet}
&\makecell[c]{\textbf{1.49}\\\scriptsize{\color{SpringGreen}+19.02\%}}
&\makecell[c]{\textbf{0.38}\\\scriptsize{\color{SpringGreen}+74.67\%}}
&\makecell[c]{\textbf{0.075}\\\scriptsize{\color{SpringGreen}+13.79\%}}
&\makecell[c]{\textbf{0.058}\\\scriptsize{\color{SpringGreen}+13.43\%}} &\textbf{25.6}\\
\bottomrule
\end{tabular}}
\label{Tab:sun3d_regnet}
\vspace{-0.2cm}
\end{center}
\end{table}
\noindent\textbf{Performance evaluation.}
In \tabref{Tab:sun3d_regnet}, we provide the performance comparison on SUN3D. Traditional methods, including ICP \cite{besl_icp_pami_1992}, FGR \cite{zhou_fgr_eccv_2016} and RANSAC, all present acceptable registration results. Moreover, ICP and FGR even achieve the second-best performance among all methods on the median RE and median TE metrics, respectively. The learning-based method 3DRegNet \cite{pais_3dregnet_cvpr_2020} presents better mean RE and mean TE than these traditional methods; however, it is more complicated. We also evaluate DCP-v2 and PRNet; unfortunately, they fail in this setting with divergent results. Our proposed VRNet outperforms all these baselines in both transformation estimation and time-efficiency, which validates the superiority of our method.
\begin{table}[h]
\renewcommand\arraystretch{1.0}
\caption{Evaluation on 3DMatch dataset.}
\vspace{-0.4cm}
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{lcccc}
\toprule
\textbf{Methods} & \textbf{TE}[cm] & \textbf{RE}[deg] & \textbf{Recall}(\%) & \textbf{Time}[s]\\
\midrule
\textbf{ICP} & 18.1 & 8.25 & 6.04 & 0.25 \\
\textbf{FGR} & 10.6 & 4.08 & 42.7 & 0.31 \\
\textbf{Go-ICP} & 14.7 & 5.38 & 22.9 & 771.0 \\
\textbf{Super4PCS} & 14.1 & 5.25 & 21.6 & 4.55 \\
\textbf{RANSAC} & 8.85 & \underline{3.00} & 66.1 & 1.39 \\
\midrule
\textbf{DCP-v2} & 21.4 & 8.42 & 3.22 & \textbf{0.07} \\
\textbf{PTLK} & 21.3 & 8.04 & 1.61 & 0.12 \\
\textbf{DGR} & \textbf{7.34} & \textbf{2.43} & \textbf{91.3} & 1.21 \\
\midrule
\textbf{VRNet} & \underline{8.64} & 3.12 & \underline{72.9} & \underline{0.11}\\
\bottomrule
\end{tabular}}
\label{Tab:3dmatch}
\end{center}
\vspace{-0.2cm}
\end{table}
\tabref{Tab:3dmatch} presents the evaluation results on 3DMatch. We find that ICP \cite{besl_icp_pami_1992} achieves the weakest performance because the dataset is challenging: large rigid motions exist and no reliable initialization is provided. Meanwhile, a sampling-based algorithm, Super4PCS \cite{mellado_super4pcs_cgf_14}, and the ICP variant with branch-and-bound search, Go-ICP \cite{yang_goicp_pami_2015}, perform similarly. The feature-based methods, \emph{i.e.,} FGR and RANSAC, perform much better than the methods built directly on 3D points.
As for learning-based methods, DGR \cite{choy_dgr_cvpr_2020}, which is designed specifically for dense scene datasets, achieves the best performance in all registration metrics. However, because DGR is devoted to selecting reliable correspondences, it is complicated and time-consuming. PointNetLK is a correspondence-free method, which fails in this setting because of the numerous outliers. DCP-v2 also fails here despite achieving the best time-efficiency; we suspect that the feature extractor of DCP-v2 is not suitable for the 3DMatch dataset. Inspired by this, we instead use the FCGF \cite{choy_fcgf_iccv_2019} feature extractor in our VRNet, which is designed specifically for such large-scale scene point clouds. Furthermore, although our method achieves only the second-best rigid transformation estimation, weaker than DGR, VRNet obtains better time-efficiency than all methods except the failed DCP. This ability to balance transformation estimation and running time is crucial in practical applications.
\subsection{Evaluation on real outdoor data: KITTI} \label{sec:exp:kitti}
The typical outdoor scene dataset KITTI \cite{geiger_kitti_rr_13}, which consists of LIDAR scans, is used here to evaluate our VRNet. Following \cite{choy_dgr_cvpr_2020}, we build point cloud pairs that are at least 10m apart, and the ground-truth transformation is generated using GPS followed by ICP to fix the inherent errors. The strategy to construct ground truth correspondences is the same as for the SUN3D and 3DMatch datasets, and the threshold to determine acceptable correspondences is set to 5cm.
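The pair construction can be sketched as a greedy scan over per-frame GPS positions. This is a hypothetical helper for illustration only; the actual protocol of \cite{choy_dgr_cvpr_2020} may differ in detail.

```python
import numpy as np

def select_pairs(positions, min_gap=10.0):
    """Greedily pick frame pairs whose GPS positions are at least `min_gap` m apart."""
    pairs, anchor = [], 0
    for j in range(1, len(positions)):
        if np.linalg.norm(positions[j] - positions[anchor]) >= min_gap:
            pairs.append((anchor, j))
            anchor = j                # restart the search from the new frame
    return pairs
```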
Besides, we use the voxel size of 30cm to downsample the input point clouds. It is worth mentioning that we use the FCGF \cite{choy_fcgf_iccv_2019} feature extractor in our VRNet as \secref{sec:exp:indoor}, which is designed for such large-scale scene dataset.
\tabref{Tab:kitti} reports the registration performance. Among learning-based approaches, the proposed VRNet obtains better translation estimation, while DGR \cite{choy_dgr_cvpr_2020} achieves better rotation estimation. Meanwhile, the best time-efficiency is achieved by our VRNet, which is important in practical applications. DCP \cite{wang_dcp_iccv_2019} and PRNet \cite{wang_prnet_nips_2019} also work in this setting. However, since the ``DGCNN + transformer'' feature extractor in DCP and PRNet runs out of GPU memory, we have to further downsample the input point clouds with a voxel size of 70cm for these two baselines. Thus, we do not report their time-efficiency here, for a fair comparison.
\begin{table}[h]
\renewcommand\arraystretch{1.0}
\caption{Evaluation on KITTI dataset.}
\vspace{-0.4cm}
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{lccccc}
\toprule
\multirow{2}*{\textbf{Methods}}&\multicolumn{2}{c}{\textbf{Rotation}[deg]}
&\multicolumn{2}{c}{\textbf{Translation}[cm]}
&\multirow{2}*{\textbf{Time}[s]}
\\
\cmidrule(r){2-3} \cmidrule(r){4-5}
~&\textbf{RMSE}&\textbf{MAE}&\textbf{RMSE}&\textbf{MAE}\\
\midrule
\textbf{ICP} & \underline{7.54} & \textbf{2.10} & 4.84 & 2.94 & \underline{1.44} \\
\textbf{FGR} & 60.45 & 27.81 &42.1 &12.69 & 1.50 \\
\midrule
\textbf{DCP} & 11.09 & 10.42 &18.04 &16.29 & - \\%$\ast$\\
\textbf{PRNet} & 10.93 & 7.86 &13.28 &10.52 & - \\%$\ast$\\
\textbf{DGR} & \textbf{5.59} & \underline{2.32} &\underline{4.72} &\underline{2.31} & 2.42 \\
\midrule
\textbf{VRNet} & 7.56 & 3.42 &\textbf{1.72} &\textbf{1.18} & \textbf{0.24}\\
\bottomrule
\end{tabular}}
\label{Tab:kitti}
\end{center}
\vspace{-0.6cm}
\end{table}
\subsection{Ablation studies} \label{4-5-ablation}
\noindent\textbf{The consistency comparison.}
The essence of our method is to construct correspondences for all \textit{source} points without distinguishing outliers from inliers. To this end, we drive the learned corresponding points to maintain rigidity and geometry-structure consistency with the \textit{source} point cloud. Here, we measure this consistency using the Chamfer distance, which is calculated as follows:
\begin{equation}
\textbf{CD}(\mathbf{X},\mathbf{Y}) =
\frac{1}{N_\mathbf{X}} \sum \limits_{\mathbf{x} \in \mathbf{X}} \min \limits_{\mathbf{y}\in \mathbf{Y}} \|\mathbf{x}-\mathbf{y}\|_2^2
+ \frac{1}{N_\mathbf{Y}} \sum \limits_{\mathbf{y}\in \mathbf{Y}} \min \limits_{\mathbf{x}\in \mathbf{X}} \|\mathbf{x}-\mathbf{y}\|_2^2.
\end{equation}
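A direct NumPy sketch of this formula (the dense pairwise-distance matrix is fine at these cloud sizes):

```python
import numpy as np

def chamfer_distance(X, Y):
    """Symmetric Chamfer distance: mean squared distance to the nearest neighbor,
    accumulated in both directions."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # (N_X, N_Y) pairwise
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```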
We provide the comparison in \figref{Fig:CD}, where ``Source \& VCPs'' represents the Chamfer distance between the \textit{source} point cloud and the learned VCPs, ``Source \& RCPs'' represents the Chamfer distance between the \textit{source} and the learned RCPs, and ``Source \& Target'' represents the Chamfer distance between the \textit{source} and \textit{target}.
The \textit{source} has been transformed with the ground truth rigid transformation.
As shown in \figref{Fig:CD}, because of outliers, the \textit{source} and the \textit{target} are not entirely the same; thus, even when the ground truth transformation is applied, the corresponding Chamfer distance is still large. Meanwhile, because the distribution limitation has been broken, the learned RCPs are more consistent with the \textit{source} point cloud, and a smaller Chamfer distance is obtained.
\begin{figure}[h]
\centering\includegraphics[width=0.9\linewidth]{Figure/Fig5.pdf}
\caption{We count the Chamfer distance in different settings including \textit{partial-view}, \textit{random-sample}, \textit{partial-view}\&\textit{random-sample}, \textit{consistent} point clouds in \textit{noise} ($CO+ND$), \textit{partial-view} in \textit{noise} ($PV+ND$), \textit{random-sample} in \textit{noise} ($RS+ND$), \textit{partial-view}\&\textit{random-sample} in \textit{noise} ($PV+RS+ND$).
The corresponding points learned by VRNet are more consistent with the \textit{source}, as the Chamfer distances between the \textit{source} and the learned RCPs are always the smallest.}
\label{Fig:CD}
\end{figure}
\noindent\textbf{Registration improvement by the correction-walk module.}
To break the distribution limitation, we propose a correction-walk module to learn an offset to amend the corresponding points. Herein, we give a quantitative effectiveness analysis of this module by comparing the registration performance. As shown in \figref{Fig:improve}, via our correction-walk module, the registration performance shows an obvious improvement in all metrics and all settings.
\begin{figure*}[h]
\centerline{\includegraphics[width=1.0\linewidth]{Figure/Fig6.png}}
\caption{We show the improvement caused by the correction-walk module. The orange/blue indicates the results without/with the correction-walk module. In all settings and all metrics, the registration results are improved.}
\label{Fig:improve}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\linewidth]{Figure/Fig7.pdf}
\caption{The registration performance decreases as the outliers ratio increases. Traditional methods, like RANSAC, FGR, present excellent performance. Our VRNet achieves state-of-the-art performance in rotation estimation and the best translation estimation in learning-based methods.}
\label{Fig:degeneration}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\linewidth]{Figure/Fig8.pdf}
\caption{The performance comparison of RANSAC strategy and our proposed correction-walk module with the different outliers.
}
\label{Fig:ransac}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Figure/Fig9.pdf}
\caption{The illustration of registration performance with respect to different weights of the amendment offset supervision, \emph{i.e.,} $\lambda_4$. The x-axis is labeled with the weight coefficient $\lambda_4$. We notice that the proposed method achieves the best performance when $\lambda_4 = 100$, which is adopted in our applications.}
\label{Fig:weight}
\vspace{-0.4cm}
\end{figure}
\noindent\textbf{Coefficients of loss functions.}
In this paper, we adopt a hybrid loss function for point cloud registration. As mentioned in \secref{3-2-loss}, the corresponding point supervision $\mathcal{L}_0$ is used to train the feature extractor, and the local motion consensus $\mathcal{L}_1$, the geometry structure supervisions $\mathcal{L}_2$, $\mathcal{L}_3$ and the amendment offset supervision $\mathcal{L}_4$ are used to train the correction-walk module. Here, $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3$ are unsupervised, and we take $\lambda_1 = \lambda_2 = \lambda_3 = 1$. $\mathcal{L}_4$ is supervised, and we test $\lambda_4$ in the partial-view setting.
As shown in \figref{Fig:weight}, our proposed VRNet achieves the best performance when $\lambda_4 = 100$, which is taken in our application. Besides, we compare the performance of tests conducted with different loss function combinations in \tabref{Tab:weight}. We find that the unsupervised loss functions not only help keep the shape and geometry structure of the learned corresponding points, but also improve the final registration performance.
\begin{table}[h]
\renewcommand\arraystretch{1.0}
\caption{The comparison of different loss function combinations applied in correction-walk module training.}
\vspace{-3mm}
\begin{center}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{ccccc}
\toprule
\textbf{Methods} & \textbf{RMSE(R)} & \textbf{MAE(R)} & \makecell[c]{\textbf{RMSE(t)}\\$(\times 10^{-2})$} & \makecell[c]{\textbf{MAE(t)}\\$(\times 10^{-2})$} \\
\midrule
$\mathbf{\mathcal{L}_4}$ &1.254&0.639&0.838&0.540\\
$\mathbf{\mathcal{L}_4}$+$\mathbf{\mathcal{L}_1}$&1.062&0.593&0.672&0.403\\
$\mathbf{\mathcal{L}_4}$+$\mathbf{\mathcal{L}_2}$&1.054&0.539&0.638&0.410\\
$\mathbf{\mathcal{L}_4}$+$\mathbf{\mathcal{L}_3}$&1.119&0.608&0.715&0.462\\
\textbf{All loss} &\textbf{0.982}&\textbf{0.496}&\textbf{0.611}&\textbf{0.389}\\
\bottomrule
\end{tabular}}
\label{Tab:weight}
\end{center}
\end{table}
\noindent\textbf{VRNet with iteration.}
Currently, our method achieves high-quality registration in a single pass. Here, we evaluate VRNet with an iteration strategy like ICP. Specifically, in each iteration, we refine the \textit{source} point cloud with the transformation matrix predicted in the previous iteration, and solve for a new transformation matrix between the updated \textit{source} and the \textit{target} point clouds. Finally, all predicted transformation matrices are composed to obtain the final estimated transformation matrix. The tests are conducted on ModelNet40 under \textbf{\textit{UPC}} using the \textbf{\textit{partial-view}} processing. The results are provided in \tabref{Tab:iteration}.
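The iteration-and-composition scheme can be sketched as follows; `solve_step` stands in for a single VRNet forward pass and is an assumption of this illustration.

```python
import numpy as np

def iterative_register(source, target, solve_step, n_iter=4):
    """Run a one-pass solver repeatedly and compose the per-iteration rigid motions.
    `solve_step(src, tgt) -> (R, t)` stands in for a single VRNet forward pass."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source
    for _ in range(n_iter):
        R, t = solve_step(src, target)
        src = src @ R.T + t            # refine the source with the new estimate
        R_total = R @ R_total          # composition so that x -> R_total x + t_total
        t_total = R @ t_total + t
    return R_total, t_total
```

The composition order matters: the later estimate is applied on top of the earlier one, so the final map is $x \mapsto R_{\text{total}}x + t_{\text{total}}$.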
\begin{table}[h]
\renewcommand\arraystretch{1.0}
\caption{The performance comparison when inserting the VRNet to the iteration pipeline.}
\vspace{-3mm}
\begin{center}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{cccccc}
\toprule
\textbf{Iteration} & \textbf{RMSE(R)} & \textbf{MAE(R)} & \makecell[c]{\textbf{RMSE(t)}\\$(\times 10^{-2})$} & \makecell[c]{\textbf{MAE(t)}\\$(\times 10^{-2})$}& \textbf{Time}[ms] \\
\midrule
1& 0.982 &0.496 &0.611 &0.389 & 19.92\\
2& 0.931 &0.447 &0.563 &0.327 & 41.73\\
3& 0.904 &0.412 &0.528 &0.301 & 65.34\\
4& 0.886 &\textbf{0.398} &0.501 &0.292 & 96.42\\
5& \textbf{0.884} &0.401 &0.497 &0.292 & 127.51\\
6& 0.891 &0.399 &\textbf{0.495} &\textbf{0.291} & 162.46\\
\bottomrule
\end{tabular}}
\label{Tab:iteration}
\end{center}
\vspace{-0.5cm}
\end{table}
From \tabref{Tab:iteration}, we find that up to the $4$-th iteration, the registration performance improves significantly as the number of iterations increases. However, as the number of iterations increases further, the performance tends to saturate, while the time-efficiency degrades due to the iteration strategy.
\noindent\textbf{Robustness to outliers.}
To verify the robustness to outliers, we evaluate the registration performance of our method and the baselines under different outlier ratios, as shown in \figref{Fig:degeneration}. The tests are conducted on ModelNet40 under \textbf{\textit{UPC}} using the \textbf{\textit{partial-view}} processing. Different sample ratios yield different outlier ratios: the fewer points are sampled, the higher the outlier ratio. Specifically, there are 1024 points in each original consistent input point cloud, whether it is the \textit{source} or the \textit{target}. We reconstruct the \textit{source} and the \textit{target} by sampling their nearest points. The correspondence between the size of the sampled point cloud and the outlier ratio is 960 (6.67\%), 896 (15.33\%), 832 (23.08\%), 768 (33.33\%), 704 (46.39\%).
From \figref{Fig:degeneration}, VRNet clearly achieves stable and the most accurate rotation estimation. For translation estimation, FGR achieves better performance; however, among learning-based methods, ours remains the best\footnote{In the ablation studies of ``robustness to outliers'', ``correct matches ratio vs. outliers ratio'', ``RANSAC vs. correction-walk'' and ``visualization'', for a clear comparison and analysis of the proposed VRNet, we adjust the \textbf{\textit{partial-view}} setting proposed in PRNet \cite{wang_prnet_nips_2019}. Specifically, we place the viewpoints at symmetrical positions, rather than the same position, to sample the \textit{source} and \textit{target}. This results in a larger outlier ratio and more obvious partiality. These tests are conducted on ModelNet40 under \textbf{\textit{UPC}}.}.
\noindent\textbf{Correct matches ratio vs. outliers ratio.}
We evaluate the correct matches ratio as the outliers ratio increases in \figref{Fig:matches}.
We compare the ground truth, \emph{i.e.,} the correct matches ratio between the \textit{source} point cloud and the \textit{target} point cloud, with the correct matches ratio between the \textit{source} and the learned VCPs, and between the \textit{source} and the learned RCPs. The threshold to confirm a successful match is set to 0.15, \emph{i.e.,} if the distance between the predicted corresponding point and the ground truth corresponding point is less than this threshold, the match is counted as successful.
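This criterion can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def correct_match_ratio(pred_corr, gt_corr, threshold=0.15):
    """Fraction of predicted corresponding points within `threshold` of the GT ones."""
    d = np.linalg.norm(np.asarray(pred_corr, float) - np.asarray(gt_corr, float), axis=1)
    return float(np.mean(d < threshold))
```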
From \figref{Fig:matches}, the correct matches ratio between the \textit{source} and the VCPs approximates the ground truth, which validates that our method learned an accurate corresponding-point distribution for inliers thanks to the proposed sufficient supervision. Moreover, the correct matches ratio between the \textit{source} and the RCPs is even better than the ground truth, which verifies that the distribution of the learned RCPs is more consistent with the \textit{source} than the original \textit{target} point cloud, thanks to the correction-walk module.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]{Figure/Fig10.pdf}
\vspace{-0.2cm}
\caption{The correct matches ratio comparison with the different outliers ratios. }
\vspace{-0.5cm}
\label{Fig:matches}
\end{center}
\end{figure}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.9\textwidth]{Figure/Fig11.pdf}
\vspace{-0.2cm}
\caption{Visualization of the \textit{source} point cloud (purple), the \textit{target} point cloud (green), the learned VCPs (gray), the RCPs (blue), and the learned offset (red lines). All point clouds are calibrated to the same pose for clear comparison. The VCPs approximate the \textit{source} as closely as possible but are limited to the \textit{target} distribution. The correction-walk module then amends the VCPs into the RCPs, which present a distribution more consistent with the \textit{source} than both the VCPs and the original \textit{target}.}
\label{Fig:vis}
\end{figure*}
\begin{figure*}[!ht]
\centering
\vspace{-0.4cm}
\includegraphics[width=1.0\textwidth]{Figure/Fig12.pdf}
\vspace{-0.4cm}
\caption{Our VRNet presents accurate registration results on the real indoor 3DMatch dataset, which includes 8 subsets. \textbf{Left}: the input \textit{source} and \textit{target} point clouds; \textbf{Right}: the aligned point clouds.}
\label{Fig:vis_3dmatch}
\end{figure*}
\begin{figure*}[!ht]
\centering
\vspace{-0.4cm}
\includegraphics[width=0.9\textwidth]{Figure/Fig13.pdf}
\vspace{-0.2cm}
\caption{Our VRNet achieves accurate registration performance on the real outdoor KITTI dataset. The left subfigure shows the input pair and the right subfigure shows the registration result of our VRNet. The input \textit{source} and \textit{target} point clouds have different poses, while the point clouds in the right subfigure are aligned. For example, in the left subfigure the edges of the road are biased, and the blank parts near the center (marked by the blue box) overlap even though they should differ, since the scanner was located at different positions. In the right subfigure the edges overlap and the blank parts deviate, indicating that the point clouds have been registered successfully.}
\label{Fig:vis_kitti}
\end{figure*}
\noindent\textbf{RANSAC vs. correction-walk.}
The principle behind VRNet is to treat all \textit{source} points uniformly, without distinguishing inliers from outliers, rather than selecting reliable correspondences. Here, we evaluate these two strategies, taking RANSAC as the representative correspondence-selection method. The results of RANSAC, applied to the \textit{source} and \textit{target} point clouds, are presented in \figref{Fig:ransac}. Our VRNet achieves more accurate results than RANSAC across different outlier ratios in terms of rotation estimation and RMSE(t). With respect to MAE(t), RANSAC is better when the outlier ratio is high, while ours obtains more accurate results when the outlier ratio is low. To sum up, VRNet achieves better performance than RANSAC, especially at low outlier ratios.
Besides, we test the performance of combining RANSAC with our VRNet. Two settings are examined here, i.e., applying RANSAC to the \textit{source} and the learned VCPs, and applying RANSAC to the \textit{source} and the learned RCPs.
From \figref{Fig:ransac}, we find that RANSAC between the \textit{source} and the learned RCPs performs slightly better than RANSAC between the \textit{source} and the learned VCPs. Meanwhile, both settings are much more stable than using RANSAC or VRNet alone as the outlier ratio increases. It is worth mentioning that, when the outlier ratio is low, especially close to 0, RANSAC affects our VRNet negatively. We suspect that the tolerance in the RANSAC strategy (i.e., the threshold used to determine reliable correspondences) decreases the registration performance.
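As a point of reference for this comparison, the correspondence-selection baseline can be sketched in a few lines. The sketch below is purely illustrative (function names, iteration count, and the inlier threshold are our own choices, not those of the compared implementation): it repeatedly fits a rigid transform to three sampled correspondences via the closed-form Kabsch/SVD solution and keeps the transform with the largest consensus set.

```python
import numpy as np

def kabsch(src, tgt):
    """Least-squares rigid transform (R, t) mapping src onto tgt."""
    c_s, c_t = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - c_s).T @ (tgt - c_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_t - R @ c_s

def ransac_rigid(src, tgt, iters=500, thresh=0.05, rng=None):
    """Keep the rigid transform supported by the most putative correspondences."""
    rng = np.random.default_rng(rng)
    best_R, best_t = np.eye(3), np.zeros(3)
    best_inl = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[idx], tgt[idx])
        resid = np.linalg.norm(src @ R.T + t - tgt, axis=1)
        inl = resid < thresh
        if inl.sum() > best_inl.sum():
            best_R, best_t, best_inl = R, t, inl
    if best_inl.sum() >= 3:                          # refit on the consensus set
        best_R, best_t = kabsch(src[best_inl], tgt[best_inl])
    return best_R, best_t, best_inl
```

Unlike VRNet, which regresses corrected corresponding points for every \textit{source} point, this baseline discards suspected outlier matches entirely, which is exactly the design choice the experiment above probes.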
\noindent\textbf{Visualization.}
We provide visualizations of the learned VCPs, the learned RCPs, and the offsets learned by the correction-walk module in \figref{Fig:vis}. The VCPs are confined to the \textit{target} point cloud; however, they are amended to RCPs that are more consistent with the \textit{source} point cloud. In addition, for a clear demonstration of the effectiveness of the proposed VRNet, we provide some registration results on the 3DMatch and KITTI datasets in \figref{Fig:vis_3dmatch} and \figref{Fig:vis_kitti}.
\section{Discussion and conclusion} \label{5_conclusion}
In this paper, we have proposed VRNet, an end-to-end robust 3D point cloud registration network. However, our method also has some limitations. Specifically, 1) it cannot handle objects or scenes with strong symmetry well. Our method advocates learning the correction displacement by comparing the features of the \textit{source} point cloud and the virtual point cloud so that the RCPs tend to be consistent with the \textit{source} point cloud. However, if the point features are confused by identical geometry in a symmetric object, the learned offsets are also confused and cannot rectify the VCPs accurately;
2) even though our method has shown strong robustness as the overlap ratio decreases, the registration performance will still be significantly affected when the overlap ratio becomes very low. This is a persistent challenge in the point cloud registration field \cite{wang_dcp_iccv_2019,wang_prnet_nips_2019,choy_dgr_cvpr_2020}. For our method, we suspect the reason is that the learned RCPs cannot be built accurately, since there are too many outliers in the \textit{source} that need to be fitted by the virtual points.
Nevertheless, our VRNet effectively avoids the complicated inlier/outlier screening and reliable-correspondence selection by modeling the corresponding points of all \textit{source} points uniformly.
It is proven to be effective and efficient in recovering the rectified virtual corresponding points, which maintain the same shape as the \textit{source} and the same pose as the \textit{target}, thanks to the proposed correction-walk module and the hybrid loss function.
Our experiments show that VRNet can achieve state-of-the-art rigid transformation estimation results and high time-efficiency on both synthetic and real sparse datasets.
Meanwhile, for large-scale dense datasets, VRNet balances time-efficiency and accuracy: it not only achieves performance comparable to the most advanced methods but also maintains a clear advantage in runtime, which is crucial for practical applications.
In the future, we plan to extend our VRNet to 2D-2D and 2D-3D registrations.
In addition, we would further investigate more effective downsampling strategies to help VRNet improve the registration accuracy for large-scale dense point cloud data.
\section*{Acknowledgement}
\thanks{This work was supported in part by the National Key Research and Development Program of China under Grant 2018AAA0102803 and National Natural Science Foundation of China (61871325, 61901387, 62001394). This work was also sponsored by Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University.}
\section{Introduction}
Consider the optimization problem
\begin{equation}
\label{eq:optimal_control}
\begin{aligned}
&\text{minimize}\; J(y,u):=\frac{1}{2}\norm{Ay-g}^2_H + \frac{\alpha}{2} \norm{u}_U^2 ,\quad\text{over }(y,u)\in Y\times U,\\
&\text{subject to (s.t.)}\; e(y,u)=0, \text{ and } u \in \mathcal{C}_{ad},
\end{aligned}
\end{equation}
where $y\in Y$, $u \in U$ are the state and control variables, respectively, with $Y$ a suitable Banach space and $U$ a Hilbert space. Moreover, $g \in H$ denotes given data with $H$ the pertinent Hilbert space, $\alpha>0$ is the control cost, and $A:Y \to H$ is a bounded linear (observation) operator, i.e., $A\in\mathcal{L}(Y,H)$. While in \eqref{eq:optimal_control} feasible controls $u$ are confined to a nonempty, closed, and convex set $\mathcal{C}_{ad}$, the relationship between admissible controls and states is through the equality constraint associated with a possibly nonlinear operator $e: Y\times U \to Z$, with $Z$ a Banach space. Often, $e(y,u)=0$ is given by (a system of) ordinary or partial differential equations (ODEs or PDEs) describing, e.g., underlying physics. For the ease of discussion we assume that, for given $u\in U$, there is a unique $y\in Y$ such that $e(y,u)=0$. This allows us to write
\begin{equation*}
y=\Pi(u),
\end{equation*}
where $\Pi$ denotes the (implicitly defined) control-to-state map with $e(\Pi(u),u)=0$.
Given $\Pi$, a popular approach in the study of \eqref{eq:optimal_control} is based on the reduced problem
\begin{equation}
\label{eq:optimal_control_reduced}
\begin{aligned}
& \text{minimize}\; \mathcal{J}(u):=\frac{1}{2}\norm{Q(u)-g}^2_H + \frac{\alpha}{2} \norm{u}_U^2 ,\quad\text{over }u\in U,\\
&\text{s.t. }\; u \in \mathcal{C}_{ad},
\end{aligned}
\end{equation}
where $Q:=A \Pi(\cdot): U\to H$. Note that $\mathcal{J}(u)=J(\Pi(u),u)$.
In general, \eqref{eq:optimal_control} or its reduced form \eqref{eq:optimal_control_reduced}
represent a class of optimal control problems, for which a plethora of studies exist in the literature; see, e.g., \cite{Tro10} for an introduction and \cite{MR1669395, MR0271512, MR2183776} as well as the references therein for more details. In contrast, in many applications one is confronted with control problems where $e$ or, alternatively, $\Pi$ are only partly known along with measurement data which can be exploited to obtain (approximations of) missing information. Such minimization tasks have barely been treated in the literature and motivate the present work.
To motivate such a setting, we briefly highlight two classes of applications, which will be studied further from Section \ref{sec:appl_1} onwards.
Our first motivating example is related to the fact that many phenomena in engineering, physics or life sciences, for instance, can be modeled by elliptic partial differential equations of the form
\begin{equation}\label{mh.semilinear}\left.
\begin{aligned}
&Ly + f(x,y)=Ru & \quad \text{ in } \; \Omega, \\
&b(x)\partial_\nu y + d(x)y=0 & \quad \text{ on } \; \partial \Omega .
\end{aligned}\right\}
\end{equation}
Here $L$ denotes a second-order linear elliptic partial differential operator with measurable, bounded and symmetric coefficients, $f(x,y)$ is a nonlinearity, and $R$ models the impact of the control action $u$. Moreover, $b$ and $d$ are given coefficient functions. The
set $\Omega\subset \mathbb{R}^d$ represents the underlying domain with boundary $\partial\Omega$, and $\partial_\nu$ denotes the derivative along the outward (unit) normal $\nu$ to $\Omega$. Often the precise form of $f$ is unknown, but rather only accessible through a data set $D:=\{(y_i,u_i): e(y_i,u_i)\approx 0, i=1,\ldots,{n_{D}}\}$, ${n_{D}}\in\mathbb{N}$, i.e., given pre-specified control actions, one collects associated state responses (through measurements or computations). Utilizing data-driven approximation techniques such as artificial neural networks (ANNs), one may then get access to a data-driven model of $f$ which can be used even outside the range of the data set $D$ to yield a valid model of the underlying real-world process. In such a setting, associated optimal control problems depend on approximations $\mathcal{N}$ of $f$, and theoretical investigations as well as numerical solutions of the control problem need to take the construction of $\mathcal{N}$ into account.
The second example comes from quantitative magnetic resonance imaging - qMRI. In this context, one integrates a mathematical model of the acquisition physics (the Bloch equations \cite{DonHinPap19}) into the associated image reconstruction task in order to relate qualitative information (such as the net magnetization $y=\rho m$) with objective, tissue dependent quantitative information (such as $T_1$ and $T_2$, the longitudinal and the transverse relaxation times, respectively, or the proton spin density $\rho$). This model is then used to obtain quantitative reconstructions from subsampled measurement data $g$ in k-space by a variational approach. The provision of such quantitative reconstructions is highly important, e.g., for subsequent automated image classification procedures to identify tissue anomalies. Moreover, in \cite{DonHinPap19} it is demonstrated that such an {\it integrated physics-based} approach is superior to the state-of-the-art technique of magnetic resonance fingerprinting (MRF) \cite{Ma_etal13} and its improved variants \cite{DavPuyVanWia14, MazWeiTalEld18}.
Specifically in MRI, acquisition data are obtained at different pre-specified times (read-out times) $t_{1}, \ldots, t_{L}$, during which the magnetization of the matter is excited through the control of a time dependent external magnetic field $B$. Given $u=(T_1,T_2,\rho)$, the magnetization time vector at $t_{1}, \ldots t_{L}$ is then given by
$y=\Pi(u)$,
where $\Pi$ denotes the solution map associated with a discrete version of the Bloch equations. Crucial to this approach is the fact that, at least for specific variations of the external magnetic field $B$, explicit formulas for the solution map of the Bloch equations are available. For instance, in \cite{DavPuyVanWia14} and \cite{DonHinPap19} Inversion Recovery balanced Steady-State Free Precession (IR-bSSFP) \cite{Sche99} is used which involves certain flip angle sequence patterns that characterize the external magnetic field $B$. These flip angle patterns allow for a simple approximation of the solutions of the Bloch equations at the read-out times through a recurrence formula.
However, in general, it is quite typical that for more complicated external magnetic fields one does not have at hand explicit representations for the Bloch solution map. More generally, for most nonlinear differential equations (including those relevant in image reconstruction tasks) explicit solution maps might be too complicated to obtain. However, one may employ numerical methods to approximate their solutions $(y_i)_{i=1}^{{n_{D}}}$ given a specific (coarse) selection of parameters $(u_i)_{i=1}^{{n_{D}}}$ within a certain range. This generates a data set $D$ which is then employed in a learning procedure to generate an ANN based approximation $\Pi_{\mathcal{N}}$ of $\Pi$. This gives rise to $Q_{\mathcal{N}}:=A\Pi_{\mathcal{N}}$ in \eqref{eq:optimal_control_reduced} and requires an associated analytical as well as numerical treatment of the (reduced) minimization problem.
In general, learning-informed models are getting nowadays increasingly more popular in different scientific fields. Some works focus on the design of ANNs, e.g., by constructing novel network architectures \cite{BakGupNaiRas17}, or on developing fast and reliable algorithms in order to train ANNs more efficiently \cite{BotCurNoc18}.
More relevant for our present work, ANNs have been applied to the simulation of differential dynamical systems \cite{QinWuXiu18} and high dimensional partial differential equations \cite{HanJenE18,SirSpi18}, as well as to the coefficient estimation in nonlinear partial differential equations \cite{LonLuMaDon18}, also in connection with optimal control \cite{E17,HabRut18} and inverse problems \cite{ArrMaaOekSch19}.
Note, however, that in our approach neural networks do not aim to approximate the solution of \eqref{eq:optimal_control}, but rather they are part of the physical process encoded in $\Pi$. We emphasize that this is a different strategy to some of the recent works \cite{AdlOek17,Bal_etal18} in the literature that focus on learning the entire model or reconstruction process. More precisely, in the present work we suggest to use an operator $\Pi_{\mathcal{N}}$ that is induced by trained neural networks modelling the equality constraint (with, e.g., $f$ replaced by an ANN-based model $\mathcal{N}$ in our example \eqref{mh.semilinear}) or its (implicitly defined) solution map $\Pi$.
In such a setting, existence, convergence, stability and error bounds of the corresponding approximations need to be analyzed. Particularly, we are interested in the error propagation from the neural network based approximation to the solution of the optimal control problem.
Moreover, in the case of partial differential equations, when replacing $f$ by $\mathcal{N}$, the regularity of solutions has to be checked carefully before approaching the optimal control problem.
Further, from a numerical viewpoint, in order to use derivative-based numerical methods, it is important for these approximating solution maps to have certain smoothness.
This aspect is typically tied to the regularity of the activation functions employed in ANN approximations.
The remaining part of the paper is organized as follows:
Section \ref{sec:analysis} provides a general error analysis for solutions of the proposed learning-informed framework.
Some basic definitions and approximation properties of artificial neural networks are recalled in Section \ref{sec:ANN}, and
Section \ref{sec:appl_1} presents a concrete case study on optimal control of semilinear elliptic equations with general nonlinearities, including both error analysis and numerical results.
Section \ref{sec:appl_2} contains another case study on quantitative magnetic resonance imaging, again including computational results.
\section{Mathematical analysis of the general framework problem}
\label{sec:analysis}
We start our analysis by studying \eqref{eq:optimal_control_reduced}
or its variant where $Q$, the original physics-based operator, is replaced by a (data-driven) approximation.
Existence of a solution to \eqref{eq:optimal_control_reduced} follows from standard arguments which are provided here for the sake of completeness.
\begin{proposition}\label{pro:existence_wsc}
Suppose that $Q$ is weakly-weakly sequentially closed, i.e., if $u_{n}\overset{U}{\rightharpoonup} u$ and $Q(u_{n})\overset{H}{\rightharpoonup}\bar{g}$, then $\bar{g}=Q(u)$.
Then \eqref{eq:optimal_control_reduced} admits a solution $\bar u\in U$.
In the special case where $\mathcal{C}_{ad}$ is a bounded set of a subspace $\hat{U}$ which is compactly embedded into $U$, it suffices that $Q$ is strongly-weakly sequentially closed to guarantee existence of a solution to \eqref{eq:optimal_control_reduced}.
\end{proposition}
\begin{proof}
Suppose that $Q$ is weakly-weakly sequentially closed and let $(u_{n})_{n\in\mathbb{N}}\subset \mathcal{C}_{ad}$ be an infimizing sequence for \eqref{eq:optimal_control_reduced}. Since $\alpha>0$, $(u_n)_{n\in\mathbb{N}}$ is bounded in $U$, and thus we can extract an (unrelabelled) weakly convergent subsequence, i.e., $u_{n}\overset{U}{\rightharpoonup} \bar u$ for some $\bar u\in U$. Since $\mathcal{C}_{ad}$ is strongly closed and convex, it is weakly closed and therefore $\bar u\in \mathcal{C}_{ad}$. Moreover, since the sequence $(Q(u_{n}))_{n\in\mathbb{N}}$ is also bounded in $Y$, passing to a subsequence if necessary, we get that there exists a $\bar{g}\in H$ such that $Q(u_{n})\overset{H}{\rightharpoonup} \bar{g}$. Due to the weak sequential closedness we have $\bar{g}=Q(\bar u)$. Finally, from the weak lower semicontinuity of $\|\cdot\|_{H}$ and $\|\cdot\|_{U}$ we have $\mathcal{J}(\bar u) \le \liminf_{n\to\infty} \mathcal{J}(u_{n})= \inf_{u\in \mathcal{C}_{ad}} \mathcal{J}(u)$
and hence $\bar u$ is a solution of \eqref{eq:optimal_control_reduced}.
For the special case let $(u_{n})_{n\in\mathbb{N}}$ again be an infimizing sequence for \eqref{eq:optimal_control_reduced}. Due to the compact embedding, we have that $(u_{n})_{n\in\mathbb{N}}$ has an (unrelabelled) subsequence such that $u_{n}\to \bar u$ strongly in $U$ as $n\to \infty$. Then the proof follows the same steps as above.
\end{proof}
\begin{remark}\label{rem:continuity}
We note here that in many examples in optimal control of (semilinear) PDEs, the control-to-state map actually maps $U$ to a solution space $Y$ which is of higher regularity than $H$ and even compactly embeds into it; e.g., $Y:= H^{1}(\Omega)\hookrightarrow L^{2}(\Omega)=:H$. Provided that the control-to-state map is bounded, in that case weak convergence in $U$ results, up to subsequences, in strong convergence in $H$ with the latter used to show closedness of the control-to-state operator.
\end{remark}
Assuming that $Q$ is Fr\'echet differentiable with derivative $Q'(\cdot)\in\mathcal{L}(U,H)$, the first-order optimality condition of \eqref{eq:optimal_control_reduced} is
\begin{equation}\label{eq:first_optimality}
\langle \mathcal{J}^\prime (\bar u),u-\bar u\rangle_{U^\ast,U}\geq 0 \quad \text{ for all } \; u\in \mathcal{C}_{ad},
\end{equation}
where $\mathcal{J}'(\bar u)\in\mathcal{L}(U,\mathbb{R})=:U^\ast$ is the Fr\'echet derivative of $\mathcal{J}$ at $\bar u$, and $\langle\cdot,\cdot\rangle_{U^\ast,U}$ denotes the duality pairing between $U$ and its dual $U^\ast$.
Utilizing the structure of $\mathcal{J}$ we get
\begin{align*}
&\big\langle(Q^\prime(\bar u))^\ast \iota_H^{-1}(Q(\bar u)-g) + \alpha \iota_U^{-1}\bar u, u-\bar u \big\rangle_{U^{\ast}, U} \ge 0\quad \text{for all }\; u\in \mathcal{C}_{ad},
\end{align*}
or alternatively
\[ \bar u = \mathcal{P}_{\mathcal{C}_{ad}}\left (-\frac{\iota_U(Q^\prime (\bar u))^\ast \iota_H^{-1}(Q(\bar u)-g) }{\alpha }\right),\]
where $\mathcal{P}_{\mathcal{C}_{ad}}$ is the projection in $U$ onto $\mathcal{C}_{ad}$, and $\iota_H:H^*\to H$ as well as $\iota_U:U^*\to U$ are Riesz isomorphisms, respectively. For ease of notation, however, we will leave off the Riesz maps in what follows whenever there is no confusion.
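The projection formula above also underlies numerical schemes: a projected gradient iteration $u^{k+1}=\mathcal{P}_{\mathcal{C}_{ad}}(u^{k}-\tau \mathcal{J}'(u^{k}))$ has the minimizer as its fixed point for any step size $\tau>0$, and the particular choice $\tau=1/\alpha$ recovers exactly the displayed fixed-point map. A minimal finite-dimensional sketch (with $U=H=\mathbb{R}^{n}$, a linear $Q$, and box constraints; all concrete choices are ours and purely illustrative):

```python
import numpy as np

def projected_gradient(Q, g, alpha, lo, hi, tau=None, iters=5000):
    """Minimize 0.5*||Q u - g||^2 + 0.5*alpha*||u||^2 over the box [lo, hi]^n
    by projected gradient descent (projection onto a box is a clip)."""
    n = Q.shape[1]
    if tau is None:
        # safe step: reciprocal of the gradient's Lipschitz constant
        tau = 1.0 / (np.linalg.norm(Q, 2) ** 2 + alpha)
    u = np.zeros(n)
    for _ in range(iters):
        grad = Q.T @ (Q @ u - g) + alpha * u
        u = np.clip(u - tau * grad, lo, hi)
    return u
```

At convergence the iterate satisfies the displayed projection identity $u=\mathcal{P}_{\mathcal{C}_{ad}}\big({-\alpha^{-1}}Q^{\top}(Qu-g)\big)$, which serves as a convenient optimality check.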
We now proceed to the error analysis of \eqref{eq:optimal_control_reduced}, where we
assume that $(Q_{n})_{n\in\mathbb{N}}$ is a family of operators approximating $Q$, and clarify the convergence of the associated minimizers $u_n\in\mathcal{C}_{ad}$.
\begin{theorem}\label{thm:convergence}
Let $Q$ and $Q_{n}$, $n\in\mathbb{N}$, be weakly sequentially closed operators with
\begin{equation} \label{eq:operator_err}
\|Q(u)-Q_{n}(u)\|_{H}\le \epsilon_{n}, \quad
\text{ for all } \; u\in \mathcal{C}_{ad},
\end{equation}
and $\epsilon_{n}\downarrow 0$. Furthermore, let $(u_{n})_{n\in \mathbb{N}}$ be a sequence of minimizers of \eqref{eq:optimal_control_reduced} with $Q$ replaced by $Q_n$ for all $n\in\mathbb{N}$.
Then, we have the strong convergences
\begin{equation}\label{eq:strong_convergence}
u_{n} \to \bar{u} \; \text{ in }\; U,\quad\text{ and }\quad Q_n(u_{n}) \to Q(\bar{u}) \; \text{ in } \;H, \quad \text{ as } \; n \to \infty,
\end{equation}
where $\bar{u}$ is a minimizer of \eqref{eq:optimal_control_reduced}.
\end{theorem}
\begin{proof}
As $(u_{n})_{n\in \mathbb{N}}$ is a sequence of minimizers, we have for $C:=\max_{n} \epsilon_{n}<\infty$ and every $u\in \mathcal{C}_{ad}$:
\begin{equation*}
\begin{aligned}
\frac{1}{2} \norm{Q_{n}(u_{n}) -g }_{H}^2+\frac{\alpha}{2}\norm{ u_{n}}_{U}^2
\leq \norm{Q(u) -g}_{H}^2+ C^2+\frac{\alpha}{2}\norm{ u}_{U}^2 .
\end{aligned}
\end{equation*}
Note also that $\|Q(u_{n})\|_{H}\le \|Q_{n}(u_{n})\|_{H}+ \epsilon_{n}$.
Hence $(u_{n})_{n\in\mathbb{N}}$, $(Q(u_{n}))_{n\in\mathbb{N}}$ and $(Q_{n}(u_{n}) )_{n\in\mathbb{N}}$ are bounded sequences and therefore there exist (unrelabelled) subsequences and $\bar u\in U$ such that
$u_{n} \stackrel{U}{\rightharpoonup} \bar{u}$ with $\bar{u}\in\mathcal{C}_{ad}$ by weak closedness, $Q(u_{n})\stackrel{H}{\rightharpoonup} Q(\bar{u})$, and ${Q_{n}(u_{n}) \stackrel{H}{\rightharpoonup} Q(\bar{u})}$,
where we have also used that $Q$ is weakly sequentially closed for the second limit.
For the third limit, note that for an arbitrary $\tilde{g}\in H$, by using \eqref{eq:operator_err}, we get
\begin{align*}
\abs{( Q_{n}(u_{n}) -Q(\bar{u}),\tilde{g})_H}
&\le \abs{( Q_{n}(u_{n}) -Q(u_{n}),\tilde{g})_H}+ \abs{( Q(u_{n}) -Q(\bar{u}),\tilde{g})_H}\\
&\le \epsilon_{n} \|\tilde{g}\|_{H} + \abs{( Q(u_{n}) -Q(\bar{u}),\tilde{g})_H} \to 0,
\end{align*}
where $(\cdot,\cdot)_H$ denotes the inner product in $H$.
Using the lower semicontinuity of the norms, we have for every $u\in \mathcal{C}_{ad}$ that
\begin{align*}
\frac{1}{2} \norm{Q(\bar{u}) -g}_{H}^{2} &+ \frac{\alpha}{2} \norm{\bar{u}}_{U}^2
\le \liminf_{n} \frac{1}{2}\norm{Q_{n}(u_{n})-g}_{H}^{2} + \frac{\alpha}{2} \norm{u_n}_{U}^{2}\\
&\le \lim_{n} \frac{1}{2}\norm{Q_{n}(u)-g}_{H}^{2} + \frac{\alpha}{2} \norm{u}_{U}^{2}
= \frac{1}{2} \norm{Q(u)-g}_{H}^{2} + \frac{\alpha}{2} \norm{u}_{U}^{2}.
\end{align*}
Thus, we conclude that $\bar{u}$ is a minimizer of \eqref{eq:optimal_control_reduced}.
We still need to show that $u_{n} \to \bar{u}$ strongly in $U$.
Suppose there exists a $\mu>0$ such that $\mu=\limsup_{n}\norm{u_n}_{U}> \norm{\bar{u}}_{U}$.
Let $(u_{n_{k}})_{k\in\mathbb{N}}$ be a subsequence with $\norm{u_{n_{k}}}_{U} \to \mu$ as $k\to \infty$.
Then we have
\begin{equation}\label{eq:upper_limit}
\begin{aligned}
\limsup_k \;\frac{1}{2}\norm{Q_{n_{k}}(u_{n_{k}}) -g}_{H}^2
&=\limsup_k \left(\frac{1}{2}\norm{Q_{n_{k}}(u_{n_{k}}) -g}_{H}^2+\frac{\alpha}{2} (\norm{u_{n_{k}}}_{U}^2 - \mu^2)\right)\\
& \leq \lim_k \frac{1}{2}\norm{Q_{n_{k}}(\bar{u}) -g}_{H}^2 +\frac{\alpha}{2}( \norm{\bar{u}}_{U}^2-\mu^2)\\
&= \frac{1}{2}\norm{Q(\bar{u}) -g}_{H}^2 +\frac{\alpha}{2}( \norm{\bar{u}}_{U}^2-\mu^2)
<\frac{1}{2}\norm{Q(\bar{u}) -g}_{H}^2 .
\end{aligned}
\end{equation}
This contradicts the weak lower semicontinuity of the norm and the weak convergence $Q_{n}(u_{n}) \rightharpoonup Q(\bar{u})$.
Thus, $\|u_n\|_{U} \to \|\bar{u}\|_{U}$ as $n\to\infty$.
Together with the weak convergence $u_{n} \rightharpoonup \bar{u}$ we get $u_{n} \to \bar{u}$ strongly in $U$ and further
\[\limsup_n \norm{ Q_{n}(u_{n}) - g }_{H}\leq \norm{Q(\bar{u})-g}_H \leq \liminf_n \norm{ Q_{n}(u_{n}) - g }_{H} .\]
Hence, $\lim_n \norm{Q_{n}(u_{n})-g}_H=\norm{Q(\bar{u})-g}_H$, which together with the weak convergence
implies the second limit in \eqref{eq:strong_convergence}.
\end{proof}
For a quantitative convergence result, we invoke the following assumptions which are motivated by the analysis of nonlinear inverse problems \cite{Han10,LuFle12}.
\begin{assumption}\label{assum:operator_derivative}
Assume that $Q$ is Fr\'echet differentiable and that there exists $L_0>0$ such that
\begin{equation}\label{eq:der_bounded}
\norm{ Q^\prime(u)}_{\mathcal{L}(U,H)}\leq L_0 \quad \text{ for all }\; u\in \mathcal{C}_{ad}.
\end{equation}
Assume further that the Fr\'echet derivative is locally Lipschitz with modulus $L_1>0$, i.e.,
\begin{equation}\label{eq:second_Lip}
\norm{Q^\prime (u_a)- Q^\prime (u_b)}_{\mathcal{L}(U,H)} \leq L_1\norm{u_a-u_b}_U, \quad \text{ for all }\; u_a, u_b \in \mathcal{C}_{ad}.
\end{equation}
Moreover, let the Fr\'echet derivatives of $Q$ and $Q_{n}$ satisfy the following error bounds
\begin{equation} \label{eq:operator_deriv_err}
\norm{Q^\prime(u)-Q^\prime_{n}(u)}_{\mathcal{L}(U,H)}\leq \eta_n, \;
\text{ for all } \; u\in \mathcal{C}_{ad},
\end{equation}
where $\eta_n\in (0,1)$ for all $n\in\mathbb{N}$ and $\eta_n \downarrow 0$.
Finally, let the two constants $L_0$ and $L_1$ satisfy
\begin{equation}\label{eq:Lip_const_condition2}
L_0(L_0+1) +L_1 \norm{Q(\bar{u})-g}_H<\alpha,
\end{equation}
with $\bar{u}$ being the minimizer of \eqref{eq:optimal_control_reduced}.
\end{assumption}
The condition in \eqref{eq:der_bounded} indicates that
\begin{equation}\label{eq:Q_Lip}
\norm{Q(u_a)- Q(u_b)}_{H} \leq L_0\norm{u_a-u_b}_U, \quad \text{ for all }\; u_a,u_b \in \mathcal{C}_{ad}.
\end{equation}
\begin{theorem}\label{thm:error_bound2}
Let the assumptions of Theorem \ref{thm:convergence} as well as Assumption \ref{assum:operator_derivative} hold.
Then, we have
\begin{equation}\label{eq:error_bound2}
\norm{u_{n}- \bar{u}}_U\leq \frac{1 }{ \alpha -L_0(L_0+\eta_n) -L_1\norm{Q(\bar{u})-g}_H}\left( L_0 \epsilon_n + \epsilon_{n}\eta_n+ \norm{Q(\bar{u})-g}_H \eta_n \right) .
\end{equation}
\end{theorem}
\begin{proof}
First-order optimality yields
\begin{equation}
\bar{u} =\mathcal{P}_{\mathcal{C}_{ad}}\left(- (Q^\prime(\bar{u}))^\ast w \right) \quad \text{ and } \quad
u_{n} =\mathcal{P}_{\mathcal{C}_{ad}}\left( - (Q_{n}^\prime(u_{n}))^\ast w_{n} \right),
\end{equation}
where $w=\frac{Q(\bar{u})-g}{\alpha}$ and $w_{n} =\frac{ Q_{n}(u_n)-g}{\alpha} $.
The inequalities in \eqref{eq:der_bounded}, \eqref{eq:second_Lip}, \eqref{eq:operator_deriv_err}, and \eqref{eq:Q_Lip} and the fact that $\norm{Q'(u)}_{\mathcal{L}(U,H)}=\norm{(Q'(u))^{\ast}}_{\mathcal{L}(H^*,U^*)}$ imply
\begin{equation*}
\begin{aligned}
\norm{ u_{n} - \bar{u}}_{U}
\leq &\norm{ (Q_{n}^\prime(u_{n}))^\ast w_{n}- (Q^\prime(\bar{u}))^\ast w }_{U^*} \\
\leq &
\norm{ (Q_{n}^\prime(u_{n}))^\ast \left(w_{n}- w \right) }_{U^*}
+\norm{ \left((Q_{n}^\prime(u_{n}))^\ast - (Q^\prime(\bar{u}))^\ast \right) w }_{U^*}\\
\leq & (L_0+\eta_n) \norm{ w_{n}- w }_{H}
+ \norm{w}_{H} \eta_n
+ L_1 \norm{ w}_{H} \norm{u_{n}- \bar{u}}_{U} \\
\leq & \frac{L_0+\eta_n }{\alpha}\norm{Q(\bar{u})- Q_{n}(u_{n})}_{H}
+\norm{w}_{H}\eta_n+ L_1 \norm{ w}_{H} \norm{u_{n}- \bar{u}}_{U} \\
\leq & \frac{L_0+\eta_n }{\alpha}(\epsilon_n +L_0\norm{u_{n} - \bar{u}}_{U})
+ \norm{w}_{H}\eta_n + L_1 \norm{ w}_{H} \norm{u_{n}- \bar{u}}_{U}.
\end{aligned}
\end{equation*}
Moving all terms that involve $\norm{u_{n}- \bar{u}}_{U} $ to the left-hand side we get
\[ (1- \frac{L_0(L_0+\eta_n)}{\alpha } -L_1\norm{w}_{H}) \norm{ u_{n} - \bar{u}}_{U} \leq \frac{L_0}{\alpha}\epsilon_n +\frac{\epsilon_{n}\eta_n}{\alpha}+ \norm{w}_{H}\eta_n. \]
Finally, using $w=\frac{Q(\bar u)-g}{\alpha}$ we find \eqref{eq:error_bound2}.
\end{proof}
Observe that for $Q(\bar{u})=g$ (perfect matching)
the a priori bound is essentially controlled by $\epsilon_n$ only:
\[\norm{u_{n}- \bar{u}}_{U} \leq \frac{L_0+\eta_n }{ \alpha -L_0(L_0+\eta_n) }\epsilon_{n}. \]
Note further that the error bound depends on a sufficiently large $\alpha$ such that \eqref{eq:Lip_const_condition2} is satisfied.
In the special case where $\mathcal{C}_{ad}$ is redundant, i.e., when $\mathcal{J}'(\bar{u})=0$, improved error bounds can be derived.
This is in particular true for perfect matching which also allows to relax the conditions on $\alpha$.
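For a linear forward operator the constants of Assumption \ref{assum:operator_derivative} are explicit ($L_{1}=0$, $Q'\equiv Q$), so the bound \eqref{eq:error_bound2} can be verified numerically. The following self-contained sketch (all dimensions, norms, and the box constraint $\mathcal{C}_{ad}=[-1,1]^{n}$ are our own illustrative choices) solves the exact and a perturbed problem by projected gradient and compares the error with the right-hand side of \eqref{eq:error_bound2}; note that $\epsilon_{n}$ is replaced by the valid upper bound $\eta_{n}\sqrt{n}\ge\sup_{u\in\mathcal{C}_{ad}}\|Eu\|$, which only enlarges the right-hand side.

```python
import numpy as np

def solve_box(Q, g, alpha, lo, hi, iters=20000):
    """Projected gradient for min 0.5*||Q u - g||^2 + 0.5*alpha*||u||^2 over a box."""
    tau = 1.0 / (np.linalg.norm(Q, 2) ** 2 + alpha)
    u = np.zeros(Q.shape[1])
    for _ in range(iters):
        u = np.clip(u - tau * (Q.T @ (Q @ u - g) + alpha * u), lo, hi)
    return u

rng = np.random.default_rng(0)
m, n, alpha, lo, hi = 12, 6, 1.0, -1.0, 1.0
Q = rng.normal(size=(m, n)); Q *= 0.5 / np.linalg.norm(Q, 2)   # enforce L0 = 0.5
E = rng.normal(size=(m, n)); E *= 0.05 / np.linalg.norm(E, 2)  # eta_n = 0.05 < 1
g = rng.normal(size=m)

u_bar = solve_box(Q, g, alpha, lo, hi)       # exact model
u_n = solve_box(Q + E, g, alpha, lo, hi)     # perturbed (learning-informed) model

L0, eta = 0.5, 0.05                          # L1 = 0 since Q is linear
eps = eta * np.sqrt(n)                       # sup over the box of ||E u||
res = np.linalg.norm(Q @ u_bar - g)          # ||Q(u_bar) - g||
# condition (2.12): L0*(L0+1) + 0 = 0.75 < alpha = 1 holds here
bound = (L0 * eps + eps * eta + res * eta) / (alpha - L0 * (L0 + eta))
err = np.linalg.norm(u_n - u_bar)
```

In this regime the measured error stays below the a priori bound, as the theorem predicts.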
\begin{theorem}\label{thm:error_bound}
Let the assumptions of Theorem \ref{thm:convergence} hold and suppose that the Lipschitz condition \eqref{eq:second_Lip} is satisfied with the constant $L_1$ such that
\begin{equation}\label{eq:Lip_const_condition}
L_1 \norm{Q(\bar{u})-g}_H< \alpha.
\end{equation}
If $\mathcal{J}'(\bar u)=0$, then for sufficiently large $n\in\mathbb{N}$ we have the following error bound
\begin{equation}\label{eq:error_bound}
\norm{u_{n}- \bar{u}}_U\leq \sqrt{ \frac{3}{\alpha -L_1 \norm{g-Q(\bar{u})}_H}} \sqrt{ \epsilon_n^2 +2\norm{Q(\bar{u})-g}_H^2}.
\end{equation}
\end{theorem}
\begin{proof}
Since $u_{n}$ is a minimizer for every $n\in\mathbb{N}$, we have that $\mathcal{J}_n(u_n)\leq \mathcal{J}_n(\bar u)$ with $\mathcal{J}_n(u):=J(Q_n(u),u)$.
Adding $\frac{\alpha}{2}(\norm{ u_n- \bar{u}}_{U}^2 - \norm{u_n}_{U}^2)$ to both sides of the inequality gives
\begin{equation}\label{eq:error1}
\frac{1}{2} \norm{Q_{n}(u_{n}) -g }_{H}^2+\frac{\alpha}{2}\norm{ u_n- \bar{u}}_{U}^2 \leq \frac{1}{2} \norm{Q_{n}(\bar{u}) -g}_{H}^2+\alpha \langle \iota_U^{-1} \bar{u},\bar{u}- u_{n} \rangle_{U^\ast,U}.
\end{equation}
Using Theorem \ref{thm:convergence}, Taylor's expansion and \eqref{eq:second_Lip}, we get for sufficiently large $n\in\mathbb{N}$
\[Q(u_{n}) - Q(\bar{u})=Q^\prime(\bar{u})(u_{n}-\bar{u}) +q(u_{n},\bar{u}), \text{
where } \norm{q(u_{n},\bar{u})}_{H}\leq \frac{L_1}{2}\norm{ u_{n}-\bar{u}}_{U}^2 .\]
By our assumptions and first-order optimality we have
$\bar{u} = -\iota_U(Q^\prime(\bar{u}))^\ast w$
where $w=\alpha^{-1}(Q(\bar{u})-g)$
with $L_1 \norm{w}_{H}< 1$ because of \eqref{eq:Lip_const_condition}.
This leads to
\begin{equation}\label{eq:error2}
\begin{aligned}
& \langle \iota_U^{-1} \bar{u},\bar{u}- u_{n} \rangle_{U^\ast,U} = \left( -w,Q^\prime(\bar{u})(\bar{u}- u_{n}) \right)_H \leq \norm{w}_{H}\norm{Q^\prime(\bar{u})(\bar{u}- u_{n})}_{H} \\
\leq & \norm{w}_{H}\left(\frac{L_1}{2}\norm{ u_{n}-\bar{u}}_{U}^2
+ \norm{Q(u_{n}) - Q_{n}(u_{n}) }_{H} + \norm{ Q_{n}(u_{n}) -g }_{H} +\norm{g - Q(\bar{u})}_{H} \right)\\
\leq & \frac{\norm{w}_{H}L_1 }{2}\norm{ u_{n}-\bar{u}}_{U}^2 + \frac{1}{2}\left(\alpha \norm{w}_{H}^2 + \frac{1}{\alpha}\norm{ Q_n(u_{n}) -g }_{H}^2 \right) \\
& + \left(\alpha \norm{w}_{H}^2 + \frac{1}{2\alpha} \norm{Q(u_{n}) - Q_n(u_{n}) }_{H}^2 + \frac{1}{2\alpha}\norm{g - Q(\bar{u})}_{H}^2 \right) ,
\end{aligned}
\end{equation}
where we have used Young's inequality $ab\le \frac{1}{2\alpha} a^{2}+ \frac{\alpha}{2} b^{2} $.
Returning to \eqref{eq:error1} and using \eqref{eq:error2},
we derive
\begin{equation}
\label{eq:error3}
\begin{aligned}
\norm{ u_n- \bar{u}}_{U}^2 \leq &\frac{1}{\alpha} \norm{Q_{n}(\bar{u}) -g }_{H}^2+
\norm{w}_{H}L_1 \norm{ u_{n}-\bar{u}}_{U}^2 + 3\alpha \norm{w}_{H}^2 \\
& + \frac{1}{\alpha }( \norm{Q(u_{n}) - Q_n(u_{n}) }_{H}^2 + \norm{g - Q(\bar{u})}_{H}^2)\\
\leq & \frac{2}{\alpha }\norm{Q_{n}(\bar{u}) - Q(\bar{u})}_{H}^2 +
\norm{w}_{H}L_1 \norm{ u_{n}-\bar{u}}_{U}^2 + 3\alpha \norm{w}_{H}^2 \\
& + \frac{1}{\alpha}\norm{Q(u_{n}) - Q_n(u_{n}) }_{H}^2 +\frac{3}{\alpha} \norm{g - Q(\bar{u})}_{H}^2.
\end{aligned}
\end{equation}
Taking into account \eqref{eq:Lip_const_condition}, we get
\[
\begin{aligned}
\norm{ u_n- \bar{u}}_{U}^2 \leq \frac{1}{\left( 1- \norm{w}_{H}L_1 \right)} \frac{3}{\alpha}\left( \epsilon_n^2 + \alpha^2\norm{w}_{H}^2 +\norm{g - Q(\bar{u})}_{H}^2\right),
\end{aligned}
\]
for sufficiently large $n\in\mathbb{N}$.
Replacing now $\norm{w}_{H}$ by $\frac{\norm{g-Q(\bar{u})}_{H}}{\alpha}$ yields \eqref{eq:error_bound}.
\end{proof}
Note that in the case of perfect matching $Q(\bar{u})=g$, \eqref{eq:error_bound} becomes
\begin{equation}\label{eq:error_bound_zero_res}
\norm{u_{n}- \bar{u}}_{U}\leq \epsilon_n\sqrt{\frac{3}{\alpha }}\quad\text{for sufficiently large }n\in\mathbb{N}.
\end{equation}
As stated earlier, our aim is to use approximations $Q_n=Q_{\mathcal{N}_n}=A\Pi_{\mathcal{N}_n}$ resulting from artificial neural networks to replace the partially unknown exact control-to-state map $\Pi$ and $Q=A\Pi$. Therefore, we next collect some fundamental properties of such neural network based approximations.
\section{A brief primer on artificial neural networks (ANNs)}\label{sec:ANN}
Here, we briefly review some well-known results on ANNs that will be useful in what follows. For a thorough introduction to ANNs we refer to standard textbooks on the topic, e.g., \cite{GooBenCou16}.
We recall that a standard feedforward ANN with one hidden layer is a function $\mathcal{N}: \mathbb{R}^{r}\to \mathbb{R}^{s}$ of the following structure:
\begin{equation}\label{NN_def_section}
\mathcal{N}(x)= W_0\sigma (W_1x+b_1)+b_0,\quad x\in \mathbb{R}^{r},
\end{equation}
where $W_1\in \mathbb{R}^{l\times r}$, $b_1 \in \mathbb{R}^{l}$, $W_0\in \mathbb{R}^{s\times l}$ and $b_0\in \mathbb{R}^{s}$. In that case we say that the hidden layer has $l$ \emph{neurons}.
Here, $\sigma: \mathbb{R} \to \mathbb{R}$ is an infinitely differentiable activation function which acts component-wise on a vector in $ \mathbb{R}^l$.
In the output layer, the activation function is usually the identity map and is therefore omitted in \eqref{NN_def_section}, while in the hidden layers it introduces nonlinear transformations.
Some standard smooth activation functions are the following ones:
\begin{itemize}
\item[$\bullet$] Sigmoid: a term denoting a family of functions, e.g., tansig ($\sigma(z)=\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$), logsig ($\sigma(z)=\frac{1}{1+e^{-z}}$), arctan ($\sigma(z)=\arctan(z)$), etc.
\item[$\bullet$] Probability functions, e.g., softmax ($\sigma_i(z)=\frac{e^{z_i}}{\sum_j e^{z_j}}$). Here the index $i$ denotes the $i$-th neuron in a given layer, with the summation indexed by $j$ being taken over all the neurons of the same layer.
\end{itemize}
We see that for the softmax function, neurons of the same layer may have different activation functions. Notice that it is the smoothness of the activation function that determines the smoothness of $\mathcal{N}$.
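In code, the one-hidden-layer map \eqref{NN_def_section} and the activation functions listed above amount to only a few lines. The following Python sketch uses arbitrary illustrative layer sizes and random weights; none of these choices come from the text:

```python
import numpy as np

# Smooth activation functions from the list above.
def tansig(z):   # hyperbolic tangent sigmoid
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

def logsig(z):   # logistic sigmoid
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):  # probabilities over the neurons of one layer
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def one_hidden_layer(x, W1, b1, W0, b0, sigma=tansig):
    """N(x) = W0 sigma(W1 x + b1) + b0, cf. the definition above."""
    return W0 @ sigma(W1 @ x + b1) + b0

rng = np.random.default_rng(0)
r, l, s = 3, 5, 2                    # input dim, hidden neurons, output dim
W1, b1 = rng.normal(size=(l, r)), rng.normal(size=l)
W0, b0 = rng.normal(size=(s, l)), rng.normal(size=s)
x = rng.normal(size=r)
y = one_hidden_layer(x, W1, b1, W0, b0)   # a point evaluation of N
```

Since the activations act component-wise and are smooth, the resulting map $x\mapsto \mathcal{N}(x)$ is smooth as well.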
Next we state a classical result, see, for instance, \cite[Theorem 3.1]{Pin99}. Below ``$\cdot$'' denotes the standard inner product in the underlying Euclidean space.
\begin{theorem}\label{thm:function_app}
Let $\sigma \in C( \mathbb{R})$ and consider the set
\[R_\sigma:=\set{\mathcal{N}: \mathbb{R}^r \to \mathbb{R} \,|\, \mathcal{N}(x)= w_0\cdot \sigma(W_1 x+ b_1) ,\text{ with } w_0 \in \mathbb{R}^l, \; W_1\in \mathbb{R}^{l\times r},\; b_1\in \mathbb{R}^l}.\]
Then $R_\sigma$ is dense in $C( \mathbb{R}^r)$ in the topology of uniform convergence on compact sets if and only if $\sigma$ is not a polynomial function.
\end{theorem}
Hence, for any $\epsilon>0$, and for any given function $f\in C(K)$, $K\subset \mathbb{R}^r$ compact, there exists a function $\mathcal{N}=\mathcal{N}^\epsilon \in R_\sigma$ such that
\[ \max_{ x\in K}\abs{f( x )- \mathcal{N}^\epsilon( x)}< \epsilon.\]
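The density of $R_\sigma$ can be observed numerically. In the sketch below the inner parameters $(W_1,b_1)$ are drawn at random and only the outer weights $w_0$ are fitted by linear least squares; this is merely a quick way to exhibit an element of $R_\sigma$ with small uniform error on a compact set, not the training procedure discussed later (target function, width and grid are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.sin                             # target f in C(K), K = [-3, 3]
xs = np.linspace(-3.0, 3.0, 400)       # dense grid on the compact set K

# Element of R_sigma: N(x) = w0 . sigma(W1 x + b1), sigma = tanh, l = 80.
l = 80
W1 = rng.normal(size=l)
b1 = rng.uniform(-3.0, 3.0, size=l)
features = np.tanh(np.outer(xs, W1) + b1)      # shape (400, l)

# Fix (W1, b1) and fit the outer weights w0 by linear least squares.
w0, *_ = np.linalg.lstsq(features, f(xs), rcond=None)
err = np.max(np.abs(features @ w0 - f(xs)))    # sup-norm error on the grid
```

Already with $l=80$ randomly placed tanh neurons the grid sup-norm error drops far below $10^{-2}$, illustrating the theorem for this particular $f$ and $K$.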
This approximation property can be also carried over to the derivatives of a given smooth function; see, e.g., \cite[Theorem 4.1]{Pin99}.
\begin{theorem}\label{thm:deriv_app}
Let $\,m=\max \set{\abs{ m^i}:\; i=1,2,\ldots, s}$, where each $m^{i}$ is a standard differentiation multi-index, and define $C^{m^1, \ldots , m^s}( \mathbb{R} ^{r}):=\bigcap_{i=1}^s C^{m^i}( \mathbb{R}^r)$.
Then $R_\sigma$ is dense in $C^{m^1, \ldots , m^s}( \mathbb{R}^r)$ if $\sigma\in C^m( \mathbb{R})$ is not a polynomial function.
\end{theorem}
As a consequence, for any $f\in C^{m^1, \ldots , m^s}(K)$, for every compact $K\subset \mathbb{R}^{r}$ and every $\epsilon>0$, there exists a function $\mathcal{N}=\mathcal{N}^\epsilon\in R_\sigma$ such that
\[\max_{ x\in K}\abs{D^{k} f( x)- D^{ k} \mathcal{N}^\epsilon( x)}< \epsilon,\]
for all multi-indices $ k$ such that $ 0\leq k \le m^{i}$ for some $i$.
Note that
these results imply analogous error bounds for \eqref{NN_def_section}, i.e., for the vector-valued case. They can also be generalized to multiple-hidden-layer networks, as the next theorem shows; see \cite{LesLinPinSch93}.
\begin{theorem}\label{thm:mul_lay_deriv_app}
A standard multi-layer feedforward network with a continuous activation function can uniformly approximate any continuous function to any degree of accuracy if and only if its activation function is not a polynomial.
\end{theorem}
One of the main tasks of deep learning, a specific branch of machine learning, is to identify suitable choices for $W_0\in\mathbb{R}^{s\times l_{\ell}}$, $W_1\in \mathbb{R}^{l_{1}\times r}$, $W_i\in\mathbb{R}^{l_{i}\times l_{i-1}}$ for $i=2,\ldots,\ell$, and $b_0\in \mathbb{R}^{s}$, $b_i\in\mathbb{R}^{l_i}$, where $i=1,\ldots,\ell$ represents the $i$-th hidden layer of the underlying ANN, from a given data set $D=\{(x_j,f_j)\in\mathbb{R}^r\times\mathbb{R}^s:j=1,\ldots,n_D\}$, with $n_D\in\mathbb{N}$ sufficiently large. A typical approach in this context seeks to find a (global) solution to the nonconvex minimization problem
\begin{equation}\label{mh.ann.min}
\text{minimize }\sum_{j=1}^{n_D}\mathfrak{d}(\mathcal{N}(x_j),f_j)+\mathfrak{r}(W,b)\quad\text{over }(W,b)\in\mathcal{F}_{ad},
\end{equation}
where $\mathcal{N}$ results from a multi-layer ANN that depends on $\Theta:=(W,b)$, with $W:=(W_{0}, \ldots, W_{\ell})$ and $b:=(b_{0}, b_{1}, \ldots, b_{\ell})$. Further, $\mathfrak{d}$ denotes a suitable distance measure, $\mathfrak{r}$ is an optional regularization term inducing some a priori properties of $\Theta$, and $\mathcal{F}_{ad}$ encodes possible additional constraints. While the study of \eqref{mh.ann.min} is an interesting and challenging subject in its own right, here we rather assume that the learning process, i.e., the computation of a suitable $\Theta$, has been completed. We then study analytical properties of the resulting $\mathcal{N}$, or the solution map $\Pi_\mathcal{N}$ or $Q_\mathcal{N}$ in view of \eqref{eq:optimal_control_reduced}, in the context of our target applications and report on associated numerical results.
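A minimal sketch of the learning problem \eqref{mh.ann.min} for a one-hidden-layer network follows; here $\mathfrak{d}$ is taken as the (averaged) squared Euclidean distance, $\mathfrak{r}$ as a quadratic weight-decay term, and plain gradient descent with hand-coded backpropagation stands in for the more sophisticated optimizers used in practice. All sizes, step sizes, and the synthetic data set are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data set D = {(x_j, f_j)}: noisy samples of an unknown scalar map.
n_D = 30
x = rng.uniform(-1.0, 1.0, size=n_D)
f = x**2 + 0.01 * rng.normal(size=n_D)

# One-hidden-layer network; parameters Theta = (W1, b1, w0, b0).
l = 10
W1, b1 = 0.5 * rng.normal(size=l), 0.5 * rng.normal(size=l)
w0, b0 = 0.5 * rng.normal(size=l), 0.0
lam, lr = 1e-4, 0.02                     # regularization weight, step size

def forward(x):
    a = np.tanh(np.outer(x, W1) + b1)    # hidden activations, (n_D, l)
    return a, a @ w0 + b0                # network outputs N(x_j)

def loss():
    _, y = forward(x)
    reg = lam * (W1 @ W1 + b1 @ b1 + w0 @ w0 + b0**2)
    return np.mean((y - f) ** 2) + reg

loss_hist = [loss()]
for _ in range(1000):                    # plain gradient descent
    a, y = forward(x)
    e = 2.0 * (y - f) / n_D              # derivative of the mean sq. error
    g_w0 = a.T @ e + 2 * lam * w0
    g_b0 = e.sum() + 2 * lam * b0
    back = (e[:, None] * w0) * (1.0 - a**2)   # chain rule through tanh
    g_W1 = back.T @ x + 2 * lam * W1
    g_b1 = back.sum(axis=0) + 2 * lam * b1
    W1 -= lr * g_W1; b1 -= lr * g_b1; w0 -= lr * g_w0; b0 -= lr * g_b0
    loss_hist.append(loss())
```

The loss decreases along the iteration, but for this nonconvex problem only a stationary point can be expected in general, which is one reason why we treat the learned $\Theta$ as given in the analysis below.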
\section{Application: Distributed control of semilinear elliptic PDEs }
\label{sec:appl_1}
In our first application we consider the following model problem associated with the distributed optimal control of a semilinear elliptic PDE:
\begin{align}\label{eq:cost}
&\text{minimize}\quad J(y,u):= \frac{1}{2}\|y-g\|_{L^2(\Omega)}^{2} +\frac{\alpha}{2} \|u\|_{L^2(\Omega)}^{2},\quad\text{over }\;(y,u)\in H^1(\Omega)\times L^2(\Omega)\\
&\text{s.t. }\quad \label{eq:state_eq}
-\Delta y + f(x,y)=u\;\; \text{ in }\;\Omega,\quad \partial_{\nu}y=0\;\; \text{on }\; \partial \Omega,\\
&\label{eq:control_constr} \phantom{\text{s.t. }}\quad\; u\in \mathcal{C}_{ad}:=\{v\in L^2(\Omega):\underline{u}(x)\le v(x) \le \overline{u}(x),\quad \text{for a.e. }x\in\Omega\},
\end{align}
where $\underline{u},\overline{u}$ with $\underline{u}\leq \overline{u}$ belong to $L^\infty(\Omega)$, and 'a.e.' stands for 'almost every' in the sense of the Lebesgue measure. Moreover, we have $g\in L^{2}(\Omega)$, and $\Omega \subset \mathbb{R}^{d}$, $d\ge 2$, is a bounded domain with Lipschitz boundary. In view of our general model problem class \eqref{eq:optimal_control} we have $H=U=L^2(\Omega)$, $Y=H^1(\Omega)$, $Z=H^{-1}(\Omega)$, $A=\operatorname{id}$, and $e$ is given by the PDE in \eqref{eq:state_eq}. For more details on the involved Lebesgue and Sobolev spaces we refer to \cite{MR2424078}. Concerning $f$ we invoke the following assumption throughout this section:
\begin{assumption}\label{assu:non_monotone}
The nonlinear function $f=f(x,z):\Omega \times \mathbb{R} \to \mathbb{R}$ is measurable with respect to $x$ for every $z\in \mathbb{R}$ and continuously differentiable with respect to $z$ for almost every $x\in\Omega$.
There exists a function $F:\Omega \times \mathbb{R} \to \mathbb{R}$ such that $\partial_z F(\cdot,z)=f(\cdot,z)$. Moreover, $F$ and $f$ satisfy the following conditions for all $z\in \mathbb{R}$:
\begin{align}\label{eq:growth_rate_Ff}
\abs{f(\cdot,z)}\le b_1+ c_{1}\abs{z}^{p-1}\quad \text{ and }\quad
-f(\cdot,z)z+F(\cdot,z)\leq b_2,
\end{align}
which combined also yield
\begin{equation}\label{eq:growth_rate_Fminusf}
F(\cdot,z)\leq b_0+c_{0}\abs{z}^{p},
\end{equation}
for some constants $b_{0}, b_{1}, b_{2}\in \mathbb{R}$ and $c_{0}, c_{1}>0$, and for $p$ with $1<p\leq \frac{2d}{d-2}$ for $d\geq 3$, $ 1<p<+\infty$ for $d=2$, or $1< p \leq +\infty$ for $d=1$. The interpretation of $p=\infty$ for $d=1$ is that the growth conditions in \eqref{eq:growth_rate_Ff} are not required to hold.
Finally, we assume that $F$ is coercive in the sense that $\lim_{\norm{y}_{L^p(\Omega)}\to \infty} \frac{\int_{\Omega}F(x,y)\,dx}{\norm{y}_{L^p(\Omega)}} = \infty$, and that $F$ is bounded from below, i.e., $F(x,z)\geq F_0$ for some $F_0\in \mathbb{R}$, for all $z\in \mathbb{R}$ and for almost every $x\in\Omega$.
\end{assumption}
The above assumption in particular indicates that both $f$ and $F$ satisfy the Carath\'eodory condition, and thus induce operators of Nemytskii type.
Moreover, observe also that the conditions on $p$ enable the embedding $H^1(\Omega)\subset L^{p}(\Omega)$.
Also note that Assumption \ref{assu:non_monotone} is satisfied for $F(x,z)=\alpha(x) \pi_{p}(z)$ with $\alpha\in L^{\infty}(\Omega)$, $\alpha(x)>0$ for almost every $x\in\Omega$, and $\pi_{p}$ a polynomial of degree $p$ with positive leading coefficient; the term of degree $p$ is taken to be $|z|^{p}$ if $p$ is odd, so that the coercivity assumption is not violated.
Given the above assumption, the PDE \eqref{eq:state_eq} is related to the variational problem
\begin{equation}\label{eq:non_convex_variational}
\text{minimize}\quad G(y):=\frac{1}{2} \|\nabla y\|_{L^{2}(\Omega)}^{2} +\int_\Omega F(x,y)\,dx-\int_{\Omega} uy\,dx\quad\text{ over }\;y\in H^{1}(\Omega).
\end{equation}
A particular example is given by a Ginzburg-Landau model for superconductivity where
$f(z) = \eta^{-1}(z^3-z)$ with a parameter $\eta>0$. It gives rise to the double-well type variational model
\begin{equation}\label{eq:double_well_variational}
\text{minimize}\quad \frac{1}{2} \|\nabla y\|_{L^{2}(\Omega)}^{2} +\frac{1}{4\eta}\int_{\Omega} (y^{2}-1)^{2}dx -\int_{\Omega} uy\,dx\quad\text{ over }\;y\in H^{1}(\Omega),
\end{equation}
for given $u\in L^2(\Omega)$, or, in fact, for $u$ in a more general space. The next proposition shows existence of solutions for \eqref{eq:non_convex_variational}.
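To make the double-well model concrete, the following minimal sketch solves a one-dimensional analogue of the associated state equation $-y''+\eta^{-1}(y^{3}-y)=u$ with homogeneous Neumann conditions by finite differences and Newton's method. Grid size, $\eta$, the constant control $u$, and the first-order Neumann boundary treatment are illustrative choices, not taken from the text:

```python
import numpy as np

# 1-D analogue: -y'' + (y^3 - y)/eta = u on (0,1), y'(0) = y'(1) = 0.
n, eta = 200, 1.0
h = 1.0 / (n - 1)
u = np.ones(n)                            # constant control u = 1

# Finite-difference matrix of -y'' with Neumann boundary rows.
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
A[0, 0], A[0, 1] = 1.0, -1.0              # reflects y'(0) = 0
A[-1, -1], A[-1, -2] = 1.0, -1.0          # reflects y'(1) = 0
A /= h**2

f  = lambda y: (y**3 - y) / eta           # double-well nonlinearity
df = lambda y: (3.0 * y**2 - 1.0) / eta

y = np.ones(n)                            # start near the stable well y = 1
for _ in range(20):                       # Newton iteration
    res = A @ y + f(y) - u
    if np.linalg.norm(res, np.inf) < 1e-10:
        break
    J = A + np.diag(df(y))                # Jacobian of the discrete residual
    y -= np.linalg.solve(J, res)
```

For the constant control $u\equiv 1$ the iteration converges to the constant solution $y^{\ast}$ with $(y^{\ast})^{3}-y^{\ast}=1$, i.e., $y^{\ast}\approx 1.3247$; for nonconstant $u$ or small $\eta$ multiple solutions may exist, consistent with the nonmonotonicity of $f$.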
\begin{proposition}\label{prop:existence_variation}
Let Assumption \ref{assu:non_monotone} hold, and suppose that $u\in L^r(\Omega)$ for some $r \geq \frac{p}{p-1}$.
Then the optimization problem \eqref{eq:non_convex_variational} admits a solution in $H^1(\Omega)$.
\end{proposition}
\begin{proof}
Notice that due to the coercivity assumption we can find a $C>0$ such that $\|u\|_{L^{r}(\Omega)}<CC_{1}$, with $C_{1}$ being the constant involved in the embedding $L^{p}(\Omega) \subset L^{\frac{r}{r-1}}(\Omega)$, which yields
\begin{equation}\label{F_coercive}
\begin{aligned}
\int_{\Omega} F(x,y)\, dx- \int_{\Omega} uy\, dx
&\ge C\|y\|_{L^{p}(\Omega)}-\|u\|_{L^{r}(\Omega)} \|y\|_{L^{\frac{r}{r-1}}(\Omega)}\\
&\ge (CC_{1} -\|u\|_{L^{r}(\Omega)}) \|y\|_{L^{\frac{r}{r-1}}(\Omega)}\ge 0,
\end{aligned}
\end{equation}
provided $\|y\|_{L^{p}(\Omega)}$ is large enough. This together with the lower bound $F\ge F_{0}$ implies that the energy $G$ is bounded from below and thus there is an infimizing sequence $(y_{n})_{n\in \mathbb{N}}\in H^{1}(\Omega)\subset L^{p}(\Omega)$. Using the above inequality one easily deduces that $\|y_{n}\|_{L^{\frac{r}{r-1}}(\Omega)}$ is bounded, and with the help of the Poincar\'e inequality a uniform $H^{1}(\Omega)$ bound is also obtained for that sequence.
Therefore, we only need to show that $G(\cdot)$ is weakly lower semicontinuous in $H^{1}(\Omega)$.
For this, it suffices to check the term involving $F$, since the arguments for the other two terms are straightforward.
Assuming $y_{n}\rightharpoonup y$ in $H^{1}(\Omega)$, by the compact embedding of $H^{1}(\Omega){\hookrightarrow} L^{1}(\Omega)$, we have that
$y_{n}\to y$ almost everywhere, {up to a subsequence}. Due to the continuity of $F$ with respect to the second variable, we have $F(\cdot,y)=\lim_{n\to \infty} F(\cdot,y_n) $ almost everywhere.
Since $F(\cdot,y_n),F(\cdot,y)\geq F_0$, by Fatou's lemma we have
\[\int_{\Omega}F(x,y)\,dx\le \liminf_{n\to\infty} \int_{\Omega} F(x,y_n)\,dx, \]
and thus $G(\cdot)$ is weakly lower semicontinuous.
\end{proof}
Before we proceed, it is useful to recall the following standard result on linear elliptic PDEs \cite{Eva10,Tro10}.
\begin{theorem}\label{thm:exi_linear}
Let $ v\in L^r(\Omega)$, $a\in L^\infty(\Omega)$ with $a>0$. Then the following equation admits a unique solution
\[ -\Delta s + a s =v\quad \text{ in }\;\Omega, \qquad \partial_\nu s=0\;\text{ on }\;\partial \Omega. \]
Furthermore there exist constants $C_h>0$ and $C_l >0$ independent of $a$ and $v$ such that
\begin{equation}\label{eq:linearPDE_energy}
\norm{s}_{H^1(\Omega)} \leq C_h \norm{v}_{L^r(\Omega)} \quad \text{ and } \quad \norm{s}_{C(\overline{\Omega})} \leq C_l \norm{v}_{L^r(\Omega)}.
\end{equation}
\end{theorem}
Using the polynomial growth of $F$ together with the continuous embedding $H^1(\Omega)\subset L^{\frac{r}{r-1}}(\Omega)$, one verifies the Fr\'echet differentiability of $G:H^1(\Omega)\to\mathbb{R}$. The Euler-Lagrange equation associated with \eqref{eq:non_convex_variational} is given by
\begin{equation}\label{eq:non_monotone_y0}
-\Delta y+f(x,y)=u \quad \text{ in }\;\Omega,\qquad \partial_\nu y=0\;\text{ on }\;\partial\Omega,
\end{equation}
and it is satisfied for every solution $y$ of \eqref{eq:non_convex_variational}.
Under Assumption \ref{assu:non_monotone}, the solutions of \eqref{eq:non_monotone_y0} can be uniformly bounded with respect to $\|\cdot\|_{C(\overline{\Omega})}$, as shown next.
\begin{proposition}\label{lem:uniform_C_norm_bounds}
Let Assumption \ref{assu:non_monotone} be satisfied, and let $\mathcal{C}_{ad}\subset L^{\infty}(\Omega)$ be bounded. Then there exists a constant $K>0$ such that for all solutions of \eqref{eq:non_monotone_y0} it holds
\begin{equation}\label{eq:y0_C_estimate}
\|y\|_{H^{1}(\Omega)}+\|y\|_{C(\overline{\Omega})}\le K, \quad \text{ for all } \; u\in \mathcal{C}_{ad}.
\end{equation}
\end{proposition}
\begin{proof}
From the fact that $y\in L^{p}(\Omega)$, the growth condition \eqref{eq:growth_rate_Ff} and the measurability of $f$, we have $f(\cdot,y)\in L^{\frac{p}{p-1}}(\Omega)$.
We can rewrite \eqref{eq:non_monotone_y0} in the following form
\begin{equation}\label{eq:re_elliptic_y0}
-\Delta y+\epsilon y=u +\epsilon y - f(x,y) \quad \text{ in }\;\Omega,\qquad \partial_\nu y=0\;\text{ on }\;\partial\Omega,
\end{equation}
for some $\epsilon>0$.
Let us define $\tilde{r}:= \min\set{\frac{r}{r-1}, \frac{p}{p-1} }$.
Then $u+\epsilon y- f(\cdot,y)\in L^{\tilde{r}}(\Omega) $ since $u\in \mathcal{C}_{ad}\subset L^\infty(\Omega)$.
Applying \eqref{eq:linearPDE_energy} to \eqref{eq:re_elliptic_y0} yields
\begin{equation}\label{eq:est_y0}
\|y\|_{H^{1}(\Omega)}+ \|y\|_{C(\overline{\Omega})}
\leq (C_h+C_l) \left(\norm{u}_{L^{\tilde{r}}(\Omega)} +\epsilon\norm{y}_{L^{\tilde{r}}(\Omega)} + \norm{f(\cdot, y)}_{L^{\tilde{r}}(\Omega)}\right) .
\end{equation}
As all solutions of \eqref{eq:re_elliptic_y0} are stationary points of $G$, in view of \eqref{eq:growth_rate_Ff}, every weak solution $y$ satisfies
\begin{equation}\label{eq:Lp_energy_bounds}
G(y)=\frac{1}{2} \|\nabla y\|_{L^{2}(\Omega)}^{2} +\int_\Omega F(x,y)\,dx-\int_{\Omega} uy\,dx \leq \int_\Omega -f(x,y)y+F(x,y)\,dx\leq b_{2} |\Omega|,
\end{equation}
where we use the weak formulation of \eqref{eq:re_elliptic_y0} tested with $y$.
Using the coercivity of $G$, we can find some constant $M>0$ independent of $y$ such that $\norm{y}_{L^p(\Omega)}\leq M$.
Since $(p-1)\tilde{r}\leq p$, by \eqref{eq:growth_rate_Ff}, we have
\begin{equation}\label{eq:y0_C_estimate_01}
\norm{f(\cdot,y)}_{L^{\tilde{r}}(\Omega)}\leq d_0+d\norm{y^{p-1}}_{L^{\tilde{r}}(\Omega)}\leq d_0+\tilde{d}\norm{y}^{p-1}_{L^p(\Omega)}\le \tilde{M}.
\end{equation}
Returning to \eqref{eq:est_y0}, we choose a sufficiently small $\epsilon>0$ such that the second term on the right-hand side of \eqref{eq:est_y0} is absorbed by $\norm{y}_{H^1(\Omega)}$. Since $L^\infty(\Omega)\subset L^{\tilde{r}}(\Omega)$ and $\mathcal{C}_{ad}$ is bounded, $\norm{u}_{L^{\tilde{r}}(\Omega)}$ is uniformly bounded for all $u\in \mathcal{C}_{ad}$. Finally, taking into account \eqref{eq:est_y0} and \eqref{eq:y0_C_estimate_01} we have
\begin{equation}\label{eq:y0_C_estimate_02}
\|y\|_{H^{1}(\Omega)} +\norm{y}_{C(\overline{\Omega})} \leq (\tilde{C}_h+\tilde{C}_l) (\norm{u}_{L^{\tilde{r}}(\Omega)} + \tilde{M})\le K,
\end{equation}
which is the conclusion.
\end{proof}
Notice that for monotone $f$, one can directly refer to standard results in the literature, e.g., \cite{Tro10}, where uniform bounds on the solution of \eqref{eq:non_monotone_y0} are shown for that case.
\subsection{Continuity and sensitivity of the control-to-state map}
Since $f(\cdot,\cdot)$ might be nonmonotone with respect to the second variable, uniqueness of a solution to the semilinear PDE \eqref{eq:state_eq} may fail. In the monotone case, the continuity result is more straightforward to show; thus we focus on the nonmonotone case here.
Under our standing assumptions, \eqref{eq:state_eq} has a nonempty set of solutions $y$ satisfying
\[\|y\|_{H^{1}(\Omega)} + \norm{y}_{C(\overline{\Omega})}\leq K\]
for some constant $K$ independent of $u$, since $\mathcal{C}_{ad}$ is bounded. The associated continuity result, stated next, relies on a $\Gamma$--convergence technique. We note that for this section we take $r=2$.
\begin{proposition}\label{prop:double_well_Gamma_Conv}
Let $u_{n}\to u$ in $L^{2}(\Omega)$ and $G_{n},G: H^{1}(\Omega)\to \mathbb{R}$ be the corresponding energies in \eqref{eq:non_convex_variational}. Then $G_{n}$ $\Gamma$--converges to $G$ with respect to the $H^{1}$ topology. Furthermore, $G_{n}$ is equi-coercive.
\end{proposition}
\begin{proof}
{Observe first that one easily checks that $G_{n}$ $\Gamma$--converges to $G$. This is because the function $\frac{1}{2}\|\nabla (\cdot)\|_{L^{2}(\Omega)}^{2}+\int_\Omega F(x, \cdot)\,dx$ is weakly lower semicontinuous with respect to the $H^{1}(\Omega)$ convergence (and hence it $\Gamma$--converges to itself), while the function $y\mapsto\int_{\Omega}u_{n}y\,dx$ continuously converges to the function $y\mapsto \int_{\Omega}u y\,dx$ (see \cite[Def. 4.7]{dalmasogamma} for the notion of continuous convergence). The assertion follows from the stability of $\Gamma$-convergence under continuous perturbations \cite[Prop. 6.20]{dalmasogamma}.}
In order to see that $G_{n}$ is equi-coercive, it suffices to find a lower semicontinuous coercive function $\Psi: H^{1}(\Omega) \to \mathbb{R}$ such that $G_{n}\ge \Psi$ on $H^{1}(\Omega)$, cf. \cite[Prop. 7.7]{dalmasogamma}. This follows from the fact that $(\|u_{n}\|_{L^{2}(\Omega)})_{n\in\mathbb{N}}$ is a bounded sequence and from the coercivity condition in Assumption \ref{assu:non_monotone}, see also \eqref{F_coercive}.
\end{proof}
With the help of $\Gamma$--convergence and equi-coercivity one can get the classical results on $\Gamma$--convergence with respect to global and local minimizers. It is of particular interest whether $y_{0}$ is an isolated local minimizer of $G$ (and in particular satisfies \eqref{eq:state_eq}). In this case there exists a sequence $\tilde{y}_{n}$ with $\tilde{y}_{n}\to y_{0}$ in $H^{1}(\Omega)$ such that for all sufficiently large $n$, $\tilde{y}_{n}$ is a local minimizer of $G_{n}$ (hence it also satisfies \eqref{eq:state_eq}); see \cite{braides2014convergence}.
{This implies that if $u_{n}\to u_{0}$ in $L^{2}(\Omega)$ and $y_{0}\in \Pi (u_{0})$ is an isolated local minimizer of $G$, then there exists a sequence $(y_{n})_{n\in \mathbb{N}}$ in $H^{1}(\Omega)$ such that $y_{n}\in \Pi (u_{n})$ and $y_{n}\to y_{0}$ in $H^{1}(\Omega)$.}
\begin{remark}\label{rem:local_minimizer}
We note that solutions of the PDE \eqref{eq:state_eq} are not necessarily local minimizers of the variational problem \eqref{eq:non_convex_variational}. In order to make sure that $y_0$ is an isolated local minimizer, one can check second-order conditions on \eqref{eq:non_convex_variational}. In this context, second-order sufficiency relates to
$(s,-\Delta s + \partial_y f(\cdot,y_0)s)>\epsilon \norm{s}^2_{H^1(\Omega)}$ for all $s\in H^1(\Omega)$ with some $\epsilon>0$.
Therefore, if $f(\cdot,\cdot)$ is a strictly monotone function with respect to its second variable, then the positive definiteness condition is automatically guaranteed.
For the more general case, it turns out that a similar, but yet milder condition (see \eqref{eq:smallness} below) helps to establish the sensitivity result for the control-to-state map.
\end{remark}
Given this approximating sequence $(y_{n})_{n\in \mathbb{N}}$ for $y_{0}\in \Pi(u_{0})$, convergence rates and differentiability of the control-to-state map in a certain sense are shown next. For this, we also assume that
\begin{equation}\label{lip_partialy_f}
\text{$\forall \,M>0\:$ $\exists\, L_{M}>0\,$: } \text{$|\partial_{y}f(x,y_{1})-\partial_{y}f(x,y_{2})|\le L_{M}|y_{1}-y_{2}|$, }
\end{equation}
for almost every $x\in\Omega$ and for all $y_{1}, y_{2}\in [-M,M]$.
This also implies
\begin{equation}\label{partialz_f_bounded}
\text{$\forall \,M>0\:$ $\exists\, C>0\,$: }\quad |\partial_{y}f(x,y)|<C \text{ for a.e. $x\in\Omega$ and $\forall \,y\in[-M,M]$.}
\end{equation}
\begin{theorem}\label{thm:continuity_multi_map}
Assume that \eqref{lip_partialy_f} holds for $f$, let $\Pi: L^{2}(\Omega) \rightrightarrows H^{1}(\Omega)$ be the possibly multi-valued control-to-state map of \eqref{eq:state_eq} and fix some $u_{0}, h\in L^{2}(\Omega)$ as well as $y_{0}\in \Pi(u_{0})$.
Define $(\partial_y f(\cdot,y_0))^-:=\min \set{\partial_y f(\cdot,y_0),0}$, and assume that
\begin{equation}\label{eq:smallness}
\norm{(\partial_y f(\cdot,y_0))^-}_{L^2(\Omega)}< \frac{1}{C_l} \quad \text{ and } \quad \norm{(\partial_y f(\cdot,y_0))^-}_{L^\infty(\Omega)}< \frac{1}{C_h},
\end{equation}
where $C_l$ and $C_h$ are the positive constants defined in \eqref{eq:linearPDE_energy}.
Suppose $u_{n}=u_{0}+t_{n}h$ for a sequence $t_{n}\to 0$, and suppose there exists $y_{{n}}\in \Pi(u_{{n}})$ with $y_{{n}}\to y_0$ in $H^{1}(\Omega)$. Then we have
\begin{equation}\label{multi_valued_cnt}
\|y_{{n}}-y_{0}\|_{H^{1}(\Omega)}\le C t_{n},
\end{equation}
for some constant $C$ and large enough $n\in\mathbb{N}$. Moreover, one has that every weak cluster point of $\frac{y_{{n}}-y_{0}}{t_{n}} $, denoted by $p$,
solves the following linear PDE
\[
-\Delta p +\partial_y f(\cdot,y_0)p=h \quad \text{ in } \;\Omega,\qquad
\partial_\nu p=0 \; \text{ on }\; \partial \Omega.
\]
In particular, for every $h\in L^2(\Omega)$, $p$ satisfies the energy bounds:
\begin{equation}\label{eq:adjoint_bounds}
\norm{p}_{H^1(\Omega)}\leq C_H\norm{h}_{L^2(\Omega)} \quad \text{ and } \quad \norm{p}_{C(\overline{\Omega})}\leq C_c\norm{h}_{L^2(\Omega)} ,
\end{equation}
with constants $C_H$ and $C_c$ depending on $C_h$ and $C_l$.
\end{theorem}
\begin{proof}
Subtracting the equations that correspond to the pairs $(u_{n},y_{n})$ and $(u_{0}, y_{0})$ and using the mean value theorem, we get
\begin{equation}\label{eq:difference}
-\Delta (y_{{n}}-y_{0}) =t_{n}h + f(\cdot,y_{0})- f(\cdot,y_{{n}})= t_n h - \partial_y f(\cdot,y_0+\gamma_h(y_{{n}}-y_{0})) (y_{{n}}-y_{0}),
\end{equation}
where $\gamma_h\in L^\infty(\Omega)$ with $\norm{\gamma_h}_{L^\infty(\Omega)}\leq 1$, see Remark \ref{rem:mean_value_thm} regarding measurability of such $\gamma_{h}$.
Note that $y_{n},y_0\in C(\overline{\Omega})$ with a uniform bound $K>0$, therefore from \eqref{partialz_f_bounded} we have $\partial_y f(\cdot,y_0+\gamma_h(y_{{n}}-y_{0}))\in L^\infty(\Omega)$.
Then, given $\epsilon>0$, we rewrite \eqref{eq:difference} as
\begin{equation}\label{eq:difference1}
-\Delta (y_{{n}}-y_{0})+(\epsilon+(\partial_y f(\cdot,\xi_n^h))^+)(y_{{n}}-y_{0})=t_{n}h + (\epsilon+(\partial_y f(\cdot,\xi_n^h))^+ -\partial_y f(\cdot,\xi_n^h) )(y_{{n}}-y_{0}),
\end{equation}
where $\xi_n^h:=y_0+\gamma_h(y_{{n}}-y_{0})$, and $(\partial_y f(\cdot,\xi_n^h))^+=\max\set{ \partial_y f(\cdot,\xi_n^h),0}$.
Now, using \eqref{eq:linearPDE_energy}, we have
\begin{equation}\label{eq:pde_residual}
\begin{aligned}
&\frac{ \epsilon}{C_h} \norm{y_{n}-y_0}_{H^1(\Omega)}+\norm{y_{n}-y_0}_{L^\infty(\Omega)} \\ \leq& (\epsilon+C_l)\left(t_{n}\norm{h}_{L^2(\Omega)} +\norm{ (\epsilon+(\partial_y f(\cdot,\xi_n^h))^+ -\partial_y f(\cdot,\xi_n^h))(y_{{n}}-y_{0})}_{L^2(\Omega)}\right)\\
\leq &(\epsilon+C_l) \left(t_{n}\norm{h}_{L^2(\Omega)} + \norm{\epsilon +(\partial_y f(\cdot,\xi_n^h))^-}_{L^2(\Omega)} \norm{y_{{n}}-y_{0}}_{L^\infty(\Omega)}\right).
\end{aligned}
\end{equation}
The last inequality holds since both $y_n$ and $y_0$ are $C(\overline{\Omega})$ functions.
Because $y_n \to y_0$ in $H^1(\Omega)$, we also have that $\xi_n^h\to y_0$ in $L^2(\Omega)$.
From the continuity of $\partial_y f(x,\cdot)$, the fact that $y_{n}$, $y_{0}$ are uniformly bounded in $C(\overline{\Omega})$ and from dominated convergence, we have that $\partial_y f(\cdot,\xi_n^h)\to \partial_y f(\cdot,y_0)$ in $L^2(\Omega)$.
Thus, because of \eqref{eq:smallness}, there exists $\epsilon=\epsilon_0$ small enough, such that for sufficiently large $n$, we have $(\epsilon_0+C_l) \norm{\epsilon_0 +(\partial_y f(\cdot,\xi_n^h))^-}_{L^2(\Omega)}\leq 1$.
Then \eqref{eq:pde_residual} leads to
\begin{equation}\label{eq:diff_bounds}
\norm{y_{n}-y_0}_{H^1(\Omega)} \leq \frac{C_h(\epsilon_0+C_l)}{\epsilon_0} \norm{h}_{L^2(\Omega)}t_{n}.
\end{equation}
From the above inequalities we have that $(\frac{y_{{n}}-y_{0}}{t_{n}})_{n\in\mathbb{N}}$ is uniformly bounded in $H^{1}(\Omega)$ and therefore admits a weakly convergent subsequence (unrelabelled) with weak limit $p$.
Then, dividing by $t_{n}$ and letting $t_{n}\to 0$ in \eqref{eq:difference}, we have that $p$ satisfies the following equation
\begin{equation}\label{eq:direc_derivative}
-\Delta p +\partial_y f(\cdot,y_{0})p=h \quad \text{in } \Omega,\quad
\partial_\nu p=0 \; \text{ on }\; \partial \Omega.
\end{equation}
Note that \eqref{eq:diff_bounds} readily implies the first energy bound in \eqref{eq:adjoint_bounds}.
For the second bound in \eqref{eq:adjoint_bounds}, the procedure is similar. For this we consider
\begin{equation}\label{eq:pde_residual2}
\begin{aligned}
& \norm{y_{n}-y_0}_{H^1(\Omega)}+\frac{ \epsilon}{C_l}\norm{y_{n}-y_0}_{C(\overline{\Omega})} \\
\leq &(\epsilon+C_h) \left(t_{n}\norm{h}_{L^2(\Omega)} + \norm{\epsilon +(\partial_y f(\cdot,\xi_n^h))^-}_{L^\infty(\Omega)} \norm{y_{{n}}-y_{0}}_{L^2(\Omega)}\right).
\end{aligned}
\end{equation}
Invoking now the second condition in \eqref{eq:smallness}, and using exactly the same steps as for the first bound of \eqref{eq:adjoint_bounds}, we find some $\epsilon'_0>0$ to conclude the second bound in \eqref{eq:adjoint_bounds} when $n$ is sufficiently large.
\end{proof}
\begin{remark}\label{rem:sensitivity}
The proof of Theorem \ref{thm:continuity_multi_map} provides an alternative strategy for proving existence and energy estimates of solutions for certain types of linear elliptic PDEs, e.g., as in \eqref{eq:direc_derivative}, when the elliptic coercivity is mildly violated. Also note that in the monotone case, $(\partial_y f(\cdot,y_0))^-\equiv 0$, and thus the conditions in \eqref{eq:smallness} are always fulfilled.
\end{remark}
\subsection{Existence results for learning-informed semilinear PDEs}
As motivated in the introduction, in many applications the precise form of $f$ is not known explicitly, but rather it can be inferred from given data only.
Here we are particularly interested in using neural networks to learn the hidden physical law or nonlinear mapping from such data.
The corresponding existence result for PDEs that include such neural network approximations is stated next.
\begin{proposition}\label{prop:first_aprox}
Let $f:\Omega \times \mathbb{R} \to \mathbb{R}$ and $F:\Omega\times \mathbb{R} \to \mathbb{R} $ be given as in Assumption \ref{assu:non_monotone} with the extra assumption that $f\in C(\overline{\Omega}\times \mathbb{R})$.
Then, for every $\epsilon>0$ there exists a neural network $\mathcal{N}\in C^{\infty}( \mathbb{R}^{d}\times \mathbb{R})$ such that
\begin{equation}\label{g_N}
\sup_{\| y\|_{L^{\infty}(\Omega)}< K} \|f(\cdot,y)-\mathcal{N}(\cdot,y)\|_{U}<\epsilon,
\end{equation}
with $K$ cf. \eqref{eq:y0_C_estimate}. Moreover, the learning-informed PDE
\begin{equation}\label{eq:nonconvex_learn}
\begin{aligned}
-\Delta y + \mathcal{N}(\cdot,y)&=u\quad \text{ in }\; \Omega,\qquad
\partial_\nu y=0\; \text{ on }\; \partial \Omega,
\end{aligned}
\end{equation}
admits a weak solution which also satisfies \eqref{eq:y0_C_estimate} for sufficiently small $\epsilon>0$.
\end{proposition}
\begin{proof}
From Theorem \ref{thm:function_app} we have that for every $\tilde{\epsilon}>0$ there exists a neural network $\mathcal{N}\in C^{\infty}( \mathbb{R}^{d}\times \mathbb{R})$ such that $|f(x,y)-\mathcal{N}(x,y)|< \tilde{\epsilon}$ for every $(x,y)\in \overline{\Omega}\times [-K-1, K+1]$.
Thus, the existence of $\mathcal{N}$ such that \eqref{g_N} holds follows directly; note that even $U=L^{\infty}(\Omega)$ is feasible in \eqref{g_N}.
Consider next the function $N:\Omega\times \mathbb{R} \to \mathbb{R}$ given by
\[
N(x,t):=
\begin{cases}
\int_{0}^{t} \mathcal{N}(x,s)\,ds + F(x,0), & -(K+1)\le t\le K+1,\\
r_{0}(x) +F(x,t), & t>K+1,\\
r_{1}(x) +F(x,t), & t<-(K+1),
\end{cases}
\]
with $r_{0}(x):=\int_{0}^{K+1}\mathcal{N}(x,s)\,ds+F(x,0)-F(x,K+1)$, $r_{1}(x):=\int_{0}^{-K-1}\mathcal{N}(x,s)\,ds+F(x,0)-F(x,-K-1)$. Notice that $N(x,t)$ is continuous with $|{{N}}(x,t)-F(x,t)|< \tilde{\epsilon}(K+1)$ for every $t\in \mathbb{R}$ and $x\in \Omega$. Next we apply some smoothing of $N(x,\cdot)$ in a small neighbourhood of $\Omega\times \{-K-1\}$ and $\Omega\times \{K+1\}$ such that the previous approximation estimate still holds true, and continue to use the symbol $N$ for the result. Then $N(x,\cdot)$ is differentiable with respect to the second variable for every $x\in \Omega$. Consider now the minimization problem
\begin{equation}\label{eq:nonconvex_variational_learn}
\inf_{y\in H^{1}(\Omega)} \frac{1}{2} \|\nabla y\|_{L^{2}(\Omega)}^{2} +\int_{\Omega} {N}(x,y)\,dx -\int_{\Omega} uy\,dx.
\end{equation}
One can now prove existence of a solution to \eqref{eq:nonconvex_variational_learn} analogously to the proof of Proposition \ref{prop:existence_variation} for \eqref{eq:non_convex_variational}.
We can show that the functional $y\mapsto \int_{\Omega} N(x,y)\,dx$ is Fr\'echet differentiable in $H^{1}(\Omega)$ with Fr\'echet derivative $h\mapsto \int_\Omega \partial_y N(x,y)h\,dx$; see the discussion after this proof. Thus any solution to \eqref{eq:nonconvex_variational_learn} satisfies the PDE
\begin{equation}\label{eq:nonconvex_learn_ex}
-\Delta y + \partial_y N(\cdot,y)=u,\quad \text{ in }\; \Omega , \quad
\partial_\nu y=0\; \text{ on }\; \partial \Omega.
\end{equation}
By following estimates analogous to the ones leading to \eqref{eq:y0_C_estimate}, we have in view of \eqref{eq:y0_C_estimate_01}--\eqref{eq:y0_C_estimate_02} and \eqref{g_N}, that any solution $y_{0}$ also satisfies $\|y_{0}\|_{C(\overline{\Omega})}<K$ when $\epsilon$ is sufficiently small. Since $\partial_y N=\mathcal{N}$ on $\Omega\times [-K,K]$ we conclude that $y_{0}$ is a solution of \eqref{eq:nonconvex_learn}.
\end{proof}
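For completeness, the uniform bound on $N-F$ used in the preceding proof can be verified directly (using that, by construction, $F(x,\cdot)$ is an antiderivative of $f(x,\cdot)$, i.e., $\partial_{t}F(x,t)=f(x,t)$): for $|t|\le K+1$,
\[
|N(x,t)-F(x,t)|=\left|\int_{0}^{t}\bigl(\mathcal{N}(x,s)-f(x,s)\bigr)\,ds\right|\le \tilde{\epsilon}\,|t|\le \tilde{\epsilon}\,(K+1),
\]
while for $|t|>K+1$ the difference $N(x,t)-F(x,t)$ is constant in $t$ by construction and equals its value at $t=\pm(K+1)$, so the same bound holds on all of $\mathbb{R}$.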
Concerning the announced differentiability of $\Phi_N(y):= \int_{\Omega} {N}(x,y)\,dx$, define
\[\Phi_N^\prime(y)h:=\int_{\Omega}\partial_y {N}(x,y)h\,dx.\]
Since $\frac{\abs{\Phi_N(y+h)-\Phi_N(y)-\Phi_N^\prime(y)h}}{\norm{h}_{H^1(\Omega)}}
=\frac{\abs{\Phi_N^\prime(y+\tau_h h)h-\Phi_N^\prime(y)h}}{\norm{h}_{H^1(\Omega)}}$ for some $\tau_h\in L^\infty(\Omega)$ with $\norm{\tau_h}_{L^\infty(\Omega)}\leq 1$, using the mean value theorem along with $H^1(\Omega)\subset L^{\frac{r}{r-1}}(\Omega)$, we have for a $C>0$
\begin{equation}\label{eq:differ_N}
\begin{aligned}
\frac{\abs{\Phi_N(y+h)-\Phi_N(y)-\Phi_N^\prime(y)h}}{\norm{h}_{H^1(\Omega)}}
\leq C \norm{\partial_y ({N} (\cdot,y+\tau_h h)-{N} (\cdot,y))}_{L^{r}(\Omega)}.
\end{aligned}
\end{equation}
Note that by definition, the growth rate of ${N}(x,\cdot)$ outside of $[-K-1,K+1]$ is exactly the same as the one of $F(x,\cdot)$.
Therefore $\partial_y {N}(\cdot,y)$ is indeed an element of $L^{r}(\Omega)$.
Finally, we need to verify that
\begin{equation*}
\begin{aligned}
\lim_{h\to 0} \norm{\partial_y {N}(y+\tau_h h)-\partial_y N(y)}_{L^{r}(\Omega)}=0 \quad \text{ for } h\in H^1(\Omega).
\end{aligned}
\end{equation*}
This is true due to the continuity of the Nemytskii operator $\partial_y {N}:L^{\frac{r}{r-1}}(\Omega)\to L^{r}(\Omega)$.
\begin{remark}\label{rem:mean_value_thm}
Notice that in \eqref{eq:differ_N} the mean value theorem is applied for every $x\in\Omega$ and $\tau_{h}$ is defined as a selector function of the multi-valued map $\tau:\Omega\rightrightarrows [0,1]$ with
\[\tau(x)=\{\lambda\in [0,1]:\, N(x,y(x)+h(x))-N(x,y(x))- \partial_y N(x,y(x)+\lambda h(x))h(x)=0\}.\]
Even though by definition $\tau_{h}$ is a bounded function, one still needs to show its measurability so that $\tau_{h}\in L^{\infty}(\Omega)$. Such a measurable selector function is indeed guaranteed by the Kuratowski--Ryll-Nardzewski selection theorem \cite[Theorem 18.13]{aliprantis}, whose conditions can be verified in our case. In fact, we may choose $\tau_{h}(x):=\max \tau(x)$; see \cite[Theorem 18.19]{aliprantis}.
\end{remark}
Note that the setup above covers a wide range of problems, including the class of problems where the nonlinear function $f(\cdot,\cdot)$ is strictly monotone with respect to the second variable. In that case, the nonlinear PDE \eqref{eq:state_eq} admits a unique solution \cite{Tro10}. We also point out that in the monotone case direct methods allow one to prove the existence of solutions and energy bounds for a wider array of monotone nonlinearities (such as, e.g., exponential functions). Moreover, in that case the regularity and growth conditions on the nonlinear function $f$ can be relaxed.
Since pursuing such generality is not the focus of the current paper, we skip detailed discussions here; we note, however, that structural aspects of the control problem, such as first-order optimality and adjoints, remain intact even under relaxed conditions.
To give an example of this, we show in the next proposition how strict monotonicity for the learning-based model can indeed be preserved.
\begin{proposition}\label{prop:N_approx_f}
Let $f:\Omega\times \mathbb{R}\to \mathbb{R}$ satisfy Assumption \ref{assu:non_monotone} and $\partial_y f(x,y)\geq C_f$ for almost every $x\in\Omega$ and $y\in \mathbb{R}$ for some $C_f>0$. We additionally assume that $f\in C(\Omega\times \mathbb{R})$.
Then for every $\epsilon>0$, for every compact set $\Omega_c \subset \Omega$, and for every $M>0$, there exists a neural network $\mathcal{N}:=\mathcal{N}^{\epsilon}_{\Omega_c , M} \in C^{\infty}( \mathbb{R}^{d} \times \mathbb{R})$ such that
\begin{align}
&|f(x,z)-\mathcal{N}(x,z)|<\epsilon, \quad \text{ for every } x\in \Omega_c \text{ and every } z\in [-M, M], \label{NN_f_1}\\
&\partial_{z}\mathcal{N}(x,z)\ge C_{\mathcal{N}},\quad \text{ for all } x\in \Omega \text{ and } z\in[-M, M] \text{ for some } C_{\mathcal{N}}>0.\label{NN_f_2}
\end{align}
If $f\in C^1(\Omega\times \mathbb{R})$, then we have in addition that
\begin{align}
&|\partial_{z} f(x,z)-\partial_{z}\mathcal{N}(x,z)|<\epsilon, \quad \text{ for all } x\in \Omega_c \text{ and } z\in [-M, M]. \label{NN_nablaf_1}
\end{align}
\end{proposition}
\begin{proof}
Let $\epsilon>0$, $\Omega_c\subset \Omega$ compact, and $M>0$. Further, let $\tilde{f}: \mathbb{R}^{d} \times \mathbb{R}\to \mathbb{R}$ be the extension by zero of $f$ outside $\Omega \times \mathbb{R}$, $\rho_{\delta}$ a standard mollifier \cite[Sec.2.2.2]{MR3288271}, and $\tilde{f}_{\delta}:=\tilde{f}\ast \rho_{\delta}: \mathbb{R}^{d}\times \mathbb{R}\to \mathbb{R}$. Next we choose $\delta>0$ such that the following hold true: (i) $\bar B(x,\delta):=\{{\hat{x}}\in\mathbb{R}^d:\|{\hat{x}}-x\|_2\leq\delta\}\subset\Omega$ for every $x\in \Omega_c$, (ii) $\tilde{f}_{\delta}(x,y)=f_{\delta}(x,y)$ for $(x,y)\in \Omega_c\times \mathbb{R}$, and (iii) $|f(x,y)-\tilde{f}_{\delta}(x,y)|<\epsilon/2$ for every $x\in \Omega_c$, $y\in [-M,M]$. Moreover, one finds that for sufficiently small $\delta>0$ it holds that $\partial_{y} \tilde{f}_{\delta}(x,y)\ge C_{\tilde{f}}$ for some $C_{\tilde{f}}>0$ for all $x\in\Omega$, $y\in \mathbb{R}$. Indeed, note that Assumption \ref{assu:non_monotone} and the mean value theorem yield for almost every $x'\in\Omega$, $y_{1}<y_{2}$
\begin{equation}\label{bigger_linear}
f(x',y_{2})-f(x',y_{1})\ge C_{f} (y_{2}-y_{1}).
\end{equation}
Hence, using $\rho_\delta(\cdot)=\delta^{-(d+1)}\rho(\cdot / \delta)$ \cite[Sec.2.2.2]{MR3288271}, we have
\begin{align*}
& \tilde{f}_{\delta}(x,y_{1})
= \int_{B_{\delta}(x,y_{1})\cap (\Omega\times \mathbb{R})} \tilde{f}(x',y') \delta^{-d-1}\rho\left (\frac{(x,y_{1})-(x',y')}{\delta} \right)d(x',y')\\
& \le \int_{B_{\delta}(x,y_{2})\cap(\Omega\times \mathbb{R})} \left(\tilde{f}(x',y') -C_{f}(y_{2}-y_{1})\right) \delta^{-d-1}\rho\left (\frac{(x,y_{2})-(x',y')}{\delta} \right)d(x',y')\\
& = \tilde{f}_{\delta}(x,y_{2})- C_{f} \Big (\underbrace{\int_{B_{\delta}(x,y_{2})\cap (\Omega\times \mathbb{R})} \delta^{-d-1} \rho\left (\frac{(x,y_{2})-(x',y')}{\delta} \right)}_{=:\tilde{C}} d(x',y')\Big)(y_{2}-y_{1})\\
& = \tilde{f}_{\delta}(x,y_{2})- C_{f}\tilde{C}(y_{2}-y_{1}).
\end{align*}
We now use the fact that the boundary of $\Omega$ is Lipschitz to deduce that for some small enough $\delta>0$ we have $\tilde{C}:=\tilde{C}_{x,y}>c$ for some $c>0$, for every $x\in\Omega$, $y\in \mathbb{R}$, and set $C_{\tilde{f}}:=C_{f}c$. Hence from the last inequality above we deduce $\partial_{y} \tilde{f}_{\delta}(x,y)\ge C_{\tilde{f}}$. Utilizing now Theorems \ref{thm:function_app} and \ref{thm:deriv_app} for the compact set $\overline{\Omega}\times [-M,M]\subset \mathbb{R}^{d}\times \mathbb{R}$, we find a neural network $\mathcal{N}\in C^{\infty}( \mathbb{R}^{d}\times \mathbb{R})$ such that $|\tilde{f}_{\delta}(x,y)-\mathcal{N}(x,y)|<\epsilon/2$ as well as $|\partial_{y} \tilde{f}_{\delta}(x,y)-\partial_{y} \mathcal{N}(x,y)|<C_{\tilde{f}}/4$ for every $x\in \overline{\Omega}$ and $y\in [-M,M]$. Then with the use of the triangle inequality we get \eqref{NN_f_1} and \eqref{NN_f_2} for $C_{\mathcal{N}}=\frac{3}{4}C_{\tilde{f}}$.
Finally, when $f$ is also continuously differentiable in $\Omega \times \mathbb{R}$, we can proceed as before with the extra care to choose $\delta>0$ such that $|\partial_{y}f(x,y)-\partial_{y}\tilde{f}_{\delta}(x,y)|<\epsilon/2$ for every $x\in \Omega_c$, $y\in [-M,M]$.
\end{proof}
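The mechanism of the preceding proof, namely that mollification preserves a uniform lower bound on the derivative while incurring only a small approximation error, can be illustrated numerically. The following sketch (in Python, with a hypothetical one-dimensional nonlinearity and no spatial variable; all concrete choices are illustrative) mollifies a monotone $f$ with a standard bump kernel and checks both properties on a grid:

```python
import numpy as np

# Hypothetical monotone nonlinearity: f'(y) = 1 + 0.5*cos(y) >= C_f = 0.5
def f(y):
    return y + 0.5 * np.sin(y)

C_f, delta = 0.5, 0.1

# Standard bump mollifier supported on [-delta, delta], normalized to unit mass
s = np.linspace(-delta, delta, 401)
ds = s[1] - s[0]
inner = np.maximum(1.0 - (s / delta) ** 2, 1e-12)
rho = np.where(np.abs(s) < delta, np.exp(-1.0 / inner), 0.0)
rho /= rho.sum() * ds

# Mollification f_delta(y) = int f(y - s) rho(s) ds, evaluated on a grid
y = np.linspace(-3.0, 3.0, 601)
f_delta = np.array([np.sum(f(yi - s) * rho) * ds for yi in y])

# Difference quotients of the mollified function inherit the bound f' >= C_f,
# since they are convex combinations of difference quotients of f
slopes = np.diff(f_delta) / np.diff(y)
print(slopes.min())  # stays above C_f up to quadrature error
```

The minimal slope stays above $C_f$ because each difference quotient of the mollified function is a weighted average of difference quotients of $f$, exactly as in the convolution estimate of the proof.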
Note that if $f$ is bounded on $\Omega\times [-K,K]$, for instance if $f\in C(\overline{\Omega}\times \mathbb{R})$ as in Proposition \ref{prop:first_aprox}, then the estimate \eqref{g_N} holds here as well. If analogous conditions hold for the derivative of $f$, then with the help of \eqref{NN_nablaf_1} we also have
\begin{equation}\label{sup_L2_app_der}
\sup_{\|y\|_{L^{\infty}(\Omega)}<K} \|\partial_{y}f(\cdot,y)-\partial_{y}\mathcal{N}(\cdot,y)\|_{U}\le \epsilon.
\end{equation}
\subsection{Error analysis for the control-to-state map}
Our next target is to show the error bounds \eqref{eq:operator_err} and \eqref{eq:operator_deriv_err} for the solution maps (control-to-state maps) of the learning-informed versus the original PDE.
Before we proceed, we first show the local Lipschitz conditions \eqref{eq:Q_Lip} and \eqref{eq:second_Lip}.
For the ease of presentation we confine ourselves to a
monotone $f(x,\cdot)$ here. For the nonmonotone $f(x,\cdot)$, we would require \eqref{eq:smallness} to be satisfied for solutions uniformly bounded by $K$.
Consider the following pairs of equations for $i\in\{1,2\}$
\begin{equation}\label{eq:pair3}
\left\{
\begin{aligned}
-\Delta y_i+ f(\cdot,y_i) &=u_i\;\text{ in }\;\Omega,\;\; \\
\partial_{\nu} y_i &=0\;\;\text{ on }\;\partial\Omega,\;\;
\end{aligned} \right.\quad
\text{ and } \quad
\left\{
\begin{aligned}
-\Delta p_i+\partial_y f(x,\bar{y}_{i}) p_i &=v\;\text{ in }\;\Omega,\;\; \\
\partial_{\nu} p_i &=0\;\text{ on }\;\partial\Omega,\;\;
\end{aligned} \right.\quad
\end{equation}
where $v\in U$ with $\|v\|_{U}=1$, $\overline{y}_{i}=\Pi (u_{i})$, and $p_i=\Pi^\prime(u_i) v$ for $i=1,2$.
Taking the difference of the first equations in \eqref{eq:pair3} for $i=1,2$, testing with $y_1-y_2$, and using the mean value theorem
we get for some $C_f>0$ that
\[ \begin{aligned}
C_{f} \norm{y_1-y_2}_{H}^2
&\leq \|\nabla y_{1}-\nabla y_{2}\|_{L^{2}(\Omega)}^{2}+ \int_{\Omega} (f(x,y_{1})-f(x,y_{2}))(y_{1}-y_{2})\,dx\\&= \int_{\Omega} (u_{1}-u_{2})(y_{1}-y_{2})\,dx
\leq \norm{u_1-u_2}_U \norm{y_1-y_2}_H,
\end{aligned}
\]
which yields the Lipschitz property $\norm{y_1-y_2}_{H}\leq \frac{1}{C_f} \norm{u_1-u_2}_U $.
In order to show the local Lipschitz continuity of $\Pi'$, we need to further assume condition \eqref{lip_partialy_f}.
Consider now the difference of the right-hand side equations for $i=1,2$ in \eqref{eq:pair3}. Using standard PDE arguments (see, e.g., \cite[Theorem 4.7]{Tro10}) we find
\[ \begin{aligned}
\norm{p_1-p_2}_{H^1(\Omega)} &+\norm{p_1-p_2}_{C(\bar{\Omega})}
\leq C\norm{(\partial_y f(\cdot,\bar{y}_1) - \partial_y f(\cdot,\bar{y}_2))p_1}_{L^2(\Omega)}\\
&\leq CL\norm{p_1}_{C(\overline{\Omega})}\norm{\bar{y}_1-\bar{y}_2}_{L^2(\Omega)}
\leq C\frac{L}{C_{f}} c\norm{v}_{L^2(\Omega)}\norm{u_1-u_2}_{L^2(\Omega)}.
\end{aligned}
\]
Here, we also used the estimate $\|p_{1}\|_{C(\overline{\Omega})}\le c \|v\|_{L^{2}(\Omega)}$ from Theorem \ref{thm:continuity_multi_map}.
For the desired error bounds we focus now on the state equations
\begin{equation}\label{eq:pair1}
\left\{
\begin{aligned}
-\Delta y +\mathcal{N}(x,y)&=u\;\text{ in }\;\Omega,\;\; \\
\partial_{\nu}y &=0\;\text{ on }\; \partial \Omega,\;\;
\end{aligned} \right.\quad
\text{ and } \quad
\left\{
\begin{aligned}
-\Delta y +f(x,y)&=u \;\text{ in }\;\Omega,\\
\partial_{\nu}y&=0 \; \text{ on }\; \partial \Omega,
\end{aligned}\right.
\end{equation}
and the associated adjoints
\begin{equation}\label{eq:pair2}
\left\{
\begin{aligned}
-\Delta p +\partial_{y}\mathcal{N}(x,\bar{y})p&=v\;\text{ in }\;\Omega,\;\; \\
\partial_{\nu} p &=0\;\text{ on }\; \partial \Omega,\;\;
\end{aligned} \right.\quad
\text{ and } \quad
\left\{
\begin{aligned}
-\Delta p + \partial_y f(x,\bar{y}) p &= v\;\; \text{ in }\;\Omega,\\
\partial_{\nu}p &=0\;\; \text{ on }\; \partial \Omega.
\end{aligned}\right.
\end{equation}
The main approximation result is stated below. It guarantees that the uniform approximation properties of the control-to-state operator $\Pi$ and its derivative (compare \eqref{eq:operator_err} and \eqref{eq:operator_deriv_err} of Theorem \ref{thm:convergence} and Assumption \ref{assum:operator_derivative}, respectively) are met by the corresponding learning-informed operators.
\begin{proposition}\label{prop:state_error}
Let $\epsilon>0$ and $M> K >0$, with $K$ being the constant from \eqref{eq:y0_C_estimate}. Suppose the first inequality in \eqref{eq:smallness} holds for $f$ for every $y$ such that $\|y\|_{L^{\infty}(\Omega)}\le K$.
Assume
that $\mathcal{N}\in C^{\infty}( \mathbb{R}^{d} \times \mathbb{R})$ satisfies the approximation property
\begin{equation}\label{eq:f_app_error}
\sup_{\|y\|_{L^{\infty}(\Omega)}<M}\norm{f(\cdot,y)-\mathcal{N}(\cdot,y)}_U \leq \epsilon,
\end{equation}
for $\epsilon>0$ sufficiently small.
Then, the following error estimate holds:
\begin{equation}\label{eq:ope_est}
\norm{y_{0}-y_{\epsilon}}_{H}\leq C \epsilon, \quad \text{ for all }\; u \in \mathcal{C}_{ad},
\end{equation}
where the constant $C>0$ depends only on $f$, and $y_{\epsilon}$, $y_{0}$ are solutions of the left and right equations of \eqref{eq:pair1} respectively.
Moreover, assuming \eqref{lip_partialy_f} and also that the condition
\begin{equation}\label{eq:der_f_app_error}
\sup_{\|y\|_{L^{\infty}(\Omega)}<M} \|\partial_{y}f(\cdot,y)-\partial_{y}\mathcal{N}(\cdot,y)\|_{U}\le \epsilon_{1},
\end{equation}
holds for sufficiently small $\epsilon_{1}>0$, then, there exist some constants $C_0>0$ and $C_1>0$ so that
\begin{equation}\label{eq:ope_der_est}
\norm{p_{0}-p_{\epsilon}}_{H^1(\Omega)\cap C(\overline{\Omega})}
\leq C_1 \epsilon_1 +C_0\epsilon, \quad \text{ for all }\; u \in \mathcal{C}_{ad},
\end{equation}
where $p_{\epsilon}$, $p_{0}$ are solutions of the left and right equations of \eqref{eq:pair2} respectively.
\end{proposition}
\begin{proof}
Let $y_\epsilon$ and $y_0$ be solutions of the learning-informed PDE and the original PDE, respectively. Recall that, by \eqref{eq:y0_C_estimate}, the $C(\overline{\Omega})$ norms of both $y_\epsilon$ and $y_0$ are bounded by $K>0$. Subtracting the two PDEs we get
\begin{equation}\label{eq:diff_PDE}
-\Delta (y_0-y_\epsilon) =\mathcal{N}(\cdot,y_\epsilon)-f(\cdot,y_0)\;\text{ in }\;\Omega \quad \text{ and } \quad \partial_\nu (y_0-y_\epsilon)=0\;\text{ on }\;\partial\Omega.
\end{equation}
Using the same technique as in the proof of Theorem \ref{thm:continuity_multi_map}, the equation in \eqref{eq:diff_PDE} can be rewritten as
\begin{equation}\label{eq:diff_PDE2}
\left( -\Delta +\kappa_{0} +(\partial_y f(\cdot,\zeta_\epsilon))^+\right)(y_0 -y_\epsilon) =\mathcal{N}(\cdot,y_\epsilon)-f(\cdot,y_\epsilon) + (\kappa_{0} -(\partial_y f(\cdot,\zeta_\epsilon))^-)(y_0 -y_\epsilon) ,
\end{equation}
where $\zeta_\epsilon$ is a pointwise convex combination of $y_0$ and $y_\epsilon$ that results from a pointwise application of the mean value theorem, and $\kappa_{0}>0$ is a fixed small constant.
We have then the estimate
\[
\begin{aligned}
&\frac{\kappa_{0}}{C_h}\norm{y_0-y_\epsilon}_{H^1(\Omega)} +\norm{y_0 -y_\epsilon}_{C(\overline{\Omega})}\\
\leq& (\kappa_{0}+C_l) (\norm{ \mathcal{N}(\cdot,y_\epsilon) -f(\cdot,y_\epsilon)}_{L^2(\Omega)}
+ \norm{ (\kappa_{0}-(\partial_y f(\cdot,\zeta_\epsilon))^{-})(y_0 -y_\epsilon)}_{L^2(\Omega)}).
\end{aligned}
\]
Rearranging the above inequality, and taking into account the Lipschitz continuity of $\partial_y f$ and the condition \eqref{eq:smallness} for $\zeta_{\epsilon}$, which satisfies $\|\zeta_{\epsilon}\|_{L^{\infty}(\Omega)}\le K$, we finally derive for sufficiently small $\epsilon$
\[\norm{y_0 -y_\epsilon}_H \leq C \epsilon. \]
For deriving \eqref{eq:ope_der_est} we use a similar approach. Let $p_\epsilon$ and $p_0$ be the solutions of the left and right equations in \eqref{eq:pair2}, respectively. Subtracting these two equations gives
\begin{equation}\label{eq:error_pde2}
\begin{aligned}
-\Delta (p_\epsilon -p_0) +\partial_y f(x,y_0)(p_\epsilon-p_0)&=(\partial_y f(x,y_0) - \partial_y \mathcal{N}(x,y_\epsilon))p_\epsilon\quad \text{ in }\;\Omega,\\
\partial_\nu (p_\epsilon-p_0)&=0\quad \text{ on }\;\partial\Omega.
\end{aligned}
\end{equation}
Using again the same trick as above, we rewrite \eqref{eq:error_pde2} as
\begin{equation}\label{eq:error_pde3}
\begin{aligned}
& -\Delta (p_\epsilon -p_0) +(\kappa_{1}+(\partial_y f(x,y_0))^+) (p_\epsilon-p_0)\\
=&(\partial_y f(x,y_0) - \partial_y \mathcal{N}(x,y_\epsilon))p_\epsilon +(\kappa_{1}-(\partial_y f(x,y_0))^-)(p_\epsilon-p_0),
\end{aligned}
\end{equation}
and then similarly we get
\begin{equation}\label{est_p_minus_peps}
\norm{p_\epsilon-p_0}_{H^1(\Omega)} \leq C \norm{p_\epsilon}_{C(\bar{\Omega})}\norm{\partial_y f(\cdot,y_0) - \partial_y \mathcal{N}(\cdot,y_\epsilon)}_{L^2(\Omega)},
\end{equation}
for some constant $C$ independent of both $p_0$ and $p_\epsilon$, but depending on the constants $C_h$ and $C_l$. The estimate in \eqref{est_p_minus_peps} holds also for $\norm{p_\epsilon-p_0}_{C(\overline{\Omega})}$ but with a different constant, say $\tilde{C}>0$.
Focusing on the right-hand side of the inequality above and using the triangle inequality we have
\[\begin{aligned}
\norm{\partial_y f(\cdot,y_0) - \partial_y \mathcal{N}(\cdot,y_\epsilon)}_{L^2(\Omega)}
&\leq
\norm{\partial_y f(\cdot,y_0) - \partial_y f(\cdot,y_\epsilon)}_{L^2(\Omega)} \\
+ & \norm{\partial_y f(\cdot,y_\epsilon) - \partial_y \mathcal{N}(\cdot,y_\epsilon)}_{L^2(\Omega)}
\leq L\norm{y_0-y_\epsilon}_{L^2(\Omega)}+\epsilon_1,
\end{aligned}
\]
where $L$ is the local Lipschitz constant of $\partial_{y}f(\cdot,\cdot)$ for those $y\in H^1(\Omega)\cap C(\overline{\Omega})$ with $\norm{y}_{L^\infty(\Omega)}\leq K$.
Finally we need to estimate $\|p_\epsilon\|_{C(\overline{\Omega})}$ in \eqref{est_p_minus_peps}. For this we note that for sufficiently small $\epsilon_{1}$, the second bound in \eqref{eq:adjoint_bounds} also holds for the solution of PDEs with $\mathcal{N}$. This yields the estimate
\begin{equation}\label{est_p}
\|p_\epsilon\|_{C(\overline{\Omega})}\le C_c \|v\|_{{L^{2}(\Omega)}},
\end{equation}
with the constant $C_c$ independent of $v$ and $\epsilon$. Finally we conclude
\begin{align*}
\|p_{0}-p_{\epsilon}\|_{H^1(\Omega)\cap C(\overline{\Omega})}
&=\sup_{\|v\|_{{L^{2}(\Omega)}}\le 1} \|p_{0}-p_{\epsilon}\|_{H^{1}(\Omega)\cap C(\overline{\Omega})}\\
&=\sup_{\|v\|_{{L^{2}(\Omega)}}\le 1} \norm{p_0-p_\epsilon}_{H^1(\Omega)} +\norm{p_0-p_\epsilon}_{C(\overline{\Omega})}\\
&\le C_c (C+\tilde{C})(L\epsilon +\epsilon_{1})
\le C_{1}\epsilon_{1} +C_{0}\epsilon,
\end{align*}
which ends the proof.
\end{proof}
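The first estimate of Proposition \ref{prop:state_error} can be checked numerically in one space dimension. The following self-contained sketch (all concrete choices are hypothetical: the monotone nonlinearity $f(y)=y+y^3$, the control, and a constant perturbation standing in for an $\epsilon$-accurate network) solves the two Neumann problems by Newton's method on a finite-difference grid and observes an error of order $\epsilon$:

```python
import numpy as np

def solve_semilinear(nonlin, d_nonlin, u, n=200):
    """Newton solver for -y'' + nonlin(y) = u on (0,1) with homogeneous Neumann BC."""
    h = 1.0 / (n - 1)
    # Symmetric positive semidefinite finite-difference Neumann Laplacian
    L = (np.diag(np.r_[1.0, 2.0 * np.ones(n - 2), 1.0])
         - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
    y = np.zeros(n)
    for _ in range(50):
        F = L @ y + nonlin(y) - u
        if np.max(np.abs(F)) < 1e-10:
            break
        y -= np.linalg.solve(L + np.diag(d_nonlin(y)), F)
    return y

# Hypothetical monotone nonlinearity: partial_y f = 1 + 3y^2 >= C_f = 1
f, df = lambda y: y + y**3, lambda y: 1.0 + 3.0 * y**2

x = np.linspace(0.0, 1.0, 200)
u = np.cos(np.pi * x)                      # some fixed control
y0 = solve_semilinear(f, df, u)

errs = []
for eps in (1e-2, 5e-3):
    # N := f + eps stands in for a network with uniform error eps
    y_eps = solve_semilinear(lambda y: f(y) + eps, df, u)
    errs.append(np.max(np.abs(y_eps - y0)))
    print(eps, errs[-1])                   # error <= eps / C_f, scaling linearly in eps
```

Halving $\epsilon$ halves the observed error, in line with the $O(\epsilon)$ bound \eqref{eq:ope_est}; by the discrete maximum principle the error here is even bounded by $\epsilon/C_f$ pointwise.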
\begin{remark}\label{rmk:multivalue_map}
Notice that the condition \eqref{eq:smallness} imposed on all $y$ with $\norm{y}_{L^\infty(\Omega)}\leq K$ in fact enforces a unique solution to the semilinear PDE \eqref{eq:state_eq}, which also satisfies the same constraint. It is possible to treat the multi-solution case using a similar strategy as in Theorem \ref{thm:continuity_multi_map}, namely by using $\Gamma$--convergence arguments to show the convergence of $y_\epsilon \to y_0$ in a certain sense, and then applying the condition \eqref{eq:smallness} to $y_0$.
\end{remark}
\begin{remark}\label{rmk:zeroNeumann}
The results above also hold for more general types of boundary conditions, including homogeneous Dirichlet boundary conditions.
\end{remark}
\subsection{Existence of solutions of the learning-informed optimal control}
After having replaced the unknown $f$ by the neural network based approximation $\mathcal{N}$ we are now interested in the following optimal control problem with a partially learning-informed state equation:
\begin{align}\label{eq:cost_NN}
&\text{minimize}\quad J(y,u):= \frac{1}{2}\|y-g\|^2_{L^2(\Omega)} +\frac{\alpha}{2} \|u\|_{L^2(\Omega)}^2,\quad\text{over }(y,u)\in H^1(\Omega)\times L^2(\Omega),\\
&\text{s.t. } \quad
\label{eq:state_eq_NN}
-\Delta y +\mathcal{N}(x,y)=u\quad \text{ in }\;\Omega,\quad
\partial_{\nu}y=0\;\; \text{on }\; \partial \Omega,\\
&\phantom{\text{s.t. }}\quad\; u\in\mathcal{C}_{ad}.\label{eq:control_constr_NN}
\end{align}
In what follows we prove the existence of an optimal control for the problem \eqref{eq:cost_NN}--\eqref{eq:control_constr_NN}. Here we assume that the control-to-state operator is single-valued, that is, the learning-informed PDE \eqref{eq:state_eq_NN} has a unique solution for every $u\in\mathcal{C}_{ad}$.
According to Proposition \ref{pro:existence_wsc}, we only need to check that the operator $Q_\mathcal{N}:U \to H$ is weakly sequentially closed. In fact, an even stronger property holds true as we show next.
\begin{proposition}\label{prop:existence_opt_con}
Let $\mathcal{N}\in C^{\infty}( \mathbb{R}^{d}\times \mathbb{R})$ be a neural network such that any solution of the learning-informed PDE \eqref{eq:state_eq_NN} satisfies a bound as in \eqref{eq:y0_C_estimate}. Then the reduced operator $Q_\mathcal{N}:U=L^2(\Omega)\supset \mathcal{C}_{ad}\to H=L^2(\Omega)$ induced from the control-to-state map of \eqref{eq:state_eq_NN} is weakly-strongly continuous, in the sense that if $u_{n} \rightharpoonup u$ in $U$ and $y_{n}\in \Pi_\mathcal{N}(u_n)$, then $y_{n}\to y$ in $H$ for some $y\in \Pi_{\mathcal{N}}(u)$.
\end{proposition}
\begin{proof}
Let $u_{n} \rightharpoonup \bar{u}$ in $U$ and $y_n\in \Pi_\mathcal{N}(u_n)$. Then $(y_n)_{n\in\mathbb{N}}$ is a bounded sequence in $Y=H^1(\Omega)\cap C(\bar\Omega)$, since $(u_n)_{n\in\mathbb{N}} \subset \mathcal{C}_{ad}$ is a bounded set in $L^{\infty}(\Omega)$. Thus, up to a subsequence, still denoted by $(y_n)$, there is $\bar{y}\in {H^{1}(\Omega)}$ such that $y_n \rightharpoonup \bar{y}$ in ${H^{1}(\Omega)}$.
Since ${H^{1}(\Omega)}$ embeds compactly into $H$, we can consider that $y_n \to \bar{y}$ strongly in $H$.
We show that $\bar{y}=\Pi_\mathcal{N}(\bar{u})$, i.e., $\bar{y}$ is a weak solution of the PDE in \eqref{eq:state_eq_NN}. Since $y_n$ is the weak solution of \eqref{eq:state_eq_NN} with right-hand side $u_{n}$, we have
\begin{equation}\label{eq:PDE_sequence}
\int_\Omega \nabla y_n\cdot \nabla v\,dx+\int_\Omega \mathcal{N}(x, y_n) v\,dx=\int_\Omega u_n v\,dx\quad\text{for all }v\in H^1(\Omega).
\end{equation}
We only need to show that
\begin{equation}\label{eq:nonlinear_error}
\lim_{n\to\infty}\int_\Omega\left( \mathcal{N}(x, y_n) - \mathcal{N}(x, \bar{y})\right) v\,dx=0,
\end{equation}
since the convergence of the other two terms readily follows from weak convergence.
Taking into account that $\mathcal{N}\in C^{1}( \mathbb{R}^{d}\times \mathbb{R})$
we have that for every $M>0$, there exists an $L_{M}>0$ such that for every $x\in\Omega$ and $y_{1},y_{2}\in [-M,M]$, we have
\begin{equation}\label{N_Lip}
|\mathcal{N}(x,y_{1})-\mathcal{N}(x,y_{2})|\le L_{M} |y_{1}-y_{2}|.
\end{equation}
Using the estimate \eqref{eq:y0_C_estimate}, we have that $(y_{n})_{n\in\mathbb{N}}$ and, hence, $\bar{y}$ are uniformly bounded in $L^{\infty}(\Omega)$, say by a constant $M>0$. Thus we have
\begin{align*}
\| \mathcal{N}(\cdot, y_n) - \mathcal{N}(\cdot, \bar{y})\|_{U}
\le
L_{M}\|y_{n}-\bar{y}\|_{H}.
\end{align*}
Due to the inequality above and the strong convergence of $y_n\to \bar{y}$ in $H$, \eqref{eq:nonlinear_error} is verified.
Passing to the limit $n\to \infty$ in \eqref{eq:PDE_sequence} we get that $\bar{y}$ is a weak solution of \eqref{eq:state_eq_NN} corresponding to $\bar{u}$. Since any other subsequence of $(y_{n})_{n\in\mathbb{N}}$ will have a further subsequence that converges to $\Pi_{\mathcal{N}}(\bar{u})$ the assertion follows.
\end{proof}
For the error analysis on the optimal controls of \eqref{eq:cost_NN} with \eqref{eq:state_eq_NN} to solutions from \eqref{eq:cost} with \eqref{eq:state_eq}, we can readily apply Theorems \ref{thm:convergence}, \ref{thm:error_bound} and \ref{thm:error_bound2} for the monotone function $f$, in view of the error bounds shown in Proposition \ref{prop:state_error}.
For the nonmonotone case, these results are still applicable up to a selection of subsequences of the solutions.
Finally, we would like to make a remark regarding the approximation of $f:\Omega \times \mathbb{R} \to \mathbb{R}$ in a semilinear PDE, given a set of input-output data.
The input data is a family of sampled points from $\Omega \times [y_{\min}, y_{\max}]$, denoted by $(x_i,y(x_i))_{i\in I}$, and the outputs are the corresponding values $(f(x_i,y(x_i)))_{i\in I}$, which are computed from \eqref{eq:state_eq} via
\[f(x_i,y(x_i))=u(x_i)+\Delta y(x_i).\]
In real world applications, we assume that we have access to the data points $y(x_{i})$ and thus also to $\Delta y(x_{i})$, while $u$ is a control which is at our disposal to be tuned.
In order to be consistent with the functional analytic setting, one needs to give pointwise meaning to $\Delta y$, which in general is an object in $H^{-1}(\Omega)$, only. This can be achieved by choosing controls $u\in\mathcal{C}_{ad}$ of sufficient regularity.
Indeed, since both $f$ and $y$ are continuous functions when choosing continuous $u$, equation \eqref{eq:state_eq} implies that $\Delta y$ is continuous, too, and hence admits a pointwise evaluation.
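The described data generation can be sketched as follows (a manufactured example with hypothetical choices of the state, the true nonlinearity, and the grid; the Laplacian of the measured state is approximated by central differences):

```python
import numpy as np

# Manufactured data: state y(x) = cos(pi x), true nonlinearity f(x, y) = y^3,
# and the control chosen so that -y'' + f(x, y) = u holds exactly.
n = 401
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
y = np.cos(np.pi * x)
u = np.pi**2 * np.cos(np.pi * x) + np.cos(np.pi * x) ** 3

# Pointwise Laplacian of the measured state via central differences (interior nodes)
lap_y = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

# Recovered training outputs: f(x_i, y(x_i)) = u(x_i) + (Delta y)(x_i)
f_samples = u[1:-1] + lap_y
f_true = y[1:-1] ** 3
print(np.max(np.abs(f_samples - f_true)))  # O(h^2) discretization error
```

The recovered samples match the true nonlinearity up to the $O(h^2)$ error of the central difference approximation of $\Delta y$.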
\subsection{Numerical algorithm for the optimal control problems}\label{subsec:Newton}
In this section we briefly describe an algorithm for solving the optimal control problem \eqref{eq:cost}. Even though it is suitable for rather general problems, we outline it here for the version with the learning-informed state equation.
In order to compute a numerical solution, we first state the Karush--Kuhn--Tucker (KKT) conditions, which are justified by constraint regularity (see \cite{ZowKur79} for a general setting):
\begin{equation}\label{eq:stationary1}
\begin{aligned}
- \Delta y +\mathcal{N}(\cdot,y) -u&=0\; \text{ in } \Omega ,\quad
\partial_\nu y=0\; \text{ on } \partial \Omega ,\\
- \Delta p+ \partial_y \mathcal{N}(\cdot,y) p +y&= g\; \text{ in } \Omega ,\quad
\partial_\nu p=0\; \text{ on } \partial \Omega,\\
-p+\lambda + \alpha u&=0\; \text{ in } \Omega ,\\
\lambda - \max (0,\lambda+ c(u-\overline{u})) -\min(0,\lambda + c(u-\underline{u}))&=0\; \text{ in } \Omega ,
\end{aligned}
\end{equation}
where $c>0$ is some constant, which in practice is conveniently chosen as $c=\alpha$.
The first equation with its boundary condition is just the learning-informed PDE constraint, while the next one is the associated adjoint equation. The third equation represents optimality w.r.t. $u$ and, together with the last one, it incorporates the control constraint $\underline{u} \leq u \leq \overline{u}$. Indeed, notice that the last equation is equivalent to the usual complementarity system as it secures a.e. that
\begin{equation*}
\lambda=0: \: \underline{u} < u < \overline{u},\quad
\lambda \geq 0: \: u=\overline{u},\quad
\lambda \leq 0: \: u=\underline{u}.
\end{equation*}
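This pointwise equivalence can be verified directly from the last equation of \eqref{eq:stationary1}. The following sketch (with illustrative bounds and sample values) evaluates the nonsmooth residual on pairs $(u,\lambda)$ that satisfy, respectively violate, the complementarity sign conditions:

```python
def comp_residual(u, lam, lo, hi, c=1.0):
    """Residual of lam - max(0, lam + c(u - hi)) - min(0, lam + c(u - lo))."""
    return lam - max(0.0, lam + c * (u - hi)) - min(0.0, lam + c * (u - lo))

lo, hi = -1.0, 2.0
# Pairs (u, lam) satisfying the sign conditions: residual vanishes
consistent = [(0.5, 0.0),   # strictly inside the bounds, lam = 0
              (hi, 3.0),    # upper bound active with lam >= 0
              (lo, -2.0)]   # lower bound active with lam <= 0
# Pairs violating them: residual is nonzero
inconsistent = [(0.5, 1.0),   # interior point with nonzero multiplier
                (hi, -1.0),   # upper bound active with negative multiplier
                (2.5, 0.0)]   # infeasible u, even with lam = 0

print([comp_residual(u, lam, lo, hi) for u, lam in consistent])
print([comp_residual(u, lam, lo, hi) for u, lam in inconsistent])
```

The residual vanishes exactly on the KKT-consistent pairs and is nonzero otherwise, which is the equivalence asserted in the text.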
Letting $\phi:=(y,u,p,\lambda)^\top$, \eqref{eq:stationary1} can be compactly rewritten as the nonsmooth equation
\begin{equation}\label{eq:stationary2}
M_\mathcal{N}(\phi) -(0,g,0,0)^\top=0.
\end{equation}
For solving \eqref{eq:stationary2}, we employ a semi-smooth Newton method (SSN); see, e.g., \cite{HinItoKun02}. It operates as follows: Given an initial guess $\phi_0$ of a solution to \eqref{eq:stationary2}, compute for all $k=0,1,2,\ldots$
\[ \begin{aligned}
\phi_{k+1}& =\phi_k - (\mathcal{G}_\mathcal{N}(\phi_k) )^{-1}(M_\mathcal{N}(\phi_k) -(0,g,0,0)^\top).
\end{aligned}
\]
Here, $\mathcal{G}_\mathcal{N}(\phi_k)$ is a Newton derivative of the operator $ M_\mathcal{N}$ at $\phi_k$ given by
\[\mathcal{G}_\mathcal{N}(\phi_k)=
\left(
\begin{array}{cccc}
- \Delta +\partial_y \mathcal{N} (\cdot ,y_k) & - \text{ Id} & 0 & 0\\
\partial_{yy} \mathcal{N}(\cdot ,y_k) p_k + \text{ Id} & 0 & - \Delta + \partial_y \mathcal{N}(\cdot ,y_k) & 0\\
0 & \alpha \text{ Id} & - \text{ Id} & \text{ Id}\\
0& - cG_k & 0 & \text{ Id}-G_k
\end{array} \right),
\]
where for $x\in\Omega$,
\[
G_k(x):=\left\{
\begin{aligned}
0, & \quad \text{if }c(\underline{u}(x)-u_k(x))\leq \lambda_k(x) \leq c(\overline{u}(x)-u_k(x)),\\
1, & \quad \text{else},
\end{aligned} \right.
\]
is a Newton derivative that corresponds to the nonsmooth functions $\max(0,\cdot)$ and $\min(0,\cdot)$ in \eqref{eq:stationary1}. SSN can be shown to converge locally at a superlinear rate, provided $\phi_0$ is sufficiently close to a solution and the selection of Newton derivatives for $M_\mathcal{N}$ is uniformly bounded and invertible along the iteration sequence; see \cite{HinItoKun02} and \cite{HinUlb04}.
Moreover, under a nondegeneracy assumption the method exhibits a mesh independent convergence upon proper discretization of \eqref{eq:stationary2}; see \cite{Hin07, HinUlb04}. Globalization of the SSN iteration can be achieved, e.g., by employing a path search \cite{DirFer95, Ral94}, which we did not pursue here, however. Rather we intertwined SSN with a sequential quadratic programming (SQP) iteration, with the latter specified below. This combination helped the globally convergent SQP solver to escape from unfavorable local minimizers or stationary points. Obviously, one cannot expect a general theoretical result supporting such a behavior. It, hence, merely reflects a useful numerical observation, in particular in connection with our example with a nonmonotone $f$.
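To give a finite-dimensional impression of the behavior of such an iteration, the following sketch applies the primal-dual active set form of SSN to a small bound-constrained quadratic program (the matrix, data, and bounds are hypothetical stand-ins, not the discretized system \eqref{eq:stationary2}); the iteration terminates as soon as two consecutive active set estimates coincide:

```python
import numpy as np

def pdas(A, b, lo, hi, c=1.0, max_iter=50):
    """Primal-dual active set (semismooth Newton) iteration for
    min 0.5*u'Au - b'u  subject to  lo <= u <= hi  (componentwise)."""
    n = len(b)
    u, lam = np.zeros(n), np.zeros(n)
    prev = None
    for k in range(1, max_iter + 1):
        act_hi = lam + c * (u - hi) > 0          # predicted upper active set
        act_lo = lam + c * (u - lo) < 0          # predicted lower active set
        cur = (act_hi.tobytes(), act_lo.tobytes())
        if cur == prev:                          # sets repeat: (u, lam) is a KKT point
            return u, lam, k
        prev = cur
        inact = ~(act_hi | act_lo)
        u = np.where(act_hi, hi, np.where(act_lo, lo, 0.0))
        if inact.any():                          # reduced solve on the inactive set
            rhs = b[inact] - A[np.ix_(inact, ~inact)] @ u[~inact]
            u[inact] = np.linalg.solve(A[np.ix_(inact, inact)], rhs)
        lam = b - A @ u                          # multiplier from stationarity Au - b + lam = 0
        lam[inact] = 0.0
    return u, lam, max_iter

n = 40
A = np.eye(n) + 0.5 * (np.diag(2.0 * np.ones(n))
                       - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
b = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
u, lam, iters = pdas(A, b, lo=-0.3, hi=0.3)
print(iters)   # the active sets typically settle after a handful of iterations
```

At termination the triple $(u,\lambda)$ satisfies the bounds, the stationarity equation, and the nonsmooth complementarity equation to machine precision.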
\paragraph{SQP algorithm}
Here we consider the reduced SQP approach which operates on the reduced optimal control problem. Given an estimate $u_k$ of an optimal control, in every iteration it seeks to solve the following quadratic problem:
\begin{equation}\label{eq:SQP}
\begin{aligned}
&\text{minimize}\quad \;
\langle \mathcal{J}_{\mathcal{N}}^\prime(u_k) + \frac{1}{2} H_k(u_k)\delta_u,\delta_u\rangle_{U^*,U}, \quad\text{over }\delta_u\in U,\\
&\text{subject to } \; \underline{u} \leq u_k+\delta_u \leq \overline{u}\quad \text{a.e. in }\Omega,
\end{aligned}
\end{equation}
where $\mathcal{J}_{\mathcal{N}}^\prime(u_k)$ is the Fr\'echet derivative of the reduced functional $\mathcal{J}_{\mathcal{N}}$, and $H_k(u_k)$ is a positive definite approximation of the second-order derivative of $\mathcal{J}_{\mathcal{N}}$ at $u_k$.
First-order optimality for \eqref{eq:SQP} yields
\begin{equation}\label{eq:SQP_optimal}
\begin{aligned}
& \mathcal{J}_{\mathcal{N}}^\prime(u_k) +H_k(u_k)\delta_u +\lambda=0 , \\
& \lambda - \max (0,\lambda+ c(u_k+\delta_u -\overline{u})) -\min(0,\lambda + c(u_k +\delta_u -\underline{u}))=0,
\end{aligned}
\end{equation}
for some fixed $c>0$. This nonsmooth system can be again solved using a semi-smooth Newton method which yields $\delta_{u,k}$ and $\lambda_k$.
Concerning the Hessian approximation, in our implementation we choose $H_k(u_k):=(\mathcal{J}_{\mathcal{N}}^\prime(u_k))^*\mathcal{J}_{\mathcal{N}}^\prime(u_k)$, where '$^*$' denotes the adjoint operator.
For globalization we use a classical line search with the merit function
\begin{equation}\label{eq:merit}
\Phi_k(\mu)=\mathcal{J}_{\mathcal{N}}(u_k+\mu\delta_{u,k}) + \beta_k \Psi_k(\mu) \quad \text{ for some } \beta_k >0,
\end{equation}
where
\[\Psi_k(\mu):= \norm{(u_k+\mu\delta_{u,k}-\overline{u})^+}_{L^2(\Omega)}+\norm{(u_k+\mu\delta_{u,k}-\underline{u})^-}_{L^2(\Omega)},\]
with $ a^+:=\max\set{a,0}, \text{ and }\; a^-:=\min \set{0,a}$.
We employ a backtracking line search method starting with $\mu:=1$ to decide on the step length. Note that the reduced problem requires enforcing the PDE constraint for every $u_k$. For this purpose a (smooth) Newton iteration was embedded into every SQP update step.
This Newton iteration is terminated when $\|-\Delta_h y_k+\mathcal{N}(\cdot,y_k)-u_k \|_{H^{-1}(\Omega)}\leq\text{tol}=10^{-16}$ or a maximum of 15 iterations was reached.
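A minimal sketch of the backtracking loop on the merit function \eqref{eq:merit} reads as follows (in Python; the quadratic objective, the deliberately overlong descent direction, and the Armijo-type sufficient-decrease test are illustrative stand-ins, since the actual acceptance condition \eqref{eq:updating_condition} is stated elsewhere):

```python
import numpy as np

def merit(J, u, lo, hi, beta):
    """Phi = J(u) + beta * (||(u - hi)^+||_2 + ||(u - lo)^-||_2), cf. the text."""
    infeas = (np.linalg.norm(np.maximum(u - hi, 0.0))
              + np.linalg.norm(np.minimum(u - lo, 0.0)))
    return J(u) + beta * infeas

def backtrack(J, u, du, lo, hi, beta, r=2.0/3.0, eps=1e-5, sigma=1e-4):
    """Shrink mu from 1 by the factor r until a sufficient-decrease test holds."""
    phi0 = merit(J, u, lo, hi, beta)
    mu = 1.0
    while mu >= eps:
        if merit(J, u + mu * du, lo, hi, beta) <= phi0 - sigma * mu * np.dot(du, du):
            return mu
        mu *= r
    return None  # step length fell below the lower bound: terminate the algorithm

# Illustrative quadratic objective with an overly long descent direction
J = lambda v: 0.5 * np.dot(v, v) - np.sum(v)
u = np.zeros(3)
du = -4.0 * (u - 1.0)                 # -4 * grad J(u): a descent direction, but too long
lo, hi = np.full(3, -0.5), np.full(3, 0.5)
mu = backtrack(J, u, du, lo, hi, beta=1.0)
print(mu)   # a few backtracking steps are needed before acceptance
```

The full step $\mu=1$ is rejected because the bound violation dominates the merit function; after a few reductions by the factor $r$ an acceptable step length is found.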
To summarize, we utilize the following overall algorithm:
\renewcommand{\thealgorithm}{\arabic{algorithm}}
\setcounter{algorithm}{0}
\begin{algorithm}
\begin{itemize}
\item[$\bullet$] {Initialization:} Choose $\phi_0:=(y_0,\; u_0,\; p_0,\; \lambda_0)$, and compute $\Phi_0(0)$. Fix a lower bound $\epsilon>0$ for the step length, choose $r\in (0,1)$, and $\beta_0>0$. Set $k:=0$.
\item[$\bullet$] {Unless the stopping criteria are satisfied, iterate:}
\begin{itemize}
\item[(1)] Compute an update direction $\delta_{u,k}$ by solving \eqref{eq:SQP_optimal} using SSN. Let $\mu_k^0:=1$, $y_k^{-1}:=y_k$ and set $l:=0$. Iterate:
\begin{itemize}
\item[(a1)] Compute $y_k^l:=\Pi_{\mathcal{N}}(u_k+\mu_k^l \delta_{u,k})$, where $\Pi_{\mathcal{N}}$ is realized by performing Newton iterations as a nonlinear PDE solver initialized by $y_k^{l-1}$.\\ Setting $y:=y_k^l$ and $u:=u_k+\mu_k^l \delta_{u,k}$ compute the remaining quantities in $\phi_k^l$ according to \eqref{eq:stationary1} with $p=:p_k^l$ and $\lambda=:\lambda_k^l$. This yields $\phi_k^l$.
\item[(a2)] Increase $\beta_k$, if necessary, to get $\beta_k^l$.
\item[(a3)] Check the Armijo condition \eqref{eq:updating_condition}.\\ If it is satisfied, then set $l_k:=l$ and continue with step $(2)$; otherwise update $\mu_k^{l+1}:=r \mu_k^l$, $l:=l+1$. \\ If $\mu_k^{l+1}<\epsilon$, then terminate the algorithm; otherwise return to Step (a1).
\end{itemize}
\item[(2)] Set $\phi_{k+1}:={\phi}_k^{l_k}$, and $\beta_{k+1}:=\beta_k^{l_k}$, and $k:=k+1$.
\end{itemize}
\item[$\bullet$] {Output:} The value of $\phi_k$ which contains both the control and state variables.
\end{itemize}
\caption{A semi-smooth Newton SQP algorithm for PDE control problems} \label{alg:SQP}
\end{algorithm}
In our examples, we choose $\mu_0=1$, $\epsilon=10^{-5}$, $r=2/3$, and $\beta_0=\norm{\lambda_0}_{L^2(\Omega)}+1$.
In order to solve the nonsmooth system in \eqref{eq:SQP_optimal}, we employ a primal-dual active set strategy (pdAS), which was shown to be equivalent to an efficient SSN solver for classes of constrained optimization problems \cite{HinItoKun02}. For the precise set-up of pdAS and the associated active/inactive set estimation we also refer to \cite{HinItoKun02}. For minimizing quadratic objectives subject to box constraints and utilizing highly accurate linear system solvers, pdAS is typically terminated when two consecutive active and inactive set estimates coincide. We recall here that the active set for \eqref{eq:SQP} at the solution $\delta_{u,k}$ is the subset $\mathcal{A}_k$ of $\Omega$ with $(u_k+\delta_{u,k})(x)\in\{\underline{u}(x),\overline{u}(x)\}$ for $x\in\mathcal{A}_k$; $\mathcal{I}_k:=\Omega\setminus\mathcal{A}_k$ denotes the associated inactive set. Alternatively one may stop the iteration once the residual norm of the nonsmooth system at an iterate drops below a user specified tolerance.
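For a finite-dimensional model problem, minimizing $\frac12 x^\top Hx - b^\top x$ subject to $lb\le x\le ub$ with $H$ symmetric positive definite, pdAS may be sketched as follows. This Python sketch follows the description above (active set estimation via $\lambda + c(x-\overline{u})$ and $\lambda + c(x-\underline{u})$, termination by coinciding consecutive active-set estimates); it is an illustration, not our production solver.

```python
import numpy as np

def pdas_box_qp(H, b, lb, ub, c=1.0, max_iter=50):
    """Primal-dual active set method for min 0.5 x^T H x - b^T x, lb<=x<=ub.

    KKT system: H x - b + lam = 0 together with the complementarity relation
    lam = max(0, lam + c(x-ub)) + min(0, lam + c(x-lb)).
    Terminates when two consecutive active-set estimates coincide."""
    n = len(b)
    x, lam = np.clip(np.linalg.solve(H, b), lb, ub), np.zeros(n)
    prev = None
    for _ in range(max_iter):
        upper = lam + c * (x - ub) > 0          # estimated upper active set
        lower = lam + c * (x - lb) < 0          # estimated lower active set
        active = (upper, lower)
        if prev is not None and all(np.array_equal(a, p)
                                    for a, p in zip(active, prev)):
            break                               # active sets repeat: done
        prev = active
        x = np.where(upper, ub, np.where(lower, lb, 0.0))
        inact = ~(upper | lower)
        if inact.any():                         # solve on the inactive set
            rhs = b[inact] - H[np.ix_(inact, ~inact)] @ x[~inact]
            x[inact] = np.linalg.solve(H[np.ix_(inact, inact)], rhs)
        lam = b - H @ x                         # multiplier on the active set
        lam[inact] = 0.0
    return x, lam
```

For $H=\mathrm{Id}$ the solver reduces to a projection onto the box, which gives a simple consistency check.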
In view of \eqref{eq:SQP_optimal} and constraint satisfaction, the function $\Psi_k(\mu)$ in \eqref{eq:merit} appears irrelevant as a penalty for violations of the box constraints. However, it becomes relevant when early stopping is employed in SSN (respectively pdAS).
In this case we still need to guarantee that $\delta_{u,k}$ is a descent direction for our merit function to obtain sufficient decrease of $\Phi_k$ in our
line search \eqref{eq:updating_condition}. This is needed for getting convergence of $(u_k)$ (along a subsequence) to a stationary point. For deriving a proper stopping rule for SSN to guarantee sufficient decrease, we multiply the first equation in \eqref{eq:SQP_optimal}
by the solution $\delta_{u}$, use $\lambda(u_k+\delta_{u}-\overline{u})(u_k+\delta_{u}-\underline{u})=0$ a.e. in $\Omega$ and the feasibility of $u_k+\delta_{u}$, both according to the second line in \eqref{eq:SQP_optimal}. We further set $\beta_k>\|\lambda\|_{U}$ (upon identifying $U^*\widehat{=}U$) to find
\[
\langle \mathcal{J}_{\mathcal{N}}^\prime(u_k), \delta_{u} \rangle_{U^*,U}+ \beta_k ( \underbrace{\Psi_k(1)}_{=0} - \Psi_k(0)) \leq -\langle H_k(u_k)\delta_{u},\delta_{u}\rangle_{U^*,U}<0,
\]
unless $\delta_{u}=0$, i.e., $u_k$ is stationary for the original reduced problem. Here, $\delta_u$ replaces $\delta_{u,k}$ in $\Psi_k(1)$. This motivates our termination rule for SSN when solving \eqref{eq:SQP_optimal}. In fact, let superscript $l$ denote the iteration index of SSN for the outer iteration $k$, i.e., for given $u_k$. For some initial guess $(\delta_u^0,\lambda^0)$ (typically chosen to be $(\delta_{u,k-1},\lambda_{k-1})$) SSN computes iterates $(\delta_u^{l},\lambda^{l})$, $l\in\mathbb{N}$, and terminates at iteration $l_k$, which is the smallest index with
\begin{equation}\label{eq:stopping_rule}
\begin{aligned}
&\langle \mathcal{J}_{\mathcal{N}}^\prime(u_k), \delta_{u}^{l_k}\rangle_{U^*,U} + \beta_k(\Psi_k(1)-\Psi_k(0))
\leq - \xi \langle H_k(u_k)\delta_{u}^{l_k},\delta_{u}^{l_k}\rangle_{U^*,U}\\
&\text{and}\quad \Psi_k(1) \leq (1- \xi) \Psi_k(0)
\end{aligned}
\end{equation}
for some $\xi\in(0,1)$, with $\beta_k>\|\lambda^{l_k}\|_U$, and where $\delta_{u}^{l_k}$ is used in $\Psi_k(1)$.
In our tests, we choose $\xi=0.9$, and terminate SSN iterations whenever \eqref{eq:stopping_rule} is satisfied or two consecutive active set estimates are identical. Then we set $\delta_{u,k}:=\delta^{l_k}_u$, $\lambda_k:=\lambda^{l_k}$, and determine a suitable step size $\mu_k$.
For the latter we use a backtracking line search based on the Armijo condition \cite{Pow76}. Indeed, given $u_k$, $\delta_{u,k}$, and $\lambda_k$, let $l$ now denote the running index of the line search iteration. Then $l_k\in\mathbb{N}$ is the smallest index such that
\begin{equation}\label{eq:updating_condition}
\Phi_k(\mu_k^{l_k})-\Phi_k(0)\leq \kappa \mu _k^{l_k}\left (\langle\mathcal{J}_{\mathcal{N}}^\prime(u_k), \delta_{u,k}\rangle_{U^*,U} + \beta_k(\Psi_k(1)-\Psi_k(0)) \right),
\end{equation}
for some parameter $0<\kappa<1$, and $\beta_k=\max\{\beta_{k-1}, \zeta\|\lambda_k\|_{U}\}>\|\lambda_k\|_{U}$ for some $\zeta>1$; this update of $\beta_k$ is the one performed in step (a2) of Algorithm \ref{alg:SQP}. In our implementation we use $\kappa=10^{-3}$ and $\zeta=2$.
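The backtracking loop realizing \eqref{eq:updating_condition} can be sketched as follows. In this Python illustration the callables J and Psi, and the precomputed directional derivative dJ_du, are placeholders for $\mathcal{J}_{\mathcal{N}}$, the constraint-violation penalty, and $\langle\mathcal{J}_{\mathcal{N}}'(u_k),\delta_{u,k}\rangle$, respectively.

```python
import numpy as np

def armijo_line_search(J, dJ_du, Psi, u, du, beta,
                       kappa=1e-3, r=2.0 / 3.0, eps=1e-5):
    """Backtracking Armijo search on Phi(mu) = J(u + mu du) + beta Psi(u + mu du).

    Returns the accepted step length, or None if mu falls below the lower
    bound eps (the algorithm then terminates)."""
    phi0 = J(u) + beta * Psi(u)
    # predicted decrease: <J'(u), du> + beta (Psi_k(1) - Psi_k(0))
    slope = dJ_du + beta * (Psi(u + du) - Psi(u))
    mu = 1.0
    while mu >= eps:
        if J(u + mu * du) + beta * Psi(u + mu * du) - phi0 <= kappa * mu * slope:
            return mu
        mu *= r        # backtrack
    return None
```

For a convex quadratic with the steepest-descent direction, the full step $\mu=1$ already satisfies the condition with $\kappa=10^{-3}$.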
Regarding the stopping criteria for the SQP iterations, we set a tolerance for the norm of the residual of \eqref{eq:stationary1} along with a maximal number of iterations. We note here that \eqref{eq:stationary1} coincides with \eqref{eq:SQP_optimal} once the adjoint state, which is used to compute $\mathcal{J}_{\mathcal{N}}'(u_k)$ efficiently, is introduced into the latter.
In our implementation we simplified the Newton derivative of the first-order system \eqref{eq:stationary1} by dropping the second-order derivatives $\partial_{yy}\mathcal{N}(\cdot, y_k)p_k$ from $\mathcal{G}_{\mathcal{N}}(\phi_k)$. The corresponding approximation reads
\[
\left(
\begin{array}{llll}
- \Delta +\partial_y \mathcal{N} (\cdot ,y_k) & 0 &- \text{Id} & 0\\
\text{Id} & - \Delta + \partial_y \mathcal{N}(\cdot ,y_k) & 0 & 0\\
0 & - \text{Id} & \alpha \text{Id} & \text{Id}\\
0& 0 & - cG_k & \text{Id}-G_k
\end{array} \right)\simeq\mathcal{G}_\mathcal{N}(\phi_k) .
\]
This helped to stabilize the SSN iterations, while maintaining almost the same convergence rates as for the exact Newton derivative in our tests.
\subsection{Numerical results on distributed optimal control of semilinear elliptic PDEs }
\label{subsec:monotone_example}
Our first test problem is given by
\begin{equation} \label{eq:example_op_pde}
\left.\begin{aligned}
&\text{minimize}\quad \frac{1}{2}\norm{y-g}_{L^{2}(\Omega)}^2+\frac{\alpha}{2} \norm{u}_{L^{2}(\Omega)}^2,\text{ over }(y,u)\in H^1(\Omega)\times L^2(\Omega),\\
&\text{subject to}\quad -\Delta y+ f(x,y)=u \;\text{ in } \Omega :=(0,2)\times (0,2) ,\quad \partial_\nu y=0 \;\text{ on } \partial \Omega,\\
&\phantom{\text{subject to}\quad}-20\leq u \leq 20.
\end{aligned}\right\}
\end{equation}
with the exact underlying nonlinearity $ f(x,z) = z+ 5\cos^2(\pi x_1x_2) z^3$, where $x=(x_1,x_2)\in\mathbb{R}^2$ and $z\in\mathbb{R}$.
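For reference, the data-generating nonlinearity can be evaluated pointwise as follows (Python sketch):

```python
import numpy as np

def f(x1, x2, z):
    """Exact nonlinearity f(x, z) = z + 5 cos^2(pi x1 x2) z^3 of the test problem."""
    return z + 5.0 * np.cos(np.pi * x1 * x2) ** 2 * z ** 3
```

Note that along the curve $x_1x_2 = 1/2$ the cubic term vanishes and $f$ reduces to the identity in $z$.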
\subsubsection{Training of artificial neural networks}
For learning the function $f$
we use neural networks that are built from standard (multi-layer) feed-forward networks.
Their respective architectures, the loss function, and the training data and method are specified next.
\paragraph{Loss function and training method}
Let $\Theta=(W,b)$ denote the parameters associated with an ANN $\mathcal{N}=:\mathcal{N}_\Theta$ that needs to be trained by solving an associated minimization problem; compare \eqref{mh.ann.min}.
We use here the mean squared error
\[\mathfrak{d}(\f x,\f f) = \frac{1}{n_D} \sum_{j=1}^{n_D}\abs{\mathcal{N}_{\Theta}(\f x_j) -\f f_j}^2, \]
as a loss function, no regularization, i.e., $\mathfrak{r}\equiv 0$, and $\mathcal{F}_{\text{ad}}$ is the full space. In this context, $(\f x_j, \f f_j )_{j=1}^{n_D}$ are the input-output training pairs. For simplicity of presentation we assume that $n_D$ is larger than the number of unknowns in $\Theta$.
For solving \eqref{mh.ann.min}, we adopt a Bayesian regularization method \cite{Mac92} which is based on a Levenberg-Marquardt (LM) algorithm,
and is available in MATLAB packages. We initialized the LM algorithm with random weights generated by the Nguyen-Widrow method \cite{NguWid90}, and terminated it as soon as the Euclidean norm of the gradient of the loss function dropped below $10^{-7}$ or a maximum of $1000$ iterations was reached. For other methods that are suitable for this task we refer to the overview in \cite{BotCurNoc18}.
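As an illustration of the training step, the following Python sketch fits a $1$-hidden-layer logsig network to input-output pairs by minimizing the mean squared error. Plain gradient descent is used here merely as a simple stand-in for the Bayesian-regularized Levenberg-Marquardt solver of the MATLAB toolbox, and all parameters (learning rate, iteration count, initialization scales) are illustrative choices, not the ones from our experiments.

```python
import numpy as np

def logsig(t):
    """Log-sigmoid transfer function (MATLAB's logsig)."""
    return 1.0 / (1.0 + np.exp(-t))

def train_mse(X, F, n_hidden=30, lr=0.05, n_iter=3000, seed=0):
    """Fit a 1-hidden-layer logsig network to pairs (x_j, f_j) by gradient
    descent on the mean squared error. X: (n_D, d) inputs, F: (n_D,) targets.
    Returns a predictor and the recorded loss history."""
    rng = np.random.default_rng(seed)
    n_D, d = X.shape
    W1 = rng.normal(scale=1.0, size=(d, n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.1, size=n_hidden)
    b2 = 0.0
    losses = []
    for _ in range(n_iter):
        H = logsig(X @ W1 + b1)                  # hidden activations
        pred = H @ w2 + b2
        err = pred - F
        losses.append(float((err ** 2).mean()))  # MSE loss
        g = 2.0 * err / n_D                      # d(MSE)/d(pred)
        gH = np.outer(g, w2) * H * (1.0 - H)     # backprop: logsig' = s(1-s)
        w2 -= lr * (H.T @ g)
        b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gH)
        b1 -= lr * gH.sum(axis=0)
    return (lambda Xq: logsig(Xq @ W1 + b1) @ w2 + b2), losses
```

On a smooth one-dimensional target the loss decreases well below its initial value within a few thousand iterations.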
\paragraph{Architecture of the network}
In order to have a representative study of the influence of ANN architectures on our computational results, we used networks with a total number of hidden layers (HL) equal to 1, 3 or 5. For each choice, we further varied the number of neurons per layer such that the final number of unknowns in $\Theta$ (degrees of freedom; DoF) remained essentially the same. Such tests were performed for three different DoF (small, medium, large), resulting in a total of nine different architectures; cf.\ Table \ref{tab:net_arc_pde}. All underlying networks have an input layer of three neurons and an output layer of one neuron.
In all tests for this example, the log-sigmoid transfer function (\verb+logsig+ in MATLAB) was chosen as the activation function at all the hidden layers.
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{|l|l|l|l|l|l|l|}\hline
& HL 1 & HL 2 & HL 3 &HL 4 & HL 5 & Total DoF \\
\hline
&\multicolumn{6}{c|}{Small DoF}\\
\hline
No. of neurons & 30 & - & - & - & - & 151 \\
No. of neurons& 6 & 10 & 5& - & - & 155 \\
No. of neurons & 3 & 5 & 10& 5 & 1 & 155 \\
\hline
&\multicolumn{6}{c|}{Medium DoF}\\
\hline
No. of neurons & 60 & - & - & - & - & 301 \\
No. of neurons&10 & 12 & 10 & - & - & 313 \\
No. of neurons & 5 & 8 & 10 & 8 & 6 & 307 \\
\hline
&\multicolumn{6}{c|}{Large DoF}\\
\hline
No. of neurons & 120 & - & - & - & - & 601 \\
No. of neurons&15 & 18 & 13& - & - & 609 \\
No. of neurons & 10 & 10 & 15 & 10 & 10 & 596 \\
\hline
\end{tabular}\\[8pt]
\caption{\label{tab:net_arc_pde} Architecture of networks.
HL $i$: $i$ hidden layers; DoF: degrees of freedom in $\Theta$.}
\end{center}
\end{table}
\paragraph{Training and validation data}
The training data rest on chosen control actions $(u^j)_{j=1}^{n_D}\subset \mathcal{C}_{ad}$ with
\begin{equation*}\begin{aligned}
u^j= &-2d_{j}\pi^2\cos(\pi x_1)\cos(\pi x_2)\\
&-d_{j}\cos(\pi x_1)\cos(\pi x_2)-5d_{j}^3\cos^2(\pi x_1x_2) \cos^3(\pi x_1)\cos^3(\pi x_2),
\end{aligned}
\end{equation*}
and $(d_j)=\set{[0.01:0.4:2.01]}$ (in MATLAB notation).
The procedure for generating the training data is as follows: First, numerical solutions are computed on a uniform discrete mesh $\Omega_{h}=\{x^{k}\}_{k=1}^{\bar N_h}$ (represented here by the associated mesh nodes including those on $\partial\Omega$) with mesh width $h=\frac{1}{50}$, and $\bar N_h=(n_h+1)^2$, $n_h=1/h$.
The Laplace operator is discretized by the standard five-point finite difference stencil respecting the homogeneous Neumann boundary conditions. This yields the $N_h\times N_h$-matrix $\Delta_h$ related to the nodes $x^k$ in $\Omega$ with $N_h=(n_h-1)^2$. The nonlinearity as well as the controls are evaluated at these mesh points $x^k$, and the resulting discretization of the nonlinear PDE in \eqref{eq:example_op_pde} is solved by Newton's method. The Newton iteration is terminated once the PDE residual in the discrete $H^{-1}(\Omega)$-norm drops below $10^{-16}$, or a maximum of $30$ iterations is reached. Thus for each $u^j$, $j=1,\ldots, n_D$, we obtain numerical values $y_{h}^{j}=(y_{h,1}^j,\ldots,y_{h,N_h}^j)^\top$ associated with the (interior) mesh nodes $x^{k}$ and approximating $y^j(x^k)=-d_{j}\cos(\pi x_{1}^k)\cos(\pi x_{2}^k)$, the analytical PDE solution. Using these data we compute the output values of $f$, denoted by $f^j_h\in\mathbb{R}^{N_h}$, according to the PDE by
\[f(x^{k},y^{j}(x^{k}))\approx u^{j}(x^{k})+(\Delta_h y_{h}^{j})_k=:f_{h,k}^j,\quad k=1,\ldots, N_h, \quad j=1,\ldots, n_D.\]
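The recovery of the training targets from the five-point stencil can be sketched as follows. In this Python illustration only interior nodes are returned and the Neumann boundary closure is omitted for brevity.

```python
import numpy as np

def laplacian_5pt(Y, h):
    """Five-point finite-difference Laplacian Delta_h Y at the interior nodes
    of a uniform grid with mesh width h (boundary closure omitted)."""
    return (Y[2:, 1:-1] + Y[:-2, 1:-1] + Y[1:-1, 2:] + Y[1:-1, :-2]
            - 4.0 * Y[1:-1, 1:-1]) / h**2

def nonlinearity_samples(U, Y, h):
    """Training targets f(x^k, y(x^k)) ~ u(x^k) + (Delta_h y)_k at interior nodes."""
    return U[1:-1, 1:-1] + laplacian_5pt(Y, h)
```

For the smooth state $y=\cos(\pi x_1)\cos(\pi x_2)$, with $\Delta y = -2\pi^2 y$, the stencil reproduces the Laplacian up to the expected $O(h^2)$ consistency error, and the recovered targets agree with the exact nonlinearity to the same order.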
Both the input and output data are preprocessed using the \verb+mapminmax+ function in MATLAB; for simplicity we do not change the notation here.
The training data are then obtained through subsampling $f_{h,k}^j$ by restriction to a coarse mesh $\Omega_H$, with $H>h$. For this purpose we use $H\in\{0.2,0.1,0.08\}$ giving rise to a small, medium and large training set, respectively. The corresponding reduction rates are 1/10, 1/5, and 1/4 with respect to the data for $h=1/50$.
This subsampled data set is then split into a training data set, a validation data set and a testing data set at the ratio of $8:1:1$. In our tests, such a data partitioning is done randomly by using MATLAB's \verb+randperm+ function.
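The random $8{:}1{:}1$ partitioning can be sketched as follows (Python analogue of the MATLAB randperm-based splitting; the seed is an illustrative choice):

```python
import numpy as np

def split_data(data, seed=0):
    """Randomly split a data set 8:1:1 into training, validation and
    testing parts via a random permutation of the indices."""
    idx = np.random.default_rng(seed).permutation(len(data))
    n_tr = int(0.8 * len(data))
    n_va = int(0.1 * len(data))
    return (data[idx[:n_tr]],
            data[idx[n_tr:n_tr + n_va]],
            data[idx[n_tr + n_va:]])
```

By construction the three parts are disjoint and together exhaust the data set.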
\subsubsection{Numerical results}
We start by comparing the exact, numerical and learning-based solutions, respectively.
The exact reference solution is chosen to be
\[y^*=1.5 \cos(\pi x_1)\cos(\pi x_2),\]
and the numerical approximation $y^*_h$ is obtained on a mesh with $h=2^{-7}$ using the exact nonlinearity $f$. The same grid is used for obtaining the numerical approximation of $y_{\mathcal{N}}$. Note, however, that the grid for data generation is different from the grid for the numerical computation.
Our report on the experiments involves several discrete norms. In fact, for $z_h\in\mathbb{R}^{N_h}$ we have \[\abs{z_h}^2_{1}:=-h^2(\Delta_hz_h)^\top z_h,\quad \norm{z_h}_{0}^2:=h^2z_h^\top z_h,\]
where $\abs{\cdot}_1$ and $\norm{\cdot}_0$ correspond to the $H^1$-seminorm and $L^2$-norm, respectively.
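These norms can be evaluated as in the following Python sketch. For brevity a 1D three-point Laplacian with Dirichlet closure and the 1D quadrature weight $h$ are used, whereas the experiments employ the 2D analogue with weight $h^2$; the minus sign reflects that $\Delta_h$ approximates the (negative semi-definite) Laplacian.

```python
import numpy as np

def discrete_norms(z, h):
    """Discrete L^2-norm and H^1-seminorm of nodal values z on a uniform
    1D grid with mesh width h (homogeneous Dirichlet closure)."""
    n = len(z)
    Dh = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2   # Delta_h ~ d^2/dx^2
    l2 = np.sqrt(h * (z @ z))                     # ||z||_0
    h1 = np.sqrt(-h * ((Dh @ z) @ z))             # |z|_1 (note the minus sign)
    return l2, h1
```

For $z=\sin(\pi x)$ on $(0,1)$ the continuous values are $\norm{z}_{L^2}=1/\sqrt{2}$ and $\abs{z}_{H^1}=\pi/\sqrt{2}$, which the discrete norms reproduce up to $O(h^2)$.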
\begin{table}[!ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|ll|ll|ll|ll|}
\hline
& $\abs{y_\mathcal{N}-y^*_h}_1$ & $\abs{y_\mathcal{N}-y^*_h}_1$ &
$\abs{y_\mathcal{N}-y^*}_1$ & $\abs{y_\mathcal{N}-y^*}_1$ & $\norm{y_\mathcal{N}-y^*_h}_0$ & $\norm{y_\mathcal{N}-y^*_h}_0$ &$\norm{y_\mathcal{N}-y^*}_0$ & $\norm{y_\mathcal{N}-y^*}_0$ \\ \hline
& min & max & min & max & min & max & min & max \\ \hline
1-L & $0.2506 $ & $0.6532 $ & $ 0.2868 $ & $ 0.6713 $ & $0.0752 $ & $ 0.2422 $ & $0.0808 $ & $0.2435 $ \\ \hline
3-L & $ 0.2575 $ & $0.7537 $ & $ 0.2391 $ & $0.7777 $ & $ 0.0817 $ & $0.2524 $ & $0.0791 $ & $0.2565 $ \\ \hline
5-L & $0.2157 $ & $36.2640 $ & $ 0.2235 $ & $ 36.2731 $ & $0.0539 $ & $29.4926 $ & $ 0.0544 $ & $ 29.4936 $ \\ \hline
& mean& deviation & mean & deviation & mean& deviation & mean& deviation \\ \hline
1-L & $0.4276 $ & $0.1099 $ & $ 0.4496 $ & $ 0.1075 $ & $0.1472 $ & $ 0.0484 $ & $0.1506 $ & $0.0485 $ \\ \hline
3-L & $ 0.3853 $ & $0.1350 $ & $ 0.4003 $ & $0.1687 $ & $ 0.1425 $ & $0.0462 $ & $0.1268 $ & $0.0482 $ \\ \hline
5-L & $3.0242 $ & $ 8.9087 $ & $3.0287 $ & $8.9103 $ & $ 2.1309 $ & $7.3143 $ & $ 2.1299 $ & $ 7.3149 $ \\ \hline
\end{tabular}}
{\small \caption{\label{tab:layer_comparison}Statistics on learning-informed PDEs with different layers in neural networks using small size training data, small DoF in $\Theta$, and 15 samples in total.}}
\end{center}
\end{table}
Table \ref{tab:layer_comparison} depicts the approximation results for different ANN architectures with small DoF as described in Table \ref{tab:net_arc_pde}, using in all cases the small training data set.
We find that the $1$-layer network is robust in terms of the statistical quantities shown, and the $3$-layer network has the smallest errors on average, but exhibits a larger deviation than the $1$-layer network. The $5$-layer network yields the smallest minimal errors, but also the largest maximal ones, with a very large deviation.
This behavior may be attributed to the fact that deeper networks give rise to increasingly more nonlinear compositions entering the loss function. This may be stabilized by tuned initializations, additional regularization, or sufficient training data. A study along these lines, however, is not within the scope of the present work as noted earlier.
\begin{table}[!ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|ll|ll|ll|ll|}
\hline
& $\abs{y_\mathcal{N}-y^*_h}_1$ & $\abs{y_\mathcal{N}-y^*_h}_1$ &
$\abs{y_\mathcal{N}-y^*}_1$ & $\abs{y_\mathcal{N}-y^*}_1$ & $\norm{y_\mathcal{N}-y^*_h}_0$ & $\norm{y_\mathcal{N}-y^*_h}_0$ &$\norm{y_\mathcal{N}-y^*}_0$ & $\norm{y_\mathcal{N}-y^*}_0$ \\ \hline
& min & max & min & max & min & max & min & max \\ \hline
3-L S & $0.0546 $ & $0.1658 $ & $ 0.0889 $ & $ 0.2211 $ & $0.0086 $ & $ 0.0546 $ & $0.0207 $ & $0.0515 $ \\ \hline
3-L M & $ 0.0090 $ & $0.1508 $ & $ 0.0876 $ & $0.2039 $ & $ 0.0026 $ & $0.0492 $ & $0.0168 $ & $0.0591 $ \\ \hline
3-L L & $0.0155 $ & $0.2815 $ & $ 0.0833 $ & $ 0.3306 $ & $0.0036 $ & $0.0901 $ & $ 0.0161 $ & $ 0.0996 $ \\ \hline
& mean& deviation & mean & deviation & mean& deviation & mean& deviation \\ \hline
3-L S& $0.1103 $ & $0.0357 $ & $ 0.1464 $ & $ 0.0329 $ & $0.0266 $ & $ 0.0125 $ & $0.0339 $ & $0.0095 $ \\ \hline
3-L M& $ 0.0631 $ & $0.0407 $ & $ 0.1113 $ & $0.0367 $ & $ 0.0170 $ & $0.0120 $ & $0.0250 $ & $0.0117 $ \\ \hline
3-L L & $ 0.0559 $ & $ 0.0626 $ & $0.1115 $ & $0.0609 $ & $ 0.0149 $ & $0.0205 $ & $ 0.0250 $ & $ 0.0204 $ \\ \hline
\end{tabular}}
\caption{\label{tab:width_comparison}Statistics on learning-informed PDEs with different numbers of neurons in networks using medium size training data of 15 samples in total.}
\end{center}
\end{table}
In Table \ref{tab:width_comparison}, we provide statistics on the influence of the number of neurons for a fixed number of layers. We use $3$-layer networks and medium sized training data for this set of experiments. All three levels of DoF for the networks as given in Table \ref{tab:net_arc_pde} are studied. The results in terms of 'mean' and 'deviation' indicate that a larger number of neurons typically gives better approximations than a smaller number of DoF.
However, we also observe that the deviation and the maximum error increases with the number of DoF.
This can be attributed to an increase in training error for increasing DoFs.
Next we present some computational results where we use the learning-informed PDE as constraint when numerically solving the optimal control problem \eqref{eq:cost_NN}.
Here we consider a target function $g=y^*+\delta$, where $\delta$ denotes zero-mean Gaussian noise of standard deviation $\hat\sigma$, for different values of $\hat\sigma$.
For convenience of comparison, we take $y^*$ to be the solution from the last set of experiments.
We denote by $u_\mathcal{N}$ and $\bar{u}$ the optimal controls
with respect to the learning-informed PDE constraint and the original PDE constraint, respectively. Both are computed by the semi-smooth Newton algorithm described in Section \ref{subsec:Newton} with a fixed number of $30$ iterations, which turns out to be sufficient for this example, as the sum of all residual norms of the first-order system \eqref{eq:stationary1} is then less than $10^{-10}$.
As before, $y_\mathcal{N}$ and $\bar{y}$ are the states corresponding to $u_\mathcal{N}$ and $\bar{u}$, respectively.
\begin{table}[!ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|lll|lll|lll|} \hline
&\multicolumn{3}{c|}{Small DoF} &\multicolumn{3}{c|}{Medium DoF}
&\multicolumn{3}{c|}{Large DoF} \\ \hline
& $\norm{u_\mathcal{N}-\bar{u}}_0$ & $\norm{y_\mathcal{N}- \bar{y}}_0$ & $ \abs{y_\mathcal{N}- \bar{y}}_1$ & $\norm{u_\mathcal{N}-\bar{u}}_0$ & $\norm{y_\mathcal{N}- \bar{y}}_0$ & $ \abs{y_\mathcal{N}- \bar{y}}_1$ & $\norm{u_\mathcal{N}-\bar{u}}_0$ & $\norm{y_\mathcal{N}- \bar{y}}_0$ & $ \abs{y_\mathcal{N}- \bar{y}}_1$ \\ \hline
& \multicolumn{9}{c|}{Small size of training data } \\ \hline
1-L & $0.5578 $ & $0.0330 $ & $0.1609 $ & $0.3055 $ & $0.0283 $ & $0.1423 $ & $ 0.2548$ & $0.0194 $ & $ 0.1143 $ \\ \hline
3-L & $ 0.3426$ & $ 0.0274$ & $0.1246 $ & $ 0.3597$ & $0.0343 $ & $0.1777 $ & ${\bf 0.3932 }$ & ${\bf 0.0354 }$ & $ {\bf0.1722 } $ \\ \hline
5-L & $ 0.3888 $ & $ 0.0183 $ & $ 0.1041$ & $ 0.1771$ & $0.0117 $ & $ 0.0666 $ & $ 0.3986$ & $0.0359 $ & $ 0.1698$ \\ \hline
& \multicolumn{9}{c|}{Medium size of training data } \\ \hline
1-L & $0.2145 $ & $0.0071 $ & $ 0.0413$ & $ 0.1153$ & $0.0072 $ & $0.0587 $ & $ 0.0655$ & $0.0029 $ & $ 0.0244$ \\ \hline
3-L & $0.1647 $ & $ 0.0069$ & $0.0419 $ & $ 0.0985$ & $0.0082 $ & $ 0.0423 $ & ${\bf 0.0623} $ & ${\bf 0.0046 }$ & ${\bf0.0287 } $ \\ \hline
5-L & $0.2971 $ & $0.0271 $ & $ 0.1223 $ & $0.0325$ & $ 0.0014$ & $0.0081 $ & $ 0.0736$ & $0.0064 $ & $0.0414 $ \\ \hline
& \multicolumn{9}{c|}{Large size of training data } \\ \hline
1-L & $0.1417 $ & $0.0089 $ & $ 0.0481 $ & $ 0.0920$ & $0.0040 $ & $0.0266 $ & $0.0447 $ & $0.0009 $ & $ 0.0055 $ \\ \hline
3-L & $0.0566 $ & $0.0020 $ & $ 0.0126 $ & $ 0.0467$ & $0.0024 $ & $0.0122 $ & ${\bf 0.0076 }$ & ${\bf 0.0004} $ & ${\bf 0.0020 }$ \\ \hline
5-L & $ 0.1239$ & $0.0070 $ & $ 0.0435$ & $0.2135$ & $0.0098 $ & $ 0.0645 $ & $0.0192 $ & $0.0018 $ & $ 0.0115 $ \\ \hline\addlinespace[5pt]
\multicolumn{10}{c}{ Using the same noisy data $g$ (Gaussian noise of mean zero and standard deviation $0.1$) with $\alpha=0.001$ in all the tests} \\
\end{tabular}}
\caption{Optimal control with learning-informed PDEs using different layers, different size of networks, and a variety of training data.} \label{tab:result_op_control}
\end{center}
\end{table}
In general, we observe in Table \ref{tab:result_op_control} that most combinations give similar results. This shows the robustness of our proposed method with respect to a wide range of network architectures.
Here, the presented errors are computed for one specific initialization only.
Note that when using $3$-hidden-layer networks with large DoF, we observe a clear increase in the levels of accuracy for both the control and state variables as the training data increase from small to large size. These are highlighted with bold font numbers in Table \ref{tab:result_op_control}.
A similar behavior occurs for $1$-hidden-layer and $5$-hidden-layer networks.
By fixing the $3$-hidden-layer networks, and for each case of DoF provided in Table \ref{tab:result_op_control}, we are next interested in exploring how the noise level $\hat\sigma$ and the cost parameter $\alpha$ further influence the optimal control approximation.
\begin{table}[!ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|lll|lll|lll|}
\hline
&\multicolumn{3}{c|}{Noise free} &\multicolumn{3}{c|}{Mild noise $\hat\sigma=0.05$}
&\multicolumn{3}{c|}{Larger noise $\hat\sigma=0.5$} \\ \hline
& $\norm{u_\mathcal{N}-\bar{u}}_0$ & $\norm{y_\mathcal{N}- \bar{y}}_0$ & $ \abs{y_\mathcal{N}- \bar{y}}_1$ & $\norm{u_\mathcal{N}-\bar{u}}_0$ & $\norm{y_\mathcal{N}- \bar{y}}_0$ & $ \abs{y_\mathcal{N}- \bar{y}}_1$ & $\norm{u_\mathcal{N}-\bar{u}}_0$ & $\norm{y_\mathcal{N}- \bar{y}}_0$ & $ \abs{y_\mathcal{N}- \bar{y}}_1$ \\ \hline
& \multicolumn{9}{c|}{$\alpha =0.00001$} \\ \hline
3-L-S NN & $1.9523 $ & $0.0210 $ & $0.2041 $ & $1.9518$ & $0.0210 $ & $0.2043$ & $ 2.1480$ & $0.0213 $ & $ 0.2085 $ \\ \hline
3-L-M NN & $ 0.1187$ & $ 0.0018$ & $0.0253 $ & $ 0.1190$ & $0.0018 $ & $0.0253 $ & $0.1264 $ & $0.0018 $ & $ 0.0254 $ \\ \hline
3-L-L NN & $ 0.0213$ & $0.0004 $ & $0.0046 $ & $ 0.0215$ & $0.0004 $ & $ 0.0046 $ & $ 0.0258$ & $0.0004 $ & $ 0.0047$ \\ \hline
& \multicolumn{9}{c|}{$\alpha=0.0001$} \\ \hline
3-L-S NN & $1.3489 $ & $0.0395 $ & $ 0.2695$ & $ 1.3560$ & $0.0397 $ & $0.2705 $ & $ 1.4181$ & $0.0410 $ & $ 0.2796$ \\ \hline
3-L-M NN& $0.1361 $ & $ 0.0032$ & $0.0314 $ & $ 0.1357$ & $0.0032 $ & $ 0.0314 $ & $0.1384 $ & $ 0.0032$ & $0.0315 $ \\ \hline
3-L-L NN & $0.0137$ & $0.0005 $ & $ 0.0039 $ & $0.0136 $ & $ 0.0005 $ & $0.0039 $ & $ 0.0136 $ & $0.0005 $ & $0.0039 $ \\ \hline
& \multicolumn{9}{c|}{$\alpha=0.001$ } \\ \hline
3-L-S NN & $0.3903 $ & $0.0350 $ & $ 0.1706 $ & $ 0.3917 $ & $0.0352 $ & $0.1714 $ & $0.4067 $ & $0.0371 $ & $ 0.1792 $ \\ \hline
3-L-M NN & $0.0628 $ & $0.0046 $ & $ 0.0286 $ & $ 0.0630$ & $0.0046 $ & $0.0286 $ & $0.0671 $ & $0.0046 $ & $ 0.0293 $ \\ \hline
3-L-L NN & $ 0.0076$ & $0.0004 $ & $ 0.0020$ & $0.0076$ & $0.0004 $ & $ 0.0020 $ & $0.0080 $ & $0.0004 $ & $ 0.0021 $ \\ \hline
& \multicolumn{9}{c|}{$\alpha=0.01$ } \\ \hline
3-L-S NN & $0.0570 $ & $0.0066 $ & $ 0.0209 $ & $ 0.0572$ & $0.0066 $ & $0.0210 $ & $0.0592 $ & $0.0069 $ & $ 0.0217 $ \\ \hline
3-L-M NN & $0.0271 $ & $0.0020 $ & $ 0.0080 $ & $ 0.0271$ & $0.0021 $ & $0.0081 $ & $0.0277 $ & $0.0022 $ & $ 0.0083 $ \\ \hline
3-L-L NN & $ 0.0035$ & $0.0003 $ & $ 0.0008$ & $0.0035$ & $0.0003 $ & $ 0.0008 $ & $0.0035 $ & $0.0003 $ & $ 0.0008 $ \\ \hline\addlinespace[5pt]
\multicolumn{10}{c}{ Varying levels of noise in $g$ for different $\alpha$, and coarser to finer neural networks} \\
\end{tabular}}
\caption{Optimal control with learning-informed PDEs using $3$-hidden-layer networks of different sizes (DoF), for varying noise levels and cost parameters $\alpha$.}
\label{tab:result_op_control_2}
\end{center}
\end{table}
From Table \ref{tab:result_op_control_2} we draw several interesting conclusions. In both the noisy and the noise-free case, the error $\norm{u_\mathcal{N}-\bar{u}}$ is proportional to the approximation error of the neural network and inversely proportional to $\sqrt{\alpha}$.
This verifies the results of Theorem \ref{thm:error_bound} and Theorem \ref{thm:error_bound2}, respectively.
The dependence on $\alpha$ could only be proved for the noise-free case in Theorem \ref{thm:error_bound}.
The rates observed in our tests therefore suggest that better convergence rates, or the same rates under more relaxed assumptions, appear plausible.
\subsection{Numerical results on optimal control of stationary Allen-Cahn equation}
Next we study the optimal control of the Allen-Cahn equation, which involves a nonmonotone $f$ and reads
\begin{equation} \label{eq:Allen_Cahn}
-\Delta y+ \frac{1}{\eta}(y^3-y) =u\quad \text{ in }\; \Omega ,\quad \partial_\nu y=0 \quad \text{ on }\; \partial \Omega,
\end{equation}
with $\eta>0$.
In our numerical tests, we set $\eta=0.004$, use $\Omega=(0,2)^2$, and $h:=2^{-7}$.
We focus on $3$-hidden-layer neural networks with $10$, $12$ and $10$ neurons per layer yielding DoF$=293$. In each hidden layer we use log-sigmoid transfer functions.
Note also that since the input data here do not depend explicitly on the spatial variable $x$, i.e., $f=f(y)$, both the input and the output layer have only one neuron each. This is different from the previous test examples.
In our tests, we obtained the training data by solving the PDE in \eqref{eq:Allen_Cahn} with
\[ u=u^d:=\left\{\begin{aligned}
1000, & \quad x\in \Omega^l:=(0,2)\times (0,1),\\
-1000, & \quad x\in \Omega\setminus\Omega^l.
\end{aligned} \right. \]
In order to train the neural networks described above, the solution of the PDE is subsampled uniformly at a rate of $0.25$, that is $H=0.08$.
As $f$ has a one-dimensional domain, it suffices that the data $u^d$ correspond to a PDE solution whose values cover a relatively wide range. Indeed, for our choice of $u^{d}$, the corresponding solution $y$ varies between $-2.5$ and $2.5$, which turns out to be sufficient for learning $f$.
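For reference, the Allen-Cahn nonlinearity and the training control $u^d$ can be evaluated as follows (Python sketch with the parameter values of our tests):

```python
import numpy as np

ETA = 0.004   # interface parameter eta used in the tests

def f_allen_cahn(y, eta=ETA):
    """Nonmonotone Allen-Cahn nonlinearity f(y) = (y^3 - y) / eta."""
    return (y**3 - y) / eta

def u_d(x1, x2):
    """Piecewise constant training control: +1000 on the lower half
    (0,2) x (0,1) of the domain, -1000 on the upper half."""
    return np.where(x2 < 1.0, 1000.0, -1000.0)
```

The roots of $f$ at $y\in\{-1,0,1\}$ correspond to the two stable material states and the unstable intermediate state of the double-well potential $F$.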
\begin{figure}[h!]
\begin{minipage}[t]{0.32\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/doub_well_F.tex}}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/non_monotone_f.tex}}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/non_monotone_f_prime.tex}}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/doub_well_F_zoom.tex}}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/non_monotone_f_zoom.tex}}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/non_monotone_f_prime_zoom.tex}}
\end{minipage}
\caption{Functions $F$ and $f$, and the first-order derivative $f'$, along with the corresponding approximations learned by a neural network. We note that the range of the learning-informed function is influenced by the training data. The second row of images shows that the functions are well approximated by their neural network counterparts on the ranges well covered by the training data, here approximately the interval $[-2,2]$.}
\label{fig:non_monotone_f}
\end{figure}
In Figure \ref{fig:non_monotone_f}, we provide the plots of $F(z)=\int_{-1}^{z} f(t)\,dt$, the function $f$ and its derivative $f^\prime$ on $[-K,K]\subset \mathbb{R}$, ($K=10$ and $K=2$, respectively) as well as their learned counterparts.
We observe that all the learning-informed versions preserve the key features of their exact counterparts very well. This is due to the fact that the training data cover exactly those ranges where important features are located.
As a next step, we consider the corresponding optimal control problem when the function $f$ is replaced by its learned version.
Notice that neither the original nor the learning-informed PDE admits a unique solution. Therefore the initial guess for the Newton iteration is crucial for convergence to the final solutions.
The algorithm for solving the optimal control problem for both PDEs is a combination of the semi-smooth Newton algorithm for \eqref{eq:stationary1} (with $0$ as the initial guess) and the SQP algorithm.
The switch between the solvers operates as follows: Consider the summed up residual of
the four equations in \eqref{eq:stationary1} with respect to their norms in the spaces $H^{-1}(\Omega)$, $H^{-1}(\Omega)$, $L^{2}(\Omega)$ and $L^{2}(\Omega)$, respectively. Then
we start our algorithm by calling the semi-smooth Newton iterations, and when the residual drops below a threshold value (e.g., $5$ in our tests), then we switch to the SQP algorithm. The iteration is stopped if the residual is smaller than $10^{-10}$, or a maximum of $30$ iterations is reached.
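The switching logic between the two solvers can be sketched as follows. In this Python illustration the step maps and the residual evaluation are placeholders for the SSN update, the SQP update, and the summed residual of \eqref{eq:stationary1}, respectively.

```python
def hybrid_solve(ssn_step, sqp_step, residual, phi0,
                 switch_tol=5.0, tol=1e-10, max_iter=30):
    """Hybrid driver: apply semi-smooth Newton steps while the summed residual
    of the first-order system is at least switch_tol, then switch to SQP steps;
    stop once the residual drops below tol or max_iter is reached.
    ssn_step/sqp_step map an iterate to its successor; residual maps an
    iterate to the summed residual norm. All callables are placeholders."""
    phi = phi0
    for _ in range(max_iter):
        r = residual(phi)
        if r < tol:
            break
        phi = ssn_step(phi) if r >= switch_tol else sqp_step(phi)
    return phi
```

With contractive toy step maps one can check that the driver indeed switches at the threshold and terminates below the tolerance.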
We fix $\alpha= 10^{-5}$ and
$\mathcal{C}_{ad}:=\set{u: -50\leq u \leq 50}$. Next consider $g$ to be some polarized data preferring the values $-1$ and $1$ and representing two distinct material states, e.g., a binary alloy; see Figure
\ref{fig:Neumann_Allen_Cahn_optimal_control}.
\begin{figure}[h!]
\begin{minipage}[t]{0.48\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/Neumann_Allen_Cahn_merit.tex}}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\centering
\resizebox{0.95\textwidth}{!}{
\input{figures/Neumann_Allen_Cahn_resi_norm.tex}}
\end{minipage}
\caption{Merit function (left) and residual norm (right).}
\label{fig:Neumann_merit_function}
\end{figure}
In Figure \ref{fig:Neumann_merit_function}
we show the plots of the merit function values and also the overall residual of the first-order system in \eqref{eq:stationary1}.
The increase during the first few steps in the left plot (merit function) is due to the initialization by SSN, for which the full step length is accepted. We notice that the threshold is reached within $10$ overall iterations, including the SSN initialization steps.
\begin{figure}[h!]
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_state_nn.png}
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_state_or.png}
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_state_noisy.png}\\ \includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_state_nn_error.png}
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_state_or_error.png}
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_state_diff.png}\\
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_Control_nn.png}
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_Control_or.png}
\includegraphics[width=0.32\textwidth]{./figures/Neumann_Allen_Cahn_Control_diff.png}
\caption{Optimal control of the stationary Allen-Cahn equation. First row: states (left and middle: optimal states of the learning-informed and the original PDE, respectively; right: target data $g$). Second row: difference images (left and middle: absolute differences of the optimal states to the target state $g$; right: difference between the two optimal states, $\abs{y_{\mathcal{N}}-\bar{y}}$, from the first row). Third row: optimal controls corresponding to the learning-informed and the original PDE (left and middle, respectively), and their difference $\abs{u_{\mathcal{N}}-\bar{u}}$ (right).}
\label{fig:Neumann_Allen_Cahn_optimal_control}
\end{figure}
Since neither the optimal control problem nor the PDE admits a unique solution, the presence of many local minima makes the semi-smooth Newton algorithm rather sensitive to the initial guess.
Concerning SQP, we note that enforcing the PDE and the box constraints too strongly in the early iterations may result in the SQP algorithm getting trapped at an unfavorable stationary point. This has been observed numerically, e.g., when initializing the SQP algorithm with zero.
In our tests, however, the combination of the semi-smooth Newton algorithm with the SQP algorithm turns out to be robust against these adverse effects.
From Figure \ref{fig:Neumann_Allen_Cahn_optimal_control} (right plot)
we observe that the solution of the learning-informed control problem approximates the solution of the original control problem with high accuracy. Both the PDE constraint and the box constraint are satisfied with high accuracy.
\section{Application: Quantitative magnetic resonance imaging (qMRI)}
\label{sec:appl_2}
Following \cite{DonHinPap19}, we consider the following optimization task in qMRI:
\begin{equation}
\label{eq:qMRI_optimal_control}
\begin{aligned}
&\text{minimize}\quad \frac{1}{2}\norm{P\mathcal{F}(y)-g^\delta}_{H}^{2} + \frac{\alpha}{2}\norm{u}^2_{U},\quad\text{over }(y,u:=(T_1,T_2,\rho) ^\top)\in Y\times U, \\
&\text{s.t.}\quad \frac{\partial y}{\partial t}(t) = y(t) \times \gamma B(t) - \left ( \frac{y_{1}(t)}{T_{2}}, \frac{y_{2}(t)}{T_{2}}, \frac{y_{3}(t)- \rho m_{e}}{T_{1}} \right ), \quad t=t_{1},\ldots, t_{L},\\
&\phantom{\text{s.t.}}\quad y(0)= \rho m_{0},\\
& \phantom{\text{s.t.}}\quad u\in \mathcal{C}_{ad},
\end{aligned}
\end{equation}
where $0< t_1<\ldots<t_L$, $L\in\mathbb{N}$, $u\in U:=[H^1(\Omega)]^3$ and $Y:=[L^2(\Omega)^{3}]^{L}$ with $\Omega\subset\mathbb{R}^2$ the image domain, $H=\left [L^2(\mathbb{K})^{2}\right]^{L}$ with $\mathbb{K}$ the Fourier space. By $\mathcal{F}: Y\to H$ we denote the component-wise Fourier transform acting on
$(y_{1}, y_{2})$, i.e., the first two coordinates of $y$, and $P:H\to H$ is a subsampling operator.
Further, $g^\delta=(g_{l}^\delta)_{l=1}^L\in H$ are (noisy) data, and $\mathcal{C}_{ad}$ is a nonempty, closed, convex, and bounded subset of
$ [L_{\epsilon}^\infty(\Omega)^{+}]^{3}$ with $L_{\epsilon}^\infty(\Omega)^{+}:=\{f\in L^{\infty}(\Omega):\; \mathrm{ess}\,\mathrm{inf} f>\epsilon\}, $ for some $\epsilon>0$,
which accounts for the practically relevant ranges of the physical quantities.
The system of ordinary differential equations in \eqref{eq:qMRI_optimal_control} with initial value $\rho m_0$ represents the renowned Bloch equations (BE), which model the evolution of nuclear magnetization in MRI \cite{Blo46} with the parameters $\gamma$ and $m_e$ being fixed constants. In our context, the external magnetic field $B$ is assumed to be a uniformly bounded function in time.
To accommodate different scaling, we consider $\frac{\mathbf{\alpha}}{2}\norm{u}^2_{U }:=\frac{\alpha_0}{2}\norm{u}^2_{[L^2(\Omega)]^3}+\frac{1}{2}\abs{u}^2_{[H^1(\Omega)]^3},$ and
\[ \abs{u}^2_{[H^1(\Omega)]^3 }:=\int_\Omega \left(\alpha_{1,1} \abs{\nabla T_1}^2 +\alpha_{1,2}\abs{\nabla T_2}^2 + \alpha_{1,3}\abs{\nabla \rho}^2\right)dx, \]
with $\alpha_0>0$ and $\alpha_{1,j}>0$ for $j=1,2,3$.
For the ease of presentation, below we omit these scaling parameters.
\begin{remark}\label{rem:Bloch_bounds}
One readily checks that the solutions to the BE are bounded uniformly as long as $T_1,T_2$ are positive values and the magnetic field $B(t)$ is bounded. This property persists if either of the two terms on the right hand side of the equation is missing.
\end{remark}
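As an illustration of the boundedness noted in the remark, the Bloch dynamics can be integrated numerically. The following Python sketch uses a classical RK4 scheme with an illustrative constant field; all parameter values here are hypothetical and do not correspond to an actual MRI pulse sequence:

```python
import numpy as np

def bloch_rhs(t, y, B, gamma, T1, T2, rho, m_e):
    """Right-hand side of the Bloch equation used in the text."""
    relax = np.array([y[0] / T2, y[1] / T2, (y[2] - rho * m_e) / T1])
    return np.cross(y, gamma * B(t)) - relax

def integrate_bloch(y0, B, gamma, T1, T2, rho, m_e, t_end, n=2000):
    """Classical RK4 time stepping for the Bloch equation."""
    h = t_end / n
    y, t = np.array(y0, float), 0.0
    for _ in range(n):
        k1 = bloch_rhs(t, y, B, gamma, T1, T2, rho, m_e)
        k2 = bloch_rhs(t + h / 2, y + h / 2 * k1, B, gamma, T1, T2, rho, m_e)
        k3 = bloch_rhs(t + h / 2, y + h / 2 * k2, B, gamma, T1, T2, rho, m_e)
        k4 = bloch_rhs(t + h, y + h * k3, B, gamma, T1, T2, rho, m_e)
        y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return y

# constant field along z: the magnetization relaxes towards (0, 0, rho*m_e)
B = lambda t: np.array([0.0, 0.0, 1.0])
y_end = integrate_bloch([0.0, 0.0, -1.0], B, gamma=1.0,
                        T1=0.5, T2=0.2, rho=1.0, m_e=1.0, t_end=5.0)
```

For this constant field the transverse components stay zero and the longitudinal component relaxes exponentially towards $\rho m_e$, so the trajectory remains uniformly bounded, in line with the remark.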
Fixing the external magnetic field $B$ according to an excitation protocol with a specific sequence of frequency pulses (cf., e.g., \cite{DonHinPap19}) and associated echo times $\{t_i\}_{i=1}^L$, we obtain the map $u\mapsto \{y(t_i)\}_{i=1}^L$, yielding the solution map $\Pi:\mathcal{C}_{ad}\to [(L^{\infty}(\Omega))^3]^L$. Using this notation, we have $Q(\cdot)=P\mathcal{F}(\Pi(\cdot))$. Noting that $\Pi(T_{1}, T_{2}, \rho)=\rho \Pi(T_{1}, T_{2},1)$, we first show continuity and differentiability results for $\tilde{\Pi}(\theta):=\Pi(T_{1}, T_{2},1)$, where $\theta:=(T_1,T_2)^\top$. Even though, for simplicity, we do this for $\theta\in [L_{\epsilon}^{\infty}(\Omega)^{+}]^{2}$ with $\epsilon>0$, we note that the map $\tilde{\Pi}$ can be continuously extended to $T_{1}=0$ and/or $T_{2}=0$.
\begin{proposition}\label{prop:continuity_Bloch}
The operator $\tilde{\Pi}: [ L_{\epsilon}^{\infty}(\Omega)^{+}]^2\to [(L^{\infty}(\Omega))^3]^L$ is locally Lipschitz continuous, and Fr\'echet differentiable with locally Lipschitz derivative.
\end{proposition}
\begin{proof}
Let $\theta,\theta^a\in [L_{\epsilon}^\infty(\Omega)^+]^2$ be given with associated solutions $y, y^a$ of the BE, respectively. Suppressing $x\in\Omega$ in our notation, subtracting the BE for both $\theta$ values, and letting $r^a:=y-y^a$ as well as $R(\theta) :=\operatorname{diag}(\frac{1}{T_2},\frac{1}{T_2},\frac{1}{T_1})$, we get
\begin{equation}\label{eq:diff_Bloch}
\frac{\partial r^a}{\partial t}(t) - r^a(t) \times \gamma B(t) +R(\theta) r^a =\left(R(\theta^a)-R(\theta)\right) (y^a(t)-(0,0,y_e)^\top) ,\;
r^a(0) = 0.
\end{equation}
This equation and its homogeneous counterpart (i.e., with zero right hand side) admit unique solutions, respectively, cf. \cite{Tes12}, for instance.
According to \cite[Theorem 3.12]{Tes12} the solution to \eqref{eq:diff_Bloch} is
\begin{equation}\label{eq:solution_diff_Bloch}
r^a(t)= \int_0^t \Phi(t,s) \left(R(\theta^a)-R(\theta)\right) (y^a(s)-(0,0,y_e)^\top )ds,
\end{equation}
where $ \Phi(t,s) $ is the principal matrix consisting of the three independent solutions of the homogeneous counterpart of \eqref{eq:diff_Bloch}
resulting from the initial data $h(s)=e_{i}$, $i=1,2,3$, with $\{e_1,e_2,e_3\}$ the canonical orthonormal basis in $ \mathbb{R}^{3}$. Note that it is easy to check that any such solution is uniformly bounded both in $t\ge 0$ and $\theta\ge0$ almost everywhere.
Since $R(\cdot)$ restricted to $[\epsilon,\infty)$ is Lipschitz (modulus $L>0$),
\eqref{eq:solution_diff_Bloch} can be further estimated as follows
\[ \abs{r^a(t)}\leq L\int_0^t |\Phi(t,s) (y^a(s)-(0,0,y_e)^\top )|ds \abs{\theta^a-\theta} \leq \tilde{L}(t) \abs{\theta^a-\theta},\]
for all $\theta^a,\theta \in [L_{\epsilon}^\infty(\Omega)^+]^2$. Note that the above estimate and in particular $\tilde{L}(t)$ can be considered independent of the spatial variable $x$ due to the uniform bound on the solution of BE for every element of $\mathcal{C}_{ad}$ (cf. Remark \ref{rem:Bloch_bounds}).
Therefore we have for some $L_{\Pi}>0$ that
\begin{equation*}
\|y^{a}(\cdot,t)-y(\cdot, t)\|_{[L^{q}(\Omega)]^{3}} \leq L_{\Pi} \| \theta^a-\theta\|_{[L^{q}(\Omega)]^{2}} \; \text{ for all } 1\le q\le \infty.
\end{equation*}
By considering the above estimate at $\{t_{i}\}_{i=1}^L$ we get the asserted local Lipschitz continuity of $\tilde{\Pi}$.
We now proceed to Fr\'echet differentiability.
Let $\theta\in [L_{\epsilon}^\infty(\Omega)^+]^2$, $v\in [L^\infty(\Omega)]^2$ be an arbitrary vector, and let $\theta^a= \theta+a v$ where $a >0$ is such that $\theta^a\in [L_{\epsilon}^\infty(\Omega)^{+}]^2$.
Dividing \eqref{eq:diff_Bloch} by $a$ and letting $p_\theta^a:=\frac{r^a}{a}$, we get:
\begin{equation}\label{eq:adjoint_Bloch}
\frac{\partial p_\theta^a}{\partial t}(t) - p_\theta^a(t) \times \gamma B(t) +R(\theta) p_\theta^a =\frac{\left(R(\theta^a)-R(\theta)\right)}{a} (y^a(t)-(0,0,y_e)^\top),\;\;
p_\theta^a(0) = 0.
\end{equation}
Existence, uniqueness, and a representation of the solution again follow from \cite[Theorem 3.12]{Tes12}:
\[ p_\theta^a(t)= \int_0^t \Phi(t,s) \frac{\left(R(\theta+ a v)-R(\theta)\right)}{a} (y^a(s)-(0,0,y_e)^\top )ds. \]
Recall that $R(\cdot)$ is continuously differentiable for $\theta >0$ and time independent. For $a\downarrow 0$ and $p_\theta:=\lim_{a \to 0} p_\theta^a$, we have
\[ p_\theta(t)= \int_0^t \Phi(t,s) R'(\theta;v) (y(s)-(0,0,y_e)^\top )ds , \]
where $R'(\theta;v)$ denotes the directional derivative of $R$ at $\theta$ in direction $v$.
By considering again the uniform boundedness with respect to the spatial variable and pointwise evaluation at $\{t_{i}\}_{i=1}^L$, we get that $p_\theta=\tilde{\Pi}'(\theta;v) $ is bounded, and also linear with respect to the direction $v\in [L^\infty(\Omega)]^2$.
Thus, $\tilde{\Pi}$ is Gateaux differentiable.
Notice further that, due to $R'(\cdot;v) $ being locally Lipschitz, we have also the local Lipschitz continuity (modulus $L_{p_\theta}>0$) of the directional derivative:
\begin{equation}\label{eq:p_norm_estimate}
\abs{p_{\theta^a}-p_\theta}^q \leq L^q_{p_\theta} \abs{\theta^a-\theta}^q\|v\|_{[L^\infty(\Omega)]^2} \;\text{ for all }\; \theta^a, \theta \in [L_{\epsilon}^\infty(\Omega)^+]^2, \text{ and } 1 \leq q \leq \infty,
\end{equation}
with the above estimate again independent of the spatial variable.
This together with the linearity of the Gateaux derivative implies the Fr\'echet differentiability of $\tilde{\Pi}$.
Finally we also conclude the Lipschitz continuity of the Fr\'echet derivative:
\begin{equation} \label{eq:B_prime_Lip}
\norm{(\tilde{\Pi}^\prime(\theta^a) -\tilde{\Pi}^\prime(\theta))v}_{[L^{\infty}(\Omega)]^{3L}} \leq L_{p_\theta} \norm{\theta^a -\theta}_{[L^{\infty}(\Omega)]^{2}}\norm{v}_{[L^{\infty}(\Omega)]^{2}}.
\end{equation}
This ends the proof.
\end{proof}
Note that the continuity and differentiability of $\Pi=\rho\tilde{\Pi}$ for $u\in \mathcal{C}_{ad}$ follows readily as $\rho\in L^\infty(\Omega)$. As a consequence, existence of a solution to \eqref{eq:qMRI_optimal_control} can be shown similarly to Proposition \ref{pro:existence_wsc}.
\begin{remark}
The estimate \eqref{eq:p_norm_estimate} indicates that for every $u=(\theta^\top,\rho)^\top\in \mathcal{C}_{ad}$, and $h \in [L^\infty(\Omega)]^2$ sufficiently small, we even have
\[ \norm{\tilde{\Pi}(\theta+h)-\tilde{\Pi}(\theta)- \tilde{\Pi}^\prime(\theta)h}_{[L^q(\Omega)]^{3L}}= \mathcal{O}(\norm{h}^2_{[L^q(\Omega)]^2} ) \quad \text{ for all } 1\leq q\leq \infty. \]
We also note that due to properties of the Bloch operator, we have that both $\tilde{\Pi}^\prime(\theta): [L^2(\Omega)]^2\to [L^2(\Omega)]^{3L}$ and $Q^\prime(u): [L^2(\Omega)]^3 \to [(L^2(\mathbb{K}))^{2}]^{L}$ are bounded linear operators, respectively, as soon as $u=(\theta^\top,\rho)^\top\in \mathcal{C}_{ad}$. In this sense, we consider in the following $\tilde{\Pi}^\prime(\theta)$ and $Q^\prime(u)$ to be elements in $\mathcal{L}([L^2(\Omega)]^2,Y)$ and $\mathcal{L}(U,H)$, respectively.
\end{remark}
We are now interested in finding a data-driven approximation $\Pi_{\mathcal{N}}(u) := \rho\mathcal{N}(T_1,T_2)$ of $\Pi$ and in solving the reduced problem
\begin{equation}
\label{eq:qMRI_nn}
\begin{aligned}
&\text{minimize}\quad \frac{1}{2}\norm{Q_{\mathcal{N}}(u)-g^\delta}_{H}^{2} + \frac{\alpha}{2}\norm{u}^2_{U},\quad\text{over }u\in U,\\
&\text{s.t. } \quad u=(T_{1}, T_{2}, \rho) ^\top\in \mathcal{C}_{ad},
\end{aligned}
\end{equation}
with $Q_\mathcal{N}(u)=P\mathcal{F}(\Pi_{\mathcal{N}}(T_1,T_2,\rho))$. Existence of a solution to \eqref{eq:qMRI_nn} can again be argued similarly to Proposition \ref{pro:existence_wsc}.
We finish this section with the corresponding approximation result.
\begin{proposition}
Let $\theta=(T_1,T_2)^\top$, $u=(\theta^\top,\rho)^\top \in \mathcal{C}_{ad}$. Assume the following error bounds in the neural network approximations
\[\norm{\mathcal{N}(\theta)-\tilde{\Pi}(\theta)}_{[L^{\infty}(\Omega)^{3}]^L}\leq \epsilon \quad \text{
and } \quad \norm{\mathcal{N}^\prime(\theta)-\tilde{\Pi}^\prime(\theta)}_{\mathcal{L}([L^2(\Omega)]^2,[L^{\infty}(\Omega)^{3}]^L)}\leq \epsilon_1. \]
Then we have
\begin{align}
\norm{Q(u)-Q_{\mathcal{N}}(u)}_H&\leq C\epsilon,\\
\norm{Q^\prime (u)-Q^\prime_{\mathcal{N}}(u)}_{\mathcal{L}(U,H)}&\leq C_1 \epsilon+C_2 \epsilon_{1},
\end{align}
for some positive constants $C$, $C_1$ and $C_2$ which are all independent of $\epsilon$ and $\epsilon_{1}$.
\end{proposition}
Before we commence with the proof, note that the above assumptions are plausible in view of $u\in \mathcal{C}_{ad}\subset [(L_\epsilon^\infty(\Omega))^+]^3$ and Theorems \ref{thm:function_app} and \ref{thm:deriv_app}.
\begin{proof}
The first estimate is straightforward from the definition of $Q$:
\begin{equation}
\norm{Q(u)-Q_{\mathcal{N}}(u)}_H =\norm{P\mathcal{F}(\rho (\mathcal{N}(\theta)-\tilde{\Pi}(\theta)))}_H\leq \norm{\rho (\mathcal{N}(\theta)-\tilde{\Pi}(\theta))}_{[L^2(\Omega)^{3}]^L} \leq C\epsilon,
\end{equation}
since $\mathcal{C}_{ad}\subset [L^\infty(\Omega)]^3$ is a bounded set.
To see the second estimate, notice that for every $v:=(v_{1}, v_{2}, v_{3})^\top \in [L^2(\Omega)]^3$,
\begin{equation}\label{eq:mri_derivative}
Q^\prime(u)v=P\mathcal{F}(v_1 \tilde{\Pi}(\theta)) + P\mathcal{F}(\rho \tilde{\Pi}^\prime(\theta) (v_2,v_3)^\top),
\end{equation}
and similarly for $Q_{\mathcal{N}}'$. Thus
\begin{align*}
\norm{(Q^\prime (u)-Q^\prime_{\mathcal{N}}(u))v}_H
\leq & C_1\norm{\mathcal{N}(\theta)-\tilde{\Pi}(\theta)}_{ [L^{\infty}(\Omega)^{3}]^{L}} \|v_{1}\|_{L^{2}(\Omega)}\\ & +C_2\norm{\mathcal{N}^\prime(\theta)-\tilde{\Pi}^\prime(\theta)}_{\mathcal{L}([L^2(\Omega)]^2,[L^{\infty}(\Omega)^{3}]^{L})} \|(v_{2}, v_{3})\|_{[L^2(\Omega)]^2},
\end{align*}
which ends the proof.
\end{proof}
Finally, we show the Lipschitz continuity of $Q$ and $Q'$. For the learning-informed versions this is done similarly. Using the isometric property of the Fourier transform and the triangle inequality, we get for every $u_a,u_b\in \mathcal{C}_{ad}$ and some $C\geq 1$:
\[\norm{Q(u_a) -Q(u_b)}_H
\leq C\left(\norm{\rho_a-\rho_b}_{L^2(\Omega)}+ \norm{\theta_a-\theta_b}_{[L^2(\Omega)]^2}\right). \]
Similarly, we estimate $\norm{(Q^\prime(u_a)- Q^\prime(u_b))v}_H$, assuming that $v$ has unit norm:
\[
\begin{aligned}
& \norm{(Q^\prime(u_a)- Q^\prime(u_b))v}_H\\
\leq& \norm{ P\mathcal{F}(v_1 (\tilde{\Pi}(\theta_a) - \tilde{\Pi}(\theta_b))) }_H
+\norm{P\mathcal{F}\left((\rho_a \tilde{\Pi}^\prime(\theta_a) -\rho_b \tilde{\Pi}^\prime(\theta_b))[v_2,v_3] \right)}_H\\
\leq& L_{\tilde{\Pi}}\norm{\theta_a-\theta_b}_{[L^2(\Omega)]^2} + \norm{\rho_a-\rho_b}_{L^\infty(\Omega)} +L_{p_\theta}\norm{\rho_b}_{L^\infty(\Omega)} \norm{\theta_a-\theta_b}_{[L^2(\Omega)]^2}.
\end{aligned}
\]
Here, we use the fact that $\mathcal{F}$ is a unitary operator, $\| \tilde{\Pi}(\theta)\|_{[L^\infty(\Omega)^{3}]^{L}}$ is uniformly bounded, and $L_{\tilde{\Pi}}$ and $L_{p_\theta}$ are the Lipschitz constants of $\tilde{\Pi}(\theta)$ and $\tilde{\Pi}^\prime(\theta)$, respectively.
\subsection{Numerical algorithm}
For the numerical solution of the reduced optimization problem associated with the present qMRI problem, we adapt the SQP method, i.e., Algorithm \ref{alg:SQP}, from the previous application to the qMRI setting. The only difference is that the Newton iterations in step $(a1)$ there are not needed.
Recall that now we have $u=(T_1,T_2,\rho)^\top$. In comparison to the previous PDE examples, the sensitivity of the reduced objective functional in \eqref{eq:qMRI_nn} is directly available as
\begin{equation}\label{eq:stationary_mri}
\mathcal{J}'_{\mathcal{N}}(u)=(\rho (\mathcal{N}^\prime(T_1,T_2))^\ast,\mathcal{N}(T_1,T_2))^\top \mathcal{F}^\ast(\mathcal{F} (\rho \mathcal{N}(T_1,T_2)) -g)+\alpha(\text{Id}-\Delta) ( T_1, T_2, \rho)^\top.
\end{equation}
Further, in every QP-step one is confronted with solving
\begin{equation}\label{eq:SQP_MRI}
\begin{aligned}
\text{minimize}& \quad \langle \mathcal{J}_{\mathcal{N}}'(u_k),h\rangle_{U^{\ast}, U} + \frac{1}{2}\langle H_k(u_k)h, h \rangle_{U^{\ast}, U}\quad\text{over }h\in U\\
\text{s.t.} & \quad u_k+h \in \mathcal{C}_{ad},
\end{aligned}
\end{equation}
where now $H_k(u_k)$ is the following symmetrized version of the Hessian of $ \mathcal{J}_{\mathcal{N}}$ at $u_k \in \mathcal{C}_{ad}$:
\[
\begin{array}{ll}
(\rho (\mathcal{N}^\prime(T_1,T_2))^\ast,\mathcal{N}(T_1,T_2))^\top \mathcal{F}^\ast \mathcal{F} (\rho (\mathcal{N}^\prime(T_1,T_2)),\mathcal{N}(T_1,T_2)) +\alpha( \text{Id}-\Delta) .
\end{array}
\]
In the following tests, we choose $\mu_0=1$, $\epsilon=10^{-5}$, $r=0.618$, $\kappa=10^{-3}$, and $\xi =0.5$. We stop the SQP iteration when the norm of the residual of the first-order optimality system drops below a user-specified threshold value of $10^{-3}$ or a maximum of $40$ iterations is reached. The regularization parameter is $\alpha_0=[1,1,1]\times 10^{-10}$ for the $L^2$ part of the regularization functional in \eqref{eq:qMRI_nn}, and $\alpha_1 = [1,20,2]\times10^{-9}$ for the $H^1$ seminorm part in \eqref{eq:qMRI_nn}, with respect to $T_1$, $T_2$, $\rho$, respectively. The parameter $c$ in the complementarity constraint is chosen to be $10^{9}\alpha_1$ in the numerical tests, which differs from the previous examples. The values of all remaining parameters in Algorithm \ref{alg:SQP} not explicitly mentioned here are kept the same as in the previous tests. We note that, due to the analytical structure of the problem, the primal-dual active set algorithm for this example is equivalent to an SSN approach only in the discretized setting. We refer to \cite{HiKu-PathII} for a path-following SSN solver which works in function space upon Moreau-Yosida regularization of the indicator function of the constraint set.
\subsection{Numerical results on qMRI}
For the generation of the training data, we use the explicit Bloch dynamics of \cite{DavPuyVanWia14} where a specific pulse sequence with acronym IR-bSSFP (short for {\it Inversion Recovery balanced Steady State Free Precession}) is considered.
Let $(M_l)_{l=1}^L$ denote the pertinent explicit solution. This yields $\Pi(u)=\rho (M_l(T_1,T_2))_{l=1}^{L}$, with $u=(T_1,T_2,\rho)^\top$.
The MRI tests are implemented based on an anatomical brain phantom, publicly available from the Brain Web Simulated Brain Database \cite{Brainweb,Col_etal_98}.
We use a slice with $217\times 181$ pixels from this database and cut some of the zero fill-in pixels so that we finally arrive at a $181\times 181$-pixel image. The selected range for $u$ reflects natural values encountered in the human body. This gives rise to the box constraint $\mathcal{C}_{ad}:=\{u= (T_1,T_2,\rho)^\top: T_1\in (0, 5000), T_2\in (0,1800), \rho\in(0,6000)\}$.
In Figure \ref{fig:ideal_solutions}, we show the images from the brain phantom for ideal parameter maps $T_1$, $T_2$ and $\rho$.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{./figures/MRI_exact_solution.png}
\caption{ Simulated ideal tissue parameters of a brain phantom.}
\label{fig:ideal_solutions}
\end{figure}
\paragraph{Loss function and training method} For each residual between two neighboring images in the time series, we use the mean squared error as the loss function and the Bayesian regularization algorithm based on the Levenberg-Marquardt method for the training of the residual neural networks (DRNNs) described below. The learning algorithm and its settings are the same as in the previous examples.
\paragraph{Architecture of the network}
In order to approximate the Bloch solution map, we use Direct Residual Neural Networks (DRNNs). Here, the solution map at a given time is approximated by a neural network depending only on the initial condition $M_0$. To explain this in detail, let $\hat{M}$ be the learned approximation of $M$, i.e., $\hat{M}_{l}(T_{1}, T_{2})\simeq M_{l}(T_{1}, T_{2})$, $l=1,\ldots, L$. The DRNN framework then reads:
\begin{equation}
\label{eq:dir_res_nn}
\hat{M}_{l}(T_{1}, T_{2})=\hat{M}_{0}(T_{1}, T_{2}) + \mathcal{N}_{\Theta_l}(T_{1}, T_{2}),\quad l=1,\ldots, L,\quad
\hat{M}_{0}(T_{1}, T_{2})=M_0,
\end{equation}
with sub-networks $\{\mathcal{N}_{\Theta_{l}}\}_{l=1}^{L}$.
The map $(M_l)_{l=1}^L$ is then simply approximated by the map $(M_0 + \mathcal{N}_{\Theta_l})_{l=1}^{L}$.
We use sub-networks with 1, 2, or 3 hidden layers. In each case, we design the layer widths so that the total number of degrees of freedom in $\Theta$ is essentially the same.
The detailed description is summarized in Table \ref{tab:net_arc_MRI}.
In total, we test $9$ different architectures. For every network, we use the 'softmax' activation function in the layer next to the output layer, and the 'logsigmoid' function in all other hidden layers.
The difference from the previous optimal control examples is that here the architecture applies to each sub-network, which is of residual type as described above.
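The forward pass of such a DRNN can be sketched as follows (a Python/NumPy sketch with random, untrained weights, purely to illustrate the structure; the helper names are ours, and the activations mimic MATLAB's \verb+logsig+ and \verb+softmax+ as described above):

```python
import numpy as np

def logsig(x):
    """Logistic sigmoid, as in MATLAB's 'logsig'."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def subnet(theta, params):
    """One sub-network N_{Theta_l}: softmax in the layer next to the
    output layer, logsig in all other hidden layers."""
    h = theta
    *hidden, (W_out, b_out) = params
    for i, (W, b) in enumerate(hidden):
        z = W @ h + b
        h = softmax(z) if i == len(hidden) - 1 else logsig(z)
    return W_out @ h + b_out

def drnn(theta, M0, all_params):
    """Direct residual network: M_hat_l = M0 + N_{Theta_l}(T1, T2)."""
    return [M0 + subnet(theta, p) for p in all_params]

# toy 1-hidden-layer sub-networks (L = 3 echoes, 2 inputs, 2 outputs)
rng = np.random.default_rng(0)
L, width = 3, 24
params = [[(rng.standard_normal((width, 2)), rng.standard_normal(width)),
           (rng.standard_normal((2, width)), rng.standard_normal(2))]
          for _ in range(L)]
M_hat = drnn(np.array([0.3, -0.5]), np.zeros(2), params)
```

Each sub-network maps the pair $(T_1, T_2)$ to a residual that is added to the initial magnetization, one sub-network per echo time.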
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.0}
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|llll|llll|llll|}\hline
& HL 1 & HL 2 & HL 3 & DoF & HL 1 & HL 2 & HL 3 & DoF & HL 1 & HL 2 & HL 3 & DoF \\
\hline
&\multicolumn{4}{c|}{Small DoF}&\multicolumn{4}{c|}{Medium DoF}&\multicolumn{4}{c|}{Large DoF}\\
\hline
1-L-NN & 24 & - & - & 122& 75 & - & - & 377& 130 & - & - & 652 \\
2-L-NN & 7 & 10 & - & 123&17 & 16 & - & 373 &23 & 22 & -& 643 \\
3-L-NN &5 & 8 & 5& 120&10 & 15 & 10 & 377&15 & 18 & 15& 650 \\
\hline
\end{tabular}}\\
\vspace{5pt}
\caption{The architecture of every sub-network. Both input and output layers have two neurons.}\label{tab:net_arc_MRI}
\end{center}
\end{table}
\paragraph{Training and validation data}
The training data, including the validation data, are generated from the dictionary that has been used in methods for magnetic resonance fingerprinting (MRF), e.g., \cite{DavPuyVanWia14, Ma_etal13}.
These are time series resulting from dynamics such as IR-bSSFP, introduced in \cite{Sche99}, with initial value $M_0=(0,0,-1)$. We fix the length of the pulse sequence to be $L=20$.
Of course, other numerical simulations of the Bloch equations could also serve as sources of input-output training data.
We test each of the networks with architectures according to Table \ref{tab:net_arc_MRI} using three levels of training data, which we term 'small', 'medium' and 'large'.
For the small size training data, we generate parameter values for $(T_1,T_2)$ from $D_1:=(0:400:5000)$ and $D_2:=(0:100:1800)$ (in MATLAB notation), which contributes $247$ time series entries; for the medium size training data from $D_1:=(0:200:5000)$ and $D_2:=(0:50:1800)$, with a total of $962$ entries; and
for the large size data from $D_1:=(0:50:5000)$ and $D_2:=(0:20:1800)$, resulting in $9191$ entries in total.
The input data of the neural networks consist of elements of the set $D_1 \times D_2$.
Note here that we include $0$ for both $T_1$ and $T_2$, respectively, to take care of the marginal area in the imaging domain.
The output data will be the Bloch dynamics corresponding to each pair of elements in $D_1 \times D_2$.
Both input and output data are normalized to pairs whose elements take values in the range $[-1,1]$. This is done by the \verb+mapminmax+ function in MATLAB.
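The grid sizes quoted above can be reproduced with a short sketch (Python stand-ins for the MATLAB colon notation and for \verb+mapminmax+; the helper names are ours):

```python
import numpy as np

def grid_size(step1, step2):
    """Number of (T1, T2) training pairs for MATLAB-style grids
    D1 = 0:step1:5000 and D2 = 0:step2:1800 (endpoints included
    when the step divides the range)."""
    D1 = np.arange(0, 5000 + 1, step1)
    D2 = np.arange(0, 1800 + 1, step2)
    return len(D1) * len(D2)

def mapminmax(x, lo=-1.0, hi=1.0):
    """Row-wise min-max scaling to [lo, hi], mimicking MATLAB's
    mapminmax (assumes non-constant rows)."""
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    return (hi - lo) * (x - xmin) / (xmax - xmin) + lo

sizes = [grid_size(400, 100), grid_size(200, 50), grid_size(50, 20)]
# small / medium / large training sets
```

The three grid sizes correspond to the small, medium, and large training sets described in the text.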
For the SQP we consider the image domain to be $[0,1]\times [0,1]$, thus the spatial discretization size is $h=1/180$.
We compare the results of the learning-based method with results from the algorithm proposed in our previous work \cite{DonHinPap19}.
The initialization of the SQP algorithm, and also of the algorithm in \cite{DonHinPap19}, is done using the so-called BLIP algorithm of \cite{DavPuyVanWia14} with a dictionary resulting from the small size $D_1\times D_2$. The parameters are tuned as in \cite{DonHinPap19}.
Concerning the degradation of our image data, we consider Gaussian noise of mean $0$ and standard deviation $30$.
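This data degradation can be mimicked as follows (an illustrative Python sketch; the $25\%$ Cartesian mask shown here, which keeps every fourth row of k-space, is a hypothetical choice made only for illustration):

```python
import numpy as np

def sample_kspace(image, mask, sigma=30.0, rng=None):
    """Noisy, subsampled k-space data: g = P F(image) + noise, with
    complex Gaussian noise of mean 0 and standard deviation sigma
    per real/imaginary component on the kept samples."""
    rng = rng or np.random.default_rng(0)
    k = np.fft.fft2(image)
    noise = (rng.normal(0.0, sigma, k.shape)
             + 1j * rng.normal(0.0, sigma, k.shape))
    return mask * (k + noise)

# 25% Cartesian subsampling: keep every fourth row of k-space
img = np.ones((8, 8))
mask = np.zeros((8, 8))
mask[::4, :] = 1.0
g = sample_kspace(img, mask)
```

The mask plays the role of the subsampling operator $P$, and the Fourier transform that of $\mathcal{F}$ in \eqref{eq:qMRI_optimal_control}.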
\begin{table}[!ht]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|llll|llll|llll|}
\hline
&\multicolumn{4}{c|}{Small DoF} &\multicolumn{4}{c|}{Medium DoF}
&\multicolumn{4}{c|}{Large DoF} \\ \hline
& $T_1$ & $T_2$ & $\rho $ & $M(\theta)$ & $T_1$ & $T_2$ & $\rho$ & $M(\theta)$ & $T_1$ & $T_2$ & $\rho$ & $M(\theta)$ \\ \hline
& \multicolumn{12}{c|}{Small training data } \\ \hline
1 Layer NN & $0.084$ & $0.056$ & $0.004 $ & $0.016$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ - $ & $- $ \\ \hline
2 Layer NN & $0.093$ & $0.054$ & $0.005$ & $ 0.013 $ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ \\ \hline
3 Layer NN & $0.087$ & $0.052$ & $0.009$ & $ 0.012 $ &$ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ & $ -$ \\ \hline
& \multicolumn{12}{c|}{Medium training data} \\ \hline
1 Layer NN & $0.084$ & $0.058$ & $0.003 $ & $0.004$ &$ 0.089$ & $0.052 $ & $ 0.002 $ & $0.005 $ & $ -$ & $ -$ & $ -$ & $ -$ \\ \hline
2 Layer NN & $0.143 $ & $0.060 $ & $0.006 $ & $ 0.004 $ & $0.090$ & $0.052 $ & $0.005 $ & $ 0.003$ & $ -$ & $ -$ & $ -$ & $ -$ \\ \hline
3 Layer NN & $0.086$ & $0.051$ & $0.003$ & $ 0.004 $ & $0.087$ & $0.051 $ & $ 0.004$ & $0.002 $ & $ -$ & $ -$ & $ -$ & $ -$ \\ \hline
& \multicolumn{12}{c|}{Large training data} \\ \hline
1 Layer NN & $0.120$ & $0.078$ & $0.005$ & $0.002$ & $0.120 $ & $ 0.081 $ & $0.004 $ & $ 0.0014 $ & $ 0.090 $ & $0.050 $ & $ 0.004 $ & $ 0.0009 $ \\ \hline
2 Layer NN & $0.094 $ & $0.057 $ & $0.006 $ & $ 0.001 $ & $0.094 $ & $0.043$ & $ 0.002 $ & $0.002 $ & $0.089 $ & $ 0.056 $ & $0.004 $ & $ 0.0012 $ \\ \hline
3 Layer NN & $0.096$ & $0.059$ & $0.005$ & $ 0.0007 $ & $0.087 $ & $0.051 $ & $0.004 $ & $0.0004$ & $ 0.087 $ & $0.051 $ & $ 0.004 $ & $ 0.0006$ \\ \hline
Method \cite{DonHinPap19} & $0.102$ & $0.094$ & $ 0.004$ & $- $ & \multicolumn{4}{r|}{proposed Algorithm using exact Bloch} & $0.084$ & $0.051$ & $0.003$ & $-$ \\ \hline\addlinespace[5pt]
\multicolumn{13}{c}{For $25\%$ Cartesian subsampled k-space data with Gaussian noise of mean $0$ and standard deviation $30$.} \\ \multicolumn{13}{c}{\vspace{10pt} Relative error computed from $\frac{\norm{x-x^*}}{\norm{x^*}}$ for $x=T_1,\; T_2,\; \rho,\; M$ where $\norm{\cdot}$ is the discrete $2$-norm.}\\
\end{tabular}}
\caption{Error comparison for qMRI: Bloch maps learned by networks with different numbers of layers and neurons, and varying amounts of training data} \label{tab:MRIcomparison}
\end{center}
\end{table}
Concerning the results reported in Table \ref{tab:MRIcomparison}, the columns labeled $M(\theta)$ reflect the accuracy of the approximation of the discrete dynamical Bloch sequences by the various neural networks. A smaller value corresponds to a smaller error, in other words to a more accurate approximation. However, higher accuracy in the approximation of the Bloch solution operator does not necessarily result in a better estimation of the $T_1$, $T_2$ parameters.
In this context, note that, differently from the previous example, the error here is evaluated against the ideal solutions.
The dashes in Table \ref{tab:MRIcomparison} correspond to cases where the training data are not sufficient to guarantee adequate learning in the setting of our paper.
We observe that the results vary only slightly across network architectures and volumes of training data. In particular, we make the following observations: (i) When the training data are sufficiently rich and the number of hidden layers is fixed, a larger number of neurons yields a better approximation of the Bloch mapping. However, this does not necessarily translate into better parameter estimates in terms of the reported error rates. (ii) Small DoF networks with a small volume of training data already achieve almost the same accuracy as the medium and large DoF networks. The results are almost as good as those obtained using SQP with the exact Bloch solution formula. We have also observed that the SQP method with learning-based operators can be computationally more efficient than the one with the exact Bloch operators. This is due to the fact that evaluating the learning-based operator can be much cheaper than solving the exact physical model, although a learning process has to be performed beforehand.
In Figures \ref{fig:solutions} and \ref{fig:errors}, we provide a visual comparison of results from different methods for quantitative MRI. In particular, we compare with the method proposed in \cite{DonHinPap19}, which assumes knowledge of the exact Bloch solution map, and with the BLIP algorithm of \cite{DavPuyVanWia14}, in which a fine dictionary (i.e., a large size data set) is used.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{./figures/30_MRI_BLIP_solution.png}\\
\vspace*{1pt}
\includegraphics[width=\textwidth]{./figures/30E_MRI_LM_solution.png}\\
\vspace*{1pt}
\includegraphics[width=\textwidth]{./figures/40_MRI_SQP_solution.png}\\
\vspace*{1pt}
\includegraphics[width=\textwidth]{./figures/40E_MRI_SQP_solution.png}
\caption{Estimated tissue parameters from subsampled and noisy measurements.
First row: Solution using the BLIP method of \cite{DavPuyVanWia14} with a fine dictionary; Second row: Solution using the method in \cite{DonHinPap19}; Third row: Our SQP solution with the learning-informed model (small DoF, 1-hidden-layer residual networks, trained with medium size data);
Fourth row: Our SQP solution using the analytical formula for the Bloch solution map.}
\label{fig:solutions}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{./figures/30_MRI_BLIP_error.png}\\
\vspace*{1pt}
\includegraphics[width=\textwidth]{./figures/30E_MRI_LM_error.png}\\
\vspace*{1pt}
\includegraphics[width=\textwidth]{./figures/40_MRI_SQP_error.png}\\
\vspace*{1pt}
\includegraphics[width=\textwidth]{./figures/40E_MRI_SQP_error.png}\\
\caption{Relative errors of the estimated tissue parameters from subsampled and noisy measurements.
First row: Error map from BLIP \cite{DavPuyVanWia14} using a fine dictionary; Second row: Error map from \cite{DonHinPap19}; Third row: Error map for our SQP solution with the learning-informed model; Fourth row: Error map for our SQP solution with the exact formula for the Bloch map as in \cite{DonHinPap19}. All errors are normalized.}
\label{fig:errors}
\end{figure}
The images produced by the proposed algorithm with a learning-informed model are based on the $1$-hidden-layer network with small DoF, trained with medium volume data.
We can see that the proposed approach clearly gives better results for recovering the quantitative parameters than the methods in \cite{DonHinPap19} and BLIP \cite{DavPuyVanWia14}.
In particular, we observe that the $T_1$ and $T_2$ parameters estimated by the proposed method are significantly better than the results from the other two methods in terms of spatial regularity; some artifacts are thus avoided. This is due to using an $H^1$ term for $u$ in the objective, whereas the method in \cite{DonHinPap19}, for instance, uses an $L^2$ term only.
We note that the method in \cite{DonHinPap19} is superior only if the noise in the data is small. In that case, the learning-informed operator could also be applied, yielding results similar to those of the original method \cite{DonHinPap19}.
Since for real MRI experiments, the $k$-space data may be contaminated by different sources of noise, certain spatial regularization could help to stabilize solutions.
In this respect, the method proposed in this paper appears to be new to qMRI, since previous methods typically use pixel-wise estimation, for which spatial regularity is harder to enforce.
Along this line, one may consider more sophisticated regularization methods such as, e.g., total variation or total generalized variation regularization, to take care of spatial discontinuities. Such a study, however, is clearly beyond the scope of the present paper.
\section{Conclusion}
In this paper, we have proposed and analyzed a general optimization scheme for solving optimal control problems subject to constraints governed by learning-informed differential equations. The applications and numerical tests have verified the feasibility of the proposed scheme for two key applications. We envisage that our work will provide a fundamental framework for dealing with physical models whose underlying differential equation is partially unknown and thus needs to be learned from data, with the latter typically obtained from experiments or measurements. Our approach avoids learning the full model, i.e., learning directly the solution of the overall minimization problem, as this could be too complicated on the one hand and, on the other, could render the method more of a black-box solver. By learning only a component, i.e., a nonlinearity, or the solution map of the underlying differential equation, the method is kept more faithful to the true physics-based model.
An important factor for the applicability of the proposed framework is the learnability of the operator resulting from differential equations. We observed that the uniform boundedness of the range of the input and output data (state variable) played a crucial role, stemming from the fact that the density of neural networks holds in the topology of uniform convergence on compact sets. As we observed in the double-well potential example, learning the nonlinearity over its whole range is not necessarily needed, but only over the range in which the state variables lie, with this range being known due to a priori estimates. Indeed, in the stationary Allen-Cahn control problem, the learning is only performed over a very local part of the nonlinearity (the double-well part), giving an almost perfect result. This shows some potential for reducing the training load by properly analyzing the properties of the nonlinearities.
From the quantitative MRI example we furthermore observed that the embedding of the learned operator in the reconstruction process led to a reduction in the computational load, since it avoids a repetitive solution of the exact physical model.
A series of future studies arise from the present work. The analysis implemented here asks for smooth neural networks approximating (part of) the control-to-state map. A theory incorporating nonsmooth neural networks is an important extension as this will include networks with ReLU activation functions. Further studies can also incorporate the network structure (in the spirit of optimal experimental design) as well as aspects of the training process into the overall minimization process to further optimize and robustify the new technique.
Finally, the errors due to the early stopping of the numerical algorithm, as well as those from the numerical discretization, can be incorporated in the a priori error analysis. This could be of benefit for designing more suitable network architectures.
\subsection*{Acknowledgment}
The authors acknowledge the support of Tsinghua--Sanya International Mathematical Forum (TSIMF), as some of the ideas in the paper were discussed there while all the authors attended the workshop on ``Efficient Algorithms in Data Science, Learning and Computational Physics'' in January 2020.
\section{Introduction}
Three-dimensional higher-derivative theories of gravity have received considerable attention over the years. The first example of such a higher-derivative theory is the ``Topologically Massive Gravity'' (TMG) model \cite{Deser:1981wh}.
The TMG Lagrangian consists of the usual Einstein-Hilbert (EH) term, which by itself does not describe any degrees of freedom in three dimensions, and a Lorentz Chern-Simons (LCS) term which is parity-odd and third-order in the derivatives. The two terms together describe a single massive state of
helicity +2 or --2, depending on the relative sign between the EH and LCS terms. A more recent example is the ``New Massive Gravity'' (NMG) model \cite{Bergshoeff:2009hq}.
NMG is the parity even version of TMG and its Lagrangian contains besides the EH term a particular combination of two fourth-order derivative terms, of which one is quadratic in the Ricci tensor and the other is quadratic in the Ricci scalar. The NMG Lagrangian describes, unitarily, two massive
states of helicity +2 and --2. The signs in front of the kinetic terms corresponding to these two states are the same as a
consequence of the fact that the Lagrangian is parity even.
Recently, it was pointed out that the NMG model can be extended to four dimensions, at the linearized level, provided one describes the massive spin-2 state by a non-standard representation
corresponding to a mixed-symmetry Young tableau
with two columns of height 2 and 1, respectively \cite{Bergshoeff:2012ud}.
A similar extension does not apply to the TMG model. This can be understood as follows.
One may view TMG as the ``square root'' of NMG in the same way that one may view Topologically Massive Electrodynamics (TME) \cite{Siegel:1979fr}
as the ``square root'' of the Proca theory.
The latter property is based on the fact that the Klein-Gordon operator, when acting on divergence-free vectors, as it does in the 3D Proca equation, factorises into the product of two first-order operators each of which separately describes a single state of helicity $+1$ and $-1$ \cite{Bergshoeff:2009tb}.\,\footnote{Alternatively, one may act on vectors that are {\sl not} divergence-free. The product of the two first-order operators then leads to a modified Proca equation. Next, by taking the divergence of this modified equation one may derive that the vector is divergence-free.}
The equation of motion describing one of the two helicity states is a massive self-duality equation
\cite{Townsend:1983xs,Deser:1984kw}.
This property of the 3D Proca equation carries over to the 3D Fierz-Pauli (FP) equation, describing massive
spin-2 particles, where the Klein-Gordon operator acts on
a divergence-free symmetric tensor of rank 2. It also applies to 3D generalised FP equations, describing massive
particles of higher spin, where one considers
symmetric tensors of rank $p>2$ \cite{Bergshoeff:2011pm}.
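The factorisation property invoked here is easily checked in momentum space. The short script below is our own consistency check; the conventions $\eta=\mathrm{diag}(-1,1,1)$, $\varepsilon^{012}=+1$ and $\partial_\alpha\to ik_\alpha$ are choices on our part. It verifies that the square of the first-order operator $\varepsilon_\mu{}^{\alpha\rho}\partial_\alpha$ reproduces $\Box\,\delta_\mu^\nu$ on divergence-free 3D vectors, so that the product of the two first-order factors equals $\Box-m^2$:

```python
# Momentum-space check of the 3D factorisation underlying the Proca equation:
# on a divergence-free vector v (k.v = 0), the square of the first-order
# operator eps_mu^{alpha rho} d_alpha reproduces the d'Alembertian.
# Conventions (our choice): eta = diag(-1, 1, 1), eps^{012} = +1, d_a -> i k_a.
from itertools import permutations

eta = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]

def parity(p):
    # sign of a permutation via inversion count
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

eps = [[[0] * 3 for _ in range(3)] for _ in range(3)]  # eps^{abc}, upper indices
for p in permutations(range(3)):
    eps[p[0]][p[1]][p[2]] = parity(p)

k = [2, 3, 5]   # k_mu (lower index)
v = [4, 1, 1]   # v_mu with k^mu v_mu = -8 + 3 + 5 = 0 (divergence-free)
k2 = sum(eta[a][b] * k[a] * k[b] for a in range(3) for b in range(3))  # = 30

# A_mu^nu = eps_mu^{alpha nu} k_alpha  (first index lowered with eta)
A = [[sum(eta[m][a] * eps[a][al][n] * k[al] for a in range(3) for al in range(3))
      for n in range(3)] for m in range(3)]
# (A^2 v)_mu = A_mu^rho A_rho^nu v_nu
A2v = [sum(A[m][r] * A[r][n] * v[n] for r in range(3) for n in range(3))
       for m in range(3)]

print(A2v, [k2 * x for x in v])  # both equal [120, 30, 30]
```

Since $A^2v=k^2v$ on transverse vectors, and $\Box\to-k^2$ while $(\varepsilon\partial)^2\to-A^2$, the cross terms in $(\varepsilon\partial+m)(\varepsilon\partial-m)$ cancel and the product acts as $\Box-m^2$, exactly as stated above.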
The above property of the Klein-Gordon operator, when acting on 3D divergence-free vectors,
can be extended as follows. Consider a generalized Proca equation where the Klein-Gordon
operator acting on a divergence-free form-field of given rank gives zero. One can show that
in $D=4k-1$ dimensions this Klein-Gordon operator
factorizes into the product of two first-order operators provided the form-field is of rank $2k-1$.
Each of the two operators describes half of the helicity states that were described by the original generalized
Proca equation. For $k=1$ one obtains 3D 1-forms which we already discussed. The next case to consider is $k=2$
which leads to 3-forms in D=7 dimensions. The corresponding massive self-duality equation was encountered first
in the context of seven-dimensional gauged supergravity where the mass $m$
plays the role of the gauge coupling constant \cite{Townsend:1983xs}. The 7D Proca equation describes
20 degrees of freedom that transform as the ${\bf 10}^+ + {\bf 10}^-$ of the little group SO(6).
The ${\bf 10}^+$ and ${\bf 10}^-$ degrees of freedom are each separately described by the two massive self-duality
equations.\,\footnote{A similar factorisation of the Klein-Gordon operator, when acting on
divergence-free 5D 2-forms, requires that one
considers a Klein-Gordon operator with the wrong sign in front of the mass term \cite{Townsend:1983xs}.
Such a wrong sign can be avoided by considering a symplectic doublet of 2-forms and using the corresponding epsilon
symbol in the massive self-duality equation. This is very similar to extending Majorana spinors to
Symplectic Majorana spinors. We will not consider this possibility
further in this letter.}
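The general factorisation described in words above can be displayed as follows. This is our own transcription, written in the notation introduced below for the $k=2$ case; precise signs and normalisations depend on conventions. For a divergence-free rank-$(2k-1)$ form field in $D=4k-1$ dimensions,

```latex
\[
(\Box - m^2)\,\delta_{\bar{\mu}}^{\bar{\nu}}
  = \Big(\tfrac{1}{(2k-1)!}\,\varepsilon_{\bar{\mu}}{}^{\alpha\bar{\rho}}\partial_{\alpha}
         + m\,\delta_{\bar{\mu}}^{\bar{\rho}}\Big)
    \Big(\tfrac{1}{(2k-1)!}\,\varepsilon_{\bar{\rho}}{}^{\beta\bar{\nu}}\partial_{\beta}
         - m\,\delta_{\bar{\rho}}^{\bar{\nu}}\Big)\,,
\]
```

where $\bar\mu$ now stands for a collection of $2k-1$ antisymmetrised indices; $k=1$ reproduces the Proca/TME case discussed above, while $k=2$ reduces to the 7D expression given in the next section.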
As we will discuss in this letter the above property of the 7D Proca equation carries over to
generalised FP equations \cite{Curtright:1980yk,Labastida:1987kw,Bekaert:2002dt} in $D=4k-1$
dimensions where the Klein-Gordon operator acts on fields whose
indices are described by a GL(D,$\mathbb{R}$) Young tableau with an arbitrary number of columns each of which has
height $2k-1$. We are interested in models describing propagating massive
spin-2 particles that generalize, at the linearized level, the 3D
TMG model.\,\footnote{A different extension, which we will not consider here, is to add higher-derivative topological terms to the
Einstein-Hilbert term. Such an extension in 7D has been considered in \cite{Lu:2010sj}.}
Interpreting ``spin'' in higher dimensions as the
number of columns in the Young tableau that characterizes the index
structure of the field under consideration,\,\footnote{More precisely, for massless spins we
only consider two-column Young tableaux where the first column has a maximum number
of D$-$3 boxes. For massive spins the maximum number is D$-$2. The Young tableaux with more boxes describe either ``spin 1'' particles
or no degrees of freedom at all.} we are led to consider
7D fields $h_{\mu_1\mu_2\mu_3,\nu_1\nu_2\nu_3}$ whose index
structure is given by the following GL(7,$\mathbb{R})$ Young
tableau
\begin{equation} \label{youngsymm}
\begin{tabular}{l}{\footnotesize
\begin{Young}
$\mu_{1}$ & $\nu_{1}$ \cr
$\mu_{2}$ & $\nu_{2}$ \cr
$\mu_{3}$ & $\nu_{3}$ \cr
\end{Young}
}
\end{tabular} \,.
\end{equation}
In order to keep in line as much as possible with the construction
of the 3D TMG model, and, furthermore, to avoid writing down too
many indices, we will use a notation where $\bar{\mu}$ stands for a
collection of three antisymmetrized indices $\mu_1$, $\mu_2$ and
$\mu_3$, i.e.~$\bar{\mu} \ \leftrightarrow \ [\mu_1\, \mu_2 \,
\mu_3]$ or $h_{\bar\mu,\bar\nu} \equiv
h_{\mu_1\mu_2\mu_3,\nu_1\nu_2\nu_3}$. If we regard the field $h$ as
a field describing the propagation of a massive particle via a
generalised FP equation, the number of propagating degrees of
freedom equals the dimension of the irreducible, traceless, representation of the little
group SO(6), given by the same Young diagram \eqref{youngsymm}.
This leads to 70 propagating degrees of freedom which transform as
the $\mathbf{35}^+ + \mathbf{35}^-$ of SO(6). These two
representations are interchanged by the action of parity.
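The representation-theoretic counting quoted here can be verified with the standard dimension formula for a GL$(n,\mathbb{R})$ irrep described by a two-column Young tableau with columns of heights $p\geq q\geq 1$, namely $\dim=\binom{n}{p}\binom{n}{q}-\binom{n}{p+1}\binom{n}{q-1}$. The following script is our own consistency check and reproduces the numbers used for the tableau \eqref{youngsymm}:

```python
# Dimension of the GL(n) irrep given by a two-column Young tableau with
# column heights p >= q >= 1, via dim = C(n,p) C(n,q) - C(n,p+1) C(n,q-1).
from math import comb

def two_col_dim(n, p, q):
    return comb(n, p) * comb(n, q) - comb(n, p + 1) * comb(n, q - 1)

# the field h: columns of heights (3,3) in D = 7
dim_h = two_col_dim(7, 3, 3)                                 # 490 components
# traceless part of the corresponding SO(6) little-group representation:
# subtract the trace part, which sits in a (2,2)-column tableau of GL(6)
dim_massive = two_col_dim(6, 3, 3) - two_col_dim(6, 2, 2)    # 175 - 105 = 70
print(dim_h, dim_massive, dim_massive // 2)                  # 490 70 35
```

The 70 traceless components split under parity into the self-dual and anti-self-dual halves ${\bf 35}^+ + {\bf 35}^-$, as stated above; the sanity value $\dim=20$ for the Riemann-tensor symmetry ($n=4$, columns $(2,2)$) checks the formula itself.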
In the next section we wish to construct a parity-violating free 7D ``Topologically Massive Spin-2 Gauge Theory'' for the field $h$, such that 35 degrees of freedom are propagated. This theory is an analogue of
the 3D TMG model at the linearized level. The construction of this topologically massive gauge theory will proceed in the same fashion as
for the 3D TMG model. We will first consider the massive self-duality equation and, next, boost up the number of derivatives by solving the differential subsidiary conditions.
\section{The Model}
Our starting point are the generalised FP equations for a field ${\tilde h}$ with the symmetry properties
\eqref{youngsymm}. These equations consist of the Klein-Gordon equation
\begin{equation} \label{FP}
(\Box - m^2) {\tilde h}_{\bar{\mu},\bar{\nu}} = 0 \,,
\end{equation}
together with two subsidiary constraints, one algebraic and one differential:
\begin{equation}\label{FP2}
\eta^{\mu \nu} {\tilde h}_{\bar{\mu},\bar{\nu}} = 0 \,,\hskip 3truecm
\partial^\mu {\tilde h}_{\bar{\mu},\bar{\nu}} = 0 \,.
\end{equation}
We have used here a notation where the contraction of an unbarred index $\mu$ with a barred index $\bar{\mu}$ means that the index $\mu$ is contracted with the first index $\mu_1$ of the collection $\bar{\mu}$, e.g.
\begin{equation}
\partial^\mu {\tilde h}_{\bar{\mu},\bar{\nu}} = \partial^{\mu_1} {\tilde h}_{\mu_1\mu_2 \mu_3,\nu_1\nu_2\nu_3} \,.
\end{equation}
Note that the symmetry properties of ${\tilde h}$ imply that divergence-freeness on the first three indices of ${\tilde h}$ also implies divergence-freeness on the second three indices. One can show via an explicit counting that the two subsidiary constraints reduce the number of components of ${\tilde h}$ to 70 propagating degrees of freedom.
To obtain a massive self-duality equation for ${\tilde h}$ we use the property that the Klein-Gordon operator
$(\Box - m^2) \delta_{\bar{\mu}}^{\bar{\nu}}$, acting in the space of divergence-free 3-forms, can be factorized as follows
\begin{equation}
(\Box - m^2) \delta_{\bar{\mu}}^{\bar{\nu}} = \left(
\frac{1}{3!}\varepsilon _{\bar{\mu} }{}^{\alpha \bar{\rho}}\partial
_{\alpha }+m\delta _{\bar{\mu}}^{\bar{\rho} }\right) \left(
\frac{1}{3!}\varepsilon _{\bar{\rho}}{}^{\beta \bar{\nu} }\partial
_{\beta }-m\delta _{\bar{\rho}}^{\bar{\nu}}\right)\,.
\end{equation}
This suggests the following massive self-duality equation for
${\tilde h}$:
\begin{equation} \label{sqrtFP}
\left( \frac{1}{3!}\varepsilon _{\bar{\mu}}{}^{\alpha
\bar{\rho}}\partial _{\alpha }-m\delta
_{\bar{\mu}}^{\bar{\rho}}\right) {\tilde h}_{\bar{\rho},\bar{\nu} }=0 \,.
\end{equation}
A similar massive self-duality equation describing the parity transformed
degrees of freedom is obtained by replacing $m$ by $-m$.
Contracting the massive self-duality equation \eqref{sqrtFP} with $\partial^\mu$
leads to the divergence-freeness condition of ${\tilde h}$.
Furthermore, a contraction of the same
equation with $\eta^{\mu \nu}$, using the symmetry properties of ${\tilde h}$, proves
the tracelessness condition of ${\tilde h}$. The Schouten identity
shows that the tensor
$\varepsilon_{\bar{\mu}}{}^{\alpha\bar{\rho}}\partial_{\alpha }
{\tilde h}_{\bar{\rho},\bar{\nu}}$ has the same symmetry properties
as ${\tilde h}$ provided that ${\tilde h}$ is divergence-free and
traceless.
We next proceed by boosting up the derivatives of the above model
by solving the differential subsidiary condition that expresses that
${\tilde h}$ is divergence-free, see eq.~\eqref{FP2}.
This condition is solved in terms of a new field $h$, with the same index structure and symmetry properties as ${\tilde h}$,
by applying the Poincar\'e lemma for 3-forms twice:
first on the $\bar\mu$ indices of ${\tilde h}_{\bar\mu,\bar\nu}$ and then on the $\bar\nu$ indices of ${\tilde h}_{\bar\mu,\bar\nu}$.
One thus obtains the following solution
\begin{equation}\label{solsub}
{\tilde h}_{\bar\mu,\bar\nu} = G_{\bar\mu,\bar\nu}(h)\,,
\end{equation}
where the tensor $G_{\bar\mu,\bar\nu}(h)$ is defined by
\begin{equation} \label{diffinv}
G_{\bar{\mu},\bar{\nu}}( h) =\varepsilon _{\bar{\mu}}{}^{\alpha
\bar{\rho}}\varepsilon _{\bar{\nu}}{}^{\beta \bar{\sigma}}\partial _{\alpha
}\partial _{\beta }\,h_{\bar{\rho},\bar{\sigma}}\,.
\end{equation}
Using a Schouten identity, one can show that the tensor $G(h)$ has the same symmetry properties as $h$.
In terms of $h$ the massive self-duality
equation now reads
\begin{equation}\label{hdsdm}
\left( \frac{1}{3!}\varepsilon _{\bar{\mu}}{}^{\alpha
\bar{\rho}}\partial _{\alpha }-m\delta
_{\bar{\mu}}^{\bar{\rho}}\right) G_{\bar{\rho},\bar{\nu} }(h)=0\,.
\end{equation}
We note that the higher-derivative equations of motion in terms of $h$ are invariant under gauge transformations of $h$
with a gauge parameter $\xi$ that has a
symmetry structure corresponding to a Young tableau with two
columns, one of height 3 and one of height 2. Schematically, in
terms of Young tableaux, these gauge transformations are
given by, ignoring indices, $\delta h = \partial \xi$ or, in terms of Young tableaux, by
\begin{equation}\label{schem}
\delta\hskip -.3truecm
\begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& \cr
& \cr
\end{Young}
}
\end{tabular}
\ =\
\begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& \cr
& $\partial$\cr
\end{Young}
}
\end{tabular}\,.
\end{equation}
It is understood here that when taking the derivative of the gauge parameter at the right-hand-side one first takes the curl of
the two indices in the second column of the Young tableau describing the index structure of the gauge parameter,
and next applies a Young symmetrizer\,\footnote{A Young symmetrizer is an operator that projects onto the symmetries corresponding to
a given Young tableau. For the precise definition and its basic properties,
see e.g.~\cite{Fulton,Hamermesh}. Following the notation of
\cite{Francia:2005bv} a Young symmetrizer $Y_{[p,q]}$ is a
projection operator, $Y^2=Y$, that acts on a $(p,q)$ bi-form and
projects onto the part that corresponds to a two-column Young
tableau of height $p$ and $q$, respectively. When the bi-form is
already of the desired symmetry type it acts like the identity
operator. For instance, $Y_{[3,3]} h_{\bar\mu,\bar\nu} =
h_{\bar\mu,\bar\nu}$. }
to obtain the same
index structure at both sides of the equation.
The gauge-invariant
curvature $R(h)$ of $h$ is obtained by hitting $h$
with two derivatives: one which takes the curl of the first three
indices of $h$ and another which takes the curl of the second
three indices:
\begin{equation}
R_{\alpha\bar\rho,\beta\bar\sigma}(h) = \partial_{[\alpha}\partial^{[\beta} h_{\bar\rho],}{}^{\bar\sigma]}\,.
\end{equation}
This leads to a curvature tensor with an index
structure corresponding to a Young tableau with two columns of
height 4. By construction, this curvature tensor satisfies a
generalised Bianchi identity. The
tensor $G(h)$ defined above is obtained from the curvature $R(h)$ by taking the dual on the first 4 indices of $R(h)$
and a second dual on the second 4 indices. One thus obtains a tensor corresponding to a
Young tableau with two columns of height 3 each.
Due to the Bianchi identity of $R(h)$, the tensor $G(h)$ is divergence-free in each of its indices. We therefore call it the
``Einstein tensor'' of $h$.
Summarizing we have
\begin{equation}
{ h}\ = \hskip -.3truecm \begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& \cr
& \cr
\end{Young}
}
\end{tabular}
\ \ \ \rightarrow \ \ \
R(h)\ = \hskip -.3truecm \begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& \cr
& \cr
$\partial$&$\partial$\cr
\end{Young}
}
\end{tabular}
\ \ \ \rightarrow \ \ \ G(h)\ =\ {}^\star {}^\star R(h)\ = \hskip -.3truecm \begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& \cr
& \cr
\end{Young}
}
\end{tabular}\,.
\end{equation}
The equations of motion \eqref{hdsdm} for $h$ describe the same degrees of freedom as the original massive self-duality equation
\eqref{sqrtFP} for ${\tilde h}$. For instance,
the trivial solution ${\tilde h}=0$ of the massive self-duality equation \eqref{sqrtFP} is mapped under eq.~\eqref{solsub} to
the solutions of the equation $G_{\bar\mu,\bar\nu}(h)=0$\,.
Since the Einstein tensor $G(h)$ is the double dual of the curvature
$R(h)$ this equation implies that the curvature of $h$ is zero.
This in its turn implies that $h$ is a pure gauge degree of freedom \cite{Bekaert:2002dt}.
The equations of motion \eqref{hdsdm} define a 7D Topologically Massive Spin-2 Gauge Theory.
We note that these equations imply that the Einstein tensor of $h$ is traceless, i.e.~$\eta^{\mu\nu}G_{\bar\mu ,\bar\nu}(h)=0$.
To construct an action giving rise to these equations it is useful to introduce the following ``generalized Cotton tensor'':
\begin{equation} \label{Ctensor}
C_{\bar{\mu},\bar{\nu}}(h) = Y_{[3,3]} \left[ \varepsilon
_{\bar{\mu}}{}^{\alpha \bar{\rho}}\partial _{\alpha }G_{
\bar{\rho},\bar{\nu}}\left( h\right) \right] \,,
\end{equation}
where $Y_{[3,3]}$ is a Young symmetrizer,
that ensures that $C_{\bar{\mu},\bar{\nu}}$
has the symmetry properties of the Young tableau given in
eq.~\eqref{youngsymm}. Note that we have to write this Young
symmetrizer explicitly, as we want to use the Cotton tensor in the
action and we cannot assume that the condition that
$G_{\bar{\mu},\bar{\nu}}$ is traceless is satisfied off-shell. Once
one can show that, as a consequence of the equations of motion, $G$
is traceless, the Young symmetrizer can be dropped. Independent of
whether $G_{\bar\mu,\bar\nu}$ is traceless or not, one can show that
the Cotton tensor $C_{\bar{\mu},\bar{\nu}}$ is divergence-free on
both sets of indices $\bar{\mu}$ and $\bar{\nu}$, as well as
traceless
\begin{equation}\label{constraint}
\partial^\mu C_{\bar{\mu},\bar{\nu}} = 0 \,, \qquad \eta^{\mu \nu} C_{\bar{\mu},\bar{\nu}} = 0 \,.
\end{equation}
The equations of motion \eqref{hdsdm} can now be integrated to the
following action:\,\footnote{Note that, due to the second constraint
in \eqref{constraint}, the first term in \eqref{7Daction} has a
generalized scale invariance. This is similar to the scale
invariance of the 3D Cotton tensor.}
\begin{equation}\label{7Daction}
I\left[h\right] =\int d^{7}x\left\{
\frac{1}{12}h^{\bar{\mu},\bar{\nu}}C_{ \bar{\mu},\bar{\nu}}\left(
h\right) -\frac{1}{2}mh^{\bar{\mu},\bar{\nu}}G_{
\bar{\mu},\bar{\nu}}\left( h\right) \right\} \text{ .}
\end{equation}
This action defines the 7D Topologically
Massive Spin-2 Gauge Theory.
Indeed, varying this action with respect to $h$ leads to the equations of motion
\begin{equation}
\frac{1}{6}C_{\bar{\mu},\bar{\nu}}\left( h\right)
-m G_{\bar{\mu},\bar{\nu} }\left( h\right) =0\text{ .} \label{TMG
original eom}
\end{equation}
Contracting these equations of motion with $\eta^{\mu \nu}$, one obtains the tracelessness condition
\begin{equation}
\eta^{\mu\nu}G_{\bar{\mu},\bar{\nu}}\left( h\right) =0\text{ .}
\end{equation}
With the tracelessness condition in hand, the Young
symmetrizer in (\ref{Ctensor}) can be dropped, and the
equation of motion (\ref {TMG original eom}) reproduces the equation of motion given in
eq.~\eqref{hdsdm}.
\section{Canonical Analysis}
As a check we will verify, by canonical analysis, that the action \eqref{7Daction} indeed describes 35 spin-2 degrees of freedom.
We first split the indices into temporal and spatial components like $\mu = (0,i)\,, i=1,\cdots,6,$ and impose the gauge-fixing conditions
\begin{equation}\label{gcondition}
\partial ^{i}h_{i\mu _{2}\mu _{3},\nu _{1}\nu _{2}\nu _{3}}=0\text{ .}
\end{equation}
We next parametrize $h$ in terms of the independent components $(a,b,c,d,e)$ as follows:\,\footnote{
The notation $\left\{ \ \right\} _{\rm a.s.}$ stands for antisymmetrizing all indices within the curly bracket that have the same Latin letter. For instance, $\left\{ S_{i_2 i_3 j_1 j_2 j_3} \right\} _{\rm a.s.}= S_{[ i_2 i_3 ] [ j_1 j_2 j_3 ]} $.
}
\begin{subequations}
\label{canonical decomp.}
\begin{eqnarray}
h_{0i_{2}i_{3},0j_{2}j_{3}} &=&a_{i_{2}i_{3},j_{2}j_{3}}\text{ ,} \\ [.2truecm]
h_{0i_{2}i_{3},j_{1}j_{2}j_{3}} &=&\varepsilon
_{j_{1}j_{2}j_{3}}{}^{k_{1}k_{2}k_{3}}\partial
_{k_{1}}b_{k_{2}k_{3},i_{2}i_{3}}\ + \
\left\{ \left( \delta _{i_{3}j_{3}}-\frac{\partial _{i_{3}}\partial _{j_{3}}
}{\nabla ^{2}}\right) c_{j_{1}j_{2},i_{2}}\right. \notag \\ [.2truecm]
&&\left. \ \ \ \ +\left( \delta _{i_{2}j_{2}}\delta
_{i_{3}j_{3}}-\frac{\partial _{i_{2}}\partial _{j_{2}}}{\nabla
^{2}}\delta _{i_{3}j_{3}}-\delta _{i_{2}j_{2}}\frac{\partial
_{i_{3}}\partial _{j_{3}}}{\nabla ^{2}}\right)
d_{j_{1}}\right\}_{\rm a.s.} \text{ ,} \\ [.2truecm]
h_{i_{1}i_{2}i_{3},j_{1}j_{2}j_{3}} &=&\varepsilon
_{i_{1}i_{2}i_{3}}{}^{k_{1}k_{2}k_{3}}\varepsilon
_{j_{1}j_{2}j_{3}}{}^{l_{1}l_{2}l_{3}}\partial _{k_{1}}\partial
_{l_{1}}e_{k_{2}k_{3},l_{2}l_{3}}\text{ .}
\end{eqnarray}
\end{subequations}
All components $a,b,c,d,e$ are divergence-free. Furthermore, the components $b,c$ are traceless in each pair of their indices, while the components $a$
and $e$ contain their traces.
It is instructive to count the different degrees of freedom at this
point. Our starting point is the field $h$ of symmetry-type
\eqref{youngsymm} which is in the ${\bf 490}$ representation of
GL(7,$\mathbb{R}$). This field transforms under the gauge
transformations schematically denoted by \eqref{schem}. We should be
careful with counting the number of independent gauge parameters
because the gauge transformations \eqref{schem} are double
reducible: the 490 gauge parameters $\xi$ have their own gauge
symmetry with 210 gauge parameters $\zeta$ which are given by, ignoring indices, $\delta\xi = \partial\zeta$ or in terms
of Young tableaux by
\begin{equation}\label{schemg2}
\delta\hskip -.3truecm
\begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& \cr
\cr
\end{Young}
}
\end{tabular}
\ =\
\begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
& $\partial$\cr
\cr
\end{Young}
}
\end{tabular}\,.
\end{equation}
In its turn the 210 gauge parameters $\zeta$ have a gauge symmetry
by themselves with 35 gauge parameters $\lambda$ which are
irreducible. These transformations are given by, ignoring indices, $\delta\zeta=\partial\lambda$ or in terms of Young tableaux by
\begin{equation}\label{schemg3}
\delta\hskip -.3truecm
\begin{tabular}{l}{\footnotesize
\begin{Young}
& \cr
\cr
\cr
\end{Young}
}
\end{tabular}
\ =\
\begin{tabular}{l}{\footnotesize
\begin{Young}
& $\partial$\cr
\cr
\cr
\end{Young}
}
\end{tabular}\,.
\end{equation}
A correct counting yields that there are $490-210+35 = 315$
independent gauge parameters. The gauge symmetries corresponding to these gauge parameters are fixed by
the gauge conditions \eqref{gcondition} on the field $h$. To see this, one first varies \eqref{gcondition}
under the $\xi$-symmetries \eqref{schem} and requires this variation to be zero. The resulting condition
on the $\xi$-parameters has a gauge-symmetry which can be fixed by imposing the following restriction on the $\xi$-parameters:
\begin{equation}
\partial ^{i_{2}}\xi _{i_{2}\mu _{3},\nu _{1}\nu _{2}\nu _{3}} =0\text{ .}
\label{gauge-fix zeta0}
\end{equation}
Varying this condition under the $\zeta$-symmetries \eqref{schemg2} leads to a gauge-invariant condition on the $\zeta$-parameters.
To fix this gauge symmetry we impose
the following gauge-fixing conditions on the $\zeta$-parameters:
\begin{equation}
\partial ^{i_{3}}\zeta _{i_{3},\nu _{1}\nu _{2}\nu _{3}} =0\text{ .}
\label{gauge-fix zeta}
\end{equation}
After imposing these gauge conditions all parameters $\xi$ can be solved for without any ambiguity, i.e.~no residual
gauge symmetry acting on the parameters is left. This
leaves us with $490-315 = 175$ degrees of freedom represented by the
$a,b,c,d,e$ components defined in eq.~\eqref{canonical decomp.}:
\begin{equation}\label{counting}
a:\ 50\,,\hskip .7truecm
b:\ 35\,,\hskip .7truecm
c:\ 35\,,\hskip .7truecm
d:\ 5\,,\hskip .7truecm
e:\ 50\,.
\end{equation}
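The bookkeeping of this counting can be summarised in a few lines. The following script is our own consistency check, using the standard two-column dimension formula $\dim=\binom{n}{p}\binom{n}{q}-\binom{n}{p+1}\binom{n}{q-1}$ for columns of heights $p\geq q\geq 1$ and $\binom{n}{p}$ for a single column:

```python
# Consistency check of the gauge-parameter counting: xi has columns (3,2),
# zeta has columns (3,1), lambda is a single column of height 3, all in GL(7).
from math import comb

def two_col_dim(n, p, q):
    return comb(n, p) * comb(n, q) - comb(n, p + 1) * comb(n, q - 1)

dim_xi = two_col_dim(7, 3, 2)    # 490 parameters xi
dim_zeta = two_col_dim(7, 3, 1)  # 210 parameters zeta
dim_lam = comb(7, 3)             # 35 parameters lambda
independent = dim_xi - dim_zeta + dim_lam       # 490 - 210 + 35 = 315
remaining = two_col_dim(7, 3, 3) - independent  # 490 - 315 = 175
components = {"a": 50, "b": 35, "c": 35, "d": 5, "e": 50}
print(independent, remaining, sum(components.values()))  # 315 175 175
```

The 175 remaining degrees of freedom indeed match the total count of the $a,b,c,d,e$ components listed in eq.~\eqref{counting}.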
Using the canonical decomposition \eqref{canonical decomp.} we next calculate the different components of the Einstein tensor
\eqref{diffinv} and the Cotton tensor \eqref{Ctensor}. Substituting these results into the action \eqref{7Daction}, one obtains, after a lengthy calculation which we shall not present here, the following expression:
\begin{eqnarray}
I &=& \int d^7x\ \bigg\{
-\frac{1}{2}\left( 3!\right) ^{4}b^{i_{2}i_{3},j_{2}j_{3}}\left( \nabla
^{2}\right) ^{2}\left( {\hat a}_{i_{2}i_{3},j_{2}j_{3}}+4\Box {\hat e
}_{i_{2}i_{3},j_{2}j_{3}}\right) \notag \\ [.2truecm]
&&-\left( 3!\right) ^{4}m\,\hat {a}^{i_{2}i_{3},j_{2}j_{3}}\left( \nabla
^{2}\right) ^{2}\hat {e}_{i_{2}i_{3},j_{2}j_{3}}-\left( 3!\right)
^{4}m\,b^{i_{2}i_{3},j_{2}j_{3}}\left( \nabla ^{2}\right)
^{2}b_{i_{2}i_{3},j_{2}j_{3}} \notag \\ [.2truecm]
&&-\frac{3}{4}\left( 3!\right) ^{4}m\,\bar{a}^{i_{3},j_{3}}\left( \nabla
^{2}\right) ^{2}\bar{e}_{i_{3},j_{3}}-10\left( 3!\right) ^{4}m\,a\left(
\nabla ^{2}\right) ^{2}e \notag \\ [.2truecm]
&&-\frac{3}{10}\left( 5!\right) m\,c^{j_{1}j_{2},i_{2}}\nabla
^{2}c_{j_{1}j_{2},i_{2}}+\frac{9}{2}\left( 2!4!\right) m\,d^{j_{1}}\nabla
^{2}d_{j_{1}} \bigg\}\text{ .}
\end{eqnarray}
Here we have used the following decomposition of $a$ in terms of a traceless part $\hat {a}$, single traces $\bar a$ and double traces $a$:
\begin{eqnarray}
a_{i_{2}i_{3},j_{2}j_{3}} &=&{\hat a}_{i_{2}i_{3},j_{2}j_{3}} \ +
\left\{ \left( \eta _{i_{2}j_{2}}-\frac{\partial _{i_{2}}\partial _{j_{2}}}{
\nabla ^{2}}\right) \bar{a}_{i_{3},j_{3}}\right. \ + \notag \\
&&\ \ \ \ \ \ \left. +\left( \eta _{i_{2}j_{2}}\eta _{i_{3}j_{3}}-\frac{
\partial _{i_{2}}\partial _{j_{2}}}{\nabla ^{2}}\eta _{i_{3}j_{3}}-\eta
_{i_{2}j_{2}}\frac{\partial _{i_{3}}\partial _{j_{3}}}{\nabla ^{2}}\right)
a\right\}_{\text{a.s.}}
\end{eqnarray}
and we used a similar decomposition for $e$.
Finally, after making the field redefinitions
\begin{eqnarray}\label{redefinitions}
\hat {a}_{i_{2}i_{3},j_{2}j_{3}} &=&\tilde{a}_{i_{2}i_{3},j_{2}j_{3}}-
\frac{2}{m}\Box b_{i_{2}i_{3},j_{2}j_{3}}\text{ ,} \hskip .7truecm
\hat {e}_{i_{2}i_{3},j_{2}j_{3}} =\tilde{e}_{i_{2}i_{3},j_{2}j_{3}}-
\frac{1}{2m}b_{i_{2}i_{3},j_{2}j_{3}}\text{ ,}
\end{eqnarray}
we obtain the following expression for the action:
\begin{eqnarray}
I &=&\int d^7x\ \bigg\{\frac{1}{m}\left( 3!\right)
^{4}b^{i_{2}i_{3},j_{2}j_{3}}\left( \nabla ^{2}\right) ^{2}\left( \Box
-m^{2}\right) b_{i_{2}i_{3},j_{2}j_{3}} \notag \\ [.2truecm]
&&-\left( 3!\right) ^{4}m\,\tilde{a}^{i_{2}i_{3},j_{2}j_{3}}\left( \nabla
^{2}\right) ^{2}\tilde{e}_{i_{2}i_{3},j_{2}j_{3}} \notag \\ [.2truecm]
&&-\frac{3}{4}\left( 3!\right) ^{4}m\,\bar{a}^{i_{3},j_{3}}\left( \nabla
^{2}\right) ^{2}\bar{e}_{i_{3},j_{3}}-10\left( 3!\right) ^{4}m\,a\left(
\nabla ^{2}\right) ^{2}e \notag \\ [.2truecm]
&&-\frac{3}{10}\left( 5!\right) m\,c^{j_{1}j_{2},i_{2}}\nabla
^{2}c_{j_{1}j_{2},i_{2}}+\frac{9}{2}\left( 2!4!\right) m\,d^{j_{1}}\nabla
^{2}d_{j_{1}}\bigg\}\text{ .}
\end{eqnarray}
This form of the action shows that only the $b$ components propagate and, according to eq.~\eqref{counting}, they do describe, unitarily, 35 degrees of freedom which transform as the ${\bf 35}^+$ of the SO(6) little group. Note that these degrees of freedom are not only described by the $b$-components
of $h$ but also, due to the redefinitions \eqref{redefinitions}, by the ${\hat a}$- and ${\hat e}$-components.
Replacing $m$ by $-m$ in the above action, we see that, after changing
the overall sign of the action, we again obtain 35 degrees of freedom. These degrees of freedom transform as the ${\bf 35}^-$ of the SO(6) little group. They are described by a different set of components of $h$ than the ${\bf 35}^+$ degrees of freedom due to the fact that one should also replace $m$ by $-m$ in the redefinitions \eqref{redefinitions}.
\section{Discussion}
We showed how the 3D TMG model, at the linearized level, can be extended beyond three dimensions to a free parity-odd Topologically Massive Gauge theory for a
``spin-2'' particle. We worked out the case of a massive ``spin-2'' particle in 7D; similar models exist in $4k-1$ dimensions for $k=3,4,5, \cdots$.
The construction of the model is based on the factorization of the Klein-Gordon operator in $4k-1$ dimensions, when acting on forms
of rank $2k-1$, in terms of two first-order operators.
A similar generalization of the parity-even 3D NMG model exists but in that case there are more extensions possible. For instance, a 4D extension
exists without a corresponding parity-breaking topological version \cite{Bergshoeff:2012ud}. In 7D there are three different extensions: one is
based on the same Young tableau \eqref{youngsymm} that we used for the topological model constructed in this letter
and
one is based on the dual of the spin connection, like in the 4D extension of \cite{Bergshoeff:2012ud}.\,\footnote{In 7D this corresponds to a description
in terms of a two-column Young tableau with height 5 and 1, respectively.} The third model is based on a description
in terms of a 2-column Young tableau of height 4 and 2, respectively. All these extensions have in common that the number of boxes $\#_{\text{boxes}}$ in the
two-column Young tableaux described by $h$ is given by
\begin{equation}
\#_{\text{boxes}} = D-1\,.
\end{equation}
One can show that this property guarantees that the index
structure of the double dual of the curvature tensor $R(h)$, which we have called the ``Einstein tensor'' $G(h)$,
is the same as that of $h$. This is a crucial property that enables one to
integrate the higher-derivative equations of motion to an action.
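The role of the condition $\#_{\text{boxes}}=D-1$ can be made explicit with a small check (our own illustration). Taking the two curls adds one box to each column of the tableau of $h$, and dualising a $(p+1)$-form in $D$ dimensions yields a $(D-p-1)$-form, so the double dual of the curvature carries column heights $(D-p-1,\,D-q-1)$; this reproduces the original shape precisely when $p+q=D-1$:

```python
# Check that the double dual of the curvature has the index structure of h
# itself exactly when the number of boxes p + q equals D - 1. The curvature
# adds one box to each column; dualising a (p+1)-form in D dimensions gives
# a (D-p-1)-form.
def double_dual_shape(D, cols):
    return sorted(D - (h + 1) for h in cols)

D = 7
for cols in ([3, 3], [5, 1], [4, 2]):           # the three 7D extensions
    assert double_dual_shape(D, cols) == sorted(cols)
    assert sum(cols) == D - 1
# a shape violating the box-counting rule does not come back to itself
assert double_dual_shape(D, [3, 2]) != sorted([3, 2])
print("shape preserved iff #boxes = D - 1")
```

All three 7D extensions mentioned above pass the check, while a tableau with $p+q\neq D-1$ does not return to its own shape, which is why no action of this type can be written down for it.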
It is not difficult to write down the parity-even massive ``spin-2'' model
based on the Young tableau \eqref{youngsymm}. Starting from the corresponding generalized FP equations one ends up, after boosting up the derivatives, with the following action:
\begin{equation}
I [h] =\int d^{7}x\left\{ \frac{1}{72}h^{\bar{\mu},\bar{\nu}
}\varepsilon _{\bar{\mu}}{}^{\alpha \bar{\rho}}\partial _{\alpha }C_{\bar{
\rho},\bar{\nu}}\left( h\right) -\frac{1}{2}m^{2}h^{\bar{\mu},\bar{\nu}}G_{
\bar{\mu},\bar{\nu}}\left( h\right) \right\} \text{ ,} \label{7Dactione}
\end{equation}
where $C_{\bar\mu,\bar\nu}(h)$ is the Cotton tensor, see eq.~\eqref{Ctensor}, and $G_{\bar\mu,\bar\nu}(h)$ is
the Einstein tensor, see eq.~\eqref{diffinv}. This action is the parity-even version of the action \eqref{7Daction}.
A canonical analysis, like the one we performed in section 3, shows that this model describes 70 ``spin-2'' states.
It is interesting to consider the massless limit of the models \eqref{7Daction} and \eqref{7Dactione}. A canonical analysis
shows that in the case of the parity-odd topological model \eqref{7Daction} the massless limit describes zero degrees of freedom while
for the parity-even model \eqref{7Dactione} one ends up with 35 massless ``spin-2'' states which transform as the
${\bf 35}$ of the massless little group SO(5). The result
for the parity-odd model is similar to what happens for the 3D TMG model while the result for the parity-even model
resembles the parity-even cases in 3D \cite{Deser:2009hb}
and 4D \cite{Bergshoeff:2012ud}.
The crucial question remains whether the extensions we discussed in this letter are curiosities of the linearized approximation or whether
one can go beyond the linearized approximation and introduce non-trivial interactions. This is a non-trivial issue in view of the fact that we
are using non-standard representations to describe the massive ``spin-2'' particle. Perhaps, a slightly easier question to ask
is whether one can introduce interactions for only the mass term, i.e.~the term with two derivatives. For both the
parity-odd model \eqref{7Daction} and the parity-even model \eqref{7Dactione} this term is given by
\begin{equation}\label{trivial7D}
I\left[h\right] =\int d^{7}x\left\{
\frac{1}{2}h^{\bar{\mu},\bar{\nu}}G_{
\bar{\mu},\bar{\nu}}\left( h\right) \right\} \text{ .}
\end{equation}
This term by itself leads to the equation of motion $G(h)=0$ and therefore does not describe any degree of freedom, as
one would expect from a mass term. Given that there are no propagating
degrees of freedom, one might hope that constructing interactions will be
an easier task.
The model \eqref{trivial7D} is the 7D version of the 3D gravity
action, which likewise does not describe any degrees of freedom. The 3D gravity
action has the interesting feature that it can be reformulated as a
Chern-Simons (CS) action \cite{Achucarro:1987vz,Witten:1988hc}. In
order to achieve this, one must use a first-order formalism with the
Dreibein $e_\mu{}^a$ and spin-connection $\omega_\mu{}^a$ as
independent fields. Writing $e_\mu{}^a = \delta_\mu{}^a+ h_\mu{}^a$
this 3D CS action is at the linearized level given by
\begin{equation}\label{CSaction}
I_{\text{CS}}\left[h,\omega\right] =\int d^{3}x\, \varepsilon^{\mu\nu\rho}\left\{\omega_\mu{}^a\partial_\nu h_\rho{}^b\eta_{ab} - \frac{1}{2}\omega_\mu{}^a \delta_\nu{}^b \omega_\rho{}^c\varepsilon_{abc}\right\}\,.
\end{equation}
It is invariant under the linearized Lorentz transformation
\begin{equation}\label{3DL}
\delta h_{\mu a} = \Lambda_{\mu a}\,,\hskip 2 truecm \delta
\omega_\mu{}^a = -\frac{1}{2}\varepsilon^{abc}\partial_\mu
\Lambda_{bc}\text{ ,}
\end{equation}
for anti-symmetric parameters $\Lambda_{\mu a} = -\Lambda_{a\mu}$.
These linearized gauge transformations can be fixed by imposing the gauge-fixing condition $h_{\mu a} = h_{a \mu}$.
One then obtains a first-order action in terms of $\omega_\mu{}^a$ and a symmetric tensor $h_{\mu\nu}$.
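The component counting behind this gauge fixing is simple: $h_{\mu a}$ has
$3 \times 3 = 9$ components, and the antisymmetric parameters $\Lambda_{\mu a}$
provide $3$ independent gauge transformations, which shift precisely the
antisymmetric part of $h_{\mu a}$. Imposing $h_{\mu a} = h_{a\mu}$ therefore
fixes them completely, leaving the $6$ components of a symmetric tensor
$h_{\mu\nu}$.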
One of the reasons that this action can be extended to include interactions is that the Kronecker delta $\delta_\alpha{}^b$,
occurring in the action \eqref{CSaction}, is in the same representation as the Dreibein $e_\mu{}^a$ and, therefore, can become part of this
Dreibein at the non-linear level. The interactions are then determined by introducing the non-Abelian CS structure, dictated by the Lorentz structure of the different gauge fields.
It turns out that a similar first-order formulation of the model defined by the action \eqref{trivial7D} exists
in terms of two fields $h_{\bar\mu,\bar\nu}$ and $\omega_{\bar\mu,\bar\nu}$ which both have the symmetry properties corresponding
to the Young tableau
\begin{equation}\label{istr}
\Yvcentermath1
{\tiny \yng(1,1,1)}\otimes {\tiny
\yng(1,1,1)}\,.
\end{equation}
Similar to \cite{Skvortsov:2008sh}, at the quadratic level such a
first-order action can be written in the following form
\begin{equation}
I [ h,\omega ] =\int d^{7}x\ \varepsilon ^{\bar{\mu}\alpha
\bar{\nu }}\left\{ \omega _{\bar{\mu},}{}^{\bar{\rho}}\partial
_{\alpha }h_{\bar{\nu}, \bar{\rho}}-\frac{1}{72}\omega
_{\bar{\mu},}{}^{\bar{\sigma}}\delta _{\alpha }{}^{\beta }\omega
_{\bar{\nu},}{}^{\bar{\tau}}\varepsilon _{\bar{\sigma} \beta
\bar{\tau}}\right\} \text{ .} \label{action h omega 2}
\end{equation}
This action has a gauge invariance under a ``generalised'' linearized Lorentz transformation,
with parameters $\Lambda_{\mu_1\mu_2,\nu_1\nu_2\nu_3\nu_4}$, given by
\begin{eqnarray}\label{Lorentz-like transf. h}
\delta h_{\bar{\mu},\bar{\nu}}&=&\Lambda _{\lbrack \mu _{1}\mu
_{2},\mu _{3}]\nu _{1}\nu _{2}\nu _{3}}\,, \nonumber\\ [.2truecm]
\delta \omega _{\bar{\rho},}{}^{\bar{\mu}} &=&\varepsilon
^{\bar{\mu}\alpha \bar{\nu}}\partial _{\alpha }\Lambda _{\nu _{1}\nu
_{2},\nu _{3}\rho _{1}\rho _{2}\rho _{3}}-\frac{1}{4}\delta
_{\bar{\rho}}^{\bar{\mu} }\varepsilon ^{\bar{\sigma}\alpha
\bar{\nu}}\partial _{\alpha }\Lambda _{\nu _{1}\nu _{2},\nu
_{3}\sigma _{1}\sigma _{2}\sigma _{3}} \\ [.2truecm] &&+\left\{
-\frac{9}{2}\delta _{\rho _{1}}^{\mu _{1}}\varepsilon ^{\sigma
_{1}\mu _{2}\mu _{3}\alpha \bar{\nu}}\partial _{\alpha }\Lambda
_{\nu _{1}\nu _{2},\nu _{3}\sigma _{1}\rho _{2}\rho _{3}}+3\delta
_{\rho _{1}\rho _{2}}^{\mu _{1}\mu _{2}}\varepsilon ^{\sigma
_{1}\sigma _{2}\mu _{3}\alpha \bar{\nu}}\partial _{\alpha }\Lambda
_{\nu _{1}\nu _{2},\nu _{3}\sigma _{1}\sigma _{2}\rho _{3}}\right\}
_{\text{a.s.}}\text{ .}\nonumber
\end{eqnarray}
In effect, the $\Lambda$-transformation represents three independent
gauge transformations whose parameters are given by the following
Young tableaux:
\begin{equation}\label{gparameters}
\Yvcentermath1
{\tiny \yng(1,1)}\,\otimes {\tiny
\yng(1,1,1,1)}\ = \ {\tiny \yng(1,1,1,1,1,1)}\ \oplus\ {\tiny \yng(2,1,1,1,1)}\ \oplus \ {\tiny \yng(2,2,1,1)}\ \,.
\end{equation}
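As a check on the decomposition \eqref{gparameters}, one may count components
in terms of GL(7) representations: the product of the antisymmetric
two-index and four-index tableaux has $21\times 35 = 735$ components, which
matches the sum of the dimensions of the three irreducible parameters on the
right-hand side, $7 + 140 + 588 = 735$.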
The gauge transformations \eqref{Lorentz-like transf. h} are the generalization of the 3D Lorentz transformations \eqref{3DL}.
It is easy to see that the action \eqref{action h omega 2} is equivalent to \eqref{trivial7D}. One first imposes the condition
\begin{equation}
h_{\bar\mu,\bar\nu} = Y_{[3,3]}\, h_{\bar\mu,\bar\nu}
\end{equation}
to fix the gauge transformations \eqref{Lorentz-like transf. h}.
Next, one uses the equation of motion for $\omega_{\bar\mu,\bar\nu}$
to solve for $\omega_{\bar\mu,\bar\nu}$ in terms of $h_{\bar\mu,\bar\nu}$:
\begin{equation}\label{solo}
\omega_{\bar\mu,\bar\nu} = \epsilon_{\bar\nu}{}^{\alpha\bar\rho}\partial_\alpha h_{\bar\rho,\bar\mu}\,.
\end{equation}
Note that this equation implies that $\omega_{\bar\mu,\bar\nu}$ is traceless, i.e.~$\eta^{\mu\nu}\omega_{\bar\mu,\bar\nu}=0$.
Substituting this solution back into \eqref{action h omega 2} the two terms in \eqref{action h omega 2}
coincide and become identical to the single term in \eqref{trivial7D} with the Einstein tensor given in eq.~\eqref{diffinv}.
The gauge-invariant first-order formulation we have obtained at this point
resembles the 3D CS structure. There are, however, also important differences.
First of all, it is not clear how to introduce in the 7D case the notion of
flat and curved indices, thereby anticipating a possible CS-like structure.
A related issue is that we are now working with tensors instead of gauge
vectors. It is not obvious how to introduce non-Abelian structures for these
tensors. The structure we have obtained so far suggests an extension of CS
terms for vectors to a ``generalised CS'' structure for a non-Abelian version
of free differential algebras.
An alternative approach to introduce interactions could be to use a bi-metric formulation. One metric describes the massive spin-2 particle and is used to absorb the $h_{\bar\mu,\bar\nu}$ field, while the other metric is a reference metric that can be used to absorb the Kronecker delta that occurs in the second term of \eqref{action h omega 2}.
For now, we leave these possibilities as intriguing open issues.
\section*{Acknowledgements} We thank Paul Townsend for useful discussions and for pointing out reference \cite{Lu:2010sj} to us. YY wishes to thank Andrea Borghese, Giuseppe Dibitetto, Jose Juan Fernandez-Melgarejo, Teake Nutma and Diederik Roest for discussions on group theory and useful software.
The work of JR is supported by the Stichting Fundamenteel Onderzoek der Materie
(FOM). The work of MK and YY is supported by the Ubbo Emmius Programme administered
by the Graduate School of Science, University of Groningen.
We acknowledge the frequent use of the software Cadabra \cite{Peeters:2006kp} to perform Young projection calculations.
\section{Introduction}\label{s:intro}
This paper grew out of work on algebras satisfying a polynomial
identity (PI). We recall \cite[pp.~28ff.]{BR} that a PI-algebra
$R$ over an integral domain $C$ is {\bf representable} if it can
be embedded as a subalgebra of $\M[n](K)$ for a suitable field
$K\supset C$ (which can be much larger than $C$). One main
byproduct of Kemer's theorem \cite[Corollary 4.67]{BR} is that
every relatively free affine PI-algebra over an infinite field is
representable. From this perspective, the proof of Kemer's theorem
is based on a close study of representable algebras. The strategy
is to find the PI-algebra with the ``best'' structure that is
PI-equivalent to a given representable algebra, in order to study
its identities very carefully. (Note that in characteristic 0 for
the non-affine case, Kemer proved that any relatively free algebra
can be embedded in the Grassmann envelope of a finite dimensional
superalgebra, so similar considerations also hold in this
case.)
Whereas over an infinite field, any representable algebra is
PI-equivalent to a finite dimensional $K$-algebra (thus leading to
a very careful study of identities of finite dimensional algebras
in the proof of Kemer's theorem), this is no longer the case over
finite fields (in positive characteristic). Thus, we need to
replace finite dimensional algebras by a more general class,
called {\bf {Zariski-closed}\ algebras}, which, surprisingly, satisfy much of
the structure theory of finite dimensional algebras. Since the
relatively free affine algebra of an affine PI-algebra is
representable, we are led finally to study the {Zariski closure}\ of a
(representable) relatively free algebra.
Throughout the paper, $F \subseteq K$ will be fields, with $F$ finite
or infinite and $K$ usually being algebraically closed; $A$ is an
$F$-algebra contained in a finite dimensional $K$-algebra $B$. We
usually assume that $F$ has characteristic $p>0$, since the theory
becomes standard in characteristic 0.
After some introductory comments in Section \ref{sec2}, we introduce the
{\bf {Zariski closure}} of a representable algebra $A$ in Section \ref{s:zcr}, showing
that it shares many of the important structure theorems of finite
dimensional algebras, such as Wedderburn's principal theorem and
the fact that every semiprime {Zariski-closed}\ algebra is semisimple; it
turns out that {Zariski-closed}\ algebras are semiperfect. Identities and
defining relations of $A$ also are studied in terms of its {Zariski closure}\
in $B$, to be defined below.
In Section \ref{sec:3} we delve more deeply into the generation of
polynomial relations of a {Zariski-closed}\ algebra, showing that the center
is defined in terms of finitely many polynomial relations, which
can be written in the form $\lambda _i = 0$, \ $\lambda _i - \lambda _i^s = 0$, or
$\lambda _i - \lambda _j^s = 0$ with $j \ne i$, where in each case $s$ is a
power of $p$. These polynomial relations are said to be of {\it
Frobenius type}. This enables us explicitly to study
representations of {Zariski-closed}\ algebras in \Sref{sec:4}, focusing on
their Peirce decomposition, and its refinements. The explicit
representation of algebras is complicated even in characteristic
0, and one of our main techniques is ``gluing,'' or identifying
different components in a representation.
In \Sref{s:explicit}, we also obtain results concerning the
off-diagonal polynomial relations, which require us to consider
\defin{$q$-polynomials}, which we call polynomial relations of
{\it weak Frobenius type}. The main result is that the weak
Frobenius relations comprise a free module over the group algebra
of the Frobenius automorphism. We thank B.~Kunyavskii for bringing to
our attention the references \cite{KombMiyanMasayoshi},
\cite{Miyanishi}, and \cite{Tits}.
Finally, in \Sref{sec:6} we describe the relatively free
algebras of {Zariski-closed}\ algebras. These turn out to have an especially
nice description and play a key role in the proof of Specht's
conjecture for affine PI-algebras of arbitrary characteristic.
\section{Background}\label{sec2}
Let us bring in the main tools for our study.
\subsection{Results from the theory of finite dimensional
algebras}
We start with a classical theorem of Wedderburn about finite
dimensional algebras:
\begin{thm}[Wedderburn's Principal Theorem] \Label{Wed2}
Any finite dimensional algebra $A$ over a perfect field $F$ has a
Wedderburn decomposition $A = S\oplus J$, where $J$ is the
Jacobson radical of $A$, which in this case is also the largest
nilpotent ideal, and $S \cong A/J$ is a semisimple subalgebra
of $A$.%
\end{thm}
When the base field is algebraically closed, Wedderburn's
Principal Theorem enables us to find a direct product of matrix
rings inside any finite dimensional algebra $A$. The following
notion helps us to better understand the structure of~$A$.
We call $\{e_1,\dots, e_n \}$ a {\bf 1-sum set} of orthogonal
idempotents if they are orthogonal and $\sum _{i=1}^n e_{i} = 1$.
\begin{rem}[``Peirce decomposition'']
If $A$ has a {\bf 1-sum set} of orthogonal idempotents
$\{e_1,\dots, e_n\}$ (i.e., $e_ie_j = 0$ for all $i\ne j$), then
$$A\ = \bigoplus_{i,j=1}^n e_iAe_j $$ as
additive groups.
\end{rem}
For example, the Peirce decomposition of $A = \M[n](R)$ with
respect to the matrix units $e_{11}, e_{22}, \dots, e_{nn}$ is
just
\begin{equation}\Label{PDM}
A= \bigoplus _{i,j = 1}^n Re_{ij}.
\end{equation}
Note that any set $\{e_1,\dots , e_n\}$ of orthogonal idempotents
of $A$ can be expanded to a 1-sum set $\{e_0, e_1,\dots , e_n\}$
by taking $e = \sum _{i=1}^n e_i$ and putting $e_{0} = 1 -e$.
Even for algebras without $1$, one can reproduce an analog of the
Peirce decomposition by formally defining a left and right
operator $e_0$ from $A$ to $A$, given by
$$e_0a = a -ea, \qquad ae_0 = a - ae.$$
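Although $e_0$ is only a formal operator in this case, it behaves like the
missing complementary idempotent. For instance, using $e^2 = e$ one computes
\begin{equation*}
e_0(e_0 a) = (a - ea) - e(a - ea) = a - ea - ea + e^2a = a - ea = e_0 a\,,
\end{equation*}
and similarly $e(e_0 a) = ea - e^2 a = 0$, so the Peirce components
$e_iAe_j$ (now with $0 \le i,j \le n$) again decompose $A$ as an additive
group.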
\subsection{Affine varieties and algebraic groups}
We need some basic facts from affine algebraic geometry and the
theory of affine algebraic groups. We use \cite{Hum} as a
reference for algebraic groups. Suppose $K$ is an algebraically
closed field. Write $K[\Lambda]$ for the polynomial algebra $
K[\lambda_1, \dots, \lambda_n]$. For any subset $S\subset K[\Lambda]$, we
define the {\bf zero set} of $S$ to be
$$\mathcal Z (S) = \{ {\mathbf a}= (\alpha_1, \dots, \alpha_n ) \in K^{(n)}:
f(\alpha_1, \dots, \alpha_n) = 0, \,\forall f \in S\}.$$ %
$K^{(n)}$ has the
{\bf Zariski topology} whose closed sets are the zero sets of
subsets of $K[\Lambda]$. This is the smallest topology under
which all polynomial maps $K^{(n)} {\rightarrow} K$ are continuous, assuming
$K$ has the co-finite topology.
A closed set is {\bf irreducible} if it is not the union of two
proper closed subsets. An {\bf affine variety} is a {Zariski-closed}\ subset
of $K^{(n)}$. The {\bf dimension} of a variety is the length of a
maximal chain of irreducible subvarieties, with respect to
(proper) inclusion. A {\bf morphism} of varieties is a polynomial map; in
particular, it is continuous with respect to the respective topologies. (In this text
we concern ourselves only with affine varieties, so ``variety''
means ``affine variety.'')
A {\bf locally closed} set is the intersection of a closed set and
an open set. A {\bf constructible set} is the finite union of
locally closed sets. We need the following theorem of Chevalley:
\begin{thm}[{\cite[Theorem 4.4]{Hum}}]\Label{Chev}
Any morphism of varieties sends constructible sets to
constructible sets. (In particular, the image of a variety is
constructible.)\end{thm}
An {\bf (affine) algebraic group} is an (affine) variety $G$
endowed with a group structure $(G,\cdot,e)$ such that the inverse
operation (given by $g \mapsto g^ {-1}$) and multiplication map $ G
\times G \to G$ (given by $ (a,b) \mapsto a\cdot b)$ are morphisms
of varieties. A {\bf morphism} $ \varphi{\,{:}\,} G\to H$ of algebraic
groups is a group homomorphism that is also a morphism of
varieties.
\begin{thm}[{\cite[Proposition 7.3]{Hum}}] In any algebraic group $G$, the
irreducible component $G_e$ of the identity is a closed connected
subgroup of finite index, whose cosets are precisely the
(connected) irreducible components of $G$. Thus, as a variety, $G$
is the direct product of an irreducible variety and a finite set.
\end{thm}
By \cite[Theorem 11.5]{Hum}, for any affine algebraic group $G$
with closed normal subgroup $N$, the group $G/N$ can be provided
with the structure of an algebraic group.
\subsection{Frobenius automorphisms and finite fields}\Label{ss:ff}
Much of our theory depends on the properties of endomorphisms of
finite fields. Towards this end, we recall the {\bf Frobenius
endomorphism} of a field $F$ of characteristic $p$ given by $a
\mapsto a^{p^t}$ for suitable fixed $t$. When $F$ is finite, then
every algebra endomorphism of $F$ is obviously an automorphism
(over its characteristic subfield), and it is well known by Galois
theory that every automorphism of $F$ is Frobenius.
When $F$ is an infinite field, there may of course be
non-Frobenius endomorphisms (but one can show using a Vandermonde
matrix argument, for any automorphism $\sigma$, that if $\sigma (a)$ and
$a$ are algebraically dependent of bounded degree for all $a \in
F$, then $\sigma$ is a Frobenius endomorphism).
Note that the Frobenius endomorphism of an algebraically closed
field $K$ also is an automorphism of $K$, although $K$ is
infinite.
\begin{thm}[Wedderburn's theorem about finite division rings]
\Label{Wed3} %
Any finite division ring is commutative. Consequently, any finite
dimensional simple algebra over a finite field $F$ must have the
form $\M[n](F_1)$ for a finite extension $F_1$ of $F$.
\end{thm}
Any finite field $F$ can be viewed as the zero set of the
polynomial $\lambda ^q - \lambda$ in its algebraic closure $K$, where $q =
\card{F}$. This observation enables us to view finite fields
explicitly as subvarieties (of dimension $0$) of the affine line.
Likewise, matrices over finite fields can be viewed naturally as
varieties.
\subsection{Examples of representable PI-algebras over finite and
infinite fields}
A polynomial identity (PI) of an algebra $A$ is a polynomial which
vanishes identically for any substitution in $A$. Recall that a
ring $R$ is called a {\bf central extension} of a subring $A$ if
$R = \Cent{R}A$. If $A$ is an algebra over an infinite field, then
any central extension of $A$ is PI-equivalent to $A$;
{cf.}~\cite[Proposition 1.1.32]{Row1}. Thus, in the examples to
follow, the finiteness of the field~$F$ is crucial for their
special properties concerning identities.
\begin{exmpl}\Label{basexa1} Suppose $F \subseteq K$ are fields.
\begin{enumerate}
\item \Label{BE1i}
Let $A = \smat{F}{K}{0}{F}$ (which is an $F$-algebra but not a
$K$-algebra). Then $\smat{K}{K}{0}{K}$ is a central extension of
$A$ since $A$ contains the matrix units $e_{11}, e_{12}$, and
$e_{22}$. When $F$ is infinite, $A$ is PI-equivalent to
$\smat{K}{K}{0}{K}$. However, when $\card{F} = q$ is finite, then
$\alpha^q = \alpha$ for all $\alpha \in F$, implying $a^q -a \in
\smat{0}{K}{0}{0}$
for $a \in A$. Hence $(x^q-x)(y^q-y)\in {\operatorname {id}}(A)$.
\item \Label{BE1ii}
Let $A = \smat{F}{K}{0}{K}$, where $\card{F} = q$. Then $a^q -a
\in \smat{0}{K}{0}{K}$, for all $a \in A$, implying $(x^q-x) [y,z]
\in {\operatorname {id}} (A)$.
\item \Label{BE1iii}
Let $A = \smat{K}{K}{0}{F}$, where $\card{F} = q$. Then,
analogously to (\ref{BE1ii}), $[y,z] (x^q-x) \in {\operatorname {id}} (A)$.
\end{enumerate}
\end{exmpl}
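For instance, the identity in (\ref{BE1ii}) can be verified directly: for
$a = \smat{\alpha}{c}{0}{\beta}$ with $\alpha \in F$ one has $\alpha^q = \alpha$,
so $a^q - a \in \smat{0}{K}{0}{K}$, while any commutator of elements of $A$
lies in $\smat{0}{K}{0}{0}$ because the diagonal parts commute. Multiplying,
one checks that
\begin{equation*}
\smat{0}{*}{0}{*}\cdot\smat{0}{*}{0}{0} = \smat{0}{0}{0}{0}\,,
\end{equation*}
so $(x^q - x)[y,z]$ vanishes identically on $A$.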
There is another type of example, involving identification of
elements.
\begin{exmpl}\Label{basexa2}
Suppose $\sigma$ is an automorphism of $F_1$ over $F$, where $F \subseteq
F_1 \subseteq K$. Then $K$ can be viewed as an $F_1$-left module in the
usual way and as a right module ``twisted'' by $\sigma;$ namely
$a\cdot \alpha$ is defined as $a\sigma^{-1} (\alpha)$ for $a\in K$, $\alpha \in
F_1$. (We denote this new right module structure as $K_\sigma$.) Then $
\smat{F_1}{K_\sigma}{0}{F_1}$ is a PI-algebra, which is clearly
isomorphic to $\smat{F_1}{K}{0}{F_1}$ as a ring (but not as an
$F$-algebra). However, we get interesting new examples by making
certain identifications.
\begin{enumerate}
\item\Label{BE2i}
Suppose $\card{F_1} = q^t$, where $\card{F} = q$. Then we have the
Frobenius automorphism $\alpha \mapsto \alpha^{q^n}$ of $F_1$ over $F$, and $\set{
\smat{\alpha^{q^n}}{a}{0}{\alpha}: \alpha \in F_1, \, a\in K}$
satisfies the identity $x[y,z] = [y,z]x^{q^n}$. Note that this
$F$-algebra is not an $F_1$-algebra in general.
\item\Label{BE2ii} Let $A = \set{ \smat{\sigma(\alpha)}{a}{0}{\alpha}: \alpha \in F_1, \ a
\in K }$. As a consequence of Theorem~\ref{linear} to be proved
below, if $\sigma$ is not Frobenius, then ${\operatorname {id}} (A) = {\operatorname {id}}(T_2)$, where
$T_2$ is the algebra of $2\times 2$ triangular matrices.
\end{enumerate}
\end{exmpl}
We call this identification process \textbf{gluing}, and it will be
described more precisely in \Sref{sec:4}. All of the algebras in
Example~\ref{basexa1} and Example~\ref{basexa2} have a
central extension to $B= \smat{K}{K}{0}{K}$, and thus they all satisfy the
same multilinear identities as $B$. But these algebras, viewed as varieties,
are quite different. Thus, as opposed to algebras over infinite fields, in
general the multilinear identities are far from describing the
full PI picture.
For later use, we record the following result.
\begin{prop}\Label{break}
If $A = A_1 + A_2$, then a non-commutative polynomial $f$ is an
identity of $A$ iff $f$ and its consequences become zero under
substitutions in which every variable takes values either in $A_1$
or in $A_2$.
\end{prop}
\begin{proof}
This is trivial in characteristic zero, where every identity is
equivalent to a set of multilinear ones. In general, the proof is
by induction on the degree of $f$, considering the
multilinearization $f(\vec{x}+\vec{y}) - f(\vec{x}) - f(\vec{y})$.
\end{proof}
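To illustrate the multilinearization step, take $f(x) = x^2$; then
\begin{equation*}
f(x+y) - f(x) - f(y) = xy + yx\,,
\end{equation*}
which is linear in each of $x$ and $y$. Iterating this procedure on a
general $f$ reduces any substitution from $A_1 + A_2$ to substitutions in
which each variable takes values in $A_1$ or in $A_2$ alone, together with
lower-degree consequences handled by the induction.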
\section{The {Zariski closure}\ of a representable algebra}\Label{s:zcr}
Both the motivation for PI-theory and one of its major facets is
the theory of representable algebras. In this section we develop
this theory, with emphasis always on the set of identities of a
given representable algebra $A$. Thus we often exchange $A$ by an
appropriate PI-equivalent algebra.
Let $F$ be a field and $A$ an arbitrary $F$-algebra. Recall from
the introduction that $A$ is representable if $A$ embeds (as an
$F$-algebra) in $\M[n](K)$ for a suitable extension field $K$
(possibly infinite dimensional) of $F$ and suitable $n$. In this
section we assume throughout that $A$ is representable. Then $A$
can be embedded further in $B = \M[n](\bar K)$, where $\bar K$ is
the algebraic closure of $K$, so we assume throughout, without
loss of generality, that $K$ is algebraically closed. Thus, we
view $\M[n](K)$ as an $n^2$-dimensional variety and have the
theory of affine algebraic geometry at our disposal.
When the base field $F$ is infinite, $A$ is PI-equivalent to the
$K$-subalgebra $KA$ of $\M[n](K)$, which is finite dimensional, so
one passes at once to the finite dimensional case over an
algebraically closed field. In other words, one considers finite
dimensional algebras over a field, in which case one has the tools
from the theory of finite dimensional algebras, as described
above.
However, over finite fields (which clearly have positive
characteristic), it does not suffice to consider $K$-subalgebras
of $\M[n](K)$, as evidenced in Example~\ref{basexa1}, where we
have examples of algebras $A$ for which $KA = \smat{K}{K}{0}{K}$,
but $A$ satisfies extra identities. Thus we need a subtler way,
not passing all the way to the algebraic closure, of obtaining
``canonical'' algebras that are PI-equivalent to a given
representable algebra.
Our solution is to consider the {Zariski closure}\ of $A$ in $\M[n](K)$, which
enjoys the analogs of all of the properties of finite dimensional
algebras listed above.
To show that an $F$-algebra $A$ is representable, it clearly is
enough to embed $A$ into any finite dimensional unital $K$-algebra
$B$, since letting $n = \dimcol{B}{K}$ we can further embed $B$
into $\M[n](K)$. So we consider this situation that $A \subseteq
B$, where $B$ is an $n$-dimensional algebra over the algebraically
closed field $K$. At first, we assume that $B$ is a matrix
algebra, but later we modify our choice of~$B$ to better reflect
the structure of $A$.
\subsection{The {Zariski closure}}
\begin{defn}
Suppose $B$ is a $K$-vector space, with $\dimcol{B}{K} = n$.
Picking a base $b_1, \dots, b_n$ of $B$ over $K$, we view $B$ as
the affine variety ${\mathbf A}^n$ of dimension $n$, identifying an element
$\sum_{i=1}^n \alpha _i b_i$ ($\alpha_i \in K$) with the vector $(\alpha_1,
\dots, \alpha_n)$. Usually $B$ is a $K$-algebra, but we formally do
not need this requirement.
Suppose $F$ is a subfield of $K$ and $V \! \subset\! B$ is a
vector space over $F$. The {\bf {Zariski closure}} of $V$ inside $B$, denoted
by $\cl[B]{V}$, is the closure of $V$ inside $B$ via the Zariski
topology of ${\mathbf A}^n$ (identifying $B$ with ${\mathbf A}^n$). When $B$ is
understood, we write $\cl{V}$ for $\cl[B]{V}$.
\end{defn}
Recall that the Zariski topology of the affine variety ${\mathbf A}^n$ over
$K$ is defined as having its closed sets be precisely those sets
of simultaneous zeros of polynomials from the (commutative)
polynomial algebra $K[\lambda _1, \dots, \lambda _n]$. In other words, a
closed subspace of $B$ can be defined by (finitely many)
polynomials.
\begin{rem}\Label{polyf}
When we fix a base $b_1,\dots,b_n$ for $B$, any polynomial $f \in
K[{\lambda}_1,\dots,{\lambda}_n]$ can be viewed as a function $f {\,{:}\,} B {\rightarrow}
K$ by assigning $f(\alpha_1b_1+\cdots+\alpha_nb_n) = f(\alpha_1,\dots,\alpha_n)$.
\end{rem} %
A polynomial $f({\lambda}_1,\dots,{\lambda}_n)$ is called a {\bf polynomial
relation} on $A$ if $f(A) = 0$, in the sense of Remark
\ref{polyf}. Thus a polynomial relation $f(\lambda_1, \dots, \lambda _n)$
is always taken in $\le n$ indeterminates, and we check it by evaluating it on the
coordinates of a single element $a$, for each $a$ in $A$. In
contrast, in PI-theory, a polynomial identity $g(x_1, \dots, x_m)$
of $A$ (resp.\ of $B$) can be in any number of indeterminates,
specialized to $m$ elements of $A$ (resp.\ of $B$).
\begin{rem}\Label{indep}
The {Zariski closure}\ does not depend on the choice of base of
$B$ over~$K$, since a linear transformation induces an
automorphism of the polynomial ring (i.e., sends polynomial
relations to polynomial relations) and thus does not change the
Zariski topology.
\end{rem}
The {Zariski closure}\ does depend on the way in which $V$ is embedded in $B$
as an $F$-space, even for $F$ infinite. In particular, for an
$F$-algebra $A$ contained in a $K$-algebra $B$, the notation
$\cl[B]{A}$ should also indicate the particular representation of
$A$ into $B$, as evidenced in the following example. (But
nevertheless, the representation is usually understood, and so is
not spelled out in the notation.)
\begin{exmpl}
For $F = \mathbb R$, $K = {\mathbb {C}}$, and $B = \M[n]({\mathbb {C}})$, we could embed $A =
\mathbb C$ into $M_2({\mathbb {C}})$ as scalar matrices. On the other hand,
in the spirit of Example~\ref{basexa2}, we could identify ${\mathbb {C}}$
with $\set{\smat{\alpha}{0}{0}{\bar\alpha}: \alpha \in {\mathbb {C}}}$, where
$\bar{\phantom{w}}$ denotes the usual complex conjugation. In the
first case, the {Zariski closure}\ of $A$ is $A$ itself, which is isomorphic to
$ {\mathbb {C}}$. In the second case, the {Zariski closure}\ of $A$ is
$\smat{{\mathbb {C}}}{0}{0}{{\mathbb {C}}}\cong {\mathbb {C}} \times {\mathbb {C}}$, which has larger
dimension!
Although in this example $A\cong {\mathbb {C}}$ and thus $A$ is a
${\mathbb {C}}$-algebra, it is not {Zariski-closed}\ in $M_2({\mathbb {C}})$. Thus, ${\mathbb {C}}$ need not be
{Zariski-closed}\ in $M_2({\mathbb {C}})$ as an $\mathbb R$-algebra. But note here that $A$ is
not a ${\mathbb {C}}$-subalgebra of $M_2({\mathbb {C}})$, and in fact we have the
following remark.
\end{exmpl}
\begin{rem}\Label{Kcl}
Any $K$-subspace $V$ of $B$ is {Zariski-closed}. (In particular, any
$K$-subalgebra of $B$ is {Zariski-closed}.) Indeed, a $K$-subspace is an
algebraic subvariety, defined by linear relations.
\end{rem}
In particular, we have:
\begin{lem}\Label{Kcl2}
$\cl{A} \subseteq KA$ inside $B$.
\end{lem}
\begin{proof}
We saw in Remark~\ref{Kcl} that $KA$ is {Zariski-closed}. Thus, the {Zariski closure}\
$\cl{A}$ of $A$ is always contained in $KA$.
\end{proof}
Thus, we call $KA$ the {\bf \fcr} of $A$.
\begin{prop}\Label{finf}
If $F$ is infinite, then the {Zariski closure}\ of an $F$-vector space $A$ is
equal to the \fcr\ of $A$.
\end{prop}
\begin{proof}
By definition, $\cl{A}$ is composed of the common zeros in $B$ of
the polynomial relations of $A$. Let $a \in A$, and let $f \in
K[{\lambda}_1,\dots,{\lambda}_n]$ be a polynomial relation. Then $f(\alpha
a) = 0$ for every $\alpha \in F$; since $f(\alpha a)$ is a polynomial function of
$\alpha$ vanishing on the infinite field $F$, it is identically zero. Therefore, $f(\alpha
a) = 0$ for every $\alpha \in K$, which proves that $K a \subseteq
\cl{A}$.
\end{proof}
\begin{rem}\Label{interpol}
{\ }
\begin{enumerate}
\item If a vector space is {Zariski-closed}, then any subset
defined by polynomial relations is {Zariski-closed}.
\item \Label{interpoliii}
If \ $V\! \subseteq\! B_0 \!\subseteq\! B$, then the {Zariski closure}\ of $V$ in $B$ is
equal to the {Zariski closure}\ of $V$ in $B_0$. (Indeed, $B_0$ is closed in
$B$ by \Rref{Kcl}.)
\item Suppose $A_i\! \subseteq\! B_i$ for $i=1,2$, where $B = B_1 \oplus B_2$.
Then $$\cl[B]{(A_1\!+\!A_2)} = \cl[B_1]{A_1}+\cl[B_2]{A_2}.$$
(Indeed, $(b_1,b_2)\in B_1 \oplus B_2$ satisfies all the
polynomial relations of $A_1 + A_2$ iff the $b_i$ satisfy all
polynomial relations of $A_i$, for $i=1,2$.)
\end{enumerate}
\end{rem}
The distinction between finite and infinite fields, which is
crucial in what is to come, is explained by the following
observation.
\begin{exmpl}
\begin{enumerate}
\item If $F$ is an infinite subfield of $K$, then $F$ satisfies
only the identities resulting from commutativity, and thus $\cl{F}
= K$ (this follows, {e.g.}, from \Pref{finf} above). On the other
hand, if $F$ is a finite field of order $q$, then ${\lambda}^q - {\lambda} =
0$ is an identity and $\cl{F} = F$.
\item The {Zariski closure}\ of $A = \smat{F}{K}{0}{F}$ in $M_2(K)$ is $A$ if $F$
is finite and $\smat{K}{K}{0}{K}$ otherwise.
\end{enumerate}
\end{exmpl}
\begin{exmpl}\Label{cap}
If $A_i$ are subsets of $B$, then clearly $\cl{(\bigcap A_i)} \subseteq
\bigcap(\cl{A_i})$. However, this may not be an equality. Indeed,
let $\mu$ be an indeterminate over ${\mathbb {F}}_p$, and take $A_i =
{\mathbb {F}}_p(\mu^i)$ for $i \in {\mathbb {N}}$, as subalgebras of the common
algebraic closure $K$. We have that $\cl{A_i} = K$ since these are
infinite fields, whereas $\bigcap A_i = {\mathbb {F}}_p$, which is closed.
\end{exmpl}
{}From now on, we assume that $B$ is a $K$-algebra.
\begin{thm}
{\ }
\begin{enumerate}
\item \Label{Si}
If $V$ is an $F$-subspace of $B$, then $\cl{V}$ is also an
$F$-subspace.
\item \Label{Sii}
If $A$ is an $F$-subalgebra of $B$, then $\cl{A}$ is also an
$F$-subalgebra.
\item \Label{Siii}
If $I $ is a left ideal of $ A$, then $\cl{I}$ is a left ideal of
$ \cl{A}$.
\item \Label{Siv}
If $I \triangleleft A$ then $\cl{I} \triangleleft \cl{A}$.
\end{enumerate}
\end{thm}
\begin{proof}
\begin{enumerate}
\item Given any $a \in B$ and any polynomial relation $f$
vanishing on $V$, define $f_a (x) = f(a+x)$. Clearly, for each
$a\in V$, $f_a$ vanishes on $V$, and thus on $\cl{V}$, i.e. $f(a +
r) = 0$ for all $r\in \cl{V}$. Thus, $f_r$ vanishes on $V$ for $r
\in \cl{V}$, implying $f_r$ vanishes on $\cl{V}$, i.e., $f(r+s) =
0$ for all $r,s \in \cl{V}$. This is true for every $f$ vanishing
on $V$, proving $r+s \in \cl{V}$; i.e., $\cl{V}$ is closed under
addition.
Likewise, defining $(\alpha f)(x) = f(\alpha x)$, we see for each $\alpha\in
F$ that $\alpha f$ vanishes on $V$ and thus on $\cl{V};$ i.e., $f(\alpha
r) = 0$ for all $r \in \cl{V}$, i.e., $\cl{V}$ is an $F$-vector
space.
\item Continuing the idea of (\ref{Si}), given any $a \in B$ and any
polynomial relation $f$ vanishing on~$A$, define $f_a (x) =
f(ax)$. Then, for each $a\in A$, $f_a$ vanishes on $A$ and thus on
$\cl{A}$, implying $f_a(r) = 0$ for all $r \in \cl{A}$. Repeating
this argument for $f_r$ shows that $f(rs) = 0$ for all $r,s \in
\cl{A}$, and we conclude that $rs \in \cl{A}$.
\item By (\ref{Si}), $\cl{I}$ is a subgroup of $\cl{A}$. But for any
$a \in A$ and any polynomial relation $f$ vanishing on $I$, we
define $f_a(x) = f(ax)$, which also vanishes on $I$ and thus on
$\cl{I}$. Using the same trick and defining $f_r (x) = f(xr)$, we
see, for any $r \in \cl{I}$, that $f_r$ vanishes on $A$ and thus
on $\cl{A}$, implying $ \cl{A} \cl{I}\subseteq \cl{I};$ i.e.,
$\cl{I}$ is a left ideal of $\cl{A}$.
\item Apply (\ref{Siii}) together with its right-handed version.
\end{enumerate}
\end{proof}
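The translation trick in the proof can be seen on a small (hypothetical) relation, chosen here only for illustration:

```latex
% Suppose the relation f(\lambda_1,\lambda_2) = \lambda_1\lambda_2
% vanishes on V, and fix a = (a_1,a_2) \in V.
% The translate f_a is again a polynomial in \lambda_1, \lambda_2:
$$f_a(\lambda_1,\lambda_2) = f(a_1+\lambda_1,\, a_2+\lambda_2)
     = \lambda_1\lambda_2 + a_2\lambda_1 + a_1\lambda_2 + a_1 a_2 .$$
% Since f_a vanishes on V, it vanishes on \cl{V}; iterating with a point
% of \cl{V} in place of a yields f(r+s) = 0 for all r,s \in \cl{V}.
```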
The {Zariski closure}\ acts functorially, and turns out to be a key tool in the
structure of algebras. To see this, we need to show that the {Zariski closure}\
preserves various important structural properties. Sometimes it is
convenient to separate addition from multiplication in our
discussion. The {Zariski closure}\ of an additive subgroup $(G,+)$ of
$\M[n](K)$ is a closed subgroup; i.e.,~an algebraic group.
\begin{prop}[{\cite[Cor.~7.4]{Hum}}]\Label{alggroup}
Suppose $G$ is an algebraic group and $\psi{\,{:}\,} G \to V$ is a
morphism of algebraic groups. Then $\psi(G)$ is {Zariski-closed}\ in
$V$.
\end{prop}
\begin{cor} Suppose $A$ is a {Zariski-closed}\ algebra and $\psi{\,{:}\,} A
\to B'$ is a morphism of varieties. Then $\psi (A)$ is closed in
$B'$.
\end{cor}
\begin{cor}\Label{mapp}
For every $F$-subalgebra $A$ of $B$ and morphism $\psi {\,{:}\,} B {\rightarrow}
B'$, $\psi(\cl{A}) = \cl{\psi(A)}$.
\end{cor}
\begin{proof}
Since $\psi(\cl{A})$ is closed, we have that $\cl{\psi(A)} \subseteq
\psi(\cl{A})$; but $\psi(\cl{A}) \subseteq \cl{\psi(A)}$ by continuity
of $\psi$.
\end{proof}
Thus, we see how the power of algebraic group techniques enters
into the theory of {Zariski-closed}\ algebras. There is a newer theory of
algebraic semigroups \cite{Put} that would also enable us to
utilize the multiplicative structure; we return to this later.
\begin{cor}\Label{crucial}
Let $W \subseteq B$ be $K$-spaces. For any closed $F$-subspace $A \subseteq
B$, the factor space $A/(W\cap A)$ can be identified with a {Zariski-closed}\
subspace of $B/W$.
\end{cor}
\begin{proof}
Letting $\psi {\,{:}\,} B {\rightarrow} B/W$ be the projection morphism, $A/(W
\cap A) \isom (A+W)/W = \psi(A)$ is closed by \Cref{mapp}.
\end{proof}
\begin{cor}\Label{crucial1}
If $A$ is a {Zariski-closed}\ $F$-subalgebra of $B$ and $I \triangleleft B$,
then $A/(I \cap A)$ can be identified with a {Zariski-closed}\ subalgebra of
$B/I$.
\end{cor}
\begin{proof}
A special case of \Cref{crucial}.
\end{proof}
\subsection {PI's versus polynomial relations}
\begin{prop}\Label{notate}
The polynomial identities of the finite dimensional $K$-algebra
$B$ are determined by the polynomial relations in the Zariski
topology.
\end{prop}
\begin{proof}
Fixing the base $\set{b_i}$, we can take
any polynomial $f(x_1, \dots, x_m)$ defined on $B$, and, for any
$w_2, \dots, w_m \in B$, define $\hat f(x_1)$ via $\hat f(b) =
f(b, w_2, \dots, w_m)$. Writing $b$ ``generically'' as $\sum \lambda
_i b_i$ and $\hat f(b) = \sum \beta _k b_k$, we define $\hat f_k
(b) = \beta_k$. Putting each $\beta_k = 0$ in turn clearly
defines a polynomial relation, since multiplication of the base
elements of $B$ is given in terms of structure constants.
For example, suppose $b_ib_j = \sum \alpha _{ijk} b_k$ in $B$, and $f
= x_1x_2 - x_2 x_1$. Fixing $w_2 = \sum c_i b_i$, we have $$\hat f
(b) = \sum \lambda _i b_i \sum c_j b_j - \sum c_i b_i \sum \lambda _j b_j
= \sum _k \sum _{i,j} \alpha _{ijk}(c_j\lambda _i - c_i \lambda _j) b_k,$$ so,
for each $k$,
$$\hat f_k = \sum _{i,j} \alpha _{ijk}(c_j\lambda _i - c_i \lambda _j).$$
In this way, letting $w_2,\dots,w_m$ run over all elements of $B$,
we can view any polynomial identity as an (infinite) aggregate of
polynomial relations on the coefficients of the elements of $B$.
\end{proof}
The converse is one of our main objectives: {\emph{Can {Zariski-closed}\
algebras be differentiated by means of their polynomial
identities?}} For example, any proper $K$-subalgebra of $\M[n](K)$
satisfies the Capelli identity $c_{n^2}$, which is not an identity
of $\M[n](K)$ itself.
Although every multilinear identity of $A$ is also satisfied by
$KA$, we may have $\Var{A} \neq \Var{KA}$ in nonzero
characteristic. For example, if $A$ is the algebra of
Example~\ref{basexa2}(\ref{BE2i}), then $x[y,z] = [y,z]x^{p^n}$
is an identity of $A$ but not of $KA$. The pertinence of {Zariski closure}\ to
PI-theory comes from the following obvious but crucial
observation.
\begin{lem}\Label{samevar}
$\VarF A = \VarF{\cl{A}}$.
\end{lem}
\begin{proof}
By~\Pref{notate}, any identity $f(x_1, \dots, x_m)$ of $A$
can be described in terms of polynomial relations. Thus the
polynomial identity $f$ passes to the {Zariski closure}\ $\cl{A}$.
\end{proof}
(The same proof shows that any generalized polynomial identity of
$A$ remains a generalized polynomial identity of $\cl{A}$;
likewise for rational identities.)
Let us first consider polynomial identities when $F$ is infinite.
Combining the lemma with \Pref{finf}, an $F$-subalgebra $A$ of $B$
is PI-equivalent to $KA$. Thus, up to PI-equivalence, when $F$ is
infinite, the {Zariski-closed}\ $F$-algebras correspond precisely to the
$K$-subalgebras of $\M[n](K)$, and we have nothing new.
On the other hand, nonisomorphic {Zariski-closed}\ algebras may be
PI-equivalent. For example, the algebra of diagonal matrices
$\smat{K}{0}{0}{K}$ is PI-equivalent to the algebra of scalar
matrices $\left\{ \smat{\alpha}{0}{0}{\alpha}: \alpha \in K
\right\}$. Nevertheless, the {Zariski closure}\ is a way of finding canonical
representatives of varieties of PI-algebras, which becomes much
more sensitive over finite fields.
\subsection{The structure of {Zariski-closed}\ algebras.}\Label{ss:struc}
As promised in the Introduction, we now show that {Zariski-closed}\ algebras
have a structure theory closely paralleling the structure of
finite dimensional algebras over an algebraically closed field.
Since we want to pass to the {Zariski closure}\ in order to find a
``canonical'' algebra PI-equivalent to $A$, we want this to be
independent of the choice of $K$-algebra $B$ in which $A$ is
embedded. But presumably $A$ could be embedded in two $K$-algebras
$B_1$ and $B_2$, and could be {Zariski-closed}\ in $B_1$ but not in $B_2$. Towards
this end, we say $A$ is {\bf maximally {Zariski-closed}} if $A$ is {Zariski-closed}\ in
$B$, and every nonzero ideal of $B$ intersects $A$ nontrivially.
\begin{exmpl}
In general, a {Zariski-closed}\ algebra $A$ need not be maximally closed in~
$KA$. Indeed, let $A = \set{\smat{a}{0}{0}{a^p} \,:\, a\in K}$
where $p = \operatorname{Char} K$. Then $A$ is a field, but the nonzero ideal
$\smat{K}{0}{0}{0}$ of $KA = \smat{K}{0}{0}{K}$ intersects $A$
trivially.
\end{exmpl}
However, we have the following useful fact:
\begin{prop}\Label{choice}
Every {Zariski-closed}\ $F$-subalgebra $A$ in $B$ is maximally {Zariski-closed}\ with
respect to a suitable homomorphic image of $B$.
In particular, we may assume $A$ is maximally {Zariski-closed}\ in $KA$.
\end{prop}
\begin{proof}
We proceed by induction on $\dim_K B$. If $A$ is not maximally
{Zariski-closed}, then there is some ideal $I$ of $B$ maximal with respect to
$I\cap A = 0$. But then $A \subseteq B/I$ by \Cref{crucial1}. The
second assertion follows by taking $B$ to be the $K$-space spanned
by $A$, a property retained by homomorphic images.
\end{proof}
For any subalgebra $A$ of a matrix algebra $\M[n](K)$, every nil
ideal of $A$ is nilpotent, of nilpotence index bounded by $n$, by
a theorem of Wedderburn; {cf.}~\cite[Theorem 2.6.31]{Row2}. Thus,
there is a unique largest nil (and thus nilpotent) ideal of $A$,
which we write as $\Rad(A)$. Recall that $A$ is semiprime iff
$\Rad(A) = 0$.
\begin{prop}\Label{radcl}
$\Rad (\cl{A}) = \cl{\Rad (A)} = \cl{A} \cap \Rad(KA)$.
\end{prop}
\begin{proof}
$\Rad(A)$ satisfies the identity $x^n = 0$, which can be expressed
in terms of polynomial relations ({cf.}~\Pref{notate}); therefore
$\cl{\Rad(A)}$ is also nil. But clearly $\cl{\Rad(A)} \subseteq
\cl{A}$, so $\cl{\Rad (A)}\subseteq \Rad (\cl{A})$.
Likewise $\cl{\Rad(A)} \subseteq K\Rad(A)$, by
\Lref{Kcl2}, which in turn is a nilpotent ideal of $KA$
and thus contained in $\Rad(KA)$. This proves $\cl{\Rad(A)}
\subseteq \cl{A} \cap \Rad(KA)$. But the latter is a nilpotent
ideal of $\cl{A}$ so is in $\Rad(\cl{A})$, completing the circle
of inclusions. \end{proof}
The inclusion $K \Rad(A) \subseteq \Rad(KA)$ can in general be a proper
one. In fact, when $A$ is not maximally {Zariski-closed}\ in $B$, we can have
$\Rad(KA) \neq 0$ even if $A$ is simple.
\begin{exmpl}
Suppose $L/F$ is an inseparable field extension of dimension $p$,
viewed as an $F$-subalgebra of $\M[p](K)$, where $K$ is the
algebraic closure of $F$. Then $$KL \isom K[z\,|\,z^p = 0]$$ has
non-trivial radical.
Since $F$ is necessarily infinite, $\cl{L} = KL$.
\end{exmpl}
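The isomorphism in the example can be verified directly; write $L = F(u)$ with $u^p = c \in F$ but $u \notin F$ (notation ours):

```latex
% L \cong F[x]/(x^p - c).  Over K the polynomial x^p - c acquires a p-fold root:
$$x^p - c = x^p - \bigl(c^{1/p}\bigr)^p = \bigl(x - c^{1/p}\bigr)^p
  \quad\text{in } K[x].$$
% Substituting z = x - c^{1/p} gives
$$KL \cong K[x]/(x^p - c) \cong K[z]/(z^p) = K[z \,|\, z^p = 0],$$
% a local K-algebra whose radical (z) is nonzero.
```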
We are ready to turn to the {Zariski closure}\ of factor images.
\begin{prop}\Label{radgood} Suppose $A$ is {Zariski-closed}\ in $B= KA$.
Then $A/\Rad(A)$ is {Zariski-closed}\ in $B/\Rad(B)$.\end{prop}
\begin{proof}
Let $J = \Rad (B)$. By \Cref{crucial}, $A/(A\cap J)$ is {Zariski-closed}\ in
$B/J$. But we are done, since $A\cap J = \Rad(A)$ by
Proposition~\ref{radcl}.
\end{proof}
\begin{prop} Suppose $A$ is {Zariski-closed}\ in $B$, and $z \in
\Cent{B}$. Then $A/{\operatorname {Ann}}_Az$ is {Zariski-closed}\ in $B/{\operatorname {Ann}}_Bz$.
\end{prop}
\begin{proof} ${\operatorname {Ann}}_A z = A \cap {\operatorname {Ann}}_B z$, so again we apply
\Cref{crucial}.
\end{proof}
{Zariski-closed}\ algebras behave strikingly similarly to finite dimensional
algebras over an algebraically closed field.
\begin{prop}\Label{simpA}
If $\cl{A}$ is simple, then it is a matrix algebra, either over a
finite field or over the algebraically closed field $K$.
\end{prop}
\begin{proof}
As a PI-algebra, $\cl{A}$ is finite dimensional over its center,
and thus $\cl{A} \cong M_t(D)$ for some finite dimensional
division algebra $D$. If the center is infinite, then $\cl{A}$ is
finite dimensional over the algebraically closed field $K$ by
\Pref{finf}, and thus $\cl{A} = M_t(K)$. On the other hand, if the
center is a finite field $Z$, then $D=Z$ by Wedderburn's
\Tref{Wed3}, so $\cl{A} \cong M_t(Z)$.
\end{proof}
Our next goal is to obtain a Wedderburn decomposition for a {Zariski-closed}\
algebra, into radical and semisimple parts (when $F$ is finite, as
the other case is trivial). When describing intrinsic
ring-theoretic properties of a {Zariski-closed}\ algebra $A$, we do not refer
explicitly to $B$, and thus we choose $B$ as we wish. Usually we
take $B = KA$. Here is an example of this point of view.
\begin{lem}\Label{centcl}
If $A$ is {Zariski-closed}\ in the $K$-algebra $B$, then $\Cent{A}$ is {Zariski-closed}.
\end{lem}
\begin{proof}
We may assume $B = KA$ (\Rref{interpol}(\ref{interpoliii})). An
element of $A$ is in $\Cent{A}$ iff it commutes with a (finite)
base of $B$, so we are done by~\Pref{notate}.
\end{proof}
\begin{prop}\Label{mat}
If $A$ is prime and {Zariski-closed}, then $A$ is a matrix algebra (either
over a finite field or over the algebraically closed field $K$).
\end{prop}
\begin{proof}
We choose $B=KA$. But then $\Cent{A}$ is a {Zariski-closed}\ domain and must
be either finite (and thus a field) or $K$ itself. Hence
$\Cent{A}$ is a field, so $A$ is a prime PI-algebra whose center
is a field, implying $A$ is simple, so we are done by
\Pref{simpA}.
\end{proof}
\begin{thm}\Label{semis}
Suppose $A$ is semiprime and {Zariski-closed}. Then $A$ is semisimple, namely
isomorphic to a direct product of matrix algebras over fields.
\end{thm}
\begin{proof}
By \Pref{choice}, we may assume $A$ is maximally closed in $B =
KA$.
But $\Rad (B)$ is a nilpotent ideal, so would intersect $A$ at a
nilpotent ideal, contrary to hypothesis unless $\Rad (B) = 0$.
Hence $B = S_1 \times \dots \times S_t$ is a direct product of
simple $K$-algebras. By \Cref{crucial}, the projection $A_i$ of
$A$ into $S_i$ is {Zariski-closed}. Furthermore, the $A_i$ are prime, since
otherwise, taking nonzero ideals $I_1, I_2$ of $A_i$ with $I_1I_2
= 0$, we have $(I_1K)(I_2K) = 0$ in $S_i$, contradicting the primeness of $S_i$.
But then, by Proposition~\ref{mat}, $A_i$ is a matrix algebra over
a field. Writing $S_i = B/P_i$ for maximal ideals $P_i$ of $B$, we
have $A_i \approx A/(P_i\cap A)$, implying $P_i \cap A$ are
maximal ideals of $A$, with $\bigcap _{i=1}^t (P_i \cap A) = 0$.
Hence $A$ is semisimple.\end{proof}
\begin{cor}
If $A$ is {Zariski-closed} then $\Rad (A)$ is also the Jacobson radical of $A$.
\end{cor}
\begin{proof} %
$A/\Rad (A)$ is semiprime, and thus semisimple, implying $ \Rad
(A)$ is also the Jacobson radical.
\end{proof}
Let us recall some technical ring-theoretic results from
\cite{Row2}. Any nil ideal is idempotent-lifting, by
\cite[Corollary 1.1.28]{Row2}. An algebra $A$ is {\bf semiperfect}
when $\Rad (A)$ is nil and $A/\Rad (A)$ is semisimple, so we
instantly have the following result:
\begin{cor} %
Any {Zariski-closed}\ algebra $A$ is
semiperfect.%
\end{cor}
\begin{prop}
If $A$ is {Zariski-closed}, then so is $A/\Rad(A)$.
\end{prop}
\begin{proof}
We may assume $B = KA$ (\Rref{interpol}(\ref{interpoliii})) and
then apply Proposition~\ref{radgood}.
\end{proof}
We can now find an analog to the Krull-Schmidt theorem.
\begin{thm}\Label{KrullShexp2} If $A$ is {Zariski-closed}, then
there is a direct sum decomposition $A = \bigoplus_{i=1}^t Ae_i$ of
$A$ into indecomposable modules, and this decomposition is unique
up to isomorphism and permutation of components.\end{thm}
\begin{proof}
By \cite[Lemma 2.7.18]{Row2}, since $A$ is semiperfect.
\end{proof}
Our main structural result is an analog of Wedderburn's Principal
Theorem, \Tref{Wed2}, which played such a crucial role in Kemer's
proof of Specht's conjecture in characteristic $0$. This result is
the version that we need in characteristic $p$.
\begin{thm}\Label{Zarcl1} If $A = \cl{A}$, then $A$ has a Wedderburn
decomposition $A = S\oplus J$, where $J = \Rad(A)$ and $S \cong
A/J$ is a subalgebra of $A$.\end{thm}
\begin{proof}
By Wedderburn's Principal Theorem
\cite[Theorem~2.5.37(Case~I)]{Row2} it is enough to prove $A/J$ is
split semisimple. But $A/J$ is {Zariski-closed}\, by \Pref{radgood}, so we are
done by \Tref{semis}.
\end{proof}
\subsection{Subdirect decompositions of {Zariski-closed}\ algebras}
\begin{rem}\Label{subirr} Suppose $B$ is a subdirect product of $K$-algebras
$B_1$ and $B_2$. Since annihilator ideals can be defined through
polynomial relations, a {Zariski-closed}\ subalgebra $A$ of $B$ is a subdirect
product of {Zariski-closed}\ algebras, and arguing by induction on
$\dimcol{B}{K}$, we may conclude that any {Zariski-closed}\ algebra $A$ is a
finite subdirect product of {Zariski-closed}\ algebras whose \fcr{}s are
subdirectly irreducible.
Of course, if $A$ is the subdirect product of $A_1, \dots, A_m$,
then $${\operatorname {id}} (A) = {\operatorname {id}} (A_1 \times \cdots \times A_m) = \bigcap
_{i=1}^m {\operatorname {id}} (A_i),$$
thereby reducing the study of ${\operatorname {id}}(A)$ to
the subdirectly irreducible case.
\end{rem}
Let us summarize what we have done so far and indicate what is
still missing. Suppose $A$ is {Zariski-closed}\ with \fcr\ $B$. By
Wedderburn's Principal Theorem, \Tref{Wed2}, we can write $B= S
\oplus J$, where $S$ is the semisimple part and $J$ is the radical
part, and we may assume that $B$ is subdirectly irreducible and
has block triangular form, so $A$ involves the same nonzero
components. Furthermore, $A/J$ is semisimple and thus a direct sum
of central simple algebras. We can write $A$ in upper triangular
form. On the other hand, we do not yet have a good description of
the relations among the components; these are treated in the next
section, in particular Theorem~\ref{linear}.
\section {Types of polynomial relations of a {Zariski-closed}\
algebra}\Label{sec:3}
As before, we study an $F$-algebra $A$ contained in an
$n$-dimensional $K$-algebra $B$, where $F \subseteq K$ are fields.
Since PI-theory deals so extensively with the $T$-ideal ${\operatorname {id}} (A)$
of identities of an algebra $A$, it is reasonable to expect that
the ideal of polynomial relations will play an important role in
our analysis of {Zariski-closed}\ algebras in $B$.
\begin{defn}
For an $F$-algebra $A$ contained in a $K$-algebra $B$ with basis
$b_1,\dots,b_n$, $\operatorname{poly}(A) \normali K[\lambda_1, \dots, \lambda _n]$ is
defined as $\operatorname{poly}(A) = \set{f \suchthat f(A) = 0}$.
\end{defn}
Our next objective is to find the ``best'' generators of the
polynomial relations. This is a major issue, taking much of the
remainder of this paper.
Unlike the {Zariski closure}\ ({cf.}~\Rref{indep}), $\operatorname{poly}(A)$ does depend on the
choice of a base for $B$. In fact, the general linear group
$\GL[n](K)$ acts on bases of $B$ by linear transformations and on
polynomials (and ideals of polynomials) by left composition. {}From this point of view, relations are studied up to the action of $\GL[n](K)$
on ideals of $K[{\lambda}_1,\dots,{\lambda}_n]$, and we may simplify the
relations by proper choice of the base.
In some ways, although polynomial relations generalize polynomial
identities, their ideals are easier to study than $T$-ideals,
since we view them in a much more manageable algebra, the
commutative algebra $K[\lambda_1, \dots, \lambda _n]$.
\begin{rem}\Label{rem0}
Some initial remarks in studying $\operatorname{poly} (A)$ are as follows:
\begin{enumerate}
\item \Label{rem0i}
We may assume $A$ has some element $a = \sum \alpha _i b_i$ with
$\alpha _n \ne 0;$ otherwise we have the polynomial relation $\lambda_n$
that we can use to eliminate all monomials which involve $\lambda _n$.
\item Since $A$ is a group under addition, we know that $0 \in A$,
so $f(0) = 0$ for all polynomial relations $f$ of $A$. In other
words, the only polynomials in $K[\lambda_1, \dots, \lambda _n]$ that we
consider are those having constant term $0$.
\end{enumerate}
\end{rem}
\begin{rem}\Label{finmany}
$\operatorname{poly} (A)$, being an ideal of the Noetherian ring $K[\lambda_1, \dots,
\lambda _n]$, is finitely generated. Thus all the polynomial relations
of $A$ are consequences of finitely many polynomial relations.
\end{rem}
{}Thus, Specht's problem becomes trivial for polynomial
relations. For example, the matrix algebra $M_n(F)$, viewed as an
$n^2$-dimensional affine space over the field $F$, satisfies the
polynomial relations $\lambda_i^q-\lambda_i$ iff $F$ is finite and
satisfies the identity $x^q -x$.
\subsection{Additively closed Zariski-closed sets}
Next, we adapt the well-known theory of multilinearization. Since
this uses the additive structure, we restrict the next
investigation to this case.
\begin{rem}{\ }
\begin{enumerate}
\item An additive group $A \subseteq K^{(n)}$ acts on its set of relations $\operatorname{poly}(A)$
via translation: $a {\,{:}\,} f({\lambda}) \mapsto f({\lambda}+a)$.
\item If $A$ is an $F$-space, $\mul{F}$ acts on $\operatorname{poly}(A)$ via
scaling: $\alpha {\,{:}\,} f({\lambda}) \mapsto f(\alpha {\lambda})$.
\end{enumerate}
\end{rem}
\begin{defn} A polynomial $f(\lambda_1, \dots, \lambda _n)\in K[\lambda_1, \dots, \lambda _n]$ is
{\bf quasi-linear} (with respect to $A$) if $$f(\lambda +a)=f(\lambda)+
f(a), \qquad \forall a \in A.$$
Also, $f$ is {\bf
$F$-homogeneous} if there is $d\in {\mathbb {N}} ^+$ such that, for each $\alpha
\in F$,
$$f(\alpha \lambda_1, \dots, \alpha \lambda _n) =
\alpha ^d f(\lambda_1, \dots, \lambda _n).$$
\end{defn}
\begin{rem}
{\ }
\begin{enumerate}
\item A quasi-linear polynomial (with respect to any $A$)
necessarily has zero constant term, for $f({\lambda}) = f({\lambda} + 0) =
f({\lambda})+f(0)$. \item In characteristic $0$, the quasi-linear
polynomials are linear. In characteristic $p$, however, $x^p-x$ is a
non-linear but quasi-linear ${\mathbb {F}}_p$-homogeneous polynomial.
\end{enumerate}
\end{rem}
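The quasi-linearity and ${\mathbb {F}}_p$-homogeneity of $x^p - x$ amount to two standard computations in characteristic $p$:

```latex
% Quasi-linearity: \binom{p}{\ell} = 0 in K for 0 < \ell < p, so
$$f(\lambda + a) = (\lambda+a)^p - (\lambda+a)
  = \lambda^p + a^p - \lambda - a = f(\lambda) + f(a).$$
% F_p-homogeneity (with d = 1): \alpha^p = \alpha for every
% \alpha \in \mathbb{F}_p, so
$$f(\alpha\lambda) = \alpha^p\lambda^p - \alpha\lambda
  = \alpha\,(\lambda^p - \lambda) = \alpha\, f(\lambda).$$
```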
\begin{prop}\Label{TR}
{\ }
\begin{enumerate}
\item Suppose $A$ is an additive group. The ideal of polynomial
relations of $A$ is generated by quasi-linear polynomial
relations.
\item \Label{TRii} If $A$ is an $F$-vector space, the ideal of
polynomial relations of $A$ is generated by quasi-linear
$F$-homogeneous polynomial relations.\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item Suppose $f\in \operatorname{poly}(A)$. Given any $a \in A$, we define the
new polynomial relation $\Delta_a f(\lambda) = f(\lambda +a)-f(\lambda)$. This
has smaller degree than $f$, and clearly is a consequence of $f$.
On the other hand, $f$ is a formal consequence of $\{\Delta_a f :
a \in A \}$. Indeed, since $f$ has constant term $0$ (by
Remark~\ref{rem0}), we have $f(0) = 0$, and thus $$f(a) = f(a) -
f(0) = \Delta_a f(0).$$ If $\Delta_a f(A) = 0$ for all $a$, this
implies $f(a) = 0$, so that $f(\lambda)$ is a polynomial relation of
$A$.
We can thus replace $f$ by finitely many $\Delta_a f$, in view of
Remark~\ref{finmany}, and repeating this process, eventually we
get $\Delta_af(\lambda) = 0$ for all $a \in A$, i.e., $f$ is
quasi-linear.
\item Given $\alpha \in F$, we can define $\triangledown f =
f(\alpha\lambda_1, \dots, \alpha \lambda _n) - \alpha^d f(\lambda_1, \dots, \lambda _n)$,
where $d$ is the (total) degree of $f$. This provides a polynomial
relation with fewer nonzero monomials, as is
$f-\gamma\triangledown f$ (for suitable $\gamma \in K$, provided
$\triangledown f \ne 0$). On the other hand, $f =
\gamma\triangledown f + (f-\gamma\triangledown f)$, so we continue
by induction, unless $\triangledown f = 0$. But this means that
$f$ is $F$-homogeneous.
\end{enumerate}
\end{proof}
\begin{rem}\Label{add} {\ }
\begin{enumerate}
\item For an additive subgroup $V \subseteq K^{(n)}$ and a polynomial $f$
quasi-linear with respect to $V$, the intersection $V \cap Z(f)$
is a group, where $$Z(f) = \set{c \in K^{(n)} \,:\, f(c) = 0}$$ is
the variety associated to $f$. Indeed, if $f(a) = f(b) = 0$, then
$f(a+b) = 0$, and $f(-a) + f(a) = f(-a+a) = f(0)$, implying $f(-a)
= 0$.
\item In particular, if $f$ is quasi-linear with respect to $K^{(n)}$,
then its variety is a group.
\item The variety of an arbitrary quasi-linear $F$-homogeneous
polynomial relation $f$ is a vector space over $F$, since if $f(a)
= 0$, then $$f(\alpha a) = f(\alpha a_1, \dots, \alpha a _n) =
\alpha^{d} f(a_1, \dots, a _n)=0.$$
\end{enumerate}
\end{rem}
Having reduced to quasi-linear (perhaps also $F$-homogeneous)
polynomial relations, we would like to determine their form.
\begin{defn}\Label{FTdef}
Suppose $\operatorname{Char} F = p$. Let $q = \card{F}$, setting $q = 1$ if $F$
is infinite.
\begin{enumerate}
\item\Label{FTi} A polynomial relation $f\in K[\lambda_1, \dots, \lambda
_n]$ is of {\bf weak Frobenius type} if $f$ has the following
form: \begin{equation}\label{Frobtyp}\sum _{i=1}^n \sum _{j\ge 1}
c_{ij} {\lambda}_i^{q_{ij}} = 0,\end{equation}
where $c_{ij} \in K$ and
each $q_{ij}$ is a $p$-power. (Recall that our polynomial
relations have constant term $0$.)
\item \Label{FTii} The polynomial relation $f$ is of \defin{weak
$F$-Frobenius type} (also known as a \defin{$q$-polynomial} in the
literature) if, in (\ref{Frobtyp}), we may take each $q_{ij}$ to be a
power of $q$.
\end{enumerate}
\end{defn}
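For instance, the relation cutting out a finite base field inside $K$ is already of weak $F$-Frobenius type:

```latex
% Take n = 1 and F of order q.  The subvariety F \subseteq K is the zero set of
$$\lambda_1^{q} - \lambda_1 = 0,$$
% which has the form \sum_j c_{1j}\lambda_1^{q_{1j}} with exponents
% q^1 and q^0 = 1, both powers of q: a q-polynomial.
```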
Note that weak $F$-Frobenius type (resp.~ weak Frobenius type)
reduces to the linear polynomial relation $\sum c_i {\lambda}_i = 0$
for $F$ infinite (resp.~in characteristic $0$). In view of
Remark~\ref{add}, the next result (which strengthens Proposition
\ref{TR}) characterizes algebraic varieties that are Abelian
groups.
\begin{thm}\Label{linear}
{\ }
\begin{enumerate}
\item\Label{Li}
The ideal of polynomial relations of an additive group $A$ is
generated by polynomial relations of weak Frobenius type.
Specifically, any polynomial relation is a consequence of finitely
many polynomial relations of weak Frobenius type.
\item\Label{Lii} The ideal of polynomial relations of an
$F$-vector space $A$ is generated by polynomial relations of weak
$F$-Frobenius type. Specifically, any quasi-linear
$F$-homogeneous polynomial relation is a consequence of finitely
many polynomial relations of weak $F$-Frobenius type.
\end{enumerate}
\end{thm}
\begin{proof}
\begin{enumerate}
\item It is enough to prove the second assertion. We write a
polynomial relation $f= \sum h_{{d_1, \dots, d_n}}$, where $h$ is
the monomial with multi-degree ${d_1, \dots, d_n}$, i.e., of the
form $c\lambda _1 ^{d_1}\cdots \lambda _n ^{d_n}$. Clearly $\Delta _a f=
\sum \Delta _a h_{{d_1, \dots, d_n}}$, so we consider a typical
monomial $$ h_{{d_1, \dots, d_n}} = c\lambda _1 ^{d_1}\cdots \lambda _n
^{d_n}.$$
Taking $a = \sum \alpha_i b_i$ with $\alpha_n\ne 0$
({cf.}~Remark~\ref{rem0}(\ref{rem0i})), we have
$$\Delta _a (h) = c(\lambda _1+ \alpha_1) ^{d_1}\cdots (\lambda _n + \alpha
_n)^{d_n} -c\lambda _1 ^{d_1}\cdots \lambda _n ^{d_n},$$ so the highest
monomial not cancelled (under the lexicographic order giving
highest weight to $\lambda _1$) is $$d_n\alpha _n c \lambda _1 ^{d_1}\cdots
\lambda _n ^{d_n -1}.$$
But this must be $0$, so $ d_n\alpha _n c $ must be $0$ in $K$, i.e.
$p\,|\,d_n$, where $p = \operatorname{Char} (K)$. Continuing in this vein, we see
that the highest term in $\Delta _a (h)$ is
$$\alpha _n ^q c\binom {d_n}{q}\lambda _1
^{d_1}\cdots \lambda _n ^{d_n -q},$$ for some $p$-power $q$. This is a
contradiction unless it is cancelled by $\Delta_a(h')$ for some
other monomial $$h' = c'\lambda _1 ^{d'_1}\cdots \lambda _n ^{d'_n}.$$ By
the maximality assumption on the degrees, we must have $d_i' = d_i$
for all $i\le {n-1}$, and $d'_n = d_n - q + q'$ for some $p$-power
$q'$. Then $\Delta_a (h')$ contains the term
$$\alpha _n ^{q'} c'\binom {d'_n}{q'}\lambda _1
^{d_1}\cdots \lambda _n ^{d_n -q}.$$ %
Perhaps other terms of this form come from other monomials, but
the upshot is that a certain linear combination of $p$-powers of
$\alpha _n$ vanishes. But this is true for each $\alpha _n$, and thus
yields a polynomial relation $g(\lambda _n)$. Applying $\Delta_{\alpha}$
to $g(\lambda_n)$ for every $\alpha \in K$ enables us to reduce the power,
unless $\Delta _\alpha(g(\lambda_n)) = 0$ for all $\alpha$, {i.e.}, $g$ is
quasi-linear. But in this case we can add $g$ to our list of
polynomial relations, and use $g$ to reduce the degree of $f$ in
$\lambda _n$.
Thus one continues until $\lambda _n$ does not appear in the highest
monomial of $f$. Applying the same argument whenever a monomial
has at least two indeterminates in it, we eventually reach the
situation in which each monomial has a single indeterminate, {i.e.},
$h_i = \sum_j c_{ij} \lambda _i^{d_{ij}}$. Applying $\Delta$ lowers
the degree unless every $d_{ij}$ is a $p$-power, as desired.
\item Continuing (\ref{Li}), applying $\triangledown$ (as defined in
the proof of \Pref{TR}(\ref{TRii})), lowers the degree unless
every $d_{ij}$ is a $q$-power, as desired.
\end{enumerate}
\end{proof}
The claim of Example~\ref{basexa2}(\ref{BE2ii}) follows as an
immediate consequence: if the algebra satisfied any extra
identity, its corresponding polynomial relations would have to come
from the fact that $\sigma$ is Frobenius.
\subsection{Multiplicatively closed Zariski-closed
sets}
Our next theorem is not needed for our exposition, since we never
deal with multiplicatively closed subvarieties of $K^{(n)}$ unless
they are algebras. Nevertheless, the result is interesting in its
own right and complements the other results.
\begin{exmpl} The subvariety $(K \times \{ 0\}) \cup (\{
0 \} \times K)$ of $K^{(2)}$ is defined by the polynomial
relation~$\lambda_1\lambda_2$.
\end{exmpl}
\begin{thm} %
Suppose $A$ is a {Zariski-closed}\ (multiplicative) submonoid of $K ^{(n)}$. Then the
polynomial relations of $A$ are generated by polynomial relations
of the form $$\lambda _{i_1}\cdots \lambda _{i_t} = 0; \qquad \lambda
_1^{i_1}\cdots \lambda _n ^{i_n}=\lambda _1^{j_1}\cdots \lambda _n ^{j_n}\quad
\text{for}\quad i_1, \dots, i_n ,j_1, \dots, j_n \in {\mathbb {N}}.$$
\end{thm}
\begin{proof} To simplify notation, we write ${\mathbf i}$ for $i_1,
\dots, i_n$, $\lambda^{\mathbf i}$ for $\lambda _1^{i_1}\cdots \lambda _n
^{i_n}$, and ${\mathbf \alpha}^{\mathbf i}$ for $\alpha _1^{i_1}\cdots \alpha
_n ^{i_n}$. On the other hand, ${\mathbf \alpha}^m\lambda^{\mathbf i}$
designates $\alpha_1^m\lambda _1^{i_1} \cdots \alpha _n ^m\lambda _n^{i_n}$.
Take any polynomial relation $f = \sum c_{\mathbf i} \lambda
_1^{i_1}\cdots \lambda _n ^{i_n} = \sum_{\mathbf i} c_{\mathbf i}\lambda ^
{\mathbf i}$. By definition, $f(\mathbf \alpha) = 0$ for any $\mathbf
\alpha \in A$, and thus $f(\mathbf \alpha^j) = 0$ for each $j$, since $A$
is assumed multiplicative.
Cancelling out any $\lambda _i$ appearing in a polynomial relation
$\lambda _i = 0$, we induct on the number of indeterminates in $f$
and then on the number of monomials of $f$. Take any point $(\alpha_1,
\dots, \alpha _n)$. For $\gamma \in K$, we write $f_\gamma$ for the
sum of those monomials $c_{\mathbf i}\lambda^{\mathbf i} $ for which
$\mathbf \alpha ^ {\mathbf i} = \gamma$. Then by definition, $f = \sum
f_\gamma$. But $$f({\mathbf \alpha}^m \lambda) = \sum c_{\mathbf i} \alpha_1^{mi_1}\cdots \alpha _n
^{mi_n} \lambda _1^{i_1}\cdots \lambda _n ^{i_n}= \sum \gamma ^m f_\gamma
(\lambda),$$ so by a Vandermonde argument, we see that each
$f_\gamma$ is a polynomial relation of $A$.
Thus, one can separate $f$ into a sum of polynomial relations
involving fewer monomials (and conclude by induction) unless
(comparing monomial by monomial) all the values ${\mathbf \alpha}^{\mathbf i}$ are
equal.
But this means, for each monomial $\lambda _1^{i_1}\cdots \lambda _n ^{i_n}$
and $\lambda _1^{j_1}\cdots \lambda _n ^{j_n}$ that $A$ satisfies the
equalities
\begin{equation} \Label{mon1}\alpha _1^{i_1}\cdots \alpha _n ^{i_n}=\alpha _1^{j_1}\cdots
\alpha _n ^{j_n}
\end{equation}
for all $(\alpha _1, \dots, \alpha _n) \in A$, so that
$$\lambda _1^{i_1}\cdots \lambda _n ^{i_n}=\lambda _1^{j_1}\cdots \lambda _n ^{j_n}$$
is a polynomial relation of $A$. In other words, $\alpha = \alpha
_1^{i_1}\cdots \alpha _n ^{i_n}$ is independent of the choice of the
monomial $\lambda _1^{i_1}\cdots \lambda _n ^{i_n}$ of $f$, so that
$f({\mathbf \alpha}^m\lambda) = \alpha^m f(\lambda)$ for every $m$. But now, working backwards,
$$0 = f(\alpha_1, \dots, \alpha _n) = \alpha f(1,\dots, 1),$$
implying $f(1,\dots, 1) = \sum c_{\mathbf i} = 0$. This implies that $$f =
\sum _{\mathbf i} c_{\mathbf i} ( \lambda _1^{i_1}\cdots \lambda _n ^{i_n}- \lambda_1^{j_1}\cdots
\lambda_n ^{j_n})$$ is a consequence of the relations~\eqref{mon1}.
\end{proof}
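The Vandermonde step in the proof above (the system $\sum_\gamma \gamma^m f_\gamma(\lambda) = 0$, taken over several exponents $m$, forces each $f_\gamma$ to vanish) rests on the nonvanishing of a Vandermonde determinant with distinct nodes. A minimal numerical sketch, with hypothetical stand-in values for the $\gamma$:

```python
from itertools import combinations

def vandermonde_det(nodes):
    """Determinant prod_{i<j} (g_j - g_i) of the Vandermonde matrix."""
    det = 1
    for gi, gj in combinations(nodes, 2):
        det *= gj - gi
    return det

# Stand-ins for the distinct monomial values gamma = alpha^i (hypothetical):
gammas = [2, 3, 5]

# Distinct nodes give a nonzero determinant, so the homogeneous system
# sum_gamma gamma^m * f_gamma = 0 (m = 0, 1, 2) has only f_gamma = 0.
print(vandermonde_det(gammas))  # 6
```

Since the values $\gamma$ appearing in the proof are pairwise distinct by construction, the determinant is nonzero and each $f_\gamma$ must vanish.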
\subsection{Polynomial relations of commutative algebras}
Now let us utilize the fact that $A$ is an $F$-algebra.
\begin{defn}
A polynomial relation $f$ is of {\bf Frobenius type} if it has one
of the following three forms, where $p = \operatorname{Char} (F)$:
(i) $\lambda _i = 0$,
(ii) $\lambda _i- \lambda _i^s = 0$, where $s$ is a $p$-power, or
(iii) $\lambda _i - \lambda _j^s = 0$, $j \ne i$, where $s$ is a
$p$-power.
The polynomial relation $f$ has {\bf $F$-Frobenius type} if, in
(ii) and (iii), $s$ is a $q$-power, $q = \card{F}$ (where, as
usual, we put $q = 1$ if $F$ is infinite).
\end{defn}
\begin{lem}\Label{glue0} {\ }
Suppose $A$ is an additive subgroup of $K^{(n)}$, defined by
polynomial relations of the form $\lambda _i = 0$ and $\lambda _i^{q_i} =
\lambda _j^{q_j}$ for natural numbers $q_i$ and $q_j$. Then any such
relation is equivalent to a polynomial relation of Frobenius type
(ii) or (iii).\end{lem}
\begin{proof}
First we discard all components $i$ for which $\lambda _i = 0$ holds. Next, assuming
$q_i \le q_j$, one could then factor out $x^{q_j -q_i}$ (since $K$
is a field) to get $\lambda _i - \lambda _j^q$ for some $q\ge 1$. We are
done if $q=1$, so assume $q>1$. This relation holds for $\lambda _i =
\lambda _j = 1$, so additivity of $A$ gives the relation
$$(\lambda _i+1)- (\lambda _j+1)^q = (\lambda _i-\lambda _j^q) - \sum _{\ell = 1}^{q-1}\binom q \ell
\lambda _j^\ell.$$
If some $\binom q \ell \ne 0$, this translates to algebraicity of
the $j$ component of $A$, which must thus be defined in a finite
subfield of $K$, say of dimension $m$ over ${\mathbb {F}}_p$; as is well
known, every polynomial in $x$ vanishing on this field is a
multiple of $x^{p^m}-x$, and we have reduced to type (ii).
Thus we may assume that $\binom q \ell = 0$ for all $1 \le \ell
<q$. But this clearly implies $K$ has positive characteristic
$p>0$, and therefore $q$ is a power of $p$ (since otherwise, if
$q'$ is the highest power of $p$ less than $q$, then $\binom q{q'}
\ne 0$ in $K$.)
We may now assume $f = \lambda_i - \lambda _j ^s$. Write $s = q^et$ for
$t$ prime to $q$. Then $f$ reduces to the polynomial relation
$\lambda _i - \lambda _j ^t$. But taking $\alpha \in F$ with $\alpha ^t \ne \alpha$,
we see that $$\alpha^t (\lambda _i - \lambda _j ^t)- (\alpha\lambda _i - \alpha^t\lambda _j
^t) = (\alpha^t -\alpha) \lambda _i,$$ yielding the polynomial relation $\lambda
_i$ and thus also $\lambda _j$.
\end{proof}
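The divisibility step in the proof (if $\binom q\ell \equiv 0 \pmod p$ for all $0<\ell<q$, then $q$ is a power of $p$) can be checked directly for small values; by Lucas' theorem the converse also holds. A quick sketch:

```python
from math import comb

def interior_binomials_vanish(q, p):
    """True iff C(q, l) is divisible by p for every 0 < l < q."""
    return all(comb(q, l) % p == 0 for l in range(1, q))

# Among 2 <= q <= 32, exactly the powers of 2 have all interior binomial
# coefficients divisible by 2 (and similarly for any prime p, by Lucas):
print([q for q in range(2, 33) if interior_binomials_vanish(q, 2)])
# [2, 4, 8, 16, 32]
```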
\begin{thm}\Label{comZar}
Suppose $A$ is a commutative, semiprime {Zariski-closed}\ $F$-subalgebra of a
finite dimensional commutative $K$-algebra $B$. Then $\operatorname{poly}(A)$ is
generated by finitely many polynomial relations of Frobenius type.
\end{thm}
\begin{proof}
We can write $A \subseteq K_1 \times \dots \times K_t$, where each
$K_i \approx K$. By \Tref{linear}, it is enough to consider
polynomial relations of weak $F$-Frobenius type, i.e. of the form
$f = \sum _{i=1}^n \sum _{j\ge 1} c_{ij} \lambda_i^{q_{ij}}$, where the
$q_{ij}$ are powers of $p$ and the constant term of $f$ is $0$. If we
have any relations of type (i), we can simply remove all terms
with $\lambda _i$, and ignore~$\lambda _i$.
If all the $q_{ij}$ are divisible by~$p$, then we may take
the $p^{\mbox{th}}$ root and still have a polynomial relation, so we may
assume that some monomial of $f$ is linear. For convenience we
assume $f$ has a monomial linear in $\lambda _1$.
If $A$ satisfies a polynomial relation only involving $\lambda _i$,
this means that the projection of $A$ onto $K_i$ is a finite field
$F_i$, which, if nontrivial, satisfies some identity $g_i = \lambda
_i^q - \lambda _i$, where we take $q$ minimal possible. But it is
well known that every polynomial satisfied by all elements of
$F_i$ is a consequence of $g_i$, so we may assume that all
polynomial relations involving only $\lambda _i$ are a consequence of
$g_i$.
We claim that modulo the $g_i$, either $f$ becomes $0$ or $f$
yields some polynomial relation of Frobenius type (iii). Suppose
$\lambda _1$ appears with two differing degrees. Write $f = \sum f_i
\lambda_1^i$. If each $f_i$ is a polynomial relation, then we continue
inductively. Otherwise take some nonzero value and conclude that
$F_1$ is algebraic of bounded degree over $F$, and thus is finite,
yielding a polynomial relation of type (ii), which thus is a
consequence of $g_1$.
Thus we may assume $\lambda_1$ appears in a single monomial. But this
means $f$ has the form $\lambda_1 + \sum c_j \lambda _j^{q_j}$, so
$$\alpha_1\lambda_1 + \sum c_j \alpha_j^{q_j}
\lambda _j^{q_j} = \alpha_1(\lambda_1 + \sum c_j \lambda _j^{q_j})$$ formally, or
in other words $\alpha _1 = \alpha _j^{q_j}$ for each $j$ appearing in
$f$, as desired.
\end{proof}
\begin{rem}\Label{gluequiv}
We can combine Theorems \ref{linear} and \ref{comZar} for any
subalgebra $A$ of $B= \M[n](K)$, as follows: Let $A^1 = A +
\sum Fe_{ii} \subseteq B$. We define a relation on $\{1, \dots,
n\}$ by saying $i \equiv j$ if there is a nontrivial Frobenius
polynomial relation involving both $i$ and $j$, and we extend it by
transitivity to an equivalence relation on $I = \{1, \dots, n\}$.
If $I_u$ is some equivalence class, then $A^1$ contains some
element
$$e_u= \sum _{i \in I_u} \alpha_i e_{ii}.$$ But then, for any $a\in
A$ and any $u,v \in I$, clearly $e_u a e_v \in A^1$, and the only
indices appearing nontrivially are from $I_u \times I_v$. We call
a quasi-Frobenius polynomial relation {\bf basic} if it only
involves coefficients from $I_u \times I_v$ for suitable
equivalence classes of $I$. In this way, we see that any
quasi-Frobenius polynomial relation reduces to the sum of basic
quasi-Frobenius polynomial relations.
\end{rem}
\section{Explicit representations of {Zariski-closed}\ algebras}\Label{sec:4}
We have seen in Theorem \ref{semis} that any semiprime {Zariski-closed}\
algebra is a direct sum of matrix components and thus has a very
easy representation inside $\M[n](K)$ along diagonal matrix
blocks. In order to describe the structure of a {Zariski-closed}\ algebra $A$
with nonzero nilradical $J$, we consider a faithful representation
of $A$ in a matrix algebra $\M[n](K)$. Throughout, we use this
particular representation to view $A \subseteq \M[n](K)$. Our
object is to find a ``canonical'' set of matrix units of
$\M[n](K)$ with which to view a {Zariski-closed}\ algebra $A$. The underlying
idea, introduced by Lewin \cite{Lew74} for PIs and utilized to
great effect in characteristic 0 by Giambruno and Zaicev
\cite{GZ}, is to write the algebra in something like upper
triangular form, in order to understand the placement of radical
substitutions in polynomial identities.
\begin{exmpl}
$A = \left(\begin{matrix} K & K & 0 \\ 0 & K & 0 \\ 0 & K & K
\end{matrix}\right)$.
Here $A/J$ can be identified with $\left(\begin{matrix} K & 0& 0
\\ 0 & K & 0 \\ 0 & 0& K \end{matrix}\right)$, and $J$ with
$\left(\begin{matrix} 0& K & 0 \\ 0 & 0 & 0 \\ 0 & K &
0 \end{matrix}\right)$. We can put $A$ into upper triangular form
by switching the second and third rows and columns to get
$\left(\begin{matrix} K & 0& K
\\ 0 & K & K \\ 0 & 0& K \end{matrix}\right)$.
\end{exmpl}
Unfortunately, we may not be able to straighten out $A$ so easily,
even in characteristic~$0$.
\begin{exmpl}\Label{glue00}
Let $A = \left\{\left(\begin{matrix} a & b \\ 0 & a
\end{matrix}\right): \ a, b \in K\right\}$. This can also be viewed as the
$2\times 2$ matrix representation of the commutative algebra of
dual numbers of $K$, i.e., $\left(\begin{matrix} a & b \\ 0 & a
\end{matrix}\right)$ is identified with $a+ b \delta$, where
$\delta ^2 = 0$.
\end{exmpl}
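The identification in the example can be checked mechanically: multiplying $a+b\delta$ with $\delta^2=0$ agrees with multiplying the corresponding upper triangular matrices. A minimal sketch (the `Dual` class and helpers are our own devices, not from the text):

```python
class Dual:
    """Dual numbers a + b*delta with delta**2 = 0 (integer coefficients here)."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __mul__(self, other):
        # (a + b d)(c + e d) = ac + (ae + bc) d, since d^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def to_matrix(x):
    """The representation a + b*delta -> [[a, b], [0, a]]."""
    return [[x.a, x.b], [0, x.a]]

def mat_mul(m, n):
    """2x2 matrix product."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x, y = Dual(2, 3), Dual(5, 7)
# The representation is multiplicative:
print(to_matrix(x * y) == mat_mul(to_matrix(x), to_matrix(y)))  # True
```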
In order to represent this as a triangular matrix ring, we must
identify certain components. Our objective in this section is to
describe how the identifications work for a particular
representation of a {Zariski-closed}\ algebra. Let us start with an easy
example that may lower our expectations.
\begin{exmpl}\Label{unglued} Let $F = {\mathbb {F}}_2(\mu)$, where $\mu$ is an
indeterminate over the field ${\mathbb {F}}_2$ of two elements
and $K$ is its algebraic closure. Then $F$ can be represented in
$K\times K$ by $a \mapsto (a^2,a^4)$, so the identification among
components is the relation $\lambda _2 = \lambda _1^2$.
\end{exmpl}
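The example above can be illustrated computationally, encoding elements of $\mathbb{F}_2[\mu]$ as bitmasks with carry-less multiplication (the encoding is ours; $a^2$ and $a^4$ are computed independently and then compared against the relation):

```python
def f2_mul(a, b):
    """Multiply polynomials over F_2, encoded as bitmasks (carry-less)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def embed(a):
    """The representation a -> (a^2, a^4) of the example above."""
    return (f2_mul(a, a), f2_mul(f2_mul(f2_mul(a, a), a), a))

# Every image point satisfies the gluing relation lambda_2 = lambda_1^2,
# and the map is additive (the "freshman's dream"; + in F_2[x] is XOR):
for a in range(64):
    l1, l2 = embed(a)
    assert l2 == f2_mul(l1, l1)
for a in range(16):
    for b in range(16):
        assert embed(a ^ b) == tuple(u ^ v for u, v in zip(embed(a), embed(b)))
print("the image satisfies lambda_2 = lambda_1^2")
```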
A direct identification between two components, via a polynomial
relation, is called {\bf gluing}. When components are not glued,
we say they are {\bf separated}. In this paper, gluing is
considered mostly along the diagonal blocks, since off-diagonal
gluing turns out to be more complicated. As the above example
shows, gluing need not be ``onto'', when taken over infinite
fields.
\begin{defn}\Label{wbf}
Suppose $A$ is a {Zariski-closed}\ subalgebra of $\M[n](K)$ with radical $J$,
such that $A/J = A_1 \times \dots \times A_k$ with $A_u \cong
\M[n_u](F_u)$ for subfields $F_u \subseteq K$ ($u = 1,\dots,k$).
We say $A$ is in {\bf Wedderburn block form} if $n = \sum t(u)
n_u$, and for each $u$ there are $t(u)$ distinct matrix blocks
$A_u^{(1)}, \dots, A_u^{(t(u))}$ of size $n_u \times n_u$ along
the diagonal, each isomorphic to $A_u$, such that the given
representation $\varphi {\,{:}\,} A \to \M[n](K)$ restricts to an
embedding $\varphi_u {\,{:}\,} A_u \to A_u^{(1)}\times \dots \times
A_u^{(t(u))}$, where the projection $\varphi_u^{(\ell)} {\,{:}\,} A_u
\to A_u^{(\ell)}$ is an isomorphism for each $1\leq \ell\leq
t(u)$. Furthermore, $J$ is embedded into strictly upper
triangular blocks (above the diagonal blocks). For each $u$, the
blocks $A_u^{(1)},\dots,A_u^{(t(u))}$ are {\bf glued} and belong
to the same {\bf gluing component}. For further reference, we
define $m = \sum t(u)$, the total number of diagonal blocks
(before gluing) in the representation of $A$.
\end{defn}
For algebras with $1$, where $1$ is represented as the identity
matrix, obviously each diagonal Wedderburn block is nonempty.
However, for algebras without $1$, one could have some $B_u$
consisting only of $0$ matrices. In this case we say the block
$B_u$ is {\bf empty}.
Note that the glued blocks do not have to occur consecutively; for
example the semisimple part could be embedded in
$$\left(\begin{matrix}
A_1^{(1)} & 0& 0 & 0 & 0
\\ 0 & A_2^{(1)} & 0 & 0 & 0 \\ 0 & 0 & A_1^{(2)} & 0 & 0
\\0 & 0& 0 & A_1^{(3)}& 0\\0 & 0& 0 & 0 &A_2^{(2)}\end{matrix}
\right).$$
The radical belongs to blocks above the diagonal; for example,
$$\left(\begin{matrix} A_1^{(1)} & 0& 0 & J & J
\\ 0 & A_2^{(1)} & J & 0 & J \\ 0 & 0 & A_1^{(2)} & 0 & 0
\\0 & 0& 0 & A_1^{(3)}& 0\\0 & 0& 0 & 0 &A_2^{(2)}\end{matrix}
\right).$$
\begin{exmpl}\Label{GZB}
The following basic illustration of Wedderburn decomposition,
without gluing, appears as the ``minimal algebra'' in Giambruno
and Zaicev \cite[Chapter 8]{GZ}, which they realize as an upper
block-triangular algebra
$$\(\begin{array}{cccc}
M_{n_1}(F) & * & * & * \\
0 & M_{n_2}(F) & * & * \\
0 & 0 & \ddots & * \\
0 & 0 & 0 & M_{n_t}(F)
\end{array}\).$$
The Giambruno-Zaicev algebra could be thought of as the
algebra-theoretic analog of the Borel subgroups of
$\operatorname{GL}(n,F)$. The semisimple part $S$ is the direct
sum $\bigoplus M_{n_i}(F)$ of the diagonal blocks, and the radical is
the part above these blocks, designated above as $(*)$.
This kind of algebra first
came up in a theorem of Lewin \cite{Lew74}, who showed that any
PI-algebra $A$ with ideals $I_1,I_2$ satisfying $I_1I_2 = 0$ can
be embedded into an algebra of the form $\(\begin{array}{cccc}
A/I_1 & * \\
0 & A/I_2
\end{array}\)$.
Giambruno and Zaicev proved, in characteristic $0$, that for any
variety $\mathcal V$ of PI-algebras, its exponent $d$ (a concept
defined in terms of the asymptotics of the codimensions of
$\mathcal V$) is an integer and can be realized in terms of one of
these Giambruno-Zaicev algebras, as $d = \sum _{u=1}^k n_u^2$.
\end{exmpl}
\begin{rem}\Label{GZB1}
Belov \cite{Bel2} proved a parallel result to Giambruno-Zaicev's
theorem, for Gel'fand-Kirillov dimension in any characteristic.
Namely the GK-dimension of the ``generic'' upper
block-triangular algebra generated by $m$ generic elements is $$k
+ (m-1)\sum _{u=1}^k n_u^2.$$
In the same paper, Belov proved the following result: Suppose $A =
S\oplus J$ is the Wedderburn decomposition of a f.d.~algebra $A$,
with $S = A_1 \oplus \dots \oplus A_k$ (where $A_u$ are the simple
components). If there exist $x_u \in J$ such that $A_1 x_1A_2 x_2
\cdots x_m A_m \ne 0$ for some $m \le k$, then $A$ contains a
subalgebra isomorphic to the Giambruno-Zaicev algebra built up
from from $A_1, \dots, A_k$.
\end{rem}
Wedderburn block form refines Wedderburn's principal theorem.
Indeed, it is apparent by inspection that the part of $A$ along
the diagonal blocks is the semisimple part of $A$, and the part on
the blocks above the diagonal is the radical part. Note that the
discussion so far has not described the identifications among the
$A_u^{(\ell)}$. Clearly there must be gluing whenever some
$t(u)>1$, since $\dim (A_u^{(1)}\times \dots \times A_u^{(t(u))})
= t(u) \dim A_u$.
We can tighten these observations with some care. We start by
assuming that $A$ is a $K$-algebra ($K$ is algebraically closed,
as always). Then each $F_u = K$. An easy application of an
argument of Jacobson (spelled out in \cite[Theorem
25C.18]{Row3}) yields:
\begin{thm}\Label{block1}
For $K$ algebraically closed, any finite dimensional $K$-algebra
$A$ can be put into Wedderburn block form.
\end{thm}
\begin{cor}\Label{block2}
Any {Zariski-closed}\ $F$-subalgebra $A\subseteq \M[n](K)$ can be put into
Wedderburn block form.
\end{cor}
\begin{proof} We put $KA$ into Wedderburn block form
and then intersect down to~$A$. Explicitly, $A/J$ is {Zariski-closed}\ in the
semisimple part $S$ of $KA$, and $J$ is the intersection of $A$
with the part of $KA$ above the diagonal components.
\end{proof}
\subsection{Gluing Wedderburn blocks}
Let us investigate gluing in the Wedderburn block form. We start
with the semisimple part $\bigoplus A_u^{(\ell)}$ (of the blocks
along the diagonal). By definition, the only gluing occurs among
the $A_u^{(\ell)}$ for the same $u$.
\begin{rem} Suppose $\varphi_u^{(\ell)} {\,{:}\,} A_u \to A_u^{(\ell)}$ is
the representation as above. Then for any $\ell, \ell'$ in $\{1,
\dots, t(u)\}$ we have the isomorphism $$\varphi_u^{\ell, \ell'} =
\varphi_u^{(\ell')}\circ(\varphi_u^{(\ell)})^{-1} {\,{:}\,} A_u^{(\ell)} \to
A_u^{(\ell')}.$$\end{rem}
\begin{rem}\Label{glue1}
Let $1_u^{(\ell)}$ denote the unit element of $A_u^{(\ell)}$.
($1_u^{(\ell)}$ is then an idempotent of $\M[n](K)$, but we want
to emphasize its role in $A_u^{(\ell)}$.) We want to understand
the isomorphisms $\varphi_u^{\ell, \ell'}$ in terms of their
action on the center of the block. First of all, by \Lref{centcl},
since $A$ is {Zariski-closed}, so is $\Cent{A}$, which contains $\sum _{u}
F_u \sum _{\ell} 1_u^{(\ell)}$. It follows (by Theorem
\ref{comZar}) that all identifications in the center come from
polynomial relations of Frobenius type, between pairs $1_u^{(\ell)}$ and
$1_u^{(\ell')}$ (as $\ell, \ell'$ run between $1$ and $t(u)$); i.e., of
the form
$$ {\lambda_u^{(\ell)}}_{ii} - ({\lambda_{u}^{({\ell'})}}_{ii})^q,$$
where $q$ is a power of $\card{F}$ (the same $q$ for each $i =
1,\dots, n_u$). Here, and henceforth, ${\lambda_{u}^{(\ell)}}_{ij}$ is
the variable corresponding to the $(i,j)^{\mbox{th}}$ entry in the block
matrix $A_{u}^{(\ell)}$. This clearly is an instance of gluing.
\end{rem}
Since taking $q$-th powers is not necessarily onto for $K$
infinite, Remark~\ref{glue1} is not symmetric, in the sense that
reversing direction from $\ell'$ to $\ell$ involves taking $q$-th
roots, which is possible in the variety but not over all of $K$.
In the relation above, if $\ell'=\ell$, i.e.,
${\lambda_u^{(\ell)}}_{ii} = ({\lambda_u^{(\ell)}}_{ii})^q$ holds, then we
can view $A_u^{(\ell)}$ as an algebra over a base field $F_u$ of
$q$ elements.
We continue from the center of diagonal blocks to the blocks
themselves.
\begin{defn}\Label{Fgng}
Suppose $\card{F} = q$. Two diagonal blocks $A_u^{(\ell)}$ and
$A_u^{(\ell')}$ of $A$ in $\M[n](K)$ have {\bf Frobenius gluing}
of {\bf exponent} $d$ if there are $n_u\times n_u$ matrix units of
$\M[n](K)$ such that, for large enough $\kappa$, writing
$e_{ij}^{(\ell)}$ for the corresponding $n_u \times n_u $ matrix
units in $A_u^{(\ell)}$, the isomorphism $\varphi_u^{\ell,\ell'}$
identifies $\sum \alpha _{ij}^{q^{d+\kappa}} e_{ij}^{(\ell)}$ (in
$A_u^{(\ell)})$ with $\sum \alpha _{ij}^{q^{\kappa}} e_{ij}^{(\ell')}$
(in $A_u^{(\ell')})$.
This definition can be extended to gluing an arbitrary number $t$
of blocks.
\end{defn}
In this definition, since we may take $q$-th roots, $\kappa$ can be
chosen to be $\max\set{0,-d}$. We could have $d = 0$, in which
case we call this {\bf identical gluing}. The same considerations
hold for an arbitrary number $t$ of glued blocks; the smallest
exponent may always be assumed to be $0$.
Soon we shall see that the only possible gluing on diagonal blocks
is Frobenius.
Let us consider the general situation. Since {Zariski-closed}\ algebras are
defined in terms of polynomial relations on the algebraically
closed field $K$ and all gluing is via a homomorphism from $K$
to itself, the gluing must come from a homomorphism defined by a
polynomial.
\begin{rem}\Label{glu00}
Gluing is possible only between matrix blocks of the same size,
whose centers have the same cardinality.
When $\operatorname{Char} (F)=0 $, the only polynomial homomorphism is the
identity, so every gluing is identical.
\end{rem}
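The observation that gluing must come from a polynomial homomorphism rests on $a \mapsto a^p$ being a ring endomorphism in characteristic $p$. A small sketch over $\mathbb{F}_4$, with elements encoded as bitmasks modulo $x^2+x+1$ (the encoding is our own device):

```python
def gf4_mul(a, b):
    """Multiply in F_4 = F_2[x]/(x^2 + x + 1), elements encoded as 0..3."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:       # reduce: x^2 = x + 1
        p ^= 0b111
    return p

def frob(a):
    """The Frobenius endomorphism a -> a^2 of F_4."""
    return gf4_mul(a, a)

for a in range(4):
    for b in range(4):
        # a -> a^2 is multiplicative and (in characteristic 2) additive:
        assert frob(gf4_mul(a, b)) == gf4_mul(frob(a), frob(b))
        assert frob(a ^ b) == frob(a) ^ frob(b)
    assert frob(frob(a)) == a   # a^4 = a: Frobenius has order 2 on F_4
print("a -> a^2 is a ring automorphism of F_4")
```

In particular the powers of Frobenius exhaust the available identifications in positive characteristic, which is what Frobenius gluing records.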
\begin{prop}\Label{pass}
If $F$ is an infinite field, any variety of $F$-algebras contains
a {Zariski-closed}\ algebra whose gluing is identical.%
\end{prop}
\begin{proof} %
The \fcr\ $AK$ is in the same variety, by \Pref{finf} and
\Lref{samevar}.
\end{proof}
When $F$ is a finite field, we must also contend with the
Frobenius endomorphism, as illustrated in \Eref{unglued}, which we
note is preserved when we pass to the {Zariski closure}.
The point of \Dref{Fgng} is that all corresponding entries in
these blocks are glued in exactly the same way (although not
necessarily by the identity map). This is one way in which the
theory is considerably richer in characteristic $p$ than in
characteristic~$0$.
\begin{thm}\Label{Frobglue}
\Label{glue} Suppose $A \subseteq \M[n](K)$ is a {Zariski-closed}\ algebra,
with
$$A/J = A_1 \times \cdots \times A_k,$$ a direct product of $\,k$
simple components. Then we can choose the matrix units of
$\M[n](K)$ in such a way that $A$ has Wedderburn block form, and
all identifications among the diagonal blocks are Frobenius gluing.
\end{thm}
\begin{proof}
Fix $u = 1,\dots, k$. Fixing a set of $n_u \times n_u$ matrix
units $\{ e_{ij} : 1 \le i,j \le n_u\}$ of $A_u^{(1)}$, we then
have the corresponding set of $n_u \times n_u$ matrix units $\{
\varphi_u^{1, \ell'}(e_{ij}) : 1 \le i,j \le n_u\}$ of
$A_u^{(\ell')}$. We do this for each $u$, and by \cite[Proposition
1.1.25]{Row2} all of these matrix units can be combined and
extended to a set of matrix units for $\M[n](K)$.
Now any matrix $\sum _{i,j =1}^{n_u} \alpha _{ij} e_{ij}^{(\ell)}$ of
$A_u^{(\ell)}$ is glued (via $\varphi_u^{\ell, \ell'}$) to $$\sum
_{i,j =1}^{n_u} \varphi_u^{\ell, \ell'}(\alpha _{ij}1_u^{(\ell)})\,
e_{ij}^{(\ell')}\in A_u^{(\ell')}.$$
But, by Remark~\ref{glue1},
there is some $q = q (\ell, \ell')$ such that $$\varphi_u^{\ell,
\ell'}(\alpha \sum _{i =1}^{n_u} e_{ii}^{(\ell)}) = \varphi_u^{\ell,
\ell'}(\alpha 1_u^{(\ell)}) = \alpha ^q 1_u^{({\ell'})} $$ %
(or vice versa, as noted above). Hence $\sum _{i,j =1}^{n_u} \alpha
_{ij} e_{ij}^{(\ell)}$ is glued to $\sum _{i,j =1}^{n_u} \alpha
_{ij}^q e_{ij}^{(\ell')}$, as desired.
\end{proof}
\subsection{Standard notation for Wedderburn blocks}\Label{ss:not}
We now change the point of view somewhat, and we write each diagonal
Wedderburn block as $B_1, \dots, B_m$ in the order in which they
appear on the diagonal. Thus $m = \sum_{u=1}^k t(u)$, where $t(u)$
is the number of blocks in the \th{u} glued component. Likewise
for $r<s$ we define the block $B_{rs} = B_r A B_s$. Any $B_{rs}$
can be viewed as a matrix block; in particular $B_{rr} = B_r$.
{}From this point of view, $B_{rs}$ is a $B_{r},B_{s}$-bimodule.
Each $B_r$ is a subalgebra of $\M[n](K)$, although in general it
is not contained in $A$. Letting $B = \sum_{r\leq s} B_{rs}$, we
have the following inclusions:
$$A \subseteq KA \subseteq B \subseteq \M[n](K).$$
\begin{exmpl}\Label{4.16}
When $F$ is finite, $A = \set{\smat{\alpha}{b}{0}{\alpha}
\suchthat \alpha \in F,\, b \in K}$ is in Wedderburn block form.
Then $$KA = \set{\smat{a}{b}{0}{a} \suchthat a, b \in K}$$
and $B
=\set{\smat{a}{b}{0}{a'} \suchthat a, a', b \in K}$, so the
inclusions $A \subset KA \subset B \subset \M[2](K)$ are all
strict.
\end{exmpl}
Let $T_1 \cup \cdots \cup T_k$ be the gluing partition of
$\set{1,\dots,m}$, namely $r \in T_u$ if, in the notation of
\Dref{wbf}, $B_r = A_{u}^{(\ell)}$ for some $\ell = 1,\dots,t(u)$.
Thus the \th{u} component of $A/J$ embeds as $\varphi_u {\,{:}\,} A_u
{\rightarrow} \bigoplus_{r \in T_u} B_{r}$. We let $\tau {\,{:}\,} \set{1,\dots,m}
{\rightarrow} \set{1,\dots,u}$ denote the quotient map, associating to every
index $r$ the gluing class $u$ of the block $B_r$; thus $r \in
T_{\tau(r)}$.
As always, $A$ is an algebra over a field $F$ of order $q$, a
prime power. (We also permit $F$ to be infinite, although this
case is easier.) We write $F_u$ for the field of scalar matrices
of $B_{rr}$, where $u = \tau(r)$. When finite, $\card{F_u} =
q^{d_u}$ for some number $d_u$.
\begin{rem}\Label{thisone}
Suppose $B_{rr}$ and $B_{ss}$ are glued blocks, with center of
order $q^{d_u}$ for $u = \tau(r) = \tau(s)$. If $B_{rr}$ and
$B_{ss}$ are glued via the Frobenius endomorphism $a \mapsto
a^{q^d},$ note that $d$ is only well defined modulo $d_u$.
\end{rem}
\subsection{Sub-Peirce decomposition and the \fcr}\Label{ss:sP}
Given a {Zariski-closed}\ algebra $A$ over $F$, represented in $\M[n](K)$, for
$K$ infinite, we have the primitive idempotents $\hat e_u$ of $A$
($u = 1,\dots,k$), which give rise to the Peirce decomposition $A
= \bigoplus \hat{e}_u A \hat{e}_v$.
Each idempotent decomposes as a sum $\hat e_u = \sum_{r \in T_u}
e_r$ of idempotents of $\M[n](K)$, where $T_u$ are defined in the
previous subsection, and we have the Peirce decomposition $B
=\bigoplus e_r A e_s$. This is a fine decomposition, and in general,
$e_r A e_s$ is not contained in $A$. Nevertheless, we do have the
following observation.
\begin{rem}\Label{forsepar}
Suppose $A = \prod _{i=1}^m F_i$ is a commutative semisimple
algebra and $a = (\alpha _i) \in A$ is written as $a = \sum a_j$, where
each $a_j$ is the sum of those Frobenius components of $a$ that
are glued. Then each $a_j \in \cl A$.
\end{rem}
\begin{defn}\Label{idemptype}
A primitive idempotent $\hat{e}$ of $A$ is of {\bf finite} (resp.\
{\bf infinite}) type if the base centers $F_u$ of the
corresponding glued blocks $e_r \M[n](K) e_r$ are finite (resp.\
infinite).
\end{defn}
Note that by \Pref{finf}, $F_u \cong K$ for any idempotent of
infinite type, although there may fail to be a natural action of
$K$ on the $F_u$ because of non-identity gluing.
\begin{exmpl}\Label{4.20}
\begin{enumerate}
\item The primitive idempotents of $$A =
\left\{\left(\begin{array}{ccc}
\alpha & 0 & x \\ %
0 & \beta & y \\ %
0 & 0 & \alpha^q
\end{array}\right): \alpha, \beta, x, y \in K\right\}$$
are $e_{11}+e_{33}$ and
$e_{22}$. Numbering the blocks into gluing components by setting
$T_1 = \set{1,3}$ and $T_2 = \set{2}$, we have that $F_1, F_2
\cong K$; however, scalar multiplication by $K$ does not preserve
$F_2$. Indeed, $A$ is not a $K$-algebra: $\dim(A) = 4$, while
$\dim(KA) = 5$.
\item Let $A = \left\{\left(\begin{array}{cccc}
\alpha & x & y & {\lambda} x\\ %
0 & \beta & z & 0\\ %
0 & 0 & \alpha & 0 \\
0 & 0 & 0 & \beta
\end{array}\right): \alpha, \beta, x,y,z \in K\right\}$, where
${\lambda} \in K$ is fixed. The glued blocks are $T_1 = \set{1,3}$ and
$T_2 = \set{2,4}$. Accordingly, $A = A_{11}\oplus A_{12}\oplus
A_{21}\oplus A_{22}$, where $A_{11} = K(e_{11}+e_{33})+Ke_{13}$,
$A_{12} = K(e_{12}+{\lambda} e_{14})$, $A_{21} = K e_{23}$ and $A_{22}
= K(e_{22}+e_{44}) + K e_{24}$.
\end{enumerate}
\end{exmpl}
We would like to refine this description by comparing the
Wedderburn decompositions of $A$ and its \fcr\ $KA$ (which may
have more primitive idempotents), even though $A$ and $KA$ need
not be PI-equivalent.
Break every gluing class $T_u$ ($u = 1,\dots,k$) into a disjoint
union $T_u = T_u^{(1)} \cup \dots \cup T_u^{(c_u)}$, where blocks
$B_r, B_s$ are in the same component $T_u^{(\mu)}$ if and only if
they are glued by an identical gluing. For example, in
\Eref{4.20}(1) the decomposition is $T_1 = \set{1} \cup \set{3}$.
The idempotents $\hat{e}_u$ decompose, accordingly, as
\begin{equation}\Label{hatbar}
\hat{e}_u = \sum_{\mu = 1}^{c_u} \bar{e}_u^{(\mu)},
\end{equation}
where $\bar{e}_u^{(\mu)} = \sum_{r \in T_u^{(\mu)}} e_r$. Although
$\bar{e}_u^{(\mu)}$ are not in $A$, these elements do belong to
$KA$ (since $K$ is infinite, allowing for a Vandermonde argument).
Therefore, we define:
\begin{defn}
The {\bf sub-Peirce decomposition} of $A$ is the restriction to
$A$ of the Peirce decomposition of $KA$; {cf.}~equation~(\ref{PDM}).
Namely,
$$A \subseteq \bigoplus A_{uv}^{(\mu\mu')}, \qquad A_{uv}^{(\mu \mu')}
= \bar{e}_u^{(\mu)} A \bar{e}_{v}^{(\mu')},$$
where the sum ranges over $u,v = 1,\dots,k$, $\mu = 1,\dots,c_u$
and $\mu' = 1,\dots,c_v$. We stress once more that this is not a
decomposition of $A$, as the $A_{uv}^{(\mu\mu')}$ are contained in
$KA$, but not in $A$ in general.
\end{defn}
Notice that if all the gluing in $A$ is via the identity map, in
particular (by \Pref{pass}), if $A$ is a $K$-algebra, then the
sub-Peirce decomposition is identical to the Peirce decomposition.
\begin{exmpl}\Label{4.21}
Let $A = \left\{\left(\begin{array}{cccc}
\alpha & x & y & z\\ %
0 & \alpha^q & x' & y'\\ %
0 & 0 & \alpha & x'' \\
0 & 0 & 0 & \alpha^{q}
\end{array}\right): \alpha, x,x',x'',y,y',z \in K\right\}$.
There is one glued component, namely $T_1 = \set{1,2,3,4}$, which
decomposes with respect to identical gluing as $T_1 =
\set{1,3}\cup \set{2,4}$. The corresponding idempotent
decomposition is $\hat{e}_1 = \bar{e}_1^{(1)} + \bar{e}_1^{(2)}$,
where $\hat{e}_1 = 1$, $\bar{e}_1^{(1)} = e_{11}+e_{33}$ and
$\bar{e}_1^{(2)} = e_{22}+e_{44}$. The sub-Peirce components are
$A_{11}^{(11)} = K\bar{e}_1^{(1)} + K e_{13}$, $A_{11}^{(12)} = K
e_{12}+Ke_{14}+Ke_{34}$, $A_{11}^{(21)} = Ke_{23}$ and
$A_{11}^{(22)} = K\bar{e}_1^{(2)} + Ke_{24}$ (similarly to the
Peirce components in \Eref{4.20}(2)).
\end{exmpl}
{}From one point of view, the \fcr\ erases all the subtlety
introduced by the finiteness of $F$, as we see in the next
observation.
\begin{rem}\Label{gluetype} Identical gluing in $A$ is preserved in $KA$;
however, it may
happen that $A = \cl{A}$ and $A$ has only identical gluing, while
$A \subset KA$ (see \Eref{4.16}).
On the other hand, non-identical (Frobenius) gluing for $A$ is
unglued in $KA$, as seen by applying a Vandermonde argument, since $K$
is infinite. Thus $KA$ has only identical gluing.
Viewed in terms of the Peirce decomposition, a Peirce component
$A_{uu}$ of $A$ may ramify in $KA$, and the corresponding
primitive idempotent in $A$ becomes a sum of orthogonal
idempotents in $KA$. Thus, the sub-Peirce decomposition of $A$
consists of identically glued components of the Peirce decomposition
of $KA$.
{}From the point of view of PI's, $KA$ satisfies all {\it
multilinear} identities of $A$, although it may lose identities
arising from Frobenius automorphisms, such as $x^2y-yx$ in the
example $A = \set{\smat{a}{b}{0}{a^2} \suchthat a, b \in {\mathbb {F}}_4}$.
\end{rem}
It turns out that non-identical gluing permits us to refine the
decomposition further, and this is our next goal.
\subsection{Relative exponents}\Label{ss:rd}
\begin{defn}\Label{reldeg}
Let $B_r$ and $B_{r'}$ be two glued blocks (whose centers thus
have the same cardinality). By \Tref{Frobglue}, we may assume the
blocks are glued by Frobenius gluing of some exponent ({cf.}\
\Dref{Fgng}), which we denote as $\exp(B_{rr'})$ and call the
{\emph{relative Frobenius exponent}} of $B_{rr'}$. This is
understood to be zero if $F$ is infinite. In fact, $\exp(B_{rr'})$
is only well defined modulo the dimension of $F_r = F_{r'}$ over
$F$ (where we interpret `modulo infinity' as a mere integer).
\end{defn}
The relative Frobenius exponents are used to define equivalence
relations on vectors of glued indices, as follows.
\begin{defn}\Label{eqdef}
Recall the definition of $T_u$ from Subsection~\ref{ss:not}. For every
$1\leq u,v \leq k$, we let $T_{u,v} = \set{(r,s) \in T_u \times
$T_v \suchthat r\leq s}$, and define an equivalence relation on
$T_{u,v}$ by setting $(r,s) \sim (r',s')$ iff $\exp(B_{rr'})
\equiv \exp(B_{ss'})$ modulo $\gcd(d_u,d_v)$. (Recall that $d_u$
is the dimension of the center of the \th{u} component over $F$.)
More generally, for every $t$-tuple $1 \leq u_1,\dots,u_t \leq k$,
we set $$T_{u_1,\dots,u_t} = \set{(r_1,\dots,r_t) \in
T_{u_1}\times \cdots \times T_{u_t} \suchthat r_1 \leq \cdots \leq
r_t},$$ and define an equivalence relation on $T_{u_1,\dots,u_t}$
by $(r_1,\dots,r_t) \sim (r_1',\dots,r_t')$ if the values
$\exp(B_{r_1^{\,} r_1'}), \dots, \exp(B_{r_t^{\,} r_t'})$ are all
equivalent modulo $\gcd(d_{u_1},\dots,d_{u_t})$. We call $t$ the
\defin{length} of such an equivalence class $\gamma$.
\end{defn}
\begin{rem}\Label{trans}
\begin{enumerate}
\item Relative exponents can be computed with respect to a fixed
block in the gluing component, {i.e.}\ $\exp(B_{rr'}) =
\exp(B_{r_0r'}) -\exp(B_{r_0r})$. In particular, $\exp(B_{rr'})=
-\exp(B_{r'r})$, and $\exp(B_{rr''}) =
\exp(B_{rr'})+\exp(B_{r'r''})$. \item In a `diagonal' set
$T_{u,u}$, $\exp(B_{rr'}) \equiv \exp(B_{ss'})$ iff $\exp(B_{rs})
\equiv \exp(B_{r's'})$, and so one can read the equivalence
relation directly from the matrix of relative exponents. \item
Likewise for any $t$, if $u_1 = \cdots = u_t = u$, then the
equivalence relation on $T_{u,\dots,u}$ is given as follows:
$(r_1,\dots,r_t) \sim (r_1',\dots,r_t')$ iff $\exp(B_{r_i
r_{i+1}}) \equiv \exp(B_{r_i'r_{i+1}'})$ modulo $d_u$ for $i =
1,\dots,t-1$. \item If $B_{r}$ and $B_{\bar{r}}$ are identically
glued, then obviously $\exp(B_{rs}) = \exp(B_{\bar{r}s})$ for any
$s$. In particular, $(r_1,\dots,r_{i-1},r,r_{i+1},\dots,r_t) \sim
(r_1,\dots,r_{i-1},\bar{r},r_{i+1},\dots,r_t)$ whenever $r_{i-1}
\leq r,\bar{r} \leq r_{i+1}$.
\end{enumerate}
\end{rem}
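The bookkeeping in item (1) of the remark reduces to modular arithmetic on exponents. A toy sketch with hypothetical values:

```python
# Relative Frobenius exponents within one gluing component are only
# defined modulo d = [F_u : F]; they compose additively (Remark, item (1)).
d = 6                       # hypothetical dimension of the center over F
exp = {("r", "r'"): 4,      # exponent gluing B_r to B_{r'}
       ("r'", "r''"): 5}    # exponent gluing B_{r'} to B_{r''}

# exp(B_{r r''}) = exp(B_{r r'}) + exp(B_{r' r''})  (mod d):
exp[("r", "r''")] = (exp[("r", "r'")] + exp[("r'", "r''")]) % d
# exp(B_{r' r}) = -exp(B_{r r'})  (mod d):
exp[("r'", "r")] = (-exp[("r", "r'")]) % d

print(exp[("r", "r''")], exp[("r'", "r")])  # 3 2
```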
A word of caution: When all the Peirce idempotents in a given
sub-Peirce vector $(u_1, \dots, u_t)$ correspond to fields
$F_{u_i}$ of the same size (such as all having infinite type),
then the equivalence class of this vector is uniquely determined
by the equivalence classes of the pairs $(u_i, u_{i+1})$. However,
this can fail when the $F_{u_i}$ have differing sizes (such as
some finite and some infinite), because of the ambiguity arising
from the differing exponents of the Frobenius automorphisms.
If $(r_i,r_{i+1}) \sim (r_i',r_{i+1}')$ for each $i =
1,\dots,t-1$, then the respective relative exponents are
equivalent modulo $\gcd (d_i,d_{i+1})$, so in particular they are
all equivalent modulo $\gcd(d_1,\dots,d_t)$, and so
$(r_1,\dots,r_{t}) \sim (r_1',\dots,r_{t}')$.
On the other hand, equivalence modulo $\gcd(d_1,\dots,d_t)$ does
not in general imply any equivalence modulo $\gcd(d_i,d_{i+1})$,
so, for example, $(r_1,r_2,r_3) \sim (r_1',r_2',r_3')$ does not
even imply $(r_1,r_2) \sim (r_1',r_2')$.
\begin{defn}\Label{defcomp}
We define the \defin{composition} of equivalence classes, in the
spirit of matrix units, as follows. Suppose $\gamma \subseteq
T_{u_1,\dots,u_t}$ and $\gamma' \subseteq T_{v_1,\dots,v_{t'}}$. If
$u_t \neq v_1$, let $\gamma
* \gamma' = \emptyset$; and if $u_t = v_1$, let $$\gamma * \gamma'
= \set{(r_1,\dots,r_{t-1},r_t,s_2,\dots,s_{t'}) \suchthat
(r_1,\dots,r_{t-1},r_t) \in \gamma, \, (r_t,s_2,\dots,s_{t'}) \in
\gamma'}.$$ The composition $\gamma \circ \gamma'$ is defined as
the set of equivalence classes (of length $t+t'-1$) of
$T_{u_1,\dots,u_t,v_2,\dots,v_{t'}}$ which are contained in
$\gamma * \gamma'$.
\end{defn}
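To unwind \Dref{defcomp} in a small case (the classes below are hypothetical, chosen purely for illustration), suppose $\gamma \subseteq T_{u,v}$ and $\gamma' \subseteq T_{v,w}$ with
$$\gamma = \set{(1,3),(2,3)}, \qquad \gamma' = \set{(3,4),(3,5)}.$$
Since the last entry of each vector in $\gamma$ matches the first entry of each vector in $\gamma'$, we get
$$\gamma * \gamma' = \set{(1,3,4),\,(1,3,5),\,(2,3,4),\,(2,3,5)},$$
and $\gamma \circ \gamma'$ consists of the equivalence classes of $T_{u,v,w}$ contained in this set. Had every vector in $\gamma'$ started with an index other than $3$, the matching condition would fail vector by vector and $\gamma * \gamma'$ would contribute nothing, in accordance with the matrix-unit rule $e_{rs}e_{s'w} = \delta_{ss'}e_{rw}$.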
\subsection{The relative Frobenius decomposition}
Let $a \in A$ be an element in a Peirce component $\hat{e}_u A
\hat{e}_v$ of $A$. Applying the matrix block component
decomposition $a = \sum_{(r,s) \in T_{u,v}} a_{rs}$ (with $a_{rs}
\in B_{rs}$), we have $a = \sum a^{\gamma}$, where %
\begin{equation}\Label{agamma}
a^{\gamma} = \sum_{(r,s) \in \gamma} a_{rs} \end{equation} and
$\gamma$ ranges over the equivalence classes of $T_{u,v}$ defined
in \Dref{eqdef}. Thus, letting $A^{\gamma} = \set{a^{\gamma}
\suchthat a \in \hat{e}_u A \hat{e}_v}$, we have the
\defin{relative Frobenius decomposition}
\begin{equation}\Label{relFd}
A = \sum_{1\leq u,v \leq k} \,\sum_{\gamma \subseteq T_{u,v}} A^{\gamma}.
\end{equation}
\begin{prop}\Label{QQQ}
If $a \in \cl A$, then $a^\gamma \in \cl A$ for each equivalence
class $\gamma$ in every $T_{u,v}$.
\end{prop}
\begin{proof}
We need to show that each glued component $a^\gamma$ is in $\cl
A$. Let $M_{rs}$ be the $F$-subspace of $\M[n](K)$ spanned by the
$a_{rs}e_{rs}$. The natural vector space isomorphism $F \to
M_{rs}$ can be used to transfer the algebra structure of $F$ to
$M_{rs}$; i.e., $$(\alpha a_{rs}e_{rs})(\beta a_{rs}e_{rs}) = \alpha \beta
a_{rs}e_{rs}.$$ Note that each $M_{rs}$ is closed under these
operations. With respect to this structure, $M = \bigoplus
_{r,s}M_{rs}$ becomes a commutative $F$-algebra (defining the
operations componentwise), and its subalgebra $\tilde M$
corresponding to $a^\gamma$ is semiprime and {Zariski-closed}, and thus has only
Frobenius gluing, in view of Remark \ref{glu00}. But this implies
$\tilde M \subseteq \cl A$, as desired.
\end{proof}
We see that the relative Frobenius decomposition is finer than the
Peirce decomposition, but coarser than the sub-Peirce
decomposition (which, strictly speaking, is not a decomposition of
$A$, since the components only belong to $KA$).
\begin{cor}\Label{offglue}
Two blocks of the \fcr\ $KA$ are in the same component in the
sub-Peirce decomposition iff they are identically glued. Thus, in
the relations defining the algebra $A$, any two off-diagonal
blocks $B_{rs}$ and $B_{r's'}$ with $(r,s) \not \sim (r',s')$ can
be separated (see \Rref{directsum} below).
\end{cor}
\begin{rem}\Label{moved}
Suppose $f$ is a relation of weak Frobenius type on $A$ whose
variables involve the blocks $B_{r_1, s_1}, \dots, B_{r_\nu,
s_\nu}$. Assume $f$ cannot be concluded from relations on the same
blocks, with fewer variables. Then the following facts hold:
\begin{enumerate}
\item The diagonal blocks $B_{r_1}, \dots , B_{r_\nu}$ are glued.
(Indeed, suppose some $a \in A$ satisfies the weak Frobenius
relation
$$\sum _{i=1}^n \sum _{j\ge 1} c_{ij} (a^{i,j}_{r,s})^{q_{r,s}} =
0.$$ %
If some $B_r$ and $B_{r'}$ are not glued, then we have a diagonal
element $d\in A$ with the identity $1$ in the $B_r$ block and $0$
in the $B_{r'}$ block. Then $da=0$, contrary to the minimality of
the quasi-linear relation defining the gluing.) Thus, applying
some graph theory cuts down the number of generating polynomial
relations even further.
\item Each diagonal block $B_r$ is defined over a field whose
order is at most the maximal $q_{r,s}$. (Multiply diagonal
elements of $A$ by the elements in the given glued components of
$A$, and apply a Vandermonde argument.)
\end{enumerate}
\end{rem}
\begin{exmpl}\Label{4.25}
Let $A$ be the algebra of \Eref{4.21}. For $u = v = 1$, the
relative exponent of $B_{rs}$ is the \th{(r,s)} entry in the
antisymmetric matrix $\left(\begin{array}{cccc}
0 & 1 & 0 & 1\\ %
\cdot & 0 & -1 & 0\\ %
\cdot & \cdot & 0 & 1 \\
\cdot & \cdot & \cdot & 0
\end{array}\right)$; see \Rref{trans}(2). The equivalence
relation on $T_{1,1}$ has classes $\set{(1,2),(1,4),(3,4)}$,
$\set{(2,3)}$ and $\set{(1,1),(1,3),(2,2),(2,4),(3,3),(4,4)}$. In
the notation of \Eref{4.21}, the relative Frobenius decomposition
is
$$A = ( A_{11}^{(11)} + A_{11}^{(22)} ) \oplus A_{11}^{(12)}
\oplus A_{11}^{(21)}.$$
\end{exmpl}
\begin{exmpl}
Now take $A = \left\{\left(\begin{array}{cccc} %
\alpha & x & y & z\\ %
0 & \beta^q & x' & y'\\ %
0 & 0 & \beta & x'' \\
0 & 0 & 0 & \alpha^{q}
\end{array}\right): \alpha, \beta, x,x',x'',y,y',z \in K\right\}$.
There are two glued components, $T_1 = \set{1,4}$ and $T_2
=\set{2,3}$. The non-diagonal relative Frobenius exponents are
$\exp(B_{14}) = 1$ and $\exp(B_{23}) = -1$, as in \Eref{4.25}. The
equivalence components are $T_{1,1} = \set{(1,1),(4,4)} \cup
\set{(1,4)}$, $T_{1,2} = \set{(1,2)}\cup \set{(1,3)}$, $T_{2,1} =
\set{(2,4)}\cup \set{(3,4)}$ and $T_{2,2} = \set{(2,2),(3,3)} \cup
\set{(2,3)}$. For this algebra, the relative Frobenius
decomposition recaptures the full sub-Peirce decomposition.
\end{exmpl}
\subsection{Higher length decomposition}
The same proof as in \Pref{QQQ} yields the following more
intricate result:
\begin{rem}\Label{abitmore}
Let $t \geq 2$ and $1\leq u_1,\dots,u_t \leq k$. Let $a =
\hat{e}_{u_1} a_1 \hat{e}_{u_2} \cdots \hat{e}_{u_{t-1}} a_{t-1}
\hat{e}_{u_t} \in \hat{e}_{u_1} A \hat{e}_{u_2} \cdots
\hat{e}_{u_{t-1}} A \hat{e}_{u_t}$, where $a_1,\dots,a_{t-1} \in
A$.
For every equivalence class $\gamma \subseteq T_{u_1,\dots,u_t}$ (as
defined in \Dref{eqdef}) and $a$ as above, let $a^{\gamma} =
\sum_{(r_1,\dots,r_t) \in \gamma} {e_{r_1} a_1 e_{r_2} \cdots
e_{r_{t-1}} a_{t-1} e_{r_t}}$. Then $a^{\gamma} \in A$ and
(clearly) $a = \sum_{\gamma \subseteq T_{u_1,\dots,u_t}} a^{\gamma}$.
Writing $a_i = \sum_{rs} \alpha^{(i)}_{rs}e_{rs}$, we have that
\begin{equation}\Label{form}
a^{\gamma} = \sum_{(r_1,\dots,r_t) \in \gamma}
\alpha_{r_1r_2}^{(1)} \cdots \alpha_{r_{t-1}r_t}^{(t-1)}
e_{r_1r_t}.\end{equation}
Letting $A^{\gamma} = \set{a^{\gamma} \suchthat a \in
\hat{e}_{u_1} A \hat{e}_{u_2} \cdots \hat{e}_{u_{t-1}} A
\hat{e}_{u_t}}$ (where we implicitly take advantage of the fact
that $\gamma$ determines $(u_1,\dots,u_t)$), we have the $t$-fold
relative Frobenius decomposition
\begin{equation}\Label{compg} A = \sum_{1\leq u_1,\dots,u_t \leq
k} \,\sum_{\gamma \subseteq T_{u_1,\dots,u_t}} A^{\gamma}.%
\end{equation}
\end{rem}
While \Eq{relFd} is clearly a direct sum, this is no longer the
case for $t \geq 3$ in \eq{compg}, as demonstrated in \Eref{six}
below.
\begin{rem}\Label{nest}
The decomposition of \Rref{abitmore} becomes finer as the length
of the classes increases. Indeed, for $\gamma$ of length $t+1$ and
$1 < i < t+1$, let $\pi_i$ denote the map $\set{1,\dots,m}^{t+1}
{\rightarrow} \set{1,\dots,m}^{t}$, forgetting the \th{i} entry. If $\gamma
\subseteq T_{u_1,\dots,u_{t+1}}$, then $\pi_i(\gamma)$ is contained in
an equivalence class of $T_{u_1,\dots,u_{i-1},u_{i+1},\dots,
u_{t+1}}$. For any equivalence class $\hat{\gamma} \subseteq
T_{u_1,\dots,u_t}$, one uses $A = \sum A \hat{e}_{v} A$ to show
that
$$A^{\hat{\gamma}} = \sum_{v=1}^{k} \sum_{\gamma} A^{\gamma},$$
where, for each $v$, $\gamma$ ranges over all equivalence classes
of $T_{u_1,\dots,u_{i-1},v,u_i,\dots,u_t}$ such that
$\pi_i(\gamma) = \hat{\gamma}$ ($k$ is the number of gluing
components).
\end{rem}
Products of the components obtained in this manner can be easily
computed via the following formula.
\begin{rem}\Label{mulc}
If $\gamma$ and $\gamma'$ are equivalence classes of arbitrary
length, then we have the `thick filtration' formula $$A^\gamma \cdot
A^{\gamma'} = \sum_{\gamma'' \in \gamma \circ \gamma'}
A^{\gamma''}$$ (where the composition is defined in
\Dref{defcomp}). In particular, the decomposition of length $t+1$
is a refinement of the product of the basic decomposition, of
length $2$, with the decomposition of length $t$.
\end{rem}
\begin{rem}
If $A \subseteq \M[n](K)$ is an $F$-algebra (not necessarily {Zariski-closed}), we
can define $\check{A}$ to be the extension of $A$ in $KA$
generated by all sub-Peirce components of elements of $A$. Thus,
$A \subseteq \cl{A} \subseteq \check{A} \subseteq KA$.
For example, if $A = \set{ \smat{\alpha^{p^n}}{a}{0}{\alpha}:
\alpha \in F_1, \, a\in K}$ (cf.~Example \ref{basexa2}(1)), then
$\check{A} = \smat{F_2}{K}{0}{F_1},$ where $F_2 = \{ \alpha^{p^n}: \alpha
\in F_1\}.$
In particular, if $A\subseteq \M[n](K)$ is a generic algebra of a given
variety, generated by generic elements (see \Sref{sec:6} below),
then $\check{A}$ is naturally graded by the matrix components of
$\M[n](K)$, although $A$ itself is not graded.
\end{rem}
\subsection{Interaction with the radical}\label{ss:4.6}
The decompositions in (1)--(3) above can be refined further via
the decomposition $A = S \oplus J$ into the semisimple and radical
parts. For example, $A_{(uv)} \subseteq J$ for $u \neq v$, but
$A_{(uu)}$ is a (nonunital) subalgebra of $A$, with
$\Rad(A_{(uu)}) = J \cap A_{(uu)}$.
\begin{rem}\Label{gluetype1}
Let $\gamma \subseteq T_{u_1,\dots,u_t}$ be an equivalence class, and
fix $1 \leq i < t$. By definition, $A^{\gamma} =
\set{a^{\gamma}}$, ranging over %
\begin{eqnarray*}
a & = & \hat{e}_{u_1} a_1 \hat{e}_{u_2} \cdots \hat{e}_{u_{t-1}}
a_{t-1} \hat{e}_{u_t} \\ & \in & \hat{e}_{u_1} A \cdots
\hat{e}_{u_i} A
\hat{e}_{u_{i+1}} \cdots A \hat{e}_{u_t} \\
& = & \hat{e}_{u_1} A \cdots \hat{e}_{u_i} S \hat{e}_{u_{i+1}}
\cdots A \hat{e}_{u_t} + \hat{e}_{u_1} A \cdots \hat{e}_{u_i} J
\hat{e}_{u_{i+1}} \cdots A
\hat{e}_{u_t}.\end{eqnarray*} %
If $u_i \neq u_{i+1}$, then
$\hat{e}_{u_i} S \hat{e}_{u_{i+1}} = 0$, so we may assume $a_i \in
J$. Otherwise, the computation shows that
$$a^{\gamma} = \sum_{(r_1,\dots,r_t) \in \gamma,\ r_i < r_{i+1}}
{e_{r_1} a_1 e_{r_2} \cdots e_{r_{t-1}} a_{t-1} e_{r_t}}$$ when
$a_i \in J$, while
$$a^{\gamma} = \sum_{(r_1,\dots,r_t) \in \gamma,\ r_i = r_{i+1}}
{e_{r_1} a_1 e_{r_2} \cdots e_{r_{t-1}} a_{t-1} e_{r_t}}$$ when
$a_i \in S$. In other words, we refine the
decomposition~\eq{compg} by separating the conditions $r_i \leq
r_{i+1}$ on an equivalence class $\gamma = \set{(r_1,\dots,r_t)}$
to one of the conditions $r_i = r_{i+1}$ or $r_{i} < r_{i+1}$.
We say the index $i$ has \defin{type} $0$ in $\gamma$ if $r_i =
r_{i+1}$ for every $(r_1,\dots,r_t) \in \gamma$; and has type $1$ if
$r_i < r_{i+1}$ for every $(r_1,\dots,r_t) \in \gamma$. For
example, if $u_i \neq u_{i+1}$, then $i$ has type $1$. An
equivalence class can be decomposed as a union $\gamma =
\gamma^{(0)} \cup \gamma^{(1)}$, where $\gamma^{(0)} =
\set{(r_1,\dots,r_t) \in \gamma \suchthat r_{i} = r_{i+1}}$ and
$\gamma^{(1)} = \set{(r_1,\dots,r_t) \in \gamma \suchthat r_i <
r_{i+1}}$. This process can be repeated for every $i$, and the
resulting sub-classes are called \defin{fully refined} classes.
One can multiply the components corresponding to
refined classes, as described for standard components in \Rref{mulc}. %
\end{rem}
\def{\omega}{{\omega}}
If $\gamma^*$ is a fully refined equivalence class, let the
\defin{weight} ${\omega}(\gamma^*)$ denote the number of indices $i$ of
type $1$ in $\gamma^*$. In particular, any components having
${\omega}(\gamma^*) $ greater than the nilpotence index of $J$ must be
zero. By construction, $A^{\gamma^*} \subseteq J^{{\omega}(\gamma^*)}$.
Moreover, $J^\ell = \sum_{{\operatorname{len}}(\gamma^*) = t,\ {\omega}(\gamma^*) \geq
\ell} A^{\gamma^*}$ for every $t \geq \ell$.
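For instance (a hypothetical class, chosen purely to illustrate the definitions), if a fully refined class satisfies
$$\gamma^* = \set{(1,2,2),\,(1,3,3)},$$
then the index $1$ has type $1$ (since $r_1 < r_2$ throughout $\gamma^*$), while the index $2$ has type $0$ (since $r_2 = r_3$ throughout); hence ${\omega}(\gamma^*) = 1$, and accordingly $A^{\gamma^*} \subseteq J^1 = J$.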
\begin{lem}
Suppose $\gamma^* \subseteq T_{u_1,\dots,u_k}$ is a refined equivalence
class, with the index $i$ having type $0$. Let $\gamma' \subseteq
T_{u_1,\dots,u_{i},u_{i+2},\dots,u_k}$ be the equivalence class
obtained by removing the $(i+1)$th entry from each vector in
$\gamma^*$. Then $A^{\gamma^*} = A^{\gamma'}$.
\end{lem}
\begin{proof} It is clear that $A^{\gamma^*} \subseteq A^{\gamma'}$, and if
$$a^{\gamma'} =
\sum_{(r_1,\dots,r_{i},r_{i+2},\dots,r_t) \in {\gamma'}} {e_{r_1}
a_1 e_{r_2} \cdots e_{r_{i}} a_{i+1} e_{r_{i+2}} \cdots
e_{r_{t-1}} a_{t-1} e_{r_t}}$$ for some
$a_1,\dots,a_{i-1},a_{i+1},\dots,a_{t-1} \in A$, then, taking
$a_{i} = 1$,
$$a^{\gamma^*} = \sum_{(r_1,\dots,r_{i},r_{i+1},r_{i+2},\dots,r_t)
\in {\gamma^*}} {e_{r_1} a_1 e_{r_2} \cdots e_{r_{i}} a_i
e_{r_{i+1}} a_{i+1} e_{r_{i+2}} \cdots e_{r_{t-1}} a_{t-1}
e_{r_t}}$$ is equal to $a^{\gamma'}$, since $e_{r_i} a_i
e_{r_{i+1}} = e_{r_i}$ for all vectors in $\gamma^*$.
\end{proof}
\begin{prop}\label{boundonlen}
Every homogeneous component of a fully refined equivalence class
has the form $A^{\gamma^*}$, where all indices in $\gamma^*$ have
type $1$. In particular ${\operatorname{len}}(\gamma^*) = \omega(\gamma^*) \leq
\nu$.
\end{prop}
\begin{exmpl}\Label{six}
Consider the $F$-algebra $$A = \set{
\left(\begin{array}{cccccc} %
a & 0 & * & * & * & * \\
0 & a & x & y & * & * \\
0 & 0 & a^q & x' & * & * \\
0 & 0 & 0 & a & \alpha x & \alpha \alpha' y \\
0 & 0 & 0 & 0 & a^q & \alpha' x' \\
0 & 0 & 0 & 0 & 0 & a \\
\end{array}\right) : a,x,x',y, *,\dots,* \in K
},$$ where $\alpha,\alpha' \in K$ are fixed, and $q = \card{F}$.
As in previous examples, there is one glued component $T_1 =
\set{1,2,3,4,5,6}$, which decomposes with respect to identical
gluing as $T_1 = \set{1,2,4,5} \cup \set{3,6}$. In the equivalence
relation of \Dref{eqdef}, $T_{11}$ decomposes into the three
equivalence classes: $\gamma_1 =
\set{(1,3),(1,5),(2,3),(2,5),(4,5)}$ (with relative exponent $1$),
$\gamma_{-1} = \set{(3,4),(3,6),(5,6)}$ (with relative exponent
$-1$), and the complement $\gamma_0$, whose elements are the $13$
pairs of relative exponent $0$. (Throughout this example, we use
relative exponents as indices.) The relative Frobenius
decomposition of a general element, described in \Eq{relFd}, is
$$\left(\begin{array}{cccccc} %
a & 0 & 0 & * & 0 & * \\
0 & a & 0 & y & 0 & * \\
0 & 0 & a^q & 0 & * & 0 \\
0 & 0 & 0 & a & 0 & \alpha \alpha' y \\
0 & 0 & 0 & 0 & a^q & 0 \\
0 & 0 & 0 & 0 & 0 & a \\
\end{array}\right)
+
\left(\begin{array}{cccccc} %
0 & 0 & * & 0 & * & 0 \\
0 & 0 & x & 0 & * & 0 \\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & \alpha x & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}\right)
+
\left(\begin{array}{cccccc} %
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & x' & 0 & * \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \alpha' x' \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}\right),
$$
and we denote the respective summands in the decomposition of $A$
as $\Gamma_0 + \Gamma_1 + \Gamma_{-1}$.
Next, we describe the decomposition corresponding to $t = 3$ in
\Rref{abitmore}. The set $T_{1,1,1}$, consisting of $\binom{8}{3}
= 56$ triples, decomposes into the $7$ equivalence classes:
$\gamma_{0,0} = [(111)]$,
$\gamma_{0,1} = [(113)]$,
$\gamma_{1,0} = [(133)]$,
$\gamma_{1,-1} = [(134)]$,
$\gamma_{0,-1} = [(334)]$,
$\gamma_{-1,0} = [(344)]$
and $\gamma_{-1,1} = [(345)]$.
Let $\Gamma_{\delta\delta'}$ denote the component in \eq{compg}
corresponding to $\gamma_{\delta\delta'}$. Computing via formula
\eq{form} we find that $\Gamma_{0,1} = \Gamma_{1,0} = \Gamma_1$,
that $\Gamma_{0,-1} = \Gamma_{-1,0} = \Gamma_{-1}$; and that
$\Gamma_{0,0} = \Gamma_0$. On the other hand $\Gamma_{-1,1} =
Ke_{35}$ and $\Gamma_{1,-1} = K e_{14}+ K e_{16} + Ke_{26} +
K(e_{24}+\alpha \alpha' e_{46})$ are proper subspaces of previous
components; however, we do not get a finer decomposition of $A$.
Applying the decomposition $A = S+J$, as in \Rref{gluetype1}, we
observe the following. The class $\gamma_{00} = [(111)]$ breaks
down to four sub-classes, namely $\gamma_{00} = (\gamma_{00}^{==})
\cup (\gamma_{00}^{=<}) \cup (\gamma_{00}^{<=}) \cup
(\gamma_{00}^{<<})$, with the obvious interpretation. For example,
$\gamma_{00}^{<<} = \set{(146),(246)}$. The corresponding
components are $\Gamma_{00}^{==} = S$, the semisimple subalgebra;
$\Gamma_{00}^{<<} = K e_{16}+K e_{26}$; and $\Gamma_{00}^{=<} =
\Gamma_{00}^{<=} = J \cap \Gamma_{00}$. A similar decomposition can
be applied to the other classes, although in every case some
sub-classes are empty. For example, $\gamma_{0,-1} =
(\gamma_{0,-1}^{=<}) \cup (\gamma_{0,-1}^{<<})$, with
$\Gamma_{0,-1}^{<<} = K e_{36}$ and $\Gamma_{0,-1}^{=<} =
\Gamma_{-1}$.
\end{exmpl}
\subsection{Summary}
Let $A \subseteq \M[n](K)$ be a {Zariski-closed}\ $F$-subalgebra, written in
Wedderburn block form (\Cref{block2}). In this section we have
discussed four useful decompositions. We follow the notation of
\Dref{wbf} and the idempotents defined in Subsection~\ref{ss:sP}.
\begin{enumerate}
\item The Peirce decomposition of $A$ is given by $A = \bigoplus
{\hat{e}}_u A \hat{e}_v$ for $1\leq u,v \leq k$. Thus every
element in $A$ can be written as $a = \sum_{u,v} a_{(uv)}$, with
$a_{(uv)} \in A$.
\item The relative Frobenius decomposition is a finer decomposition of
$A$: Every $a_{(uv)} \in \hat{e}_u A \hat{e}_v$ decomposes as
$a_{(uv)} = \sum_{\gamma} a^{\gamma}$, where $a^{\gamma}$ is
defined in \eq{agamma} and $\gamma \subseteq T_{u,v}$ ranges over the
equivalence classes of \Dref{eqdef}. The components $a^{\gamma}
\in A$ by \Pref{QQQ}. This can be refined further by
\Rref{abitmore}.
\item Each idempotent $\hat{e}_u$ is a sum of idempotents
$\sum_{\mu = 1}^{c_u} \bar{e}_u^{(\mu)}$, as in \Eq{hatbar}, where
each idempotent $\bar{e}_u^{(\mu)}$ corresponds to the
$\mu$-components of identical gluing. The Peirce decomposition of
$KA$ is $KA = \sum K A_{uv}^{(\mu\mu')}$, which gives the
decomposition $A = \bigoplus A_{uv}^{(\mu \mu')}$, with
$A_{uv}^{(\mu\mu')} \subseteq KA$.
\item[] If $F$ is infinite, then the decompositions (1), (2) and
(3) are identical.
\item Next, we have the decomposition $A = \bigoplus A_{rs}$ of matrix
blocks, where $A_{rs} = e_r A e_s \subseteq B$, and $e_r$ are the block
idempotents defined by the Wedderburn block form. Every
$\bar{e}_u^{(\mu)}$ breaks down as a sum of such idempotents, as
demonstrated in \Eref{4.21}.
\item The decomposition in (4) can be refined by taking
equivalence classes of any length $\geq 2$, as described in
\Rref{abitmore}.
\item Finally, one may apply the Wedderburn decomposition, or
equivalently replace the equivalence classes by fully refined ones
of length $\leq \nu$; {cf.}\ Subsection~\ref{ss:4.6}.
\end{enumerate}
\section {Explicit generators for polynomial relations}\Label{s:explicit}
We are ready for a fairly precise description of the polynomial
relations of a {Zariski-closed}\ algebra $A$, with radical $J$.
\begin{rem}\Label{directsum}
Suppose $V$ is an $F$-subspace of a $K$-algebra $B$. If $V = V_1
\oplus V_2$ is a direct sum, then any polynomial relation of $V$
is a sum of polynomial relations of $V_1$ and $V_2$. \end{rem}
In view of this observation, taking the ``Wedderburn
decomposition'' $A = J \oplus S$ inside the Wedderburn
decomposition of $B$, we see that the polynomial relations of $A$
are generated by the polynomial relations of $J$ and the
polynomial relations of $S$.
The polynomial relations of $S$ come from gluing, which involves
$\Cent{S}$, a commutative algebra.
This leaves the polynomial relations of $J$, whose identifications
may be considerably more intricate, especially in the presence of
Frobenius gluing. In view of Theorem~\ref{linear}, off-diagonal
identifications could involve minimal polynomial relations which
are $q$-polynomials, i.e., of weak Frobenius type. Denoting the
matrix unit in the $(i,j)$ position of the block $B_{rs}$ as
$e^{i,j}_{r,s}$, and expanding these to a base of $B$, we have
some minimal quasi-linear relations of the form
$$\sum _{r,s} \sum _{i,j=1}^{n_{r,s}} c_{ij}
({\lambda}^{i,j}_{r,s})^{q_{r,s}} = 0.$$ In this case we also say that
the gluing has {\bf weak Frobenius type}. Note that the $q_{r,s}$
are independent of $i$ and $j$, so we might as well assume that
$i=j=1$.
Let $\Lambda\subset K[{\lambda}_1, \dots, {\lambda}_N]$ denote the set of
all weak $F$-Frobenius type polynomials (see \Dref{FTdef}), where
$N = \dim_K(B)$. Let $\phi$ denote the map given by $a\mapsto
a^{q}$, where $q = \card{F}$ and where, as above, we take $q = 1$ if
$F$ is infinite. Noting that weak $F$-Frobenius type relations are
$F$-linear combinations of $\phi^j({\lambda}_i)$ for $j \geq 0$ and $i
= 1,\dots, N$, we may view $\Lambda$ as a module over $K[\phi]$,
under the obvious operation $\phi \cdot a = a^q$ ($a\in K$) and
$\phi\cdot {\lambda}_i = \phi({\lambda}_i)$. In fact, $\Lambda$ is a free
module, spanned by ${\lambda}_1,\dots,{\lambda}_N$. When $F$ is finite,
$K[\phi]$ is isomorphic to the ring of polynomials in one variable
over $K$. For infinite $F$, $K[\phi] = K$, and $\Lambda$ is merely
the $K$-dual of $B$, as a vector space.
\begin{thm}
For any $F$-subspace $A \subseteq B$, the relations $\operatorname{poly}{A}$ form a
free module of rank at most $N$ over $K[\phi]$.
\end{thm}
\begin{proof}
Indeed, $\operatorname{poly}{A} \subseteq \Lambda$ by \Tref{linear}, and $\Lambda$ is
a free module over the principal ideal domain $K[\phi]$.
\end{proof}
We can improve this estimate by noting that the weak Frobenius
relations depend on the Wedderburn block, not on the
indeterminate, so we can reduce $\Lambda$ to the module generated
by one representative indeterminate for each Wedderburn block
above the diagonal. The number of such blocks is $\binom {m}2$, where $m$ is the number
of diagonal Wedderburn blocks in the given representation of $A$
(clearly $m^2 \leq N$). Thus, we have
\begin{cor} The weak Frobenius relations of
a {Zariski-closed}\ $F$-subalgebra $A$ of $B$ can be defined by at most
$\binom {m}2$ polynomial relations, where each relation is
duplicated $\dim(B_{uv})$ times. (For example, one needs four
relations to define $\M[2]({\mathbb {F}}_q)$ inside the algebraic closure:
${\lambda}_{ij}^q = {\lambda}_{ij}$ for $i,j = 1,2$.)
\end{cor}
We can improve this result even further.
\begin{lem}\Label{weakFrobe}
{\ } \begin{enumerate} \item In a polynomial relation of weak
Frobenius type, we may assume that one of the $q_{ij} = 1$.
\item Given two polynomial relations $f_1,f_2$ of weak Frobenius
type and given some ${\lambda}_i$, we may modify these two polynomial
relations to assume that $\lambda _i$ appears in at most one of them.
\end{enumerate}
\end{lem}
\begin{proof}
(1)
Otherwise we can write each $q_{ij} = q q_{ij}'$, and then
$$\left(\sum _{i=1}^m \sum _{j\ge 1} c_{ij} {\lambda}_i^{q'_{ij}}\right)^q
=\sum _{i=1}^m \sum _{j\ge 1} c_{ij} {\lambda}_i^{q_{ij}} = 0,$$ so
taking the $q$-root yields a polynomial relation of lower degree,
and we conclude by induction.
(2) The argument is by induction on the degrees $q^{d_1}$, $q^{d_2}$
of $\lambda _i$ in $f_1$ and $f_2$. Suppose $d_1 \ge d_2$. Then, for a
suitable scalar $c \in K$, the degree of $\lambda _i$ in $f_1 - c
f_2^{q^{d_1-d_2}}$ is less than $q^{d_1}$, so we
continue by induction.
\end{proof}
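To illustrate both parts of the lemma in a single variable (a toy computation; since $q$ is a power of the characteristic, taking $q$-th powers is additive): for part (1), the relation
$${\lambda}^{q^2} + {\lambda}^{q} = 0$$
has all exponents divisible by $q$, and its left side equals $({\lambda}^{q} + {\lambda})^q$, so extracting the $q$-root leaves the lower-degree relation ${\lambda}^{q} + {\lambda} = 0$. For part (2), if $f_1 = {\lambda}^{q^2} + {\lambda}$ and $f_2 = {\lambda}^{q} + {\lambda}$, then
$$f_1 - f_2^{q} = ({\lambda}^{q^2} + {\lambda}) - ({\lambda}^{q^2} + {\lambda}^{q}) = {\lambda} - {\lambda}^{q},$$
in which the degree of ${\lambda}$ has dropped from $q^2$ to $q$.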
\begin{cor} Suppose the weak Frobenius relations of $A$ are
defined by $\mu \le \binom {m}2$ polynomial relations $\{f_1,
\dots, f_\mu\}$. Then we may take $\mu$ variables and assume that
each of them appears in at most one of $\{f_1, \dots, f_\mu\}$.
\end{cor}
\section{Generic {Zariski-closed}\ algebras and PI-generic
rank}\Label{sec:6}
Our study of {Zariski-closed}\ algebras has been motivated by the desire to
find an algebra in a given variety whose structure is especially
malleable. Since every variety contains a relatively free algebra,
we are led to study the relatively free algebras in the variety
generated by {Zariski-closed}\ algebras. These are the generic algebras, which
can be described in terms of ``generic'' elements. First we start
with the classical setting, and then we see how the presence of
finite fields complicates the situation and requires a more
intricate description.
We want to define a ``finitely generated'' generic algebra. This
means that we need to consider polynomial identities in only a
finite number of indeterminates. First, we need to clarify exactly
what ``generation'' should mean in the generic setting.
\begin{defn} The {\bf topological rank} of a {Zariski-closed}\ algebra $A$ is defined as
the minimal possible number of generators of an $F$-subalgebra
$A_0$ of $A$ for which $\cl{A_0} = A$.\end{defn}
\begin{rem}\Label{infgen11}
By Theorem \ref{semis}, every semiprime {Zariski-closed}\ algebra is a finite
direct sum of simple algebras, and thus has finite topological
rank.
\end{rem}
Thus, the obstruction to finite topological rank
is found in the radical.
\begin{exmpl}\Label{infgen} Let $K$ be an infinite dimensional
field extension of a finite field $F$, and consider $$ A = \left(\AR{F & K \\
0 & F}\right).$$ This {Zariski-closed}\ algebra has infinite topological rank,
since any finite number of elements generates only a finite
subspace of $K$ in the $1,2$ position, which is {Zariski-closed}.
\end{exmpl}
Nevertheless, we do have the following information.
\begin{rem}
When the Peirce idempotent $\tilde e$ has infinite type
(cf.~\Dref{idemptype}), then the spaces $A\tilde e$ and $\tilde e A$
have finite topological rank, since they are naturally vector
spaces over $K$ (although this is not the structure of the initial
vector space). The action is via the isomorphism of $K$ with the
center of the prime component corresponding to $\tilde{e}$.
\end{rem}
\subsection{Generic algebras over an infinite
field}
Clearly, the topological rank of a f.d.~algebra $A$ over a field $K$
is at most its dimension, which is finite by assumption.
Thus, in this case, we need only finitely many elements to define
the generic algebra.
\begin{constr}\Label{classicgen}
The classical construction of a generic algebra of a f.d.~algebra
$A$ over an infinite field $F$ is to take a base $b_1, \dots, b_n$
of $A$ over $F$, adjoin indeterminates $\xi _{i}^{(k)}$ to~$F$ ($i
= 1,\dots,n$, $k \in {\mathbb {N}}$), and let $A'$ be the algebra generated
by the ``generic'' elements $Y_k = \sum _{i=1}^n \xi_{i}^{(k)}
b_i$, $k \in {\mathbb {N}}$. It is easy to see \cite[Example 3.26]{BR} that
in the case where $F$ is infinite, $A'$ is PI-equivalent to $A$, and in fact
$A'$ is relatively free in the variety defined by ${\operatorname {id}} (A)$. The
most celebrated example in PI-theory is when $A = \M[n](F)$, the
algebra of $n \times n$ matrices. Then $A'$ is the algebra of
generic $n \times n$ matrices, generated by the {\bf generic
matrices} $Y_k = (\xi _{ij}^{(k)})_{ij}$, $k \in {\mathbb {N}}$.
\end{constr}
\begin{exmpl}\Label{generic1} The generic upper triangular matrix
$Y_k = \left(\AR{\xi_{1}^{(k)} & \xi_{2}^{(k)} \\ 0 &
\xi_{3}^{(k)}}\right)$ is defined over the polynomial algebra $C=
F[\xi_{j}^{(k)} : j = 1,2,3 \, ,\,k \in {\mathbb {N}}].$ We get the {\bf
generic algebra of upper triangular matrices} by taking the
subalgebra of $\M[n](C)$ generated by the $Y_k$.
\end{exmpl}
\begin{constr}\Label{generic1.1}
Alternatively, when building generic elements for an arbitrary
f.d.~algebra $A$ over an infinite field, we could take the powers
of the radical $J$ into account. Writing $A = S \oplus J$ by
Theorem~\ref{Zarcl1}, where $J^\nu = 0$, we can view $J^2$ as
those blocks at least two steps above the diagonal, i.e., lying in
$\sum _{r+2 \leq s} B_{rs}$. We take a generic algebra for $S$ and generic elements for $J/J^2$ (which can be viewed as a
complementary subspace for $J^2$ inside $J$), taking identical
gluing into account; these then yield generic elements for $A$.
This also will be the approach that we take for {Zariski-closed}\ algebras
over arbitrary fields.
\end{constr}
\subsection{PI-generic rank over an arbitrary field}
Since the topological rank could be infinite, we look for an
alternative concept that is more directly relevant to PI-theory.
\begin{defn} The {\bf PI-generic rank} of $A$ is the minimal number
$m$ of elements needed to generate a subalgebra satisfying the
same PI's as $A$. Then the relatively free PI-algebra of $A$ could
also be generated by $m$ elements. In the literature, the
PI-generic rank is often called the {\bf basic rank}.\end{defn}
Clearly, the PI-generic rank is less than or equal to the
topological rank.
\begin{exmpl}
$$ A = \left\{\left(\AR{\alpha & \beta & \gamma \\0 & \alpha & \beta \\
0 & 0 & \alpha}\right): \alpha\in F, \beta \in K\right\}$$ is a
commutative algebra, having PI-generic rank 1, but having infinite
topological rank when $K$ is infinite dimensional over $F$. This
example also shows that gluing can lower the PI-rank.
\end{exmpl}
When computing the PI-generic rank, we study a
polynomial in terms of substitutions into its monomials, as usual;
this can be complicated when an indeterminate is absent from
certain monomials. Accordingly, recall from
\cite[Definition~2.3.15]{Row1} that a polynomial $f$ is called
{\bf blended} if each indeterminate appearing in $f$ appears in
each monomial of $f$. As noted in \cite[Exercise~2.3.7]{Row1}, any
PI is a sum of blended PI's, seen by specializing each
indeterminate to 0 in turn. Thus, we can limit ourselves to blended
PI's when determining the PI-generic rank.
Let us consider the PI-generic rank of a {Zariski-closed}\ PI-algebra.
Although when $F$ is infinite, this is obviously finite (since the
variety contains a finite dimensional algebra), the situation
becomes much more interesting when $F$ is finite. Although, as
already seen in Example~\ref{infgen}, we might need infinitely
many generic elements to generate our algebra, we aim to show,
however, that the PI-generic rank is always finite.
\begin{thm}\Label{frank}
Any {Zariski-closed}\ algebra $A$ (over an arbitrary field) has finite
PI-generic rank.
\end{thm}
\begin{proof}
(As noted above, this statement is trivial for algebras over
infinite fields.) We decompose $A = S \oplus J$, where $S$ is
semisimple and $J$ is nilpotent, of nilpotence index $\nu$. By
\Pref{break}, it is enough to consider specializations for which
every variable takes values either in $S$ or in $J$.
We only need to consider blended polynomials. In any nonzero
evaluation of a polynomial on~$A$, at most $\nu-1$ components can
belong to $J_1$, as defined in Remark~\ref{gluetype1}. But $J_1$,
being a variety, has a finite number $\psi$ of irreducible
components, each of which has a generic element which we can use
for the radical substitution. Our ``generic'' radical substitution
could be taken to be the sum of these substitutions.
This leaves the semisimple substitutions, which we consider
``layered'' around the radical substitutions. The PI-generic rank
of $S$ is less than or equal to its topological rank, which is
finite; cf.~\Rref{infgen11}.
In view of $A = S \oplus J$, we see that the PI-generic rank is
at most $\mu + \nu-1 $, where $\mu$ is the PI-rank of $S = A/J$.
\end{proof}
By passing to the {Zariski closure}, we have:
\begin{cor}
Any representable algebra $A$ (over an arbitrary field) has a
PI-equivalent algebra with finite PI-generic rank.\end{cor}
\begin{rem} We can improve the bound given in Theorem \ref{frank}.
First, any central simple algebra over an infinite field is
generated by two elements; thus, its topological rank (and thus
PI-generic rank) is $2$. Thus, when $F$ is infinite, $\mu = 2,$
so our bound becomes $\nu+1$. (This can be lowered even further,
since the {Zariski closure}\ of a one-generator algebra contains its radical
part.)
Over a finite field $F$, any simple algebra has the form
$\M[n](F)$. If $|F|>n$, then $\M[n](F)$ is generated by two
elements, one being the diagonal with distinct entries and the
other being the upper triangular matrix $\sum _{i=1}^{n-1}
e_{i,i+1}$. On the other hand, when $|F|^2<n,$ the topological
rank starts growing, because of repeating eigenvalues, although
obviously the topological rank is finite (bounded by $n^2$) and
in fact grows much more slowly, bounded by $2+ \log _{|F|} n,$ as
seen by the argument given above. At any rate, the topological
rank, and thus the PI-rank, of any central simple algebra of
dimension $n^2$ is finite, bounded by some function of $n$.
Arguing by components, the PI-generic rank of any
noncommutative semisimple algebra can be bounded along the same lines.
When $F$ is a finite field of $q$ elements, there are only
finitely many possible elements in $\M[n](F)$, namely $q^{n^2}$,
and thus $q^{2n^2}$ possible ordered pairs. If we have more
components in $A$, some pair must repeat itself, and thus the
corresponding components become glued when $\psi \ge q^{n^2}$
unless we have a greater number of generators. Thus the size of
$\psi$ can also force up (logarithmically) the bound for the
PI-generic rank.
\end{rem}
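As a small computational sanity check (illustrative only, and outside the formal argument), the following script verifies by a brute-force span computation that, for $n = 2$ over the field of three elements, a diagonal matrix with distinct entries together with the permutation matrix $e_{12}+e_{21}$ generates the full $4$-dimensional matrix algebra:

```python
# Brute-force check that two matrices generate all of M_2(F_3):
# D = diag(1, 2) (distinct diagonal entries) and the permutation
# matrix C = e_12 + e_21. We close {D, C} under products and
# compute the F_3-dimension of the linear span.
import itertools

P, n = 3, 2  # field size and matrix size

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % P
                       for j in range(n)) for i in range(n))

def rank(mats):
    # Gaussian elimination over F_3 on the flattened matrices
    rows = [[x for row in m for x in row] for m in mats]
    r = 0
    for col in range(n * n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], P - 2, P)  # inverse mod P (Fermat)
        rows[r] = [x * inv % P for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(x - c * y) % P for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

D = ((1, 0), (0, 2))
C = ((0, 1), (1, 0))
elems = [D, C]
# keep multiplying until the span of the accumulated words stabilizes
while True:
    products = [mat_mul(a, b) for a, b in itertools.product(elems, elems)]
    if rank(elems + products) == rank(elems):
        break
    elems += products

dim = rank(elems)
print(dim)  # 4, i.e. the full 2x2 matrix algebra
```

The stabilization criterion is valid because once the span of the accumulated words is closed under pairwise products, it is a subalgebra.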
\subsection{Generic representable algebras, not over an infinite
field}
Our main theorem in this section is that for any representable
algebra $A$ there exists a relatively free, finitely generated
algebra in the variety $\operatorname{Var}(A)$ obtained by the
identities of $A$.
Although, for any representable algebra $A$ over an infinite
field $F$, the classical construction of a generic algebra for $A$
is PI-equivalent to the original algebra ~$A$, this is no longer
the case when $F$ is finite (or even worse, when there is no base
field). Thus, when considering finite characteristic, we need to
introduce new commutative rings that need not be fields.
\begin{exmpl}\Label{generic1.5} Suppose the field $F$ contains $q$
elements. Then $F$ is not PI-equivalent to the ring of polynomials
$F[\xi]$, so we must pass to $F[\xi]/\langle \xi ^q -\xi\rangle$,
where the image $\overline {\xi}$ of $\xi$ is a generic element.
Note that $F[\xi]/\langle \xi ^q -\xi\rangle$ is isomorphic to a
direct product of $q$ copies of $F$, so another way of viewing our
generic element is as a $q$-tuple listing the elements of $F$.
Unfortunately, this may not suffice to describe the PI's since
they involve more than one substitution. For two generic elements
we need to pass to $$F[\xi_1, \xi_2]/\langle \xi_1 ^q -\xi_1 ,
\xi_2 ^q -\xi_2\rangle,$$ which is isomorphic to a direct product
of $q^2$ copies of $F$, and so on. Of course, since the identity
of commutativity only requires two variables, this is enough for
the generic element of the variety of $F$, but we already see the
difficulty of predicting how many generic elements we need
to construct the generic algebra.
\end{exmpl}
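For $q = 3$, the isomorphism $F[\xi]/\langle \xi^q - \xi\rangle \cong F \times \dots \times F$ can be checked directly. The sketch below (plain Python, illustrative only) confirms that the evaluation map $f \mapsto (f(0), f(1), f(2))$ is a bijection on the $27$ residue classes and converts multiplication modulo $\xi^3 - \xi$ into componentwise multiplication:

```python
# Check F[xi]/(xi^q - xi) ≅ F^q for F = F_3 (q = 3): a residue class
# a0 + a1*xi + a2*xi^2 corresponds to its value tuple (f(0), f(1), f(2)),
# and multiplication mod xi^3 - xi is componentwise.
import itertools

q = 3
field = range(q)

def evaluate(coeffs):
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                 for x in field)

def mul_mod(f, g):
    # multiply, then reduce modulo xi^3 - xi, i.e. use xi^3 = xi
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % q
    while len(prod) > q:
        c = prod.pop()  # coefficient of xi^k with k >= 3
        prod[len(prod) - 2] = (prod[len(prod) - 2] + c) % q  # xi^k -> xi^(k-2)
    return tuple(prod + [0] * (q - len(prod)))

residues = list(itertools.product(field, repeat=q))
images = {evaluate(f) for f in residues}
print(len(images))   # 27: evaluation is a bijection onto F_3^3

# multiplication becomes componentwise under this identification
f, g = (1, 2, 0), (0, 1, 1)
lhs = evaluate(mul_mod(f, g))
rhs = tuple(a * b % q for a, b in zip(evaluate(f), evaluate(g)))
print(lhs == rhs)    # True
```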
Nevertheless, there is a way to define the generic algebra for a
general {Zariski-closed}\ algebra $A = \cl{A}$ (represented in some
f.d.~algebra $B$ over an algebraically closed field $K$). The
idea is to define everything as generically as possible.
\begin{constr}[General construction of generic algebras]\Label{genalgfin}
Letting ${\mathcal C}_1, \dots, {\mathcal C}_t$ denote the irreducible components of
$\cl{A}$ under the Zariski topology, suppose each ${\mathcal C}_i$ is
defined over a field with $q_i$ elements. Then we need $\mu$
``mutually generic'' elements $b_{i1}, \dots, b_{i\mu}$ in each
component. Towards this end, we take a generic element
$$b \in {\mathcal C}_1^{\mu} \times \dots \times {\mathcal C}_t^{\mu},$$
where each ${\mathcal C}_i^{\mu}$ denotes the direct product of $\mu$ copies of
${\mathcal C}_i$. Thus $b$ has the form $((b_{11}, \dots, b_{1\mu}),
(b_{21}, \dots, b_{2\mu}), \dots, (b_{t 1}, \dots, b_{t \mu}))$,
where each $(b_{i1}, \dots, b_{i\mu})\in {\mathcal C}_i^{\mu};$ by definition, the
$b_{ik}$ are ``mutually generic''.
Next, we define the {\bf generic
coefficient ring}
$$C = F[ \xi _{ik}\! : 1 \le i \le t, \ 1 \le k \le \mu ]/\langle
\xi_{ik}^{q_i}-\xi_{ik}: 1 \le i \le t, \ 1 \le k \le \mu
\rangle,$$ and the generic elements $Y_k = \sum _{i=1}^t \bar
\xi_{ik} b_{ik}$ ($k = 1,\dots,\mu$), where $\bar \xi_{ik}$ is the
image of $\xi_{ik}$ in $C$. The subalgebra of $B$ generated by
the~$Y_k$ serves as our generic algebra for the variety generated
by $A$.
Note that this construction is completely general, since the
relatively free algebra of the variety of any representable
algebra is itself representable; however, this is difficult
to prove (it requires the theory developed in this paper).
\end{constr}
\begin{thm}\label{mainthm1} The algebra $\mathcal Y = F\{Y_1, \dots, Y_\mu\}$ of
Construction~\ref{genalgfin} is relatively free in $\VarF A$, with
free generators $Y_1, \dots, Y_\mu$. If $\mu \ge
\operatorname{PI-rank} A$, then $\VarF {\mathcal Y} = \VarF A$.
\end{thm}
\begin{proof}
Any set of mutually generic elements specializes to an arbitrary
set of elements of $A$, so it remains to show that $\mathcal Y$ satisfies
the identities of $A$. But this is clear, since any element is the
sum of generic components, which satisfy the identities of $A$ by
definition.
\end{proof}
This construction also works for nonassociative {Zariski-closed}\ algebras of
arbitrary signature, in the framework of universal algebra.
\begin{rem}
One cannot simply describe the generic algebra in
Construction~\ref{genalgfin} by taking the $b_i$ from a basis of the extended
algebra as in Construction~\ref{classicgen}, because the
Gel'fand-Kirillov (GK) dimensions do not match, as evidenced by
Belov's computation discussed in Remark~\ref{GZB1}. Indeed, the
generic coefficient ring $C$ is finite, and thus the GK dimension
would only be equal to the GK dimension of the first block,
instead of the sum of the GK dimensions of the blocks.
\end{rem}
\begin{exmpl}
Let $A$ be the algebra of triangular matrices with entries as
follows: $\left(\AR{\alpha & \beta
\\ 0 & \gamma }\right)$ where $\alpha $ is
in the finite field $F$, and $\beta ,\gamma $ are in an infinite
field extension $K$ of $F$. Continuing the notation of
Construction~\ref{genalgfin}, the generic algebra is generated by
matrices of the form $Y_i = \left(\AR{\xi_{i1} & \xi_{i2}
\\ 0 & \xi_{i3} }\right)$
where $\xi_{i1}$ is a generic element of $C$, whereas
$\xi_{i2},\xi_{i3}$ are indeterminates over $K$.
\end{exmpl}
\begin{exmpl} The generic upper triangular matrices for an algebra with
Frobenius gluing of power $q$ along the diagonal can be written
in the form $Y_k = \left(\AR{\xi_{1k} & \xi_{2k}
\\ 0 & \xi_{1k}^q }\right)$.\end{exmpl}
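As a quick sanity check (illustrative only), the closure of this glued form under multiplication, which rests on the Frobenius map $x \mapsto x^q$ being multiplicative, can be verified exhaustively over the field of four elements (so $q = 2$); the field arithmetic below is implemented by hand:

```python
# Verify that matrices [[a, b], [0, a^2]] over GF(4) are closed under
# multiplication: the (2,2) entry of a product is (a1*a2)^2, which
# equals a1^2 * a2^2 since squaring is a homomorphism in char 2.
# GF(4) = F_2[w]/(w^2 + w + 1), elements encoded as 0, 1, 2 = w, 3 = w+1.
import itertools

def gf4_mul(a, b):
    # carry-less multiply of the bit patterns, then reduce mod 0b111
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if (p >> 2) & 1:     # products of 2-bit elements have degree <= 2
        p ^= 0b111
    return p

def frob(a):             # the Frobenius x -> x^2 over F_2
    return gf4_mul(a, a)

closed = True
for a1, b1, a2, b2 in itertools.product(range(4), repeat=4):
    # multiply [[a1, b1], [0, frob(a1)]] by [[a2, b2], [0, frob(a2)]]
    top = gf4_mul(a1, a2)
    bot = gf4_mul(frob(a1), frob(a2))
    if bot != frob(top):
        closed = False
print(closed)  # True
```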
When Frobenius gluing is involved, we can still use the generic
coefficient ring of Construction~\ref{genalgfin}. The generic
description of partial gluing up to infinitesimals becomes more
complicated in the presence of Frobenius gluing, because we need
to deal both with the Frobenius automorphism and also with the
degree of the infinitesimal.
We close by providing an explicit construction for generic
PI-algebras of {Zariski-closed}\ algebras. As in
Construction~\ref{generic1.1}, we take the powers of the radical
$J$ into account.
\begin{constr}[The explicit generic algebra of a
{Zariski-closed}\ algebra of finite topological rank]\Label{generic2}
In view of Construction~\ref{classicgen}, we may assume the base
field $F$ is finite. The construction requires generic elements
for the {Zariski-closed}\ algebra $A$, which will be defined over the ring of
polynomials $F[\xi_1,\xi_2,\dots]$ (with an appropriate indexing).
There are several methods; we choose the one that is perhaps the
most intuitive in terms of the structure, although rather intricate. Our
point of departure is the Wedderburn Block form
(Theorem~\ref{block1}). Namely, write $A \subset \M[n](K)$, where
$F \subseteq K$.
\begin{enumerate}
\item First consider the center $A_0$ of a simple component in $A$, such as
$$A_0 = \set{\smat{\alpha}{0}{0}{\alpha^p} {\,{:}\,} \alpha \in K}.$$ This
algebra may be described by the Frobenius gluing of $1\times 1$
blocks along a single gluing component. Namely, the algebra has
the form $\set{\sum_{i=1}^{s} \alpha^{\phi_i} e_i \suchthat \alpha
\in K}$, where $e_i$ are the basic idempotents and $\phi$ is the
exponent vector, taking $q$-power values (for $q = \card{F}$).
The generic elements can be taken as $X_k = \sum_{i=1}^{s}
\xi_k^{\phi_i}
e_i$. %
\item Let $S$ be a simple component of $A$. In $B \subseteq \M[n](K)$,
$S$ is contained in a direct sum of matrix blocks of the same
size. Let $e_i^{jj'}$ denote the $1\times 1$ matrix units in the
\th{i} block, whose corresponding block idempotent is $e_i$.
Keeping the notation as above, the generic elements can be taken
as $X^{k,jj'} = \sum_{i=1}^{s} (\xi_k^{jj'})^{\phi_i} e_i^{jj'}$,
where the variables are $\xi_k^{jj'}$. For example, we could have
$$\hat{X}_{[1]}^{k,21} =
\smat{\smat{0}{0}{\xi_k^{2,1}}{0}}{0}{0}{\smat{0}{0}{(\xi_k^{2,1})^q}{0}}.$$ %
\item Let $d$ denote the number of glued components in $A$, and let
$\hat{e}_u$ denote the idempotent corresponding to the \th{u}
component, decomposed as $\hat{e}_u = \sum_{r \in T_u} e_r$, as in
Subsection~\ref{ss:sP}. Let $\hat{X}_{[u]}^{k,jj'}$ denote the
glued sum of appropriate powers of $\xi_k^{jj'}$, placed in the
$(j,j')$ entry of the glued blocks, where the sum is over the
blocks $r \in T_u$. Each generic element of this type can be
decomposed as a sum $\hat{X}_{[u]}^{k,jj'} = \sum_{r \in T_u}
X_r^{k,jj'}$, where $X_r^{k,jj'}$ is the appropriate power of
$\xi_k^{jj'}$, placed in the $(j,j')$ entry of the \th{r} block.
The $\hat{X}_{[u]}^{k,jj'}$ are the semisimple part of our generic
elements.
Let $b_1,\dots,b_\tau$ be a topological basis for $S$. We continue
as in \Rref{abitmore}. For every $2 \leq t \leq \nu$ (where $\nu$
is the nilpotence index of $J$; see \Pref{boundonlen}), for all
indices $1 \leq u_1,\dots,u_t \leq d$, and for every (fully
refined) equivalence class $\gamma \subseteq T_{u_1,\dots,u_t}$ (see
\Dref{eqdef} and \Rref{gluetype1}), we take all the elements
$$X_{\vec{k}}^{*,\gamma} = \sum_{(r_1,\dots,r_t) \in \gamma} {X_{r_1}^{k_1,j_1j_1'} b_{p_1} X_{r_2}^{k_2,j_2j_2'}
\cdots X_{r_{t-1}}^{k_{t-1},j_{t-1}j_{t-1}'} b_{p_{t-1}}
X_{r_t}^{k_t,j_tj_t'}},$$ where each of $p_1,\dots,p_{t-1}$ ranges
over the values $1$ through $\tau$, each $k_i$ ranges over~${\mathbb {N}}$;
and each $(j_i,j_i')$ ranges over the matrix entries of the blocks
in the $r_i$ place (they have the same dimensions for each
$(r_1,\dots,r_t) \in \gamma$).
\end{enumerate}
\end{constr}
\begin{thm}\label{mainthm2}
The algebra $\mathcal Y = F\set{X_{\vec{k}}^{*,\gamma} \suchthat
k_1,\dots,k_t \leq \mu}$ of Construction~\ref{generic2} (where we
range over all possible choices of $t$, $p_1,\dots,p_t$ and
$j_1,j_1',\dots,j_t,j_t'$ and $\gamma$) is relatively free in
$\VarF A$, with free generators $X_{\vec{k}}^{*,\gamma}$.
If $\mu \ge \operatorname{PI-rank} A$ (in particular, if $\mu$ is
at least the number of topological generators of $A$), then $\VarF
{\mathcal Y} = \VarF A$.
\end{thm}
\begin{proof} Same as \Tref{mainthm1}.
\end{proof}
\begin{cor} For any representable algebra $A$, its variety has a
finitely generated relatively free algebra. \end{cor}
\begin{proof}
In view of Theorem \ref{frank}, we may assume that $A$ has finite
PI-rank, and thus we are done by Theorem
\ref{mainthm2}.\end{proof}
This construction is needed in a forthcoming paper, where we
describe varieties of algebras in terms of a certain kind of
quiver.
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{D}{eep} learning has been an active research field with abundant applications in pattern recognition \cite{DBLP:conf/socpar/PadmanabhanP16,DBLP:journals/corr/abs-1809-09645}, data mining \cite{DBLP:journals/inffus/ZhangYCL18a}, statistical learning \cite{DBLP:conf/nips/TranPCP017}, computer vision \cite{DBLP:journals/cacm/KrizhevskySH17,DBLP:conf/cvpr/HeZRS16}, natural language processing \cite{DBLP:conf/naacl/DevlinCLT19,DBLP:journals/corr/abs-2003-01200}, \latinphrase{etc.}\xspace
It has achieved great successes in both theory and practice \cite{DBLP:books/daglib/0040158, DBLP:journals/nature/LeCunBH15}, especially in supervised learning scenarios, by leveraging a large amount of high-quality labeled data. However, labeled samples are often difficult, expensive, or time-consuming to obtain. The labeling process usually requires experts' efforts, which is one of the major obstacles to training an excellent fully supervised deep neural network. For example, in medical tasks, the measurements are made with expensive machinery, and labels are drawn from a time-consuming analysis by multiple human experts. If only a few labeled samples are available, it is challenging to build a successful learning system. By contrast, unlabeled data are usually abundant and can be easily or inexpensively obtained. Consequently, it is desirable to leverage a large amount of unlabeled data for improving the learning performance given a small number of labeled samples. For this reason, semi-supervised learning (SSL) has been a hot research topic in machine learning in the last decade \cite{DBLP:books/mit/06/ChapelleSZ06,DBLP:series/synthesis/2009Zhu}.
\begin{figure*}[!t]
\centering
\includegraphics[width=6.9in]{images/taxonomy.pdf}
\caption{The taxonomy of major deep semi-supervised learning methods based on loss function and model design.}
\label{fig:overview}
\end{figure*}
SSL is a learning paradigm concerned with constructing models that use both labeled and unlabeled data. Compared to supervised learning algorithms, which can use only labeled data, SSL methods can improve learning performance by exploiting additional unlabeled instances. SSL algorithms are often obtained by extending either supervised or unsupervised learning algorithms.
SSL algorithms provide a way to explore the latent patterns from unlabeled examples, alleviating the need for a large number of labels \cite{DBLP:conf/nips/OliverORCG18}. Depending on the key objective function of the systems, one may have a semi-supervised classification, a semi-supervised clustering, or a semi-supervised regression. We provide the definitions as follows:
\begin{itemize}
\item \textbf{Semi-supervised classification.} Given a training dataset that consists of both labeled instances and unlabeled instances, semi-supervised classification aims to train a classifier from both the labeled and unlabeled data, such that it is better than the supervised classifier trained only on the labeled data.
\item \textbf{Semi-supervised clustering.} Given a training dataset that consists of unlabeled instances, and some supervised information about the clusters, the goal of semi-supervised clustering is to obtain better clustering than the clustering from unlabeled data alone. Semi-supervised clustering is also known as constrained clustering.
\item \textbf{Semi-supervised regression.} Given a training dataset that consists of both labeled instances and unlabeled instances, the goal of semi-supervised regression is to improve the performance of a regression algorithm from a regression algorithm with labeled data alone, which predicts a real-valued output instead of a class label.
\end{itemize}
In order to explain SSL clearly and concretely, we focus on the problem of image classification. The ideas described in this survey can be adapted without difficulty to other situations, such as object detection, semantic segmentation, clustering, or regression. Therefore, we primarily review image classification methods with the aid of unlabeled data in this survey.
Since the 1970s when the concept of SSL first came to the fore \cite{DBLP:journals/tit/Agrawala70,DBLP:journals/tit/Fralick67,DBLP:journals/tit/Scudder65a},
there have been a wide variety of SSL methods, including generative models \cite{DBLP:conf/nips/MillerU96,DBLP:journals/ml/NigamMTM00}, semi-supervised support vector machines \cite{DBLP:conf/icml/Joachims99,DBLP:conf/nips/BennettD98,XuJZKL07nips,XuJZKLY09nips}, graph-based methods \cite{DBLP:conf/icml/ZhuGL03,DBLP:journals/jmlr/BelkinNS06,DBLP:conf/icml/BlumC01,DBLP:conf/nips/ZhouBLWS03}, and co-training \cite{DBLP:conf/colt/BlumM98}. We refer interested readers to \cite{DBLP:books/mit/06/CSZ2006,DBLP:series/synthesis/2009Zhu}, which provide a comprehensive overview of traditional SSL methods.
Nowadays, deep neural networks have played a dominating role in many research areas.
It is important to adapt classic SSL frameworks and to develop novel SSL methods for the deep learning setting, which leads to deep semi-supervised learning (DSSL).
DSSL studies how to effectively utilize both labeled and unlabeled data by deep neural networks. A considerable amount of DSSL methods have been proposed. According to the most distinctive features in semi-supervised loss functions and model designs, we classify DSSL into five categories, \latinphrase{i.e.}\xspace, generative methods, consistency regularization methods, graph-based methods, pseudo-labeling methods, and hybrid methods.
The overall taxonomy used in this literature is shown in Fig.~\ref{fig:overview}.
Some representative SSL works were described in the early surveys \cite{DBLP:books/mit/06/CSZ2006,DBLP:series/synthesis/2009Zhu}; however, emerging deep-learning-based techniques, such as adversarial training, which generates new training data for SSL, were not included.
Besides, \cite{DBLP:conf/nips/OliverORCG18} focuses on unifying the evaluation indices of SSL, and \cite{DBLP:journals/corr/abs-1903-11260} only reviews generative models and teacher-student models in SSL without making a comprehensive overview of SSL. Although \cite{DBLP:journals/ml/EngelenH20} tries to present a whole picture of SSL, the taxonomy is quite different from ours.
A recent review by Ouali \latinphrase{et~al.}\xspace~\cite{Ouali2020AnOO} gives a similar notion of DSSL as we do. However, it neither compares the presented methods within its taxonomy nor provides perspectives on future trends and existing issues.
Summarizing both previous and the latest research on SSL, we survey the fundamental theories and compare the deep semi-supervised methods.
In summary, our contributions are listed as follows.
\begin{itemize}
\item We provide a detailed review of DSSL methods and introduce a taxonomy of major DSSL methods, background knowledge, and various models. One can quickly grasp the frontier ideas of DSSL.
\item We categorize DSSL methods into several categories, \latinphrase{i.e.}\xspace, generative methods, consistency regularization methods, graph-based methods, pseudo-labeling methods, and hybrid methods, with particular genres inside each one.
We review the variants in each category and give standardized descriptions and unified sketch maps.
\item We identify several open problems in this field and discuss the future direction for DSSL.
\end{itemize}
This survey is organized as follows. In Section~\ref{sec:background}, we introduce SSL background knowledge, including assumptions in SSL, classical SSL methods, related concepts, and datasets used in various applications. Section~\ref{sec:generative} to Section~\ref{sec:hybridModel} introduce the primary deep semi-supervised techniques, \latinphrase{i.e.}\xspace, generative methods in Section~\ref{sec:generative}, consistency regularization methods in Section~\ref{sec:regularization}, graph-based methods in Section~\ref{sec:graph}, pseudo-labeling methods in Section~\ref{sec:pseudoLabeling} and hybrid methods in Section~\ref{sec:hybridModel}. In Section~\ref{sec:future_trends}, we discuss the challenges in semi-supervised learning and provide some heuristic solutions and future directions for these open problems.
\section{Background}\label{sec:background}
Before presenting an overview of the techniques of SSL, we first introduce the notations. To illustrate the DSSL framework, we limit our focus to single-label classification tasks, which are simple to describe and implement. We refer interested readers to \cite{DBLP:conf/cvpr/CevikalpBGS19,DBLP:conf/aaai/WangLQS020,DBLP:journals/pr/CevikalpBG20} for multi-label classification tasks. Let $X = \{X_L,X_U\}$ denote the entire data set, including a small labeled subset $X_L=\{x_i\}_{i=1}^L$ with labels $Y_L=(y_1,y_2,\ldots,y_L)$ and a large-scale unlabeled subset $X_U=\{x_i\}_{i=1}^U$, and generally we assume $L \ll U$.
We assume that the dataset contains $K$ classes and the first $L$ examples within $X$ are labeled by $\{y_i\}_{i=1}^L \in (y^1,y^2,\ldots,y^K)$.
Formally, SSL aims to solve the following optimization problem,
\begin{equation}
\min_{\theta} \underset{\text{supervised loss}}{\underbrace{\sum_{x\in X_L,y\in Y_L}\mathcal{L}_s(x,y,\theta)}}+\alpha \underset{\text{unsupervised loss}}{\underbrace{\sum_{x\in X_U}\mathcal{L}_u(x,\theta)}}+\beta \underset{\text{regularization}}{\underbrace{\sum_{x\in X}\mathcal{R}(x,\theta)}},
\label{equ: semiLoss}
\end{equation}
where $\mathcal{L}_s$ denotes the per-example supervised loss, \latinphrase{e.g.}\xspace, cross-entropy for classification, $\mathcal{L}_u$ denotes the per-example unsupervised loss, and $\mathcal{R}$ denotes the per-example regularization, \latinphrase{e.g.}\xspace, a consistency loss or a designed regularization term. Note that unsupervised loss terms are often not strictly distinguished from regularization terms, since regularization terms are normally not guided by label information. Lastly, $\theta$ denotes the model parameters, and $\alpha, \beta \in \mathbb{R}_{>0}$ denote the trade-off weights. Different choices of the unsupervised loss functions and regularization terms lead to different semi-supervised models. Unless particularly specified, the notations used in this paper are listed in TABLE~\ref{tab:notations}.
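To make the interplay of the three terms in the objective above concrete, the following minimal sketch (numpy) assembles the loss with simple placeholder choices: softmax regression with cross-entropy for $\mathcal{L}_s$, a consistency-style penalty under input noise for $\mathcal{L}_u$, and an L2 penalty for $\mathcal{R}$. All concrete choices, names, and constants are illustrative rather than those of any particular DSSL method.

```python
# Minimal sketch of the SSL objective: supervised loss + alpha *
# unsupervised loss + beta * regularization, on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
n_l, n_u, n_feat, n_cls = 4, 8, 5, 3

X_l = rng.normal(size=(n_l, n_feat))        # labeled inputs
y_l = rng.integers(0, n_cls, size=n_l)      # their labels
X_u = rng.normal(size=(n_u, n_feat))        # unlabeled inputs
theta = rng.normal(size=(n_feat, n_cls))    # linear model parameters

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def supervised_loss(theta):                 # cross-entropy on labeled data
    p = softmax(X_l @ theta)
    return -np.log(p[np.arange(n_l), y_l]).mean()

def unsupervised_loss(theta):               # consistency under input noise
    p = softmax(X_u @ theta)
    p_noisy = softmax((X_u + 0.01 * rng.normal(size=X_u.shape)) @ theta)
    return ((p - p_noisy) ** 2).sum(axis=1).mean()

def regularizer(theta):                     # L2 weight penalty
    return (theta ** 2).sum()

alpha, beta = 1.0, 1e-3
total = (supervised_loss(theta)
         + alpha * unsupervised_loss(theta)
         + beta * regularizer(theta))
print(total)   # a single scalar objective to be minimized over theta
```

In practice, the three terms are computed on mini-batches and minimized jointly by stochastic gradient descent.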
Depending on whether the test data are fully available in the training process, semi-supervised learning can be classified into two settings: the transductive setting and the inductive setting. Transductive learning assumes that the unlabeled samples in the training process are exactly the data to be predicted, and its purpose is to generalize over these unlabeled samples; inductive learning supposes that the learned semi-supervised classifier will still be applicable to new unseen data. In fact, most graph-based methods are transductive, while most other kinds of SSL methods are inductive.
\subsection{Assumptions for semi-supervised learning}\label{subsec:assumptions}
SSL aims to predict more accurately with the aid of unlabeled data than supervised learning that uses only labeled data. However, an essential prerequisite is that the data distribution should be under some assumptions. Otherwise, SSL may not improve supervised learning and may even degrade the prediction accuracy by misleading inferences. Following \cite{DBLP:series/synthesis/2009Zhu} and \cite{DBLP:books/mit/06/CSZ2006}, the related assumptions in SSL include:
\textbf{Self-training assumption.} The predictions of the self-training model, especially those with high confidence, tend to be correct. When this assumption holds, the high-confidence predictions can be treated as ground truth. This tends to happen when classes form well-separated clusters.
\textbf{Co-training assumption.} Different reasonable assumptions lead to different combinations of labeled and unlabeled data, and accordingly, different algorithms are designed to take advantage of these combinations. For example, Blum \latinphrase{et~al.}\xspace \cite{DBLP:conf/colt/BlumM98} proposed a co-training model, which works under the assumptions: instance $x$ has two conditionally independent views, and each view is sufficient for a classification task.
\textbf{Generative model assumption.} Generally, it is assumed that data are generated from a mixture of distributions. When the number of mixture components, the prior $p(y)$, and the conditional distribution $p(x|y)$ are all correct, the data can be assumed to come from the mixture model. This assumption suggests that if the generative model is correct enough, we can establish a valid link between the distribution of unlabeled data and the category labels by $p(x,y)=p(y)p(x|y)$.
\textbf{Cluster assumption.} If two points $x_1$ and $x_2$ are in the same cluster, they should belong to the same category \cite{DBLP:books/mit/06/CSZ2006}.
This assumption refers to the fact that data in a single class tend to form a cluster, and when the data points can be connected by short curves that do not pass through any low-density regions, they belong to the same class cluster \cite{DBLP:books/mit/06/CSZ2006}. According to this assumption, the decision boundary should not cross high-density areas but instead lie in low-density regions \cite{DBLP:conf/aistats/ChapelleZ05}. Therefore, the learning algorithm can use a large amount of unlabeled data to adjust the classification boundary.
\textbf{Low-density separation.} The decision boundary should lie in a low-density region, not pass through a high-density area \cite{DBLP:books/mit/06/CSZ2006}. The low-density separation assumption is closely related to the cluster assumption. We can consider the cluster assumption from another perspective by assuming that the classes are separated by areas of low density \cite{DBLP:books/mit/06/CSZ2006}: a decision boundary in a high-density region would cut a cluster into two different classes, which would violate the cluster assumption.
\textbf{Manifold assumption.} If two points $x_1$ and $x_2$ are located in a local neighborhood in the low-dimensional manifold, they have similar class labels \cite{DBLP:books/mit/06/CSZ2006}. This assumption reflects the local smoothness of the decision boundary. It is well known that one of the problems of machine learning algorithms is the curse of dimensionality: it is hard to estimate the actual data distribution in high-dimensional spaces, where the volume grows exponentially with the dimension. If the data lie on a low-dimensional manifold, learning algorithms can avoid the curse of dimensionality and operate in the corresponding low-dimensional space.
\begin{table}
\centering
\caption{Notations}
\begin{tabular}{c|c}
\hline
\textbf{Symbol} & \textbf{Explanation} \\
\hline
$\mathcal{X}$ & Input space, for example $\mathcal{X}=\mathbb{R}^n$ \\
$\mathcal{Y}$ & Output space. \\ & Classification: $\mathcal{Y}=\{y^1,y^2,\ldots, y^K\}$.\\ & Regression: $\mathcal{Y}=\mathbb{R}$ \\
$X_L$ & Labeled dataset. $x_i\in \mathcal{X}, y_i\in \mathcal{Y}$\\
$X_U$ &Unlabeled dataset. $x_i\in \mathcal{X}$\\
$X$ & Input dataset $X$. $N=L+U, L\ll U$\\
$\mathcal{L}$ & Loss Function \\
$G$ & Generator \\
$D$ & Discriminator \\
$C$ & Classifier \\
$H$ & Entropy \\
$\mathbb{E}$ & Expectation \\
$\mathcal{R}$ & Consistency constraint \\
$\mathcal{T}_x$& Consistency target \\
$\mathcal{G}$ & A graph \\
$\mathcal{V}$ & The set of vertices in a graph \\
$\mathcal{E}$ & The set of edges in a graph\\
$v$ & A node $v \in \mathcal{V}$\\
$e_{ij}$ & An edge linked between node $i$ and $j$, $e_{ij}\in \mathcal{E}$\\
${A}$ & The adjacency matrix of a graph \\
${D}$ & The degree matrix of a graph \\
$D_{ii}$ & The degree of node $i$\\
$W$ & The weight matrix\\
$W_{ij}$ & The weight associated with edge $e_{ij}$\\
$\mathcal{N}(v)$ & The neighbors of a node $v$\\
$\mathbf{Z}$ & Embedding matrix \\
$\mathbf{z}_{v}$ & An embedding for node $v$\\
$\mathbf{S}$ & Similarity matrix of a graph \\
$\mathbf{S}[u,v]$ & Similarity measurement between node $u$ and $v$\\
$\mathbf{h}_{v}^{(k)}$ & Hidden embedding for node $v$ in $k$th layer\\
$\mathbf{m}_{\mathcal{N}(v)}$ & Message aggregated from node $v$'s neighborhoods \\
\hline
\end{tabular}
\label{tab:notations}
\end{table}
\subsection{Classical Semi-supervised learning Methods}
\label{subsec:classical}
In the following, we briefly introduce some representative SSL methods motivated by the assumptions described above.
In the 1970s, the concept of SSL first came to the fore \cite{DBLP:journals/tit/Agrawala70,DBLP:journals/tit/Fralick67,DBLP:journals/tit/Scudder65a}.
Perhaps the earliest SSL method is self-training: an iterative mechanism that uses the initially labeled data to train a model and predict some of the unlabeled samples. The most confident predictions of the current supervised model are then marked as labels and added to the training data of the supervised algorithm, and this process is repeated until all the unlabeled examples have been predicted.
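The self-training mechanism described above can be sketched in a few lines. The base learner here is a toy nearest-centroid classifier on synthetic two-cluster data, and the gap between the two centroid distances serves as the confidence score; all of these concrete choices are illustrative.

```python
# Illustrative self-training loop: fit on labeled data, pseudo-label
# the most confident unlabeled points, and repeat.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(30, 2)),   # cluster of class 0
               rng.normal(+2.0, 1.0, size=(30, 2))])  # cluster of class 1
y = np.array([0] * 30 + [1] * 30)

labeled = np.zeros(60, dtype=bool)
labeled[[0, 30]] = True                     # one labeled sample per class
pseudo = np.full(60, -1)
pseudo[0], pseudo[30] = 0, 1

for _ in range(12):
    c0 = X[labeled & (pseudo == 0)].mean(axis=0)
    c1 = X[labeled & (pseudo == 1)].mean(axis=0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    pred = (d1 < d0).astype(int)
    conf = np.abs(d0 - d1)                  # margin-style confidence proxy
    candidates = np.flatnonzero(~labeled)
    if candidates.size == 0:
        break
    # promote the five most confident unlabeled points per round
    best = candidates[np.argsort(-conf[candidates])[:5]]
    pseudo[best] = pred[best]
    labeled[best] = True

# final fit on all pseudo-labeled data and evaluation
c0 = X[pseudo == 0].mean(axis=0)
c1 = X[pseudo == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1)
        < np.linalg.norm(X - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(accuracy)  # high on this well-separated toy problem
```

In practice, the quality of self-training hinges on the confidence estimates being reliable, in line with the self-training assumption.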
Co-training \cite{DBLP:conf/colt/BlumM98} provides a similar solution by training two different models on two different views.
Confident predictions of one view are then used as labels for the other model. More relevant literature of this method also includes \cite{DBLP:conf/acl/Yarowsky95,DBLP:conf/nips/Sa93,DBLP:conf/ecml/VittautAG02}, \latinphrase{etc.}\xspace
Generative models assume a model $p(x,y)=p(y)p(x|y)$, where the density function $p(x|y)$ is an identifiable distribution, for example, a polynomial or a Gaussian mixture distribution, and the uncertainty lies in the parameters of $p(x|y)$. Generative models can be optimized by iterative algorithms. \cite{DBLP:conf/nips/MillerU96,DBLP:journals/ml/NigamMTM00} apply the EM algorithm for classification: they compute the parameters of $p(x|y)$ and then classify unlabeled instances according to the Bayes rule. However, generative models rest on strong assumptions: once the hypothesized $p(x|y)$ is poorly matched with the actual distribution, classifier performance can degrade.
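Concretely, once the mixture parameters have been estimated (for example by EM), an unlabeled instance $x$ is assigned to the class with the largest posterior probability,
$$p(y^k \mid x) = \frac{p(y^k)\,p(x \mid y^k)}{\sum_{j=1}^{K} p(y^j)\,p(x \mid y^j)},$$
which is exactly where the link $p(x,y)=p(y)p(x|y)$ between the unlabeled data distribution and the class labels is exploited.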
A representative example following the low-density separation principle is Transductive Support Vector Machines (TSVMs) \cite{DBLP:conf/icml/Joachims99,DBLP:conf/nips/BennettD98,DBLP:conf/aistats/ChapelleZ05,DBLP:conf/icml/ChapelleCZ06, DBLP:conf/nips/XuJZKL07}.
Like regular SVMs, TSVMs maximize the margin between the decision boundary and the data points, but they additionally take into account the distance from the unlabeled data to the decision boundary. To address the corresponding non-convex optimization problem, a number of optimization algorithms have been proposed. For instance,
in \cite{DBLP:conf/aistats/ChapelleZ05}, a smooth loss function is substituted for the hinge loss of the TSVM, and a gradient descent technique is used to find the decision boundary in a low-density region.
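In one common formulation (written here for binary labels $y_i \in \{-1,+1\}$ and a linear decision function $f(x) = w^{\top} x + b$; the exact form varies across the cited works), the TSVM objective augments the standard hinge loss with a hinge-type penalty on the unlabeled points,
$$\min_{w,b}\ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{L} \max\bigl(0,\, 1 - y_i f(x_i)\bigr) + C^{*} \sum_{j=L+1}^{N} \max\bigl(0,\, 1 - |f(x_j)|\bigr),$$
where the last term pushes unlabeled examples away from the margin and thereby drives the decision boundary into low-density regions.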
Graph-based methods rely on the geometry of the data induced by both labeled and unlabeled examples. This geometry is represented by an empirical graph $\mathcal{G}=(\mathcal{V, E})$, where nodes $\mathcal{V}$ represent the training data points with $|\mathcal{V}|=n$ and edges $\mathcal{E}$ represent similarities between the points. By exploiting the graph or manifold structure of the data, it is possible to learn with very few labels by propagating information through the graph \cite{Zhu2002LearningFL,DBLP:conf/icml/ZhuGL03,DBLP:books/mit/06/BengioDR06,DBLP:conf/icml/BlumC01,DBLP:conf/nips/ZhouBLWS03,DBLP:journals/jmlr/BelkinNS06}. For example, label propagation \cite{Zhu2002LearningFL} predicts the label information of unlabeled nodes from labeled nodes. Each node's label propagates to its neighbors according to the similarity; at each propagation step, each node updates its label according to its neighbors' label information. In label propagation, the labels of the labeled data are kept fixed, so that they gradually propagate to the unlabeled data. The label propagation method can also be applied in deep learning \cite{DBLP:series/lncs/WestonRMC12}.
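The iterative scheme described above can be sketched as follows (a minimal numpy version with a Gaussian-kernel graph; all constants are illustrative): labels are propagated through the row-normalized weight matrix while the labeled nodes are clamped after every step.

```python
# Minimal iterative label propagation: F <- P F with labeled rows
# clamped, where P = D^{-1} W is the row-normalized similarity matrix.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2.0, 0.5, size=(10, 2)),   # cluster of class 0
               rng.normal(+2.0, 0.5, size=(10, 2))])  # cluster of class 1
n, n_cls = 20, 2
labeled = np.array([0, 10])                 # one labeled node per class
Y = np.zeros((n, n_cls))                    # one-hot labels (zero if unknown)
Y[0, 0] = Y[10, 1] = 1.0

dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-dist2)                          # Gaussian-kernel similarities
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)        # P = D^{-1} W

F = Y.copy()
for _ in range(100):
    F = P @ F                               # propagate one step
    F[labeled] = Y[labeled]                 # clamp the labeled nodes

pred = F.argmax(axis=1)
print(pred)
```

The clamping step treats the labeled nodes as boundary conditions, and on connected graphs the iteration approaches the harmonic-function solution of \cite{DBLP:conf/icml/ZhuGL03}.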
\subsection{Related Learning Paradigms} \label{sec:relatedWork}
There are many learning paradigms that can make use of an extra data source to boost learning performance.
Based on the availability of labels or the distribution difference in the extra data source,
there are several learning paradigms related to semi-supervised learning.
\textbf{Transfer learning.}
Transfer learning \cite{DBLP:journals/tkde/PanY10,DBLP:conf/icann/TanSKZYL18,DBLP:journals/pieee/ZhuangQDXZZXH21} aims to apply knowledge from one or more source domains to a target domain in order to improve performance on the target task. In contrast to SSL, which works well under the hypothesis that the training set and the testing set are independent and identically distributed (i.i.d.), transfer learning allows the domains, tasks, and distributions used in training and testing to be different but related.
\textbf{Weakly-supervised learning.}
Weakly-supervised learning \cite{Li2019TowardsSW} relaxes the requirement of strong supervision that ground-truth labels be given for a large training set. There are three types of weakly supervised data: incomplete supervised data, inexact supervised data, and inaccurate supervised data. Incomplete supervision means only a subset of the training data is labeled; in this case, representative approaches are SSL and domain adaptation. Inexact supervision means that the labels of training examples are coarse-grained, \latinphrase{e.g.}\xspace, in the scenario of multi-instance learning. Inaccurate supervision means that the given labels are not always ground-truth, such as in the situation of label noise learning.
\textbf{Positive and unlabeled learning.}
Positive and unlabeled (PU) learning \cite{DBLP:conf/iisa/JaskieS19, DBLP:journals/ml/BekkerD20} is a variant of positive-negative binary classification where the training data consists of positive samples and unlabeled samples. Each unlabeled instance can belong to either the positive or the negative class. During the training procedure, only positive samples and unlabeled samples are available. PU learning can thus be viewed as a special case of SSL.
\textbf{Meta-learning.}
Meta-learning \cite{DBLP:journals/corr/abs-2004-05439, DBLP:journals/corr/abs-2004-11149, DBLP:journals/corr/abs-1810-03548, DBLP:journals/corr/abs-2007-09604,DBLP:journals/corr/abs-2010-03522}, also known as ``learning to learn'', aims to learn new skills or adapt to new tasks rapidly with previous knowledge and a few training examples. It is well known that a good machine learning model often requires a large number of samples for training. The meta-learning model is expected to adapt and generalize to new environments that have never been encountered during the training process. The adaptation process is essentially a mini learning session that occurs at test time with limited exposure to the new task configuration. Eventually, the adapted model can be trained on various learning tasks and optimized over a distribution of tasks, including potentially unseen ones.
\textbf{Self-supervised learning.}
Self-supervised learning \cite{DBLP:journals/corr/abs-1902-06162, DBLP:journals/corr/abs-2006-08218,DBLP:journals/corr/abs-2011-00362} has gained popularity due to its ability to prevent the expense of annotating large-scale datasets. It can leverage input data as supervision and use the learned feature representations for many downstream tasks. In this sense, self-supervised learning meets our expectations for efficient learning systems with fewer labels, fewer samples, or fewer trials. Since there is no manual label involved, self-supervised learning can be regarded as a branch of unsupervised learning.
\subsection{Datasets and applications}\label{sec:applications}
SSL has many applications across different tasks and domains, such as image classification, object detection, semantic segmentation, \latinphrase{etc.}\xspace in the domain of computer vision, and text classification, sequence learning, \latinphrase{etc.}\xspace in the field of Natural Language Processing (NLP).
As shown in TABLE~\ref{tab:application}, we summarize some of the most widely used datasets and representative references according to the application areas. In this survey, we mainly discuss the methods applied to image classification, since these methods can be extended to other applications; for example, \cite{DBLP:conf/nips/JeongLKK19} applies consistency training to object detection, and \cite{DBLP:conf/iccv/SoulySS17} modifies Semi-GANs to fit the scenario of semantic segmentation. Many works have achieved state-of-the-art performance within different applications. Most of these methods share implementation details and source code, and we refer interested readers to the datasets and corresponding references in TABLE~\ref{tab:application}.
\begin{table*}
\caption{Summary of Applications and Datasets}
\label{tab:application}
\centering
\begin{tabular}{>{\centering\arraybackslash}m{3.5cm}|>{\centering\arraybackslash}m{9cm}|>{\centering\arraybackslash}m{3cm}}
\hline
\textbf{Applications} & \textbf{Datasets} & \textbf{Citations} \\
\hline
Image classification &
MNIST \cite{LeCun2005TheMD},
SVHN \cite{Netzer2011ReadingDI},
STL-10 \cite{DBLP:journals/jmlr/CoatesNL11},
CIFAR-10, CIFAR-100 \cite{Krizhevsky2009LearningML},
ImageNet \cite{DBLP:conf/nips/KrizhevskySH12}
&\cite{DBLP:conf/nips/RasmusBHVR15,DBLP:conf/nips/SajjadiJT16,DBLP:conf/iclr/LaineA17,DBLP:conf/nips/TarvainenV17,DBLP:conf/cvpr/ZhangLH20WCP,DBLP:conf/ijcai/VermaLKBL19,DBLP:conf/nips/BerthelotCGPOR19,DBLP:conf/iclr/BerthelotCCKSZR20,DBLP:conf/iclr/LiSH20} \\
\hline
Object detection & PASCAL VOC \cite{DBLP:journals/ijcv/EveringhamGWWZ10},
MSCOCO \cite{DBLP:conf/eccv/LinMBHPRDZ14},
ILSVRC \cite{DBLP:journals/ijcv/RussakovskyDSKS15} &\cite{DBLP:conf/cvpr/TangWGDGC16,DBLP:conf/nips/JeongLKK19,DBLP:conf/iccv/GaoWDLN19,DBLP:journals/corr/abs-2005-04757} \\
\hline
3D object detection & SUN RGB-D \cite{DBLP:conf/cvpr/SongLX15},
ScanNet \cite{DBLP:conf/cvpr/DaiCSHFN17},
KITTI \cite{DBLP:conf/cvpr/GeigerLU12} & \cite{DBLP:conf/cvpr/ZhaoCL20,DBLP:conf/iccv/TangL19} \\
\hline
Video salient object detection &VOS \cite{DBLP:journals/tip/LiXC18},
DAVIS \cite{DBLP:conf/cvpr/PerazziPMGGS16},
FBMS \cite{DBLP:conf/eccv/BroxM10} &\cite{DBLP:conf/iccv/YanL0LWCL19}\\
\hline
Semantic segmentation &PASCAL VOC \cite{DBLP:journals/ijcv/EveringhamGWWZ10},
PASCAL context \cite{DBLP:conf/cvpr/MottaghiCLCLFUY14},
MS COCO \cite{DBLP:conf/eccv/LinMBHPRDZ14},
Cityscapes \cite{DBLP:conf/cvpr/CordtsORREBFRS16},
CamVid \cite{DBLP:conf/eccv/BrostowSFC08},
SiftFlow \cite{DBLP:journals/pami/LiuYT11,DBLP:conf/cvpr/XiaoHEOT10},
StanfordBG \cite{DBLP:conf/iccv/GouldFK09}
&
\cite{DBLP:conf/iccv/DaiHS15,DBLP:conf/nips/HongNH15,DBLP:conf/iccv/PapandreouCMY15,DBLP:conf/iccv/SoulySS17,
DBLP:conf/cvpr/WeiXSJFH18,DBLP:conf/cvpr/LeeKLLY19,DBLP:journals/corr/abs-1908-05724,DBLP:conf/iccv/KalluriVCJ19,DBLP:conf/cvpr/IbrahimVRM20,DBLP:conf/cvpr/OualiHT20}\\
\hline
Text classification & AG News \cite{DBLP:conf/nips/ZhangZL15},
DBpedia \cite{lrec12mendes2},
Yahoo! Answers \cite{DBLP:conf/aaai/ChangRRS08},
IMDB \cite{DBLP:conf/acl/MaasDPHNP11},
Yelp review \cite{DBLP:conf/nips/ZhangZL15},
Snippets \cite{DBLP:conf/www/PhanNH08},
Ohsumed \cite{DBLP:conf/aaai/YaoM019},
TagMyNews \cite{DBLP:conf/ecir/VitaleFS12},
MR \cite{DBLP:conf/acl/PangL05},
Elec \cite{DBLP:conf/nips/JohnsonZ15},
Rotten Tomatoes \cite{DBLP:conf/acl/PangL05},
RCV1 \cite{DBLP:journals/jmlr/LewisYRL04} &\cite{DBLP:conf/aaai/XuSDT17,DBLP:conf/iclr/MiyatoDG17,DBLP:conf/aaai/SachanZS19,DBLP:conf/emnlp/HuYSJL19,DBLP:conf/emnlp/JoC19,DBLP:conf/acl/ChenYY20} \\
\hline
Dialogue policy learning &MultiWOZ \cite{DBLP:conf/emnlp/BudzianowskiWTC18} &\cite{DBLP:conf/acl/HuangQSZ20} \\
\hline
Dialogue generation &Cornell Movie Dialogs Corpus \cite{DBLP:conf/acl-cmcl/Danescu-Niculescu-Mizil11},
Ubuntu Dialogue Corpus \cite{DBLP:conf/sigdial/LowePSP15} &\cite{DBLP:conf/emnlp/ChangHWZYW19} \\
\hline
Sequence learning & IMDB \cite{DBLP:conf/acl/MaasDPHNP11},
Rotten Tomatoes \cite{DBLP:conf/acl/PangL05},
20 Newsgroups \cite{DBLP:conf/icml/Lang95},
DBpedia \cite{lrec12mendes2},
CoNLL 2003 NER task \cite{DBLP:conf/conll/SangM03},
CoNLL 2000 Chunking task \cite{DBLP:conf/conll/SangB00},
Twitter POS Dataset \cite{DBLP:conf/acl/GimpelSODMEHYFS11,DBLP:conf/naacl/OwoputiODGSS13},
Universal Dependencies (UD) \cite{DBLP:conf/acl/McDonaldNQGDGHPZTBCL13},
Combinatory Categorial Grammar (CCG) supertagging \cite{DBLP:journals/coling/HockenmaierS07}
&\cite{DBLP:conf/nips/DaiL15,DBLP:conf/acl/PetersABP17,DBLP:conf/acl/Rei17,DBLP:conf/emnlp/ClarkLML18} \\
\hline
Semantic role labeling& CoNLL-2009 \cite{DBLP:conf/conll/HajicCJKMMMNPSSSXZ09},
CoNLL-2013 \cite{DBLP:conf/conll/PradhanMXNBUZZ13} &\cite{DBLP:journals/jidm/CarneiroCZR17,DBLP:conf/emnlp/MehtaLC18,DBLP:conf/emnlp/CaiL19} \\
\hline
Question answering &SQuAD \cite{DBLP:conf/emnlp/RajpurkarZLL16},
TriviaQA \cite{DBLP:conf/acl/JoshiCWZ17} &\cite{DBLP:conf/aaai/OhTHITK16,DBLP:conf/naacl/DhingraPR18,DBLP:conf/emnlp/ZhangB19} \\
\hline
\end{tabular}
\end{table*}
\section{Generative methods}
\label{sec:generative}
As discussed in Subsection~\ref{subsec:classical}, generative methods can learn the implicit features of data to better model data distributions. They model the real data distribution from the training dataset and then generate new data with this distribution. In this section, we review the deep generative semi-supervised methods based on the Generative Adversarial Network (GAN) framework and the Variational AutoEncoder (VAE) framework, respectively.
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/SemiGAN.pdf}
\caption{A glimpse of the diverse range of architectures used for GAN-based deep generative semi-supervised methods. The characters ``$D$'', ``$G$'' and ``$E$'' represent the \emph{Discriminator}, \emph{Generator} and \emph{Encoder}, respectively. In subfigure (6), Localized GAN is equipped with a local generator $G(x,z)$, so we use a yellow box to distinguish it. Similarly, in CT-GAN, a purple box is used to denote a discriminator that introduces a consistency constraint.
}
\label{fig:generativeModel}
\end{figure*}
\subsection{Semi-supervised GANs}\label{sec:semiGAN}
A typical GAN \cite{DBLP:conf/nips/GoodfellowPMXWOCB14} consists of a generator $G$ and a discriminator $D$ (see Fig.~\ref{fig:generativeModel}(2)). The goal of $G$ is to learn a distribution $p_g$ over data $x$ given a prior on input noise variables $p_z(z)$. The fake samples $G(z)$ generated by the generator $G$ are used to confuse the discriminator $D$. The discriminator $D$ is used to maximize the distinction between real training samples $x$ and fake samples $G(z)$. As we can see, $D$ and $G$ play the following two-player minimax game with the value function $V(G,D)$:
\begin{align}
\min\limits_G \max\limits_D V(D, G) = \mathbb{E}_{x \sim p(x)}[\text{log}D(x)] \nonumber \\
+ \mathbb{E}_{z \sim p_{z}}[\text{log}(1 - D(G(z)))].
\label{equ:gan}
\end{align}
Since GANs can learn the distribution of real data from unlabeled samples, they can be used to facilitate SSL. There are many ways to use GANs in SSL settings. Inspired by \cite{Schoneveld2017SemiSupervisedLW}, we have identified four main themes in how to use GANs for SSL, \latinphrase{i.e.}\xspace, (a) re-using the features from the discriminator, (b) using GAN-generated samples to regularize a classifier, (c) learning an inference model, and (d) using samples produced by a GAN as additional training data.
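Before turning to the individual methods, the minimax value $V(D,G)$ of Eq.~(\ref{equ:gan}) can be estimated by simple Monte Carlo in a toy 1-D setting (the generator and discriminator below are hand-set stand-ins, not trained networks):

```python
# Toy Monte-Carlo estimate (assumed 1-D setup) of the GAN value
# V(D,G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))].
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

x_real = rng.normal(2.0, 1.0, 10000)        # "real" data
z = rng.normal(0.0, 1.0, 10000)             # prior noise p_z
G = lambda z: 0.5 * z                       # a (poor) hand-set generator
D = lambda x: sigmoid(3.0 * (x - 1.0))      # a hand-set logistic discriminator

V = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
print(V)
```

Because the generator here does not match the real distribution, the estimated value stays well above the equilibrium value $-\log 4$ that is reached when $D$ cannot distinguish real from fake samples.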
A simple SSL approach is to combine supervised and unsupervised loss during training. \cite{DBLP:journals/corr/RadfordMC15} demonstrates that a
hierarchical representation of the GAN-discriminator is useful for object classification. These findings indicate that a simple and efficient SSL method can be provided by combining an unsupervised GAN value function with a supervised classification objective function, \latinphrase{e.g.}\xspace, $\mathbb{E}_{(x,y)\in X_l}[\log D(y|x)]$. In the following, we review several representative methods of semi-supervised GANs.
\textbf{CatGAN.}
Categorical Generative Adversarial Network (CatGAN) \cite{DBLP:journals/corr/Springenberg15} modifies the GAN objective function to take into account the mutual information between observed examples and their predicted categorical class distributions, so the optimization problem differs from that of the standard GAN (Eq.~(\ref{equ:gan})). The structure is illustrated in Fig.~\ref{fig:generativeModel}(3). Instead of learning a binary discriminator value function, this method learns a discriminator that distinguishes the samples into $K$ categories by assigning a label $y$ to each $x$. The discriminator loss consists of three parts: (1) the entropy $H[p(y|x,D)]$, minimized to obtain certain category assignments for real samples; (2) the entropy $H[p(y|G(z),D)]$, maximized to keep the predictions on generated samples uncertain; and (3) the marginal class entropy $H[p(y|D)]$, maximized to enforce uniform usage of all classes. For the labeled data, a supervised loss is added: the cross-entropy between the predicted conditional distribution $p(y|x,D)$ and the true label distribution of the examples. The proposed framework uses the feature space learned by the discriminator for the final learning task.
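The three entropy terms can be illustrated with toy softmax outputs (hypothetical values, not CatGAN's actual training code):

```python
# Sketch of the three CatGAN entropy terms on toy class distributions.
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

p_real = np.array([[0.9, 0.05, 0.05],       # confident predictions on real x
                   [0.05, 0.9, 0.05]])
p_fake = np.array([[0.4, 0.3, 0.3],         # uncertain predictions on G(z)
                   [0.3, 0.4, 0.3]])

H_real = entropy(p_real).mean()          # (1) kept small: certain assignments
H_fake = entropy(p_fake).mean()          # (2) kept large for generated samples
H_marg = entropy(p_real.mean(axis=0))    # (3) kept large: uniform class usage

print(H_real, H_fake, H_marg)
```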
\textbf{CCGAN.}
Context-Conditional Generative Adversarial Networks (CCGAN) \cite{DBLP:journals/corr/DentonGF16} uses an adversarial loss to harness unlabeled image data based on image in-painting. The architecture of CCGAN is shown in Fig.~\ref{fig:generativeModel}(4). The main highlight of this work is the context information provided by the surrounding parts of the image. A GAN is trained in which the generator fills in the pixels within a missing hole, while the discriminator distinguishes the real unlabeled images from these in-painted images. More formally, $m\odot x$ is passed as input to the generator, where $m$ denotes a binary mask that drops out a specified portion of the image and $\odot$ denotes element-wise multiplication. The in-painted image is thus $x_I=(1-m)\odot x_G+m\odot x$, with generator output $x_G=G(m\odot x, z)$. The in-painted examples provided by the generator cause the discriminator to learn features that generalize to the related task of classifying objects. The penultimate layer of the discriminator is then shared with the classifier, whose cross-entropy loss is combined with the discriminator loss.
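The mask-based composition $x_I=(1-m)\odot x_G+m\odot x$ is easy to verify on a toy array (the generator output below is a constant stand-in, not a real network):

```python
# Sketch of the CCGAN input composition: a binary mask m keeps the context
# and zeroes out a hole; the in-painted image combines generator output
# inside the hole with real pixels outside it.
import numpy as np

x = np.arange(16, dtype=float).reshape(4, 4)   # toy "image"
m = np.ones((4, 4)); m[1:3, 1:3] = 0.0         # mask: 0 marks the hole

x_context = m * x                              # generator input  m ⊙ x
x_G = np.full((4, 4), -1.0)                    # stand-in for G(m ⊙ x, z)
x_I = (1 - m) * x_G + m * x                    # in-painted image

# real pixels survive outside the hole; generated pixels fill the hole
print(x_I)
```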
\textbf{Improved GAN.}
There are several methods to adapt GANs to the semi-supervised classification scenario. CatGAN \cite{DBLP:journals/corr/Springenberg15} forces the discriminator to maximize the mutual information between examples and their predicted class distributions instead of training the discriminator as a binary classifier. To overcome the learned-representation bottleneck of CatGAN, Semi-supervised GAN (SGAN) \cite{DBLP:journals/corr/Odena16a} learns a generator and a classifier simultaneously. The classifier network has $(K+1)$ output units corresponding to $[y_1,y_2,\ldots, y_K, y_{K+1}]$, where $y_{K+1}$ represents the outputs generated by $G$. Similar to SGAN, Improved GAN \cite{DBLP:conf/nips/SalimansGZCRCC16} solves a $(K+1)$-class classification problem. The structure of Improved GAN is shown in Fig.~\ref{fig:generativeModel}(5). Real examples are assigned to one of the first $K$ classes, and the additional $(K+1)$th class consists of the synthetic images generated by the generator $G$. This work proposes improved techniques to train GANs, \latinphrase{i.e.}\xspace, feature matching, minibatch discrimination, historical averaging, one-sided label smoothing, and virtual batch normalization, where feature matching is used to train the generator. The generator is trained by minimizing the discrepancy between features of the real and the generated examples, that is $\|\mathbb{E}_{x\in X} D(x)-\mathbb{E}_{z\sim p(z)}D(G(z))\|_2^2$, rather than by maximizing the likelihood of its generated examples being classified into the $K$ real classes. The loss function for training the classifier becomes
\begin{equation}
\begin{aligned}
&\max_D \mathbb{E}_{(x,y)\sim p(x,y)}\log p_D(y|x,y\le K) \\
&+\mathbb{E}_{x\sim p(x)}\log p_D(y\le K|x)+\mathbb{E}_{x\sim p_G}\log p_D(y=K+1|x),
\end{aligned}
\label{equ:improvedGAN}
\end{equation}
where the first term of Eq.~(\ref{equ:improvedGAN}) denotes the supervised cross-entropy loss, while the last two terms are the unsupervised losses from the unlabeled and generated data, respectively.
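The feature-matching objective can be sketched numerically with a hand-set feature map standing in for the discriminator's intermediate layer (an assumed toy setup, not the original implementation):

```python
# Toy sketch of the Improved GAN feature-matching objective
# || E_x f(x) - E_z f(G(z)) ||_2^2, where f is a stand-in for an
# intermediate feature layer of the discriminator.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.stack([x, x ** 2], axis=1)   # hand-set feature map

x_real = rng.normal(1.0, 1.0, 5000)
z = rng.normal(0.0, 1.0, 5000)

def fm_loss(shift):                           # generator G(z) = z + shift
    diff = f(x_real).mean(axis=0) - f(z + shift).mean(axis=0)
    return (diff ** 2).sum()

# the loss is smaller when the generated statistics match the real ones
print(fm_loss(0.0), fm_loss(1.0))
```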
\textbf{GoodBadGAN.}
GoodBadGAN \cite{DBLP:conf/nips/DaiYYCS17} realizes that the generator and discriminator in \cite{DBLP:conf/nips/SalimansGZCRCC16} may not be optimal simultaneously, \latinphrase{i.e.}\xspace, the discriminator achieves good performance in SSL, while the generator may generate visually unrealistic samples. The structure of GoodBadGAN is shown in Fig.~\ref{fig:generativeModel}(5).
The method gives theoretical justifications of why using bad samples from the generator can boost SSL performance. Generally, the generated samples, along with the loss function (Eq.~(\ref{equ:improvedGAN})), can force the decision boundary of the discriminator to lie between the data manifolds of different categories, which improves the generalization of the discriminator. Based on this analysis, GoodBadGAN learns a bad generator by explicitly adding a penalty term $\mathbb{E}_{x\sim p_G}\log p(x)\mathbb{I}[p(x)>\epsilon]$ to generate bad samples, where $\mathbb{I}[\cdot]$ is an indicator function and $\epsilon$ is a threshold, which ensures that only high-density samples are penalized while low-density samples are unaffected. Further, to guarantee the strong true-fake belief in the optimal conditions, a conditional entropy term $\mathbb{E}_{x \sim p_x} \sum_{k=1}^{K} \log p_D(k|x)$ is added to the discriminator objective function in Eq.~(\ref{equ:improvedGAN}).
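A toy numeric check of the density-based penalty (assuming a known 1-D data density, which in practice must be estimated):

```python
# Sketch of the GoodBadGAN penalty E_{x~p_G}[log p(x) * I(p(x) > eps)]:
# only generated samples that land in high-density regions of the true
# data distribution contribute to the term.
import numpy as np

rng = np.random.default_rng(0)
p = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)   # true density N(0,1)
eps = 0.1

x_good = rng.normal(0.0, 1.0, 10000)    # generator that mimics the data
x_bad = rng.normal(8.0, 1.0, 10000)     # "bad" generator samples off-manifold

def penalty(xg):
    px = p(xg)
    return np.mean(np.log(px + 1e-12) * (px > eps))

print(penalty(x_good), penalty(x_bad))  # the bad generator incurs no penalty
```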
\textbf{Localized GAN.}
Localized GAN \cite{DBLP:conf/cvpr/QiZHEWH18} focuses on using local coordinate charts to parameterize the local geometry of data transformations at different locations on the manifold, rather than a single global parameterization. This work suggests that Localized GAN can help train a locally consistent classifier by exploring the manifold geometry. The architecture of Localized GAN is shown in Fig.~\ref{fig:generativeModel}(6). Like the methods introduced in \cite{DBLP:conf/nips/SalimansGZCRCC16,DBLP:conf/nips/DaiYYCS17}, Localized GAN attempts to solve the $(K+1)$-class classification problem that outputs the probability of $x$ being assigned to each class. For a classifier, $\nabla_z D(G(x,z))$ depicts the change of the classification decision on the manifold formed by $G(x,z)$, and the evolution of $D(\cdot)$ restricted to $G(x,z)$ can be written as
\begin{equation}
|D(G(x,z+\sigma z))-D(G(x,z))|^2\approx \|\nabla_x^G D(x)\|^2 \sigma z,
\end{equation}
which shows that penalizing $\|\nabla_x^G D(x)\|^2$ can train a robust classifier with a small perturbation $\sigma z$ on a manifold. This probabilistic classifier can be introduced by adding $\sum_{k=1}^{K}\mathbb{E}_{x\sim p_x}\|\nabla_x^G \log p(y=k|x)\|^2$, where $\nabla_x^G \log p(y=k|x)$ is the gradient of the log-likelihood along with the manifold $G(x,z)$.
\textbf{CT-GAN.}
CT-GAN \cite{DBLP:conf/iclr/WeiGL0W18} combines consistency training with WGAN \cite{DBLP:journals/corr/ArjovskyCB17} for semi-supervised classification problems. The structure of CT-GAN is shown in Fig.~\ref{fig:generativeModel}(7). Following \cite{DBLP:conf/nips/GulrajaniAADC17}, this method also lays the Lipschitz continuity condition over the manifold of the real data to improve the training of WGAN. Moreover, CT-GAN devises a regularization over a pair of samples drawn near the manifold, following the basic definition of 1-Lipschitz continuity. In particular, each real instance $x$ is perturbed twice, and a Lipschitz constant is used to bound the difference between the discriminator's responses to the perturbed instances $x',x''$. Formally, the value function of WGAN is
\begin{equation}
\min_G\max_D \mathbb{E}_{x\sim p_x}D(x)-\mathbb{E}_{z\sim p_z}D(G(z)),
\label{equ:wgan}
\end{equation}
where $D$ belongs to the set of 1-Lipschitz functions.
The objective function for updating the discriminator includes: (a) the basic Wasserstein distance in Eq.~(\ref{equ:wgan}), (b) the gradient penalty $GP|_{\hat{x}}$ \cite{DBLP:conf/nips/GulrajaniAADC17} used in the improved training of WGAN, where $\hat{x}=\epsilon x+(1-\epsilon)G(z)$, and (c) a consistency regularization $CT|_{x',x''}$. For semi-supervised classification, CT-GAN uses Eq.~(\ref{equ:improvedGAN}) instead of Eq.~(\ref{equ:wgan}) for training the discriminator, and then adds the consistency regularization $CT|_{x',x''}$.
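A minimal numeric sketch of the consistency term $CT|_{x',x''}$ (with a hand-set smooth discriminator and Gaussian perturbations standing in for the stochastic perturbations used in practice):

```python
# Sketch of the CT-GAN consistency term: each real input is perturbed twice
# and the squared difference between the discriminator's responses is
# penalized above a margin constant M'.
import numpy as np

rng = np.random.default_rng(0)
D = lambda x: np.tanh(x.sum(axis=1))          # hand-set smooth discriminator

x = rng.normal(size=(64, 8))
x1 = x + 0.01 * rng.normal(size=x.shape)      # first perturbation x'
x2 = x + 0.01 * rng.normal(size=x.shape)      # second perturbation x''

M = 0.0                                        # assumed margin constant M'
CT = np.mean(np.maximum(0.0, (D(x1) - D(x2)) ** 2 - M))
print(CT)   # small when D is smooth around the data manifold
```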
\textbf{BiGAN.}
Bidirectional Generative Adversarial Networks (BiGANs) \cite{DBLP:conf/iclr/DonahueKD17} is an unsupervised feature learning framework. The architecture of BiGAN is shown in Fig.~\ref{fig:generativeModel}(8). In contrast to the standard GAN framework, BiGAN adds an encoder $E$ to this framework, which maps data $x$ to $z'$, resulting in a data pair $(x, z')$. The data pair $(x,z')$ and the data pair generated by generator $G$ constitute two kinds of true and fake data pairs. The BiGAN discriminator $D$ is to distinguish the true and fake data pairs. In this work, the value function for training the discriminator becomes,
\begin{equation}
\begin{aligned}
\min_{G,E}\max_{D} V(D,E,G)=\mathbb{E}_{x\sim \mathcal{X}}\underset{\log D(x,E(x))}{\underbrace{\left[ \mathbb{E}_{z\sim p_E\left( \cdot |x \right)}\left[ \log D\left( x,z \right) \right] \right] }}
\\
+\mathbb{E}_{z\sim p\left( z \right)}\underset{\log \left( 1-D\left( G\left( z \right) ,z \right) \right)}{\underbrace{\left[ \mathbb{E}_{x\sim p_G\left( \cdot |z \right)}\left[ \log \left( 1-D\left( x,z \right) \right) \right] \right] }}.
\end{aligned}
\label{equ:bigan}
\end{equation}
\textbf{ALI.}
Adversarially Learned Inference (ALI) \cite{DBLP:conf/iclr/DumoulinBPLAMC17} is a GAN-like adversarial framework based on the combination of an inference network and a generative model.
This framework consists of three networks, a generator, an inference network and a discriminator.
The generative network $G$ is used as a decoder to map latent variables $z$ (with a prior distribution) to the data distribution, $x'=G(z)$, which forms joint pairs $(x',z)$. The inference network $E$ attempts to encode training samples $x$ to latent variables $z'=E(x)$, and joint pairs $(x,z')$ are similarly drawn. The discriminator network $D$ is required to distinguish the joint pairs $(x,z')$ from the joint pairs $(x',z)$. As discussed above, the central architecture of ALI is similar to that of BiGAN (see Fig.~\ref{fig:generativeModel}(8)). In semi-supervised settings, this framework adapts the discriminator network proposed in \cite{DBLP:conf/nips/SalimansGZCRCC16}, and shows promising performance on semi-supervised benchmarks on SVHN and CIFAR-10.
\textbf{Augmented BiGAN.}
Kumar \latinphrase{et~al.}\xspace \cite{DBLP:conf/nips/KumarSF17} propose an extension of BiGAN called Augmented BiGAN for SSL. This framework has a BiGAN-like architecture that consists of an encoder, a generator and a discriminator. Since trained GANs produce realistic images, the generator can be considered to obtain the tangent spaces of the image manifold. The estimated tangents infer the desirable invariances which can be injected into the discriminator to improve SSL performance. In particular, the Augmented BiGAN uses feature matching loss \cite{DBLP:conf/nips/SalimansGZCRCC16}, $\|\mathbb{E}_{x\in X}D(E(x),x)-\mathbb{E}_{z\sim p(z)}D(z,G(z))\|^2_2$ to optimize the generator network and the encoder network. Moreover, to avoid the issue of class-switching (the class of $G(E(x))$ changed during the decoupled training), a third pair $(E(x), G(E(x)))$ loss term $\mathbb{E}_{x\sim p( x)}[ \log ( 1-D( E( x) ,G_x( E( x) ) )) ] $ is added to the objective function Eq.~(\ref{equ:bigan}).
\textbf{Triple GAN.}
Triple GAN \cite{DBLP:conf/nips/LiXZZ17} is presented to address the issue that the generator and discriminator of a GAN have incompatible loss functions, \latinphrase{i.e.}\xspace, the generator and the discriminator cannot be optimal at the same time \cite{DBLP:conf/nips/SalimansGZCRCC16}. The problem was also noted in \cite{DBLP:conf/nips/DaiYYCS17}, but the solution is different. As shown in Fig.~\ref{fig:generativeModel}(9), Triple GAN tackles this problem by playing a three-player game. The framework consists of three parts: a generator $G$ using a conditional network to generate the corresponding fake samples for the true labels, a classifier $C$ that generates pseudo labels for given real data, and a discriminator $D$ distinguishing whether a data-label pair is from the real labeled dataset or not. The Triple GAN loss function may be written as
\begin{equation}
\begin{aligned}
\min_{C,G}\max_{D} V(C,G,D) =\mathbb{E}_{\left( x,y \right) \sim p\left( x,y \right)}\left[ \log D\left( x,y \right) \right]
\\
+ ~\alpha \mathbb{E}_{\left( x,y \right) \sim p_c\left( x,y \right)}\left[ \log \left( 1-D\left( x,y \right) \right) \right]
\\
+\left( 1-\alpha \right) \mathbb{E}_{\left( x,y \right) \sim p_g\left( x,y \right)}\left[ \log \left( 1-D\left( G\left( y,z \right) ,y \right) \right) \right],
\end{aligned}
\label{equ:tripleGAN}
\end{equation}
where $D$ obtains label information about the unlabeled data from the classifier $C$ and forces the generator $G$ to generate realistic image-label samples.
\textbf{Enhanced TGAN.}
Based on the architecture of Triple GAN \cite{DBLP:conf/nips/LiXZZ17}, Enhanced TGAN \cite{DBLP:conf/cvpr/WuDLL0W19} modifies Triple GAN by re-designing the generator loss function and the classifier network. The generator generates images conditioned on the class distribution and is regularized by class-wise mean feature matching. The classifier network includes two classifiers that collaboratively learn to provide more categorical information for generator training. Additionally, a semantic matching term is added to enhance the semantic consistency between the generator and the classifier network. The discriminator $D$ learns to distinguish the labeled data pair $(x,y)$ from the synthesized data pair $(G(z),\tilde{y})$ and the predicted data pair $(x,\bar{y})$. The corresponding objective function is similar to Eq.~(\ref{equ:tripleGAN}), where $(G(z),\tilde{y})$ is sampled from the pre-specified distribution $p_g$, and $(x,\bar{y})$ denotes the predicted data pair determined by $p_c(x)$.
\textbf{MarginGAN.}
MarginGAN \cite{DBLP:conf/nips/DongL19} is another extension framework based on Triple GAN \cite{DBLP:conf/nips/LiXZZ17}. From the perspective of classification margin, this framework works better than Triple GAN when used for semi-supervised classification. The architecture of MarginGAN is presented in Fig.~\ref{fig:generativeModel}(10). Like Triple GAN, MarginGAN includes three components: a generator $G$ which tries to maximize the margin of generated samples, a classifier $C$ used for decreasing the margin of fake images, and a discriminator $D$ trained as usual to distinguish real samples from fake images. This method mitigates the performance damage caused by inaccurate pseudo labels in SSL and improves accuracy.
\textbf{Triangle GAN.}
Triangle Generative Adversarial Network ($\triangle$-GAN) \cite{DBLP:conf/nips/GanCWPZLLC17} introduces a new architecture to match cross-domain joint distributions. The architecture of the $\triangle$-GAN is shown in Fig.~\ref{fig:generativeModel}(11). The $\triangle$-GAN can be considered as an extended version of BiGAN \cite{DBLP:conf/iclr/DonahueKD17} or ALI \cite{DBLP:conf/iclr/DumoulinBPLAMC17}. This framework is a four-branch model that consists of two generators $E$ and $G$, and two discriminators $D_1$ and $D_2$. The two generators can learn two different joint distributions by two-way matching between two different domains. At the same time, the discriminators are used as an implicit ternary function, where $D_2$ determines whether the data pair is from $(x,y')$ or from $(G(z),y)$, and $D_1$ distinguishes the real data pair $(x,y)$ from the fake data pair $(G(z),y)$.
\textbf{Structured GAN.}
Structured GAN \cite{DBLP:conf/nips/DengZLYXZX17} studies the problem of semi-supervised conditional generative modeling based on designated semantics or structures. The architecture of Structured GAN (see Fig.~\ref{fig:generativeModel}(12)) is similar to that of Triangle GAN \cite{DBLP:conf/nips/GanCWPZLLC17}. Specifically, Structured GAN assumes that the samples $x$ are generated conditioned on two independent latent variables, \latinphrase{i.e.}\xspace, $y$, which encodes the designated semantics, and $z$, which contains other variation factors. Training Structured GAN involves solving two adversarial games that have their equilibrium concentrating at the true joint data distributions $p(x,z)$ and $p(x,y)$. The synthesized data pairs $(x',y)$ and $(x',z)$ are generated by the generator $G(y,z)$, where $(x',y)$ is mixed with the real sample pair $(x,y)$ as input for training the discriminator $D(x,y)$, and $(x',z)$ is blended with $E$'s output pair $(x,z')$ for the discriminator $D(x,z)$.
\textbf{$\bm{R^3}$-CGAN.}
Liu \latinphrase{et~al.}\xspace \cite{LiuYR3GAN20} propose a Class-conditional GAN with a Random Regional Replacement (R3-regularization) technique, called $R^3$-CGAN. Their framework and training strategy rely on Triangle GAN \cite{DBLP:conf/nips/GanCWPZLLC17}. The $R^3$-CGAN architecture comprises four parts: a generator $G$ to synthesize fake images with specified class labels, a classifier $C$ to generate instance-label pairs of real unlabeled images with pseudo-labels, a discriminator $D_1$ to identify real or fake pairs,
and another discriminator $D_2$ to distinguish the two types of fake data.
Specifically, CutMix \cite{DBLP:conf/iccv/YunHCOYC19}, a Random Regional Replacement strategy, is used to construct two types of between-class instances (cross-category instances and real-fake instances). These instances are used to regularize the classifier $C$ and the discriminator $D_1$. Through the minimax game among the four players, class-specific information is effectively exploited for the downstream task.
\textbf{Summary.}
Comparing the Semi-GAN methods discussed above, we find that the main differences lie in the number and type of the basic modules, such as the generator, encoder, discriminator, and classifier. As shown in Fig.~\ref{fig:generativeModel}(1), we can trace the evolutionary relationships among the Semi-GAN models. Overall, CatGAN \cite{DBLP:journals/corr/Springenberg15} and CCGAN \cite{DBLP:journals/corr/DentonGF16} extend the basic GAN by including additional information in the model, such as category information and in-painted images. Based on Improved GAN \cite{DBLP:conf/nips/SalimansGZCRCC16}, Localized GAN \cite{DBLP:conf/cvpr/QiZHEWH18} and CT-GAN \cite{DBLP:conf/iclr/WeiGL0W18} consider local information and consistency regularization, respectively. BiGAN \cite{DBLP:conf/iclr/DonahueKD17} and ALI \cite{DBLP:conf/iclr/DumoulinBPLAMC17} learn an inference model during the training process by adding an encoder module. To address the problem that the generator and discriminator cannot be optimal at the same time, Triple-GAN \cite{DBLP:conf/nips/LiXZZ17} adds an independent classifier instead of using the discriminator as a classifier.
\subsection{Semi-supervised VAE}\label{sec:semiVAE}
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/SemiVAE.pdf}
\caption{A glimpse of probabilistic graphical models used for VAE-based deep generative semi-supervised methods. Each method contains two models, the generative model $P$ and the inference model $Q$. The variational parameters $\theta$ and $\phi$ are learned jointly by the incoming connections (\latinphrase{i.e.}\xspace, deep neural networks).}
\label{fig:semiVAE}
\end{figure*}
Variational AutoEncoders (VAEs) \cite{DBLP:journals/corr/KingmaW13,DBLP:conf/icml/RezendeMW14} are flexible models which combine deep autoencoders with generative latent-variable models. The generative model captures representations of the distributions rather than the observations of the dataset, and defines the joint distribution in the form of $p(x,z)=p(z)p(x|z)$, where $p(z)$ is a prior distribution over latent variables $z$. Since the true posterior $p(z|x)$ is generally intractable, the generative model is trained with the aid of an approximate posterior distribution $q(z|x)$. The architecture of VAEs is a two-stage network, an encoder to construct a variational approximation $q(z|x)$ to the posterior $p(z|x)$, and a decoder to parameterize the likelihood $p(x|z)$. The variational approximation of the posterior maximizes the marginal likelihood, and the evidence lower bound (ELBO) may be written as
\begin{align}
\log p(x)&=\log \mathbb{E}_{q(z|x)}\left[\frac{p(z)p(x|z)}{q(z|x)}\right] \nonumber\\
&\geq \mathbb{E}_{q(z|x)}[\log p(z)p(x|z)-\log q(z|x)].
\end{align}
There are three reasons why latent-variable models can be useful for SSL: (a) It is a natural way to incorporate unlabeled data, (b) The ability to disentangle representations can be easily implemented via the configuration of latent variables, and (c) It also allows us to use variational neural methods. In the following, we review several representative latent variable methods for semi-supervised learning.
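To make the bound above concrete, here is a minimal NumPy sketch (a toy illustration, not from any of the surveyed papers). For the conjugate model $z \sim N(0,1)$, $x|z \sim N(z,1)$, the marginal $p(x)=N(0,2)$ is tractable, so a Monte Carlo estimate of the ELBO can be checked against it: when $q(z|x)$ equals the true posterior $N(x/2,1/2)$ the ELBO matches $\log p(x)$, and any other $q$ gives a strictly smaller value.

```python
import numpy as np

def log_normal(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def elbo_estimate(x, q_mean, q_var, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E_q[log p(z) + log p(x|z) - log q(z|x)]
    for the toy model z ~ N(0,1), x|z ~ N(z,1)."""
    rng = np.random.default_rng(seed)
    z = q_mean + np.sqrt(q_var) * rng.standard_normal(n_samples)
    log_p_z = log_normal(z, 0.0, 1.0)          # prior
    log_p_x_given_z = log_normal(x, z, 1.0)    # likelihood
    log_q = log_normal(z, q_mean, q_var)       # approximate posterior
    return np.mean(log_p_z + log_p_x_given_z - log_q)

x = 1.3
log_px = log_normal(x, 0.0, 2.0)               # exact marginal: x ~ N(0,2)
elbo_exact_q = elbo_estimate(x, x / 2, 0.5)    # q equals the true posterior
elbo_loose_q = elbo_estimate(x, 0.0, 1.0)      # a mismatched q
```

The gap between $\log p(x)$ and the ELBO is exactly $\mathrm{KL}(q(z|x)\,\|\,p(z|x))$, which is why the bound is tight only for the exact posterior.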
\textbf{SSVAEs.}
SSVAEs denotes the VAE-based generative models with latent encoder representation proposed in \cite{DBLP:conf/nips/KingmaMRW14}.
The first one, i.e., the latent-feature discriminative model, referred to as M1 \cite{DBLP:conf/nips/KingmaMRW14}, can provide more robust latent features with a deep generative model of the data. As shown in Fig.~\ref{fig:semiVAE}(1), $p_{\theta}(x|z)$ is a non-linear transformation, \latinphrase{e.g.}\xspace, a deep neural network. The latent variables $z$ can follow a Gaussian distribution or a Bernoulli distribution. An approximate sample of the posterior distribution over the latent variable, $q_{\phi}(z|x)$, is used as the classifier feature for the class label $y$. The second one, namely the generative semi-supervised model, referred to as M2 \cite{DBLP:conf/nips/KingmaMRW14}, describes the data as generated by a latent class variable $y$ and a continuous latent variable $z$, expressed as $p_{\theta}(x|z,y)p(z)p(y)$ (as depicted in Fig.~\ref{fig:semiVAE}(2)). $p(y)$ is the multinomial distribution, where the class labels $y$ are treated as latent variables for unlabeled data. $p_{\theta}(x|z,y)$ is a suitable likelihood function. The inferred posterior distribution $q_{\phi}(z|y,x)$ can predict any missing labels.
Stacked generative semi-supervised model, called M1+M2, uses the generative model M1 to learn the new latent representation $z_1$, and uses the embedding from $z_1$ instead of the raw data $x$ to learn a generative semi-supervised model M2. As shown in Fig.~\ref{fig:semiVAE}(3), the whole process can be abstracted as follows:
\begin{equation}
p _{\theta} (x,y,z_1,z_2)=p(y)p(z_2)p_{\theta}(z_1|y,z_2)p_{\theta}(x|z_1),
\end{equation}
where $p_{\theta}(z_1|y,z_2)$ and $p_{\theta}(x|z_1)$ are parameterised as deep neural networks. In all the above models, $q_{\phi}({z|x})$ is used to approximate the true posterior distribution $p({z|x})$, and following the variational principle, the boundary approximation lower bound of the model is derived to ensure that the approximate posterior probability is as close to the true posterior probability as possible.
\textbf{ADGM.}
Auxiliary Deep Generative Models (ADGM) \cite{DBLP:conf/icml/MaaloeSSW16} extends SSVAEs \cite{DBLP:conf/nips/KingmaMRW14} with auxiliary variables, as depicted in Fig.~\ref{fig:semiVAE}(4). The auxiliary variables can improve the variational approximation and make the variational distribution more expressive by training deep generative models with multiple stochastic layers.
Adding the auxiliary variable $a$ leaves the generative model of $x,y$ unchanged while significantly improving the representative power of the posterior approximation. An additional inference network is introduced such that:
\begin{equation}
q_{\phi}(a,y,z|x)=q_{\phi}(z|a,y,x)q_{\phi}(y|a,x)q_{\phi}(a|x).
\end{equation}
The framework has the generative model $p$ defined as $p_{\theta}(a)p_{\theta}(y)p_{\theta}(z)p_{\theta}(x|z,y)$, where $a,y,z$ are the auxiliary variable, class label, and latent features, respectively. Learning the posterior distribution is intractable. Thus we define the approximation as $q_{\phi}(a|{x})q_{\phi}({z}|y,{x})$ and a classifier $q_{\phi}(y|a,{x})$.
The auxiliary unit $a$ actually introduces a class-specific latent distribution between $x$ and $y$, resulting in a more expressive distribution $q_{\phi}(y|a,x)$. Formally, \cite{DBLP:conf/icml/MaaloeSSW16} employs the similar variational lower bound $\mathbb{E}_{q_{\phi}(a,z|x)}[\log p_{\theta}(a,x,y,z)-\log q_{\phi}(a,z|x,y)]$ on the marginal likelihood, with $q_{\phi}(a,z|x,y)=q_{\phi}(a|x)q_{\phi}(z|y,x)$. Similarly, the unlabeled ELBO is
$ \mathbb{E}_{q_{\phi}(a,y,z|x)}[\log p_{\theta}(a,x,y,z)-\log q_{\phi}(a,y,z|x)]$
with $q_{\phi}(a,y,z|x)=q_{\phi}(z|y,x)q_{\phi}(y|a,x)q_{\phi}(a|x)$.
Interestingly, by reversing the direction of the dependence between $x$ and $a$, a model similar to the stacked version of M1 and M2 is recovered (Fig.~\ref{fig:semiVAE}(3)), with what the authors denote skip connections from the second stochastic layer and the labels to the inputs $x$. In this case the generative model is affected, and the authors call this the Skip Deep Generative Model (SDGM). This model can be trained end to end using stochastic gradient descent (SGD); according to \cite{DBLP:conf/icml/MaaloeSSW16}, the skip connection between $z$ and $x$ is crucial for training to converge. Unsurprisingly, joint training of the model improves significantly upon the performance presented in \cite{DBLP:conf/nips/KingmaMRW14}.
\textbf{Infinite VAE.}
Infinite VAE \cite{DBLP:conf/cvpr/AbbasnejadDH17} proposes a non-parametric Bayesian mixture model for combining variational autoencoders. The model can adapt to the input data through mixing coefficients determined by a Dirichlet process. It combines Gibbs sampling and variational inference, which enables the model to learn the input's underlying structure efficiently. Formally, Infinite VAE employs the mixing coefficients to assist SSL by combining the unsupervised generative model and a supervised discriminative model.
The infinite mixture generative model is defined as
\begin{equation}
p(c,\pi,x,z)=p(c|\pi)p_{\alpha}(\pi)p_{\theta}(x|c,z)p(z),
\end{equation}
where $c$ denotes the assignment matrix for each instance to a VAE component where the VAE-$i$ can best reconstruct instance $i$. $\pi$ is the mixing coefficient prior for $c$, drawn from a Dirichlet distribution with parameter $\alpha$. Each latent variable $z_i$ in each VAE is drawn from a Gaussian distribution.
\textbf{Disentangled VAE.}
Disentangled VAE \cite{DBLP:conf/nips/NarayanaswamyPM17} attempts to learn disentangled representations using partially-specified graphical model structures, encoding distinct aspects of the data into separate variables. It explores the graphical model for modeling a general dependency on observed and unobserved latent variables with neural networks, and a stochastic computation graph \cite{DBLP:conf/nips/SchulmanHWA15} is used to infer with and train the resultant generative model. For this purpose, importance sampling estimates are used to maximize the lower bound of both the supervised and semi-supervised likelihoods. Formally, this framework considers the conditional probability $q_{y,z|x}$, which has a factorization $q_{\phi}(y,z|x)=q_{\phi}(y|x,z)q_{\phi}(z|x)$ rather than $q_{\phi}(y,z|x)=q_{\phi}(z|x,y)q_{\phi}(y|x)$ in \cite{DBLP:conf/nips/KingmaMRW14}, which means we can no longer compute a simple Monte Carlo estimator by sampling from the unconditional distribution $q_{\phi}(z|x)$. Thus, the variational lower bound for the supervised term expands as
\begin{equation}
\mathbb{E}_{q_{\phi}(z|x,y)}[\log p_{\theta}(x|y,z)p(y)p(z)-\log q_{\phi}(y,z|x)].
\end{equation}
\textbf{SDVAE.}
Semi-supervised Disentangled VAE (SDVAE) \cite{DBLP:journals/isci/LiPWPYC19} incorporates the label information into the latent representations by encoding the input into a disentangled representation and a non-interpretable representation. The disentangled variable captures categorical information, and the non-interpretable variable consists of other uncertain information from the data. As shown in Fig.~\ref{fig:semiVAE}(5), SDVAE assumes the disentangled variable $v$ and the non-interpretable variable $u$ are conditionally independent given $x$, \latinphrase{i.e.}\xspace, $q_{\phi}(u,v|x)=q_{\phi}(u|x)q_{\phi}(v|x)$. This means that $q_{\phi}(v|x)$ is the encoder for the disentangled representation, and $q_{\phi}(u|x)$ denotes the encoder for the non-interpretable representation. Based on those assumptions, the variational lower bound is written as:
\begin{equation}
\mathbb{E}_{q(u|x),q(v|x)}[\log p(x|u,v)p(v)p(u)-\log q(u|x)q(v|x)].
\end{equation}
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/ConsistencyRegularization.pdf}
\caption{A glimpse of the diverse range of architectures used for consistency regularization semi-supervised methods. In addition to the identifiers in the figure, $\zeta$ denotes the perturbation noise, and $\mathcal{R}$ is the consistency constraint. }
\label{fig:consistencyRegularization}
\end{figure*}
\textbf{ReVAE.}
Reparameterized VAE (ReVAE) \cite{DBLP:journals/corr/abs-2006-10102} develops a novel way to encode supervised information, capturing label information through auxiliary variables instead of the latent variables used in prior work \cite{DBLP:conf/nips/KingmaMRW14}. The graphical model is illustrated in Fig.~\ref{fig:semiVAE}(6). In contrast to SSVAEs, ReVAE captures meaningful representations of data with a principled variational objective. Moreover, ReVAE carefully designs the mappings between auxiliary and latent variables. In this model, a conditional generative model $p_{\psi}(z|y)$ is introduced to address the requirement for inference at test time. Similar to \cite{DBLP:conf/nips/KingmaMRW14} and \cite{DBLP:conf/icml/MaaloeSSW16}, ReVAE treats $y$ as a known observation when the label is available in the supervised setting, and as an additional variable in the unsupervised case. In particular, the latent space can be partitioned into two disjoint subsets under the assumption that label information captures only specific aspects.
\textbf{Summary.} As the name indicates, Semi-supervised VAE applies the VAE architecture for handling SSL problems. An advantage of these methods is that meaningful representations of data can be learned by the generative latent-variable models. The basic framework of these Semi-supervised VAE methods is M2 \cite{DBLP:conf/nips/KingmaMRW14}. On the basis of the M2 framework, ADGM \cite{DBLP:conf/icml/MaaloeSSW16} and ReVAE \cite{DBLP:journals/corr/abs-2006-10102} consider introducing additional auxiliary variables, although the roles of the auxiliary variables in the two models are different. Infinite VAE \cite{DBLP:conf/cvpr/AbbasnejadDH17} is a hybrid of several VAE models to improve the performance of the entire framework. Disentangled VAE \cite{DBLP:conf/nips/NarayanaswamyPM17} and SDVAE \cite{DBLP:journals/isci/LiPWPYC19} solve the semi-supervised VAE problem by different disentangled methods. Under semi-supervised conditions, when a large number of labels are unobserved, the key to this kind of method is how to deal with the latent variables and label information.
\section{Consistency Regularization}\label{sec:regularization}
In this section, we introduce the consistency regularization methods for semi-supervised deep learning. In these methods, a consistency regularization term is applied to the final loss function to specify the prior constraints assumed by researchers. Consistency regularization is based on the manifold assumption or the smoothness assumption, and describes a category of methods in which realistic perturbations of the data points should not change the output of the model \cite{DBLP:conf/nips/OliverORCG18}. Consequently, consistency regularization can be regarded as finding a smooth manifold on which the dataset lies by leveraging the unlabeled data \cite{DBLP:conf/nips/BelkinN01}.
The most common structure of consistency regularization SSL methods is the Teacher-Student structure. As a student, the model learns as before, and as a teacher, the model generates targets simultaneously. Since the model itself generates the targets, they may be incorrect and are then used by the student for learning. In essence, the consistency regularization methods suffer from confirmation bias \cite{DBLP:conf/nips/TarvainenV17}, a risk that can be mitigated by improving the quality of the targets. Formally, following \cite{DBLP:conf/iccv/KeWYRL19}, we assume that dataset $X$ consists of a labeled subset $X_l$ and an unlabeled subset $X_u$. Let $\theta '$ denote the weights of the target (teacher), and $\theta$ denote the weights of the basic student. The consistency constraint is defined as:
\begin{equation}
\mathbb{E}_{x\in X } \mathcal{R}(f(\theta, x), \mathcal{T}_x),
\label{equ:consistency_cost}
\end{equation}
where $f(\theta,x)$ is the prediction from model $f(\theta)$ for input $x$, and $\mathcal{T}_x$ is the consistency target produced by the teacher. $\mathcal{R}(\cdot, \cdot)$ measures the distance between two vectors and is usually set to the Mean Squared Error (MSE) or the KL divergence. Different consistency regularization techniques vary in how they generate the target, and there are several ways to improve the quality of the target $\mathcal{T}_x$. One strategy is to carefully select the perturbation instead of simply applying additive or multiplicative noise. Another is to carefully choose the teacher model instead of replicating the student model.
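As a minimal illustration of the constraint in Eq.~(\ref{equ:consistency_cost}), the following NumPy sketch (a toy example with hypothetical logits, not tied to any particular method) implements $\mathcal{R}(\cdot,\cdot)$ both as the MSE and as the KL divergence between a student prediction and a fixed teacher target:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mse_consistency(p_student, p_teacher):
    """R(.,.) as mean squared error between prediction vectors."""
    return np.mean((p_student - p_teacher) ** 2)

def kl_consistency(p_student, p_teacher, eps=1e-12):
    """R(.,.) as KL(teacher || student); the teacher target is held fixed."""
    return np.mean(np.sum(
        p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps)),
        axis=-1))

# hypothetical logits for one sample under two slightly different passes
student_logits = np.array([[2.0, 0.5, -1.0]])
teacher_logits = np.array([[1.8, 0.6, -0.9]])
ps, pt = softmax(student_logits), softmax(teacher_logits)
```

Both choices vanish when student and teacher agree and grow as the predictions drift apart, which is exactly the behavior the consistency term exploits on unlabeled data.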
\textbf{Ladder Network.}
Ladder Network \cite{DBLP:conf/nips/RasmusBHVR15,DBLP:conf/icml/PezeshkiFBCB16} is the first successful attempt towards using a Teacher-Student model, inspired by deep denoising AutoEncoders. The structure of the Ladder Network is shown in Fig.~\ref{fig:consistencyRegularization}(1). In the encoder, noise $\zeta$ is injected into all hidden layers as the corrupted feedforward path $x+\zeta \rightarrow \frac{\text{Encoder}}{f( \cdot)}\rightarrow \tilde{z}_1\rightarrow \tilde{z}_2$, which shares the mappings $f(\cdot)$ with the clean encoder feedforward path $x\rightarrow \frac{\text{Encoder}}{f(\cdot)}\rightarrow z_1\rightarrow z_2\rightarrow y$. The decoder path $\tilde{z}_1\rightarrow \tilde{z}_2 \rightarrow \frac{\text{Decoder}}{g( \cdot ,\cdot )}\rightarrow \hat{z}_2\rightarrow \hat{z}_1$ consists of the denoising functions $g(\cdot, \cdot)$, and the unsupervised denoising square error $\mathcal{R}$ on each layer is taken as the consistency loss between $\hat{z}_i$ and $z_i$. The latent skip connections differentiate the Ladder Network from a regular denoising AutoEncoder; they allow the higher-layer features to focus on more abstract, invariant features for the task. Formally, the Ladder Network unsupervised training loss $\mathcal{L}_u$, or the consistency loss, is computed as the MSE between the activations of the clean encoder $z_i$ and the reconstructed activations $\hat{z}_i$. Generally, $\mathcal{L}_u$ is
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x \right) ,g\left( f\left( \theta ,x+\zeta \right) \right) \right).
\label{equ:ladderNetwork}
\end{equation}
\textbf{$\bm{\Pi}$ Model.}
Unlike the perturbation used in the Ladder Network, the $\Pi$ Model \cite{DBLP:conf/nips/SajjadiJT16} creates two random augmentations of a sample for both labeled and unlabeled data. Techniques with non-deterministic behavior, such as randomized data augmentation, dropout, and random max-pooling, make an input sample yield different predictions when passed through the network several times. The structure of the $\Pi$ Model is shown in Fig.~\ref{fig:consistencyRegularization}(2). In each training epoch of the $\Pi$ Model, the same unlabeled sample propagates forward twice, while random perturbations are introduced by data augmentation and dropout. The two forward passes of the same sample may result in different predictions, and the $\Pi$ Model expects the two predictions to be as consistent as possible. Therefore, it introduces an unsupervised consistency loss,
\begin{equation}
\mathbb{E}_{x\in X} \mathcal{R}(f(\theta,x,\zeta_1),f(\theta,x,\zeta_2)),
\end{equation}
which minimizes the difference between the two predictions.
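A toy NumPy sketch of the two stochastic passes (using only input dropout and a random linear layer as a stand-in for a real network; all names here are illustrative) shows how the $\Pi$ Model consistency term is computed:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 3))   # a toy linear "network" with 3 outputs

def forward(x, rng, drop_p=0.5):
    """One stochastic pass: inverted dropout on the input, then a linear layer.
    A different rng produces a different dropout mask, i.e. a different zeta."""
    mask = rng.random(x.shape) > drop_p
    return (x * mask / (1 - drop_p)) @ W

x = rng.standard_normal(64)                  # one unlabeled sample
pred1 = forward(x, np.random.default_rng(1)) # first pass, noise zeta_1
pred2 = forward(x, np.random.default_rng(2)) # second pass, noise zeta_2
pi_loss = np.mean((pred1 - pred2) ** 2)      # unsupervised consistency term
```

Minimizing `pi_loss` pushes the model towards predictions that are invariant to the stochastic perturbations, without using any label.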
\textbf{Temporal Ensembling.}
Temporal Ensembling \cite{DBLP:conf/iclr/LaineA17} is similar to the $\Pi$ Model in that it forms a consensus prediction under different regularization and input augmentation conditions. The structure of Temporal Ensembling is shown in Fig.~\ref{fig:consistencyRegularization}(3). It modifies the $\Pi$ Model by leveraging the Exponential Moving Average (EMA) of the predictions from past epochs. In other words, while the $\Pi$ Model needs to forward a sample twice in each iteration, Temporal Ensembling reduces this computational overhead by using the EMA of the accumulated predictions over epochs as $\mathcal{T}_x$. Specifically, the ensemble outputs $Z_i$ are updated with the network outputs $z_i$ after each training epoch, \latinphrase{i.e.}\xspace, $Z_i\gets \alpha Z_i+\left( 1-\alpha \right) z_i$, where $\alpha$ is a momentum term. Due to Dropout and stochastic augmentations, $Z$ can be considered to contain an average ensemble of the outputs of $f(\cdot)$ during training. Thus, the consistency loss is:
\begin{equation}
\mathbb{E}_{x\in X} \mathcal{R} (f(\theta, x, \zeta_1), \text{EMA}(f(\theta, x, \zeta_2))).
\label{equ:temporalEnsembling}
\end{equation}
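The EMA accumulation above can be sketched in a few lines of NumPy; the division by $1-\alpha^t$ is the startup-bias correction used in \cite{DBLP:conf/iclr/LaineA17} (the per-epoch predictions below are synthetic, only to show that the ensembled target smooths the noisy per-epoch outputs):

```python
import numpy as np

def temporal_ensemble(preds_per_epoch, alpha=0.6):
    """Accumulate Z <- alpha*Z + (1-alpha)*z each epoch, and return the
    bias-corrected targets Z / (1 - alpha**t) as in Temporal Ensembling."""
    Z = np.zeros_like(preds_per_epoch[0])
    targets = []
    for t, z in enumerate(preds_per_epoch, start=1):
        Z = alpha * Z + (1 - alpha) * z
        targets.append(Z / (1 - alpha ** t))
    return targets

# synthetic noisy per-epoch predictions around a fixed class distribution
rng = np.random.default_rng(0)
true_pred = np.array([0.7, 0.2, 0.1])
epochs = [true_pred + 0.05 * rng.standard_normal(3) for _ in range(50)]
targets = temporal_ensemble(epochs)
```

After the first epoch the corrected target equals the first prediction exactly, and for constant inputs the target reproduces the input at every epoch, which confirms the bias correction.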
\begin{table*}
\centering
\caption{Summary of Consistency Regularization Methods}
\label{tab:consistency}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Methods} & \textbf{Techniques} & \textbf{Transformations} & \textbf{Consistency Constraints} \\
\hline
\makecell[c]{Ladder \\ Network} &\makecell[c]{Additional Gaussian Noise \\ in every neural layer} & input &$\mathbb{E}_{x\in X}\mathcal{R}( f( \theta ,x ) ,f( \theta ,x+\zeta ) ) $ \\
\hline
$\Pi$ Model &Different Stochastic Augmentations & input & $\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x,\zeta _1 \right) ,f\left( \theta ,x,\zeta _2 \right) \right)$ \\
\hline
\makecell[c]{Temporal \\ Ensembling} & \makecell[c]{Different Stochastic Augmentation \\ and EMA the predictions} & \makecell[c]{input, \\ predictions} & $\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x,\zeta _1 \right) ,\text{EMA}\left( f\left( \theta ,x,\zeta _2 \right) \right) \right) $\\
\hline
Mean Teacher &\makecell[c]{Different Stochastic Augmentation \\ and EMA the weights} & \makecell[c]{input,\\ weights } & $\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x,\zeta \right) ,f\left( \text{EMA}\left( \theta \right) ,x,\zeta \right) \right)
$
\\
\hline
VAT & Adversarial perturbation & input & $\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x \right) ,f\left( \theta ,x,\gamma ^{adv} \right) \right)
$
\\
\hline
Dual Student &\makecell[c]{Stable sample \\ and stabilization constraint} & \makecell[c]{input, \\ weights} & $\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \text{STA}\left( \theta ,x_i \right) ,\zeta _1 \right) ,f\left( \text{STA}\left( \theta ,x_j \right) ,\zeta _2 \right) \right)$
\\
\hline
SWA & Stochastic Weight Averaging & \makecell[c]{input,\\ weights} & $\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x \right) ,f\left( \text{SWA}\left( \theta \right) ,x,\zeta \right) \right) $
\\
\hline
VAdD & \makecell[c]{Adversarial perturbation and Stochastic \\ Augmentation (dropout mask)} & \makecell[c]{input,\\ weights} &$\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x,\epsilon ^s \right) ,f\left( \theta ,x,\epsilon ^{adv} \right) \right) $
\\
\hline
UDA & \makecell[c]{AutoAugment/RandAugment for image; \\ Back-Translation for text} & input & $
\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x \right) ,f\left( \theta ,x,\zeta \right) \right) $
\\
\hline
WCP & \makecell[c]{Additive perturbation on network weights, \\DropConnect perturbation for network structure} & \makecell[c]{input, \\ network structure} &$\mathbb{E}_{x\in X}\mathcal{R}\left( f\left( \theta ,x \right) ,g\left( \theta +\zeta ,x \right) \right) $\\
\hline
\end{tabular}
\end{table*}
\textbf{Mean Teacher.}
Mean Teacher \cite{DBLP:conf/nips/TarvainenV17} averages model weights using EMA over training steps, which tends to produce a more accurate model than directly averaging output predictions. The structure of Mean Teacher is shown in Fig.~\ref{fig:consistencyRegularization}(4). Mean Teacher consists of two models called Student and Teacher. The student model is a regular model similar to the $\Pi$ Model, and the teacher model has the same architecture as the student model, with its weights set to the exponential moving average of the student weights. Mean Teacher then applies a consistency constraint between the two predictions of student and teacher:
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}(f(\theta,x,\zeta),f(\text{EMA}(\theta),x,\zeta')).
\label{equ:meanTeacher}
\end{equation}
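A minimal NumPy sketch of the weight-level EMA (with a made-up linear model and an artificial training drift standing in for real SGD updates) shows how the teacher lags behind, and smooths, the student:

```python
import numpy as np

def ema_update(teacher_w, student_w, decay=0.99):
    """One EMA step on the weights: theta' <- decay*theta' + (1-decay)*theta."""
    return decay * teacher_w + (1 - decay) * student_w

rng = np.random.default_rng(0)
student = rng.standard_normal(10)
teacher = student.copy()            # the teacher starts as a copy of the student

for step in range(100):             # pretend the student drifts during training
    student = student + 0.01
    teacher = ema_update(teacher, student)

def predict(w, x):                  # toy linear model
    return w @ x

x = rng.standard_normal(10)
consistency = (predict(student, x) - predict(teacher, x)) ** 2
```

Because the teacher is an average over the student's recent history, its weights trail the (monotonically drifting) student, and the squared prediction gap is the quantity the consistency term in Eq.~(\ref{equ:meanTeacher}) penalizes.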
\textbf{VAT.}
Virtual Adversarial Training (VAT) \cite{DBLP:journals/pami/MiyatoMKI19} introduces the concept of adversarial attacks for consistency regularization. The structure of VAT is shown in Fig.~\ref{fig:consistencyRegularization}(5). This technique aims to generate an adversarial transformation of a sample, which can change the model prediction. Specifically, the adversarial training technique is used to find the optimal adversarial perturbation $\gamma$ of a real input instance $x$ such that $\left\| \gamma \right\| \leq \delta$. Afterward, the consistency constraint is applied between the model's output for the original input sample and for the perturbed one, \latinphrase{i.e.}\xspace,
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}( f( \theta ,x) ,g( \theta ,x+\gamma ^{adv} )),
\label{equ:VAT}
\end{equation}
where $\gamma^{adv}=\operatornamewithlimits{argmax}_{\gamma;\|\gamma\|\leqslant \delta}\mathcal{R}(f(\theta,x),g(\theta,x+\gamma))$.
\textbf{Dual Student.}
Dual Student \cite{DBLP:conf/iccv/KeWYRL19} extends the Mean Teacher model by replacing the teacher with another student. The structure of Dual Student is shown in Fig.~\ref{fig:consistencyRegularization}(6). The two students start from different initial states and are optimized through individual paths during training. The authors also define a novel concept, ``stable sample'', along with a stabilization constraint, to avoid the performance bottleneck produced by a coupled EMA Teacher-Student model. Hence, the two students' weights are not tightly coupled, and each learns its own knowledge. Formally, Dual Student checks whether $x$ is a stable sample for student $i$:
\begin{equation}
\mathcal{C}_{x}^{i}=\left\{ p_{x}^{i}=p_{\bar{x}}^{i} \right\} _1\&\left( \left\{ \mathcal{M}_{x}^{i}>\xi \right\} _1\left\| \left\{ \mathcal{M}_{\bar{x}}^{i}>\xi \right\} _1 \right. \right),
\end{equation}
where $\mathcal{M}_{x}^{i}=\left\| f\left( \theta ^i,x \right) \right\| _{\infty}$, and the stabilization constraint:
\begin{equation}
\mathcal{L}^i_{sta}=\begin{cases}
\left\{ \varepsilon ^i>\varepsilon ^j \right\} _1\mathcal{R}\left( f\left( \theta ^i,x \right) ,f\left( \theta ^j,x \right) \right) & \mathcal{C}^i=\mathcal{C}^j=1\\
\mathcal{C}^i\mathcal{R}\left( f\left( \theta ^i,x \right) ,f\left( \theta ^j,x \right) \right) & \text{otherwise}\\
\end{cases}.
\label{equ:dualStudent}
\end{equation}
\textbf{SWA.}
Stochastic Weight Averaging (SWA) \cite{DBLP:conf/uai/IzmailovPGVW18} improves generalization over conventional training. The aim is to average multiple points along the trajectory of stochastic gradient descent (SGD) with a cyclical learning rate and to seek much flatter solutions than SGD. The consistency-based SWA \cite{DBLP:conf/iclr/AthiwaratkunFIW19} observes that SGD fails to converge on the consistency loss but continues to explore many solutions whose predictions on the test data are far apart. The structure of SWA is shown in Fig.~\ref{fig:consistencyRegularization}(7). The SWA procedure also approximates the Teacher-Student approach, such as the $\Pi$ Model and Mean Teacher, with a single model. The authors propose fast-SWA, which adapts SWA to increase the distance between the averaged weights by using a longer cyclical learning rate schedule, and to increase the diversity of the corresponding predictions by averaging multiple network weights within each cycle. Generally, the consistency loss can be rewritten as follows:
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}(f(\theta, x), f(\text{SWA}(\theta),x,\zeta)).
\label{equ:swa}
\end{equation}
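The weight averaging itself reduces to a running mean over weight snapshots collected along the SGD trajectory (e.g., at the end of each learning-rate cycle); a minimal sketch with made-up snapshots is:

```python
import numpy as np

class SWA:
    """Running average of weight snapshots collected during training."""
    def __init__(self):
        self.w_avg, self.n = None, 0

    def update(self, w):
        self.n += 1
        if self.w_avg is None:
            self.w_avg = w.copy()
        else:
            self.w_avg += (w - self.w_avg) / self.n   # incremental mean
        return self.w_avg

swa = SWA()
snapshots = [np.array([1.0, 3.0]), np.array([3.0, 1.0]), np.array([2.0, 2.0])]
for w in snapshots:
    w_swa = swa.update(w)
```

The incremental-mean form avoids storing all snapshots, which matters when the weight vector is large; `w_swa` then plays the role of $\text{SWA}(\theta)$ in Eq.~(\ref{equ:swa}).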
\textbf{VAdD.}
In VAT, the adversarial perturbation is defined as an additive noise unit vector applied to the input or embedding spaces, which has improved the generalization performance of SSL. Similarly, Virtual Adversarial Dropout (VAdD) \cite{DBLP:conf/aaai/ParkPSM18} also employs adversarial training in addition to the $\Pi$ Model. The structure of VAdD is shown in Fig.~\ref{fig:consistencyRegularization}(8). Following the design of $\Pi$ Model, the consistency constraint of VAdD is computed from two different dropped networks: one dropped network uses a random dropout mask, and the other applies adversarial training to the optimized dropout network. Formally, $f(\theta, x, \epsilon)$ denotes an output of a neural network with a random dropout mask, and the consistency loss incorporated adversarial dropout is described as:
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}(f(\theta, x, \epsilon^s), f(\theta, x, \epsilon^{adv})),
\label{equ:vadd}
\end{equation}
where $\epsilon^{adv}=\operatornamewithlimits{argmax}_{\epsilon;\|\epsilon^s-\epsilon\|_2\le \delta H}\mathcal{R}(f(\theta,x,\epsilon^s),f(\theta,x,\epsilon))$; $f(\theta, x, \epsilon^{adv})$ represents an adversarial target; $\epsilon^{adv}$ is an adversarial dropout mask; $\epsilon^s$ is a sampled random dropout mask instance; $\delta$ is a hyperparameter controlling the intensity of the noise, and $H$ is the dropout layer dimension.
\textbf{WCP.}
A novel regularization mechanism for training deep SSL models by minimizing the Worst-case Perturbation (WCP) is presented by Zhang \latinphrase{et~al.}\xspace \cite{DBLP:conf/cvpr/ZhangLH20WCP}. The structure of WCP is shown in Fig.~\ref{fig:consistencyRegularization}(9). WCP considers two forms of WCP regularization -- additive and DropConnect perturbations, which impose additive perturbations on the network weights and make structural changes by dropping network connections, respectively. Instead of generating an ensemble of randomly corrupted networks, WCP suggests enhancing the most vulnerable part of a network by making its weights and connections resilient against the worst-case perturbations. It enforces an additive noise $\zeta$ on the model parameters, along with a constraint on the norm of the noise. In this case, the WCP regularization becomes,
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}(f(\theta, x), g(\theta+\zeta, x)).
\end{equation}
The second perturbation is at the network structure level, implemented by DropConnect, which drops some network connections. Specifically, for parameters $\theta$, the perturbed version is $(1-\alpha)\theta$, where $\alpha=1$ denotes a dropped connection and $\alpha=0$ indicates an intact one. By applying the consistency constraint, we have
\begin{equation}
\mathbb{E}_{x\in X} \mathcal{R}(f(\theta,x), f((1-\alpha)\theta, x)).
\end{equation}
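A toy NumPy sketch of the DropConnect-style structural perturbation (with a random linear layer standing in for the network $f$, and random rather than worst-case masks, so this illustrates only the perturbation itself, not the inner maximization of WCP) is:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal((16, 3))   # toy network weights

def predict(weights, x):
    return x @ weights

def dropconnect(weights, rng, drop_p=0.3):
    """Structural perturbation: alpha=1 drops a connection, alpha=0 keeps it."""
    alpha = (rng.random(weights.shape) < drop_p).astype(float)
    return (1 - alpha) * weights

x = rng.standard_normal(16)
clean = predict(theta, x)
perturbed = predict(dropconnect(theta, np.random.default_rng(1)), x)
wcp_loss = np.mean((clean - perturbed) ** 2)   # consistency under perturbation
```

In the actual WCP objective the mask (or the additive noise) would be chosen adversarially within a norm budget, so the penalty targets the network's most vulnerable connections rather than random ones.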
\textbf{UDA.}
UDA stands for Unsupervised Data Augmentation \cite{DBLP:conf/nips/XieDHL020} for image classification and text classification. The structure of UDA is shown in Fig.~\ref{fig:consistencyRegularization}(10).
This method investigates the role of noise injection in consistency training and substitutes simple noise operations with high-quality data augmentation methods, such as AutoAugment \cite{DBLP:conf/cvpr/CubukZMVL19}, RandAugment \cite{DBLP:journals/corr/abs-1909-13719} for images, and Back-Translation \cite{DBLP:conf/acl/SennrichHB16,DBLP:conf/emnlp/EdunovOAG18} for text. Following the consistency regularization framework, the UDA \cite{DBLP:conf/nips/XieDHL020} extends the advancement in supervised data augmentation to SSL.
As discussed above, let $\zeta$ be a data augmentation transformation from which one can draw an augmented version of an original example $x$, with $f(\theta,x,\zeta)$ the model output for the augmented example. The consistency loss is:
\begin{equation}
\mathbb{E}_{x\in X} \mathcal{R}(f(\theta, x),f(\theta, x, \zeta)),
\end{equation}
where $\zeta$ represents the data augmentation operator to create an augmented version of an input $x$.
\textbf{Summary.}
The core idea of consistency regularization methods is that the output of the model should remain unchanged under realistic perturbations. As shown in TABLE~\ref{tab:consistency}, consistency constraints can be considered at three levels: the input dataset, the neural network, and the training process. From the input dataset perspective, perturbations are usually added to the input examples: additive noise, random augmentation, or even adversarial training. For the network, we can drop some layers or connections, as in WCP \cite{DBLP:conf/cvpr/ZhangLH20WCP}. For the training process, we can use SWA to make SGD fit the consistency training, or use the EMA of the model parameters over some training epochs as the new parameters.
\section{Graph-based methods}\label{sec:graph}
Graph-based semi-supervised learning (GSSL) has always been a hot research subject, with a vast number of successful models \cite{DBLP:conf/cvpr/IscenTAC19,DBLP:conf/cvpr/ChenMQXZ20,DBLP:conf/cvpr/LiLCCYY20}, because of its wide applicability. The basic assumption in GSSL is that a graph can be extracted from the raw dataset, where each node represents a training sample and each edge denotes some similarity measurement of the node pair.
In this section, we review graph embedding SSL methods, and the principal goal is to encode the nodes as small-scale vectors representing their role and the structure information of their neighborhood.
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/Graph-based.pdf}
\caption{A glimpse of the diverse range of architectures used for graph-based semi-supervised methods. Specifically, in figure (3), ``PPMI" is short for positive pointwise mutual information. In figure (4), $A$ denotes the adjacency matrix, and in figure (5), pink A represents node A.}
\label{fig:graph_based}
\end{figure*}
For graph embedding methods, we have the formal definition for the embedding on the node level. Given the graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$, the node that has been embedded is a mapping result of $f_{\mathbf{z}}: v \rightarrow \mathbf{z}_v \in \mathbb{R}^{d}, \forall v \in {\mathcal{V}}$ such that $d \ll|{\mathcal{V}}|$, and the $f_{\mathbf{z}}$ function retains some of the measurement of proximity, defined in the graph $\mathcal{G}$.
The unified form of the loss function is shown as Eq.~(\ref{general_graph_embedding}) for graph embedding methods.
\begin{equation}
\label{general_graph_embedding}
\begin{aligned}
\mathcal{L}(f_{\mathbf{z}}) =&\sum_{(x,y) \in X_l} \ell(x,y,f_{\mathbf{z}})+\alpha \sum_{x\in X} \mathcal{R}(x,f_{\mathbf{z}}),
\end{aligned}
\end{equation}
where $f_{\mathbf{z}}$ is the embedding function, $\ell(\cdot)$ denotes the supervised loss on the labeled subset, and $\mathcal{R}(\cdot)$ is the regularizer over all nodes.
Besides, graph embedding methods can be divided into shallow embedding and deep embedding, based on whether deep learning techniques are used. In shallow embedding methods, including DeepWalk~\cite{DBLP:conf/kdd/PerozziAS14}, LINE~\cite{DBLP:conf/www/TangQWZYM15} and node2vec~\cite{DBLP:conf/kdd/GroverL16}, the encoder is a simple lookup function based on the node ID, whereas deep embedding encoders are far more complicated deep learning frameworks that make full use of node attributes.
As deep learning progresses, recent GSSL research has switched from shallow embedding methods to deep embedding methods, in which the embedding function $f_{\mathbf{z}}$ in Eq.~(\ref{general_graph_embedding}) is implemented with deep learning models. Two classes can be identified among deep embedding methods: AutoEncoder-based methods and GNN-based methods.
\subsection{AutoEncoder-based methods}
AutoEncoder-based approaches differ from pairwise methods in that they use a unary decoder. More precisely, every node ${i}$ is associated with a neighborhood vector $\mathbf{s}_{i} \in \mathbb{R}^{|{\mathcal{V}}|}$. The vector $\mathbf{s}_{i}$ contains ${i}$'s similarity to all other graph nodes and acts as a high-dimensional representation of ${i}$'s neighborhood. The aim of the auto-encoding technique is to embed each node using a hidden embedding vector $\mathbf{z}_{i}$ so as to reconstruct the original neighborhood information $\mathbf{s}_{i}$ from that embedding (Fig.~\ref{fig:graph_based}(1)):
\begin{equation}
\label{autoencoder_embedding_goal}
\operatorname{Dec}\left(\operatorname{Enc}\left(\mathbf{s}_{i}\right)\right)=\operatorname{Dec}\left(\mathbf{z}_{i}\right) \approx \mathbf{s}_{i}.
\end{equation}
In other words, the loss for these methods takes the following form:
\begin{equation}
\mathcal{L}=\sum_{{i} \in {V}}\left\|\operatorname{Dec}\left(\mathbf{z}_{i}\right)-\mathbf{s}_{i}\right\|_{2}^{2}.
\end{equation}
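A minimal NumPy sketch of this unary-decoder objective; the placeholder `enc` and `dec` callables stand in for the learned encoder and decoder networks:

```python
import numpy as np

def reconstruction_loss(s, enc, dec):
    """Sum of squared reconstruction errors ||Dec(Enc(s_i)) - s_i||^2
    over all nodes; `s` is the |V| x |V| matrix of neighborhood vectors.
    `enc` and `dec` are placeholder callables standing in for the
    learned encoder/decoder networks."""
    s = np.asarray(s, dtype=float)
    recon = np.stack([dec(enc(row)) for row in s])
    return float(np.sum((recon - s) ** 2))

# With an identity encoder/decoder the loss is exactly zero.
s = np.array([[0.0, 1.0], [1.0, 0.0]])
print(reconstruction_loss(s, lambda v: v, lambda z: z))  # -> 0.0
```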
\textbf{SDNE.}
Structural deep network embedding (SDNE), developed by Wang~\latinphrase{et~al.}\xspace~\cite{wang2016structural}, uses deep autoencoders to preserve the first- and second-order network proximities by optimizing them simultaneously. The approach uses highly nonlinear functions to obtain the embedding. The framework consists of two parts: an unsupervised part and a supervised part. The first is an autoencoder that finds an embedding for each node from which its neighborhood can be reconstructed. The second is based on Laplacian Eigenmaps~\cite{belkin2002laplacian}, which impose a penalty when related vertices are placed far from each other.
\textbf{DNGR.}
Deep neural networks for learning graph representations (DNGR)~\cite{cao2016deep} combines random surfing with autoencoders. The model includes three components: random surfing, positive pointwise mutual information (PPMI) calculation, and stacked denoising autoencoders. The random surfing design is used to create a stochastic matrix equivalent to the similarity measure matrix in HOPE~\cite{DBLP:conf/kdd/OuCPZ016} on the input graph. The matrix is transformed into a PPMI matrix and fed into a stacked denoising autoencoder to obtain the embedding. The PPMI matrix input ensures that the autoencoder model captures a higher proximity order. The use of stacked denoising autoencoders also helps make the model robust in the event of noise in the graph, as well as in seizing the internal structure needed for downstream tasks.
\textbf{GAE \& VGAE.}
Both MLP-based and RNN-based strategies only consider the contextual information and ignore the feature information of the nodes. To encode both, GAE~\cite{kipf2016variational} uses GCN~\cite{DBLP:conf/iclr/KipfW17}. The encoder is in the form,
\begin{equation}
\label{vae_enc}
\operatorname{Enc}({A},{X}) = \operatorname{GraphConv}\left(\sigma (\operatorname{GraphConv} ({A},{X}))\right),
\end{equation}
where $\operatorname{GraphConv}(\cdot)$ is a graph convolutional layer defined in~\cite{DBLP:conf/iclr/KipfW17}, $\sigma(\cdot)$ is the activation function, and $A$ and $X$ are the adjacency matrix and the attribute matrix, respectively. The decoder of GAE is defined as
\begin{equation}
\label{vae_dec}
\operatorname{Dec}{(\mathbf{z}_{u},\mathbf{z}_{v})} = \mathbf{z}_{u}^T \mathbf{z}_{v}.
\end{equation}
Variational GAE (VGAE)~\cite{kipf2016variational} learns about the distribution of the data in which the variation lower bound $\mathcal{L}$ is optimized directly by reconstructing the adjacency matrix.
\begin{equation}
\label{vgae}
\mathcal{L}=\mathbb{E}_{q(\mathbf{Z} \mid {X}, {A})}[\log p({A} \mid \mathbf{Z})]-\operatorname{KL}[q(\mathbf{Z} \mid {X}, {A}) \| p(\mathbf{Z})],
\end{equation}
where $\operatorname{KL}[q(\cdot) \| p(\cdot)]$ is the Kullback-Leibler divergence between $q(\cdot)$ and $p(\cdot)$. Moreover, we have
\begin{equation}
q(\mathbf{Z} \mid {X}, {A})=\prod_{i=1}^{N} \mathcal{N}\left(\mathbf{z}_{i} \mid {\mu}_{i}, \operatorname{diag}\left({\sigma}_{i}^{2}\right)\right),
\end{equation}
and
\begin{equation}
p({A} \mid \mathbf{Z})=\prod_{i=1}^{N} \prod_{j=1}^{N}\left[A_{ij}\sigma\left(\mathbf{z}_{i}^{\top} \mathbf{z}_{j}\right) + \left(1-A_{ij}\right)\left(1-\sigma\left(\mathbf{z}_{i}^{\top} \mathbf{z}_{j}\right)\right)\right].
\end{equation}
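The GAE forward pass of Eqs.~(\ref{vae_enc}) and (\ref{vae_dec}) can be sketched in NumPy; the random weight matrices below stand in for parameters that would in fact be learned by minimizing the reconstruction loss:

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops,
    D^{-1/2} (A + I) D^{-1/2}, as used by the GCN layers in GAE."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gae_forward(A, X, W0, W1):
    """Two-layer GCN encoder followed by the inner-product decoder
    sigma(Z Z^T); weight matrices are assumed to be learned elsewhere."""
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W0, 0.0)   # ReLU graph conv layer
    Z = A_norm @ H @ W1                    # embedding layer
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))   # reconstructed edge probabilities

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy path graph
X = np.eye(3)                                             # one-hot features
A_rec = gae_forward(A, X, rng.normal(size=(3, 4)), rng.normal(size=(4, 2)))
print(A_rec.shape)  # -> (3, 3)
```

The reconstructed matrix is symmetric with entries in $(0,1)$, interpretable as edge probabilities.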
\textbf{Summary.}
It should be noted from Eq.~(\ref{autoencoder_embedding_goal}) that the encoder unit depends on the specific $\mathbf{s}_{i}$ vector, which carries the crucial information about the local community structure of $v_{i}$. TABLE~\ref{autoencoder_summary} sums up the main components of these methods, and their architectures are compared in Fig.~\ref{fig:graph_based}.
\begin{table*}[!ht]
\centering
\caption{Summary of AutoEncoder-based Deep Graph Embedding Methods}
\label{autoencoder_summary}
\begin{tabular}{cccccc}
\hline
\textbf{Method} &
\textbf{Encoder} &
\textbf{Decoder} &
\textbf{Similarity Measure} &
\textbf{Loss Function} &
\textbf{Time Complexity} \\
\hline
SDNE~\cite{wang2016structural} &
MLP &
MLP &
$\mathbf{s}_{u}$ &
$\sum_{{u} \in \mathcal{V}}\left\|\operatorname{Dec}\left(\mathbf{z}_{u}\right)-\mathbf{s}_{u}\right\|_{2}^{2}$ &
${O}(|\mathcal{V}||\mathcal{E}|)$ \\
DNGR~\cite{cao2016deep} &
MLP &
MLP &
$\mathbf{s}_{u}$ &
$\sum_{{u} \in {\mathcal{V}}}\left\|\operatorname{Dec}\left(\mathbf{z}_{u}\right)-\mathbf{s}_{u}\right\|_{2}^{2}$ &
${O}\left(|\mathcal{V}|^{2}\right)$ \\
GAE~\cite{kipf2016variational} &
GCN &
$\mathbf{z}_{u}^{\top} \mathbf{z}_{v}$ &
$A_{uv}$ &
$\sum_{{u} \in {\mathcal{V}}}\left\|\operatorname{Dec}\left(\mathbf{z}_{u}\right)-{A}_{u}\right\|_{2}^{2}$ &
${O}(|\mathcal{V}||\mathcal{E}|)$ \\
VGAE~\cite{kipf2016variational} &
GCN &
$\mathbf{z}_{u}^{\top} \mathbf{z}_{v}$ &
$A_{uv}$ &
$\mathbb{E}_{q(\mathbf{Z} \mid {X}, {A})}[\log p({A} \mid \mathbf{Z})]-\operatorname{KL}[q(\mathbf{Z} \mid {X}, {A}) \| p(\mathbf{Z})]$ &
${O}(|\mathcal{V}||\mathcal{E}|)$ \\
\hline
\end{tabular}
\end{table*}
\subsection{GNN-based methods}
Several recent deep embedding strategies are designed to resolve major disadvantages of autoencoder-based methods by building functions that rely on a node's local neighborhood, but not necessarily on the whole graph (Fig.~\ref{fig:graph_based}(5)). The GNN, which is widely used in state-of-the-art deep embedding approaches, can be regarded as a general framework for defining deep neural networks on graphs.
As in other deep node-level embedding methods, a classifier is trained to predict class labels for the labeled nodes based on the final hidden states of the GNN-based model, and is then applied to the unlabeled nodes. Since a GNN consists of two primary operations, the aggregate operation and the update operation, we first present the basic GNN and then review some popular GNN extensions that enhance each operation, respectively.
\textbf{Basic GNN.}
As Gilmer~\latinphrase{et~al.}\xspace~\cite{gilmer2017neural} point out, the critical aspect of a basic GNN is \textit{neural message passing}, in which messages are exchanged between node pairs and updated using neural networks.
More specifically, in each message-passing iteration of a basic GNN, a hidden embedding $\mathbf{h}_{u}^{(k)}$ is updated for each node $u$ according to the message, or information, aggregated from $u$'s neighborhood $\mathcal{N}(u)$. This general message-passing update can be expressed by the following rule:
\begin{equation}\begin{aligned}
\label{message_passing_update_rule}
&\mathbf{h}_{u}^{(k+1)} \\
&=\text { Update }^{(k)}\left(\mathbf{h}_{u}^{(k)}, \text { Aggregate }^{(k)}\left(\left\{\mathbf{h}_{v}^{(k)}, \forall v \in \mathcal{N}(u)\right\}\right)\right) \\
&=\text { Update }^{(k)}\left(\mathbf{h}_{u}^{(k)}, \mathbf{m}_{\mathcal{N}(u)}^{(k)}\right),
\end{aligned}\end{equation}
where
\begin{equation}
\mathbf{m}_{\mathcal{N}(u)}^{(k)}=\text { Aggregate }^{(k)}\left(\left\{\mathbf{h}_{v}^{(k)}, \forall v \in \mathcal{N}(u)\right\}\right).
\end{equation}
It is worth noting that the functions $\text{Update}$ and $\text{Aggregate}$ in Eq.~(\ref{message_passing_update_rule}) must generally be differentiable. A new state is generated according to Eq.~(\ref{message_passing_update_rule}) by combining the neighborhood message with the previous hidden embedding state. After a number of iterations, the hidden embedding states converge, and each node's final state is taken as the output. Formally, we have $\mathbf{z}_{u}=\mathbf{h}_{u}^{(K)}, \forall u \in \mathcal{V}$.
The basic GNN model is introduced before reviewing many other GNN-based methods designed to perform SSL tasks. The basic version of GNN aims to simplify the original GNN model, proposed by Scarselli~\latinphrase{et~al.}\xspace~\cite{scarselli2008graph}.
The basic GNN message passing is defined as:
\begin{equation}
\label{basic_gnn}\mathbf{h}_{u}^{(k)}=\sigma\left(\mathbf{W}_{\text {self }}^{(k)} \mathbf{h}_{u}^{(k-1)}+\mathbf{W}_{\text {neigh }}^{(k)} \sum_{v \in \mathcal{N}(u)} \mathbf{h}_{v}^{(k-1)}+\mathbf{b}^{(k)}\right),\end{equation}
where $\mathbf{W}_{\text {self }}^{(k)}, \mathbf{W}_{\text {neigh }}^{(k)}$ are trainable parameters and $\sigma$ is the activation function.
In principle, the messages from the neighbors are summed first. Then, the neighborhood information and the node's previous hidden state are integrated by a basic linear combination. Finally, a nonlinear activation function is applied to the joint information. It is worth noting that GNN layers can easily be stacked together following Eq.~(\ref{basic_gnn}). The output of the last layer of the GNN model is regarded as the final node embedding and is used to train a classifier for the downstream SSL tasks.
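A one-layer NumPy sketch of Eq.~(\ref{basic_gnn}); the weights and inputs below are toy values, not from any cited experiment:

```python
import numpy as np

def basic_gnn_layer(A, H, W_self, W_neigh, b):
    """One basic GNN round: each node combines its own state with the
    sum of its neighbors' states, then applies a ReLU nonlinearity."""
    neigh_sum = A @ H  # row u sums h_v over all v in N(u)
    return np.maximum(H @ W_self + neigh_sum @ W_neigh + b, 0.0)

A = np.array([[0., 1.], [1., 0.]])  # two connected nodes
H = np.array([[1., 0.], [0., 1.]])  # initial hidden states
W = np.eye(2)                       # identity weights for illustration
print(basic_gnn_layer(A, H, W, W, np.zeros(2)))  # -> [[1. 1.] [1. 1.]]
```

With identity weights, each node's new state is simply its own state plus its neighbor's.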
As previously mentioned, GNN models have all sorts of variants that try to boost their efficiency and robustness to some degree. All of them, though, obey the neural message-passing structure of Eq.~(\ref{message_passing_update_rule}), whichever GNN version is explored.
\textbf{GCN.}
As mentioned above, the simplest neighborhood aggregation operation only takes the sum of the neighbors' encoding states. The critical issue with this approach is that nodes with a large degree receive far larger aggregated messages than nodes with few neighbors, since they sum over many more terms.
One typical and straightforward remedy is to normalize the aggregation operation based on the degrees of the nodes involved. The most popular choice is the symmetric normalization employed by Kipf~\latinphrase{et~al.}\xspace~\cite{DBLP:conf/iclr/KipfW17} in the graph convolutional network (GCN) model, given in Eq.~(\ref{gcn_neighbor_norm}):
\begin{equation}
\label{gcn_neighbor_norm}
\mathbf{m}_{\mathcal{N}(u)}=\sum_{v \in \mathcal{N}(u)} \frac{\mathbf{h}_{v}}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}}.
\end{equation}
GCN fully relies on this uniform neighborhood aggregation technique. Consequently, the GCN model defines the update function as Eq.~(\ref{GCN}); no separate aggregation operation is defined, since it is specified implicitly within the update function.
\begin{equation}
\label{GCN}
\mathbf{h}_{u}^{(k)}=\sigma\left(\mathbf{W}^{(k)} \sum_{v \in \mathcal{N}(u) \cup\{u\}} \frac{\mathbf{h}_{v}}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}}\right)
\end{equation}
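The GCN update of Eq.~(\ref{GCN}) can be sketched as follows; adding self-loops and dividing by the symmetric degree terms implements the normalization (toy inputs, identity weights):

```python
import numpy as np

def gcn_layer(A, H, W):
    """GCN update: symmetric degree normalization over the neighborhood
    including the node itself, followed by a ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} A_hat D^{-1/2}
    return np.maximum(A_norm @ H @ W, 0.0)

A = np.array([[0., 1.], [1., 0.]])
H = np.array([[2., 0.], [0., 2.]])
print(gcn_layer(A, H, np.eye(2)))  # each row averages the two node states
```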
A vast range of GCN variants is available to boost SSL performance from various aspects \cite{DBLP:conf/nips/Wang0C0PW20,DBLP:conf/nips/FengZDHLXYK020}. Li~\latinphrase{et~al.}\xspace~\cite{li2018deeper} were the first to have a detailed insight into the performance and lack of GCN in SSL tasks. Subsequently, GCN extensions for SSL started to propagate~\cite{DBLP:conf/iclr/LiaoBTGUZ18}~\cite{zhang2019bayesian}~\cite{vashishth2019confidence}~\cite{10.1145/3178876.3186116}~\cite{hu2019hierarchical}.
\textbf{GAT.}
In addition to more general forms of set aggregation, another common approach for improving the aggregation layer of GNNs is to introduce an attention mechanism~\cite{BahdanauCB14}. The basic idea is to assign each neighbor a weight, or importance value, which is used to weight this neighbor's influence during the aggregation process. The first GNN to apply such attention was the Graph Attention Network (GAT) of Veli\v{c}kovi\'{c}~\latinphrase{et~al.}\xspace, which uses attention weights to define a weighted sum of the neighbors:
\begin{equation}\mathbf{m}_{\mathcal{N}(u)}=\sum_{v \in \mathcal{N}(u)} \alpha_{u, v} \mathbf{h}_{v},\end{equation}
where $\alpha_{u, v}$ denotes the attention on neighbor $v \in \mathcal{N}(u)$ when we are aggregating information at node $u$. In the original GAT paper, the attention weights are defined as
\begin{equation}\alpha_{u, v}=\frac{\exp \left(\mathbf{a}^{\top}\left[\mathbf{W} \mathbf{h}_{u} \oplus \mathbf{W} \mathbf{h}_{v}\right]\right)}{\sum_{v^{\prime} \in \mathcal{N}(u)} \exp \left(\mathbf{a}^{\top}\left[\mathbf{W h}_{u} \oplus \mathbf{W h}_{v^{\prime}}\right]\right)},\end{equation}
where $\mathbf{a}$ is a trainable attention vector, $\mathbf{W}$ is a trainable matrix, and $\oplus$ denotes the concatenation operation.
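The attention computation can be sketched directly from the two equations above; in this toy example the learned $\mathbf{a}$ and $\mathbf{W}$ are replaced by fixed arrays:

```python
import numpy as np

def gat_attention(h_u, neighbors, W, a):
    """Attention weights alpha_{u,v}: a softmax over the scores
    a^T [W h_u (+) W h_v] for every neighbor v of node u,
    where (+) denotes concatenation."""
    scores = np.array([a @ np.concatenate([W @ h_u, W @ h_v])
                       for h_v in neighbors])
    e = np.exp(scores - scores.max())  # numerically stabilized softmax
    return e / e.sum()

h_u = np.array([1.0, 0.0])
neighbors = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
alphas = gat_attention(h_u, neighbors, np.eye(2), np.ones(4))
print(alphas)  # identical neighbors receive equal weight -> [0.5 0.5]
```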
\textbf{GraphSAGE.}
\label{Generalized update operation}
Over-smoothing is an obvious concern for GNNs: after several message-passing iterations it is almost unavoidable, as the node-specific information is ``washed away''. The use of vector concatenations or skip connections, both of which preserve information directly from previous rounds of updates, is one reasonable way to lessen this concern. In the following, $\text{Update}_{\text{base}}$ denotes a basic update rule.
One of the simplest skip connections, used in GraphSAGE~\cite{hamilton2017inductive}, concatenates vectors to retain more node-level information during message passing:
\begin{equation}\text {Update}\left(\mathbf{h}_{u}, \mathbf{m}_{\mathcal{N}}(u)\right)=\left[\text {Update}_{\text {base}}\left(\mathbf{h}_{u}, \mathbf{m}_{\mathcal{N}(u)}\right) \oplus \mathbf{h}_{u}\right],\end{equation}
where the output of the basic update function is concatenated with the node's representation from the previous layer. The critical observation is that this design encourages the model to disentangle information during the message-passing operation.
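A sketch of this concatenation-based skip connection, with a placeholder base update standing in for any learned rule:

```python
import numpy as np

def sage_update(h_u, m_neigh, update_base):
    """GraphSAGE-style skip connection: concatenate the base update's
    output with the node's previous representation."""
    return np.concatenate([update_base(h_u, m_neigh), h_u])

h_u = np.array([1.0, 2.0])          # previous node state
m = np.array([3.0, 4.0])            # aggregated neighborhood message
out = sage_update(h_u, m, update_base=lambda h, m: 0.5 * (h + m))
print(out)  # -> [2. 3. 1. 2.]
```

The output dimension grows with each layer, so in practice a projection usually follows the concatenation.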
\textbf{GGNN.}
Parallel to the above work, researchers have also been motivated by the approaches taken in recurrent neural networks (RNNs) to improve stability. One way to view the GNN message-passing algorithm is that the aggregation step gathers an observation from the neighbors, which is then used to update each node's hidden state. In this respect, approaches for updating the hidden state of RNN architectures based on observations can be applied directly.
For example, one of the earliest GNN variants which put this idea into practice is proposed by Li~\latinphrase{et~al.}\xspace~\cite{gatedGNN}, in which the update operation is defined as Eq.~(\ref{gatedGNN})
\begin{equation}
\label{gatedGNN}
\mathbf{h}_{u}^{(k)}=\operatorname{GRU}\left(\mathbf{h}_{u}^{(k-1)}, \mathbf{m}_{\mathcal{N}(u)}^{(k)}\right),\end{equation}
where GRU is the gated recurrent unit, a gating mechanism for recurrent neural networks introduced by Cho~\latinphrase{et~al.}\xspace~\cite{chung2014empirical}. Another related approach makes similar improvements based on the LSTM architecture~\cite{lstmGNN}.
\textbf{Summary.} The main point of graph-based models for DSSL is to perform label inference on a constructed similarity graph so that the label information can be propagated from the labeled samples to the unlabeled ones by incorporating both the topological and feature knowledge. Moreover, the involvement of deep learning models in GSSL helps generate more discriminative embedding representations that are beneficial for the downstream SSL tasks, thanks to the more complex encoder functions.
\section{Pseudo-labeling methods}\label{sec:pseudoLabeling}
The pseudo-labeling methods differ from the consistency regularization methods in that the latter usually rely on consistency constraints under rich data transformations, whereas pseudo-labeling methods rely on high-confidence pseudo-labels, which can be added to the training dataset as labeled data.
There are two main patterns: one improves the performance of the whole framework based on the disagreement of views or of multiple networks; the other is self-training, where, in particular, the success of self-supervised learning in the unsupervised domain has enabled several self-supervised self-training methods.
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/Pseudo-label.pdf}
\caption{A glimpse of the diverse range of architectures used for pseudo-label semi-supervised methods. The same colors and structures have the same meanings as in Figure \ref{fig:consistencyRegularization}. $M_s$ denotes the shared module; $M_1,M_2$ and $M_3$ are three different modules in Tri-Net. ``Rotation'' and ``Exemplar'' represent $S^4L$-Rotation and $S^4L$-Exemplar, respectively.}
\label{fig:pseudoLabel}
\end{figure*}
\subsection{Disagreement-based models}\label{sec:disagreement}
The idea of disagreement-based SSL is to train multiple learners for the task and exploit the disagreement during the learning process \cite{DBLP:journals/kais/ZhouL10}. In such model designs, two or three different networks are trained simultaneously and label unlabeled samples for each other.
\textbf{Deep co-training.}
The co-training \cite{DBLP:conf/colt/BlumM98} framework assumes that each sample $x$ in the dataset has two different and complementary views, and that each view is sufficient for training a good classifier.
Because of this assumption, Co-training learns two different classifiers on these two views (see Fig.~\ref{fig:pseudoLabel}(1)).
Then the two classifiers are applied to predict each view's unlabeled data and label the most confident candidates for the other model.
This procedure is repeated iteratively until the unlabeled data are exhausted or some stopping condition is met (such as reaching the maximum number of iterations). Let $v_1$ and $v_2$ be two different views of the data, such that $x =(v_1,v_2)$. Co-training assumes that $\mathcal{C}_1$, the classifier trained on View-$1$ ($v_1$), and $\mathcal{C}_2$, the classifier trained on View-$2$ ($v_2$), have consistent predictions on $\mathcal{X}$.
In the objective function, the co-training assumption can be modeled as:
\begin{equation}
\mathcal{L}_{ct}=H(\frac{1}{2}(\mathcal{C}_1(v_1)+\mathcal{C}_2(v_2)))-\frac{1}{2}(H(\mathcal{C}_1(v_1))+H(\mathcal{C}_2(v_2))),
\end{equation}
where $H(\cdot)$ denotes the entropy, the Co-training assumption is formulated as $\mathcal{C}(x)=\mathcal{C}_1(v_1)=\mathcal{C}_2(v_2), \forall x=(v_1,v_2)\sim \mathcal{X}$. On the labeled dataset $X_L$, the supervised loss function can be the standard cross-entropy loss
\begin{equation}
\mathcal{L}_{s}=H(y,\mathcal{C}_1(v_1))+H(y,\mathcal{C}_2(v_2)),
\end{equation}
where $H(p,q)$ is the cross-entropy between distribution $p$ and $q$.
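The agreement term $\mathcal{L}_{ct}$ above can be sketched numerically; it vanishes when the two views' classifiers produce identical predictions and grows as they disagree (our own minimal sketch):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

def cotraining_loss(p1, p2):
    """Agreement term from the co-training assumption: the entropy of
    the mean of the two views' predictions minus the mean of their
    individual entropies; zero when the two classifiers agree."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return entropy(0.5 * (p1 + p2)) - 0.5 * (entropy(p1) + entropy(p2))

print(round(cotraining_loss([0.9, 0.1], [0.9, 0.1]), 6))  # -> 0.0
```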
The key to the success of co-training is that the two views are different and complementary. However, the loss functions $\mathcal{L}_{ct}$ and $\mathcal{L}_s$ only ensure that the models tend to make consistent predictions on the dataset. To address this problem, Deep Co-training \cite{DBLP:conf/eccv/QiaoSZWY18} adds a View Difference Constraint to the co-training model, formulated as:
\begin{equation}
\exists \mathcal{X}': \mathcal{C}_1(v_1) \neq \mathcal{C}_2(v_2), \forall x=(v_1,v_2)\sim \mathcal{X}',
\end{equation}
where $\mathcal{X}'$ denotes the adversarial examples of $\mathcal{X}$, thus $\mathcal{X}' \cap \mathcal{X}=\emptyset$.
In the loss function, the View Difference Constraint can be modeled by minimizing the cross-entropy between $\mathcal{C}_2(x)$ and $\mathcal{C}_1(g_2(x))$, where $g_i(\cdot)$ denotes an adversarial example generated against view $i$ by the generative model. This part of the loss function is then:
\begin{equation}
\mathcal{L}_{dif}(x)=H(\mathcal{C}_1(x),\mathcal{C}_2(g_1(x)))+H(\mathcal{C}_2(x),\mathcal{C}_1(g_2(x))).
\end{equation}
Some other research works also explore applying co-training to neural network training. For example, \cite{DBLP:conf/ijcai/ChengZCLHR16} treats the RGB and depth channels of an image as two independent views for object recognition. Co-training is performed to train two networks on the two views, a fusion layer is added to combine the two-stream networks for recognition, and the overall model is trained jointly. Besides, in sentiment classification, \cite{DBLP:conf/acl/XiaWDL15} considers the original review and an automatically constructed antonymous review as two opposite sides of one review and then applies the co-training algorithm. One crucial property of \cite{DBLP:conf/acl/XiaWDL15} is that the two views are opposing and therefore associated with opposite class labels.
\textbf{Tri-Net.}
Tri-net \cite{DBLP:conf/ijcai/ChenWGZ18} is a deep-learning-based method inspired by tri-training \cite{DBLP:journals/tkde/ZhouL05}. Tri-training learns three classifiers from three different training sets, which are obtained by bootstrap sampling. The framework of tri-net (shown in Fig.~\ref{fig:pseudoLabel}(2)) can be intuitively described as follows.
Output smearing \cite{DBLP:journals/ml/Breiman00} is used to add random noise to the labeled samples to generate three different training sets and learn three initial modules. The three modules then predict pseudo-labels for the unlabeled data. If the predictions of two modules for an unlabeled instance are consistent, the pseudo-label is considered confident and stable, and the pseudo-labeled sample is added to the training set of the third module, which is then fine-tuned on the augmented training set. Since the three modules become more and more similar during this augmentation process, they are also fine-tuned on their own training sets to maintain diversity. Formally, output smearing constructs three different training sets $\{\mathcal{L}_{os}^j=(x_i, \hat{y}_i^j), j=1, 2, 3\}$ from the initial labeled set $X_L$. Tri-net is then initialized by minimizing the sum of the standard softmax cross-entropy loss over the three training sets,
\begin{align}
\mathcal{L}=&\frac{1}{L}\sum_{i=1}^L \left\{ \mathcal{L}_y(M_1(M_S(x_i)),\hat{y}_i^1)
+ \mathcal{L}_y(M_2(M_S(x_i)),\hat{y}_i^2)\nonumber\right.\\
& \left.+ \mathcal{L}_y(M_3(M_S(x_i)),\hat{y}_i^3)\right\},
\end{align}
where $\mathcal{L}_y$ is the standard softmax cross-entropy loss function; $M_S$ denotes the shared module and $M_1,M_2,M_3$ are the three different modules; $M_j(M_S(x_i)), j=1,2,3$ denote the outputs of the three modules on the shared features generated by $M_S$.
In the whole procedure, the unlabeled sample can be pseudo-labeled by the maximum posterior probability,
\begin{equation}
\begin{aligned}
y=&\operatornamewithlimits{argmax}_{k\in \{1,2,\dots, K\}}\left\{p(M_1(M_S(x))=k|x)+\nonumber\right.\\
&\left. p(M_2(M_S(x))=k|x)+p(M_3(M_S(x))=k|x) \right\}.
\end{aligned}
\end{equation}
\textbf{Summary.} The disagreement-based SSL methods exploit the unlabeled data by training multiple learners, and the ``disagreement''
among these learners is crucial. When the data has two sufficiently redundant and conditionally independent views, Deep Co-training \cite{DBLP:conf/eccv/QiaoSZWY18} improves the disagreement by designing a View Difference Constraint. Tri-Net \cite{DBLP:conf/ijcai/ChenWGZ18} obtains three labeled datasets by bootstrap sampling and trains three different learners. The methods in this category are less affected by model assumptions, the non-convexity of the loss function, and the scalability of the learning algorithm.
\subsection{Self-training models }\label{sec:pLabel}
The self-training algorithm leverages the model's own confident predictions to produce pseudo-labels for unlabeled data. In other words, it adds more training data by using the existing labeled data to predict the labels of unlabeled data.
\textbf{EntMin.}
Entropy Minimization (EntMin) \cite{DBLP:conf/nips/GrandvaletB04} is an entropy regularization method that realizes SSL by encouraging the model to make low-entropy predictions for unlabeled data and then using those unlabeled data in a standard supervised learning setting. In theory, entropy minimization can prevent the decision boundary from passing through a high-density region of the data; otherwise it would be forced to produce low-confidence predictions for unlabeled data.
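A sketch of the entropy term that EntMin minimizes on unlabeled predictions (our own minimal implementation):

```python
import numpy as np

def entmin_loss(probs):
    """Entropy-minimization regularizer: the average prediction entropy
    over a batch of unlabeled samples; confident (low-entropy)
    predictions yield a smaller penalty."""
    p = np.asarray(probs, dtype=float)
    return float(np.mean(-np.sum(p * np.log(p + 1e-12), axis=1)))

confident = [[0.99, 0.01]]
uncertain = [[0.5, 0.5]]
print(entmin_loss(confident) < entmin_loss(uncertain))  # -> True
```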
\textbf{Pseudo-label.}
Pseudo-label \cite{Lee2013PseudoLabelT} proposes a simple and efficient formulation for training neural networks in a semi-supervised fashion, in which the network is trained in a supervised way with labeled and unlabeled data simultaneously. As illustrated in Fig.~\ref{fig:pseudoLabel}(3), the model is trained on labeled data in the usual supervised manner with a cross-entropy loss. For unlabeled data, the same model is used to obtain predictions for a batch of unlabeled samples, and the class with the maximum predicted probability is taken as the pseudo-label.
That is, the pseudo-label model trains a neural network with the loss function $\mathcal{L}$, where:
\begin{equation}
\mathcal{L}=\frac{1}{n}\sum_{m=1}^{n}\sum_{i=1}^{K}\mathcal{R}(y_i^m, f_i^m)+\alpha (t) \frac{1}{n'}\sum_{m=1}^{n'}\sum_{i=1}^{K}\mathcal{R}(y_i^{'m},f_i^{'m}),
\end{equation}
where $n$ is the number of samples in a labeled mini-batch for SGD and $n'$ is that of an unlabeled mini-batch, $f_i^m$ denotes the output units for the $m$-th labeled sample and $y_i^m$ its label, $f_i^{'m}$ denotes the output units for the $m$-th unlabeled sample and $y_i^{'m}$ its pseudo-label, and $\alpha(t)$ is a coefficient balancing the supervised and unsupervised loss terms.
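The unlabeled term of this objective can be sketched as follows, instantiating the generic per-sample loss as cross-entropy (a common choice; the function and variable names are ours):

```python
import numpy as np

def cross_entropy(y_onehot, p):
    """Cross-entropy between a one-hot target and predicted probabilities."""
    return float(-np.sum(y_onehot * np.log(np.asarray(p) + 1e-12)))

def pseudo_label_step(p_unlabeled, alpha_t):
    """One unlabeled term of the Pseudo-label objective: take the class
    with maximum predicted probability as the pseudo-label, then weight
    the resulting cross-entropy by the schedule coefficient alpha(t)."""
    p = np.asarray(p_unlabeled, dtype=float)
    pseudo = np.eye(len(p))[np.argmax(p)]  # one-hot pseudo-label
    return alpha_t * cross_entropy(pseudo, p)

print(round(pseudo_label_step([0.7, 0.2, 0.1], alpha_t=1.0), 4))  # -> 0.3567
```

The schedule $\alpha(t)$ is typically ramped up slowly so that noisy early pseudo-labels do not dominate training.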
\textbf{Noisy Student.}
Noisy Student \cite{DBLP:conf/cvpr/XieLHL20} proposes a semi-supervised method inspired by knowledge distillation \cite{DBLP:journals/corr/HintonVD15} with equal-or-larger student models.
The framework is shown in Fig.~\ref{fig:pseudoLabel}(4). The teacher EfficientNet \cite{DBLP:conf/icml/TanL19} model is first trained
on labeled images to generate pseudo labels for unlabeled examples. Then a larger EfficientNet model as a student is trained on the combination of labeled and pseudo-labeled examples. These combined instances are augmented using data augmentation techniques such as RandAugment \cite{DBLP:conf/cvpr/CubukZSL20}, and model noise such as Dropout and stochastic depth are also incorporated in the student model during training. After a few iterations of this algorithm, the student model becomes the new teacher to relabel the unlabeled data and this process is repeated.
\textbf{$\bm{S^4L}$.}
Self-supervised Semi-supervised Learning ($S^4L$) \cite{DBLP:conf/iccv/BeyerZOK19} tackles the problem of SSL by employing self-supervised learning \cite{DBLP:conf/cvpr/KolesnikovZB19} techniques to learn useful representations from the image databases. The architecture of $S^4L$ is shown in Fig.~\ref{fig:pseudoLabel}(5). The conspicuous self-supervised techniques are predicting image rotation \cite{DBLP:conf/iclr/GidarisSK18} and exemplar \cite{DBLP:conf/nips/DosovitskiySRB14,DBLP:journals/pami/DosovitskiyFSRB16}. Predicting image rotation is a pretext task that anticipates an angle of the rotation transformation applied to an input example. In $S^4L$, there are four rotation degrees $\{0^{\circ}, 90^{\circ} , 180^{\circ}, 270^{\circ}\}$ to rotate input images. The $S^4L$-Rotation loss is the cross-entropy loss on outputs predicted by those rotated images. $S^4L$-Exemplar introduces an exemplar loss which encourages the model to learn a representation that is invariant to heavy image augmentations. Specifically, eight different instances of each image are produced by inception cropping \cite{DBLP:conf/cvpr/SzegedyLJSRAEVR15}, random horizontal mirroring, and HSV-space color randomization as in \cite{DBLP:conf/nips/DosovitskiySRB14}. Following \cite{DBLP:conf/cvpr/KolesnikovZB19}, the loss term on unsupervised images uses the batch hard triplet loss \cite{DBLP:journals/corr/HermansBL17} with a soft margin.
\textbf{MPL.}
In SSL, the target distributions on unlabeled data are often generated by a fixed teacher model trained on labeled data. Such constructions of target distributions are heuristics designed prior to training and cannot adapt to the learning state of the network being trained. Meta Pseudo Labels (MPL) \cite{DBLP:journals/corr/abs-2003-10580} instead designs a teacher model that assigns distributions to input examples to train the student model. Throughout the course of the student's training, the teacher observes the student's performance on a held-out validation set and learns to generate target distributions so that, if the student learns from such distributions, the student will achieve good validation performance. The training procedure of MPL involves two alternating processes. As shown in Fig.~\ref{fig:pseudoLabel}(6), the teacher $g_{\phi}(\cdot)$ produces the conditional class distribution $g_{\phi}(x)$ to train the student. The pair $(x,g_{\phi}(x))$ is then fed into the student network $f_{\theta}(\cdot)$, which updates its parameters $\theta$ from the cross-entropy loss. After the student network updates its parameters, the model evaluates the new parameters $\theta'$ on samples from the held-out validation dataset. Since the new parameters of the student depend on the teacher, this dependency allows us to compute the gradient of the validation loss to update the teacher's parameters.
\textbf{EnAET.} Different from the previous semi-supervised methods and $S^4L$ \cite{DBLP:conf/iccv/BeyerZOK19}, EnAET
\cite{DBLP:journals/tip/WangKLQ21} trains an ensemble of auto-encoding transformations to enhance the learning ability of the model.
The core of this framework is that EnAET integrates an ensemble of spatial and non-spatial transformations to self-train a good feature representation \cite{DBLP:conf/cvpr/ZhangQWL19}. EnAET incorporates four spatial transformations and a combined non-spatial transformation. The spatial transformations are the projective, affine, similarity, and Euclidean transformations. The non-spatial transformation combines different color, contrast, brightness, and sharpness adjustments with four strength parameters. As shown in Fig.~\ref{fig:pseudoLabel}(7), EnAET learns an encoder $E: x\rightarrow E(x), t(x)\rightarrow E(t(x))$ on an original instance and its transformations. Meanwhile, a decoder $D:[E(x), E(t(x))]\rightarrow \hat{t}$ is learned to estimate the transformation $\hat{t}$ applied to the input. Then we obtain the AutoEncoding Transformation (AET) loss,
\begin{equation}
\mathcal{L}_{AET}=\mathbb{E}_{x,t(x)}\|D[E(x),E(t(x))]-t(x)\|^2,
\end{equation}
and EnAET adds the AET loss to the SSL loss as a regularization term. Apart from the AET loss, EnAET explores pseudo-labeling consistency by minimizing the KL divergence between $P(y|x)$ on an original sample $x$ and $P(y|t(x))$ on its transformation $t(x)$.
\textbf{SimCLRv2.}
SimCLRv2 \cite{DBLP:conf/nips/ChenKSNH20} modifies
SimCLR \cite{DBLP:conf/icml/ChenK0H20} for SSL problems. Following the paradigm of supervised fine-tuning after unsupervised pretraining, SimCLRv2 uses unlabeled samples in a task-agnostic way, and shows that a big (deep and wide) model can be surprisingly effective for semi-supervised learning. As shown in Fig.~\ref{fig:pseudoLabel}(8), SimCLRv2 can be summarized in three steps: unsupervised or self-supervised pre-training, supervised fine-tuning on 1\% or 10\% labeled samples, and self-training with task-specific unlabeled examples. In the pre-training step, SimCLRv2 learns representations by minimizing a contrastive loss in latent space, constructed from the consistency between differently augmented views of the same example. The contrastive loss is
\begin{equation}
\ell_{i,j}=-\log \frac{\exp(sim(z_i, z_j)/\tau)}{\sum_{k=1}^{2N}\mathbb{I}_{[k\neq i]}\exp (sim(z_i,z_k)/\tau)},
\end{equation}
where $(i,j)$ is a pair of positive examples augmented from the same sample, $sim(\cdot, \cdot)$ is the cosine similarity, and $\tau$ is a temperature parameter. In the self-training step, SimCLRv2 uses task-specific unlabeled samples, and the fine-tuned network acts as a teacher model to minimize the following distillation loss,
\begin{equation}
\mathcal{L}^{\text{distill}}=-\sum_{x_i \in X}\big[\sum_y P^T(y|x_i; \tau)\log P^S(y|x_i;\tau)\big],
\end{equation}
where $P^T(y|x_i;\tau)$ and $P^S(y|x_i;\tau)$ are produced by the teacher network and the student network, respectively.
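The NT-Xent contrastive loss above can be sketched as follows (the toy embeddings are our own; SimCLRv2's encoder and projection head are omitted):

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent loss for 2N embeddings where rows (2k, 2k+1) are positive pairs."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity prep
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude the k == i term
    n2 = len(z)
    pos = np.arange(n2) ^ 1                            # index of each positive
    logits = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logits[np.arange(n2), pos].mean()

rng = np.random.default_rng(1)
base = rng.normal(size=(4, 8))                    # N = 4 underlying samples
views = np.repeat(base, 2, axis=0)                # two views per sample
aligned = views + 0.01 * rng.normal(size=views.shape)  # views nearly identical
shuffled = rng.normal(size=views.shape)                # unrelated "views"

print(nt_xent(aligned) < nt_xent(shuffled))       # aligned pairs -> lower loss
```

The loss is low exactly when the two augmented views of each sample are mapped close together relative to all other embeddings in the batch.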
\textbf{Summary.} In general, self-training is a way to obtain more training data by generating pseudo-labels for unlabeled data through a series of operations. Both EntMin \cite{DBLP:conf/nips/GrandvaletB04} and Pseudo-label \cite{Lee2013PseudoLabelT} use entropy minimization to take the pseudo-label with the highest confidence as the ground truth for unlabeled data. Noisy Student \cite{DBLP:conf/cvpr/XieLHL20} utilizes a variety of techniques when training the student network, such as data augmentation, dropout and stochastic depth. $S^4L$ \cite{DBLP:conf/iccv/BeyerZOK19} not only uses data augmentation techniques, but also adds another four-category task to improve model performance. MPL \cite{DBLP:journals/corr/abs-2003-10580} modifies Pseudo-label \cite{Lee2013PseudoLabelT} by deriving the teacher network's update rule from the feedback of the student network. Emerging techniques (\latinphrase{e.g.}\xspace, rich data augmentation strategies, meta-learning, self-supervised learning) and network architectures (\latinphrase{e.g.}\xspace, EfficientNet \cite{DBLP:conf/icml/TanL19}, SimCLR \cite{DBLP:conf/icml/ChenK0H20}) provide powerful support for the development of self-training methods.
\section{Hybrid methods}\label{sec:hybridModel}
Hybrid methods combine ideas from the above-mentioned approaches, such as pseudo-labeling, consistency regularization and entropy minimization, for performance improvement. Moreover, a learning principle, namely Mixup \cite{DBLP:conf/iclr/ZhangCDL18}, is introduced in these hybrid methods. It can be considered a simple, data-agnostic data augmentation approach: a convex combination of paired samples and their respective labels. Formally, Mixup constructs virtual training examples,
\begin{equation}
\tilde{x}=\lambda x_i+(1-\lambda)x_j,\quad
\tilde{y}=\lambda y_i+(1-\lambda)y_j,
\end{equation}
where $(x_i,y_i)$ and $(x_j,y_j)$ are two instances from the training data, and $\lambda \in [0,1]$.
Therefore, Mixup extends the training data set by a hard constraint that linear interpolations of samples should lead to the linear interpolations of the corresponding labels.
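The Mixup construction admits a minimal sketch (a toy two-class, one-hot example of ours):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.75):
    """Mixup: convex combination of two samples and their one-hot labels."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

random.seed(0)
xa, ya = [1.0, 0.0], [1.0, 0.0]   # class 0, one-hot
xb, yb = [0.0, 1.0], [0.0, 1.0]   # class 1, one-hot
x, y, lam = mixup(xa, ya, xb, yb)

print(round(sum(y), 6))           # the mixed label is still a distribution
```

Because the same $\lambda$ interpolates both inputs and labels, the mixed label remains a valid probability distribution, which is what lets Mixup-based hybrid methods train with standard losses.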
\begin{figure*}[!t]
\centering
\includegraphics[width=7.0in]{images/HybridModel.pdf}
\caption{A glimpse of the diverse range of architectures used for hybrid semi-supervised methods. ``Mixup''
is the Mixup operator \cite{DBLP:conf/iclr/ZhangCDL18}. ``MixMatch''
is MixMatch \cite{DBLP:conf/nips/BerthelotCGPOR19} in figure (2). ``GMM''
is short for Gaussian Mixture Model. ``SAug'' and ``WAug''
represent
Strong Augmentation and Weak Augmentation, respectively. }
\label{fig:hybridModel}
\end{figure*}
\textbf{ICT.}
Interpolation Consistency Training (ICT) \cite{DBLP:conf/ijcai/VermaLKBL19} regularizes SSL by encouraging the prediction at an interpolation of two unlabeled examples to be consistent with the interpolation of the predictions at those points. The architecture is shown in Fig.~\ref{fig:hybridModel}(1). This method is inspired by the low-density separation assumption, since Mixup can achieve broad-margin decision boundaries. In semi-supervised settings, ICT extends Mixup by training the model $f(\theta, x)$ to predict the ``fake label'' $\text{Mix}_{\lambda}(f(\theta', x_i),f(\theta', x_j))$ at location $\text{Mix}_{\lambda}(x_i,x_j)$ with the Mixup operation $\text{Mix}_{\lambda}(a,b)=\lambda a+(1-\lambda) b$, where $\theta'$ is a moving average of $\theta$, as in \cite{DBLP:conf/nips/TarvainenV17}, for a more conservative consistency regularization. Thus, the ICT term is
\begin{equation}
\mathbb{E}_{x\in X}\mathcal{R}(f(\theta, \text{Mix}_{\lambda}(x_i,x_j)), \text{Mix}_{\lambda}(f(\theta', x_i),f(\theta',x_j))).
\end{equation}
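The ICT term can be sketched with scalar models (for simplicity the teacher here equals the student rather than its moving average, and the linear/nonlinear pair is our own toy choice):

```python
def mse(a, b): return (a - b) ** 2

def ict_term(f_student, f_teacher, xi, xj, lam):
    """R( f(Mix(xi,xj)), Mix(f'(xi), f'(xj)) ) with Mix(a,b) = lam*a+(1-lam)*b."""
    mixed_x = lam * xi + (1 - lam) * xj
    mixed_pred = lam * f_teacher(xi) + (1 - lam) * f_teacher(xj)
    return mse(f_student(mixed_x), mixed_pred)

linear = lambda x: 2 * x + 1
bumpy = lambda x: 2 * x + 1 + x * x   # violates interpolation consistency

xi, xj, lam = 0.0, 1.0, 0.5
print(ict_term(linear, linear, xi, xj, lam))   # 0.0: linear f is self-consistent
print(ict_term(bumpy, bumpy, xi, xj, lam) > 0)
```

The term is zero whenever the model behaves linearly between the two points, so minimizing it discourages abrupt decision-boundary changes between unlabeled samples.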
\textbf{MixMatch.}
MixMatch \cite{DBLP:conf/nips/BerthelotCGPOR19} combines consistency regularization and entropy minimization in a unified loss function. This model operates by producing pseudo-labels for each unlabeled instance and then training on the original labeled data together with the pseudo-labeled unlabeled data using fully-supervised techniques. The main aim of this algorithm is to create the collections $X'_L$ and $X'_U$, which are made up of augmented labeled and unlabeled samples generated using Mixup. Formally, MixMatch produces an augmentation of each labeled instance $(x_i,y_i)$ and $K$ weakly augmented versions of each unlabeled instance $x_j$ with $k \in \{1,\ldots, K\}$. Then, it generates a pseudo-label $\bar{y}_j$ for each $x_j$ by computing the average prediction across the $K$ augmentations. The pseudo-label distribution is then sharpened by temperature scaling to obtain the final pseudo-label $\tilde{y}_j$. After the data augmentation, the batches of augmented labeled examples and pseudo-labeled unlabeled examples are combined, and the whole group is shuffled. This group is divided into two parts: the first $L$ samples are taken as $\mathcal{W}_L$, and the remaining ones as $\mathcal{W}_U$. The group $\mathcal{W}_L$ and the augmented labeled batch $\tilde{X}_L$ are fed into the Mixup algorithm to compute examples $(x',y')$, where $x'=\lambda x_1+(1-\lambda)x_2$ for $\lambda \sim \text{Beta}(\alpha, \alpha)$. Similarly, Mixup is applied between the remaining $\mathcal{W}_U$ and the augmented unlabeled group $\tilde{X}_U$. Given these mixed-up samples, MixMatch conducts traditional fully-supervised training with a standard cross-entropy loss for the labeled data and a mean squared error for the unlabeled data.
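The label-guessing and sharpening steps can be sketched as follows (the prediction values are illustrative):

```python
import numpy as np

def sharpen(p, T=0.5):
    """Temperature sharpening used by MixMatch: p_i^(1/T) / sum_j p_j^(1/T)."""
    p = p ** (1.0 / T)
    return p / p.sum()

def guess_label(preds_over_augs):
    """Average the K augmented predictions, then sharpen the average."""
    return sharpen(np.mean(preds_over_augs, axis=0))

# model predictions on K = 2 weak augmentations of one unlabeled image
preds = np.array([[0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1]])
q = guess_label(preds)

print(q.argmax(), round(float(q.sum()), 6))  # sharpened toward dominant class
```

Averaging stabilizes the guess across augmentations, while sharpening ($T<1$) lowers the entropy of the pseudo-label, which is where the entropy-minimization effect enters MixMatch.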
\textbf{ReMixMatch.}
ReMixMatch \cite{DBLP:conf/iclr/BerthelotCCKSZR20} extends MixMatch \cite{DBLP:conf/nips/BerthelotCGPOR19} by introducing distribution alignment and augmentation anchoring. Distribution alignment encourages the marginal distribution of aggregated class predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. Augmentation anchoring replaces the consistency regularization component of MixMatch. This technique generates multiple strongly augmented versions of an input and encourages each output to be close to the prediction for a weakly-augmented variant of the same input. A variant of AutoAugment \cite{DBLP:conf/cvpr/CubukZMVL19} dubbed ``CTAugment'' is also proposed to produce strong augmentations, which learns the augmentation policy alongside the model training. As the procedure of ReMixMatch presented in Fig.~\ref{fig:hybridModel}(3) shows, an ``anchor''
is generated by applying weak augmentation to a given unlabeled example, followed by $K$ strongly-augmented versions of the same unlabeled example using CTAugment.
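Distribution alignment admits a very small sketch (the marginals below are hypothetical):

```python
import numpy as np

def align(q, p_true, p_model):
    """Distribution alignment: scale q by p(y)/p_model(y), then renormalize."""
    q = q * (p_true / p_model)
    return q / q.sum()

p_true = np.array([0.5, 0.5])     # marginal of ground-truth labels
p_model = np.array([0.8, 0.2])    # running average of model predictions
q = np.array([0.8, 0.2])          # raw prediction on one unlabeled sample

q_aligned = align(q, p_true, p_model)
print(q_aligned)                  # pushed back toward the true marginal
```

When the model's running average over-predicts a class, the ratio $p(y)/\tilde{p}_{\text{model}}(y)$ down-weights that class in each pseudo-label, counteracting the drift.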
\textbf{DivideMix.}
DivideMix \cite{DBLP:conf/iclr/LiSH20} presents a new SSL framework to handle the problem of learning with noisy labels. As shown in Fig.~\ref{fig:hybridModel}(4), DivideMix proposes co-divide, a process that trains two networks simultaneously. For each network, a dynamic Gaussian Mixture Model (GMM) is fitted on the per-sample loss distribution to divide the training set into labeled data and unlabeled data. The data division from one network is then used to train the other network in the next epoch. In the follow-up SSL process, co-refinement and co-guessing are used to improve MixMatch \cite{DBLP:conf/nips/BerthelotCGPOR19} and solve the problem of learning with noisy labels.
\textbf{FixMatch.}
FixMatch \cite{DBLP:conf/nips/SohnBCZZRCKL20} combines consistency regularization and pseudo-labeling while vastly simplifying the overall method.
The key innovation comes from the combination of these two ingredients, and the use of separate weak and strong augmentations in the consistency regularization approach. Given an instance, only when the model predicts a high-confidence label can the predicted pseudo-label be treated as ground truth. As shown in Fig.~\ref{fig:hybridModel}(5), given an instance $x_j$, FixMatch first generates the pseudo-label $\hat{y}_j$ for the weakly-augmented unlabeled instance $\hat{x}_j$. The model is then trained to predict this pseudo-label when fed the strongly-augmented version of $x_j$.
In FixMatch, weak augmentation is a standard flip-and-shift augmentation strategy, randomly flipping images horizontally with a given probability. For strong augmentation, there are two approaches based on \cite{DBLP:conf/cvpr/CubukZMVL19}, \latinphrase{i.e.}\xspace, RandAugment \cite{DBLP:conf/cvpr/CubukZSL20} and CTAugment \cite{DBLP:conf/iclr/BerthelotCCKSZR20}. Moreover, Cutout \cite{DBLP:journals/corr/abs-1708-04552} is followed by either of these strategies.
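The masked pseudo-label loss of FixMatch can be sketched as follows (probabilities and the threshold value are illustrative; FixMatch computes them from the model's outputs on the weak and strong views):

```python
import numpy as np

def fixmatch_unlabeled_loss(p_weak, p_strong, threshold=0.95):
    """Cross-entropy on strong views against confident weak-view pseudo-labels."""
    conf = p_weak.max(axis=1)
    pseudo = p_weak.argmax(axis=1)
    mask = conf >= threshold                       # keep confident samples only
    ce = -np.log(p_strong[np.arange(len(pseudo)), pseudo] + 1e-9)
    return (mask * ce).mean(), mask

p_weak = np.array([[0.97, 0.03],    # confident -> contributes to the loss
                   [0.60, 0.40]])   # below threshold -> masked out
p_strong = np.array([[0.70, 0.30],
                     [0.10, 0.90]])

loss, mask = fixmatch_unlabeled_loss(p_weak, p_strong)
print(mask.tolist())                # [True, False]
```

The confidence mask is what keeps low-quality pseudo-labels from polluting training early on, while the weak-to-strong asymmetry supplies the consistency signal.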
\textbf{Summary.}
As discussed above, the hybrid methods unite the most successful approaches in SSL, such as pseudo-labeling, entropy minimization and consistency regularization, and adapt them to achieve state-of-the-art performance. In TABLE~\ref{tab:augmentation}, we summarize some techniques that can be used in consistency training to improve model performance.
\begin{table}
\caption{Summary of Input Augmentations and Neural Network Transformations}
\label{tab:augmentation}
\centering
\begin{tabular}{>{\centering\arraybackslash}m{1.8cm}|>{\centering\arraybackslash}m{1.7cm}|>{\centering\arraybackslash}m{4cm}}
\hline
& \textbf{Techniques} & \textbf{Methods} \\
\hline
\multirow{7}{1.8cm}{Input augmentations}
& Additional Noise & Ladder Network \cite{DBLP:conf/nips/RasmusBHVR15}, WCP \cite{DBLP:conf/cvpr/ZhangLH20WCP}\\
\cline{2-3}
& Stochastic Augmentation &$\Pi$ Model \cite{DBLP:conf/nips/SajjadiJT16}, Temporal Ensembling \cite{DBLP:conf/iclr/LaineA17}, Mean Teacher \cite{DBLP:conf/nips/TarvainenV17}, Dual Student \cite{DBLP:conf/iccv/KeWYRL19}, MixMatch \cite{DBLP:conf/nips/BerthelotCGPOR19}, ReMixMatch \cite{DBLP:conf/iclr/BerthelotCCKSZR20}, FixMatch \cite{DBLP:conf/nips/SohnBCZZRCKL20} \\
\cline{2-3}
& Adversarial perturbation & VAT \cite{DBLP:journals/pami/MiyatoMKI19}, VAdD \cite{DBLP:conf/aaai/ParkPSM18} \\
\cline{2-3}
& AutoAugment & UDA \cite{DBLP:conf/nips/XieDHL020}, Noisy Student \cite{DBLP:conf/cvpr/XieLHL20} \\
\cline{2-3}
& RandAugment & UDA \cite{DBLP:conf/nips/XieDHL020}, FixMatch \cite{DBLP:conf/nips/SohnBCZZRCKL20} \\
\cline{2-3}
& CTAugment & ReMixMatch \cite{DBLP:conf/iclr/BerthelotCCKSZR20}, FixMatch \cite{DBLP:conf/nips/SohnBCZZRCKL20} \\
\cline{2-3}
& Mixup &ICT \cite{DBLP:conf/ijcai/VermaLKBL19}, MixMatch \cite{DBLP:conf/nips/BerthelotCGPOR19}, ReMixMatch \cite{DBLP:conf/iclr/BerthelotCCKSZR20}, DivideMix \cite{DBLP:conf/iclr/LiSH20}\\
\hline
\multirow{5}{1.8cm}{Neural network transformations}
& Dropout &$\Pi$ Model \cite{DBLP:conf/nips/SajjadiJT16}, Temporal Ensembling \cite{DBLP:conf/iclr/LaineA17}, Mean Teacher \cite{DBLP:conf/nips/TarvainenV17}, Dual Student \cite{DBLP:conf/iccv/KeWYRL19}, VAdD \cite{DBLP:conf/aaai/ParkPSM18}, Noisy Student~\cite{DBLP:conf/cvpr/XieLHL20} \\
\cline{2-3}
& EMA & Mean Teacher \cite{DBLP:conf/nips/TarvainenV17}, ICT \cite{DBLP:conf/ijcai/VermaLKBL19} \\
\cline{2-3}
& SWA &SWA \cite{DBLP:conf/iclr/AthiwaratkunFIW19} \\
\cline{2-3}
& Stochastic depth & Noisy Student \cite{DBLP:conf/cvpr/XieLHL20} \\
\cline{2-3}
& DropConnect &WCP \cite{DBLP:conf/cvpr/ZhangLH20WCP} \\
\hline
\end{tabular}
\end{table}
\section{Challenges and future directions}\label{sec:future_trends}
Although DSSL has achieved exceptional performance and promising progress, several open research questions remain for future work. We outline some of these issues and future directions below.
\textbf{Theoretical analysis.} Existing semi-supervised approaches predominantly use unlabeled samples to generate constraints and then update the model with labeled data and these constraints. However, the internal mechanism of DSSL and the roles of various techniques, such as data augmentations, training methods and loss functions, \latinphrase{etc.}\xspace, are not yet clear.
Generally, there is a single weight to balance the supervised and unsupervised loss, which means that all unlabeled instances are equally weighted. However, not all unlabeled data is equally appropriate for the model in practice. To counter this issue, \cite{DBLP:conf/nips/RenYS20} considers how to use a different weight for every unlabeled example.
For consistency regularization SSL, \cite{DBLP:conf/iclr/AthiwaratkunFIW19} investigates how loss geometry interacts with the training process. \cite{DBLP:conf/nips/ZophGLCLC020} experimentally explores the effects of data augmentation and labeled dataset size on pre-training and self-training, as well as the limitations and interactions of pre-training and self-training. \cite{DBLP:conf/iclr/GhoshT21} analyzes the properties of consistency regularization methods when data instances lie in the neighborhood of low-dimensional manifolds, especially in the case of efficient data augmentation or perturbation schemes.
\textbf{Incorporation of domain knowledge.} Most of the SSL approaches listed above can obtain satisfactory results only in ideal situations in which the training dataset meets the designed assumptions and contains sufficient information to learn an insightful learning system. However, in practice, the distribution of the dataset is unknown and does not necessarily meet these ideal conditions. When the labeled and unlabeled data do not come from the same distribution, or the model assumptions are incorrect, the more unlabeled data is utilized, the worse the performance will be. Therefore, we can attempt to incorporate richer and more reliable domain knowledge into the model to mitigate the performance degradation. Recent works \cite{DBLP:conf/acl/HuMLHX16,DBLP:journals/pami/TangWWGDGC18,DBLP:conf/cvpr/QiWQL19,DBLP:journals/mlc/YuYZ19, DBLP:journals/corr/abs-2008-03923} have been proposed in this direction for DSSL.
\textbf{Learning with noisy labels.} The models discussed in this survey typically assume that the labeled data is accurate, and learn with a standard cross-entropy loss function. An interesting consideration is to explore how SSL can be performed when labeled instances carry noisy initial labels.
For example, the labeling of samples may be contributed by the community, so we can only obtain noisy labels for the training dataset. One solution to this problem is \cite{DBLP:journals/corr/ReedLASER14}, which augments the prediction objective with a consistency term encouraging the same prediction to be made given similar percepts. Based on graph SSL, \cite{DBLP:journals/pr/LuW15a} introduces a new $L_1$-norm formulation of Laplacian regularization inspired by sparse coding. \cite{DBLP:conf/nips/HanYYNXHTS18} deals with this problem from the perspective of memorization effects, proposing a learning paradigm that combines co-training and mean teacher.
\textbf{Imbalanced semi-supervised learning.} The problem of class imbalance is naturally widespread in real-world applications. When the training data is highly imbalanced, most learning frameworks will show bias towards the majority class, and in some extreme cases may completely ignore the minority class \cite{DBLP:journals/jbd/JohnsonK19}; as a result, the efficiency of predictive models will be significantly affected. Nevertheless, to handle the semi-supervised problem, it is commonly assumed that the training dataset is uniformly distributed over all class labels. Recently, more and more works have focused on this problem. \cite{DBLP:conf/nips/KimHPYHS20} aligns pseudo-labels with the desirable class distribution in the unlabeled data for SSL with imbalanced labeled data. Based on graph-based semi-supervised methods, \cite{DBLP:journals/pr/DengY21} copes with various degrees of class imbalance in a given dataset.
\textbf{Robust semi-supervised learning.} A common feature of the latest state-of-the-art approaches is the application of consistency training on augmented unlabeled data without changing the model predictions. One attempt is made by VAT \cite{DBLP:journals/pami/MiyatoMKI19} and VAdD \cite{DBLP:conf/aaai/ParkPSM18}; both employ adversarial training to find the optimal adversarial example. Another promising approach is data augmentation (adding noise or random perturbations, CutOut \cite{DBLP:journals/corr/abs-1708-04552}, RandomErasing \cite{DBLP:conf/aaai/Zhong0KL020}, HideAndSeek \cite{DBLP:journals/corr/abs-1811-02545} and GridMask \cite{DBLP:journals/corr/abs-2001-04086}), especially advanced data augmentation such as AutoAugment \cite{DBLP:conf/cvpr/CubukZMVL19}, RandAugment \cite{DBLP:conf/cvpr/CubukZSL20}, CTAugment \cite{DBLP:conf/iclr/BerthelotCCKSZR20}, and Mixup \cite{DBLP:conf/iclr/ZhangCDL18}, which can also be considered a form of regularization.
\textbf{Safe semi-supervised learning.} In SSL, it is generally accepted that unlabeled data can help improve learning performance, especially when labeled data is scarce. Although it is remarkable that unlabeled data can improve learning performance under appropriate assumptions or conditions, some empirical studies \cite{DBLP:conf/nips/SinghNZ08,DBLP:journals/pami/YangP11,DBLP:journals/jair/ChawlaK05} have shown that the use of unlabeled data may lead to performance degeneration, making the generalization performance even worse than that of a model learned only with labeled data in real-world applications. Thus, safe semi-supervised learning approaches are desired, which never significantly degrade learning performance when unlabeled data is used.
\section{Conclusion}\label{sec:conclusion}
Deep semi-supervised learning is a promising research field with important real-world applications. The success of deep learning approaches has led to the rapid growth of DSSL techniques. This survey provides a taxonomy of existing DSSL methods, and groups DSSL methods into five categories: Generative models, Consistency Regularization models, Graph-based models, Pseudo-labeling models, and Hybrid models. We provide illustrative figures to compare the differences between the approaches within the same category. Finally, we discuss the challenges of DSSL and some future research directions that are worth further study.
%
%
%
%
%
%
%
\ifCLASSOPTIONcompsoc
\section*{Acknowledgment}
\fi
This paper was partially supported by the National Key Research and Development Program of China (No. 2018AAA0100204), and a key program of fundamental research from Shenzhen Science and Technology Innovation Commission (No. JCYJ20200109113403826).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
%
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec_Introduction}
Visual object tracking has received increasing attention over the last decades and has remained a very active research direction. It has a wide range of applications in diverse fields such as visual surveillance \cite{ICPR10HumanTrack}, human-computer interaction \cite{ICIP12HandPose}, and augmented reality \cite{CVPR15TrackSLAM}. Although much progress has been made recently, visual tracking is still commonly recognized as a very challenging task due to numerous factors such as illumination variation, occlusion, and background clutter, to name a few \cite{PAMI15OTB}.
Recently, the Siamese network based trackers \cite{CVPR16SINT, ECCV16SiamFC, ECCV16GOTURN, arXiv17DCFNet, CVPR17CFNet, CVPR18SiamRPN, CVPR18RASTrack, ECCV18DaSiamRPN, IJCAI18EDCF} have drawn much attention in the community. These Siamese trackers formulate the visual object tracking problem as learning a general similarity map by cross-correlation between the feature representations learned for the target template and the search region. To ensure tracking efficiency, the offline learned Siamese similarity function is often fixed during running time \cite{CVPR16SINT, ECCV16SiamFC, ECCV16GOTURN}. The CFNet tracker \cite{CVPR17CFNet} and DSiam tracker \cite{ICCV17DSiam} update the tracking model via a running average template and a fast transformation module, respectively. The SiamRPN tracker \cite{CVPR18SiamRPN} introduces a region proposal network after the Siamese network and performs joint classification and regression for tracking. The DaSiamRPN tracker \cite{ECCV18DaSiamRPN} further introduces a distractor-aware module and improves the discrimination power of the model.
Although the above Siamese trackers have obtained outstanding tracking performance, especially the well-balanced accuracy and speed, the accuracy of even the best-performing Siamese trackers, such as SiamRPN, still has a notable gap with the state-of-the-arts \cite{CVPR17ECO} on tracking benchmarks like OTB2015 \cite{PAMI15OTB}. We observe that all these trackers have built their networks upon architectures similar to AlexNet \cite{NIP12AlexNet}, and that repeated attempts to train a Siamese tracker with more sophisticated architectures like ResNet \cite{CVPR16ResNet} yield no performance gain. Inspired by this observation, we perform an analysis of existing Siamese trackers and find that the core reason comes from the destruction of strict translation invariance. Since the target may appear at any position in the search region, the learned feature representation for the target template should stay spatially invariant, and we further theoretically find that, among modern deep architectures, only the zero-padding variant of AlexNet satisfies this spatial invariance restriction.
To overcome this restriction and drive the Siamese tracker with more powerful deep architectures, through extensive experimental validations, we introduce a simple yet effective sampling strategy to break the spatial invariance restriction of the Siamese tracker. We successfully train a SiamRPN~\cite{CVPR18SiamRPN} based tracker using ResNet as the backbone network and obtain significant performance improvements. Benefiting from the ResNet architecture, we propose a layer-wise feature aggregation structure for the cross-correlation operation, which helps the tracker to predict the similarity map from features learned at multiple levels. By analyzing the Siamese network structure for cross-correlation, we find that its two network branches are highly imbalanced in terms of parameter number; therefore we further propose a depth-wise separable correlation structure which not only greatly reduces the parameter number in the target template branch, but also stabilizes the training procedure of the whole model.
In addition, an interesting phenomenon is observed: objects in the same category have high responses on the same channels, while the responses of the remaining channels are suppressed. This orthogonal property may also improve the tracking performance.
To summarize, the main contributions of this work are fourfold:
\vspace{-3mm}
\begin{itemize}
\setlength{\itemsep}{5pt}
\setlength{\parsep}{5pt}
\setlength{\parskip}{0pt}
\item We provide a deep analysis of Siamese trackers and prove that, when using deep networks, the decrease in accuracy comes from the destruction of strict translation invariance.
\item We present a simple yet effective sampling strategy to break the spatial invariance restriction which successfully trains Siamese tracker driven by a ResNet architecture.
\item We propose a layer-wise feature aggregation structure for the cross-correlation operation, which helps the tracker to predict the similarity map from features learned at multiple levels.
\item We propose a depth-wise separable correlation structure to enhance the cross-correlation to produce multiple similarity maps associated with different semantic meanings.
\end{itemize}
Based on the above theoretical analysis and technical contributions, we have developed a highly effective and efficient visual tracking model which establishes a new state-of-the-art in terms of tracking accuracy, while running efficiently at 35 FPS. The proposed tracker, referred to as \emph{SiamRPN++}, consistently obtains the best tracking results on five of the largest tracking benchmarks, including OTB2015 \cite{PAMI15OTB}, VOT2018 \cite{VOT18Results}, UAV123 \cite{ECCV16UAV123}, LaSOT \cite{ARX18LaSOT}, and TrackingNet \cite{ECCV18trackingnet}. Furthermore, we propose a fast variant of our tracker using a MobileNet \cite{ARX17mobile} backbone that maintains competitive performance while running at 70 FPS. To facilitate further studies in the visual tracking direction, we will release the source code and trained models of the SiamRPN++ tracker.
\section{Related Work}
\label{sec_RelatedWork}
In this section, we briefly introduce recent trackers, with a special focus on the Siamese network based trackers \cite{CVPR16SINT, ECCV16SiamFC}. Besides, we also describe the recent developments of deep architectures.
Visual tracking has witnessed a rapid boost in the last decade due to the construction of new benchmark datasets \cite{CVPR13OTB, PAMI15OTB, VOT16Results, VOT18Results, ARX18LaSOT, ECCV18trackingnet} and improved methodologies \cite{PAMI15KCF, ICCVW15RAJSSC, ICCV15SRDCF, ICCVW15DeepSRDCF, CVPR15Muster, CVPR16MDNet, ECCV16CCOT, CVPR17ECO, CVPR18RASTrack, ECCV18DaSiamRPN, ECCV18SACFN}. The standardized benchmarks \cite{CVPR13OTB, PAMI15OTB, ARX18LaSOT} provide fair testbeds for comparisons between different algorithms. The annually held tracking challenges \cite{VOT15Results, VOT16Results, VOT17Results, VOT18Results} are consistently pushing forward the tracking performance. With these advancements, many promising tracking algorithms have been proposed. The seminal work by Bolme \etal \cite{CVPR10MOSSE} introduces the Convolution Theorem from the signal processing field into visual tracking and transforms the object template matching problem into a correlation operation in the frequency domain. Owing to this transformation, the correlation filter based trackers gain not only highly efficient running speed, but also increased accuracy if proper features are used \cite{PAMI15KCF, ICIP15JSSC, ICCVW15RAJSSC, CVPR14CN, ICCV15SRDCF}. With the wide adoption of deep learning models in visual tracking, tracking algorithms based on correlation filters with deep feature representations \cite{ECCV16CCOT, CVPR17ECO} have obtained state-of-the-art accuracy on popular tracking benchmarks \cite{CVPR13OTB, PAMI15OTB} and challenges \cite{VOT15Results, VOT16Results, VOT17Results}.
Recently, the Siamese network based trackers have received significant attentions for their well-balanced tracking accuracy and efficiency \cite{CVPR16SINT,ECCV16SiamFC, ECCV16GOTURN, arXiv17DCFNet, CVPR17CFNet, ICCV17DSiamFC, CVPR18SiamRPN, CVPR18RASTrack, ECCV18DaSiamRPN, IJCAI18EDCF}. These trackers formulate visual tracking as a cross-correlation problem and are expected to better leverage the merits of deep networks from end-to-end learning. In order to produce a similarity map from cross-correlation of the two branches, they train a Y-shaped neural network that joins two network branches, one for the object template and the other for the search region. Additionally, these two branches can remain fixed during the tracking phase \cite{CVPR16SINT,ECCV16SiamFC, ECCV16GOTURN, CVPR18RASTrack, CVPR18SiamRPN, ECCV18DaSiamRPN} or updated online to adapt the appearance changes of the target \cite{arXiv17DCFNet, CVPR17CFNet, ICCV17DSiamFC}. The currently state-of-the-art Siamese trackers \cite{CVPR18SiamRPN, ECCV18DaSiamRPN} enhance the tracking performance by a region proposal network after the Siamese network and produce very promising results. However, on the OTB benchmark \cite{PAMI15OTB}, their tracking accuracy still leaves a relatively large gap with state-of-the-art deep trackers like ECO \cite{CVPR17ECO} and MDNet \cite{CVPR16MDNet}.
With the proposal of the modern deep architecture AlexNet by Krizhevsky \etal \cite{NIP12AlexNet} in 2012, the studies of network architectures have been rapidly growing and many sophisticated deep architectures have been proposed, such as VGGNet \cite{ICLR15VGG}, GoogleNet \cite{CVPR15GoogleNet}, ResNet \cite{CVPR16ResNet} and MobileNet \cite{ARX17mobile}. These deep architectures not only provide deeper understanding of the design of neural networks, but also push forward the state of the art of many computer vision tasks like object detection \cite{CVPR18MegDet}, image segmentation \cite{ECCV18SegEDASC}, and human pose estimation \cite{ECCV18PoseDLC}.
In deep visual trackers, the network architecture usually contains no more than five convolutional layers tailored from AlexNet or VGGNet. This phenomenon is explained by the fact that shallow features mostly contribute to the accurate localization of the object \cite{ARX17STrackerAnalysis}.
In this work, we argue that the performance of Siamese trackers can significantly get boosted using deeper models if the model is properly trained with the whole Siamese network.
\section{Siamese Tracking with Very Deep Networks}
\label{sec_SiamResNet}
The most important finding of this work is that the performance of the Siamese network based tracking algorithm can be significantly boosted if it is armed with much deeper networks. However, simply training a Siamese tracker by directly using deeper networks like ResNet does not obtain the expected performance improvement. We find the underlying reason largely involves the intrinsic restrictions of Siamese trackers. Therefore, before the introduction of the proposed SiamRPN++ model, we first give a deeper analysis of the Siamese networks for tracking.
\subsection{Analysis on Siamese Networks for Tracking}
\label{sec_analysis}
The Siamese network based tracking algorithms \cite{CVPR16SINT, ECCV16SiamFC} formulate visual tracking as a cross-correlation problem and learn a tracking similarity map from deep models with a Siamese network structure, one branch for learning the feature representation of the target, and the other for the search area. The target patch is usually given in the first frame of the sequence and can be viewed as an exemplar $\mathbf{z}$. The goal is to find the most similar patch (instance) from a following frame $\mathbf{x}$ in a semantic embedding space $\phi(\cdot)$:
\begin{equation}
f(\mathbf{z}, \mathbf{x})=\phi(\mathbf{z})\ast \phi(\mathbf{x})+b,
\label{SiamProb}
\end{equation}
where $b$ is used to model the offset of the similarity value.
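This matching function can be sketched as a naive 2-D sliding-window cross-correlation (a toy stand-in of ours for the convolutional implementation; the feature maps and target offset below are fabricated for illustration, and the bias $b$ is omitted):

```python
import numpy as np

def xcorr(template, search):
    """Slide the template feature map over the search region (valid mode)."""
    th, tw = template.shape
    sh, sw = search.shape
    out = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return out

phi_z = np.array([[1., 2.],       # exemplar feature phi(z)
                  [3., 4.]])
phi_x = np.zeros((5, 5))          # search feature phi(x)
phi_x[1:3, 2:4] = phi_z           # embed the target at offset (1, 2)

score = xcorr(phi_z, phi_x)       # f(z, x) up to the bias b
print(np.unravel_index(score.argmax(), score.shape))  # peak at the target
```

The peak of the similarity map lands exactly at the target offset, which is why the whole tracking problem reduces to locating the maximum of $f(\mathbf{z}, \mathbf{x})$.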
This simple matching function naturally implies two \emph{intrinsic} restrictions in designing a Siamese tracker.
\vspace{-1mm}
\begin{itemize}
\setlength{\itemsep}{-1pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item The contracting part and the feature extractor used in Siamese trackers have an intrinsic restriction for \emph{strict translation invariance}, $f(\mathbf{z}, \mathbf{x}[\bigtriangleup\tau_j])=f(\mathbf{z}, \mathbf{x})[\bigtriangleup\tau_j]$, where $[\bigtriangleup\tau_j]$ is the translation shift sub-window operator; this restriction ensures efficient training and inference.
\item The contracting part has an intrinsic restriction for \emph{structure symmetry}, \ie~$f(\mathbf{z}, \mathbf{x}')=f(\mathbf{x}', \mathbf{z})$, which is appropriate for the similarity learning.
\vspace{0mm}
\end{itemize}
After detailed analysis, we find that the core reason preventing Siamese trackers from using deep networks involves these two aspects. Concretely, one reason is that padding in deep networks destroys the strict translation invariance. The other is that the RPN requires \emph{asymmetrical} features for classification and regression. We introduce a spatial aware sampling strategy to overcome the first problem, and discuss the second problem in Sect. \ref{sec:dw}.
Strict translation invariance only exists in networks without padding, such as the modified AlexNet \cite{ECCV16SiamFC}. Previous Siamese networks \cite{ECCV16SiamFC, arXiv17DCFNet, CVPR17CFNet, CVPR18SiamRPN, ECCV18DaSiamRPN} were designed to be shallow to satisfy this restriction.
However, if the employed networks are replaced by modern networks like ResNet or MobileNet, padding is inevitable to make the network deeper, which destroys the strict translation invariance. Our hypothesis is that the violation of this restriction leads to a spatial bias.
We test our hypothesis by simulation experiments on a network with padding.
Shift is defined as the max range of translation generated by a uniform distribution in data augmentation.
Our simulation experiments are performed as follows.
First, targets are placed in the center with different shift ranges (0, 16 and 32) in three separate training experiments.
After convergence, we aggregate the heatmaps generated on a test dataset and visualize the results in Fig. \ref{fig:spbias2}.
In the first simulation with zero shift, the probabilities in the border area degrade to zero, showing that a strong center bias is learned regardless of the appearance of the test targets.
The other two simulations show that increasing the shift range gradually prevents the model from collapsing into this trivial solution.
The quantitative results illustrate that the aggregated heatmap of the 32-pixel shift is closer to the location distribution of the test objects.
This shows that the spatial aware sampling strategy effectively alleviates the breaking of the strict translation invariance property caused by networks with padding.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\columnwidth]{sp_bias2.pdf}
\end{center}
\vspace{-3mm}
\caption{Visualization of the prior probabilities of positive samples when using different random translations. The distributions become more uniform after random translations within $\pm32$ pixels.}
\vspace{-3mm}
\label{fig:spbias2}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\columnwidth]{randshift_eao.pdf}
\end{center}
\vspace{-4mm}
\caption{The impacts of the random translation on VOT dataset.}
\vspace{-5mm}
\label{fig:shifteao}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.9\columnwidth]{SiamRPN_fusion_up_cropped.pdf}
\end{center}
\vspace{-3mm}
\caption{Illustration of our proposed framework. Given a target template and a search region, the network outputs a dense prediction by fusing the outputs from multiple Siamese Region Proposal Network (SiamRPN) blocks. Each SiamRPN block is shown on the right.}
\vspace{-2mm}
\label{fig:framework}
\end{figure*}
To avoid putting a strong center bias on objects, we train SiamRPN with a ResNet-50 backbone by the spatial aware sampling strategy.
As shown in Fig. \ref{fig:shifteao}, the performance with zero shift drops to 0.14 on VOT2018; a suitable shift ($\pm 64$ pixels) is vital for training a deep Siamese tracker.
\subsection{ResNet-driven Siamese Tracking}
\label{sec:backbone}
Based on the above analyses, the influence of center bias can be eliminated. Once we eliminate the learning bias to the center location, any off-the-shelf networks~(\emph{e.g.}, MobileNet, ResNet) can be utilized to perform visual tracking after domain adaptation. Moreover, we can adaptively construct the network topology and unveil the performance of \emph{deep} network for visual tracking.
In this subsection, we will discuss how to transfer a deep network into our tracking algorithms. In particular, we conduct our experiments mainly focusing on ResNet-50 \cite{CVPR16ResNet}.
The original ResNet has a large stride of 32 pixels, which is not suitable for dense Siamese network prediction. As shown in Fig.~\ref{fig:framework}, we reduce the effective strides of the last two blocks from 16 and 32 pixels to 8 pixels by modifying the $conv4$ and $conv5$ blocks to have unit spatial stride, and also increase their receptive fields by dilated convolutions~\cite{CVPR15FCN}. An extra $1\times1$ convolution layer is appended to each block output to reduce the channels to 256.
Since the padding of all layers is kept, the spatial size of the template feature increases to 15, which imposes a heavy computational burden on the correlation module. Thus we crop the center $7\times 7$ region \cite{CVPR17CFNet} as the template feature, where each feature cell can still capture the entire target region.
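These modifications can be sketched as follows (a minimal illustration under our own assumptions; `make_unit_stride` and `crop_center` are hypothetical helpers, not the released code):

```python
import torch
import torch.nn as nn

def make_unit_stride(conv: nn.Conv2d, dilation: int) -> nn.Conv2d:
    """Replace a stride-2 3x3 conv (as in conv4/conv5) by a unit-stride
    dilated conv, keeping the receptive field while removing the stride."""
    new = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                    stride=1, padding=dilation, dilation=dilation,
                    bias=conv.bias is not None)
    new.weight.data.copy_(conv.weight.data)
    return new

adjust = nn.Conv2d(2048, 256, kernel_size=1)  # extra 1x1 conv: channels -> 256

def crop_center(feat: torch.Tensor, size: int = 7) -> torch.Tensor:
    """Keep only the center 7x7 cells of the 15x15 padded template feature."""
    h, w = feat.shape[-2:]
    t, l = (h - size) // 2, (w - size) // 2
    return feat[..., t:t + size, l:l + size]

template = torch.randn(1, 2048, 15, 15)     # conv5 output of the template
print(crop_center(adjust(template)).shape)  # torch.Size([1, 256, 7, 7])
```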
Following~\cite{CVPR18SiamRPN}, we use a combination of cross correlation layers and fully convolutional layers to assemble a head module for calculating classification scores (denoted by $\mathcal{S}$) and bounding box regressor (denoted by $\mathcal{B}$). The Siamese RPN blocks are denoted by $\mathcal{P}$.
Furthermore, we find that carefully fine-tuning ResNet boosts the performance. By setting the learning rate of the ResNet extractor to be 10 times smaller than that of the RPN parts, the feature representation becomes more suitable for the tracking task.
Different from traditional Siamese approaches, the parameters of the deep network are jointly trained in an end-to-end fashion. To the best of our knowledge, we are the first to achieve an end-to-end learning on a deep Siamese Network ($>$ 20 layers) for visual tracking.
\vspace{-2mm}
\subsection{Layer-wise Aggregation}
After utilizing deep network like ResNet-50, aggregating different deep layers becomes possible.
Intuitively, visual tracking requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improve inference of recognition and localization.
In previous works that only use shallow networks like AlexNet, multi-level features cannot provide very different representations. In ResNet, however, different layers are much more meaningful, since their receptive fields vary considerably.
Features from earlier layers mainly focus on low-level information such as color and shape, which is essential for localization but lacks semantic information; features from later layers have rich semantic information that is beneficial in challenging scenarios like motion blur and large deformation. The use of this rich hierarchical information is hypothesized to help tracking.
In our network, multi-branch features are extracted to collaboratively infer the target localization. For ResNet-50, we explore multi-level features extracted from the last three residual blocks for our layer-wise aggregation. We refer to these outputs as $\mathcal{F}_3(\mathbf{z})$, $\mathcal{F}_4(\mathbf{z})$, and $\mathcal{F}_5(\mathbf{z})$, respectively. As shown in Fig.~\ref{fig:framework}, the outputs of $conv3$, $conv4$, and $conv5$ are fed into three Siamese RPN modules individually.
Since the output sizes of the three RPN modules have the same spatial resolution, weighted sum is adopted directly on the RPN output. A weighted-fusion layer combines all the outputs.
\begin{equation}
\mathcal{S}_{all} = \sum_{l=3}^{5}\alpha_{l}*\mathcal{S}_{l},\quad \mathcal{B}_{all} = \sum_{l=3}^{5}\beta_{l}*\mathcal{B}_{l}.
\end{equation}
The combination weights are separated for classification and regression since their domains are different. The weight is end-to-end optimized offline together with the network.
In contrast to previous works, our approach does not explicitly combine the convolutional features, but learns the classifiers and regressors separately. Note that with the depth of the backbone network significantly increased, we can achieve substantial gains from the sufficient diversity of visual-semantic hierarchies.
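The weighted fusion can be sketched as a small module with separately learned classification and regression weights (the softmax normalization is our assumption; the paper only specifies an end-to-end learned weighted sum):

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Combine the outputs S_l, B_l of the three SiamRPN blocks
    (l = 3, 4, 5) with separate learned weights for classification
    and regression, optimized offline together with the network."""
    def __init__(self, num_levels: int = 3):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(num_levels))  # cls weights
        self.beta = nn.Parameter(torch.ones(num_levels))   # reg weights

    def forward(self, cls_maps, reg_maps):
        a = torch.softmax(self.alpha, dim=0)  # keeps weights positive
        b = torch.softmax(self.beta, dim=0)
        s_all = sum(w * s for w, s in zip(a, cls_maps))
        b_all = sum(w * r for w, r in zip(b, reg_maps))
        return s_all, b_all

fuse = WeightedFusion()
cls = [torch.randn(1, 10, 25, 25) for _ in range(3)]  # same spatial resolution
reg = [torch.randn(1, 20, 25, 25) for _ in range(3)]
s, bb = fuse(cls, reg)
print(s.shape, bb.shape)  # torch.Size([1, 10, 25, 25]) torch.Size([1, 20, 25, 25])
```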
\begin{figure}[t]
\begin{center}
\subfigure[Cross Correlation Layer]{\includegraphics[height=1.7cm]{./dw1.pdf}\label{cc}\vspace{-5mm}}
\subfigure[Up-Channel Cross Correlation Layer]{\includegraphics[height=2.2cm]{./upcrosscorrelation_cropped.pdf}\label{up}}
\subfigure[Depth-wise Cross Correlation Layer]{\includegraphics[height=2cm]{./dw3.pdf}\label{dw}}
\end{center}
\vspace{-2mm}
\caption{Illustrations of different cross correlation layers. (a) The Cross Correlation (XCorr) layer predicts a single-channel similarity map between the target template and search patches in SiamFC~\cite{ECCV16SiamFC}. (b) The Up-Channel Cross Correlation (UP-XCorr) layer outputs multi-channel correlation features by cascading a heavy convolutional layer with several independent XCorr layers in SiamRPN~\cite{CVPR18SiamRPN}. (c) The Depth-wise Cross Correlation (DW-XCorr) layer predicts multi-channel correlation features between a template and search patches.}
\vspace{-1mm}
\label{fig:correaltion}
\end{figure}
\subsection{Depthwise Cross Correlation}
\label{sec:dw}
The cross correlation module is the core operation for embedding the information of the two branches. SiamFC~\cite{ECCV16SiamFC} utilizes a Cross-Correlation layer to obtain a single-channel response map for target localization. In SiamRPN~\cite{CVPR18SiamRPN}, Cross-Correlation is extended to embed much higher-level information, such as anchors, by adding a huge convolutional layer to scale the channels (UP-XCorr). The heavy up-channel module causes a serious imbalance in the parameter distribution (\emph{i.e.} the RPN module contains 20M parameters while the feature extractor contains only 4M parameters in~\cite{CVPR18SiamRPN}), which makes the training optimization hard in SiamRPN.
In this subsection, we present a lightweight cross correlation layer, named Depthwise Cross Correlation (DW-XCorr), to achieve efficient information association. The DW-XCorr layer contains 10 times fewer parameters than the UP-XCorr used in SiamRPN while the performance is on par with it.
To achieve this, a conv-bn block is adopted to adjust the features from each residual block to suit the tracking task. Crucially, the bounding box prediction and the anchor-based classification are both \emph{asymmetrical}, which is different from SiamFC (see Sect. \ref{sec_analysis}). To encode this difference, the template branch and the search branch pass through two \emph{non-shared} convolutional layers. Then the two feature maps, which have the same number of channels, are correlated channel by channel. Another conv-bn-relu block is appended to fuse the different channel outputs. Finally, the last convolution layer is appended to output the classification or regression result.
By replacing the up-channel cross correlation with depthwise correlation, we can greatly reduce the computational cost and memory usage. In this way, the numbers of parameters on the template and search branches are balanced, making the training procedure more stable.
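A compact way to realize the depthwise cross correlation is a grouped convolution with one group per channel (a sketch under our own assumptions, mirroring common Siamese-tracking implementations):

```python
import torch
import torch.nn.functional as F

def dw_xcorr(z_feat: torch.Tensor, x_feat: torch.Tensor) -> torch.Tensor:
    """Depth-wise cross correlation: correlate template and search features
    channel by channel; the correlation itself adds no parameters."""
    b, c, hz, wz = z_feat.shape
    x = x_feat.reshape(1, b * c, *x_feat.shape[-2:])  # fold batch into channels
    kernel = z_feat.reshape(b * c, 1, hz, wz)         # one 1-channel kernel per channel
    out = F.conv2d(x, kernel, groups=b * c)           # channel-by-channel correlation
    return out.reshape(b, c, *out.shape[-2:])

z = torch.randn(2, 256, 7, 7)    # template features (after the conv-bn block)
x = torch.randn(2, 256, 31, 31)  # search features
print(dw_xcorr(z, x).shape)  # torch.Size([2, 256, 25, 25])
```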
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\linewidth]{./depthwise2.pdf}
\end{center}
\vspace{-1mm}
\caption{Channels of the depthwise correlation output in \textit{conv4}. There are 256 channels in total in \textit{conv4}; however, only a few of them have high responses during tracking. We therefore choose the $148th$, $222nd$, and $226th$ channels as a demonstration; they correspond to the $2nd$, $3rd$, and $4th$ rows in the figure. The first row contains six corresponding search regions from the OTB dataset \cite{PAMI15OTB}. Different channels represent different semantics: the $148th$ channel has high responses on cars, while having low responses on persons and faces. The $222nd$ and $226th$ channels have high responses on persons and faces, respectively.}
\vspace{-1mm}
\label{fig:corr_res}
\end{figure}
Furthermore, an interesting phenomenon is illustrated in Fig.~\ref{fig:corr_res}. Objects of the same category have high responses on the same channels (cars on the $148th$ channel, persons on the $222nd$ channel, and faces on the $226th$ channel), while the responses of the remaining channels are suppressed.
This property can be understood as follows: the channel-wise features produced by the depthwise cross correlation are nearly orthogonal, and each channel represents some semantic information.
We also analyze the heatmaps obtained when using the up-channel cross correlation, and the response maps are less interpretable.
\section{Experimental Results}
\label{sec_experiment}
\subsection{Training Dataset and Evaluation}
\noindent
\textbf{Training}. The backbone network of our architecture~\cite{CVPR16ResNet} is pre-trained on ImageNet~\cite{IJCV15ImageNet} for image labeling, which has proven to be a very good initialization for other tasks~\cite{ICCV17MaskRCNN,CVPR15FCN}. We train the network on the training sets of COCO~\cite{ECCV2014COCO}, ImageNet DET~\cite{IJCV15ImageNet}, ImageNet VID, and the YouTube-BoundingBoxes dataset~\cite{CVPR17YTB} to learn a generic notion of how to measure the similarity between general objects for visual tracking. In both training and testing, we use single-scale images: 127 pixels for template patches and 255 pixels for search regions.
\noindent
\textbf{Evaluation}. We focus on short-term single object tracking on OTB2015~\cite{PAMI15OTB}, VOT2018~\cite{VOT18Results} and UAV123~\cite{ECCV16UAV123}. We use VOT2018-LT~\cite{VOT18Results} to evaluate the long-term setting. In long-term tracking, the object may leave the field of view or become fully occluded for a long period, which is more challenging than short-term tracking. We also analyze the generalization of our method on LaSOT~\cite{ARX18LaSOT} and TrackingNet~\cite{ECCV18trackingnet}, two of the largest recent benchmarks for single object tracking.
\subsection{Implementation Details}
\noindent
\textbf{Network Architecture}. In experiments, we follow~\cite{ECCV18DaSiamRPN} for the training and inference settings. We attach two sibling convolutional layers to the stride-reduced ResNet-50 (Sect.~\ref{sec:backbone}) to perform proposal classification and bounding box regression with 5 anchors. Three randomly initialized $1\times1$ convolutional layers are attached to \textit{conv3}, \textit{conv4}, \textit{conv5} for reducing the feature dimension to 256.
\noindent
\textbf{Optimization}. SiamRPN++ is trained with stochastic gradient descent (SGD). We use synchronized SGD over 8 GPUs with a total of 128 pairs per minibatch (16 pairs per GPU), which takes 12 hours to converge. We use a warmup learning rate of 0.001 for the first 5 epochs to train the RPN branches. For the last 15 epochs, the whole network is trained end-to-end with the learning rate exponentially decayed from 0.005 to 0.0005. A weight decay of 0.0005 and a momentum of 0.9 are used. The training loss is the sum of the classification loss and the standard smooth $L_1$ loss for regression.
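The schedule described above can be sketched as follows (the values come from the text; the exact plumbing is our assumption, not the authors' code):

```python
def lr_at(epoch: int, warmup_epochs: int = 5, total_epochs: int = 20,
          warmup_lr: float = 0.001, start_lr: float = 0.005,
          end_lr: float = 0.0005) -> float:
    """Warmup for 5 epochs (training the RPN branches), then exponential
    decay from 0.005 to 0.0005 over the remaining 15 epochs."""
    if epoch < warmup_epochs:
        return warmup_lr
    t = (epoch - warmup_epochs) / (total_epochs - 1 - warmup_epochs)
    return start_lr * (end_lr / start_lr) ** t

# Backbone learning rate 10x smaller than the RPN head, as described
# earlier, could be expressed via parameter groups, e.g.:
# SGD([{'params': backbone.parameters(), 'lr': lr * 0.1},
#      {'params': rpn.parameters(),      'lr': lr}],
#     momentum=0.9, weight_decay=5e-4)
print(lr_at(0), lr_at(5), round(lr_at(19), 4))  # 0.001 0.005 0.0005
```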
\subsection{Ablation Experiments}
\noindent
\textbf{Backbone Architecture}. The choice of feature extractor is crucial, as the number of parameters and the types of layers directly affect the memory, speed, and performance of the tracker. We compare different network architectures for visual tracking. Fig.~\ref{fig:tpo1auc} shows the performance of using AlexNet, ResNet-18, ResNet-34, ResNet-50, and MobileNet-v2 as backbones. We report performance by the Area Under Curve (AUC) of the success plot on OTB2015 with respect to the top-1 accuracy on ImageNet. We observe that our SiamRPN++ can benefit from \emph{deeper} ConvNets.
Table \ref{tab:ablation} also illustrates that replacing AlexNet with ResNet-50 improves the performance considerably on the VOT2018 dataset. Besides, our experiments show that fine-tuning the backbone is critical, which yields a great improvement in tracking performance.
\begin{figure}[t]
\begin{center}
\includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=0.99\columnwidth]{top1_auc.pdf}
\end{center}
\vspace{-4mm}
\caption{The Top-1 accuracy on ImageNet \textit{vs.} AUC scores on OTB2015.}
\vspace{-2mm}
\label{fig:tpo1auc}
\end{figure}
\renewcommand\arraystretch{1.0}
\setlength{\tabcolsep}{2.2pt}
\begin{table}[t]
\centering
\small
\begin{tabular}{c|ccc|c|c|c|c}
BackBone & L3 & L4 & L5 & Finetune & Corr & VOT2018 & OTB2015 \\
\shline
\multirow{2}{*}{AlexNet} & & & & & UP & 0.332 & 0.658\\
& & & & & DW & 0.355 & 0.666 \\
\hline %
\multirow{2}{*}{ResNet-50} & \cmark & \cmark & \cmark & & UP & 0.371 & 0.664 \\
& \cmark & \cmark & \cmark & \cmark & UP & 0.390 & 0.684\\
\hline %
\multirow{6}{*}{ResNet-50} & \cmark & & & \cmark & DW & 0.331 & 0.669 \\
& & \cmark & & \cmark & DW & 0.374 & 0.678 \\
& & & \cmark & \cmark & DW & 0.320 & 0.646 \\
& \cmark & \cmark & & \cmark & DW & 0.346 & 0.677 \\
& \cmark & & \cmark & \cmark & DW & 0.336 & 0.674 \\
& & \cmark & \cmark & \cmark & DW & 0.383 & 0.683 \\
\hline %
\multirow{2}{*}{ResNet-50} & \cmark & \cmark & \cmark & & DW & 0.395 & 0.673 \\
& \cmark & \cmark & \cmark & \cmark & DW & \first{0.414} & \first{0.696} \\
\end{tabular}
\caption{Ablation study of the proposed tracker on VOT2018 and OTB2015. L3, L4, L5 represent \textit{conv3},\textit{conv4},\textit{conv5}, respectively. Finetune represents whether the backbone is trained offline. Up/DW means Up channel correlation and depthwise correlation.}
\vspace{-3mm}
\label{tab:ablation}
\end{table}
\setlength{\tabcolsep}{.2em}
\begin{table*}[t] \small
\centering
\begin{tabular}{l| c c c c c c c c c c c c}
& DLSTpp & DaSiamRPN & SA\_Siam\_R & CPT & DeepSTRCF & DRT & RCO & UPDT & SiamRPN & MFT & LADCF & \textbf{Ours}\\ %
\shline %
EAO $\uparrow$ & 0.325 & 0.326 & 0.337 & 0.339 & 0.345 & 0.356 & 0.376 & 0.378 & 0.383 & 0.385 & 0.389 & \color{red}\textbf{0.414}\\ %
Accuracy $\uparrow$ & 0.543 & 0.569 & 0.566 & 0.506 & 0.523 & 0.519 & 0.507 & 0.536 & 0.586 & 0.505 & 0.503 & \color{red}\textbf{0.600}\\ %
Robustness $\downarrow$ & 0.224 & 0.337 & 0.258 & 0.239 & 0.215 & 0.201 & 0.155 & 0.184 & 0.276 & \color{red}\textbf{0.140} & 0.159 & 0.234\\ %
\hline %
AO $\uparrow$ & 0.495 & 0.398 & 0.429 & 0.379 & 0.436 & 0.426 & 0.384 & 0.454 &0.472 & 0.393 & 0.421 & \color{red}\textbf{0.498}\\
\end{tabular}
\caption{Comparison with the state-of-the-art in terms of expected average overlap (EAO), robustness (failure rate), and accuracy on the VOT2018 benchmark. We compare with the top-10 trackers and our baseline DaSiamRPN in the competition. Our tracker obtains a significant relative gain of $6.4\%$ in EAO, compared to the top-ranked method (LADCF).}
\vspace{-5mm}
\label{tab:vot18}
\end{table*}
\noindent
\textbf{Layer-wise Feature Aggregation}.
To investigate the impact of layer-wise feature aggregation, we first train three variants with a single RPN on ResNet-50. We empirically find that $conv4$ alone achieves a competitive performance of $0.374$ EAO, while the deeper and shallower layers perform about $4\%$ worse.
When combining two branches, $conv4$ and $conv5$ yield an improvement, whereas no improvement is observed for the other two combinations. Even so, the robustness increases by $10\%$; robustness is the key vulnerability of our tracker, which means that our tracker still has room for improvement.
After aggregating all three layers, both accuracy and robustness steadily improve, with gains between $3.1\%$ and $1.3\%$ on VOT and OTB.
In total, layer-wise feature aggregation yields a $0.414$ EAO score on VOT2018, which is $4.0\%$ higher than that of the single-layer baseline.
\noindent
\textbf{Depthwise Correlation}.
We compare the original Up-Channel Cross Correlation layer with the proposed Depthwise Cross Correlation layer. As shown in the Table \ref{tab:ablation}, the proposed depthwise correlation gains $2.3\%$ improvement on VOT2018 and $0.8\%$ improvement on OTB2015, which demonstrates the importance of depthwise correlation.
This is partly because a balanced parameter distribution between the two branches makes the learning process more stable and converge better.
\subsection{Comparison with the state-of-the-art}
\begin{figure}
\begin{center}
\subfigure[Success Plot]{\includegraphics[trim={1cm 0cm 1.5cm 0cm}, clip, width=0.495\linewidth]{./OTB100-success-plot.pdf}\label{Success Plot}}
\subfigure[Precision Plot]{\includegraphics[trim={1cm 0cm 1.5cm 0cm}, clip,width=0.495\linewidth]{./OTB100-precision-plot.pdf}\label{Precision Plot}}
\end{center}
\vspace{-3mm}
\caption{Success and precision plots show a comparison of our tracker with state-of-the-art trackers on the OTB2015 dataset.}
\vspace{-3mm}
\label{fig:OTB}
\end{figure}
\paragraph{OTB-2015 Dataset.}
The standardized OTB benchmark \cite{PAMI15OTB} provides a fair testbed on robustness.
Siamese based trackers formulate tracking as a one-shot detection task without any online update, thus resulting in inferior performance on this no-reset benchmark.
However, we identify the limited representation of the \emph{shallow} network as the primary obstacle preventing Siamese based trackers from surpassing top-performing methods, such as the C-COT variants~\cite{ECCV16CCOT, CVPR17ECO}.
We compare our SiamRPN++ tracker on OTB2015 with the state-of-the-art trackers. Fig.~\ref{fig:OTB} shows that our SiamRPN++ tracker produces the leading result in overlap success. Compared with the recent DaSiamRPN~\cite{ECCV18DaSiamRPN}, our SiamRPN++ improves by $3.8\%$ in overlap and $3.4\%$ in precision owing to the considerably increased depth. Representations extracted from deep ConvNets are less sensitive to illumination and background clutter. To the best of our knowledge, this is the first time that a Siamese tracker obtains performance comparable to the state-of-the-art trackers on the OTB2015 dataset.
\vspace{-5mm}
\paragraph{VOT2018 Dataset.} We test our SiamRPN++ tracker on the latest VOT-2018 dataset~\cite{VOT18Results} in comparison with 10 state-of-the-art methods. The VOT-2018 public dataset is one of the most recent datasets for evaluating online model-free single object trackers, and includes 60 public sequences with different challenging factors. Following the evaluation protocol of VOT-2018, we adopt the Expected Average Overlap (EAO), Accuracy (A), Robustness (R), and the no-reset-based Average Overlap (AO) to compare different trackers. The detailed comparisons are reported in Table~\ref{tab:vot18}.
From Table~\ref{tab:vot18}, we observe that the proposed SiamRPN++ method achieves the top-ranked performance on the EAO, A, and AO criteria. In particular, our SiamRPN++ tracker outperforms all existing trackers, including the VOT2018 challenge winner. Compared with the best tracker in the VOT2018 challenge (LADCF~\cite{VOT18Results}), the proposed method achieves a performance gain of $2.5\%$. In addition, our tracker achieves a substantial improvement over the challenge winner (MFT~\cite{VOT18Results}), with a gain of $9.5\%$ in accuracy.
In comparison with the baseline tracker DaSiamRPN, our approach yields a substantial gain of $10.3\%$ in robustness, which is the common vulnerability of Siamese network based trackers against correlation filter methods. Even so, due to the lack of adaptation to the template, the robustness still has a gap with the state-of-the-art correlation filter methods~\cite{ECCV18UPDT}, which rely on online updating.
The One Pass Evaluation (OPE) is also adopted to evaluate the trackers, and the AO values are reported to demonstrate their performance. From the last row in Table~\ref{tab:vot18}, we can observe that our method achieves performance comparable to DLSTpp~\cite{VOT18Results} and improves on the DaSiamRPN~\cite{ECCV18DaSiamRPN} method by an absolute gain of $10.0\%$.
\begin{figure}[t]
\begin{center}
\includegraphics[trim={2cm 10.5cm 2cm 11cm}, clip, width=0.99\columnwidth]{eao_rank_vot2018.pdf}
\end{center}
\vspace{-3mm}
\caption{Expected averaged overlap performance on VOT2018.}
\vspace{-3mm}
\label{fig:votatrr}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[trim={0cm 0.5cm 0cm 0cm}, clip, width=0.99\columnwidth]{speed_eao.pdf}
\end{center}
\vspace{-3mm}
\caption{A comparison of the quality and the speed of state-of-the-art tracking methods on VOT2018. We visualize the Expected Average Overlap (EAO) with respect to the Frames-Per-Seconds (FPS). Note that the FPS axis is in the log scale. Two of our variants, which replace ResNet-50 backbone with ResNet-18~(Ours-res18) and MobileNetv2~(Ours-mobile), respectively.}
\vspace{-3mm}
\label{fig:speed}
\end{figure}
\begin{figure}
\begin{center}
\subfigure{\includegraphics[width=0.49\linewidth,height=0.49\linewidth]{./pr_vot2018.pdf}\label{fig:pr}}
\subfigure{\includegraphics[width=0.49\linewidth,height=0.49\linewidth]{./thr_vot2018.pdf}\label{fig:thr}}
\end{center}
\vspace{-3mm}
\caption{Long-term tracking performance. The average tracking precision-recall curves (left), the corresponding F-score curves (right). Tracker labels are sorted according to the F-score.}
\vspace{-3mm}
\label{fig:vot18lt}
\end{figure}
\noindent
\textbf{Accuracy vs. Speed}.
In Fig.~\ref{fig:speed}, we visualize the EAO on VOT2018 with respect to the Frames-Per-Second (FPS). The reported speed is evaluated on a machine with an NVIDIA Titan Xp GPU, other results are provided by the VOT2018 official results.
From the plot, our SiamRPN++ achieves the best performance, while still running at real-time speed (35 FPS). It is worth noting that two of our variants achieve nearly the same accuracy as SiamRPN++, while running at more than 70 FPS, which makes them highly competitive.
\vspace{-2mm}
\paragraph{VOT2018 Long-term Dataset.}
In the latest VOT2018 challenge, a long-term experiment was newly introduced. It is composed of 35 long sequences, where targets may leave the field of view or become fully occluded for a long period. The performance measures are precision, recall, and a combined F-score. We report all these metrics in comparison with the state-of-the-art trackers on VOT2018-LT.
As shown in Fig.~\ref{fig:vot18lt}, after equipping our tracker with the long-term strategy, SiamRPN++ obtains a $2.2\%$ gain over DaSiam\_LT, and outperforms the best tracker by $1.9\%$ in F-score. The powerful features extracted by ResNet improve both TP and TR by an absolute $2\%$ over our baseline DaSiamRPN. Meanwhile, the long-term version of SiamRPN++ is still able to run at 21 FPS, which is nearly 8 times faster than MBMD~\cite{VOT18Results}, the winner of VOT2018-LT.
\vspace{-5mm}
\paragraph{UAV123 Dataset.}
The UAV123 dataset includes 123 sequences with an average sequence length of 915 frames. Besides the recent trackers in \cite{ECCV16UAV}, ECO \cite{CVPR17ECO}, ECO-HC \cite{CVPR17ECO}, DaSiamRPN \cite{ECCV18DaSiamRPN}, and SiamRPN \cite{CVPR18SiamRPN} are added for comparison. Fig.~\ref{fig:uav} illustrates the precision and success plots of the compared trackers. Specifically, our tracker achieves a success score of 0.613, which outperforms DaSiamRPN (0.586) and ECO (0.525) by a large margin.
\begin{figure}[t]
\begin{center}
\subfigure{\includegraphics[trim={1cm 0cm 1cm 0cm}, clip, width=0.495\linewidth]{./UAV123-success-plot.pdf}}
\subfigure{\includegraphics[trim={1cm 0cm 1cm 0cm}, clip, width=0.495\linewidth]{./UAV123-precision-plot.pdf}}
\end{center}
\vspace{-5mm}
\caption{Evaluation results of trackers on UAV123.}
\vspace{-6mm}
\label{fig:uav}
\end{figure}
\begin{figure}
\begin{center}
\subfigure{\includegraphics[trim={1cm 0cm 1cm 0cm}, clip, width=0.495\linewidth]{./LaSOT-success-plot.pdf}}
\subfigure{\includegraphics[trim={1cm 0cm 1cm 0cm}, clip, width=0.495\linewidth]{./LaSOT-normalized-precision-plot.pdf}}
\end{center}
\vspace{-5mm}
\caption{Evaluation results of trackers on LaSOT.}
\vspace{-4mm}
\label{fig:lasot}
\end{figure}
\vspace{-4mm}
\paragraph{LaSOT Dataset.}
To further validate the proposed framework on a larger and more challenging dataset, we conduct experiments on LaSOT~\cite{ARX18LaSOT}. The LaSOT dataset provides large-scale, high-quality dense annotations, with 1,400 videos in total and 280 videos in the testing set. Fig.~\ref{fig:lasot} reports the overall performance of our SiamRPN++ tracker on the LaSOT testing set. Without bells and whistles, our SiamRPN++ model is sufficient to achieve a state-of-the-art AUC score of $49.6\%$. Specifically, SiamRPN++ increases the normalized distance precision and AUC by $23.7\%$ and $24.9\%$ relative over MDNet \cite{CVPR16MDNet}, the best tracker reported in the original paper.
\newcommand{\demph}[1]{\textcolor{demphcolor}{#1}}
\renewcommand\arraystretch{1.1}
\setlength{\tabcolsep}{1.pt}
\begin{table}[t]
\centering
\footnotesize
\begin{tabular}{c|cccccccc}
& \begin{tabular}[c]{@{}l@{}}CSRDCF\\ [-.7ex]~~~~\cite{CVPR17CSRDCF}\end{tabular}
& \begin{tabular}[c]{@{}l@{}}ECO\\ [-.7ex] ~~\cite{CVPR17ECO}\end{tabular}
& \begin{tabular}[c]{@{}l@{}}SiamFC\\ [-.7ex] ~~~~\cite{ECCV16SiamFC}\end{tabular}
& \begin{tabular}[c]{@{}l@{}}CFNet\\ [-.7ex] ~~~~\cite{CVPR17CFNet}\end{tabular}
& \begin{tabular}[c]{@{}l@{}}MDNet\\ [-.7ex]~~~\cite{CVPR16MDNet}\end{tabular}
& \begin{tabular}[c]{@{}l@{}}DaSiamRPN\\ [-.7ex]~~~~~~~\cite{ECCV18DaSiamRPN}\end{tabular}
& \begin{tabular}[c]{@{}l@{}}\textbf{Ours}\\ [-.7ex] {}\end{tabular}\\
\shline
AUC ($\%$) & 53.4 & 55.4 & 57.1 & 57.8 & 60.6 & 63.8 & \color{red}\textbf{73.3} \\[-.2ex]
P ($\%$) & 48.0 & 49.2 & 53.3 & 53.3 & 56.5 & 59.1 & \color{red}\textbf{69.4} \\[-.2ex]
P$_{norm}$ ($\%$) & 62.2 & 61.8 & 66.3 & 65.4 & 70.5 & 73.3 & \color{red}\textbf{80.0} \\[-.2ex]
\end{tabular}
\vspace{.5em}
\caption{State-of-the-art comparison on the TrackingNet \texttt{test} set in terms of success, precision, and normalized precision.}
\label{tab:trackingnet}
\vspace{-1em}
\end{table}
\vspace{-4mm}
\paragraph{TrackingNet Dataset.}
The recently released TrackingNet~\cite{ECCV18trackingnet} provides a large amount of data to assess trackers in the wild. We evaluate SiamRPN++ on its \texttt{test} set with 511 videos. Following~\cite{ECCV18trackingnet}, we use three metrics, success (AUC), precision (P), and normalized precision (P$_{norm}$), for evaluation. Table~\ref{tab:trackingnet} shows the comparison results to the trackers with the top AUC scores; SiamRPN++ achieves the best results on all three metrics. Specifically, SiamRPN++ obtains an AUC score of $73.3\%$, a P score of $69.4\%$, and a P$_{norm}$ score of $80.0\%$, outperforming the second best tracker DaSiamRPN~\cite{ECCV18DaSiamRPN}, with an AUC score of $63.8\%$, a P score of $59.1\%$, and a P$_{norm}$ score of $73.3\%$, by $9.5\%$, $10.3\%$, and $6.7\%$, respectively.
In summary, it is important to note that all these consistent results show the generalization ability of SiamRPN++.
\section{Conclusions}
\label{sec_conclusion}
In this paper, we have presented a unified framework, referred to as SiamRPN++, for end-to-end training of a deep Siamese network for visual tracking.
We show theoretical and empirical evidence on how to train a deep network in a Siamese tracker.
Our network is composed of a multi-layer aggregation module, which assembles the hierarchy of connections to aggregate different levels of representation, and a depthwise correlation layer, which allows our network to reduce the computational cost and redundant parameters while also leading to better convergence.
Using SiamRPN++, we obtain state-of-the-art results on VOT2018 in real time, showing the effectiveness of SiamRPN++. SiamRPN++ also achieves state-of-the-art results on large datasets like LaSOT and TrackingNet, showing its generalizability.
{\small
\bibliographystyle{ieee}
}
\section{Introduction}
Pension actuaries traditionally have computed the liabilities for defined benefit (DB) pension plans; however, more and more employees are participating in defined contribution (DC) plans. Indeed, in June 2007, the Employee Benefits Research Institute (EBRI) reported that in 1979, among active workers participating in retirement plans, the percentages in DB plans only, DC plans only, and both DB and DC plans were 62\%, 16\%, and 22\%, respectively. The corresponding percentages in 2005 were 10\%, 63\%, and 27\%, respectively.
In terms of numbers of employees, EBRI reported that in 1980, 30.1 million active workers participated in DB plans, while 18.9 million workers participated in DC plans. The corresponding numbers in 2004 were 20.6 and 52.2 million active workers, respectively. Finally, in terms of numbers of plans in the private sector, in 1980, there were 148 thousand DB plans and 341 thousand DC plans; the corresponding numbers in 2004 were 47 and 653 thousand plans, respectively.
Therefore, however one measures the change in employee coverage under DB versus DC plans, it is clear that pension actuaries will need to adapt to the migration from DB to DC plans. One way that they can adapt is to switch from advising employers about their DB liabilities to providing investment advice for retirees and employees in DC plans. The purpose of our proposed research is to help train actuaries for this opportunity under the easy-to-explain goal of an employee or retiree avoiding bankruptcy.
Previous work focused on finding the optimal investment strategy to minimize the probability of bankruptcy under a variety of situations: (1) allowing the individual to invest in a standard Black-Scholes financial market with a rate of consumption given by some function of wealth, \citep{young,MR2295829}; (2) incorporating immediate and deferred annuities in the financial market, \citep{MR2253122, Bayraktar_Young-naaj09}; (3) limiting borrowing or requiring that borrowing occur at a higher rate than lending, \citep{MR2324574}; (4) modeling consumption as an increasing function of wealth or as a random process correlated with the price process of the stock, \citep{Bayraktar_Young-naaj08, Bayraktar_Moore_Young, Bayraktar_Young-fs09}. Throughout this body of work, the price process of the stock is modeled as a geometric Brownian motion, which is arguably unrealistic, but has given results that one can consider to be ``first approximations.'' Here we extend some of the previous work and allow the stock price to exhibit stochastic volatility. Additionally, we intend to find easy-to-implement rules that will result in nearly minimal probabilities of bankruptcy under stochastic volatility.
The rest of the paper is organized as follows. In Section 2, we introduce the financial market and define the problem of minimizing the probability of lifetime ruin. In Section 3, we present a related optimal controller-stopper problem, and show that the solution of that problem is the Legendre dual of the minimum probability of lifetime ruin.
By solving the optimal controller-stopper problem, we effectively solve the problem of minimizing the probability of lifetime ruin. Relying on the results in Section 3, we find an asymptotic approximation of the minimum probability of ruin and the optimal strategy in Section 4. On the other hand, in Section 5, relying on the Markov Chain Approximation Method, we construct a numerical algorithm that solves the original optimal control problem numerically. In Section 6, we present some numerical experiments.
We learn that the optimal investment strategy in the presence of stochastic volatility is not necessarily to invest less in the risky asset than when volatility is fixed. We also observe that the minimum probability of ruin can be nearly attained by the asymptotic approximation derived in Section 4. Moreover, if an individual uses the investment prescribed by the optimal investment strategy for the constant-volatility environment while updating the volatility variable in this formula according to her observations, it turns out that she can nearly achieve the minimum probability of ruin in a stochastic volatility environment.
\section{The Financial Market and the Probability of Lifetime Ruin}
In this section, we present the financial ingredients that make up the individual's wealth, namely, consumption, a riskless asset, and a risky asset. We, then, define the minimum probability of lifetime ruin.
We assume that the individual invests in a riskless asset whose price at time $t$, $X_t$, follows the process $dX_t = rX_t dt, X_0 = x > 0$, for some fixed rate of interest $r > 0$. Also, the individual invests in a risky asset whose price at time $t$, $S_t$, follows a diffusion given by
\begin{equation}
\dd S_t = S_t \left( \mu dt + \sigma_t \, \dd B^{(1)}_t \right), \quad S_0 = S > 0,
\label{eqn:st}
\end{equation}
in which $\mu > r$ and $\sigma_t$ is the (random) volatility of the price process at time $t$. Here, $B^{(1)}$ is a standard Brownian motion with respect to a filtered probability space $(\Omega, {\cal F}, {\bf P}, {\bf F} = \{{\cal F}_t\}_{t \ge 0})$. We assume that the stochastic volatility is given by
\begin{equation}
\sigma_t = f(Y_t, Z_t),
\label{eqn:sigmat}
\end{equation}
in which $f$ is a smooth positive function that is bounded and bounded away from zero, and $Y$ and $Z$ are two diffusions. Below, we follow \citet{Fouque:asymp} in specifying the dynamics of $Y$ and $Z$. Note that if $f$ is constant, then $S$ follows geometric Brownian motion, and that case is considered by \citet{young}.
The first diffusion $Y$ is a fast mean-reverting Gaussian Ornstein-Uhlenbeck process. Denote by $1/\epsilon$ the rate of mean reversion of this process, with $0 < \epsilon \ll 1$ corresponding to the time scale of the process. $Y$ is an ergodic process, and we assume that its invariant distribution is independent of $\epsilon$. In particular, the invariant distribution is normal with mean $m$ and variance $\nu^2$. The resulting dynamics of $Y$ are given by
\begin{equation}
\dd Y_t = \frac{1}{\epsilon} \left( m - Y_t \right) dt + \nu \, \sqrt{\frac{2}{\epsilon}} \; \dd B^{(2)}_t, \quad Y_0 = y \in {\bf R},
\label{eqn:23}
\end{equation}
in which $B^{(2)}$ is a standard Brownian motion on $(\Omega, {\cal F}, {\bf P}, {\bf F})$. Suppose $B^{(1)}$ and $B^{(2)}$ are correlated with (constant) coefficient $\rho_{12} \in (-1, 1)$.
Under its invariant distribution ${\cal N}(m, \nu^2)$, the autocorrelation of $Y$ is given by
\begin{equation}
{\bf E} \left[ (Y_t - m) (Y_s - m) \right] = \nu^2 \, e^{- {|t - s|}/{\epsilon}}.
\label{eqn:24}
\end{equation}
Therefore, the process decorrelates exponentially fast on the time scale $\epsilon$; thus, we refer to $Y$ as the fast volatility factor.
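To build intuition for the fast factor, one can simulate $Y$ exactly through its Gaussian transition density and check that the sample moments match the invariant distribution ${\cal N}(m, \nu^2)$. The sketch below uses purely illustrative parameter values (they are not calibrated to anything in this paper):

```python
import numpy as np

def simulate_ou(m, nu, eps, y0, dt, n_steps, rng):
    """Exact simulation of dY = (1/eps)(m - Y) dt + nu*sqrt(2/eps) dB.

    The OU transition is Gaussian:
    Y_{t+dt} | Y_t ~ N(m + (Y_t - m) e^{-dt/eps}, nu^2 (1 - e^{-2 dt/eps})).
    """
    decay = np.exp(-dt / eps)
    cond_sd = nu * np.sqrt(1.0 - decay**2)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for i in range(n_steps):
        y[i + 1] = m + (y[i] - m) * decay + cond_sd * rng.standard_normal()
    return y

# Illustrative parameters: mean level m, vol-of-vol nu, fast time scale eps.
m, nu, eps = 0.2, 0.3, 0.01
path = simulate_ou(m, nu, eps, y0=m, dt=0.001, n_steps=200_000,
                   rng=np.random.default_rng(0))

# Under the invariant distribution, the mean is m and the variance is nu^2.
print(path.mean(), path.var())
```

Halving $\epsilon$ makes the simulated path decorrelate twice as fast, consistent with the autocorrelation $\nu^2 e^{-|t-s|/\epsilon}$ above.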
The second factor $Z$ driving the volatility of the risky asset's price process is a slowly varying diffusion process. We obtain this diffusion by applying the time change $t \to \delta \cdot t$ to a given diffusion process:
\begin{equation}
\dd \widetilde Z_t = g( \widetilde Z_t) \, dt + h( \widetilde Z_t) \, \dd \tilde B_t,
\label{eqn:25}
\end{equation}
in which $0 < \delta \ll 1$ and $\tilde B$ is a standard Brownian motion. The coefficients $g$ and $h$ are smooth and at most linearly growing at infinity, so (\ref{eqn:25}) has a unique strong solution.
Under the time change $t \to \delta \cdot t$, define $Z_t = \widetilde Z_{\delta \cdot t}$. Then, the dynamics of $Z$ are given by
\begin{equation}
\dd Z_t = \delta \, g(Z_t) \,dt + h(Z_t) \, \dd \tilde B_{\delta \cdot t}, \quad Z_0 = z \in {\bf R}.
\label{eqn:26}
\end{equation}
In distribution, we can write these dynamics as
\begin{equation}
\dd Z_t = \delta \, g(Z_t) \,dt + \sqrt{\delta} \, h(Z_t) \, \dd B^{(3)}_t, \quad Z_0 = z \in {\bf R},
\label{eqn:27}
\end{equation}
in which $B^{(3)}$ is a standard Brownian motion on $(\Omega, {\cal F}, {\bf P}, {\bf F})$. Suppose $B^{(1)}$ and $B^{(3)}$ are correlated with (constant) coefficient $\rho_{13} \in [-1, 1]$. Similarly, suppose $B^{(2)}$ and $B^{(3)}$ are correlated with (constant) coefficient $\rho_{23} \in [-1, 1]$. To ensure that the covariance matrix of the Brownian motions is positive semi-definite, we impose the following condition on the $\rho$'s:
\begin{equation}
1+2 \rho_{12} \rho_{13} \rho_{23} - \rho_{12}^2 -\rho_{13}^2 - \rho_{23}^2 \geq 0.
\label{eqn:rhos}
\end{equation}
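The left-hand side of (\ref{eqn:rhos}) is the determinant of the $3 \times 3$ correlation matrix of $\left( B^{(1)}, B^{(2)}, B^{(3)} \right)$; because every $2 \times 2$ principal minor of a correlation matrix is automatically non-negative, the stated inequality is equivalent to positive semi-definiteness. A quick numerical check, with illustrative correlation values:

```python
import numpy as np

rho12, rho13, rho23 = 0.5, -0.3, 0.2   # illustrative values

C = np.array([[1.0,   rho12, rho13],
              [rho12, 1.0,   rho23],
              [rho13, rho23, 1.0  ]])

det = np.linalg.det(C)
closed_form = 1 + 2*rho12*rho13*rho23 - rho12**2 - rho13**2 - rho23**2

print(det, closed_form)                         # the two agree
print(np.all(np.linalg.eigvalsh(C) >= -1e-12))  # PSD, since det >= 0
```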
Let $W_t$ be the wealth at time $t$ of the individual, and let $\pi_t$ be the amount that the decision maker invests in the risky asset at that time. It follows that the amount invested in the riskless asset is $W_t - \pi_t$. We assume that the individual consumes at a constant rate $c > 0$. Therefore, the wealth process follows
\begin{equation}
\dd W_t = [rW_t + (\mu - r) \pi_t - c] \, dt + f(Y_t, Z_t) \, \pi_t \, \dd B^{(1)}_t, \label{eqn:28}
\end{equation}
and we suppose that initial wealth is non-negative; that is, $W_0 = w \ge 0$.
By {\it lifetime ruin}, we mean that the individual's wealth reaches zero before she dies. Define the corresponding hitting time by $\tau_0 := \inf\{t \ge 0: W_t \le 0 \}$. Let $\tau_d$ denote the random time of death of the individual, which is independent of the Brownian motions. We assume that $\tau_d$ is exponentially distributed with parameter $\lambda$ (that is, with expected time of death equal to $1/\lambda$); this parameter is also known as the {\it hazard rate}, or, {\it force of mortality}.
\citet{MR2328666} minimize the probability of lifetime ruin with varying hazard rate and show that by updating the hazard rate each year and treating it as a constant, the individual can quite closely obtain the minimal probability of ruin when the true hazard rate is Gompertz. Specifically, at the beginning of each year, set $\lambda$ equal to the inverse of the individual's life expectancy at that time. Compute the corresponding optimal investment strategy as given below, and apply that strategy for the year. According to the work of \citet{MR2328666}, this scheme results in a probability of ruin close to the minimum probability of ruin. Therefore, there is no significant loss of generality to assume that the hazard rate is constant and revise its estimate each year.
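The yearly updating rule is simple to implement. In the sketch below, we assume a Gompertz force of mortality $\lambda(a) = \frac{1}{b} \, e^{(a - m_g)/b}$ with purely illustrative parameters $m_g = 88$ and $b = 10$ (these are our own choices, not values taken from \citet{MR2328666}), and set $\lambda$ for the coming year equal to the reciprocal of the remaining life expectancy:

```python
import numpy as np

def gompertz_survival(x, t, m_g=88.0, b=10.0):
    """P[alive at age x + t | alive at age x] under the Gompertz hazard
    lambda(a) = (1/b) * exp((a - m_g)/b)."""
    return np.exp(np.exp((x - m_g) / b) * (1.0 - np.exp(t / b)))

def life_expectancy(x, m_g=88.0, b=10.0, t_max=80.0, dt=0.01):
    """Remaining life expectancy e_x = int_0^infty S(x, t) dt (truncated)."""
    t = np.arange(0.0, t_max, dt)
    return np.sum(gompertz_survival(x, t, m_g, b)) * dt

# Yearly update: treat the hazard as constant over the coming year,
# with lambda = 1 / (remaining life expectancy).
for age in (40, 60, 80):
    print(age, 1.0 / life_expectancy(age))   # lambda increases with age
```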
Denote the minimum probability of lifetime ruin by $\psi(w, y, z)$, in which the arguments $w$, $y$, and $z$ indicate that one conditions on the individual possessing wealth $w$ at the current time, with the two factors $Y$ and $Z$ then taking the values $y$ and $z$, respectively. Thus, $\psi$ is the minimum probability that $\tau_0 < \tau_d$, in which one minimizes with respect to admissible investment strategies $\pi$. A strategy $\pi$ is {\it admissible} if it is ${\cal F}_t$-progressively measurable and if it satisfies the integrability condition $\int_0^t \pi_s^2 \, ds < \infty$ almost surely for all $t \ge 0$. Formally, $\psi$ is defined by
\begin{equation}
\psi(w, y, z) = \inf_{\pi} {\bf P}^{w, y, z} \left[\tau_0 < \tau_d \right].
\label{eqn:29}
\end{equation}
Here, ${\bf P}^{w, y, z}$ indicates the probability conditional on $W_0 = w$, $Y_0 = y$, and $Z_0 = z$.
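For intuition about the quantity in (\ref{eqn:29}), the ruin probability of a {\it fixed} (and generally suboptimal) strategy can be estimated by Monte Carlo simulation. The sketch below treats the constant-volatility special case $f \equiv \sigma$ with a constant dollar amount $\pi$ invested in the risky asset; all parameter values are illustrative, and the resulting estimate corresponds to one particular strategy, not to the minimizing strategy:

```python
import numpy as np

def ruin_probability(w0, pi, r, mu, sigma, c, lam, T_max, dt, n_paths, seed=0):
    """Monte Carlo estimate of P[W hits 0 before the exponential death time]
    for a constant investment amount `pi` and constant volatility `sigma`
    (the geometric-Brownian special case f == sigma).  Paths are truncated
    at T_max, which slightly biases the estimate downward."""
    rng = np.random.default_rng(seed)
    w = np.full(n_paths, w0, dtype=float)
    tau_d = rng.exponential(1.0 / lam, size=n_paths)   # death times
    ruined = np.zeros(n_paths, dtype=bool)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(int(T_max / dt)):
        alive &= (tau_d > k * dt) & ~ruined
        if not alive.any():
            break
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)
        w[alive] += ((r * w[alive] + (mu - r) * pi - c) * dt
                     + sigma * pi * dB[alive])
        ruined |= alive & (w <= 0.0)
    return ruined.mean()

# Illustrative parameters; the safe level is c/r = 1.
p = ruin_probability(w0=0.5, pi=0.4, r=0.04, mu=0.08, sigma=0.2,
                     c=0.04, lam=0.04, T_max=50.0, dt=0.01, n_paths=2000)
print(p)   # a number in [0, 1]; it decreases as w0 increases toward c/r
```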
Note that if $w \ge c/r$, then $\psi(w, y, z) = 0$ because the individual can invest $c/r$ of her wealth in the riskless asset and generate a rate of income equal to $c$, which exactly covers her consumption. Therefore, we effectively only need to determine the minimum probability of lifetime ruin and corresponding optimal investment strategy on the domain ${\bf D} := \{ (w, y, z) \in {\bf R}^3: w \in [0, c/r] \}$.
\section{ Computing the Minimum Probability of Lifetime Ruin}
\subsection{ A Related Optimal Controller-Stopper Problem}
In this section, we present an optimal controller-stopper problem whose solution $\hat{\psi}$ is the Legendre dual of the minimum probability of ruin $\psi$. It is not clear {\it a priori} that the value function $\psi$ is convex or smooth, due to its implicit dependence on the initial values of the state variables. By passing to the controller-stopper problem, however, we can obtain the regularity of $\hat{\psi}$ more simply, which, in turn, provides an intermediate tool in the proof of the regularity of $\psi$. The dual relationship and the analysis of the controller-stopper problem are, therefore, crucial and worth investigating.
First, note that we can represent the three Brownian motions from Section 2 as follows: given $B^{(1)}$, $B^{(2)}$, and $B^{(3)}$, define $\widetilde{B}^{(1)}$, $\widetilde{B}^{(2)}$, and $\widetilde{B}^{(3)}$ via the following invertible system of equations:
\begin{equation}
\begin{split}
B^{(1)}_t &= \widetilde{B}^{(1)}_t, \\
B^{(2)}_t &= \rho_{12} \, \widetilde{B}^{(1)}_t + \sqrt{1 - \rho^2_{12}} \, \widetilde{B}^{(2)}_t, \\
B^{(3)}_t &= \rho_{13} \, \widetilde{B}^{(1)}_t + \frac{\rho_{23}- \rho_{12} \rho_{13} }{\sqrt{1 - \rho^2_{12}}} \, \widetilde{B}^{(2)}_t + \frac{\sqrt{(1 - \rho^2_{12})(1 - \rho^2_{13}) - (\rho_{23} - \rho_{12} \rho_{13})^2} }{ \sqrt{1 - \rho^2_{12}}} \, \widetilde{B}^{(3)}_t.
\end{split}
\label{eqn:41}
\end{equation}
One can show that $\widetilde{B}^{(1)}$, $\widetilde{B}^{(2)}$, and $\widetilde{B}^{(3)}$ thus defined are three {\it independent} standard Brownian motions on $(\Omega, {\cal F}, {\bf P}, {\bf F})$. Also notice that condition (\ref{eqn:rhos}) on the $\rho$'s guarantees that the expression under the square root in the coefficient of $\widetilde{B}^{(3)}_t$ is non-negative.
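The system (\ref{eqn:41}) is precisely the Cholesky factorization of the correlation matrix of $\left( B^{(1)}, B^{(2)}, B^{(3)} \right)$, which one can verify numerically (illustrative correlation values satisfying (\ref{eqn:rhos})):

```python
import numpy as np

r12, r13, r23 = 0.5, -0.3, 0.2   # illustrative, satisfying the psd condition

s12 = np.sqrt(1 - r12**2)
a = (r23 - r12 * r13) / s12
b = np.sqrt((1 - r12**2) * (1 - r13**2) - (r23 - r12 * r13)**2) / s12

# Coefficients of (B1, B2, B3) in terms of the independent tilde-B's:
L = np.array([[1.0, 0.0, 0.0],
              [r12, s12, 0.0],
              [r13, a,   b  ]])

target = np.array([[1.0, r12, r13],
                   [r12, 1.0, r23],
                   [r13, r23, 1.0]])

print(np.allclose(L @ L.T, target))   # True: L is the Cholesky factor
```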
Next, define the controlled process $X^\gamma$ by
\begin{equation}
\dd X^\gamma_t = -(r - \lambda) \, X^\gamma_t \, dt - \frac{\mu - r }{ f(Y_t, Z_t)} \, X^\gamma_t \, \dd \widetilde{B}^{(1)}_t + \gamma^{(2)}_t \dd \widetilde{B}^{(2)}_t + \gamma^{(3)}_t \dd \widetilde{B}^{(3)}_t, \quad X_0 = x > 0,
\label{eqn:42}
\end{equation}
in which $\gamma = \left(\gamma^{(2)}, \gamma^{(3)} \right)$ is the control, and $Y$ and $Z$ are given in (\ref{eqn:23}) and (\ref{eqn:27}), respectively.
For $x > 0$, define the function $\hat \psi$ by
\begin{equation}
\hat \psi(x, y, z) = \inf_\tau \sup_\gamma {\bf E}^{x, y, z} \left[ \int_0^\tau e^{-\lambda t} c \, X^\gamma_t \, dt + e^{-\lambda \tau} \min\left( (c/r)X^\gamma_\tau, 1 \right) \right].
\label{eqn:43}
\end{equation}
$\hat \psi$ is the value function for an optimal controller-stopper problem. Indeed, the controller chooses among processes $\gamma$ in order to maximize the discounted running ``penalty'' to the stopper given by $c \, X^\gamma_t$ in (\ref{eqn:43}). On the other hand, the stopper chooses the time to stop the game in order to minimize the penalty but has to incur the terminal cost of $\min\left( (c/r)X^\gamma_\tau, 1 \right)$, discounted by $e^{-\lambda \tau}$ when she stops.
\citet{Bayraktar_Young-fs09} consider a controller-stopper problem that is mathematically similar to the one in this paper; see that paper for details of the following assertions--specifically, see Theorem 2.4 and its proof. One can show that the controller-stopper problem has a continuation region given by $\{ (x, y, z): 0 \le x_{c/r}(y, z) \le x \le x_0(y, z) \}$ for some functions $0 \le x_{c/r}(y, z) \le r/c \le x_0(y, z)$ with $(y, z) \in {\bf R}^2$. Thus, if $x \le x_{c/r}(y, z)$, we have $\hat \psi(x, y, z) = (c/r) \, x$, and if $x \ge x_0(y, z)$, we have $\hat \psi(x, y, z) = 1$.
Moreover, $\hat \psi$ is non-decreasing and concave with respect to $x$ on ${\bf R}^+$ (increasing and strictly concave in the continuation region) and is the unique classical solution of the following free-boundary problem on $\left[ x_{c/r}(y, z), \, x_0(y, z) \right]$:
\begin{equation}
\begin{cases} & cx + \left( \frac{1}{ \epsilon} \, {\cal L}_0 + \frac{1 }{ \sqrt{\epsilon}} \, {\cal L}_1 + {\cal L}_2 + \sqrt{\delta} \, {\cal M}_1 + \delta \, {\cal M}_2 + \sqrt{\frac{\delta }{ \epsilon}} \, {\cal M}_3 \right) \hat \psi + NL^{\epsilon, \delta} = 0; \\
&\hat \psi(x_{c/r}(y, z), y, z) = \frac{c }{ r} \; x_{c/r}(y, z), \quad \hat \psi_x(x_{c/r}(y, z), y, z) = \frac{c }{ r}; \\
&\hat \psi(x_0(y, z), y, z) = 1, \quad \hat \psi_x(x_0(y, z), y, z) = 0;
\end{cases}
\label{eqn:44}
\end{equation}
in which
\begin{eqnarray}
{\cal L}_0 v &=& (m- y) \, v_y + \nu^2 \, v_{yy}, \label{eqn:45}\\
{\cal L}_1 v &=& - \rho_{12} \, \frac{\mu - r }{ f(y, z)} \, \nu \, \sqrt{2} \, x \, v_{xy},\label{eqn:46}\\
{\cal L}_2 v &=& - \lambda \, v - (r - \lambda) \, x \, v_x + \frac{1}{2} \left( \frac{\mu - r }{ f(y, z)} \right)^2 \, x^2 \, v_{xx},\label{eqn:47}\\
{\cal M}_1 v &=& - \rho_{13} \, \frac{\mu - r }{ f(y, z)} \, h(z) \, x \, v_{xz},\label{eqn:48}\\
{\cal M}_2 v &=& g(z) \, v_z + \frac{1 }{ 2} \, h^2(z) \, v_{zz},\label{eqn:49}\\
{\cal M}_3 v &=& \rho_{23} \, \nu \, \sqrt{2} \, h(z) \, v_{yz},
\label{eqn:410}
\end{eqnarray}
and
\begin{equation}
\begin{split}
NL^{\epsilon, \delta} &= \sup_{\gamma} \left[\rule{0cm}{0.9cm} \frac{1 }{ 2} \left( \left( \gamma^{(2)} \right)^2 + \left( \gamma^{(3)} \right)^2 \right) \hat \psi_{xx} \right. \\
& \; \qquad \quad + \, \gamma^{(2)} \left( \nu \, \sqrt{\frac{2 }{ \epsilon}} \, \sqrt{1 - \rho^2_{12}} \; \hat \psi_{xy} + \sqrt{\delta} \, h(z) \, \frac{\rho_{23} - \rho_{12} \rho_{13} }{ \sqrt{1 - \rho^2_{12}}} \; \hat \psi_{xz} \right) \\
& \; \qquad \quad \left. + \, \gamma^{(3)} \, \sqrt{\delta} \, h(z) \, \frac{\sqrt{(1 - \rho^2_{12})(1 - \rho^2_{13}) - (\rho_{23} - \rho_{12} \rho_{13})^2} }{ \sqrt{1 - \rho^2_{12}}} \; \hat \psi_{xz}
\right].
\end{split}
\label{eqn:411}
\end{equation}
Because $\hat \psi$ is concave with respect to $x$, we can express $NL^{\epsilon, \delta}$ as follows:
\begin{equation}
NL^{\epsilon, \delta} = - \frac{1}{\epsilon} \, \nu^2 \left(1 - \rho^2_{12} \right) \frac{\hat \psi^2_{xy} }{ \hat \psi_{xx}} - \frac{1 }{ 2} \, \delta \, h^2(z) \left( 1- \rho^2_{13} \right) \frac{\hat \psi^2_{xz} }{ \hat \psi_{xx}} - \nu \sqrt{2} \, \sqrt{\frac{\delta }{ \epsilon}} \; h(z) \left( \rho_{23} - \rho_{12} \rho_{13} \right) \frac{\hat \psi_{xy} \hat \psi_{xz} }{ \hat \psi_{xx}}.
\label{eqn:412}
\end{equation}
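Because $\hat \psi_{xx} < 0$ in the continuation region, the supremum in (\ref{eqn:411}) is attained at the stationary point of a concave quadratic in $\left( \gamma^{(2)}, \gamma^{(3)} \right)$. One can verify the resulting expression (\ref{eqn:412}) symbolically; in the sketch below, generic symbols stand in for the partial derivatives of $\hat \psi$:

```python
import sympy as sp

g2, g3 = sp.symbols('gamma2 gamma3')
pxx, pxy, pxz = sp.symbols('psi_xx psi_xy psi_xz')   # stand-ins; psi_xx < 0
nu, eps, delta, h = sp.symbols('nu epsilon delta h', positive=True)
r12, r13, r23 = sp.symbols('rho12 rho13 rho23')

s12 = sp.sqrt(1 - r12**2)
A = (nu * sp.sqrt(2 / eps) * s12 * pxy
     + sp.sqrt(delta) * h * (r23 - r12 * r13) / s12 * pxz)
B = (sp.sqrt(delta) * h
     * sp.sqrt((1 - r12**2) * (1 - r13**2) - (r23 - r12 * r13)**2) / s12 * pxz)

bracket = sp.Rational(1, 2) * (g2**2 + g3**2) * pxx + g2 * A + g3 * B

# Concave quadratic: the supremum sits at the stationary point.
sol = sp.solve([sp.diff(bracket, g2), sp.diff(bracket, g3)], [g2, g3])
NL = sp.simplify(bracket.subs(sol))

closed = (- nu**2 * (1 - r12**2) / eps * pxy**2 / pxx
          - sp.Rational(1, 2) * delta * h**2 * (1 - r13**2) * pxz**2 / pxx
          - nu * sp.sqrt(2) * sp.sqrt(delta / eps) * h
            * (r23 - r12 * r13) * pxy * pxz / pxx)

print(sp.simplify(NL - closed))   # 0
```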
\subsection{ Convex Legendre Dual of $\hat \psi$}
Since $\hat \psi$ is strictly concave with respect to $x$ in its continuation region (which corresponds to wealth lying in $[0, c/r]$), we can define its convex dual $\Psi$ by the Legendre transform: for $(w, y, z) \in {\bf D} = \{ (w, y, z) \in {\bf R}^3: w \in [0, c/r] \}$,
\begin{equation}
\Psi(w, y, z) = \max_x \left( \hat \psi(x, y, z) - wx \right). \label{eqn:413}
\end{equation}
In this section, we show that the convex dual $\Psi$ is the minimum probability of lifetime ruin; then, in the next section, we asymptotically expand $\hat \psi$ in powers of $\sqrt{\epsilon}$ and $\sqrt{\delta}$.
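The transform (\ref{eqn:413}) is straightforward to evaluate numerically. As a sanity check on a toy concave function (not the solution of the controller-stopper problem), take $\hat \psi(x) = \sqrt{x}$; its dual is $\Psi(w) = 1/(4w)$, attained at $x^* = 1/(4w^2)$:

```python
import numpy as np

def legendre_dual(psi_hat, w, x_grid):
    """Numerical Legendre transform: Psi(w) = max_x (psi_hat(x) - w * x)."""
    return np.max(psi_hat(x_grid) - w * x_grid)

x_grid = np.linspace(1e-6, 100.0, 2_000_001)

for w in (0.25, 0.5, 1.0):
    num = legendre_dual(np.sqrt, w, x_grid)   # toy concave psi_hat = sqrt
    print(w, num, 1.0 / (4.0 * w))            # numeric vs closed form 1/(4w)
```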
{\theorem
\label{thm:41}
$\Psi$ equals the minimum probability of lifetime ruin $\psi$ on $\bf D$, and the investment policy $\pi^*$ given in feedback form by $\pi^*_t = \pi^*(W^*_t, Y_t, Z_t)$ is an optimal policy, in which $W^*$ is the optimally controlled wealth $($that is, wealth controlled by $\pi^*)$ and the function $\pi^*$ is given by
\begin{equation}
\pi^*(w, y, z) = - \frac{\mu - r }{ f^2(y, z)} \, \frac{\psi_w }{ \psi_{ww}} - \rho_{12} \, \sqrt{\frac{2 }{ \epsilon}} \, \frac{\nu }{ f(y, z)} \, \frac{\psi_{wy} }{ \psi_{ww}} - \rho_{13} \, \sqrt{\delta} \; \frac{ h(z) }{f(y, z)} \, \frac{\psi_{wz} }{ \psi_{ww}} \, ,
\label{eqn:414}
\end{equation}
in which the right-hand side of $(\ref{eqn:414})$ is evaluated at $(w, y, z)$.}
\noindent {\it Proof}.\quad From (\ref{eqn:413}), it follows that the critical value $x^*$ solves $w = \hat \psi_x(x, y, z)$; thus, given $w$, we have $x^* = I(w, y, z)$, in which $I$ is the inverse function of $\hat \psi_x$ with respect to $x$. Therefore, $\Psi(w, y, z) = \hat \psi(I(w, y, z), y, z) - w I(w, y, z)$. By differentiating this expression of $\Psi$ with respect to $w$, we obtain $\Psi_w(w, y, z) = \hat \psi_x(I(w, y, z), y, z) I_w(w, y, z) - I(w, y, z) - w I_w(w, y, z) = - I(w, y, z)$; thus, $x^* = - \Psi_w(w, y, z)$. Similarly, we obtain (with $w = \hat \psi_x(x, y, z))$ the following expressions:
\begin{equation}
\hat \psi_{xx}(x, y, z) = - \, \frac{1 }{ \Psi_{ww}(w, y, z)},
\label{eqn:415}
\end{equation}
\begin{equation}
\hat \psi_y(x, y, z) = \Psi_y(w, y, z),
\label{eqn:416}
\end{equation}
\begin{equation}
\hat \psi_z(x, y, z) = \Psi_z(w, y, z),
\label{eqn:417}
\end{equation}
\begin{equation}
\hat \psi_{yy}(x, y, z) = \Psi_{wy}(w, y, z) \, \hat \psi_{xy}(x, y, z) + \Psi_{yy}(w, y, z),
\label{eqn:418}
\end{equation}
\begin{equation}
\hat \psi_{zz}(x, y, z) = \Psi_{wz}(w, y, z) \, \hat \psi_{xz}(x, y, z) + \Psi_{zz}(w, y, z),
\label{eqn:419}
\end{equation}
\begin{equation}
\hat \psi_{xy}(x, y, z) = \Psi_{wy}(w, y, z) \, \hat \psi_{xx}(x, y, z),
\label{eqn:420}
\end{equation}
\begin{equation}
\hat \psi_{xz}(x, y, z) = \Psi_{wz}(w, y, z) \, \hat \psi_{xx}(x, y, z),
\label{eqn:421}
\end{equation}
and
\begin{equation}
\hat \psi_{yz}(x, y, z) = \Psi_{wy}(w, y, z) \, \Psi_{wz}(w, y, z) \, \hat \psi_{xx}(x, y, z) + \Psi_{yz}(w, y, z).
\label{eqn:422}
\end{equation}
By substituting $x^* = - \Psi_w(w, y, z)$ into the free-boundary problem for $\hat \psi$, namely (\ref{eqn:44}), one can show that $\Psi$ uniquely solves the following boundary-value problem on $\bf D$:
\begin{equation}
\begin{cases}
&\min_\beta {\cal D}^\beta v(w, y, z) = 0; \\
&v(0, y, z) = 1, \quad v(c/r, y, z) = 0.
\end{cases}
\label{eqn:423}
\end{equation}
where the differential operator ${\cal D}^\beta$ is given by
\begin{equation}
\begin{split}
{\cal D}^\beta v &= - \lambda \, v + (rw + (\mu - r) \beta - c) \, v_w + \frac{1}{ \epsilon} \, (m-y) \, v_y + \delta \, g(z) \, v_z \\
& \quad + \frac{1}{2} f^2(y,z) \, \beta^2 \, v_{ww} + \frac{1}{ \epsilon} \, \nu^2 \, v_{yy} + \frac{1}{2} \, \delta \, h^2(z) \, v_{zz} + \rho_{12} \, f(y, z) \, \beta \, \nu \, \sqrt{\frac{2 }{ \epsilon}} \, v_{wy} \\
& \quad + \rho_{13} \, f(y, z) \, \beta \, \sqrt{\delta} \, h(z) \, v_{wz} + \rho_{23} \, \sqrt{2} \, \nu \, \sqrt{\frac{\delta}{ \epsilon}} \, h(z) \, v_{yz}.
\end{split}
\label{eqn:31}
\end{equation}
Observe that $\Psi$ is strictly convex in $w$ because $\hat{\psi}$ is strictly concave in $x$ in its continuation region which corresponds to $\bf D$ in the original space. Since $\Psi$ is strictly convex with respect to $w$, the optimal policy $\pi^*$ in (\ref{eqn:423}) is given by the first-order necessary condition, which results in the expression in (\ref{eqn:414}). Now, using a standard verification theorem we deduce that $\Psi$ is the minimum probability of lifetime ruin $\psi$. $\square$ \medskip
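The first-order condition behind (\ref{eqn:414}) can also be checked symbolically: collecting the $\beta$-dependent terms of ${\cal D}^\beta v$ in (\ref{eqn:31}) and setting the derivative with respect to $\beta$ to zero recovers the feedback form. Generic symbols again stand in for the partial derivatives of $v$:

```python
import sympy as sp

beta = sp.symbols('beta')
vw, vww, vwy, vwz = sp.symbols('v_w v_ww v_wy v_wz')   # stand-ins; v_ww > 0
mu, r, f, nu, eps, delta, h = sp.symbols('mu r f nu epsilon delta h',
                                         positive=True)
r12, r13 = sp.symbols('rho12 rho13')

# beta-dependent part of the operator D^beta v:
D_beta = ((mu - r) * beta * vw
          + sp.Rational(1, 2) * f**2 * beta**2 * vww
          + r12 * f * beta * nu * sp.sqrt(2 / eps) * vwy
          + r13 * f * beta * sp.sqrt(delta) * h * vwz)

beta_star = sp.solve(sp.diff(D_beta, beta), beta)[0]

# Feedback form of the optimal investment:
pi_star = (-(mu - r) / f**2 * vw / vww
           - r12 * sp.sqrt(2 / eps) * nu / f * vwy / vww
           - r13 * sp.sqrt(delta) * h / f * vwz / vww)

print(sp.simplify(beta_star - pi_star))   # 0
```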
Theorem \ref{thm:41} demonstrates the strong connection between $\hat \psi$ and $\psi$, namely that they are dual via the Legendre transform. (As an aside, if we have $\psi$, we can obtain $\hat \psi$ via $\hat \psi(x, y, z) = \min_w \left( \psi(w, y, z) + wx \right)$.) Therefore, if we have $\hat \psi$, then we obtain the minimum probability of ruin $\psi$ via (\ref{eqn:413}). More importantly, we get the optimal investment strategy $\pi^*$ via (\ref{eqn:414}). As a corollary to Theorem \ref{thm:41}, we have the following expression for $\pi^*$ in terms of the dual variable $x$.
{\corollary
\label{cor:42}
In terms of the dual variable $x$, the optimal investment strategy $\pi^*$ is given by $\pi^*_t = \hat{\pi}^*(X^*_t, Y_t, Z_t)$, in which $X^*$ is the optimally controlled process $X$, and
\begin{equation}
\hat{\pi}^*(x, y, z) = - \frac{\mu - r }{ f^2(y, z)} \, x \, \hat \psi_{xx} + \rho_{12} \, \sqrt{\frac{2 }{ \epsilon}} \, \frac{\nu }{ f(y, z)} \, \hat \psi_{xy} + \rho_{13} \, \sqrt{\delta} \; \frac{h(z) }{ f(y, z)} \, \hat \psi_{xz},
\label{eqn:424}
\end{equation}
with the right-hand side of $(\ref{eqn:424})$ evaluated at $(x, y, z)$.}
\noindent {\it Proof}.\quad Let $w = \hat \psi_x(x, y, z)$ in (\ref{eqn:414}) and simplify the right-hand side via equations (\ref{eqn:415})-(\ref{eqn:422}) to obtain (\ref{eqn:424}). $\square$ \medskip
\section{Asymptotic Approximation of the Minimum Probability of Lifetime Ruin}
In this section, we asymptotically expand $\hat \psi$, the Legendre transform of the minimum probability of ruin, in powers of $\sqrt{\epsilon}$ and $\sqrt{\delta}$. (A parallel analysis of expanding the Legendre transform of the value function of the utility maximization problem was carried out in \citet{JSMFIN03}.) We expand $\hat \psi$ instead of $\psi$ because if one were to do the latter, then one would note that each term in the expansion solves a {\it non-linear} differential equation. The differential equation for the zeroth-order term has a closed-form solution; however, none of the differential equations for the higher-order terms does. What this fact implies is that to solve any of these non-linear differential equations, one would have to assume that it has a convex solution, determine the corresponding linear free-boundary problem for the concave dual, solve this free-boundary problem, then invert the solution numerically, as in equation \eqref{eqn:413}. Note that one would have to perform this procedure for {\it each} higher-order term in the expansion.
By contrast, when we expand $\hat \psi$, each term solves a {\it linear} differential equation, as we show below. We explicitly solve these linear differential equations, then invert the approximation using \eqref{eqn:413} {\it once} to obtain an approximation for the minimum probability of lifetime ruin $\psi$. Note that the resulting approximation of $\psi$ is not guaranteed to be a probability, that is, to lie in the interval $[0, 1]$; however, our numerical experiments show that this is not a problem for the values of the parameters we consider. See \cite{Fprob} for an example of approximating a probability that solves a linear differential equation.
To begin, expand $\hat \psi$ and the free boundaries in powers of $\sqrt{\delta}$:
\begin{equation}
\hat \psi = \hat \psi_0 + \sqrt{\delta} \, \hat \psi_1 + \delta \, \hat \psi_2 + \cdots,
\label{eqn:425}
\end{equation}
\begin{equation}
x_{c/r}(y, z) = x_{c/r, 0}(y, z) + \sqrt{\delta} \, x_{c/r, 1}(y, z) + \delta \, x_{c/r, 2}(y, z) + \cdots,
\label{eqn:426}
\end{equation}
and
\begin{equation}
x_{0}(y, z) = x_{0, 0}(y, z) + \sqrt{\delta} \, x_{0, 1}(y, z) + \delta \, x_{0, 2}(y, z) + \cdots.
\label{eqn:427}
\end{equation}
Insert the expression in (\ref{eqn:425}) into $NL^{\epsilon, \delta}$ in (\ref{eqn:412}) to obtain the following expansion in powers of $\sqrt{\delta}$:
\begin{equation}
\begin{split}
NL^{\epsilon, \delta} &= - \, \frac{1 }{ \epsilon} \, \nu^2 \left( 1 - \rho^2_{12} \right) \, \frac{\hat \psi^2_{0, xy} }{ \hat \psi_{0, xx}} \\
& \quad + \sqrt{\delta} \left[ \frac{1 }{ \epsilon} \, \nu^2 \left( 1 - \rho^2_{12} \right) \left( \left( \frac{\hat \psi_{0, xy} }{ \hat \psi_{0, xx}} \right)^2 \hat \psi_{1, xx} - 2 \; \frac{\hat \psi_{0, xy} }{ \hat \psi_{0, xx}} \; \hat \psi_{1, xy} \right) \right. \\
& \qquad \qquad \left. - \, \sqrt{\frac{2 }{ \epsilon}} \; \nu \, h(z) (\rho_{23} - \rho_{12} \rho_{13}) \frac{\hat \psi_{0, xy} \, \hat \psi_{0, xz} }{ \hat \psi_{0, xx}} \right] + {\cal O}(\delta).
\end{split}
\label{eqn:428}
\end{equation}
Keeping terms up to $\sqrt{\delta}$, we expand the free-boundary conditions in (\ref{eqn:44}) as
\begin{equation}
\begin{split}
& \hat \psi_0(x_{c/r, 0}(y, z), y, z) + \sqrt{\delta} \left[ x_{c/r, 1}(y, z) \, \hat \psi_{0, x}(x_{c/r, 0}(y, z), y, z) + \hat \psi_1(x_{c/r, 0}(y, z), y, z) \right]\\
& \quad = \frac{c }{ r} \left( x_{c/r, 0}(y, z) + \sqrt{\delta} \; x_{c/r, 1}(y, z) \right),
\end{split}
\label{eqn:429}
\end{equation}
\begin{equation}
\begin{split}
& \hat \psi_{0, x}(x_{c/r, 0}(y, z), y, z) + \sqrt{\delta} \left[ x_{c/r, 1}(y, z) \, \hat \psi_{0, xx}(x_{c/r, 0}(y, z), y, z) + \hat \psi_{1, x}(x_{c/r, 0}(y, z), y, z) \right]\\
& \quad = \frac{c }{ r},
\end{split}
\label{eqn:430}
\end{equation}
\begin{equation}
\hat \psi_0(x_{0, 0}(y, z), y, z) + \sqrt{\delta} \left[ x_{0, 1}(y, z) \, \hat \psi_{0, x}(x_{0, 0}(y, z), y, z) + \hat \psi_1(x_{0, 0}(y, z), y, z) \right] = 1,
\label{eqn:431}
\end{equation}
and
\begin{equation}
\hat \psi_{0, x}(x_{0, 0}(y, z), y, z) + \sqrt{\delta} \left[ x_{0, 1}(y, z) \, \hat \psi_{0, xx}(x_{0, 0}(y, z), y, z) + \hat \psi_{1, x}(x_{0, 0}(y, z), y, z) \right] = 0.
\label{eqn:432}
\end{equation}
We begin by approximating $\hat \psi_0$ and the free boundaries $x_{c/r, 0}$ and $x_{0, 0}$. Then, we use the boundaries $x_{c/r, 0}$ and $x_{0, 0}$ as {\it fixed} boundaries to determine $\hat \psi_1$. As one can see from equations (\ref{eqn:429})-(\ref{eqn:432}), this fixing of the boundaries introduces an ${\cal O}(\sqrt{\delta})$-error into $\hat \psi_1$ in ${\cal O}(\sqrt{\delta})$-neighborhoods of $x_{c/r, 0}$ and $x_{0, 0}$.
\subsection*{Terms of order $\delta^0$} By inserting (\ref{eqn:425})-(\ref{eqn:428}) into (\ref{eqn:44}) and collecting terms of order $\delta^0$, we obtain the following free-boundary problem:
\begin{equation}
\begin{cases}&cx + \left( \frac{1 }{ \epsilon} \, {\cal L}_0 + \frac{1 }{ \sqrt{\epsilon}} \, {\cal L}_1 + {\cal L}_2 \right) \hat \psi_0 - \, \frac{1 }{ \epsilon} \, \nu^2 \left( 1 - \rho^2_{12} \right) \, \frac{\hat \psi^2_{0, xy} }{ \hat \psi_{0, xx}} = 0; \\
&\hat \psi_0(x_{c/r, 0}(y, z), y, z) = \frac{c }{ r} \, x_{c/r, 0}(y, z), \quad \hat \psi_{0, x}(x_{c/r,0}(y, z), y, z) = \frac{c }{ r}; \\
&\hat \psi_0(x_{0, 0}(y, z), y, z) = 1, \quad \hat \psi_{0, x}(x_{0, 0}(y, z), y, z) = 0.
\end{cases}
\label{eqn:433}
\end{equation}
\medskip
\subsection*{Terms of order $\sqrt{\delta}$} Similarly, by comparing terms of order $\sqrt{\delta}$ and using $x_{c/r, 0}$ and $x_{0, 0}$ as fixed boundaries for $\hat \psi_1$, we obtain the following boundary-value problem:
\begin{equation}
\begin{cases}
&\left( \frac{1 }{ \epsilon} \, {\cal L}_0 + \frac{1 }{ \sqrt{\epsilon}} \, {\cal L}_1 + {\cal L}_2 \right) \hat \psi_1 + \left( {\cal M}_1 + \frac{1 }{ \sqrt{\epsilon}} \, {\cal M}_3 \right) \hat \psi_0 \\
& \quad + \, \frac{1 }{ \epsilon} \, \nu^2 \left( 1 - \rho^2_{12} \right) \left( \left( \frac{\hat \psi_{0, xy} }{ \hat \psi_{0, xx}} \right)^2 \hat \psi_{1, xx} - 2 \; \frac{\hat \psi_{0, xy} }{ \hat \psi_{0, xx}} \; \hat \psi_{1, xy} \right) \\
& \quad - \sqrt{\frac{2}{ \epsilon}} \; \nu \, h(z) (\rho_{23} - \rho_{12} \rho_{13}) \frac{\hat \psi_{0, xy} \, \hat \psi_{0, xz} }{ \hat \psi_{0, xx}} = 0; \\
&\hat \psi_1(x_{c/r, 0}(y, z), y, z) = 0, \quad \hat \psi_1(x_{0,0}(y, z), y, z) = 0.
\end{cases}
\label{eqn:434}
\end{equation}
\medskip
Next, we expand the solutions of (\ref{eqn:433}) and (\ref{eqn:434}) in powers of $\sqrt{\epsilon}$:
\begin{equation}
\hat \psi_0(x, y, z) = \hat \psi_{0, 0}(x, y, z) + \sqrt{\epsilon} \, \hat \psi_{0, 1}(x, y, z) + \epsilon \, \hat \psi_{0, 2}(x, y, z) + \cdots,
\label{eqn:435}
\end{equation}
and
\begin{equation}
\hat \psi_1(x, y, z) = \hat \psi_{1, 0}(x, y, z) + \sqrt{\epsilon} \, \hat \psi_{1, 1}(x, y, z) + \epsilon \, \hat \psi_{1, 2}(x, y, z) + \cdots.
\label{eqn:436}
\end{equation}
Similarly, expand the free boundaries $x_{c/r, 0}$ and $x_{0, 0}$ in powers of $\sqrt{\epsilon}$:
\begin{equation}
x_{c/r, 0}(y, z) = x_{c/r, 0, 0}(y, z) + \sqrt{\epsilon} \, x_{c/r, 0, 1}(y, z) + \epsilon \, x_{c/r, 0, 2}(y, z) + \cdots,
\label{eqn:437}
\end{equation}
and
\begin{equation}
x_{0, 0}(y, z) = x_{0, 0, 0}(y, z) + \sqrt{\epsilon} \, x_{0, 0, 1}(y, z) + \epsilon \, x_{0, 0, 2}(y, z) + \cdots.
\label{eqn:438}
\end{equation}
Substitute (\ref{eqn:435}) and (\ref{eqn:436}) into (\ref{eqn:433}) and (\ref{eqn:434}), respectively, and collect terms of the same order of $\sqrt{\epsilon}$. As discussed earlier, we determine the free boundaries $x_{c/r, 0, 0}(y, z)$ and $x_{0, 0, 0}(y, z)$ via a free-boundary problem for $\hat \psi_{0, 0}$; then, we use these boundaries as the {\it fixed} boundaries for $\hat \psi_{0, 1}$ and $\hat \psi_{1, 0}$.
\subsection*{Terms of order $1/\epsilon$ in $(\ref{eqn:433})$} By matching terms of order $1/\epsilon$ in (\ref{eqn:433}), we obtain the following:
\begin{equation}
{\cal L}_0 \, \hat \psi_{0, 0} - \, \, \nu^2 \left( 1 - \rho^2_{12} \right) \, \frac{\hat \psi^2_{0, 0, xy}}{ \hat \psi_{0, 0, xx}} = 0,
\label{eqn:439}
\end{equation}
or equivalently
\begin{equation}
(m - y) \, \hat \psi_{0, 0, y} + \nu^2 \, \hat \psi_{0, 0, yy} - \, \, \nu^2 \left( 1 - \rho^2_{12} \right) \, \frac{\hat \psi^2_{0, 0, xy} }{ \hat \psi_{0, 0, xx}} = 0.
\label{eqn:440}
\end{equation}
We, therefore, look for a $\hat \psi_{0,0}$ that is independent of $y$; otherwise, $\hat \psi_{0,0}$ would grow exponentially as $y \to \pm \infty$ \citep{MRsircar,Fouque:asymp}. We also seek free boundaries $x_{c/r, 0, 0}$ and $x_{0, 0, 0}$ that are independent of $y$.
\subsection*{Terms of order $1/\sqrt{\epsilon}$ in $(\ref{eqn:433})$} By matching terms of order $1/\sqrt{\epsilon}$ in (\ref{eqn:433}) and using the fact that $\hat \psi_{0, 0, y} \equiv 0$, we obtain the following:
\begin{equation}
{\cal L}_0 \, \hat \psi_{0, 1} = 0.
\label{eqn:441}
\end{equation}
Therefore, we look for a $\hat \psi_{0, 1}$ that is independent of $y$; otherwise, $\hat \psi_{0, 1}$ would grow exponentially as $y \to \pm \infty$.
\medskip
\subsection*{Terms of order $\epsilon^0$ in $(\ref{eqn:433})$} By matching terms of order $\epsilon^0$ in (\ref{eqn:433}) and using the fact that $\hat \psi_{0, 0, y} = \hat \psi_{0, 1, y} \equiv 0$, we obtain the following Poisson equation (in $y$) for $\hat \psi_{0, 2}$:
\begin{equation}
{\cal L}_0 \, \hat \psi_{0, 2} = -cx - {\cal L}_2 \, \hat \psi_{0, 0}.
\label{eqn:442}
\end{equation}
The solvability condition for this equation requires that $cx + {\cal L}_2 \, \hat \psi_{0, 0}$ be centered with respect to the invariant distribution ${\cal N}(m, \nu^2)$ of the Ornstein-Uhlenbeck process $Y$. Specifically,
\begin{equation}
\left \langle cx + {\cal L}_2 \, \hat \psi_{0, 0} \right \rangle = cx + \left \langle {\cal L}_2 \right \rangle \hat \psi_{0, 0} = 0,
\label{eqn:443}
\end{equation}
in which $\langle \cdot \rangle$ denotes averaging with respect to the distribution ${\cal N}(m, \nu^2)$:
\begin{equation}
\langle v \rangle = \frac{1 }{ \sqrt{2 \pi \nu^2}}\, \int_{-\infty}^\infty v(y) \, e^{- \frac{(y- m)^2 }{2 \nu^2}} \, \dd y.
\label{eqn:444}
\end{equation}
In (\ref{eqn:443}), the averaged operator $\left \langle {\cal L}_2 \right \rangle$ is defined by
\begin{equation}
\left \langle {\cal L}_2 \right \rangle v = -\lambda \, v - (r - \lambda) \, x \, v_x + \frac{1 }{ 2} \left( \frac{\mu - r }{ \sigma_*(z)} \right)^2 \, x^2 \, v_{xx},
\label{eqn:445}
\end{equation}
in which $\sigma_*(z)$ is given by
\begin{equation}
\frac{1 }{ \sigma^2_*(z)} = \left \langle \frac{1 }{ f^2(y, z)} \right \rangle.
\label{eqn:446}
\end{equation}
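For a concrete specification of $f$, the harmonic average $\sigma_*$ is easy to compute numerically. The following sketch (in Python; the choice $f(y, z) = e^{-y}$ and the values $m = 1.364$, $\nu = 0.15$ are the ones used later in the numerical experiments, so they are assumptions local to this illustration) approximates the Gaussian average in (\ref{eqn:444}) by a midpoint rule and compares the result with the exact value $e^{-m - \nu^2}$:

```python
import math

# Assumed specification f(y, z) = exp(-y), with m = 1.364 and nu = 0.15
# (the values used in the numerical-experiments section).
m, nu = 1.364, 0.15

def harmonic_avg_vol(f, n=4000, width=8.0):
    """Compute sigma_* from 1/sigma_*^2 = <1/f^2>, averaging against N(m, nu^2)."""
    lo, hi = m - width * nu, m + width * nu
    dy = (hi - lo) / n
    avg = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * dy          # midpoint rule
        weight = math.exp(-(y - m) ** 2 / (2 * nu ** 2)) / math.sqrt(2 * math.pi * nu ** 2)
        avg += weight / f(y) ** 2 * dy
    return 1.0 / math.sqrt(avg)
```

For $f(y) = e^{-y}$, one has $\langle e^{2y} \rangle = e^{2m + 2\nu^2}$, so the routine should return $e^{-m - \nu^2} \approx 0.25$.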
Thus, we have the following free-boundary problem for $\hat \psi_{0, 0}$:
\begin{equation}
\begin{cases}
& cx -\lambda \, \hat \psi_{0, 0} - (r - \lambda) \, x \, \hat \psi_{0, 0, x} + s(z) \, x^2 \, \hat \psi_{0, 0, xx} = 0 \, ; \\
& \hat \psi_{0, 0}(x_{c/r, 0, 0}(z), z) = \frac{c }{ r} \, x_{c/r, 0, 0}(z), \quad \hat \psi_{0, 0, x}(x_{c/r, 0, 0}(z), z) =\frac{c }{ r} \, ; \\
&\hat \psi_{0, 0}(x_{0, 0, 0}(z), z) = 1, \quad \hat \psi_{0, 0, x}(x_{0, 0, 0}(z), z) = 0.
\end{cases}
\label{eqn:447}
\end{equation}
with $s(z) = \frac{1 }{ 2} \left( \frac{\mu - r }{ \sigma_*(z)} \right)^2$. The general solution of the differential equation in (\ref{eqn:447}) is given by
\begin{equation}
\hat \psi_{0, 0}(x, z) = D_1(z) \, x^{B_1(z)} + D_2(z) \, x^{B_2(z)} + \frac{c }{ r} \, x,
\label{eqn:448}
\end{equation}
in which
\begin{equation}
B_1(z) = \frac{1 }{2 s(z)} \left[ \left(r - \lambda + s(z) \right) + \sqrt{ \left(r - \lambda + s(z) \right)^2 + 4 \lambda s(z)} \right] > 1,
\label{eqn:449}
\end{equation}
and
\begin{equation}
B_2(z) = \frac{1 }{ 2 s(z)} \left[ \left(r - \lambda + s(z) \right) - \sqrt{ \left(r - \lambda + s(z) \right)^2 + 4 \lambda s(z)} \right] < 0.
\label{eqn:450}
\end{equation}
We determine $D_1$ and $D_2$ from the free-boundary conditions.
The free-boundary conditions imply that
\begin{equation}
D_1(z) \, x_{c/r, 0, 0}(z)^{B_1(z)} + D_2(z) \, x_{c/r, 0, 0}(z)^{B_2(z)} + \frac{c }{ r} \, x_{c/r, 0, 0}(z) = \frac{c }{ r} \, x_{c/r, 0, 0}(z),
\label{eqn:451}
\end{equation}
\begin{equation}
D_1(z) \, B_1(z) \, x_{c/r, 0, 0}(z)^{B_1(z) - 1} + D_2(z) \, B_2(z) \, x_{c/r, 0, 0}(z)^{B_2(z) - 1} + \frac{c}{r} = \frac{c}{r},
\label{eqn:452}
\end{equation}
\begin{equation}
D_1(z) \, x_{0, 0, 0}(z)^{B_1(z)} + D_2(z) \, x_{0, 0, 0}(z)^{B_2(z)} + \frac{c}{r} \, x_{0, 0, 0}(z) = 1,
\label{eqn:453}
\end{equation}
and
\begin{equation}
D_1(z) \, B_1(z) \, x_{0, 0, 0}(z)^{B_1(z) - 1} + D_2(z) \, B_2(z) \, x_{0, 0, 0}(z)^{B_2(z) - 1} + \frac{c}{r} = 0,
\label{eqn:454}
\end{equation}
which gives us four equations to determine the four unknowns $D_1$, $D_2$, $x_{c/r, 0, 0}$, and $x_{0, 0, 0}$. Indeed, the solution to these equations is
\begin{equation}
D_1(z) = - \, \frac{1 }{ B_1(z) - 1} \left( \frac{c}{r} \cdot \frac{B_1(z) - 1}{ B_1(z)} \right)^{B_1(z)},
\label{eqn:455}
\end{equation}
\begin{equation}
D_2(z) \equiv 0,
\label{eqn:456}
\end{equation}
\begin{equation}
x_{c/r, 0, 0}(z) \equiv 0,
\label{eqn:457}
\end{equation}
and
\begin{equation}
x_{0, 0, 0}(z) = \frac{B_1(z)}{ B_1(z) - 1} \cdot \frac{r }{ c}.
\label{eqn:458}
\end{equation}
It follows that
\begin{equation}
\hat \psi_{0, 0}(x, z) = - \, \frac{1}{ B_1(z) - 1} \left( \frac{c}{r} \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot x \right)^{B_1(z)} + \frac{c}{r} \, x.
\label{eqn:459}
\end{equation}
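The closed form in (\ref{eqn:459}) is straightforward to check numerically. The following sketch (in Python; the parameter values, including $\sigma_* = 0.25$, are taken from the numerical-experiments section and are assumptions local to this illustration) verifies that $B_1 > 1$ and that the value-matching and smooth-fit conditions hold at the free boundary $x_{0,0,0}$:

```python
import math

# Assumed parameter values (from the numerical experiments), with sigma_* = 0.25.
r, mu, lam, c, sigma_star = 0.02, 0.1, 0.04, 0.1, 0.25

s = 0.5 * ((mu - r) / sigma_star) ** 2
B1 = ((r - lam + s) + math.sqrt((r - lam + s) ** 2 + 4 * lam * s)) / (2 * s)
k = (c / r) * (B1 - 1) / B1      # psi_00(x) = -(k x)^{B1} / (B1 - 1) + (c/r) x
x00 = B1 / (B1 - 1) * r / c      # free boundary x_{0,0,0}; note x00 = 1/k

def psi00(x):
    return -(k * x) ** B1 / (B1 - 1) + (c / r) * x

def psi00_x(x):
    return -B1 / (B1 - 1) * k ** B1 * x ** (B1 - 1) + c / r
```

At $x_{0,0,0}$ the value-matching condition $\hat\psi_{0,0} = 1$ and the smooth-fit condition $\hat\psi_{0,0,x} = 0$ both hold to machine precision.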
\subsection*{Terms of order $\sqrt{\epsilon}$ in $(\ref{eqn:433})$} By matching terms of order $\sqrt{\epsilon}$ in (\ref{eqn:433}) and using the fact that $\hat \psi_{0, 0, y} = \hat \psi_{0, 1, y} = 0$, we obtain the following Poisson equation (in $y$) for $\hat \psi_{0, 3}$:
\begin{equation}
{\cal L}_0 \, \hat \psi_{0, 3} = - {\cal L}_1 \, \hat \psi_{0, 2} - {\cal L}_2 \, \hat \psi_{0, 1}.
\label{eqn:460}
\end{equation}
As above, the solvability condition for this equation requires that
\begin{equation}
\left \langle {\cal L}_1 \, \hat \psi_{0, 2} + {\cal L}_2 \, \hat \psi_{0, 1} \right \rangle = 0,
\label{eqn:461}
\end{equation}
in which
\begin{equation}
\hat \psi_{0, 2}(x, z) = {\cal L}_0^{-1} \left( - cx - {\cal L}_2 \, \hat \psi_{0, 0} \right).
\label{eqn:462}
\end{equation}
It follows that $\hat \psi_{0, 1}$ solves
\begin{equation}
\left \langle {\cal L}_2 \right \rangle \hat \psi_{0, 1} = \left \langle {\cal L}_1 {\cal L}_0^{-1} \left( cx + {\cal L}_2 \, \hat \psi_{0, 0} \right) \right \rangle.
\label{eqn:463}
\end{equation}
Recall that we impose the (fixed) boundary conditions $\hat \psi_{0,1}(x_{c/r, 0, 0}(z), z) = 0$ and \hfill \break $\hat \psi_{0,1}(x_{0, 0, 0}(z), z) = 0$ at $x_{c/r, 0, 0}(z) \equiv 0$ and $x_{0, 0, 0}(z) = \frac{B_1(z) }{ B_1(z) - 1} \cdot \frac{r }{ c}$.
From (\ref{eqn:462}), it is straightforward to show that $\hat \psi_{0, 2}$ can be expressed as follows:
\begin{equation}
\hat \psi_{0, 2}(x, y, z) = - D_1(z) \, B_1(z) \, (B_1(z) - 1) \, x^{B_1(z)} \, \eta(y, z),
\label{eqn:464}
\end{equation}
in which $\eta$ solves
\begin{equation}
(m-y) \eta_y + \nu^2 \, \eta_{yy} = \frac{1}{2}\left( \frac{\mu - r }{ f(y, z)} \right)^2 - \frac{1}{2}\left( \frac{\mu - r }{ \sigma_*(z)} \right)^2 = \frac{1}{2}\left( \frac{\mu - r }{ f(y, z)} \right)^2 - s(z).
\label{eqn:465}
\end{equation}
It follows that the right-hand side of (\ref{eqn:463}) equals
\begin{equation}
\begin{split}
& - \rho_{12} \, (\mu - r) \, \nu \, \sqrt{2} \, D_1(z) \, B_1^2(z) \, (B_1(z) - 1) \, x^{B_1(z)} \, \left \langle \frac{\eta_y(y, z) }{ f(y, z)} \right \rangle \\
& = \rho_{12} \, (\mu - r) \, \sqrt{2} {\nu} \, D_1(z) \, B_1^2(z) \, (B_1(z) - 1) \, x^{B_1(z)} \, \left \langle \tilde F(y, z) \left( \frac{1}{2}\left( \frac{\mu - r }{ f(y, z)} \right)^2 - s(z) \right) \right \rangle ,
\end{split}
\label{eqn:466}
\end{equation}
in which $\tilde F$ is an antiderivative of $1/f$ with respect to $y$; that is,
\begin{equation}
\tilde F_y(y, z) = \frac{1 }{ f(y, z)}.
\label{eqn:467}
\end{equation}
From (\ref{eqn:463}) and (\ref{eqn:466}), we obtain that $\hat \psi_{0, 1}$ equals
\begin{equation}
\hat \psi_{0, 1}(x, z) = \tilde D_1(z) \, x^{B_1(z)} + \tilde D_2(z) \, x^{B_2(z)} + A(z) \, x^{B_1(z)} \, \ln x,
\label{eqn:468}
\end{equation}
in which $B_1$ and $B_2$ are given in (\ref{eqn:449}) and (\ref{eqn:450}), respectively, and $A$ is given by
\begin{equation}
A(z) = \frac{\rho_{12} \, (\mu - r) \, \sqrt{2} \nu\, D_1(z) \, B_1^2(z) \, (B_1(z) - 1) }{ (2 B_1(z) - 1) \, s(z) - (r - \lambda)} \left \langle \tilde F(y, z) \left( \frac{1}{2}\left( \frac{\mu - r }{ f(y, z)} \right)^2 - s(z) \right) \right \rangle.
\label{eqn:469}
\end{equation}
The functions $\tilde D_1$ and $\tilde D_2$ are given by the (fixed) boundary conditions at $x_{c/r, 0, 0}(z) \equiv 0$ and $x_{0, 0, 0}(z) = \frac{B_1(z) }{ B_1(z) - 1} \cdot \frac{r}{c}$, from which it follows that
\begin{equation}
\begin{split}
\hat \psi_{0, 1}(x, z) &= A(z) \, x^{B_1(z)} \, \left( \ln x - \ln \left( \frac{B_1(z) }{ B_1(z) - 1} \cdot \frac{r}{c} \right) \right) \\
&= A(z) \, x^{B_1(z)} \, \ln \left( x \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot \frac{c}{r} \right).
\end{split}
\label{eqn:470}
\end{equation}
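Both fixed boundary conditions can be verified directly from (\ref{eqn:470}): the factor $\ln\left(x \cdot \frac{B_1 - 1}{B_1} \cdot \frac{c}{r}\right)$ vanishes at $x_{0,0,0}$, and, since $B_1 > 1$, the product $x^{B_1} \ln x$ tends to $0$ as $x \downarrow 0$. A quick numerical check (in Python; $A(z)$ is set to $1$ and the parameter values are assumptions for illustration, since only the dependence on $x$ matters here):

```python
import math

# Assumed parameters (sigma_* = 0.25 etc., as in the numerical experiments);
# A(z) is set to 1 because only the x-dependence of psi_{0,1} is checked.
r, mu, lam, c, sigma_star = 0.02, 0.1, 0.04, 0.1, 0.25
s = 0.5 * ((mu - r) / sigma_star) ** 2
B1 = ((r - lam + s) + math.sqrt((r - lam + s) ** 2 + 4 * lam * s)) / (2 * s)
k = (c / r) * (B1 - 1) / B1        # so that x_{0,0,0} = 1/k

def psi01(x):
    """Shape of psi_{0,1} in x, with the prefactor A(z) normalized to 1."""
    return x ** B1 * math.log(k * x)
```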
Next, we focus on (\ref{eqn:434}) to find $\hat \psi_{1, 0}$, after which we will approximate $\hat \psi$ by $\hat \psi_{0, 0} + \sqrt{\epsilon} \, \hat \psi_{0, 1} + \sqrt{\delta} \, \hat \psi_{1, 0}$.
\subsection*{Terms of order $1/\epsilon$ in $(\ref{eqn:434})$} By matching terms of order $1/\epsilon$ in (\ref{eqn:434}), we obtain the following:
\begin{equation}
{\cal L}_0 \, \hat \psi_{1, 0} = 0,
\label{eqn:471}
\end{equation}
from which it follows that $\hat \psi_{1,0}$ is independent of $y$; otherwise, $\hat \psi_{1,0}$ will experience exponential growth as $y$ goes to $\pm \infty$ \citep{Fouque:asymp}.
\subsection*{Terms of order $1/\sqrt{\epsilon}$ in $(\ref{eqn:434})$} By matching terms of order $1/\sqrt{\epsilon}$ in (\ref{eqn:434}) and using the fact that $\hat \psi_{1, 0, y} \equiv 0$, we obtain the following:
\begin{equation}
{\cal L}_0 \, \hat \psi_{1, 1} = 0.
\label{eqn:472}
\end{equation}
Therefore, we look for a $\hat \psi_{1, 1}$ that is independent of $y$; otherwise, $\hat \psi_{1, 1}$ would grow exponentially as $y$ goes to $\pm \infty$.
\subsection*{Terms of order $\epsilon^0$ in $(\ref{eqn:434})$} By matching terms of order $\epsilon^0$ in (\ref{eqn:434}) and using the fact that $\hat \psi_{1, 0, y} = \hat \psi_{1, 1, y} \equiv 0$, we obtain the following Poisson equation (in $y$) for $\hat \psi_{1, 2}$:
\begin{equation}
{\cal L}_0 \, \hat \psi_{1, 2} = - {\cal L}_2 \, \hat \psi_{1, 0} + \rho_{13} \, \frac{\mu - r }{ f(y, z)} \, h(z) \, x \, \hat \psi_{0, 0, xz}.
\label{eqn:473}
\end{equation}
The solvability condition for this equation requires that
\begin{equation}
\left \langle - {\cal L}_2 \, \hat \psi_{1, 0} + \rho_{13} \, \frac{\mu - r }{ f(y, z)} \, h(z) \, x \, \hat \psi_{0, 0, xz} \right \rangle = 0,
\label{eqn:474}
\end{equation}
or equivalently,
\begin{equation}
\left \langle {\cal L}_2 \right \rangle \hat \psi_{1, 0} = \rho_{13} \, \left \langle \frac{\mu - r }{ f(y, z)} \right \rangle \, h(z) \, x \, \hat \psi_{0, 0, xz},
\label{eqn:475}
\end{equation}
with boundary conditions $\hat \psi_{1,0}(x_{c/r, 0, 0}(z), z) = 0$ and $\hat \psi_{1,0}(x_{0, 0, 0}(z), z) = 0$ at the boundaries $x_{c/r, 0, 0}(z) \equiv 0$ and $x_{0, 0, 0}(z) = \frac{B_1(z) }{ B_1(z) - 1} \cdot \frac{r}{c}$. It follows that $\hat \psi_{1, 0}$ is given by
\begin{equation}
\hat \psi_{1,0}(x, z) = x^{B_1(z)} \, \ln \left( x \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot \frac{c}{r} \right) \, \left[ A_1(z) + A_2(z) \ln \left( x \cdot \frac{B_1(z) }{ B_1(z) -1} \cdot \frac{r}{c} \right) \right] ,
\label{eqn:476}
\end{equation}
in which $A_1$ and $A_2$ are
\begin{equation}
A_1(z) = \frac{H_1(z) }{ (2 B_1(z) - 1) \, s(z) - (r - \lambda)} - \frac{H_2(z) \, s(z) }{ \left[ (2 B_1(z) - 1) \, s(z) - (r - \lambda) \right]^2},
\label{eqn:477}
\end{equation}
and
\begin{equation}
A_2(z) = \, \frac{1}{2}\cdot \frac{H_2(z) }{ (2 B_1(z) - 1) \, s(z) - (r - \lambda)},
\label{eqn:478}
\end{equation}
with $H_1$ and $H_2$ functions of $z$ defined by
\begin{equation}
\begin{split}
& H_1(z) + H_2(z) \, \ln x \\
& = - \rho_{13} \, h(z) \left \langle \frac{\mu - r }{ f(y, z)}\right \rangle \frac{B'_1(z) }{ B_1(z) - 1} \left(
\frac{B_1(z) - 1}{ B_1(z)} \cdot \frac{c}{r} \right)^{B_1(z)} \left[ 1 + B_1(z) \, \ln \left( x \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot \frac{c}{r} \right) \right].
\end{split}
\label{eqn:479}
\end{equation}
\subsection{The Approximation of the Probability of Lifetime Ruin and the Optimal Investment Strategy}\label{eq:app-tems}
Combining (\ref{eqn:459}), (\ref{eqn:470}), and (\ref{eqn:476}), we obtain the following approximation of $\hat \psi$
\begin{equation}
\begin{split}
\hat{\psi}^{\epsilon,\delta}(x, z) &= \hat \psi_{0, 0}(x, z) + \sqrt{\epsilon} \, \hat \psi_{0, 1}(x, z) + \sqrt{\delta} \, \hat \psi_{1, 0}(x, z) \\
&= - \, \frac{1 }{ B_1(z) - 1} \left( \frac{c}{r} \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot x \right)^{B_1(z)} + \frac{c}{r} \, x \\
& \quad + \sqrt{\epsilon} \, A(z) \, x^{B_1(z)} \, \ln \left( x \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot \frac{c}{r} \right) \\
& \quad + \sqrt{\delta} \, x^{B_1(z)} \, \ln \left( x \cdot \frac{B_1(z) - 1 }{ B_1(z)} \cdot \frac{c}{r} \right) \, \left[ A_1(z) + A_2(z) \ln \left( x \cdot \frac{B_1(z) }{ B_1(z) -1} \cdot \frac{r}{c} \right) \right],
\end{split}
\label{eqn:480}
\end{equation}
in which $A,$ $A_1,$ and $A_2,$ are specified in $(\ref{eqn:469}),$ $(\ref{eqn:477}),$ and $(\ref{eqn:478}),$ respectively.
We also approximate the dual of the optimal investment strategy up to the first powers of $\sqrt{\epsilon}$ and $\sqrt{\delta}$, as we did for $\hat \psi$. Using (\ref{eqn:424}), we obtain
\begin{equation}
\begin{split}
\hat{\pi}^{\epsilon,\delta} (x, z) &= - \frac{\mu - r}{ f^2(y, z)} \, x \, \hat \psi_{0, 0, xx} + \sqrt{\epsilon} \left( - \frac{\mu - r }{ f^2(y, z)} \, x \, \hat \psi_{0, 1, xx} + \rho_{12} \, \frac{\nu \sqrt{2} }{ f(y, z)} \, \hat \psi_{0, 2, xy} \right) \\
& \quad + \sqrt{\delta} \left( - \frac{\mu - r }{ f^2(y, z)} \, x \, \hat \psi_{1, 0, xx} + \rho_{13} \, \frac{h(z) }{ f(y, z)} \, \hat \psi_{0, 0, xz} \right).
\end{split}
\label{eqn:51}
\end{equation}
Given $w \in \mathbb{R}_+$, we solve for $x$ using $w = \hat{\psi}^{\epsilon,\delta}_x(x, z)$. Then, we let $\psi^{\epsilon,\delta}(w,z):=\hat{\psi}^{\epsilon,\delta}(x, z) - x w$, thereby performing the calculation in equation \eqref{eqn:413}. We also denote by $\pi^{\epsilon,\delta}$ the function that satisfies $\pi^{\epsilon,\delta}(w,z):=\hat{\pi}^{\epsilon,\delta} (x, z)$. Note that the resulting approximation of $\psi$ is not guaranteed to be a probability; however, this is not a problem in the numerical experiments we consider in the next section.
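At the leading order, the inversion just described can be carried out with a simple bisection, since $\hat\psi_{0,0,x}$ decreases from $c/r$ to $0$ on $(0, x_{0,0,0}]$. The following sketch (in Python; the parameter values are assumptions taken from the numerical-experiments section, with the volatility fixed at $0.25$) recovers the leading-order term of $\psi^{\epsilon,\delta}$ and compares it with the constant-volatility closed form $(1 - rw/c)^{p}$ of \citet{young}; one can check algebraically that the exponent satisfies $p = B_1/(B_1 - 1)$.

```python
import math

# Assumed parameters (from the numerical-experiments section), volatility 0.25.
r, mu, lam, c, sigma = 0.02, 0.1, 0.04, 0.1, 0.25

s = 0.5 * ((mu - r) / sigma) ** 2
B1 = ((r - lam + s) + math.sqrt((r - lam + s) ** 2 + 4 * lam * s)) / (2 * s)
k = (c / r) * (B1 - 1) / B1
x00 = 1.0 / k                                  # free boundary x_{0,0,0}

def psi_hat(x):                                # leading-order dual value
    return -(k * x) ** B1 / (B1 - 1) + (c / r) * x

def psi_hat_x(x):                              # decreasing from c/r to 0 on (0, x00]
    return -B1 / (B1 - 1) * k ** B1 * x ** (B1 - 1) + c / r

def ruin_prob(w):
    """Invert w = psi_hat_x(x) by bisection, then use psi(w) = psi_hat(x) - x w."""
    lo, hi = 1e-12, x00
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi_hat_x(mid) > w:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return psi_hat(x) - x * w
```

Since $\psi_{hat}(x) - xw$ is stationary in $x$ at the solution of $w = \hat\psi_{0,0,x}(x)$, the bisection error enters only at second order, so the recovered value is accurate essentially to machine precision.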
\section{Numerical Solution using the Markov Chain Approximation Method}\label{sec:MCAM}
In this section, we describe how to construct a numerical algorithm for the original optimal control problem directly using the Markov Chain Approximation Method (MCAM); see, e.g., \citet{KushnerBook, KushnerConsistency}. For ease of presentation, we describe the numerical algorithm only when the fast-scale volatility factor is present. In what follows, $\rho$ denotes the correlation between the Brownian motion driving the stock and the one driving the fast factor; that is, $\rho=\rho_{12}$.
Let us fix an $h$-grid, that is, a rectangular compact domain $G^h \subset \mathbb{R}^2$ with the same spacing $h$ in both directions.
We choose an initial guess (on this grid) for a candidate optimal strategy. Denote this strategy by $\pi$. Then, our goal is to create a discrete-time Markov chain $(\xi^h_n)_{n \geq 0}$ that lives on $G^h$ and that satisfies the local consistency condition
\begin{equation}\label{eq:local-const}
\begin{split}
{\bf E}_{x,n}^{h,\pi} [\Delta \xi_{n+1}^h ]&= b(x,\pi)\,\Delta t^{h} + o(\Delta t^{h}),\\
\text{Cov}_{x,n}^{h,\pi} [\Delta \xi_{n+1}^h ]&= A(x,\pi)\,\Delta t^{h} + o(\Delta t^{h}),
\end{split}
\end{equation}
in which $\Delta \xi^h_{n+1}=\xi^h_{n+1}-\xi^h_n$, and $b$ and $A$ denote the drift and the covariance of the vector $X_t=(W_t,Y_t)$, respectively. (The Markov chain is constructed to approximate this vector in a certain sense.)
${\bf E}_{x,n}^{h,\pi}$ denotes the expectation, given that the state of the Markov chain at time $n$ is $x$ and the strategy $\pi$ is used. In \eqref{eq:local-const}, the quantity $\Delta t^{h}$ (called the interpolation interval) is chosen so that it goes to zero as $h \to 0$. We also do not want this quantity to depend on the state variables or the control variable.
Since $G^h$ is a compact domain, we impose reflecting boundary conditions at its edges. (Natural boundaries exist for $W_t$, specifically $0$ and the safe level $\frac{c}{r}$; however, $Y_t$ lives on an infinite region.) For example, we choose the transition probabilities to be $p^{\pi,h}((w,y),(w,y-h))=1$ when $y$ is as large as it can be in $G^h$, for all $w \in [0, \frac{c}{r}]$.
\subsection{Constructing the Approximating Markov Chain}\hfill
\subsubsection{When $\rho=0$.} Denote $\alpha=\frac{1}{\epsilon}, \beta=\nu \sqrt{\frac{2}{\epsilon}}$. We obtain the transition probabilities of the Markov chain $\xi^h$ as
\begin{equation}
\begin{cases}
p^{\pi,h}((w,y), (w,y \pm h))& =\displaystyle \frac{ \beta^2/2 + h \alpha (m-y)^{\pm} }{\widetilde{Q}^h}\,, \\
p^{\pi,h}((w,y), (w\pm h ,y ))& =\displaystyle \frac{(f(y)\pi(w,y))^2/2 + h (\mu-r) \pi(w,y)^{\pm}+ h (rw-c)^{\pm}}{\widetilde{Q}^h}\, ,\\
p^{\pi,h}((w,y), (w,y ))&= \frac{\widetilde{Q}^h-Q^{\pi,h}(w,y)}{\widetilde{Q}^h}\,,
\end{cases}
\end{equation}
and choose the interpolation interval to be
$$\Delta t^{h} =\frac{h^2}{ \widetilde{Q}^h},$$
in which $$Q^{\pi,h}(w,y)= (\pi f(y))^2 + \beta^2 + h |\alpha (m-y)| + h|(\mu-r)\pi(w,y)| + h|rw-c| ,$$ and
\[
\widetilde{Q}^h=\max_{(w,y, \pi)} Q^{\pi,h}(w,y),
\]
in order to satisfy the local consistency condition. Here $a^{\pm}=\max\{0, \pm a\}$.
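This construction is easy to test for local consistency. The following sketch (in Python; the parameter values, including $\epsilon = 0.5$ and $f(y) = e^{-y}$, are assumptions for illustration) builds the one-step transition probabilities at a grid point and checks that they sum to one, are nonnegative, and reproduce the drifts of $W$ and $Y$ over one interpolation interval $\Delta t^h = h^2/\widetilde{Q}^h$:

```python
import math

# Assumed illustration parameters: eps = 0.5, and f(y) = exp(-y) with
# m = 1.364, nu = 0.15, as in the numerical-experiments section.
eps, nu, m = 0.5, 0.15, 1.364
mu, r, c = 0.1, 0.02, 0.1
alpha, beta = 1.0 / eps, nu * math.sqrt(2.0 / eps)
f = lambda y: math.exp(-y)
pos = lambda a: max(a, 0.0)          # a^+ = max(0, a)

def local_Q(w, y, pi, h):
    return ((pi * f(y)) ** 2 + beta ** 2 + h * abs(alpha * (m - y))
            + h * abs((mu - r) * pi) + h * abs(r * w - c))

def transition_probs(w, y, pi, h, Qt):
    """One-step probabilities of the chain at (w, y) when rho = 0; Qt stands in
    for the global normalizer (any upper bound for local_Q over the grid)."""
    return {
        (0, +1): (beta ** 2 / 2 + h * alpha * pos(m - y)) / Qt,
        (0, -1): (beta ** 2 / 2 + h * alpha * pos(y - m)) / Qt,
        (+1, 0): ((f(y) * pi) ** 2 / 2 + h * pos((mu - r) * pi)
                  + h * pos(r * w - c)) / Qt,
        (-1, 0): ((f(y) * pi) ** 2 / 2 + h * pos(-(mu - r) * pi)
                  + h * pos(c - r * w)) / Qt,
        (0, 0): (Qt - local_Q(w, y, pi, h)) / Qt,
    }
```

By construction, the one-step mean of the $y$-increment equals $\alpha(m - y)\,\Delta t^h$ exactly, and similarly for the $w$-increment.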
\subsubsection{When $\rho \neq 0$.} In this case a convenient transition probability matrix solving the local consistency condition is
\begin{equation} \label{eq:tranprob}
\begin{cases}
& p^{\pi,h}((w,y), (w,y \pm h)) =\displaystyle \frac{\beta^2/2 -|\rho \pi(w,y)| \beta f(y)/2 + h \alpha (m-y)^{\pm} }{\widetilde{Q}^h}\,, \\
& p^{\pi,h}((w,y), (w\pm h ,y )) =\displaystyle \frac{(f(y)\pi(w,y))^2/2-|\rho \pi(w,y)| \beta f(y)/2 + h (\mu-r) \pi(w,y)^{\pm}+ h (rw-c)^{\pm}}{\widetilde{Q}^h}\,,\\
& p^{\pi,h}((w,y), (w+ h ,y+h )) = p^{\pi,h}((w,y), (w-h ,y-h )) =\displaystyle \frac{ (\rho \pi(w,y))^{+} \beta f(y)}{2\widetilde{Q}^h}, \; \\
& p^{\pi,h}((w,y), (w+ h ,y-h )) = p^{\pi,h}((w,y), (w-h ,y+h )) = \displaystyle \frac{( \rho \pi(w,y))^{-} \beta f(y)}{2\widetilde{Q}^h}\,,\\
& p^{\pi,h}((w,y), (w,y ))= \frac{\widetilde{Q}^h-Q^{\pi,h}(w,y)}{\widetilde{Q}^h}\,,
\end{cases}
\end{equation}
where $$Q^{\pi,h}(w,y)= (\pi f(y))^2 + \beta^2 -|\rho \pi(w,y)| \beta f(y) + h |\alpha (m-y)| + h|(\mu-r)\pi(w,y)| + h|rw-c| .$$
For values of $|\rho|$ close to 1, some of these transition probabilities may be negative. The positivity of these probabilities is equivalent to the diagonal dominance of the covariance matrix $A=(a_{ij})$. (Recall that $A$ is diagonally dominant if $a_{ii}-\sum_{j, j\neq i}|a_{ij}| > 0$ for all $i$.) The construction of an approximating Markov chain when some of the expressions in \eqref{eq:tranprob} are negative is discussed next.
\subsubsection{When $\rho=1$ and some of the transition probabilities in \eqref{eq:tranprob} are negative.} We accomplish the construction of the approximating Markov chain in two steps, following \citet{KushnerConsistency}:
\noindent \textbf{(i) Decomposition.}
As in \citet{KushnerBook} Sections 5.3 and 5.4, we decompose $X$ into separate components and build approximating Markov chains to match each component. Then, we combine the transition probabilities appropriately to obtain the approximating Markov chain for $X$ itself.
Let $X=X^{(1)}+X^{(2)}$, in which
\begin{eqnarray}
\dd X^{(1)}_t &=& \begin{pmatrix} \pi f(y) \\ \beta \end{pmatrix} \dd B^1_t ,\label{eqn:dx1}\\
\dd X^{(2)}_t &=& \begin{pmatrix} rW_t-c + (\mu - r)\pi_t \\ \alpha (m-Y_t) \end{pmatrix} \dd t. \label{eqn:dx2}
\end{eqnarray}
Since $\rho=1$, we take $B^1=B^2$.
Suppose that the form of the locally consistent (with dynamics of $X^{(1)}$ and $X^{(2)}$, respectively) transition probabilities and interpolation intervals are
\begin{eqnarray*}
p_1^{\pi,h}(x,\bar{x}) &= &\frac{n_1^{\pi,h}(x,\bar{x})}{\widetilde{Q}_1^{h}}\; , \;\Delta t_1^{\pi,h} =\frac{h^2}{\widetilde{Q}_1^{h}}\; , \\
p_2^{\pi,h}(x,\bar{x}) &= & \frac{n_2^{\pi,h}(x,\bar{x})}{\widetilde{Q}_2^{h}}\; , \; \Delta t_2^{\pi,h} =\frac{h}{\widetilde{Q}_2^{h}}\;,
\end{eqnarray*}
for some $n_1^{\pi,h}(x,\bar{x}), n_2^{\pi,h}(x,\bar{x})$, and appropriate normalizers $\widetilde{Q}_1^{h}$, $\widetilde{Q}_2^{h}$.
Then, the following transition probabilities and the interpolation interval are locally consistent with the dynamics of $X$
\begin{eqnarray}
p^{\pi,h}(x,\bar{x}) =\displaystyle \frac{n_1^{\pi,h}(x,\bar{x}) + h n_2^{\pi,h}(x,\bar{x}) }{\widetilde{Q}_1^{h}+h \widetilde{Q}_2^{h}}, \quad \Delta t^{\pi,h} =\displaystyle\frac{h^2}{ \widetilde{Q}_1^{h}+h \widetilde{Q}_2^{h}}.
\label{eqn:pdx}
\end{eqnarray}
Since it is easier, we first provide the expression for $p^{\pi,h}_2$:
\begin{equation}
\begin{cases}
p^{\pi,h}_2((w,y), (w,y \pm h))& =\displaystyle \frac{ \alpha (m-y)^{\pm} }{\widetilde{Q}_2^{h}}\,, \\
p^{\pi,h}_2((w,y), (w\pm h ,y ))& =\displaystyle \frac{ (\mu-r) \pi(w,y)^{\pm}+ (rw-c)^{\pm}}{\widetilde{Q}_2^{h}}\,,
\\ p_2^{\pi,h}((w,y), (w,y ))&= \displaystyle \frac{\widetilde{Q}_2^h-Q_2^{\pi,h}(w,y)}{\widetilde{Q}_2^h},
\label{eqn:pdx2}
\end{cases}
\end{equation}
where
$$Q_2^{\pi,h}(w,y)= \displaystyle \alpha |m-y| + (\mu-r) |\pi(w,y)|+ |rw-c| .$$
The computation of $p^{\pi,h}_1$ is more involved. This is the subject of the next step.
\noindent \textbf {(ii) Variance control.} System (\ref{eqn:dx1}) is fully degenerate; that is, the corresponding covariance matrix $A$ is not diagonally dominant. The previous technique for building a Markov chain does not work. Instead, we will build an approximating Markov chain by allowing the local consistency condition to be violated by a small margin of error.
If $(\sigma_1,\sigma_2) = (q k_1,q k_2)$ for some constant $q$ and integers $k_1$, $k_2$, we could let the transition probability be $p^{h}(x,x \pm (h k_1,h k_2)) = {1}/{2}$ and the interpolation interval be $\Delta t^{h} = {h^2}/{q^2}$, and we would obtain a locally consistent Markov chain. This is not possible in general. For an arbitrary vector $(\sigma_1,\sigma_2)$,
we can find a pair of integers $k_1(x,\pi), k_2(x,\pi)$, and a real number $\gamma(x,\pi) \in [0,1]$, such that
$$\begin{pmatrix} \sigma_1(x,\pi) \\ \sigma_2 \end{pmatrix} = q(x,\pi)\begin{pmatrix} k_1(x,\pi) \\k_2(x,\pi) + \gamma(x,\pi) \end{pmatrix}. $$
Since the Markov chain is constrained to the grid $G^h$, we can only approximately let it move in the direction of $(\sigma_1,\sigma_2)^T$. We choose
\begin{equation}
\begin{split}
p^{\pi,h}(x,x\pm h(k_1, k_2 )^T) & = p_1/2 , \\
p^{\pi,h}(x,x\pm h(k_1, k_2+1)^T) & = p_2/2 , \label{eqn:pdx1}
\end{split}
\end{equation}
in which $p_1 + p_2 =1$, and $p_1$ and $p_2$ will be appropriately chosen in what follows.
The mean and the covariance of the increments of the approximating chain are
\begin{equation}
\begin{split}
{\bf E}_{x,n}^{h,\pi}[\Delta \xi^h(x,\pi)] &= 0,\\
{\bf E}_{x,n}^{h,\pi}[\Delta \xi^h(x,\pi)\Delta \xi^h(x,\pi)^T] &= h^2 C(x,\pi),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
C(x,\pi)&= p_1 \begin{pmatrix} k_1^2 & k_1 k_2 \\ k_1 k_2 & k_2^2\end{pmatrix} + p_2 \begin{pmatrix}k_1^2 & k_1 (k_2 +1)\\ k_1 (k_2+1) & (k_2+1)^2 \end{pmatrix} \\
& = \begin{pmatrix} k_1^2 & k_1 (k_2+p_2) \\ k_1 (k_2+p_2) & k_2^2 + 2 p_2 k_2 + p_2 \end{pmatrix}.
\end{split}
\label{eqn:ccc}
\end{equation}
We choose the interpolation interval to be $ \Delta t^{\pi,h}(x,\pi)=h^2/q^2$.
On the other hand
$$a(x,\pi)= A(x,\pi)/q^2= \begin{pmatrix} k_1^2 & k_1 (k_2+\gamma) \\ k_1 (k_2+\gamma) & (k_2+\gamma)^2 \end{pmatrix},$$
and we see that if we pick $p_2 = \gamma $,
then $C_{11}=a_{11}$ and $C_{12}=a_{12}$ match, but we violate the local consistency condition
by
\begin{equation}\label{eq:var-cont-noise}
\frac{C_{22} - a_{22}}{a_{22}} = \frac{\gamma (1-\gamma)}{ (k_2 + \gamma)^2} = O\left(\frac{1}{k_2^2}\right).
\end{equation}
We will choose $k_2$ sufficiently large so that the local consistency condition is almost satisfied, and the numerical noise in \eqref{eq:var-cont-noise} is significantly reduced.
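The decomposition $(\sigma_1,\sigma_2)^T = q\,(k_1, k_2+\gamma)^T$ can be computed directly. The following sketch (in Python; the vector $(\sigma_1,\sigma_2)$ and the choice of $k_1$ are assumptions for illustration) also returns the relative violation of the local consistency condition, which shrinks like $1/k_2^2$:

```python
def variance_control(sigma1, sigma2, k1=50):
    """Write (sigma1, sigma2) = q * (k1, k2 + gamma) with k1, k2 integers and
    gamma in [0, 1). Picking p2 = gamma matches the (1,1) entry and the cross
    term of the covariance, at the cost of a relative error
    gamma (1 - gamma) / (k2 + gamma)^2 in the (2,2) entry."""
    q = sigma1 / k1
    k2 = int(sigma2 / q)                 # integer part
    gamma = sigma2 / q - k2
    rel_err = gamma * (1.0 - gamma) / (k2 + gamma) ** 2
    return q, k2, gamma, rel_err
```

Increasing $k_1$ (hence $k_2$) drives the consistency error to zero, which is the sense in which the condition is "almost satisfied."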
\subsubsection{The case when $\rho \in (-1,1)$ and some of the transition probabilities in \eqref{eq:tranprob} are negative}
We will decompose the state variable into three components:
\begin{equation}
\dd X_t = \begin{pmatrix} \dd W_t \\ \dd Y_t \end{pmatrix} = \begin{pmatrix}rW_t-c + (\mu - r)\pi_t \\ \alpha (m-Y_t)\end{pmatrix} \dd t + \begin{pmatrix}\pi_t f(Y_t) \\
\beta \rho \end{pmatrix} \dd B_t^1 + \begin{pmatrix}0 & 0 \\
0 & \beta \sqrt{1-\rho^2} \end{pmatrix} \begin{pmatrix} \dd B_t^1 \\ \dd B_t^2 \end{pmatrix},
\label{eqn:dx3}
\end{equation}
that is, a drift component, a fully degenerate noise component, and a noise component with a diagonally dominant covariance matrix. We can build an approximating Markov chain for each component separately and
then combine them as discussed above.
\subsection{Approximating the Probability of Ruin and Updating the Strategy}
We solve the system of linear equations
\begin{equation}\label{eq:dy-pg-pr}
V^{\pi,h}(x)=e^{-\lambda \Delta t^{\pi,h}}\sum_{\tilde{x} \in G^h}p^{\pi,h}(x,\tilde{x})V^{\pi,h}(\tilde{x}),
\end{equation}
with boundary conditions $V^{\pi,h}(0,y)=1$ and $V^{\pi,h}(c/r,y)=0$.
This is the dynamic programming equation for a probability of ruin problem when the underlying state variable is the Markov chain $\xi^h$. In the next step, we update our candidate for the optimal strategy.
For convenience, denote $V^{\pi,h}$ by $V$.
In the interior points of the grid
{\small
\[
\pi(w,y)=-\frac{h(\mu-r)[V(w+h,y)-V(w,y)]+ (\beta/2) \rho f(y)\left[V(w+h,y+h)+V(w,y-h)-V(w+h,y-h)-V(w,y+h)\right]}{f^2(y) \left[V(w+h,y)+V(w-h,y)-2V(w,y)\right]}.
\]
}
On the wealth dimension boundaries of the grid, we let $\pi(c/r,y)=0$ and
\[
\begin{split}
\pi(0,y)=-\frac{h(\mu-r)[V(h,y)-V(0,y)]+(\beta/2) \rho f(y)\left[V(h,y+h)+V(0,y-h)-V(h,y-h)-V(0,y+h)\right]}{f^2(y) \left[2V(0,y)-5V(h,y)+4V(2h,y)-V(3h,y)\right]}.
\end{split}
\]
The updates of the optimal strategy for the maximum and minimum values of $y$ are similar.
\subsection*{Iteration}
Once the optimal strategy is updated, we go back and update the transition probabilities and solve the system of linear equations in \eqref{eq:dy-pg-pr} to update the value function. This iteration continues until the improvement in the value function is smaller than an exogenously picked threshold.
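In the constant-volatility special case ($f \equiv \sigma$, no $y$-dimension), this iteration can be written out in a few lines, since each policy-evaluation step is a tridiagonal linear system in $w$ that can be solved exactly by the Thomas algorithm. The following sketch (in Python; the parameter values are assumptions taken from the numerical-experiments section, and the cap on the control and the iteration count are illustrative design choices) can be checked against the closed-form solution of \citet{young}:

```python
import math

# Assumed setting: constant volatility f = sigma, so the rho = 0 chain is
# one-dimensional in w; parameters as in the numerical-experiments section.
r, mu, lam, c, sigma = 0.02, 0.1, 0.04, 0.1, 0.25
h = 0.05
N = round((c / r) / h)                     # grid 0, h, ..., c/r
W = [i * h for i in range(N + 1)]

def evaluate(pi):
    """Solve V = e^{-lam dt} P^pi V with V(0) = 1, V(c/r) = 0 for a fixed pi."""
    Q = [(sigma * pi[i]) ** 2 + h * abs((mu - r) * pi[i]) + h * abs(r * W[i] - c)
         for i in range(N + 1)]
    Qt = max(Q)
    disc = math.exp(-lam * h * h / Qt)     # e^{-lam dt}, dt = h^2 / Qt
    a = [0.0] * (N + 1); b = [1.0] * (N + 1); cc = [0.0] * (N + 1); d = [0.0] * (N + 1)
    d[0], d[N] = 1.0, 0.0                  # ruin level and safe level
    for i in range(1, N):
        pp = ((sigma * pi[i]) ** 2 / 2 + h * max((mu - r) * pi[i], 0.0)
              + h * max(r * W[i] - c, 0.0)) / Qt
        pm = ((sigma * pi[i]) ** 2 / 2 + h * max(-(mu - r) * pi[i], 0.0)
              + h * max(c - r * W[i], 0.0)) / Qt
        ps = 1.0 - Q[i] / Qt
        a[i], b[i], cc[i] = -disc * pm, 1.0 - disc * ps, -disc * pp
    for i in range(1, N + 1):              # Thomas algorithm: forward sweep
        m_ = a[i] / b[i - 1]
        b[i] -= m_ * cc[i - 1]
        d[i] -= m_ * d[i - 1]
    V = [0.0] * (N + 1)
    V[N] = d[N] / b[N]
    for i in range(N - 1, -1, -1):         # back substitution
        V[i] = (d[i] - cc[i] * V[i + 1]) / b[i]
    return V

def improve(V):
    """First-order condition for the minimizing pi at interior grid points."""
    pi = [0.0] * (N + 1)
    for i in range(1, N):
        den = sigma ** 2 * (V[i + 1] + V[i - 1] - 2.0 * V[i])
        if den > 1e-14:
            # cap keeps the control in a compact set (illustrative choice)
            pi[i] = min(5.0, max(0.0, -h * (mu - r) * (V[i + 1] - V[i]) / den))
    return pi

pi = [1.0] * (N + 1)                       # crude initial guess
for _ in range(30):
    V = evaluate(pi)
    pi = improve(V)
V = evaluate(pi)
```

The converged value function should agree with $(1 - rw/c)^{p(\sigma)}$ up to the $O(h)$ discretization error of the chain.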
\subsection*{Two Technical Issues}
\begin{itemize}
\item The initial guess of the optimal strategy is important. For $\rho=0$, we take the initial strategy to be the one from the constant-volatility case, for which a closed-form solution is available in \citet{young}. For $\rho \neq 0$, we take the final strategy computed from the zero-correlation case ($\rho=0$) as the initial guess. This initial guess makes the algorithm converge quickly.
\item For $\rho \neq 0$, the covariance matrix of the wealth process and the volatility factor, in general, does not satisfy the diagonal dominance condition. The problem is more serious for the slow factor, since its variance is of order $\delta$, and the numerical noise from ``variance control'' is far greater.
To resolve this issue, we perform a ``scale adjustment'' to increase the variance of the factor.
For example, if we define $\bar{Z_t}=100 Z_t$, then the dynamic of the system becomes
\begin{equation}
\begin{split}
\frac{\dd S_t}{S_t} &= \mu \, \dd t + f\left(Y_t,\frac{\bar{Z_t}}{100}\right) \dd B_t^1,\\
\dd \bar{Z_t}&= \delta (100 m-\bar{Z_t}) \, \dd t + 100 \sqrt{\delta} \, \sqrt{2} \, \nu \, \dd B_t^{(3)}.
\end{split}
\end{equation}
Here $g(z)=m-z$ and $h(z)=\sqrt{2} \, \nu$ in our model.
The new system is mathematically equivalent to the original one but has a much larger variance; thus, the numerical noise in the variance control is much smaller. Note that this scheme is equivalent to choosing different grid sizes for the volatility and wealth dimensions.
\end{itemize}
\section{Numerical Experiments}
In order to conduct our numerical experiments, we take the dynamics of the slow factor in \eqref{eqn:27} to be
\begin{equation*}
\dd Z_t = \delta (m-Z_t)dt + \sqrt{\delta} \sqrt{2}\nu \dd B_t^{(3)},\ \quad Z_0=z.
\end{equation*}
We let $f(y,z)=\exp(-y)$ or $f(y,z)=\exp(-z)$ in \eqref{eqn:sigmat},
depending on whether we want to account for the fast volatility factor or the slow volatility factor in our modeling.
We will call $1/\epsilon$ or $\delta$ the speed of mean reversion. We will take the correlations between the Brownian motions driving the volatility factors and the stock price to be $\rho=\rho_{13}=\rho_{12}$.
The following parameters are fixed throughout this section:
\begin{itemize}
\item $r=0.02$; the risk-free interest rate is 2\% over inflation.
\item $\mu=0.1$; the expected return of the risky asset is 10\% over inflation.
\item $c=0.1$; the individual consumes at a constant rate of 0.1 unit of wealth per year.
\item $\lambda=0.04$; the hazard rate (force of mortality) is constant such that the expected future lifetime is always 25 years.
\item $m=1.364$ and $\nu=0.15$, so that the harmonic average volatility, which we denote by $\sigma_m$, satisfies $\sigma_m = \sqrt{1/{\bf E}[1/f^2(Y)]} = \sqrt{1/{\bf E}[e^{2Y}]}=e^{-m-\nu^2}=0.25$, in which $Y$ is a normal random variable with mean $m$ and variance $\nu^2$. The distribution of this random variable is the stationary distribution of the process $(Y_t)_{t \geq 0}$; see \eqref{eqn:446}. [Note that $\sigma_m$ is very close in value to ${\bf E}[f(Y)]={\bf E}[e^{-Y}]=e^{-m+\nu^2/2}=0.26$.]
\end{itemize}
In our numerical procedure, we use a bounded region for $Y$ and impose reflecting boundary conditions.
However, $f(Y_t)$ is neither bounded nor bounded away from zero.
On the other hand, the invariant distribution of the process $Y$ is normal with mean 1.364 and variance $0.15^2$, so the probability that $Y_t$ is negative or very large is very small. Therefore, the fact that $f(Y_t)$ is not bounded or bounded away from zero does not significantly affect the accuracy of our numerical work.
\subsection*{Observation 1}
We give a three-dimensional graph of the minimum probability of ruin and of the optimal investment strategy in Figure~\ref{fig:mcam3}, both computed using MCAM. Here, the speed of mean reversion is 0.5, $\rho=0$, and only one factor is used. In our experiments, we observed that the optimal strategy $\pi^*$ is positive (no short-selling).
As expected, we observe that $w \mapsto \psi(w,y)$ is convex and decreasing. Note that $f(y) \mapsto \psi(w,y)$ is increasing, and $f(y) \mapsto \pi^*(w,y)$ is decreasing; however, it is not necessarily true that $w \mapsto \pi^{*}(w,y)$ is decreasing. The latter behavior depends on the value of $y$.
The probability of ruin does not depend on the sign of the correlation, $\rho$, between the Brownian motion driving the stock and the one driving the volatility. The larger the magnitude of $\rho$, the larger the probability of ruin; however, the minimum probability of ruin is quite insensitive to changes in $\rho$; see Figure~\ref{fig:rho3}.
\subsection*{Observation 2}
We compare the optimal investment strategy $\pi^*(w,y,z)$ in \eqref{eqn:414} to
\begin{equation*}
\widetilde{\pi}(w;\sigma)=\frac{\mu-r}{\sigma^2} \frac{c-rw}{(p-1)r}\;\;,
\end{equation*}
in which
\[
p=\frac{1}{2r}\left[(r+\lambda+s)+\sqrt{(r+\lambda+s)^2- 4 r \lambda}\right],
\]
and
\[
s=\frac{1}{2}\left(\frac{\mu-r}{\sigma}\right)^2.
\]
When we want to emphasize the dependence on $\sigma$, we will refer to $p$ as $p(\sigma)$.
\citet{young} showed that the strategy $\widetilde{\pi}$ is optimal when the volatility is fixed to be $\sigma$.
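The quantities $p(\sigma)$ and $\widetilde{\pi}(w;\sigma)$ are straightforward to evaluate. The sketch below uses illustrative parameter values for $\mu$, $r$, $\lambda$, and $c$ (assumptions for the example only, not values taken from the text); it confirms that $p>1$, so that $\widetilde{\pi}$ is positive below the safe level $c/r$ and vanishes there.

```python
import math

# Illustrative parameters (assumptions, not values from the text):
# riskless rate r, drift mu, hazard rate lam, consumption rate c.
r, mu, lam, c = 0.02, 0.06, 0.04, 1.0

def p_of(sigma):
    """Exponent p(sigma) from the constant-volatility ruin problem."""
    s = 0.5 * ((mu - r) / sigma) ** 2
    return ((r + lam + s) + math.sqrt((r + lam + s) ** 2 - 4 * r * lam)) / (2 * r)

def pi_tilde(w, sigma):
    """Optimal constant-volatility investment amount."""
    return (mu - r) / sigma**2 * (c - r * w) / ((p_of(sigma) - 1) * r)

sigma = 0.25
p = p_of(sigma)
print(p, pi_tilde(10.0, sigma))   # p > 1, positive investment below c/r
```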
If only the fast factor is present and the speed of mean reversion is 250 ($\epsilon=0.004$), then $\hat{\psi}^{\epsilon,\delta}$ in \eqref{eqn:480} can be expressed as
\[
\hat{\psi}^{\epsilon}(x)=\hat{\phi}_{0,0}(x)
\]
whose inverse Legendre transform is
\begin{equation}\label{eq:vgax}
\psi^{\epsilon}(w;\sigma_m)=\left(1-\frac{r}{c} w\right)^{p(\sigma_m)},
\end{equation}
which is exactly the minimal probability of ruin if the volatility were fixed at $\sigma_m$. Therefore, it is not surprising that for very small values of $\epsilon$, the minimum probability of ruin $\psi(w,y)$, calculated using MCAM, can be approximated by \eqref{eq:vgax}; see Figure~\ref{fig:sWorld1}-a.
In our numerical calculations and in \eqref{eq:vgax},
we observe that the minimum probability of ruin $\psi$ does not depend on its second variable. This result is intuitive: when only the fast factor is present, whatever the initial value $\sigma_0$ is, the volatility quickly approaches its equilibrium distribution (the distribution of $f(Y)$ under the stationary normal law of $Y$, whose harmonic average is $\sigma_m$).
In fact $\pi_{0}(w; \sigma_m)$ practically coincides with the optimal investment strategy $\pi^*$, which is computed using MCAM; see Figure~\ref{fig:sWorld1}-b.
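The monotonicity and convexity noted in Observation 1 are easy to confirm on the constant-volatility formula \eqref{eq:vgax}. A brief sketch (parameter values are illustrative assumptions; $\sigma_m = 0.25$ as above):

```python
import math

# Illustrative parameters (assumptions, not values from the text).
r, mu, lam, c, sigma_m = 0.02, 0.06, 0.04, 1.0, 0.25

s = 0.5 * ((mu - r) / sigma_m) ** 2
p = ((r + lam + s) + math.sqrt((r + lam + s) ** 2 - 4 * r * lam)) / (2 * r)

W = c / r   # safe level: ruin is impossible at or above c/r

def psi(w):
    """Constant-volatility ruin probability (1 - w/W)^p on [0, W]."""
    return (1.0 - w / W) ** p

ws = [W * i / 100 for i in range(101)]
vals = [psi(w) for w in ws]

assert vals[0] == 1.0 and vals[-1] == 0.0          # boundary values
assert all(a > b for a, b in zip(vals, vals[1:]))  # decreasing in wealth
# convexity: second differences on the uniform grid are nonnegative
assert all(vals[i-1] - 2 * vals[i] + vals[i+1] > -1e-12 for i in range(1, 100))
```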
The most important conclusion from Figure~\ref{fig:sWorld1}-b is that it is not necessarily true that the optimal investment strategy under stochastic volatility is larger or smaller than the optimal investment strategy under constant volatility. Comparing $\widetilde{\pi}(w;\sigma)$ and $\pi^{*}(w,-\ln(\sigma))$ for different values of $\sigma$, we see that $\pi^{*}(w,-\ln(\sigma))<\widetilde{\pi}(w;\sigma)$ for larger values of $\sigma$, whereas the opposite inequality holds for smaller values of $\sigma$. The investment amount decreases significantly as the volatility increases.
If only the slow factor is present and the speed of mean reversion is 0.02, then
\begin{equation}\label{eq:vgaxd}
\psi^{\delta}(w,z)=\left(1-\frac{r}{c} w\right)^{p(e^{-z})},
\end{equation}
approximates the minimum probability of ruin $\psi(w,z)$, which we calculate using MCAM, quite well; compare $\psi(w,-\ln(\sigma))$ and $\psi^{\delta}(w,-\ln(\sigma))$ for different values of $\sigma$ in Figure~\ref{fig:sWorld16}-a. We also compare $\widetilde{\pi}(w;\sigma)$ and $\pi^{*}(w,-\ln(\sigma))$ for several values of $\sigma$ and draw the same conclusions as before. Also note that the optimal investment strategy is not necessarily a decreasing function of wealth.
When we take the speed of mean reversion to be 0.2 (medium speed), the probability of ruin starts to deviate from what \eqref{eq:vgax} or \eqref{eq:vgaxd} describes; see Figure~\ref{fig:sWorld13}-a. As to the comparison of the optimal investment strategy with $\widetilde{\pi}(w;\sigma)$, the same conclusions can be drawn; see Figure~\ref{fig:sWorld13}-b.
\subsection*{Observation 3}
We compare the performance of several investment strategies in the stochastic volatility environment.
Let $\sigma_0$ be the initial volatility. We denote by $\pi^M$ the strategy in which one invests only in the money market. The corresponding probability of ruin can be computed explicitly as $\psi^{M}(w)=\left(1-\frac{r}{c}\,w\right)^{\lambda/r}$. We will also denote
$\pi^a(w)=\widetilde{\pi}(w;\sigma_0)$, $\pi^{b}(w)=\widetilde{\pi}(w;\sigma_m)$, and
\begin{equation}\label{eq:pi-c}
\pi^{c}(w,y,z)=\frac{\mu-r}{f^2(y,z)} \frac{c-rw}{(p-1)r}.
\end{equation}
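As a sanity check on $\psi^M$: under $\pi^M$ wealth evolves deterministically according to $dw = (rw - c)\,dt$, so ruin occurs exactly when the individual is still alive at the deterministic ruin time, an event of probability $e^{-\lambda t_{\mathrm{ruin}}}$. The sketch below (illustrative parameters; assumptions only) integrates the wealth equation and recovers the closed form $(1-rw/c)^{\lambda/r}$ numerically.

```python
import math

# Illustrative parameters (assumptions): note w0 < c/r, so ruin is possible.
r, lam, c, w0 = 0.02, 0.04, 1.0, 20.0

# Integrate dw/dt = r*w - c with small Euler steps until wealth hits zero.
w, t, dt = w0, 0.0, 1e-4
while w > 0.0:
    w += (r * w - c) * dt
    t += dt

ode_ruin = math.exp(-lam * t)                   # P(alive at the ruin time)
closed_form = (1.0 - r * w0 / c) ** (lam / r)   # here lam/r = 2, value ~0.36

print(ode_ruin, closed_form)
```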
Let $\pi^{\epsilon}(w)$ denote the approximation to the optimal strategy we obtained in Section~\ref{eq:app-tems} when we only use the $\epsilon$-perturbation. Similarly, let $\pi^{\delta}(w,z)$ be the approximation to the optimal strategy when we only use the $\delta$-perturbation.
We obtain the probability of ruin corresponding to a given strategy $\pi$ by solving the linear partial differential equation ${\cal D}^{\pi} v=0$ (see \eqref{eqn:31} for the definition of the differential operator ${\cal D}^{\pi}$) with boundary conditions $v(0,y,z)=1$ and $v(c/r,y,z)=0$. (This computation uses the MCAM without iterating.)
In Figures~\ref{fig:diffStrat109}--\ref{fig:diffStrat109c} we observe that the performance of $\pi^c$ and $\pi^{\epsilon}$ is almost as good as that of the optimal strategy $\pi^*$. (Here we are considering a medium mean reversion speed. When the mean reversion speed is much smaller, $\pi^{\delta}$ would be a better investment strategy.) Moreover, their performance is robust, in that it does not depend on the initial volatility $\sigma_0$. This should be contrasted with $\pi^a$ and $\pi^b$: the former performs relatively well when $\sigma_0$ is small, whereas the latter performs better when $\sigma_0$ is large. When $\sigma_0=\sigma_m$, all strategies perform as well as the optimal strategy. Also, observe that for wealthy or very poor individuals the choice of strategy does not matter as long as they invest in the stock market. The difference arises for the individuals who lie in between.
As a result, we conclude that if the individual wants to minimize her probability of ruin in a stochastic volatility environment, she can still use the investment that is optimal for the constant volatility environment. She simply needs to update the volatility in that formula whenever the volatility changes significantly.
\bibliographystyle{model2-names}
\section{Introduction}
\label{sec:intro}
A chemical reaction network consists of chemical species which interact through reactions to form new chemical species. Under reasonable physical assumptions, such as well-mixing of the chemicals and sufficient molecular counts, it is reasonable to model the dynamics of such systems with mass-action kinetics resulting in a system of nonlinear polynomial ordinary differential equations. Mass-action systems are a common modeling framework for industrial processes \cite{E-T,Sharma2000} and systems biology \cite{Ingalls,Alon2007}.
In general, characterizing the steady states of mass-action systems is made challenging by the high-dimensionality, significant nonlinearities, and parameter uncertainty inherent in realistic biochemical reaction systems, such as signal transduction cascades and gene regulatory networks. Recent mathematical research has focused on developing computationally-tractable network-based methods for characterizing properties of the steady states of mass-action systems, such as the capacity for multistationarity \cite{M-D-S-C,MR2012,C-F1,C-F2,C-F-M-W2016} and absolute concentration robustness \cite{Sh-F,A-E-J,Tonello2017}, and developing methods for constructing parametrizations of the steady state set \cite{MR2014,M-D-S-C,C-D-S-S,J-M-P,J-B2018,D-M2018}.
Recent work of the author has focused on methods for establishing steady state properties of mass-action systems through the method of \emph{network translation} \cite{J1,J2,Tonello2017,J-M-P,J-B2018}. In this approach, the reaction graph of a chemical reaction network is corresponded to a generalized chemical reaction network with more desirable topological properties, such as weak reversibility and a low deficiency. In a generalized network, there are two sets of a complexes: (i) \emph{stoichiometric complexes}, which determine the stoichiometry of the network; and (ii) \emph{kinetic-order complexes}, which determine the rate of each reaction. These properties can then be used to construct a steady state parametrization which is monomial \cite{MR2014} or rational \cite{J-M-P}, depending on the topological structure of the network. Translation-based results have been used in conjunction with recent computational work on multistationarity \cite{C-F-M-W2016} to establish or eliminate the capacity of multistationarity in a variety of biochemical models, including the EnvZ-OmpR osmoregulatory network \cite{Sh-F,J1}, shuttled WNT signaling network \cite{G-H-R-S,J-M-P}, and multisite phosphorylation networks \cite{M-H-K1,J-B2018}.
Nevertheless, limitations to the application of network translation remain. For example, consider the following chemical reaction network:
\begin{equation} \label{example1}
\begin{tikzcd}
& X_2 & 2X_2 \arrow[rd,"r_3"] & & X_1 + X_2 \\[-0.15in]
X_1 \arrow[rd,"r_2"] \arrow[ru,"r_1"] & & & X_4 \arrow[rd,"r_6"] \arrow[ru,"r_5"] & \\[-0.15in]
& X_3 & 2X_3 \arrow[ru,"r_4"] & & X_1 + X_3
\end{tikzcd}
\end{equation}
Each arrow (labeled $r_i$) corresponds to a reaction which converts the chemical species (labeled $X_j$) at the tail end into the species at the arrow end. The translation methods of \cite{J1,J2,Tonello2017,J-B2018,J-M-P} do not succeed in corresponding \eqref{example1} to a weakly reversible deficiency zero system, which would allow the construction of a monomial parametrization by classical theory \cite{H-J1,C-D-S-S,MR2014}. Despite this, it can be shown that, when modeled with mass-action kinetics, the steady state set of \eqref{example1} in fact has a monomial parametrization given by the following:
\begin{equation}
\label{parametrization}
\begin{aligned}
x_1 & = 2 \kappa_3 \kappa_4(\kappa_5+\kappa_6)\tau\\
x_2 & = \left[\kappa_4(2\kappa_1\kappa_5+\kappa_1\kappa_6+\kappa_2\kappa_5)\tau\right]^{1/2}\\
x_3 & = \left[\kappa_3(\kappa_1\kappa_6+\kappa_2\kappa_5+2\kappa_2\kappa_6)\tau\right]^{1/2}\\
x_4 & = 2\kappa_3\kappa_4(\kappa_1+\kappa_2)\tau
\end{aligned}
\end{equation}
where $\tau > 0$.
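This can be checked directly: solving the four steady-state balance equations of the mass-action system for \eqref{example1} produces a one-parameter family of positive steady states, which the sketch below verifies for randomly chosen positive rate constants (the coefficients are obtained from the balance equations themselves).

```python
import random

random.seed(1)
k1, k2, k3, k4, k5, k6 = (random.uniform(0.5, 2.0) for _ in range(6))
tau = 0.7   # free parameter tracing out the steady state curve

# A positive steady state obtained by solving the balance equations:
x1 = 2 * k3 * k4 * (k5 + k6) * tau
x4 = 2 * k3 * k4 * (k1 + k2) * tau
x2 = (k4 * (2 * k1 * k5 + k1 * k6 + k2 * k5) * tau) ** 0.5
x3 = (k3 * (k1 * k6 + k2 * k5 + 2 * k2 * k6) * tau) ** 0.5

# Right-hand side of the mass-action differential equations for the network:
f1 = -(k1 + k2) * x1 + (k5 + k6) * x4
f2 = k1 * x1 - 2 * k3 * x2**2 + k5 * x4
f3 = k2 * x1 - 2 * k4 * x3**2 + k6 * x4
f4 = k3 * x2**2 + k4 * x3**2 - (k5 + k6) * x4

assert all(abs(f) < 1e-9 for f in (f1, f2, f3, f4))
```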
The computational method of \cite{C-F-M-W2016} can be used to establish multistationarity; that is, there is a set of parameter values for which there are two stoichiometrically-compatible positive steady states.
In this paper, we extend the notion of network translation to allow \emph{split network translation}. In a split network translation, we allow each reaction to appear \emph{multiple times} in the same network provided that the total stoichiometric change of each reaction is preserved. We use this technique to correspond \eqref{example1} to the following generalized chemical reaction network:
\begin{equation} \label{example2}
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} 2X_1 \\ (X_1) \end{array}$}} \arrow[r,bend left = 10,"r_1"] \arrow[d,bend left = 10,"r_2"] & \mbox{\ovalbox{$\begin{array}{c} X_1+X_2 \\ (2X_2) \end{array}$}} \arrow[l,bend left = 10,"r_3"] \arrow[d,bend left = 10,"r_3"]\\
\mbox{\ovalbox{$\begin{array}{c} X_1+X_3 \\ (2X_3) \end{array}$}} \arrow[r,bend left = 10,"r_4"] \arrow[u,bend left = 10,"r_4"] & \mbox{\ovalbox{$\begin{array}{c} X_4 \\ (X_4) \end{array}$}} \arrow[l,bend left = 10,"r_6"] \arrow[u,bend left = 10,"r_5"]
\end{tikzcd}
\end{equation}
where the \emph{stoichiometric complex} is denoted as the upper term in each box and the \emph{kinetic-order complex} is denoted as the bracketed lower term in each box.
Notice that $r_3$ and $r_4$ appear multiple times in \eqref{example2} which is not allowed by standard network translation. This generalization extends the theory and application of network translation and allows us to show that the set of positive steady states of the mass-action system corresponding to \eqref{example1} has the parametrization \eqref{parametrization}.
In addition to developing the theory of network translation in this important direction, we provide a computational algorithm utilizing mixed-integer linear programming for corresponding a given chemical reaction network to a weakly reversible split network translation. Unlike the computational method of \cite{J2}, the method presented here does not depend on knowledge of the original network's rate parameters or the stoichiometry of the translated complexes, and unlike the methods of \cite{Tonello2017,J-B2018}, the algorithm does not utilize the network's elementary modes.
The paper is organized as follows. In Section \ref{sec:background}, we introduce the mathematical background for chemical reaction networks, mass-action systems, their generalized counterparts, and network translation. In Section \ref{sec:main}, we present the notion of a split network translation and a computational program utilizing mixed-integer linear programming which can be used to determine whether a given chemical reaction network admits a weakly reversible split network translation. In Section \ref{sec:examples}, we present several examples which demonstrate how split network translation extends the current application of network-based theory for analyzing mass-action systems. Finally, in Section \ref{sec:conclusions}, we summarize the paper and present some open questions for further research.
\section{Mathematical Background}
\label{sec:background}
In this section, we outline the mathematical background necessary to understand generalized chemical reaction networks, generalized mass-action systems, and network translations. We note that classical chemical reaction networks and mass-action systems, which are utilized extensively in industrial and biochemical systems, may be considered as special cases.
\subsection{Generalized Chemical Reaction Networks}
\label{sec:chemical reaction network}
A directed multigraph is given by $G = (V,E,\rho,\pi)$, where $V$ is the vertex set, $E$ is the edge set, $\rho: E \mapsto V$ is the source mapping, and $ \pi: E \mapsto V$ is the target mapping. We assume throughout that both $V$ and $E$ are finite.
When representing multigraphs graphically, we will represent edges $k \in E$ as directed arrows of the form $\begin{tikzcd} i \arrow[r,"r_{k}"] & j\end{tikzcd}$ where $i,j \in V$, $\rho(k)=i$, $\pi(k)=j$, and $r_k$ is the edge label. For simplicity, distinct edges which connect the same vertices will be represented as a single arrow with multiple labels, i.e. if $\rho(k') = \rho(k'')$ and $\pi(k') = \pi(k'')$ for $k', k'' \in E$, then we use $\begin{tikzcd}i \arrow[r,"r_{k'} \& r_{k''}"] & j\end{tikzcd}$.
The following notion was introduced in \cite{MR2012,MR2014}.
\begin{dfn}
\label{generalized chemical reaction network}
A \emph{generalized chemical reaction network} on a directed multigraph $G = (V,E,\rho,\pi)$ is a triple $(G,y,y')$ where $y, y': V \mapsto \mathbb{R}^m$. The mapping $y$ is referred to as the \emph{stoichiometric mapping}, the mapping $y'$ is referred to as the \emph{kinetic-order mapping}, and the graph $G$ is referred to as the \emph{reaction graph}.
\end{dfn}
\begin{rem}
We extend upon the definition of a generalized chemical reaction network presented in \cite{MR2012,MR2014,J-M-P} by allowing the reaction graph $G$ to be a multigraph. This is more general than traditionally allowed in \emph{Chemical Reaction Network Theory} \cite{Feinberg1979} in two notable ways: (1) we allow \emph{self loops} (i.e. edges $k \in E$ with $\rho(k) = \pi(k)$); and (2) we allow \emph{multiple edges} to connect the same vertices (i.e. $r_{k'}$ and $r_{k''}$ with $\rho(k') = \rho(k'')$ and $\pi(k')=\pi(k'')$). This generalization will be necessary to define and utilize a split network translation (Definition \ref{def:splitting}).
\end{rem}
We interpret the mappings $y$ and $y'$ as representing linear combinations of species from the {\em species set} $\{X_1, \ldots, X_m\}$. For example, we interpret $y(i) = (1,0,1)$ as representing the combination $X_1 + X_3$, which could be an input or output for a given reaction. The linear combinations of species arising from $y$ are known as \emph{stoichiometric complexes} and those arising from $y'$ are known as \emph{kinetic-order complexes}.
Many aspects of the network topology of reaction graphs have been studied in the context of generalized chemical reaction networks \cite{H-J1,MR2012,MR2014}. To each edge $k \in E$ we associate a \emph{reaction vector} $y(\pi(k)) - y(\rho(k)) \in \mathbb{R}^m$. The span of the reaction vectors is known as the \emph{stoichiometric subspace} of the network:
\[S = \mbox{span}\{ y(\pi(k)) - y(\rho(k)) \; | \; k \in E \}.\]
The \emph{kinetic-order subspace} of a generalized chemical reaction network is defined similarly:
\[S' = \mbox{span}\{ y'(\pi(k)) - y'(\rho(k)) \; | \; k \in E \}.\]
Note that, since $y$ and $y'$ may be defined independently, the dimensions of $S$ and $S'$ may differ.
Two vertices of a reaction graph are said to be \emph{connected} if there is a sequence of undirected reactions which connect them. A set of connected vertices is called a \emph{linkage class}. Two complexes are said to be \emph{strongly connected} if the existence of a directed path from one complex to another implies the existence of a directed path back. A set of strongly connected complexes is called a \emph{strong linkage class}. A network is \emph{reversible} if a reaction from one complex to another implies the existence of the reverse reaction, and \emph{weakly reversible} if its linkage classes and strong linkage classes coincide. The \emph{stoichiometric deficiency} of a network is a nonnegative integer defined by the formula $\delta = n - \ell - \mbox{dim}(S)$ where $n$ is the number of vertices, $\ell$ is the number of linkage classes, and $S$ is the stoichiometric subspace. The \emph{kinetic-order deficiency} is defined similarly as $\delta' = n - \ell - \mbox{dim}(S')$. The deficiency was introduced by Feinberg and Horn in the papers \cite{Feinberg1972,H} in the context of studying complex-balanced mass-action systems \cite{H-J1}. The relationship between the deficiency and steady state properties of dynamical models of chemical reaction systems has been studied significantly since \cite{J1,MR2012,Feinberg1995-1,Feinberg1995-2,Feinberg1988,Feinberg1987}.
\begin{rem}
To incorporate the mappings $y$ and $y'$ into the vertices, we will represent each vertex as a box containing the two complexes (stoichiometric complex upper, kinetic-order complex lower and bracketed) \cite{Tonello2017,J-M-P}. Notice that the mappings $y$ and $y'$ are not required to be injective and consequently a single complex may be embedded in multiple vertices.
\end{rem}
\begin{exa}
\label{example7}
Consider the following generalized chemical reaction network:
\begin{equation} \label{example6}
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (X_1+X_2) \end{array}$}} \arrow[r,"r_1"] & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (2X_3) \end{array}$}} \arrow[r,"r_2 \& r_3"] & \mbox{\ovalbox{$\begin{array}{c} 3 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_3 \\ (X_1+X_2) \end{array}$}} \arrow[ll,bend left = 25,"r_5"]\arrow[loop right,"r_4"]\\
\end{tikzcd}
\end{equation}
where each vertex is represented with a box with the index on the left and the stoichiometric (upper) and kinetic-order (lower, bracketed) complex on the right. We have the multigraph $G = (V,E,\rho,\pi)$ with $V = \{ 1, 2, 3\}$, $E = \{ 1, 2, 3, 4, 5 \}$, $\rho(1) = 1$, $\rho(2) = 2$, $\rho(3) = 2$, $\rho(4) = 3$, $\rho(5) = 3$, $\pi(1) = 2$, $\pi(2) = 3$, $\pi(3) = 3$, $\pi(4) = 3$, and $\pi(5) = 1$. Note that the edges $r_2$ and $r_3$ both correspond to $2 \to 3$, which for simplicity we represent as a single arrow with multiple labels. We also have the self-loop $3 \to 3$. We have the mappings $y$ and $y'$ with $y(1) = (1,0,0)$, $y(2) = (0,1,0)$, $y(3) = (0,0,1)$, $y'(1) = (1,1,0)$, $y'(2) = (0,0,2)$, and $y'(3) = (1,1,0)$. The network has one linkage class ($\ell = 1$), is not reversible, but is weakly reversible. Note that the kinetic-order complex $X_1 + X_2$ is embedded in vertices $1$ and $3$ so that $y'$ is not injective. The stoichiometric subspace is given by $S = \mbox{span} \{ (-1,1,0), (0,-1,1) \}$ and the kinetic-order subspace is given by $S' = \mbox{span} \{ (-1,-1,2) \}$ so that $\mbox{dim}(S) = 2$ and $\mbox{dim}(S') = 1$. We compute that $\delta = 3 - 1 - 2 = 0$ and $\delta' = 3 - 1 - 1 = 1$. \hfill $\square$
\end{exa}
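The subspace dimensions and deficiencies in this example can be checked mechanically from the edge list of the reaction graph. The sketch below is pure standard-library Python (a small Gaussian-elimination rank routine stands in for a linear-algebra package) and reproduces $\delta = 0$ and $\delta' = 1$.

```python
def rank(rows):
    """Rank of a small integer matrix via Gaussian elimination."""
    m = [list(map(float, row)) for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > 1e-9:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Vertices 1, 2, 3 with stoichiometric complexes X1, X2, X3 and
# kinetic-order complexes X1+X2, 2X3, X1+X2; edges r1..r5 as in the example.
y = {1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
yp = {1: (1, 1, 0), 2: (0, 0, 2), 3: (1, 1, 0)}
edges = [(1, 2), (2, 3), (2, 3), (3, 3), (3, 1)]

S = [[a - b for a, b in zip(y[j], y[i])] for i, j in edges]
Sp = [[a - b for a, b in zip(yp[j], yp[i])] for i, j in edges]

n, ell = 3, 1
delta, deltap = n - ell - rank(S), n - ell - rank(Sp)
print(delta, deltap)   # 0 1
```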
\subsection{Generalized Mass-Action Systems}
\label{sec:gmas}
To a given generalized chemical reaction network $(G,y,y')$ with reaction graph $G = (V, E, \rho, \pi)$, we associate a system of ordinary differential equations where the rate of each reaction is proportional to the product of the chemical concentrations of the reactant species in the kinetic-order complex. For example, a reaction from the kinetic-order complex $X_i + X_j$ would have rate $\kappa x_i x_j$. This assumption was first made in \cite{MR2012} and is a generalization of mass-action kinetics \cite{M-M} inspired heavily by power-law kinetics \cite{Sa}.
Given a vector of chemical concentrations $\mathbf{x} = (x_1, \ldots, x_m) \in \mathbb{R}_{\geq 0}^m$ and a vector of rate constants $\kappa = (\kappa_1, \ldots, \kappa_{|E|}) \in \mathbb{R}_{\geq 0}^{|E|}$, we have the \emph{generalized mass-action system}
\begin{equation}
\label{gmas}
\frac{d\mathbf{x}}{dt} = \sum_{k \in E} \kappa_k (y(\pi(k)) - y(\rho(k))) \; \mathbf{x}^{y'(\rho(k))}
\end{equation}
where we use the convention that, for $\mathbf{x}, \mathbf{y} \in \mathbb{R}^m$, $\mathbf{x}^{\mathbf{y}} = \prod_{j=1}^m x_j^{y_j}$.
\begin{exa}
Consider the generalized chemical reaction network \eqref{example6} given in Example \ref{example7}. With the rate constant vector $(\kappa_1, \kappa_2, \kappa_3, \kappa_4, \kappa_5) \in \mathbb{R}^5_{> 0}$, we have the generalized mass-action system
\[
\left( \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right) = \kappa_1 \left( \begin{array}{c} -1 \\ 1 \\ 0 \end{array} \right) x_1 x_2 + (\kappa_2 + \kappa_3) \left( \begin{array}{c} 0 \\ -1 \\ 1 \end{array} \right) x_3^2 + \kappa_5 \left( \begin{array}{c} 1 \\ 0 \\ -1 \end{array} \right) x_1 x_2
\]
in the chemical concentrations $x_1, x_2,$ and $x_3$. Notice that the duplicated edge $2 \to 3$ contributes two rate constants ($\kappa_2$ and $\kappa_3$) and the self-loop $3 \to 3$ does not contribute any ($\kappa_4$ does not appear) since the corresponding reaction vector is $y(\pi(4)) - y(\rho(4)) = (0,0,0)$. \hfill $\square$
\end{exa}
\subsection{Chemical Reaction Networks}
The following concept can be seen as a special case of generalized chemical reaction networks.
\begin{dfn}
Consider a generalized chemical reaction network $(G,y,y')$ with multigraph $G = (V,E,\rho,\pi)$ and mappings $y,y': V \mapsto \mathbb{R}^m$. The generalized chemical reaction network is a \emph{chemical reaction network} if $y = y'$ and $y$ is injective. Chemical reaction networks will be denoted by $(G,y)$.
\end{dfn}
For chemical reaction networks, it is unnecessary to distinguish between stoichiometric and kinetic-order complexes, subspaces, or deficiencies. Consequently, we only speak of complexes, the stoichiometric subspace ($S$), and the deficiency ($\delta$). The reaction graph may furthermore be simplified since the vertices are in one-to-one correspondence with the complexes. The corresponding ordinary differential equation model is a \emph{mass-action system} given by
\begin{equation}
\label{mas}
\frac{d\mathbf{x}}{dt} = \sum_{k \in E} \kappa_k (y(\pi(k)) - y(\rho(k))) \mathbf{x}^{y(\rho(k))}.
\end{equation}
Mass-action systems are frequently used to model systems drawn from industrial processes \cite{E-T,Sharma2000} and systems biology \cite{Ingalls,Alon2007}.
\begin{exa}
Consider the following chemical reaction network, which is derived from the classical Lotka-Volterra system in population dynamics \cite{Lotka,Volterra}:
\begin{equation} \label{lv}
\begin{tikzcd}
X_1 \arrow[r,"r_1"] & 2X_1, & X_1+X_2 \arrow[r,"r_2"] & 2X_2, &
X_2 \arrow[r,"r_3"] & \O. \\[-0.2in]
\end{tikzcd}
\end{equation}
Since each vertex is assigned a unique complex by the injective mapping $y$, we allow the complexes (e.g. $X_1$, $2X_1$, etc.) to identify the corresponding vertices. The network has $2$ species, $6$ complexes, $3$ reactions, $3$ linkage classes, and a $2$-dimensional stoichiometric subspace. The network is neither reversible nor weakly reversible and has a deficiency of $\delta = 6 - 3 - 2 = 1$. The mass-action system \eqref{mas} corresponding to \eqref{lv} is given by
\[
\left( \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \end{array} \right) = \kappa_1 \left( \begin{array}{c} 1 \\ 0 \end{array} \right) x_1 + \kappa_2 \left( \begin{array}{c} -1 \\ 1\end{array} \right) x_1x_2 + \kappa_3 \left( \begin{array}{c} 0 \\ -1 \end{array} \right) x_2.
\]
\end{exa}
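Setting $\dot{x}_1 = \dot{x}_2 = 0$ with positive concentrations gives the well-known interior equilibrium $x_1 = \kappa_3/\kappa_2$ and $x_2 = \kappa_1/\kappa_2$, which is easy to verify numerically (the rate constants below are arbitrary illustrative choices):

```python
# Right-hand side of the Lotka-Volterra mass-action system.
def rhs(x1, x2, k1, k2, k3):
    return (k1 * x1 - k2 * x1 * x2,   # X1 -> 2X1 feeds x1; X1+X2 -> 2X2 drains it
            k2 * x1 * x2 - k3 * x2)   # X1+X2 -> 2X2 feeds x2; X2 -> 0 drains it

k1, k2, k3 = 1.5, 0.5, 0.75
x1_star, x2_star = k3 / k2, k1 / k2   # interior equilibrium (k3/k2, k1/k2)
f1, f2 = rhs(x1_star, x2_star, k1, k2, k3)
assert abs(f1) < 1e-12 and abs(f2) < 1e-12
```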
\begin{exa}
Consider the chemical reaction network \eqref{example1} given in Section \ref{sec:intro}. The network has $4$ species, $8$ complexes, $6$ reactions, $2$ linkage classes, and a $3$-dimensional stoichiometric subspace. It is not reversible or weakly reversible and has a deficiency of $\delta = 8 - 2 - 3 = 3$. The mass-action system \eqref{mas} corresponding to \eqref{example1} is given by
\small
\begin{equation}
\label{mas1}
\left( \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{array} \right) = \kappa_1 \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \end{array} \right) x_1 + \kappa_2 \left( \begin{array}{c} -1 \\0 \\ 1 \\ 0\end{array} \right) x_1 + \kappa_3 \left( \begin{array}{c} 0 \\ -2 \\ 0 \\ 1\end{array} \right) x_2^2 + \kappa_4 \left( \begin{array}{c} 0 \\ 0 \\ -2 \\ 1\end{array} \right) x_3^2 + \kappa_5 \left( \begin{array}{c} 1 \\ 1 \\ 0 \\ -1\end{array} \right) x_4 + \kappa_6 \left( \begin{array}{c} 1 \\ 0 \\ 1 \\ -1 \end{array} \right) x_4.
\end{equation}
\normalsize
\end{exa}
\subsection{Translated Chemical Reaction Networks}
\label{sec:tchemical reaction network}
The following construction was introduced in \cite{J1} as a method for relating chemical reaction networks to generalized chemical reaction networks with different network structure.
\begin{dfn}
\label{def:translation}
Consider a chemical reaction network $(G,y)$ with directed multigraph $G=(V,E, \rho, \pi)$. A generalized chemical reaction network $(\tilde G,\tilde y,\tilde y')$ with directed multigraph $\tilde G=(\tilde V, \tilde E, \tilde \rho, \tilde \pi)$ is a \emph{translation} of $(G,y)$ if there exists a bijective mapping $\alpha: E \mapsto \tilde E$ such that:
\begin{enumerate}
\item[(a)]
for all $k', k'' \in E$, $\rho(k') = \rho(k'')$ implies $\tilde \rho(\alpha(k')) = \tilde \rho(\alpha(k''))$;
\item[(b)]
for all $k \in E$, $\tilde y'(\tilde \rho(\alpha(k))) = y(\rho(k))$; and
\item[(c)]
for all $k \in E$, $\tilde y(\tilde \pi(\alpha(k)))-\tilde y(\tilde \rho(\alpha(k))) = y(\pi(k)) - y(\rho(k))$.
\end{enumerate}
\end{dfn}
\begin{lem}[Lemma 16, \cite{J-M-P}]
\label{lemma16}
Let $(G,y)$ be a chemical reaction network with reaction graph $G = (V,E,\rho,\pi)$ and let the generalized chemical reaction network $(\tilde G, \tilde y, \tilde y')$ with reaction graph $\tilde G = (\tilde V, \tilde E, \tilde \rho, \tilde \pi)$ be a translation of $(G,y)$. Then the mass-action system \eqref{mas} corresponding to $(G,y)$ and the generalized mass-action system \eqref{gmas} corresponding to $(\tilde G, \tilde y, \tilde y')$ are dynamically equivalent.
\end{lem}
\noindent Network translation allows chemical reaction networks to be related to generalized chemical reaction networks with potentially superior network structure, such as weak reversibility and a low deficiency. Computational methods for finding network translations have been developed \cite{J2,Tonello2017,J-B2018}.
Network translation is commonly visualized by adding or subtracting species to both sides of a reaction in order to form new connections in the reaction graph. This process does not change the stoichiometric difference across any reaction edge, so it satisfies Condition (c) of Definition \ref{def:translation}, and we can satisfy Condition (b) of Definition \ref{def:translation} by allowing the source complex in the original chemical reaction network to become the kinetic-order complex of the translation.
Consider the following example.
\begin{exa}
Reconsider the Lotka-Volterra system \eqref{lv} and the associated translation scheme:
\begin{equation} \label{translation11}
\begin{tikzcd}
X_1 \arrow[r,"r_1"] & 2X_1 & & (-X_1) \\[-0.2in]
X_1+X_2 \arrow[r,"r_2"] & 2X_2 & & (-X_2) \\[-0.2in]
X_2 \arrow[r,"r_3"] & \O & & (\O) \\[-0.2in]
\end{tikzcd}
\end{equation}
This results in the following network translation:
\begin{equation} \label{example432}
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} \O \\ (X_1) \end{array}$}} \arrow[rr,"r_1"] & & \mbox{\ovalbox{$\begin{array}{c} X_1 \\ (X_1 + X_2) \end{array}$}} \arrow[ld,"r_2"'] \\[-0.15in]
& \mbox{\ovalbox{$\begin{array}{c} X_2 \\ (X_2) \end{array}$}} \arrow[lu,"r_3"']&
\end{tikzcd}
\end{equation}
The mass-action system \eqref{mas} corresponding to \eqref{lv} and the generalized mass-action system \eqref{gmas} corresponding to \eqref{example432} are identical. Notice that the network translation \eqref{example432} is weakly reversible and has a stoichiometric and kinetic-order deficiency of zero while the original network \eqref{lv} is not weakly reversible and has a deficiency of one.
\end{exa}
\section{Main Results}
\label{sec:main}
In this section, we introduce the notion of a \emph{split network translation} of a chemical reaction network and show how this concept may be used to expand the scope of mass-action systems which can be analyzed through network translation. We also present a computational algorithm which corresponds a given chemical reaction network to a weakly reversible split network translation.
\subsection{Split Network Translation}
\label{sec:splitting}
The following notion extends network translation (Definition \ref{def:translation}) and is the primary new concept introduced in this paper.
\begin{dfn}
\label{def:splitting}
Consider a chemical reaction network $(G,y)$ with directed multigraph $G=(V,E, \rho, \pi)$. Also consider a family of generalized chemical reaction networks $(\tilde G^{(l)},\tilde y, \tilde y')$, $l \in Q$, where $Q = \{ 1, \ldots, q\}$, with directed multigraphs $\tilde G^{(l)} = (\tilde V, \tilde E^{(l)}, \tilde \rho^{(l)}, \tilde \pi^{(l)})$, $l \in Q$, and let $(\tilde G, \tilde y, \tilde y')$ be a generalized chemical reaction network with directed multigraph $\tilde G = (\tilde V, \tilde E, \tilde \rho, \tilde \pi)$ where $\displaystyle{\tilde E = \tilde E^{(1)} \cup \cdots \cup \tilde E^{(q)}}$ and $\tilde E^{(i)} \cap \tilde E^{(j)} = \emptyset$ for $i, j \in Q, i \not= j$.
Then $(\tilde G, \tilde y,\tilde y')$ is a \emph{split network translation} of $(G, y)$ if there is a family of bijective mappings $\alpha^{(l)}: E \mapsto \tilde E^{(l)}$, $l \in Q$, such that:
\begin{enumerate}
\item[(a)]
for all $k \in E$ and $l', l'' \in Q$, $\tilde \rho^{(l')}(\alpha^{(l')}(k)) = \tilde \rho^{(l'')}(\alpha^{(l'')}(k))$ so that there is a uniform source mapping $\beta: E \mapsto \tilde V$ given by $\beta(k) := \tilde \rho^{(l)}(\alpha^{(l)}(k))$, $l \in Q$;
\item[(b)]
for all $k', k'' \in E$, $\rho(k') = \rho(k'')$ implies $\beta(k') = \beta(k'')$;
\item[(c)]
for all $k \in E$, $\tilde y'(\beta(k)) = y(\rho(k))$; and
\item[(d)]
for all $k \in E$, $\displaystyle{\sum_{l \in Q} \left(\tilde y(\tilde \pi(\alpha^{(l)}(k)))-\tilde y( \beta(k))\right) = y(\pi(k)) - y(\rho(k))}$.
\end{enumerate}
\end{dfn}
We have the following extension of Lemma \ref{lemma16}.
\begin{thm}
\label{dynamicalequivalence}
Consider a chemical reaction network $(G,y)$ with reaction graph $G = (V,E, \rho, \pi)$. Suppose that $(G,y)$ has a split network translation $(\tilde G, \tilde y, \tilde y')$ with reaction graph $\tilde G = (\tilde V, \tilde E, \tilde \rho, \tilde \pi)$ and slices $(\tilde G^{(l)}, \tilde y, \tilde y')$ where $\tilde G^{(l)} = (\tilde V, \tilde E^{(l)}, \tilde \rho^{(l)}, \tilde \pi^{(l)})$, $l \in Q$.
Then the generalized mass-action system \eqref{gmas} corresponding to $(\tilde G, \tilde y, \tilde y')$ and the mass-action system \eqref{mas} corresponding to $(G,y)$ are dynamically equivalent.
\end{thm}
\begin{proof}
Consider a chemical reaction network $(G,y)$ with directed multigraph $G = (V,E, \rho, \pi)$ and a generalized chemical reaction network $(\tilde G, \tilde y, \tilde y')$ with directed multigraph $\tilde G = (\tilde V, \tilde E, \tilde \rho, \tilde \pi)$. Suppose that $(\tilde G, \tilde y, \tilde y')$ is a split network translation of $(G,y)$ according to Definition \ref{def:splitting} with slices $(\tilde G^{(l)}, \tilde y, \tilde y')$ where $\tilde G^{(l)} = (\tilde V, \tilde E^{(l)}, \tilde \rho^{(l)}, \tilde \pi^{(l)})$, $l \in Q$.
The mass-action system \eqref{mas} corresponding to $(G,y)$ can be written
\[\begin{aligned}
\frac{d\mathbf{x}}{dt} & = \sum_{k \in E} \kappa_k \left( y(\pi(k)) - y(\rho(k))\right) \; \mathbf{x}^{y(\rho(k))} \\
& = \sum_{k \in E} \sum_{l \in Q} \kappa_k \left( \tilde y (\tilde \pi^{(l)}(\alpha^{(l)}(k))) - \tilde y (\beta(k)) \right) \mathbf{x}^{y(\rho(k))}
\end{aligned}\]
by Conditions (a) and (d) of Definition \ref{def:splitting}.
This corresponds to the generalized mass-action system \eqref{gmas} for the generalized chemical reaction network with reactions of the following form:
\[
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} \tilde y(\beta(k)) \\[0.05in] \left(y(\rho(k)) \right) \end{array}$}} \arrow[r,"r_k"] & \mbox{\ovalbox{$\begin{array}{c} \tilde y(\tilde \pi^{(l)}(\alpha^{(l)}(k))) \\[0.05in] (-) \end{array}$}}.
\end{tikzcd}
\]
By Condition (c) we have $\tilde y'(\beta(k)) = y(\rho(k))$, so that the monomial $\mathbf{x}^{y(\rho(k))}$ is exactly the one determined by the kinetic-order complex of the source vertex $\beta(k)$, and we are done.
\end{proof}
A split network translation (Definition \ref{def:splitting}) captures many of the features of network translation (Definition \ref{def:translation}). We require that edges with the same source be mapped to edges with the same source in the translation (Condition (b)) and that the kinetic complex in the translation be derived from the sources of the stoichiometric mapping which are translated to it (Condition (c)). In a split network translation, however, we allow there to be $q \in \mathbb{Z}_{>0}$ copies of the reactions of a chemical reaction network so long as the source of each reaction is the same in each slice (Condition (a)), and the network is structured so that the stoichiometric change is preserved across the union of all the individual slices (Condition (d)). Note that when $q = 1$ (i.e. there is only one slice), Condition (a) of Definition \ref{def:splitting} is trivially satisfied, and the remaining conditions coincide with those of Definition \ref{def:translation}.
Consider the following example.
\begin{exa}
\label{example34}
Consider the following chemical reaction network:
\begin{equation} \label{example3}
\begin{tikzcd}
\ovalbox{$\; 1 \; \Big\lvert \; 2X_1 \;$} \arrow[r,"r_1"] & \ovalbox{$\; 2 \; \Big\lvert \; X_2 \;$} \arrow[r,"r_2"] & \ovalbox{$\; 3 \; \Big\lvert \; \O \;$}
\end{tikzcd}
\end{equation}
This corresponds to the chemical reaction network $(G,y)$ on the reaction graph $G = (V,E,\rho,\pi)$ where $V = \{ 1, 2, 3 \}$, $E = \{ 1, 2 \}$, $\rho(1) = 1$, $\rho(2) = 2$, $\pi(1) = 2$, $\pi(2) = 3$, $y(1) = (2,0)$, $y(2) = (0,1)$, and $y(3) = (0,0)$. Furthermore, we have the reaction vectors $y(\pi(1)) - y(\rho(1)) = (-2,1)$ and $y(\pi(2)) - y(\rho(2)) = (0,-1)$.
Now consider the following generalized chemical reaction networks:
\begin{equation}
\label{slices}
\begin{tikzcd}
(\tilde G^{(1)}, \tilde y, \tilde y'): & \mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (2X_1) \end{array}$}} \arrow[rr,"r_1^{(1)}"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$}} \arrow[loop right,"r_2^{(1)}"] \\
& & \mbox{\ovalbox{$\begin{array}{c} 3 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} \O \\ (\O) \end{array}$}} & \\
(\tilde G^{(2)}, \tilde y, \tilde y'): & \mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (2X_1) \end{array}$}} \arrow[rd,"r_1^{(2)}"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$}} \arrow[ld,"r_2^{(2)}"]\\
& & \mbox{\ovalbox{$\begin{array}{c} 3 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} \O \\ (\O) \end{array}$}} &
\end{tikzcd}
\end{equation}
and
\begin{equation} \label{example345}
\begin{tikzcd}
(\tilde G, \tilde y, \tilde y'): & \mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (2X_1) \end{array}$}} \arrow[rr,"r_1^{(1)}"] \arrow[rd,"r_1^{(2)}"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$}} \arrow[ld,"r_2^{(2)}"] \arrow[loop right,"r_2^{(1)}"] \\
& & \mbox{\ovalbox{$\begin{array}{c} 3 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} \O \\ (\O) \end{array}$}} & \\
\end{tikzcd}
\end{equation}
We have the reaction graphs $\tilde G^{(1)} = (\tilde V, \tilde E^{(1)}, \tilde \rho^{(1)}, \tilde \pi^{(1)})$, $\tilde G^{(2)} = (\tilde V, \tilde E^{(2)}, \tilde \rho^{(2)}, \tilde \pi^{(2)})$, $\tilde G = (\tilde V, \tilde E, \tilde \rho, \tilde \pi)$ with $\tilde V = \{ 1, 2, 3 \}$, $\tilde E^{(1)} = \{ 1^{(1)}, 2^{(1)} \}$, $\tilde E^{(2)} = \{ 1^{(2)}, 2^{(2)} \}$, $\tilde E = \tilde E^{(1)} \cup \tilde E^{(2)}$, $\tilde \rho^{(1)}(1^{(1)}) = 1$, $\tilde \rho^{(1)}(2^{(1)}) = 2$, $\tilde \pi^{(1)}(1^{(1)}) = 2$, $\tilde \pi^{(1)}(2^{(1)}) = 2$, $\tilde \rho^{(2)}(1^{(2)}) = 1$, $\tilde \rho^{(2)}(2^{(2)}) = 2$, $\tilde \pi^{(2)}(1^{(2)}) = 3$, $\tilde \pi^{(2)}(2^{(2)}) = 3$, $\tilde \rho = \tilde \rho^{(1)} \cup \tilde \rho^{(2)}$, and $\tilde \pi = \tilde \pi^{(1)} \cup \tilde \pi^{(2)}$, and the stoichiometric and kinetic-order mappings $\tilde y(1) = (1,0)$, $\tilde y(2) = (0,1)$, $\tilde y(3) = (0,0)$, $\tilde y'(1) = (2,0)$, $\tilde y'(2) = (0,1)$, and $\tilde y'(3) = (0,0)$.
The networks in \eqref{slices} represent \emph{slices} of \eqref{example345} (Definition \ref{def:splitting}) with the mappings $\alpha^{(i)}: E \mapsto \tilde E^{(i)}$ given by $\alpha^{(i)}(j) = j^{(i)}$. Notice that: (i) there is a copy of each reaction on each slice (i.e. the mappings $\alpha^{(i)}$ are bijective); (ii) each reaction has the same source in every slice (Condition (a) is satisfied); (iii) reactions with the same sources in the original network \eqref{example3} trivially have the same sources in the split network translation \eqref{example345} (i.e. Condition (b) is satisfied); and (iv) the source complex of each reaction in \eqref{example3} is the kinetic complex in the split network translation \eqref{example345} (i.e. Condition (c) is satisfied).
To check Condition (d), notice that summing the reaction vectors corresponding to $r_1$ across the two slices in \eqref{slices} gives
\[ \left( \tilde y(2) - \tilde y(1) \right) + \left( \tilde y(3) - \tilde y(1) \right) = (-1,1) + (-1,0) = (-2,1) = y(2) - y(1)\]
and summing the reaction vectors corresponding to $r_2$ gives
\[\left( \tilde y(2) - \tilde y(2) \right) + \left( \tilde y(3) - \tilde y(2) \right) = (0,0) + (0,-1) = (0,-1) = y(3) - y(2).\]
It follows that \eqref{example345} satisfies Condition (d) of Definition \ref{def:splitting} and is therefore a split network translation of \eqref{example3}.
Notice that the generalized mass-action system \eqref{gmas} corresponding to \eqref{example345} is given by:
\[
\small
\left( \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \end{array} \right) = \kappa_1 \left( \left( \begin{array}{c} -1 \\ 1 \end{array} \right) + \left( \begin{array}{c} -1 \\ 0 \end{array}\right) \right) x_1^2 + \kappa_2 \left( \left( \begin{array}{c} 0 \\ 0 \end{array} \right) + \left( \begin{array}{c} 0 \\ -1 \end{array} \right) \right) x_2 = \kappa_1 \left( \begin{array}{c} -2 \\ 1 \end{array} \right) x_1^2 + \kappa_2 \left( \begin{array}{c} 0 \\ -1 \end{array} \right) x_2.
\]
This coincides with the mass-action system \eqref{mas} corresponding to \eqref{example3} so that Theorem \ref{dynamicalequivalence} is satisfied. \hfill $\square$
\end{exa}
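The dynamical equivalence exhibited in Example \ref{example34} can also be confirmed numerically. The following Python sketch is illustrative only (the rate constants and evaluation points are arbitrary choices of ours, not taken from the text): it evaluates the mass-action right-hand side for \eqref{example3} and the slice-summed generalized mass-action right-hand side for \eqref{example345}, and checks that they agree.

```python
# Numeric check of dynamical equivalence for Example 3.4.
# All numeric values are arbitrary illustrative choices.

def mas_rhs(x, k1, k2):
    """Mass-action RHS of 2X1 -> X2, X2 -> 0 (the original network)."""
    x1, x2 = x
    return (-2.0 * k1 * x1**2, k1 * x1**2 - k2 * x2)

def gmas_rhs(x, k1, k2):
    """Generalized mass-action RHS assembled from the two slices."""
    x1, x2 = x
    y = {1: (1.0, 0.0), 2: (0.0, 1.0), 3: (0.0, 0.0)}  # stoichiometric complexes
    # Reaction vectors summed over the slices (Condition (d)):
    v1 = tuple((y[2][i] - y[1][i]) + (y[3][i] - y[1][i]) for i in range(2))
    v2 = tuple((y[2][i] - y[2][i]) + (y[3][i] - y[2][i]) for i in range(2))
    m1, m2 = x1**2, x2   # monomials from the kinetic complexes 2X1 and X2
    return (k1 * v1[0] * m1 + k2 * v2[0] * m2,
            k1 * v1[1] * m1 + k2 * v2[1] * m2)

for x in [(0.5, 2.0), (1.3, 0.7), (1.0, 1.0)]:
    a, b = mas_rhs(x, 1.7, 0.4), gmas_rhs(x, 1.7, 0.4)
    assert all(abs(ai - bi) < 1e-12 for ai, bi in zip(a, b))
```

The two right-hand sides agree at every test point, as Theorem \ref{dynamicalequivalence} predicts.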
\begin{rem}
In general, we will represent split network translations without self-loops or superscripts. For example, we represent the generalized chemical reaction network \eqref{example345} as:
\[
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (2X_1) \end{array}$}} \arrow[rr,"r_1"] \arrow[rd,"r_1"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$}} \arrow[ld,"r_2"] \\
& \mbox{\ovalbox{$\begin{array}{c} 3 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} \O \\ (\O) \end{array}$}} & \\
\end{tikzcd}
\]
\end{rem}
\begin{rem}
As with traditional network translation, when a split network translation is weakly reversible we may use known network-based results to understand important properties of the underlying generalized mass-action system of the split network translation, and use Theorem \ref{dynamicalequivalence} to extend the result to the mass-action system corresponding to the original network. In particular, we can determine whether the steady state set admits a monomial or rational parametrization according to \cite{M-D-S-C,C-D-S-S,MR2014,J-M-P} and then establish the capacity for multistationarity by the results of \cite{M-D-S-C,C-F-M-W2016}.
\end{rem}
\subsection{Computational Implementation}
\label{sec:implementation}
In general, it is challenging to know whether a given chemical reaction network can be corresponded to a network translation (Definition \ref{def:translation}) or split network translation (Definition \ref{def:splitting}) with a desired structural property (e.g. weak reversibility, low deficiency).
Consequently, computational research has been conducted on developing algorithms and implementations that can find network translations (Definition \ref{def:translation}).
When extending to \emph{split} network translations, we note that the algorithms of \cite{J-S4} and \cite{J2} require a given set of potential stoichiometric complexes, which is generally infeasible for networks drawn from realistic biochemical interactions. By contrast, the methods of \cite{Tonello2017} and \cite{J-B2018} utilize the original network's elementary modes by turning them into cycles in the translation. Since splitting reactions does not preserve elementary modes, however, these methods do not readily extend to split network translations. Consequently, we instead develop a computational method which does not require a known stoichiometric complex set and which does not utilize elementary modes.
We now outline a mixed-integer linear programming framework capable of establishing whether a given chemical reaction network $(G,y)$ can be corresponded to a weakly reversible generalized chemical reaction network $(\tilde G, \tilde y, \tilde y')$ which is dynamically equivalent through split network translation (Definition \ref{def:splitting} and Theorem \ref{dynamicalequivalence}). We recall that a mixed-integer linear program can be stated in the following standard form:
\begin{equation}
\label{milp}
\begin{aligned}
& \mbox{min } \mathbf{c} \cdot \mathbf{x} \\
\mbox{subject to } & \left\{ \begin{array}{l} A_1 \mathbf{x} = \mathbf{b}_1 \\ A_2 \mathbf{x} \leq \mathbf{b}_2 \\ x_i \mbox{ is an integer for } i \in I \end{array} \right.
\end{aligned}
\end{equation}
where $\mathbf{x} \in \mathbb{R}^n$ is the vector of decision variables, $I \subseteq \{ 1, \ldots, n \}$, and $\mathbf{c} \in \mathbb{R}^n$, $\mathbf{b}_1 \in \mathbb{R}^{p_1}$, $\mathbf{b}_2 \in \mathbb{R}^{p_2}$, $A_1 \in \mathbb{R}^{p_1 \times n}$, and $A_2 \in \mathbb{R}^{p_2 \times n}$ are vectors and matrices of parameters. When \eqref{milp} contains integer-valued decision variables (i.e. $I \not= \emptyset$), the problem is NP-hard \cite{Sz2}.
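To make the standard form \eqref{milp} concrete, the following small sketch solves a toy instance by exhaustive search, so that no solver library is needed; the objective and constraints are invented for illustration and are unrelated to the translation problem.

```python
# Toy instance of the MILP standard form: minimize c.x subject to
# A1 x = b1 (here x1 + x2 = 5), A2 x <= b2 (here x1 <= 3), x integer.
# Solved by exhaustive search so the sketch needs no solver library.
from itertools import product

c = (1.0, 2.0)
best_val, best_x = None, None
for x1, x2 in product(range(6), repeat=2):
    if x1 + x2 == 5 and x1 <= 3:              # feasibility
        val = c[0] * x1 + c[1] * x2           # objective
        if best_val is None or val < best_val:
            best_val, best_x = val, (x1, x2)

assert best_x == (3, 2) and best_val == 7.0
```

In practice such problems are handed to a dedicated mixed-integer solver; the brute-force loop above merely makes the roles of $\mathbf{c}$, $A_1$, $A_2$, and the integrality requirement explicit.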
We first reformulate the mass-action system \eqref{mas} corresponding to $(G,y)$ as
\[
\frac{d \mathbf{x}}{dt} = \Gamma R(\mathbf{x}; y)
\]
where $\Gamma \in \mathbb{R}^{m \times r}$ is the \emph{stoichiometric matrix} with columns $\Gamma_{\cdot, k} = y(\pi(k)) - y(\rho(k))$ and $R(\mathbf{x}; y) \in \mathbb{R}_{\geq 0}^r$ has entries $R_k(\mathbf{x}) = \kappa_k \mathbf{x}^{y(\rho(k))}$. The stoichiometric matrix $\Gamma$ can be decomposed in several ways which will be useful in our computational approach. Firstly, we have
\[
\Gamma = \Gamma_t - \Gamma_s
\]
where $\Gamma_t$ and $\Gamma_s$ are the \emph{target} and \emph{source matrices}, respectively, with columns $[\Gamma_t]_{\cdot, k} = y(\pi(k))$ and $[\Gamma_s]_{\cdot, k} = y(\rho(k))$. The source matrix $\Gamma_s$ encodes which reactions have common source complexes and is therefore required in enforcing Conditions (a) and (b) of Definition \ref{def:splitting}. We also have the following decomposition of $\Gamma$:
\[
\Gamma = Y A
\]
where $Y \in \mathbb{R}^{m \times n}$ is the \emph{complex matrix} with columns $Y_{\cdot, j} = y(j)$ and $A \in \{ -1, 0, 1 \}^{n \times r}$ is the \emph{adjacency matrix} with entries
\[A_{j,k} = \left\{ \begin{array}{ll} -1, \; \; \; \; \; \; \; & \mbox{ if } \rho(k) = j \\ 1, & \mbox{ if } \pi(k) = j \\ 0, & \mbox{ otherwise.}\end{array} \right.\]
The adjacency matrix $A$ encodes the mappings $\rho$ and $\pi$. We can further decompose $A = A_t - A_s$ where $[A_t]_{j,k} = 1$ if $\pi(k) = j$ and is $0$ otherwise, and $[A_s]_{j,k} = 1$ if $\rho(k) = j$ and is $0$ otherwise. We have the following relationships between the target and source matrices:
\[
\Gamma_t = Y A_t, \; \; \; \Gamma_s = Y A_s, \; \; \; \mbox{ and } \; \; \; \Gamma = \Gamma_t - \Gamma_s = Y A_t - Y A_s.
\]
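The decompositions above can be illustrated on the network \eqref{example3} of Example \ref{example34}. The following sketch (our illustration, with the matrices transcribed by hand from that example) builds $\Gamma_s = Y A_s$, $\Gamma_t = Y A_t$, and $\Gamma = \Gamma_t - \Gamma_s$ using plain nested lists.

```python
# Matrix decompositions Gamma = Gamma_t - Gamma_s = Y A_t - Y A_s
# for the network 2X1 -> X2 -> 0 (two species, three complexes, two reactions).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Y   = [[2, 0, 0],    # rows = species; columns = complexes y(1), y(2), y(3)
       [0, 1, 0]]
A_s = [[1, 0],       # rows = vertices, columns = reactions; 1 marks the source
       [0, 1],
       [0, 0]]
A_t = [[0, 0],       # 1 marks the target
       [1, 0],
       [0, 1]]

Gamma_s = matmul(Y, A_s)   # columns y(rho(k))
Gamma_t = matmul(Y, A_t)   # columns y(pi(k))
Gamma   = [[Gamma_t[i][j] - Gamma_s[i][j] for j in range(2)] for i in range(2)]
assert Gamma == [[-2, 0], [1, -1]]   # reaction vectors (-2,1) and (0,-1)
```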
We now outline the mixed-integer linear programming procedure for finding a weakly reversible split network translation $(\tilde G, \tilde y, \tilde y')$ of a given chemical reaction network $(G,y)$.\\
\noindent \emph{Inputs:} We require the following as inputs, which specify $(G,y)$ and give constraints on the split network translation $(\tilde G, \tilde y, \tilde y')$:
\begin{itemize}
\item
sets $C = \{ 1, \ldots, m \}$ (\emph{species set}), $V = \{ 1, \ldots, n \}$ (\emph{vertex set}), $E = \{ 1, \ldots, r \}$ (\emph{edge set}), and $Q = \{1, \ldots, q \}$ (\emph{slice set});
\item
the target and source matrices $\Gamma_t$ and $\Gamma_s$ for the chemical reaction network $(G, y)$; and
\item
a small parameter $0 < \epsilon \ll 1$ and a large parameter $\delta \gg 1$ (e.g. $\delta = 1/\epsilon$).
\end{itemize}
\noindent Note that $m$, $n$, and $r$ can be determined from the source matrix $\Gamma_s$. The value of $q$ must be selected by the user prior to initializing the procedure. A value of $q=1$ produces a network translation (Definition \ref{def:translation}) and the procedure becomes more computationally intensive as $q$ is increased.\\
\noindent \emph{Outputs:} The procedure outputs the matrices $\tilde{Y} \in \mathbb{R}^{m \times n}$, $\tilde \Gamma_s \in \mathbb{R}^{m \times r}$, and $\tilde{A}_s \in \mathbb{R}^{n \times r}$, as well as $\tilde \Gamma_t^{(l)}$ and $\tilde{A}_t^{(l)}$, $l \in Q$, corresponding to the split network translation $(\tilde G, \tilde y, \tilde y')$. The matrices $\tilde \Gamma_t^{(l)}$ and $\tilde A_t^{(l)}$ correspond to the target mappings in the individual slices $(\tilde G^{(l)},\tilde y)$ where $\tilde G^{(l)} = (\tilde V, \tilde E^{(l)}, \tilde \rho^{(l)}, \tilde \pi^{(l)})$.\\
\noindent \emph{Decision variables:} We require the following decision variables.\\
\begin{tabular}{|l|l|r|}
\hline
\mbox{Variable} & \mbox{Description} & \mbox{Sets} \\
\hline \hline
$[\tilde Y]_{i,j} \geq 0$ & Complex matrix for the split network translation & $i \in C, j \in V$ \\
\hline
$[\tilde \Gamma_t]_{i,k,l} \geq 0$ & \begin{tabular}{@{}l@{}}Collection of $q$ matrices $\tilde \Gamma_t^{(l)} \in \mathbb{R}_{\geq 0}^{m \times r}$ corresponding \\ to the target complex matrices for the $l^{th}$ slice \end{tabular} & $i \in C, k \in E, l \in Q$\\
\hline
$[\tilde \Gamma_s]_{i,k} \geq 0$ & \begin{tabular}{@{}l@{}}Matrix $\tilde \Gamma_s \in \mathbb{R}_{\geq 0}^{m \times r}$ corresponding to the source \\ complex matrix in the split network translation \end{tabular} & $i \in C, k \in E$\\
\hline
$[\tilde A_t]_{j,k,l} \in \{ 0, 1 \}$ & \begin{tabular}{@{}l@{}}Collection of $q$ matrices $\tilde A_t^{(l)} \in \mathbb{R}_{\geq 0}^{n \times r}$ indexing the \\ targets for the $l^{th}$ slice in the split network translation \end{tabular} & $j \in V, k \in E, l \in Q$\\
\hline
$[\tilde A_s]_{j,k} \in \{ 0, 1 \}$ & \begin{tabular}{@{}l@{}} Matrix $\tilde A_s \in \mathbb{R}_{\geq 0}^{n \times r}$ indexing the sources for the \\ split network translation \end{tabular} & $j \in V, k \in E$\\
\hline
$[\tilde B_t]_{j,k} \geq 0$ & \begin{tabular}{@{}l@{}} Scaling of the collection of matrices $\tilde A_t^{(l)}$ for use in \\ establishing weak reversibility \end{tabular} & $j \in V, k \in E$\\
\hline
$[\tilde B_s]_{j,k} \geq 0$ & Scaling of $\tilde A_s$ for use in establishing weak reversibility & $j \in V, k \in E$\\
\hline
$[\Delta]_{j,k,l} \in \{ 0, 1 \}$ & \begin{tabular}{@{}l@{}} Indicator matrix with $[\Delta]_{j,k,l} = 1$ if and only if \\ $[\tilde A_t]_{j,k,l} - [\tilde A_s]_{j,k} \not= 0$ \end{tabular} & $j \in V, k \in E, l \in Q$\\
\hline
$[\Lambda]_{k,l} \in \{ 0, 1 \}$ & \begin{tabular}{@{}l@{}} Indicator matrix with $[\Lambda]_{k,l} = 1$ if and only if the \\ $k^{th}$ reaction is non-trivial on the $l^{th}$ slice \end{tabular} & $k \in E, l \in Q$\\
\hline
\end{tabular}
\vspace{0.25in}
\noindent We require the following constraint sets to enforce that the network $(\tilde G, \tilde y, \tilde y')$ satisfies Definition \ref{def:splitting}, and is also weakly reversible.\\
\noindent \emph{Stoichiometry constraints:} To satisfy Condition (d) of Definition \ref{def:splitting}, we introduce the following constraint set:
\begin{flalign}
\tag{\textbf{Stoic}}
\label{stoichiometry}
&
\left\{ \; \; \; \begin{array}{ll} \\[-0.1in] \displaystyle{\sum_{l \in Q}\left( [\tilde \Gamma_t]_{i,k,l} - [\tilde \Gamma_s]_{i,k} \right) = [\Gamma_t]_{i,k} - [\Gamma_s]_{i,k},} & \; \; \; i \in C, k \in E. \end{array}\right.
&
\end{flalign}
\noindent \emph{Incidence constraints:} We impose that the source (respectively, target) complex of a given reaction (i.e. the column of $\tilde \Gamma_s$ [respectively, $\tilde \Gamma_t$]) corresponds to the required complex in the translated complex set (i.e. the required column of $\tilde Y$). Specifically, we require the following logical relationships:
\[
\begin{split}
[\tilde A_s]_{j,k} = 1 \: \: & \; \; \; \Longrightarrow \; \; \; [\tilde Y]_{\cdot, j} = [\tilde \Gamma_s]_{\cdot, k} \\
[\tilde A_t]_{j,k,l} = 1 & \; \; \; \Longrightarrow \; \; \; [\tilde Y]_{\cdot, j} = [\tilde \Gamma_t]_{\cdot, k,l}.
\end{split}\]
This can be accomplished with the following constraint set:
\begin{flalign}
\tag{\textbf{Incidence 1}}
\label{incidence1}
&
\left\{ \; \; \; \begin{array}{ll}
\displaystyle{[\tilde Y]_{i,j} - \delta \left(1 - [\tilde A_s]_{j,k}\right) \leq [\tilde \Gamma_s]_{i,k}}, & \; \; \; i \in C, j \in V, k \in E \\[0.05in]
\displaystyle{[\tilde \Gamma_s]_{i,k} \leq [\tilde Y]_{i,j} + \delta \left(1 - [\tilde A_s]_{j,k} \right)}, & \; \; \; i \in C, j \in V, k \in E \\[0.05in]
\displaystyle{[\tilde Y]_{i,j} - \delta \left(1 - [\tilde A_t]_{j,k,l}\right) \leq [\tilde \Gamma_t]_{i,k,l}}, & \; \; \; i \in C, j \in V, k \in E, l \in Q \\[0.05in]
\displaystyle{[\tilde \Gamma_t]_{i,k,l} \leq [\tilde Y]_{i,j} + \delta \left(1 - [\tilde A_t]_{j,k,l} \right)}, & \; \; \; i \in C, j \in V, k \in E, l \in Q.
\end{array}\right.
&
\end{flalign}
Note that, since $\delta \gg 1$, we have that $[\tilde A_t]_{j,k,l} = 0$ and $[\tilde A_s]_{j,k} = 0$ effectively give no restrictions on $[\tilde \Gamma_t]_{i,k,l}$ or $[\tilde \Gamma_s]_{i,k}$.
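The role of the large parameter $\delta$ in \eqref{incidence1} can be seen in a one-dimensional sketch: the paired inequalities pin a $\tilde \Gamma_s$ entry to the corresponding $\tilde Y$ entry exactly when the indicator is $1$, and are slack otherwise. The numeric values below are arbitrary illustrations.

```python
# One-dimensional illustration of the big-M pattern in (Incidence 1).
# delta and the test values are arbitrary.
delta = 1e3

def feasible(Y_ij, G_ik, a_jk):
    """Both inequalities of the (Incidence 1) pair for a single entry."""
    return Y_ij - delta * (1 - a_jk) <= G_ik <= Y_ij + delta * (1 - a_jk)

assert feasible(2.0, 2.0, 1)        # indicator 1 forces the entries to agree
assert not feasible(2.0, 2.5, 1)    # ... any mismatch is infeasible
assert feasible(2.0, 750.0, 0)      # indicator 0 leaves the entry free
```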
We require that every reaction is assigned exactly one source complex and one target complex on each slice in $(\tilde G, \tilde y, \tilde y')$ so that the mappings $\alpha^{(l)}$, $l \in Q$, in Definition \ref{def:splitting} are bijective. This can be accomplished with the following constraint set:
\begin{flalign}
\tag{\textbf{Incidence 2}}
\label{incidence2}
&
\left\{ \; \; \; \begin{array}{ll} \\[-0.1in]
\displaystyle{\sum_{j \in V} [\tilde A_s]_{j,k}} = 1, & \; \; \; k \in E \\[0.05in]
\displaystyle{\sum_{j \in V} [\tilde A_t]_{j,k,l}} = 1, & \; \; \; k \in E, l \in Q
\end{array}\right.
&
\end{flalign}
Note that the reaction $k \in E$ on the slice $l\in Q$ is a self loop at vertex $j \in V$ if $[\tilde A_s]_{j,k} =1$ and $[\tilde A_t]_{j,k,l} = 1$.\\% This is allowed by \eqref{incidence2}. Since such reactions make no contribution to \eqref{gmas} will not affect the theory we have introduced in this paper.\\
\noindent \emph{Weak reversibility constraints:} We want the split network translation $(\tilde G, \tilde y, \tilde y')$ to be weakly reversible. We can accomplish this with the following constraint set (see Appendix \ref{appendixa} for justification):
\begin{flalign}
\tag{\textbf{Weak reversibility}}
\label{wr}
&
\left\{ \; \; \; \begin{array}{ll} \\[-0.1in]
\displaystyle{\sum_{k \in E} [\tilde B_t]_{j,k}} = \displaystyle{\sum_{k \in E} [\tilde B_s]_{j,k}}, & \; \; \; j \in V \\[0.05in]
\epsilon [\tilde A_s]_{j,k} \leq [\tilde B_s]_{j,k}, & \; \; \; j \in V, k \in E\\[0.05in]
[\tilde B_s]_{j,k} \leq \delta [\tilde A_s]_{j,k}, & \; \; \; j \in V, k \in E\\[0.05in]
\displaystyle{\epsilon \left( \sum_{l \in Q} [\tilde A_t]_{j,k,l} \right) \leq [\tilde B_t]_{j,k},} & \; \; \; j \in V, k \in E\\[0.05in]
\displaystyle{[\tilde B_t]_{j,k} \leq \delta \left( \sum_{l \in Q} [\tilde A_t]_{j,k,l} \right),} & \; \; \; j \in V, k \in E
\end{array}\right.
&
\end{flalign}
\noindent The first constraint of \eqref{wr} is equivalent to $\tilde B \cdot \mathbf{1} = \mathbf{0}$ where $\tilde B = \tilde B_t - \tilde B_s$, $\mathbf{1} = (1, \ldots, 1)$, and $\mathbf{0} = (0,\ldots, 0)$. The remaining constraints guarantee that $\tilde A$ and $\tilde B$ are structurally equivalent matrices (see Definition \ref{dfn:se}).\\
\noindent \emph{Efficiency constraints:} In order to increase computational efficiency, it is desirable to remove solutions which are equivalent through, for instance, permutations of indexing. We introduce the following constraint:
\begin{flalign}
\tag{\textbf{Efficiency 1}}
\label{efficiency1}
&
\left\{ \; \; \; \begin{array}{ll} \\[-0.1in]
\displaystyle{\mathop{\sum_{k' \in E}}_{k' < k} [\tilde A_s]_{j,k'}} \geq \displaystyle{\mathop{\sum_{j' \in V}}_{j' < j} [\tilde A_s]_{j',k}}, & \; \; \; j \in V, k \in E, k \geq j
\end{array}\right.
&
\end{flalign}
This constraint set guarantees that the source complexes are indexed so that each new source complex is assigned the vertex with the lowest available index (see Section 3.4 of \cite{J-S6} for justification).
It is also computationally desirable to impose that, if multiple weakly reversible split network translations exist, we minimize the number of non-trivial (i.e. non-self loop) reactions and index the reactions on the lowest available slice. This requires tracking and counting the non-trivial reactions. To this end, we introduce indicator variables $\Delta_{j,k,l} \in \{ 0, 1\}$, $j \in V, k \in E, l \in Q,$ and $\Lambda_{k,l} \in \{ 0, 1 \}$, $k \in E, l \in Q$, and impose the following requirements:
\begin{enumerate}
\item[(i)]
$\Delta_{j,k,l} = 1$ if and only if the vertex $j \in V$ is a source or a target, but not both, for the reaction $k \in E$ on the slice $l \in Q$. We can impose this with the logical equivalency:
\[\Delta_{j,k,l} = 1 \; \Longleftrightarrow \; \left| [\tilde A_s]_{j,k} - [\tilde A_t]_{j,k,l} \right| = 1.\]
\item[(ii)]
$\Lambda_{k,l} = 1$ if and only if $k \in E$ is a non-trivial (i.e. non-self loop) reaction on the slice $l \in Q$. We can impose this with the logical equivalency:
\[\Delta_{j,k,l} = 1 \mbox{ for some } j \in V \; \Longleftrightarrow \; \Lambda_{k,l} = 1.\]
\item[(iii)]
Non-trivial reactions are assigned to the lowest indexed available slice.
\end{enumerate}
\noindent We introduce the following constraints:
\begin{flalign}
\tag{\textbf{Efficiency 2}}
\label{efficiency2}
&
\left\{ \; \; \; \begin{array}{ll} \\[-0.1in]
\displaystyle{[\tilde A_s]_{j,k} - [\tilde A_t]_{j,k,l} \leq [\Delta]_{j,k,l}} & j \in V, k \in E, l \in Q\\[0.05in]
\displaystyle{[\tilde A_t]_{j,k,l} - [\tilde A_s]_{j,k} \leq [\Delta]_{j,k,l}} & j \in V, k \in E, l \in Q\\[0.05in]
\displaystyle{\sum_{j \in V} [\Delta]_{j,k,l}} \leq \delta \Lambda_{k,l} & k \in E, l \in Q \\[0.05in]
\displaystyle{-\delta \Lambda_{k,l} \leq \sum_{j \in V} [\Delta]_{j,k,l}} & k \in E, l \in Q \\[0.05in]
\displaystyle{\Lambda_{k,l+1} \leq \Lambda_{k,l}} & k \in E, l \in Q, l < q
\end{array}\right.
&
\end{flalign}
The first two constraints guarantee (i) above, the third and fourth constraints guarantee (ii), and the fifth constraint guarantees (iii).\\
\noindent \emph{Objective Function:} We introduce the following objective function:
\begin{equation}
\tag{\textbf{Objective}}
\label{objective}
\mbox{minimize} \; \; \sum_{i \in C} \sum_{j \in V} [\tilde Y]_{i,j} + \sum_{k \in E} \sum_{l \in Q} [\Lambda]_{k,l} .
\end{equation}
This objective function minimizes the total stoichiometry and the number of non-trivial reactions. Optimizing \eqref{objective} over the constraint sets \eqref{stoichiometry}, \eqref{incidence1}, \eqref{incidence2}, \eqref{wr}, \eqref{efficiency1}, and \eqref{efficiency2} determines, from the given chemical reaction network $(G,y)$, a split network translation $( \tilde G, \tilde y, \tilde y')$ which is weakly reversible and has up to $q$ slices. If the feasible region is empty, then there is no weakly reversible split network translation with up to $q$ slices.
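As a sanity check on the constraint sets (not a run of the mixed-integer program itself), the following sketch verifies with hand-constructed matrices that a weakly reversible split translation of the small network $2X_1 \to 2X_2$, $X_2 \to X_1$ with $q = 2$ slices satisfies \eqref{stoichiometry}, and exhibits scalings $\tilde B_s, \tilde B_t$ satisfying the flow-balance constraint of \eqref{wr}; all matrices are our own transcription for illustration.

```python
# Hand-constructed feasibility check for a split translation of the network
# 2X1 -> 2X2, X2 -> X1 (two species, two reactions, q = 2 slices).
q = 2
Gamma_t = [[0, 1], [2, 0]]      # columns y(pi(k)): targets 2X2 and X1
Gamma_s = [[2, 0], [0, 1]]      # columns y(rho(k)): sources 2X1 and X2

Gs_t = [[1, 0], [0, 1]]         # tilde Gamma_s: translated sources X1 and X2
Gt_t = [[[0, 1], [1, 0]],       # slice 1 targets: X2 and X1
        [[0, 0], [1, 1]]]       # slice 2 targets: X2 and X2 (self-loop)

# (Stoic): sum over slices of (tilde Gamma_t^(l) - tilde Gamma_s) equals Gamma.
for i in range(2):
    for k in range(2):
        lhs = sum(Gt_t[l][i][k] - Gs_t[i][k] for l in range(q))
        assert lhs == Gamma_t[i][k] - Gamma_s[i][k]

# (Weak reversibility): exhibit B_s, B_t with the required supports
# (support of A_s, and of A_t summed over slices) whose row sums balance
# at every vertex, certifying the flow-balance constraint is satisfiable.
B_s = [[1.0, 0.0], [0.0, 2.0]]
B_t = [[0.0, 1.0], [1.0, 1.0]]
for j in range(2):
    assert abs(sum(B_t[j]) - sum(B_s[j])) < 1e-12
```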
\section{Examples}
\label{sec:examples}
In this section, we present examples which demonstrate how the algorithm presented in Section \ref{sec:implementation} may be utilized to find split network translations (Definition \ref{def:splitting}). In all the examples, the methods and theory of \cite{J1,J-B2018,Tonello2017,J2,J-M-P} do not succeed in obtaining a weakly reversible translation, so that split network translation is required.
\begin{exa}
Consider the following chemical reaction network:
\begin{equation}
\label{example123}
\begin{tikzcd}
nX_1 \arrow[r,"r_1"] & nX_2 & X_2 \arrow[r,"r_2"] & X_1
\end{tikzcd}
\end{equation}
where $n \in \mathbb{Z}_{> 0}$. The network \eqref{example123} is trivially weakly reversible for $n = 1$ but fails to have even a weakly reversible network translation (Definition \ref{def:translation}) for $n \geq 2$. The method of split translation (Definition \ref{def:splitting}), however, yields the following network:
\begin{equation}
\label{example124}
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (nX_1) \end{array}$}} \arrow[rr,yshift=5,"nr_1"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$}} \arrow[ll,yshift=-5,"r_2"]\end{tikzcd}
\end{equation}
corresponding to the following $n$ slices:
\[
\begin{tikzcd}
\tilde G^{(1)}: & \mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (nX_1) \end{array}$}} \arrow[rr,yshift=5,"r_1"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$} \arrow[ll,yshift=-5,"r_2"]}\\
\tilde G^{(l)}: & \mbox{\ovalbox{$\begin{array}{c} 1 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_1 \\ (nX_1) \end{array}$}} \arrow[rr,"r_1"] & & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 \\ (X_2) \end{array}$} \arrow[loop right,"r_2"]}
\end{tikzcd}
\]
for $l = 2, \ldots, n$. Specifically, we have that
\[\sum_{l=1}^n \left(\tilde y(\tilde \pi^{(l)}(\alpha^{(l)}(1))) - \tilde y(\tilde \rho^{(l)}(\alpha^{(l)}(1)))\right) = n \left( \begin{array}{c} -1 \\ 1 \end{array} \right) = \left( \begin{array}{c} -n \\ n \end{array} \right)\]
and
\[\sum_{l=1}^n \left(\tilde y(\tilde \pi^{(l)}(\alpha^{(l)}(2))) - \tilde y(\tilde \rho^{(l)}(\alpha^{(l)}(2)))\right) = \left( \begin{array}{c} 1 \\ -1 \end{array} \right) + \sum_{l=2}^n \left( \begin{array}{c} 0 \\ 0 \end{array} \right) = \left( \begin{array}{c} 1 \\ -1 \end{array} \right)\]
which corresponds to the stoichiometry of the reaction vectors of \eqref{example123}, so that Condition (d) of Definition \ref{def:splitting} is satisfied.
Note that the mass-action system corresponding to \eqref{example123} is
\[\left( \begin{array}{c} \dot{x}_1 \\ \dot{x}_2 \end{array} \right) = \kappa_1 \left( \begin{array}{c} -n \\ n \end{array} \right) x_1^n + \kappa_2 \left( \begin{array}{c} 1 \\ -1 \end{array} \right) x_2 = n\kappa_1 \left( \begin{array}{c} -1 \\ 1 \end{array} \right) x_1^n + \kappa_2 \left( \begin{array}{c} 1 \\ -1 \end{array} \right) x_2\]
where we can identify the right-most system as corresponding to the generalized chemical reaction network \eqref{example124} with the rescaled rate constant $n \kappa_1$. Despite this simple correspondence between \eqref{example123} and \eqref{example124}, previous work on network translation, and in particular Definition \ref{def:translation}, does not accommodate scaling of rate constants. Split network translation extends previous work in this important direction.
\end{exa}
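The rate-constant rescaling in the preceding example is easy to verify numerically. The following sketch (with $n = 3$ and arbitrary rate constants of our choosing) compares the mass-action right-hand side of \eqref{example123} with the translated system \eqref{example124} using the rescaled constant $n\kappa_1$.

```python
# Numeric check (n = 3, arbitrary rate constants) that the original system
# and the split translation with rescaled constant n*kappa_1 agree.
n, k1, k2 = 3, 0.8, 1.5

def original(x1, x2):
    v1, v2 = (-n, n), (1, -1)          # reaction vectors of the original network
    return (k1 * v1[0] * x1**n + k2 * v2[0] * x2,
            k1 * v1[1] * x1**n + k2 * v2[1] * x2)

def translated(x1, x2):
    v1, v2 = (-1, 1), (1, -1)          # unit reaction vectors in the translation
    nk1 = n * k1                       # rescaled rate constant
    return (nk1 * v1[0] * x1**n + k2 * v2[0] * x2,
            nk1 * v1[1] * x1**n + k2 * v2[1] * x2)

for x in [(0.4, 1.1), (2.0, 0.3), (1.0, 1.0)]:
    assert all(abs(a - b) < 1e-9 for a, b in zip(original(*x), translated(*x)))
```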
\begin{exa}
Reconsider the chemical reaction network \eqref{example1} given in Section \ref{sec:intro}, which we denote $(G, y)$:
\begin{equation}
\label{example321}
\begin{tikzcd}
& X_2 & 2X_2 \arrow[rd,"r_3"] & & X_1 + X_2 \\[-0.15in]
X_1 \arrow[rd,"r_2"] \arrow[ru,"r_1"] & & & X_4 \arrow[rd,"r_6"] \arrow[ru,"r_5"] & \\[-0.15in]
& X_3 & 2X_3 \arrow[ru,"r_4"] & & X_1 + X_3
\end{tikzcd}
\end{equation}
The computational algorithms of \cite{J2,J-B2018,Tonello2017} do not succeed in finding a network translation (Definition \ref{def:translation}).
We now attempt to find a split network translation (Definition \ref{def:splitting}) using the algorithm presented in Section \ref{sec:implementation}. The algorithm identifies the following generalized chemical reaction network $(\tilde G, \tilde y, \tilde y')$ as a weakly reversible split network translation of \eqref{example321}:
\begin{equation} \label{example223}
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} 1 \\ \end{array} \Bigg\lvert \begin{array}{c} 2X_1 \\ (X_1) \end{array}$}} \arrow[r,bend left = 10,"r_1"] \arrow[d,bend left = 10,"r_2"] & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \end{array} \Bigg\lvert \begin{array}{c} X_1+X_2 \\ (2X_2) \end{array}$}} \arrow[l,bend left = 10,"r_3"] \arrow[d,bend left = 10,"r_3"]\\
\mbox{\ovalbox{$\begin{array}{c} 3 \\ \end{array} \Bigg\lvert \begin{array}{c} X_1+X_3 \\ (2X_3) \end{array}$}} \arrow[r,bend left = 10,"r_4"] \arrow[u,bend left = 10,"r_4"] & \mbox{\ovalbox{$\begin{array}{c} 4 \\ \end{array} \Bigg\lvert \begin{array}{c} X_4 \\ (X_4) \end{array}$}} \arrow[l,bend left = 10,"r_6"] \arrow[u,bend left = 10,"r_5"]
\end{tikzcd}
\end{equation}
where we have the following two slices:
\begin{equation} \label{example222}
\begin{tikzcd}
2X_1 \arrow[r,"r_1"] \arrow[dd,bend left = 10,"r_2"] & X_1+X_2 \arrow[dd,bend left = 10,"r_3"] & & 2X_1 \arrow[loop left,"r_1 \& r_2"] & X_1+X_2 \arrow[l,"r_3"]\\
& & & & \\
X_1+X_3 \arrow[uu,bend left = 10,"r_4"] & X_4 \arrow[l,"r_6"] \arrow[uu,bend left = 10,"r_5"] & & X_1+X_3 \arrow[r,"r_4"] & X_4 \arrow[loop right,"r_5 \& r_6"]
\end{tikzcd}
\end{equation}
Note that we show the self loops in the slices \eqref{example222} for completeness but omit them in \eqref{example223} to avoid overcluttering the diagram.
It can be checked that the conditions of Definition \ref{def:splitting} are satisfied, and that the mass-action system \eqref{mas} corresponding to \eqref{example321} and generalized mass-action system \eqref{gmas} corresponding to \eqref{example223} are both given by \eqref{mas1} (i.e. Theorem \ref{dynamicalequivalence} is satisfied). In particular, we have that Condition (d) of Definition \ref{def:splitting} is satisfied because, even though reactions $r_3$ and $r_4$ are split in the split network translation \eqref{example223}, we have
\[y(\pi(3)) - y(\rho(3)) = \left( \begin{array}{c} 0 \\ -2 \\ 0 \\ 1 \end{array} \right) = \left( \begin{array}{c} 1 \\ -1 \\ 0 \\ 0 \end{array} \right) + \left( \begin{array}{c} -1 \\ -1 \\ 0 \\ 1 \end{array} \right) = \left( \tilde y(1) - \tilde y(2) \right) + \left(\tilde y(4) - \tilde y(2)\right)\]
and
\[y(\pi(4)) - y(\rho(4)) = \left( \begin{array}{c} 0 \\ 0 \\ -2 \\ 1 \end{array} \right) = \left( \begin{array}{c} 1 \\ 0 \\ -1 \\ 0 \end{array} \right) + \left( \begin{array}{c} -1 \\ 0 \\ -1 \\ 1 \end{array} \right) = \left( \tilde y(1) - \tilde y(3) \right) + \left(\tilde y(4) - \tilde y(3)\right).\]
Since \eqref{example223} is weakly reversible and has a stoichiometric and kinetic-order deficiency of zero ($\tilde \delta = 0$ and $\tilde \delta' = 0$), the methods of \cite{MR2014} and \cite{J-M-P} can be applied to obtain the steady state parametrization
\[
\begin{aligned}
x_1 & = 2 \kappa_3 \kappa_4(\kappa_5+\kappa_6)\tau\\
x_2 & = \kappa_4(2\kappa_1\kappa_5+\kappa_1\kappa_6+\kappa_2\kappa_5)\tau\\
x_3 & = 2\kappa_3\kappa_4(\kappa_1+\kappa_2)\tau\\
x_4 & = \kappa_3(\kappa_1\kappa_6+\kappa_2\kappa_5+2\kappa_2\kappa_6)\tau
\end{aligned}
\]
where $\tau > 0$. The computational method introduced in \cite{C-F-M-W2016} guarantees that the system is mono-stationary for all values of the rate constants $\kappa_i > 0$. That is, within each positive stoichiometric compatibility class there is exactly one steady state. \hfill $\square$
\end{exa}
\begin{exa}
Consider the following mechanism for the bifunctional enzyme 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase (PFK-2/FBPase-2), which is simplified from that of the paper by Karp et al. \cite{Karp}:
\begin{equation}
\label{pfk}
\begin{tikzcd}
& X_2 \arrow[dd,"r_2"] & X_2 + X_4 \arrow[rd,bend left = 10,"r_{4}"] & & X_3 + X_5 & & \\
X_1 \arrow[ur,"r_1"] & & & X_6 \arrow[lu,bend left = 10,"r_{5}"] \arrow[ld,bend left = 10,"r_{7}"] \arrow[ru,"r_{8}"] \arrow[rd,"r_{9}"'] & & & \\
& X_3 \arrow[ul,"r_3"] & X_1 + X_5 \arrow[ru,bend left = 10,"r_{6}"] & & X_1 + X_4 & &
\end{tikzcd}
\end{equation}
where $X_1 = E$-$ATP$-$F6P$, $X_2 = E$-$ATP$-$F2,6BP$, $X_3 = E$-$F2,6BP$, $X_4 = F6P$, $X_5 = F2,6BP$, and $X_6 = E$-$ATP$-$F6P$-$F2,6BP$. Note that this network is not weakly reversible, and furthermore does not admit a weakly reversible network translation by the techniques outlined in \cite{J-M-P,Tonello2017,J-B2018}.
We therefore look for a split network translation (Definition \ref{def:splitting}) using the algorithm outlined in Section \ref{sec:implementation}. This procedure finds the following split network translation which has two slices:
\begin{equation} \label{example225}
\begin{tikzcd}
\mbox{\ovalbox{$\begin{array}{c} 1 \\ \end{array} \Bigg\lvert \begin{array}{c} 2X_1+X_4 \\ (X_1) \end{array}$}} \arrow[d,"r_1"] & \mbox{\ovalbox{$\begin{array}{c} 2 \\ \end{array} \Bigg\lvert \begin{array}{c} X_1+X_6 \\ (X_6) \end{array}$}} \arrow[l,"r_5 \& r_9"'] \arrow[d,bend left = 10,"r_7 \& r_8"] \arrow[r,"r_5"] \arrow[dr,bend left=5,"r_8"] & \mbox{\ovalbox{$\begin{array}{c} 3 \\ \end{array} \Bigg\lvert \begin{array}{c} X_2 + X_6 \\ (X_2) \end{array}$}} \arrow[d,"r_2"]\\
\mbox{\ovalbox{$\begin{array}{c} 4 \\ \end{array} \Bigg\lvert \begin{array}{c} X_1+X_2+X_4 \\ (X_2+X_4) \end{array}$}} \arrow[ur,"r_4"] & \mbox{\ovalbox{$\begin{array}{c} 5 \\ \end{array} \Bigg\lvert \begin{array}{c} 2X_1+X_5 \\ (X_1+X_5) \end{array}$}} \arrow[u,bend left = 10,"r_6"] & \mbox{\ovalbox{$\begin{array}{c} 6 \\ \end{array} \Bigg\lvert \begin{array}{c} X_3+X_6 \\ (X_3) \end{array}$}} \arrow[ul,bend left=5,"r_3"]
\end{tikzcd}
\end{equation}
Notice that the reactions $r_5$ and $r_8$ explicitly appear twice, while the second copies of the remaining reactions correspond to self-loops and are not shown. To verify Condition (d) of Definition \ref{def:splitting}, we observe that, for $r_5$, we have
\[\footnotesize(\tilde y(1) - \tilde y(2)) + (\tilde y(3) - \tilde y(2)) = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ -1 \end{array} \right) + \left( \begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ 1 \\ 0 \\ -1 \end{array} \right) = y(\pi(5)) - y(\rho(5))\]
and, for $r_8$, we have
\[\footnotesize(\tilde y(5) - \tilde y(2)) + (\tilde y(6) - \tilde y(2)) = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ -1 \end{array} \right) + \left( \begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \\ 1 \\ -1 \end{array} \right) = y(\pi(8)) - y(\rho(8)).\]
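As in the previous example, these identities can be checked numerically; a minimal sketch with the complex vectors read off the nodes of \eqref{example225} (species ordered $X_1,\dots,X_6$):

```python
import numpy as np

# Complexes of the split network translation (species order: X1..X6)
y_t = {
    1: np.array([2, 0, 0, 1, 0, 0]),  # 2X1 + X4
    2: np.array([1, 0, 0, 0, 0, 1]),  # X1 + X6
    3: np.array([0, 1, 0, 0, 0, 1]),  # X2 + X6
    5: np.array([2, 0, 0, 0, 1, 0]),  # 2X1 + X5
    6: np.array([0, 0, 1, 0, 0, 1]),  # X3 + X6
}

# Original reaction vectors: r5 is X6 -> X2 + X4, r8 is X6 -> X3 + X5
r5 = np.array([0, 1, 0, 1, 0, -1])
r8 = np.array([0, 0, 1, 0, 1, -1])

# Condition (d) for the two reactions that are split in the translation
assert np.array_equal(r5, (y_t[1] - y_t[2]) + (y_t[3] - y_t[2]))
assert np.array_equal(r8, (y_t[5] - y_t[2]) + (y_t[6] - y_t[2]))
```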
This network \eqref{example225} has a stoichiometric deficiency of one ($\delta = 1$) and a kinetic deficiency of zero ($\delta' = 0$). It follows by Theorem 14 of \cite{J-M-P} and Theorem \ref{dynamicalequivalence} that the following monomial parametrization lies on the steady state set of the mass-action system corresponding to \eqref{pfk}:
\begin{equation}
\label{pfk-param}
\left\{\; \;
\begin{array}{rlrlrl}
x_1 & = \displaystyle{\frac{k_5+k_9}{k_1} \tau}, & x_3 & = \displaystyle{\frac{k_5+k_8}{k_3} \tau}, &
x_5 & = \displaystyle{\frac{k_1(k_7+k_8)}{k_6(k_5+k_9)}},\\
x_2 & = \displaystyle{\frac{k_5}{k_2} \tau}, & x_4 & = \displaystyle{\frac{k_2(k_5+k_9)}{k_4k_5}}, & x_6& = \tau,
\end{array}
\right.
\end{equation}
where $\tau > 0$ is a free parameter. It is worth noting that, since $\delta \not= 0$, the parametrization \eqref{pfk-param} does not represent the entire steady state set. In fact, \eqref{pfk-param} is only a subset of the full parametrization, which is given by:
\small
\[\left\{ \; \; \begin{array}{rlrlrl}
x_1 & = \displaystyle{\frac{k_2(k_5 + k_9) + k_4k_9\tau_2}{k_1(k_5 + k_9)}\tau_1},
& x_3 & = \displaystyle{\frac{k_2(k_5 + k_9) + k_4k_8\tau_2}{k_3(k_5 + k_9)}\tau_1},
& x_5 & = \displaystyle{\frac{k_1k_4(k_7 + k_8)}{k_6(k_2(k_5 + k_9) + k_4k_9\tau_2 )}\tau_2},\\
x_2 & = \tau_1,
& x_4 & = \tau_2,
& x_6 & = \displaystyle{\frac{k_4}{k_5 + k_9}\tau_1 \tau_2}
\end{array} \right.\]
\normalsize
where $\tau_1, \tau_2 > 0$ are free parameters.
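One can check numerically that the one-parameter family \eqref{pfk-param} is recovered from the full parametrization by fixing $\tau_1$ and $\tau_2$ to the values of $x_2$ and $x_4$ in \eqref{pfk-param}. A minimal sketch with random positive rate constants:

```python
import numpy as np

rng = np.random.default_rng(0)
k = dict(enumerate(rng.uniform(0.5, 2.0, size=9), start=1))  # k[1]..k[9] > 0
tau = 1.3  # free parameter of the one-parameter family

# One-parameter family (pfk-param)
x1 = (k[5] + k[9]) / k[1] * tau
x2 = k[5] / k[2] * tau
x3 = (k[5] + k[8]) / k[3] * tau
x4 = k[2] * (k[5] + k[9]) / (k[4] * k[5])
x5 = k[1] * (k[7] + k[8]) / (k[6] * (k[5] + k[9]))
x6 = tau

# Two-parameter (full) family, evaluated at tau1 = x2 and tau2 = x4
t1, t2 = x2, x4
X1 = (k[2] * (k[5] + k[9]) + k[4] * k[9] * t2) / (k[1] * (k[5] + k[9])) * t1
X3 = (k[2] * (k[5] + k[9]) + k[4] * k[8] * t2) / (k[3] * (k[5] + k[9])) * t1
X5 = k[1] * k[4] * (k[7] + k[8]) \
     / (k[6] * (k[2] * (k[5] + k[9]) + k[4] * k[9] * t2)) * t2
X6 = k[4] / (k[5] + k[9]) * t1 * t2

# The two families agree at this choice of (tau1, tau2)
assert np.allclose([X1, X3, X5, X6], [x1, x3, x5, x6])
```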
\end{exa}
\section{Conclusions and Future Work}
\label{sec:conclusions}
In this paper, we have extended the framework of network translation to accommodate the splitting of reactions in a chemical reaction network. This expands the scope of networks for which the steady state set can be characterized by deficiency-based methods. We have also presented a computational program for finding split network translations which are weakly reversible.
This work raises several avenues for future computational work.
In particular, the computational algorithm presented in Section \ref{sec:implementation} does not currently scale well to large networks, often taking several minutes to complete even for networks with more than ten reactions. This limits widespread application. Future work will focus on improving the efficiency of the code, which would allow the theory of split network translation to be tested on publicly available biochemical reaction databases, for example, the European Bioinformatics Institute's BioModels Database \cite{Biomodels}. Additionally, we will work toward combining the computational work of this paper and \cite{J2,Tonello2017,J-B2018} with the computational methods for building steady state parametrizations and establishing multistationarity in mass-action systems \cite{C-F-M-W2016}.
\section{Introduction}
Prostate cancer is one of the leading causes of cancer death in adult males \cite{parikesit2016impact}. External beam radiation therapy (EBRT) is one of the most commonly used treatments for prostate cancer \cite{d1998biochemical}, where precise prostate segmentation of CT images is a crucial step in EBRT planning to maximize the delivery of radiation doses to tumor tissues while minimizing harm to the surrounding healthy tissues.
Because manual delineation is time- and labor-consuming and frequently plagued by significant inter-operator variation, several automated segmentation methods have been proposed to alleviate the clinician's burden. Fully convolutional networks (FCNs), such as U-Net \cite{ronneberger2015u} and its variants \cite{milletari2016v,balagopal2018fully,isensee2021nnu,li2022uncertainty}, have been successfully applied to medical image segmentation tasks. However, convolution operations remain limited in capturing long-range relations across the global context \cite{chen2021transunet,lin2022ds,he2022transformers}, resulting in subpar performance. On the other hand, accurate prostate segmentation of CT images remains challenging due to the unclear prostate boundary caused by the low contrast of CT images and the large variance of shapes and sizes across different individuals \cite{wang2020boundary,he2021hf}.
U-Net, one of the most widely used techniques, adopts an FCN-based encoder-decoder setup with skip connections to preserve details when extracting local visual features. Incorporating a self-attention mechanism to enhance the FCN-based encoder's ability to capture the global context is therefore desirable \cite{alimjan2021image}, and Transformer-based encoders leveraging the self-attention mechanism show great promise in this regard. The Transformer was initially designed to capture long-term dependencies of sequential data with stacked self-attention layers \cite{vaswani2017attention} and achieved great success in Natural Language Processing (NLP) tasks. Inspired by this, Dosovitskiy et al. \cite{Dosovitskiy2021AnII} proposed the Vision Transformer (ViT) by formulating image classification as a sequence prediction task over image patch (region) sequences, thereby capturing long-term dependencies within the input image. TransUNet \cite{chen2021transunet} successfully adapts the ViT to the medical image segmentation task, where the encoder consists of FCN-based layers followed by several layers of transformers (multi-head self-attention modules) to better capture the global context from medical image inputs. Subsequent studies \cite{valanarasu2021medical,zhang2021transfuse,chen2021transattunet,jiang2022nested} follow a similar route. However, learning the long-term dependencies that typically contain precise spatial information in low-level feature maps requires more than a few transformer layers over high-level feature inputs.
More recently, the Swin Transformer \cite{liu2021swin} demonstrated that it can simultaneously learn long-range global context and extract hierarchical feature maps from natural images. Based on this idea, SwinUNet \cite{cao2021swin} utilizes hierarchical Swin Transformer blocks to construct both the encoder and decoder in a U-Net-like architecture. DS-TransUNet \cite{lin2022ds} adds a parallel encoder to process the input at a different resolution. SwinUNETR \cite{tang2022self} uses pre-training on a large medical image data set. Despite their usefulness, these fine-grain ViT-based approaches use standard self-attention to capture short- and long-range interactions. As such, they suffer from high computational costs and an explosion of time and memory costs, especially when the feature map becomes large. The focal transformer \cite{yang2021focal} employs a focal self-attention mechanism with both fine-grained local and coarse-grained global interactions. Therefore, it can efficiently and effectively capture both short- and long-range visual feature dependencies, improving the performance of object detection tasks on benchmark natural image data sets. To the best of our knowledge, it has not been generalized to extracting visual features for medical image segmentation tasks. In particular, to mitigate the unclear organ boundary and the large variance of shapes and sizes in prostate CT scans, we apply a Gaussian kernel over the boundary of the ground truth contour \cite{lin2021bsda} via an auxiliary boundary-aware regression task (Figure \ref{fig:focal_unetr}B). This auxiliary task serves as a regularization term for the main mask generation task (Figure \ref{fig:focal_unetr}A), further improving the generalizability of our model to unseen test sets.
Here we propose FocalUNETR (Focal UNEt TRansformers), a novel focal transformer architecture for CT-based image segmentation that accounts for the unclear boundary of CT contours. We summarize our main contributions as follows:
\begin{itemize}
\item We propose FocalUNETR, a new focal transformer model for CT-based prostate segmentation utilizing {\it focal self-attention} to hierarchically learn the feature maps accounting for both short- and long-range visual dependencies efficiently and effectively.
\item We tackle the CT-specific unclear boundary challenge by designing an auxiliary task of kernel regression as regularization to the main task of segmentation mask generation.
\item We evaluate our proposed method on a large 400-patient CT image dataset, demonstrating improved prostate segmentation performance over state-of-the-art methods.
\end{itemize}
\section{Related Work}
\subsection{Prostate Segmentation}
Existing prostate segmentation methods can be broadly classified into three types: multi-atlas-based methods, deformable model-based methods, and learning-based methods. In this section, we will review the most related learning-based methods, particularly the state-of-the-art deep learning methods.
In general, conventional learning-based approaches have two major components: (a) extraction of hand-crafted features to represent target organs, and (b) a classification/regression model for segmentation. For instance, Glocker et al. \cite{glocker2012joint} developed a supervised forest model that uses both class and structural information to jointly perform pixel classification and shape regression. To enhance the segmentation performance, Chen and Zheng \cite{chen2013fully} selected the most important features from the complete feature set using a hierarchical landmark detection method. Gao et al. \cite{gao2016accurate} utilized multi-task random forests to segment the prostate, bladder, rectum, and left and right femoral heads, jointly with a displacement regression task. Since these methods are typically built on low-dimensional hand-crafted features, their performance may be constrained, particularly when the inputs have an unclear boundary, as in the case of CT-based prostate images.
In contrast to conventional learning techniques, deep learning methods can automatically extract multi-level feature representations from the input images. For example, a 2D U-Net model was utilized to learn a mapping function that converts each CT slice to the corresponding segmentation masks of the prostate, bladder, and rectum in the male pelvis \cite{kazemifar2018segmentation}. Balagopal et al. \cite{balagopal2018fully} proposed an automated workflow for male pelvic CT image segmentation that utilizes a 2D volume localization network followed by a 3D segmentation network for volumetric segmentation of the prostate, bladder, rectum, and femoral heads. Both are based on U-Net with an encoder-decoder architecture. To better address the unclear prostate boundary in CT images, He et al. \cite{he2021hf} proposed a multi-task learning strategy that combined the main prostate segmentation task with an auxiliary prostate boundary delineation task. Wang et al. \cite{wang2020boundary} developed a boundary coding representation via a dilation and erosion process over the original segmentation mask. These two studies also adopted the U-Net architecture for segmentation. Despite their success, FCN models usually fail to capture long-range global context information due to the inductive bias of convolutional operations. As a result, subpar performance may occur, especially when large variation in shapes and sizes across different individuals exists for the prostate segmentation task.
\subsection{ViT-based Medical Image Segmentation}
Motivated by the Transformer's capacity to capture long-range global relations, Dosovitskiy et al. \cite{Dosovitskiy2021AnII} proposed the ViT by formulating image classification as a sequence prediction task over image patch sequences, thereby capturing long-term dependencies within the input image.
TransUNet \cite{chen2021transunet} is among the first to successfully use ViT for medical image segmentation, using pre-trained weights from image classification. Convolutional layers serve as the main body for feature extraction, and transformers are used to capture the long-range global context. Several subsequent studies \cite{valanarasu2021medical,zhang2021transfuse,chen2021transattunet} followed a similar route; however, a few layers of transformers are not adequate to learn the long-term dependencies with precise spatial information in the hierarchical feature maps.
To address the above issue, researchers have introduced self-attention into convolution operations \cite{xie2021cotr,zhou2021nnformer,gao2021utnet} to perform medical image segmentation. For example, Gao et al. \cite{gao2021utnet} integrated self-attention into a CNN to enhance medical image segmentation. Zhou et al. \cite{zhou2021nnformer} proposed a hybrid model with interleaved convolution and self-attention in both the encoder and decoder modules. Although these methods achieve improved performance, they require carefully designed convolution and self-attention modules, limiting the scalability of developing more advanced transformer architectures. Recently, the Swin Transformer \cite{liu2021swin} demonstrated linear complexity in self-attention calculation while learning long-range context and generating hierarchical feature maps simultaneously. Based on this idea, SwinUNet \cite{cao2021swin} utilized hierarchical transformer blocks to construct the encoder and decoder within a U-Net-like architecture, DS-TransUNet \cite{lin2022ds} added a parallel encoder to process the input at a different resolution, and SwinUNETR \cite{tang2022self} used pre-training on a large medical image data set. Nonetheless, these methods learn global long-range dependencies through fine-grained interactions only, without exploring how to capture long-range visual dependencies efficiently for dense medical image segmentation prediction tasks.
\section{Methods}
\begin{figure*}[hbtp]
\centering
\captionsetup{font=small}
\includegraphics[scale=0.38]{figures/focal_SA_with_exp.pdf}
\caption{An illustration of focal self-attention mechanism for medical image segmentation. (A) The focal self-attention mechanism, and (B) An example of perfect boundary matching using focal self-attention for prostate CT image segmentation task.}\label{fig:focal_concept}
\end{figure*}
\begin{figure*}[hbtp]
\centering
\captionsetup{font=small}
\includegraphics[scale=0.42]{figures/focal_seg_arch.pdf}
\caption{(A) The architecture of FocalUNETR as the main task for segmentation mask generation, and (B) The auxiliary task designed for mitigating the unclear boundary issue in CT images.
}\label{fig:focal_unetr}
\end{figure*}
\subsection{FocalUNETR}
Our FocalUNETR architecture shares the multi-scale design of \cite{hatamizadeh2022unetr,tang2022self}, which allows us to obtain hierarchical feature maps at different stages. As shown in Figure \ref{fig:focal_unetr}A, the input to the encoder is $\mathcal{X} \in \mathcal{R}^{C \times H \times W}$, where $H, W$ are the spatial height and width and $C$ is the number of channels. We first use patches of resolution $(H', W')$ to split the input into a sequence of tokens of dimension $\lceil \frac{H}{H'} \rceil \times \lceil \frac{W}{W'} \rceil$ and project them into an embedding space of dimension $D$. The self-attention is computed at two focal levels, as shown in Figure \ref{fig:focal_concept}: fine-grain and coarse-grain. Instead of using all tokens at the fine-grain level, we attend to the fine-grain tokens only locally, but to the coarse-grain (summarized) ones globally. Therefore, it can cover as many regions as standard self-attention to enable long-range self-attention, yet with much lower computational cost due to the much smaller number of surrounding (summarized) tokens. In practice, we perform focal self-attention at the window level to efficiently extract the surrounding tokens for each query position. Given a feature map $x \in \mathcal{R}^{d \times H''\times W''}$ with spatial size $H''\times W''$ and $d$ channels, we first partition it into a grid of windows of size $s_w\times s_w$. Then, we find the surroundings for each window rather than for individual tokens. In the following, we elaborate on the window-wise focal self-attention.
For window-wise focal self-attention \cite{yang2021focal} (Figure \ref{fig:focal_concept}A), there are three terms $\{L, s_w, s_r\}$. The number of focal levels $L$ is the number of granularity levels at which we extract tokens for focal self-attention. We show two focal levels (fine and coarse) in Figure \ref{fig:focal_concept}B. The focal window size $s_w^l$ is the size of the sub-window over which we obtain the summarized tokens at level $l \in \{1, \dots, L\}$. The focal region size $s_r^l$ is the number of sub-windows, horizontally and vertically, in the attended region at level $l$. The focal self-attention module proceeds in two main steps: sub-window pooling and attention computation. In the sub-window pooling step, an input feature map $x \in \mathcal{R}^{d \times H''\times W''}$ is split into a grid of sub-windows of size $\{s_w^l, s_w^l\}$, followed by a simple linear layer $f_p^l$ to pool the sub-windows spatially. The pooled feature maps at different levels $l$ provide rich information at both fine grain and coarse grain, where $x^l = f_p^l(\hat{x}) \in \mathcal{R}^{d \times \frac{H''}{s_w^l} \times \frac{W''}{s_w^l}}$ and $\hat{x} = \text{Reshape}(x) \in \mathcal{R}^{(d \times \frac{H''}{s_w^l} \times \frac{W''}{s_w^l}) \times (s_w^l \times s_w^l)}$. After obtaining the pooled feature maps $\{x^l\}_1^L$, we calculate the query at the first level and the keys and values for all levels using three linear projection layers $f_q$, $f_k$ and $f_v$: $$ Q = f_q(x^1), \quad K = \{K^l\}_1^L=f_k(\{x^1,\dots, x^L\}), \quad V = \{V^l\}_1^L=f_v(\{x^1,\dots, x^L\}).$$
For the queries inside the $i$-th window, $Q_i \in \mathcal{R}^{d \times s_w \times s_w}$, we extract the ${s_r^l \times s_r^l}$ keys and values from $K^l$ and $V^l$ around the window in which the query lies, and then gather the keys and values from all $L$ levels to obtain $K_i = \{K_1, \dots,K_L\} \in \mathcal{R}^{s \times d}$ and $V_i = \{V_1, \dots,V_L\} \in \mathcal{R}^{s \times d}$, where $s = \sum_{l=1}^{L} (s_r^l)^2$. Finally, a relative position bias is added to compute the focal self-attention for $Q_i$ by $$\text{Attention}(Q_i, K_i, V_i) = \text{Softmax}\left(\frac{Q_i K_i^T}{\sqrt{d}}+B\right)V_i,$$ where $B = \{B^l\}_1^L$ is the learnable relative position bias \cite{yang2021focal}.
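The pooling-then-attention idea above can be sketched in a few lines of numpy. This is a simplified illustration, not the paper's implementation: only one coarse level is shown, mean pooling stands in for the learnable linear layer $f_p^l$, and the relative position bias and the fine-grain local branch are omitted.

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
d, H, W, sw = 8, 16, 16, 4             # channels, feature-map size, window size
x = rng.standard_normal((H, W, d))

# Sub-window pooling at one coarse level: mean over each sw x sw sub-window
# (a stand-in for the learnable linear layer f_p of the paper)
pooled = x.reshape(H // sw, sw, W // sw, sw, d).mean(axis=(1, 3))

# Queries: the fine-grain tokens of window (0, 0);
# keys/values: all pooled (summarized) tokens -> coarse-grain global attention
q = x[:sw, :sw].reshape(-1, d)          # (sw*sw, d)
kv = pooled.reshape(-1, d)              # ((H/sw)*(W/sw), d)
attn = softmax(q @ kv.T / np.sqrt(d))   # attention over far fewer tokens
out = attn @ kv                         # attended output for the window

assert out.shape == (sw * sw, d)
assert np.allclose(attn.sum(axis=1), 1.0)
```

Note that each query attends to only $(H/s_w)(W/s_w) = 16$ summarized tokens here, rather than all $256$ fine-grain tokens, which is the source of the computational saving.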
The encoder uses a patch size of $2 \times 2$ with a feature dimension of $2 \times 2 \times 1 = 4$ (\textit{i.e.}, a single-channel CT input) and a $D$-dimensional embedding space (e.g., 64). The overall architecture of the encoder consists of four stages of focal transformer blocks. Between every two stages, a patch merging layer is used to reduce the resolution by a factor of 2. The hierarchical representations of the encoder at different stages are used for multi-scale feature extraction in downstream applications such as CT-based prostate segmentation.
Using skip connections, the FocalUNETR encoder (Figure \ref{fig:focal_unetr}A) connects to a CNN-based decoder at each resolution to build a \qq{U-shaped} network for downstream applications such as segmentation. We then concatenate the encoder output with the processed input volume features and feed them into a residual block, followed by a final $1 \times 1$ convolutional layer with a softmax activation function, which produces the required number of class-based probabilities.
\subsection{Multi-Task Learning}
For the main task of mask prediction (Figure \ref{fig:focal_unetr}), we use a combination of Dice loss and Cross-Entropy loss to evaluate the pixel-wise agreement between the prediction and the ground truth. The objective function for the main segmentation head is $$L_{seg} = L_{dice} (\hat{p}_i, G) + L_{ce}(\hat{p}_i, G),$$ where $\hat{p}_i$ and $G$ respectively denote the prediction probabilities from the main task and the ground truth mask given an input image $i$. $\hat{p}_i$ is given by the main task as $\hat{p}_i= \text{FocalUNETR}(\mathcal{X}, w)$, where $w$ denotes the weights of our model.
To better capture the unclear boundary for the CT-based medical segmentation task,
we design an auxiliary task that predicts a boundary-aware label alongside the main head that generates the segmentation mask. We attach convolution layers to the decoder as the reconstruction head (Figure \ref{fig:focal_unetr}B). The boundary-aware labels are generated by considering the pixels near the contour as sub-ground truths. Inspired by \cite{ma2020distance,he2021hf}, we formulate each contour point and its surrounding pixels into a Gaussian distribution with a kernel of $\sigma$ (here, $\sigma = 1.6$). A soft label heatmap in the form of a $Heatsum$ function \cite{he2021hf} can then be generated, and we utilize it as a regression task trained by minimizing the mean-squared error, instead of treating it as a single-pixel boundary segmentation problem. Given the ground truth contour $G_i^C$ induced from the segmentation mask of an input image $i$, the reconstructed output probability is denoted as $\hat{p}_i^C$. A regression loss is used for this task: $$ L_{reg} = \frac{1}{N} \sum_i ||\hat{p}_i^C - G_i^C||_2, $$ where $N$ is the total number of images. This auxiliary task is trained simultaneously with the main segmentation task.
Multi-task learning is formulated as a main task regularized by auxiliary tasks. The overall loss function is a combination of $L_{seg}$ and $L_{reg}$:
$$ L = \lambda_1 L_{seg} + \lambda_2 L_{reg}, $$ where $\lambda_1$ and $\lambda_2$ are two hyper-parameters of weighting the mask prediction loss and contour regression loss. After tuning hyper-parameters with grid-search, we find the optimal setting of $\lambda_1 = \lambda_2 = 0.5.$
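The construction of the boundary-aware soft label can be sketched as follows. This is an illustration rather than the paper's exact $Heatsum$ implementation: here the contour is taken to be the foreground pixels with a background 4-neighbour, and the heatmap is a Gaussian of the distance to the nearest contour pixel with $\sigma = 1.6$.

```python
import numpy as np

def boundary_heatmap(mask, sigma=1.6):
    """Soft boundary label: Gaussian of the distance to the contour.
    Contour pixels are foreground pixels with at least one background
    4-neighbour (brute-force distance; fine for small illustrative masks)."""
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    contour = mask & ~interior
    ys, xs = np.nonzero(contour)
    if len(ys) == 0:
        return np.zeros(mask.shape, dtype=float)
    gy, gx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    d2 = ((gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2).min(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True                 # a square "organ"
heat = boundary_heatmap(mask)
assert heat.max() == 1.0 and heat[4, 4] == 1.0  # peaks exactly on the contour
assert heat[0, 0] < 0.01                        # decays away from the contour
```

In training, this heatmap plays the role of $G_i^C$ in $L_{reg}$, while the binary mask itself feeds $L_{seg}$.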
\section{Experiments}
\subsection{Methods for Comparison}
We compare the performance of FocalUNETR with multiple state-of-the-art segmentation models. Among FCN-based models: UNet \cite{ronneberger2015u} builds on top of fully convolutional networks with a U-shaped architecture to capture context information. ResUNet is similar to UNet in architecture but uses residual blocks as its building blocks. UNet++ \cite{zhou2018unet++} is a variant of UNet that introduces nested, dense skip connections in the encoder and decoder sub-networks, which reduce the semantic gap between the feature maps of the encoder and decoder sub-networks for better feature fusion. Attention UNet \cite{oktay2018attention} introduces an attention gating module to UNet; the attention gates learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. We implement all models in a 2D setting due to the large variance of slice spacing across different CT cases.
For Transformer-based models: TransUNet \cite{chen2021transunet} is an early attempt to apply Transformers to medical image segmentation. It uses R50-ViT as the encoder and employs a UNet decoder for 2D medical image segmentation. Specifically, the image is first fed to the ResNet50 backbone to obtain the feature map ($16\times$ down-sampled), which is then processed by ViT for further modeling. SwinUNet \cite{cao2021swin} is a pure Transformer model for 2D medical image segmentation, which uses Swin Transformer blocks in a UNet-like encoder-decoder structure. It first divides images into $4 \times 4$ patches and then projects the patches into token embeddings. The tokens are processed by interleaved window-based attention and shifted-window-based attention modules. The window-based attention computes attention locally and propagates information across windows by shifting the windows.
\subsection{Experiment Setup}
We systematically evaluate the performance of FocalUNETR on a large in-house CT data set with 400 cases for prostate gland segmentation.
We randomly split this dataset into 280, 40, and 80 cases for the training, validation, and test sets. In data pre-processing, we resample all images to a spatial resolution of $1.0 \times 1.0 \times 1.5$ $mm^3$. A $128 \times 128 \times 64$ voxel patch at the center of each image is cropped for training. A pixel-wise linear transformation is applied to map HU values to intensity levels between 0 and 1. We train all models but TransUNet (whose ViT is pretrained on ImageNet) from scratch for 100 epochs with a batch size of 24, and use an exponential learning-rate scheduler with a base learning rate of 0.01. We use the SGD (stochastic gradient descent) optimizer on an NVIDIA A100 GPU, with momentum and weight decay set to 0.9 and 1e-4, respectively. Data augmentation, including random rotation and flipping, is applied on the fly during training to alleviate overfitting. All images are resampled with bi-linear interpolation to $224\times 224$ before entering the models.
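The pixel-wise linear HU normalization can be sketched as below. The clipping window bounds are illustrative assumptions (a generic soft-tissue window); the paper only states that a linear map to $[0, 1]$ is applied.

```python
import numpy as np

def normalize_hu(img, hu_min=-200.0, hu_max=250.0):
    """Clip HU values to a window and linearly rescale to [0, 1].
    The (hu_min, hu_max) window is an illustrative choice."""
    img = np.clip(img, hu_min, hu_max)
    return (img - hu_min) / (hu_max - hu_min)

scan = np.array([-1000.0, -200.0, 25.0, 250.0, 3000.0])  # sample HU values
out = normalize_hu(scan)
assert out.min() == 0.0 and out.max() == 1.0
assert out[2] == 0.5   # 25 HU sits at the middle of the window
```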
\subsection{Evaluation Metrics}
We use Dice score and 95\% Hausdorff Distance (HD) to evaluate the accuracy of segmentation in our experiments. The Dice similarity coefficient (DSC) evaluates the overlap of the predicted
and ground truth segmentation map:
$$ DSC = \frac{2|P \cap G|}{|P| + |G|}, $$
where $P$ indicates the predicted segmentation map and $G$ denotes the ground
truth. A $\mathrm{DSC}$ of 1 indicates a perfect segmentation while 0 indicates no
overlap at all. Hausdorff distance (HD) measures the largest symmetrical distance between two
segmentation maps:
$$ d_{H}(P,G)=\max\{{\sup_{p\in P} \inf_{g\in G}{d(p, g)}, \sup_{g\in G}
\inf_{p\in P}{d(p, g)} }\}, $$
where $d(\cdot)$ represents the Euclidean distance, $\sup$ and $\inf$
denote supremum and infimum, respectively. We employ 95$\%$ HD to eliminate the
impact of a very small subset of the outliers.
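The two metrics above can be sketched for binary 2D masks as follows. This is a brute-force illustration; production code would use an optimized distance transform for the Hausdorff computation.

```python
import numpy as np

def dice(pred, gt):
    # 2|P ∩ G| / (|P| + |G|)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance (brute force)."""
    p = np.argwhere(pred).astype(float)
    g = np.argwhere(gt).astype(float)
    d = np.sqrt(((p[:, None, :] - g[None, :, :]) ** 2).sum(-1))
    d_pg = d.min(axis=1)   # each pred point to its nearest gt point
    d_gp = d.min(axis=0)   # each gt point to its nearest pred point
    return max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))

gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
pred = np.zeros((32, 32), bool); pred[9:25, 9:25] = True  # shifted by 1 pixel

assert abs(dice(pred, gt) - (2 * 15 * 15) / (256 + 256)) < 1e-12
assert hd95(pred, gt) <= np.sqrt(2) + 1e-9  # at most the diagonal shift
```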
\begin{table}[]
\centering
\caption{Quantitative performance comparison on the in-house prostate CT datasets in terms of
average Dice similarity coefficient, and average 95$\%$ Hausdorff
distance.}
\bigskip
\label{main_table1}
\addtolength{\tabcolsep}{5pt}
\begin{tabular}{|c|c|c|}
\hline
Methods & Avg DSC (\%) & Avg 95HD (mm) \\
\hline
\hline
UNet & 84.21 $\pm$ 1.21 & 6.73 $\pm$ 0.89
\\ \hline
ResUNet & 84.82 $\pm$ 1.33 & 6.89 $\pm$ 1.36
\\ \hline
UNet++ & 85.54 $\pm$ 1.62 & 6.53 $\pm$ 1.12
\\ \hline
Attn UNet & 85.62 $\pm$ 0.98 & 6.58 $\pm$ 0.98
\\ \hline
TransUNet & 85.12 $\pm$ 2.32 & 6.35 $\pm$ 1.32
\\ \hline
SwinUNet & 86.31 $\pm$ 1.72 & 6.10 $\pm$ 1.28
\\ \hline
FocalUNETR (ours) & \textbf{87.67 $\pm$ 1.42} & \textbf{5.62 $\pm$ 1.17}
\\ \hline \hline
FocalUNETR + Multi-Task & \textbf{88.15 $\pm$ 1.39} & \textbf{5.53 $\pm$ 1.01}
\\ \hline
\end{tabular}
\end{table}
\begin{figure*}[hbtp]
\centering
\captionsetup{font=small}
\includegraphics[scale=0.6]{figures/quality_result_1.pdf}
\caption{Qualitative results of prostate segmentation by comparing our FocalUNETR with two other representative segmentation models. For fair comparison, all methods are trained with only main segmentation task.}\label{fig:quality_r_1}
\end{figure*}
\subsection{Experimental Results}
All methods are evaluated using two metrics, i.e., Dice and 95\% Hausdorff Distance (95HD [mm]). Table \ref{main_table1} reports the segmentation results on the 400-case CT data set, comparing FocalUNETR with both FCN-based and transformer-based models. Both metrics identify the superiority of our FocalUNETR model over the others. Specifically, UNet and its variants demonstrate relatively low performance due to their limited capability of capturing long-range global context. TransUNet and SwinUNet, built with ViT and Swin Transformers, achieve better performance than the UNets. However, these two models still struggle to learn local information and global context simultaneously and show subpar performance compared with our focal transformer based FocalUNETR. Furthermore, using the multi-task training strategy, our FocalUNETR model demonstrates even better performance on CT images with unclear boundaries.
Figure \ref{fig:quality_r_1} shows qualitative results for prostate segmentation, comparing our FocalUNETR with two representative models, UNet and TransUNet. All methods perform well on easy cases, with FocalUNETR slightly ahead. On more challenging cases (irregular shape, unclear boundary, small size), FocalUNETR performs substantially better than the others, and it is also less likely to produce false positives on CT images without a foreground mask.
\subsection{Ablation Study}
During the training of FocalUNETR, we find that the embedding dimension ($D$) has a relatively large impact on the final segmentation accuracy. Increasing $D$ from 48 to 64 (which introduces more parameters) improves performance on both the DSC and 95HD metrics (Table \ref{table_embeddingD}). We do not explore more complex settings of FocalUNETR due to limited computing resources.
\begin{table}
\centering
\caption{The effect of embedding space dimension on segmentation performance.}
\bigskip
\label{table_embeddingD}
\addtolength{\tabcolsep}{5pt}
\begin{tabular}{|c|c|c|c|}
\hline
$D$ & Avg DSC (\%) & Avg 95HD (mm) & Param (M) \\
\hline
\hline
48 & 86.53 $\pm$ 1.65 & 5.95 $\pm$
1.29 & 25.5 \\
\hline
64 & \textbf{87.67 $\pm$ 1.42} & \textbf{5.62 $\pm$
1.17} & 57.8 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
To better capture both local visual features and global contexts in medical images, we
present a novel focal-transformer-based segmentation architecture, FocalUNETR.
The learned hierarchical features are effective for the main segmentation task
and the auxiliary boundary-aware regression task. Furthermore, the boundary-aware labels
help mitigate unclear boundaries and the large variation in prostate sizes and shapes.
Extensive experiments on a large data set with 400 prostate CT cases show
that our FocalUNETR generally outperforms state-of-the-art methods
on the prostate segmentation task.
\bibliographystyle{splncs04}
\section{C-RLSVI Algorithm}\label{sec:algorithm main}
The major goal of this paper is to improve the regret bounds of TS-based algorithms in the tabular setting. In contrast to the fixed bonus terms used in the optimism-in-the-face-of-uncertainty (OFU) approach, TS methods~\citep{agrawal2013thompson,abeille2017linear,russo2019worst,zanette2020frequentist} facilitate exploration by injecting random perturbations large enough that optimism holds with at least a constant probability. However, the range of the induced value function can easily grow unbounded, and this forms a key obstacle in the previous analysis \cite{russo2019worst}. To address this issue, we apply a clipping technique that is common in the RL literature~\citep{azar2017minimax,zanette2020frequentist,yang2019reinforcement}.\\
We now formally introduce our algorithm C-RLSVI, shown in Algorithm \ref{alg: RLSVI}. C-RLSVI follows a similar approach as RLSVI in~\citet{russo2019worst} and proceeds in episodes. In episode $k$, the agent first samples $Q_h^{\text{pri}}$ from the prior $\mathcal N(0,\frac{\beta_k}{2}I)$ and adds random perturbations to the data \textbf{(lines 3-10)}, where $\mathcal{D}_{h}=\{(s_{h}^{l}, a_{h}^{l}, r_{h}^{l}, s_{h+1}^{l} ) : l <k \}$ for $h<H$ and $\mathcal{D}_{H}=\{(s_{H}^{l}, a_{H}^{l}, r_{H}^{l}, \emptyset ) : l <k \}$. The injected Gaussian perturbation (noise) is essential for exploration, and we set $\beta_k=H^3S\log(2HSAk)$. Later we will see that the magnitude of $\beta_k$ plays a crucial role in the regret bound; it is tuned to guarantee optimism with a constant probability
in Lemma~\ref{lem: Optimism Main}. Given the history data, the agent then performs the following procedure from timestep $H$ back to timestep 1: (i) solve a regularized least-squares regression \textbf{(line 13)}, where $\mathcal{L}(Q \mid Q', \mathcal{D})=
\sum_{(s,a,r,s')\in \mathcal{D}} (Q(s,a) - r- \max_{a'\in \mathcal{A}} Q'(s',a'))^2$, and (ii) clip the Q-value function to obtain $\overline Q_k$ \textbf{(lines 14-19)}. Finally, the clipped Q-value function $\overline Q_k$ is used to extract the greedy policy $\pi^k$, with which the agent rolls out a trajectory \textbf{(lines 21-22)}.\\
\begin{algorithm}[ht]
\begin{algorithmic}[1]
\STATE \textbf{input:} variance $\beta_{k}$ and clipping threshold $\alpha_k$;
\FOR{episode $k=1,2,\ldots,K$ }
\FOR{timestep $h=1,2, \ldots, H$}
\STATE Sample prior $Q^{\text{pri}}_h \sim \mathcal N(0, \frac{\beta_k}{2} I)$;
\STATE $\dot{\mathcal{D}}_{h} \leftarrow \{\}$;
\FOR{$(s,a,r,s')\in \mathcal{D}_h$}
\STATE Sample $w\sim \mathcal N(0,\beta_k/2)$;
\STATE $\dot{\mathcal{D}}_h \leftarrow \dot{\mathcal{D}}_h \cup \{(s,a,r+w,s')\}$;
\ENDFOR
\ENDFOR
\STATE Define terminal value $\overline Q_{H+1,k}(s,a)\leftarrow 0 \quad \forall s,a$;
\FOR{timestep $h = H,H-1,\ldots , 1$}
\STATE $\hat{Q}_{h}^k \leftarrow \argmin_{Q\in \mathbb{R}^{SA}} \left[\mathcal{L}(Q \mid \overline Q_{h+1,k}, \dot{\mathcal{D}}_{h}) + \|Q-Q^{\text{pri}}_h \|_2^2\right]$;
\STATE \textsl{(Clipping)}
$\forall(s,a)$\\
\IF{$n^k(h,s,a) > \alpha_k$}
\STATE $\overline{Q}_{h,k}(s,a) = \hat{Q}_h^k(s,a)$;
\ELSE
\STATE $\overline{Q}_{h,k}(s,a) = H-h+1$;
\ENDIF
\ENDFOR
\STATE Apply greedy policy ($\pi^k)$ with respect to $ (\overline{Q}_{1,k}, \ldots \overline{Q}_{H,k})$ throughout episode;
\STATE Obtain trajectory $s_{1}^k,a^k_1,r^{k}_1,\ldots s^k_{H}, a^k_H, r^k_H$;
\ENDFOR
\end{algorithmic}
\caption{\textsc{C-RLSVI}}\label{alg: RLSVI}
\end{algorithm}
C-RLSVI as presented is a model-free algorithm, which can be easily extended to more general settings and achieves computational efficiency~\citep{osband2016deep,zanette2020frequentist}. When clipping does not happen, it also has an equivalent model-based interpretation~\citep{russo2019worst}, which leverages the equivalence between running Fitted Q-Iteration~\citep{geurts2006extremely, chen2019information} with batch data and using the batch data to first build an empirical MDP and then conduct planning. In our later analysis, we will utilize the following property (Eq \ref{eq: blr}) of Bayesian linear regression \citep{russo2019worst,osband2019deep} for \textbf{line 13}:
\begin{align}
\label{eq: blr}
\hat Q_h^k(s,a)|\overline Q_{h+1,k}&\sim
\mathcal{N}\left(\hat R^k_{h,s,a}+\sum_{s'\in \mathcal{S}}\hat P^k_{h,s,a}(s')\max_{a'\in\mathcal{A}} \overline Q_{h+1,k}(s',a'),\ \frac{\beta_k}{2(n^k(h,s,a)+1)}\right)\nonumber\\
&\sim\hat R^k_{h,s,a}+\sum_{s'\in \mathcal{S}}\hat P^k_{h,s,a}(s')\max_{a'\in\mathcal{A}} \overline Q_{h+1,k}(s',a')+w^k_{h,s,a},
\end{align}
where the noise term $w^k\in\mathbb{R}^{HSA}$ satisfies $w^k(h,s,a)\sim \mathcal{N}(0,\sigma_k^2(h,s,a))$ and $\sigma_k^2(h,s,a)=\frac{\beta_k}{2(n^k(h,s,a)+1)}$. In terms of notation, we denote $\overline V_{h,k}(s)=\max_a \overline Q_{h,k}(s,a)$.\\
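The distributional property in Eq (\ref{eq: blr}) can be checked numerically for a single $(s,a)$ pair: perturbing each of the $n$ regression targets with $\mathcal N(0,\beta_k/2)$ noise, drawing the prior term from $\mathcal N(0,\beta_k/2)$, and solving the resulting one-dimensional ridge problem yields a Gaussian whose mean is the empirical Bellman backup (with the $(n+1)$-normalization used for the empirical MDP) and whose variance is $\beta_k/(2(n+1))$. A minimal Monte Carlo sketch, with illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20, 4.0
y = rng.uniform(0.0, 1.0, size=n)          # fixed targets r + max_a' Qbar(s', a')

# One draw of Q_hat: perturb every target with N(0, beta/2) noise, draw the
# prior term from N(0, beta/2), and solve
#   argmin_Q  sum_i (Q - y_tilde_i)^2 + (Q - q_pri)^2
# whose minimizer is (sum_i y_tilde_i + q_pri) / (n + 1).
N = 200_000
noise = rng.normal(0.0, np.sqrt(beta / 2), size=(N, n))
q_pri = rng.normal(0.0, np.sqrt(beta / 2), size=N)
draws = ((y + noise).sum(axis=1) + q_pri) / (n + 1)

print(draws.mean(), y.sum() / (n + 1))     # mean matches the (n+1)-normalized backup
print(draws.var(), beta / (2 * (n + 1)))   # variance matches beta / (2(n+1))
```

The empirical mean and variance of `draws` match the two parameters in Eq (\ref{eq: blr}) up to Monte Carlo error.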
Compared with RLSVI in~\citet{russo2019worst}, we introduce a clipping technique to handle abnormal values of the Q-value function. C-RLSVI uses a simple one-phase clipping, and the threshold $\alpha_k=4H^3S\log\del{2HSAk}\log\del{40SAT/\delta}$ is designed to guarantee the boundedness of the value function. Clipping is the key step that allows us to introduce a new analysis compared to~\citet{russo2019worst} and thereby obtain a high-probability regret bound. As discussed in~\citet{zanette2020frequentist}, we emphasize that clipping also hurts the optimism obtained by simply adding Gaussian noise. However, clipping only happens at an early stage of visiting each $(h,s,a)$ tuple. Intuitively, once $(h,s,a)$ has been visited a large number of times, its estimated Q-value is rather accurate and concentrates around the true value (within $[0, H-h+1]$), so clipping no longer takes place. Another effect of clipping is that the Q-value function is optimistic during the initial phase of exploration, since $Q^*_h \le H-h+1$; however, this is not the crucial property we gain from enforcing clipping. Although clipping slightly breaks the Bayesian interpretation of RLSVI~\citep{russo2019worst,osband2019deep}, it is easy to implement empirically and, as we will show, does not introduce a dominant term in the regret bound.
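The clipping step itself (lines 14-19 of Algorithm \ref{alg: RLSVI}) is a one-line operation. A hypothetical tabular sketch of our own (with `counts` holding the visitation counts $n^k(h,s,a)$ for a fixed timestep $h$):

```python
import numpy as np

def clip_q(q_hat, counts, h, H, alpha_k):
    """Keep the regression estimate only where (s, a) was visited more than
    alpha_k times; otherwise fall back to the trivial upper bound H - h + 1."""
    return np.where(counts > alpha_k, q_hat, float(H - h + 1))
```

For example, with `counts = [0, 10]` and `alpha_k = 5`, only the well-visited entry keeps its regression estimate; the under-visited one is replaced by the optimistic constant $H-h+1$.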
\section{Events and Concentration Lemmas}\label{sec: events}
\section{Proof of Optimism}\label{sec: optimism}
Optimism is required because it is used to bound the pessimism term in the regret calculation. We only need the value estimate at timestep 1 of each episode $k$ to be optimistic. The following proof is adapted from \citet{zanette2020frequentist, russo2019worst}.
\begin{lemma}[Optimism with a constant probability, restatement of Lemma \ref{lem: Optimism Main}]\label{lem: Optimism}
Conditioned on history $\mathcal H^{k-1}_H$, we have $$\mathbb{P}\left(\overline{V}^{}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1)\,|\mathcal{C}_k\right) \geq \Phi(-\sqrt 2),$$ where $\mathcal{C}_k$ refers to the event that $\hat M^k\in \mathcal{M}^k$, where $\mathcal M^k$ is the confidence set defined in Eq (\ref{eq: confidence interval}).
In addition, when $0<\delta < 4\Phi(-\sqrt 2)$, we have
$$\mathbb{P}\left(\overline{V}^{}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1)\,|\mathcal{G}_k\right) \geq \Phi(-\sqrt 2)/2.$$
\end{lemma}
\begin{proof}
The analysis is valid for any episode $k$, hence we drop $k$ from the notation in this proof.
For the first result, we condition all of the discussion on the event $\mathcal C_k$. Let $(s_1,\cdots,s_H)$ be the random sequence of states drawn by the policy $\pi^*$ (the optimal policy under the true MDP $M$) in the estimated MDP $\overline{M}$, and let $a_h=\pi^*_h(s_h)$. By the property of Bayesian linear regression and the Bellman equation, for any $s_h$ (more precisely, any $(h,s_h,a_h)$) that is not clipped, we have
\begin{align*}
&~\overline{V}_h(s_h)-V^{*}_h(s_h)\\
\ge &~\overline{Q}_h(s_h,\pi^*_h(s_h))-Q^{*}_h(s_h,\pi^*_h(s_h))\\
=&~\hat Q_h(s_h,\pi^*_h(s_h))-Q^{*}_h(s_h,\pi^*_h(s_h))\\
=&~ \hat{R}_{h,s_h,a_h} + w_{h,s_h,a_h} + \langle \hat{P}_{h,s_h,a_h} \, , \, \overline{V}_{h+1} \rangle -R_{h,s_h,a_h} -\langle P_{h,s_h,a_h} \, , \, V^{*}_{h+1} \rangle \\
=&~ \hat{R}_{h,s_h,a_h} -R_{h,s_h,a_h} +\langle \hat{P}_{h,s_h,a_h} \, , \,\overline{V}_{h+1} - V^{*}_{h+1} \rangle+\langle \hat{P}_{h,s_h,a_h}-P_{h,s_h,a_h} \, , \, V^{*}_{h+1} \rangle + w_{h,s_h,a_h}\\
\overset{a}{\ge}&~ \langle \hat{P}_{h,s_h,a_h} \, , \,\overline{V}_{h+1} - V^{*}_{h+1} \rangle + w_{h,s_h,a_h} - \sqrt{e(h,s_h, a_h)},
\end{align*}
where step $(a)$ is from the definition of the confidence sets (Definition~\ref{def:confidence set}).
For any $s_h$ that is clipped, we have
\begin{align*}
\overline{V}_h(s_h)-V^{*}_h(s_h)=H-h+1-V^{*}_h(s_h)\ge 0.
\end{align*}
From the above one-step expansion, we see that we keep accumulating $w_{h,s_h,a_h} - \sqrt{e(h,s_h, a_h)}$ when unrolling a trajectory until clipping happens. Define $d(h,s)$ as the probability that the random sequence $(s_1,\ldots,s_H)$ satisfies $s_h=s$ and no clipping happens at $s_1,\ldots,s_{h}$. Unrolling from timestep 1 to timestep $H$ and noting $a_h=\pi^*_h(s_h)$ gives
\begin{align*}
&~\frac{1}{H} \del{\overline{V}_{1}(s_1) -V^{*}_1(s_1)}\nonumber \\
\geq&~ \frac{1}{H} \sum_{s\in\mathcal{S},1\le h \le H}d(h,s)\sbr{w_{h,s,\pi^*_h(s)} - \sqrt{e(h,s, \pi^*_h(s))}}\nonumber\\
\geq&~ \left( \sum_{s\in\mathcal{S},1\le h \le H}(d(h,s)/H)w_{h,s,\pi^*_h(s)}\right) - \sqrt{HS} \sqrt{ \sum_{s\in \mathcal{S}, 1\leq h\leq H} (d(h,s)/H)^2 e(h,s, \pi^*_h(s))}\label{eq: optimism lem 1} \\
:=&~ X(w).\nonumber
\end{align*}
The first inequality is due to the definition of $d(h,s)$, and the second is due to the Cauchy-Schwarz inequality.
Since
\[(d(h,s)/H)w_{h,s,\pi^*_h(s)}\sim\mathcal{N}\del{0,(d(h,s)/H)^2HS e(h,s, \pi^*_h(s))/2},
\]
we get
\begin{equation*}
X(w) \sim \mathcal{N}\left( - \sqrt{HS} \sqrt{ \sum_{s\in \mathcal{S}, 1\leq h\leq H} (d(h,s)/H)^2 e(h,s, \pi^*_h(s))},\, HS \sum_{s\in \mathcal{S}, 1\leq h\leq H} (d(h,s)/H)^2 e(h,s, \pi^*_h(s))/2 \right).
\end{equation*}
Converting to the standard Gaussian distribution, it follows that
\begin{align*}
\mathbb{P}\del{X(w) \geq 0}=\mathbb{P}\left(\mathcal{N}(0,1)\ge\sqrt{2}\right)= \Phi(-\sqrt{2}).
\end{align*}
Therefore $\mathbb{P}\del{\overline{V}_{1}(s_1) \ge V^{*}_1(s_1)\mid \mathcal C_k } \geq \Phi(-\sqrt 2)$.
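The conversion to a standard Gaussian can be sanity-checked numerically: for any $m>0$, a draw from $\mathcal N(-m, m^2/2)$ is nonnegative with probability exactly $\Phi(-\sqrt 2)\approx 0.0786$, independent of $m$. A small Monte Carlo sketch (the value of $m$ below is arbitrary):

```python
import math, random

random.seed(0)
m = 3.0                          # plays the role of sqrt(HS * sum (d/H)^2 e)
sigma = m / math.sqrt(2.0)       # X(w) ~ N(-m, m^2 / 2)
N = 400_000
hits = sum(random.gauss(-m, sigma) >= 0.0 for _ in range(N)) / N
phi = 0.5 * math.erfc(1.0)       # Phi(-sqrt(2)) = erfc(1)/2, about 0.0786
print(hits, phi)                 # empirical frequency matches Phi(-sqrt(2))
```

This is why the perturbation variance $\beta_k$ only needs to dominate the squared confidence widths: the mean-to-standard-deviation ratio of $X(w)$ is then a fixed constant $\sqrt 2$, giving optimism with constant probability.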
For the second part, Lemma~\ref{lem: intersection event} tells us that $P(\mathcal G_k|\mathcal C_k)\ge 1-\delta/8$. Applying the law of total probability yields
\begin{align*}
\mathbb{P}\del{\overline{V}_{1}(s_1) \ge V^{*}_1(s_1)\mid \mathcal C_k } =&~\mathbb{P}\del{\mathcal G_k|\mathcal C_k}\mathbb{P}\del{\overline{V}_{1}(s_1) \ge V^{*}_1(s_1)\mid \mathcal G_k,\mathcal C_k }+\mathbb{P}\del{\mathcal G_k^\complement|\mathcal C_k}\mathbb{P}\del{\overline{V}_{1}(s_1) \ge V^{*}_1(s_1)\mid \mathcal G_k^\complement,\mathcal C_k}\\
\le&~\mathbb{P}\del{\overline{V}_{1}(s_1) \ge V^{*}_1(s_1)\mid \mathcal G_k } + \delta/8.
\end{align*}
Therefore, we get
\begin{align*}
\mathbb{P}\del{\overline{V}_{1}(s_1) \ge V^{*}_1(s_1)\mid \mathcal G_k } \ge \Phi(-\sqrt 2)- \delta/8\ge \Phi(-\sqrt 2) /2.
\end{align*}
This completes the proof.
\end{proof}
\section{Concentration of Events}\label{sec: concentration of events}
\begin{lemma}[Bound on the confidence set, restatement of Lemma \ref{lem: confidence interval lemma main}]\label{lem: confidence interval lemma}
$\sum_{k=1}^{\infty} \mathbb{P}\left(\mathcal{C}_k^\complement \right)=\sum_{k=1}^{\infty} \mathbb{P}(\hat{M}^k \notin \mathcal{M}^k ) \leq 2006HSA$.
\end{lemma}
\begin{proof}
Similarly to \cite{russo2019worst}, we construct a ``stack of rewards'' as in \cite{lattimore2020bandit}. For every tuple $z=(h,s,a)$, we generate two i.i.d.\ sequences of random variables $r_{z,n}\sim R_{h,s,a}$ and $s_{z,n}\sim P_{h,s,a}(\cdot)$. Here $r_{(h,s,a),n}$ and $s_{(h,s,a),n}$ denote the reward and the state transition generated the $n$th time action $a$ is played in state $s$ at timestep $h$. Set
\[
Y_{z,n} = r_{z,n} + V_{h+1}^*(s_{z,n}) \qquad n\in \mathbb{N}.
\]
They are i.i.d., with $Y_{z,n} \in [0,H]$ since $\| V_{h+1}^*\|_{\infty } \leq H-1$, and satisfy
\[
\mathbb{E}[Y_{z,n}] = R_{h,s,a} + \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle.
\]
Now let $n=n^k(h,s,a)$. First consider the case $n > 0$. From the definition of the empirical MDP, we have
\[
\hat{R}^k_{h,s,a} + \langle \hat{P}^k_{h,s,a} \, ,\, V^*_{h+1} \rangle = \frac{1}{n+1} \sum_{i=1}^{n} Y_{(h,s,a), i}=\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-\frac{1}{n(n+1)} \sum_{i=1}^{n} Y_{(h,s,a), i}.
\]
Applying triangle inequality gives us
\begin{align*}
&~\mathbb{P}\left( \left| \hat{R}^k_{h,s,a} -R_{h,s,a}+ \langle \hat{P}^k_{h,s,a} - P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}} \right)\\ =&~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle -\frac{1}{n(n+1)} \sum_{i=1}^{n} Y_{(h,s,a), i} \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}} \right) \\
\leq &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| + \left|\frac{1}{n(n+1)} \sum_{i=1}^{n} Y_{(h,s,a), i} \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}} \right)\\
\leq &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| + \frac{1}{n+1} H \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}} \right)\\
= &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}-\frac{1}{n+1} H\right).
\end{align*}
When $n\ge 126$, we have
\begin{align*}
&~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}-\frac{1}{n+1} H\right)\\
\le &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}-\frac{H}{8}\sqrt{\frac{\log(2/\delta_n)}{2n}} \right)\\
= &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq \frac{7H}{8}\sqrt{\frac{\log(2/\delta_n)}{2n}}\right).
\end{align*}
By Hoeffding's inequality, for any $\delta_n \in (0,1)$,
\[
\mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} Y_{(h,s,a),i} - R_{h,s,a} - \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle \right| \geq \frac{7H}{8}\sqrt{\frac{\log(2/\delta_n)}{2n}} \right) \leq \sqrt[64]{2^{15}\delta_n^{49}}.
\]
For $\delta_n=\frac{1}{HSAn^2}$, a union bound over the $HSA$ values of $z=(h,s,a)$ and all possible $n\ge 126$ yields
\begin{align*}
&~\mathbb{P}\left( \bigcup_{h\in[H],s\in[S],a\in[A], n\ge126} \left\{ \left| \frac{1}{n}\sum_{i=1}^{n} Y_{(h,s,a),i} - R_{h,s,a} - \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}\right\} \right)\\
\le&~\mathbb{P}\left( \bigcup_{h\in[H],s\in[S],a\in[A], n\ge126} \left\{ \left| \frac{1}{n}\sum_{i=1}^{n} Y_{(h,s,a),i} - R_{h,s,a} - \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle \right| \geq \frac{7H}{8}\sqrt{\frac{\log(2/\delta_n)}{2n}}\right\} \right)\\
\leq&~ \sum_{s=1}^S\sum_{a=1}^A\sum_{h=1}^H\sum_{n=126}^{\infty} \sqrt[64]{2^{15}\left(\frac{1}{HSAn^2}\right)^{49}}\\
=&~ (HSA) \sum_{n=126}^{\infty} \sqrt[64]{2^{15}\left(\frac{1}{HSAn^2}\right)^{49}}\\
\le &~ 2(HSA)^{15/64} \sum_{n=1}^{\infty} \left(\frac{1}{n}\right)^{49/32}\\
\le &~ 2(HSA)^{15/64} \left(\int_{x=1}^{\infty} \left(\frac{1}{x}\right)^{49/32}dx+1\right)\\
\le &~ 6(HSA)^{15/64}.
\end{align*}
For $1\le n \le 125$, we instead have
\begin{align*}
&~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}-\frac{1}{n+1} H \right)\\
\le &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}-\frac{H}{2}\sqrt{\frac{\log(2/\delta_n)}{2n}} \right)\\
= &~\mathbb{P}\left( \left|\frac{1}{n} \sum_{i=1}^{n} Y_{(h,s,a), i}-R_{h,s,a}- \langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| \geq \frac{H}{2}\sqrt{\frac{\log(2/\delta_n)}{2n}} \right).
\end{align*}
By Hoeffding's inequality, for any $\delta_n \in (0,1)$, we have
\[
\mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} Y_{(h,s,a),i} - R_{h,s,a} - \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle \right| \geq \frac{H}{2}\sqrt{\frac{\log(2/\delta_n)}{2n}} \right) \leq \sqrt[4]{8\delta_n}.
\]
For $\delta_n=\frac{1}{HSAn^2}$, a union bound over $HSA$ values of $z=(h,s,a)$ and all possible $1\le n \le 125$ gives
\begin{align*}
&~\mathbb{P}\left( \bigcup_{h\in[H],s\in[S],a\in[A],1\le n\le125} \left\{ \left| \frac{1}{n}\sum_{i=1}^{n} Y_{(h,s,a),i} - R_{h,s,a} - \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle \right| \geq H\sqrt{\frac{\log(2/\delta_n)}{2n}}\right\} \right)\\
\le &~\mathbb{P}\left( \bigcup_{h\in[H],s\in[S],a\in[A],1\le n\le125} \left\{ \left| \frac{1}{n}\sum_{i=1}^{n} Y_{(h,s,a),i} - R_{h,s,a} - \langle P_{h,s,a} \, , \, V^*_{h+1} \rangle \right| \geq \frac{H}{2}\sqrt{\frac{\log(2/\delta_n)}{2n}}\right\} \right)\\
\leq&~ \sum_{s=1}^S\sum_{a=1}^A\sum_{h=1}^H\sum_{n=1}^{125} \sqrt[4]{8\frac{1}{HSAn^2}}\\
=&~ (HSA)^{3/4} \sum_{n=1}^{125} \sqrt[4]{8\frac{1}{n^2}}\\
\le &~ 2000(HSA)^{3/4} .
\end{align*}
Combining the above two cases, we have
\begin{align*}
&~ \mathbb{P}\left( \exists (k,h,s,a) : n>0 , \left| \hat{R}^k_{h,s,a} -R_{h,s,a}+ \langle \hat{P}^k_{h,s,a} - P_{h,s,a},V^*_{h+1} \rangle \right| \geq H\sqrt{ \frac{ \log\left( 2HSA n \right) }{2 n}} \right) \\
\le &~6(HSA)^{15/64} + 2000(HSA)^{3/4} \\
\le &~ 2006 HSA.
\end{align*}
Note that by definition, when $n=n^k(h,s,a)>0$ we have \[
\sqrt{e_{k}(h,s,a)} \geq H\sqrt{ \frac{ \log\left( 2HSA n^k(h,s,a) \right) }{2n^k(h,s,a)}}
\]
and hence this concentration inequality holds with $\sqrt{e_k(h,s,a)}$ on the right hand side.
When $n=n^k(h,s,a)=0$, we have $\hat{R}^k_{h,s,a}=0$ and $\hat{P}^k_{h,s,a}(\cdot)=0$ by definition, so we trivially have
\[
\left| \hat{R}^k_{h,s,a} -R_{h,s,a}+ \langle \hat{P}^k_{h,s,a} - P_{h,s,a} \, ,\, V^*_{h+1} \rangle \right| = | R_{h,s,a} +\langle P_{h,s,a} \, ,\, V^*_{h+1} \rangle | \leq H \leq \sqrt{e_{k}(h,s,a)}.
\]
\end{proof}
\begin{lemma}[Bound on the noise]\label{lemma: noise bounds}
For $w^k(h,s,a)\sim \mathcal{N}(0,\sigma^2_k(h,s,a)),$ where $\sigma^2_k(h,s,a) = \frac{H^3S\log(2HSAk)}{2(n^k(h,s,a)+1)}$, we have that for any $k\in[K]$, the event $\mathcal{E}^w_{k}$ holds with probability at least $1-\delta/8$.
\end{lemma}
\begin{proof}
For any fixed $s,a,h,k$, the random variable $w^k(h,s,a)$ follows the Gaussian distribution $\mathcal{N}(0,\sigma^2_k)$. Therefore, the standard Gaussian tail (Chernoff) bound~(see e.g. \cite{wainwright_2019}) gives
\begin{equation*}
\mathbb{P}\left[|w^k(h,s,a)|\geq t\right]\leq 2\exp\left(-\frac{t^2}{2\sigma^2_k}\right).
\end{equation*}
Substituting the value of $\sigma^2_k$ and rearranging, with probability at least $1-\delta'$, we can write
\begin{equation*}
\envert{w^k(h,s,a)} \leq \sqrt{\frac{H^3S\log(2HSAk)\log(2/\delta')}{n^k(h,s,a)+1}}.
\end{equation*}
A union bound over all $s,a,k,h$ (i.e., over states, actions, timesteps, and episodes) implies that, with probability at least $1-\delta'$, for all $s,a,k,h$,
\begin{equation*}
\envert{w^k(h,s,a)} \leq \sqrt{\frac{H^3S\log(2HSAk)\log(2SAT/\delta')}{n^k(h,s,a)+1}}.
\end{equation*}
Setting $\delta' = \delta/8$, for any $s\in[S],a\in[A],h\in[H],k\in[K]$, we have that
\begin{equation*}
\envert{w^k(h,s,a)} \leq \sqrt{\frac{H^3S\log(2HSAk)\log(16SAT/\delta)}{n^k(h,s,a)+1}} \le \gamma_k(h,s,a).
\end{equation*}
Finally recalling the definition of $\mathcal{E}^w_{k}$, we complete the proof.
\end{proof}
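The Chernoff step of the proof above can be illustrated numerically: setting $t=\sigma\sqrt{2\log(2/\delta')}$ makes the Gaussian tail bound equal to $\delta'$, so the empirical fraction of draws exceeding $t$ falls below $\delta'$. A sketch with arbitrary $\sigma$ and $\delta'$:

```python
import math, random

random.seed(1)
sigma, delta = 2.0, 0.05
# Chernoff threshold: P(|w| >= t) <= 2 exp(-t^2 / (2 sigma^2)) = delta at this t
t = sigma * math.sqrt(2.0 * math.log(2.0 / delta))
N = 200_000
exceed = sum(abs(random.gauss(0.0, sigma)) > t for _ in range(N)) / N
print(exceed, delta)   # the empirical tail mass stays below delta
```

The bound is conservative (the true two-sided Gaussian tail at this $t$ is noticeably smaller than $\delta'$), which is exactly what makes the subsequent union bound over all $s,a,h,k$ affordable.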
\begin{lemma}[Bounds on the estimated action-value function, restatement of Lemma \ref{lem: est Q function bounded main}]\label{lem: est Q function bounded}
When the events $\mathcal{C}_k$ and $\mathcal{E}^w_k$ hold then for all $(h,s,a)$
\begin{equation*}
\left| \left(\overline{Q}^{}_{h,k} - Q^*_{h}\right)(s,a) \right| \leq H-h +1.
\end{equation*}
\end{lemma}
\begin{proof}
For simplicity, we set $\overline{Q}^{}_{H+1,k}(s,a) = Q^*_{H+1}(s,a)=0$; this is a purely virtual value introduced for the proof. The proof proceeds by backward induction over $h=H+1,H,\ldots,1$.
First, consider the base case $h=H+1$: the condition $|\overline{Q}^{}_{H+1,k}(s,a) - Q^*_{H+1}(s,a)| =0 \leq H - (H+1) +1 $ holds directly from the definition.
Now we do backward induction. Assume the following inductive hypothesis to be true
\begin{equation}\label{Eq: Good Events Q IH}
\envert{ \left(\overline{Q}^{}_{h+1,k} - Q^*_{h+1}\right)(s,a) } \leq H-h.
\end{equation}
We consider two cases:\\
\textbf{Case 1:} $n^k(h,s,a)\leq \alpha_k$\\
The Q-value is clipped and hence $\overline{Q}^{}_{h,k}(s,a) = H-h+1$. By the definition of the optimal Q-function, $0\le Q^*_{h}(s,a)\leq H-h+1$. Therefore it trivially holds that
\begin{equation*}
\left| \left(\overline{Q}^{}_{h,k} - Q^*_{h}\right)(s,a) \right| \leq H-h +1.
\end{equation*}
\textbf{Case 2:} $n^k(h,s,a) > \alpha_k$\\
In this case, clipping does not occur, so $\overline Q_{h,k}(s,a)=\hat Q_{h}^k(s,a)$. From the property of Bayesian linear regression and the Bellman equation, we have the following decomposition
\begin{align*}
&~\envert{\overline{Q}^{}_{h,k}(s,a)-Q^*_{h}(s,a)} \\
=&~ \envert{\hat{R}^k_{h,s,a} + w^k_{h, s, a} + \langle \hat{P}^k_{h,s, a} \, , \, \overline{V}^{}_{h+1,k} \rangle -R^{k}_{h, s, a} -\langle P^{k}_{h,s,a} \, , \, V^*_{h+1} \rangle } \\
\leq&~ \underbrace{\envert{\langle \hat{P}^k_{h,s, a} \, , \, \overline{V}^{}_{h+1,k} - V^*_{h+1}\rangle}}_{(1)} + \underbrace{\envert{\hat{R}^k_{h, s,a} -R^{k}_{h, s, a} +\langle \hat{P}^k_{h,s, a}- P^{k}_{h,s,a} \, , \, V^*_{h+1} \rangle}}_{(2)} + \underbrace{\envert{w^k_{h, s, a}}}_{(3)}.
\end{align*}
Term (1) is bounded by $H-h$ due to the inductive hypothesis in Eq~(\ref{Eq: Good Events Q IH}). Under the event $\mathcal{C}_k$, term (2) is bounded by $ \sqrt{e_k(h,s,a)} = H\sqrt{ \frac{ \log\left( 2HSA k \right) }{n^k(h,s,a)+1}}$. Finally, term (3) is bounded by $\gamma_k(h,s,a)$ since the event $\mathcal{E}^w_{k}$ holds. With the choice of $\alpha_k$, the sum of terms (2) and (3) is bounded by 1, since
\begin{equation*}
\frac{\sqrt{ H^2\log\left( 2HSA k \right) } +\sqrt{H^3S\log(2HSAk)L}}{\sqrt{n^k(h,s,a)}} <1.
\end{equation*}
Thus the sum of all three terms is upper bounded by $H-h+1$.
This completes the proof.
\end{proof}
\begin{lemma}[Intersection event probability]\label{lem: intersection event}
For any episode $k\in[K]$, when the event $\mathcal{C}_k$ holds (i.e. $\hat{M}^k\,\in\,\mathcal{M}^k$), the intersection event $\overline{\mathcal{E}}_k = \mathcal{E}^{w}_{k} \cap \mathcal{E}^{\overline{Q}^{}}_{k}$ holds with probability at least $1-\delta/8$. In other words, whenever the unperturbed estimated MDP lies in the confidence set (Definition~\ref{def:confidence set}), each pseudo-noise and the estimated $\overline{Q}$ function are bounded with high probability $1-\delta/8$. The analogously defined event $\tilde{\mathcal{E}}_k$ also holds with probability $1-\delta/8$ when $\mathcal{C}_k$ happens.
\end{lemma}
\begin{proof}
The event $\mathcal{E}^w_k$ holds with probability at least $1-\delta/8$ by Lemma~\ref{lemma: noise bounds}. Lemma~\ref{lem: est Q function bounded} shows that whenever $\left(\mathcal{C}_k \cap \mathcal{E}^w_k\right)$ holds, $\mathcal{E}^{\overline{Q}^{}}_{k}$ holds almost surely. Therefore, $\overline{\mathcal{E}}_k$ holds with probability at least $1-\delta/8$ whenever $\mathcal{C}_k$ holds.
\end{proof}
\section{Introduction}\label{sec: introduction main}
We study systematic exploration in reinforcement learning (RL) and the exploration-exploitation trade-off therein. Exploration in RL \cite{sutton2018reinforcement} has predominantly focused on \textit{Optimism in the Face of Uncertainty} (OFU) based algorithms. Since the seminal work of~\citet{jaksch2010near}, many provably efficient methods have been proposed, but most of them are restricted to either the tabular or the linear setting~\citep{azar2017minimax,jin2020provably}. A few papers study a more general framework but are computationally intractable~\citep{jiang2017contextual,sun2019model,henaff2019explicit}. Another broad category is Thompson Sampling (TS)-based methods \citep{osband2013more,agrawal2017optimistic}, which are believed to have more appealing empirical performance \citep{chapelle2011empirical,osband2017posterior}.\\
In this work, we investigate a TS-like algorithm, RLSVI~\citep{osband2016generalization,osband2019deep,russo2019worst,zanette2020frequentist}. In RLSVI, exploration is induced by injecting randomness into the value function. The algorithm generates a randomized value function by carefully selecting the variance of the Gaussian noise used to perturb the history data (the trajectory of the algorithm up to the current episode), and then applies the least-squares policy iteration algorithm of~\citet{lagoudakis2003least}. Thanks to its model-free nature, RLSVI is flexible enough to extend to the general function approximation setting, as shown by~\citet{osband2016deep,osband2018randomized,osband2019deep}, while imposing less computational burden.\\
We propose the \textsc{C-RLSVI}\, algorithm, which adds an initial burn-in or warm-up phase on top of the core structure of RLSVI. Theoretically, we prove that \textsc{C-RLSVI}\, achieves an $\tilde{\mathrm{O}}(H^2S\sqrt{AT})$ high-probability regret bound\footnote{$\tilde{\mathrm{O}}\del{\cdot}$ hides dependence on logarithmic factors.}.
\paragraph{Significance of Our Results}
\begin{itemize}
\item Our high-probability bound improves upon previous $\tilde{\mathrm{O}}(H^{\nicefrac{5}{2}}S^{\nicefrac{3}{2}}\sqrt{AT})$ worst-case expected regret bound of RLSVI in \citet{russo2019worst}.
\item Our high-probability regret bound matches the sharpest $\tilde{\mathrm{O}}(H^2S\sqrt{AT})$ worst-case regret bound among all TS-based methods~\citep{agrawal2017optimistic}\footnote{\citet{agrawal2017optimistic} studies weakly communicating MDPs with diameter $D$. Bounds comparable to our setting (time in-homogeneous) are obtained by augmenting their state space as $S'\rightarrow SH$ and noticing $D \ge H$.}.
\end{itemize}
\paragraph{Related Works} Taking inspiration from~\citet{azar2017minimax,dann2017unifying,zanette2019tighter,yang2019reinforcement}, we introduce clipping to avoid the propagation of unreasonable estimates of the value function. Clipping creates a warm-up effect that only affects the regret bound through constant factors (i.e., independent of the total number of steps $T$). With the help of clipping, we prove that the randomized value functions are bounded with high probability.\\
In the context of using perturbation or random-noise methods to obtain provable exploration guarantees, there have been recent works~\citep{osband2016deep,fortunato2018noisy,pacchiano2020optimism,xu1001worst,kveton2019garbage} in both the theoretical RL and bandit literature. A common theme has been to develop TS-like algorithms suitable for complex models where exact posterior sampling is impossible. RLSVI also enjoys such a conceptual connection with Thompson sampling~\citep{osband2019deep,osband2016generalization}. Related to this theme, the worst-case analysis of~\citet{agrawal2017optimistic} should be highlighted: the authors do not analyze a pure TS algorithm but propose an algorithm that samples many times from the posterior distribution to obtain an optimistic model. In comparison, \textsc{C-RLSVI}\, does not require such a strong optimism guarantee.\\
Our results are not optimal compared with the $\mathrm{\Omega}(H\sqrt{SAT})$ lower bound in~\citet{jaksch2010near}\footnote{The lower bound is translated to the time-inhomogeneous setting.}. The gap of $\sqrt{SH}$ is sometimes attributed to the additional cost of exploration in TS-like approaches~\citep{abeille2017linear}. Whether this gap can be closed, at least for RLSVI, remains an interesting open question. We hope our analysis serves as a building block towards a deeper understanding of TS-based methods.
\section{Main Result}\label{sec: main results main}
In this section, we present our main result: high-probability regret bound in Theorem~\ref{thm: high probability regret main}.
\begin{thm}\label{thm: high probability regret main}
For $0<\delta < 4\Phi(-\sqrt 2)$, \textsc{C-RLSVI}\, enjoys the following high-probability regret upper bound, with probability $1-\delta$,
\begin{align*}
{\rm Reg}(K) = \tilde{\mathrm{O}}\left( H^2S\sqrt{AT}\right).
\end{align*}
\end{thm}
Theorem~\ref{thm: high probability regret main} shows that \textsc{C-RLSVI}\, matches the state-of-the-art TS-based method~\citep{agrawal2017optimistic}. Compared to the lower bound~\citep{jaksch2010near}, the result is off by at least a $\sqrt{HS}$ factor. This additional factor of $\sqrt{HS}$ has eluded all the worst-case analyses of TS-based algorithms known to us in the tabular setting. This is similar to the extra $\sqrt{d}$ factor that appears in the worst-case upper bound analysis of TS for $d$-dimensional linear bandits~\citep{abeille2017linear}.\\
It is useful to compare our work with the following contemporaries in related directions.
\paragraph{Comparison with~\citet{russo2019worst}}
Other than the notion of clipping (which only contributes to the warm-up or burn-in term), the core of \textsc{C-RLSVI}\, is the same as the RLSVI considered by~\citet{russo2019worst}. Their work presents significant insights about randomized value functions, but the analysis does not extend to high-probability regret bounds, which require a fresh analysis. Theorem~\ref{thm: high probability regret main} improves their worst-case expected regret bound $\tilde{\mathrm{O}}(H^{5/2}S^{3/2}\sqrt{AT})$ by a factor of $\sqrt{HS}$.
\paragraph{Comparison with~\citet{zanette2020frequentist}}
Very recently,~\citet{zanette2020frequentist} proposed a frequentist regret analysis for a variant of RLSVI with linear function approximation and obtained a high-probability regret bound of $\tilde{\mathrm{O}}\del{H^2d^2\sqrt{T}}$, where $d$ is the dimension of the low-rank embedding of the MDP. While they present some interesting analytical insights which we use (see Section~\ref{sec: proof outline main}), directly converting their bound to the tabular setting ($d\,\rightarrow \,SA$) gives the rather loose bound $\tilde{\mathrm{O}}\del{H^2S^2A^2\sqrt{T}}$.
\paragraph{Comparison with~\citet{azar2017minimax,jin2018q}} These OFU-based works, which guarantee optimism almost surely at all times, are fundamentally different from RLSVI. However, they develop key technical ideas that are useful to our analysis, e.g., clipping estimated value functions and estimation-error propagation techniques. Specifically, in~\citet{azar2017minimax,dann2017unifying,jin2018q}, the estimation error is decomposed as a recurrence. Since RLSVI is only optimistic with a constant probability (see Section~\ref{sec: proof outline main} for details), their techniques need to be substantially modified to be used in our analysis.
\section{Notations, Constants and Definition}\label{sec: notations}
\subsection{Notation Table}
\begin{longtable}[H]{l c l }
\caption{Notation table}
\label{tab: notation}
\\\hline
\textbf{Symbol} & & \textbf{Explanation}
\\
\hline
$\mathcal{S}$& & The state space
\\
$\mathcal{A}$& & The action space
\\
$S$& & Size of state space
\\
$A$& & Size of action space
\\
$H$& & The length of horizon
\\
$K$& & The total number of episodes
\\
$T$ & & The total number of steps across all episodes
\\
$\pi^k$& & The greedy policy obtained by Algorithm \ref{alg: RLSVI} at episode $k$, $\pi^k=\{\pi^k_1,\cdots,\pi^k_H\}$
\\
$\pi^*$& & The optimal policy of the true MDP\\
$(s^k_h,a^k_h)$& & The state-action pair at timestep $h$ in episode $k$ \\
$(s^k_h,a^k_h,r^k_h)$& & The tuple representing state-action pair and the corresponding reward\\ && at timestep $h$ in episode $k$ \\
$\mathcal{H}^k_{h} $ && $ \{ (s^j_l,a^j_l,r^j_l):\text{if } j<k \text{ then}\,
l\in[H],\text{ else if }\,j=k\,\text{ then }\,l\in[h]\}$
\\
&& The history (algorithm trajectory) till timestep $h$ of the episode $k$.
\\
$\overline{\mathcal{H}}^k_{h} $ && $\mathcal{H}^k_{h}\,\bigcup\, \left\{ w^k(l,s,a):l\in[H],s\in\mathcal{S},a\in\mathcal{A}\right\}$\\
&& The union of the history (algorithm trajectory) till timestep $h$ in episode $k$ \\
&&and the pseudo-noise of all timesteps in episode $k$\\
$n^k(h,s,a)$& &$\sum_{l=1}^{k-1}\mathbf{1}\{(s^l_h,a^l_h)=(s,a)\}$ \\ & &The number of visits to state-action pair $(s,a)$ at timestep $h$ up to episode $k$
\\
$P_{h,s^k_h,a^k_h}$& &The transition distribution for the state action pair $(s^k_h,a^k_h)$\\
$R_{h,s^k_h,a^k_h}$& &The reward distribution for the state action pair $(s^k_h,a^k_h)$
\\
$P_{h,s,a}$& &The transition distribution for the state action pair $(s,a)$ at timestep $h$\\
$R_{h,s,a}$& &The reward distribution for the state action pair $(s,a)$ at timestep $h$\\
$\hat{P}^k_{h,s^k_h,a^k_h}$& &The estimated transition distribution for the state action pair $(s^k_h,a^k_h)$
\\
$\hat{R}^k_{h,s^k_h,a^k_h}$& &The estimated reward distribution for the state action pair $(s^k_h,a^k_h)$
\\
$\mathcal{M}^k$ && The confidence set around the true MDP
\\
$w^k_{h,s^k_h,a^k_h}$ && The pseudo-noise used for exploration
\\
$\tilde{w}^k_{h,s^k_h,a^k_h}$ && An independent pseudo-noise sample, conditioned on the history till episode $k-1$
\\
$\hat{M}^k$& & $(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k,s^k_1)$\\&& The estimated MDP without perturbation in data in episode $k$
\\
$\overline{M}^k$& & $(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k+w^k,s^k_1)$\\&& The estimated MDP with perturbed data in episode $k$
\\
$V^*_{h}$& &The optimal value function under true MDP on the sub-episodes \\& & consisting of the timesteps $\{h,\cdots,H\}$
\\
$V^{\pi^k}_{h}$& &The state-value function of $\pi^k$ evaluated on the true MDP on the \\ & &sub-episodes consisting of the timesteps $\{h,\cdots,H\}$ \\
$\overline{V}_{h,k}$& &The state-value function calculated in Algorithm \ref{alg: RLSVI} with noise $w^k$\\
$\overline{Q}_{h,k}$& &The Q-value function calculated in Algorithm \ref{alg: RLSVI} with noise $w^k$\\
$\tilde{M}^k,\,\tilde{V}_{1,k},\,\tilde{w}_{h,s,a}^k$ &&Refer to Definition~\ref{def: tilde V}
\\
$\underline{M}^k,\,\underline{V}_{1,k},\,\underline{w}_{h,s,a}^k$ &&Refer to Definition~\ref{def: under V}
\\
$\overline \delta_{h,k}(s_h^k)$& &$ V^*_h(s_h^k) - \overline{V}_{h,k}(s_h^k)$\\
$\underline \delta_{h,k}(s_h^k)$& &$V^*_h(s_h^k) - \underline{V}_{h,k}(s_h^k)$\\
$\overline \delta^{\pi^k}_{h,k}(s_h^k)$& &$\overline{V}_{h,k}(s_h^k) - V^{\pi^k}_{h}(s_h^k)$
\\
$\underline \delta^{\pi^k}_{h,k}(s_h^k)$& &$V^{\pi^k}_{h}(s_h^k)- \underline{V}_{h,k}(s_h^k)$
\\
$\mathcal{R}^k_{h,s^k_h,a^k_h}$& &$\hat{R}^k_{h,s^k_h,a^k_h}-R_{h,s^k_h,a^k_h}$ \\
$\mathcal{P}^k_{h,s^k_h,a^k_h}$& &$\langle\hat{P}^k_{h,s^k_h,a^k_h}- P_{h,s^k_h,a^k_h},V^*_{h+1}\rangle$
\\
$C$& &$\frac{1}{\Phi(-\sqrt 2)/2}$
\\
$L$ & &$\log\del{40SAT/\delta}$
\\
$\sqrt{\alpha_k}$ && $2\sqrt{H^3S\log\del{2HSAk}\log\del{40SAT/\delta}} $
\\
$\sigma^2_k(h,s,a)$ && $\frac{\beta_k}{2(n^k(h,s,a) + 1)}=\frac{H^3S\log(2HSAk)}{2(n^k(h,s,a)+1)}$
\\
$\gamma_k(h,s,a)$ && $\sqrt{\sigma^2_k(h,s,a)L}$
\\
$\sqrt{e_{k}(h,s,a)}$ &&
$H\sqrt{ \frac{ \log\left( 2HSA k \right) }{n^k(h,s,a)+1}}$
\\
$\beta_k$ & &$H^3S\log(2HSAk)$
\\
$\mathcal{M}_{\deltaEPik{h}}$ && Refer to Appendix~\ref{sec: MDS}
\\
$\mathcal{M}_{\deltaPiUk{h}}$ && Refer to Appendix~\ref{sec: MDS}
\\
$\MDSfind{1}$ && Refer to Appendix~\ref{sec: MDS}\\
$\mathcal{C}_k$ &&$ \left\{ \hat{M}^k \in \mathcal{M}^k \right\}$\\
$\mathcal{E}^{w}_{h,k}$ &&$ \left\{|w^k(h,s,a)| \leq \gamma_k(h,s,a),\forall (s,a)\right\}$\\
$\mathcal{E}^w_k$ &&$ \left\{\underset{h\in[H]}{\cap}\left( \mathcal{E}^{w}_{h,k} \right)\right\}$\\
$\mathcal{E}^{\overline{Q}^{}}_{h,k}$ &&$ \left\{ |(\overline{Q}^{}_{h,k} - Q^*_{h})(s,a)| \leq H -h+1,\, \forall (s,a) \right\}$\\
$\mathcal{E}^{\overline{Q}^{}}_k$ &&$ \left\{\underset{h\in[H]}{\cap}\left( \mathcal{E}^{\overline{Q}^{}}_{h,k} \right)\right\}$\\
$\overline{\mathcal{E}}_k$ &&$ \left\{ \mathcal{E}^{w}_{k} \cap \mathcal{E}^{\overline{Q}^{}}_{k}\right\}$\\
$\mathcal{E}^{\tilde{w}}_{h,k}$ &&$ \left\{ |\tilde{w}^k(h, s,a)| \leq \gamma_k(h,s,a), \forall (s,a)\right\}$\\
$\mathcal{E}^{\tilde{w}}_k$ &&$ \left\{\underset{h\in[H]}{\cap}\left( \mathcal{E}^{\tilde{w}}_{h,k} \right)\right\}$\\
$\mathcal{E}^{\tilde{Q}^{}}_{h,k}$ &&$ \left\{ |(\tilde{Q}^{}_{h,k} - Q^*_{h})(s,a)| \leq H -h+1,\, \forall (s,a) \right\}$\\
$\mathcal{E}^{\tilde{Q}^{}}_k$ &&$ \left\{\underset{h\in[H]}{\cap}\left( \mathcal{E}^{\tilde{Q}^{}}_{h,k} \right)\right\}$\\
$\tilde{\mathcal{E}}_k$ &&$ \left\{ \mathcal{E}^{\tilde{w}}_{k} \cap \mathcal{E}^{\tilde{Q}^{}}_{k}\right\}$\\
$\mathcal{E}^{\text{th}}_{h,k}$ &&$ \left\{ n^k(h,s^k_h,a^k_h) \geq \alpha_k \right\}$\\
$\mathcal{E}^{\text{th}}_{k}$ && $\left\{\underset{h\in[H]}{\cap} \mathcal{E}^{\text{th}}_{h,k}\right\}$\\
$\mathcal{G}_k$ &&$ \left\{\overline{\mathcal{E}}_{k}\cap \mathcal{C}_k \right\}$\\
$\tilde{\mathcal{G}}_k$ &&$ \left\{\tilde{\mathcal{E}}_{k}\cap \mathcal{C}_k \right\}$\\
$\overline{\mathcal{O}}_{1,k}$ &&$ \left\{ \overline{V}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1) \right\}$\\
$\tilde{\mathcal{O}}_{1,k}$ && $\left\{ \tilde{V}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1) \right\}$\\\hline
\end{longtable}
\subsection{Definitions of Synthetic Quantities}
In this section we define some synthetic quantities required for analysis.
\begin{definition}[$\tilde V_{h,k}$]\label{def: tilde V}
Given history $\mathcal{H}^{k-1}_H$, and $\hat P^k$ and $\hat R^k$ defined by the empirical MDP $\hat{M}^k=(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k,s_1^k)$, we define the independent Gaussian noise term $\tilde{w}^k(h,s,a)| \mathcal{H}^{k-1}_H\,\sim\,\mathcal{N}(0,\sigma^2_k(h,s,a))$, the perturbed MDP $\tilde{M}^k=(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k+\tilde{w}^k,s_1^k)$, and $\tilde{V}_{h,k}$ to be the value function obtained by running Algorithm \ref{alg: RLSVI} with random noise $\tilde w^k$.
Notice that $\tilde w^k$ can differ from the realized noise term $w^k$ sampled in Algorithm~\ref{alg: RLSVI}: they are two independent samples from the same Gaussian distribution. Therefore, conditioned on the history $\mathcal{H}^{k-1}_H$, $\tilde{M}^k$ has the same marginal distribution as $\overline{M}^k$, but is statistically independent of the policy $\pi^k$ selected by \textsc{C-RLSVI}.
\end{definition}
\begin{definition}[$\underline V_{1,k}$]\label{def: under V}
As in Definition~\ref{def: tilde V}, given history $\mathcal{H}^{k-1}_H$ and any fixed noise $ w_{\text{ptb}}^k\in\mathbb{R}^{HSA}$, we define the perturbed MDP ${M}^k_{\text{ptb}}=(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k+w^k_{\text{ptb}},s_1^k)$ and $V^{w_{\text{ptb}}^k}_{h,k}$ to be the value function obtained by running Algorithm \ref{alg: RLSVI} with noise $w_{\text{ptb}}^k$.
Let $\underline w^k$ be the solution of the following optimization program
\begin{align*}
& \underset{ w^k_{\text{ptb}}\, \in\,\mathbb{R}^{HSA}}{\min} V^{ w^k_{\text{ptb}}}_{1,k}(s_1^k)\nonumber\\
\text{s.t.}& \quad |w^k_{\text{ptb}}(h,s,a)| \leq \gamma_k(h,s,a)\quad\forall\, h,s,a.
We also use $\underline{V}_{h,k}$ to denote the minimum of the optimization program (i.e., value function $V^{\underline w^k}_{h,k}$) and define MDP $\underline{M}^k=(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k+\underline{w}^k,s_1^k)$. Then we get that $\underline{V}_{1,k} \le V^{w^k_{\text{ptb}}}_{1,k}$ for any $|w^k_{\text{ptb}}| \leq \gamma_k$.
\end{definition}
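As a hedged numerical sketch of this program (inputs are made up, and the clipping step of Algorithm~\ref{alg: RLSVI} is deliberately omitted): for the plain perturbed backward induction $V_h=\max_a(\hat R_h + w_h + \langle\hat P_h,V_{h+1}\rangle)$, the value is entrywise monotone in the reward perturbation, so the box-constrained minimum is attained at $w^k_{\text{ptb}}=-\gamma_k$:

```python
import numpy as np

def perturbed_value(P_hat, R_hat, w):
    """Backward induction under reward perturbation w; returns V_1 (all states).

    P_hat: (H, S, A, S) transition estimates; R_hat, w: (H, S, A).
    Clipping from Algorithm 1 is deliberately omitted in this sketch.
    """
    H, S, A, _ = P_hat.shape
    V = np.zeros(S)                     # terminal condition V_{H+1} = 0
    for h in range(H - 1, -1, -1):
        V = (R_hat[h] + w[h] + P_hat[h] @ V).max(axis=1)  # greedy backup
    return V

# A tiny made-up instance.
rng = np.random.default_rng(1)
H, S, A = 3, 4, 2
P_hat = rng.random((H, S, A, S)); P_hat /= P_hat.sum(-1, keepdims=True)
R_hat = rng.random((H, S, A))
gamma = 0.3 * np.ones((H, S, A))        # box radius gamma_k (illustrative)
V_lo = perturbed_value(P_hat, R_hat, -gamma)  # candidate minimizer w = -gamma
```

Monotonicity holds because each backup composes a max over actions with an inner product against the nonnegative $\hat P^k$; clipping can change this picture, which is why the definition is stated as an optimization program rather than a closed form.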
\begin{definition}[Confidence set, restatement of Definition~\ref{def:confidence set main}]\label{def:confidence set}
\begin{align*}
\mathcal{M}^k =\bigg\{ (H, \mathcal{S}, \mathcal{A}, P', R', s_1) : \, \forall(h,s,a), \, \envert{R_{h,s,a}'-R_{h,s,a} + \langle P'_{h,s,a}-P_{h,s,a},V^*_{h+1}\rangle}\,\,\,
\leq \sqrt{e_k(h,s,a)} \bigg\},
\end{align*}
where we set
\begin{equation}\label{eq: confidence interval}
\sqrt{e_k(h,s,a)} =
H\sqrt{ \frac{ \log\left( 2HSA k \right) }{n^k(h,s,a)+1}}.
\end{equation}
\end{definition}
\subsection{Martingale Difference Sequences}\label{sec: MDS}
In this section, we give the filtration sets consisting of the history of the algorithm, and then enumerate the martingale difference sequences needed for the analysis based on these filtration sets. We use the following to denote the history trajectory:
\begin{align*}
\mathcal{H}^k_{h} &\coloneqq \{ (s^j_l,a^j_l,r^j_l):\text{if } j<k \text{ then}\,
l\in [H],\text{ else if }\,j=k\,\text{ then }\,l \in [h]\} \nonumber\\
\overline{\mathcal{H}}^k_{h} &\coloneqq \mathcal{H}^k_{h}\,\bigcup\, \left\{ w^k(l,s,a):l\in[H],s\in\mathcal{S},a\in\mathcal{A}\right\}.
\end{align*}
With $a^k_h=\pi^k_h(s^k_h)$ as the action taken by \textsc{C-RLSVI}{} following the policy $\pi^k_h$, and conditioned on the history of the algorithm, the randomness lies only in the next-step transitions. Specifically, with filtration sets $\{\overline{\mathcal{H}}^k_{h} \}_{h,k}$, we define the following notation related to the martingale difference sequences (MDS) that appear in the final regret bound:
\begin{align*}
\mathcal{M}_{\deltaEPik{h}} &=\mathbf{1}\{\mathcal{G}_k\}\left[\mathbb{E}\left[ \,\overline{\delta}^{\pi^k}_{h+1,k}(s')\right] - \overline{\delta}^{\pi^k}_{h+1,k}(s^k_{h+1})\right],\\
\mathcal{M}_{\deltaPiUk{h}} &=\mathbf{1}\{\mathcal{G}_k\}\left[\mathbb{E}\left[\underline{\delta}^{\pi^k}_{h+1,k}(s') \right] - \underline{\delta}^{\pi^k}_{h+1,k}(s^k_{h+1})\right],
\end{align*}
where the expectation is over the next state $s'$ drawn from the transition distribution $P_{h,s^k_h,a^k_h}$.
We use the filtration sets $\{\mathcal{H}^{k-1}_H\}_{k}$ for the following martingale difference sequence
\begin{align*}
\mathcal{M}_{1,k}^w&= \mathbf{1}\{\mathcal{G}_k\}\left[\mathbb{E}_{\tilde w|\tilde{\mathcal{G}}_k}\left[ \tilde{V}_{1,k}(s^k_1)\right] -\overline{V}_{1,k}(s^k_1)\right].
\end{align*}
The detailed proofs related to the martingale difference sequences are presented in Lemma~\ref{lem: MDS concentration}.
\subsection{Events}
For reference, we list the useful events in Table \ref{tab: notation}.
\section{Preliminaries}\label{sec: preliminaries main}
\paragraph{Markov Decision Processes}
We consider the episodic Markov Decision Process (MDP) $M=(H,\mathcal{S},\mathcal{A},P,R,s_1)$ described by \citet{puterman2014markov}, where $H$ is the length of the episode, $\mathcal{S}=\{1,2,\ldots, S\}$ is the finite state space, $\mathcal{A}=\{1,2,\ldots, A\}$ is the finite action space, $P = [P_1,\ldots,P_H]$ with $P_h:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})$ is the transition function, $R = [R_1,\ldots,R_H]$ with $R_h:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$ is the reward function, and $s_1$ is the deterministic initial state.\\
A deterministic (and non-stationary) policy $\pi=(\pi_1,\ldots,\pi_H)$ is a sequence of functions, where each $\pi_h:\mathcal{S}\rightarrow\mathcal{A}$ defines the action to take at each state. The RL agent interacts with the environment across $K$ episodes, giving us $T=KH$ steps in total. In episode $k$, the agent starts with initial state $s_1^k = s_1$ and then follows policy $\pi^k$, thus inducing the trajectory $s_1^k,a_1^k,r_1^k,s_2^k,a_2^k,r_2^k,\ldots,s_H^k,a_H^k,r_H^k$.\\
For any timestep $h$ and state-action pair $(s,a)\in\mathcal{S}\times\mathcal{A}$, the Q-value function of policy $\pi$ is defined as $Q_h^\pi(s,a)=R_h(s,a)+\mathbb{E}_\pi[\sum_{l=h+1}^H R_l(s_l,\pi_l(s_l))\mid s_h=s,a_h=a]$ and the state-value function is defined as $V_h^\pi(s)=Q_h^\pi(s,\pi_h(s))$. We use $\pi^*$ to denote the optimal policy. The optimal state-value function is defined as $V_h^*(s)\coloneqq V_h^{\pi^*}(s)=\max_\pi V_h^\pi(s)$ and the optimal Q-value function is defined as $Q_h^*(s,a)\coloneqq Q_h^{\pi^*}(s,a)=\max_\pi Q_h^\pi(s,a)$. Both $Q^\pi$ and $Q^*$ satisfy Bellman equations
$$Q_h^\pi(s,a)=R_h(s,a)+\mathbb{E}_{s'\sim P_h(\cdot|s,a)}[V_{h+1}^\pi(s')]$$
$$Q_h^*(s,a)=R_h(s,a)+\mathbb{E}_{s'\sim P_h(\cdot|s,a)}[V_{h+1}^*(s')]$$
where $V_{H+1}^\pi(s)=V_{H+1}^*(s)=0$ $\forall s$. Notice that by the bounded nature of the reward function, for any $(h,s,a)$, all functions $Q_h^*,V_h^*,Q_h^\pi,V_h^\pi$ are within the range $[0,H-h+1]$. Since we consider the time-inhomogeneous setting (reward and transition change with timestep $h$), we have subscript $h$ on policy and value functions, and later traverse over $(h,s,a)$ instead of $(s,a)$.
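The Bellman optimality backups above translate directly into backward induction. Below is a minimal sketch (the tabular instance is randomly generated for illustration and is not from the paper):

```python
import numpy as np

def optimal_values(P, R):
    """Backward induction for Q*_h, V*_h in a finite-horizon tabular MDP.

    P: array (H, S, A, S); P[h, s, a] is a distribution over next states.
    R: array (H, S, A); rewards in [0, 1].
    Returns Q of shape (H, S, A) and V of shape (H + 1, S), with V[H] = 0.
    """
    H, S, A, _ = P.shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))           # V[H] = 0 is the terminal condition
    for h in range(H - 1, -1, -1):     # h = H-1, ..., 0 (0-indexed timesteps)
        Q[h] = R[h] + P[h] @ V[h + 1]  # Bellman optimality backup
        V[h] = Q[h].max(axis=1)        # greedy maximization over actions
    return Q, V

# A tiny made-up instance.
rng = np.random.default_rng(0)
H, S, A = 4, 3, 2
P = rng.random((H, S, A, S)); P /= P.sum(axis=-1, keepdims=True)
R = rng.random((H, S, A))
Q, V = optimal_values(P, R)
```

Consistent with the text, each $V_h^*$ computed this way lies in $[0, H-h+1]$ because the per-step rewards are bounded in $[0,1]$.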
\paragraph{Regret}
An RL algorithm is a random mapping from the history until the end of episode $k-1$ to policy $\pi^k$ at episode $k$. We use regret to evaluate the performance of the algorithm:
$${\rm \text{Reg}}(K) = \sum_{k=1}^{K} V_{1}^{*}(s_1) - V_{1}^{\pi^k}(s_1).$$
Regret ${\rm \text{Reg}}(K)$ is a random variable, and we bound it with high probability $1-\delta$. We emphasize that a high-probability regret bound provides a stronger guarantee on each roll-out~\citep{seldin2013evaluation,lattimore2020bandit} and can be converted to an expected regret bound of the same order $${\rm \text{E-Reg}}(K) = \mathbb{E}\left[\sum_{k=1}^{K} V_{1}^{*}(s_1) - V_{1}^{\pi^k}(s_1)\right]$$
by setting $\delta=1/T$. However, an expected regret bound does not imply small variance for each run, and therefore does not imply a high-probability regret bound of the same order. We also point out that both bounds hold for all MDP instances $M$ that have $S$ states, $A$ actions, horizon $H$, and bounded rewards $R\in[0,1]$. In other words, we consider worst-case (frequentist) regret bounds.
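This conversion follows a standard argument: since per-step rewards lie in $[0,1]$, we always have ${\rm \text{Reg}}(K)\leq KH=T$, so if ${\rm \text{Reg}}(K)\leq B$ holds with probability $1-\delta$, then

```latex
\begin{align*}
{\rm \text{E-Reg}}(K) \;\leq\; (1-\delta)\,B \;+\; \delta\, KH \;\leq\; B + 1
\qquad \text{for } \delta = 1/T,
\end{align*}
```

and $B$ depends on $\delta$ only through $\log(1/\delta)=\log T$ factors, which are absorbed into $\tilde{\mathrm{O}}(\cdot)$.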
\paragraph{Empirical MDP} We define the number of visitation of $(s,a)$ pair at timestep $h$ until the end of episode $k-1$ as $n^k(h,s,a)=\sum_{l=1}^{k-1}\mathbf{1}\{(s^l_h,a^l_h)=(s,a)\}$. We also construct empirical reward and empirical transition function as $\hat R^k_{h,s,a}=\frac{1}{n^k(h,s,a)+1}\sum_{l=1}^{k-1}\mathbf{1}\{(s^l_h,a^l_h)=(s,a)\}r_h^l$ and $\hat P^k_{h,s,a}(s')=\frac{1}{n^k(h,s,a)+1}\sum_{l=1}^{k-1}\mathbf{1}\{(s^l_h,a^l_h,s_{h+1}^l)=(s,a,s')\}.$
Finally, we use $\hat{M}^k=(H,\mathcal{S},\mathcal{A},\hat{P}^k,\hat{R}^k,s_1^k)$ to denote the empirical MDP. Notice that we have $n^k(h,s,a)+1$ in the denominator, which is not standard; this choice stems from the correspondence between the model-free and model-based views in Section \ref{sec:algorithm main}. In the current form, $\hat P_{h,s,a}^k$ is no longer a valid probability function; we keep it this way for ease of presentation. More formally, we can slightly augment the state space by adding one absorbing state for each level $h$ and let all $(h,s,a)$ transition to the absorbing states with the remaining probability.
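The counts and empirical models above can be maintained by simple tabulation. The following sketch (variable names are ours) builds $n^k$, $\hat R^k$, and $\hat P^k$ with the non-standard $n+1$ denominator, so that $\hat P^k_{h,s,a}$ sums to $n^k(h,s,a)/(n^k(h,s,a)+1)<1$:

```python
import numpy as np

def empirical_mdp(trajectories, H, S, A):
    """Build n^k, R_hat^k, P_hat^k from the episodes observed so far.

    trajectories: list of episodes, each a list of (s, a, r, s_next) tuples
    of length H.  Note the n + 1 denominator: each row of P_hat sums to
    n / (n + 1) < 1, matching the paper's sub-stochastic convention.
    """
    n = np.zeros((H, S, A))
    r_sum = np.zeros((H, S, A))
    p_cnt = np.zeros((H, S, A, S))
    for ep in trajectories:
        for h, (s, a, r, s_next) in enumerate(ep):
            n[h, s, a] += 1
            r_sum[h, s, a] += r
            p_cnt[h, s, a, s_next] += 1
    R_hat = r_sum / (n + 1.0)
    P_hat = p_cnt / (n + 1.0)[..., None]
    return n, R_hat, P_hat
```

The missing mass $1/(n+1)$ corresponds exactly to the probability routed to the absorbing states in the formal augmentation described above.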
\section{Proof Outline}\label{sec: proof outline main}
In this section, we outline the proof of our main result; the details are deferred to the appendix. The major technical flow is presented from Section~\ref{sec: regret as sum of estimation and pessimism main} onward. Before that, we present three technical prerequisites: (i) the total probability that the unperturbed estimated MDP $\hat{M}^k$ falls outside a confidence set is bounded; (ii) $\overline{V}_{1,k}$ is an upper bound on the optimal value function $V^*_{1}$ with at least a constant probability in every episode; (iii) the clipping procedure ensures that $\overline{V}_{h,k}$ is bounded with high probability\footnote{We drop/hide constants by appropriate use of $\gtrsim,\lesssim,\simeq$ in our mathematical relations. All the detailed analyses can be found in our appendix.}.
\paragraph{Notations} To avoid cluttering of mathematical expressions, we abridge our notation to exclude the reference to $(s,a)$ when it is clear from the context. The following concise notations are used in the later analysis: $R_{h,s^k_h,a^k_h}\rightarrow R^k_{h}$, $\hat{R}^k_{h,s^k_h,a^k_h}\rightarrow \hat R_h^k$, $P_{h,s^k_h,a^k_h} \rightarrow P^k_{h}$, $\hat{P}^k_{h,s^k_h,a^k_h} \rightarrow \hat{P}^k_{h}$, $n^k(h,s^k_h,a^k_h) \rightarrow n^k(h)$, $w^k_{h,s^k_h,a^k_h} \rightarrow w^k_{h}$.
\paragraph{High probability confidence set}In Definition~\ref{def:confidence set main}, $\mathcal{M}^k$ represents a set of MDPs, such that the total estimation error with respect to the true MDP is bounded.
\begin{definition}[Confidence set]\label{def:confidence set main}
\begin{align*}
&\mathcal{M}^k =\bigg\{ (H, \mathcal{S}, \mathcal{A}, P', R', s_1) : \forall(h,s,a),\, \left|R_{h,s,a}'-R_{h,s,a} + \langle P'_{h,s,a}-P_{h,s,a},V^*_{h+1}\rangle \right|\,\,\,
\leq \sqrt{e_k(h,s,a)} \bigg\}
\end{align*}
where we set
\begin{equation}\label{eq: confidence interval main}
\sqrt{e_k(h,s,a)} =
H\sqrt{ \frac{ \log\left( 2HSA k \right) }{n^k(h,s,a)+1}}.
\end{equation}
\end{definition}
Through an application of Hoeffding's inequality~\citep{jaksch2010near,osband2013more}, Lemma~\ref{lem: confidence interval lemma main} shows that the empirical MDP does not often fall outside the confidence set $\mathcal{M}^k$. This ensures {\sf exploitation}, i.e., the algorithm's confidence in the estimates for a certain $(h,s,a)$ tuple grows as it visits that tuple a large number of times.
\begin{lemma}\label{lem: confidence interval lemma main}
$\sum_{k=1}^{\infty} \mathbb{P}\left(\hat{M}^k \notin \mathcal{M}^k \right) \leq 2006HSA$.
\end{lemma}
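To see the scale of the confidence width in Eq~(\ref{eq: confidence interval main}), the following sketch (with made-up problem sizes, for illustration only) evaluates $\sqrt{e_k(h,s,a)}$ and checks that it shrinks as the visit count grows:

```python
import math

def conf_width(H, S, A, k, n):
    """Hoeffding-style width sqrt(e_k) = H * sqrt(log(2HSAk) / (n + 1))."""
    return H * math.sqrt(math.log(2 * H * S * A * k) / (n + 1))

# Made-up sizes; the width decays as roughly 1/sqrt(n) for fixed k.
H, S, A, k = 10, 20, 5, 100
widths = [conf_width(H, S, A, k, n) for n in (0, 10, 100, 10_000)]
```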
\paragraph{Bounded Q-function estimates}It is important to note that the pseudo-noise used by \textsc{C-RLSVI}\, has both an exploratory ({\sf optimism}) effect and a corrupting effect on the estimated value function. Since the Gaussian noise is unbounded, the clipping procedure (lines 14-19 in Algorithm~\ref{alg: RLSVI}) avoids propagation of unreasonable estimates of the value function, especially for tuples $(h,s,a)$ with low visit counts. This prevents low-rewarding states from being misidentified as high-rewarding ones (or vice-versa). Intuitively, the clipping threshold $\alpha_k$ is set such that the noise variance ($\sigma^2_k(h,s,a)=\frac{\beta_k}{2(n^k(h,s,a)+1)}$) drops below a numerical constant, hence limiting the effect of noise on the estimated value functions. This idea is stated in Lemma~\ref{lem: est Q function bounded main}, where we claim that the estimated Q-value function is bounded for all $(h,s,a)$.
\begin{lemma}[(Informal) Bound on the estimated Q-value function]\label{lem: est Q function bounded main}
Under some good event, for the clipped Q-value function $\overline{Q}_k$ defined in Algorithm~\ref{alg: RLSVI}, we have $|(\overline{Q}^{}_{h,k} - Q^*_{h})(s,a)| \leq H-h +1.$
\end{lemma}
See Appendix~\ref{sec: concentration of events} for the precise definition of the good event and a full proof. Lemma~\ref{lem: est Q function bounded main} is striking since it suggests that the randomized value function needs to be clipped only a constant (i.e., independent of $T$) number of times to be well-behaved.
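The clipping intuition can be checked numerically. Using the constants from Appendix~\ref{sec: notations} ($\beta_k=H^3S\log(2HSAk)$, $\sigma_k^2=\beta_k/(2(n+1))$, $\gamma_k=\sqrt{\sigma_k^2 L}$, and $\alpha_k=4\beta_k L$ so that $\sqrt{\alpha_k}=2\sqrt{\beta_k L}$), once $n^k(h,s,a)\geq\alpha_k$ the high-probability noise magnitude $\gamma_k$ drops below the numerical constant $1/\sqrt{8}$. The sketch below (problem sizes are made up) verifies this:

```python
import math

def noise_scale(H, S, A, T, k, n, delta):
    """Return (gamma_k, alpha_k) for visit count n, with the paper's constants."""
    beta = H**3 * S * math.log(2 * H * S * A * k)   # beta_k
    L = math.log(40 * S * A * T / delta)            # L = log(40SAT/delta)
    sigma2 = beta / (2 * (n + 1))                   # sigma_k^2
    return math.sqrt(sigma2 * L), 4 * beta * L      # gamma_k, alpha_k

# Made-up problem sizes, for illustration only.
H, S, A, k, delta = 5, 10, 4, 1000, 0.05
T = k * H
g_unvisited, alpha = noise_scale(H, S, A, T, k, n=0, delta=delta)
g_clipped, _ = noise_scale(H, S, A, T, k, n=int(alpha), delta=delta)
```

Indeed, for $n\geq\alpha_k$ we get $\sigma_k^2\leq\beta_k/(2\alpha_k)=1/(8L)$ and hence $\gamma_k\leq 1/\sqrt{8}$, whereas an unvisited tuple carries a much larger noise scale.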
\paragraph{Optimism}The event when none of the rounds in episode $k$ need to be clipped is denoted by $\mathcal{E}^{\text{th}}_{k} \coloneqq \{ \cap_{h\in[H]}(n^k(h) \geq \alpha_k)\}.$
Due to the randomness in the environment, there is a possibility that a learning algorithm gets stuck on ``bad'' states, i.e., it does not visit the ``good'' $(h,s,a)$ enough, or it grossly underestimates the value function of some states and as a result avoids transitioning to those states. Effective {\sf exploration} is required to avoid these scenarios. To enable correction of faulty estimates, most RL exploration algorithms maintain optimistic estimates almost surely. However, when using randomized value functions, \textsc{C-RLSVI}\, does not always guarantee optimism. In Lemma~\ref{lem: Optimism Main}, we show that \textsc{C-RLSVI}\, samples an optimistic value function estimate with at least a constant probability for any $k$. We emphasize that this difference is fundamental.
\begin{lemma}\label{lem: Optimism Main}
When conditioned on $\mathcal{H}_{H}^{k-1}$, $$\mathbb{P}\left(\overline{V}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1)\,| \,\mathcal{G}_k\right) \geq \Phi(-\sqrt 2)/2.$$
Here $\Phi(\cdot)$ is the CDF of the $\mathcal N(0,1)$ distribution and $\mathcal{H}_{H}^{k-1}$ is all the history of the past observations made by \textsc{C-RLSVI}\, till the end of episode $k-1$. We use $\mathcal{G}_k$ to denote the intersection of the good events that $\hat M^k\in\mathcal M^k$ and that the noises and values are bounded (the specific definition of the good event $\mathcal{G}_k$ can be found in Appendix~\ref{sec: notations}).
\end{lemma}
Lemma~\ref{lem: Optimism Main} is adapted from \citet{zanette2020frequentist,russo2019worst}, and we reproduce the proof in Appendix \ref{sec: optimism} for completeness.\\
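For concreteness, the optimism constant in Lemma~\ref{lem: Optimism Main} and the admissible range of $\delta$ in Theorem~\ref{thm: high probability regret main} can be evaluated numerically via $\Phi(x)=\tfrac12\operatorname{erfc}(-x/\sqrt{2})$:

```python
import math

def Phi(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

p_opt = Phi(-math.sqrt(2.0)) / 2.0       # per-episode optimism probability
delta_max = 4.0 * Phi(-math.sqrt(2.0))   # upper limit on delta in Theorem 1
```

So each episode is optimistic with probability at least $\approx 0.039$, and the theorem's condition $\delta<4\Phi(-\sqrt 2)$ amounts to $\delta\lesssim 0.315$.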
Now, we are in a position to simplify the regret expression with high probability as
\begin{align}
&{\rm Reg}(K)\leq~
\sum_{k=1}^{K}\left[ \mathbf{1}\{\mathcal{G}_k\}\left( V_{1}^{*} - V_{1}^{\pi^k}\right)(s_1^k) +H\underbrace{\mathbb{P}(\hat{M}^k\notin\mathcal{M}^k)}_{\text{Lemma~\ref{lem: confidence interval lemma main}}}\right],
\label{eq: regret decom main text}
\end{align}
where we use Lemma~\ref{lem: confidence interval lemma main} to show that, summed over $k$, the edge case in which the estimated MDP lies outside the confidence set contributes only a transient term (independent of $T$). The proof of Eq (\ref{eq: regret decom main text}) is deferred to Appendix~\ref{sec: apx_d}.\\
We also define $\tilde{w}^k(h,s,a)$ as an independent sample from the same distribution $\mathcal{N}(0,\sigma^2_k(h,s,a))$, conditioned on the history of the algorithm up to the previous episode. Armed with the necessary tools, we sketch the proof of our main result over the next few subsections. All subsequent discussions are under the good event $\mathcal{G}_k$.
\subsection{Regret as Sum of Estimation and Pessimism}\label{sec: regret as sum of estimation and pessimism main}
Now the regret over $K$ episodes of the algorithm decomposes as
\begin{equation}\label{eq: regret decomp pes est terms}
\sum^K_{k=1}\big(\underbrace{(V_{1}^{*} -\overline{V}_{1,k})(s^k_1)}_{\text{Pessimism}} + \underbrace{\overline{V}_{1,k}(s^k_1) - V_{1}^{\pi^k}(s^k_1)}_{\text{Estimation}}\big).
\end{equation}
In OFU-style analyses, the pessimism term is non-positive and insignificant~\citep{azar2017minimax,jin2018q}. In TS-based analyses, the pessimism term usually has zero expectation or can be upper bounded by the estimation term~\citep{osband2013more,osband2017posterior,agrawal2017optimistic,russo2019worst}. Therefore, the pessimism term is usually relaxed to zero or reduced to the estimation term, and the estimation term can be bounded separately. Our analysis proceeds quite differently. In Section~\ref{sec: pessimism in terms of estimation}, we show how the pessimism term is decomposed into terms related to the algorithm's trajectory (an estimation term and a pessimism-correction term). In Sections~\ref{sec: estimation bound main} and~\ref{sec: pessimism correction bound main}, we show how to bound these two terms through two independent recurrences. Finally, in Section~\ref{sec: final regret bound}, we reorganize the regret expression so that its individual terms can be bounded easily by known concentration results.
\subsection{Pessimism in Terms of Estimation}\label{sec: pessimism in terms of estimation}
In this section we present Lemma~\ref{lem: pessimism decomp main}, where the pessimism term is bounded in terms of the estimation term and a correction term $(V_{1}^{\pi^k}-\underline{V}_{1,k})(s^k_1)$ that will be defined later. This correction term is further handled in Section~\ref{sec: pessimism correction bound main}. While the essence of Lemma~\ref{lem: pessimism decomp main} is similar to that given by~\citet{zanette2020frequentist}, there are key differences: we need to additionally bound the correction term; the nature of the recurrence relations for the pessimism and estimation terms necessitates a distinct solution, hence leading to different order dependence in regret bound. In all, this allows us to obtain stronger regret bounds as compared to~\citet{zanette2020frequentist}.
\begin{lemma}\label{lem: pessimism decomp main}
Under the event $\mathcal{G}_k$,
\begin{align}\label{eq:lem9-1-a main}
&~(V^*_{1}-\overline{V}_{1,k})(s^k_1)\lesssim~ (\overline{V}_{1,k}-V^{\pi^k}_{1})(s^k_1)+(V^{\pi^k}_{1}-\underline{V}_{1,k})(s^k_1) + \MDSfkind{1},
\end{align}
where $\MDSfkind{1}$ is a martingale difference sequence (MDS).
\end{lemma}
The detailed proof can be found in Appendix~\ref{sec: Bounds on Pessimism}; here we present an informal proof sketch. The general strategy for bounding $V^*_{1}(s^k_1) - \overline{V}_{1,k}(s^k_1)$ is to find an upper-bounding estimate of $V^*_{1}(s^k_1)$ and a lower-bounding estimate of $\overline{V}_{1,k}(s^k_1)$, and show that the difference of these two estimates converges.
We define $\tilde{V}_{1,k}$ to be the value function obtained when running Algorithm \ref{alg: RLSVI} with random noise $\tilde w$. Since $\tilde w$ and $w$ are independent samples from the same distribution, $\tilde{V}_{1,k}$ and $\overline{V}_{1,k}$ are identically distributed. Then we introduce the following optimization program:
\begin{align*}
& \underset{w^k_{\text{ptb}}\, \in\,\mathbb{R}^{HSA}}{\min} V^{ w^k_{\text{ptb}}}_{1,k}(s_1^k)\nonumber\\
\text{s.t.}& \quad | w^k_{\text{ptb}}(h,s,a)| \leq \gamma_k(h,s,a)\quad\forall\, h,s,a,
where $V^{ w^k_{\text{ptb}}}_{1,k}(s_1^k)$ is analogous to $\overline V_{1,k}$ obtained from Algorithm \ref{alg: RLSVI} but with the optimization variable $w^k_{\text{ptb}}$ in place of $w^k$. We use $\underline w^k$ to denote the solution of the optimization program and $\underline{V}_{1,k}$ to denote its minimum value. This ensures $\underline{V}_{1,k} \leq \overline{V}_{1,k}$ and $\underline{V}_{1,k} \leq \tilde{V}_{1,k}$.
Thus the pessimism term is now given by
\begin{align}
(V^*_{1} - \overline{V}_{1,k})(s^k_1) \leq (V^*_{1} - \underline{V}_{1,k})(s^k_1).\label{eq: lemma pes 1 main}
\end{align}
Define event $\tilde{\mathcal{O}}_{1,k} \coloneqq \{ \tilde{V}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1) \}$, $\tilde {\mathcal G}_k$ to be a similar event as $\mathcal G_k$, and use $\mathbb{E}_{\tilde w}\sbr{\cdot}$ to denote the expectation over the pseudo-noise $\tilde{w}$. Since $V^*_{1}(s^k_1)$ does not depend on $\tilde{w}$, we get $V^*_{1}(s^k_1) \leq \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1,k},\tilde {\mathcal G}_k}\sbr{ \tilde{V}_{1,k}(s^k_1)}$. We can further upper bound Eq~(\ref{eq: lemma pes 1 main}) by
\begin{align}
(V^*_{1} - \underline{V}_{1,k})(s^k_1) \leq \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1,k},\tilde {\mathcal G}_k}[ (\tilde{V}_{1,k}-\underline{V}_{1,k})(s^k_1)]\label{eq: pessimism 3 old main}.
\end{align}
Thus, we are able to relate pessimism to quantities that depend only on the algorithm's trajectory. Further, we upper bound the expectation over the marginal distribution $\mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1,k},\tilde {\mathcal G}_k}[\cdot]$ by $\mathbb{E}_{\tilde{w}|\tilde {\mathcal G}_k}[\cdot]$; this is possible because we are taking the expectation of non-negative quantities. Moreover, we can show:
\begin{align}
\mathbb{E}_{\tilde{w}|\tilde {\mathcal G}_k}[ (\tilde{V}_{1,k}-\underline{V}_{1,k})(s^k_1)]
\simeq& ~\MDSfkind{1} + \overline{V}_{1,k}(s^k_1) - \underline{V}_{1,k}(s^k_1).
\label{eq: pessimism 4 main}
\end{align}
Now consider
\begin{align}\label{eq: pessimism decom main}
(\overline{V}_{1,k} - \underline{V}_{1,k})(s^k_1)
=& \underbrace{(\overline{V}_{1,k} - V^{\pi^k}_{1})(s^k_1)}_{\text{Estimation term}} + \underbrace{(V^{\pi^k}_{1} - \underline{V}_{1,k})(s^k_1)}_{\text{Correction term}}.
\end{align}
In Eq~(\ref{eq: pessimism decom main}), the estimation term is decomposed further in Section~\ref{sec: estimation bound main}. The correction term is simplified in Section~\ref{sec: pessimism correction bound main}.
\subsection{Bounds on Estimation Term}\label{sec: estimation bound main}
In this section we show the bound on the estimation term. Under the high-probability good event $\mathcal{G}_k$, we show that a decomposition of the estimation term $(\overline{V}_{h,k} - V^{\pi^k}_{h})(s^k_h)$ holds with high probability. By the property of Bayesian linear regression and the Bellman equation, we get
\begin{align}
&(\overline{V}_{h,k} - V^{\pi^k}_{h})(s^k_h) =\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}(\underbrace{\langle\hat{P}^k_{h} - P^k_{h} ,\overline{V}_{h+1,k}\rangle}_{(1)}+ \underbrace{\langle P^k_{h}, \overline{V}_{h+1,k} - V^{\pi^k}_{h+1}\rangle}_{(1')}+
\hat{R}^k_{h}-R^k_{h}+w^k_{h})+H \underbrace{\mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}}_{\text{Warm-up term}}.\label{eq: estimation bound main 1}
\end{align}
We first decompose Term (1) as
\begin{align}\label{eq: estimation bound main 5}
(1) = \underbrace{\langle\hat{P}^k_{h} - P^k_{h} ,V^*_{h+1}\rangle}_{(2)}+\underbrace{\langle\hat{P}^k_{h}- P^k_{h},\overline{V}_{h+1,k}-V^*_{h+1}\rangle}_{(3)}.
\end{align}
Term (2) represents the error in estimating the transition probability for the optimal value function $V^*_{h}$, while Term (3) is an offset term. The total estimation error, $\epsilon_{h,k}^{\text{err}}\,:=\, \envert{\text{Term }(2) + \hat{R}^k_{h}-R^k_{h}}$, is easy to bound since the empirical MDP $\hat M^k$ lies in the confidence set (Eq~(\ref{eq: confidence interval main})). We then discuss how to bound Term (3). Unlike OFU-style analyses, here we do not have optimism almost surely. Therefore we cannot simply relax $\overline{V}_{h+1,k}-V^*_{h+1}$ to $\overline{V}_{h+1,k}-V^{\pi^k}_{h+1}$ and form the recurrence. Instead, we apply the ($L_1,L_\infty$) H\"older inequality to separate the deviation of the transition function estimate from the deviation of the value function estimate, and then bound these two deviation terms individually. Noticing that $\overline V_{h+1,k}-V^*_{h+1}$ might be unbounded, we use Lemma~\ref{lem: est Q function bounded main} to assert that $\|V_{h+1}^*-\overline{V}_{h+1,k}\|_\infty \le H$ under event $\mathcal{G}_k$. With the boundedness of the value function deviation, it suffices to bound the remaining $\|\hat{P}_{h,s_h,a_h}- P_{h,s_h,a_h}\|_1$ term. Proving an $L_1$ concentration
bound for the multinomial distribution with a careful application of Hoeffding's inequality shows
$$
\|\hat{P}^k_{h}- P^k_{h}\|_1\le 4\sqrt{\frac{SL}{n^k(h)+ 1}},
$$
where $L=\log\del{40SAT/\delta}$. In Eq~(\ref{eq: estimation bound main 1}), we also decompose Term (1') into a sum of the next-state estimation term and an MDS.\\
Clubbing all the terms starting from Eq~(\ref{eq: estimation bound main 1}), with high probability, the upper bound on estimation is given by
\begin{align}
(\overline{V}_{h,k}-V^{\pi^k}_{h} )(s^k_h)
\lesssim& ~ \mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\Big(\underbrace{(\overline{V}_{h+1,k}- V^{\pi^k}_{h+1} )(s^k_{h+1})}_{\text{Next-state estimation}}+ \epsilon_{h,k}^{\text{err}} + w^k_{h}+ \mathcal{M}_{\deltaEPik{h}} + 4H\sqrt{\frac{SL}{n^k(h) + 1}}\Big)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\},\label{eq: estimation decomp main 50}
\end{align}
where $\mathcal{M}_{\deltaEPik{h}}$ is a martingale difference sequence (MDS). Thus, via Eq~(\ref{eq: estimation decomp main 50}), we are able to decompose the estimation term into the total estimation error, the next-state estimation, the pseudo-noise, an MDS term, and a $\tilde{\mathrm{O}}\del{\sqrt{1/n^k(h)}}$ term. From the form of Eq~(\ref{eq: estimation decomp main 50}), we can see that it forms a recurrence. Due to this style of proof, our Theorem~\ref{thm: high probability regret main} improves on the previous state-of-the-art result~\citep{russo2019worst} by a factor of $\sqrt{HS}$, and we are able to provide a high-probability regret bound instead of just a bound on the expected regret.
\subsection{Bounds on Pessimism Correction}\label{sec: pessimism correction bound main}
In this section, we give the decomposition of the pessimism correction term $(V^{\pi^k}_{h} - \underline{V}_{h,k})(s^k_h)$. Shifting from $\overline V_k$ to $\underline V_k$ and re-tracing the steps of Section~\ref{sec: estimation bound main}, with high probability, it follows
\begin{align}
&~(V^{\pi^k}_{h} - \underline{V}_{h,k})(s^k_h)\lesssim\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\Big(\underbrace{(V^{\pi^k}_{h+1} - \underline{V}_{h+1,k})(s^k_{h+1})}_{\text{Next-state pessimism correction}}+\,\,\epsilon_{h,k}^{\text{err}} + \envert{\underline{w}^k_{h}}+\mathcal{M}_{\deltaPiUk{h}}+ 4H\sqrt{\frac{SL}{n^k(h) + 1}}\Big)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\label{eq: pessimism correction decomp main}
\end{align}
The decomposition Eq~(\ref{eq: pessimism correction decomp main}) also forms a recurrence. The recurrences due to Eq~(\ref{eq: estimation decomp main 50}) and Eq~(\ref{eq: pessimism correction decomp main}) are later solved in Section~\ref{sec: final regret bound}.
\subsection{Final High-Probability Regret Bound}\label{sec: final regret bound}
To solve the recurrences of Eq~(\ref{eq: estimation decomp main 50}) and Eq~(\ref{eq: pessimism correction decomp main}), we keep unrolling these two inequalities from $h=1$ to $h=H$. Then with high probability, we get
\begin{align*}
{\rm Reg}(K) \lesssim &\sum_{k=1}^{K}\sum_{h=1}^{H}
\left(\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text {th}}\}\Big(\envert{\epsilon_{h,k}^{\text{err}}} + \envert{\underline{w}^k_{h}}+w^k_{h} + \mathcal{M}_{\deltaPiUk{h}} + \mathcal{M}_{\deltaEPik{h}}\right.\\
&\left.+ 4H\sqrt{\frac{SL}{n^k(h) + 1}}\Big)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\} \right)+\sum_{k=1}^K\MDSfkind{1} .\label{eq: final regret decomp main}
\end{align*}
Bounds of individual terms in the above equation are given in Appendix~\ref{sec: bounds on individual terms}, and here we only show the order dependence.\\
The maximum estimation error that can occur at any round is limited by the size of the confidence set in Eq~(\ref{eq: confidence interval main}). Lemma~\ref{lem: estimation error} sums the confidence set widths over $h$ and $k$ to obtain $\sum_{k=1}^K\sum_{h=1}^{H} \envert{\epsilon_{h,k}^{\text{err}}} = \tilde{\mathrm{O}}(\sqrt{H^3SAT})$. In Lemma~\ref{lem: MDS concentration}, we use the Azuma--Hoeffding inequality to bound the summations of the martingale difference sequences, with high probability, by $\tilde{\mathrm{O}}(H\sqrt{T})$.
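Concretely, since each increment of these martingale difference sequences is bounded by $\mathrm{O}(H)$ and there are $T=KH$ increments, the Azuma--Hoeffding inequality gives, for an absolute constant $c$ (a sketch, suppressing the exact constants tracked in Lemma~\ref{lem: MDS concentration}),
\begin{align*}
\mathbb{P}\del{\envert{\sum_{k=1}^{K}\sum_{h=1}^{H}\mathcal{M}_{\deltaEPik{h}}} \ge cH\sqrt{2T\log(2/\delta)}} \le \delta,
\end{align*}
so each MDS summation is indeed $\tilde{\mathrm{O}}(H\sqrt{T})$ with high probability.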
The pseudo-noise $\sum_{k=1}^K\sum_{h=1}^{H}w^k_{h}$ and the related term $\sum_{k=1}^K\sum_{h=1}^{H}\underline{w}^k_{h}$ are bounded in Lemma~\ref{lem: estimation non random noise} with high probability by $\tilde{\mathrm{O}}(H^2S\sqrt{AT})$.
Similarly, we have $\sum_{k=1}^K\sum_{h=1}^{H}\sqrt{\frac{SL}{n^k(h) + 1}} = \tilde{\mathrm{O}}(H^2S\sqrt{AT})$ from
Lemma~\ref{lem: estimation non random noise}. Finally, Lemma~\ref{lem: warmup bound} shows that the warm-up term due to clipping is independent of $T$. Putting all these together yields the high-probability regret bound of Theorem~\ref{thm: high probability regret main}.
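As a side remark, the summation $\sum_{k=1}^K\sum_{h=1}^{H}\sqrt{SL/(n^k(h) + 1)}$ above rests on the standard pigeonhole and Cauchy--Schwarz steps (a sketch, writing $n(h,s,a)$ for the total number of visits to $(h,s,a)$ over the $K$ episodes):
\begin{align*}
\sum_{k:\,(s^k_h,a^k_h)=(s,a)}\frac{1}{\sqrt{n^k(h)+1}} \le \sum_{j=1}^{n(h,s,a)}\frac{1}{\sqrt{j}} \le 2\sqrt{n(h,s,a)},\qquad
\sum_{(s,a)}\sqrt{n(h,s,a)} \le \sqrt{SA\sum_{(s,a)}n(h,s,a)} = \sqrt{SAK}.
\end{align*}
Summing over $h$ and restoring the $\sqrt{SL}$ factor yields a bound within the stated order.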
\section{Discussions and Conclusions}
In this work, we provide a sharper regret analysis for a variant of RLSVI and advance our understanding of TS-based algorithms. Compared with the lower bound, the looseness mainly comes from the magnitude of the noise term in the random perturbation, which is delicately tuned to obtain optimism with constant probability. Specifically, the magnitude of $\beta_k$ is $\tilde{\mathrm{O}}(\sqrt{HS})$ larger than the sharpest bonus term~\citep{azar2017minimax}, which leads to an additional $\tilde{\mathrm{O}}(\sqrt{HS})$ dependence. Naively using a smaller noise term would break optimism, and thus the analysis. Another obstacle to obtaining $\tilde{\mathrm{O}}(\sqrt{S})$ results is the bound on Term (3) of Eq~(\ref{eq: estimation bound main 5}). Regarding the dependence on the horizon, an $\mathrm{O}(\sqrt{H})$ improvement may be achievable by applying the law-of-total-variance type of analysis in~\citep{azar2017minimax}. Future directions of this work include bridging the gap in the regret bounds and extending our results to the time-homogeneous setting.
\section{Acknowledgements}
We gratefully acknowledge the constructive comments and discussions from Chao Qin, Zhihan Xiong, and the anonymous reviewers.
\section{Regret Decomposition}
\label{sec: apx_d}
In this section we give a full proof of our main result, Theorem~\ref{thm: Regret main result}, which is a formal version of Theorem~\ref{thm: high probability regret main}. We give a high-level proof sketch before jumping into the details of the individual parts in Sections~\ref{sec: Estimation bounds} and~\ref{sec: Bounds on Pessimism}.
\begin{thm}\label{thm: Regret main result}
For $0<\delta < 4\Phi(-\sqrt 2)$, \textsc{C-RLSVI}\, enjoys the following high probability regret upper bound, with probability at least $1-\delta$,
\begin{equation*}
{\rm Reg}(K) = \tilde{\mathrm{O}}\left( H^2S\sqrt{AT}+H^5S^2A\right).
\end{equation*}
\end{thm}
We first decompose the regret expression into several terms and show bounds for each of the individual terms separately. With probability at least $1-\delta/4$, we have
\begin{align}
{\rm Reg}(K) ~=& ~ \sum_{k=1}^{K} \mathbf{1}\{\mathcal{C}_k\}\left(V_{1}^{*}(s_1^k) - V_{1}^{\pi^k}(s_1^k)\right) + \sum_{k=1}^{K} \underbrace{\mathbf{1}\{ \{\mathcal{C}_k\}^{\complement} \}\left(V_{1}^{*}(s_1^k) - V_{1}^{\pi^k}(s_1^k)\right)}_{(1)}\nonumber \\
\overset{a}{=}& ~ \sum_{k=1}^{K} \mathbf{1}\{\mathcal{G}_k\}\left(V_{1}^{*}(s_1^k) - V_{1}^{\pi^k}(s_1^k)\right) + \sum_{k=1}^{K} \underbrace{\mathbf{1}\{ \{\mathcal{C}_k\}^{\complement} \}\left(V_{1}^{*}(s_1^k) - V_{1}^{\pi^k}(s_1^k)\right)}_{(1)}\nonumber \\
\leq&~
\sum_{k=1}^{K} \mathbf{1}\{\mathcal{G}_k\}\left( \underbrace{V_{1}^{*}(s_1^k) - \overline{V}^{}_{1,k}(s_1^k)}_{(2)} + \underbrace{\overline{V}^{}_{1,k}(s_1^k) - V_{1}^{\pi^k}(s_1^k)}_{(3)}\right) + 2006H^2SA.\label{eq: regret decom main}
\end{align}
Step (a) holds with probability at least $1-\delta/4$ due to Lemma~\ref{lem: intersection event}.
Term (1) is upper bounded due to Lemma~\ref{lem: confidence interval lemma} and the fact that $V_{h}^{*}(s^k_h) - V_{h}^{\pi^k}(s^k_h) \leq H,\,\forall\,k\,\in\,[K]$. Term (2), the additive inverse of {\sl optimism}, is called {\sl pessimism} \citep{zanette2020frequentist} and is further decomposed in Lemma~\ref{lem: pessimism decomp} and Lemma~\ref{lem:est-underV}. Term (3) is a measure of how well the estimated MDP tracks the true MDP and is called the {\sl estimation error}. It is discussed further in Lemma~\ref{lem: Decomposition Lemma} and Lemma~\ref{lem: Decomposition supporting Lemma}, and finally decomposed in Lemma~\ref{lem: estimation decomp}. We start with the results that decompose the terms in Eq~(\ref{eq: regret decom main}) and later aggregate them to complete the proof of Theorem~\ref{thm: Regret main result}.
\subsection{Bound on the Estimation Term}\label{sec: Estimation bounds}
Lemma~\ref{lem: Decomposition Lemma} decomposes the deviation term between the Q-value function and its estimate; the proof relies on Lemma~\ref{lem: Decomposition supporting Lemma}. This result is used extensively in our analysis. For the purpose of the results in this subsection, we assume the episode index $k$ is fixed and hence drop it from the notation in both the lemma statements and their proofs when it is clear.
\begin{lemma}\label{lem: Decomposition Lemma}
With probability at least $1-\delta/4$, for any $h,k,s_h,a_h$, it follows that
\begin{align*}
&~ \mathbf{1}\{\mathcal{G}_k\}\left[\overline{Q}_{h}(s_h,a_h) - Q^{\pi}_{h}(s_h,a_h)\right] \\
\leq &~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left(\mathcal{P}_{h,s_h,a_h} +\mathcal{R}_{h,s_h,a_h} + w_{h,s_h,a_h} + \deltaEPi{h+1}+ \mathcal{M}_{\deltaEPi{h}}
+ 4H\sqrt{\frac{SL}{n(h,s_h,a_h) + 1}}\right)+H\mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\end{align*}
\end{lemma}
\begin{proof}
Here the action at any period $h$ is due to the policy of the algorithm, therefore, $a_h=\pi(s_h)$. By the property of Bayesian linear regression and the Bellman equation, we have the following
\begin{align*}
&~ \mathbf{1}\{\mathcal{G}_k\}\left[\overline{Q}_{h}(s_h,a_h) - Q^{\pi}_{h}(s_h,a_h)\right] \\
=&~ \mathbf{1}\{\mathcal{G}_k\}\left[\overline{Q}_{h}(s_h,a_h) - Q^{\pi}_{h}(s_h,a_h)\right]\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}+ \mathbf{1}\{\mathcal{G}_k\}\left[\overline{Q}_{h}(s_h,a_h) - Q^{\pi}_{h}(s_h,a_h)\right]\mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_{h,k}\}\\
\le&~ \mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left[\hat Q_{h}(s_h,a_h) - Q^{\pi}_{h}(s_h,a_h)\right]+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}\\
=&~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left[\langle\hat{P}_{h,s_h,a_h} ,\overline{V}_{h+1}\rangle - \langle P_{h,s_h,a_h}, V^\pi_{h+1}\rangle+ \hat{R}_{h,s_h,a_h}-R_{h,s_h,a_h}+w_{h,s_h,a_h}\right]+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}\\
=&~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left[\langle\hat{P}_{h,s_h,a_h} ,\overline{V}_{h+1}\rangle - \langle P_{h,s_h,a_h}, V^\pi_{h+1}\rangle+ \hat{R}_{h,s_h,a_h}-R_{h,s_h,a_h}+w_{h,s_h,a_h}\right.\\
&~+\left.\langle\hat{P}_{h,s_h,a_h}- P_{h,s_h,a_h},V^*_{h+1}\rangle-\langle\hat{P}_{h,s_h,a_h}- P_{h,s_h,a_h},V^*_{h+1}\rangle\right]+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}\\
=&~ \mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left[\mathcal{P}_{h,s_h,a_h}+\mathcal{R}_{h,s_h,a_h}+w_{h,s_h,a_h}+ \langle P_{h,s_h,a_h}, \overline{V}_{h+1}-V^\pi_{h+1}\rangle\right]\\
&~+ \mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\langle\hat{P}_{h,s_h,a_h}- P_{h,s_h,a_h},\overline{V}_{h+1}-V^*_{h+1}\rangle+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}\\
\overset{a}{\leq}& ~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left[\mathcal{P}_{h,s_h,a_h} +\mathcal{R}_{h,s_h,a_h} + w_{h,s_h,a_h} + \overline{\delta}^{\pi}_{h+1}(s_{h+1})+ \mathcal{M}_{\deltaEPi{h}}
+ 4H\sqrt{\frac{SL}{n(h,s_h,a_h) + 1}}
\right]+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\},
\end{align*}
where step $(a)$ follows from Lemma \ref{lem: Decomposition supporting Lemma} and by adding and subtracting $\overline{V}_{h+1}(s_{h+1})-V^\pi_{h+1}(s_{h+1})$ to create $\mathcal{M}_{\deltaEPi{h}}$.
\end{proof}
\begin{lemma}\label{lem: Decomposition supporting Lemma}
With probability at least $1-\delta/4$, for any $h,k,s,a$, it follows that
\begin{align*}
&~\mathbf{1}\{\mathcal{G}_k\}\langle\hat{P}_{h,s,a}- P_{h,s,a},V^*_{h+1}-\overline{V}_{h+1}\rangle\le 4H\sqrt{\frac{SL}{n(h,s,a) + 1}}.
\end{align*}
\end{lemma}
\begin{proof}
Firstly, applying H\"older's inequality, we get
\begin{align*}
&~\mathbf{1}\{\mathcal{G}_k\}\langle\hat{P}_{h,s,a}- P_{h,s,a},V^*_{h+1}-\overline{V}_{h+1}\rangle \le \|\hat{P}_{h,s,a}- P_{h,s,a}\|_1\|\mathbf{1}\{\mathcal{G}_k\}(V^*_{h+1}-\overline{V}_{h+1})\|_\infty.
\end{align*}
Since Lemma \ref{lem: est Q function bounded} implies $\|\mathbf{1}\{\mathcal{G}_k\}(V_{h+1}^*-\overline{V}_{h+1})\|_\infty \le H$, it suffices to bound $\|\hat{P}_{h,s,a}- P_{h,s,a}\|_1$. Note that for any vector $v\in\mathbb{R}^S$, we have $$\|v\|_1=\sup_{u\in\{-1,+1\}^S}u^\top v.$$
Hence, we will prove the concentration for $u^\top(\hat{P}_{h,s,a}- P_{h,s,a})$.
If the visiting count $n^k(h,s,a)=0$, then $\|\hat{P}_{h,s,a}- P_{h,s,a}\|_1\le 2\le 4\sqrt{SL}$, so the final bound holds. Now we consider the case $n^k(h,s,a)\ge 1$. Fix $h,s,a$ and $u\in\{-1,+1\}^S$, and suppose the number of visits to $(h,s,a)$ before episode $k$ is some $n>0$ (i.e., $n^k(h,s,a)=n$; for simplicity we still write $n^k(h,s,a)$ in the analysis below). Applying Hoeffding's inequality to the transition probabilities obtained from these $n$ observations, with probability at least $1-\delta'$, we have
\begin{align*}
u^\top\left( \frac{1}{n^k(h,s,a)}\sum_{l=1}^{k-1}\mathbf{1}\{(s_h^l,a_h^l,s_h^{l+1})=(s,a,\cdot)\} - P_{h,s,a}(\cdot)\right)\le 2\sqrt{\frac{\log(2/\delta')}{2n^k(h,s,a)}}.
\end{align*}
This is because $u^\top \left(\frac{1}{n^k(h,s,a)}\sum_{l=1}^{k-1}\mathbf{1}\{(s_h^l,a_h^l,s_h^{l+1})=(s,a,\cdot)\}\right) $ is the average of i.i.d. random variables $u^\top \mathbf{e}_{s'}$ with bounded range $[-1,1]$. Notice here we have fixed $n^k(h,s,a)=n$, so $n^k(h,s,a)$ is not a random variable.
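For completeness, the form of Hoeffding's inequality used here: for i.i.d. random variables $X_1,\dots,X_n$ taking values in $[-1,1]$,
\begin{align*}
\mathbb{P}\del{\frac{1}{n}\sum_{l=1}^{n}X_l - \mathbb{E}\sbr{X_1} \ge t} \le \exp\del{-\frac{nt^2}{2}},
\end{align*}
and choosing $t = \sqrt{2\log(2/\delta')/n} = 2\sqrt{\log(2/\delta')/(2n)}$ recovers the display above with probability at least $1-\delta'$.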
By triangle inequality, we have
\begin{align*}
&~\left|\hat{P}^k_{h,s,a}(\cdot)-\frac{1}{n^k(h,s,a)}\sum_{l=1}^{k-1}\mathbf{1}\{(s^l_h,a^l_h,s_{h+1}^l)=(s,a,\cdot)\} \right|\\
=&~\frac{1}{n^k(h,s,a)(n^k(h,s,a)+1)}\sum_{l=1}^{k-1}\mathbf{1}\{(s_h^l,a_h^l,s_h^{l+1})=(s,a,\cdot)\}\\
\le&~\frac{1}{n^k(h,s,a)},
\end{align*}
where the last step is by noticing visiting $(s_h^l,a_h^l,s_h^{l+1})=(s,a,\cdot)$ implies visiting $(h,s,a)$.
Therefore, we get
\begin{align*}
u^\top\left( \hat P^k_{h,s,a} - P^k_{h,s,a}\right)\le 3\sqrt{\frac{\log(2/\delta')}{2n^k(h,s,a)}}.
\end{align*}
Finally, union bounding over all $h,s,a$, $u\in\{-1,+1\}^S$, and $n^k(h,s,a)\in[K]$, and setting $\delta'=\delta/(2^SSAT)$, we get
\begin{align*}
u^\top\left( \hat P^k_{h,s,a} - P^k_{h,s,a}\right)\le 3\sqrt{\frac{SL}{n^k(h,s,a)}}\le 4\sqrt{\frac{SL}{n^k(h,s,a) + 1}}.
\end{align*}
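To see where the $\sqrt{S}$ factor inside the square root comes from: the union is over at most $2^S\cdot SAT$ events (the $2^S$ sign vectors $u$, the tuples $(h,s,a)$, and the possible counts $n^k(h,s,a)\in[K]$, using $HK=T$), so with $\delta'=\delta/(2^S SAT)$,
\begin{align*}
\log\frac{2}{\delta'} = S\log 2 + \log\frac{2SAT}{\delta} \le S\log\frac{40SAT}{\delta} = SL.
\end{align*}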
This implies $\|\hat{P}^k_{h,s,a}- P_{h,s,a}\|_1\le 4\sqrt{\frac{SL}{n^k(h,s,a)+ 1}}$, which completes the proof.
\end{proof}
The following Lemma \ref{lem: estimation decomp} is the $V$-function counterpart of the $Q$-function result in Lemma \ref{lem: Decomposition Lemma}. It is applied in the proof of the final regret decomposition in Theorem~\ref{thm: Regret main result}.
\begin{lemma}\label{lem: estimation decomp}
With probability at least $1-\delta/4$, for any $h,k,s_h,a_h$, the following decomposition holds
\begin{align*}
& ~\mathbf{1}\{\mathcal{G}_k\}\left[\overline{V}_{h}(s_h) - V^{\pi}_{h}(s_h)\right] \\
\leq& ~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left(\mathcal{P}_{h,s_h,a_h} +\mathcal{R}_{h,s_h,a_h} + w_{h,s_h,a_h} +\overline{\delta}^{\pi}_{h+1}(s_{h+1})+ \mathcal{M}_{\deltaEPi{h}}
+ 4H\sqrt{\frac{SL}{n(h,s_h,a_h) + 1}}\right)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\end{align*}
\end{lemma}
\begin{proof}
With $a_h$ as the action taken by the algorithm, i.e., $a_h=\pi(s_h)$, it follows that $\overline{V}_{h}(s_h) = \overline{Q}_{h}(s_h,a_h)$ and $V^{\pi}_{h}(s_h) = Q^{\pi}_{h}(s_h,a_h)$. Thus, the proof follows by a direct application of Lemma~\ref{lem: Decomposition Lemma}.
\end{proof}
\subsection{Bound on the Pessimism Term}\label{sec: Bounds on Pessimism}
In this section, we upper bound the pessimism term with the help of the probability of being optimistic and the bound on the estimation term. The approach generally follows Lemma~G.4 of~\cite{zanette2020frequentist}. The difference here is that we also provide a bound for $V_{1}^{*}(s^k_1) - \underline{V}_{1,k}(s^k_1)$. This difference enables us to obtain stronger bounds in the tabular setting as compared to~\cite{zanette2020frequentist}. The pessimism term will be decomposed into the two estimation terms $\overline{V}^{}_{1,k}(s^k_1) - V^{\pi^k}_{1}(s^k_1)$ and $V_{1}^{\pi^k}(s^k_1)-\underline{V}^{}_{1,k}(s^k_1)$, and the martingale difference term $\MDSfkind{1}$.
\begin{lemma}[Restatement of Lemma~\ref{lem: pessimism decomp main}]\label{lem: pessimism decomp}
For any $k$, the following decomposition holds,
\begin{align}\label{eq:lem9-1-a}
&~\mathbf{1}\{\mathcal{G}_k\}\left(V_{1}^{*}(s^k_1) - \overline{V}^{}_{1,k}(s^k_1)\right)\leq \mathbf{1}\{\mathcal{G}_k\}\left(V_{1}^{*}(s^k_1) - \underline{V}^{}_{1,k}(s^k_1)\right) \nonumber\\
\leq &~C \mathbf{1}\{\mathcal{G}_k\}\left(\overline{V}^{}_{1,k}(s^k_1) - V^{\pi^k}_{1}(s^k_1)+V_{1}^{\pi^k}(s^k_1)-\underline{V}^{}_{1,k}(s^k_1)+ \MDSfkind{1}\right),
\end{align}
where $\mathbf{1}\{\mathcal{G}_k\}\left[V_{1}^{\pi^k}(s^k_1)-\underline{V}^{}_{1,k}(s^k_1)\right]$ will be further bounded in Lemma~\ref{lem:est-underV}.
\end{lemma}
\begin{proof}
For the purpose of analysis we use two ``virtual'' quantities $\tilde{V}^{}_{1,k}(s^k_1)$ and $\underline{V}^{}_{1,k}(s^k_1)$, which are formally stated in Definitions \ref{def: tilde V} and \ref{def: under V}, respectively. Thus we can define the event $\tilde{\mathcal{O}}_{1,k} \overset{def}{=} \left\{ \tilde{V}^{}_{1,k}(s^k_1) \geq V^*_{1}(s^k_1) \right\}$. For simplicity of exposition, we drop the dependence on $k$ in the following when it is clear.
By Definition~\ref{def: tilde V}, we know that $\overline{V}_1(s_1)$ and $\tilde{V}_1(s_1)$ are identically distributed conditioned on last round history $\mathcal{H}^{k-1}_H$. From Definition~\ref{def: under V}, under event $\mathcal{G}_k$, it also follows that $\underline{V}_1(s_1) \leq \overline{V}_1(s_1)$.
Since $\underline{V}_1(s_1) \leq \overline{V}_1(s_1)$ under event $\mathcal{G}_k$, we get
\begin{align}
\mathbf{1}\{\mathcal{G}_k\}\left[V^*_1(s_1) - \overline{V}_1(s_1)\right] \leq \mathbf{1}\{\mathcal{G}_k\}\left[V^*_1(s_1) - \underline{V}_1(s_1)\right].\label{eq: lemma pes 1}
\end{align}
We also introduce the notation $\mathbb{E}_{\tilde w}\sbr{\cdot}$ to denote the expectation over the pseudo-noise $\tilde w$ (recall that $\tilde w$ is discussed in Definition \ref{def: tilde V}). Under event $\tilde{\mathcal{O}}_{1}$, we have $\tilde{V}_{1}(s_1) \geq V^*_{1}(s_1)$. Since $V^*_{1}(s_1)$ does not depend on $\tilde{w}$, we get $V^*_1(s_1) \leq \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\sbr{ \tilde{V}_1(s_1)}$.
Using a similar argument for $\underline{V}_1(s_1)$, we know that $\underline{V}_1(s_1)=\mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\left[\underline{V}_1(s_1)\right]$. Subtracting this equality from the inequality $V^*_1(s_1) \leq \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\sbr{ \tilde{V}_1(s_1)}$, it follows that
\begin{align}
V^*_1(s_1) - \underline{V}_1(s_1) \leq \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\left[ \tilde{V}_1(s_1)-\underline{V}_1(s_1)\right]\label{eq: pessimism 3 old}.
\end{align}
Therefore, we have
\begin{align*}
\mathbf{1}\{\mathcal{G}_k\}\left[V^*_1(s_1) - \underline{V}_1(s_1)\right] \leq\mathbf{1}\{\mathcal{G}_k\} \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\left[\tilde{V}_1(s_1)-\underline{V}_1(s_1)\right].
\end{align*}
From the law of total expectation, we can write
\begin{align}
&~\mathbb{E}_{\tilde{w}|\tilde{\mathcal G}_k}\left[ \tilde{V}_1(s_1)-\underline{V}_1(s_1)\right] \nonumber\\
=&~ \mathbb{P}(\tilde{\mathcal{O}}_{1}|\tilde{\mathcal G}_k)\mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal G}_k}\left[ \tilde{V}_1(s_1)-\underline{V}_1(s_1)\right] + \mathbb{P}(\tilde{\mathcal{O}}_{1}^{\complement}|\tilde{\mathcal G}_k)\mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1}^{\complement},\tilde{\mathcal G}_k}\left[ \tilde{V}_1(s_1)-\underline{V}_1(s_1)\right].\label{eq: law of total exp}
\end{align}
Since $\tilde{V}_1(s_1)-\underline{V}_1(s_1) \geq 0$ under event $\tilde{\mathcal{G}}_k$, multiplying both sides of Eq~(\ref{eq: law of total exp}) by $\mathbf{1}\{\mathcal{G}_k\}$, relaxing the second term on RHS to 0 and rearranging yields
\begin{align}
\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\left[ \tilde{V}_1(s_1)-\underline{V}_1(s_1)\right] \leq \frac{1}{\mathbb{P}(\tilde{\mathcal{O}}_{1}|\tilde{\mathcal{G}}_k)}\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}_{\tilde{w}|\tilde{\mathcal{G}}_k}\left[\tilde{V}_1(s_1)-\underline{V}_1(s_1)\right].\label{eq: pessimism 4}
\end{align}
Noticing that $\tilde{V}$ is an independent copy of $\overline{V}$, we can invoke Lemma \ref{lem: Optimism Main} for $\tilde{V}$, and it follows that $\mathbb{P}(\tilde{\mathcal{O}}_{1}|\tilde{\mathcal{G}}_k)\geq \Phi(-\sqrt 2)/2$. Set $C= \frac{1}{\Phi(-\sqrt 2)/2}$ and consider
\begin{align}
\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}_{\tilde{w}|\tilde{\mathcal{G}}_k}\left[\tilde{V}_1(s_1)-\underline{V}_1(s_1)\right]
= &~ \underbrace{\mathbf{1}\{\mathcal{G}_k\}\left(\mathbb{E}_{\tilde{w}|\tilde{\mathcal{G}}_k}\left[ \tilde{V}_1(s_1)\right] -\overline{V}_1(s_1)\right)}_{(1)} + \,\mathbf{1}\{\mathcal{G}_k\} \underbrace{\left(\overline{V}_1(s_1) -\underline{V}_1(s_1)\right)}_{(2)},
\label{eq: pessimism 1}
\end{align}
where the equality is due to the fact that $\tilde w$ is independent of $\underline{V}_1(s_1)$.
Since $\overline{V}_1(s_1)$ and $\tilde{V}_1(s_1)$ are identically distributed by definition, we will later show in Lemma~\ref{lem: MDS concentration} that term (1), $\mathbf{1}\{\mathcal{G}_k\}\left[\mathbb{E}_{\tilde w|\tilde{\mathcal{G}}_k}\left[\tilde{V}_1(s_1)\right] -\overline{V}_1(s_1)\right]:=\MDSfind{1}$, is a martingale difference sequence. Term (2) can be further decomposed as
\begin{align}
\overline{V}_1(s_1) -\underline{V}_1(s_1)
= \underbrace{\overline{V}_1(s_1) - V^{\pi}_1(s_1)}_{(3)} +\, \underbrace{V^{\pi}_1(s_1) -\underline{V}_1(s_1)}_{(4)}.
\label{eq: pessimism 2}
\end{align}
Term (3) in Eq~(\ref{eq: pessimism 2}) is the same as the {\sl estimation} term in Lemma~\ref{lem: estimation decomp}. For term (4), for clarity, we show a bound separately in Lemma~\ref{lem:est-underV}.
Combining Eq~(\ref{eq: pessimism 4}), (\ref{eq: pessimism 1}), and (\ref{eq: pessimism 2}) gives us that
\begin{align*}
&~\mathbf{1}\{\mathcal{G}_k\} \mathbb{E}_{\tilde{w}|\tilde{\mathcal{O}}_{1},\tilde{\mathcal{G}}_k}\left[\tilde{V}_1(s_1)-\underline{V}_1(s_1)\right]\nonumber\\
\leq&~ C \mathbf{1}\{\mathcal{G}_k\}\left(V_{1}^{\pi^k}(s^k_1)-\underline{V}^{}_{1,k}(s^k_1)+\overline{V}^{}_{1,k}(s^k_1) - V^{\pi^k}_{1}(s^k_1) + \MDSfkind{1}\right).
\end{align*}
This completes the proof.
\end{proof}
In Lemma~\ref{lem:est-underV}, we provide the missing piece of Lemma~\ref{lem: pessimism decomp}. It is applied when we carry out the regret decomposition of the major term in Theorem~\ref{thm: Regret main result}.
\begin{lemma}\label{lem:est-underV}
With probability at least $1-\delta/4$, for any $h,k,s_h^k,a_h^k$, the following decomposition holds under the intersection event $\mathcal{G}_k$
\begin{align}\label{eq:lem9-2}
&~\mathbf{1}\{\mathcal{G}_k\}\left[V_{h}^{\pi^k}(s^k_h)-\underline{V}_{h,k}^{}(s^k_h)\right] \\
\leq&~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left(-\mathcal{P}^k_{h,s^k_h,a^k_h} -\mathcal{R}^k_{h,s^k_h,a^k_h} - \underline{w}^k_{h,s^k_h,a^k_h} +\underline{\delta}^{\pi^k}_{h+1,k}(s^k_{h+1})+ \mathcal{M}_{\deltaPiUk{h}}
+ 4H\sqrt{\frac{SL}{n^k(h,s_h^k,a_h^k) + 1}}\right)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.\nonumber
\end{align}
\end{lemma}
\begin{proof}
We continue by bounding term (4) in Lemma~\ref{lem: pessimism decomp}; we again drop the superscript $k$ here.
Since $a_h$ is the action chosen by the algorithm, i.e., $a_h = \pi(s_h)$, we have $V^{\pi}_h(s_h) = Q^{\pi}_h(s_h,a_h)$. By the definition of the value function, $\underline{V}_h(s_h) =\max_{a\,\in\,\mathcal{A}}\underline{Q}_h(s_h,a)$. This gives $\underline{Q}_h(s_h,a_h)\leq \underline{V}_h(s_h)$. Hence,
\begin{equation*}
V^{\pi}_h(s_h) -\underline{V}_h(s_h) = Q^{\pi}_h(s_h,a_h) -\underline{V}_h(s_h) \leq Q^{\pi}_h(s_h,a_h) -\underline{Q}_h(s_h,a_h).
\end{equation*}
From the definition of $\underline V_h$, we know that its noise satisfies $|\underline w(h,s,a)|\le \gamma(h,s,a)$. Therefore, we can show a version of Lemma~\ref{lem: est Q function bounded} for $\underline V_h$ and obtain $\|\mathbf{1}\{\mathcal{G}_k\}(V_{h+1}^*-\underline V_{h+1})\|_\infty \le H$. This implies that the version of Lemma~\ref{lem: Decomposition supporting Lemma} for $\underline V_h$ holds as well. Since the decomposition and techniques in Lemma~\ref{lem: Decomposition Lemma} only use the facts that $\overline{Q}_{h}$ is the solution of the Bayesian linear regression and that $Q^\pi_h$ satisfies the Bellman equation, we directly obtain another version for the instance $\underline{Q}_{h}$. Noting also that the sign of $V_h^\pi(s_h)-\underline{V}_h(s_h)$ is flipped, we obtain the following decomposition for term (4) in Lemma~\ref{lem: pessimism decomp}
\begin{align*}
&~\mathbf{1}\{\mathcal{G}_k\}\left[V_h^{\pi}(s_h)-\underline{V}_h(s_h)\right]\nonumber \\
\leq &~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}^{\text{th}}_{h,k}\}\left(-\mathcal{P}_{h,s_h,a_h} -\mathcal{R}_{h,s_h,a_h} - \underline {w}_{h,s_h,a_h} +\underline{\delta}^{\pi}_{h+1}(s_{h+1})+ \mathcal{M}_{\deltaPiU{h}}
+ 4H\sqrt{\frac{SL}{n(h,s_h,a_h) + 1}}\right)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\end{align*}
\end{proof}
\subsection{Final Bound on Theorem~\ref{thm: Regret main result}}\label{sec:pf_final}
Armed with all the supporting lemmas, we present the remaining proof of Theorem~\ref{thm: Regret main result}.
\begin{proof}
Recall that in the regret decomposition Eq~(\ref{eq: regret decom main}), it remains to bound
\begin{align*}
\sum_{k=1}^{K} \mathbf{1}\{\mathcal{G}_k\}\left( V_{1}^{*}(s_1^k) - \overline{V}^{}_{1,k}(s_1^k)+ \overline{V}^{}_{1,k}(s_1^k) - V_{1}^{\pi^k}(s_1^k)\right)\nonumber.
\end{align*}
Again, we will suppress the dependence on $k$ when it is clear from context. For each episode $k$, it suffices to bound \begin{align}
&~\mathbf{1}\{\mathcal{G}_k\}\left( V_{1}^{*}(s_1) - \overline{V}^{}_{1}(s_1)+ \overline{V}^{}_{1}(s_1) - V_{1}^{\pi}(s_1)\right) \nonumber \\
\leq&~ \mathbf{1}\{\mathcal{G}_k\}\left[V_{1}^{*}(s_1) - \overline{V}_{1}(s_1)\right] + \mathbf{1}\{\mathcal{G}_k\}\left[\overline{V}_{1}(s_1) - V^{\pi}_{1}(s_1)\right]\nonumber\\
=&~ \mathbf{1}\{\mathcal{G}_k\}\deltaEO{1} + \mathbf{1}\{\mathcal{G}_k\}\deltaEPi{1}.\label{eq: thm proof 1}
\end{align}
We first use Lemma~\ref{lem: pessimism decomp} to relax the first term in Eq~(\ref{eq: thm proof 1}). Applying Eq~(\ref{eq:lem9-1-a}) in Lemma~\ref{lem: pessimism decomp} gives us the following
\begin{align}
&~\mathbf{1}\{\mathcal{G}_k\}\deltaEO{1}\nonumber\\
=&~ \mathbf{1}\{\mathcal{G}_k\}\left[V_{1}^{*}(s_1) - \overline{V}_{1}(s_1)\right]\nonumber\\
\leq&~ C \mathbf{1}\{\mathcal{G}_k\}\left(V_{1}^{\pi}(s_1)-\underline{V}_{1}(s_1)+\overline{V}_{1}(s_1) - V^{\pi}_{1}(s_1) + \MDSfind{1}\right)\nonumber\\
=&~ C \mathbf{1}\{\mathcal{G}_k\}\left(\deltaEPi{1} + \deltaPiU{1}+\MDSfind{1}\right).
\label{eq: bounds on pes 3}
\end{align}
Combining Eq~(\ref{eq: bounds on pes 3}) and Eq~(\ref{eq: thm proof 1}), we get
\begin{align}
&~\mathbf{1}\{\mathcal{G}_k\}\left( V_{1}^{*}(s_1) - \overline{V}^{}_{1}(s_1)+ \overline{V}^{}_{1}(s_1) - V_{1}^{\pi}(s_1)\right) \nonumber \\
\leq&~ (C+1)\mathbf{1}\{\mathcal{G}_k\}\deltaEPi{1} + C\mathbf{1}\{\mathcal{G}_k\}\left(\mathcal{M}_1^w+\deltaPiU{1}\right).\label{eq:decdd}
\end{align}
We will bound the first and second terms in Eq~(\ref{eq:decdd}) in turn. In the sequel, we always consider the case that Lemma~\ref{lem:est-underV} and Lemma~\ref{lem: estimation decomp} hold. Therefore, the following holds with probability at least $1-\delta/4-\delta/4=1-\delta/2$.
For the $\deltaPiU{1}$ term in Eq~(\ref{eq:decdd}), applying Eq~(\ref{eq:lem9-2}) in Lemma~\ref{lem:est-underV} yields
\begin{align}\label{eq:bound_est_1}
&~ \mathbf{1}\{\mathcal{G}_k\}\deltaPiU{1}\nonumber\\
=&~\mathbf{1}\{\mathcal{G}_k\}\left[V_1^{\pi}(s_1)-\underline{V}_1(s_1)\right] \nonumber\\
\leq&~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}_{1,k}^{\text{th}}\}\left(\envert{\PDiffind{1}+\RDiffind{1}} + \envert{\noiseUind{1}} +\underline{\delta}^{\pi}_{2}(s_{2})+ \MDScind{1}
+ 4H\sqrt{\frac{SL}{n(1,s_1,a_1) + 1}}\right)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\end{align}
For the $\deltaEPi{1}$ term in Eq~(\ref{eq:decdd}), applying Lemma~\ref{lem: estimation decomp} yields
\begin{align}
&~ \mathbf{1}\{\mathcal{G}_k\}\deltaEPi{1}\nonumber \\
=&~ \mathbf{1}\{\mathcal{G}_k\}\left[\overline{V}_{1}(s_1) - V^{\pi}_{1}(s_1)\right] \nonumber\\
\leq& ~\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}_{1,k}^{\text{th}}\}\left(\left|\PDiffind{1}+\RDiffind{1} \right| +\noiseind{1} +\overline{\delta}^{\pi}_{2}(s_{2})+ \MDSbind{1}
+ 4H\sqrt{\frac{SL}{n(1,s_1,a_1) + 1}}\right)+H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\label{eq:bound_est_2}
\end{align}
Plugging Eq~(\ref{eq:bound_est_1}) and (\ref{eq:bound_est_2}) into Eq~(\ref{eq:decdd}) gives us, with probability at least $1-\delta/2$,
\begin{align}
&~\mathbf{1}\{\mathcal{G}_k\}\left( V_{1}^{*}(s_1) - \overline{V}^{}_{1}(s_1)+ \overline{V}^{}_{1}(s_1) - V_{1}^{\pi}(s_1)\right) \nonumber \\
\leq&~ (C+1)\mathbf{1}\{\mathcal{G}_k\}\deltaEPi{1} + C\mathbf{1}\{\mathcal{G}_k\}\left(\mathcal{M}_1^w+\deltaPiU{1}\right)\nonumber\\
\leq&~ C\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}_{1,k}^{\text{th}}\}\left(\envert{\PDiffind{1}+\RDiffind{1}} + \envert{\noiseUind{1}} +\underline{\delta}^{\pi}_{2}(s_{2})+ \MDScind{1}
+ 4H\sqrt{\frac{SL}{n(1,s_1,a_1) + 1}}\right)+CH \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}\nonumber\\
& + (C+1)\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}_{1,k}^{\text{th}}\}\left(\left|\PDiffind{1}+\RDiffind{1}\right| + \noiseind{1} +\overline{\delta}^{\pi}_{2}(s_{2})+ \MDSbind{1}
+ 4H\sqrt{\frac{SL}{n(1,s_1,a_1) + 1}}\right) \nonumber\\
&+(C+1)H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}+ C\mathbf{1}\{\mathcal{G}_k\}\MDSfind{1}\nonumber\\
=&~C\mathbf{1}\{\mathcal{G}_k\}\deltaPiU{2}+(C+1)\mathbf{1}\{\mathcal{G}_k\}\deltaEPi{2}+ C\mathbf{1}\{\mathcal{G}_k\}\MDSfind{1}\nonumber\\
& + C\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}_{1,k}^{\text{th}}\}\left(\envert{\PDiffind{1}+\RDiffind{1}} + \envert{\noiseUind{1}} + \MDScind{1}
+ 4H\sqrt{\frac{SL}{n(1,s_1,a_1) + 1}}\right)+CH \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}\nonumber\\
& + (C+1)\mathbf{1}\{\mathcal{G}_k\}\mathbf{1}\{\mathcal{E}_{1,k}^{\text{th}}\}\left(\left|\PDiffind{1}+\RDiffind{1}\right| + \noiseind{1} + \MDSbind{1}
+ 4H\sqrt{\frac{SL}{n(1,s_1,a_1) + 1}}\right)+(C+1)H \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}.
\label{eq:one_step_dec}
\end{align}
Unrolling Eq~(\ref{eq:one_step_dec}) down to timestep $H$ and noting that $\deltaPiU{H+1}=\deltaEPi{H+1}=0$ and $\MDSfind{H+1}=0$ yields that, with probability at least $1-\delta/2$,
\begin{align}
&~\mathbf{1}\{\mathcal{G}_k\}\left[V_{1}^{*}(s_1) - V^{\pi}_{1}(s_1)\right] \nonumber \\
\le &~(2C+1)H^2 \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}+C\MDSfind{1}\nonumber\\
&~+C\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\left(\envert{\mathcal{P}_{h,s_h,a_h}+\mathcal{R}_{h,s_h,a_h}}+ \envert{\underline{w}_{h,s_h,a_h}} + \mathcal{M}_{\deltaPiU{h}} +4H\sqrt{\frac{SL}{n(h,s_h,a_h)+1}}\right)\nonumber\\
&~+\del{C+1}\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\left(\envert{\mathcal{P}_{h,s_h,a_h}+\mathcal{R}_{h,s_h,a_h}} + w_{h,s_h,a_h} + \mathcal{M}_{\deltaEPi{h}}+4H\sqrt{\frac{SL}{n(h,s_h,a_h) + 1}}\right).
\label{eq:final_+dec}
\end{align}
It suffices to bound each individual term in Eq~(\ref{eq:final_+dec}); the sum over $k$ will be taken at the end.
Lemma~\ref{lem: estimation error} gives us the bound on the transition and reward estimation errors
\begin{equation*}
\sum_{k=1}^K\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\} \envert{\mathcal{P}^k_{h,s^k_h,a^k_h}+\mathcal{R}^k_{h,s^k_h,a^k_h}} = \tilde{\mathrm{O}}(\sqrt{H^3SAT}).
\end{equation*}
Following the steps in Lemma~\ref{lem: estimation error}, we also get the bound
\begin{equation}
\sum_{k=1}^K\sum_{h=1}^H H\sqrt{\frac{SL}{n^k(h,s^k_h,a^k_h)+1}} = \tilde{\mathrm{O}}\del{H^{\nicefrac{3}{2}}S\sqrt{AT}}.
\end{equation}
Lemma~\ref{lem: MDS concentration} bounds the martingale difference sequences. Replacing $\delta$ by $\delta'$ in Lemma~\ref{lem: MDS concentration} gives us that with probability at least $1-\delta'$,
\begin{align*}
&\envert{\sum_{k=1}^K\mathbf{1}\{\mathcal{G}_k\}\sum_{h=1}^{H}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaPiUk{h}}} = \tilde{\mathrm{O}}(H\sqrt{T})\\
&\envert{\sum_{k=1}^K\mathbf{1}\{\mathcal{G}_k\}\sum_{h=1}^{H}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}} = \tilde{\mathrm{O}}(H\sqrt{T})\\
&\envert{\sum_{k=1}^K\mathbf{1}\{\mathcal{G}_k\}\MDSfkind{1}} = \tilde{\mathrm{O}}(H\sqrt{T}).
\end{align*}
For the noise term, we first notice that under the event $\mathcal{G}_k$, $w^k_{h,s^k_h,\pi^k(s^k_h)}$ can be upper bounded by $\envert{\underline{w}^k_{h,s^k_h,\pi^k(s^k_h)}}$. Applying Lemma~\ref{lem: estimation non random noise} (with $\delta$ replaced by $\delta'$) to both terms gives us, with probability at least $1-2\delta'$,
\begin{align*}
&\sum_{k=1}^K\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\} w^k_{h,s^k_h,\pi^k(s^k_h)} = \tilde{\mathrm{O}}(H^2S\sqrt{AT})
\end{align*}
and
\begin{align*}
&\sum_{k=1}^K\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\}
\envert{\underline{w}^k_{h,s^k_h,\pi^k(s^k_h)}} = \tilde{\mathrm{O}}(H^2S\sqrt{AT}).
\end{align*}
The warm-up regret term is bounded in Lemma~\ref{lem: warmup bound}
\begin{align*}
&~H^2\sum_{k=1}^{K} \mathbf{1}\{\mathcal{E}^{\text{th}\;\complement}_k\}=\tilde{\mathrm{O}}(H^5S^2A).
\end{align*}
Putting all these pieces together and setting $\delta'=\delta/12$ yields that, with probability at least $1-\delta$,
\begin{equation}\label{eq: high probability regret}
\sum_{k=1}^K \mathbf{1}\{\mathcal{G}_k\}\left( V_{1}^{*}(s_1^k) - \overline{V}^{}_{1,k}(s_1^k)+ \overline{V}^{}_{1,k}(s_1^k) - V_{1}^{\pi^k}(s_1^k)\right) = \tilde{\mathrm{O}}\del{H^2S\sqrt{AT}+H^5S^2A}.
\end{equation}
This completes the proof of Theorem~\ref{thm: Regret main result}.
\end{proof}
\section{Bounds on Individual Terms}\label{sec: bounds on individual terms}
\subsection{Bound on the Noise Term}
\begin{lemma}\label{lem: estimation non random noise}
With $\underline{w}^k_{h,s^k_h,a^k_h}$ as defined in Definition~\ref{def: under V} and $a^k_h=\pi^k(s^k_h)$, the following bound holds:
\begin{equation*}
\sum_{k=1}^K\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\} \envert{\underline{w}^k_{h,s^k_h,\pi^k(s^k_h)}} = \tilde{\mathrm{O}}\del{H^2S\sqrt{AT}} .
\end{equation*}
\end{lemma}
\begin{proof}
We have:
\begin{align*}
&\sum_{k=1}^K\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\} \envert{\underline{w}^k_{h,s_h^k,\pi^k(s_h^k)}} \leq\sqrt{\frac{\beta_KL}{2}}\sum_{k=1}^K\sum_{h=1}^{H}\sqrt{\frac{1}{n^k(h,s_h^k,a_h^k)+1}}
= \sqrt{\frac{\beta_KL}{2}} \sum_{h,s,a}\sum^{n^K(h,s,a)}_{n=1}\sqrt{\frac{1}{n}},
\end{align*}
where the first step drops the indicator and uses that $\beta_k\leq\beta_K$.
Upper bounding the sum by an integral, followed by an application of the Cauchy-Schwarz inequality, gives:
\begin{align*}
&\sum_{h,s,a}\sum^{n^K(h,s,a)}_{n=1}\sqrt{\frac{1}{n}}
\leq \sum_{h,s,a}\int^{n^K(h,s,a)}_{0}\sqrt{\frac{1}{ x}}\, dx = 2\sum_{h,s,a}\sqrt{n^K(h,s,a)} \leq 2\sqrt{HSA\sum_{h,s,a}n^K(h,s,a)} = \mathrm{O}\del{\sqrt{HSAT}}.
\end{align*}
This leads to the overall bound $\mathrm{O}\del{\sqrt{\beta_KL}\sqrt{HSAT}} = \tilde{\mathrm{O}}\del{H^2S\sqrt{AT}}$.
\end{proof}
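As a quick sanity check (a minimal Python sketch, not part of the formal argument), the two elementary estimates used above, namely the integral comparison $\sum_{n=1}^N n^{-1/2}\le 2\sqrt{N}$ and the Cauchy-Schwarz step, can be verified numerically; the random counts below are an arbitrary illustration, since both inequalities hold for every choice of counts.

```python
import math
import random

# Integral comparison: sum_{n=1}^{N} n^{-1/2} <= int_0^N x^{-1/2} dx = 2*sqrt(N).
for N in [1, 10, 1000]:
    s = sum(1.0 / math.sqrt(n) for n in range(1, N + 1))
    assert s <= 2.0 * math.sqrt(N)

# Cauchy-Schwarz: sum_i sqrt(m_i) <= sqrt(#terms * sum_i m_i); here m_i plays
# the role of the final visit counts n^K(h,s,a), with #terms = H*S*A.
random.seed(0)
m = [random.randint(0, 50) for _ in range(200)]
lhs = sum(math.sqrt(x) for x in m)
rhs = math.sqrt(len(m) * sum(m))
assert lhs <= rhs
```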
\subsection{Bound on Estimation Error}
\begin{lemma}\label{lem: estimation error}
For $a^k_h=\pi^k(s^k_h)$, the following bound holds
\begin{equation*}
\sum_{k=1}^K\sum_{h=1}^{H}\mathbf{1}\{\mathcal{G}_k\} \envert{\hat{R}^k_{h,s^k_h,a^k_h}-R_{h,s^k_h,a^k_h} + \langle\hat{P}^k_{h,s^k_h,a^k_h}- P_{h,s^k_h,a^k_h},V^*_{h+1}\rangle} = \tilde{\mathrm{O}}\del{H^{\nicefrac{3}{2}}\sqrt{SAT}}.
\end{equation*}
\end{lemma}
\begin{proof}
Under the event $\mathcal{G}_k$, the estimated MDP $\hat M^k$ lies in the confidence set defined in Appendix~\ref{sec: notations}. Hence
\begin{equation*}
\envert{\hat{R}^k_{h,s^k_h,a^k_h}-R_{h,s^k_h,a^k_h} + \langle\hat{P}^k_{h,s^k_h,a^k_h}- P_{h,s^k_h,a^k_h},V^*_{h+1}\rangle} \leq \sqrt{e_{k}(h,s^k_h,a^k_h)},
\end{equation*}
where $\sqrt{e_{k}(h,s^k_h,a^k_h)} =
H\sqrt{ \frac{ \log\left( 2HSA k \right) }{n^k(h,s^k_h,a^k_h)+1}}$.
We bound the sum of the inverse square-root counts as
\begin{align*}
&~\sum_{k=1}^K\sum_{h=1}^{H}\sqrt{\frac{1}{n^k(h,s_h^k,a_h^k)+1}} \\
\leq & ~\sum_{h,s,a}\sum^{n^K(h,s,a)}_{n=1}\sqrt{\frac{1}{n}}\\
\leq&~ \sum_{h,s,a}\,\int^{n^K(h,s,a)}_{0}\sqrt{\frac{1}{ x}}dx \\
\leq &~2\sum_{h,s,a}\sqrt{n^K(h,s,a)} \\
\overset{a}{\leq} &~2\sqrt{HSA\sum_{h,s,a}n^K(h,s,a)} \\
=& ~\mathrm{O}(\sqrt{HSAT}),
\end{align*}
where step $(a)$ follows from the Cauchy-Schwarz inequality.
Therefore we get
\begin{align*}
\sum_{k=1}^K\sum_{h=1}^{H}\sqrt{e_{k}(h,s^k_h,a^k_h)} =
\tilde{\mathrm{O}}\del{H\sqrt{HSAT}} = \tilde{\mathrm{O}}\del{H^{\nicefrac{3}{2}}\sqrt{SAT}}.
\end{align*}
\end{proof}
\subsection{Bounds on Martingale Difference Sequences}
\begin{lemma}\label{lem: MDS concentration}
The following martingale difference summations enjoy the specified upper bounds with probability at least $1-\delta$,
\begin{gather*}
\envert{\sum_{k=1}^K\mathbf{1}\{\mathcal{G}_k\}\sum_{h=1}^{H}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaPiUk{h}}} = \tilde{\mathrm{O}}(H\sqrt{T})\\
\envert{\sum_{k=1}^K\mathbf{1}\{\mathcal{G}_k\}\sum_{h=1}^{H}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}} = \tilde{\mathrm{O}}(H\sqrt{T})\\
\envert{\sum_{k=1}^K\mathbf{1}\{\mathcal{G}_k\}\MDSfkind{1}} = \tilde{\mathrm{O}}(H\sqrt{T}).
\end{gather*}
\end{lemma}
Here,
$\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaPiUk{h}}$ and $\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}$ are considered under the filtration $\overline{\mathcal{H}}^k_h$, while $\MDSfkind{1}$ is considered under the filtration $\mathcal{H}^{k-1}_{H}$. By the definition of martingale difference sequences, we can also drop $\mathbf{1}\{\mathcal{G}_k\}$ in the lemma statement.
\begin{proof}
This proof has two parts: we show that (i) the expressions above are summations of martingale difference sequences, and (ii) these summations concentrate under the event $\mathcal{G}_k$ due to the Azuma-Hoeffding inequality \citep{wainwright_2019}. We only present the proof for $\left\{\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}\right\}$ and $\{\MDSfkind{1}\}$; the remaining one follows likewise.
We first consider the $\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}$ term. Given the filtration $\overline{\mathcal{H}}^k_h$, we observe that
\begin{align*}
\mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\overline{\delta}^{\pi^k}_{h+1,k}(s^k_{h+1}) \bigg| \overline{\mathcal{H}}^k_h} = \mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathbb{E}_{s'\simP^k_{h,s^k_h,\pi^k_h(s^k_h)}}\left[\overline{\delta}^{\pi^k}_{h+1,k}(s') \right]\bigg| \overline{\mathcal{H}}^k_h}.
\end{align*}
This is because, when conditioning on $\overline{\mathcal{H}}^k_h$, the only remaining randomness is due to the random transition of the algorithm. Thus we have $\mathbb{E}\sbr{\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}} \bigg| \overline{\mathcal{H}}^k_h}=0$, and $\left\{\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}\right\}$ is indeed a martingale difference sequence with respect to the filtration $\overline{\mathcal{H}}^k_h$.
Under the event $\mathcal{G}_k$, we also have $\envert{\overline{\delta}^{\pi^k}_{h+1,k}(s^k_{h+1})} = \envert{\overline{V}_{h+1,k}(s_{h+1}^k) - V^{\pi^k}_{h+1}(s_{h+1}^k)} \leq 2H$. Applying the Azuma-Hoeffding inequality (e.g. \cite{azar2017minimax}), for any fixed $K'\in[K]$ and $H'\in[H]$, we have with probability at least $1-\delta'$,
\begin{equation*}
\envert{\sum_{k=1}^{K'}\sum_{h=1}^{H'}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}} \leq H\sqrt{4T\log\del{\frac{2T}{\delta'}}} = \tilde{\mathrm{O}}\del{H\sqrt{T}}.
\end{equation*}
Union bounding over all $K'\in[K]$ and $H'\in[H]$ (which, after rescaling $\delta'$, only affects logarithmic factors), the following holds simultaneously for all $K'\in[K]$ and $H'\in[H]$ with probability at least $1-\delta'$:
\begin{equation*}
\envert{\sum_{k=1}^{K'}\sum_{h=1}^{H'}\prod_{h'=1}^h\mathbf{1}\{\mathcal{E}_{h',k}^{\text{th}}\}\mathcal{M}_{\deltaEPik{h}}} \leq H\sqrt{4T\log\del{\frac{2T}{\delta'}}} = \tilde{\mathrm{O}}\del{H\sqrt{T}}.
\end{equation*}
Next, we consider the $\MDSfkind{1}$ term. Given the filtration $\mathcal{H}^{k-1}_{H}$, we know that $\tilde{V}_{1,k}$ has the same distribution as $\overline{V}_{1,k}$. Therefore, for any state $s$, we have
\begin{equation*}
\mathbb{E}\sbr{\mathbf{1}\{\tilde{\mathcal{G}}_k\}\tilde{V}_{1,k}(s)\bigg| \mathcal{H}^{k-1}_{H}}=\mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\} \overline{V}_{1,k}(s)\bigg| \mathcal{H}^{k-1}_{H}}.
\end{equation*}
Besides, since $\tilde w$ is the only source of randomness given $\mathcal{H}^{k-1}_{H}$, the definition of $\mathbb{E}_{\tilde w}$ gives that, for any state $s$,
\begin{align*}
&~\mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}_{\tilde w|\tilde{\mathcal{G}}_k}\left[ \tilde{V}_{1,k}(s)\right]\bigg| \mathcal{H}^{k-1}_{H}}\\
=&~\mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}_{\tilde w|\tilde{\mathcal{G}}_k}\left[\mathbf{1}\{\tilde{\mathcal{G}}_k\} \tilde{V}_{1,k}(s)\right]\bigg| \mathcal{H}^{k-1}_{H}}\\
=&~ \mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}_{\tilde w|\tilde{\mathcal{G}}_k}\left[\mathbf{1}\{\tilde{\mathcal{G}}_k\} \tilde{V}_{1,k}(s)\big| \mathcal{H}^{k-1}_{H}\right]\bigg| \mathcal{H}^{k-1}_{H}}\\
=&~ \mathbb{E}\sbr{\mathbf{1}\{\mathcal{G}_k\}\mathbb{E}\left[ \mathbf{1}\{\tilde{\mathcal{G}}_k\}\tilde{V}_{1,k}(s)\big| \mathcal{H}^{k-1}_{H}\right]\bigg| \mathcal{H}^{k-1}_{H}}\\
=&~\mathbb{E}\sbr{\mathbf{1}\{\tilde{\mathcal{G}}_k\} \tilde{V}_{1,k}(s)\bigg| \mathcal{H}^{k-1}_{H}}.
\end{align*}
Combining these two equations and setting $s=s_1^k$, we have $\mathbb{E}\sbr{\MDSfkind{1} \big| \mathcal{H}^{k-1}_{H}}=0$. Therefore the sequence $\{\MDSfkind{1}\}$ is indeed a martingale difference.
Under the event $\mathcal{G}_k$, we also have $\left|\mathbb{E}_{\tilde w|\tilde{\mathcal{G}}_k}\left[ \overline{V}_{1,k}(s^k_1)\right]- \overline{V}_{1,k}(s^k_1)\right| \leq 2H$ from Lemma~\ref{lem: est Q function bounded}. Applying the Azuma-Hoeffding inequality (e.g. \cite{azar2017minimax}) and a similar union bounding argument as above, for any $K'\in[K]$, with probability at least $1-\delta'$, we have
\begin{equation*}
\envert{ \sum_{k=1}^{K'}\MDSfkind{1}} \leq H\sqrt{4T\log\del{\frac{2T}{\delta'}}} = \tilde{\mathrm{O}}\del{H\sqrt{T}}.
\end{equation*}
The remaining result in the lemma statement is proved likewise. Finally, letting $\delta'=\delta/3$ and union bounding over these three martingale difference sequences completes the proof.
\end{proof}
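The concentration behaviour used above can be illustrated by a small Monte Carlo sketch in Python. The uniform increment distribution below is an arbitrary choice for illustration; the proof only uses that the increments are bounded by $2H$.

```python
import math
import random

random.seed(1)
H, T, delta = 5.0, 2000, 0.05
# The bound appearing in the lemma: H * sqrt(4 T log(2T/delta)).
bound = H * math.sqrt(4.0 * T * math.log(2.0 * T / delta))

trials, violations = 200, 0
for _ in range(trials):
    # A bounded, mean-zero increment sequence with |increment| <= 2H.
    partial_sum = sum(random.uniform(-2.0 * H, 2.0 * H) for _ in range(T))
    if abs(partial_sum) > bound:
        violations += 1

# The empirical violation frequency should not exceed delta.
assert violations / trials <= delta
```

As expected from the Azuma-Hoeffding inequality, violations of the bound are rare.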
\subsection{Bound on the Warm-up Term}\label{sec: bounds on warm-up term}
\begin{lemma}[Bound on the warm-up term]\label{lem: warmup bound}
\begin{align*}
&\sum_{k=1}^{K} \mathbf{1}\{\mathcal{E}^{\text{th}\,\complement}_{k} \}
= \tilde{\mathrm{O}}(H^3S^2A).
\end{align*}
\end{lemma}
\begin{proof}
\begin{align*}
&~\sum_{k=1}^{K} \mathbf{1}\{\mathcal{E}^{\text{th}\,\complement}_{k}\}\\
= &~ \sum_{k=1}^{K} \mathbf{1}\{\, \exists\, h\in[H] :\; n^k(h,s^k_h,a^k_h) \leq \alpha_k \,\} \\
\leq&~ \sum_{k=1}^{K} \sum_{h=1}^{H} \mathbf{1}\{\, n^k(h,s^k_h,a^k_h) \leq \alpha_k \,\} \\
\overset{a}{\leq}&~ \sum_{a\in \mathcal{A}} \sum_{s\in \mathcal{S}} \sum_{h=1}^{H} \alpha_K\\
\leq&~ 4H^3S^2A\log\left(2HSAK\right)\log\del{40SAT/\delta} \\
=&~ \tilde{\mathrm{O}}(H^3S^2A).
\end{align*}
Step $(a)$ follows since, for each fixed triple $(h,s,a)$, the count $n^k(h,s,a)$ increases on every visit, so the indicator can be non-zero in at most $\alpha_K$ episodes (up to one additional episode, absorbed in the $\tilde{\mathrm{O}}$); substituting the value $\alpha_K=4H^2S\log\left(2HSAK\right)\log\del{40SAT/\delta}$ gives the stated bound.
\end{proof}
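The counting argument behind step $(a)$ is a pigeonhole bound: each triple $(h,s,a)$ can be below the threshold only finitely often. A minimal Python sketch (the random visitation pattern and the specific values of $H,S,A,K,\alpha$ are arbitrary choices for illustration, since the inequality holds for every visitation sequence):

```python
import random
from collections import defaultdict

random.seed(2)
H, S, A, K, alpha = 4, 3, 2, 5000, 10

n = defaultdict(int)      # n^k(h,s,a): visits to (h,s,a) before episode k
warmup_episodes = 0       # episodes on which some visited count is <= alpha
for k in range(K):
    traj = [(h, random.randrange(S), random.randrange(A)) for h in range(H)]
    if any(n[t] <= alpha for t in traj):
        warmup_episodes += 1
    for t in traj:
        n[t] += 1

# Pigeonhole: each triple (h,s,a) has its count below the threshold in at
# most alpha + 1 of the episodes in which it is visited.
assert warmup_episodes <= H * S * A * (alpha + 1)
assert warmup_episodes >= 1
```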
\section{Introduction}\label{sec:intro}
\subsection{Background}
The study of Fokker-Planck equations (sometimes also called Kolmogorov forward equations) has a long history, going back to the early 20th century. Originally, Fokker and Planck used their equation to describe Brownian motion in a PDE form, rather than its usual SDE representation. \\
In its most general form, the Fokker-Planck equation reads as
\begin{equation}\label{eq:fokker_planck_general}
\partial_t f(t,x) = \sum_{i,j=1}^d \partial_{x_ix_j}\pa{D_{ij}(x) f(t,x)}-\sum_{i=1}^d \partial_{x_i}\pa{A_i(x)f(t,x)},
\end{equation}
with $t>0,x\in\mathbb{R}^d$, and where $D_{ij}(x),A_i(x)$ are real valued functions, with ${\bf D}(x)=\pa{D_{ij}(x)}_{i,j=1,\dots,d}$ being a positive semidefinite matrix.\\
The Fokker-Planck equation has many uses in modern mathematics and phys\-ics, with connections to statistical physics, plasma physics, stochastic analysis and mathematical finance. For more information about the equation, we refer the reader to \cite{RiFP89}. Here we will consider a very particular form of \eqref{eq:fokker_planck_general} that allows degeneracies and defectiveness to appear.
\subsection{The Fokker-Planck Equation in our Setting}
In this work we will focus our attention on Fokker-Planck equations of the form:
\begin{equation}\label{eq:fokkerplanck}
\partial_t f(t,x)=Lf(t,x):=\text{div}\pa{{\bf D}\nabla f(t,x)+{\bf C} xf(t,x)}, \quad\quad t>0, x\in\mathbb{R}^d,
\end{equation}
with appropriate initial conditions, where the matrix ${\bf D}$ (the \textit{diffusion} matrix) and ${\bf C}$ (the \textit{drift} matrix) are assumed to be constant and real valued.\\
In addition to the above, we will also assume the following:
\begin{enumerate}\label{hyp1}
\item[(A)] ${\bf D}$ is a positive semidefinite matrix with
$$1\le r:=\text{rank}\pa{{\bf D}} \leq d.$$
\item[(B)] All the eigenvalues of ${\bf C}$ have positive real part (this is sometimes called \textit{positively stable}).
\item[(C)] There exists no non-trivial ${\bf C}^T$-invariant subspace of $\text{Ker}\pa{{\bf D}}$ (this is equivalent to \emph{hypoellipticity} of \eqref{eq:fokkerplanck}, cf. \cite{Ho67}).
\end{enumerate}
Each of these conditions has a significant impact on the equation:
\begin{itemize}
\item Condition (A) allows the possibility that our Fokker-Planck equation is degenerate ($r<d$).
\item Condition (B) implies that the drift term confines the system. Hence it is crucial for the existence of a non-trivial steady state to the equation, and
\item Condition (C) tells us that when ${\bf D}$ is degenerate, ${\bf C}$ compensates for the lack of diffusion in the appropriate direction and ``pushes'' the solution back to where diffusion happens.
\end{itemize}
Equations of the form \eqref{eq:fokkerplanck}, with emphasis on the degenerate structure (and hence $d\ge2$), have been extensively investigated recently (see \cite{AE},\cite{OPPS12}) and were shown to retain much of the structure of their non-degenerate counterpart. When it comes to the question of long time behavior, it has been shown in \cite{AE} that under Conditions (A)-(C) there exists a unique equilibrium state $f_\infty$ to \eqref{eq:fokkerplanck} with a unit mass (it was actually shown that the kernel of $L$ is one dimensional) and that the convergence rate to it can be explicitly estimated by the use of the so called \emph{(relative) entropy functionals}. Based on \cite{AMTU01,BaEmD85}, and denoting by $\mathbb{R}^+:=\br{x>0\;|\;x\in\mathbb{R}}$ and $\mathbb{R}_0^+:=\mathbb{R}^+\cup\br{0}$, we introduce these entropy functionals:
\begin{definition}\label{def-entropy}
We say that a function $\psi$ is a \textit{generating function for an admissible relative entropy} if $\psi \not\equiv 0$, $\psi\in C\pa{\mathbb{R}_0^+}\cap C^4\pa{\mathbb{R}^+}$, $\psi(1)=\psi^\prime(1)=0$, $\psi^{\prime\prime} > 0$ on $\mathbb{R}^+$ and
\begin{equation}\label{eq:psicondition}
\pa{\psi^{\prime\prime\prime}}^2 \leq \frac{1}{2}\psi^{\prime\prime}\psi^{\prime\prime\prime\prime}.
\end{equation}
For such a $\psi$, we define the \textit{admissible relative entropy} $e_\psi\pa{\cdot|f_\infty}$ to the Fokker-Planck equation \eqref{eq:fokkerplanck} with a unit mass equilibrium state $f_\infty$, as the functional
\begin{equation}\label{eq:defentropy}
e_\psi\pa{f|f_\infty}:= \int_{\mathbb{R}^d}\psi\pa{\frac{f(x)}{f_\infty(x)}}f_\infty(x)dx,
\end{equation}
for any non-negative $f$ with a unit mass.
\end{definition}
\begin{remark}\label{rem:about_entropies}
It is worth noting a few things about Definition \ref{def-entropy}:
\begin{itemize}
\item As $\psi$ is only defined on $\mathbb{R}^+_0$ the admissible relative entropy can only be used for non-negative functions $f$. This, however, is not a problem for equation \eqref{eq:fokkerplanck} as it propagates non-negativity.
\item Assumption \eqref{eq:psicondition} is equivalent to the concavity of $(\psi'')^{-1}$ on $\mathbb{R}^+$.
\item Important examples of generating functions include $\psi_1(y):=y\log y -y+1$ (the Boltzmann entropy) and $\psi_2(y):=\frac{1}{2}(y-1)^2$. \\Note that for $f\in L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$
$$e_2(f|f_\infty)=\frac{1}{2}\norm{f-f_\infty}^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}.$$
This means that up to some multiplicative constant, $e_2$ is the square of the (weighted) $L^2$ norm.
\end{itemize}
\end{remark}
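The identity between $e_2$ and the weighted $L^2$ norm from the last item can be checked numerically. The following Python sketch (purely illustrative) takes $f_\infty$ to be the standard Gaussian on $\mathbb{R}$ and $f$ a unit-variance Gaussian with mean $m$; for this pair a direct Gaussian integral gives the closed form $e_2(f|f_\infty)=\tfrac{1}{2}\pa{e^{m^2}-1}$, which the quadrature reproduces.

```python
import math

def gauss(x, m):
    # Unit-variance Gaussian density with mean m.
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2.0 * math.pi)

m = 0.5                      # mean of f; f_infty is the standard Gaussian
N = 40000
dx = 24.0 / N
xs = [-12.0 + dx * i for i in range(N + 1)]

# e_2(f|f_infty) via the entropy formula  int psi_2(f/f_infty) f_infty dx ...
e2_entropy = sum(0.5 * (gauss(x, m) / gauss(x, 0.0) - 1.0) ** 2 * gauss(x, 0.0)
                 for x in xs) * dx
# ... and via the weighted L^2 norm  (1/2) int (f - f_infty)^2 / f_infty dx.
e2_l2 = sum(0.5 * (gauss(x, m) - gauss(x, 0.0)) ** 2 / gauss(x, 0.0)
            for x in xs) * dx

closed_form = 0.5 * (math.exp(m ** 2) - 1.0)  # exact value for this pair
assert abs(e2_entropy - e2_l2) < 1e-8
assert abs(e2_entropy - closed_form) < 1e-4
```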
A detailed study of the rate of convergence to equilibrium of the relative entropies for \eqref{eq:fokkerplanck} when $r<d$ was completed recently in \cite{AE}. Denoting by $L^1_+\pa{\mathbb{R}^d}$ the space of non-negative $L^1$ functions on $\mathbb{R}^d$, the authors have shown the following:
\begin{theorem}\label{thm:Anton_Erb_rate}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ which satisfy Conditions (A)-(C). Let
\begin{equation}\label{eq:def_of_mu}
\mu:=\min\br{\Re\pa{\lambda}\,|\, \lambda\text{ is an eigenvalue of }{\bf C}}.
\end{equation}
Then, for any admissible relative entropy $e_\psi$ and a solution $f(t)$ to \eqref{eq:fokkerplanck} with initial datum $f_0\in L^1_+\pa{\mathbb{R}^d}$, of unit mass and such that $e_\psi(f_0|f_\infty)<\infty$ we have that:
\begin{enumerate}[(i)]
\item If all the eigenvalues from the set
\begin{equation}\label{eq:eigenvalueMu}
\{\lambda \mid \lambda \text{ is an eigenvalue of ${\bf C}$ and }\Re(\lambda)=\mu\}
\end{equation} are non-defective
\footnote{An eigenvalue is \textit{defective} if its geometric multiplicity is strictly less than its algebraic multiplicity. We will call the difference between these numbers the \textit{defect} of the eigenvalue.},
then there exists a fixed geometric constant $c\ge1$, that doesn't depend on $f$, such that
\begin{equation*
e_\psi(f(t)|f_\infty) \leq c e_\psi(f_0|f_\infty)e^{-2\mu t},\quad t\geq 0.
\end{equation*}
\item If one of the eigenvalues from the set \eqref{eq:eigenvalueMu}
is defective, then for any $\epsilon>0$ there exists a fixed geometric constant $c_{\epsilon}$, that doesn't depend on $f$, such that
\begin{equation}\label{eq:entropydecayAEdef}
e_\psi(f(t)|f_\infty) \leq c_{\epsilon} e_\psi(f_0|f_\infty)e^{-2(\mu-\epsilon) t},\quad t\geq 0.
\end{equation}
\end{enumerate}
\end{theorem}
The loss of the exponential rate $e^{-2\mu t}$ in part $(ii)$ of the above theorem is to be expected; however, it seems that replacing it by $e^{-2\pa{\mu-\epsilon}t}$ is too crude. Indeed, if one considers the much related, finite dimensional, ODE equivalent
\begin{equation}\nonumber
\dot{x}=-{\bf B} x
\end{equation}
where the matrix ${\bf B}\in\mathbb{R}^{d\times d}$ is positively stable and has, for example, a defect of order $1$ in an eigenvalue with real part equal to $\mu>0$ (defined as in \eqref{eq:def_of_mu}), then one notices immediately that
\begin{equation}\nonumber
{\norm{x(t)}}^2 \leq c\|x_0\|^2\pa{1+t^2}e^{-2\mu t},\quad t\geq 0,
\end{equation}
i.e.\ the rate of decay is worsened by multiplication with a polynomial whose degree is twice the defect of the ``minimal eigenvalue''.\\
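For a concrete instance of this polynomial correction, take ${\bf B}$ to be the $2\times2$ Jordan block with eigenvalue $\mu$, so that $\mu$ has defect $1$. Then $e^{-{\bf B}t}$ is explicit, and the following Python sketch confirms that $\norm{x(t)}^2$ equals $(1+t^2)e^{-2\mu t}$ for a suitable initial datum, so no bound of the form $\norm{x_0}^2e^{-2\mu t}$ can hold:

```python
import math

mu = 1.0
# B is the 2x2 Jordan block [[mu, 1], [0, mu]]; mu is defective with defect 1.
# The nilpotent part exponentiates exactly: exp(-B t) = exp(-mu t)*[[1,-t],[0,1]],
# so the initial datum x0 = (0, 1) evolves as x(t) = exp(-mu t) * (-t, 1).
worst_ratio = 0.0
for i in range(1001):
    t = 0.01 * i
    x = (-t * math.exp(-mu * t), math.exp(-mu * t))
    sq_norm = x[0] ** 2 + x[1] ** 2
    # ||x(t)||^2 = (1 + t^2) exp(-2 mu t): the polynomial factor is exact here.
    assert abs(sq_norm - (1.0 + t * t) * math.exp(-2.0 * mu * t)) < 1e-12
    worst_ratio = max(worst_ratio, sq_norm * math.exp(2.0 * mu * t))

# A bare exponential bound ||x(t)||^2 <= ||x0||^2 exp(-2 mu t) fails badly:
# the ratio grows like 1 + t^2 (reaching about 101 at t = 10 here).
assert 100.0 < worst_ratio < 102.0
```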
The goal of this work is to show that the above is also the case for our Fokker-Planck equation.\\
We will mostly focus our attention on the natural family of relative entropies $e_p\pa{\cdot|f_\infty}$, with $1<p\leq 2$, which are generated by
\begin{equation*
\psi_p(y):=\frac{y^{p}-p(y-1)-1}{p(p-1)}.
\end{equation*}
Notice that $\psi_1$ can be understood to be the limit of the above family as $p$ goes to $1$.\\
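This limit can be checked numerically; the following Python sketch evaluates $\psi_p$ near $p=1$, and also at $p=2$, where it recovers the quadratic generator $\psi_2$ from Remark \ref{rem:about_entropies}:

```python
import math

def psi_p(p, y):
    # Generating function of the p-entropy family, 1 < p <= 2.
    return (y ** p - p * (y - 1.0) - 1.0) / (p * (p - 1.0))

def psi_1(y):
    # Boltzmann entropy generator y*log(y) - y + 1.
    return y * math.log(y) - y + 1.0

# As p -> 1, psi_p(y) converges to psi_1(y) pointwise on (0, infinity).
for y in [0.1, 0.5, 1.0, 2.0, 7.5]:
    assert abs(psi_p(1.0 + 1e-7, y) - psi_1(y)) < 1e-5

# p = 2 recovers the quadratic generator psi_2(y) = (y - 1)^2 / 2.
for y in [0.1, 1.0, 3.0]:
    assert abs(psi_p(2.0, y) - 0.5 * (y - 1.0) ** 2) < 1e-12
```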
An important observation about the above family, that we will use later, is the fact that \emph{the generating function for $p=2$, associated to the entropy $e_2$, is actually defined on $\mathbb{R}$ and not only $\mathbb{R}^{+}$}. This is not surprising as we saw the connection between $e_2$ and the $L^2$ norm. This means that we are allowed to use $e_2$ even when we deal with functions without a definite sign.\\
Our main theorem for this paper is the following:
\begin{theorem}\label{thm:main}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ which satisfy Conditions (A)-(C). Let $\mu$ be defined as in \eqref{eq:def_of_mu} and assume that one, or more, of the eigenvalues of ${\bf C}$ with real part $\mu$ are defective. Denote by $n>0$ the maximal defect of these eigenvalues. Then, for any $1<p\leq 2$, the solution $f(t)$ to \eqref{eq:fokkerplanck} with unit mass initial datum $f_0\in L^1_+\pa{\mathbb{R}^d}$ and finite $p-$entropy, i.e. $e_p\pa{f_0|f_\infty}<\infty$, satisfies
\begin{equation*
e_p\pa{f(t)|f_\infty} \leq \begin{cases}
c_2 e_2\pa{f_0|f_\infty}\pa{1+t^{2n}}e^{-2\mu t}, & p=2, \\
c_p \pa{p(p-1)e_p(f_0|f_\infty)+1}^{\frac{2}{p}}\pa{1+t^{2n}}e^{-2\mu t}, & 1<p<2,
\end{cases}
\end{equation*}
for $t\ge0$, where $c_p>0$ is a fixed geometric constant, that doesn't depend on $f_0$, and $f_\infty$ is the unique equilibrium with unit mass.
\end{theorem}
The main idea, and novelty, of this work is in combining elements from Spectral Theory and the study of our $p-$entropies. We will give a detailed study of the geometry of the operator $L$ in the $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ space and deduce, from its spectral properties, the result for $e_2$. Since the other entropies, $e_p$ for $1<p<2$, lack the underlying geometry of the $L^2$ space that $e_2$ enjoys, we will require additional tools: We will show a quantitative result of \emph{hypercontractivity for non-symmetric Fokker-Planck operators} that will assure us that after a certain, \emph{explicit} time, any solution to our equation with finite $p-$entropy will belong to $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$. This, together with the dominance of $e_2$ over $e_p$ for functions in $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ will allow us to ``push'' the spectral geometry of $L$ to solutions with initial datum that only has finite $p-$entropy.\\
\amit{We have recently become aware that the long time behaviour of Theorem \ref{thm:main} has been shown in a preprint by Monmarch\'e, \cite{MoH15Arx}. However, the method he uses to show this result is a generalised entropy method (more on which can be found in \S\ref{sec:fisher}), while we have taken a completely different approach to the matter.\\}
The structure of the work is as follows: In \S\ref{sec:fokkerplanck} we will recall known facts about the Fokker-Planck equation (degenerate or not). \S\ref{sec:spectral} will see the spectral investigation of $L$ and the proof of Theorem \ref{thm:main} for $p=2$. In \S\ref{sec:hyper} we will show our non-symmetric hypercontractivity result and conclude the proof of our Theorem \ref{thm:main}. Lastly, in \S\ref{sec:fisher} we will recall another important tool in the study of Fokker-Planck equations - the Fisher information - and show that Theorem \ref{thm:main} can also be formulated for it, due to the hypoelliptic regularisation of the equation.
\section{The Fokker-Planck Equation}\label{sec:fokkerplanck}
This section is mainly based on recent work of Arnold and Erb (see \cite{AE}). We will provide here, mostly without proof, known facts about degenerate (and non-degenerate) Fokker-Planck equations of the form \eqref{eq:fokkerplanck}.
\begin{theorem}\label{thm:propertiesoffokkerplanck}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck}, with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ that satisfy Conditions (A)-(C), and an initial datum $f_0\in L^1_+\pa{\mathbb{R}^d}$. Then
\begin{enumerate}[(i)]
\item There exists a unique classical solution $f\in C^\infty \pa{\mathbb{R}^+\times \mathbb{R}^d}$ to the equation. Moreover, if $f_0\not=0$, then $f(t,\cdot)$ is strictly positive for all $t>0$.
\item The above solution conserves mass: $\int_{\mathbb{R}^d}f(t,x)dx=\int_{\mathbb{R}^d}f_0(x)dx$ for all $t\geq 0$.
\item If in addition $f_0\in L^{p}\pa{\mathbb{R}^d}$ for some $1< p\leq \infty$, then $f\in C\pa{[0,\infty),L^p\pa{\mathbb{R}^d}}$.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{thm:equilibrium}
Assume that the diffusion and drift matrices, ${\bf D}$ and ${\bf C}$, satisfy Conditions (A)-(C). Then, there exists a unique stationary state $f_\infty\in L^1\pa{\mathbb{R}^d}$ to \eqref{eq:fokkerplanck} satisfying $\int_{\mathbb{R}^d} f_\infty(x) dx = 1$. Moreover, $f_\infty$ is of the form:
\begin{equation}\label{eq:equilibrium}
f_\infty(x)= c_{{\bf K}} e^{-\frac{1}{2}x^T {\bf K}^{-1} x},
\end{equation}
where the covariance matrix ${\bf K}\in\mathbb{R}^{d\times d}$ is the unique, symmetric and positive definite solution to the continuous Lyapunov equation
\begin{equation}\nonumber
2{\bf D}={\bf C} {\bf K}+{\bf K} {\bf C}^T,
\end{equation}
and where $c_{{\bf K}}>0$ is the appropriate normalization constant. In addition, for any $f_0\in L^1_+\pa{\mathbb{R}^d}$ with unit mass, the solution to the Fokker-Planck equation \eqref{eq:fokkerplanck} with initial datum $f_0$ converges to $f_\infty$ in relative entropy (as referred to in Theorem \ref{thm:Anton_Erb_rate}).
\end{theorem}
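The covariance matrix ${\bf K}$ can be computed in practice by vectorising the Lyapunov equation $2{\bf D}={\bf C}{\bf K}+{\bf K}{\bf C}^T$ via the identity $\mathrm{vec}({\bf C}{\bf K})=({\bf I}\otimes{\bf C})\mathrm{vec}({\bf K})$, $\mathrm{vec}({\bf K}{\bf C}^T)=({\bf C}\otimes{\bf I})\mathrm{vec}({\bf K})$. A minimal numerical sketch, using a hypothetical $2\times2$ pair $({\bf D},{\bf C})$ that is not taken from the text:

```python
import numpy as np

# Hypothetical example matrices (not from the text): D positive definite,
# C positively stable (eigenvalues 2 and 1).
D = np.eye(2)
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Vectorise 2D = C K + K C^T; the coefficient matrix I(x)C + C(x)I is the
# same for row- and column-major vec, so reshape(-1) is safe here.
d = C.shape[0]
A = np.kron(np.eye(d), C) + np.kron(C, np.eye(d))
K = np.linalg.solve(A, (2.0 * D).reshape(-1)).reshape(d, d)
```

For this choice one finds $K=\begin{pmatrix}1/2&-1/6\\-1/6&7/6\end{pmatrix}$, which is symmetric and positive definite, in line with the theorem.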
\begin{remark}\label{rem:equilibrium}
In the case where $f_0\in L^1_+\pa{\mathbb{R}^d}$ does not have unit mass, one immediately deduces that the solution to the Fokker-Planck equation with initial datum $f_0$ converges to $\pa{\int_{\mathbb{R}^d}f_0(x)dx}f_\infty(x)$.
\end{remark}
\begin{corollary}\label{cor:simple_form_for_L}
The Fokker-Planck operator $L$ can be rewritten as
\begin{equation}\label{eq:FPrecast-gen}
Lf=\text{div}\pa{f_\infty(x){\bf C}{\bf K} \nabla \pa{\frac{f(t,x)}{f_\infty(x)}}}
\end{equation}
(cf.\ Theorem 3.5 in \cite{AE}).
\end{corollary}
A surprising, and useful, property of \eqref{eq:fokkerplanck} is that the diffusion and drift matrices associated to it can always be simplified by using a change of variables. The following can be found in \cite{AAS}:
\begin{theorem}\label{thm:simpliedDC}
Assume that the diffusion and drift matrices satisfy Conditions (A)-(C). Then, there exists a \textit{linear} change of variables that transforms \eqref{eq:fokkerplanck} to itself with new diffusion and drift matrices $\bf D$ and $\bf C$ such that
\begin{equation}\label{eq:diagD}
{\bf D}=\operatorname{diag}\br{d_1,d_2,\dots,d_r,0,\dots,0}
\end{equation}
with $d_j>0$, $j=1,\ldots,r$ and ${\bf C}_s:=\frac{{\bf C}+{\bf C}^T}{2}={\bf D}$. In these new variables the equilibrium $f_\infty$ is just the standard Gaussian with ${\bf K}={\bf I}$.
\end{theorem}
The above matrix normalisation also simplifies the computation of the adjoint operator:
\begin{corollary}
Let ${\bf C}_s={\bf D}$. Then:
\begin{enumerate}
\item[(i)]
\begin{equation*}
\big(L_{{\bf D},{\bf C}}\big)^* = L_{{\bf D},{\bf C}^T}\,,
\end{equation*}
where $L^*$ denotes the (formal) adjoint of $L$, considered w.r.t.\ $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$. The domain of $L$ will be discussed in \S\ref{sec:spectral}.
\item[(ii)]
The kernels of $L$ and $L^*$ are both spanned by $\exp(-\frac{|x|^2}{2})$. This is not true in general, i.e.\ for a Fokker-Planck operator $L$ without the matrix normalisation assumption.
\end{enumerate}
\end{corollary}
\begin{proof}
(i) Under the normalising coordinate transformation of Theorem \ref{thm:simpliedDC} we see from \eqref{eq:FPrecast-gen} that
\begin{equation}
\begin{split}\label{L-star}
\int_{\mathbb{R}^d} f(x) L_{{\bf D},{\bf C}}&g(x) f_\infty^{-1}(x)dx= -\int_{\mathbb{R}^d} f_\infty(x) \nabla \pa{\frac{f(x)}{f_\infty(x)}}^T{\bf C} \nabla \pa{\frac{g(x)}{f_\infty(x)}}dx \\
=& \int_{\mathbb{R}^d} \text{div}\pa{f_\infty(x) {\bf C}^T\nabla\pa{\frac{f(x)}{f_\infty(x)}}} g(x)f_\infty^{-1}(x)dx.
\end{split}
\end{equation}
(ii) follows from \eqref{eq:equilibrium} and ${\bf K}={\bf I}$.
\end{proof}
{}From this point onwards we will always assume that Conditions (A)-(C) hold, and that we are in the coordinate system where ${\bf D}$ is of form \eqref{eq:diagD} and equals ${\bf C}_s$.\\
\section{The Spectral Study of $L$}\label{sec:spectral}
The main goal of this section is to explore the spectral properties of the Fokker-Planck operator $L$ in $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, and to see how one can use them to understand rates of convergence to equilibrium for $e_2$. The crucial idea we will implement here is that, since $L^2\pa{\mathbb{R}^d,f^{-1}_{\infty}}$ decomposes into orthogonal eigenspaces of $L$ with eigenvalues that get increasingly farther to the left of the imaginary axis, one can deduce \emph{improved convergence rates on ``higher eigenspaces''}. \\
The first step in achieving the above is to recall the following result from \cite{AE}, where we use the notation $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$:
\begin{theorem}\label{thm:spectraldecomposition}
Denote by
\begin{equation*}
V_m:=\span\br{\partial_{x_1}^{\alpha_1}\dots \partial_{x_d}^{\alpha_d}f_\infty(x)\ \Big|\ \alpha_1,\dots,\alpha_d\in \mathbb{N}_0, \sum_{i=1}^d \alpha_i=m}.
\end{equation*}
Then, $\br{V_m}_{m\in \mathbb{N}_0}$ are mutually orthogonal in $L^2\pa{\mathbb{R}^d, f_\infty ^{-1}}$,
\begin{equation}\nonumber
L^2\pa{\mathbb{R}^d,f_\infty^{-1}}= \bigoplus_{m\in\mathbb{N}_0}V_m,
\end{equation}
and $V_m$ are invariant under $L$ and its adjoint (and thus under the flow of \eqref{eq:fokkerplanck}).\\
Moreover, the spectrum of $L$ satisfies
\begin{equation}\nonumber
\begin{gathered}
\sigma\pa{L}=\bigcup_{m\in\mathbb{N}_0}\sigma\pa{L\vert_{V_m}},\\
\sigma\pa{L\vert_{V_m}}=\br{-\sum_{i=1}^d \alpha_i \lambda_i \ \Big|\ \alpha_1 ,\dots,\alpha_d\in\mathbb{N}_0, \sum_{i=1}^d \alpha_i=m},
\end{gathered}
\end{equation}
where $\br{\lambda_j}_{j=1,\dots,d}$ are the eigenvalues (with possible multiplicity) of the matrix ${\bf C}$. The eigenfunctions of $L$ (or eigenfunctions and generalized eigenfunctions in the case ${\bf C}$ is defective) form a basis to $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$.
\end{theorem}
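The last statement can be made concrete by enumerating $\sigma\pa{L\vert_{V_m}}$ directly from the eigenvalues of ${\bf C}$. The following sketch uses a hypothetical $2\times2$ drift matrix with spectrum $1\pm\frac{7}{2}i$ (so that $\mu=1$), chosen purely for illustration:

```python
import numpy as np
from itertools import product

# Hypothetical drift matrix with eigenvalues 1 +/- (7/2)i, hence mu = 1.
C = np.array([[1.0, -3.5],
              [3.5,  1.0]])
lam = np.linalg.eigvals(C)

def sigma_Vm(m):
    """sigma(L|_{V_m}) = { -sum_i alpha_i*lambda_i : alpha in N_0^d, |alpha| = m }."""
    vals = set()
    for alpha in product(range(m + 1), repeat=len(lam)):
        if sum(alpha) == m:
            z = -sum(a * l for a, l in zip(alpha, lam))
            vals.add(complex(round(z.real, 9), round(z.imag, 9)))
    return vals

spec1 = sigma_Vm(1)   # two eigenvalues, both with real part -mu = -1
spec2 = sigma_Vm(2)   # three eigenvalues, all with real part -2*mu = -2
```

One sees that the spectrum on $V_m$ sits at distance exactly $m\mu$ to the left of the imaginary axis, which is the mechanism behind the improved decay rates on higher eigenspaces.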
Let us note that this orthogonal decomposition is non-trivial since $L$ is in general non-symmetric. The above theorem quantifies our previous statement about ``higher eigenspaces'': the minimal distance between the eigenvalues of $L$ restricted to the ``higher'' $L$-invariant eigenspace $V_m$ and the imaginary axis is $m\mu$. Thus, the decay we expect to find for an initial datum from $V_m$ is of order $e^{-2m\mu t}$ (e.g.\ in the quadratic entropy). However, as the functions we will use in our entropies are not necessarily contained in only finitely many $V_m$, we might need to pay a price in the rate of convergence.\\
This intuition is indeed true. Denoting by
\begin{equation}\label{def:Hk}
H_k := \bigoplus _{m\geq k} V_m
\end{equation}
for any $k\geq 0$, we have the following:
\begin{theorem}\label{thm:e2rateofdecay}
Let \amit{$ f_k\in H_{k}$ for some $k\geq 1$} and let $f(t)$ be the solution to \eqref{eq:fokkerplanck} with initial datum \amit{$f_0=f_\infty + f_k$}. Then for any $0<\epsilon<\mu$ there exists a geometric constant $c_{k,\epsilon}\ge1$, depending only on $k$ and $\epsilon$, such that
\amit{\begin{equation} \label{eq:e2rateofdecay}
e_2\pa{f(t)|f_\infty} \leq c_{k,\epsilon}e_2(f_0|f_\infty) e^{-2(k\mu-\epsilon)t}\,,\quad t\ge0\,.
\end{equation}}
\end{theorem}
\begin{remark}\label{rem:comparisonDecay}
The loss of an $\epsilon$ in the decay rate of \eqref{eq:e2rateofdecay} -- compared to the decay rate solely on $V_k$ -- can have two causes:
\begin{enumerate}
\item For drift matrices ${\bf C}$ with a defective eigenvalue with real part $\mu$, the larger decay rate $2k\mu$ would not hold in general. This is illustrated in \eqref{eq:entropydecayAEdef}, which provides the best possible \textit{purely exponential} decay result, as proven in \cite{AE}.
\item For \emph{non-defective matrices} ${\bf C}$, the improved decay rate $2k\mu$ actually holds, but our method of proof, that uses the Gearhart-Pr\"uss Theorem, cannot yield this result.
The decay estimate \eqref{eq:e2rateofdecay} will be improved in Theorem \ref{thm:rate_of_decay_e2}: There, the $\epsilon$-reduction drops out in the non-defective case.
\end{enumerate}
\end{remark}
\begin{remark}\label{rem:e2decay_no_positivity}
As we indicated in the introduction to our work, an important observation to make here is that the initial datum, $f_0$, \emph{does not have to be non-negative} (and in many cases, is not). While this implies that $f(t)$ might also fail to be non-negative, this poses no problem as $e_2$ is the squared (weighted) $L^2$ norm (up to a constant). Theorem \ref{thm:e2rateofdecay} \emph{would not work in general} for $e_p$ as the non-negativity of $f(t)$ is crucial there (in other words, $f_0$ would not be admissible).
\end{remark}
The main tool to prove Theorem \ref{thm:e2rateofdecay} is the Gearhart--Pr\"uss Theorem (see for instance Th.\ 1.11 Chap.\ V in \cite{EN00}). In order to be able to do that, we will need more information about the dissipativity of $L$ and its resolvents with respect to $H_k$.
\begin{lemma}\label{lem:dissipative}
Let $V_m$ be as defined in Theorem \ref{thm:spectraldecomposition}. Consider the operator $L$ with the domain $D(L)=\span\br{V_m,\, m\in\mathbb{N}_0}$. Then $L$ is dissipative, and as such closable. Moreover, its closure, $\overline{L}$, generates a contraction semigroup on $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$.
\end{lemma}
\begin{proof}
Given $f\in D(L)$, and denoting $g:=\frac{f}{f_\infty}$, we notice that \eqref{eq:FPrecast-gen} with ${\bf K}={\bf I}$ implies that
$$\pa{Lf,f}_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}} = \int_{\mathbb{R}^d} \text{div}\pa{f_\infty (x){\bf C} \nabla g(x)}g(x)dx = -\int_{\mathbb{R}^d} \nabla g(x)^T {\bf C} \nabla g(x) f_\infty(x)dx$$
$$=-\int_{\mathbb{R}^d} \nabla g(x)^T {\bf D} \nabla g(x) f_\infty(x)dx \leq 0,$$
where we have used the fact that ${\bf C}_s={\bf D}$. Thus, $L$ is dissipative.\\
To show the second statement we use the Lumer-Phillips Theorem (see for instance Th.\ 3.15 Chap.\ II in \cite{EN00}). Since $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}=\bigoplus_{m\in\mathbb{N}_0} V_m$ it will be enough to show that for $\lambda>0$ we have $V_m\subset \text{Range}\pa{\lambda I-L}$ for any $m$. As $V_m \subset D(L)$ is finite dimensional and invariant under $L$ (Theorem \ref{thm:spectraldecomposition} again), we can consider the linear bounded operator $L\vert_{V_m}:V_m \rightarrow V_m$. Since we have shown that $L$ is dissipative, the eigenvalues of $L\vert_{V_m}$ have non-positive real parts, implying that $\pa{\lambda I- L} \vert_{V_m}$ is invertible. This in turn implies that
$$V_m=\text{Range}\pa{\pa{\lambda I-L}\vert_{V_m}}\subset \text{Range}\pa{\lambda I-L},$$
completing the proof.
\end{proof}
To study the resolvents of $L$ we will need to use some information about its ``dual'': the Ornstein-Uhlenbeck operator.\\
For a given symmetric positive semidefinite matrix ${\bf Q}=(q_{ij})$ and a real, negatively stable matrix ${\bf B}=(b_{ij})$ on $\mathbb{R}^d$ we consider the Ornstein-Uhlen\-beck operator
\begin{equation}\label{eq:OU_operator}
P_{{\bf{Q}},{\bf B}}:=\frac{1}{2}\sum_{i,j}q_{ij}\partial_{x_ix_j}^2+\sum_{i,j}b_{ij}x_j \partial_{x_i}=\frac{1}{2}\operatorname{Tr}\pa{{\bf{Q}} \nabla_x^2}+\pa{{\bf B} x,\nabla_x},\quad x\in\mathbb{R}^d.
\end{equation}
Similarly to our conditions on the diffusion and drift matrices, we will only be interested in Ornstein-Uhlen\-beck operators that are \emph{hypoelliptic}. In the above setting, this corresponds to the condition
$$\operatorname{rank} \left[{\bf{Q}}^{\frac{1}{2}},{\bf B}{\bf{Q}}^{\frac{1}{2}},\dots,{\bf B}^{d-1} {\bf{Q}}^{\frac{1}{2}} \right]=d.$$
The hypoellipticity condition guarantees the existence of an invariant measure, $d\mu$, for the process. This measure has a density w.r.t.\ the Lebesgue measure, which is given by
$$\frac{d\mu}{dx}(x)= c_{\bf M} e^{-\frac12 x^T{\bf M}^{-1}x}\,,\quad\mbox{with}\quad
{\bf M}:=\int_0^\infty e^{{\bf B} s}{\bf Q} e^{{\bf B}^T s}\,ds$$
where $c_{\bf M}>0$ is a normalization constant. It is well known that the above definition of ${\bf M}$ is equivalent to finding the unique solution to the continuous Lyapunov equation
\begin{equation}\label{Lyapunov}
{\bf Q}=-{\bf B}{\bf M}-{\bf M}{\bf B}^T\,.
\end{equation}
(See for instance Theorem 2.2 in \cite{SnZaN70}, \S2.2 of \cite{HoJoT91}.)\\
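The two characterisations of ${\bf M}$ - the integral formula and the Lyapunov equation \eqref{Lyapunov} - can be cross-checked numerically. The sketch below uses a hypothetical degenerate (kinetic-type) pair $({\bf Q},{\bf B})$, not taken from the text, for which the rank condition holds and ${\bf M}$ turns out to be the identity:

```python
import numpy as np

# Hypothetical degenerate example: diffusion only in the second coordinate,
# B negatively stable (eigenvalues -1/2 +/- i sqrt(3)/2).
Q = np.diag([0.0, 2.0])
B = np.array([[ 0.0,  1.0],
              [-1.0, -1.0]])
d = 2

# Hypoellipticity: rank[Q^(1/2), B Q^(1/2)] = d (Q is diagonal, so its
# entrywise square root is Q^(1/2)).
Qh = np.sqrt(Q)
assert np.linalg.matrix_rank(np.hstack([Qh, B @ Qh])) == d

# e^{Bs} via diagonalisation (B is diagonalisable here, with complex spectrum).
w, V = np.linalg.eig(B)
Vinv = np.linalg.inv(V)
def expB(s):
    return (V @ np.diag(np.exp(w * s)) @ Vinv).real

# M from the integral formula, by crude trapezoidal quadrature on [0, 40].
ss = np.linspace(0.0, 40.0, 8001)
ds = ss[1] - ss[0]
F = np.array([expB(s) @ Q @ expB(s).T for s in ss])
M_int = (F.sum(axis=0) - 0.5 * (F[0] + F[-1])) * ds

# M from the Lyapunov equation Q = -B M - M B^T, by vectorisation.
A = np.kron(np.eye(d), B) + np.kron(B, np.eye(d))
M_lyap = np.linalg.solve(A, -Q.reshape(-1)).reshape(d, d)
```

Both routes give ${\bf M}={\bf I}$ for this example, so the invariant measure is the standard Gaussian even though the diffusion is degenerate.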
Hypoelliptic Ornstein-Uhlen\-beck operators have been studied for many years, and more recently in \cite{OPPS15} the authors considered them under the additional possibility of degeneracy in their diffusion matrix ${\bf{Q}}$. In \cite{OPPS15}, the authors described the domain of the closed operator $P_{{\bf{Q}},{\bf B}}$ and obtained the following resolvent estimate:
\begin{theorem}\label{thm:OPPSthm}
Consider the hypoelliptic Ornstein-Uhlen\-beck operator $P_{{\bf{Q}},{\bf B}}$, as in \eqref{eq:OU_operator}, and its invariant measure $d\mu(x)$. Then there exist some positive constants $c,C>0$ such that for any $z\in \Gamma_{\kappa}$, with
\begin{equation}\label{eq:OPPSset}
\Gamma_{\kappa}:=\br{z\in\mathbb{C} \,\Bigg |\, \Re z \leq \textstyle\frac{1}{2}\pa{1-\operatorname{Tr}({\bf B})}, \abs{\Re z-\pa{1-\textstyle\frac{1}{2}\operatorname{Tr}({\bf B})}} \leq c\abs{z-\pa{1-\textstyle\frac{1}{2}\operatorname{Tr}({\bf B})}}^{\frac{1}{2\kappa+1}} }
\end{equation}
and where $\kappa$ is the smallest integer $0\leq \kappa\leq d-1$ such that
\begin{equation}\label{rank-cond}
\operatorname{rank} \left[{\bf{Q}}^{\frac{1}{2}},{\bf B}{\bf{Q}}^{\frac{1}{2}},\dots,{\bf B}^{\kappa} {\bf{Q}}^{\frac{1}{2}} \right]=d\ ,
\end{equation}
one has that
\begin{equation*}
\norm{\pa{P_{{\bf{Q}},{\bf B}}-zI}^{-1}}_{B\pa{L^2\pa{\mathbb{R}^d,d\mu}}} \leq C\abs{z-\pa{1-\frac{1}{2}\operatorname{Tr}({\bf B})}}^{-\frac{1}{2\kappa+1}}.
\end{equation*}
\end{theorem}
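The integer $\kappa$ in \eqref{rank-cond} can be found by a direct rank sweep. A minimal sketch, with hypothetical matrices not taken from the text, recovering $\kappa=0$ for a non-degenerate diffusion and $\kappa=1$ for a degenerate kinetic-type one:

```python
import numpy as np

def kappa(Q, B):
    """Smallest 0 <= kappa <= d-1 with rank[Q^(1/2), B Q^(1/2), ..., B^kappa Q^(1/2)] = d,
    or None if the hypoellipticity condition fails."""
    d = Q.shape[0]
    w, V = np.linalg.eigh(Q)                      # Q symmetric positive semidefinite
    Qh = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    blocks, P = [], Qh
    for k in range(d):
        blocks.append(P)                          # append B^k Q^(1/2)
        if np.linalg.matrix_rank(np.hstack(blocks)) == d:
            return k
        P = B @ P
    return None

B = np.array([[ 0.0,  1.0],
              [-1.0, -1.0]])                      # negatively stable drift
k_full = kappa(np.eye(2), B)                      # non-degenerate diffusion
k_degen = kappa(np.diag([0.0, 2.0]), B)           # degenerate diffusion
```

Note that $\kappa$ controls the exponent $\frac{1}{2\kappa+1}$ in the resolvent bound, so more degenerate diffusions yield weaker resolvent decay along $\Gamma_\kappa$.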
We illustrate the spectrum of $P_{{\bf Q},{\bf B}}$ and the domain $\Gamma_{\kappa}$ in Figure \ref{fig:gamma}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{resolvent_set1}
\caption{The black dots represent $\sigma(P_{{\bf Q},{\bf B}})$ with the eigenvalues of the $2\times 2$ matrix ${\bf B}$ given as $\lambda_{1,2}=-1\pm \frac{7}{2}i$. The shaded area represents the set $\Gamma_{\kappa}$ of Theorem \ref{thm:OPPSthm} with $\kappa=1$.}
\label{fig:gamma}
\end{figure}\\
In order to use the above theorem for our operator, $L$, we show the connection between it and $P$ in the following lemma:
\begin{lemma}\label{lem:equiv_L_and_P}
Assume that the associated diffusion and drift matrices for $L$, defined on $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, and $P_{{\bf Q},{\bf B}}$, defined on $L^2\pa{\mathbb{R}^d,d\mu(x)}$, satisfy
$${\bf Q}=2{\bf D},\,{\bf B}=-{\bf C}.$$
Then $d\mu(x)=f_\infty(x)dx$ is the invariant measure for $P=P_{{\bf Q},{\bf B}}$ and its adjoint, and (up to the natural transformation $\frac{Lf}{f_\infty}=P^*(\frac{f}{f_\infty})$) we have $L=P^*$.
\end{lemma}
\begin{proof}
We start by recalling that we assume that ${\bf D}={\bf C}_s$. Since \eqref{Lyapunov} can be rewritten as
$$2{\bf D}={\bf C}{\bf M}+{\bf M}{\bf C}^T$$
for our choice of ${{\bf Q}} $ and ${\bf B}$, we conclude that ${\bf M}={\bf I}$ for $P_{2{\bf D},-{\bf C}}$ and that
$\big(P_{2{\bf D},-{\bf C}}\big)^*=P_{2{\bf D},-{\bf C}^T}$ (the last equality can be shown in a similar way to \eqref{L-star}). Thus, the invariant measure corresponding to both these operators is $f_\infty(x) dx$.\\
Let $f\in D(L)\subset L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ and define $g_f:=\frac{f}{f_\infty}
\in L^2\pa{\mathbb{R}^d,f_\infty}$.
Then
\begin{equation}\label{eq:connectionPL}
\begin{gathered}
\frac{L_{{\bf D},{\bf C}} f(x)}{f_\infty(x)}=\frac{\text{div}\pa{f_\infty(x){\bf C} \nabla g_f(x)}}{f_\infty(x)} = \text{div}\pa{{\bf C} \nabla g_f(x)} + \frac{\nabla f_\infty(x)^T {\bf C} \nabla g_f(x)}{f_\infty(x)}\\
=\text{div}\pa{{\bf D} \nabla g_f(x)} - x^T {\bf C} \nabla g_f(x)=P_{2{\bf D},-{\bf C}^T}g_f(x)=\big(P_{2{\bf D},-{\bf C}}\big)^*g_f(x)\,,
\end{gathered}
\end{equation}
where the adjoint is considered w.r.t.\ $L^2\pa{\mathbb{R}^d,f_\infty}$. In particular, if $f(t,\cdot)\in L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ solves \eqref{eq:fokkerplanck} then $g_f(t,\cdot)$ satisfies the adjoint equation $\partial_t g_f =\big(P_{2{\bf D},-{\bf C}}\big)^*g_f$.
\end{proof}
With this at hand we can recast, and improve, Theorem \ref{thm:OPPSthm} for the operator $L$ and its closure.
\begin{prop}\label{prop:OPPSthmimproved}
Let any $k\in\mathbb{N}_0$ be fixed.
Consider the set $\Gamma_{\kappa}$, defined by \eqref{eq:OPPSset}, associated to ${\bf Q}=2{\bf D},\,{\bf B}=-{\bf C}^T$ (Condition (C) guarantees the existence of such $\kappa$). Then we have that, for any $z\in \Gamma_{\kappa}$, the operator $\pa{L-zI}\vert_{H_k}:H_k\rightarrow H_k$
is well defined, closable, and its closure is invertible with
\begin{equation}\label{eq:OPPSimprovedresolvant}
\Norm{\pa{\pa{\overline{L}-zI}\vert_{H_k}}^{-1}}_{B\pa{H_k}} \leq C\abs{z-\pa{1+\frac{1}{2}\operatorname{Tr}({\bf C})}}^{-\frac{1}{2\kappa+1}},
\end{equation}
where $C>0$ is the same constant as in Theorem \ref{thm:OPPSthm}.
\end{prop}
\begin{proof}
We consider the case $k=0$ first. Due to Theorem \ref{thm:OPPSthm} we know that for any $z\in\Gamma_{\kappa}$, $P_{2{\bf D},-{\bf C}^T}-zI$ is invertible on $L^2\pa{\mathbb{R}^d,f_\infty}$. Hence, for any $f\in L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ there exists a unique $\ell_f\in L^2\pa{\mathbb{R}^d,f_\infty}$ such that
$$\pa{P_{2{\bf D},-{\bf C}^T}-zI}\ell_f(x)=\frac{f(x)}{f_\infty(x)},$$
which can also be written differently due to \eqref{eq:connectionPL}, as
$$\pa{\overline{L}-zI} \pa{f_\infty(x) \ell_f(x)}=f(x).$$
This implies that $\overline{L}-zI$ is bijective on its appropriate space.\\
Next we notice that, with the notations from Lemma \ref{lem:equiv_L_and_P}
$$\sup_{\norm{f}=1}\norm{\pa{\overline{L}-zI}^{-1}f}_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}=\sup_{\norm{f}=1}\norm{f_\infty \ell_f}_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}$$
$$=\sup_{\norm{f}=1}\norm{\ell_f}_{L^2\pa{\mathbb{R}^d,f_\infty}}=\sup_{\norm{g_f}=1}\norm{\pa{P_{2{\bf D},-{\bf C}^T}-zI}^{-1}g_f}_{L^2\pa{\mathbb{R}^d,f_\infty}}\,,$$
from which we conclude that
$$\norm{\pa{\overline{L}-zI}^{-1}}_{B\pa{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}}=\norm{\pa{P_{2{\bf D},-{\bf C}^T}-zI}^{-1}}_{B\pa{L^2\pa{\mathbb{R}^d,f_\infty}}}\,,$$
completing the proof for this case.\\
We now turn our attention to the restrictions $\pa{L-zI}\vert_{H_k}$ with $k\ge1$ and domain
$$D_k:=\span\br{V_m, m\geq k}=D(L)\cap H_k.$$
Since $L|_{V_m}:\,V_m\to V_m$ $\forall m\in\mathbb{N}_0$ we have that $\pa{L-zI}\vert_{H_k}:\,D_k\to H_k$. Moreover, the dissipativity of $L$ on $D(L)$ assures us that $L$ is dissipative, and as such closable, on the Hilbert space $H_k$. Thus $(L-zI)\vert_{H_k}$ is closable too and
$$\overline{\pa{L-zI}\vert_{H_k}}=\pa{\overline{L}-zI}\vert_{H_k}.$$
Additionally, since the only part of $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ that is not in $H_k$ is a finite dimensional subspace of $D(L)$, we can conclude that
$$D\left((\overline{L}-zI)\vert_{H_k}\right)=D(\overline{L})\cap H_k.$$
Given $z$ in the resolvent set of $\overline{L}$ we know that $\overline{L}-zI\vert_{V_m}:V_m\rightarrow V_m$ is invertible for any $m$ and as such
$$(\overline{L}-zI)\vert_{V_m}\pa{V_m}=V_m.$$
Thus,
$$V_m\subset\operatorname{Range}\pa{(\overline{L}-zI)\vert_{H_k}},\qquad \forall m \ge k.$$
We conclude that $(\overline{L}-zI) \vert_{H_k}$ is injective with a dense range in $H_k$ for any $z\in\Gamma_{\kappa}$, and hence invertible on its range. The validity of \eqref{eq:OPPSimprovedresolvant} for $k=0$ allows us to extend our inverse to $H_k$ with the same \emph{uniform bound} as is given in \eqref{eq:OPPSimprovedresolvant}. The general case is now proved.
\end{proof}
{}From this point onward, we will assume that we are dealing with the closed operator $\overline{L}$ and with its appropriate domain (which includes $\bigcup_{m\in\mathbb{N}_0}V_m$) when we consider our equation. We will also write $L$ instead of $\overline{L}$ in what follows.\\
Lemma \ref{lem:dissipative} and Proposition \ref{prop:OPPSthmimproved} are all the tools we need to estimate the uniform exponential stability of our evolution semigroup on each $H_k$, an estimation that is crucial to show Theorem \ref{thm:e2rateofdecay}.
\begin{prop}\label{prop:exp_stability}
Consider the Fokker-Planck operator $L$, defined on $L^{2}\pa{\mathbb{R}^d,f_\infty^{-1}}$, and the spaces $\br{H_k}_{k\geq 1}$ defined in \eqref{def:Hk}. Then, for any $0<\epsilon<\mu$, the semigroup generated by the operator $L+\pa{k\mu-\epsilon}I\vert_{H_k}$, with domain $D(L)\cap H_k$, is uniformly exponentially stable. I.e., there exists some geometric constant $C_{k,\epsilon}>0$ such that
\begin{equation}\label{eq:GPresult}
\norm{e^{Lt}}_{B\pa{H_k}} \leq C_{k,\epsilon}e^{-\pa{k\mu-\epsilon}t}\,,\quad t\ge0\,.
\end{equation}
\end{prop}
\begin{proof}
We will show that
$$M_{k,\epsilon}:=\sup_{\Re z>0}\left\|\bigg(\pa{L+[k\mu-\epsilon]I}-zI\bigg)^{-1}\right\|_{B(H_k)} < \infty\,,$$
and conclude the result from the fact that $L$ generates a contraction semigroup according to Lemma \ref{lem:dissipative} and the Gearhart-Pr\"uss Theorem.\\
The study of upper bounds for the resolvents of $L+[k\mu-\epsilon]I$ in the right half of the complex plane relies on subdividing this domain into several pieces. This is illustrated in Figure \ref{fig:compact}, which we will refer to during the proof to help visualise this division.\\
Since $L$ generates a contraction semigroup, for any $\epsilon>0$, $L-\epsilon I$ generates a semigroup that is uniformly exponentially stable on $L^2(\mathbb{R}^d,f_\infty^{-1})$.
The Gearhart-Pr\"{u}ss Theorem applied to $L-\epsilon I$ implies that
\[
\widetilde{M}_{k,\epsilon}:=\sup_{\Re z>0} \left\|\left(L-(\epsilon+z)I\right)^{-1}\right\|_{B(H_k)} \leq \sup_{ \Re z>0}\left\|\left(L-(\epsilon+z)I\right)^{-1} \right\|_{{B}(L^2(\mathbb{R}^d,f_\infty^{-1}))}<\infty,
\]
where we removed the subscript $H_k$ from the operator on the left-hand side to simplify notation.\\
Since
$$L-\pa{\epsilon+z}I=L+[k\mu-\epsilon]I-\pa{z+k\mu}I,$$
we see that
$$\widetilde{M}_{k,\epsilon}=\sup_{\Re z_1>0} \left\| \bigg(\pa{L+[k\mu-\epsilon]I}-\pa{z_1+k\mu}I\bigg)^{-1}\right\|_{B(H_k)}$$
$$=\sup_{\Re z>k\mu} \left\| \bigg(\pa{L+[k\mu-\epsilon]I}-zI\bigg)^{-1}\right\|_{B(H_k)}$$
(this term corresponds to the right-hand side of the dashed line in Figure \ref{fig:compact}).\\
{}From the above we conclude that
$$M_{k,\epsilon}= \max\pa{\widetilde{M}_{k,\epsilon} \,,\sup_{0<\Re z\leq k\mu} \left\|\pa{L-[z-k\mu+\epsilon]I}^{-1}\right\|_{B(H_k)}},$$
which implies that we only need to show that the second term in the maximum is finite (this term corresponds to the area between the dashed line and the imaginary axis in Figure \ref{fig:compact}). \\
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{resolvent_set2.png}
\caption{Choosing $k=2$: the solid dots represent $\sigma((L+[2\mu-\epsilon]I)|_{H_{2}})$ where the eigenvalues of the $2 \times 2$ matrix ${\bf C}$ are given by $\lambda_{1,2}=1\pm \frac{7}{2}i$. The empty dots are the eigenvalues of the operator $L+[2\mu-\epsilon]I$ that disappear due to the restriction to $H_2$, and the shaded area represents the compact set $\{z\in\mathbb{C} \mid 0\leq \Re z \leq 2\mu\}\cap \{z\not\in \Gamma_{\kappa}+2\mu-\epsilon\}$ where $\kappa=1$.}
\label{fig:compact}
\end{figure}
Using Proposition \ref{prop:OPPSthmimproved} we conclude that
\[
\sup_{z-k\mu+\epsilon\in\Gamma_{\kappa}} \left\| \left(L-\left[z-k\mu+\epsilon \right]I\right)^{-1}\right\|<\infty
\]
(rep\-re\-sent\-ed in Figure \ref{fig:compact} by the domain between the two solid blue curves). We conclude that $M_{k,\epsilon}<\infty$ if and only if
$$\sup_{\br{0<\Re z\leq k\mu} \cap \br{z\not\in \Gamma_{\kappa}+k\mu-\epsilon }}\left\|\pa{L-[z-k\mu+\epsilon]I}^{-1}\right\|_{B(H_k)}<\infty.$$
Since $\Re z=-\epsilon$ is the closest vertical line to $\Re z=0$ which intersects\linebreak
$\sigma\pa{\pa{L + [k\mu -\epsilon]I}\vert_{H_k}}$, we notice that $\br{0<\Re z\leq k\mu} \cap \br{z\not\in \Gamma_{\kappa}+k\mu-\epsilon} $ (repres\-en\-ted by the shaded area in Figure \ref{fig:compact}) is a compact set in the resolvent set of \linebreak $\pa{L+[k\mu-\epsilon]I}\vert_{H_k}$. As the resolvent map is analytic on the resolvent set, we conclude that $M_{k,\epsilon}<\infty$, completing the proof.
\end{proof}
\begin{remark}\label{rem:quantitative_Pruss_Gearhart}
While the constant mentioned in \eqref{eq:GPresult} is a fixed geometric one, the original Gearhart-Pr\"uss Theorem does not provide an estimate for it. However, recent studies have improved the original theorem and have managed to find an explicit expression for this constant by paying a small price in the exponential power. As we can afford to ``lose'' another small $\epsilon$, we could use references such as \cite{HS,LV13} to obtain a more concrete expression for $C_{k,\epsilon}$. We will avoid giving such an expression in this work to simplify its presentation.
\end{remark}
We finally have all the tools to show Theorem \ref{thm:e2rateofdecay}:
\begin{proof}[Proof of Theorem \ref{thm:e2rateofdecay}]
Using the invariance of $V_0$ and $H_k$ under $L$ and Proposition \ref{prop:exp_stability} we find that for any \amit{$ f_k\in H_{k}$
\begin{eqnarray*}
&&e_2\pa{e^{Lt}\pa{ f_k+f_\infty}|f_\infty} =e_2\pa{e^{Lt}\pa{ f_k}+f_\infty|f_\infty}= \frac12\Norm{e^{Lt} f_k}^2_{H_{k}}\\
&&\leq \frac12 C^2_{k,\epsilon}e^{-2\pa{k\mu-\epsilon}t}\Norm{ f_k}^2_{H_{k}}=C^2_{k,\epsilon}e^{-2\pa{k\mu-\epsilon}t}e_2\pa{ f_k+f_\infty|f_\infty}\,,
\end{eqnarray*}}
showing the desired result.
\end{proof}
Theorem \ref{thm:e2rateofdecay} has given us the ability to control the rate of convergence to equilibrium of functions with initial data that, up to $f_\infty$, live on a ``higher eigen\-space''. Can we use this information to understand what happens to the solution of an arbitrary initial datum $f_0\in L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ with unit mass?\\
The answer to this question is \emph{Yes}.\\
Since for any $k\geq 1$
$$L^2\pa{\mathbb{R}^d,f_\infty^{-1}}=V_0\oplus\pa{\bigoplus_{m=1}^{k}V_m}\oplus H_{k+1}$$
and all the above spaces are invariant under the Fokker-Planck semigroup, we are motivated to \emph{split} the solution of our equation into a part in $V_0\oplus H_{k+1}$ and a part in $\bigoplus_{m=1}^{k}V_m$ - which is a \emph{finite dimensional subspace of} $D(L)$. As we now know that the decay in $\bigoplus_{m=1}^{k}V_m$ is slower than that in $H_{k+1}$, we will obtain a \emph{sharp} rate of convergence to equilibrium. We summarise the above intuition in the following theorem:
\begin{theorem}\label{thm:rate_of_decay_e2}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices satisfying Conditions (A)-(C). Let $f_0\in L^1_+\pa{\mathbb{R}^d}\cap L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ be a given function with unit mass \amit{such that
$$f_0=f_\infty+f_{k_0}+\tilde f_{k_0},$$
where $f_{k_0}\in V_{k_0}$ is non-zero and $\tilde f_{k_0}\in H_{k_0+1}$.
}Denote by $[L]_{k_0}$ the matrix representation of $L$ with respect to an orthonormal basis of $V_{k_0}$ and let
$$n_{k_0}:=\max\br{\text{defect of }\lambda\;|\; \lambda\text{ is an eigenvalue of }[L]_{k_0}\text{ and }\Re \lambda=-k_0\mu},$$
where $\mu$ is defined in \eqref{eq:def_of_mu}. Then, there exists a geometric constant $c_{k_0}$, which is independent of $f_0$, such that
\begin{equation}\label{eq:rate_of_decay_e2_general}
e_2\pa{f(t)|f_\infty} \leq c_{k_0} e_2\pa{f_0|f_\infty}\pa{1+t^{2n_{k_0}}}e^{-2k_0\mu t}.
\end{equation}
\end{theorem}
\begin{remark}\label{rem:no_need_for_+}
As can be seen in the proof of the theorem, the sign of $f_0$ plays no role. As such, the theorem could have been stated for $f_0\in L^1\pa{\mathbb{R}^d}\cap L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$. We decided to state it as is since it is the form we will use later on, and we wished to avoid possible confusion.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:rate_of_decay_e2}]
\amit{Due to the invariance of all $V_m$ under $L$ we see that
$$f(t)= f_\infty+ e^{Lt}f_{k_0}+e^{Lt}\tilde f_{k_0},$$
with $e^{Lt}f_{k_0}\in V_{k_0}$ and $e^{Lt}\tilde f_{k_0}\in H_{k_0+1}$.}
{}From Theorem \ref{thm:e2rateofdecay} we conclude that
$$e_2\pa{f_\infty+e^{Lt}\pa{\tilde f_{k_0}}|f_\infty} \leq c_{k_0,\epsilon}e_2\pa{f_\infty+\tilde f_{k_0}|f_\infty}e^{-2\pa{(k_0+1)\mu-\epsilon}t},$$
for any $0<\epsilon<\mu$.\\
Next, we denote by $d_k:=\text{dim}(V_k)$ and let $\br{\xi_{i}}_{i=1,\dots,d_{k_0}}$ be an orthonormal basis for $V_{k_0}$. The invariance of $V_m$ under $L$ implies that we can write
$$e^{Lt}f_{k_0}=\sum_{i=1}^{d_{k_0}}a_i(t)\xi_i$$
with $\bm{a}(t):
=\pa{a_1(t),\dots,a_{d_{k_0}}(t)}$ satisfying the simple ODE
\begin{equation*}
\dot{\bm{a}}(t)=[L]^T_{k_0}\bm{a}(t).
\end{equation*}
This, together with the definition of $n_{k_0}$ and the fact that a matrix and its transpose share eigenvalues and defect numbers, implies that we can find a geometric constant that depends only on $k_0$ such that
\begin{equation}\label{eq:rate_of_decay_e2_general_comp}
\sum_{i=1}^{d_{k_0}}a_i^2(t) \leq c_{k_0}\pa{1+t^{2n_{k_0}}}e^{-2k_0\mu t}\sum_{i=1}^{d_{k_0}}a_i^2(0).
\end{equation}
Since, by the orthogonality of $V_{k_0}$ and $H_{k_0+1}$,
$$e_2\pa{f(t)|f_\infty} =e_2\pa{f_\infty+e^{Lt}(\tilde f_{k_0})+ e^{Lt}(f_{k_0})|f_\infty}=\frac{1}{2}\Norm{e^{Lt}(\tilde f_{k_0})+ e^{Lt}(f_{k_0})}^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}} $$
$$=\frac{1}{2}\Norm{e^{Lt}(\tilde f_{k_0})}^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}} + \frac{1}{2}\Norm{\sum_{i=1}^{d_{k_0}}a_i(t)\xi_i}^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}$$
$$=e_2\pa{f_\infty+e^{Lt}(\tilde f_{k_0})|f_\infty}+\frac{1}{2}\sum_{i=1}^{d_{k_0}}a_i(t)^2,$$
we see, by combining Theorem \ref{thm:e2rateofdecay} and \eqref{eq:rate_of_decay_e2_general_comp} that
\begin{equation}\nonumber
\begin{split}
e_2\pa{f(t)|f_\infty} \leq &c_{k_0,\epsilon}e_2\pa{f_\infty+\tilde f_{k_0}|f_\infty}e^{-2\pa{(k_0+1)\mu-\epsilon}t}\\
&+\frac{c_{k_0}}{2}\sum_{i=1}^{d_{k_0}}a_i^2(0)\pa{1+t^{2n_{k_0}}}e^{-2k_0\mu t}.
\end{split}
\end{equation}
Hence
\begin{equation*}
e_2\pa{f(t)|f_\infty}
\leq \max\pa{c_{k_0,\epsilon},c_{k_0}} \pa{e_2\pa{f_\infty+\tilde f_{k_0}|f_\infty}+\frac{1}{2}\Norm{f_{k_0}}^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}}\pa{1+t^{2n_{k_0}}}e^{-2k_0\mu t}.
\end{equation*}
This completes the proof, as we have seen that
$$e_2(f_0|f_\infty)=e_2\pa{f_\infty+\tilde f_{k_0}|f_\infty}+\frac{1}{2}\Norm{f_{k_0}}^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}}.$$
\end{proof}
\amit{
\begin{remark}\label{rem:no_split_for_e_p}
The idea to \emph{split} a solution into a few parts is viable \emph{only} for the $2-$entropy. The reason is that such a splitting, regardless of whether or not it can be performed for functions outside of $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, will most likely create functions without a definite sign. Such functions cannot be treated with the $p-$entropy for $1<p<2$.
\end{remark}
\amit{Theorem \ref{thm:rate_of_decay_e2} gives an optimal rate of decay for the $2-$entropy. However, one can also obtain a (possibly suboptimal) rate of decay by using Theorem \ref{thm:e2rateofdecay} and removing the condition $f_{k_0}\not=0$, which yields the following:}
\begin{corollary}\label{corr:decay_est}
The statement of Theorem \ref{thm:rate_of_decay_e2} remains valid when replacing $k_0$ by any $1 \leq k_1 \leq k_0$. However, the decay estimate \eqref{eq:rate_of_decay_e2_general} will not be sharp when $k_1<k_0$.
\end{corollary}}
\amit{\begin{proof}[Proof of Theorem \ref{thm:main} for $p=2$] The proof follows immediately from Coro\-llary \ref{corr:decay_est} for $k_1=1$.
\end{proof}}
Now that we have learned everything we can about the convergence to equilibrium of $e_2$, we proceed to investigate the convergence to equilibrium of $e_p$.
\section{Non-symmetric Hypercontractivity and Rates of Convergence for the $p-$Entropy}\label{sec:hyper}
In this section we will show how to deduce the rate of convergence to equilibrium for the family of $p-$entropies, $1<p<2$, from that of $e_2$. The key ingredient is a \emph{non-symmetric hypercontractivity} property of our Fokker-Planck equation - namely, that any solution to the equation with (initially only) a finite $p-$entropy will eventually be ``pushed'' into $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, at which point we can use the information we gained on $e_2$. \\
Before we show this result, and see how it implies our main theorem, we explain why and how this non-symmetric hypercontractivity helps.
\begin{lemma}\label{lem:control_of_ep_by_e2}
Let $f\in L^1_+\pa{\mathbb{R}^d}$ have unit mass. Then
\begin{enumerate}[(i)]
\item \begin{equation*}
e_p(f|f_\infty)=\frac{1}{p(p-1)}\pa{\norm{f}^p_{L^p\pa{\mathbb{R}^d,f_\infty^{1-p}}}-1}.
\end{equation*}
\item For any $1<p_1< p_2 \leq 2 $ there exists a constant $C_{p_1,p_2}>0$ such that
\begin{equation*}
e_{p_1}(f|f_\infty)\leq C_{p_1,p_2}e_{p_2}(f|f_\infty).
\end{equation*}
In particular, for any $1<p<2$
\begin{equation*}
e_{p}(f|f_\infty)\leq C_{p}e_{2}(f|f_\infty),
\end{equation*}
for a fixed geometric constant.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ is trivial.
To prove $(ii)$ we consider the function
$$g(y):=\begin{cases}
\frac{p_2(p_2-1)}{p_1(p_1-1)}\frac{y^{p_1}-p_1(y-1)-1}{y^{p_2}-p_2(y-1)-1}\,, & y\geq 0, y\not=1 \\
1\,, & y=1.
\end{cases}$$
Clearly $g \geq 0$ on $\mathbb{R}^+$, and it is easy to check that it is continuous (both numerator and denominator vanish quadratically at $y=1$, and the limit of their ratio there is $1$). Since, in addition, $\lim_{y\rightarrow\infty}g(y)=0$, the function $g$ is bounded on $[0,\infty)$, and the result follows from \eqref{eq:defentropy} with $C_{p_1,p_2}:=\sup_{y\geq 0}g(y)$.
\end{proof}
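As a quick numerical sanity check of the boundedness of $g$ (a sketch outside the formal development; the exponents $p_1=1.5$, $p_2=2$ below are illustrative sample values), one can tabulate $g$ on a grid and observe that it stays bounded and decays at infinity:

```python
import math

# Sanity check (not part of the formal proof): the function g from the
# proof is bounded on [0, infinity), so C_{p1,p2} = sup g exists.
# p1 and p2 are illustrative sample exponents.
p1, p2 = 1.5, 2.0

def psi(p, y):
    # unnormalized convex integrand y^p - p(y - 1) - 1 generating e_p
    return y ** p - p * (y - 1.0) - 1.0

def g(y):
    if abs(y - 1.0) < 1e-9:
        return 1.0  # continuous extension at y = 1
    return (p2 * (p2 - 1.0) / (p1 * (p1 - 1.0))) * psi(p1, y) / psi(p2, y)

grid = [k / 100.0 for k in range(0, 100000)]   # y in [0, 1000)
sup_g = max(g(y) for y in grid)

print(sup_g)      # approx 4/3 for these sample exponents (attained at y = 0)
print(g(900.0))   # small: g decays at infinity
```

For these sample exponents the supremum is attained at $y=0$, where $g(0)=\frac{p_2(p_2-1)}{p_1(p_1-1)}\cdot\frac{p_1-1}{1}=\frac{4}{3}$.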
It is worth noting that the second statement in part $(ii)$ of Lemma \ref{lem:control_of_ep_by_e2} can be extended to a general generating function for an admissible relative entropy. The following is taken from \cite{AMTU01}:
\begin{lemma}\label{lem:control_of_psi_by_e2}
Let $\psi$ be a generating function for an admissible relative entropy. Then one has that
\begin{equation*}
\psi(y) \leq 2\psi^{\prime\prime}(1)\psi_2(y),\quad y\geq 0.
\end{equation*}
In particular $e_p \leq 2 e_2$ for any $1<p<2$ whenever $e_2$ is finite.
\end{lemma}
Lemma \ref{lem:control_of_ep_by_e2} assures us that, if we start with initial data in $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, then $e_p$ will be finite. Moreover, due to Theorem \ref{thm:main} for $p=2$, and the fact that the solution to \eqref{eq:fokkerplanck} remains in $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, we have that
$$e_p(f(t)|f_\infty) \leq 2e_2(f(t)|f_\infty) \leq Ce_2(f_0|f_\infty)\pa{1+t^{2n}}e^{-2\mu t}. $$
However, one can easily find initial data $f_0\not\in L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ with finite $p-$entropies. If one can show that the flow of the Fokker-Planck equation eventually forces the solution to enter $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$, we would be able to utilise the idea we just presented, at least from that time on.\\
This explicit \emph{non-symmetric hypercontractivity} result is the main new theorem we present in this section.
\begin{theorem}\label{thm:hyper}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ satisfying Conditions (A)-(C). Let $f_0 \in L^1_+\pa{\mathbb{R}^d}$ be a function with unit mass and assume there exists $\epsilon>0$ such that
\begin{equation}\label{eq:hypercondition}
\int_{\mathbb{R}^d}e^{\epsilon \abs{x}^2}f_0(x)dx <\infty.
\end{equation}
\begin{enumerate}
\item[(i)]
Then, for any $q>1$, there exists an explicit $t_0>0$ that depends only on geometric constants of the problem such that the solution to \eqref{eq:fokkerplanck} satisfies
\begin{equation}\label{eq:hyper}
\int_{\mathbb{R}^d}f(t,x)^q f_\infty^{-1}(x)dx \leq \pa{\frac{q}{\pi (q+1)}}^{\frac{qd}{2}}\pa{\frac{8\pi^2}{q-1}}^{\frac{d}{2}} \pa{\int_{\mathbb{R}^d}e^{\epsilon\abs{x}^2}f_0(x)dx}^q
\end{equation}
for all $t\geq t_0$.
\item[(ii)] In particular, if $f_0$ satisfies $e_p(f_0|f_\infty)<\infty$ for some $1<p<2$ we have that
\begin{equation}\label{eq:entropichyper}
\begin{split}
e_2(f(t)|f_\infty)\leq \frac{1}{2}\pa{\pa{\frac{8\sqrt{2}}{3\cdot 2^{\frac{1}{p}}}}^d \pa{p(p-1)e_p(f_0|f_\infty)+1}^{\frac{2}{p}}-1},
\end{split}
\end{equation}
for $t\geq \tilde{t}_0(p)>0$, which can be given explicitly.
\end{enumerate}
\end{theorem}
\amit{\begin{remark}\label{rem:improvement_of_constants}
As we consider $e_p$ in our hypercontractivity, which is, up to a constant, the $L^p$ norm of $g:=\frac{f}{f_\infty}$ with the measure $f_\infty(x)dx$, one can view our result as a hypercontractivity property of the Ornstein-Uhlenbeck operator, $P$ (for an appropriate choice of the diffusion matrix ${\bf Q}$ and drift matrix ${\bf B}$), discussed in \S\ref{sec:spectral}. With this notation, \eqref{eq:entropichyper} is equivalent to
\begin{equation}\label{eq:reformulation_of_hyper}
\|g(t)\|_{L^2(f_\infty)} \leq C_{p,d} \|g_0\|_{L^p(f_\infty)}\,,\quad t\geq \tilde{t}_0(p)
\end{equation}
for $1<p<2$, where $C_{p,d}:=\left(\frac{8 \sqrt{2}}{3\cdot 2^{\frac{1}{p}}} \right)^{\frac{d}{2}}$. Since $e_2$ decreases along the flow of our equation, \eqref{eq:reformulation_of_hyper} is valid for $p=2$ with $C_{2,d}=1$. Thus, by using the Riesz-Thorin theorem one can improve inequality \eqref{eq:reformulation_of_hyper} to the same inequality with the constant $C_{p,d}^{\frac{2}{p}-1}$. We would like to point out at this point that a simple limit process shows that \eqref{eq:reformulation_of_hyper} is also valid for $p=1$, but there is no connection between the $L^1$ norm of $g$ and the Boltzmann entropy, $e_1$, of $f_0$.
\end{remark}}
\amit{\begin{remark}\label{rem:Wang}
Since its original definition for the Ornstein-Uhlenbeck semigroup in the work of Nelson, \cite{Ne73}, the notion of \emph{hypercontractivity} has been studied extensively for Markov diffusive operators (implying selfadjointness). A contemporary review of this topic can be found in \cite{BaGeLe14}. For such selfadjoint generators, hypercontractivity is equivalent to the validity of a logarithmic Sobolev inequality, as proved by Gross \cite{Gro75}. For non-symmetric generators, however, this equivalence does not hold: while a log Sobolev inequality still implies hypercontractivity of related semigroups (cf.\ the proof of Theorem 5.2.3 in \cite{BaGeLe14}), the reverse implication is not true in general (cf.\ Remark 5.1.1 in \cite{W05}). In particular, hypocoercive degenerate parabolic equations cannot give rise to a log Sobolev inequality, but they may exhibit hypercontractivity (as just stated above).\\
The last 20 years have seen the emergence of the, more delicate, study of hypercontractivity for non-symmetric and even degenerate semigroups. Notable works in the field are the paper of Fuhrman, \cite{F98}, and more recently the work of Wang et al., \cite{BWY15SPA,BWY15EJP,W17}. Most of these works consider an abstract Hilbert space as an underlying domain for the semigroup, and to our knowledge none of them give an explicit time after which one can observe the hypercontractivity phenomena (Fuhrman gives a condition on the time in \cite{F98}).\\
Our hypercontractivity theorem, which we will prove shortly, gives not only an explicit and quantitative inequality, but also provides an estimation on the time one needs to wait before the hypercontractivity occurs. To keep the formulation of Theorem \ref{thm:hyper} simple we did not include this ``waiting time'' there, but we emphasised it in its proof. Moreover, the hypercontractivity estimate from Theorem \ref{thm:hyper}(i) only requires \eqref{eq:hypercondition}, a weighted $L^1$ norm of $f_0$. This is weaker than in usual hypercontractivity estimates, which use $L^p$ norms as on the r.h.s. of \eqref{eq:reformulation_of_hyper}.
\end{remark}}
It is worth noting that we prove our theorem in the setting of the $e_p$ entropies, which can be thought of as $L^p$ spaces with a $p$-dependent weight function.
In order to be able to prove Theorem \ref{thm:hyper} we will need a few technical lemmas.
\begin{lemma}\label{lem:exact_sol}
Given $f_0\in L^1_+\pa{\mathbb{R}^d}$ with unit mass, the solution to the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ that satisfy Conditions (A)-(C) is given by
\begin{equation}\label{eq:exactsolution}
f(t,x)=\frac{1}{\pa{2\pi}^{\frac{d}{2}}\sqrt{\det {\bf W}(t)}}\int_{\mathbb{R}^d}e^{-\frac{1}{2}\pa{x-e^{-{\bf C} t}y}^T {\bf W}(t)^{-1}\pa{x-e^{-{\bf C} t}y}} f_0(y)dy,
\end{equation}
where
\begin{equation*}
{\bf W}(t):=2\int_{0}^t e^{-{\bf C} s}{\bf D} e^{-{\bf C}^T s}ds.
\end{equation*}
\end{lemma}
This is a well-known result; see for instance \S 1 in \cite{Ho67} or \S 6.5 in \cite{RiFP89}.
\begin{lemma}\label{lem:WconvergesKrate}
Assume that the diffusion and drift matrices, ${\bf D}$ and ${\bf C}$, satisfy Conditions (A)-(C), and let ${\bf K}$ be the unique positive definite matrix that satisfies
\begin{equation*}
2{\bf D}={\bf C}{\bf K}+{\bf K}{\bf C}^T.
\end{equation*}
Then (in any matrix norm)
\begin{equation*}
\norm{{\bf W}(t)-{\bf K}} \leq c (1+t^{2n})e^{-2\mu t}, \quad t\geq 0,
\end{equation*}
where $c>0$ is a geometric constant depending on $n$ and $\mu$, with $n$ being the maximal defect of the eigenvalues of ${\bf C}$ with real part $\mu$, defined in \eqref{eq:def_of_mu}.
\end{lemma}
\begin{proof}
We start the proof by noticing that ${\bf K}$ is given by
$${\bf K}=2\int_{0}^\infty e^{-{\bf C} s}{\bf D} e^{-{\bf C}^T s}ds$$
(see for instance \cite{OPPS15}). As such
$$\norm{{\bf W}(t)-{\bf K}} \leq 2\int_{t}^{\infty} \norm{e^{-{\bf C} s}{\bf D} e^{-{\bf C}^T s}}ds \leq 2\norm{{\bf D}}\int_{t}^{\infty} \norm{e^{-{\bf C} s}} \norm{e^{-{\bf C}^T s}}ds.$$
Using the fact that $${\bf{A}} e^{-{\bf C} t}{\bf{A}}^{-1} = e^{-{\bf A}{\bf C}{\bf A}^{-1}t}$$ for any regular matrix ${\bf A}$, we conclude that, if ${\bf{J}}$ is the Jordan form of ${\bf C}$, then
\begin{equation}\label{eq:exp_C_connection_jordan}
\norm{e^{-{\bf C} t}} \leq \norm{{\bf A}_{{\bf{J}}}}\norm{{\bf A}_{{\bf{J}}}^{-1}} \norm{e^{-{\bf{J}}t}}\,,
\end{equation}
where ${\bf A}_{{\bf{J}}}$ is the similarity matrix between ${\bf C}$ and its Jordan form.\\
For a single Jordan block of size $n+1$ (corresponding to a defect of $n$ in the eigenvalue $\lambda$), ${\bf{\widetilde{J}}}$, we find that
$$e^{{\bf{\widetilde{J}}}t}=\begin{pmatrix}
e^{\lambda t} & t e^{\lambda t} & \dots & \frac{t^n}{n!}e^{\lambda t} \\
 & e^{\lambda t} & \ddots & \frac{t^{n-1}}{(n-1)!}e^{\lambda t} \\
 & & \ddots & \vdots \\
0 & & & e^{\lambda t}
\end{pmatrix} \qquad \text{ where }\qquad
{\bf{\widetilde{J}}}=\begin{pmatrix}
\lambda & 1 & & 0 \\
 & \ddots & \ddots & \\
 & & & 1 \\
0 & & & \lambda
\end{pmatrix}.$$
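The closed form of the Jordan-block exponential displayed above is easy to check against a truncated Taylor series; the following minimal sketch uses a hypothetical $3\times 3$ block with $\lambda=-1$ and $t=1.5$ (sample values, not taken from the text):

```python
import math

# Sanity check: for a single 3x3 Jordan block J with eigenvalue lam,
# compare the closed-form entries (e^{Jt})_{ij} = t^{j-i}/(j-i)! e^{lam t}
# (for j >= i) with a truncated Taylor series sum_k (tJ)^k / k!.
lam, t, N = -1.0, 1.5, 3

J = [[lam if i == j else (1.0 if j == i + 1 else 0.0) for j in range(N)]
     for i in range(N)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# Taylor series for e^{tJ}: accumulate term_k = (tJ)^k / k!
expJ = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
term = [row[:] for row in expJ]
tJ = [[t * J[i][j] for j in range(N)] for i in range(N)]
for k in range(1, 60):
    term = [[term[i][m] / k for m in range(N)] for i in range(N)]
    term = mat_mul(term, tJ)
    for i in range(N):
        for j in range(N):
            expJ[i][j] += term[i][j]

closed = [[(t ** (j - i) / math.factorial(j - i)) * math.exp(lam * t)
           if j >= i else 0.0 for j in range(N)] for i in range(N)]

err = max(abs(expJ[i][j] - closed[i][j]) for i in range(N) for j in range(N))
print(err)  # tiny: the closed form matches the series
```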
Thus, we conclude that
$$\norm{e^{{\bf{\widetilde{J}}}t}x}_{1} \leq \sum_{i=1}^{n+1} \sum_{j=i}^{n+1} \frac{t^{j-i}}{(j-i)!}e^{\Re(\lambda) t} \abs{x_j} \leq \pa{ \sum_{i=1}^{n+1} \pa{1+t^n}e^{\Re(\lambda) t}}\norm{x}_{1} $$
$$=(n+1)\pa{1+t^n}e^{\Re(\lambda) t}\norm{x}_{1},\quad t\geq 0.$$
Due to the equivalence of norms on finite dimensional spaces, there exists a geometric constant $c_1>0$, that depends on $n$, such that
\begin{equation}\label{eq:jordan_bound}
\norm{e^{{\bf{\widetilde{J}}}t}} \leq c_1\pa{1+t^n}e^{\Re(\lambda) t}.
\end{equation}
Coming back to ${\bf C}$, we see that the above inequality together with \eqref{eq:exp_C_connection_jordan} imply that $\norm{e^{-{\bf C} t}}$ is controlled by the norm of ${\bf C}$'s largest (measured by the defect number) Jordan block of the eigenvalue with smallest real part. From this, and \eqref{eq:jordan_bound}, we conclude that
\begin{equation}\label{eq:C-convergence}
\|e^{-{\bf C} t} \| \leq c_2 (1+t^{n})e^{-\mu t}, \quad t\geq 0.
\end{equation}
The same estimation for $\norm{e^{-{\bf C}^T t}}$ implies that
$$\norm{{\bf W}(t)-{\bf K}}\leq c_3\int_{t}^{\infty} \pa{1+s^{2n}}e^{-2\mu s}ds,$$
for some geometric constant $c_3>0$ that depends on $n$. Since
\[
\int^\infty_t s^{2n} e^{-2\mu s} ds = \left[\frac{1}{2\mu} t^{2n} + \frac{2n}{(2\mu)^2} t^{2n-1} + \frac{2n(2n-1)}{(2\mu)^3} t^{2n-2} +\dots+ \frac{(2n)!}{(2\mu)^{2n+1}}\right]e^{-2\mu t},
\]
we conclude the desired result.
\end{proof}
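The statement of the lemma above can be illustrated numerically. The following is a minimal sketch with hypothetical data (not from the text): the defective $2\times 2$ Jordan block ${\bf C}$ with eigenvalue $\mu=1$ (so $n=1$) and ${\bf D}={\bf I}$, for which both the Lyapunov equation for ${\bf K}$ and the decay of $\norm{{\bf W}(t)-{\bf K}}$ can be checked directly:

```python
import math

# Hypothetical example: C = [[mu, 1], [0, mu]] with mu = 1 (defect n = 1),
# D = I.  We check that W(t) converges to the solution K of the Lyapunov
# equation 2D = C K + K C^T, with profile (1 + t^2) e^{-2t}.
MU = 1.0

def exp_minus_C(s):
    # exact e^{-Cs} for the 2x2 Jordan block
    e = math.exp(-MU * s)
    return [[e, -s * e], [0.0, e]]

def W(t, steps=4000):
    # W(t) = 2 * int_0^t e^{-Cs} D e^{-C^T s} ds with D = I (midpoint rule)
    acc = [[0.0, 0.0], [0.0, 0.0]]
    h = t / steps
    for k in range(steps):
        E = exp_minus_C((k + 0.5) * h)
        for i in range(2):
            for j in range(2):
                acc[i][j] += 2.0 * h * sum(E[i][m] * E[j][m] for m in range(2))
    return acc

K = W(40.0, steps=40000)        # W(t) -> K as t -> infinity
C = [[MU, 1.0], [0.0, MU]]
CK = [[sum(C[i][m] * K[m][j] for m in range(2)) for j in range(2)]
      for i in range(2)]
# residual of the Lyapunov equation 2I = CK + (CK)^T  (K is symmetric)
res = max(abs(CK[i][j] + CK[j][i] - (2.0 if i == j else 0.0))
          for i in range(2) for j in range(2))

def dist(t):
    Wt = W(t)
    return max(abs(Wt[i][j] - K[i][j]) for i in range(2) for j in range(2))

print(res)                    # close to 0
print(dist(3.0), dist(6.0))   # decaying roughly like (1 + t^2) e^{-2t}
```

For this example one can also verify by hand that ${\bf K}=\begin{pmatrix}3/2&-1/2\\-1/2&1\end{pmatrix}$ solves $2{\bf I}={\bf C}{\bf K}+{\bf K}{\bf C}^T$.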
While we could continue with a general matrix ${\bf K}$, our computations simplify greatly when ${\bf K}={\bf I}$. Since we are working under the assumption that ${\bf D}={\bf C}_S$, the normalization from Theorem \ref{thm:simpliedDC} implies exactly that. Thus, from this point onwards we will assume that ${\bf K}={\bf I}$.
\begin{lemma}\label{lem:Inverse}
For any $\epsilon>0$ there exists an explicit $t_1>0$ such that for all $t\geq t_1$
\begin{equation*}
\norm{{\bf W}^{-1}(t)-{\bf I}} \leq \epsilon,
\end{equation*}
where ${\bf W}(t)$ is as in Lemma \ref{lem:WconvergesKrate}. An explicit, but not optimal, choice for $t_1$ is given by
\begin{equation}\label{eq:explicit_time}
t_1(\epsilon): =\frac{1}{2(\mu-\alpha)} \log\pa{\frac{c(1+\epsilon)\pa{1+\pa{\frac{n}{\alpha e}}^{2n}}}{\epsilon}},
\end{equation}
where $0<\alpha<\mu$ is arbitrary and $c>0$ is given by Lemma \ref{lem:WconvergesKrate}.
\end{lemma}
\begin{proof}
We have that for any invertible matrix ${\bf A}$
$$\|{\bf A}^{-1}-{\bf I} \|= \|\pa{{\bf A}-{\bf I}}{\bf A}^{-1}\| \leq \norm{{\bf A}-{\bf I}}\norm{{\bf A}^{-1}}.$$
In addition, if $\norm{{\bf A}-{\bf I}}<1$, then
$$\norm{{\bf A}^{-1}}= \norm{\pa{{\bf I}-\pa{{\bf I}-{\bf A}}}^{-1}} \leq \frac{1}{1-\norm{{\bf A}-{\bf I}}}.$$
Thus, for any $t>0$ such that $\norm{{\bf W}(t)-{\bf I}}<1$ we have that
\begin{equation}\label{eq:W-1est}
\norm{{\bf W}^{-1}(t)-{\bf I}} \leq \frac{\norm{{\bf W}(t)-{\bf I}}}{1-\norm{{\bf W}(t)-{\bf I}}}.
\end{equation}
Defining $\tilde{t_1}(\epsilon)$ as
\begin{equation}\label{eq:t1tilde}
\tilde{t_1}(\epsilon):=\min\br{s\geq 0\, \bigg|\,\pa{1+t^{2n}}e^{-2\mu t} \leq \frac{\epsilon}{c(1+\epsilon)},\quad \forall t\geq s},
\end{equation}
with the constant $c$ given by Lemma \ref{lem:WconvergesKrate}, we see from Lemma \ref{lem:WconvergesKrate} that for any $t\geq \tilde{t_1}(\epsilon)$
$$\norm{{\bf W}(t)-{\bf I}} \leq \frac{\epsilon}{1+\epsilon}.$$
Combining the above with \eqref{eq:W-1est}, shows the first result for $t_1= \tilde{t_1}(\epsilon)$.
\medskip
To prove the second claim we will show that
\begin{equation*}
t_1(\epsilon)\geq \tilde{t_1}(\epsilon).
\end{equation*}
For this elementary proof we use the fact that
$$\max_{t\geq 0} e^{-a t}t^b = \pa{\frac{b}{a e}}^{b}$$
for any $a,b >0$. Thus, choosing $a = 2\alpha$, where $0<\alpha<\mu$ is arbitrary, and $b=2n$ we have that
$$\pa{1+t^{2n}}e^{-2\mu t} \leq \pa{1+\pa{\frac{n}{\alpha e}}^{2n}} e^{-2(\mu-\alpha)t},\quad t\geq 0.$$
As a consequence, if
\begin{equation}\label{eq:expest}
\pa{1+\pa{\frac{n}{\alpha e}}^{2n}} e^{-2(\mu-\alpha)t} \leq \frac{\epsilon}{c(1+\epsilon)},\quad \forall t\geq s,
\end{equation}
then $s\geq \tilde{t_1}(\epsilon)$ due to \eqref{eq:t1tilde}. The smallest possible $s$ in \eqref{eq:expest} is obtained by solving the corresponding equality for $t$, and yields \eqref{eq:explicit_time}, concluding the proof.
\end{proof}
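Both the elementary maximum $\max_{t\geq 0}e^{-at}t^b=\pa{\frac{b}{ae}}^b$ and the sufficiency of the explicit time \eqref{eq:explicit_time} can be verified numerically; in the sketch below the constants $c=1$, $n=2$, $\mu=1$, $\alpha=1/2$, $\epsilon=0.1$ are hypothetical sample values:

```python
import math

# Illustrative constants (sample values, not from the text).
c, n, mu, alpha, eps = 1.0, 2, 1.0, 0.5, 0.1

# elementary maximum used in the proof: max_t e^{-a t} t^b = (b/(a e))^b
a, b = 2.0 * alpha, 2.0 * n
grid_max = max(math.exp(-a * t) * t ** b
               for t in (k / 1000.0 for k in range(1, 20001)))
formula_max = (b / (a * math.e)) ** b

# explicit waiting time t1(eps) from the lemma
t1 = (1.0 / (2.0 * (mu - alpha))) * math.log(
    c * (1.0 + eps) * (1.0 + (n / (alpha * math.e)) ** (2 * n)) / eps)

# check (1 + t^{2n}) e^{-2 mu t} <= eps / (c (1 + eps)) for all t >= t1
threshold = eps / (c * (1.0 + eps))
worst = max((1.0 + t ** (2 * n)) * math.exp(-2.0 * mu * t)
            for t in (t1 + k / 100.0 for k in range(0, 5001)))

print(abs(grid_max - formula_max))  # small
print(worst <= threshold)           # True: t1 is a valid waiting time
```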
We now have all the tools to prove Theorem \ref{thm:hyper}.
\begin{proof}[Proof of Theorem \ref{thm:hyper}]
To show $(i)$ we recall Minkowski's integral inequality, which will play an important role in estimating the $L^p$ norms of $f(t)$. \\
\textbf{Minkowski's Integral Inequality:} \emph{For any non-negative measurable function $F$ on $(X_1\times X_2, \mu_1\times \mu_2)$, and any $q\geq 1$ one has that}
\begin{equation}\label{eq:Mink}
\begin{split}
\left(\int_{X_2}\abs{\int_{X_1}F(x_1,x_2)d\mu_1( x_1)}^q d\mu_2(x_2) \right)^{\frac{1}{q}}& \\
\leq \int_{X_1} \pa{ \int_{X_2}\abs{F(x_1,x_2)}^q d\mu_{2}( x_2) }^{\frac{1}{q}}&d\mu_1(x_1).
\end{split}
\end{equation}
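For intuition, here is a minimal discrete analogue of \eqref{eq:Mink} (counting measures on finite sets, with random sample data); in this form it is exactly the triangle inequality in $\ell^q$:

```python
import random

# Discrete Minkowski integral inequality with counting measures:
# (sum_j (sum_i F_ij)^q)^{1/q} <= sum_i (sum_j F_ij^q)^{1/q}.
# The matrix F below is random illustrative data.
random.seed(0)
q = 3.0
m, k = 7, 11
F = [[random.random() for _ in range(k)] for _ in range(m)]

lhs = sum(sum(F[i][j] for i in range(m)) ** q for j in range(k)) ** (1.0 / q)
rhs = sum(sum(F[i][j] ** q for j in range(k)) ** (1.0 / q) for i in range(m))

print(lhs <= rhs + 1e-12)  # True
```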
Next, we fix an $\epsilon_1=\epsilon_1(\epsilon,q)\in(0,1)$, to be chosen later. From Lemmas \ref{lem:WconvergesKrate} and \ref{lem:Inverse} we see that, for $t\geq t_1(\epsilon_1)$ with
$$t_1(\epsilon_1): =\frac{1}{2(\mu-\alpha)} \log\pa{\frac{c(1+\epsilon_1)\pa{1+\pa{\frac{n}{\alpha e}}^{2n}}}{\epsilon_1}}$$
for some fixed $0<\alpha<\mu$, we have that
$$\norm{{\bf W}(t)-{\bf I}} \leq \frac{\epsilon_1}{1+\epsilon_1}<\epsilon_1,\qquad \norm{{\bf W}^{-1}(t)-{\bf I}} \leq \epsilon_1,$$
and hence
$$
{\bf W}(t) > (1-\epsilon_1){\bf I},\qquad {\bf W}(t)^{-1}\geq (1-\epsilon_1){\bf I}.
$$
As such, for $t\geq t_1(\epsilon_1)$
\begin{equation}\label{eq:hyperproofI}
\abs{e^{-\frac{1}{2}\pa{x-e^{-{\bf C} t}y}^T {\bf W}(t)^{-1}\pa{x-e^{-{\bf C} t}y}} f_0(y)}^q \leq e^{-\frac{q}{2}(1-\epsilon_1)\abs{x-e^{-{\bf C} t}y}^2} \abs{f_0(y)}^{q}
\end{equation}
and
\begin{equation}\label{eq:hyperproofII}
\det{{\bf W}(t)} \geq (1-\epsilon_1)^d.
\end{equation}
We conclude, using \eqref{eq:Mink}, the exact solution formula \eqref{eq:exactsolution}, \eqref{eq:hyperproofI} and \eqref{eq:hyperproofII} that for $t\geq t_1(\epsilon_1)$ it holds:
\begin{equation}\label{eq:hyperproofIII}
\begin{split}\int_{\mathbb{R}^d} &\abs{f(t,x)}^q f_\infty^{-1}(x)dx \\\leq& \frac{(2\pi)^{\frac{d}{2}}}{\pa{2\pi (1-\epsilon_1)}^{\frac{qd}{2}}}\pa{\int_{\mathbb{R}^d} \pa{\int_{\mathbb{R}^d}e^{-\frac{q}{2}(1-\epsilon_1)\abs{x-e^{-{\bf C} t}y}^2} \abs{f_0(y)}^{q} e^{\frac{\abs{x}^2}{2}}dx}^{\frac{1}{q}}dy}^q \\
=&\frac{(2\pi)^{\frac{d}{2}}}{\pa{2\pi (1-\epsilon_1)}^{\frac{qd}{2}}}\pa{\int_{\mathbb{R}^d}\pa{\int_{\mathbb{R}^d}e^{-\frac{q}{2}(1-\epsilon_1)\abs{x-e^{-{\bf C} t}y}^2} e^{\frac{\abs{x}^2}{2}}dx}^{\frac{1}{q}} \abs{f_0(y)}dy}^q.
\end{split}
\end{equation}
We proceed by choosing $\epsilon_1>0$ such that $q(1-\epsilon_1)>1$ (or equivalently $\epsilon_1 < \frac{q-1}{q}$) and denoting
$$\eta:=q(1-\epsilon_1)-1>0.$$
Shifting the $x$ variable by $\frac12 e^{-{\bf C} t}y$ and completing the square, we find that
\begin{equation}\label{eq:hyperproofIV}
\begin{split}
\int_{\mathbb{R}^d}e^{-\frac{q}{2}(1-\epsilon_1)\abs{x-e^{-{\bf C} t}y}^2} e^{\frac{\abs{x}^2}{2}}dx &= \int_{\mathbb{R}^d}e^{-\frac{\eta+1}{2}\abs{x-\frac{1}{2}e^{-{\bf C} t}y}^2} e^{\frac{\abs{x+\frac{1}{2}e^{-{\bf C} t}y}^2}{2}}dx\\
=\int_{\mathbb{R}^d} e^{x\cdot e^{-{\bf C} t}y}e^{-\frac{\eta}{2}\abs{x-\frac{1}{2}e^{-{\bf C} t}y}^2}dx&=\int_{\mathbb{R}^d}e^{-\frac{\eta}{2}\abs{x-\frac{1}{2}\pa{1+\frac{2}{\eta}}e^{-{\bf C} t}y}^2}e^{\pa{\frac12+\frac{1}{2\eta}}\abs{e^{-{\bf C} t}y}^2}dx\\
&=\pa{\frac{2\pi}{\eta}}^{\frac{d}{2}} e^{\pa{\frac12+\frac{1}{2\eta}}\abs{e^{-{\bf C} t}y}^2}.
\end{split}
\end{equation}
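The completing-the-square computation above is easy to verify numerically in dimension one; the values $q=2$, $\epsilon_1=0.2$ and the scalar $z$ standing in for $e^{-{\bf C}t}y$ below are illustrative assumptions:

```python
import math

# 1-d check of the Gaussian identity above (sample values):
# int e^{-(q/2)(1-eps1)(x-z)^2} e^{x^2/2} dx
#   = sqrt(2 pi / eta) * e^{(1/2 + 1/(2 eta)) z^2},  eta = q(1-eps1) - 1 > 0.
q, eps1, z = 2.0, 0.2, 0.7
eta = q * (1.0 - eps1) - 1.0

h, R = 1e-3, 40.0
num = h * sum(math.exp(-0.5 * q * (1.0 - eps1) * (x - z) ** 2 + 0.5 * x * x)
              for x in (k * h - R for k in range(int(2 * R / h) + 1)))
closed = math.sqrt(2.0 * math.pi / eta) * math.exp((0.5 + 0.5 / eta) * z * z)

print(abs(num - closed) / closed)  # small relative error
```

Note that the integral converges precisely because $\eta>0$, which is why the restriction $q(1-\epsilon_1)>1$ is needed.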
Using \eqref{eq:C-convergence} we can find a uniform geometric constant $c_2$ such that
\begin{equation}\nonumber
\|e^{-{\bf C} t}\|^2\leq c_2^2 \pa{1+t^{n}}^2 e^{-2\mu t}\leq 2c_2^2 \pa{1+t^{2n}}e^{-2\mu t}.
\end{equation}
Following the proof of Lemma \ref{lem:Inverse} we recall that if
$$t\geq \frac{1}{2(\mu-\alpha)}\log\pa{\frac{\tilde{c}(1+\epsilon_2)\pa{1+\pa{\frac{n}{\alpha e}}^{2n}}}{\epsilon_2}},$$
where $0<\alpha<\mu$ is arbitrary and $\tilde{c},\epsilon_2>0$ are arbitrary constants, then
$$\pa{1+t^{2n}}e^{-2\mu t} \leq \frac{\epsilon_2}{\tilde{c}(1+\epsilon_2)}.$$
Thus, choosing
$$\tilde{c}=\frac{c_2^2(1+\eta)}{q\eta}=\frac{c_2^2(1-\epsilon_1)}{q(1-\epsilon_1)-1}\quad \text{ and }\quad \epsilon_2=\frac{\epsilon_1}{1-\epsilon_1}$$
we get that if
\begin{equation*}
t\geq t_2(\epsilon_1):=\frac{1}{2(\mu-\alpha)}\log\pa{\frac{c_2^2(1-\epsilon_1)\pa{1+\pa{\frac{n}{\alpha e}}^{2n}}}{\pa{q(1-\epsilon_1)-1}\epsilon_1}},
\end{equation*}
where $0<\alpha<\mu$ is as before, then
\begin{equation}\nonumber
\pa{\frac{1}{2}+\frac{1}{2\eta}}\|e^{-{\bf C} t}\|^2\leq \frac{c_2^2(1+\eta)}{q\eta}q\pa{1+t^{2n}}e^{-2\mu t} \leq q\epsilon_1.
\end{equation}
Combining this with our previous computations (\eqref{eq:hyperproofIII} and \eqref{eq:hyperproofIV}), we find that for any $t\geq t_0(\epsilon_1):=\max\pa{t_1(\epsilon_1),t_2(\epsilon_1)}$
$$\int_{\mathbb{R}^d}\abs{f(t,x)}^q f_\infty^{-1}(x)dx \leq \frac{(2\pi)^{d(1-\frac{q}{2})}}{(1-\epsilon_1)^{\frac{qd}{2}}\eta^{\frac{d}{2}}} \pa{\int_{\mathbb{R}^d}e^{\epsilon_1\abs{y}^2}f_0(y)dy}^q.$$
If $\epsilon_1$ is chosen more restrictively than before, namely $\epsilon_1 \leq \frac{q-1}{2q}$, then we have
$$ \frac{q-1}{2} \leq \eta< q-1 \qquad \text{and} \qquad 1-\epsilon_1 \geq \frac{q+1}{2q},$$
which implies the first statement of the theorem by choosing $\epsilon_1:=\min\pa{\epsilon,\frac{q-1}{2q}}$.
\medskip
For the proof of (ii) we note that \eqref{eq:entropichyper} is equivalent to
\begin{equation}\label{eq:hyperequiv}
\|f(t)\|^2_{L^2\pa{\mathbb{R}^d,f_\infty^{-1}}} \leq\pa{\frac{8\sqrt{2}}{3\cdot 2^{\frac{1}{p}}}}^d\|f_0\|^2_{L^p\pa{\mathbb{R}^d,f_\infty^{1-p}}}.
\end{equation}
With the H\"{o}lder inequality we obtain
\begin{equation*}
\begin{split}
\int_{\mathbb{R}^d} e^{\frac{p-1}{4p}|x|^2}f_0(x) dx &\leq \left(\int_{\mathbb{R}^d} e^{-\frac{|x|^2}{4}} dx \right)^{\frac{p-1}{p}} \left(\int_{\mathbb{R}^d} e^{\frac{p-1}{2}|x|^2}f_0^p(x)dx \right)^{\frac1p}\\
&= 2^{\frac{d}{2}\frac{p-1}{p}}\|f_0\|_{L^p\pa{\mathbb{R}^d,f_\infty^{1-p}}}.
\end{split}
\end{equation*}
Hence, $e_p(f_0|f_\infty)<\infty$ implies \eqref{eq:hypercondition} with $\epsilon=\frac{p-1}{4p}$, and \eqref{eq:hyperequiv} follows from \eqref{eq:hyper} with $q=2$ and $\tilde t_0(p)=t_0\pa{\frac{p-1}{4p}}$.
\end{proof}
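The H\"{o}lder step in the proof above can be checked numerically in dimension $d=1$; the choice $p=1.5$ and the test density $f_0=$ standard Gaussian below are illustrative assumptions (any unit-mass $f_0$ with the required moment bound would do):

```python
import math

# 1-d numerical check of the Hoelder estimate in the proof above,
# with the illustrative choice p = 1.5 and f0 = standard Gaussian.
p = 1.5
a = (p - 1.0) / (4.0 * p)     # exponent on the left-hand side
b = (p - 1.0) / 2.0           # exponent inside the L^p-type norm

def f0(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

h, R = 1e-3, 20.0
xs = [k * h - R for k in range(int(2 * R / h) + 1)]

lhs = h * sum(math.exp(a * x * x) * f0(x) for x in xs)
gauss = h * sum(math.exp(-0.25 * x * x) for x in xs)        # int e^{-x^2/4}
lp = h * sum(math.exp(b * x * x) * f0(x) ** p for x in xs)  # int e^{b x^2} f0^p

rhs = gauss ** ((p - 1.0) / p) * lp ** (1.0 / p)
print(lhs <= rhs + 1e-9)  # True: consistent with the Hoelder estimate
```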
\begin{remark}\label{rem:explicit_time_hyper}
If the condition \eqref{eq:hypercondition} holds for $\epsilon=\frac12$ we can give an explicit upper bound for the ``waiting time'' in the hypercontractivity estimate \eqref{eq:hyper}. For such $\epsilon$ we have $\epsilon_1:=\min\pa{\epsilon,\frac{q-1}{2q}}=\frac{q-1}{2q}$, and by choosing $\alpha=\frac{\mu}{2}$ we can see that $t_0(\epsilon_1)$ from the proof of Theorem \ref{thm:hyper} is
$$\overline{t_0}(q): =\frac{1}{\mu} \log\pa{\frac{\max\pa{c(3q-1),2c_2^2\frac{q+1}{q-1}} \pa{1+\pa{\frac{2n}{\mu e}}^{2n}}}{q-1 }},
$$
where $c,c_2$ are geometric constants found in the proof of Lemma \ref{lem:WconvergesKrate}.
\end{remark}
With the non-symmetric hypercontractivity result at hand, we can finally complete the proof of our main theorem for $1<p<2$.
\begin{proof}[Proof of Theorem \ref{thm:main} for $1<p<2$]
Using Theorem \ref{thm:hyper} $(ii)$ we find an explicit\linebreak $T_0(p)$ such that for any $t\geq T_0(p)$ the solution to the Fokker-Planck equation, $f(t)$, is in $L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$. Proceeding similarly to the previous remark (but now with $q=2$ and $\epsilon=\frac{p-1}{4p}$) we have $\epsilon_1:=\min\pa{\frac{p-1}{4p},\frac14}=\frac{p-1}{4p}$.
This yields the following upper bound for the ``waiting time'' in the hypercontractivity estimate \eqref{eq:entropichyper}:
$$
T_0(p): =\frac{1}{\mu} \log\pa{\frac{\max\pa{c(5p-1),2c_2^2\frac{3p^2+p}{p+1}} \pa{1+\pa{\frac{2n}{\mu e}}^{2n}}}{p-1}}.
$$
Using Lemma \ref{lem:control_of_psi_by_e2}, Theorem \ref{thm:main} for $p=2$ (which was already proven in \S\ref{sec:spectral}), and inequality \eqref{eq:entropichyper} we conclude that for any $t\geq T_0(p)$
\begin{equation}\label{eq:main_proof_1p2_I}
\begin{gathered}
e_p(f(t)|f_\infty) \leq 2e_2(f(t)|f_\infty) \leq 2\tilde{c_2}e_2\pa{f(T_0(p))|f_\infty}\pa{1+\pa{t-T_0(p)}^{2n}}e^{-2\mu (t-T_0(p))}\\
\leq 2\tilde{c_p}e^{2\mu T_0(p)} \pa{p(p-1)e_p(f_0|f_\infty)+1}^{\frac{2}{p}}\pa{1+t^{2n}}e^{-2\mu t}.\quad
\end{gathered}
\end{equation}
To complete the proof we recall that any admissible relative entropy decreases along the flow of the Fokker-Planck equation (see \cite{AE} for instance). Thus, for any $t\leq T_0(p)$ we have that
\begin{equation}\label{eq:main_proof_1p2_II}
e_p(f(t)|f_\infty) \leq e_p(f_0|f_\infty) \leq e_p(f_0|f_\infty)e^{2\mu T_0(p)}\pa{1+t^{2n}}e^{-2\mu t}.
\end{equation}
The theorem now follows from \eqref{eq:main_proof_1p2_I} and \eqref{eq:main_proof_1p2_II}, together with the fact that for any $1<p<2$
$$e_p(f_0|f_\infty) \leq \mathcal{C}_p \pa{p(p-1)e_p(f_0|f_\infty)+1}^{\frac{2}{p}},$$
where $\mathcal{C}_p:=\sup_{x\geq 0}\frac{x}{(p(p-1)x+1)^{\frac{2}{p}}}<\infty.$
\end{proof}
We end this section with a slight generalization of our main theorem:
\begin{theorem}\label{thm:main_generalized}
Let $\psi$ be a generating function for an admissible relative entropy. Assume in addition that there exists $C_\psi>0$ such that
\begin{equation}\label{eq:psi_condition_hyper}
\psi_{p}(y) \leq C_{\psi}\psi(y)
\end{equation}
for some $1<p<2$ and all $y\in \mathbb{R}^+$. Then, under the same setting as Theorem \ref{thm:main} (but now with the assumption $e_\psi(f_0|f_\infty)<\infty$) we have that
\begin{equation*}
e_\psi(f(t)|f_\infty) \leq c_{p,\psi} \pa{e_\psi(f_0|f_\infty)+1}^{\frac{2}{p}}\pa{1+t^{2n}}e^{-2\mu t},\quad t\ge0,
\end{equation*}
where $c_{p,\psi}>0$ is a fixed geometric constant.
\end{theorem}
\begin{proof}
The proof is almost identical to the proof of Theorem \ref{thm:main}. Due to \eqref{eq:psi_condition_hyper} we know that $e_p(f_0|f_\infty)<\infty$. As such, according to Theorem \ref{thm:hyper} $(ii)$ there exists an explicit $T_0(p)$ such that for all $t\geq T_0(p)$ we have that $f(t)\in L^2\pa{\mathbb{R}^d,f_\infty^{-1}}$ and
$$e_2(f(t)|f_\infty)\leq \frac{1}{2}\pa{\pa{\frac{8\sqrt2}{3\cdot 2^\frac1p}}^{d}\pa{C_\psi p(p-1)e_\psi(f_0|f_\infty)+1}^{\frac{2}{p}}-1}.$$
The above, together with Lemma \ref{lem:control_of_psi_by_e2} gives the appropriate decay estimate on $e_\psi$ for $t\geq T_0(p)$. Since $e_\psi$ decreases along the flow of our equation, we can deal with the interval $t\leq T_0(p)$ like in the previous proof, yielding the desired result.
\end{proof}
In the next, and last, section of this work we will mention another natural quantity in the theory of Fokker-Planck equations - the Fisher information. We will briefly explain how the method we presented here differs from the usual technique one considers when dealing with the entropy. Moreover, we describe how to infer from our main theorem an improved rate of convergence to equilibrium - in relative Fisher information.
\section{Decay of the Fisher Information}\label{sec:fisher}
The study of convergence to equilibrium for Fokker-Planck equations via relative entropies has a long history. Unlike the study we presented here, which relies on a detailed spectral investigation of the Fokker-Planck operator together with a non-symmetric hypercontractivity result, the common method to approach this problem - even in the degenerate case - is the so-called \emph{entropy method}.\\
The idea behind the entropy method is fairly simple: once an entropy has been chosen and shown to be a Lyapunov functional for the equation, one attempts to find a linear relation between it and the absolute value of its dissipation. In the setting of our equation, the latter quantity is referred to as \emph{the Fisher information}. \\
More precisely, it has been shown in \cite{AE} that:
\begin{lemma}\label{lem:entropydissipation}
Let $\psi$ be a generating function for an admissible relative entropy and let $f(t,x)$ be a solution to the Fokker-Planck equation \eqref{eq:fokkerplanck} with initial datum $f_0\in L^1_+\pa{\mathbb{R}^d}$. Then, for any $t>0$ we have that
\begin{equation*}
\begin{split}
\frac{d}{dt}&e_\psi\pa{f(t)|f_\infty} =\\
& -\int_{\mathbb{R}^d}\psi^{\prime\prime}\pa{\frac{f(t,x)}{f_\infty(x)}}\nabla\pa{ \frac{f(t,x)}{f_\infty(x)}}^T{\bf C}_s\nabla\pa{ \frac{f(t,x)}{f_\infty(x)}}f_{\infty}(x)dx \leq 0.
\end{split}
\end{equation*}
\end{lemma}
\begin{definition}
For a given positive semidefinite matrix ${\bf{P}}$ the expression
\begin{equation*}
I_\psi^{\bf{P}}(f|f_\infty):=\int_{\mathbb{R}^d}\psi^{\prime\prime}\pa{\frac{f(x)}{f_\infty(x)}}\nabla\pa{\frac{f(x)}{f_\infty(x)}}^T {\bf{P}} \nabla\pa{\frac{f(x)}{f_\infty(x)}}f_\infty(x) dx\geq 0
\end{equation*}
is called \emph{the relative Fisher Information generated by $\psi$}.
\end{definition}
The entropy method boils down to proving that there exists a constant $\lambda>0$ such that
\begin{equation}\label{eq:log_sob}
I_\psi^{\bf{P}}(f|f_\infty) \geq \lambda e_{\psi}(f|f_\infty).
\end{equation}
When ${\bf D}$ is positive definite, the above (with the choice $\bf{P}:={\bf D}$) is a Sobolev inequality (and a log-Sobolev inequality for $\psi=\psi_1$), and a standard way to prove it is by using the Bakry-\'Emery technique (see \cite{AMTU01, BaEmD85} for instance). This technique involves differentiating the Fisher information along the flow of the Fokker-Planck equation and finding a closed functional inequality for it. By an appropriate integration in time, one can then obtain \eqref{eq:log_sob}.\\
Problems start arising with the above method when ${\bf D}$ is not invertible. As can be seen from the expression of $I_\psi^{{\bf D}}$, there are some functions that are not identically $f_\infty$ yet yield a zero Fisher information. In the recent work of Arnold and Erb \cite{AE}, the authors managed to circumvent this difficulty by defining a new positive definite matrix ${\bf{P_0}}$ that is strongly connected to the drift matrix ${\bf C}$, and for which \eqref{eq:log_sob} is valid as a functional inequality. They proceeded to successfully use the Bakry-\'Emery method on $I_\psi^{{\bf{P_0}}}$ and conclude from it, and the log-Sobolev inequality, rates of decay for $I_\psi^{{\bf D}}$ (which is controlled by $I_\psi^{{\bf{P_0}}}$) and $e_\psi$. This is essentially what is behind the exponential decay in Theorem \ref{thm:Anton_Erb_rate}. Moreover, in the defective case (ii), it led to an $\epsilon$-reduced exponential decay rate. \\
As we have managed to obtain better convergence rates to equilibrium (in relative entropy) for the case of defective drift matrices ${\bf C}$, one might ask whether or not the same rates will be valid for the associated Fisher information $I_p^{{\bf D}}:=I_{\psi_p}^{\bf D}$. The answer to that question is \emph{Yes}, and we summarise this in the next theorem:
\begin{theorem}\label{thm:decay_for_fisher}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ which satisfy Conditions (A)-(C). Let $\mu$ be defined as in \eqref{eq:def_of_mu} and assume that one, or more, of the eigenvalues of ${\bf C}$ with real part $\mu$ are defective. Denote by $n>0$ the maximal defect of these eigenvalues. Then, for any $1<p\leq 2$, the solution $f(t)$ to \eqref{eq:fokkerplanck} with initial datum $f_0\in L^1_+\pa{\mathbb{R}^d}$ that has a unit mass and
$I_p^{{\bf{P_0}}}(f_0|f_\infty)<\infty$
satisfies:
\begin{equation*}
I_p^{{\bf D}}\pa{f(t)|f_\infty}\le c I^{{\bf{P_0}}}_p\pa{f(t)|f_\infty} \leq
c_p(f_0) \pa{1+t^{2n}}e^{-2\mu t},\quad t\ge0,
\end{equation*}
where $c_p(f_0)$ depends on $I_p^{{\bf{P}_0}}(f_0|f_\infty)$.
\end{theorem}
\begin{proof}
We first note that Proposition 4.4 from \cite{AE} implies the estimate $e_p\pa{f_0|f_\infty}\le c\, I_p^{{\bf{P_0}}}(f_0|f_\infty)<\infty$, and hence Theorem \ref{thm:main} applies. This decay of $e_p$ carries over to $I_p^{{\bf{P_0}}}$ due to the following two ingredients:
For small $t$ we can use the purely exponential decay of $I_p^{{\bf{P_0}}}$ established in Proposition 4.5 of \cite{AE} (with the rate $2(\mu-\epsilon)$), and for large times we use the (degenerate) parabolic regularisation of the Fokker-Planck equation \eqref{eq:fokkerplanck}: As proven in Theorem 4.8 of \cite{AE} we have for all $\tau\in (0,1]$ that
\begin{equation*}
I^{\bf{P}_0}_\psi(f(\tau)|f_\infty) \leq \frac{c_{\kappa}}{\tau^{2\kappa+1}}e_{\psi}\pa{f_0|f_\infty},
\end{equation*}
where $\psi$ is the generating function for an admissible relative entropy, and $\kappa>0$ is the minimal number such that there exists $\tilde{\lambda}>0$ with
$$\sum_{j=0}^{\kappa} {{\bf C}}^j {\bf D} \pa{{\bf C}^T}^j \geq \tilde{\lambda} \bf{I}. $$
The existence of such $\kappa$ and $\tilde{\lambda}$ is guaranteed by Condition (C) and is equivalent to the rank condition \eqref{rank-cond}; cf.\ Lemma 2.3 in \cite{AAS}.
\end{proof}
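The pair $(\kappa,\tilde{\lambda})$ in the proof can be checked directly for concrete matrices. The sketch below is a hypothetical two-dimensional kinetic-Fokker-Planck-type example of our own choosing (the matrices ${\bf C}$, ${\bf D}$ and the function name are not taken from \cite{AE}); it searches for the smallest $\kappa$ with $\sum_{j=0}^{\kappa} {\bf C}^j {\bf D} ({\bf C}^T)^j \geq \tilde{\lambda}\,{\bf I}$:

```python
import numpy as np

# Toy degenerate example (our own choice): diffusion only in the second
# variable; the drift mixes both variables and has eigenvalues with
# positive real part 1/2, while D is singular.
D = np.array([[0.0, 0.0],
              [0.0, 1.0]])
C = np.array([[0.0, -1.0],
              [1.0,  1.0]])

def smallest_kappa(C, D, tol=1e-10):
    """Smallest kappa with sum_{j=0}^kappa C^j D (C^T)^j >= lambda * I > 0.

    By the Cayley-Hamilton theorem it suffices to check kappa <= d - 1,
    which is the content of the rank (Kalman-type) condition."""
    d = D.shape[0]
    S = np.zeros_like(D)
    Cj = np.eye(d)
    for kappa in range(d):
        S = S + Cj @ D @ Cj.T              # add the term C^kappa D (C^T)^kappa
        lam = np.linalg.eigvalsh(S).min()  # smallest eigenvalue of the sum
        if lam > tol:
            return kappa, lam
        Cj = Cj @ C
    return None, 0.0

kappa, lam = smallest_kappa(C, D)
# for this C, D: S = D + C D C^T = [[1, -1], [-1, 2]]
```

For this example one finds $\kappa=1$ with $\tilde{\lambda}=(3-\sqrt{5})/2\approx0.38$, so the regularisation estimate applies with exponent $2\kappa+1=3$.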
\begin{comment}
The only extra tool that is required to show the above theorem, whose proof we skip, is the following \emph{regularization property} of the Fokker-Planck equation:
\begin{theorem}\label{thm:regularization}
Consider the Fokker-Planck equation \eqref{eq:fokkerplanck} with diffusion and drift matrices ${\bf D}$ and ${\bf C}$ which satisfy Conditions (A)-(C). Let $k_0>0$ be the minimal number such that there exists $\kappa>0$ with
$$\sum_{j=0}^{k_0} {{\bf C}}^j {\bf D} \pa{{\bf C}^T}^j \geq \kappa I $$
(the existence of such $k_0$ and $\kappa$ is guaranteed by Condition (C) and equivalent to the rank condition \eqref{rank-cond}- cf.\ Lemma 2.3 in \cite{AAS}). Let $f_0\in L_+^1\pa{\mathbb{R}^d}$ be a function with unit mass and let $f(t)$ be the solution to \eqref{eq:fokkerplanck} with initial data $f_0$. Then, there exists a positive constant $c_{k_0}>0$, independent of $f$, such that for all $\tau\in (0,1]$
\begin{equation}\label{eq:regularization}
I^{\bf{P}}_\psi(f(\tau)|f_\infty) \leq \frac{c_{k_0}}{\tau^{2k_0+1}}e_{\psi}\pa{f_0|f_\infty},
\end{equation}
where $\psi$ is the generating function for an admissible relative entropy.
\end{theorem}
\end{comment}
\section{Introduction}
In 1935 Yukawa introduced virtual particles, pions, to explain the nuclear force\cite{Yukawa}, which binds protons and neutrons inside nuclei. Since then the nucleon-nucleon ($NN$) interaction has been extensively investigated at low energies both theoretically and experimentally. Fig.~\ref{fig:potential} shows modern $NN$ potentials, which are characterized by the following features\cite{Taketani,Machleidt}.
\begin{center}
\includegraphics[width=75mm]{Fig/phen-pot_new.eps}
\figcaption{\label{fig:potential}Three examples of the modern $NN$ potential for the $^1S_0$ (spin singlet and $S$-wave) state: Bonn\protect\cite{Bonn}, Reid93\protect\cite{Reid93} and AV18\protect\cite{AV18}.}
\end{center}
At long distances ($r\ge 2$ fm) there exists a weak attraction, which
is well understood and is dominated by one-pion exchange, while
contributions from the exchange of multi-pions and/or heavy mesons such as $\rho$, $\omega$ and $\sigma$ lead to slightly stronger attraction at medium distances (1 fm $\le r \le $ 2 fm).
On the other hand, at short distances ($r \le$ 1 fm), the attraction turns into repulsion, which becomes stronger and stronger as $r$ decreases, forming the strong repulsive core\cite{Jastrow}.
The repulsive core is essential not only for describing the $NN$ scattering data, but also for the stability and saturation of atomic nuclei, for determining the maximum mass of neutron stars, and for igniting Type II supernova explosions\cite{Supernova}.
Although the origin of the repulsive core must be related to the quark-gluon structure of nucleons,
it has long remained one of the most fundamental problems in nuclear physics\cite{OSY}.
It is a great challenge for us to derive the nuclear potential including the repulsive core from (lattice) QCD.
In this talk, we first explain our strategy to extract $NN$ potentials theoretically from first principles using lattice QCD and present recent results from quenched QCD simulations. We then apply the method to various cases, including the energy dependence of the potential, the quark mass dependence of the potential, the tensor potential, full QCD calculations and the hyperon-nucleon potential. Owing to space limitations we mainly present results only; please see the corresponding references for more details.
\section{Strategy to extract potentials in QCD}
Since a potential is a concept of non-relativistic quantum mechanics, it is non-trivial to define it in QCD. To find a reasonable definition of the $NN$ potential in QCD, we first consider the $S$-matrix below the inelastic threshold of $NN$ scattering. Unitarity leads to
\begin{eqnarray}
S &=& e^{2 i \delta}
\end{eqnarray}
where the "phase" $\delta$ is a hermitian matrix. We next introduce the equal time Bethe-Salpeter (BS) wave function\cite{BNNPEW,cppacs}, defined by
\begin{eqnarray}
\varphi_E({\bf r}) &=&\langle 0 \vert N ({\bf x} +{\bf r},0) N({\bf x},0) \vert 2N, E \rangle
\end{eqnarray}
where $\vert 2N, E\rangle$ is a two-nucleon eigenstate in QCD with energy $E=2\sqrt{m_N^2+k^2}$, and
$N(x)$ is the gauge-invariant 3-quark operator given by
\begin{eqnarray}
N(x) &=& \varepsilon^{abc} q_a(x) q_b(x) q_c(x) .
\end{eqnarray}
For large $r=\vert {\bf r}\vert$, the partial wave $l$ of the BS-wave function behaves as
\begin{eqnarray}
\varphi_E^l({\bf r}) &\rightarrow & A_l \frac{\sin(kr-l\pi/2 +\delta_l(k))}{k r}
\end{eqnarray}
where $\delta_l(k)$ is the phase of the $S$-matrix for the partial wave $l$\cite{ishizuka,AHI2}. (Although $\delta_l(k)$ is a Hermitian matrix in general, we here consider the case where $\delta_l(k)$ is just a number, for simplicity.) The above formula shows that $\delta_l(k)$ is nothing but the scattering phase shift. In other words, the BS wave function defined above can be interpreted as the $NN$ scattering wave.
Based on the above fact, we have proposed the following strategy to define and extract the $NN$ potential in QCD\cite{IAH1,aoki}. We define a non-local potential from the BS wave function as
\begin{eqnarray}
\left[E-H_0\right] \varphi_E({\bf r}) &=& \int d^3 s\, U({\bf r}, {\bf s} )\varphi_E({\bf s})
\label{eq:def_pot}
\end{eqnarray}
where $H_0 = -{\bf \nabla}^2/(2\mu_N)$ and $\mu_N = m_N/2$ is the reduced mass of a two-nucleon system.
Since the non-local potential is difficult to deal with, we expand it in terms of derivatives as
$U({\bf r},{\bf s}) = V({\bf r},{\bf \nabla})\delta^3({\bf r}-{\bf s})$. The first few terms
are given by\cite{OM}
\begin{eqnarray}
V({\bf r}, {\bf \nabla}) &=& V_C(r) + V_T(r) S_{12} + V_{\rm LS}{\bf \rm L}\cdot {\bf \rm S}
\nonumber \\
&+& \{ V_D(r), {\bf \nabla}^2 \} + \cdots , \\
S_{12} &=& \frac{3}{r^2} ({\bf \sigma_1}\cdot{\bf r}) ({\bf \sigma_2}\cdot{\bf r})
-({\bf \sigma_1}\cdot {\bf \sigma_2} )
\end{eqnarray}
where $S_{12} $ is the tensor operator and $\sigma_i$ is the spin operator of
the $i$-th nucleon. The central potential $V_C$ and the tensor potential $V_T$ are the leading local terms (without derivatives), and thus can be determined from the BS wave function at a single energy using
eq.(\ref{eq:def_pot}). Higher-order terms in the above derivative expansion can be determined successively from BS wave functions at different energies. At low energies, however, we expect the leading-order terms, $V_C$ and $V_T$, to give a good approximation of the potential.
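As an illustration of how the leading local term is obtained, the sketch below applies $V_C(r) = E + \nabla^2\varphi_E/(2\mu\,\varphi_E)$ to a synthetic S-wave wave function. The Gaussian input is a toy of our own choosing, not lattice data; for a Gaussian the reconstructed potential is exactly harmonic, which provides a check on the finite-difference Laplacian:

```python
import numpy as np

hbar2_over_2mu = 41.47   # MeV fm^2: hbar^2/(2 mu) with reduced mass mu = m_N/2
E = 0.0                  # a k^2 ~ 0 state
alpha = 0.5              # width parameter of the synthetic wave function, fm^-2

r = np.linspace(0.2, 3.0, 300)
phi = np.exp(-alpha * r ** 2)        # toy S-wave BS wave function

# radial Laplacian phi'' + (2/r) phi' by central differences
h = r[1] - r[0]
d1 = np.gradient(phi, h)
lap = np.gradient(d1, h) + 2.0 * d1 / r

# leading-order (local) central potential from the derivative expansion
VC = E + hbar2_over_2mu * lap / phi

# analytic result for a Gaussian input: a harmonic potential
VC_exact = E + hbar2_over_2mu * (4 * alpha ** 2 * r ** 2 - 6 * alpha)
```

Away from the grid edges (where one-sided differences are less accurate), the numerical and analytic potentials agree to well below 1 MeV on this grid.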
We can estimate the applicable energy range of the local potential approximation by
calculating physical observables, such as the scattering phase shift, from the local potential and comparing them with experimental values.
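Concretely, one may integrate the radial Schr\"odinger equation with the extracted local potential and read off $\delta_0(k)$ from the asymptotic form $u(r)\propto\sin(kr+\delta_0)$. The sketch below does this with a Numerov integrator for a toy Gaussian core-plus-well potential; the parameter values are illustrative choices, not the lattice result:

```python
import numpy as np

hbar2_over_2mu = 41.47   # MeV fm^2: hbar^2/(2 mu) for the NN system

def V(r):
    # toy central potential: Gaussian repulsive core plus attractive well
    return 2000.0 * np.exp(-(r / 0.35) ** 2) - 100.0 * np.exp(-(r / 1.0) ** 2)

def phase_shift(E, r_max=12.0, h=1e-3):
    """S-wave phase shift delta_0 from Numerov integration of u'' = f(r) u."""
    k = np.sqrt(E / hbar2_over_2mu)
    r = np.arange(h, r_max + h, h)
    f = (V(r) - E) / hbar2_over_2mu
    u = np.zeros_like(r)
    u[0], u[1] = 0.0, h                 # regular solution near the origin
    c = h * h / 12.0
    for i in range(1, len(r) - 1):      # Numerov three-point recursion
        u[i + 1] = (2 * u[i] * (1 + 5 * c * f[i])
                    - u[i - 1] * (1 - c * f[i - 1])) / (1 - c * f[i + 1])
    # match u(r) ~ sin(k r + delta) at two outer points where V is negligible
    i1, i2 = len(r) - 2000, len(r) - 1
    ratio = u[i1] / u[i2]
    num = ratio * np.sin(k * r[i2]) - np.sin(k * r[i1])
    den = np.cos(k * r[i1]) - ratio * np.cos(k * r[i2])
    return np.arctan2(num, den)

delta = phase_shift(5.0)   # phase shift at E = 5 MeV for the toy potential
```

Scanning the resulting $\delta_0(k)$ over energy, and comparing with the phase shift obtained at a different extraction energy, gives a direct handle on where the local approximation breaks down.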
The first result for the $NN$ potentials based on the above strategy has been obtained in quenched lattice QCD simulations\cite{IAH1}, where the lattice spacing $a$ is 0.137 fm, the spatial extension $L$ is 4.4 fm and the pion mass $m_\pi$ is 529 MeV. In Fig.\ref{fig:quench_potential}, the central potential for the $^1S_0$ (spin singlet and $L=0$) state and the effective central potential for the $^3S_1$ (spin triplet and $L=0$) state, obtained at $k^2 \simeq 0$, are plotted as a function of $r$.
By comparing Fig.\ref{fig:quench_potential} with Fig.\ref{fig:potential}, we see that qualitative features of $NN$ potentials are reproduced. Ref.\cite{IAH1} has been selected as one of 21 papers in Nature Research Highlights 2007\cite{nature}.
\begin{center}
\includegraphics[width=60mm, angle=270]{Fig/pot_529_1S0-3S1.eps}
\figcaption{\label{fig:quench_potential}
The (effective) central potential for the $^1S_0$ ($^3S_1$) state at $m_\pi = 529$ MeV in quenched QCD.
}
\end{center}
\section{Recent developments}
In this section we report recent developments of lattice QCD calculations for baryon-baryon interactions,
based on the method in the previous section.
\subsection{Energy dependence}
We first investigate the applicable range of energy for the local potential
determined at $k\simeq 0$.
If terms with derivatives such as $V_{\rm LS}(r) {\bf L}\cdot{\bf S}$ or $\{V_D(r), {\bf\nabla}^2\}$ become important, the local potential determined at $k > 0$ differs from the one at $k\simeq 0$.
From such $k$ dependences of local potentials, in principle,
some of the terms with derivatives can be determined. In Fig.\ref{fig:energy_dep}, the local potential for the $^1S_0$ state obtained at $k\simeq 250$ MeV (red, APBC) is compared with the one at $k\simeq 0$ MeV (blue, PBC) in quenched QCD at $a=0.137$ fm and $m_\pi = 529$ MeV\cite{ABHIMNW,murano}.
\begin{center}
\includegraphics[width=82mm]{Fig/V1S0_APBC_PBC.t9.r13_16.eps}
\figcaption{\label{fig:energy_dep}
Comparison of central potentials in the local approximation between the periodic boundary condition (PBC, blue) and the anti-periodic boundary condition(APBC, red) for
the $^1S_0$ state at $m_\pi = 529$ MeV.}
\end{center}
As can be seen from the figure, the $k$ dependence of the local potential turns out to be very small.
This means that the potential obtained at $k\simeq 0$ in Fig.\ref{fig:quench_potential} well describes physical observables such as the phase shift $\delta_0(k)$ from $k\simeq 0$ to $k\simeq 250$ MeV, in quenched QCD at $a= 0.137$ fm and $m_\pi = 529$ MeV.
\subsection{Quark mass dependence}
A quark mass dependence of the $NN$ potential is shown in Fig.\ref{fig:mass_dep},
where the central potentials for the $^1S_0$ state
obtained at $k\simeq 0$ at $m_\pi = 380$, 529 and 731 MeV are compared
in quenched QCD at $a=0.137$ fm\cite{AHI1,AHI2}.
As the quark mass decreases, the repulsion at short distance (the repulsive core) gets stronger, while
the attraction at intermediate distances (0.6 fm $\sim$ 1.2 fm) also becomes stronger.
\begin{center}
\includegraphics[width=58mm, angle=270]{Fig/COM-potential-phys.ps}
\figcaption{\label{fig:mass_dep}
The central potential for the $^1S_0$ state at $m_\pi = 380$ MeV (red),
529 MeV (green) and 731 MeV (blue) in quenched QCD at $a=0.137$ fm.}
\end{center}
\subsection{Tensor potential}
The tensor operator $S_{12}$ mixes the $^3S_1$ ($J=S=1$ and $L=0$) state with the $^3D_1$ ($J=S=1$ and $L=2$) state. Using this property, we can determine the tensor potential as follows.
For the $J=S=1$ state, the local potential approximation leads to
\begin{eqnarray}
(E-H_0)\varphi_E({\bf r} ) &=& \left[V_C(r) + V_T(r) S_{12}\right] \varphi_E({\bf r}),
\end{eqnarray}
which, by the projection $P$ to the $L=0$ state and the projection $Q$ to the $L=2$ state, is decomposed into
\begin{eqnarray}
\left(\begin{array}{cc}
P \varphi_E & P S_{12} \varphi_E \\
Q \varphi_E & Q S_{12} \varphi_E \\
\end{array}
\right) &\times &
\left(\begin{array}{c}
V_C \\
V_T \\
\end{array}
\right) \nonumber \\
&=& (E - H_0)
\left(\begin{array}{c}
P \varphi_E\\
Q \varphi_E \\
\end{array}
\right) .
\end{eqnarray}
The above equation can be easily solved as
\begin{eqnarray}
\left(\begin{array}{c}
V_C \\
V_T \\
\end{array}
\right) &=&
\left(\begin{array}{cc}
P \varphi_E & P S_{12} \varphi_E \\
Q \varphi_E & Q S_{12} \varphi_E \\
\end{array}
\right)^{-1} \nonumber \\
&\times& (E - H_0)
\left(\begin{array}{c}
P \varphi_E\\
Q \varphi_E \\
\end{array}
\right) .
\end{eqnarray}
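In practice this amounts to solving, at each radius, a $2\times2$ linear system for $(V_C(r), V_T(r))$. The following sketch shows the radius-by-radius inversion; the input profiles are synthetic, hypothetical stand-ins for measured BS wave functions, chosen so that the check against known potentials is exact:

```python
import numpy as np

def central_and_tensor(Pphi, PS12phi, Qphi, QS12phi, rhs_P, rhs_Q):
    """Solve, radius by radius, the 2x2 system
         [P phi   P S12 phi] [V_C]   [(E - H0) P phi]
         [Q phi   Q S12 phi] [V_T] = [(E - H0) Q phi]
    for the central and tensor potentials."""
    VC = np.empty_like(Pphi)
    VT = np.empty_like(Pphi)
    for i in range(len(Pphi)):
        M = np.array([[Pphi[i], PS12phi[i]],
                      [Qphi[i], QS12phi[i]]])
        VC[i], VT[i] = np.linalg.solve(M, [rhs_P[i], rhs_Q[i]])
    return VC, VT

# synthetic check: build the right-hand sides from known V_C, V_T and
# hypothetical wave-function profiles, then recover the potentials
r = np.linspace(0.1, 2.0, 50)
VC_true = 500.0 * np.exp(-(r / 0.4) ** 2) - 50.0 * np.exp(-(r / 1.0) ** 2)
VT_true = -30.0 * np.exp(-(r / 0.8) ** 2)
Pphi, PS12phi = np.exp(-r), 0.2 * np.exp(-1.2 * r)
Qphi, QS12phi = 0.3 * np.exp(-1.1 * r), np.exp(-0.9 * r)
rhs_P = Pphi * VC_true + PS12phi * VT_true
rhs_Q = Qphi * VC_true + QS12phi * VT_true
VC, VT = central_and_tensor(Pphi, PS12phi, Qphi, QS12phi, rhs_P, rhs_Q)
```

With lattice data the same inversion is performed with the measured $P\varphi_E$, $Q\varphi_E$ and their $S_{12}$-projected counterparts, provided the $2\times2$ matrix is non-singular at each radius.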
In Fig.\ref{fig:tensor}, the central potential $V_C(r)$ and the tensor potential $V_T(r)$ for the
spin-triplet state are plotted\cite{AHI2}, together with the effective central potential $V_C^{\rm eff}(r)$ for the $^3S_1$ in Fig.\ref{fig:quench_potential}, which corresponds to $ (E-H_0) P \varphi_E / (P \varphi_E )$ in the above notation.
These potentials are calculated in quenched QCD at $a=0.137$ fm and $m_\pi=529$ MeV.
\begin{center}
\includegraphics[width=60mm, angle=270]{Fig/VTVC.1665.ps}
\vspace{0.3cm}
\figcaption{\label{fig:tensor}
The central potential (blue) and the tensor potential (red), together with the effective central potential, for the spin-triplet state at $m_\pi =529$ MeV.
They are obtained from $k\simeq 0$ states in quenched QCD at $a=0.137$ fm.
}
\end{center}
We first notice that the tensor potential $V_T$ has no strong repulsive core, in contrast to the central potential. This feature is consistent with previous phenomenological estimates\cite{Machleidt2}.
Although the tensor potential is comparable in magnitude to the central potential, the difference between the central and effective central potentials, which is caused by the second-order perturbation of $V_T$, is very small at this quark mass.
A quark mass dependence of the tensor potential is given in Fig.\ref{fig:tensor_mass}, where
the tensor potential is plotted at $m_\pi = 380, 529$ and 731 MeV\cite{AHI2}.
The tensor potential becomes stronger as the quark mass decreases.
\begin{center}
\includegraphics[width=60mm, angle=270]{Fig/VT_mass.ps}
\figcaption{\label{fig:tensor_mass}
Quark mass dependence of the tensor potential. }
\end{center}
\subsection{Full QCD calculations}
Results for the $NN$ potentials presented so far have been calculated in quenched QCD.
In Ref.\cite{IAH2}, preliminary results of full QCD calculations have been reported, based on
gauge configurations generated by the PACS-CS Collaboration in 2+1 flavor QCD
at $a=0.09$ fm and $L=2.9$ fm\cite{pacs-cs}. Hadron spectra obtained from these configurations are shown in Fig.\ref{fig:spectra}. The agreement between lattice QCD predictions and experimental values is quite good.
The (effective) central $NN$ potentials on these configurations are given in Fig.\ref{fig:full_potential} for the $^1S_0$ state (red) and the $^3S_1$ state (blue) at $m_\pi = 702$ MeV.
\vspace{1.6cm}
\begin{center}
\includegraphics[width=80mm]{Fig/spectrum.eps}
\figcaption{\label{fig:spectra}
Light hadron spectra extrapolated to the physical point
using $m_\pi$, $m_K$ and $m_\Omega$ as input.
Horizontal bars denote the experimental values.
}
\end{center}
\begin{center}
\includegraphics[width=60mm,angle=270]{Fig/VCeff.ps}
\figcaption{\label{fig:full_potential}
The (effective) central potential for the $^1S_0$ state (red) and the $^3S_1$ state (blue) at $m_\pi = 702$ MeV
in 2+1 flavor QCD at $a=0.09$ fm.
}
\end{center}
We observe that the repulsive core in both states is much larger in magnitude than the corresponding one in quenched QCD in Fig.\ref{fig:quench_potential}, though the lattice spacings (0.09 fm vs. 0.137 fm) and the pion masses (702 MeV vs. 529 MeV) are different. The reason for the difference in the repulsive core between full and quenched QCD is now under investigation.
\subsection{Hyperon-Nucleon interactions}
A hyperon is a baryon which contains at least one strange quark.
Contrary to the case of $NN$ interactions,
hyperon-nucleon ($YN$) and hyperon-hyperon ($YY$) interactions cannot be determined precisely, since scattering experiments are either difficult or impossible
due to the short lifetimes of hyperons.
Our approach may therefore open a new possibility to determine them theoretically from QCD.
In this direction, the potential between a $\Xi^0$ (hyperon with strangeness $-2$) and a proton
has already been calculated in quenched QCD\cite{nemura1}.
\begin{center}
\includegraphics[width=85mm]{Fig/PACSCS_VCeff_1S0_.eps}
\figcaption{\label{fig:hyperon_singlet}
The central potential for the $\Lambda N$ ($^1S_0$) state in 2+1 full QCD as a function of $r$ at $m_\pi \simeq 400$ MeV (red) and 700 MeV (green).
}
\end{center}
\begin{center}
\includegraphics[width=85mm]{Fig/PACSCS_VCT_3E1_.eps}
\figcaption{\label{fig:hyperon_triplet}
The central and tensor potentials for the $\Lambda N$ ($^3S_1 - ^3D_1$) state in 2+1 full QCD at $m_\pi \simeq 400$ MeV (red and blue) and 700 MeV (green and magenta).
}
\end{center}
Recently, the calculations have been extended to the potentials between
the $\Lambda$ (hyperon with strangeness $-1$) and $N$ in both quenched and full QCD.
Fig.\ref{fig:hyperon_singlet} and Fig.\ref{fig:hyperon_triplet} show the $\Lambda N$ potentials
as a function of $r$ obtained from 2+1 flavor QCD calculation\cite{nemura2} based on the PACS-CS gauge configurations.
The central potential for the $^1S_0$ state is given in Fig.\ref{fig:hyperon_singlet}, while the central and tensor potentials for the $^3S_1 - {}^3D_1$ state are given in Fig.\ref{fig:hyperon_triplet},
highlighting the short-distance (medium- to long-distance) region in the left (right) panel.
These figures contain results at $m_\pi \simeq 400$ and 700 MeV.
As can be seen from both figures, the attractive well of the central potential moves to the outer region as the quark mass decreases, while the depths of these attractive pockets do not change much.
The present results show that the tensor force is weaker, while the spin dependence is stronger, than in the $NN$ case\cite{IAH2}. As in the $NN$ case, the height of the repulsive core
is much larger than in the quenched case\cite{nemura2}, and it increases as the quark mass decreases.
\section{Conclusion}
We have presented recent results on baryon-baryon interactions obtained from lattice QCD simulations.
In our strategy, baryon-baryon ($NN$, $YN$ and $YY$) potentials are extracted from BS wave functions. The first results for the $NN$ (effective) central potentials in quenched QCD show a good ``shape'': qualitative features of the $NN$ potential are reproduced, and the energy dependence of the potentials is weak at low energies.
The method has been successfully extended to the tensor potential and the $\Lambda N$ potentials in both quenched and full QCD.
One of the ultimate goals of our approach is to calculate baryon-baryon potentials in full QCD at $m_\pi = 140$ MeV. In such calculations one can investigate, for example, the relation between the deuteron binding and the tensor force. As for other directions, it is important to extract the 3-body force\cite{miyazawa} from QCD and to understand the origin of the repulsive core theoretically\cite{ABW}.
\acknowledgments{I would like to thank the members of the HAL QCD Collaboration for providing me with the latest results and for useful discussions. This work is supported in part by
Grants-in-Aid of the Ministry of Education, Culture, Sports, Science and Technology of Japan (Nos. 20340047, 20105001, 20105003). }
\end{multicols}
\vspace{-2mm}
\centerline{\rule{80mm}{0.1pt}}
\vspace{2mm}
\begin{multicols}{2}
\def\bibsection{\subsubsection*{\bibname}}
\usepackage{microtype}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{booktabs}
\usepackage{epsfig,epsf,fancybox}
\usepackage{amsmath}
\usepackage{mathrsfs}
\usepackage{amssymb}
\usepackage{color}
\usepackage{multirow}
\usepackage{paralist}
\usepackage{verbatim}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{galois}
\usepackage{boxedminipage}
\usepackage{accents}
\usepackage{stmaryrd}
\usepackage[bottom]{footmisc}
\usepackage{natbib}
\usepackage{url}
\usepackage[colorlinks,linkcolor=blue,citecolor=blue]{hyperref}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{criterion}{Criterion}[section]
\newtheorem{definition}{Definition}[section]
\newtheorem{example}{Example}[section]
\newtheorem{question}{Question}[section]
\newtheorem{remark}[theorem]{Remark}
\newtheorem{assumption}[theorem]{Assumption}
\def\par\noindent{\em Proof.}{\par\noindent{\em Proof.}}
\def\hfill $\Box$ \vskip 0.4cm{\hfill $\Box$ \vskip 0.4cm}
\newcommand{\arabic{algorithm}}{\arabic{algorithm}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{B}}{\mathbb{B}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\top}{\top}
\newcommand{\textnormal{s.t.}}{\textnormal{s.t.}}
\newcommand{\textnormal{Tr}\,}{\textnormal{Tr}\,}
\newcommand{\textnormal{Tr}}{\textnormal{Tr}}
\newcommand{\textnormal{conv}}{\textnormal{conv}}
\newcommand{\textnormal{diag}}{\textnormal{diag}}
\newcommand{\textnormal{Diag}\,}{\textnormal{Diag}\,}
\newcommand{\textnormal{Prob}}{\textnormal{Prob}}
\newcommand{\textnormal{var}}{\textnormal{var}}
\newcommand{\textnormal{rank}}{\textnormal{rank}}
\newcommand{\textnormal{sign}}{\textnormal{sign}}
\newcommand{\textnormal{poly}}{\textnormal{poly}}
\newcommand{\textnormal{cone}\,}{\textnormal{cone}\,}
\newcommand{\textnormal{cl}\,}{\textnormal{cl}\,}
\newcommand{\textnormal{vec}\,}{\textnormal{vec}\,}
\newcommand{\textnormal{sym}\,}{\textnormal{sym}\,}
\newcommand{\boldsymbol M \,}{\boldsymbol M \,}
\newcommand{\boldsymbol V \,}{\boldsymbol V \,}
\newcommand{\mathscr{T}\,}{\mathscr{T}\,}
\newcommand{\textnormal{feas}\,}{\textnormal{feas}\,}
\newcommand{\textnormal{opt}\,}{\textnormal{opt}\,}
\newcommand{\mathbf\Sigma}{\mathbf\Sigma}
\newcommand{\mathbf S}{\mathbf S}
\newcommand{\mathbf R}{\mathbf R}
\newcommand{\mathbf x}{\mathbf x}
\newcommand{\mathbf y}{\mathbf y}
\newcommand{\mathbf s}{\mathbf s}
\newcommand{\mathbf a}{\mathbf a}
\newcommand{\mathbf g}{\mathbf g}
\newcommand{\mathbf e}{\mathbf e}
\newcommand{\mathbf z}{\mathbf z}
\newcommand{\mathbf w}{\mathbf w}
\newcommand{\mathbf u}{\mathbf u}
\newcommand{\mathbf v}{\mathbf v}
\newcommand{\mathop{\rm argmin}}{\mathop{\rm argmin}}
\newcommand{\mathop{\rm argmax}}{\mathop{\rm argmax}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\frac{1}{2}}{\frac{1}{2}}
\newcommand{\mbox{$\mathbb K$}}{\mbox{$\mathbb K$}}
\newcommand{\mbox{$\mathbb Z$}}{\mbox{$\mathbb Z$}}
\newcommand{\textnormal{card}}{\textnormal{card}}
\newcommand{\textnormal{trace}}{\textnormal{trace}}
\newcommand{\textnormal{prox}}{\textnormal{prox}}
\newcommand{\textnormal{diam}}{\textnormal{diam}}
\newcommand{\textnormal{dom}}{\textnormal{dom}}
\newcommand{\textnormal{dist}}{\textnormal{dist}}
\newcommand{{\textbf{E}}}{{\textbf{E}}}
\newcommand{\mathcal L}{\mathcal L}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\begin{array}}{\begin{array}}
\newcommand{\end{array}}{\end{array}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\textnormal{Err}}{\textnormal{Err}}
\newcommand{\textnormal{Gap}}{\textnormal{Gap}}
\newcommand{{\it et al.\ }}{{\it et al.\ }}
\newcommand{\red}{\color{red}}
\usepackage{enumitem,xspace}
\def\textsf{ROOT\mbox{-}SA}\xspace{\textsf{ROOT\mbox{-}SA}\xspace}
\def\textsf{ROOT\mbox{-}SGD}\xspace{\textsf{ROOT\mbox{-}SGD}\xspace}
\def\textsf{SA}\xspace{\textsf{SA}\xspace}
\def\textsf{RAVE\mbox{-}SA}\xspace{\textsf{RAVE\mbox{-}SA}\xspace}
\def\textsf{RAVE}\xspace{\textsf{RAVE}\xspace}
\def\textsf{RAVE\mbox{-}IGT\mbox{-}SA}\xspace{\textsf{RAVE\mbox{-}IGT\mbox{-}SA}\xspace}
\def\textsf{IGT\mbox{-}SA}\xspace{\textsf{IGT\mbox{-}SA}\xspace}
\defaffine iterative algorithm\xspace{affine iterative algorithm\xspace}
\def\textsf{RATE}\xspace{\textsf{RATE}\xspace}
\newcommand{\Bb}{\mathbf{B}}
\newcommand{\gb}{\mathbf{g}}
\newcommand{\Ab}{\mathbf{A}}
\newcommand{\bb}{\mathbf{b}}
\newcommand{\mathbf{e}}{\mathbf{e}}
\newcommand{\mathbf{I}}{\mathbf{I}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\renewcommand{\mathbf R}{\mathbb{R}}
\newcommand{\ensuremath{{\mathbb{E}}}}{\ensuremath{{\mathbb{E}}}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\def\mathbf{x}{\mathbf{x}}
\def\mathbf{y}{\mathbf{y}}
\def\mathbf{z}{\mathbf{z}}
\def\mathbf{G}{\mathbf{G}}
\def\mathbf{g}{\mathbf{g}}
\def\mathbf{u}{\mathbf{u}}
\defz{z}
\def\mathbf{z}{\mathbf{z}}
\newcommand{\equiv}{\equiv}
\defK{K}
\def\mathsf{epoch}{\mathsf{epoch}}
\def\mathsf{Epoch}{\mathsf{Epoch}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\lambda^*(\eta){\lambda^*(\eta)}
\def\mathbf{w}{\mathbf{w}}
\newtheorem{innercustom}{}
\newenvironment{custom}[1]
{\renewcommand\theinnercustom{#1}\innercustom}
{\endinnercustom}
\newcommand{\marginpar{FIX}}{\marginpar{FIX}}
\newcommand{\marginpar{NEW}}{\marginpar{NEW}}
\usepackage{enumitem}
\def\blue#1{\textcolor{cyan}{#1}}
\def\green#1{\colorbox{green}{#1}}
\def\red#1{}
\newcommand{\cjlcomment}[1]{{\bf{{\color{cyan}{{Junchi {---} #1}}}}}}
\begin{document}
\runningauthor{C.~J.~Li, Y.~Yu, N.~Loizou, G.~Gidel, Y.~Ma, N.~Le Roux, M.~I.~Jordan}
\twocolumn[
\aistatstitle{On the Convergence of Stochastic Extragradient for Bilinear Games using Restarted Iteration Averaging}
\aistatsauthor{
Chris Junchi Li$^{\diamond, \star}$
\And
Yaodong Yu$^{\diamond, \star}$
\And
Nicolas Loizou$^{\triangleleft}$
\And
Gauthier Gidel$^{\dagger,\ddagger}$
}
\aistatsauthor{
Yi Ma$^\diamond$
\And
Nicolas Le Roux$^{\dagger,\ddagger,\S,\Box}$
\And
Michael I. Jordan$^\diamond$
}
\vspace{0.1in}
\centering{University of California, Berkeley$^\diamond$ \quad Johns Hopkins University$^\triangleleft \quad $Mila$^\dagger$}
\vspace{0.05in}
\centering{ Université de Montréal$^\ddagger$ \quad McGill University$^\S$ \quad Microsoft Research$^\Box$ }
\vspace{0.4in}
]
\begin{abstract}
We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence. In sharp contrast with the basic SEG method, whose last iterate only contracts to a fixed neighborhood of the Nash equilibrium, SEG augmented with iteration averaging provably converges to the Nash equilibrium under the same standard settings, and the rate is further improved by incorporating a scheduled restarting procedure. In the interpolation setting where the noise vanishes at the Nash equilibrium, we achieve an optimal convergence rate up to tight constants. We present numerical experiments that validate our theoretical findings and demonstrate the effectiveness of the SEG method when equipped with iteration averaging and restarting.
\end{abstract}
\section{INTRODUCTION}\label{sec_intro}
The \emph{minimax optimization} framework provides solution concepts useful in game theory~\citep{morgenstern1944theory}, statistics~\citep{bacheta} and online learning~\citep{blackwell1956analog,cesa2006prediction}.
It has recently been prominent in the deep learning community due to its application to generative modeling~\citep{goodfellow2014generative, arjovsky2017wasserstein} and robust prediction~\citep{madry2017towards, zhang2019theoretically}.
There remains, however, a gap between minimax characterizations of solutions and algorithmic frameworks that provably converge to such solutions in practice.
In standard single-objective machine learning applications, the traditional algorithmic realization of optimization frameworks is stochastic gradient descent (SGD, or one of its variants), where the full gradient is formulated as an expectation over the data-generating mechanism.
In general minimax optimization problems, however, naive use of SGD leads to pathological behavior due to the presence of rotational dynamics~\citep{goodfellow2016nips,balduzzi2018mechanics}.
One way to overcome these rotations is to use gradient-based methods specifically designed for the minimax setting (or more generally for the multi-player game setting).
A key example of such a method is the celebrated \emph{extragradient method}.
Originally introduced by~\citep{g.m.korpelevichExtragradientMethodFinding1976}, it addresses general minimax optimization problems and yields optimal convergence guarantees in the batch setting~\citep{azizian2020accelerating}.
In the stochastic setting, however, it has only been analyzed in special cases, such as the constrained case \citep{juditsky2011solving}, the bounded-noise case \citep{hsieh2020explore}, and the interpolatory case \citep{vaswani2019painless}.
In the current paper, we study the general stochastic bilinear minimax optimization problem, also known as the bilinear saddle-point problem,
\begin{equation}\label{Sminimax}
\min_\mathbf{x} \max_\mathbf{y}~
\mathbf{x}^\top \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi}] \mathbf{y}
+
\mathbf{x}^\top \ensuremath{{\mathbb{E}}}_\xi[\gb^\mathbf{x}_\xi]
+
\ensuremath{{\mathbb{E}}}_\xi[(\gb^\mathbf{y}_\xi)^\top] \mathbf{y} \, ,
\end{equation}
where the index $\xi$ denotes the randomness associated with stochastic sampling.
Following standard practice we assume that the expected coupling matrix $\Bb = \ensuremath{{\mathbb{E}}}[\Bb_\xi]$ is nonsingular, and that the intercept vectors $\gb^\mathbf{x}_\xi$ and $\gb^\mathbf{y}_\xi$ have zero mean:
$\ensuremath{{\mathbb{E}}}[\gb^\mathbf{x}_\xi] = \mathbf{0}_n$ and $\ensuremath{{\mathbb{E}}}[\gb^\mathbf{y}_\xi] = \mathbf{0}_m$.
Thus the Nash equilibrium point is $[\mathbf{x}^*; \mathbf{y}^*] = [\mathbf{0}_n; \mathbf{0}_m]$.
Such assumptions are standard in the literature on bilinear optimization~\citep[see, e.g., ][]{vaswani2019painless,mishchenko2020revisiting}.%
\footnote{In the case of a square, nonsingular coupling matrix $\Bb$ this assumption is feasible without loss of generality, while in the rectangular matrix case we simply restrict ourselves to the nonsingular $\min(n,m)$-dimensional subspace induced by singular value decomposition. The nonzero component of the intercept vectors $[\gb^\mathbf{x}_\xi; \gb^\mathbf{y}_\xi]$ projected onto such a subspace is not taken into account in the SEG dynamics.
}
In this work, we present theoretical results in the general setting of bilinear minimax games for a version of the Stochastic ExtraGradient (SEG) method that incorporates iteration averaging and scheduled restarting.
The introduction of stochasticity in the matrix $\Bb_{\xi}$ together with an unbounded domain presents technical challenges that have been a major stumbling block in earlier work~\citep[cf.][]{dieuleveut2016harder}.
Here we show how to surmount these challenges.
Formally, we introduce the following SEG method composed of an extrapolation step (half-iterates) and an update step:
\begin{equation}\label{SEGupdate}\begin{aligned}
\mathbf{x}_{t-1/2} &= \mathbf{x}_{t-1} - \eta_t\left[ \Bb_{\xi, t} \mathbf{y}_{t-1} + \gb^\mathbf{x}_{\xi,t}\right],
\\
\mathbf{y}_{t-1/2} &= \mathbf{y}_{t-1} + \eta_t\left[ \Bb_{\xi, t}^\top \mathbf{x}_{t-1} + \gb^\mathbf{y}_{\xi,t}\right],
\\
\mathbf{x}_t &= \mathbf{x}_{t-1} - \eta_t\left[ \Bb_{\xi, t} \mathbf{y}_{t-1/2} + \gb^\mathbf{x}_{\xi,t}\right],
\\
\mathbf{y}_t &= \mathbf{y}_{t-1} + \eta_t\left[ \Bb_{\xi, t}^\top \mathbf{x}_{t-1/2} + \gb^\mathbf{y}_{\xi,t}\right].
\end{aligned}
\end{equation}
Here and throughout we adopt a \textbf{\emph{same-sample-and-step-size}} notation in which the extrapolation and extragradient steps share the same stochastic sample \citep{gidel2019variational,mishchenko2020revisiting} and step size $\eta_t$; i.e., the updates in~Eq.~\eqref{SEGupdate} use the same samples of $\Bb_{\xi}$, $\gb^\mathbf{x}_\xi$ and $\gb^\mathbf{y}_\xi$.
Note that there exist counterexamples~\citep[see, e.g.,][Theorem 1]{chavdarova2019reducing} where the SEG iteration~\citep{juditsky2011solving} persistently diverges when using independent samples.
The same-sample stochastic extragradient (SEG) method addresses this issue~\citep{gidel2019variational,mishchenko2020revisiting}.
In practice, for the bilinear game problems we consider in this paper as well as other application problems, including generative adversarial networks and adversarial training, it is easy to perform the same-sample SEG updates: in most machine learning applications one can re-use a sample without significant extra cost.
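As a concrete illustration, the same-sample update of Eq.~\eqref{SEGupdate} can be sketched in a few lines of NumPy (a minimal sketch for exposition only; the function name and interface are ours, not part of any library):

```python
import numpy as np

def seg_step(x, y, eta, B_xi, gx_xi, gy_xi):
    """One same-sample SEG step of Eq. (SEGupdate): an extrapolation to the
    half-iterates followed by the extragradient update, with BOTH steps
    reusing the same sample (B_xi, gx_xi, gy_xi) and step size eta."""
    # extrapolation step (half-iterates)
    x_half = x - eta * (B_xi @ y + gx_xi)
    y_half = y + eta * (B_xi.T @ x + gy_xi)
    # update step, with the vector field evaluated at the half-iterates
    x_new = x - eta * (B_xi @ y_half + gx_xi)
    y_new = y + eta * (B_xi.T @ x_half + gy_xi)
    return x_new, y_new
```

Reusing the sample costs nothing beyond keeping `(B_xi, gx_xi, gy_xi)` in scope for both steps, which is why same-sample SEG is straightforward to implement in most machine learning applications.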
\smallskip\noindent\textbf{Main contributions.}
We provide an in-depth study of SEG on bilinear games and we show that, unlike in the minimization-only setting, in the minimax optimization setting the last iterate of same-sample SEG \emph{cannot} converge in general, even when the step sizes diminish to zero [Theorems \ref{theo_SEG_A} and \ref{theo_SEG_lower_bound}].
This motivates our study of averaging and restarting in order to obtain meaningful convergence rates:
\begin{enumerate}[topsep=0pt,parsep=0pt,partopsep=0pt, leftmargin=*,label=(\roman*)]
\item
We prove that in the bilinear game setting, under mild assumptions, iteration averaging allows SEG to converge at the rate of $1/\sqrt{K}$ [Theorem \ref{theo_SEG_B}], $K$ being the number of samples the algorithm has processed.
This rate is statistically optimal up to a constant multiplier.
Additionally, when a lower bound on the smallest eigenvalue of $\Bb\Bb^\top$ is known to the system, we can further boost the convergence rate by combining iteration averaging with scheduled restarting [Theorem \ref{theo_SEG_C}].
In this case, exponential forgetting of the initialization and an optimal statistical rate are achieved.
\item
In the special case of the interpolation setting, we are able to show that SEG with iteration averaging and scheduled restarting achieves an accelerated rate of convergence, faster than (last-iterate) SEG [Theorem \ref{theo_SEGg_interpolation_C}], reducing the dependence of the rate on the condition number to a dependence on its square root.
We achieve state-of-the-art rates comparable to the full batch optimal rate~\citep{azizian2020accelerating}, with access only to a stochastic estimate of the gradient, improving upon~\citet{vaswani2019painless}.
\item
We provide the first convergence result on SEG with unbounded noise.
The only existing result for the unbounded-noise setting of which we are aware is that of \citet{vaswani2019painless}, which applies only in the interpolation setting.
Our theoretical results are further validated by experiments on synthetic data.
\end{enumerate}
\subsection{Related Work}
\noindent\textbf{Bilinear minimax optimization.} The study of the bilinear example as a tool to understand minimax optimization originated with~\citet{daskalakis2018training}, who studied an optimistic gradient descent-ascent (OGDA) algorithm to solve that minimax problem.
They were able to prove sublinear convergence for this method.
Later,~\citet{mokhtari2020unified} proposed to analyze OGDA and the related ExtraGradient (EG) method as perturbations of the Proximal Point (PP) method.
They were able to prove a linear convergence rate for both EG and OGDA with an iteration complexity of $O(\kappa\log (1/\epsilon))$, where $\kappa \equiv \lambda_{\max}(\Bb^\top \Bb)/\lambda_{\min}(\Bb\Bb^\top)$ is the condition number of problem Eq.~\eqref{Sminimax}.
Highly related to the current work is that of \citet{gidel2019variational}, who studied the bilinear case and proved an $O(\kappa\log (1/\epsilon))$ iteration complexity for EG with a better constant than~\citet{mokhtari2020unified}.
\citet{wei2021linear} studied Optimistic Multiplicative Weights Update (OMWU) for solving constrained bilinear games and established the linear last-iterate convergence.
Regarding optimal methods, a combination of \citet{ibrahim2020linear} and \citet{zhang2019lower} established a general lower bound, which specializes to a lower bound of $\Omega(\sqrt{\kappa} \log (1/\epsilon))$ for the case of the bilinear minimax game setting.
\citet{azizian2020accelerating} proved linear convergence results for a series of algorithms that achieve this lower bound and also provided an alternative proof for this lower bound by using spectral arguments.
However,~\citet{azizian2020accelerating} did not provide accelerated rates for OGDA and provided an accelerated rate for EG with momentum but with an unknown constant.
In this work, we completely close that gap by providing accelerated convergence rates for (stochastic) EG with relatively tight constants.
In another work, \citet{azizian2020tight} proved a full-regime result for EG without momentum where they show that the $O(\kappa\log (1/\epsilon))$ iteration complexity for EG is optimal among the methods using a fixed number of composed gradient evaluations and only the last iterate (excluding momentum and restarting).
A similar iteration complexity (with an unknown constant) can be derived from the seminal work of~\citet{tseng1995linear} on EG.
\noindent\textbf{Stochastic bilinear minimax and variational inequalities.} The standard assumptions made in the literature on stochastic variational inequalities~\citep{nemirovski2009robust,juditsky2011solving} are that the set of parameters and the variance of the stochastic estimate of the vector field are bounded.
These two assumptions do not hold in the stochastic bilinear case, because it is unconstrained and the noise increases with the norm of the parameters.
Recently, \citet{hsieh2020explore} provided results on stochastic EG with different step sizes, without the bounded domain assumption but still requiring the bounded noise assumption.
{\citet{iusem2017extragradient} and \citet{bot2019forward} studied the independent-sample minibatch setting, in which the sum of inverse batch sizes converges.}
\citet{mishchenko2020revisiting} discussed how using the same mini-batch for the two gradients in stochastic EG gives stronger guarantees.
Using a Hamiltonian viewpoint, \citet{loizou2020stochastic} provided the first set of global non-asymptotic last-iterate convergence guarantees for a stochastic game over a non-compact domain, in the absence of strong monotonicity assumptions.
In particular, their stochastic Hamiltonian gradient methods come with last-iterate convergence guarantees in the finite-sum stochastic bilinear game as well.
In our work, we provide an accelerated convergence rate for EG in the bilinear setting with unbounded domain and unbounded noise.
\noindent\textbf{Restarting and acceleration.} {Restarting has long been introduced as an effective approach to accelerate first-order methods in the optimization literature~\citep{o2015adaptive, roulet2020sharpness, renegar2021simple}. In particular, \citet{o2015adaptive} proposed an adaptive restarting technique that significantly improves the convergence rate of Nesterov's accelerated gradient descent method. \citet{roulet2020sharpness} developed optimal restarting methods for solving convex optimization problems that satisfy the sharpness assumption. \citet{renegar2021simple} considered a more general set of problems than \citet{roulet2020sharpness} and presented a simple and near-optimal restarting scheme.
Our restarting variant achieves acceleration via a fundamentally different mechanism, inspired by modern variance-reduction ideas.
}
\noindent\textbf{Averaging in convex-concave games.}
\cite{golowich2020last} studied the effect of averaging for EG in the smooth convex-concave setting.
They showed that the last iterate converges at a rate of $O(1/\sqrt{K})$ in terms of the square root of the Hamiltonian (and also the duality gap), while it is known that iteration averaging enjoys an $O(1/K)$ rate \citep{nemirovski2004prox}.
A tight lower bound was also proved to justify an assertion of optimality in the last-iterate setting.
Such a result provides a convincing argument in favor of restarting the algorithm from an average of the iterates. This is a theme that we pursue in the current paper.
\noindent\textbf{Stability of limit points in minimax games.}
GDA dynamics often encounter limit cycles or non-Nash stable limiting points~\citep{daskalakis2018limit,adolphs2019local,berard2019closer,mazumdar2019finding}.
To mitigate this, \citet{adolphs2019local} and \citet{mazumdar2019finding} proposed to exploit the curvature associated with the stable limit points that are not Nash equilibria. While appealing theoretically, such methods generally involve costly inversion of Jacobian matrices at each step.
\noindent\textbf{Over-parameterized models and interpolation.} Recently it was shown that popular stochastic gradient methods, like SGD and its momentum variants, converge considerably faster when the underlying model is sufficiently over-parameterized as to interpolate the data \citep{gower2019sgd,gower2021sgd,loizou2020momentum,vaswani2019fast, loizou2021stochastic, sebbouh2020convergence}.
In the minimax optimization setting, an analysis that also covers the interpolation regime is rare.
To the best of our knowledge the only paper that provides convergence guarantees for SEG in this setting is \cite{vaswani2019painless}, where SEG with line search is proposed and analyzed.
In our work we provide convergence guarantees in the interpolation regime as corollaries of our main theorems but with a tight $1/e$-prefactor in the linear convergence.
\noindent\textbf{Organization.}
The remainder of this paper is organized as follows.
\S\ref{sec_assumptions} details the basic setup and assumptions for our main results.
\S\ref{sec_SEGg} presents our convergence results for SEG with averaging and restarting.
\S\ref{sec_experiment} provides experiments that validate our theory.
\S\ref{sec_conclusions} concludes this paper with future directions.
All technical analyses along with auxiliary results are relegated to later sections in the supplementary materials.
\noindent\textbf{Notation.}
Throughout this paper we use the following notation.
For two real symmetric matrices, $\Bb_1,\Bb_2$, we denote $\Bb_1\preceq \Bb_2$ when $\mathbf{v}^\top\Bb_1\mathbf{v} \le \mathbf{v}^\top\Bb_2 \mathbf{v}$ holds for all vectors $\mathbf{v}$.
Let $\lambda_{\max}(\Bb)$~(resp.~$\lambda_{\min}(\Bb)$) be the largest (resp.~smallest) eigenvalue of a generic (real symmetric) matrix $\Bb$.
Let $\|\Bb\|_{op}$ denote the operator norm of $\Bb$.
Let $\mathcal{F}_t$ be the filtration generated by the stochastic samples, $\Bb_{\xi,s}, \gb_{\xi,s}$, $s=1,\dots,t$, in the bilinear game.
Let $\max(a,b)$ or $a\lor b$ denote the maximum of $a,b\in \mathbf R$, and let $\min(a,b)$ or $a\land b$ denote the minimum.
For two real sequences, $(a_n)$ and $(b_n)$, we write $a_n = O(b_n)$ to mean that $|a_n|\le C b_n$ for a positive, numerical constant $C$, for all $n\ge 1$, and let $a_n = \tilde{O}(b_n)$ mean that $|a_n|\le C b_n$ where $C$ hides a logarithmic factor in relevant parameters.
We also denote $
\widehat{\mathbf{M}}_\xi \equiv \Bb_\xi^\top \Bb_\xi
$ and $
\mathbf{M}_\xi \equiv \Bb_\xi\Bb_\xi^\top
$ for brevity, each being positive semi-definite for each realization of $\xi$.
Finally, let $[n] = \{1,\dots,n\}$ for $n$ being a natural number.
\section{SETUP FOR MAIN RESULTS}\label{sec_assumptions}
In this section, we introduce the basic setup and assumptions needed for our statement of the convergence of the stochastic extragradient (SEG) algorithm.
We first make the following assumptions on $\Bb_\xi$. Let us recall that
$\widehat{\mathbf{M}} \equiv \ensuremath{{\mathbb{E}}}_\xi\widehat{\mathbf{M}}_{\xi} \equiv \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi}^\top \Bb_{\xi}]
$ and $
\mathbf{M} \equiv \ensuremath{{\mathbb{E}}}_\xi\mathbf{M}_{\xi} \equiv \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi} \Bb_{\xi}^\top]
$.
\begin{assumption}[Assumption on $\Bb_\xi$]\label{assu_boundednoise_A}
Denote $\Bb = \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi}]$ for $\Bb\in \mathbf R^{n\times m}$ and impose the following regularity conditions:
$
\lambda_{\max}(\Bb^\top\Bb) > 0$
and
$
\lambda_{\min}(\mathbf{M}) \land \lambda_{\min}(\widehat{\mathbf{M}}) > 0
$.
We assume that there exist $\sigma_{\Bb}, \sigma_{\Bb,2}\in [0,\infty)$ such that
\begin{equation}\label{sigmaAsq}
\begin{aligned}
\| \ensuremath{{\mathbb{E}}}_\xi[(\Bb_{\xi} - \Bb)^\top (\Bb_{\xi} - \Bb)] \|_{op}
\le
\sigma_{\Bb}^2,
\\
\| \ensuremath{{\mathbb{E}}}_\xi\left[(\Bb_{\xi} - \Bb) (\Bb_{\xi} - \Bb)^\top\right] \|_{op}
\le
\sigma_{\Bb}^2,
\end{aligned}
\end{equation}
and
\begin{equation}\label{sigmaA2sqinit}
\begin{aligned}
\| \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi}^\top \Bb_{\xi} - \widehat{\mathbf{M}}]^2 \|_{op}
\le
\sigma_{\Bb,2}^2,
\\
\| \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi}\Bb_{\xi}^\top - \mathbf{M}]^2 \|_{op}
\le
\sigma_{\Bb,2}^2.
\end{aligned}
\end{equation}
\end{assumption}
The assumption that $n\ge m$ (i.e.,~$\Bb$ is tall) is without loss of generality, since the SEG iterates for a wide coupling matrix can be converted to those for its transpose.
Note also that $\sigma_{\Bb} = 0$ corresponds to the nonrandom case $\Bb_{\xi} = \Bb$.
The stochasticity introduced in $\Bb_\xi$ allows us to establish the first convergence result under an unbounded-noise condition.%
\footnote{By comparison, \citet{hsieh2020explore} provide a proof only for the bounded-noise case.}
Next we impose an assumption on the intercept vector $\gb_\xi$.
\begin{assumption}[Assumption on $\gb_\xi$]\label{assu_boundednoise_B}
There exists a $\sigma_{\gb}\in [0,\infty)$ such that
$$
\ensuremath{{\mathbb{E}}}_\xi\left[ \|\gb_{\xi}^\mathbf{x}\|^2 + \|\gb_{\xi}^\mathbf{y}\|^2\right]
\le
\sigma_{\gb}^2
<
\infty
.
$$
Furthermore, we let $\ensuremath{{\mathbb{E}}}_\xi[\gb^\mathbf{x}_\xi] = \mathbf{0}_n$, $\ensuremath{{\mathbb{E}}}_\xi[\gb^\mathbf{y}_\xi] = \mathbf{0}_m$ and assume independence between the stochastic matrix $\Bb_{\xi}$ and the vector $[\gb^\mathbf{x}_\xi; \gb^\mathbf{y}_\xi]$.
\end{assumption}
We remark that the independence assumption in Assumption \ref{assu_boundednoise_B} significantly simplifies our analysis.%
\footnote{In practice, such independence can be \textit{approximately} achieved via the following decoupling argument: we form the random Jacobian-vector product and the random intercept using two independent random samples.
Note that approximate knowledge of the Nash equilibrium is required in this decoupling argument.
}
In particular, it ensures
$
\ensuremath{{\mathbb{E}}}[\Bb_{\xi}\gb^\mathbf{y}_\xi] = \mathbf{0}_n
$ and $
\ensuremath{{\mathbb{E}}}[\Bb_{\xi}^\top\gb^\mathbf{x}_\xi] = \mathbf{0}_m
$, so the Nash equilibrium is the equilibrium point that the last-iterate SEG oscillates around.
The independence structure of $\Bb_\xi$ and $[\gb^\mathbf{x}_\xi; \gb^\mathbf{y}_\xi]$ in Assumption~\ref{assu_boundednoise_B} is crucial for our analysis and is satisfied in certain statistical models.
In particular, it holds trivially whenever either $\Bb_{\xi}$ or $[\gb^\mathbf{x}_\xi; \gb^\mathbf{y}_\xi]$ is nonrandom.
Our analysis can be further generalized to more relaxed assumptions on zero correlation between $[\gb^\mathbf{x}_\xi; \gb^\mathbf{y}_\xi]$ and the first three moments of $\Bb_{\xi}$, with a second-moment condition similar to $
\ensuremath{{\mathbb{E}}}_\xi[
\|\Bb_{\xi} \gb_{\xi}^\mathbf{y}\|^2
+
\|\Bb_{\xi}^\top \gb_{\xi}^\mathbf{x}\|^2
]
\le
C(\lambda_{\max}(\mathbf{M}) \lor \lambda_{\max}(\widehat{\mathbf{M}}))\sigma_{\gb}^2$.
We defer the full development of this extension to future work.
With Assumptions~\ref{assu_boundednoise_A} and \ref{assu_boundednoise_B} at hand, we are ready to state our main results on the convergence of SEG variants.
\section{SEG WITH AVERAGING AND RESTARTING}\label{sec_SEGg}
Recall that in contrast to SGD theory in convex optimization, the last iterate of SEG does \emph{not} converge to an arbitrarily small neighborhood of the Nash equilibrium even for the case of a converging step size~\citep{hsieh2020explore}.
We accordingly turn to an analysis of the \emph{averaged iterate} of $\mathbf{x}_t$ and $\mathbf{y}_t$, $t=0,1,\dots,K$, denoted as
\begin{equation}\label{averaged_iterate}\begin{aligned}
\overline{\mathbf{x}}_{K}
\equiv
\frac{1}{K+1} \sum_{t=0}^K \mathbf{x}_t
,
\quad
\overline{\mathbf{y}}_{K}
\equiv
\frac{1}{K+1} \sum_{t=0}^K \mathbf{y}_t
.
\end{aligned}\end{equation}
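The averages in Eq.~\eqref{averaged_iterate} need not be formed by storing all iterates; they can be maintained online by the standard running-mean recursion (a small sketch; this is also the update used for $\hat{\mathbf{x}}_t, \hat{\mathbf{y}}_t$ in Algorithm \ref{algo_iasgd_restart} below):

```python
import numpy as np

def running_average(iterates):
    """Online computation of the averaged iterate of Eq. (averaged_iterate):
    after consuming z_0, ..., z_K, the state equals their arithmetic mean."""
    avg = None
    for s, z in enumerate(iterates, start=1):
        if avg is None:
            avg = np.array(z, dtype=float)
        else:
            # same recursion as the averaging step of Algorithm 1
            avg = (s - 1) / s * avg + z / s
    return avg
```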
For simplicity we focus on the case in which $\Bb_{\xi}, \Bb$ are square matrices.
Let us define $\eta_{\mathbf{M}}$, the maximal step size permitted by our analysis of the SEG algorithm, as follows:
\begin{equation}\label{etaMdef}
\eta_{\mathbf{M}}
\equiv
\frac{1}{\sqrt{\rho_{1}\lor\rho_{2}
}},
\end{equation}
where $\rho_{1} = \lambda_{\max}\big(\mathbf{M}^{-1/2} [\ensuremath{{\mathbb{E}}}_\xi \mathbf{M}_\xi^2] \mathbf{M}^{-1/2}\big)$ and $\rho_{2} = \lambda_{\max}\big(\widehat{\mathbf{M}}^{-1/2} [\ensuremath{{\mathbb{E}}}_\xi \widehat{\mathbf{M}}_\xi^2] \widehat{\mathbf{M}}^{-1/2}\big)$.
We introduce the following variants:
\begin{equation}\label{eta_choice}\begin{aligned}
\hat\eta_{\mathbf{M}}(\alpha)
&\equiv
\frac{\eta_{\mathbf{M}}}{\sqrt{2}}
\land
\frac{\alpha\lambda_{\min}(\Bb\Bb^\top)}{2\sigma_{\Bb}^2 \sqrt{\lambda_{\max}(\Bb^\top\Bb)}}
,
\\
\bar\eta_{\mathbf{M}}(\alpha)
&\equiv
\eta_{\mathbf{M}}
\land
\frac{\alpha\lambda_{\min}(\Bb\Bb^\top)}{2\sigma_{\Bb}^2 \sqrt{\lambda_{\max}(\Bb^\top\Bb)}}
,
\end{aligned}\end{equation}
which reduce to $1/\sqrt{2\lambda_{\max}(\Bb^\top\Bb)}$ and $1/\sqrt{\lambda_{\max}(\Bb^\top\Bb)}$ when $\Bb_\xi$ is nonrandom.
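When the moment matrices in Eq.~\eqref{etaMdef} are not available in closed form, they can be estimated empirically from draws of $\Bb_\xi$. The following sketch (our illustration only, with expectations replaced by sample means and $\sigma_{\Bb}$ assumed known and positive) computes $\eta_{\mathbf{M}}$ and the capped step sizes of Eq.~\eqref{eta_choice}:

```python
import numpy as np

def step_size_thresholds(B_samples, alpha, sigma_B):
    """Estimate eta_M of Eq. (etaMdef) and the capped step sizes
    hat_eta_M(alpha), bar_eta_M(alpha) of Eq. (eta_choice), replacing
    all expectations over B_xi by empirical means (sigma_B > 0 assumed)."""
    B = np.mean(B_samples, axis=0)                        # E[B_xi]
    M = np.mean([Bs @ Bs.T for Bs in B_samples], axis=0)
    M_hat = np.mean([Bs.T @ Bs for Bs in B_samples], axis=0)
    EM2 = np.mean([(Bs @ Bs.T) @ (Bs @ Bs.T) for Bs in B_samples], axis=0)
    EM2_hat = np.mean([(Bs.T @ Bs) @ (Bs.T @ Bs) for Bs in B_samples], axis=0)

    def inv_sqrt(S):                                      # S^{-1/2} for S > 0
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    rho1 = np.linalg.eigvalsh(inv_sqrt(M) @ EM2 @ inv_sqrt(M)).max()
    rho2 = np.linalg.eigvalsh(inv_sqrt(M_hat) @ EM2_hat @ inv_sqrt(M_hat)).max()
    eta_M = 1.0 / np.sqrt(max(rho1, rho2))
    # alpha-dependent cap shared by both step sizes in Eq. (eta_choice)
    cap = alpha * np.linalg.eigvalsh(B @ B.T).min() / (
        2 * sigma_B ** 2 * np.sqrt(np.linalg.eigvalsh(B.T @ B).max()))
    return min(eta_M / np.sqrt(2), cap), min(eta_M, cap)
```

In the nonrandom case ($\Bb_\xi \equiv \Bb$, with the cap inactive) the returned pair reduces to $1/\sqrt{2\lambda_{\max}(\Bb^\top\Bb)}$ and $1/\sqrt{\lambda_{\max}(\Bb^\top\Bb)}$, matching the remark above.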
We state our first main result on SEG with iteration averaging, Theorem \ref{theo_SEG_B}, whose proof is provided in \S\ref{sec_proof,theo_SEG_B}:
\begin{theorem}[SEG Averaged Iterate]\label{theo_SEG_B}
Let Assumptions~\ref{assu_boundednoise_A} and \ref{assu_boundednoise_B} hold with $n=m$.
Prescribing an $\alpha\in (0,1)$, when the step size $\eta$ is chosen as $\hat\eta_{\mathbf{M}}(\alpha)$ as defined in Eq.~\eqref{eta_choice}, we have for all $K\ge 1$ the following convergence bound for the averaged iterate:
\begin{equation}\label{Asingleloop_SEG_B}
\begin{aligned}
&\ensuremath{{\mathbb{E}}}\big[
\|\overline{\mathbf{x}}_{K}\|^2 +
\|\overline{\mathbf{y}}_{K}\|^2
\big]
\\
&\le\,\,\tau_{1}
\cdot
\frac{\|\mathbf{x}_0\|^2 + \|\mathbf{y}_0\|^2}{\color{red}(K+1)^2}
+\tau_{2}
\cdot
\frac{\sigma_{\gb}^2}{\color{red}K+1},
\end{aligned}
\end{equation}
where $\tau_1, \tau_2$ depending on $\sigma_{\Bb}, \sigma_{\Bb,2}$ are defined as
$$\begin{aligned}
\tau_1
&=
\frac{16 + 8\kappa_\zeta}{(1-\alpha)\hat\eta_{\mathbf{M}}(\alpha)^2\lambda_{\min}(\Bb\Bb^\top)}
,
\\
\tau_2
&=
\frac{18 + 12\kappa_\zeta}{(1-\alpha)\lambda_{\min}(\Bb\Bb^\top)}
,
\end{aligned}$$
and $
\kappa_\zeta
\equiv
\frac{\sigma_{\Bb}^2 + \hat\eta_{\mathbf{M}}(\alpha)^2\sigma_{\Bb,2}^2}{\lambda_{\min}(\mathbf{M})\land\lambda_{\min}(\widehat{\mathbf{M}})}
$ denotes the effective noise condition number of problem Eq.~\eqref{Sminimax}.
\end{theorem}
Measured by the Euclidean metric, Theorem \ref{theo_SEG_B} indicates an $O(1/\sqrt{K})$ leading-order convergence rate for the averaged iterate of SEG in the general stochastic setting, which is known to be statistically optimal up to a constant multiplier.
We provide detailed comparisons with previous related work in \S\ref{sec_comp,theo_SEG_B}.
Nevertheless, the iteration forgets its initial conditions only at a polynomial rate; this result can be improved if we utilize a restarting scheme and take advantage of knowledge of the smallest eigenvalue of $\Bb\Bb^\top$.
Indeed, in the following result, we boost the convergence rate shown in Eq.~\eqref{Asingleloop_SEG_B}, when the smallest eigenvalue $\lambda_{\min}(\Bb\Bb^\top)$ is available to the system, via a novel restarting procedure at specific times.
The rationale behind this analysis is akin to that used in boosting sublinear convergence in convex optimization to linear convergence when the designer has (an estimate of) the strong convexity parameter.
We now develop this argument in detail.
We continue to assume the case of square matrices $\Bb_\xi,\Bb$.
In Algorithm \ref{algo_iasgd_restart} we run SEG with averaging and restart the iteration at chosen timestamps, $\{\mathcal{T}_i\}_{i\in [\mathsf{Epoch}-1]}\subseteq [K]$, initializing at the averaged iterate of the previous epoch.
The principle behind our choice of parameters in this algorithm is that we trigger the restarting when the expected squared Euclidean metric $\ensuremath{{\mathbb{E}}}\left[ \|\mathbf{x}_{K}\|^2 + \|\mathbf{y}_{K}\|^2 \right]$ decreases by a factor of $1/e^2$,
and we halt the restarting procedure once the last iterate reaches stationarity in squared Euclidean metric in the sense of Theorem \ref{theo_SEG_A}:\footnote{
The choice of the discount factor $1/e^2$ is to be consistent with our optimal choice in the interpolation setting, where in the $\sigma_{\Bb} = 0$ case the total complexity is minimized to $e\sqrt{\lambda_{\max}(\Bb^\top\Bb) / \lambda_{\min}(\Bb\Bb^\top)}$.
}
$$
\|\mathbf{x}_0\|^2 + \|\mathbf{y}_0\|^2
\approx
\frac{3\sigma_{\gb}^2}{\lambda_{\min}(\mathbf{M})\land\lambda_{\min}(\widehat{\mathbf{M}})}.
$$
Given these choices, summarized in Algorithm \ref{algo_iasgd_restart}, we obtain the following theorem:
\begin{algorithm}[!t]
\caption{Iteration Averaged SEG with Scheduled Restarting}
\begin{algorithmic}[1]
\REQUIRE Initialization $\mathbf{x}_0, \mathbf{y}_0$, step sizes $\eta_t$, total number of iterates $K$, restarting timestamps $\{\mathcal{T}_i\}_{i\in [\mathsf{Epoch}-1]}\subseteq [K]$ with the total number of epochs $\mathsf{Epoch}\ge 1$, index $s\leftarrow 0$
\FOR{$t=1, 2,\dots,K$}\label{linerestart}
\STATE
$s\leftarrow s+1$
\STATE
Update $\mathbf{x}_t$, $\mathbf{y}_t$ via Eq.~\eqref{SEGupdate}
\STATE
Update $\hat{\mathbf{x}}_t$, $\hat{\mathbf{y}}_t$ via
$$\begin{aligned}
\hspace{-.1in}
\hat{\mathbf{x}}_{t} \leftarrow \frac{s-1}{s}\hat{\mathbf{x}}_{t-1} + \frac{1}{s} \mathbf{x}_{t},
\,\,
\quad
\hat{\mathbf{y}}_{t} \leftarrow \frac{s-1}{s}\hat{\mathbf{y}}_{t-1} + \frac{1}{s} \mathbf{y}_{t}
\end{aligned}$$
\IF{$t\in \{\mathcal{T}_i\}_{i\in [\mathsf{Epoch}-1]}$}
\STATE
Overload $\mathbf{x}_{t}\leftarrow \hat{\mathbf{x}}_{t}$, $\mathbf{y}_{t}\leftarrow \hat{\mathbf{y}}_{t}$, and set $s\leftarrow 0$
\hfill \text{//restarting procedure is triggered}
\ENDIF
\ENDFOR
\STATE {\bfseries Output:}
$\hat{\mathbf{x}}_{K}, \hat{\mathbf{y}}_{K}$
\end{algorithmic}
\label{algo_iasgd_restart}
\end{algorithm}
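A compact Python sketch of Algorithm \ref{algo_iasgd_restart} follows (for exposition only; `sample()` stands for one stochastic draw $(\Bb_{\xi}, \gb^\mathbf{x}_\xi, \gb^\mathbf{y}_\xi)$, and the restart schedule is passed as a set of timestamps):

```python
import numpy as np

def seg_avg_restart(x0, y0, eta, K, restart_times, sample):
    """Iteration-averaged SEG with scheduled restarting (Algorithm 1).
    `sample()` returns one draw (B_xi, gx_xi, gy_xi); the same sample and
    step size eta are used for the extrapolation and update steps."""
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    x_hat, y_hat = x.copy(), y.copy()
    s = 0
    for t in range(1, K + 1):
        s += 1
        B_xi, gx, gy = sample()
        x_half = x - eta * (B_xi @ y + gx)            # extrapolation
        y_half = y + eta * (B_xi.T @ x + gy)
        x, y = (x - eta * (B_xi @ y_half + gx),       # extragradient update
                y + eta * (B_xi.T @ x_half + gy))
        x_hat = (s - 1) / s * x_hat + x / s           # running averages
        y_hat = (s - 1) / s * y_hat + y / s
        if t in restart_times:                        # restart from the average
            x, y, s = x_hat.copy(), y_hat.copy(), 0
    return x_hat, y_hat
```

Restarting resets the averaging index $s$, so the next epoch is initialized at the previous epoch's average, exactly as in the restarting step of Algorithm \ref{algo_iasgd_restart}.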
\begin{theorem}[SEG with Averaging/Restarting]\label{theo_SEG_C}
Let Assumptions~\ref{assu_boundednoise_A} and \ref{assu_boundednoise_B} hold with $n=m$.
For any prescribed $\alpha\in (0,1)$, choose the step size $\hat\eta_{\mathbf{M}}(\alpha)$ as in Eq.~\eqref{eta_choice} and assume a proper restarting schedule.
For all $K \ge K_{\operatorname{complexity}}+1$ we have the following convergence bound for the output $\hat\mathbf{x}_{K}, \hat\mathbf{y}_{K}$ of Algorithm \ref{algo_iasgd_restart}:
\begin{equation}\label{Asingleloop_SEG_C}\begin{aligned}
\ensuremath{{\mathbb{E}}}\left[
\|\hat{\mathbf{x}}_{K}\|^2 + \|\hat{\mathbf{y}}_{K}\|^2
\right]
\le
C_1
\cdot
\frac{\sigma_{\gb}^2}{\color{red} K - K_{\operatorname{complexity}} + 1}
,
\end{aligned}
\end{equation}
where
$$C_1 \equiv \frac{18}{(1-\alpha)\lambda_{\min}(\Bb\Bb^\top)}\cdot\Bigg[
1 +
\underbrace{
\frac{
O(\sigma_{\Bb}^2 + \hat\eta_{\mathbf{M}}(\alpha)^2\sigma_{\Bb,2}^2)}{
\lambda_{\min}(\mathbf{M}) \land \lambda_{\min}(\widehat{\mathbf{M}})
}
}_{\text{higher-order term $O(\kappa_\zeta)$}}
\Bigg],$$
where $K_{\operatorname{complexity}}$ is the fixed \emph{burn-in complexity} defined as
\begin{equation}\label{Tcomp_prime}
\begin{aligned}
\frac{
\mbox{logarithmic factor}
}{
\frac{1}{e}\sqrt{(1-\alpha)\bar\eta_{\mathbf{M}}(\alpha)^2\lambda_{\min}(\Bb\Bb^\top)}
- C_2
}
,
\end{aligned}
\end{equation}
with $C_2$ being
$$O\Big(
\bar\eta_{\mathbf{M}}(\alpha)^{3/2}
(\lambda_{\min}(\Bb\Bb^\top))^{1/4}
\sqrt{\sigma_{\Bb}^2 + \bar\eta_{\mathbf{M}}(\alpha)^2\sigma_{\Bb,2}^2}
\Big).$$
\end{theorem}
The proof of Theorem \ref{theo_SEG_C} is provided in \S\ref{sec_proof,theo_SEG_C}.
Here we not only achieve the optimal $O(1/\sqrt{K})$ convergence rate for the averaged iterate; in addition, the proper restarting schedule yields a bound in Eq.~\eqref{Asingleloop_SEG_C} for iteration-averaged SEG that forgets the initialization at an exponential rate, instead of the polynomial rate obtained without restarting [cf.\ Theorem \ref{theo_SEG_B}].
Finally, we consider the interpolation setting, where the noise vanishes at the Nash equilibrium.
That is, $\gb^\mathbf{x}_\xi = \mathbf{0}_n$ and $\gb^\mathbf{y}_\xi = \mathbf{0}_m$; i.e.~$\sigma_{\gb} = 0$ in Assumption \ref{assu_boundednoise_B}.
In that setting, we prove that SEG with iteration averaging achieves an accelerated linear convergence rate.
Set the (constant) interval length of restarting timestamps $K_{\operatorname{thres}}(\alpha)$ as
\begin{equation}\label{Tholder_epoch}
\frac{2}{
\frac{1}{e}\sqrt{(1-\alpha)\bar\eta_{\mathbf{M}}(\alpha)^2\lambda_{\min}(\Bb\Bb^\top)}
- C_3
}
,
\end{equation}
with $C_3$ being
$$O\left(
\bar\eta_{\mathbf{M}}(\alpha)^{3/2}
(\lambda_{\min}(\Bb\Bb^\top))^{1/4}
\sqrt{\sigma_{\Bb}^2 + \bar\eta_{\mathbf{M}}(\alpha)^2\sigma_{\Bb,2}^2}
\right).$$
We present an analysis of this algorithm in the following theorem, which can be seen as a corollary of Theorem \ref{theo_SEG_C} but benefits from a refined analysis in which a tight constant prefactor appears in each term of the bound:
\begin{figure*}[!tb]
\begin{center}
\subfigure[General setting.]{\includegraphics[width=0.4\textwidth]{figs/2d_noise_eta_large_with_start.pdf}
}
\subfigure[Interpolation setting.]{
\includegraphics[width=0.4\textwidth]{figs/2d_noiseless_eta_large_with_start.pdf}
}
\vspace{-0.1in}
\caption{Illustration (in two dimensions) of the stochastic extragradient (SEG) algorithm, stochastic extragradient with iteration averaging (SEG-Avg), and stochastic extragradient with restarted iteration averaging (SEG-Avg-Restart) on the stochastic minimax optimization problem defined in Eq.~\eqref{Sminimax}. Here the Nash equilibrium is $[\mathbf{x}^*; \mathbf{y}^*] = [\mathbf{0}_n; \mathbf{0}_m]$.
(\textbf{a}) General setting.
(\textbf{b}) Interpolation setting, where noise vanishes at the Nash equilibrium.
}
\end{center}
\vspace{-0.25in}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\subfigure[\label{fig:general}General setting.]{
\includegraphics[width=0.475\textwidth]{figs/noise_compare.pdf}
}
\subfigure[\label{fig:interpolation}Interpolation setting.]{
\includegraphics[width=0.475\textwidth]{figs/noiseless_compare.pdf}
}
\caption{Comparing SEG, SEG-Avg, and SEG-Avg-Restart on a stochastic bilinear optimization problem. The horizontal axis represents the iteration number, and the vertical axis represents the squared $\ell_2$-distance to the Nash equilibrium.
(\textbf{a}) General setting ($d=100, \operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.01$).
(\textbf{b}) Interpolation setting ($d=100, \operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.0$).}
\end{center}
\vspace{-0.2in}
\end{figure*}
\begin{theorem}[Interpolation Setting]\label{theo_SEGg_interpolation_C}
Let Assumptions~\ref{assu_boundednoise_A} and \ref{assu_boundednoise_B} hold with $n=m$ and $\sigma_{\gb} = 0$.
For any prescribed $\alpha\in (0,1)$, choose the step size $\eta = \bar\eta_{\mathbf{M}}(\alpha)$ as in Eq.~\eqref{eta_choice} and the restarting timestamps $\mathcal{T}_i = i\cdot K_{\operatorname{thres}}(\alpha)$, with $K_{\operatorname{thres}}(\alpha)$ as defined in Eq.~\eqref{Tholder_epoch}. Then, for all $K \ge 1$ divisible by $K_{\operatorname{thres}}(\alpha)$, the output $\hat\mathbf{x}_{K}, \hat\mathbf{y}_{K}$ of Algorithm \ref{algo_iasgd_restart} satisfies
\begin{align}\label{Asingleloop_SEG_interpolation_C}
&\ensuremath{{\mathbb{E}}} \left[\|\hat\mathbf{x}_{K}\|^2 + \|\hat\mathbf{y}_{K}\|^2\right]
\\
\le\,\, & e^{-
\frac{K}{e}\sqrt{(1-\alpha)\bar\eta_{\mathbf{M}}(\alpha)^2\lambda_{\min}(\Bb\Bb^\top)}
+ C_4
}
\left[\|\mathbf{x}_0\|^2 + \|\mathbf{y}_0\|^2\right]
\nonumber
,
\end{align}
with $C_4$ being
$$
O\left(
K\bar\eta_{\mathbf{M}}(\alpha)^{3/2}
(\lambda_{\min}(\Bb\Bb^\top))^{1/4}
\sqrt{\sigma_{\Bb}^2 + \bar\eta_{\mathbf{M}}(\alpha)^2\sigma_{\Bb,2}^2}
\right).
$$
\end{theorem}
The proof of Theorem \ref{theo_SEGg_interpolation_C} is provided in \S\ref{sec_proof,theo_SEGg_interpolation_C}.
The idea behind Theorem \ref{theo_SEGg_interpolation_C} is, in plain words, to trigger a restart whenever the last-iterate SEG has traveled through a full cycle; this motivates the design of $K_{\operatorname{thres}}(\alpha)$ in the restarting mechanism.
Compared with Eq.~\eqref{xygrowth_SEG_A} in Theorem \ref{theo_SEG_A} with $\sigma_{\gb}$ equal to zero, the contraction rate (in terms of its exponent) to the Nash equilibrium $-
\frac{\eta_{\mathbf{M}}^2}{4}\cdot\left( \lambda_{\min}(\mathbf{M})\land\lambda_{\min}(\widehat{\mathbf{M}})
\right)
$ improves to $-
\frac{1}{e}\sqrt{(1-\alpha)\bar\eta_{\mathbf{M}}(\alpha)^2\lambda_{\min}(\Bb\Bb^\top)}
$ plus higher-order moment terms involving $\Bb_\xi$.
It is worth mentioning that Algorithm \ref{algo_iasgd_restart} achieves this accelerated convergence rate in Eq.~\eqref{Asingleloop_SEG_interpolation_C} via simple restarting and does \emph{not} require an explicit Polyak- or Nesterov-type momentum update rule \citep{NESTEROV[Lectures]}.
In the case of nonrandom $\Bb_\xi$, this rate matches the lower bound \citep{ibrahim2020linear,zhang2019lower},%
\footnote{\citet{ibrahim2020linear} provides the stated lower bound $\sqrt{\kappa}\log(1/\epsilon)$. Although the argument of \citet{zhang2019lower} does not achieve this bound directly (since they did not consider the bilinear-coupling case), a modification of their arguments extends it to the same lower bound in the bilinear-coupling case. Theorem~\ref{theo_SEGg_interpolation_C} matches this lower bound in the nonrandom case.}
and, to the best of our knowledge, the only other algorithm that achieves this optimal rate is that of \citet{azizian2020accelerating}, without an explicit $1/e$-prefactor on the right-hand side of Eq.~\eqref{Asingleloop_SEG_interpolation_C}.
We end this section with some remarks.
For the results in this section, we forgo fully optimizing the prefactor over $\alpha$ and simply set the step size $\eta$ as in Eq.~\eqref{eta_choice}.
The analyses of Theorems \ref{theo_SEG_B} and \ref{theo_SEG_C} both adopt a step size of $\eta_{\mathbf{M}}/\sqrt{2}$, capped by an $\alpha$-dependent threshold, because they rely heavily on last-iterate convergence to stationarity.
By contrast, Theorem \ref{theo_SEGg_interpolation_C} does not rely on such an argument and accommodates the larger (thresholded) step size $\eta_{\mathbf{M}}$.
Lastly, we emphasize that the knowledge of $\lambda_{\min}(\Bb\Bb^\top)$ is required for the algorithm to achieve the accelerated rate.
Considerations regarding such knowledge are related to the topic of adaptivity of stochastic gradient algorithms~\citep[see, e.g.,][]{lei2020adaptivity}.
\begin{figure*}[t]
\begin{center}
\subfigure[\label{fig:d=100}$d=100$]{
\includegraphics[width=0.31\textwidth]{figs/dimension_compare_d100.pdf}}
\subfigure[\label{fig:d=200}$d=200$]{
\includegraphics[width=0.31\textwidth]{figs/dimension_compare_d200.pdf}}
\subfigure[\label{fig:seg-avg-restart-zoomin}SEG-Avg-Restart ($d=200$)]{
\includegraphics[width=0.31\textwidth]{figs/noiseless_restart_zoomin.pdf}}
\caption{Comparing SEG and SEG-Avg-Restart on a stochastic bilinear optimization problem in the interpolation setting. The horizontal axis represents the iteration number, and the vertical axis represents the squared $\ell_2$-distance to the Nash equilibrium.
(\textbf{a}) Comparison on dimension $d=100$ ($\operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.0$).
(\textbf{b}) Comparison on dimension $d=200$ ($\operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.0$). (\textbf{c}) Zoomed-in visualization of SEG-Avg-Restart on dimension $d=200$ ($\operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.0$).}
\end{center}
\label{fig:compare-seg-seg-avg-restart-interpolation}
\vspace{-0.15in}
\end{figure*}
\begin{figure*}[h]
\begin{center}
\subfigure[\label{fig:seg-step-size}Different step size $\eta$.]{
\includegraphics[width=0.45\textwidth]{figs/noise_compare_stepsize.pdf}
}
\hspace{.1in}
\subfigure[\label{fig:seg-noise}Different noise $\operatorname{std}_\gb$.]{
\includegraphics[width=0.45\textwidth]{figs/noise_compare_sigmab.pdf}
}
\caption{Comparison of SEG (without averaging) with different step sizes $\eta$ and noise magnitudes $\operatorname{std}_\gb$ on a stochastic bilinear optimization problem in the general setting. The horizontal axis represents the iteration number, and the vertical axis represents the squared $\ell_2$-distance to the Nash equilibrium.
(\textbf{a}) Comparison with respect to varying step size $\eta \in \{0.01, 0.0075, 0.005, 0.0025\}$ ($\operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.01$).
(\textbf{b}) Comparison with respect to varying noise $\operatorname{std}_\gb \in \{0.01, 0.001, 0.0001\}$ with step size $\eta=0.01$ ($\operatorname{std}_\Bb=0.1$).}
\end{center}
\vspace{-0.3in}
\end{figure*}
\begin{figure*}[h]
\begin{center}
\subfigure[\label{fig:general-DSEG}General setting.]{
\includegraphics[width=0.31\textwidth]{figs/noise_compare_dseg.pdf}
}
\subfigure[\label{fig:interpolation-DSEG}Interpolation setting.]{
\includegraphics[width=0.31\textwidth]{figs/noiseless_compare_dseg.pdf}
}
\subfigure[\label{fig:interpolation-DSEG-semilogy}Interpolation setting.]{
\includegraphics[width=0.31\textwidth]{figs/noiseless_compare_dseg_semilogy.pdf}
}
\caption{Comparing SEG-Avg, SEG-Avg-Restart, and the DSEG method~\citep{hsieh2020explore} on the stochastic bilinear optimization problem. The horizontal axis represents the iteration number, and the vertical axis represents the squared $\ell_2$-distance to the Nash equilibrium.
(\textbf{a}) General setting ($d=100, \operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.01$).
(\textbf{b}) Interpolation setting ($d=100, \operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.0$). (\textbf{c}) Interpolation setting ($d=100, \operatorname{std}_\Bb=0.1, \operatorname{std}_\gb=0.0$) with a semi-log scale on the vertical axis.}
\end{center}
\vspace{-0.25in}
\end{figure*}
\vspace{0.05in}
\section{EXPERIMENTS}\label{sec_experiment}
In this section, we present the results of numerical experiments on stochastic bilinear minimax optimization problems, including both the general setting and the interpolation setting (i.e., zero noise at the Nash equilibrium).
The objective function we study remains the same as Eq.~\eqref{Sminimax}, repeated here for convenience:
\begin{equation}\tag{\ref{Sminimax}}
\min_\mathbf{x} \max_\mathbf{y}~
\mathbf{x}^\top \ensuremath{{\mathbb{E}}}_\xi[\Bb_{\xi}] \mathbf{y}
+
\mathbf{x}^\top \ensuremath{{\mathbb{E}}}_\xi[\gb^\mathbf{x}_\xi]
+
\ensuremath{{\mathbb{E}}}_\xi[(\gb^\mathbf{y}_\xi)^\top] \mathbf{y}.
\end{equation}
Here we assume $\Bb_\xi$ is a square matrix of dimension $d\times d$ where $d = m = n$.
To generate $\Bb_{\xi}$ for each $\xi$, where $\xi$ corresponds to one iteration in our experiments, we first generate a random vector $\mathbf{u} \in \mathbb{R}^{d}$, where each element of the vector $\mathbf{u}$ is sampled from a uniform distribution, $\mathbf{u}_{j} \sim \text{Unif}\,[1, d+1]$, for $j \in [d]$. Then we define $\Bb = \textsf{Diag}(\mathbf{u})$ and generate $\Bb_{\xi} \in \mathbb{R}^{d\times d}$ as follows:
$$
\Bb_{\xi} = \Bb + \mathbf{E}_{\xi}
, \quad \text{and}\,\,
[\mathbf{E}_{\xi}]_{ij} \sim \mathcal{N}(0, \operatorname{std}_\Bb^{2})
,
$$
where $\mathbb{E}_{\xi}[\Bb_{\xi}] = \Bb$, and $\Bb$ is a fixed matrix for all $\Bb_{\xi}$. We generate the noise vectors $\gb_{\xi}^{\mathbf{x}} \sim \mathcal{N}(\gb^{\mathbf{x}}, \operatorname{std}_\gb^{2}\mathbf{I}_{d\times d})$ and $\gb_{\xi}^{\mathbf{y}} \sim \mathcal{N}(\gb^{\mathbf{y}}, \operatorname{std}_\gb^{2}\mathbf{I}_{d\times d})$, where we generate the means as follows: $\gb^{\mathbf{x}}$, $\gb^{\mathbf{y}} \sim \mathcal{N}(0, 0.1 \cdot \mathbf{I}_{d\times d})$ (note that $\gb^{\mathbf{x}}$, $\gb^{\mathbf{y}}$ are fixed for all $\gb_{\xi}^{\mathbf{x}}$, $\gb_{\xi}^{\mathbf{y}}$). More specifically, for each iteration, we randomly generate $\{\Bb_{\xi}, \gb_{\xi}^{\mathbf{x}}, \gb_{\xi}^{\mathbf{y}}\}$ according to the above procedure.
When $\operatorname{std}_\Bb=\operatorname{std}_\gb=0$, the objective in Eq.~\eqref{Sminimax} equals $ \mathbf{x}^{\top}\Bb\mathbf{y} +\mathbf{x}^{\top}\gb^{\mathbf{x}} + (\gb^{\mathbf{y}})^{\top}\mathbf{y}$, where the Nash equilibrium is $\mathbf{x}^{\star} = -(\Bb^{\top})^{-1}\gb^{\mathbf{y}}$ and $\mathbf{y}^{\star} = -\Bb^{-1}\gb^{\mathbf{x}}$.
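As a concrete illustration, the sampling procedure above can be sketched in NumPy as follows. This is our own minimal reimplementation (variable names, dimension, and random seed are ours, not from the experimental code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                     # problem dimension (d = m = n)
std_B, std_g = 0.1, 0.01   # noise magnitudes std_B and std_g

# Fixed means: B = Diag(u) with u_j ~ Unif[1, d+1]; g^x, g^y ~ N(0, 0.1 I).
u = rng.uniform(1.0, d + 1.0, size=d)
B = np.diag(u)
g_x = rng.normal(0.0, np.sqrt(0.1), size=d)
g_y = rng.normal(0.0, np.sqrt(0.1), size=d)

def sample_oracle():
    """Draw one stochastic sample {B_xi, g_xi^x, g_xi^y} for one iteration."""
    B_xi = B + rng.normal(0.0, std_B, size=(d, d))   # E[B_xi] = B
    g_x_xi = g_x + rng.normal(0.0, std_g, size=d)
    g_y_xi = g_y + rng.normal(0.0, std_g, size=d)
    return B_xi, g_x_xi, g_y_xi

# Nash equilibrium of the mean objective x^T B y + x^T g^x + (g^y)^T y:
x_star = -np.linalg.solve(B.T, g_y)   # solves B^T x* + g^y = 0
y_star = -np.linalg.solve(B, g_x)     # solves B y* + g^x = 0
```

Setting `std_g = 0.0` recovers the interpolation setting used below.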
We study three algorithms in this section: Stochastic ExtraGradient (\textbf{SEG}), Stochastic ExtraGradient with iteration averaging (\textbf{SEG-Avg}), and Stochastic ExtraGradient with Restarted iteration averaging (\textbf{SEG-Avg-Restart}).%
\footnote{Straightforward calculation gives $\sigma_\Bb = \operatorname{std}_\Bb\sqrt{d}$ and $\sigma_\gb = \operatorname{std}_\gb\sqrt{2d}$ in our example, as in Assumptions \ref{assu_boundednoise_A}, \ref{assu_boundednoise_B}.}
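For reference, a minimal sketch of the SEG iteration (with optional iteration averaging) on this bilinear problem, assuming the standard extrapolation-then-update rule with a single step size $\eta$ and fresh samples for each step; this is an illustrative reimplementation, not the exact experimental code:

```python
import numpy as np

def operator(x, y, B_xi, g_x_xi, g_y_xi):
    """Stochastic operator F(x, y) = (grad_x, -grad_y) of the bilinear objective."""
    return B_xi @ y + g_x_xi, -(B_xi.T @ x + g_y_xi)

def seg(sample_oracle, x0, y0, eta=0.01, num_iters=1000, average=False):
    x, y = x0.copy(), y0.copy()
    x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
    for k in range(1, num_iters + 1):
        B_xi, gx_xi, gy_xi = sample_oracle()          # sample for the extrapolation step
        fx, fy = operator(x, y, B_xi, gx_xi, gy_xi)
        x_half, y_half = x - eta * fx, y - eta * fy   # extrapolation (leading) step
        B_xi, gx_xi, gy_xi = sample_oracle()          # fresh sample for the update step
        fx, fy = operator(x_half, y_half, B_xi, gx_xi, gy_xi)
        x, y = x - eta * fx, y - eta * fy             # update step from the original point
        x_avg += (x - x_avg) / k                      # running average of the iterates
        y_avg += (y - y_avg) / k
    return (x_avg, y_avg) if average else (x, y)
```

With `average=True` this returns the averaged iterate (SEG-Avg); restarting the averages on a schedule yields SEG-Avg-Restart.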
\noindent\textbf{General setting ($\sigma_\gb > 0$).}
We first set $\operatorname{std}_\gb = 0.01$ and $\operatorname{std}_\Bb = 0.1$. The results comparing the three algorithms are shown in Figure~\ref{fig:general}.
We find that SEG can only converge to a neighborhood of the Nash equilibrium, whereas SEG-Avg and SEG-Avg-Restart can converge to the equilibrium.
From Figure~\ref{fig:general}, we also observe that SEG-Avg initially converges at rate ${O}(1/K^2)$, after which its rate transitions to ${O}(1/K)$.
As in the interpolation setting, SEG-Avg-Restart converges faster than both SEG and SEG-Avg.
We also study the effect of the step size $\eta$ and the noise parameter $\operatorname{std}_\gb$ for SEG. As shown in Figure~\ref{fig:seg-step-size}, we observe that SEG cannot converge to a smaller neighborhood of the Nash equilibrium with smaller step size $\eta$, which aligns well with our theoretical results. We summarize the varying noise experimental results in Figure~\ref{fig:seg-noise}, where we observe that SEG converges to a smaller neighborhood of the Nash equilibrium when we decrease the noise parameter $\operatorname{std}_\gb$.
\noindent\textbf{Comparisons with DSEG.}
As shown in Figures~\ref{fig:general-DSEG}, \ref{fig:interpolation-DSEG}, and \ref{fig:interpolation-DSEG-semilogy}, we provide experimental results comparing SEG-Avg and SEG-Avg-Restart with the \emph{Double Stepsize Extragradient} (DSEG) method, proposed in \citet{hsieh2020explore}, which allows the step sizes of the extrapolation and gradient steps to be of different scales.
We follow the optimized hyperparameter setup described in \citet{hsieh2020explore} and select the step size constants to achieve faster convergence.
From Figure~\ref{fig:general-DSEG}, for the general setting, we find that the convergence rate of DSEG is $O(1/K)$ and both SEG-Avg and SEG-Avg-Restart converge faster than DSEG.
For the interpolation setting in Figures~\ref{fig:interpolation-DSEG} and \ref{fig:interpolation-DSEG-semilogy}, we observe that the convergence rate of DSEG is significantly slower than SEG-Avg-Restart.
\noindent\textbf{Interpolation setting ($\sigma_\gb = 0$).}
We first set the noise parameter $\operatorname{std}_\gb = 0$, and set $\operatorname{std}_\Bb = 0.1$. The performance of SEG, SEG-Avg, and SEG-Avg-Restart is summarized in Figure~\ref{fig:interpolation}, where we set the dimension $d=100$. We observe that the convergence rate of SEG-Avg is ${O}(1/K^{2})$, which aligns with our theoretical analysis. Meanwhile, we find that SEG-Avg-Restart converges faster than SEG under this interpolation setting. As shown in Figures~\ref{fig:d=100} and \ref{fig:d=200}, we compare the convergence rate of SEG and SEG-Avg-Restart on a semi-log plot, since both algorithms converge exponentially to the Nash equilibrium in the interpolation setting. We observe that SEG-Avg-Restart converges faster than SEG (for both $d=100$ and $d=200$) as suggested by our theory. We also present a zoomed-in plot of SEG-Avg-Restart in Figure~\ref{fig:seg-avg-restart-zoomin}.
\section{CONCLUSIONS}\label{sec_conclusions}
We have presented an analysis of the classical Stochastic ExtraGradient (SEG) method for stochastic bilinear minimax optimization.
Although the last iterate only contracts to a fixed neighborhood of the Nash equilibrium, whose diameter is independent of the step size, we show that SEG accompanied by iteration averaging converges to the Nash equilibrium at a sublinear rate.
Moreover, with a scheduled restarting procedure, the initialization is forgotten at an optimal rate in both the general and interpolation settings.
Numerical experiments further validate this use of iteration averaging and restarting in the SEG setting.
Further directions for research include justification of the optimality of our convergence result, improvement of the convergence of SEG for nonlinear convex-concave optimization problems with relaxed assumptions, and connection to the Hamiltonian viewpoint for bilinear minimax optimization.
\section*{Acknowledgements}
We would like to acknowledge support from the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764.
Gauthier Gidel and Nicolas Le Roux are supported by Canada CIFAR AI Chairs.
Part of this work was done while Nicolas Loizou was a postdoctoral research fellow at Mila, Université de Montréal, supported by the IVADO Postdoctoral Funding Program.
Yi Ma acknowledges support from ONR grant N00014-20-1-2002 and the joint Simons Foundation-NSF DMS grant number 2031899.
\bibliographystyle{plainnat}
\section{Portal is PSPACE-complete}
\label{sec:pspace}
In this section we give a new metatheorem for games with doors and switches, in the same vein as the metatheorems in \cite{Forisek10}, \cite{HardGames12}, and \cite{2Button2015}. We use this metatheorem to give proofs of PSPACE-completeness of Portal with various game elements. All of the gadgets in this section can be created in the Portal 2 Puzzle Maker.
The proofs in this section revolve around constructing game mechanics which implement a switch:
the construction can be in one of two states, and the state is controllable by the player. When the avatar is near the switch, it can be freely set to either state. Each state has a set of doors which are open when the switch is in that state. A switch is very similar to a button in that it controls whether doors are open or closed, and the player has the option of interacting with it. The key difference is that a button can be pressed multiple times to open or close its associated doors, but cannot necessarily be `unpressed' to undo the action. We show that a game with switches and doors is PSPACE-complete, using similar techniques to \cite{2Button2015}.
In what follows we will use the nondeterministic constraint logic framework~\cite{GPCBook09}, wherein the state of a nondeterministic machine is encoded by a graph called a \emph{constraint graph}. The state is updated by changing the orientation of the edges in such a way that constraints stored on the vertices are satisfied.
Formally, a constraint graph is an undirected simple graph $G=(V,E)$ with an assignment of positive integers to the edges $w:E\rightarrow \mathbb{Z}^+$, referred to as \emph{weights}, and an assignment of integers to the vertices $c:V\rightarrow \mathbb{Z}$, referred to as \emph{constraints}. Each edge has an orientation $p:E\rightarrow \{+1,-1\}$. A constraint graph is fully specified by the tuple $\mathcal{G}=(G,w,c,p)$. The edge orientation $p$ induces a directed graph $D_{G,p}$.
Let $v\in V$ be a vertex of $G$. Its \emph{in-neighbourhood}
\begin{align*}
N^-(v,p)=\{u~|~(u,v)\in A\}
\end{align*}
is the set of vertices of $D_{G,p}=(V,A)$ with an arc oriented towards $v$.
The constraint graph $\mathcal{G}$ is \emph{valid} if, for all $y\in V$, $\sum_{x\in N^-(y,p)} w(\{x,y\}) \ge c(y)$.
The state of a constraint graph can be changed by selecting an edge and multiplying its orientation by $-1$, such that the resulting constraint graph is valid. We say that we have \emph{flipped} the edge.
A vertex $v$ in a constraint graph with three incident edges $x,y,o$ can implement an AND gate by setting $c(v)=2$, $w(x)=w(y)=1$, and $w(o)=2$. Clearly, the edge $o$ can only point away from $v$ if both $x$ and $y$ are pointing towards $v$. In a similar fashion, we can implement an OR gate by setting $c(v)=2$ and $w(x)=w(y)=w(o)=2$. A constraint graph where all vertices are AND or OR vertices is called an \emph{AND/OR constraint graph}. The following decision problem about constraint graphs is PSPACE-complete.
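To make these definitions concrete, here is a small Python sketch (our own illustration, not code from the constraint-logic literature) that checks validity of an orientation and enumerates the single-edge flips that preserve it:

```python
def in_weight(v, edges, w, p):
    """Total weight of edges currently oriented towards v.
    Each undirected edge {u, v} is stored as a tuple (u, v);
    p[e] = +1 means u -> v, and p[e] = -1 means v -> u."""
    total = 0
    for e in edges:
        u, x = e
        if (p[e] == +1 and x == v) or (p[e] == -1 and u == v):
            total += w[e]
    return total

def is_valid(vertices, edges, w, c, p):
    """A constraint graph is valid iff every vertex's in-weight meets its constraint."""
    return all(in_weight(v, edges, w, p) >= c[v] for v in vertices)

def valid_flips(vertices, edges, w, c, p):
    """Edges that can be flipped while keeping the graph valid."""
    flips = []
    for e in edges:
        q = dict(p)
        q[e] = -p[e]
        if is_valid(vertices, edges, w, c, q):
            flips.append(e)
    return flips
```

For an AND vertex $v$ with its two weight-one edges pointing inwards, only the weight-two edge can be flipped, matching the gate behaviour described above.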
\begin{problem}
\textsc{Nondeterministic Constraint Logic}
\textit{Input}: An AND/OR constraint logic graph $\mathcal{G}=((V,E),w,c,p)$, and a target edge $\{i,j\}\in E$.
\textit{Output}: Whether there exists a constraint graph $\mathcal{G}'=((V,E),w,c,p')$ such that $p'(\{i,j\})=-p(\{i,j\})$, and which can be obtained from $\mathcal{G}$ by a sequence of valid edge flips.
\end{problem}
\begin{metatheorem}
\label{thm:switches}
Games with doors that can be controlled by a single switch and switches that can control at least six doors are PSPACE-complete.
\end{metatheorem}
\begin{proof}
We prove this by reduction from \textsc{Nondeterministic Constraint Logic}. Each edge of the constraint graph is represented by a single switch whose state encodes the edge orientation. Connected to each switch is a \emph{consistency check gadget}: a series of hallways that checks that the two vertices incident to the simulated edge are in a valid configuration, and thus that the update made to the graph was valid. Each edge switch is connected to doors in up to six consistency checks, two for itself and four for the adjacent edges. For an AND vertex, the weight-two edge is given by the door with the single hallway, and the weight-one edges connect to the two doors in the other hallway. For an OR vertex we have a hallway that splits in three, each with one door. An example is given in Figure~\ref{img:SwitchGadget}. Each switch thus connects to up to six doors. All of the edge gadgets, with their consistency checks, are connected together. This construction allows the player to change the direction of any edge they choose; however, to get back to the main hallway connecting the gadgets, the graph must be left in a valid state. Off the main hallway there is a final exit leading to the target location, blocked by a door controlled by the target edge. If the player is able to flip the target edge by visiting its edge gadget, setting its switch to the state that opens the exit door, and returning through the graph consistency check, then the avatar can reach the target location.
\end{proof}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{SwitchGraph}
\caption{Section of a constraint logic graph being simulated. Blue edges are weight 2 and red edges are weight 1.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{SwitchGadget}
\caption{Gadget simulating edge $c$ in the constraint logic graph. Green dotted lines are open doors.}
\end{subfigure}
\caption{Example of an edge gadget built from switches and doors.}
\label{img:SwitchGadget}
\end{figure}
\begin{theorem}
\label{thm:pspace}
\textsc{Portal} with any subset of long falls, portals, Weighted Storage Cubes, doors, Heavy Duty Super Buttons, lasers, laser relays, gravity beams, turrets, timed buttons, and moving platforms is in PSPACE.
\end{theorem}
\begin{proof}
Portal levels do not increase in size and the walls and floors have a fixed geometry. Assuming all velocities are polynomially bounded, all gameplay elements have a polynomial amount of state which describes them: for example, the position and velocity of the avatar or a HEP; whether a door is open or closed; and the time on a button timer. The number of gameplay elements remains bounded while playing. Most gameplay elements cannot be added while playing, and items like the HEP launcher and cube suppliers only produce another copy when the prior one has been destroyed. We therefore need only a polynomial amount of space to describe the state of a game of Portal at any given point in time, so one can nondeterministically search the state space for any solutions to the \textsc{Portal} problem, putting it in NPSPACE. By Savitch's Theorem\cite{SAVITCH1970}, the problem is thus in PSPACE.
\end{proof}
\begin{theorem}\label{thm:cubes-pspace}
\textsc{Portal} with Weighted Storage Cubes, doors, and Heavy Duty Super Buttons is PSPACE-complete.
\end{theorem}
\begin{proof}
We will construct switches and doors out of doors, Weighted Storage Cubes, and Heavy Duty Super Buttons. Then, we invoke Metatheorem~\ref{thm:switches} to complete the proof. A switch is constructed out of a room with a single cube and two buttons as in Figure~\ref{fig:switch}. Which of the buttons being pressed by the cube dictates the state of the switch. Each button is connected to the corresponding doors which should open when the switch is in that state. To ensure the switch is always in a valid state, we put an additional door in the only entrance to the room. This door is only open if at least one of the two buttons is depressed. Furthermore, this construction prevents the cube from being removed from the room to be used elsewhere. As long as there are no extra cubes in the level, the room must be left in exactly one of the two valid switch states for the avatar to exit the room. We now apply our doors and simulated switches as in Metatheorem~\ref{thm:switches} completing the hardness proof. Theorem~\ref{thm:pspace} implies inclusion in PSPACE.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{SwitchExample}
\caption{An example of a single switch implemented with cubes, doors, and buttons. The door will only open if at least one of the buttons is pressed.}
\label{fig:switch}
\end{figure}
\end{proof}
\begin{theorem} \label{thm:lasers-pspace}
\textsc{Portal} with lasers, relays, portals, and moving platforms is PSPACE-complete.
\end{theorem}
\begin{proof}
We will construct doors and switches out of lasers, relays, and moving platforms allowing us to use Metatheorem~\ref{thm:switches}. In Portal 2, the avatar is not able to cross through an active laser. Because lasers can be blocked by the moving platforms game element, a door can be constructed by placing a moving platform and laser at one end of a small hallway. If the moving platform is in front of the laser, the gadget is in the unlocked state. If the moving platform is to the side, then the player cannot pass through the hallway and it is in the locked state. Moving platforms can be controlled by laser relays and will switch position based on whether the laser relay is active. Lasers can be directed to selectively activate laser relays with portals, so we have a mechanism to lock or unlock the doors.
As it stands, once a new portal is created the previously opened door will revert to its previous state. To prove PSPACE-hardness, we need to make these changes persist. To do so, we introduce a memory latch gadget, shown in Figures~\ref{fig:memory-latch0}\iffull~and \ref{fig:memory-latch1}\fi. When the relay in this gadget is activated for a sufficiently long period of time, the platform will move out of the way and the laser will keep the relay active. If the relay has been blocked for enough time, the platform moves back and blocks the laser. Thus, the state of the gadget persists.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{MemoryLatchOff}
\caption{A memory latch in the off state.}
\label{fig:memory-latch0}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{MemoryLatchOn}
\caption{A memory latch in the on state.}
\label{fig:memory-latch1}
\end{figure}
The last construction is the switch, which we build out of two groups of lasers, moving platforms, and laser relays, as well as a memory latch. The player has the ability to change the state of the memory latch. We interpret the state of the memory latch as the state of the switch. When active, one of the relays in the latch moves a platform out of the way of one of the lasers, activating the corresponding relays and opening the set of doors to which they are connected. Another relay in the latch moves the second moving platform into the path of the second laser, deactivating its corresponding laser relays and the doors they control. Likewise, deactivating the memory latch causes both moving platforms to revert to their original positions, blocking the first laser and letting the second through. We have now successfully constructed doors and switches, so by Metatheorem~\ref{thm:switches} and Theorem~\ref{thm:pspace}, PSPACE-completeness follows.
\end{proof}
Note that in the proof of the preceding theorem, laser catchers could be used in place of laser relays, although the relays have the convenient property that they each need only be connected to a single moving platform. It is also possible that the proof could be adapted to use a single Reflection Cube instead of portals. Additional care would be required with respect to the construction of the door, and it would need to be the case that lasers from multiple directions blocked the avatar. Emancipation Grills or long falls with the moving platforms would simplify this particular door construction.
The game elements in the following corollary are a superset of those used in Theorem~\ref{thm:cubes-pspace}, so this result follows trivially. However, we prove it by using a construction similar to that in Theorem~\ref{thm:lasers-pspace}, as we feel that the gadgets involved are interesting. We also note that the proof only uses Heavy Duty Super Buttons placed on vertical surfaces, whereas Theorem~\ref{thm:cubes-pspace} relies on their placement on the floor.
\begin{corollary} \label{thm:gravity-pspace}
\textsc{Portal} with gravity beams, cubes, Heavy Duty Super Buttons, and long fall is PSPACE-complete.
\end{corollary}
\begin{proof}
When active, a gravity beam causes objects which fit inside its diameter to be pushed or pulled in line with the gravity beam emitter. Objects in the gravity beam ignore the normal pull of gravity, and thus float along their course. We construct a simple door by placing a gravity beam so that it can carry the player avatar across a pit large enough that the avatar would otherwise be unable to traverse. We hook the gravity beam emitter up to a button allowing it to be turned on and off, unlocking and locking the door.
If we wish to only use buttons placed on vertical surfaces, we are now faced with the problem of making changes to doors persist once the avatar stops holding a cube next to the button. To solve this problem, we construct a memory latch as in Theorem~\ref{thm:lasers-pspace}. If a weighted cube button is placed in the path of a gravity beam, a weighted cube caught in the beam can depress the button as in Figure~\ref{fig:grav-memory-latch1}. A cube on the floor near a gravity beam, \iffull as in Figure~\ref{fig:grav-memory-latch0}\fi~will be picked up by the beam. \iffull Weighted cube buttons can activate and deactivate the same mechanics as laser catchers, including gravity beam emitters. Figures~\ref{fig:grav-memory-latch0} and \ref{fig:grav-memory-latch1} demonstrate a memory latch in the off and on positions, respectively.\fi We also note that gravity beams are blocked by moving platforms, just like lasers. At this point, we have the properties we need from the laser, laser catcher, and moving platform. We also note that the player can pick up and remove cubes from the beam, meaning that portals are not needed.
\begin{figure}[t]
\centering
\includegraphics[width=0.72\textwidth]{GravMemoryLatchOff}
\caption{A memory latch in the off state.}
\label{fig:grav-memory-latch0}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.72\textwidth]{GravMemoryLatchOn}
\caption{A memory latch in the on state.}
\label{fig:grav-memory-latch1}
\end{figure}
\end{proof}
\section*{Acknowledgments}
All raster figures are screenshots from Valve's Portal or Portal 2,
either using Portal 2's Puzzle Maker or
by way of the Portal Unofficial Wiki (\url{http://theportalwiki.com/}).
\section{Conclusion}
In this paper we proved a number of hardness results about the video game Portal. In Sections \ref{sec:PortalFalling} through \ref{sec:PortalHEP} we have identified several game elements that, when accounted for, give Portal sufficient flexibility so as to encode instances of NP-hard problems. Furthermore, in Section~\ref{sec:pspace} we gave a new metatheorem and use it to prove that certain additional game elements, such as lasers, relays and moving platforms, make the game PSPACE-complete. The unique game mechanics of Portal provided us with a beautiful and unique playground in which to implement the gadgets involved in the hardness proofs. Indeed, our work shows how clause, literal, and variable gadgets inspired by the work of Aloupis et al.~\cite{NintendoFun2014} can be implemented in a 3D video game.
While our results about Portal itself will be of interest to game and puzzle enthusiasts, what we consider most interesting are the techniques we utilized to obtain them. Adding new, simple gadgets to this collection of abstractions gives us powerful new tools with which to attack future problems. In Section \iffull\ref{subsec:OtherGames}\fi\ifabstract\ref{sec:PortalTurrets}\fi~we identified several other video games that our techniques can be generalized to. We also believe the decomposition of games into individual mechanics will be an important tactic for understanding games of increasing complexity. Metatheorems~\ref{thm:timed-thm} and \ref{thm:switches} are new metatheorems for platform games. We hope that our work is useful as a stepping stone towards more metatheorems of this type. Additionally, we hope the study of motion planning in environments with dynamic topologies leads to new insights in this area.
\subsection{Open Questions}
This work leads to many open questions to pursue in future research. In Portal, we leave many hardness gaps and a number of mechanics unexplored. We are particularly curious about Portal with only portals, and Portal with only cubes. The removal of Emancipation Fields from our proofs would be very satisfying. The other major introduction in Portal 2 that we have not covered is co-op mode. If the players are free to communicate and have perfect information of the map, this feature should not add to the complexity of the game. However, the game seems designed with limited communication in mind and thus an imperfect-information model seems reasonable. Although perfect-information team games tend to reduce down to one- or two-player games, it has been shown that when the players have imperfect information the problem can become significantly harder. In particular, a cooperative game with imperfect information can be 2EXPTIME-complete~\cite{peterson2001lower}, while a team game with imperfect information can be undecidable \cite{demaine2008constraint}. We are not aware of any common or natural games that have used these techniques and think it would be very interesting to have a result such as Bridge or Team Fortress 2 being undecidable.
More than the results themselves, one would hope to use these techniques to show hardness for other problems. Many other games use movable blocks, timed door buttons, and stationary turrets and may have hardness results that immediately follow. Some techniques like encoding numbers in velocities might be transferable. It would be good to generalize some of these into metatheorems which cover a larger variety of games.
\section{Introduction}
In Valve's critically acclaimed \emph{Portal} franchise, the player guides \emph{Chell} (the game's silent protagonist) through a ``test facility'' constructed by the mysterious fictional organization Aperture Science.
Its unique game mechanic is the Portal Gun, which enables the player to place a pair of portals on certain surfaces within each test chamber. When the player avatar jumps into one of the portals, they are instantly transported to the other. This mechanic, coupled with the fact that in-game items can be thrown through the portals, has allowed the developers to create a series of unique and challenging puzzles for the player to solve as they guide Chell to freedom. Indeed, the Portal series has proved extremely popular, and is estimated to have sold more than 22 million copies \cite{PortalSales,SteamSpyPortal,Portal2Sales,SteamSpyPortal2}.
We analyze the computational complexity of Portal following the recent surge of interest in complexity analysis of video games and puzzles. Examples of previous work in this area includes the proof of NP-completeness of Minesweeper~\cite{Minesweeper00}, Clickomania~\cite{ClickomaniaGameTheory2000,Clickomania_MOVES2015}, and Tetris~\cite{Tetris03}, as well as PSPACE-completeness of Lemmings~\cite{Lemmings04, viglietta2015lemmings} and Super Mario Bros.~\cite{Mario_FUN2016}.
See also the surveys~\cite{AlgGameTheory_GONC3, NPPuzzles08, GPCBook09}.
Recent work has moved from puzzles to classic arcade games~\cite{HardGames12}, Nintendo games~\cite{NintendoFun2014}, 2D platform games~\cite{Forisek10}, and others~\cite{DBLP:journals/corr/Walsh14, floodIt, DBLP:journals/corr/abs-1203-1633}.
In this paper, we explore how different game elements contribute to the computational complexity of Portal 1 and Portal 2 (which we collectively refer to as \emph{Portal}), with an emphasis on identifying gadgets and proof techniques that can be used in hardness results for other video games. We show that a generalized version of Portal with Emancipation Grills is weakly NP-hard (Section~\ref{sec:PortalFalling}); Portal with turrets is NP-hard (Section~\ref{sec:PortalTurrets}); Portal with timed door buttons and doors is NP-hard (Section~\ref{sec:PortalTimed}); Portal with High Energy Pellet launchers and catchers is NP-hard (Section~\ref{sec:PortalHEP}); Portal with Cubes, Weighted Buttons, and Doors is PSPACE-complete (Section~\ref{sec:pspace}); and Portal with lasers, laser relays, and moving platforms is PSPACE-complete (Section~\ref{sec:pspace}).
Table~\ref{PortalResultsTable} summarizes these results.
The first column lists the primary game mechanics of Portal we are investigating. The second and third columns note whether the Portal Gun or long fall mechanics are needed for the proof. Section~\ref{sec:PortalDefinitions} provides more details about what these models mean.
The turret proof generalizes to many other video games, as described in \iffull Section~\ref{subsec:OtherGames}\else Section~\ref{sec:PortalTurrets}\fi.
\begin{table}[ht]
\centering
\centerline{
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Mechanics} & \textbf{Portals} & \textbf{Long Fall} & \textbf{Complexity} \\ \hline
\hline
None & No & Yes & P (\S \ref{sec:PortalNavigation}) \\ \hline
Emancipation Grills, No Terminal Velocity & Yes & Yes & Weakly NP-hard (\S \ref{sec:PortalFalling}) \\ \hline
Turrets & No & Yes & NP-hard (\S \ref{sec:PortalTurrets}) \\ \hline
Timed Door Buttons and Doors & No & No & NP-hard (\S \ref{sec:PortalTimed}) \\ \hline
HEP Launcher and Catcher & Yes & No & NP-hard (\S \ref{sec:PortalHEP}) \\ \hline
Cubes, Weighted Buttons, Doors & No & No & PSPACE-comp. (\S \ref{sec:pspace}) \\ \hline
Lasers, Relays, Moving Platforms & Yes & No & PSPACE-comp. (\S \ref{sec:pspace}) \\ \hline
Gravity Beams, Cubes, Weighted Buttons, Doors & No & No & PSPACE-comp. (\S \ref{sec:pspace}) \\ \hline
\end{tabular}
}
\caption{Summary of New Portal Complexity Results}
\label{PortalResultsTable}
\end{table}
\ifabstract
\later{
\section{Additional Notes About Portal}
\label{appen:defintions}}
\later{
Viglietta~\cite{HardGames12} argues that most modern games include Turing-complete scripting languages and thus could allow designers to create undecidable puzzles, and Portal (being based on the Source engine) is no exception. However, we feel that many games (including Portal) have consistent game mechanics from which puzzles are built, and that analyzing the complexity of games and puzzles that can be created within those rule sets is interesting and meaningfully captures what we think of as the game they compose.
}
\fi
\section{Definitions of Game Elements}
\label{sec:PortalDefinitions}
Portal is a \emph{platform game}: a single-player game with the goal of navigating the avatar from a start location to an end location through a series of stages, called \emph{levels}. The gameplay in Portal involves walking, turning, jumping, crouching, pressing buttons, picking up objects, and creating portals. The locations and movement of the avatar and all in-game objects are discretized.
For convenience we make a few assumptions about the game engine, which we feel preserve the essential character of the games under consideration while abstracting away irrelevant implementation details, making the complexity analysis more tractable:
\begin{itemize}
\item Positions and velocities are represented as fixed-point numbers.\footnote{The actual game uses floats in many instances. We claim that all our proofs work if we round the numbers involved, and only encode the problems in the significand.}
\item Time is discretized and represented as a fixed-point number.
\item At each discrete time step, there is only a constant number of possible user inputs: button presses and the cursor position.
\item The cursor position is represented by two fixed-point numbers.
\end{itemize}
In Portal, a \emph{level} is a description of the polygonal surfaces in 3D defining the geometry of the map, along with a list of game elements with their locations and, if applicable, connections to each other.
In general, we assume that the level can be specified succinctly as a
collection of polygons whose coordinates may have polynomial precision
(and thus so can the player coordinates), and hence exponentially large
values (ratios). This assumption matches the Valve Map Format (VMF) used to
specify levels in Portal, Portal~2, and other Source games \cite{VMF}.
A realistic special case is where we aim for \emph{pseudopolynomial}
algorithms, that is, we assume that the coordinates of the polygons and
the player have polynomial values/ratios (logarithmic precision),
as when the levels are composed of explicit discrete blocks.
This assumption matches the voxel-based P2C format sometimes used for
community-created Portal~2 levels \cite{P2C}.
In this work, we consider the following decision problem, which asks whether a given level has a path from a given start location to a given end location.
\begin{problem}
\textsc{Portal}
\textit{Parameter}: A set of allowed gameplay elements.
\textit{Input}:
A description of a Portal level using only allowed gameplay elements, and
spatial coordinates specifying a start and end location.
\textit{Output}: Whether there exists a path traversable by a Portal player from the start location to the end location.
\end{problem}
The key game mechanic, the \emph{Portal Gun}, creates a portal on the closest surface in a direct line from the player's avatar if the surface is of the appropriate type. We call surfaces that admit portals \emph{portalable}. There are a variety of other gameplay elements which can be a part of a Portal level. Because we use many of these in our proofs, we describe them in detail below.
\newcounter{papercount}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{long fall} is a drop in the level terrain that the avatar can jump down from without dying, but cannot jump up. \setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}%
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{longfall_diagram}
\captionof*{figure}{It's a long way down.}
\label{fig:longfall}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{door} can be open or closed, and can be traversed by the player's avatar if and only if it is open. In Portal, many mechanics can act as doors, such as literal doors, laser fields, and moving platforms. On several occasions we will assume the door being used also blocks other objects in the game, such as High Energy Pellets or lasers, which is not generally true.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=0.68\linewidth]{225px-Portal_Door.png}
\captionof*{figure}{A Door in Portal 2}
\label{fig:Door}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{button} is an element which can be interacted with when the avatar is nearby to change the state of the level, e.g., a button to open or close a door.\\ \vspace{2mm}
\item A \emph{timed button} will revert back to its previous state after a set period of time, reverting its associated change to the level too, e.g., a timed button which opens a door for 10 seconds, before closing it again.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=0.28\linewidth]{99px-Portal_2_Switch.png}
\captionof*{figure}{Timed Button}
\label{fig:Button}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{weighted floor button} is an element which changes the state of a level when one or more of a set of objects is placed on it. In Portal, the 1500 Megawatt Aperture Science Heavy Duty Super-Colliding Super Button is an example of a weighted floor button which activates when the avatar or a Weighted Storage Cube is placed on top of it. An activated weighted floor button can activate other mechanics such as doors, moving platforms, laser emitters, and gravitational beam emitters.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=0.68\linewidth]{225px-Portal_2_Heavy_Duty_Super-Colliding_Super_Button.png}
\captionof*{figure}{Heavy Duty Super-Colliding Super Button}
\label{HeavyDutySuper-CollidingSuperButton}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item \emph{Blocks} can be picked up and moved by the avatar. The block can be set down and used as a platform, allowing the avatar to reach higher points in the level. While carrying a block, the avatar will not fit through small gaps, rendering some places inaccessible while doing so. In Portal, the Weighted Storage Cube is an example of a block that can be jumped on or used to activate weighted floor buttons. We will refer to Weighted Storage Cubes, Companion Cubes, etc.\ as simply \emph{cubes}.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{375px-Portal1_StorageCube.png}
\captionof*{figure}{Weighted Storage Cube}
\label{fig:WeightedStorageCube}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{Material Emancipation Grid}, also called an \emph{Emancipation Grill} or \emph{fizzler}, destroys some objects which attempt to pass through it, such as cubes and turrets. When the avatar passes through an Emancipation Grid, all previously placed portals are removed from the map.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[ height=0.11\textheight]{450px-Emancipation_Grid.jpg}
\captionof*{figure}{Emancipation Grid}
\label{fig:EmancipationGrid}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item The \emph{Portal Gun} allows the player to place portals on portalable surfaces within their line of effect. Portals are orange or blue. If the player jumps into an orange (blue) portal, they are transported to the blue (orange) portal. Only one orange portal and one blue portal may be placed on the level at any given time. Placing a new orange (blue) portal removes the previously placed orange (blue) portal from the level.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=0.5\linewidth]{Portal_PortalGun.png}
\captionof*{figure}{Portal Gun}
\label{fig:PortalGun}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{High Energy Pellet} (HEP) is a spherical object which moves in a straight line until it encounters another object. HEPs move faster than the player avatar. If they collide with the player avatar, then the avatar is killed. If a HEP encounters a wall or another object, it will bounce off it with equal angles of incidence and reflection. In Portal, some HEPs have a finite lifespan, which is reset when the HEP passes through a portal, and others have an unbounded lifespan. These unbounded HEPs are referred to as \emph{Super High Energy Pellets}. \ifabstract HEPs are created by \emph{HEP Launchers} and can activate \emph{HEP Catchers} if they come in contact with them.\fi
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{hep_and_collecter}
\captionof*{figure}{A HEP about to reach a HEP Collector}
\label{fig:HEP}
\end{minipage}
\end{center}
\iffull
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{HEP Launcher} emits a HEP at an angle normal to the surface upon which it is placed. These are launched when the HEP launcher is activated or when the previously emitted HEP has been destroyed.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{194px-Combine_Ball_launcher.png}
\captionof*{figure}{HEP Launcher}
\label{fig:HEPLauncher}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{HEP Catcher} is a device which is activated if it is ever hit by a HEP. In Portal, this device can act as a button, and is commonly used to open doors or move platforms when activated.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{194px-Combine_Ball_catcher.png}
\captionof*{figure}{HEP Catcher}
\label{fig:HEPCatcher}
\end{minipage}
\end{center}
\fi
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{Laser Emitter} emits a \emph{Thermal Discouragement Beam} at an angle normal to the surface upon which it is placed. The beam travels in a straight line until it is stopped by a wall or another object. The beam causes damage to the player avatar and will kill the avatar if they stay close to it for too long. We call the beam and its emitter a \emph{laser}.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{450px-Thermal_Discouragement_Beam.png}
\captionof*{figure}{A Laser Emitter and Thermal Discouragement Beam.}
\label{fig:laser}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{Laser Relay} is an object which can activate other objects while a laser passes through it.
\vspace{2mm}
\item A \emph{Laser Catcher} is an object which can activate other objects while a laser contacts it.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\vspace{4mm}
\includegraphics[height=0.11\textheight]{laser_catcher_and_relay.jpg}
\captionof*{figure}{An active laser relay and laser catcher.}
\label{fig:Laser}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{Moving Platform} is a solid polygon with an inactive and an active position. It begins in the inactive position and will move in a line at a constant velocity to the active position when activated. If it becomes deactivated it will move back to the inactive position with the opposite velocity.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\vspace{4mm}
\includegraphics[height=0.11\textheight]{moving_platform.jpg}
\captionof*{figure}{A horizontal moving platform.}
\label{fig:MovingPlatform}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item A \emph{Turret} is an enemy which cannot move on its own. If the player's avatar is within the field of view of a turret, the turret will fire on the avatar. If the avatar is shot sufficiently many times within a short period of time, the avatar will die.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{129px-Portal_Turret.png}
\captionof*{figure}{Turret from Portal 2}
\label{fig:Turret}
\end{minipage}
\end{center}
\begin{center}
\captionsetup{justification=centering}
\begin{minipage}{.68\textwidth}
\centering
\begin{enumerate}
\setcounter{enumi}{\the\value{papercount}}
\item An \emph{Excursion Funnel}, also called a \emph{Gravitational Beam Emitter}, emits a gravitational beam normal to the surface upon which it is placed. The gravitational beam is directed and will move small objects at a constant velocity in the prescribed direction. Importantly, it will carry Weighted Storage Cubes and the player avatar. Gravitational Beam Emitters can be switched on and off, and the direction of the gravitational beam they emit can be reversed.
\setcounter{papercount}{\the\value{enumi}}
\end{enumerate}
\end{minipage}
\hfill
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[height=0.11\textheight]{600px-Excursion_Funnel.jpeg}
\captionof*{figure}{A Gravity Beam and Excursion Funnel.}
\label{fig:ExcursionFunnel}
\end{minipage}
\end{center}
\later{
There are two main pieces of software for creating levels in Portal 2: the \emph{Puzzle Maker} (also known as the \emph{Puzzle Creator}), and the \emph{Valve Hammer Editor} equipped with the \emph{Portal 2 Authoring Tools}. Both of these tools are publicly available for players to create their own levels. The Puzzle Maker is a more restricted editor than Hammer, with the advantage of providing a more user-friendly editing experience.
However, levels created in the Puzzle Maker must be made of voxels, with coarsely discretized object locations. In particular, the Puzzle Maker uses the P2C file format, which restricts it to pseudopolynomial instances (while Hammer uses VMF). Furthermore, no HEP launchers or additional doors can be placed in Puzzle Maker levels. We will often comment on which of our reductions can be constructed with the additional Puzzle Maker restrictions (except, of course, the small level size and item count), but this distinction is not a primary focus of this work.}
\section{Portal with Emancipation Grills is Weakly NP-hard}
\label{sec:PortalFalling}
In this section, we prove that \textsc{Portal} with portals and Emancipation Grills is weakly NP-hard by reduction from \textsc{Subset Sum} \cite{NPBook}, which is defined as follows.
\begin{problem}
\textsc{Subset Sum}
\textit{Input:} A set of integers $A=\{a_1,a_2,\dots,a_n\}$, and a target value $t$.
\textit{Output:} Whether there exists a subset $\{s_1,s_2,\dots,s_m\}\subseteq A$ such that
\begin{align*}
\sum_{i=1}^{m}s_i=t.
\end{align*}
\end{problem}
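For intuition, a brute-force decider for \textsc{Subset Sum} can be sketched in a few lines (exponential time, as expected for an NP-hard problem); the function name and interface here are our own illustration, not part of the reduction:

```python
from itertools import combinations

def subset_sum(A, t):
    """Decide SUBSET SUM by exhaustive search over all 2^n subsets.

    Returns a witness subset summing to t, or None if none exists.
    """
    for r in range(len(A) + 1):
        for combo in combinations(A, r):
            if sum(combo) == t:
                return list(combo)
    return None
```

The reduction below encodes exactly this search space: each subset corresponds to one choice of wells for the avatar to fall through.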
The reduction involves representing the integers in $A$ as distances which are translated into the avatar's velocity. More explicitly, the input $A$ will be constructed from long holes the avatar can fall down, and the target will be encoded in a distance the avatar must launch themselves after falling. In the game, there is a maximum velocity the player avatar can reach. For the next theorem, it is necessary to consider Portal without bounded terminal velocity.\footnote{Alternatively, any terminal velocity which scales at least polynomially in the level size suffices.}
\begin{theorem}
\textsc{Portal} with portals, long fall, Emancipation Grills, and no terminal velocity is weakly NP-hard.
\end{theorem}
\begin{wrapfigure}{R}{0.5\textwidth}
\centering
\vspace{-2ex}
\includegraphics[width=\linewidth]{FallingDiagram1}
\caption{A cross-section of the element selection gadget, where $\delta=2\cdot n^2\cdot \epsilon \cdot t$. Grey lines are portalable surfaces and blue lines are Emancipation Grills.}
\vspace{-4ex}
\label{fig:fallingDiagram}
\end{wrapfigure}
\begin{proof}
The elements of $A$ are represented by a series of wells, each of depth $4\cdot a_i\cdot n^2\cdot \epsilon\cdot t$, where $a_i\in A$ is the number to be encoded, $n$ is the number of elements in $A$, $t$ is the original target value of the \textsc{Subset Sum} problem, and $\epsilon$ is an expansion factor chosen to be larger than the height of the avatar plus the height she can jump. An example is shown in Figure~\ref{fig:fallingDiagram}. The bottom of each well is a portalable surface, and the ceiling above each well is also a portalable surface. This construction will allow the avatar to shoot a portal to a ceiling tile, and to the bottom of the well they are falling into, selecting the next number.
We cannot allow the avatar to select the same element more than once. The Emancipation Grills below each portalable ceiling serve to remove the portal from the ceiling of the well into which the avatar is currently falling, and to prevent sending a portal up to that same ceiling tile. The stair-stepped ceiling will allow the player to see the ceilings of all of the wells with index greater than the one they are currently at, but prevents them from seeing the portalable surface of the wells with a lower index. This construction ensures that the player can only select each element once using portals. The enforced order of choosing does not matter when solving \textsc{Subset Sum}.
Another concern is the ability to move horizontally while falling. This movement is a small, fixed velocity $v_h$. To solve this issue, we simply ensure the distance between each hole is greater than $2\cdot v_h\cdot n\cdot \epsilon$ so it is impossible to move from one hole to another while falling.
The distance between each step is $\epsilon$, thus to ensure the accumulated error from falling these distances does not impact the solution to the subset sum, we scale each $a_i$ by $4\cdot n^2\cdot\epsilon$, which is greater than the sum of all of these extra distances.
The verification gadget involves two main pieces: a single portalable surface on a wall, the launching portal, and a target platform for the player to reach. We place the launching portal so it can always be shot from the region above the wells. The target platform is placed $\epsilon$ units below the launching portal, at a distance of $4\cdot t\cdot n\cdot \epsilon$ in front of the portalable surface, such that leaving the launching portal with the target velocity will cause the player to reach the target platform. Because it takes 1 second to fall the vertical distance to the platform, the avatar will only reach the target if their velocity is equal to $4 \cdot t\cdot n\cdot \epsilon$. We make the target platform $n\cdot \epsilon$ on each side, to account for any errors incurred by the falling region or initial horizontal movement. This size is smaller than the difference in distance if the target value $t$ differed by $1$. We now have an encoding of our numbers, a method of selecting them, and verification that they reach the target sum, completing the reduction.
With an acceleration of $\alpha$ and zero initial velocity, a body will fall a distance $s = \frac{1}{2}\cdot\alpha\cdot t^2$. The time it takes to fall will thus be $t = \left(\frac{2s}{\alpha}\right)^{1/2}$. The resulting final velocity will be $v_f=\left(2\cdot\alpha\cdot s\right)^{1/2}$. If the player starts at an initial height $h$ and horizontal velocity $V_x$, then they will travel a total horizontal distance of $V_x\cdot\left(\frac{2h}{\alpha}\right)^{1/2}$. In our construction we have the player initially fall a total distance of
\begin{align*}\sum_{i=1}^{k} 4\cdot s_i\cdot n^2\cdot \epsilon\cdot t.\end{align*}
If this solution is correct, the sum of the chosen $a_j$ will be $t$, giving a total fall of $4\cdot n^2 \cdot \epsilon\cdot t^2$. Because the verification portal on the wall is placed at a height of $\epsilon$, we arrive at our required distance to the verification platform of $t' = \left(8\cdot\alpha\cdot\epsilon\cdot n^2\cdot t^2\right)^{1/2}\left(\frac{2\epsilon}{\alpha}\right)^{1/2} = 4\cdot n\cdot \epsilon\cdot t$.
\end{proof}
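The kinematics used in this proof are easy to sanity-check numerically. The helper below is our own illustration (names and constants are ours): it computes the horizontal distance covered after falling a total distance $s$ and launching from a portal placed $\epsilon$ above the platform. Note that $\alpha$ cancels, so the landing distance $2\sqrt{s\cdot\epsilon}$ is strictly monotone in the total fall, which is what makes distinct subset sums distinguishable.

```python
import math

def launch_distance(total_fall, alpha, eps):
    """Horizontal distance after falling total_fall under constant gravity
    alpha, then launching horizontally from a portal placed eps above the
    target platform (using v_f = sqrt(2*alpha*s) and t = sqrt(2*eps/alpha))."""
    v_f = math.sqrt(2 * alpha * total_fall)   # speed on leaving the wells
    t_fall = math.sqrt(2 * eps / alpha)       # time to drop eps onto the platform
    return v_f * t_fall                       # simplifies to 2*sqrt(total_fall*eps)
```

In particular, the gravitational constant drops out, so only the depths of the selected wells (i.e., the chosen subset) determine where the avatar lands.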
\iffull
All of the game elements needed for this construction can be placed in the Puzzle Maker. However, this reduction would not be constructible because maps in the Puzzle Maker appear to be specified in terms of voxels. Because \textsc{Subset Sum} is only weakly NP-hard~\cite{NPBook}, we need the values of the elements of $A$ to be exponential in $n$. Thus we need to describe the map in terms of coordinates specifying the polygons making up the map, whereas the Puzzle Maker specifies each voxel in the map.
\fi
\begin{corollary}
\textsc{Portal} with portals, long fall, Emancipation Grills, and no terminal velocity can be solved in pseudopolynomial time.
\end{corollary}
\begin{proof}
Theorem~\ref{thm:pseudopoly} gives a pseudopolynomial-time algorithm for \textsc{Portal} with portals by constructing the full state-space graph. The Emancipation Grills do not change state over time and thus add no additional state that needs to be stored. \iffull We can use the same vertices as in the former proof, but now the edge transitions will differ if the player's avatar passes through any Emancipation Grills. This construction is still polynomial in the size of the state space and thus polynomial in the number of voxels in the level.\fi
\end{proof}
\section{Portal with High Energy Pellets and Portals is NP-hard}
\label{sec:PortalHEP}
\ifabstract\later{
\section{Portal with High Energy Pellets and Portals is NP-hard}
\label{appen:PortalHEP}}
\fi
In Portal, a High Energy Pellet (HEP) is an object which moves in a straight line until it encounters another object. HEPs move faster than the player avatar, and if they collide with the player avatar, the avatar is killed. If a HEP encounters another wall or object, it will bounce off that object with equal angles of incidence and reflection. In Portal, some HEPs have a finite lifespan, which is reset when the HEP passes through a portal, and others have an unbounded lifespan. \iffull A HEP launcher emits a HEP normal to the surface it is placed upon. These are launched when the HEP launcher is activated or when the previously emitted HEP has been destroyed.\fi A HEP catcher is another device that is activated if it is ever hit by a HEP. When activated, this device can activate other objects, such as doors or moving platforms. HEPs are only seen in the first Portal game and are not present in the Portal 2 Puzzle Maker.
\begin{theorem}
\textsc{Portal} with Portals, High Energy Pellets, HEP launchers, HEP catchers, and doors controlled by HEP catchers is NP-hard.
\end{theorem}
\begin{proof}
We will reduce from finding Hamiltonian cycles in grid graphs~\cite{GridHamPath}; refer to Figure~\ref{fig:doorsHamiltonianHEP}. For this construction, we will need a gadget to ensure the avatar traverses every represented node, as well as a timing element. Each node in the graph will be represented by a room that contains a HEP launcher and a HEP catcher. They are positioned near the ceiling, each facing a portalable surface. The HEP catcher is connected to a closed door preventing the avatar from reaching the exit. \iffull The proof of Metatheorem~\ref{thm:timed-thm} uses the same idea and has an example of how rooms in Portal can be connected to simulate a grid graph. The rooms are small in comparison to the hallways. In particular, the time it takes to shoot a portal, wait for it to enter the HEP Catcher, and travel across a room is $\delta$, and the time it takes to traverse a hallway is $\alpha > n\cdot\delta$, where $n$ is the number of nodes in the graph. This property ensures the error from turning versus going straight through a room won't matter in comparison to traveling from node to node.\fi
\begin{figure}[t]
\centering
\includegraphics[width=0.72\textwidth]{doorsHamiltonianHEP}
\caption{An example level for the HEP reduction. Not drawn to scale.}
\label{fig:doorsHamiltonianHEP}
\end{figure}
The timer will contain two elements. First, we will arrange for a hallway with two exits and a HEP launcher behind a door on one end. The hallway is long enough that it is impossible for the avatar to traverse it when the door is open. Call this component the \emph{time verifier}. In another area, we have a HEP launcher and a HEP catcher on opposite ends of a hallway that is inaccessible to the avatar. The catcher in this section will open the door in the time verifier. This construction ensures that the player can only pass through the time verifier if they enter it before a certain point after starting. To complete the proof, we set the timer equal to $(\alpha+\delta)\cdot n+ \epsilon_1+\epsilon_2$, where $\epsilon_1$ is the minimum time needed for the avatar to traverse the hallway with doors, $\epsilon_2$ is the minimum time needed for the avatar to traverse the time verifier, $\alpha$ is the minimum time it takes for the player to move to an adjacent room and change the trajectory of the HEP, and $n + 1$ is the number of HEP catchers in the level. This concludes our reduction from the Hamiltonian cycle problem in grid graphs.
\end{proof}
The HEP Catchers can each be activated only once, so one may be tempted to claim this problem is in NP. This is not necessarily the case, because navigating around HEPs with more complicated trajectories might require long paths or wait times. The PSPACE-hardness of motion planning with periodic obstacles~\cite{sutner1988motion} suggests that this problem is actually PSPACE-complete.
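The difficulty alluded to here is that pellet trajectories are periodic, so a safe crossing may only open up after many bounces. Along a single axis, the classic "unfolding" trick gives a pellet's position at any time in constant time; the helper below is our own illustration, not part of any proof in this paper:

```python
def hep_position(x0, v, length, t):
    """Position at time t of a HEP bouncing elastically between walls at
    0 and length, starting at x0 with velocity v.  Unfolding trick: lift
    the segment to a cycle of length 2*length, then fold back."""
    period = 2 * length
    x = (x0 + v * t) % period
    return x if x <= length else period - x
```

Deciding whether the avatar can thread between many such periodic obstacles is exactly the kind of motion-planning problem shown PSPACE-hard in the cited work.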
\section{Movement is Easy}
\label{sec:PortalNavigation}
\ifabstract\later{
\section{Movement is Easy}
\label{appen:PortalNavigation}}
\fi
In this section, we prove a basic result that the core mechanism of portals
does not affect the complexity of traversing a level.
\begin{theorem}
\label{thm:pseudopoly}
\textsc{Portal} with portals can be solved in pseudopolynomial time.
\end{theorem}
\begin{proof}
We construct a state-space graph of the Portal level. Each vertex represents a tuple comprising the avatar's position vector, the avatar's velocity vector, the avatar's orientation, the position vector of the blue portal, and the position vector of the orange portal. The vertices are connected with directed edges encoding the state transitions caused by user input. We can then search for a path from the initial game state to any of the winning game states in time polynomial in the size of the graph.
Thus we have a pseudopolynomial-time algorithm for solving \textsc{Portal} in this case.
\end{proof}
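Concretely, the algorithm is a standard graph search over explicitly enumerated states. A generic sketch follows; the state encoding and the `moves` function are placeholders for the tuple and transitions described in the proof:

```python
from collections import deque

def level_solvable(initial, is_winning, moves):
    """Breadth-first search of the state-space graph.  States are hashable
    tuples (avatar position, velocity, orientation, portal positions);
    moves(state) yields the successor states for each possible user input
    at one discrete time step.  Runs in time polynomial in the number of
    reachable states, hence pseudopolynomial in the level description."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_winning(state):
            return True
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

Because every component of the state tuple has pseudopolynomially many values, the whole graph has pseudopolynomial size and the search terminates within that bound.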
\section{Portal with Timed Door Buttons is NP-hard}
\label{sec:PortalTimed}
\ifabstract\later{
\section{Portal with Timed Door Buttons is NP-hard}
\label{appen:PortalTimed}}
\fi
We provide a new metatheorem related to Forisek's Metatheorem 2~\cite{Forisek10} and Viglietta's Metatheorem 1~\cite{HardGames12}.
\begin{metatheorem}
\label{thm:timed-thm}
A platform game with doors controlled by timed switches is NP-hard.
\end{metatheorem}
\begin{proof}
\label{pf:timed-proof}
We will prove hardness by reducing from finding Hamiltonian cycles in grid graphs~\cite{GridHamPath}. Every vertex of the graph will be represented by a room with a timed switch in the middle. These rooms will be laid out in a grid with hallways in-between. The rooms are small in comparison to the hallways. In particular, the time it takes to press a timed button and travel across a room is $\delta$, and the time it takes to traverse a hallway is $\alpha > n\cdot\delta$, where $n$ is the number of nodes in the graph. This property ensures the error from turning versus going straight through a room won't matter in comparison to traveling from node to node. All of the timed switches will be connected to a series of closed doors blocking the exit hallway connected to the start node. The timers will be set such that the doors will close again after $(\alpha + \delta) \cdot (n + 1) + \epsilon$, where $\epsilon$ is the time it takes to move from the switch at the start node through the open doors to the exit. The exit is thus only reachable if all of the timed switches are simultaneously active. Because we can make $\alpha$ much larger than $\epsilon$, we can ensure that there is only time to visit every switch exactly once and then pass through before any of the doors revert.
\end{proof}
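As a numeric sanity check of the timing constraints (with hypothetical concrete parameter values, taking the timer window to be $(\alpha+\delta)\cdot(n+1)+\epsilon$ as in the proof):

```python
def fits_in_timer(num_visits, n, alpha, delta, eps):
    """True iff a walk pressing `num_visits` switches (each press and room
    crossing costs delta, each hallway costs alpha), followed by the final
    dash of length eps, finishes within the timer window
    (alpha + delta) * (n + 1) + eps of the first switch pressed."""
    timer = (alpha + delta) * (n + 1) + eps
    return num_visits * (alpha + delta) + eps <= timer

n, delta, eps = 10, 1, 1
alpha = n * delta + 1                       # enforce alpha > n * delta
assert fits_in_timer(n, n, alpha, delta, eps)          # a Hamiltonian tour fits
assert not fits_in_timer(n + 2, n, alpha, delta, eps)  # longer walks do not
```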
\begin{corollary}
\label{cor:timeddoor}
A Portal level with only timed door buttons is NP-hard.
\end{corollary}
\later{
A screenshot of an example map for Corollary~\ref{cor:timeddoor} is given in Figure~\ref{fig:HampathScreenshotsMap}. Because the Portal 2 Workshop does not allow additional doors, the example instead uses collapsible stairs for the verification gadget, as seen in Figure~\ref{fig:HampathScreenshotsVerify}. We note that anything which will prevent the player from passing unless currently activated by a timed button will suffice. Moving platforms and Laser Fields are other examples. Unfortunately, the Puzzle Maker does not allow the timer length to be specified, which is needed for the reduction and is available in the Hammer editor.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{doorsHamiltonian}
\caption{An example of a map forcing the player to find a Hamiltonian cycle in a grid graph.}
\label{fig:HampathScreenshotsMap}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{button_hampath_node.jpg}
\caption{Close-up of a node in the grid graph.}
\label{HampathScreenshotsNode}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{button_hampath_verify_fps.jpg}
\caption{A screenshot of the verification gadget, partially satisfied.}
\label{fig:HampathScreenshotsVerify}
\end{figure}
}
\section{Portal with Turrets is NP-hard}
\label{sec:PortalTurrets}
\ifabstract\later{
\section{Portal with Turrets is NP-hard}
\label{appen:PortalTurrets}}
\fi
In this section we prove \textsc{Portal} with turrets is NP-hard, and show that our method can be generalised to prove that many 3D platform games with enemies are NP-hard. Although enemies in a game can provide interesting and complex interactions, we can pull out a few simple properties that allow them to be used as gadgets in a reduction from 3-SAT, defined as follows.
\begin{problem}
$3$-\textsc{SAT}

\textit{Input:} A $3$-CNF boolean formula $f$.

\textit{Output:} Whether there exists a satisfying assignment for $f$.
\end{problem}
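For concreteness, the decision problem we reduce from admits the following brute-force (exponential-time) checker; the DIMACS-style literal encoding is our own choice:

```python
from itertools import product

def is_satisfiable(clauses, num_vars):
    """Brute-force 3-SAT decision: `clauses` is a list of 3-tuples of
    nonzero integers, where literal k (resp. -k) asserts that variable k
    is true (resp. false)."""
    for assignment in product([False, True], repeat=num_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False
```

The reduction below shows that deciding solvability of a \textsc{Portal}-with-turrets level is at least as hard as this problem.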
This proof follows the architecture laid out in \cite{NintendoFun2014}:
\begin{enumerate}
\item The enemy must be able to prevent the player from traversing a specific region of the map; call this the \emph{blocked region}.
\item The player avatar must be able to enter an area of the map, which is path-disconnected from the blocked region, but from which the player can remove the enemy in the blocked region.
\item The level must contain long falls.
\end{enumerate}
We further assume that the behavior of the enemies is local, meaning an interaction with one enemy will not affect the behavior of another enemy if they are sufficiently far away. \iffull In many games one must also be careful about ammo and any damage the player may incur while interacting with the gadget, because these quantities will scale with the number of literals. Here long falls serve only in the construction of one-way gadgets, and can of course be replaced by some equivalent game mechanic. Similarly, a 2D game with these elements and an appropriate crossover gadget should also be NP-hard. \fi The following is a construction proving Portal with Turrets is NP-hard using this technique. Note that these gadgets can be constructed in the Portal 2 Puzzle Maker.
\iffull
\subsection{Literal}\fi
Each literal is encoded with a hallway with three turrets placed in a raised section, illustrated in Figure~\ref{TurretLiteral}. The hallway must be traversed by the player, starting from ``Traverse In'', ending at ``Traverse Out''. If the turrets are active, they will kill the avatar before the avatar can cross the hallway or reach the turrets. The literal is true if the turrets are deactivated or removed, and false if they are active. The ``Unlock In'' and ``Unlock Out'' pathways allow the player avatar to destroy the turrets from behind, deactivating them and counting as a true assignment of the literal.
\iffull
\begin{figure}[th]
\centering
\includegraphics[width=0.8\textwidth]{turret-literal-label.jpg}
\caption{An example of a (currently) false literal constructed with Turrets. Labels added over the screenshot.}
\label{TurretLiteral}
\end{figure}
\fi
\iffull
\subsection{Variable}\fi
The variable gadget consists of a hallway that splits into two separate paths. Each hallway starts and ends with a one-way gadget constructed with a long fall. This construction forces the avatar to commit to one of the two paths. The gadget is shown in Figure~\ref{Split}.
The hallways connect the ``Unlock In'' and ``Unlock Out'' paths of the literals corresponding to a particular variable. Furthermore, one path connects all of the true literals, the other connects all of the false literals.
\later{
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\textwidth]{split-2.jpg}
\caption{An example of the choice gadget used to construct variable gadgets.}
\label{Split}
\end{figure}
}
\iffull
\subsection{Clause Gadget}\fi
Each clause gadget is implemented with three hallways in parallel. A section of each hallway is the ``Traverse In'' through the ``Traverse Out'' corresponding to a literal. The avatar can progress from one end of the clause to the other if any of the literals is true (and thus passable). Furthermore, each of the clause gadgets is connected in series. Figures \ref{TurretClauseGadget} and \ref{TurretClauseExample} illustrate a full clause gadget.
\begin{figure}[th]
\centering
\includegraphics[width=0.8\textwidth]{TurretClauseGadget.pdf}
\caption{A diagram of clause $C_k$ which contains variables $x_a$, $x_b$, and $x_c$.}
\label{TurretClauseGadget}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=0.6\textwidth]{TurretClauseLabeled.pdf}
\caption{An example of a clause gadget with two literals.}
\label{TurretClauseExample}
\end{figure}
\ifabstract
\begin{figure}[th]
\centering
\includegraphics[width=0.8\textwidth]{turret-literal-label.jpg}
\caption{An example of a (currently) false literal constructed with Turrets. Labels added over the screenshot.}
\label{TurretLiteral}
\end{figure}
\fi
\both{
\begin{theorem}
\textsc{Portal} with Turrets is NP-hard.
\end{theorem}}
\later{
\begin{proof}
Given an instance of a 3SAT problem, we can translate it into a Portal with Turrets map using the above gadgets. This map is solvable if and only if the corresponding 3SAT problem is solvable.
\end{proof}
It is tempting to claim NP-completeness because disabling the turrets need only be performed once per turret and thus seems to have a monotonically changing state. However, the turrets themselves are physical objects that can be picked up and moved around. Their relocation adds an exponential amount of state to the level. Further, if they can be jumped on top of or used to block the player in a constrained hallway, they may conceivably cause the level to be PSPACE-complete in the same way boxes can add significant complexity to a game.
}
\iffull
\subsection{Application to Other Games}
\label{subsec:OtherGames}\fi
While the framework we have presented is shown using the gameplay elements of Portal, similar elements to those we have used show up in other video games. Hence, our framework can be generalised to show hardness of other games. In this section we note several common features of games which would allow for an equivalent to the turret ``guarding unit'' in Portal. We list examples of notable games which fit the criteria. We give ideas on how to use our framework to prove hardness results for these games, but it is important to note that game-specific implementation details will need to be taken into account for any hardness proof.
The first examples are games that include player-controlled weapons with fixed positions, such as stationary turrets or gun emplacements. The immovable turrets should be placed at the unlock points of the literal gadget, so that they only allow the player to shoot the one desired blocking unit. Examples in contemporary video games include the Emplacement Gun in Half-Life 2, the Type-26 ASG in Half-Life, and the Anti-Infantry Stationary Guns in Halo 1 through 4.
Another set of examples are games which include a pair of ranged weapons, where one is more powerful than the other, but has shorter range. In place of the turrets in the Portal literal gadgets, we place an enemy unit equipped with the short range weapon, and give the player avatar the long range weapon. We place the blocked region such that it is in range and line of sight of the player while standing in the unlock region of the literal gadget. Additionally, we place the player such that they are not in range of the enemy's weapon. Thus the player can kill the enemy from the unlock area.
Suppose further that the blocked region is built in such a way that the player can only pass through it by moving within range of the enemy. One way of doing this would be to build it with tight turns. The result would be an equivalent implementation of the variable and clause gadgets from our Portal constructions. Note that a special case involves melee enemies. This construction applies to Doom, the Elder Scrolls III--V, Fallout 3 and 4, Grand Theft Auto 3--5, Left 4 Dead 1 and 2, the Mass Effect series, the Deus Ex series, the Metal Gear Solid series, the Resident Evil series, and many others. \iffull The complementary case occurs when the player has the short ranged, but more powerful weapon and the enemy has the weaker, long ranged weapon. Here the unlock region provides close proximity to the enemy unit but the locked region involves a significant region within line of sight and range of the enemy but is outside of the player's weapon's range. Although most games where this construction is applicable will also fall into the prior case, examples exist where the player has limited attacks, such as in the Spyro series.\fi
\iffull
A third case is where the environment impacts the effectiveness of attacks. For example, certain barriers might block projectile weapons but not magic spells. Skills that can shoot above or around barriers like this show up with Thunderstorm in Diablo II, Firestorm in Guild Wars, and Psi-storm in StarCraft. Another common effect is a location based bonus, for example the elevated-ground bonus in XCOM. Unfortunately these games lack a long-fall, and thus require the construction of a one-way gadget if one wishes to prove hardness.
While we have so far only covered NP-hardness, we conjecture that these games are significantly harder.
Assuming simple AI and perfect information, many are likely PSPACE-complete; however, when all of the details are taken into consideration, EXPTIME or NEXPTIME seem more likely. Proving such results will require development of more sophisticated mathematical machinery.
\fi
\ifabstract\later{
We would like to make some additional comments on generalizing this theorem. Here long falls serve only in the construction of one-way gadgets, and can of course be replaced by some equivalent game mechanic. Additionally, a 2D game with these elements and an appropriate crossover gadget should also be NP-hard.
In many games one must also be careful about ammo and any damage the player may incur while interacting with the gadget, because this will scale with the number of literals. For most of the games mentioned this is not an issue because they either 1) have items or locations to restore health or 2) have health restore after a fixed time outside of combat.
There are also some less likely, but still potentially useful combinations of mechanics that can be used to fulfill the criteria for constructing literals. First, suppose the player has a short-ranged but more-powerful weapon. This case looks just like the case where an enemy has the short ranged weapon. The unlock region provides close proximity to the enemy unit but the locked region involves a significant region within line of sight and range of the enemy but is outside of the player's weapon's range. Although most games where this is applicable will also fall into the prior case, examples exist where the player has limited attacks, such as in the Spyro series.
Another case is where the environment impacts the effectiveness of attacks. For example, certain barriers might block projectile weapons but not magic spells. Skills that can shoot above or around barriers like this show up with Thunderstorm in Diablo II, Firestorm in Guild Wars, and Psi-storm in StarCraft. Another common effect is a location-based bonus, for example the elevated-ground bonus in XCOM. Unfortunately these games lack a long-fall, and thus require the construction of a one-way gadget if one wishes to prove hardness.
While we have so far only covered NP-hardness, we conjecture that these games are significantly harder.
Assuming simple AI and perfect information, many are likely PSPACE-complete; however, when all of the details are taken into consideration, EXPTIME or NEXPTIME seem more likely. Proving such results will require development of more sophisticated machinery.
}
\fi
\section{Preliminaries}
\medskip
\subsection{Generalized Cartan type S Lie algebra and its Lie bialgebra structure}
Let $\mathbb{F}$ be a field with $\text{char}(\mathbb{F})=0$ and let
$n>0$. Let $\mathbb{Q}_n=\mathbb{F}[x^{\pm1}_1,\cdots,x^{\pm1}_n]$
be the Laurent polynomial algebra, and let $\partial_i$ denote the
degree operator $x_i\frac{\partial}{\partial x_i}$.
Set
$T=\bigoplus_{i=1}^n{\mathbb{Z}}\partial_i$, and
$x^{\alpha}=x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ for
$\alpha=(\alpha_1,\cdots,\alpha_n)
\in \mathbb{Z}^{n}$.
Denote $\mathbf{W}=\mathbb{Q}_n\otimes_{\mathbb{Z}}
T=\text{Span}_{\mathbb{F}}\{\,x^{\alpha}\partial \mid \alpha \in
\mathbb{Z}^{n},\partial\in T\}$, where we set $x^{\alpha}\partial\
=x^{\alpha}\otimes
\partial$ for short. Then $\mathbf{W}=\text{Der}_{\mathbb{F}}(\mathbb{Q}_n)$
is a Lie algebra of
generalized-Witt type (see \cite{DZ}) under the following bracket
$$[\,x^{\alpha}\partial,\,x^{\beta}\partial'\,]
=x^{\alpha+\beta}\bigl(\,\partial(\beta)\partial'-\partial'(\alpha)\partial\,\bigr),
\qquad\forall\
\alpha,\, \beta\in \mathbb{Z}^{n};\ \partial,\, \partial' \in T,$$
where
$\partial(\beta)=\langle\partial,\beta\rangle=\langle\beta,\partial\rangle=\sum\limits_{i=1}^{n}a_i\beta_i\in\mathbb{Z}$
for $\partial=\sum\limits_{i=1}^{n}a_i\partial_i \in T$ and
$\beta=(\beta_1,\cdots,\beta_n) \in \mathbb{Z}^{n}$. The bilinear
map $\langle\cdot,\cdot\rangle: \,T\times {\mathbb{Z}}^n\longrightarrow
{\mathbb{Z}}$ is non-degenerate in the sense that
\begin{gather*}
\partial(\alpha)=\langle\partial,\alpha\rangle=0 \quad (\forall\;\partial \in
T),\ \Longrightarrow \alpha=0,\\
\partial(\alpha)=\langle\partial,\alpha\rangle=0 \quad (\forall\;\alpha \in
\mathbb{Z}^n),\ \Longrightarrow
\partial=0.
\end{gather*}
$\mathbf{W}$ is an infinite dimensional simple Lie algebra over
$\mathbb{F}$ (see \cite{DZ}).
We recall that the $divergence$ (cf. \cite{DZ1}) $\mathrm{div}:
\mathbf{W}\longrightarrow \mathbb{Q}_n$ is the $\mathbb{F}$-linear
map such that
$$
\mathrm{div}(x^{\alpha}\partial)=\partial(x^{\alpha})=\partial(\alpha)x^{\alpha},\quad
\textit{for } \alpha \in \mathbb{Z}^n,\ \partial \in T.\leqno (1)
$$
The $divergence$ has the following two properties:
\begin{gather*}
\mathrm{div} ([u,v])=u\cdot \mathrm{div}(v)-v\cdot \mathrm{div}(u),\tag{2}\\
\mathrm{div}(fw)=f\,\mathrm{div}(w)+w\cdot f, \tag{3}
\end{gather*}
for $u,v,w \in \mathbf{W}, f \in \mathbb{Q}_n$. In view of (2), the
subspace
$$\widetilde{\mathbf{S}}=\mathrm{Ker}(\mathrm{div})$$ is a subalgebra of $\mathbf{W}$.
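For instance, (2) can be verified directly on basis elements: for $u=x^{\alpha}\partial$ and $v=x^{\beta}\partial'$, formula (1) gives
$$
\mathrm{div}([u,v])
=\bigl(\partial(\beta)\partial'(\alpha{+}\beta)-\partial'(\alpha)\partial(\alpha{+}\beta)\bigr)x^{\alpha+\beta}
=\bigl(\partial(\beta)\partial'(\beta)-\partial'(\alpha)\partial(\alpha)\bigr)x^{\alpha+\beta},
$$
since the cross terms $\partial(\beta)\partial'(\alpha)$ cancel, and this agrees with
$u\cdot\mathrm{div}(v)-v\cdot\mathrm{div}(u)
=\bigl(\partial'(\beta)\partial(\beta)-\partial(\alpha)\partial'(\alpha)\bigr)x^{\alpha+\beta}$.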
The Lie algebra $\mathbf{W}$ is $\mathbb{Z}^n$-graded, whose
homogeneous components are
$$
\mathbf{W}_{\alpha}:=x^{\alpha}T, \qquad \alpha
\in\mathbb{Z}^n.
$$
The \textit{divergence} $\mathrm{div}:
\mathbf{W}\longrightarrow \mathbb{Q}_n$ is a derivation of degree
$0$. Hence, its kernel is a homogeneous subalgebra of $\mathbf{W}$.
So we have
$$\widetilde{\mathbf{S}}=\bigoplus_{\alpha \in \mathbb{Z}^n}
\widetilde{\mathbf{S}}_{\alpha}, \quad
\widetilde{\mathbf{S}}_{\alpha}:=\widetilde{\mathbf{S}}\cap
\mathbf{W}_{\alpha}.$$ For each $\alpha \in \mathbb{Z}^n,$ let
$\hat{\alpha}: T\rightarrow \mathbb{F}$ be the corresponding linear
function defined by
$\hat{\alpha}(\partial)=\langle\partial,\alpha\rangle=\partial(\alpha)$. We have
$$
\widetilde{\mathbf{S}}_{\alpha}=x^{\alpha}T_{\alpha}, \quad\textit{and } \
T_{\alpha}=\mathrm{Ker}(\hat{\alpha}).
$$
The algebra $\widetilde{\mathbf{S}}$ is not simple, but its derived
subalgebra $\mathbf{S}=(\widetilde{\mathbf{S}})'$ is simple,
provided that $\mathrm{dim}\,T\geq 2$. According to Proposition
3.1 \cite{DZ1}, we have $\mathbf{S}=\bigoplus_{\alpha\neq
0}\widetilde{\mathbf{S}}_{\alpha}$. More generally, it turns out that
the shifted spaces $x^{\eta}\mathbf{S},\, \eta \in
\mathbb{Z}^n-\{0\}$, are simple subalgebras of $\mathbf{W}$ if
$\mathrm{dim} T\geq 3$. We refer to the simple Lie algebras
$x^{\eta}\mathbf{S}$ as the Lie algebras of {\it generalized Cartan
type} $\mathbf{S}$ (see \cite{DZ1}). The Lie algebra
$x^{\eta}\mathbf{S}$ is $\mathbb{Z}^n$-graded with
$x^{\alpha}T_{\alpha-\eta}$ $(\alpha\neq \eta)$ as its homogeneous component
of degree $\alpha$, while its homogeneous component of degree $\eta$ is
$0$.
Throughout this paper, we assume that $\eta\neq 0$,
and $\eta_k=\eta_{k'}$.
Take two distinguished elements $h=\partial_k-\partial_{k'}, e=x^{\gamma}\partial_0
\in x^{\eta}\mathbf{S}$ such that $[h,e]=e$ where $1\leq k\neq k'
\leq n$. It is easy to see that $\partial_0(\gamma-\eta)=0$, and
$\gamma_k-\gamma_{k'}=1$. Using a result of \cite{M}, we have the
following
\begin{prop}
There is a triangular Lie bialgebra structure on
$x^{\eta}\mathbf{S}$ $(\eta\neq 0, \ \eta_k=\eta_{k'})$ given by
the classical Yang-Baxter $r$-matrix
$$
r:=(\partial_k-\partial_{k'})\otimes x^{\gamma}\partial_0-x^{\gamma}\partial_0
\otimes (\partial_k-\partial_{k'}),\quad \forall \;\partial_{k'},\,\partial_k \in
T, \ \gamma \in \mathbb{Z}^{n},
$$
where $\gamma_k-\gamma_{k'}=1$, $\partial_0(\gamma)=\partial_0(\eta)$ and
$[\,\partial_k-\partial_{k'},
x^\gamma\partial_0\,]=x^\gamma\partial_0$.\hfill\qed
\end{prop}
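Note that the standing assumption $\eta_k=\eta_{k'}$ is exactly what ensures $h=\partial_k-\partial_{k'}\in T_{-\eta}$, the degree-$0$ homogeneous component of $x^{\eta}\mathbf{S}$, while the relation $[h,e]=e$ is immediate from the bracket formula:
$$
[\,\partial_k-\partial_{k'},\,x^{\gamma}\partial_0\,]
=x^{\gamma}\bigl((\partial_k-\partial_{k'})(\gamma)\,\partial_0-\partial_0(0)\,(\partial_k-\partial_{k'})\bigr)
=(\gamma_k-\gamma_{k'})\,x^{\gamma}\partial_0=e,
$$
since $\gamma_k-\gamma_{k'}=1$.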
\subsection{Generalized Cartan type $\mathbf{S}$ Lie subalgebra $\mathbf{S}^+$}
Denote $D_i=\frac{\partial}{\partial x_i}$. Set
$\mathbf{W}^+:=\text{Span}_{\mathcal{K}}\{x^{\alpha} D_i\mid
\alpha\in\mathbb{Z}_+^n, 1\le i\le n\}$, where $\mathbb{Z}_+$ is the
set of non-negative integers. Then
$\mathbf{W}^+=\text{Der}_{\mathcal{K}}(\mathcal{K}[x_1,\cdots,x_n])$
is the derivation Lie algebra of polynomial ring
$\mathcal{K}[x_1,\cdots,x_n]$, which, via the identification $x^\alpha
D_i$ with $x^{\alpha-\epsilon_i}
\partial_i$ (here $\alpha-\epsilon_i\in\mathbb{Z}^n$, $\epsilon_i=(\delta_{1i},\cdots,\delta_{ni})$),
can be viewed as a Lie subalgebra (the ``positive" part) of the
generalized-Witt algebra $\mathbf{W}$ over a field $\mathcal{K}$.
For $X=\sum_{i=1}^{n}a_iD_i \in \mathbf{W}$, we define
$\mathrm{Div}(X)=\sum_{i=1}^{n}D_i(a_i)$ as usual. Note that
$\mathrm{div}(X)=\sum_{i=1}^{n}x_iD_i(x_i^{-1}a_i)$ (since
$\partial_i=x_iD_i$). Thus we have $\mathrm{div}(x_1\cdots x_n
X)=x_1\cdots x_n \mathrm{Div}(X)$. This means that
$X\in\mathrm{Ker}(\mathrm{Div})$ if and only if
$x^{\underline{1}}X\in \widetilde {\mathbf{S}}$, and if and only if
$X\in x^{-\underline{1}}\widetilde{\mathbf{S}}$, where
$\underline{1}=\epsilon_1+\cdots+\epsilon_n$.
Set $\mathbf{S}^+:=\mathrm{Ker}(\mathrm{Div})\bigcap\mathbf{W}^+$,
then we have
$\mathbf{S}^+=(x^{-\underline{1}}\mathbf{S})\bigcap\mathbf{W}^+$
since $\mathbf{S}=\bigoplus_{\alpha\neq 0}\widetilde{\mathbf{S}}_{\alpha}$
and
$x^{-\underline{1}}\widetilde{\mathbf{S}}_0\bigcap\mathbf{W}^+=0$
(where $\widetilde{\mathbf{S}}_0=T$), which is a subalgebra of
$\mathbf{W}^+$. Note that
$\{\,\alpha_nx^{\alpha-\epsilon_n}D_i-\alpha_ix^{\alpha-\epsilon_i}D_n\mid
\alpha\in\mathbb{Z}^n_+, 1\le i< n\,\}$ is a basis of $\mathbf{S}^+$,
where
$\alpha_nx^{\alpha-\epsilon_n}D_i-\alpha_ix^{\alpha-\epsilon_i}D_n
=x^{\alpha-\epsilon_i-\epsilon_n}(\alpha_n\partial_i-\alpha_i\partial_n)\in
x^{\alpha-\epsilon_i-\epsilon_n}T_{\alpha-\epsilon_i-\epsilon_n+\underline{1}}$ indicates once
again that $\mathbf{S}^+$ is indeed a subalgebra of
$x^{-\underline{1}}\mathbf{S}$ since $\partial_i=x_iD_i$.
\subsection{The special algebra $\mathbf{S}(n;\underline{1})$}
Assume now that $\text{char}(\mathcal{K})=p$; then, by definition,
the Jacobson-Witt algebra $\mathbf{W}(n;\underline{1})$ is a
restricted simple Lie algebra over a field $\mathcal{K}$. Its
$p$-Lie algebra structure is given by $D^{[p]}=D^p,\; \forall\, D
\in \mathbf{W}(n;\underline{1})$ with a basis $\{\,x^{(\alpha)}D_j
\mid 1\leq j\leq n, \ 0 \leq \alpha \leq \tau \}$, where
$\tau=(p{-}1,\cdots,p{-}1) \in \mathbb{N}^n$;
$\epsilon_i=(\delta_{1i},\cdots,\delta_{ni})$ such that
$x^{(\epsilon_i)}=x_i$ and $D_j(x_i)=\delta_{ij}$; and $\mathcal
O(n;\underline{1}):=\{\,x^{(\alpha)}\mid 0 \leq \alpha \leq \tau \}$ is
the restricted divided power algebra with
$x^{(\alpha)}x^{(\beta)}=\binom{\alpha{+}\beta}{\alpha}\,x^{(\alpha{+}\beta)}$ and a
convention: $x^{(\alpha)}=0$ if $\alpha$ has a component
$\alpha_j<0$ or $\geq p$, where
$\binom{\alpha{+}\beta}{\alpha}:=\prod_{i=1}^n\binom{\alpha_i{+}\beta_i}{\alpha_i}$.
Note that $\mathcal O(n;\underline{1})$ is $\mathbb{Z}$-graded by
$\mathcal
O(n;\underline{1})_i:=\textrm{Span}_{\mathcal{K}}\{\,x^{(\alpha)}\mid 0
\leq \alpha \leq \tau, |\alpha|=i \}$, where $|\alpha|=\sum_{j=1}^{n}\alpha_j$.
Moreover, $\mathbf{W}(n;\underline{1})$ is isomorphic to
$\text{Der}_{\mathcal{K}}(\mathcal O(n;\underline{1}))$ and inherits
a gradation from $\mathcal O(n;\underline{1})$ by means of
$\mathbf{W}(n;\underline{1})_i=\sum_{j=1}^{n}\mathcal
O(n;\underline{1})_{i+1}D_j$. Then the subspace
$$\mathbf{S}'(n;\underline{1})=\{E \in \mathbf{W}(n;\underline{1}) \mid \mathrm{Div}(E)=0\}$$
is a subalgebra of $\mathbf{W}(n;\underline{1})$.
Its derived subalgebra
$\mathbf{S}(n;\underline{1})=\mathbf{S}'(n;\underline{1})^{(1)}$ is
called \textit{the special algebra}. Then
$\mathbf{S}(n;\underline{1})=\bigoplus_{i=-1}^{s}\mathbf{S}(n;\underline{1})
\cap\mathbf{W}(n;\underline{1})_i$ is graded with $s=|\tau|-2$.
Recall the mappings $D_{ij}: \mathcal
O(n;\underline{1})\longrightarrow \mathbf{W}(n;\underline{1})$,
$D_{ij}(f)=D_j(f)D_i-D_i(f)D_j$ for $f \in \mathcal
O(n;\underline{1})$. Note that $D_{ii}=0$ and $D_{ij}=-D_{ji},\
1\leq i,\,j \leq n$. Then by Lemma 4.2.2 \cite{H},
$$\mathbf{S}(n;\underline{1})=\text{Span}_{\mathcal{K}}\{D_{in}(f) \mid f \in
\mathcal O(n;\underline{1}), 1\leq i< n\}$$ is a $p$-subalgebra of
$\mathbf{W}(n;\underline{1})$ with restricted gradation. Evidently,
we have the following result (see the proof of Theorem 3.7, p.159 in
\cite{HR})
\begin{lemm}
$\mathbf{S}'(n;\underline{1})=\mathbf{S}(n;\underline{1})+\sum\limits_{j=1}^{n}\mathcal{K}
x^{(\tau-(p-1)\epsilon_j)}D_j$. And
$\mathrm{dim}_{\mathcal{K}}\mathbf{S}'(n;\underline{1})$
$=(n-1)p^n+1$,
$\mathrm{dim}_{\mathcal{K}}\mathbf{S}(n;\underline{1})=(n-1)(p^n-1)$.\hfill\qed
\end{lemm}
By definition (cf. \cite{H}), the restricted universal enveloping
algebra $\mathbf{u}(\mathbf{S}(n;\underline{1}))$ is isomorphic to
$U(\mathbf{S}(n;\underline{1}))/I$ where $I$ is the Hopf ideal of
$U(\mathbf{S}(n;\underline{1}))$ generated by
$(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)}),\,
(D_{ij}(x^{(\alpha)}))^p$ with $\alpha \neq \epsilon_i+\epsilon_j$ for
$1\leq i<j\leq n$. Since
$\mathrm{dim}_{\mathcal{K}}\mathbf{S}(n;\underline{1})=(n-1)(p^n-1)$,
we have
$\mathrm{dim}_{\mathcal{K}}\mathbf{u}(\mathbf{S}(n;\underline{1}))$
$=p^{(n-1)(p^n-1)}$.
\subsection{A crucial Lemma}
For any element $x$ of a unital $R$-algebra ($R$ a ring) and
$a \in R$, we set (see \cite{AJ})
$$x_a^{\langle n\rangle}:=(x+a)(x+a+1)\cdots(x+a+n-1), \leqno(4)$$
then $x^{\langle n\rangle}:=x_0^{\langle n\rangle}=\sum_{k=0}^nc(n,k)x^k$ where
$c(n,k)$ is the number of $\pi\in \frak S_n$ with exactly $k$ cycles
(cf. \cite{R}). Given a $\pi\in \frak S_n$, let $c_i=c_i(\pi)$ be
the number of cycles of $\pi$ of length $i$. Note that $n=\sum
ic_i$. Define the type of $\pi$, denoted type $\pi$, to be the
$n$-tuple $\underline{c}=(c_1,\cdots,c_n)$. The total number of
cycles of $\pi$ is denoted $c(\pi)$, so
$c(\pi)=|\,\underline{c}\,|=c_1+\cdots+c_n$. Denote by $\frak
S_n(\underline{c})$ the set of all $\sigma\in \frak S_n$ of type
$\underline{c}$, then $|\frak
S_n(\underline{c})|=n!/1^{c_1}c_1!2^{c_2}c_2!\cdots n^{c_n}c_n!$
(see Proposition 1.3.2 \cite{R}).
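For small $n$, these counts are easy to verify by brute-force enumeration of permutations in one-line notation; the helper names below are ours:

```python
from itertools import permutations
from math import factorial

def cycle_type(perm):
    """Type (c_1,...,c_n) of a permutation in one-line notation:
    c_i is the number of cycles of length i."""
    n = len(perm)
    counts = [0] * n
    seen = set()
    for start in range(n):
        if start not in seen:
            j, length = start, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            counts[length - 1] += 1
    return tuple(counts)

def c_stirling(n, k):
    """c(n,k): number of permutations of S_n with exactly k cycles."""
    return sum(1 for p in permutations(range(n)) if sum(cycle_type(p)) == k)

# x^{<4>} = x(x+1)(x+2)(x+3) has coefficients c(4,k); check at x = 5.
x = 5
assert sum(c_stirling(4, k) * x**k for k in range(5)) == x*(x+1)*(x+2)*(x+3)

# |S_4(c)| for c = (0,2,0,0) (two 2-cycles): 4!/(2^2 * 2!) = 3.
assert sum(1 for p in permutations(range(4))
           if cycle_type(p) == (0, 2, 0, 0)) == factorial(4) // (2**2 * factorial(2))
```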
We also set
$$x_a^{[n]}:=(x+a)(x+a-1)\cdots(x+a-n+1), \leqno(5)$$
then $x^{[n]}:=x_0^{[n]}=\sum_{k=0}^n s(n,k)x^k$ where
$s(n,k)=(-1)^{n-k}c(n,k)$ is the Stirling number of the first kind.
\begin{lemm} $($\cite{AJ,CG}$)$
For any element $x$ of a unital $\mathbb{F}$-algebra with
$\text{char}(\mathbb{F})=0$, $a, \,b \in \mathbb{F}$ and $r,\, s,\,
t \in \mathbb{Z}$, one has
\begin{gather*}
x_a^{\langle s+t\rangle}=x_a^{\langle s\rangle}\,x_{a+s}^{\langle t\rangle},\tag{6} \\
x_a^{[s+t]}=x_a^{[s]}\,x_{a-s}^{[t]},\tag{7} \\
x_a^{[s]}=x_{a-s+1}^{\langle s\rangle},\tag{8} \\
\sum\limits_{s+t=r}\frac{(-1)^t}{s!\,t!}\,x_a^{[s]}\,x_b^{\langle
t\rangle}=\dbinom{a{-}b} {r}=\frac{(a{-}b)\cdots(a{-}b{-}r{+}1)}{r!}, \tag{9} \\
\sum\limits_{s+t=r}\frac{(-1)^t}{s!\,t!}\,x_a^{[s]}\,x_{b-s}^{[t]}=\dbinom{a{-}b{+}r{-}1}
{r}=\frac{(a{-}b)\cdots(a{-}b{+}r{-}1)}{r!}.\tag{10}
\end{gather*}
\end{lemm}
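Since all powers of the single element $x$ commute, (6)--(10) are polynomial identities in $x$, so they can be sanity-checked over $\mathbb{Q}$ at a generic rational point; the helper names below are ours:

```python
from fractions import Fraction
from math import factorial

def rising(x, a, n):
    """x_a^{<n>} = (x+a)(x+a+1)...(x+a+n-1)."""
    out = Fraction(1)
    for j in range(n):
        out *= x + a + j
    return out

def falling(x, a, n):
    """x_a^{[n]} = (x+a)(x+a-1)...(x+a-n+1)."""
    out = Fraction(1)
    for j in range(n):
        out *= x + a - j
    return out

x, a, b = Fraction(7, 3), Fraction(2), Fraction(5)
s, t, r = 2, 3, 4
assert rising(x, a, s + t) == rising(x, a, s) * rising(x, a + s, t)     # (6)
assert falling(x, a, s + t) == falling(x, a, s) * falling(x, a - s, t)  # (7)
assert falling(x, a, s) == rising(x, a - s + 1, s)                      # (8)
lhs = sum(Fraction((-1)**tt, factorial(r - tt) * factorial(tt))
          * falling(x, a, r - tt) * rising(x, b, tt) for tt in range(r + 1))
assert lhs == falling(a - b, 0, r) / factorial(r)                       # (9)
```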
\subsection{Quantization by Drinfel'd twists} The following
result is well-known (see \cite{CP,D,ES,NR}, etc.).
\begin{lemm}
Let $(A,m,\iota,\Delta_0,\varepsilon_0,S_0)$ be a Hopf algebra over
a commutative ring. A Drinfel'd twist $\mathcal{F}$ on $A$ is an
invertible element of $A\otimes A$ such that
\begin{gather*}(\mathcal{F}\otimes
1)(\Delta_0\otimes \text{\rm Id})(\mathcal{F})=(1\otimes
\mathcal{F})(\text{\rm Id}\otimes\Delta_0)(\mathcal{F}), \\
(\varepsilon_0\otimes \text{\rm Id})(\mathcal{F})=1=(\text{\rm
Id}\otimes \varepsilon_0)(\mathcal{F}).
\end{gather*} Then,
$w=m(\text{\rm Id}\otimes S_0)(\mathcal{F})$ is invertible in $A$
with $w^{-1}=m(S_0\otimes \text{\rm Id})(\mathcal{F}^{-1})$.
Moreover, if we define $\Delta: \,A\longrightarrow A\otimes A$ and
$S: \,A\longrightarrow A$ by
$$\Delta(a)=\mathcal{F}\Delta_0(a)\mathcal{F}^{-1},
\qquad S(a)=w\,S_0(a)\,w^{-1},$$ then $(A, m, \iota,
\Delta,\varepsilon,S)$ is a new Hopf algebra, called the twisting
of $A$ by the Drinfel'd twist $\mathcal{F}$.
\end{lemm}
Let $\mathbb{F}[[t]]$ be a ring of formal power series over a field
$\mathbb{F}$ with $\text{char}(\mathbb{F})=0$. Assume that $L$ is a
triangular Lie bialgebra over $\mathbb{F}$ with a classical
Yang-Baxter $r$-matrix $r$ (see \cite{D,ES}). Let $U(L)$ denote the
universal enveloping algebra of $L$, with the standard Hopf algebra
structure $(U(L),m,\iota,\Delta_0,\varepsilon_0,S_0)$.
\smallskip
Let us consider {\it the topologically free
$\mathbb{F}[[t]]$-algebra} $U(L)[[t]]$ (for the definition, see p.
4, \cite{ES}), which can be viewed as an associative
$\mathbb{F}$-algebra of formal power series with coefficients in
$U(L)$. Naturally, $U(L)[[t]]$ is equipped with an induced Hopf algebra
structure arising from that on $U(L)$ (via the coefficient ring
extension), by abuse of notation, denoted still by
$(U(L)[[t]],m,\iota,\Delta_0,\varepsilon_0,S_0)$.
\begin{defi} (\cite{HW})
For a triangular Lie bialgebra $L$ over $\mathbb{F}$ with
$\text{char}(\mathbb{F})=0$, $U(L)[[t]]$ is called {\it a
quantization of $U(L)$ by a Drinfel'd twist} $\mathcal{F}$ over
$U(L)[[t]]$ if $U(L)[[t]]/tU(L)[[t]]\cong U(L)$, and $\mathcal{F}$
is determined by its $r$-matrix $r$ (namely, its Lie bialgebra
structure).
\end{defi}
\subsection{Construction of Drinfel'd twists}
Let $L$ be a Lie algebra containing linearly independent elements
$h$ and $e$ satisfying $[h,\,e]=e$, then the classical Yang-Baxter
$r$-matrix $r=h\otimes e-e\otimes h$ equips $L$ with the structure
of triangular coboundary Lie bialgebra (see \cite{M}). To describe a
quantization of $U(L)$ by a Drinfel'd twist $\mathcal{F}$ over
$U(L)[[t]]$, we need an explicit construction for such a Drinfel'd
twist. In what follows, we shall see that such a twist depends upon
the choice of {\it two distinguished elements} $h,\,e$ arising from
its $r$-matrix $r$.
Recall the following results proved in \cite{CG} and \cite{HW}. Note
that $h$ and $e$ satisfy the following equalities:
$$e^s\cdot h_a^{[m]}=h_{a-s}^{[m]}\cdot e^s, \leqno{(11)}$$
$$e^s\cdot h_a^{\langle m\rangle}=h_{a-s}^{\langle m\rangle}\cdot e^s, \leqno{(12)}$$
where $m,\,s$ are non-negative integers, $a \in \mathbb{F}$.
For $a \in \mathbb{F}$, following \cite{CG}, we set
\begin{gather*}
\mathcal{F}_a=\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!}h_a^{[r]}\otimes
e^rt^r,\qquad F_a=\sum\limits_{r=0}^{\infty}\frac{1}{r!}h_a^{\langle
r\rangle}\otimes
e^rt^r,\\
u_a=m\cdot(S_0\otimes \text{\rm Id})(F_a),\qquad\quad
v_a=m\cdot(\text{\rm Id}\otimes S_0)(\mathcal{F}_a).
\end{gather*}
Write $\mathcal{F}=\mathcal{F}_0,\, F=F_0,\,u=u_0,\,v=v_0$.
Since $S_0(h_a^{\langle r\rangle})=(-1)^rh_{-a}^{[r]}$ and
$S_0(e^r)=(-1)^re^r$, one has
$$
v_a=\sum\limits_{r=0}^{\infty}\frac{1}{r!}h_a^{[r]}
e^rt^r, \quad
u_b=\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!}h_{-b}^{[r]}
e^rt^r.
$$
\begin{lemm} $($\cite{CG}$)$
For $a,\, b \in \mathbb{F}$, one has
$$
\mathcal{F}_a F_b=1\otimes(1-et)^{a-b} \quad\text{and }\quad v_a
u_b=(1-et)^{-(a+b)}.
$$
\end{lemm}
\begin{coro} $($\cite{CG}$)$
For $a \in \mathbb{F}$, $\mathcal{F}_a$ and $u_a$ are invertible
with $\mathcal{F}_a^{-1}=F_a$ and $u_a^{-1}=v_{-a}$. In particular,
$\mathcal{F}^{-1}=F$ and $u^{-1}=v$.
\end{coro}
\begin{lemm} $($\cite{CG}$)$ For any positive integer $r$, we have
$$\Delta_0(h^{[r]})=\sum\limits_{i=0}^r \dbinom{r}{i}h^{[i]}\otimes
h^{[r-i]}.$$ Furthermore, $\Delta_0(h^{[r]})=\sum\limits_{i=0}^r
\dbinom{r}{i}h^{[i]}_{-s}\otimes h^{[r-i]}_s$ for any $s \in
\mathbb{F}$.
\end{lemm}
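Since $\Delta_0(h)=h\otimes1+1\otimes h$, the lemma above is the coproduct form of the Chu--Vandermonde identity $(x+y)^{[r]}=\sum_{i=0}^{r}\binom{r}{i}\,x^{[i]}\,y^{[r-i]}$ for falling factorials, which can be checked numerically (helper name ours):

```python
from fractions import Fraction
from math import comb

def falling(z, n):
    """Falling factorial z^{[n]} = z(z-1)...(z-n+1)."""
    out = Fraction(1)
    for j in range(n):
        out *= z - j
    return out

x, y, r = Fraction(7, 2), Fraction(-4, 3), 5
assert falling(x + y, r) == sum(comb(r, i) * falling(x, i) * falling(y, r - i)
                                for i in range(r + 1))
```

The shifted version with parameter $s$ follows from the same identity applied to $x-s$ and $y+s$.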
\begin{prop} $($\cite{CG}$)$ If a Lie algebra $L$ contains a $2$-dimensional solvable
Lie subalgebra with a basis $\{h,\,e\}$ satisfying $[h,\,e]=e$, then
$\mathcal{F}=\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!}h^{[r]}\otimes
e^rt^r$ is a Drinfel'd twist on $U(L)[[t]]$.
\end{prop}
\begin{remark} Recently, we observed that Kulish et al.\ earlier used the so-called
{\it Jordanian twist} (see \cite{KL}) with the two-dimensional
carrier subalgebra $B(2)$ such that $[H,E]=E$, defined by the
canonical twisting element
$$\mathcal{F}_{\mathcal{J}}^{c}=\mathrm{exp}(H\otimes\sigma(t)), \quad
\sigma(t)=\mathrm{ln}(1+Et),$$ where
$\exp(X)=\sum_{i=0}^{\infty}\frac{X^n}{n!}$ and
$\ln(1+X)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}X^n$.
Expanding it, we get
\begin{equation*}
\begin{split}
\exp(H\otimes\sigma(t))
&=\exp\Bigl(\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}H\otimes (Et)^n\Bigr)\\
&=\prod_{n\ge 1}\Bigl(\sum_{\ell\ge
0}\frac{(-1)^{(n{+}1)\ell}}{n^\ell \ell!}H^\ell\otimes
(Et)^{n\ell}\Bigr)
\\
&=\sum_{n\ge1}\sum_{c_1,\cdots,c_n\ge0}\frac{(-1)^{c_1+2c_2+\cdots+nc_n-|\,\underline{c}\,|}}
{c_1!\cdots c_n!1^{c_1}2^{c_2}\cdots
n^{c_n}}H^{|\,\underline{c}\,|}\otimes
(Et)^{c_1+2c_2+\cdots+nc_n} \\
&=\sum_{n\ge0}\Bigl(\sum_{\underline{c}}\frac{(-1)^{n-|\,\underline{c}\,|}|\,\frak
S_n(\underline{c})\,|} {n!}H^{|\,\underline{c}\,|}\Bigr)\otimes
(Et)^n \\
&=\sum_{n\ge0}\left(\sum_{k=0}^n\frac{(-1)^{n-k}c(n,k)}
{n!}H^k\right)\otimes
(Et)^n \\
&=\sum_{n=0}^{\infty}\frac{1}{n!}H^{[n]}\otimes E^nt^n, \qquad
\end{split}
\end{equation*}
where we set $n=c_1+2c_2+\cdots+nc_n$,
$c(n,k)=\sum_{|\,\underline{c}\,|=k}|\,\frak S_n(\underline{c})\,|$.
So
$$(\mathcal{F}_{\mathcal{J}}^{c})^{-1}=\exp((-H)\otimes\sigma(t))=\sum_{r=0}^{\infty}
\frac{1}{r!}(-H)^{[r]}\otimes
E^rt^r=\sum_{r=0}^{\infty}\frac{(-1)^r}{r!}H^{\langle
r\rangle}\otimes E^rt^r.$$
Consequently, we can rewrite the twist $\mathcal{F}$ in
Proposition 1.9 as
$$\mathcal{F}=\sum_{r=0}^{\infty}
\frac{(-1)^r}{r!}H^{[r]}\otimes E^rt^r=\exp(H\otimes\sigma'(t)),
\quad \sigma'(t)=\ln(1-Et),$$ where $[H,-E]=-E$. Hence the twists
$\mathcal{F}$ and $\mathcal{F}_{\mathcal{J}}^{c}$ are essentially
the same, up to an isomorphism of the carrier subalgebra $B(2)$.
\end{remark}
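To illustrate the identities behind this remark, one may specialize $H$ and $E$ to commuting scalars, so that $\exp(H\otimes\sigma(t))$ becomes $(1+x)^h$ and the expansion reduces to the binomial series with falling factorials. The following Python sketch (illustrative only and not part of the original argument; all helper names are ours) checks this scalar specialization and the sign identity $(-h)^{[r]}=(-1)^r h^{\langle r\rangle}$ used above to invert the twist:

```python
from fractions import Fraction
from math import factorial

def falling(h, n):
    # h^[n] = h (h - 1) ... (h - n + 1), with h^[0] = 1
    out = Fraction(1)
    for j in range(n):
        out *= (h - j)
    return out

def rising(h, n):
    # h^<n> = h (h + 1) ... (h + n - 1), with h^<0> = 1
    out = Fraction(1)
    for j in range(n):
        out *= (h + j)
    return out

def binom_series(h, x, order):
    # truncation of sum_n h^[n]/n! x^n, the scalar shadow of
    # exp(H (x) sigma(t)) = sum_n 1/n! H^[n] (x) (Et)^n
    return sum(falling(h, n) / factorial(n) * x ** n
               for n in range(order + 1))

# For integer h >= 0 the series terminates and equals (1 + x)^h exactly.
for h in range(6):
    x = Fraction(3, 7)
    assert binom_series(h, x, 10) == (1 + x) ** h

# (-h)^[r] = (-1)^r h^<r>, the identity used to invert the Jordanian twist.
for h in range(-4, 5):
    for r in range(8):
        assert falling(-h, r) == (-1) ** r * rising(h, r)
```

The check confirms, in the commutative specialization, that the twist of Proposition 1.9 and the Jordanian twist differ only by the sign change $E\mapsto -E$.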
\section{Quantization of Lie bialgebra of generalized Cartan type $\mathbf{S}$}
In this section, we explicitly quantize the Lie bialgebras
$x^{\eta}\mathbf{S}$ of generalized Cartan type $\mathbf{S}$ by the
twist given in Proposition 1.9.
\subsection{Some commutative relations in $U(x^{\eta}\mathbf{S})$}
For the universal enveloping algebra $U(x^{\eta}\mathbf{S})$ of the
generalized Cartan type $\mathbf{S}$ Lie algebra
$x^{\eta}\mathbf{S}$ over $\mathbb{F}$, we first carry out some
necessary calculations, which are important for the quantization of
the Lie bialgebra structure of $x^{\eta}\mathbf{S}$ in the sequel.
\begin{lemm} Fix two distinguished
elements $h:=\partial_k{-}\partial_{k'}$, $e:=x^{\gamma}\partial_0 \in
x^{\gamma} T_{\gamma-\eta}$ with $\gamma_k-\gamma_{k'}=1$ for
x^{\eta}\mathbf{S}$. For $a \in \mathbb{F}$, $x^{\alpha}\partial
\in x^{\alpha}T_{\alpha-\eta},\, x^{\beta}\partial' \in
x^{\beta}T_{\beta-\eta}$, and any non-negative integer $m$, the
following equalities hold in $U(x^{\eta}\mathbf{S}):$
\begin{gather*}
x^{\alpha}\partial\cdot h_a^{[m]}=h_{a+(\alpha_{k'}-\alpha_k)}^{[m]}\cdot
x^{\alpha}\partial,
\tag{13}\\
x^{\alpha}\partial\cdot h_a^{\langle m\rangle}=h_{a+(\alpha_{k'}-\alpha_k)}^{\langle
m\rangle}\cdot x^{\alpha}\partial, \tag{14}\\
x^{\alpha}\partial\cdot(x^{\beta}\partial')^m=\sum\limits_{\ell=0}^m({-}1)^\ell\dbinom{m}
{\ell} (x^{\beta}\partial')^{m{-}\ell}\cdot
x^{\alpha{+}\ell\beta}\Bigl(a_\ell
\partial-b_\ell\partial'\Bigr), \tag{15}
\end{gather*}
where $a_\ell=\prod\limits_{j=0}^{\ell-1}\partial'(\alpha{+}j\beta)=
\prod\limits_{j=0}^{\ell-1}\partial'(\alpha{+}j\eta)$,
$b_\ell=\ell\,\partial(\beta)a_{\ell{-}1}$, and set $a_0=1$,
$b_0=0$.
\end{lemm}
\begin{proof}
Formulas (13) and (14) follow by induction on $m$.
Formula (15) is a consequence of the fact (see Proposition 1.3 (4),
\cite{HR}) that for any elements $a,\,c$ in an associative algebra,
one has
$$c\,a^m=\sum_{\ell=0}^m(-1)^{\ell}\dbinom{m}{\ell}a^{m{-}\ell}(\text{ad}\,a)^\ell(c),$$
together with the formula
$$
(\text{ad}\,x^{\beta}\partial')^\ell(x^\alpha\partial)
=x^{\alpha{+}\ell\beta}(a_\ell\partial-b_\ell\partial'),\leqno(16)$$
obtained by induction on $\ell$ when taking $a=x^{\beta}\partial',
\,c=x^\alpha\partial$.
\end{proof}
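The operator identity $c\,a^m=\sum_{\ell=0}^m(-1)^{\ell}\binom{m}{\ell}a^{m-\ell}(\text{ad}\,a)^{\ell}(c)$ holds in any associative algebra, so it can be sanity-checked on matrices. The following Python sketch (illustrative only; the helper names are ours) verifies it for random $2\times 2$ integer matrices:

```python
from math import comb
import random

N = 2  # work with 2 x 2 integer matrices

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def add(A, B, s=1):
    # entrywise A + s * B
    return [[A[i][j] + s * B[i][j] for j in range(N)] for i in range(N)]

def scale(c, A):
    return [[c * A[i][j] for j in range(N)] for i in range(N)]

def mpow(A, m):
    P = [[int(i == j) for j in range(N)] for i in range(N)]
    for _ in range(m):
        P = mul(P, A)
    return P

def ad_pow(a, ell, c):
    # (ad a)^ell (c) = [a, [a, ..., [a, c] ...]]
    for _ in range(ell):
        c = add(mul(a, c), mul(c, a), s=-1)
    return c

random.seed(0)
a = [[random.randint(-3, 3) for _ in range(N)] for _ in range(N)]
c = [[random.randint(-3, 3) for _ in range(N)] for _ in range(N)]
for m in range(6):
    lhs = mul(c, mpow(a, m))
    rhs = [[0] * N for _ in range(N)]
    for ell in range(m + 1):
        rhs = add(rhs, scale((-1) ** ell * comb(m, ell),
                             mul(mpow(a, m - ell), ad_pow(a, ell, c))))
    assert lhs == rhs
```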
To simplify the formulas in the sequel, we introduce the operators
$d^{(\ell)}$ $(\ell\geq 0)$ on $U(x^{\eta}\mathbf{S})$ defined by
$d^{(\ell)}:=\frac{1}{\ell!}(\text{\rm ad}\,e)^{\ell}$. From (16)
and the fact that $\text{\rm ad}\,e$ is a derivation, we readily obtain
\begin{lemm} For $\mathbb{Z}^n$-homogeneous elements $x^{\alpha}\partial$ and $a_1,\cdots,a_s$, the
following equalities hold in $U(x^{\eta}\mathbf{S}):$
\begin{gather*}
d^{(\ell)}(x^\alpha\partial)=x^{\alpha+\ell\gamma}(A_\ell\partial-B_\ell\partial_0),
\tag{17}\\
d^{(\ell)}(a_1\cdots
a_s)=\sum_{\ell_1{+}\cdots{+}\ell_s=\ell}d^{(\ell_1)}(a_1)\cdots
d^{(\ell_s)}(a_s), \tag{18}
\end{gather*}
where $A_\ell=\frac{1}{\ell!}
\prod\limits_{j=0}^{\ell{-}1}\partial_0(\alpha{+}j\gamma)=\frac{1}{\ell!}
\prod\limits_{j=0}^{\ell{-}1}\partial_0(\alpha{+}j\eta),
\,B_\ell=\partial(\gamma)A_{\ell{-}1}$, and set $A_0=1$, $A_{-1}=0$.
\end{lemm}
Denote by $(U(x^{\eta}\mathbf{S}),\, m,\, \iota,\, \Delta_0,\,
S_0,\, \varepsilon_0)$ the standard Hopf algebra structure of the
universal enveloping algebra $U(x^{\eta}\mathbf{S})$ for the Lie
algebra $x^{\eta}\mathbf{S}$.
\subsection{Quantization of $U(x^{\eta}\mathbf{S})$ in char 0}
We can perform the process of twisting the standard Hopf structure
$(U(x^{\eta}\mathbf{S})[[t]],\, m,\, \iota,\, \Delta_0,\, S_0,\,
\varepsilon_0)$ by the Drinfel'd twist $\mathcal{F}$ constructed in
Proposition 1.9.
\smallskip
The following lemma is crucial for the main result of this
section.
\begin{lemm}
For $a \in \mathbb{F},\,\alpha \in \mathbb{Z}^n$, and $x^{\alpha}\partial \in
x^{\alpha}T_{\alpha-\eta}$, one has
\begin{gather*}
\bigl((x^{\alpha}\partial)^s\otimes 1\bigr)\cdot
F_a=F_{a+s(\alpha_{k'}-\alpha_k)}\cdot
\bigl((x^{\alpha}\partial)^s\otimes 1\bigr), \tag{19}\\
(x^{\alpha}\partial)^s\cdot u_a=u_{a+s(\alpha_k-\alpha_{k'})}\cdot
\Bigl(\sum\limits_{\ell=0}^{\infty}d^{(\ell)}\bigl((x^\alpha\partial)^s\bigr)\cdot
h_{1-a}^{\langle \ell\rangle}t^\ell\Bigr), \tag{20} \\
\bigl(1\otimes (x^{\alpha}\partial)^s\bigr)\cdot
F_a=\sum\limits_{\ell=0}^{\infty}(-1)^\ell F_{a+\ell}\cdot
\Bigl(h_a^{\langle \ell\rangle}\otimes
d^{(\ell)}((x^{\alpha}\partial)^s)t^\ell\Bigr). \tag{21}
\end{gather*}
\end{lemm}
\begin{proof}
For (19): By (14), one has
\begin{equation*}
\begin{split}
(x^\alpha\partial\otimes 1)\cdot F_a
&=\sum\limits_{m=0}^{\infty}\frac{1}{m!}x^\alpha\partial \cdot
h_a^{\langle m\rangle}\otimes e^mt^m
\\
&=\sum\limits_{m=0}^{\infty}\frac{1}{m!}h_{a+(\alpha_{k'}-\alpha_k)}^{\langle
m\rangle}\cdot x^\alpha\partial \otimes e^mt^m
\\
&=F_{a+(\alpha_{k'}-\alpha_k)}\cdot(x^\alpha\partial \otimes 1).
\end{split}
\end{equation*}
The case of general $s$ then follows by induction.
For (20): Let
$a_\ell=\prod\limits_{j=0}^{\ell-1}\partial_0(\alpha{+}j\gamma),\,
b_\ell=\ell\,\partial(\gamma)a_{\ell{-}1}$; we proceed by induction on $s$.
For $s=1$, using (7), (11), (13) and (15), we get
\begin{equation*}
\begin{split}
x^{\alpha}\partial\cdot u_a &=x^{\alpha}\partial\cdot
\left(\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!} h_{-a}^{[r]}\cdot
e^rt^r \right) \\
&=
\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!}x^{\alpha}\partial\cdot
h_{-a}^{[r]}\cdot e^rt^r \\
&=\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!}h_{-a-(\alpha_k-\alpha_{k'})}^{[r]}\cdot
x^{\alpha}\partial\cdot
e^rt^r \\ &=\sum\limits_{r=0}^{\infty}\frac{(-1)^r}{r!}h_{-a-(\alpha_k-\alpha_{k'})}^{[r]}
\left(\sum\limits_{\ell=0}^{r}(-1)^\ell\dbinom{r}{\ell} e^{r-\ell}\cdot
x^{\alpha+\ell\gamma}(a_\ell\partial-b_\ell\partial_0)t^r
\right)\\
&=\sum\limits_{r,\ell=0}^{\infty}\frac{({-}1)^{r{+}\ell}}{(r{+}\ell)!}
h_{{-}a{-}(\alpha_k-\alpha_{k'})}^{[r{+}\ell]}
\left(({-}1)^\ell\dbinom{r{+}\ell}{\ell} e^{r}\cdot
x^{\alpha{+}\ell\gamma}(a_\ell\partial{-}b_\ell\partial_0)t^{r{+}\ell}
\right)\\
&=\sum\limits_{r,\ell=0}^{\infty}\frac{(-1)^{r}}{r!\ell!}
h_{-a-(\alpha_k-\alpha_{k'})}^{[r]}\cdot h_{-a-(\alpha_k-\alpha_{k'})-r}^{[\ell]}\cdot
e^{r}\cdot x^{\alpha+\ell\gamma}(a_\ell\partial-b_\ell\partial_0)t^{r+\ell}
\\
&=\sum\limits_{\ell=0}^{\infty}\left(\sum\limits_{r=0}^{\infty}\frac{(-1)^{r}}{r!}
h_{-a-(\alpha_k-\alpha_{k'})}^{[r]} e^{r}t^r\right)
h_{-a-(\alpha_k-\alpha_{k'})}^{[\ell]}\cdot
x^{\alpha+\ell\gamma}(A_\ell\partial{-}B_\ell\partial_0)t^{\ell}
\\
&=u_{a+(\alpha_k-\alpha_{k'})}\cdot\sum\limits_{\ell=0}^{\infty}h_{-a-(\alpha_k-\alpha_{k'})}^{[\ell]}\cdot
x^{\alpha+\ell\gamma}(A_\ell\partial-B_\ell\partial_0)t^{\ell}
\\
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
&=u_{a+(\alpha_k-\alpha_{k'})}\cdot\sum\limits_{\ell=0}^{\infty}x^{\alpha+\ell\gamma}
(A_\ell\partial-B_\ell\partial_0)\cdot h_{-a+\ell}^{[\ell]}t^{\ell}
\\
&=u_{a+(\alpha_k-\alpha_{k'})}\cdot\sum\limits_{\ell=0}^{\infty}d^{(\ell)}(x^{\alpha}\partial)\cdot
h_{-a+1}^{\langle
\ell\rangle}t^{\ell},
\end{split}
\end{equation*}
where $A_\ell=\frac{1}{\ell!}
\prod\limits_{j=0}^{\ell{-}1}\partial_0(\alpha{+}j\gamma)=\frac{1}{\ell!}
\prod\limits_{j=0}^{\ell{-}1}\partial_0(\alpha{+}j\eta),
\,B_\ell=\partial(\gamma)A_{\ell{-}1}$, and set $A_0=1$, $A_{-1}=0$.
Suppose $s\ge 1$. Using Lemma 2.2 and the induction hypothesis on
$s$, we have
\begin{equation*}
\begin{split}
(x^{\alpha}\partial)^{s{+}1}\cdot u_a &= x^{\alpha}\partial\cdot
u_{a+ s(\alpha_k-\alpha_{k'})} \cdot
\sum\limits_{n=0}^{\infty}d^{(n)}((x^{\alpha}\partial)^s)\cdot
h_{1-a}^{\langle
n \rangle} t^{n}\\
&=
u_{a{+}(s{+}1)(\alpha_k-\alpha_{k'})}{\cdot}\Bigl(\sum\limits_{m=0}^{\infty}d^{(m)}(x^{\alpha}\partial)\cdot
h_{1{-}a{-}s(\alpha_k-\alpha_{k'})}^{\langle m \rangle}
t^{m}\Bigr)\\
&\quad\qquad\qquad\qquad\cdot\Bigl(\sum\limits_{n=0}^{\infty}d^{(n)}((x^{\alpha}\partial)^s)\cdot
h_{1{-}a}^{\langle
n \rangle} t^{n}\Bigr)\\
&=u_{a{+}(s{+}1)(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{m,n=0}^{\infty}d^{(m)}(x^{\alpha}\partial)
d^{(n)}((x^{\alpha}\partial)^s)h_{1-a+n}^{\langle m
\rangle} h_{1-a}^{\langle n \rangle} t^{n+m}\Bigr)\\
&=u_{a{+}(s{+}1)(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}\sum\limits_{m+n=\ell}d^{(m)}(x^{\alpha}\partial)
d^{(n)}((x^{\alpha}\partial)^s) h_{1-a}^{\langle \ell \rangle} t^{\ell}\Bigr)\\
&=u_{a{+}(s{+}1)(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}d^{(\ell)}((x^{\alpha}\partial)^{s{+}1})h_{1-a}^{\langle
\ell \rangle} t^{\ell}\Bigr),
\end{split}
\end{equation*}
where the first and second ``='' follow from the inductive
hypothesis, the third from (14) \& (18), and the fourth from (6)
\& (18).
For (21): For $s=1$, using (15) we get
\begin{equation*}
\begin{split}
(1 \otimes x^\alpha\partial )\cdot F_a
&=\sum\limits_{m=0}^{\infty}\frac{1}{m!} h_a^{\langle m\rangle}\otimes
x^\alpha\partial \cdot e^mt^m
\\
&=\sum\limits_{m=0}^{\infty}\frac{1}{m!}h_{a}^{\langle m\rangle}\otimes
\left( \sum\limits_{\ell=0}^{m}(-1)^\ell\dbinom{m}{\ell}
e^{m{-}\ell}\cdot
x^{\alpha{+}\ell\gamma}(a_\ell\partial{-}b_\ell\partial_0)t^m\right)\\
&=\sum\limits_{m=0}^{\infty}\sum\limits_{\ell=0}^{\infty}(-1)^\ell\frac{1}{m!\ell!}h_{a}^{\langle
m+\ell\rangle}\otimes e^{m}\cdot
x^{\alpha+\ell\gamma}(a_\ell\partial-b_\ell\partial_0)t^{m+\ell}
\\
&=\sum\limits_{\ell=0}^{\infty}(-1)^\ell\left(\sum\limits_{m=0}^{\infty}\frac{1}{m!}h_{a+\ell}^{\langle
m\rangle}\otimes e^mt^m\right)\Bigl(h_a^{\langle \ell\rangle}\otimes
d^{(\ell)}(x^{\alpha}\partial) t^{\ell}\Bigr)
\\
&=\sum\limits_{\ell=0}^{\infty}(-1)^\ell F_{a+\ell}\cdot
\Bigl(h_a^{\langle \ell\rangle}\otimes d^{(\ell)}(x^{\alpha}\partial)t^{\ell}\Bigr).
\end{split}
\end{equation*}
For $s>1$, the claim follows from the induction hypothesis and (18).
\end{proof}
The following theorem gives the quantization of
$U(x^{\eta}\mathbf{S})$ by the Drinfel'd twist $\mathcal{F}$, which is
essentially determined by the Lie bialgebra triangular structure on
$x^{\eta}\mathbf{S}$.
\begin{theorem}
Fix two distinguished elements $h=\partial_k{-}\partial_{k'}$,
$e=x^{\gamma}\partial_0$, where
$\gamma$ satisfies $\gamma_k{-}\gamma_{k'}=1$ such that
$[h,\,e]=e$
in the generalized Cartan type $\mathbf{S}$ Lie
algebra $x^{\eta}\mathbf{S}$ over $\mathbb{F}$. Then there exists a
structure of noncommutative and noncocommutative Hopf algebra
$(U(x^{\eta}\mathbf{S})[[t]],m,\iota,\Delta,S,\varepsilon)$ on
$U(x^{\eta}\mathbf{S})[[t]]$ over $\mathbb{F}[[t]]$
with
$U(x^{\eta}\mathbf{S})[[t]]/tU(x^{\eta}\mathbf{S})[[t]]$ $\cong
U(x^{\eta}\mathbf{S})$, which leaves the product of
$U(x^{\eta}\mathbf{S})[[t]]$ undeformed but with the deformed
coproduct, antipode and counit defined by
\begin{gather*}
\Delta(x^{\alpha}\partial)=x^{\alpha}\partial\otimes
(1{-}et)^{\alpha_k-\alpha_{k'}}+\sum\limits_{\ell=0}^{\infty}(-1)^\ell
h^{\langle \ell\rangle}\otimes (1{-}et)^{-\ell}\cdot
d^{(\ell)}(x^{\alpha}\partial)t^\ell,
\tag{22}\\
S(x^{\alpha}\partial)=-(1{-}et)^{-(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
d^{(\ell)}(x^{\alpha}\partial)\cdot h_1^{\langle \ell\rangle}t^\ell\Bigr),
\tag{23}\\
\varepsilon(x^{\alpha}\partial)=0, \tag{24}
\end{gather*}
where $x^{\alpha}\partial \in x^{\alpha}T_{\alpha-\eta}$.
\end{theorem}
\begin{proof}
By Lemmas 1.4 and 1.6, it follows from (19) and (21) that
\begin{equation*}
\begin{split}
\Delta(x^{\alpha}\partial)
&=\mathcal{F}\cdot\Delta_0(x^{\alpha}\partial)\cdot\mathcal{F}^{-1}\\
&=\mathcal{F}\cdot(x^{\alpha}\partial\otimes 1)\cdot F+
\mathcal{F}\cdot(1\otimes x^{\alpha}\partial)\cdot F
\\
&=\Bigl(\mathcal{F}
F_{\alpha_{k'}-\alpha_k}\Bigr)\cdot(x^{\alpha}\partial{\otimes} 1)+
\sum\limits_{\ell=0}^{\infty}(-1)^\ell
\Bigl(\mathcal{F}F_{\ell}\Bigr)\cdot\Bigl(h^{\langle \ell\rangle}{\otimes}
d^{(\ell)}(x^{\alpha}\partial)t^\ell\Bigr)
\\ &=\Bigl(1\otimes
(1{-}et)^{\alpha_k-\alpha_{k'}}\Bigr)\cdot (x^{\alpha}\partial\otimes
1)\\
&\quad+\sum\limits_{\ell=0}^{\infty}(-1)^\ell \Bigl(1\otimes
(1{-}et)^{-\ell}\Bigr)\cdot\Bigl( h^{\langle \ell\rangle}\otimes
d^{(\ell)}(x^{\alpha}\partial)t^\ell\Bigr)
\\ &=x^{\alpha}\partial\otimes
(1{-}et)^{\alpha_k-\alpha_{k'}}+\sum\limits_{\ell=0}^{\infty}(-1)^\ell
h^{\langle \ell\rangle}\otimes (1{-}et)^{-\ell}\cdot
d^{(\ell)}(x^{\alpha}\partial)t^\ell.
\\
\end{split}
\end{equation*}
By (20) and Lemma 1.6, we obtain
\begin{equation*}
\begin{split}
S(x^{\alpha}\partial)&=u^{-1}S_0(x^{\alpha}\partial)\,u=-v\cdot
x^{\alpha}\partial\cdot u\\ &=-v \cdot u_{\alpha_k-\alpha_{k'}}\cdot
\Bigl(\sum\limits_{\ell=0}^{\infty}d^{(\ell)}(x^{\alpha}\partial)\cdot
h_{1}^{\langle \ell\rangle}t^\ell\Bigr)
\\ &=-(1{-}et)^{-(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
d^{(\ell)}(x^{\alpha}\partial)\cdot h_1^{\langle \ell\rangle}t^\ell\Bigr).
\end{split}
\end{equation*}
Hence, we get the result.
\end{proof}
For later use, we need to make the following
\begin{lemm} For $s\ge 1$, one has
\begin{gather*}
\Delta((x^\alpha\partial)^s)=\sum_{0\le j\le s\atop
\ell\ge0}\dbinom{s}{j}({-}1)^\ell(x^{\alpha}\partial)^jh^{\langle
\ell\rangle}\otimes(1{-}et)^{j(\alpha_k-\alpha_{k'}){-}\ell}
d^{(\ell)}((x^\alpha\partial)^{s{-}j})t^\ell.\tag{\text{\rm i}}\\
S((x^{\alpha}\partial)^s)=
(-1)^s(1{-}et)^{-s(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
d^{(\ell)}((x^{\alpha}\partial)^s)\cdot h_1^{\langle
\ell\rangle}t^\ell\Bigr).\tag{\text{\rm ii}}
\end{gather*}
\end{lemm}
\begin{proof} By (19), (21) and Lemma 1.6, we obtain
\begin{equation*}
\begin{split}
\Delta((x^{\alpha}\partial)^s)&=\mathcal{F}\Bigl(x^{\alpha}\partial\otimes
1+1\otimes x^{\alpha}\partial\Bigr)^s\mathcal{F}^{-1}\\
&=\sum_{j=0}^s\binom{s}{j}\mathcal{F}F_{j(\alpha_{k'}{-}\alpha_k)}
(x^\alpha\partial{\otimes}
1)^j\Bigl(\sum_{\ell\ge0}({-}1)^\ell\mathcal{F}F_{\ell}\bigl(h^{\langle\ell\rangle}{\otimes}
d^{(\ell)}((x^\alpha\partial)^{s{-}j})t^\ell\bigr)\Bigr)\\
&=\sum_{j=0}^s\sum_{\ell\ge0}\binom{s}{j}({-}1)^\ell\bigl((x^\alpha\partial)^j{\otimes}
(1{-}et)^{j(\alpha_k{-}\alpha_{k'}){-}\ell}\bigr)\bigl(h^{\langle\ell\rangle}{\otimes}
d^{(\ell)}((x^\alpha\partial)^{s{-}j})t^\ell\bigr)\\
&=\sum_{0\le j\le s\atop
\ell\ge0}\dbinom{s}{j}({-}1)^\ell(x^{\alpha}\partial)^jh^{\langle
\ell\rangle}\otimes(1{-}et)^{j(\alpha_k-\alpha_{k'}){-}\ell}
d^{(\ell)}((x^\alpha\partial)^{s{-}j})t^\ell.
\end{split}
\end{equation*}
Again by (20) and Lemma 1.6, we get
\begin{equation*}
\begin{split}
S((x^{\alpha}\partial)^s)&=u^{-1}S_0((x^{\alpha}\partial)^s)\,u=(-1)^s
v\cdot (x^{\alpha}\partial)^s\cdot u\\ &=(-1)^s v \cdot
u_{s(\alpha_k-\alpha_{k'})}\cdot
\Bigl(\sum\limits_{\ell=0}^{\infty}d^{(\ell)}((x^{\alpha}\partial)^s)\cdot
h_{1}^{\langle \ell\rangle}t^\ell\Bigr)
\\ &=(-1)^s(1{-}et)^{-s(\alpha_k-\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
d^{(\ell)}((x^{\alpha}\partial)^s)\cdot h_1^{\langle
\ell\rangle}t^\ell\Bigr).
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
\subsection{Quantization integral forms of $\mathbb{Z}$-form $\mathbf{S}_{\mathbb{Z}}^+$ in
char $0$} As is well known,
$\{\alpha_nx^{\alpha-\epsilon_n}D_i-\alpha_ix^{\alpha-\epsilon_i}D_n=x^{\alpha-\epsilon_i-\epsilon_n}(\alpha_n\partial_i-\alpha_i\partial_n)
\mid \alpha \in\mathbb{Z}_+^n, 1\le i< n\,\}$ is a
$\mathbb{Z}$-basis of $\mathbf{S}_{\mathbb{Z}}^+$, as a subalgebra
of both the simple Lie $\mathbb{Z}$-algebras
$x^{-\underline{1}}\mathbf{S}_{\mathbb{Z}}$ and
$\mathbf{W}^+_{\mathbb{Z}}$. In order to get the quantization
integral form of the $\mathbb{Z}$-form $\mathbf{S}_{\mathbb{Z}}^+$, it
suffices to consider what conditions are needed for the
coefficients occurring in the formulae (22) \& (23) to be integral
for the indicated basis elements.
\begin{lemm} $($\cite{CG}$)$ \
For any $a,\,k,\,\ell\in\mathbb{Z}$,
$a^\ell\prod\limits_{j=0}^{\ell-1}(k{+}ja)/\ell!$ is an
integer.\hfill\qed
\end{lemm}
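The integrality statement of this lemma is elementary to test numerically. The following Python sketch (illustrative only; the function name is ours) checks that $a^\ell\prod_{j=0}^{\ell-1}(k{+}ja)/\ell!$ is an integer over a range of parameters:

```python
from math import factorial

def grunspan(a, k, ell):
    # a^ell * prod_{j=0}^{ell-1} (k + j a) / ell!  -- claimed to be an integer
    num = a ** ell
    for j in range(ell):
        num *= k + j * a
    q, r = divmod(num, factorial(ell))
    assert r == 0, (a, k, ell)
    return q

# exhaustive check over a small parameter box
for a in range(-5, 6):
    for k in range(-5, 6):
        for ell in range(9):
            grunspan(a, k, ell)
```

For $a=1$ the quotient is the binomial coefficient $\binom{k+\ell-1}{\ell}$, which makes the integrality transparent in that special case.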
From this Lemma (due to Grunspan), we see that if we take
$\partial_0(\gamma)=\pm1$, then $A_\ell$ and $B_\ell$ are integers in
Theorem 2.4. In this paper, the cases we are interested in are:
$\mathrm{(i)}\ h=\partial_k{-}\partial_{k'}$,
$e=x^{\epsilon_k}(\partial_k{-}2\partial_{k'})$ $(1\leq k \neq k'\leq
n)$; $\mathrm{(ii)}\ h=\partial_k{-}\partial_{k'}$,
$e=x^{\epsilon_k-\epsilon_m}\partial_m$ $(1\leq k \neq k'\neq m\leq
n)$. The latter will be discussed in Section 5. Denote by
$\mathcal{F}(k,k')$ the corresponding Drinfel'd twist in the case
$\mathrm{(i)}$. As a result of Theorem 2.4, we have
\begin{coro}
Fix distinguished elements $h:=\partial_k{-}\partial_{k'}$,
$e:=x^{\epsilon_k}(\partial_k{-}2\partial_{k'})$ $(1\leq k \neq k'\leq n)$,
the corresponding quantization of $U(\mathbf{S}^+_{\mathbb{Z}})$
over $U(\mathbf{S}^+_{\mathbb{Z}})[[t]]$ by the Drinfel'd twist
$\mathcal{F}(k,k')$ with the product undeformed is given by
\begin{gather*}
\Delta(x^{\alpha}\partial)=x^{\alpha}\partial\otimes
(1{-}et)^{\alpha_k{-}\alpha_{k'}}+\sum\limits_{\ell=0}^{\infty}{({-}1)}^\ell
\,h^{\langle \ell\rangle}\otimes (1{-}et)^{{-}\ell}\cdot\tag{25} \\
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\, \cdot\,
x^{\alpha{+}\ell\epsilon_k}\bigl(A_\ell\partial-B_\ell(\partial_k{-}2\partial_{k'})\bigr)t^\ell,\\
S(x^{\alpha}\partial)={-}(1{-}et)^{-(\alpha_k{-}\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
\,x^{\alpha{+}\ell\epsilon_k}\bigl(A_\ell\partial-B_\ell(\partial_k{-}2\partial_{k'})\bigr)\cdot
h_1^{\langle \ell\rangle}t^\ell\Bigr), \tag{26}
\end{gather*}
\begin{gather*}
\varepsilon(x^{\alpha}\partial)=0, \tag{27}
\end{gather*}
where $
A_\ell=\frac{1}{\ell!}\prod\limits_{j=0}^{\ell-1}(\alpha_k{-}2\alpha_{k'}{+}j),\,
B_\ell=\partial(\epsilon_k) A_{\ell{-}1}$ with $A_0=1, A_{-1}=0$.
\end{coro}
\begin{remark} We get $n(n-1)$ {\it basic
Drinfel'd twists} $\mathcal{F}(1,2),{\cdots},\mathcal{F}(1,n)$,
$\mathcal{F}(2,1),$ $\cdots,\mathcal{F}(n,n-1)$ over $U(\mathbf{S}^+_{\mathbb{Z}})$. It is interesting to
consider the products of some {\it basic Drinfel'd twists}: using
the same argument as in the proof of Theorem 2.4 (at the cost of
somewhat more calculation), one can obtain many more new Drinfel'd
twists, which lead to many more new and more complicated
quantizations, not only over $U(\mathbf{S}_{\mathbb{Z}}^+)[[t]]$ but
possibly also over
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$, via our
modulo reduction approach developed in the next section.
\end{remark}
\section{Quantizations of the special algebra $\mathbf{S}(n;\underline{1})$ }
In this section, we first make {\it modulo $p$ reduction and base
change with the $\mathcal K[[t]]$ replaced by $\mathcal K[t]$}, for
the quantization of $U(\mathbf{S}^+_{\mathbb{Z}})$ in char $0$
(Corollary 2.7) to yield the quantization of
$U(\mathbf{S}(n;\underline{1}))$, for the restricted simple modular
Lie algebra $\mathbf{S}(n;\underline{1})$ in char $p$. Secondly, we
shall further make {\it ``$p$-restrictedness" reduction as well as
base change with the $\mathcal K[t]$ replaced by $\mathcal
K[t]_p^{(q)}$}, for the quantization of
$U(\mathbf{S}(n;\underline{1}))$, which will lead to the required
quantization of $\mathbf{u}(\mathbf{S}(n;\underline{1}))$, the
restricted universal enveloping algebra of
$\mathbf{S}(n;\underline{1})$.
\subsection{Modulo $p\,$ reduction and base change}
Let $\mathbb{Z}_p$ be the prime subfield of $\mathcal{K}$ with
$\text{char}(\mathcal{K})=p$. When considering
$\mathbf{W}_{\mathbb{Z}}^+$ as a $\mathbb{Z}_p$-Lie algebra, namely,
making modulo $p$ reduction for the defining relations of
$\mathbf{W}_{\mathbb{Z}}^+$, denoted by
$\mathbf{W}_{\mathbb{Z}_p}^+$, we see that
$(J_{\underline{1}})_{\mathbb{Z}_p}=\text{Span}_{\mathbb{Z}_p}\{x^\alpha
D_i \mid \exists\, j: \alpha_j\ge p\,\}$ is a maximal ideal of
$\mathbf{W}^+_{\mathbb{Z}_p}$, and
$\mathbf{W}^+_{\mathbb{Z}_p}/(J_{\underline{1}})_{\mathbb{Z}_p}
\cong \mathbf{W}(n;\underline{1})_{\mathbb{Z}_p}
=\text{Span}_{\mathbb{Z}_p}\{x^{(\alpha)}D_{i}\mid 0\le \alpha\le
\tau, 1\le i\le n\}$. For the subalgebra
$\mathbf{S}_{\mathbb{Z}}^+$, we have
$\mathbf{S}^+_{\mathbb{Z}_p}/(\mathbf{S}^+_{\mathbb{Z}_p}\cap(J_{\underline{1}})_{\mathbb{Z}_p})
\cong \mathbf{S}'(n;\underline{1})_{\mathbb{Z}_p}$. We denote simply
$\mathbf{S}^+_{\mathbb{Z}_p}\cap(J_{\underline{1}})_{\mathbb{Z}_p}$
as $(J^+_{\underline{1}})_{\mathbb{Z}_p}$.
Moreover, we have $\mathbf{S}'(n;\underline{1})
=\mathcal{K}\otimes_{\mathbb{Z}_p}\mathbf{S}'(n;\underline{1})_{\mathbb{Z}_p}
=\mathcal{K}\mathbf{S}'(n;\underline{1})_{\mathbb{Z}_p}$, and
$\mathbf{S}^+_{\mathcal{K}}=\mathcal{K}\mathbf{S}^+_{\mathbb{Z}_p}$.
Observe that the ideal
$J^+_{\underline{1}}:=\mathcal{K}(J^+_{\underline{1}})_{\mathbb{Z}_p}$
generates an ideal of $U(\mathbf{S}^+_{\mathcal{K}})$ over
$\mathcal{K}$, denoted by
$J:=J^+_{\underline{1}}U(\mathbf{S}^+_{\mathcal{K}})$, where
$\mathbf{S}^+_{\mathcal{K}}/J^+_{\underline{1}}\cong
\mathbf{S}'(n;\underline{1})$. Based on the formulae (25) \& (26),
$J$ is a Hopf ideal of $U(\mathbf{S}^+_{\mathcal{K}})$ satisfying
$U(\mathbf{S}^+_{\mathcal{K}})/J\cong
U(\mathbf{S}'(n;\underline{1}))$. Note that elements $\sum a_{i,
\alpha}\frac{1}{\alpha!}x^{\alpha}D_i$ in
$\mathbf{S}^+_{\mathcal{K}}$ for $0\le\alpha\le\tau$ will be
identified with $\sum a_{i, \alpha}x^{(\alpha)}D_i$ in
$\mathbf{S}'(n;\underline{1})$ and those in $J_{\underline{1}}$ with
$0$. Hence, by Lemma 1.2 and Corollary 2.7, we get the quantization
of $U(\mathbf{S}'(n;\underline{1}))$ over
$U_t(\mathbf{S}'(n;\underline{1})):=U(\mathbf{S}'(n;\underline{1}))\otimes_{\mathcal
K}\mathcal K[t]$ (not necessarily in
$U(\mathbf{S}'(n;\underline{1}))[[t]]$, as seen in formulae (28) \&
(29)) as follows.
\begin{theorem}
Fix two distinguished elements
$h:=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e:=2D_{kk'}(x^{(2\epsilon_k+\epsilon_{k'})})$ $(1\leq k\neq k'\leq
n)$, the corresponding quantization of
$U(\mathbf{S}'(n;\underline{1}))$ over
$U_t(\mathbf{S}'(n;\underline{1}))$ with the product undeformed is
given by
\begin{gather*}
\Delta(D_{ij}(x^{(\alpha)}))=D_{ij}(x^{(\alpha)})\otimes
(1{-}et)^{\alpha_k{-}\delta_{ik}{-}\delta_{jk}-\alpha_{k'}{+}\delta_{ik'}{+}\delta_{jk'}}\tag{28}\\
\qquad\qquad\qquad\quad\,+\sum\limits_{\ell=0}^{p{-}1}{({-}1)}^\ell
h^{\langle \ell\rangle}\otimes(1{-}et)^{{-}\ell}\Bigl(\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell\epsilon_k)})\\
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\, +\,\bar{B}_\ell
(\delta_{ik}D_{k'j}{+}\delta_{jk}D_{ik'})
(x^{(\alpha{+}(\ell{-}1)\epsilon_k+\epsilon_{k'})})\Bigr)t^\ell,
\end{gather*}
\begin{gather*}
S(D_{ij}(x^{(\alpha)}))={-}(1{-}et)^{-\alpha_k{+}\delta_{ik}{+}\delta_{jk}+\alpha_{k'}{-}\delta_{ik'}{-}\delta_{jk'}}
\cdot\Bigl(\sum\limits_{\ell=0}^{p{-}1}\bigl(\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell\epsilon_k)}) \tag{29}\\
\qquad\qquad +\bar{B}_\ell
(\delta_{ik}D_{k'j}{+}\delta_{jk}D_{ik'})(x^{(\alpha{+}(\ell-1)\epsilon_k+\epsilon_{k'})})\bigr)
\cdot h_1^{\langle \ell\rangle}t^\ell\Bigr),
\\
\varepsilon(D_{ij}(x^{(\alpha)}))=0, \tag{30}\\
\Delta(x^{(\tau-(p-1)\epsilon_j)}D_j)=x^{(\tau-(p-1)\epsilon_j)}D_j{\otimes}
(1{-}et)^{p\,(\delta_{jk'}{-}\delta_{jk})}+1{\otimes}
x^{(\tau-(p-1)\epsilon_j)}D_j, \tag {31}\\
S(x^{(\tau-(p-1)\epsilon_j)}D_j)=-(1{-}et)^{p\,(\delta_{jk}{-}\delta_{jk'})}x^{(\tau-(p-1)\epsilon_j)}D_j,
\tag {32}\\
\varepsilon(x^{(\tau-(p-1)\epsilon_j)}D_j)=0, \tag {33}
\end{gather*}
where $0\le \alpha \le \tau$, $1\le j<i\le n$, $\bar
A_\ell=\ell!\binom{\alpha_k{+}\ell}{\ell}(A_\ell-\delta_{jk}A_{\ell-1}-\delta_{ik}A_{\ell-1})\,(\text{\rm
mod} \,p)$, $\bar B_\ell=2\ell!
\binom{\alpha_k{+}\ell-1}{\ell-1}(\alpha_{k'}+1)A_{\ell-1}\,(\text{\rm
mod}\,p)$,
$A_\ell=\frac{1}{\ell!}\prod\limits_{m=0}^{\ell-1}(\alpha_k-\delta_{jk}-\delta_{ik}
-2\alpha_{k'}+2\delta_{jk'}+2\delta_{ik'}+m)$ and $A_0=1, A_{-1}=0$.
\end{theorem}
Note that (28), (29) \& (30) give the corresponding quantization of
$U(\mathbf{S}(n;\underline{1}))$ over
$U_t(\mathbf{S}(n;\underline{1})):=U(\mathbf{S}(n;\underline{1}))\otimes_{\mathcal
K}\mathcal K[t]$ (also over $U(\mathbf{S}(n;\underline{1}))[[t]]$).
It should be noticed that in this step (inducing from the
quantization integral form of $U(\mathbf S_{\mathbb Z}^+)$ and
making the modulo $p$ reduction), we used the first base change,
with $\mathcal K[[t]]$ replaced by $\mathcal K[t]$, so that the
objects pass from $U(\mathbf S(n;\underline1))[[t]]$ to
$U_t(\mathbf S(n;\underline1))$.
\subsection{Modulo ``$p$-restrictedness" reduction and base change}
Let $I$ be the ideal of $U(\mathbf{S}(n;\underline{1}))$ over
$\mathcal{K}$ generated by
$(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})$
and $(D_{ij}(x^{(\alpha)}))^p$ with $\alpha\ne \epsilon_i+\epsilon_j$
for $0\le\alpha\le \tau$ and $1\le j<i\le n$.
$\mathbf{u}(\mathbf{S}(n;\underline{1}))=U(\mathbf{S}(n;\underline{1}))/I$
is of dimension $p^{(n-1)(p^n-1)}$. In order to get a reasonable
finite-dimensional quantization of
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$ in char $p$, it is first
necessary to clarify the underlying vector space in
which the required $t$-deformed object exists. According to our
modular reduction approach, the induction should start from the
$\mathcal{K}[t]$-algebra $U_t(\mathbf{S}(n;\underline{1}))$ in
Theorem 3.1.
Firstly, we observe the following fact
\begin{lemm} $(\text{\rm i})$ \ $(1-et)^p\equiv 1 \quad (\text{\rm mod}\,p, I)$.
\smallskip $(\text{\rm ii})$ \ $(1-et)^{-1}\equiv
1+et+\cdots+e^{p-1}t^{p-1} \quad (\text{\rm mod}\,
p, I)$.
\smallskip $(\text{\rm iii})$ \ $h_a^{\langle \ell\rangle} \equiv 0 \quad
(\text{\rm mod} \, p, I)$ for $\ell \geq p$, and $a\in\mathbb{Z}_p$.
\end{lemm}
\begin{proof} (i), (ii) follow from $e^p=0$ in $\mathbf{u}(\mathbf{S}(n;\underline{1}))$.
(iii) For $\ell\in\mathbb{Z}_+$, there is a unique decomposition
$\ell=\ell_0+\ell_1p$ with $0\le \ell_0<p$ and $\ell_1\ge 0$. Using
the formulae (4) \& (6), we have
\begin{equation*}
\begin{split}
h_a^{\langle \ell\rangle}& =h_a^{\langle \ell_0\rangle}\cdot
h_{a+\ell_0}^{\langle \ell_1p\rangle}\equiv h_a^{\langle
\ell_0\rangle}\cdot (h_{a+\ell_0}^{\langle
p\rangle})^{\ell_1}\,\qquad
(\text{mod } p)\\
&\equiv h_a^{\langle \ell_0\rangle}\cdot (h^p-h)^{\ell_1}\quad
(\text{mod } p),
\end{split}
\end{equation*}
where we used the facts that $(x+1)(x+2)\cdots(x+p-1)\equiv
x^{p-1}-1\; (\text{mod } p)$, and $(x+a+\ell_0)^p\equiv
x^p+a+\ell_0\ (\text{mod } p)$. Hence, $h_a^{\langle \ell\rangle} \equiv
0$ (mod $p, \,I$) for $\ell \geq p$.
\end{proof}
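The polynomial congruence $(x+1)(x+2)\cdots(x+p-1)\equiv x^{p-1}-1\ (\text{mod } p)$ used in the proof (a polynomial form of Wilson's theorem) can be verified coefficientwise for small primes. The following Python sketch (illustrative only; helper names are ours) does so by multiplying out the left-hand side modulo $p$:

```python
def polymul_mod(f, g, p):
    # multiply two polynomials (coefficient lists, lowest degree first) mod p
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

for p in [2, 3, 5, 7, 11, 13]:
    f = [1]
    for c in range(1, p):
        f = polymul_mod(f, [c, 1], p)  # multiply by (x + c)
    # expected: x^(p-1) - 1, i.e. x^(p-1) + (p - 1) mod p
    expected = [0] * p
    expected[0] = p - 1
    expected[p - 1] = 1
    assert f == expected
```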
The above Lemma, together with Theorem 3.1, indicates that the
required $t$-deformation of
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$ (if it exists) in fact
only happens in a $p$-truncated polynomial ring (with degrees of $t$
less than $p$) with coefficients in
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$, i.e.,
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1})):=
\mathbf{u}(\mathbf{S}(n;\underline{1}))\otimes_{\mathcal K}
\mathcal{K}[t]_p^{(q)}$ (rather than in
$\mathbf{u}_t(\mathbf{S}(n;\underline{1})):=\mathbf{u}(\mathbf{S}(n;\underline{1}))
\otimes_{\mathcal K}\mathcal K[t]$), where $\mathcal{K}[t]_p^{(q)}$
is taken to be a $p$-truncated polynomial ring which is a quotient
of $\mathcal K[t]$ defined as
$$
\mathcal{K}[t]_p^{(q)}= \mathcal{K}[t]/(t^p-qt), \qquad\text{\it for
}\ q\in\mathcal{K}.\leqno(34)
$$
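Arithmetic in the $p$-truncated ring $\mathcal{K}[t]_p^{(q)}=\mathcal{K}[t]/(t^p-qt)$ amounts to ordinary polynomial multiplication followed by the reduction $t^{p+r}\mapsto q\,t^{1+r}$. A minimal Python sketch over $\mathcal K=\mathbb{Z}/p$ (illustrative only; names are ours) and a check of the defining relation $t^p=qt$:

```python
def trunc_mul(f, g, p, q):
    # product in K[t]/(t^p - q t) with K = Z/p; inputs are coefficient
    # lists of length p (degrees < p, lowest degree first)
    prod = [0] * (2 * p - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b
    # reduce using t^(p + r) = q * t^(1 + r); targets have degree < p
    for deg in range(2 * p - 2, p - 1, -1):
        if prod[deg]:
            prod[deg - p + 1] += q * prod[deg]
            prod[deg] = 0
    return [c % p for c in prod[:p]]

p, q = 5, 2
t = [0, 1, 0, 0, 0]
power = [1, 0, 0, 0, 0]  # t^0
for _ in range(p):
    power = trunc_mul(power, t, p, q)
# the defining relation t^p = q t holds in the truncated ring
assert power == [0, q, 0, 0, 0]
```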
Thereby, we obtain the underlying ring for our required
$t$-deformation of $\mathbf{u}(\mathbf{S}(n;\underline{1}))$ over
$\mathcal{K}[t]_p^{(q)}$, and
$\hbox{\rm dim}\,_{\mathcal{K}}\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))
=p\cdot\hbox{\rm dim}\,_{\mathcal{K}}\mathbf{u}(\mathbf{S}(n;\underline{1}))
=p^{1+(n-1)(p^n-1)}$. Via modulo ``$p$-restrictedness'' reduction, it is
necessary for us to pass from the objects in $U_t(\mathbf
S(n;\underline1))$ to $U_{t,q}(\mathbf S(n;\underline1))$
first, and then to $\mathbf u_{t,q}(\mathbf S(n;\underline1))$ (see
the proof of Theorem 3.5 below); here we used the second base change,
with $\mathcal K[t]$ replaced by $\mathcal K[t]_p^{(q)}$.
\smallskip
We are now in a position to describe the following
\begin{defi} With notations as above. A Hopf algebra
$(\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$, $m,
\iota,\Delta,S,\varepsilon)$ over a ring $\mathcal K[t]_p^{(q)}$ of
characteristic $p$ is said to be a finite-dimensional quantization
of $\mathbf{u}(\mathbf{S}(n;\underline{1}))$ if its Hopf algebra
structure, via modular reduction and base changes, is inherited from a
twisting of the standard Hopf algebra $U(\mathbf {S}^+_\mathbb
Z)[[t]]$ by a Drinfel'd twist such that
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))/t\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))
$ $\cong \mathbf{u}(\mathbf{S}(n;\underline{1}))$.
\end{defi}
To describe $\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$
explicitly, we still need an auxiliary Lemma.
\begin{lemm} Let $e=2D_{kk'}(x^{(2\epsilon_k+\epsilon_{k'})})$ and $d^{(\ell)}=\frac{1}{\ell!}(\text{\rm
ad}\,e)^\ell$. Then
\smallskip
$(\text{\rm i})$ \ $d^{(\ell)}(D_{ij}(x^{(\alpha)}))=\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell\epsilon_k)})+\bar{B}_\ell
(\delta_{ik}D_{k'j}+\delta_{jk}D_{ik'})(x^{(\alpha{+}(\ell-1)\epsilon_k+\epsilon_{k'})})$,
\smallskip\hskip0.5cm
where $\bar A_\ell, \bar B_\ell$ as in Theorem 3.1.
\smallskip $(\text{\rm ii})$ \
$d^{(\ell)}(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=\delta_{\ell,0}D_{ij}(x^{(\epsilon_i+\epsilon_j)})
-\delta_{1,\ell}(\delta_{ik}-\delta_{jk})e$.
\smallskip $(\text{\rm iii})$ \
$d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)=\delta_{\ell,0}(D_{ij}(x^{(\alpha)}))^p-\delta_{1,\ell}
(\delta_{ik}-\delta_{jk})\delta_{\alpha,\epsilon_i+\epsilon_j}e$.
\end{lemm}
\begin{proof} (i) Note that
$A_\ell=\frac{1}{\ell!}\prod\limits_{m=0}^{\ell-1}(\alpha_k{-}\delta_{jk}{-}\delta_{ik}
{-}2\alpha_{k'}{+}2\delta_{jk'}{+}2\delta_{ik'}{+}m)$, for $0\le
\alpha\le\tau$. By (17) and Theorem 3.1, we get
\begin{equation*}
\begin{split}
d^{(\ell)}(D_{ij}(x^{(\alpha)}))&=\frac{1}{\alpha!}d^{(\ell)}
(x^{\alpha{-}\epsilon_i{-}\epsilon_j}(\alpha_j\partial_i
{-}\alpha_i\partial_j))\\
&=\frac{1}{\alpha!}x^{\alpha{-}\epsilon_i{-}\epsilon_j+\ell\epsilon_k}(A_\ell(\alpha_j\partial_i
{-}\alpha_i\partial_j)-(\alpha_j\delta_{ik}{-}\alpha_i\delta_{jk})A_{\ell{-}1}(\partial_k{-}2\partial_{k'}))\\
&=\bar{A}_\ell D_{ij}(x^{(\alpha{+}\ell\epsilon_k)})+\bar{B}_\ell
(\delta_{ik}D_{k'j}{+}\delta_{jk}D_{ik'})(x^{(\alpha{+}(\ell-1)\epsilon_k+\epsilon_{k'})}).
\end{split}
\end{equation*}
(ii) For $\alpha=\epsilon_i+\epsilon_j$, note that $A_0=1$ and $A_\ell=0$ for $\ell \geq 1$,
\begin{gather*}
\bar
A_\ell=\ell!\binom{\alpha_k{+}\ell}{\ell}(A_\ell-\delta_{jk}A_{\ell-1}-\delta_{ik}A_{\ell-1})\quad
(\text{\rm
mod} \,p), \\
\bar B_\ell=2\ell!
\binom{\alpha_k{+}\ell{-}1}{\ell{-}1}(\alpha_{k'}{+}1)A_{\ell-1}\quad
(\text{\rm mod}\,p).
\end{gather*}
We obtain $\bar A_0=1$ and $\bar B_0=0$. We also obtain $\bar
A_1=-(\delta_{ik}+\delta_{jk})(\alpha_k+1)$, $\bar B_1=2(\alpha_{k'}+1)$
and $\bar A_\ell=\bar B_\ell=0$ for $\ell\ge 2$, namely,
$d^{(\ell)}(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=0$ for $\ell\ge 2$.
So by (i), we have
\begin{equation*}
\begin{split}
d^{(1)}(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))&=-(\delta_{ik}+\delta_{jk})(\alpha_k+1)
D_{ij}(x^{(\epsilon_i+\epsilon_j+\epsilon_k)})\\
& \quad +2(\alpha_{k'}+1)(\delta_{ik}D_{k'j}+\delta_{jk}D_{ik'})
(x^{(\epsilon_i+\epsilon_j+\epsilon_{k'})})
\\&=-(\delta_{ik}-\delta_{jk})e.
\end{split}
\end{equation*}
In any case, we arrive at the result as required.
\smallskip
(iii) From (15), we obtain that for $0\le \alpha\le\tau$,
\begin{equation*}
\begin{split}
d^{(1)}\,((D_{ij}(x^{(\alpha)}))^p)&=\frac{1}{(\alpha!)^p}\bigl[\,e,(D_{ij}(x^{\alpha}))^p\,\bigr]
=\frac{1}{(\alpha!)^p}\bigl[\,e,(x^{\alpha-\epsilon_i-\epsilon_j}(\alpha_j\partial_i-\alpha_i\partial_j))^p\,\bigr]\\
&=\frac{1}{(\alpha!)^p}\sum\limits_{\ell=1}^p(-1)^\ell\dbinom{p}
{\ell}(x^{\alpha-\epsilon_i-\epsilon_j}(\alpha_j\partial_i-\alpha_i\partial_j))^{p-\ell} \\
& \quad\quad \cdot
x^{\epsilon_k{+}\ell(\alpha{-}\epsilon_i-\epsilon_j)} (a_\ell
(\partial_k-2\partial_{k'})-b_\ell(\alpha_j\partial_i-\alpha_i\partial_j))\\
&\equiv
-\frac{a_p}{\alpha!}\,x^{2\epsilon_k{+}p(\alpha{-}\epsilon_i-\epsilon_j)}
(\partial_k-2\partial_{k'})\qquad(\text{mod }\,p\,)\\
&\equiv \begin{cases} -{a_p}\,e,\qquad & \text{\it if }\quad \alpha=\epsilon_i+\epsilon_j\\
0,\qquad & \text{\it if }\quad \alpha\ne\epsilon_i+\epsilon_j
\end{cases}\qquad(\text{mod }\,J),
\end{split}
\end{equation*}
where the last ``$\equiv$'' uses the identification modulo
the ideal $J$ as before, and
$a_\ell=\prod\limits_{m=0}^{\ell-1}(\alpha_j\partial_i-\alpha_i\partial_j)
(\epsilon_k+m(\alpha{-}\epsilon_i{-}\epsilon_j)),\
b_\ell=\ell\,(\partial_k-2\partial_{k'})(\alpha{-}\epsilon_i{-}\epsilon_j)a_{\ell-1}$,
and $a_p=\delta_{ik}-\delta_{jk}$ for
$\alpha=\epsilon_i+\epsilon_j$.
Consequently, by the definition of $d^{(\ell)}$, we get
$d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)=0$ in
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$ for $2\le \ell\le p-1$ and
$0\le\alpha\le\tau$.
\end{proof}
Based on Theorem 3.1, Definition 3.3 and Lemma 3.4, we arrive at
\begin{theorem} Fix two distinguished elements
$h:=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e:=2D_{kk'}(x^{(2\epsilon_k+\epsilon_{k'})})$ $(1\leq k\neq k'\leq
n)$. Then there is a noncommutative and noncocommutative Hopf algebra
$(\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1})),m,\iota,\Delta,S,\varepsilon)$
over $\mathcal{K}[t]_p^{(q)}$ with its algebra structure undeformed,
whose coalgebra structure is given by
\begin{gather*}
\Delta(D_{ij}(x^{(\alpha)}))=D_{ij}(x^{(\alpha)}){\otimes}
(1{-}et)^{\alpha_k{-}\delta_{ik}{-}\delta_{jk}-\alpha_{k'}{+}\delta_{ik'}{+}\delta_{jk'}}
\tag{36}
\\
\qquad \qquad\qquad
+\sum\limits_{\ell=0}^{p{-}1}{({-}1)}^\ell h^{\langle
\ell\rangle}{\otimes}(1{-}et)^{{-}\ell}d^{(\ell)}(D_{ij}(x^{(\alpha)}))t^\ell,
\\
S(D_{ij}(x^{(\alpha)})){=}{-}(1{-}et)^{-\alpha_k{+}\delta_{ik}{+}\delta_{jk}+\alpha_{k'}{-}\delta_{ik'}{-}\delta_{jk'}}
\cdot\Bigl(\sum\limits_{\ell=0}^{p{-}1}d^{(\ell)}(D_{ij}(x^{(\alpha)}))
\cdot h_1^{\langle \ell\rangle}t^\ell\Bigr), \tag{37}\\
\varepsilon(D_{ij}(x^{(\alpha)}))=0, \tag{38}
\end{gather*}
for $0\le\alpha\le\tau$, which is finite dimensional with
$\hbox{\rm dim}\,_{\mathcal{K}}\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))=p^{1{+}(n-1)(p^n-1)}$.
\end{theorem}
\begin{proof}
Set $U_{t,q}(\mathbf S(n;\underline 1)):=U(\mathbf S(n;\underline
1))\otimes_{\mathcal K}\mathcal K[t]_p^{(q)}$. Note that the result
of Theorem 3.1, via the base change with $\mathcal K[t]$ replaced by
$\mathcal K[t]_p^{(q)}$, is still valid over $U_{t,q}(\mathbf
S(n;\underline 1))$.
Denote by $I_{t,q}$ the ideal of $U_{t,q}(\mathbf S(n;\underline
1))$ over the ring $\mathcal K[t]_p^{(q)}$ generated by the same
generators of the ideal $I$ in $U(\mathbf S(n;\underline 1))$ via
the base change with $\mathcal K$ replaced by $\mathcal
K[t]_p^{(q)}$. We shall show that $I_{t,q}$ is a Hopf ideal of
$U_{t,q}(\mathbf S(n;\underline 1))$. It suffices to verify that
$\Delta$ and $S$ preserve the generators in $I_{t,q}$ of
$U_{t,q}(\mathbf S(n;\underline 1))$.
\smallskip
(I) \ By Lemmas 2.5, 3.2 \& 3.4 (iii), we obtain
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\alpha)}&))^p) =(D_{ij}(x^{(\alpha)}))^p\otimes
(1{-}et)^{p\,(\alpha_k{-}\alpha_{k'})}\\
&\qquad\qquad+\sum\limits_{\ell=0}^{\infty} ({-}1)^\ell h^{\langle
\ell\rangle}\otimes
(1{-}et)^{{-}\ell}d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)t^\ell\
\end{split}\tag{39}
\end{equation*}
\begin{equation*}
\begin{split}
&\equiv(D_{ij}(x^{(\alpha)}))^p{\otimes}1+\sum\limits_{\ell=0}^{p{-}1}
({-}1)^\ell h^{\langle \ell\rangle}{\otimes}
(1{-}et)^{{-}\ell}d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)t^\ell\quad
(\text{\rm mod }\, p)
\\
&=(D_{ij}(x^{(\alpha)}))^p{\otimes}1+1{\otimes}(D_{ij}(x^{(\alpha)}))^p
+h{\otimes}(1{-}et)^{-1}(\delta_{ik}{-}\delta_{jk})\delta_{\alpha,\epsilon_i{+}\epsilon_j}et.
\end{split}
\end{equation*}
Hence, when $\alpha\ne\epsilon_i+\epsilon_j$, we get
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\alpha)}))^p)&=(D_{ij}(x^{(\alpha)}))^p\otimes 1+1\otimes
(D_{ij}(x^{(\alpha)}))^p\\
& \in I_{t,q}\otimes
U_{t,q}(\mathbf{S}(n;\underline{1}))+U_{t,q}(\mathbf{S}(n;\underline{1}))\otimes
I_{t,q};
\end{split}
\end{equation*}
and when $\alpha=\epsilon_i+\epsilon_j$, by Lemma 3.4 (ii), (28)
becomes
$$
\Delta(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=D_{ij}(x^{(\epsilon_i+\epsilon_j)}){\otimes}
1+ 1{\otimes} D_{ij}(x^{(\epsilon_i+\epsilon_j)})+h{\otimes}
(1{-}et)^{-1}(\delta_{ik}{-}\delta_{jk})et.
$$
Combining with (39), we obtain
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})
)&\equiv((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})
)\otimes 1\\ &\quad +1\otimes
((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})
)\\
&\in I_{t,q}\otimes
U_{t,q}(\mathbf{S}(n;\underline{1}))+U_{t,q}(\mathbf{S}(n;\underline{1}))\otimes
I_{t,q}.
\end{split}
\end{equation*}
Thereby, we prove that the ideal $I_{t,q}$ is also a coideal of the
Hopf algebra $U_{t,q}(\mathbf{S}(n;\underline{1}))$.
\smallskip
(II) \ By Lemmas 2.5, 3.2 \& 3.4 (iii), we have
\begin{equation*}
\begin{split}
S((D_{ij}(x^{(\alpha)}))^p) &=-(1{-}et)^{-p(\alpha_k-\alpha_{k'})}
\sum\limits_{\ell=0}^{\infty} d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)\cdot
h_1^{\langle
\ell\rangle}t^\ell\\
&\equiv -(D_{ij}(x^{(\alpha)}))^p-\sum\limits_{\ell=1}^{p-1}
d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)\cdot h_1^{\langle \ell\rangle}t^\ell \quad
(\text{mod }\,p)\\
&=-(D_{ij}(x^{(\alpha)}))^p+(\delta_{ik}-\delta_{jk})\delta_{\alpha,\epsilon_i+\epsilon_j}e\cdot
h_1^{\langle 1\rangle} t.
\end{split}\tag{40}
\end{equation*}
Hence, when $\alpha\ne\epsilon_i+\epsilon_j$, we get
$$
S\bigl((D_{ij}(x^{(\alpha)}))^p\bigr)=-(D_{ij}(x^{(\alpha)}))^p\in
I_{t,q}.
$$
When $\alpha=\epsilon_i+\epsilon_j$, by Lemma 3.4 (ii), (29) reads as
$$
S(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))
=-D_{ij}(x^{(\epsilon_i+\epsilon_j)})+(\delta_{ik}-\delta_{jk})
e\cdot h_1^{\langle 1\rangle} t.
$$
Combining with (40), we obtain
$$
S\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})\bigr)
=-\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})\bigr)
\in I_{t,q}.
$$
Thereby, the ideal $I_{t,q}$ is indeed preserved by the antipode $S$
of the quantization $U_{t,q}(\mathbf{S}(n;\underline{1}))$, the same
as in Theorem 3.1.
\smallskip
(III) It is obvious that
$\varepsilon((D_{ij}(x^{(\alpha)}))^p)=0$ for all $\alpha$ with
$0\le\alpha\le\tau$.
\smallskip
In other words, we prove that $I_{t,q}$ is a Hopf ideal in
$U_{t,q}(\mathbf{S}(n;\underline{1}))$. We thus obtain the required
$t$-deformation on
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$, for the Cartan type
simple modular restricted Lie algebra of $\mathbf{S}$ type
--- the special algebra
$\mathbf{S}(n;\underline{1})$.
\end{proof}
\begin{remark} (i) \
Set $f=(1-et)^{-1}$. By Lemma 3.4 \& Theorem 3.5, one gets
$$
[h,f]=f^2-f,\quad h^p=h, \quad f^p=1, \quad
\Delta(h)=h\otimes f+1\otimes h,
$$
where $f$ is a group-like element, and $S(h)=-hf^{-1}$,
$\varepsilon(h)=0$. So the subalgebra generated by $h$ and $f$ is a
Hopf subalgebra of $\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$,
which is isomorphic to the well-known Radford Hopf algebra over
$\mathcal{K}$ in char $p$ (see \cite{R}).
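The first relation above can be verified directly; the short computation below is a sketch that assumes only the identity $[h,e]=e$, which holds for the distinguished pair $h$, $e$ fixed in Theorem 3.5:
\begin{equation*}
[h,f]=-f\,[\,h,1{-}et\,]\,f=f\,(et)\,f
=f\,\bigl(1-(1{-}et)\bigr)\,f=f^2-f.
\end{equation*}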
\smallskip
(ii) \ According to our argument, given a parameter
$q\in\mathcal{K}$, one can specialize $t$ to any root of the
$p$-polynomial $t^p-qt\in\mathcal{K}[t]$ in a splitting field of
$\mathcal{K}$. For instance, if we take $q=1$, then one can specialize
$t$ to any scalar in $\mathbb{Z}_p$. If we set $t=0$, then we get the
original standard Hopf algebra structure of
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$. In this way, we indeed
get a new Hopf algebra structure over the same restricted universal
enveloping algebra $\mathbf{u}(\mathbf{S}(n;\underline{1}))$ over
$\mathcal{K}$ under the assumption that $\mathcal{K}$ is
algebraically closed, which has the new coalgebra structure induced
by Theorem 3.5, but has dimension $p^{(n-1)(p^n-1)}$.
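The specialization claim for $q=1$ rests on the standard factorization of the $p$-polynomial over the prime field in characteristic $p$:
\begin{equation*}
t^p-t=\prod\limits_{c\in\mathbb{F}_p}(t-c),
\end{equation*}
so that every root of $t^p-t$ already lies in $\mathbb{Z}_p$ and no field extension is needed in this case.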
\end{remark}
\section{More quantizations}
In this section, we obtain more Drinfel'd twists by considering
products of pairwise different {\it basic Drinfel'd twists} as
stated in Remark 2.8. By the same argument as in Theorem 2.4, one
can get many more new quantizations, not only over
$U(\mathbf{S}_{\mathbb{Z}}^+)[[t]]$ but also over
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$. Moreover,
we prove that the twisted structures given by products of
pairwise different {\it basic Drinfel'd twists} of different
lengths are nonisomorphic.
\subsection{More Drinfel'd twists}
We consider the products of pairwise different and mutually
commutative basic Drinfel'd twists. Note that $[\mathcal{F}(i,j),
\mathcal{F}(k,m)]=0$ for $i\neq k,m$ and $j\neq k$. By the
definition of $\mathcal{F}(k,m)$, this fact implies the following
commutation relations in the case $i\neq k,m$ and $j\neq k$:
\begin{equation*}
\begin{split}
(\mathcal{F}(k,m)\otimes 1)(\Delta_0\otimes\text{\rm Id})
(\mathcal{F}(i,j))&=(\Delta_0\otimes\text{\rm Id})
(\mathcal{F}(i,j))(\mathcal{F}(k,m)\otimes 1),\\
(1\otimes \mathcal{F}(k,m))(\text{\rm Id}\otimes\Delta_0)
(\mathcal{F}(i,j))&=(\text{\rm Id}\otimes\Delta_0)
(\mathcal{F}(i,j))(1\otimes\mathcal{F}(k,m)),
\end{split}\tag{41}
\end{equation*}
which give rise to the following property.
\begin{theorem}
$\mathcal{F}(i,j)\mathcal{F}(k,m)$ $(i\neq k,m;\ j\neq k)$ is still a
Drinfel'd twist on $U(\mathbf{S}_{\mathbb{Z}}^+)[[t]]$.
\end{theorem}
\begin{proof}
Note that $\Delta_0\otimes\text{\rm id}$, $\text{\rm
id}\otimes\Delta_0$, $\varepsilon_0\otimes\text{\rm id}$ and
$\text{\rm id}\otimes\varepsilon_0$ are algebra homomorphisms.
According to Lemma 1.4, it suffices to check that
\begin{equation*}
\begin{split}
(\mathcal{F}(i,j)\mathcal{F}(k,m)\otimes 1)&(\Delta_0\otimes
\text{\rm Id})(\mathcal{F}(i,j)\mathcal{F}(k,m))\\
&= (1\otimes \mathcal{F}(i,j)\mathcal{F}(k,m))(\text{\rm
Id}\otimes\Delta_0) (\mathcal{F}(i,j)\mathcal{F}(k,m)).
\end{split}
\end{equation*}
Using $(41)$, we have
\begin{equation*}
\begin{split}
\text{LHS}&=(\mathcal{F}(i,j)\otimes 1)(\mathcal{F}(k,m)\otimes
1)(\Delta_0\otimes \text{\rm Id})(\mathcal{F}(i,j))(\Delta_0\otimes
\text{\rm
Id})(\mathcal{F}(k,m))\\
&=(\mathcal{F}(i,j)\otimes 1)(\Delta_0\otimes \text{\rm
Id})(\mathcal{F}(i,j))(\mathcal{F}(k,m)\otimes 1)(\Delta_0\otimes
\text{\rm
Id})(\mathcal{F}(k,m))\\
&=(1\otimes\mathcal{F}(i,j) )(\text{\rm Id}\otimes\Delta_0
)(\mathcal{F}(i,j))(1\otimes \mathcal{F}(k,m))(\text{\rm
Id}\otimes\Delta_0)(\mathcal{F}(k,m))\\
&=(1\otimes\mathcal{F}(i,j) )(1\otimes \mathcal{F}(k,m))(\text{\rm
Id}\otimes\Delta_0 )(\mathcal{F}(i,j))(\text{\rm
Id}\otimes\Delta_0)(\mathcal{F}(k,m))=\text{RHS}.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
More generally, we have the following
\begin{coro}
Let $\mathcal{F}(i_1,j_1),\cdots, \mathcal{F}(i_m,j_m)$ be $m$
pairwise different basic Drinfel'd twists and
$[\mathcal{F}(i_k,j_k), \mathcal{F}(i_s,j_s)]=0$ for all $1\leq k\neq s\leq
m$. Then $\mathcal{F}(i_1,j_1) \cdots \mathcal{F}(i_m,j_m)$
is still
a Drinfel'd twist.
\end{coro}
We write $\mathcal{F}_m=\mathcal{F}(i_1,j_1)\cdots
\mathcal{F}(i_m,j_m)$ and call $m$ the length of $\mathcal{F}_m$.
These twists lead to more quantizations.
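To illustrate, assume $n\ge 4$ and take the basic twists $\mathcal{F}(1,2)$ and $\mathcal{F}(3,4)$. The indices satisfy the condition of Theorem 4.1 (here $1\neq 3,4$ and $2\neq 3$), so these twists commute and
\begin{equation*}
\mathcal{F}_2=\mathcal{F}(1,2)\,\mathcal{F}(3,4)
\end{equation*}
is a Drinfel'd twist of length $2$ on $U(\mathbf{S}_{\mathbb{Z}}^+)[[t]]$.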
\subsection{More quantizations}
We consider the modular reduction process for the quantizations of
$U(\mathbf{S}^+)[[t]]$ arising from products of pairwise
different and mutually commutative basic Drinfel'd twists. We
then obtain many new families of noncommutative and noncocommutative
Hopf algebras of dimension $p^{1{+}(n-1)(p^n-1)}$ with indeterminate
$t$, or of dimension $p^{(n-1)(p^n-1)}$ when $t$ is specialized to a
scalar in $\mathcal{K}$.
Let $A(k,k')_\ell$,\, $B(k,k')_\ell$ and $A(m,m')_n$,\, $B(m,m')_n$
denote the coefficients of the corresponding quantizations of
$U(\mathbf{S}^+_{\mathbb{Z}})$ over
$U(\mathbf{S}^+_{\mathbb{Z}})[[t]]$ given by Drinfel'd twists
$\mathcal{F}(k,k')$ and $\mathcal{F}(m,m')$ respectively as in
Corollary 2.7. Note that $A(k,k')_0=A(m,m')_0=1$ and
$A(k,k')_{-1}=A(m,m')_{-1}=0$.
Set
\begin{equation*}
\begin{split}
\partial(m, m';k, k')_{\ell,n}&:=A(m,m')_nA(k,k')_\ell\partial-A(m,m')_nB(k,k')_\ell(\partial_k{-}2\partial_{k'})\\
&\qquad\qquad\quad\,-A(k,k')_\ell B(m,m')_n(\partial_{m}{-}2\partial_{m'}).
\end{split}
\end{equation*}
\begin{lemm}
Fix distinguished elements $h(k,k')=\partial_k{-}\partial_{k'}$,
$e(k,k')=x^{\epsilon_k}(\partial_k-2\partial_{k'})$ $(1\le k\neq k'\le n)$
and $h(m,m')=\partial_{m}{-}\partial_{m'}$,
$e(m,m')=x^{\epsilon_m}(\partial_{m}{-}2\partial_{m'})$ $(1\le m\neq m'\le n)$
with $k\neq m,m'$ and $k'\neq m$. Then the corresponding quantization
of $U(\mathbf{S}^+_{\mathbb{Z}})$ over
$U(\mathbf{S}^+_{\mathbb{Z}})[[t]]$ by the Drinfel'd twist
$\mathcal{F}=\mathcal{F}(m,m')\mathcal{F}(k,k')$, with the product
undeformed, is given by
\begin{gather*}
\Delta(x^{\alpha}\partial )= x^{\alpha}\partial \otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\bigl(1{-}e(m,m')t\bigr)^{\alpha_m{-}\alpha_{m'}}
\tag{42} \\\qquad\qquad\qquad\
+\sum\limits_{n,\ell=0}^{\infty}{({-}1)}^{n+\ell} h(k,k')^{\langle
\ell\rangle}\cdot h(m,m')^{\langle n \rangle}\otimes \bigl(1{-}e(k,k')t\bigr)^{{-}\ell}\\
\qquad\qquad\qquad\qquad\quad \cdot\,\bigl(1{-}e(m,m')t\bigr)^{{-}n}
x^{\alpha+\ell\epsilon_k+n\epsilon_m} \partial(m,m';k,k')_{\ell,n}
t^{n+\ell},\\
S(x^{\alpha}\partial)={-}\bigl(1{-}e(k,k')t\bigr)^{-\alpha_k{+}\alpha_{k'}}
\bigl(1{-}e(m,m')t\bigr)^{-\alpha_m{+}\alpha_{m'}}\cdot\tag {43}\\
\qquad\qquad\qquad\qquad\ \cdot \sum\limits_{n,\ell=0}^{\infty}
x^{\alpha+\ell\epsilon_k+n\epsilon_m} \partial(m,m';k,k')_{\ell,n} \cdot
h(m,m')_{1}^{\langle n\rangle}h(k,k')_{1}^{\langle \ell\rangle} t^{n+\ell},\\
\varepsilon(x^{\alpha}\partial
)=0, \tag{44}
\end{gather*}
for $x^\alpha\partial \in \mathbf{S}^+_{\mathbb{Z}}$.
\end{lemm}
\begin{proof} Using Corollary 2.7, we get
\begin{equation*}
\begin{split}
\Delta(x^{\alpha}\partial)&=\mathcal{F}(m,m')\mathcal{F}(k,k')
\Delta_0(x^{\alpha}\partial)\mathcal{F}(k,k')^{-1}\mathcal{F}(m,m')^{-1}\\
&=\mathcal{F}(m,m')\Bigl( x^{\alpha}\partial\otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\\
&\qquad+\sum\limits_{\ell=0}^{\infty}{({-}1)}^\ell h(k,k')^{\langle
\ell\rangle}\otimes\bigl(1{-}e(k,k')t\bigr)^{{-}\ell}\cdot
x^{\alpha{+}\ell\epsilon_k}\bigl(A(k,k')_\ell \partial \\
&\qquad\qquad -B(k,k')_\ell(\partial_k{-}2\partial_{k'})\bigr)t^\ell\Bigr)
\mathcal{F}(m,m')^{-1}.
\end{split}
\end{equation*}
Using (19) and Lemma 2.1, we get
\begin{equation*}
\begin{split}
\mathcal{F}(m,m')&\Bigl( x^{\alpha}\partial\otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\Bigr) \mathcal{F}(m,m')^{-1}\\
&=\mathcal{F}(m,m')\Bigl( x^{\alpha}\partial\otimes 1 \Bigr)
\mathcal{F}(m,m')^{-1}
\Bigl(1\otimes\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\Bigr)
\\
&=\mathcal{F}(m,m')\mathcal{F}(m,m')^{-1}_{\alpha_{m'}{-}\alpha_m}
\Bigl( x^{\alpha}\partial \otimes 1 \Bigr)\Bigl(1\otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\Bigr)\\
&=\Bigl(1\otimes\bigl(1{-}e(m,m')t\bigr)^{\alpha_m{-}\alpha_{m'}}\Bigr)
\Bigl( x^{\alpha}\partial \otimes 1 \Bigr)
\Bigl(1\otimes\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\Bigr)\\
&=x^{\alpha}\partial \otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha_k{-}\alpha_{k'}}\bigl(1{-}e(m,m')t\bigr)^{\alpha_m{-}\alpha_{m'}}.
\end{split}
\end{equation*}
Using (21), we have
\begin{equation*}
\begin{split}
\mathcal{F}&(m,m')\Bigl(\sum\limits_{\ell=0}^{\infty}{({-}1)}^\ell
h(k,k')^{\langle \ell\rangle}\otimes \bigl(1{-}e(k,k')t\bigr)^{{-}\ell} \\
&\qquad \cdot x^{\alpha{+}\ell\epsilon_k}\bigl(A(k,k')_\ell
\partial-B(k,k')_\ell(\partial_k{-}2\partial_{k'})\bigr)t^\ell\Bigr) \mathcal{F}(m,m')^{-1}\\
&=\sum\limits_{\ell=0}^{\infty}{({-}1)}^\ell h(k,k')^{\langle
\ell\rangle}\otimes \bigl(1{-}e(k,k')t\bigr)^{{-}\ell}\\
&\qquad \cdot
\mathcal{F}(m,m')\Bigl(1 \otimes
x^{\alpha{+}\ell\epsilon_k}\bigl(A(k,k')_\ell
\partial-B(k,k')_\ell(\partial_k{-}2\partial_{k'})\bigr)\Bigr)t^\ell \mathcal{F}(m,m')^{-1}\\
&=\sum\limits_{n,\ell=0}^{\infty}{({-}1)}^\ell h(k,k')^{\langle
\ell\rangle}\otimes \bigl(1{-}e(k,k')t\bigr)^{{-}\ell}\cdot
\mathcal{F}(m,m')F(m,m')_{n} \Bigl( h(m,m')^{\langle n \rangle}\otimes\\
&\qquad\quad
x^{\alpha+\ell\epsilon_k+n\epsilon_m}\partial(m,m';k,k')_{\ell,n}
t^{n+\ell}\Bigr)
\\
&=\sum\limits_{n,\ell=0}^{\infty}{({-}1)}^\ell h(k,k')^{\langle
\ell\rangle}h(m,m')^{\langle n \rangle}\otimes
\bigl(1{-}e(k,k')t\bigr)^{{-}\ell}\bigl(1{-}e(m,m')t\bigr)^{{-}n}\\
& \qquad \cdot
x^{\alpha+\ell\epsilon_k+n\epsilon_m}\partial(m,m';k,k')_{\ell,n}t^{n+\ell}.
\end{split}
\end{equation*}
For $k\neq m, m'$ and $k'\neq m$, by the definitions of $v$ and $u$,
we get
\begin{equation*}
\begin{split}
v&=v(k,k')v(m,m')=v(m,m')v(k,k'),\\
u&=u(m,m')u(k,k')=u(k,k')u(m,m').
\end{split}
\end{equation*}
Note that $u(m,m')h(k,k')=h(k,k')u(m,m')$ and
$v(m,m')e(k,k')=e(k,k')v(m,m')$. By Corollary 2.7 and (20), we have
\begin{equation*}
\begin{split}
S(x^{\alpha}\partial)&=-v\cdot x^{\alpha}\partial\cdot u \\
&=-v(m,m')v(k,k')\cdot x^{\alpha}\partial\cdot u(k,k')u(m,m')\\
&=v(m,m')\cdot\Bigl({-}\bigl(1{-}e(k,k')t\bigr)^{-\alpha_k{+}\alpha_{k'}}\cdot
\Bigl(\sum\limits_{\ell=0}^{\infty}
x^{\alpha{+}\ell\epsilon_k}\bigl(A(k,k')_\ell\partial \\
&\qquad -B(k,k')_\ell(\partial_k{-}2\partial_{k'})\bigr)\cdot h(k,k')_1^{\langle
\ell\rangle}t^\ell\Bigr)\Bigr)\cdot u(m,m')
\\
&={-}\bigl(1{-}e(k,k')t\bigr)^{-\alpha_k{+}\alpha_{k'}}\cdot
v(m,m')u(m,m')_{\alpha_m-\alpha_{m'}} \\
&\qquad \cdot \sum\limits_{n,\ell=0}^{\infty}
x^{\alpha+\ell\epsilon_k+n\epsilon_m}\partial(m,m';k,k')_{\ell,n}
\cdot h(m,m')_{1}^{\langle
n\rangle}h(k,k')_{1}^{\langle
\ell\rangle} t^{n+\ell}\\
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
&={-}\bigl(1{-}e(k,k')t\bigr)^{-\alpha_k{+}\alpha_{k'}}\bigl(1{-}e(m,m')t\bigr)^{-\alpha_m{+}\alpha_{m'}}
\\
&\qquad \cdot \sum\limits_{n,\ell=0}^{\infty}
x^{\alpha+\ell\epsilon_k+n\epsilon_m}\partial(m,m';k,k')_{\ell,n}
\cdot h(m,m')_{1}^{\langle
n\rangle}h(k,k')_{1}^{\langle \ell\rangle} t^{n+\ell}.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
Set $\alpha
(k,k')=\alpha_k{-}\delta_{ik}{-}\delta_{jk}{-}\alpha_{k'}{+}\delta_{ik'}{+}\delta_{jk'}$
and $d_{kk'}^{(\ell)}=\frac{1}{\ell!}(\text{\rm
ad}\,e(k,k'))^{\ell}$. Write coefficients $\bar A_\ell$, $\bar
B_\ell$, $ A_\ell$ in Theorem 3.1 as $\bar A(k,k')_\ell$, $\bar
B(k,k')_\ell$, $ A(k,k')_\ell$, respectively. Set
\begin{equation*}
\begin{split}
D_{ij}(m, m'; &k, k')_{\ell, n}:=\bar{A}(k,k')_\ell \bar{A}(m,m')_n
D_{ij}(x^{(\alpha{+}\ell\epsilon_k+n\epsilon_m)})\\
&\,+\bar{B}(k,k')_\ell \bar{A}(m,m')_n (\delta_{ik}D_{k'j}
+\delta_{jk}D_{ik'})(x^{(\alpha{+}(\ell-1)\epsilon_k+n\epsilon_m+\epsilon_{k'})})
\\
&\,+\bar{A}(k,k')_\ell \bar{B}(m,m')_n
(\delta_{im}D_{k'j}+\delta_{jm}D_{ik'})
(x^{(\alpha{+}\ell\epsilon_k+(n-1)\epsilon_m+\epsilon_{k'})}).
\end{split}
\end{equation*}
Using Lemma 4.3, we get a new quantization of
$U(\mathbf{S}(n;\underline{1}))$ over
$U_t(\mathbf{S}(n;\underline{1}))$ by Drinfel'd twist
$\mathcal{F}=\mathcal{F}(m,m')\mathcal{F}(k,k')$ as follows.
\begin{lemm}
Fix distinguished elements
$h(k,k')=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e(k,k')=2D_{kk'}(x^{(2\epsilon_k+\epsilon_{k'})})$;
$h(m,m')=D_{mm'}(x^{(\epsilon_m+\epsilon_{m'})})$,
$e(m,m')=2D_{mm'}(x^{(2\epsilon_m+\epsilon_{m'})})$ with $k\neq
m,m'$ and $k'\neq m$. Then the corresponding quantization of
$U(\mathbf{S}(n;\underline{1}))$ on
$U_t(\mathbf{S}(n;\underline{1}))$ $($also on
$U(\mathbf{S}(n;\underline{1}))[[t]])$, with the product undeformed,
is given by
\begin{gather*}
\Delta(D_{ij}(x^{(\alpha)}))=D_{ij}(x^{(\alpha)})\otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha(k,k')}\bigl(1{-}e(m,m')t\bigr)^{\alpha(m,m')}\tag{45}\\
\qquad\qquad\qquad\qquad\,+\sum\limits_{n,\ell=0}^{p{-}1}{({-}1)}^{n+\ell}h(k,k')^{\langle
\ell\rangle}h(m,m')^{\langle
n\rangle}\otimes \bigl(1{-}e(k,k')t\bigr)^{{-}\ell}\cdot\\
\qquad\qquad\qquad\qquad\qquad\qquad
\cdot\bigl(1{-}e(m,m')t\bigr)^{{-}n}D_{ij}(m, m'; k, k')_{\ell,
n}t^{n+\ell},
\\
S(D_{ij}(x^{(\alpha)}))={-}\bigl(1{-}e(k,k')t\bigr)^{-\alpha(k,k')}
\bigl(1{-}e(m,m')t\bigr)^{-\alpha(m,m')}\cdot \tag{46} \\
\qquad\qquad\qquad\qquad\qquad
\cdot\,\Bigl(\sum\limits_{n,\ell=0}^{p{-}1}D_{ij}(m,m';k,k')_{\ell,n}
h(k,k')_1^{\langle \ell\rangle}h(m,m')_1^{\langle n\rangle}t^{n+\ell}\Bigr),
\\
\varepsilon(D_{ij}(x^{(\alpha)}))=0, \tag{47}
\end{gather*}
where $0\le \alpha \le \tau$.
\end{lemm}
For the discussion below, we need two lemmas about the
quantization of $U(\mathbf{S}(n;\underline{1}))$ over
$U(\mathbf{S}(n;\underline{1}))[[t]]$ given in Lemma 4.4.
\begin{lemm} For $s\ge 1$, one has
\begin{gather*}
\Delta((D_{ij}(x^{(\alpha)}))^s)=\sum_{0\le j\le s\atop n, \ell\ge
0}\dbinom{s}{j}({-}1)^{n+\ell}(D_{ij}(x^{(\alpha)}))^jh(k,k')^{\langle
\ell\rangle}h(m,m')^{\langle
n\rangle}\otimes\tag{\text{\rm i}} \\
\qquad\qquad\qquad\qquad\quad
\bigl(1{-}e(k,k')t\bigr)^{j\alpha(k,k'){-}\ell}\bigl(1{-}e(m,m')t\bigr)^{j\alpha(m,m'){-}n}\cdot
\\
\qquad\qquad\qquad\qquad\qquad \cdot\,
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^{s{-}j})t^{\ell+n}.
\\
S((D_{ij}(x^{(\alpha)}))^s)=(-1)^s\bigl(1{-}e(m,m')t\bigr)^{-s\alpha(m,m')}
\bigl(1{-}e(k,k')t\bigr)^{-s\alpha(k,k')}\cdot\tag{\text{\rm ii}} \\
\qquad\qquad\qquad\qquad\quad\; \cdot
\Bigl(\sum\limits_{n,\ell=0}^{\infty}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^s)
h(k,k')_1^{\langle \ell\rangle} h(m,m')_1^{\langle n \rangle}t^{n+\ell}\Bigr).
\end{gather*}
\end{lemm}
\begin{proof} By Lemma 2.5, (21), (19) and Lemma 1.6, we obtain
\begin{equation*}
\begin{split}
&\Delta((D_{ij}(x^{(\alpha)}))^s)
=\mathcal{F}\Bigl(D_{ij}(x^{(\alpha)})\otimes
1+1\otimes D_{ij}(x^{(\alpha)})\Bigr)^s\mathcal{F}^{-1}\\
&=\mathcal{F}(m,m')\Bigl(\sum_{0\le j\le s\atop
\ell\ge0}\dbinom{s}{j}({-}1)^\ell(D_{ij}(x^{(\alpha)}))^jh(k,k')^{\langle
\ell\rangle}\otimes\bigl(1{-}e(k,k')t\bigr)^{j\alpha(k,k'){-}\ell}\\
& \quad
\cdot d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^{s{-}j})t^\ell\Bigr)\mathcal{F}(m,m')^{-1}\\
&=\mathcal{F}(m,m')\Bigl(\sum_{0\le j\le s\atop
\ell\ge0}\dbinom{s}{j}({-}1)^\ell\bigl((D_{ij}(x^{(\alpha)}))^j{\otimes}
1\bigr) \bigl(h(k,k')^{\langle
\ell\rangle}{\otimes}\bigl(1{-}e(k,k')t\bigr)^{j\alpha(k,k'){-}\ell}\bigr)\\
& \quad
\cdot \bigl(1\otimes d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^{s{-}j})t^\ell\bigr)\Bigr)\mathcal{F}(m,m')^{-1}\\
&=\mathcal{F}(m,m')\sum_{0\le j\le s\atop n, \ell\ge
0}\dbinom{s}{j}({-}1)^{n+\ell}\bigl((D_{ij}(x^{(\alpha)}))^j\otimes
1\bigr)\mathcal{F}(m,m')_n^{-1}h(k,k')^{\langle
\ell\rangle}h(m,m')^{\langle n\rangle}\\ & \quad
\otimes\bigl(1{-}e(k,k')t\bigr)^{j\alpha(k,k'){-}\ell}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^{s{-}j})t^{\ell+n}\\
&=\sum_{0\le j\le s\atop n, \ell\ge
0}\dbinom{s}{j}({-}1)^{n+\ell}\mathcal{F}(m,m')\mathcal{F}(m,m')_{n-j\alpha(m,m')}^{-1}
\bigl((D_{ij}(x^{(\alpha)}))^j\otimes 1\bigr)h(k,k')^{\langle \ell\rangle}\\
& \quad \cdot h(m,m')^{\langle n\rangle}
\otimes\bigl(1{-}e(k,k')t\bigr)^{j\alpha(k,k'){-}\ell}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^{s{-}j})t^{\ell+n}\\
&=\sum_{0\le j\le s\atop n, \ell\ge
0}\dbinom{s}{j}({-}1)^{n+\ell}(D_{ij}(x^{(\alpha)}))^jh(k,k')^{\langle
\ell\rangle}h(m,m')^{\langle
n\rangle}\otimes\bigl(1{-}e(k,k')t\bigr)^{j\alpha(k,k'){-}\ell}
\\ & \quad \cdot\bigl(1{-}e(m,m')t\bigr)^{j\alpha(m,m'){-}n}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^{s{-}j})t^{\ell+n}.
\end{split}
\end{equation*}
Again by (20) and Lemma 1.6,
\begin{equation*}
\begin{split}
S((D_{ij}(x^{(\alpha)}))^s)&=u^{-1}S_0((D_{ij}(x^{(\alpha)}))^s)\,u\\
&=(-1)^s v\cdot (D_{ij}(x^{(\alpha)}))^s\cdot u \\ &=(-1)^s
v(m,m')\Bigl(\bigl(1{-}e(k,k')t\bigr)^{-s\alpha(k,k')}
\\ & \quad \cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^s)\cdot h(k,k')_1^{\langle
\ell\rangle}t^\ell\Bigr)\Bigr)u(m,m')
\\
&=(-1)^s
v(m,m')u(m,m')_{s\alpha(m,m')}\bigl(1{-}e(k,k')t\bigr)^{-s\alpha(k,k')}\\
&\quad \cdot \Bigl(\sum\limits_{n,\ell=0}^{\infty}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^s)\cdot
h(k,k')_1^{\langle
\ell\rangle} h(m,m')_1^{\langle n \rangle}t^{n+\ell}\Bigr) \\
&=(-1)^s
\bigl(1{-}e(m,m')t\bigr)^{-s\alpha(m,m')}\bigl(1{-}e(k,k')t\bigr)^{-s\alpha(k,k')}\\
& \quad \cdot \Bigl(\sum\limits_{n,\ell=0}^{\infty}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^s)\cdot
h(k,k')_1^{\langle \ell\rangle} h(m,m')_1^{\langle n \rangle}t^{n+\ell}\Bigr).
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
\begin{lemm} \ Set $e(k,k')=2D_{kk'}(x^{(2\epsilon_k+\epsilon_{k'})})$, $e(m,m')=2D_{mm'}
(x^{(2\epsilon_m+\epsilon_{m'})})$,
\smallskip
$d_{kk'}^{(\ell)}=\frac{1}{\ell!}(\text{\rm ad}\,e(k,k'))^\ell$ and
$d_{mm'}^{(n)}=\frac{1}{n!}(\text{\rm ad}\,e(m,m'))^n$. Then
\smallskip
$(\text{\rm i})$ \
$d_{mm'}^{(n)}d_{kk'}^{(\ell)}(D_{ij}(x^{(\alpha)}))=D_{ij}(m,m';k,k')_{\ell,n}$,
\smallskip
\hskip0.6cm where $D_{ij}(m,m';k,k')_{\ell,n}$ as in Lemma 4.4.
\smallskip
$(\text{\rm ii})$ \
$d_{mm'}^{(n)}d_{kk'}^{(\ell)}(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))
=\delta_{\ell,0}\delta_{n,0}D_{ij}(x^{(\epsilon_i+\epsilon_j)})
-\delta_{n,0}\delta_{1,\ell}(\delta_{ik}{-}\delta_{jk})e(k,k')$
\smallskip
\hskip5cm
$-\,\delta_{\ell,0}\delta_{1,n}(\delta_{im}{-}\delta_{jm})e(m,m')$.
\smallskip
$(\text{\rm iii})$ \
$d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)=\delta_{\ell,0}\delta_{n,0}(D_{ij}(x^{(\alpha)}))^p
-\delta_{n,0}\delta_{1,\ell}
(\delta_{ik}{-}\delta_{jk})\delta_{\alpha,\epsilon_i+\epsilon_j}\cdot$
\smallskip
\hskip5cm $\cdot e(k,k')-\delta_{\ell,0}\delta_{1,n}
(\delta_{im}{-}\delta_{jm})\delta_{\alpha,\epsilon_i+\epsilon_j}e(m,m')$.
\end{lemm}
\begin{proof} (i) For $0\le \alpha\le\tau$, using (17), we
obtain
\begin{equation*}
\begin{split}
d_{mm'}^{(n)}&d_{kk'}^{(\ell)}(D_{ij}(x^{(\alpha)}))\\
&=d_{mm'}^{(n)}d_{kk'}^{(\ell)}\Bigl(\frac{1}{\alpha !}
x^{\alpha-\epsilon_i-\epsilon_j}(\alpha_j\partial_i-\alpha_i\partial_j)\Bigr)\\
&=d_{mm'}^{(n)}\Bigl(\frac{1}{\alpha
!}x^{\alpha-\epsilon_i-\epsilon_j+\ell\epsilon_k}\bigl(A(k,k')_\ell
(\alpha_j\partial_i{-}\alpha_i\partial_j)-B(k,k')_\ell(\partial_k{-}2\partial_{k'})\bigr)\Bigr)\\
&=\frac{1}{\alpha
!}x^{\alpha-\epsilon_i-\epsilon_j+\ell\epsilon_k+n\epsilon_m}\bigl(A(k,k')_\ell
A(m,m')_n (\alpha_j\partial_i{-}\alpha_i\partial_j)\\
& \quad -A(m,m')_n
B(k,k')_\ell(\partial_k{-}2\partial_{k'})-
A(k,k')_\ell B(m,m')_n(\partial_{m}{-}2\partial_{m'})\bigr)\\
&=D_{ij}(m,m';k,k')_{\ell,n}.
\end{split}
\end{equation*}
Parts (ii) and (iii) follow directly from Lemma 3.4.
\end{proof}
Using Lemmas 3.2, 3.4, 4.5 \& 4.6, we get a new Hopf algebra
structure over the same restricted universal enveloping algebra
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$ over $\mathcal{K}$ by the
product of two different and mutually commutative basic Drinfel'd twists.
\begin{theorem} Fix two distinguished elements
$h(k,k'):=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e(k,k'):=2D_{kk'}(x^{(2\epsilon_k+\epsilon_{k'})})$ $(1\le k\neq
k'\le n)$ and $h(m,m'):=D_{mm'}(x^{(\epsilon_m+\epsilon_{m'})})$,
$e(m,m'):=2D_{mm'}(x^{(2\epsilon_m+\epsilon_{m'})})$ $(1\le m\neq
m'\le n)$ with $k\neq m, m'$ and $k'\neq m$. Then there is a
noncommutative and noncocommutative Hopf algebra
$(\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1})),m,\iota,$
$\Delta,S,\varepsilon)$ over $\mathcal{K}[t]_p^{(q)}$ with the
product undeformed, whose coalgebra structure is given by
\begin{gather*}
\Delta(D_{ij}(x^{(\alpha)}))=D_{ij}(x^{(\alpha)})\otimes
\bigl(1{-}e(k,k')t\bigr)^{\alpha(k,k')}\bigl(1{-}e(m,m')t\bigr)^{\alpha(m,m')} \tag{48}\\
\qquad\qquad\qquad\qquad+
\sum\limits_{n,\ell=0}^{p-1}(-1)^{\ell+n}h(k,k')^{\langle \ell\rangle}
h(m,m')^{\langle n \rangle}\otimes \bigl(1{-}e(k,k')t\bigr)^{-\ell}\cdot\\
\qquad\qquad\qquad\qquad\qquad\qquad\cdot\, \bigl(1{-}e(m,m')t\bigr)^{-n}d_{kk'}^{(\ell)}d_{mm'}^{(n)}(D_{ij}(x^{(\alpha)}))t^{\ell+n},\\
S(D_{ij}(x^{(\alpha)}))=-\bigl(1{-}e(k,k')t\bigr)^{-\alpha(k,k')}
\bigl(1{-}e(m,m')t\bigr)^{-\alpha(m,m')}\cdot \tag{49}\\
\qquad\qquad\qquad\qquad\qquad\cdot\,
\Bigl(\sum\limits_{n,\ell=0}^{p-1}d_{kk'}^{(\ell)}d_{mm'}^{(n)}(D_{ij}(x^{(\alpha)}))
h(k,k')_1^{\langle \ell\rangle}h(m,m')_1^{\langle n\rangle}t^{\ell+n}\Bigr),\\
\varepsilon(D_{ij}(x^{(\alpha)}))=0, \tag{50}
\end{gather*}
where $0\le\alpha\le\tau$, and
$\hbox{\rm dim}\,_{\mathcal{K}}\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))=p^{1{+}(n-1)(p^n-1)}$.
\end{theorem}
\begin{proof}
Let $I_{t,q}$ denote the ideal of
$(U_{t,q}(\mathbf{S}(n;\underline{1})), m,
\iota,\Delta,S,\varepsilon)$ over the ring $\mathcal K[t]_p^{(q)}$
generated by the same generators as in $I$ ($q\in\mathcal{K}$).
Observe that the result in Lemma 4.4, via the base change with
$\mathcal K[t]$ replaced by $\mathcal K[t]_p^{(q)}$, is still valid
for $U_{t,q}(\mathbf{S}(n;\underline{1}))$.
In what follows, we shall show that $I_{t,q}$ is a Hopf ideal of
$U_{t,q}(\mathbf{S}(n;\underline{1}))$. To this end, it suffices to
verify that $\Delta$ and $S$ preserve the generators of $I_{t,q}$.
\smallskip
(I) \ By Lemmas 4.5, 3.2, 3.4 \& 4.6, we obtain
\begin{equation*}
\begin{split}
\Delta((D_{ij}(&x^{(\alpha)}))^p)=(D_{ij}(x^{(\alpha)}))^p\otimes
(1{-}e(k,k')t)^{p\,\alpha(k,k')}(1{-}e(m,m')t)^{p\,\alpha(m,m')}\\
&\quad+\sum\limits_{n,\ell=0}^{\infty} ({-}1)^{n+\ell} h(k,k')^{\langle
\ell\rangle} h(m,m')^{\langle n \rangle}\otimes
(1{-}e(k,k')t)^{{-}\ell}\\
& \quad \cdot(1{-}e(m,m')t)^{{-}n}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)t^{n+\ell}
\\
&\equiv(D_{ij}(x^{(\alpha)}))^p{\otimes}1{+}\sum\limits_{n,\ell=0}^{p{-}1}
({-}1)^{n+\ell} h(k,k')^{\langle \ell\rangle} h(m,m')^{\langle n
\rangle}{\otimes}(1{-}e(k,k')t)^{{-}\ell}
\\ & \quad \cdot
(1{-}e(m,m')t)^{{-}n}d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)t^{n+\ell}
\quad (\text{\rm mod }\, p)
\\&=(D_{ij}(x^{(\alpha)}))^p{\otimes}1
+\sum\limits_{n,\ell=0}^{p{-}1} ({-}1)^{n+\ell} h(k,k')^{\langle
\ell\rangle} h(m,m')^{\langle n \rangle}{\otimes} (1{-}e(k,k')t)^{{-}\ell}\\ &
\quad \cdot (1{-}e(m,m')t)^{{-}n}
\Big(\delta_{\ell,0}\delta_{n,0}(D_{ij}(x^{(\alpha)}))^p-\delta_{n,0}\delta_{1,\ell}
(\delta_{ik}{-}\delta_{jk})\\
& \quad \cdot
\delta_{\alpha,\epsilon_i+\epsilon_j}e(k,k')-\delta_{\ell,0}\delta_{1,n}
(\delta_{im}{-}\delta_{jm})\delta_{\alpha,\epsilon_i+\epsilon_j}e(m,m')\Big
)t^{n+\ell}
\\
&=(D_{ij}(x^{(\alpha)}))^p{\otimes}1+1{\otimes}(D_{ij}(x^{(\alpha)}))^p\\
& \quad
+h(k,k'){\otimes}(1{-}e(k,k')t)^{-1}(\delta_{ik}{-}\delta_{jk})
\delta_{\alpha,\epsilon_i+\epsilon_j}e(k,k')t\\
& \quad
+h(m,m'){\otimes}(1{-}e(m,m')t)^{-1}(\delta_{im}{-}\delta_{jm})
\delta_{\alpha,\epsilon_i+\epsilon_j}e(m,m')t.
\end{split}\tag{51}
\end{equation*}
Hence, when $\alpha\ne\epsilon_i+\epsilon_j$, we get
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\alpha)}))^p)&\equiv(D_{ij}(x^{(\alpha)}))^p\otimes
1+1\otimes
(D_{ij}(x^{(\alpha)}))^p\\
&\in I_{t,q}\otimes
U_{t,q}(\mathbf{S}(n;\underline{1}))+U_{t,q}(\mathbf{S}(n;\underline{1}))\otimes
I_{t,q};
\end{split}
\end{equation*}
and when $\alpha=\epsilon_i+\epsilon_j$, by Lemmas 3.4 and 4.6, (45)
becomes
\begin{equation*}
\begin{split}
\Delta(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))&=D_{ij}(x^{(\epsilon_i+\epsilon_j)})\otimes
1+ 1\otimes
D_{ij}(x^{(\epsilon_i+\epsilon_j)})\\
&\quad +\,(\delta_{ik}-\delta_{jk})h(k,k')\otimes
(1-e(k,k')t)^{-1}e(k,k')t\\
&\quad +\,(\delta_{im}-\delta_{jm})h(m,m')\otimes
(1-e(m,m')t)^{-1}e(m,m')t.
\end{split}
\end{equation*}
Combining with (51), we obtain
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)}))
&\equiv((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)}))\otimes
1\\ &\quad +1\otimes ((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)}))\\
&\in I_{t,q}\otimes
U_{t,q}(\mathbf{S}(n;\underline{1}))+U_{t,q}(\mathbf{S}(n;\underline{1}))\otimes
I_{t,q}.
\end{split}
\end{equation*}
Thereby, we prove that the ideal $I_{t,q}$ is also a coideal of the
Hopf algebra $U_{t,q}(\mathbf{S}(n;\underline{1}))$.
\smallskip
(II) \ By Lemmas 4.5, 3.2, 3.4 \& 4.6, we have
\begin{equation*}
\begin{split}
S((D_{ij}(&x^{(\alpha)}))^p)
=-\bigl(1{-}e(k,k')t\bigr)^{-p\,\alpha(k,k')}\bigl(1{-}e(m,m')t\bigr)^{-p\,\alpha(m,m')}\\
& \quad \cdot \Bigl( \sum\limits_{n,\ell=0}^{\infty}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)\cdot
h(k,k')_1^{\langle \ell\rangle}h(m,m')_1^{\langle
n\rangle}t^{n+\ell}\Bigr)\\
&\equiv -\sum\limits_{n,\ell=0}^{p-1}
d_{mm'}^{(n)}d_{kk'}^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)\cdot
h(k,k')_1^{\langle \ell\rangle}h(m,m')_1^{\langle n\rangle}t^{n+\ell}
\quad (\text{mod }p)\\
&=-(D_{ij}(x^{(\alpha)}))^p+(\delta_{ik}{-}\delta_{jk})\delta_{\alpha,\epsilon_i+\epsilon_j}e(k,k')\cdot
h(k,k')_1^{\langle 1\rangle} t\\
&\quad
+(\delta_{im}{-}\delta_{jm})\delta_{\alpha,\epsilon_i+\epsilon_j}e(m,m')\cdot
h(m,m')_1^{\langle 1\rangle} t.
\end{split}\tag{52}
\end{equation*}
Hence, when $\alpha\ne\epsilon_i+\epsilon_j$, we get
$$
S\bigl((D_{ij}(x^{(\alpha)}))^p\bigr)=-(D_{ij}(x^{(\alpha)}))^p\in
I_{t,q}.
$$
When $\alpha=\epsilon_i+\epsilon_j$, by Lemmas 3.4 and 4.5,
(46) reads as
$S(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=-D_{ij}(x^{(\epsilon_i+\epsilon_j)})+(\delta_{ik}-\delta_{jk})e(k,k')\cdot
h(k,k')_1^{\langle 1\rangle}t+(\delta_{im}-\delta_{jm})e(m,m')\cdot
h(m,m')_1^{\langle 1\rangle}t$. Combining with (52), we obtain
$$
S\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})\bigr)
=-\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})\bigr)
\in I_{t,q}.
$$
Thereby, we show that the ideal $I_{t,q}$ is indeed preserved by the
antipode $S$ of the quantization
$U_{t,q}(\mathbf{S}(n;\underline{1}))$.
\smallskip
(III) It is obvious that
$\varepsilon((D_{ij}(x^{(\alpha)}))^p)=0$ for all $0\le\alpha\le\tau$.
\smallskip
This completes the proof.
\end{proof}
\begin{remark}
Corollary 4.2 gives more Drinfel'd twists. Using the same proof as
that of Theorem 4.7, we can obtain new families of noncommutative and
noncocommutative Hopf algebras of dimension $p^{1{+}(n-1)(p^n-1)}$
in characteristic $p$. Obviously, they are $p$-polynomial
deformations $(\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1})),m,
\iota,$ $\Delta,S,\varepsilon)$ of the restricted universal
enveloping algebra of $\mathbf{S}(n;\underline{1})$ over the
$p$-truncated polynomial ring $\mathcal{K}[t]_p^{(q)}$.
\end{remark}
\subsection{Different twisted structures} We shall show that the twisted structures given by
Drinfel'd twists with different product-length are nonisomorphic.
\begin{defi}
A Drinfel'd twist $\mathcal{F} \in A\otimes A$ on any Hopf algebra
$A$ is called \textit{compatible} if $\mathcal{F}$ commutes with the
coproduct $\Delta_0$.
\end{defi}
In other words, twisting a Hopf algebra $A$ with a
\textit{compatible} twist $\mathcal{F}$ gives exactly the same Hopf
structure, that is, $\Delta_{\mathcal{F}}=\Delta_0$. The set of
\textit{compatible} twists on $A$ thus forms a group.
\begin{lemm}$($\cite{MG}$)$
Let $\mathcal{F} \in A\otimes A$ be a Drinfel'd twist on a Hopf
algebra $A$. Then the twisted structure induced by $\mathcal{F}$
coincides with the structure on $A$ if and only if $\mathcal{F}$ is
a compatible twist.
\end{lemm}
Using the same proof as in Theorem 4.1, we obtain
\begin{lemm}
Let $\mathcal{F}, \mathcal{G} \in A\otimes A$ be Drinfel'd twists on
a Hopf algebra $A$ with
$\mathcal{F}\mathcal{G}=\mathcal{G}\mathcal{F}$ and $\mathcal{F}\neq
\mathcal{G}$. Then $\mathcal{F}\mathcal{G}$ is a Drinfel'd twist.
Furthermore, $\mathcal{G}$ is a Drinfel'd twist on
$A_{\mathcal{F}}$, $\mathcal{F}$ is a Drinfel'd twist on
$A_{\mathcal{G}}$
and
$\Delta_{\mathcal{F}\mathcal{G}}=(\Delta_{\mathcal{F}})_{\mathcal{G}}
=(\Delta_{\mathcal{G}})_{\mathcal{F}}$.
\end{lemm}
Let $A$ denote one of the objects:
$U(\mathbf{S}_{\mathbb{Z}}^{+})[[t]]$,
$U_{t,q}(\mathbf{S}(n;\underline{1}))$ and
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$.
\begin{prop}
Drinfel'd twists
$\mathcal{F}^{\zeta(i)}:=\mathcal{F}(2,1)^{\zeta_1}\cdots\mathcal{F}(n,1)^{\zeta_{n-1}}$
$($where
$\zeta(i)=(\zeta_1,\cdots,\zeta_{n-1})=(\underbrace{1,\cdots,1}_i,0,\cdots,0)\in
\mathbb Z_2^{n-1})$ lead to $n-1$ different twisted Hopf algebra
structures on $A$.
\end{prop}
\begin{proof} For $i=1$, $\mathcal{F}(2,1)$ gives one twisted structure with a
twisted coproduct different from the original one. For $i=2$, using
Lemma 4.11, we know that $\mathcal{F}(3,1)$ is a Drinfel'd twist and
not a compatible twist on
$U(\mathbf{S}_{\mathbb{Z}}^{+})[[t]]_{\mathcal{F}(2,1)}$. So the
twist $\mathcal{F}(2,1)\mathcal{F}(3,1)$ gives a new Hopf algebra
structure whose coproduct differs from the previous one twisted by
$\mathcal{F}(2,1)$. By the same argument, we obtain that the
Drinfel'd twists $\mathcal{F}^{\zeta(i)}$ for
$\zeta(i)=(\underbrace{1,\cdots,1}_i,0,\cdots,0)\in
\mathbb{Z}_2^{n-1}$ give $n{-}1$ different twisted structures on
$U(\mathbf{S}_{\mathbb{Z}}^{+})[[t]]$. This leads to the
corresponding result on $A$.
\end{proof}
\section{Quantizations of horizontal type for
$\mathbf{S}(n;\underline{1})$ and $\mathfrak{sl}_n$}
In this section, we assume that $n\geq 3$. Take $h:=\partial_k-\partial_{k'}$
and $e:=x^{\epsilon_k-\epsilon_m}\partial_m$ $(1\leq k \neq k'\neq m \leq
n)$ and denote by $\mathcal{F}(k,k';m)$ the corresponding Drinfel'd
twist. These twists will lead to quantizations in the horizontal
direction, so we call them Drinfel'd twists of {\it horizontal}
type (while the twists used in Sections 3 and 4 are of {\it
vertical} type). Using the horizontal Drinfel'd twists and the same
discussion as in Sections 2 and 3, we obtain some new quantizations
of horizontal type for the universal enveloping algebra of the
special algebra $\mathbf{S}(n;\underline{1})$.
The twisted structures given by the twists $\mathcal{F}(k,k';m)$ on
subalgebra $\mathbf{S}(n;\underline{1})_0$ are the same as those on
the special linear Lie algebra $\mathfrak{sl}_n$ over a field
$\mathcal{K}$ with $\text{char}(\mathcal{K})=p$ derived by the
Jordanian twists $\mathcal{F}=\mathrm{exp}(h\otimes \sigma)$,
$\sigma=\mathrm{ln}(1-e)$ for some two-dimensional carrier
subalgebra $B(2)=\text{Span}_{\mathcal K}\{h, e\}$ discussed in
\cite{KL}, \cite{KLS}, etc.
\subsection{Quantizations of horizontal type of $\mathbf u(\mathbf{S}(n;\underline{1}))$}
From Lemma 2.2 and Theorem 2.4, we have
\begin{lemm}
Fix two distinguished elements $h:=\partial_k{-}\partial_{k'}$ and
$e:=x^{\epsilon_k-\epsilon_m}\partial_m$ $(1\leq k \neq k'\neq m \leq
n)$. Then the corresponding horizontal quantization of
$U(\mathbf{S}^+_{\mathbb{Z}})$ over
$U(\mathbf{S}^+_{\mathbb{Z}})[[t]]$ by the Drinfel'd twist
$\mathcal{F}(k,k';m)$, with the product undeformed, is given by
\begin{gather*}
\Delta(x^{\alpha}\partial)=x^{\alpha}\partial\otimes
(1{-}et)^{\alpha_k{-}\alpha_{k'}}{+}\sum\limits_{\ell=0}^{\infty}{({-}1)}^\ell
\,h^{\langle \ell\rangle}\otimes(1{-}et)^{{-}\ell}\cdot\tag{53} \\
\qquad\qquad \cdot\,
x^{\alpha{+}\ell(\epsilon_k-\epsilon_m)}(A_\ell\partial-B_\ell\partial_m)t^\ell,\\
S(x^{\alpha}\partial)={-}(1{-}et)^{-(\alpha_k{-}\alpha_{k'})}\cdot\Bigl(\sum\limits_{\ell=0}^{\infty}
\,x^{\alpha{+}\ell(\epsilon_k-\epsilon_m)}(A_\ell\partial-B_\ell\partial_m)\cdot
h_1^{\langle \ell\rangle}t^\ell\Bigr),
\tag{54}\\
\varepsilon(x^{\alpha}\partial)=0, \tag{55}
\end{gather*}
where $\alpha-\eta \in \mathbb{Z}^n_+,\, \eta=-\underline1,\,
A_\ell=\frac{1}{\ell!}\prod\limits_{j=0}^{\ell-1}(\alpha_m{-}j),\,
B_\ell=\partial(\epsilon_k-\epsilon_m) A_{\ell{-}1}$, with a convention
$A_0=1, A_{-1}=0$.
\end{lemm}
Note that $A_\ell=0$ for $\ell>\alpha_m$ and $B_\ell=0$ for $\ell>\alpha_m
+1$ in Lemma 5.1.
\begin{remark} According to the parametrization of the twists $\mathcal F(k,k';m)$, we
get $n(n{-}1)(n{-}2)$ {\it basic Drinfel'd twists} over
$U(\mathbf{S}^+_{\mathbb{Z}})$, and one may consider products of
such {\it basic Drinfel'd twists}. Using the same argument as in
Section 4, one can obtain many more new Drinfel'd twists, which lead
to new, more complicated quantizations not only of
$U(\mathbf{S}_{\mathbb{Z}}^+)[[t]]$, but of
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$ as well.
\end{remark}
We first make {\it the modulo $p$ reduction} for the quantizations
of $U(\mathbf{S}^{+}_\mathbb{Z})$ in Lemma 5.1 to yield the
horizontal quantizations of $U(\mathbf{S}(n;\underline{1}))$ over
$U_t(\mathbf{S}(n;\underline{1}))$.
\begin{theorem}
Fix distinguished elements
$h=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e=D_{mk}(x^{(2\epsilon_k)})$ $(1\leq k\neq k'\neq m\leq n)$. Then
the corresponding horizontal quantization of
$U(\mathbf{S}(n;\underline{1}))$ over
$U_t(\mathbf{S}(n;\underline{1}))$, with the product undeformed, is
given by
\begin{gather*}
\Delta(D_{ij}(x^{(\alpha)}))=D_{ij}(x^{(\alpha)})\otimes
(1{-}et)^{\alpha(k,k')} +\sum\limits_{\ell=0}^{p{-}1}{({-}1)}^\ell
h^{\langle \ell\rangle}\otimes(1{-}et)^{{-}\ell}\cdot\tag{56}\\
\qquad\qquad\quad \cdot\,\Bigl(\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))})+\bar{B}_\ell
(\delta_{ik}D_{jm}-\delta_{jk}D_{im})(x^{(\alpha{+}(\ell-1)(\epsilon_k-\epsilon_{m}))})\Bigr)t^\ell,
\end{gather*}
\begin{gather*}
S(D_{ij}(x^{(\alpha)}))={-}(1{-}et)^{-\alpha(k,k')}
\cdot\sum\limits_{\ell=0}^{p{-}1}\Bigl(\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))})\tag{57}\\
\qquad\qquad\qquad\qquad\quad +\,\bar{B}_\ell
(\delta_{ik}D_{jm}-\delta_{jk}D_{im})(x^{(\alpha{+}(\ell-1)(\epsilon_k-\epsilon_{m}))})\Bigr)
\cdot h_1^{\langle \ell\rangle}t^\ell,
\\
\varepsilon(D_{ij}(x^{(\alpha)}))=0, \tag{58}
\end{gather*}
where $0\le \alpha \le \tau$, $\alpha(k,k')=
\alpha_k{-}\delta_{ik}{-}\delta_{jk}{-}\alpha_{k'}{+}\delta_{ik'}{+}\delta_{jk'}$,
$\bar A_\ell\equiv\binom{\alpha_k{+}\ell}{\ell}\,(\text{\rm mod}
\,p)$ for $0\leq \ell\leq\alpha_m$, $\bar B_\ell\equiv
\binom{\alpha_k{+}\ell-1}{\ell-1}\,(\text{\rm mod}\,p)$ for $1\leq
\ell\leq\alpha_m{+}1$, and otherwise, $\bar A_\ell=\bar B_\ell=0$.
\end{theorem}
\begin{proof}
Note that the elements
$\sum_{i,\alpha}\frac{1}{\alpha!}a_{i,\alpha}x^{\alpha}D_i$ in
$\mathbf{S}^+_{\mathcal{K}}$ for $0\le\alpha\le\tau$ will be
identified with $\sum_{i,\alpha}a_{i,\alpha}x^{(\alpha)}D_i$ in
$\mathbf{S}(n;\underline{1})$ and those in $J_{\underline{1}}$
(given in Section 3.1) with $0$. Hence, by Lemma 5.1, we get
\begin{equation*}
\begin{split}
\Delta(D_{ij}(x^{(\alpha)}))&
=\frac{1}{\alpha!}\Delta(x^{\alpha-\epsilon_i-\epsilon_j}(\alpha_j\partial_i{-}\alpha_i\partial_j))\\
&=D_{ij}(x^{(\alpha)})\otimes
(1{-}et)^{\alpha(k,k')}{+}\sum\limits_{\ell=0}^{p-1}{({-}1)}^\ell
\,h^{\langle \ell\rangle}\otimes (1{-}et)^{{-}\ell}\cdot\\
& \quad \cdot
\frac{1}{\alpha!}x^{\alpha-\epsilon_i-\epsilon_j{+}\ell(\epsilon_k-\epsilon_m)}
\bigl(A_\ell(\alpha_j\partial_i{-}\alpha_i\partial_j)-B_\ell\partial_m\bigr)t^\ell,
\end{split}
\end{equation*}
where
$A_\ell=\frac{1}{\ell!}\prod\limits_{j=0}^{\ell-1}(\alpha_m{-}\delta_{im}{-}\delta_{jm}{-}j),\,
B_\ell=(\alpha_j\partial_i{-}\alpha_i\partial_j)(\epsilon_k{-}\epsilon_m)
A_{\ell{-}1}$.
Write
\begin{gather*}
(*)=\frac{1}{\alpha!}x^{\alpha-\epsilon_i-\epsilon_j{+}\ell(\epsilon_k-\epsilon_m)}
\bigl(A_\ell(\alpha_j\partial_i{-}\alpha_i\partial_j)-B_\ell\partial_m\bigr), \\
(**)=\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))})+\bar{B}_\ell
(\delta_{ik}D_{jm}{-}\delta_{jk}D_{im})(x^{(\alpha{+}(\ell-1)(\epsilon_k-\epsilon_{m}))}).
\end{gather*}
We claim that $(*)=(**)$.
The proof will be given in the following steps:
$(\text{\rm i})$ \ For $\delta_{im}+\delta_{jm}=1$, we have
\begin{equation*}
(*)=\begin{cases}
\frac{(\alpha_k+\ell)!}{\alpha_k!}\frac{(\alpha_m-\ell)!}{\alpha_m!}(A_\ell{+}A_{\ell-1})
D_{ij}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))}), & \text{\it for
} \,
0\le \ell\leq\alpha_m,\\
0, & \text{\it for } \quad \ell>\alpha_m.
\end{cases}
\end{equation*}
A simple calculation shows that
$\frac{(\alpha_k+\ell)!}{\alpha_k!}\frac{(\alpha_m-\ell)!}{\alpha_m!}(A_\ell{+}A_{\ell-1})
\equiv\binom{\alpha_k{+}\ell}{\ell} \ (\text{\rm mod} \; p)$, for
$0\leq \ell\leq\alpha_m$. So, $(*)=(**)$.
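The coefficient congruence in step (i) is in fact an exact integer identity, from which the mod-$p$ statement follows at once. The following self-contained Python sketch is our own illustrative check, not part of the original argument; here $A_\ell$ is taken with $\delta_{im}+\delta_{jm}=1$, as in this case.

```python
from fractions import Fraction
from math import comb, factorial

def A(ell, am):
    # A_ell = (1/ell!) * prod_{j=0}^{ell-1} (am - 1 - j), with the
    # conventions A_0 = 1 and A_{-1} = 0 (case delta_im + delta_jm = 1).
    if ell < 0:
        return Fraction(0)
    prod = Fraction(1)
    for j in range(ell):
        prod *= am - 1 - j
    return prod / factorial(ell)

def lhs(ak, am, ell):
    # ((ak+ell)!/ak!) * ((am-ell)!/am!) * (A_ell + A_{ell-1})
    return (Fraction(factorial(ak + ell), factorial(ak))
            * Fraction(factorial(am - ell), factorial(am))
            * (A(ell, am) + A(ell - 1, am)))

# Check the identity over a range of parameters ak = alpha_k, am = alpha_m.
for ak in range(7):
    for am in range(7):
        for ell in range(am + 1):
            assert lhs(ak, am, ell) == comb(ak + ell, ell)
print("case (i): coefficient identity verified")
```

Since equality already holds in $\mathbb{Z}$, the congruence modulo $p$ holds for every prime $p$.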
$(\text{\rm ii})$ \ For $\delta_{im}+\delta_{jm}=0$, we consider
three subcases:
If $\delta_{ik}=1$, we have
\begin{equation*}
(*)=\begin{cases}
\frac{(\alpha_k+\ell)!}{\alpha_k!}\frac{(\alpha_m-\ell)!}{\alpha_m!}A_\ell
D_{kj}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))}) & \\
\ +\,\frac{(\alpha_k+\ell-1)!}{\alpha_k!}\frac{(\alpha_m-(\ell-1))!} {\alpha_m!}
A_{\ell{-}1} D_{jm}(x^{(\alpha{+}(\ell-1)(\epsilon_k-\epsilon_m))}),
& \text{\it for } \ 0\leq \ell\leq\alpha_m{+}1,\\
0, & \text{\it for } \quad \ell>\alpha_m{+}1.
\end{cases}
\end{equation*}
A simple calculation indicates that for $0\leq \ell\leq\alpha_m{+}1$,
\begin{equation*}
\begin{split}
\frac{(\alpha_k{+}\ell)!}{\alpha_k!}\frac{(\alpha_m{-}\ell)!}{\alpha_m!}A_\ell
&\equiv
\binom{\alpha_k{+}\ell}{\ell}=\bar A_\ell \ (\text{\rm mod} \; p),\\
\frac{(\alpha_k{+}\ell{-}1)!}{\alpha_k!}\frac{(\alpha_m{-}(\ell{-}1))!}
{\alpha_m!}A_{\ell{-}1}&\equiv\binom{\alpha_k{+}\ell{-}1}{\ell{-}1}=\bar
B_\ell \ (\text{\rm mod} \;p).
\end{split}
\end{equation*}
So, $(*)=(**)$.
If $\delta_{jk}=1$, we have
\begin{equation*}
(*)=\begin{cases}
\frac{(\alpha_k+\ell)!}{\alpha_k!}\frac{(\alpha_m-\ell)!}{\alpha_m!}A_\ell
D_{ik}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))}) & \\
\ -\frac{(\alpha_k+\ell-1)!}{\alpha_k!}\frac{(\alpha_m-(\ell-1))!}
{\alpha_m!}A_{\ell-1}
D_{im}(x^{(\alpha{+}(\ell-1)(\epsilon_k-\epsilon_m))}), & \text{\it
for } \ 0\leq \ell\leq\alpha_m{+}1,\\
0, & \text{\it for } \quad \ell>\alpha_m{+}1.
\end{cases}
\end{equation*}
A simple computation shows that
\begin{equation*}
\begin{split}
\frac{(\alpha_k{+}\ell)!}{\alpha_k!}\frac{(\alpha_m{-}\ell)!}{\alpha_m!}A_\ell
&\equiv \binom{\alpha_k{+}\ell}{\ell}=\bar A_\ell\,(\text{\rm mod}
\;p),
\quad \text{\it for } 0\leq \ell\leq\alpha_m,\\
\frac{(\alpha_k{+}\ell{-}1)!}{\alpha_k!}\frac{(\alpha_m{-}(\ell{-}1))!}{\alpha_m!}
A_{\ell-1}&\equiv\binom{\alpha_k{+}\ell{-}1}{\ell{-}1}=\bar B_\ell
\,(\text{\rm mod} \;p), \quad \text{\it for } 0\leq
\ell\leq\alpha_m{+}1.
\end{split}
\end{equation*}
So, $(*)=(**)$.
If $\delta_{ik}=\delta_{jk}=0$, we have
$(*)=\frac{(\alpha_k+\ell)!}{\alpha_k!}\frac{(\alpha_m-\ell)!}{\alpha_m!}A_\ell
D_{ij}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))})$, and
\begin{equation*}
\begin{split}
\frac{(\alpha_k{+}\ell)!}{\alpha_k!}\frac{(\alpha_m{-}\ell)!}{\alpha_m!}A_\ell
&\equiv \binom{\alpha_k{+}\ell}{\ell}=\bar A_\ell \ (\text{\rm mod}
\;p),
\quad\text{\it for } \ 0\leq \ell\leq\alpha_m, \\
\bar B_\ell &\equiv 0 \ (\text{\rm mod} \;p), \quad\text{\it for } \
0\leq \ell\leq\alpha_m{+}1.
\end{split}
\end{equation*}
So, $(*)=(**)$.
\smallskip
Therefore, we verify the formula (56).
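Likewise, the two congruences used repeatedly in step (ii) are exact integer identities. The sketch below is again our own numerical illustration, with $A_\ell$ now taken for $\delta_{im}+\delta_{jm}=0$.

```python
from fractions import Fraction
from math import comb, factorial

def A(ell, am):
    # A_ell = (1/ell!) * prod_{j=0}^{ell-1} (am - j), case delta_im + delta_jm = 0;
    # for 0 <= ell <= am this equals binom(am, ell).
    prod = Fraction(1)
    for j in range(ell):
        prod *= am - j
    return prod / factorial(ell)

def lhs(ak, am, ell):
    # ((ak+ell)!/ak!) * ((am-ell)!/am!) * A_ell
    return (Fraction(factorial(ak + ell), factorial(ak))
            * Fraction(factorial(am - ell), factorial(am)) * A(ell, am))

# bar-A congruence: exact equality with binom(ak+ell, ell); the bar-B
# congruence is the same identity evaluated at ell - 1.
for ak in range(7):
    for am in range(7):
        for ell in range(am + 1):
            assert lhs(ak, am, ell) == comb(ak + ell, ell)
print("case (ii): coefficient identities verified")
```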
\smallskip
Applying a similar argument to the antipode, we can get the formula
(57).
\smallskip
This completes the proof.
\end{proof}
To describe $\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$
explicitly, we still need an auxiliary lemma.
\begin{lemm} Denote $e=D_{mk}(x^{(2\epsilon_k)})$, $d^{(\ell)}=\frac{1}{\ell!}(\text{\rm
ad}\,e)^\ell$. Then
\smallskip
$(\text{\rm i})$ \ $d^{(\ell)}(D_{ij}(x^{(\alpha)}))=\bar{A}_\ell
D_{ij}(x^{(\alpha{+}\ell(\epsilon_k-\epsilon_m))})$
\smallskip
\hskip3.2cm $+\,\bar{B}_\ell
(\delta_{ik}D_{jm}{-}\delta_{jk}D_{im})(x^{(\alpha{+}(\ell-1)(\epsilon_k-\epsilon_{m}))})$,
\smallskip
\hskip0.6cm where $\bar A_\ell, \bar B_\ell$ as in Theorem 5.3.
\smallskip
$(\text{\rm ii})$ \ $d^{(\ell)}(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))
=\delta_{\ell,0}D_{ij}(x^{(\epsilon_i+\epsilon_j)})
-\delta_{1,\ell}(\delta_{ik}{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})e$.
\smallskip
$(\text{\rm iii})$ \
$d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)=\delta_{\ell,0}(D_{ij}(x^{(\alpha)}))^p-\delta_{1,\ell}
(\delta_{ik}{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})\delta_{\alpha,\epsilon_i+\epsilon_j}e$.
\end{lemm}
\begin{proof} We can get (i) from the proof of Theorem 5.3.
(ii) Note that $\bar A_0=1$, $\bar B_0=0$. Using Theorem 5.3,
for $\delta_{im}+\delta_{jm}=1$, we obtain $\bar A_1=1$ and $\bar B_1=0$;
for $\delta_{im}+\delta_{jm}=0$, we obtain $\bar A_1=0$ and $\bar B_1=1$. We have
$\bar A_\ell=\bar B_\ell=0$ for
$\ell>1$. Therefore, in any case, we arrive at the result as
desired.
\smallskip
(iii) From (15), we obtain that for $0\le \alpha\le\tau$,
\begin{equation*}
\begin{split}
d^{(1)}\,((D_{ij}(x^{(\alpha)}))^p)&=\frac{1}{(\alpha!)^p}\bigl[\,e,(D_{ij}(x^{\alpha}))^p\,\bigr]
=\frac{1}{(\alpha!)^p}\bigl[\,e,(x^{\alpha-\epsilon_i-\epsilon_j}(\alpha_j\partial_i{-}\alpha_i\partial_j))^p\,\bigr]\\
&=\frac{1}{(\alpha!)^p}\sum\limits_{\ell=1}^p(-1)^\ell\dbinom{p}
{\ell}(x^{\alpha-\epsilon_i-\epsilon_j}(\alpha_j\partial_i{-}\alpha_i\partial_j))^{p-\ell}\cdot \\
& \qquad\qquad \cdot
x^{\epsilon_k-\epsilon_m{+}\ell(\alpha{-}\epsilon_i-\epsilon_j)}
(a_\ell
\partial_m-b_\ell(\alpha_j\partial_i{-}\alpha_i\partial_j))\\
&\equiv
-\frac{a_p}{\alpha!}\,x^{\epsilon_k-\epsilon_m{+}p(\alpha{-}\epsilon_i-\epsilon_j)}
\partial_m\qquad(\text{mod }\,p\,)\\
&\equiv \begin{cases} -{a_p}\,e,\qquad & \text{\it if }\quad \alpha=\epsilon_i+\epsilon_j\\
0,\qquad & \text{\it if }\quad \alpha\ne\epsilon_i+\epsilon_j
\end{cases}\qquad(\text{mod }\,J),
\end{split}
\end{equation*}
where the last ``$\equiv$'' uses the identification modulo
the ideal $J$ as before, and
$a_\ell=\prod\limits_{s=0}^{\ell-1}(\alpha_j\partial_i-\alpha_i\partial_j)(\epsilon_k-\epsilon_m+s(\alpha{-}\epsilon_i-\epsilon_j)),\
b_\ell=\ell\,\partial_m(\alpha{-}\epsilon_i-\epsilon_j)a_{\ell-1}$, and
$a_p=\delta_{ik}{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm}$, for
$\alpha=\epsilon_i+\epsilon_j$.
Consequently, by the definition of $d^{(\ell)}$, we get
$d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)=0$ in
$\mathbf{u}(\mathbf{S}(n;\underline{1}))$ for $2\le \ell\le p-1$ and
$0\le\alpha\le\tau$.
\end{proof}
Based on Theorem 5.3 and Lemma 5.4, we arrive at
\begin{theorem} Fix distinguished elements
$h=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e=D_{mk}(x^{(2\epsilon_k)})$ $(1\leq k\neq k'\neq m\leq n)$. Then
there exists a noncommutative and noncocommutative Hopf algebra
$($of horizontal type$)$
$(\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1})),m,\iota,\Delta,S,\varepsilon)$
over $\mathcal{K}[t]_p^{(q)}$ with the product undeformed, whose
coalgebra structure is given by
\begin{gather*}
\Delta(D_{ij}(x^{(\alpha)}))=D_{ij}(x^{(\alpha)})\otimes
(1{-}et)^{\alpha(k,k')} \tag{59}\\
\hskip5cm +\,\sum\limits_{\ell=0}^{p{-}1}{({-}1)}^\ell h^{\langle
\ell\rangle}\otimes(1{-}et)^{{-}\ell}d^{(\ell)}(D_{ij}(x^{(\alpha)}))t^\ell,
\\
S(D_{ij}(x^{(\alpha)}))={-}(1{-}et)^{-\alpha(k,k')}
\sum\limits_{\ell=0}^{p{-}1}d^{(\ell)}(D_{ij}(x^{(\alpha)}))
\cdot h_1^{\langle \ell\rangle}t^\ell, \tag{60}\\
\varepsilon(D_{ij}(x^{(\alpha)}))=0, \tag{61}
\end{gather*}
where $0\le\alpha\le\tau$ and
$\alpha(k,k')=\alpha_k{-}\delta_{ik}{-}\delta_{jk}-\alpha_{k'}{+}\delta_{ik'}{+}\delta_{jk'}$,
which is finite dimensional with
$\hbox{\rm dim}\,_{\mathcal{K}}\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))=p^{1{+}(n-1)(p^n-1)}$.
\end{theorem}
\begin{proof} Utilizing the same arguments as in the proofs of Theorems 3.5 \& 4.7,
we shall show that the ideal
$I_{t,q}$ is a Hopf ideal of the {\it twisted} Hopf algebra
$U_{t,q}(\mathbf{S}(n;\underline{1}))$ as in Theorem 5.3. To this
end, it suffices to verify that $\Delta$ and $S$ preserve the
generators in $I_{t,q}$.
\smallskip
(I) \ By Lemma 2.5, Theorem 5.3 \& Lemma 5.4 (iii), we obtain
\begin{equation*}
\begin{split}
\Delta(&(D_{ij}(x^{(\alpha)}))^p)=(D_{ij}(x^{(\alpha)}))^p\otimes
(1{-}et)^{p\,(\alpha_k{-}\alpha_{k'})}
\\
&\quad+\sum\limits_{\ell=0}^{\infty} ({-}1)^\ell h^{\langle
\ell\rangle}\otimes
(1{-}et)^{{-}\ell}d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)t^\ell
\\
&\equiv(D_{ij}(x^{(\alpha)}))^p\otimes1+\sum\limits_{\ell=0}^{p{-}1}
({-}1)^\ell h^{\langle \ell\rangle}\otimes
(1{-}et)^{{-}\ell}d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)t^\ell\quad
(\text{\rm mod }\, p)
\\
&=(D_{ij}(x^{(\alpha)}))^p\otimes1+1\otimes(D_{ij}(x^{(\alpha)}))^p \\
&\quad
+\,h\otimes(1{-}et)^{-1}(\delta_{ik}{-}\delta_{jk}{-}\delta_{im}
{+}\delta_{jm})\delta_{\alpha,\epsilon_i+\epsilon_j}et.
\end{split}\tag{62}
\end{equation*}
Hence, when $\alpha\ne\epsilon_i+\epsilon_j$, we get
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\alpha)}))^p)&\equiv(D_{ij}(x^{(\alpha)}))^p\otimes
1+1\otimes
(D_{ij}(x^{(\alpha)}))^p\\
&\in I_{t,q}\otimes
U_{t,q}(\mathbf{S}(n;\underline{1}))+U_{t,q}(\mathbf{S}(n;\underline{1}))\otimes
I_{t,q}.
\end{split}
\end{equation*}
When $\alpha=\epsilon_i+\epsilon_j$, by Lemma 5.4 (ii), (56) becomes
\begin{equation*}
\begin{split}
\Delta(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))&=D_{ij}(x^{(\epsilon_i+\epsilon_j)})\otimes
1+ 1\otimes D_{ij}(x^{(\epsilon_i+\epsilon_j)})\\
&\quad+\,h\otimes
(1{-}et)^{-1}(\delta_{ik}{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})et.
\end{split}
\end{equation*}
Combining with (62), we obtain
\begin{equation*}
\begin{split}
\Delta((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})
)&\equiv\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})
\bigr)\otimes 1\\ &\quad +1\otimes
\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})
\bigr)\\
&\in I_{t,q}\otimes
U_{t,q}(\mathbf{S}(n;\underline{1}))+U_{t,q}(\mathbf{S}(n;\underline{1}))\otimes
I_{t,q}.
\end{split}
\end{equation*}
Thereby, we prove that $I_{t,q}$ is a coideal of the Hopf algebra
$U_{t,q}(\mathbf{S}(n;\underline{1}))$.
\smallskip
(II) \ By Lemma 2.5, Theorem 5.3 \& Lemma 5.4 (iii), we have
\begin{equation*}
\begin{split}
S((D_{ij}(x^{(\alpha)}))^p) &=-(1{-}et)^{-p(\alpha_k-\alpha_{k'})}
\sum\limits_{\ell=0}^{\infty} d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)\cdot
h_1^{\langle
\ell\rangle}t^\ell\\
&\equiv -(D_{ij}(x^{(\alpha)}))^p-\sum\limits_{\ell=1}^{p-1}
d^{(\ell)}((D_{ij}(x^{(\alpha)}))^p)\cdot h_1^{\langle \ell\rangle}t^\ell \quad
(\text{mod } p)\\
&=-(D_{ij}(x^{(\alpha)}))^p
+(\delta_{ik}{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})\delta_{\alpha,\epsilon_i+\epsilon_j}e\cdot
h_1^{\langle 1\rangle} t.
\end{split}\tag{63}
\end{equation*}
Hence, when $\alpha\ne\epsilon_i+\epsilon_j$, we get
$$
S\bigl((D_{ij}(x^{(\alpha)}))^p\bigr)=-(D_{ij}(x^{(\alpha)}))^p\in I_{t,q}.
$$
When $\alpha=\epsilon_i+\epsilon_j$, by Lemma 5.4 (ii), (57) reads as
$$
S(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))
=-D_{ij}(x^{(\epsilon_i+\epsilon_j)})+(\delta_{ik}{-}\delta_{jk}{-}
\delta_{im}{+}\delta_{jm}) e\cdot h_1^{\langle 1\rangle} t.
$$
Combining with (63), we obtain
$$
S\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})\bigr)
=-\bigl((D_{ij}(x^{(\epsilon_i+\epsilon_j)}))^p-D_{ij}(x^{(\epsilon_i+\epsilon_j)})\bigr)
\in I_{t,q}.
$$
Thereby, we show that $I_{t,q}$ is preserved by the antipode $S$ of
$U_{t,q}(\mathbf{S}(n;\underline{1}))$ as in Theorem 5.3.
\smallskip
(III) It is obvious that
$\varepsilon((D_{ij}(x^{(\alpha)}))^p)=0$ for all $0\le\alpha\le\tau$.
\smallskip
So, $I_{t,q}$ is a Hopf ideal in
$U_{t,q}(\mathbf{S}(n;\underline{1}))$. We get a
finite-dimensional horizontal quantization on
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1}))$.
\end{proof}
\subsection{Jordanian modular quantizations of
$\mathbf{u}(\mathfrak{sl}_n)$} Let $\mathbf{u}(\mathfrak{sl}_n)$
denote the restricted universal enveloping algebra of
$\mathfrak{sl}_n$. Since the Drinfel'd twists $\mathcal{F}(k,k';m)$
of horizontal type act on the subalgebra $U((\mathbf S_{\mathbb
Z}^+)_0)[[t]]$, and consequently on $\mathbf u_{t,q}(\mathbf
S(n;\underline 1)_0)$, they induce Jordanian quantizations on
$\mathbf u_{t,q}(\mathfrak{sl}_n)$.
By Lemma 5.4 (i), we have
\begin{equation*}
\begin{split}
d^{(\ell)}(D_{ij}(x^{(2\epsilon_j)}))
&=\delta_{\ell,0}D_{ij}(x^{(2\epsilon_j)})+\delta_{1,\ell}
\bigl(\delta_{jm}D_{ik}(x^{(2\epsilon_k)})\\
&\quad -\delta_{ik}D_{mj}(x^{(2\epsilon_j)})+\delta_{jm}\delta_{ik}
D_{km}(x^{(\epsilon_k+\epsilon_m)})\bigr)-\delta_{2,\ell}\delta_{jm}\delta_{ik}e.
\end{split}
\end{equation*}
By Theorem 5.3, we have
\begin{theorem} Fix distinguished elements
$h=D_{kk'}(x^{(\epsilon_k+\epsilon_{k'})})$,
$e=D_{mk}(x^{(2\epsilon_k)})$ $(1\leq k\neq k'\neq m\leq n)$. Then
the corresponding Jordanian quantization of
$\mathbf{u}(\mathbf{S}(n;\underline{1})_0)\cong \mathbf
u(\mathfrak{sl}_n)$ over
$\mathbf{u}_{t,q}(\mathbf{S}(n;\underline{1})_0)\cong \mathbf
u_{t,q}(\mathfrak{sl}_n)$, with the product undeformed, has the
coalgebra structure given by
\begin{gather*}
\Delta(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=D_{ij}(x^{(\epsilon_i+\epsilon_j)})\otimes
1+1\otimes D_{ij}(x^{(\epsilon_i+\epsilon_j)}) \tag{64}\\
\qquad\qquad\qquad\qquad
+\,(\delta_{ik}{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})\,
h\otimes(1{-}et)^{-1}et,
\\
\Delta(D_{ij}(x^{(2\epsilon_j)}))=D_{ij}(x^{(2\epsilon_j)})\otimes
(1{-}et)^{\delta_{jk}{-}\delta_{ik}{-}\delta_{jk'}{+}\delta_{ik'}}
+1\otimes D_{ij}(x^{(2\epsilon_j)}) \tag{65}\\
\qquad\quad
-\,h\otimes(1{-}et)^{-1}\bigl(\delta_{jm}D_{ik}(x^{(2\epsilon_k)})-\delta_{ik}D_{mj}(x^{(2\epsilon_j)})+\delta_{jm}\delta_{ik}
D_{km}(x^{(\epsilon_k+\epsilon_m)})\bigr)t\\
\qquad -\,\delta_{jm}\delta_{ik}\,
h^{\langle 2\rangle}\otimes(1{-}et)^{-2}et^2,\\
S(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=-D_{ij}(x^{(\epsilon_i+\epsilon_j)})+(\delta_{ik}
{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})eh_1t, \tag{66} \\
S(D_{ij}(x^{(2\epsilon_j)}))=-(1{-}et)^{-(\delta_{jk}
{-}\delta_{ik}{-}\delta_{jk'}{+}\delta_{ik'})}
\cdot\Bigl(D_{ij}(x^{(2\epsilon_j)})+\tag{67} \\
\quad \bigl(\delta_{jm}D_{ik}(x^{(2\epsilon_k)})-\delta_{ik}D_{mj}
(x^{(2\epsilon_j)})+\delta_{jm}\delta_{ik}
D_{km}(x^{(\epsilon_k+\epsilon_m)})\bigr)h_1t-\delta_{jm}\delta_{ik}
eh_1^{\langle 2\rangle}t^2\Bigr), \\
\varepsilon(D_{ij}(x^{(\epsilon_i+\epsilon_j)}))=\varepsilon(D_{ij}(x^{(2\epsilon_j)}))=0.
\tag{68}
\end{gather*}
Here $1\leq i\neq j\leq n$.
\end{theorem}
\begin{remark}
Since $\mathbf{S}(n;\underline{1})_0\cong \mathfrak{sl}_n$ via the
identification of $D_{ij}(x^{(\epsilon_i+\epsilon_j)})$
with $E_{ii}-E_{jj}$ and of $D_{ij}(x^{(2\epsilon_j)})$ with $E_{ji}$
for $1\leq i\neq j\leq n$, we obtain a Jordanian quantization of
$\mathfrak{sl}_n$, which has been discussed by Kulish et al. (cf.
\cite{KL}, \cite{KLS}, etc.).
\end{remark}
\begin{coro} Fix distinguished elements
$h=E_{kk}-E_{k'k'}$, $e=E_{km}$ $(1\leq k\neq k'\neq m\leq n)$. Then
the corresponding Jordanian quantization of
$\mathbf{u}(\mathfrak{sl}_n)$ over
$\mathbf{u}_{t,q}(\mathfrak{sl}_n)$, with the product undeformed,
has the coalgebra structure given by
\begin{gather*}
\Delta(E_{ii}-E_{jj})=(E_{ii}-E_{jj})\otimes 1+1\otimes(
E_{ii}-E_{jj})\tag{69}\\
\qquad\qquad\qquad \qquad +\,(\delta_{ik}{-}\delta_{jk}
{-}\delta_{im}{+}\delta_{jm})
h\otimes(1{-}et)^{-1}et,\\
\Delta(E_{ji})=E_{ji}\otimes
(1{-}et)^{\delta_{jk}{-}\delta_{ik}{-}\delta_{jk'}{+}\delta_{ik'}}
+1\otimes E_{ji}\tag{70}\\
\qquad\qquad\qquad\qquad
-\,h\otimes(1{-}et)^{-1}\bigl(\delta_{jm}E_{ki}
-\delta_{ik}E_{jm}\bigr)t-\delta_{jm}\delta_{ik}
h^{\langle 2\rangle}\otimes(1{-}et)^{-2}et^2,\\
S( E_{ii}-E_{jj})=-(E_{ii}-E_{jj})+(\delta_{ik}
{-}\delta_{jk}{-}\delta_{im}{+}\delta_{jm})eh_1t, \tag{71}
\end{gather*}
\begin{gather*}
S(E_{ji})=-(1{-}et)^{-(\delta_{jk}{-}\delta_{ik}{-}\delta_{jk'}{+}\delta_{ik'})}
\Bigl(E_{ji}+\bigl(\delta_{jm}E_{ki}-\delta_{ik}E_{jm}\bigr)h_1t\tag{72}
\\
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad
-\,\delta_{jm}\delta_{ik}
eh_1^{\langle 2\rangle}t^2\Bigr), \\
\varepsilon(E_{ii}-E_{jj})=\varepsilon(E_{ji})=0. \tag{73}
\end{gather*}
Here $1\leq i\neq j\leq n$.
\end{coro}
\begin{example} For $n=3$, take $h=E_{11}{-}E_{22}$, $e=E_{13}$, and set
$h'=E_{22}{-}E_{33}$, $f=(1{-}et)^{-1}$. By Corollary 5.8, we get a
Jordanian quantization on $\mathbf u_{t,q}(\mathfrak {sl}_3)$ with
the coproduct as follows (here we omit the antipode formulae which
can be directly written down from (71) \& (72)):
\begin{equation*}
\begin{split}
\Delta(h)&=h\otimes f+1\otimes h,\\
\Delta(h')&=h\otimes f+(h'{-}h)\otimes 1+1\otimes h',\\
\Delta(e)&=e\otimes f^{-1}+1\otimes e,\\
\Delta(E_{12})&=E_{12}\otimes f^{-2}+1\otimes E_{12},\\
\Delta(E_{21})&=E_{21}\otimes f^2+(1{+}h)\otimes E_{21}-h\otimes
fE_{21}f^{-1},\\
\Delta(E_{31})&=E_{31}\otimes f+(1{+}h)\otimes E_{31}-h\otimes fE_{31}f^{-1}+2(f^{-1}{-}1)E_{31}\otimes f(f{-}1),\\
\Delta(E_{23})&=E_{23}\otimes f+1\otimes E_{23},\\
\Delta(E_{32})&=E_{32}\otimes f^{-1}+(1{+}h)\otimes E_{32}-h\otimes
fE_{32}f^{-1},
\end{split}
\end{equation*}
where $\{f, h\}$, satisfying the relations $[h,f]=f^2-f$, $h^p=h$,
$f^p=1$, generates the (finite-dimensional) Radford Hopf subalgebra
(with $f$ a group-like element) over a field of characteristic
$p$.
\end{example}
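As a small sanity check on the twisted coproduct in the example above, one can verify coassociativity directly on the generators $h$ and $f$ of the Radford subalgebra, using only that $f$ is group-like and $\Delta(h)=h\otimes f+1\otimes h$. The Python sketch below is our own illustration; it works in the free setting on pure tensors of generators and does not encode the algebra relations.

```python
# Tensors are dicts mapping tuples of generator names to integer coefficients.
DELTA = {
    "1": {("1", "1"): 1},
    "h": {("h", "f"): 1, ("1", "h"): 1},   # Delta(h) = h (x) f + 1 (x) h
    "f": {("f", "f"): 1},                  # f is group-like
}

def delta_on_leg(tensor, leg):
    """Apply Delta to one tensor leg; every leg is a single generator symbol."""
    out = {}
    for word, c in tensor.items():
        for pair, d in DELTA[word[leg]].items():
            new = word[:leg] + pair + word[leg + 1:]
            out[new] = out.get(new, 0) + c * d
    return {w: c for w, c in out.items() if c}

# (Delta (x) id) Delta = (id (x) Delta) Delta on the generators h and f.
for g in ("h", "f"):
    assert delta_on_leg(DELTA[g], 0) == delta_on_leg(DELTA[g], 1)
print("coassociativity holds on the generators h and f")
```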
\vskip10pt \centerline{\bf ACKNOWLEDGMENT}
\vskip10pt The authors are indebted to B. Enriquez and C. Kassel for
their valuable comments on quantizations when N.H. visited IRMA as
an invited professor of the ULP at Strasbourg from November to
December 2007. N.H. is grateful to Kassel for his kind invitation
and warm hospitality.
\bibliographystyle{amsalpha}
The jellyfish has been the subject of numerous mathematical and physical studies, ranging from the discovery of the reentry phenomenon in electrophysiology to the development of axisymmetric methods for solving fluid-structure interaction problems. A wide variety of experimental, theoretical, and numerical studies have been performed to understand how jellyfish propel themselves through the water using bell pulsations. More recently, experimental and numerical studies have been performed to understand how jellyfish with relatively simple morphologies use bell pulsations to locomote and to feed. In this work we capitalize upon the unique properties of the upside down jellyfish, \textit{Cassiopea spp.}, to understand the dynamics of feeding currents generated through complex oral arm structures and uncoupled from locomotion. The upside down jellyfish is ideal for such work since it typically rests on the sandy bottom of the ocean with its oral arms facing upwards and towards the sun. Bell pulsations are used primarily to drive flow through an elaborate array of oral arms.
The fluid dynamics videos are available as a \href{http://frg.unc.edu/movies/Upsidedownjelly.mp4}{larger file size} and as a \href{http://frg.unc.edu/movies/Upsidedownjelly_small.mp4}{smaller file size}. Flow visualization is used to reveal the generation of starting and stopping vortices that then break up into smaller structures as they advect through the oral arms. Numerical simulations show similar effects when the oral arms are added as a porous layer above the pulsing bell.
\section{Methods}
\textit{Cassiopea spp.} medusae were ordered from Carolina Biological Supply and Gulf Specimen Marine Supply and were housed in 29-gallon aquaria. Flow visualization was performed using gravity-fed fluorescein and rhodamine dyes. The dye was injected into the sand below the jellyfish and was slowly pulled to the surface through the pulsation of the jellyfish bells. Black lights were used to illuminate the dyes against the black background. The immersed boundary method was then used to solve the fluid-structure interaction problem and to explore how changes in morphology and pulsing dynamics alter the resulting fluid flow~\cite{Peskin}. The oral arms were represented as a porous layer using the method described by Kim and Peskin~\cite{Kim} and Stockie~\cite{Stockie}. Visualizations of the computational results were performed using $DataTank^{TM}$. The experimental and computational components of this work were conducted at the University of North Carolina at Chapel Hill, in the facilities of the Mathematical Physiology Laboratory.
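The immersed boundary method couples the Lagrangian boundary points of the bell and oral arms to the Cartesian fluid grid through a regularized delta function. The sketch below is our own illustration of this coupling, using the standard 4-point cosine kernel of Peskin; it is not code from this study. The assertions check the discrete partition-of-unity property that makes the spreading of boundary forces conservative.

```python
import numpy as np

def phi(r):
    """Peskin's 4-point cosine kernel, supported on |r| <= 2 (r in grid units)."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) < 2.0, 0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

def spread_weights(x, h=1.0):
    """Weights attaching a Lagrangian point at x to its 4 nearest grid nodes."""
    j0 = int(np.floor(x / h)) - 1
    nodes = np.arange(j0, j0 + 4)
    return nodes, phi(x / h - nodes)

# The weights sum to 1 for every point location, so spreading conserves force.
for x in np.linspace(0.0, 3.0, 13):
    _, w = spread_weights(x)
    assert abs(w.sum() - 1.0) < 1e-12
print("4-point delta weights sum to 1")
```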
In Bayesian function estimation, a common approach to putting a prior
distribution on a function $f$ of interest, for instance a regression function
in nonparametric regression models or a drift function in diffusion models, is
to expand the function in a particular basis
and to endow the coefficients in the expansion with prior weights.
For computational or other reasons the series is often truncated after finitely many terms,
and the truncation level is endowed with a prior as well. The coefficients
in the expansion are often chosen to be independent under the prior and
distributed according to some given probability density.
It is of interest to understand whether, in addition to their attractive
conceptual and computational aspects, nonparametric priors of this
type enjoy favourable theoretical properties as well.
Examples of
papers in which this was studied for various families of series priors include \cite{zhao2000},
\cite{shenwasserman2001}, \cite{dejonge2012},
\cite{rivoirard2012}, \cite{arbel2013}, \cite{shen2015}.
The results in these papers show that when appropriately constructed,
random series priors can yield posteriors that contract at optimal rates
and that adapt automatically to the smoothness of the function that is being estimated.
To ensure that the nonparametric Bayes procedure not only adapts to smoothness, but is also flexible
with respect to the multiplicative scale
of the function of interest, a multiplicative hyperparameter with an independent
prior distribution is often employed as well. Theoretically this is usually not
needed for an optimal concentration rate of the posterior, but it can greatly improve performance in practice. {See for instance
\cite{meulen2014}, where it is explained why it is }
computationally attractive in certain settings
to use Gaussian priors on the
series coefficients in combination with a multiplicative (squared) scaling parameter
with an inverse gamma prior. For a given truncation level, the prior is conjugate and
allows for posterior computations using standard Gibbs sampling.
The existing theoretical results do not cover this important case however. This is mainly due
to the fact that essentially, the available rate of contraction
theorems for series priors require that hyper priors have (sub-) exponential tails,
which excludes the inverse gamma distribution. { (For example the second part of condition (A2) of \cite{shen2015} is not satisfied in our setting.)}
The theoretical properties of random series priors with inverse gamma scaling have therefore
remained unexplored. With this paper we intend to fill this gap.
Concretely, we consider statistical models in which the unknown object of
interest is a square integrable function $f$ on $[0,1]$. We endow this function with a prior
that is hierarchically specified as follows:
\begin{equation}\label{eq: prior}
\begin{split}
J & \sim \text{Poisson or geometric},\\
s^2 & \sim \text{inverse gamma},\\
f \given s, J & = \sum_{j \le J} f_j\psi_j, \quad \text{with} \
(f_1, \ldots, f_J) \sim N(0, \diag(s^2j^{-1-2\alpha})_{j \le J}),
\end{split}
\end{equation}
where $(\psi_j)$ is a fixed orthonormal basis of $L^2[0,1]$ and $\alpha > 0$ is a hyperparameter.
(In fact, we will consider a somewhat broader class of hyper priors on $J$ and $s^2$,
see Section \ref{sec: prior}.)
In this paper we prove that this prior enjoys very favourable theoretical properties as well.
We derive optimal posterior contraction rates and adaptation up
to arbitrarily high degrees of smoothness.
In recent years, general rate of contraction theorems have been derived for
a variety of nonparametric statistical problems. Roughly speaking, such theorems give sufficient conditions
for having a certain rate of contraction in terms of (i) the amount of mass that the prior
gives to neighbourhoods of the true function and (ii) the existence of growing subsets of
the support of the prior, so-called sieves, that contain all but an exponentially small amount of the prior mass
and whose metric entropy is sufficiently small. The statements of our main theorem match the conditions
of these existing general results. This means that we automatically obtain results for different statistical settings,
including for instance signal estimation in white noise and drift estimation for SDEs.
A simple but important observation that we make in this paper is that to obtain sharp rates
for the priors we consider, it is necessary to use versions of the general contraction rate theorems
that give entropy conditions on the intersection of the sieves with balls around the true function,
as can be found for instance in \cite{GGV}, \cite{meulen2006} and \cite{GVnoniid}.
As remarked in these papers, it is in many nonparametric problems sufficient to consider
only the entropy of the sieves themselves, without intersecting them with a ball around the truth. For the priors
we consider in this paper however, which in some sense are finite-dimensional in nature in certain regimes,
this is not the case. It turns out that since the inverse gamma
has polynomial tails, we need to make the sieves relatively large in order to ensure that they receive
sufficient prior mass. Without intersecting them with a small ball around the truth, this would
make their entropy too large, or even infinite.
The proofs of our main results indicate
that the good adaptation properties of series priors like \eqref{eq: prior}
are really due to the fact that {\em both} the truncation level $J$ {\em and} the scaling constant $s$
are random. If the true function that is being estimated is relatively smooth, the prior
can approximate it well by letting $J$ be small. If it is relatively rough however, the prior can adapt
to it by letting $J$ be essentially infinite, or very large, to pick up all the fluctuations. The
correct bias-variance trade-off is in that case achieved automatically by adapting the multiplicative scale.
In some sense, priors like \eqref{eq: prior} can switch with sufficient probability between
being essentially finite-dimensional, and being essentially infinite-dimensional. In combination
with a random multiplicative scale, this gives them the ability to adapt to all levels of smoothness.
The remainder of the paper is organized as follows. In the next section we describe
in detail the class of priors we consider.
In Section \ref{sec: main} we present the main results of the paper, which give
bounds on the amount of mass that the priors give to $L^2$-neighbourhoods of functions with a given
degree of (Sobolev-type) smoothness, and the existence of appropriate sieves within the support
of the prior. In Section \ref{sec: app} we link these general theorems to existing rate of contraction results
for two different SDE models, to obtain concrete contraction results for signal estimation in white
noise and drift estimation of a one-dimensional SDE with priors of the form \eqref{eq: prior}.
The proofs of the main results are given in Sections \ref{sec: proof1} and \ref{sec: proof2}.
\section{Prior model}
\label{sec: prior}
We consider problems in which the unknown function of interest (e.g.\ a drift function of an SDE, a
signal observed in noise, \ldots)
is a square integrable function on $[0,1]$,
i.e.\ an element of $L^2[0,1] = \{f: [0,1] \to \RR: \|f\|_2 < \infty\}$,
where the $L^2$-norm is as usual defined by
$\|f\|^2_2 = \int_0^1 f^2(x)\,dx$.
We fix an arbitrary orthonormal basis $(\psi_j)$ of $L^2[0,1]$ (for instance the standard Fourier basis). { Every element \(f\in L^2[0,1]\) can be represented as a series \(f=\sum_j \langle f,\psi_j\rangle \psi_j\), where the convergence is in the \(L^2\)-norm, and by the Plancherel formula \(\|f\|_2^2=\sum_{j}|\langle f,\psi_j\rangle|^2\). Finite series \(\sum_{j\le J}\langle f,\psi_j\rangle \psi_j\) approximate \(f\), and the quality of this approximation depends on the decay of the coefficients \(\langle f,\psi_j\rangle\), which also determines the ``smoothness'' of the function. The class of \(\beta\)-Sobolev smooth functions \(H^\beta[0,1]\) is given by all \(f\in L^2[0,1]\) for which the \(\beta\)-Sobolev norm \(\|f\|_\beta:=\sqrt{\sum_{j}j^{2\beta}|\langle f,\psi_j\rangle|^2}\)
is finite. If $(\psi_j)$ is the classical Fourier basis,
these are the classical \(\beta\)-Sobolev spaces.}
We define a series prior on a function $f \in L^2[0,1]$ through a hierarchical
scheme which involves a prior on the point $J$ at which the series is truncated,
a prior on the multiplicative scaling constant $s$ and conditionally on
$s$ and $J$, a series prior with Gaussian coefficients on $f$.
Specifically, the prior on $J$ is defined through a probability mass
function $p$ that is assumed to satisfy, for constants $C, C' >0$,
\begin{equation}\label{eq: p}
p(j) \gtrsim e^{-Cj\log j}, \qquad \sum_{i > j} p(i) \lle e^{-C'j}
\end{equation}
for all $j \in \NN$. (As usual, $a \lle b$ or $b \gtrsim a$ means that $a \le cb$ for
an irrelevant constant $c >0$.) This includes for instance the cases of a Poisson or a geometric prior on $J$.
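For instance, for the Poisson distribution with mean $\lambda$ one can check \eqref{eq: p} directly: since $j! \le j^j$,
\[
p(j) = e^{-\lambda}\frac{\lambda^j}{j!} \ge e^{-\lambda}\Big(\frac{\lambda}{j}\Big)^j = e^{-\lambda} e^{-j\log(j/\lambda)} \gtrsim e^{-Cj\log j}
\]
for a suitable $C > 0$, while for $j \ge 2\lambda$ the successive ratios $p(i+1)/p(i) = \lambda/(i+1)$ are at most $1/2$, so that $\sum_{i > j} p(i) \le 2p(j+1) \lle e^{-C'j}$; the finitely many remaining $j$ are absorbed into the constants. For the geometric distribution both bounds are immediate, since $p(j)$ and $\sum_{i>j}p(i)$ are themselves exponentially small in $j$.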
For the scaling parameter we assume that the density $g$
of $s^2$ is positive and continuous and satisfies, for some $q < -1$ and $C'' >0$,
\begin{equation}\label{eq: g}
g(x) \gtrsim e^{-C''/x} \quad \text{near $0$}, \qquad
g(x) \gtrsim x^{q} \quad \text{near $\infty$}.
\end{equation}
Hence in particular, the popular and
computationally convenient choice of an inverse gamma prior on $s^2$ is included in our setup.
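Indeed, writing out the inverse gamma density with shape $a_0 > 0$ and scale $b_0 > 0$,
\[
g(x) = \frac{b_0^{a_0}}{\Gamma(a_0)}\, x^{-a_0-1} e^{-b_0/x}, \qquad x > 0,
\]
both requirements in \eqref{eq: g} can be verified directly: for $x \le 1$ we have $x^{-a_0-1} \ge 1$, so $g(x) \gtrsim e^{-b_0/x}$ and the first bound holds with $C'' = b_0$; for $x \ge 1$ we have $e^{-b_0/x} \ge e^{-b_0}$, so $g(x) \gtrsim x^{-a_0-1}$ and the second bound holds with $q = -a_0 - 1 < -1$.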
The full prior {\(\Pi\) is then specified as follows:}
\begin{align}
\label{eq: p1} J & \sim p\\
\label{eq: p2} s^2 & \sim g\\
\label{eq: p3} f \given s, J & \sim s \sum_{j=1}^J j^{-1/2-\alpha} Z_j \psi_j,
\end{align}
where {\(\alpha\) is a positive constant which determines the baseline smoothness of the prior,} $p$ satisfies \eqref{eq: p}, $g$ satisfies \eqref{eq: g} and the $Z_j$
are independent standard Gaussians.
\section{Main results}
\label{sec: main}
Our main abstract result gives properties of the truncated
series prior that link directly to the conditions of existing
general theorems for posterior contraction in a variety of statistical settings.
Combined with such existing results, we obtain concrete
results for, for instance, signal estimation in white noise,
drift estimation in diffusion models, et cetera. We give concrete
examples in the next section.
As usual, if $\FF$ is a { subset of a normed vector} space
with norm $\|\cdot\|$, then we denote by $N(\eps, \FF, \|\cdot\|)$
the minimal number of balls of $\|\cdot\|$-radius $\eps$ needed to cover the set $\FF$.
\begin{thm}\label{thm: main}
Let the prior $\Pi$ on $f$ be as defined in \eqref{eq: p1}--\eqref{eq: p3},
with $\alpha > 0$ and $p$ and $g$ satisfying \eqref{eq: p}--\eqref{eq: g}. Let $f_0 \in H^\beta[0,1]$ for $\beta > 0$.
Then there exists a constant $c > 0$
such that for every $K > 1$, there exist $\FF_n \subset L^2[0,1]$ such that with
\begin{equation}\label{eq: epsn}
\eps_n =c \Big(\frac{n}{\log n}\Big)^{-\beta/(1+2\beta)},
\end{equation}
we have
\begin{align}
\label{eq: pm} \Pi(f:\|f - f_0\|_2 \le \eps_n) & \ge e^{-n\eps^2_n},\\
\label{eq: rm} \Pi(f \not \in \FF_n) &\le e^{-Kn\eps_n^2},\\
\label{eq: en} \sup_{\eps > \eps_n}\log N(a\eps, \{f \in \FF_n : \|f-f_0\|_2 \le \eps\}, \|\cdot\|_2) & \lle n\eps^2_n,
\end{align}
for all $a \in (0,1)$.
\end{thm}
The proof of the theorem is given in Section \ref{sec: proof1}.
The result matches with the sufficient conditions of existing posterior contraction theorems,
provided that the relevant statistical distance-type quantities (e.g.\ Hellinger, Kullback-Leibler, \ldots)
in the model can be appropriately linked to the $L^2$-norm on the parameter $f$. In the next
section we give two concrete SDE-related examples, which motivated the present study.
Theorem \ref{thm: main} shows that with truncated series priors of the type \eqref{eq: p1}--\eqref{eq: p3}
we can have adaptation to arbitrary degrees of smoothness in certain function estimation problems, and achieve posterior contraction
rates that are optimal up to a logarithmic factor.
Inspection of the proof of Theorem \ref{thm: main} shows that in the range $\beta \le \alpha + 1/2$,
i.e.\ if the ``baseline smoothness'' $\alpha$ of the prior happens to have been chosen large enough
relative to the smoothness $\beta$ of the true function, then we actually get the optimal rate $n^{-\beta/(1+2\beta)}$
without additional logarithmic factors. This is true under a slightly stronger condition on the prior on
the cut-off point $J$. Instead of \eqref{eq: p}, we need to assume that for constants $C, C' > 0$ it holds that
\begin{equation}\label{eq: pp}
p(j) \gtrsim e^{-Cj}, \qquad \sum_{i > j} p(i) \lle e^{-C'j}
\end{equation}
for all $j \in \NN$. This means that
the prior on $J$ can still be geometric, but that the Poisson prior on $J$ is excluded.
\begin{thm}\label{thm: main2}
Let the prior $\Pi$ on $f$ be as defined in \eqref{eq: p1}--\eqref{eq: p3},
with $\alpha > 0$ and $p$ and $g$ satisfying \eqref{eq: pp} and \eqref{eq: g}.
Let $f_0 \in H^\beta[0,1]$ for $0< \beta \le \alpha + 1/2$. Then there exists a constant $c > 0$
such that for every $K > 1$, there exist $\FF_n \subset L^2[0,1]$ such that with
\begin{equation}\label{eq: epsn2}
\eps_n =c n^{-\beta/(1+2\beta)},
\end{equation}
we have
\begin{align}
\label{eq: pm2} \Pi(f: \|f - f_0\|_2 \le \eps_n) & \ge e^{-n\eps^2_n},\\
\label{eq: rm2} \Pi(f \not \in \FF_n) &\le e^{-Kn\eps_n^2},\\
\label{eq: en2} \sup_{\eps > \eps_n}\log N(a\eps, \{f \in \FF_n : \|f-f_0\|_2 \le \eps\}, \|\cdot\|_2) & \lle n\eps^2_n,
\end{align}
for all $a \in (0,1)$.
\end{thm}
The proof of this theorem is given in Section \ref{sec: proof2}.
\section{Specific statistical settings}
\label{sec: app}
\subsection{Detecting a signal in Gaussian white noise}
Suppose we observe a sample path $X^{(n)}=(X^{(n)}_t: t \in [0,1])$ of a stochastic process
satisfying the SDE
\[
dX^{(n)}_t = f_0(t)\,dt + \frac1{\sqrt n}\,dW_t,
\]
where $W$ is a standard Brownian motion and $f_0 \in L^2[0,1]$ is an unknown signal.
To make inference about the signal we endow it with the truncated series prior $\Pi$
described in Section \ref{sec: prior} and we compute the corresponding posterior
$\Pi(\cdot \given X^{(n)})$. Theorem 3.1 of \cite{meulen2006} or Theorem 6 of \cite{GVnoniid},
combined with our main result Theorem \ref{thm: main}, imply that if $f_0 \in H^\beta[0,1]$
for $\beta > 0$, then we have the posterior contraction
\[
\Pi(f: \|f-f_0\|_2 >M_n (n/\log n)^{-\beta/(1+2\beta)} \given X^{(n)}) \overset{P_{f_0}}{\to} 0
\]
for all $M_n \to \infty$, where the convergence is in probability under the true
model corresponding to the signal $f_0$.
\subsection{Estimating the drift of an ergodic diffusion}
Suppose we observe a sample path $X^{(T)} = (X_t: t \in [0,T])$ of an
ergodic one-dimensional diffusion satisfying the SDE
\[
dX_t = b_0(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = 0,
\]
where $W$ is a standard Brownian motion, $\sigma: \RR \to \RR$ is a known
continuous function that is bounded away from $0$, and $b_0: \RR \to \RR$ is a continuous function
that satisfies the appropriate conditions to guarantee that the SDE indeed generates an ergodic diffusion
(see for instance \cite{kallenberg2002}). The goal is to estimate the
restriction $b_0|_{[0,1]}$ of $b_0$ to the interval $[0,1]$.
The likelihood for this model, given by Girsanov's formula (e.g.\ \cite{liptser2001}),
factorizes into a factor involving only the drift
on the interval $[0,1]$ and a factor involving only the restriction of the drift
to the complement $\RR\backslash [0,1]$.
As a result, since we are only interested in the drift on $[0,1]$, we can effectively
assume that it is known outside $[0,1]$ and we only have to put a prior on the restriction of
the drift to $[0,1]$. We endow this with the truncated series prior $\Pi$
described in Section \ref{sec: prior} and we compute the corresponding posterior
$\Pi(\cdot \given X^{(T)})$. Theorem 3.3 of \cite{meulen2006}
and Theorem \ref{thm: main} then imply that if $b_0|_{[0,1]} \in H^\beta[0,1]$
for $\beta > 0$, then we have the posterior contraction
\[
\Pi(b: \|b-b_0\|_2 >M_T (T/\log T)^{-\beta/(1+2\beta)} \given X^{(T)}) \overset{P_{b_0}}{\to} 0
\]
as $T \to \infty$ for all $M_T \to \infty$, where the convergence is in probability under the true
model corresponding to the drift function $b_0$.
\section{Proof of Theorem \ref{thm: main}}
\label{sec: proof1}
\subsection{Prior mass}
The following theorem implies that \eqref{eq: pm} holds with $\eps_n$ as specified.
\begin{thm}\label{thm: pm1}
Let the prior $\Pi$ on $f$ be defined according to \eqref{eq: p1}--\eqref{eq: p3},
with $\alpha > 0$ and $p$ and $g$ satisfying \eqref{eq: p}--\eqref{eq: g},
and let $f_0 \in H^\beta[0,1]$ for $\beta > 0$.
Then, for a constant $C > 0$, it holds that
\[
-\log \Pi(f: \|f-f_0\|_2 \le 2\eps) \le C \eps^{-1/\beta}\log 1/\eps,
\]
for all $\eps> 0$ small enough.
\end{thm}
\begin{prf}
Recall that $s^2$ has density $g$ under the prior. Hence, by conditioning we see that
the probability of interest is bounded from below by
\[
p\big(\big\lfloor(\eps/\|f_0\|_\beta)^{-1/\beta}\big\rfloor\big)
\int_{0}^{\infty} \Pi\Big(\Big\|{\sqrt{\eta}} \sum_{j=1}^{\big\lfloor(\eps/\|f_0\|_\beta)^{-1/\beta}\big\rfloor} j^{-1/2-\alpha} Z_j \psi_j - f_0\Big\|_2 \le 2\eps\Big)g(\eta)\,d\eta.
\]
Now suppose first that $1+2\alpha-2\beta \le 0$. Then by
Lemma \ref{lem: sb}, the preceding is further lower bounded by
\[
\exp\big(-C_1\eps^{-1/\beta}\log 1/\eps\big)
p\big(\big\lfloor(\eps/\|f_0\|_\beta)^{-1/\beta}\big\rfloor\big) \int_{\eps^{1/\beta}}^{2\eps^{1/\beta}}g(\eta)\,d\eta
\]
for a constant $C_1> 0$. By the assumptions on $p$ and $g$ this is
bounded from below by a constant times $\exp(-C_2\eps^{-1/\beta}\log 1/\eps)$
for $\eps$ small enough, for some constant \(C_2>0\).
In the other case $1+2\alpha-2\beta > 0$ we restrict the integral over $\eta$ to
a different region to obtain instead the lower bound
\[
\exp\big(-C_1\eps^{-1/\beta}\log 1/\eps\big)
p\big(\big\lfloor(\eps/\|f_0\|_\beta)^{-1/\beta}\big\rfloor\big)
\int_{\eps^{(2\beta - 2\alpha)/\beta}}^{2\eps^{(2\beta - 2\alpha)/\beta}}g(\eta)\,d\eta
\]
for some $C_1> 0$. For $\alpha < \beta \le \alpha + 1/2$
the assumptions on $p$ and on the behaviour of $g$ near $0$ ensure again that this
is
bounded from below by a constant times $\exp(-C_2\eps^{-1/\beta}\log 1/\eps)$
for $\eps$ small enough. For the range $\beta < \alpha$ this holds as well,
by the assumptions on $p$ and on the behaviour of $g$ near $\infty$.
{When \(\alpha=\beta\) we use the lower bound
\[
\exp\big(-C_1\eps^{-1/\beta}\log 1/\eps\big)
p\big(\big\lfloor(\eps/\|f_0\|_\beta)^{-1/\beta}\big\rfloor\big)
\int_{1}^{C_3}g(\eta)\,d\eta.\]
Again by the behaviour of \(g\) near infinity, the integral on the right is positive for \(C_3\) big enough and the desired lower bound holds by the assumption on \(p\).}
\end{prf}
\begin{lem}\label{lem: cb}
Let $Z_1, Z_2, \ldots$ be independent and standard normal.
There exists a universal constant $K> 1$
such that for every $s > 0$, $\eps > 0$, $J \in \NN$ and $a \in \ell^2$,
\[
-\log \PP\Big(\Big\|s \sum_{j=1}^J a_j Z_j \psi_j \Big\|_2 \le \eps\Big) \le
2 J \log \Big(K \vee \frac{s\|a\|_2}{\eps}\Big).
\]
\end{lem}
\begin{prf}
Since the $\psi_j$ form an orthonormal basis, the probability we have to lower bound
equals
\[
\PP\Big(s^2 \sum_{j=1}^J a^2_j Z^2_j \le \eps^2\Big) \ge
\PP\Big( \max_{j \le J} |Z_j| \le \frac{\eps}{s\|a\|_2}\Big) =
\Big(\PP\Big( |Z_1| \le \frac{\eps}{s\|a\|_2}\Big)\Big)^J.
\]
If ${\eps}/({s\|a\|_2}) \ge \xi_{3/4}$, with $\xi_p$ the $p$-quantile of the
standard normal distribution, { \(\PP( |Z_1| \le \eps/(s\|a\|_2))\ge 1/2.\)}
In the other case, it is at least { \(\phi(\xi_{3/4}) \times {2\eps}/({s\|a\|_2})\)},
with $\phi$ the standard normal density.
So in either case, it is at least a constant $C \in (0,1)$ times
$1 \wedge {\eps}/({s\|a\|_2})$. It follows that
\begin{align*}
\log \PP\Big(\Big\|s \sum_{j=1}^J a_j Z_j \psi_j \Big\|_2 \le \eps\Big) & \ge
J \log C+ J \log \Big(1 \wedge \frac{\eps}{s\|a\|_2}\Big)\\
& \ge 2 J \log \Big(C \wedge \frac{\eps}{s\|a\|_2}\Big).
\end{align*}
Since $\log(C \wedge x) = -\log\big(C^{-1} \vee x^{-1}\big)$ for $x > 0$, this implies the statement of the lemma, with $K = 1/C$.
\end{prf}
\begin{lem}\label{lem: sb}
Let $Z_1, Z_2, \ldots$ be independent and standard normal.
Let \(\beta>0\) and \(f_0\in H^\beta[0,1]\) be given.
There exists a constant $K > 1$ such that for all
$\eps, s, \alpha > 0$ and
$J \ge (\eps/\|f_0\|_\beta)^{-1/\beta}$,
\[
-\log \PP\Big(\Big\|s \sum_{j=1}^J j^{-1/2-\alpha} Z_j \psi_j - f_0\Big\|_2 \le 2\eps\Big) \le
2 J \log \Big(K \vee \frac{s}{\eps}\Big) +
\frac{\|f_0\|^2_\beta}{s^2}J^{{(1+2\alpha-2\beta) \vee 0}}.
\]
\end{lem}
\begin{prf} {For fixed \(J,s\), the sum
$s \sum_{j=1}^J j^{-1/2-\alpha} Z_j \psi_j$ is a centered Gaussian random element in $L^2[0,1]$ and has a
reproducing kernel Hilbert space (RKHS),} which is the space $\HHH^{s,J}$
of all functions $h = \sum_{j\le J} h_j\psi_j$,
with RKHS-norm
\[
\Big\|\sum_{j\le J} h_j\psi_j\Big\|^2_{\HHH^{s, J}} = \frac1{s^2}\sum_{j \le J} {j^{1+2\alpha}h_j^2}.
\]
The function $f_0$ admits a series expansion $f_0 = \sum f_j\psi_j$. For $J_0 \le J$,
consider the function $h_0 = \sum_{j \le J_0} f_j\psi_j$ in the RKHS. It holds that
\[
\|f_0 - h_0\|^2_2 = \sum_{j > J_0} f^2_j \le J_0^{-2\beta} \|f_0\|^2_\beta.
\]
Hence for $J_0 = \lfloor(\eps/\|f_0\|_\beta)^{-1/\beta}\rfloor$, we have that $\|f_0 - h_0\|_2 \le \eps$.
The condition on $J$ ensures that $h_0$ is an element of the RKHS, and
\[
\|h_0\|^2_{\HHH^{s, J}} = \frac1{s^2}\sum_{j \le J_0} {j^{1+2\alpha-2\beta}j^{2\beta}f_j^2}
\le \frac{\|f_0\|^2_\beta}{s^2}J_0^{(1+2\alpha-2\beta) \vee 0}.
\]
It follows that
\begin{equation}\label{eq:upperboundfortheinfimumoveranepsRKHSballaroundthetrueparameter}
\inf_{h \in \HHH^{s, J} \atop \|h-f_0\|\le \eps}\|h\|^2_{\HHH^{s,J}} \le
\frac{\|f_0\|^2_\beta}{s^2}J^{(1+2\alpha-2\beta) \vee 0}.
\end{equation}
Combining this with the preceding lemma and Lemma 5.3 of \cite{rkhs2008} completes the proof.
\end{prf}
\subsection{Sieves, remaining mass and entropy}
Let the sequence $\eps_n \to 0$ and $\beta > 0$ be given. We consider sieves of growing dimension of the form
\begin{equation}\label{eq: s}
\FF_n = \Big\{h=\sum_{j\le J_n} h_j\psi_j\Big\},
\end{equation}
where
\begin{equation}\label{eq: j}
J_n = K_1 \eps_n^{-1/\beta}\log 1/\eps_n
\end{equation}
for a constant $K_1 > 0$ specified below.
By assumption \eqref{eq: p} we have
\[
\Pi(f \not\in \FF_n ) = \Pi(J > J_n) \lle e^{-C'K_1\eps_n^{-1/\beta} \log 1/\eps_n}.
\]
This implies that statement \eqref{eq: rm} of Theorem \ref{thm: main} holds if $K_1$ is chosen
large enough.
As for the entropy condition \eqref{eq: en}, we note that if the
function $f_0$ admits the series expansion $f_0 = \sum_{j} f_{0,j}\psi_j$,
then a function $f \in\FF_n$ which satisfies $\|f-f_0\|_2\le \eps$ is of the form $f = \sum_{j \le J_n} f_j\psi_j$,
and
$\sum_{j \le J_n} (f_j-f_{0,j})^2 \le \eps^2$.
Hence, the covering number in \eqref{eq: en} is bounded
by the $a\eps$-covering number of a ball of radius $\eps$ in $\RR^{J_n}$, which is
bounded by $(3/a)^{J_n}$ (see, for instance, \cite{pollard1990}).
In view of the choice \eqref{eq: j} of $J_n$ it follows that \eqref{eq: en} holds.
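Explicitly, these choices are calibrated so that all three statements of Theorem \ref{thm: main} hold with the same rate: for $\eps_n$ as in \eqref{eq: epsn} we have $\log(1/\eps_n) \asymp \log n$, and hence
\[
J_n \asymp \eps_n^{-1/\beta}\log(1/\eps_n) \asymp \Big(\frac{n}{\log n}\Big)^{1/(1+2\beta)}\log n = n^{1/(1+2\beta)}(\log n)^{2\beta/(1+2\beta)} \asymp n\eps_n^2.
\]
In particular, the entropy bound $\log N \lle J_n \log(3/a) \lle n\eps_n^2$ holds, and the remaining mass bound above is of the form $e^{-cK_1 n\eps_n^2}$ for a constant $c > 0$, which is at most $e^{-Kn\eps_n^2}$ once $K_1$ is chosen large enough.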
\section{Proof of Theorem \ref{thm: main2}}
\label{sec: proof2}
Under the conditions of Theorem \ref{thm: main2} we can
replace the result of Theorem \ref{thm: pm1} by the following,
which implies that \eqref{eq: pm2} holds.
\begin{thm}
Let the prior $\Pi$ on $f$ be defined according to \eqref{eq: p1}--\eqref{eq: p3},
with $\alpha > 0$ and $p$ and $g$ satisfying \eqref{eq: pp} and \eqref{eq: g},
and let $f_0 \in H^\beta[0,1]$ for $0 < \beta \le \alpha + 1/2$.
Then, for a constant $C > 0$, it holds that
\[
-\log \Pi(f: \|f-f_0\|_2 \le \eps) \le C \eps^{-1/\beta},
\]
for all $\eps> 0$ small enough.
\end{thm}
\begin{prf}
Instead of using Lemma \ref{lem: cb} we simply note that for $s > 0$ and $J \in \NN$,
and $Z_1, Z_2, \ldots$ independent and standard normal,
\[
-\log\PP\Big(\Big\|s \sum_{j=1}^J j^{-1/2-\alpha} Z_j \psi_j \Big\|_2 \le \eps\Big)
\le -\log \PP\Big(\Big\| \sum_{j=1}^\infty j^{-1/2-\alpha} Z_j \psi_j \Big\|_2 \le \eps/s\Big).
\]
By Lemma 4.2 of \cite{waaij2016} the right-hand side is bounded by a constant times $(\eps/s)^{-1/\alpha}$.
{ Using \eqref{eq:upperboundfortheinfimumoveranepsRKHSballaroundthetrueparameter} and Lemma 5.3 of \cite{rkhs2008},
we see that for $J \ge (\eps/\|f_0\|_\beta)^{-1/\beta}$,}
\[
-\log \PP\Big(\Big\|s \sum_{j=1}^J j^{-1/2-\alpha} Z_j \psi_j - f_0\Big\|_2 \le 2\eps\Big) \lle
\Big(\frac s\eps\Big)^{1/\alpha} +
\frac{1 \vee \eps^{{-(1+2\alpha-2\beta)/\beta}}}{s^2}.
\]
For $\beta \le \alpha + 1/2$ the two terms on the right are balanced for $s$ of the order
$\eps^{(\beta-\alpha)/\beta}$,
in which case the right-hand side is bounded by a constant times $\eps^{-1/\beta}$.
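The balancing can be checked explicitly: substituting $s = \eps^{(\beta-\alpha)/\beta}$ gives
\[
\Big(\frac{s}{\eps}\Big)^{1/\alpha} = \big(\eps^{(\beta-\alpha)/\beta - 1}\big)^{1/\alpha} = \big(\eps^{-\alpha/\beta}\big)^{1/\alpha} = \eps^{-1/\beta},
\qquad
\frac{\eps^{-(1+2\alpha-2\beta)/\beta}}{s^2} = \eps^{-\frac{2(\beta-\alpha)}{\beta} - \frac{1+2\alpha-2\beta}{\beta}} = \eps^{-1/\beta},
\]
where we used that $1 + 2\alpha - 2\beta \ge 0$ precisely in the range $\beta \le \alpha + 1/2$, so that the maximum $1 \vee \eps^{-(1+2\alpha-2\beta)/\beta}$ is attained by the second term.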
In view of assumption \eqref{eq: pp} and \eqref{eq: g}, it follows by conditioning that
\[
\Pi(f: \|f-f_0\|_2 \le 2\eps) \ge \exp\big(-c_1\eps^{-1/\beta}\big) p\big(\big\lfloor c_2\eps^{-1/\beta}\big\rfloor\big)
\int_{\eps^{(2\beta-2\alpha)/\beta}}^{{c_3}\eps^{(2\beta-2\alpha)/\beta}}g(\eta)\,d\eta.
\]
The assumptions on $p$ and $g$ ensure
that this is bounded from below by a constant times $\exp(-C\eps^{-1/\beta})${ for some constants \(C,c_1,c_2,c_3>1\).}
\end{prf}
To complete the proof of Theorem \ref{thm: main2} we note that in this case we can use
the same sieves $\FF_n$ as defined in \eqref{eq: s}, but with a different choice
for the dimension $J_n$, namely $J_n =\big\lceil K_1\eps_n^{-1/\beta}\big\rceil$, for some $K_1 > 0$.
The tail condition in \eqref{eq: pp} then ensures that \eqref{eq: rm2} holds if $K_1$
is chosen large enough. The entropy bound \eqref{eq: en2} is obtained by the same
argument as before, but now using the new choice of $J_n$.
\bibliographystyle{harry}
\section{Introduction}
Suppose $k$ is a field of characteristic $p>0$ and $g\geq 2$. We would like to describe the loci of curves $C$ defined by invariants of the $p$-torsion of its Jacobian, for $C$ either a smooth (irreducible, projective, algebraic) genus-$g$ curve over $k$ or a genus-$g$ stable curve over $k$ of compact type (i.e., a stable curve whose dual graph is a tree).
Let $\mathcal{M}_g$ be the moduli space of smooth genus-$g$ curves, $\mathcal{M}^{ct}_g$ the moduli space of stable genus-$g$ curves of compact type, and $\mathcal{A}_g$ the moduli space of principally polarized $g$-dimensional abelian varieties. To a genus-$g$ stable curve of compact type $C$, we can attach its Jacobian variety $\mathcal{J}_C$, and this induces the Torelli morphism $$j: \mathcal{M}^{ct}_g \to \mathcal{A}_g.$$ We call $j(\mathcal{M}_g^{ct}) = \mathcal{J}_g$ the Torelli locus, and $j(\mathcal{M}_g) = \mathcal{J}_g^0 \subset \mathcal{J}_g$ the open Torelli locus. It is well-known that $\dim \mathcal{M}_g = \dim \mathcal{M}^{ct}_g = \dim \mathcal{J}_g^0 = \dim \mathcal{J}_g = 3 g - 3$, and $\dim \mathcal{A}_g = \frac{g(g + 1)}{2}$.
We say that an abelian variety $A$ over $k$ of dimension $g>0$ is \textit{supersingular} if there is an isogeny $$A \sim E^g,$$ where $E$ is a supersingular elliptic curve over $\bar{k}$, that is $E[p](\bar{k}) = \{O\}$.
We say that a curve $C$ of compact type is supersingular if its Jacobian is supersingular. We denote by $\mathcal{S}_g \subset \mathcal{A}_g$ the locus of supersingular $g$-dimensional abelian varieties. Much is known about $\mathcal{S}_g$; e.g., \cite{lioort}, Theorem 4.9 gives $$\dim \mathcal{S}_g = \left \lfloor \frac{g^2}{4} \right \rfloor$$ for any $g\geq 1$. Furthermore, using the theory of Dieudonn\'e modules and the Hecke correspondence, some stratifications of $\mathcal{A}_g$ over $k$ that we mention below are described very well; see for example \cite{normanoort}, \cite{lioort}, and \cite{chaioort}.
We are mainly interested in the locus of supersingular curves, and hence in $\mathcal{S}_g\cap \mathcal{J}_g$, the locus of supersingular $g$-dimensional Jacobians. For $g = 2$ or $g = 3$, $\mathcal{J}_g^0$ is dense in $\mathcal{A}_g$, so we get information about the supersingular curves directly. However, when $g\geq 4$, we know very little about the supersingular curves in general. For example, even for $g = 4$, it was only recently shown in \cite{kudoharashitasenda} that for an arbitrary prime $p>0$ there is a supersingular smooth curve of genus $4$. The corresponding question for arbitrary $p>0$ and $g\geq 5$ is still open. Furthermore, the knowledge about the structure of the locus of supersingular curves and some related loci is very limited. See \cite{pries_current_results}, where some of the open questions are mentioned. The biggest obstacle is that we cannot directly use the tools developed for working in $\mathcal{A}_g$.
For principally polarized $g$-dimensional abelian varieties $A$ over $k$, there are several invariants that introduce the stratifications of $\mathcal{A}_g$: \textit{$p$-rank}, \textit{Newton polygon}, \textit{Ekedahl-Oort type}, and \textit{$a$-number}. The \textit{$p\text{-}\rank$} of $A$ is defined as the number $f$, with $0\leq f\leq g$, so that $$\#A[p](\bar{k}) = p^f.$$ Let $A[p^{\infty}] = \varinjlim A[p^n]$ be the $p$-divisible group of $A$. By the Dieudonn\'e-Manin classification \cite{manin}, there are certain $p$-divisible groups $G_{c, d}$ for $c, d\geq 0$ relatively prime integers, so that $A[p^{\infty}]$ is up to isogeny (of $p$-divisible groups) equal to $$\oplus_{\lambda = \frac{d}{c + d}}G_{c, d},$$ for a unique choice of \textit{slopes} $\lambda$. The Newton polygon of $A$ is defined as the collection of those $\lambda$ counted with multiplicities. It holds that $$A \text{ is supersingular if and only if } A[p^{\infty}] \sim (G_{1, 1})^g.$$
The $p\text{-}\rank$ and the Newton polygon are isogeny invariants. The number of slopes $\lambda = 0$ equals the $p\text{-}\rank$ of $A$, and in particular, the supersingular locus $\mathcal{S}_g$ is contained in the $p\text{-}\rank$ zero locus $V_0$. The isomorphism class of the $p$-torsion group scheme $A[p]$ is determined by the Ekedahl-Oort type of $A$, which we present with the Young type $$\mu = [\mu_1, \mu_2, \ldots, \mu_n], $$ with $g\geq \mu_1 > \ldots > \mu_n > 0$; equivalently, the Ekedahl-Oort type is also determined by the final type $\nu = \{\nu(1), \nu(2), \ldots, \nu(g)\}$, connected with $\mu$ via $\mu_j = \#\{i: 1\leq i \leq g, \nu(i) + j \leq g\}$. The $a$-number of $A$ is $$a(A) = \dim_k\mathrm{Hom}(\alpha_p, A),$$ where $\alpha_p$ is the group scheme defined as the kernel of Frobenius on the additive group $\mathbb{G}_a$. If $A$ has the Ekedahl-Oort type $[\mu_1, \ldots, \mu_n]$, then $p\text{-}\rank(A) = g - \mu_1$ and $a(A) = n$. In general, we have $1\leq p\text{-}\rank(A) + a(A) \leq g$. For a smooth curve $C$, we define these invariants in terms of its Jacobian $\mathcal{J}_C$; e.g., $p\text{-}\rank(C) = p\text{-}\rank(\mathcal{J}_C)$. See \cite{oort_mixed_char} or \cite{pries_current_results} for more details.
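For example, in the case $g = 4$ considered below, an abelian fourfold with Ekedahl-Oort type $\mu = [4]$ has $$p\text{-}\rank(A) = 4 - \mu_1 = 0 \quad \text{and} \quad a(A) = 1,$$ while the type $\mu = [4, 3, 2, 1]$ gives $p\text{-}\rank$ zero and the maximal $a$-number $4$; abelian varieties with $a$-number equal to $g$ are precisely the superspecial ones, i.e., those isomorphic to a product of $g$ supersingular elliptic curves.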
\\
As one of the first steps, we describe what happens when $g = 4$ and $p = 2$ and consider the previously introduced moduli spaces over $\overline{\mathbb{F}}_2$, that is, $\mathcal{M}_4 = \mathcal{M}_4 \otimes \overline{\mathbb{F}}_2$, $\mathcal{A}_4 = \mathcal{A}_4 \otimes \overline{\mathbb{F}}_2$, etc. A result from \cite{ddmt} gives that $\mathcal{S}_4 = \mathcal{S}_4 \otimes \overline{\mathbb{F}}_2$ is irreducible. We investigate the locus of supersingular Jacobians of genus-$4$ curves, i.e., the intersection of the $9$-dimensional $\mathcal{J}_4$ with the $4$-dimensional $\mathcal{S}_4$ inside the $10$-dimensional $\mathcal{A}_4$. The stack $\mathcal{A}_4$ is smooth, and thus the codimension of an intersection in $\mathcal{A}_4$ is at most the sum of the codimensions; we use this in the rest of the paper without explicitly mentioning it. By considering $\mathbb{F}_2$-points of the loci $\mathcal{J}_4$ and $\mathcal{S}_4$ and using the data from \cite{lmfdb} and \cite{xarles}, we get that $\mathcal{S}_4$ is not contained in the Torelli locus $\mathcal{J}_4$. That leads to the conclusion that each irreducible component of $$(\mathcal{S}_4 \cap \mathcal{J}_4)\otimes \overline{\mathbb{F}}_2$$ is of dimension three; see Theorem \ref{thm:supersingular_curves_dim3}. In Remarks \ref{rem:eo43_implies_ss3dim} and \ref{rem:supersingular_locus_3dim}, we offer alternative proofs.
Then, we consider the Ekedahl-Oort loci and their connections with the Newton polygon strata of $2\text{-}\rank$ zero and describe their intersections with $\mathcal{J}_4$. There are eight Ekedahl-Oort strata and three Newton polygon strata inside the $2\text{-}\rank$ zero locus $V_0$ of $\mathcal{A}_4$. The mentioned Ekedahl-Oort ones $Z_{\mu}$ are defined by $$\mu \in \{[4], [4, 1], [4, 2], [4, 3], [4, 2, 1], [4, 3, 1], [4, 3, 2], [4, 3, 2, 1]\},$$ and consist of principally polarized abelian fourfolds $A$ with the type $\mu_A = \mu$.
The Newton polygon strata are defined by the slope sequences $$\left (\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}\right ), \left (\frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{2}, \frac{1}{2}, \frac{2}{3}, \frac{2}{3}, \frac{2}{3}\right ), \text{ and } \left (\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{3}{4}, \frac{3}{4}, \frac{3}{4}, \frac{3}{4}\right ), $$ and we denote the corresponding locally closed subsets of $\mathcal{A}_4$ (consisting of principally polarized abelian varieties with those Newton polygons) by $\mathcal{S}_4$, $\mathcal{N}_{1/3}$, and $\mathcal{N}_{1/4}$. We further denote $\mathcal{N}_{\leq 1/3} = \mathcal{N}_{1/3} \sqcup {\mathcal{S}_4}$ and $\mathcal{N}_{\leq 1/4} = \mathcal{N}_{1/4} \sqcup \mathcal{N}_{1/3} \sqcup {\mathcal{S}_4}$.
We show in Theorem \ref{thm:generic_a_num} that the generic point of any irreducible component of $(V_0\cap \mathcal{J}_4)\otimes \overline{\mathbb{F}}_2$ has $a$-number one.
Furthermore, we show in Corollary \ref{cor:np_conclusion} and Corollary \ref{cor:eo_conclusion} that $$(\mathcal{N}_{\leq 1/3}\cap \mathcal{J}_4 )\otimes \overline{\mathbb{F}}_2 \text{ and } ( \mathcal{N}_{\leq 1/4}\cap \mathcal{J}_4 )\otimes \overline{\mathbb{F}}_2$$ are of the expected dimensions, as well as $$( Z_{\mu}\cap \mathcal{J}_4 )\otimes \overline{\mathbb{F}}_2$$ for Young types $\mu = [4], [4, 1], [4, 2]$. See \cite{zhou_genus4} for related questions in characteristic three. Consequently, we get information about the Newton polygon and the Ekedahl-Oort stratification of $\mathcal{M}_4$.
\subsection*{Acknowledgement}
The author is grateful to his supervisor Carel Faber for all the discussions and valuable comments and to Stefano Marseglia for the conversation and the help with the code supporting Example \ref{ex:isog_class_ss_bkm}. The author is supported by the Mathematical Institute of Utrecht University.
\newpage
\section{Abelian varieties over finite fields}
Here, we recall some well-known facts about abelian varieties defined over finite fields; for example, see \cite{lmfdb_paper}.
Let $[\mathcal{A}_4(\mathbb{F}_q)]$ denote the set of $\mathbb{F}_q$-isomorphism classes of principally polarized abelian fourfolds over $\mathbb{F}_q$ and $[\mathcal{M}_4(\mathbb{F}_q)]$ the set of $\mathbb{F}_q$-isomorphism classes of genus-$4$ smooth curves over $\mathbb{F}_q$. In addition, let $[\mathcal{M}^0_4(\mathbb{F}_q)]$ and $[\mathcal{H}_4(\mathbb{F}_q)]$ denote the sets of $\mathbb{F}_q$-isomorphism classes of non-hyperelliptic and of hyperelliptic genus-$4$ smooth curves over $\mathbb{F}_q$, respectively.
Let $q = p^r$, with $p$ a prime number and $r\in \mathbb{Z}_{>0}$, and let $A$ be an abelian variety of dimension $g$ over $\mathbb{F}_q$. We denote the $q$-Frobenius by $F = F_q$ and write $$P_{A/\mathbb{F}_q} = P_{A/\mathbb{F}_q}(t) = \det (t - F \mid H^1(A))$$ for its characteristic polynomial (which is of degree $2g$), called \textit{the Weil $q$-polynomial of $A$}.
If $B$ is another abelian variety over $\mathbb{F}_q$, the Honda-Tate theorem gives us that $P_{A/\mathbb{F}_q}$ divides $P_{B/\mathbb{F}_q}$ if and only if $B$ is isogenous over $\mathbb{F}_q$ to a product in which $A$ occurs as a factor. In particular, $$P_{A/\mathbb{F}_q} = P_{B/\mathbb{F}_q} \text{ if and only if } A \text{ and } B \text{ are isogenous over } \mathbb{F}_q.$$ Moreover, the characteristic polynomial $P_{A/\mathbb{F}_q}$ determines the Newton polygon of $A$; see \cite{lmfdb_paper}, Section 2.4.
If $P_{A/\mathbb{F}_q}(t) = \prod_{i = 1}^{2g}(t - \alpha_i)$ is the Weil polynomial for $A$, then the Weil polynomial associated with the base change of $A$ to $\mathbb{F}_{q^n}$ is
\begin{equation}
\label{eq:base_ext_avs}
P_{A/\mathbb{F}_{q^n}}(t) = \prod_{i = 1}^{2g}(t - \alpha_i^n).
\end{equation}
\noindent
That follows from the fact that the $\alpha_i$ are the eigenvalues of the $q$-Frobenius on $H^1(A)$ and the eigenvalues of the $q^n$-Frobenius are then the $\alpha_i^n$.
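The base-change formula \eqref{eq:base_ext_avs} can be verified for explicit polynomials without computing the roots: the power sums $p_k = \sum_i \alpha_i^k$ are determined by the coefficients of $P_{A/\mathbb{F}_q}$ via Newton's identities, and the same identities reconstruct the coefficients of $P_{A/\mathbb{F}_{q^n}}$ from the power sums $p_{nk}$ of the $\alpha_i^n$. The following sketch in plain Python illustrates this (the helper names are ours, not from any cited software; exact rational arithmetic via the standard `fractions` module):

```python
from fractions import Fraction

def power_sums(coeffs, count):
    # coeffs = [c_1, ..., c_d] of a monic polynomial t^d + c_1 t^{d-1} + ... + c_d;
    # returns the power sums [p_1, ..., p_count] of its roots via Newton's identities.
    d = len(coeffs)
    e = [Fraction(1)] + [Fraction((-1) ** k * coeffs[k - 1]) for k in range(1, d + 1)]
    p = []
    for k in range(1, count + 1):
        s = sum(((-1) ** (i - 1)) * e[i] * p[k - i - 1]
                for i in range(1, min(k - 1, d) + 1))
        if k <= d:
            s += ((-1) ** (k - 1)) * k * e[k]
        p.append(s)
    return p

def base_change(coeffs, n):
    # coefficients [c_1', ..., c_d'] of prod_i (t - alpha_i^n), where the alpha_i
    # are the roots of the input polynomial; cf. the displayed base-change formula
    d = len(coeffs)
    p = power_sums(coeffs, n * d)
    q = [p[n * k - 1] for k in range(1, d + 1)]          # power sums of the alpha_i^n
    e = [Fraction(1)]
    for k in range(1, d + 1):                             # inverse Newton identities
        e.append(sum(((-1) ** (i - 1)) * e[k - i] * q[i - 1]
                     for i in range(1, k + 1)) / k)
    return [((-1) ** k) * e[k] for k in range(1, d + 1)]
```

For instance, applied with $n = 4$ to the octic $t^8 + 2t^7 + 2t^6 - 4t^4 + 8t^2 + 16t + 16$, the routine reproduces the coefficients of $(t^4 - 4t^3 + 16t^2 - 64t + 256)^2$, in agreement with the base-change claim made in Example \ref{ex:isog_class_ss} below.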
For a stable curve $C$ over $\mathbb{F}_{q}$ of compact type, we denote by $$P_{C/\mathbb{F}_{q}}(t):= P_{\mathcal{J}_C/\mathbb{F}_{q}}(t)$$ the characteristic polynomial of $\mathcal{J}_C$ over $\mathbb{F}_{q}$.
When $C$ is a smooth curve of genus $g$ defined over $\mathbb{F}_q$, we can explicitly determine $P_{C/\mathbb{F}_{q}}$ by counting the number of points of $C$ over some extensions of the base field. Consider the zeta-function of $C/\mathbb{F}_q$ defined by $$Z(C/\mathbb{F}_q, t) = \exp \left (\sum_{n\geq 1}\# C(\mathbb{F}_{q^n})\frac{t^n}{n} \right ). $$ The Weil conjecture for curves gives us that
\begin{equation}
Z(C/\mathbb{F}_q, t) = \frac{L(t)}{(1 - t)(1 - qt)},
\label{eq:weil_conj}
\end{equation}
where $L(t) = t^{2g}P_{\mathcal{J}_{C}/\mathbb{F}_q}(1/t)$ is the $L$-polynomial of $\mathcal{J}_C/\mathbb{F}_q$. In particular, using the numbers $\#C(\mathbb{F}_{q^i})$ for $i = 1, \ldots, g$, we can determine $P_{C/\mathbb{F}_{q}}$, and hence the isogeny class of $\mathcal{J}_C$ over $\mathbb{F}_q$. These numbers also determine the Newton polygon of $C$: if we write $L(t) = \sum_{i = 0}^{2g}a_it^i$ (and recall $q = p^r$), then the Newton polygon of $C$ is the lower convex hull of the points $$\{(i, \mathrm{val}_p(a_i)/r): 0\leq i\leq 2g\}$$ with $\mathrm{val}_p$ the $p$-adic valuation. The curve $C$ is supersingular if and only if its Newton polygon is a straight line with slope $1/2$ starting at $(0, 0)$ and ending at $(2g, g)$. See further \cite{pries_current_results}, Section 2.
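The lower-convex-hull recipe just described is easy to mechanize. A short sketch in plain Python (function names are ours; exact arithmetic) computes the slope sequence from the coefficients $a_0, \ldots, a_{2g}$ of $L(t)$; points with $a_i = 0$ have infinite valuation and never lie on the lower hull, so they are skipped:

```python
from fractions import Fraction

def val(a, p):
    # p-adic valuation of a nonzero integer
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def newton_slopes(l_coeffs, p, r=1):
    # l_coeffs = [a_0, ..., a_{2g}] of the L-polynomial; q = p^r
    pts = [(i, Fraction(val(a, p), r)) for i, a in enumerate(l_coeffs) if a != 0]
    hull = []                      # lower convex hull, built left to right
    for pt in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the segment hull[-2] -> pt
            if (x2 - x1) * (pt[1] - y1) - (y2 - y1) * (pt[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(pt)
    slopes = []                    # one slope per lattice width, with multiplicity
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        slopes += [(y2 - y1) / (x2 - x1)] * (x2 - x1)
    return slopes
```

For instance, for the $L$-polynomial coefficients $[1, 4, 10, 20, 32, 40, 40, 32, 16]$ of the curve in Example \ref{ex:smooth_ss_curve} below, all eight slopes come out equal to $1/2$, confirming supersingularity.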
To additionally connect the curves with the corresponding Jacobian varieties in a way that we will use in our arguments, we consider the following well-known lemma.
\begin{lem}
\label{lem:curves_jacobians_autogp}
Let $C$ be a smooth curve over $k$. Then, the map $\epsilon \mapsto (\epsilon^{-1})^*$ gives a group homomorphism $\varphi: \mathrm{Aut}_{k} C \to \mathrm{Aut}_{k} \mathcal{J}_C$
such that:
\begin{enumerate}
\item $\varphi$ is injective;
\item if $C$ is hyperelliptic, then $\varphi$ is an isomorphism;
\item if $C$ is not hyperelliptic, then $$\mathrm{Aut}_{k} \mathcal{J}_C \cong \{\pm 1\}\times \mathrm{Aut}_{k} C.$$
\end{enumerate}
\end{lem}
\begin{proof}
This follows from Torelli's theorem \cite{milne}, Theorem 12.1; see also \cite{howe}, Proposition 4.1.
\end{proof}
\begin{rem}
The Torelli morphism induces a bijection of sets $[\mathcal{H}_4(\mathbb{F}_q)]$ and $[j(\mathcal{H}_4)(\mathbb{F}_q)]$. Furthermore, when $C$ is a non-hyperelliptic curve in $[\mathcal{M}_4(\mathbb{F}_q)]$, then there are precisely two elements in $[\mathcal{A}_4(\mathbb{F}_q)]$ whose representatives are isomorphic over $\overline{\mathbb{F}}_q$ to $\mathcal{J}_C$: $\mathcal{J}_C$ and its twist $\mathcal{J}_C^{-1}$. For more details, see for example \cite{lauter}, Appendix, or \cite{berfabgeer}, Sections 8.1 and 8.2.
\end{rem}
\subsection{Computing polarizations of abelian varieties over finite fields}
Let $h = h(x)$ be a square-free Weil polynomial without real roots and set $L = \mathbb{Q}[x]/(h)$. In \cite{bkm}, Bergstr\"om, Karemaker, and Marseglia gave a description of the polarizations of abelian varieties over a finite field inside the fixed isogeny class defined by $h$, under the assumption that at least one abelian variety in that isogeny class admits a lifting to characteristic zero for which the reduction morphism induces an isomorphism of endomorphism rings. That description relies on the antiequivalence of categories
\begin{equation}
\mathcal{G}: \mathrm{AV}_{\mathbb{F}_p}(h) \to \mathcal{I}(R_h),
\label{eq:CS_equiv}
\end{equation}
where $\mathrm{AV}_{\mathbb{F}_p}(h)$ is the category of abelian varieties over $\mathbb{F}_p$ inside the $\mathbb{F}_p$-isogeny class defined by the polynomial $h$ (using Honda-Tate theory) and with $\mathbb{F}_p$-morphisms, and $\mathcal{I}(R_h)$ is the category of fractional $R_h$-ideals in $L$ with $R_h$-linear morphisms, where $R_h$ is the order in $L$ generated by $\pi = x \text{ }(\text{mod }h)$ and $p/\pi$. The equivalence is provided by \cite{mars}, Theorem 4.3, and obtained using the Centeleghe-Stix equivalence from \cite{centstix}.
Under $\mathcal{G}$, for $B, B' \in \mathrm{AV}_{\mathbb{F}_p}(h)$ we have $\mathcal{G}(\mathrm{Hom}(B, B')) = (\mathcal{G}(B): \mathcal{G}(B'))$, and if $\mathcal{G}(B) = I$ then $\mathcal{G}(B^{\vee}) = \Bar{I}^t$, where $I^t = \{a\in L: \mathrm{Tr}_{L/\mathbb{Q}}(aI) \subseteq \mathbb{Z}\}$ is the trace dual ideal of $I$ and $\overline{I} = \{\bar{x}: x\in I\}$ with $x\mapsto \bar{x}$ the involution of $L$.
Let $L_{\mathbb{R}}$ be the unique totally real subalgebra of $L$, let $L^{+}$ be the set of totally positive elements of $L_{\mathbb{R}}$, and let $S_{\mathbb{R}}^* = S^*\cap L_{\mathbb{R}}$ be the group of totally real units in $S$. Furthermore, for a CM-type $\Phi$ of $L$, let
$$\Sigma_{\Phi} = \begin{Bmatrix}
& S \text{ is Gorenstein and } S = \Bar{S}, \text{ and there exists } \\ S \subseteq L \text{ order}: &\text{ } A_0\in \mathrm{AV}_{\mathbb{F}_p}(h) \text{ with CM-type }\Phi, \text{ and } \mathrm{End}(A_0) = S, \\ & \text{ that admits a canonical lifting to a }p\text{-adic field }K
\end{Bmatrix}.$$
Consider an abelian variety $B_0$ in $\mathrm{AV}_{\mathbb{F}_p}(h)$, and denote $I = \mathcal{G}(B_0)$, $T = (I:I)$. Let $\mathcal{T}$ be a transversal of the quotient $T^*/\<v\bar{v}: v \in T^*\>$, i.e., a set containing exactly one representative for each class in $T^*/\<v\bar{v}: v \in T^*\>$. If $B_0$ is principally polarized, then, in particular, it follows that $B_0 \cong B_0^{\vee}$, so there is an element $i_0\in L^*$ such that $i_0 \bar{I}^t = I$. If such an $i_0$ exists, then for an arbitrary $\mathbb{C}$-valued CM-type $\Phi$, we denote $$\mathcal{P}_{\Phi}^{1}(I) = \{i_0u: u \in \mathcal{T} \text{ such that }i_0u \text{ is totally imaginary and }\Phi\text{-positive}\}. $$ Otherwise, when $B_0$ is not isomorphic to $B_0^{\vee}$, such an $i_0$ does not exist, and we set $$\mathcal{P}_{\Phi}^{1}(I) = \emptyset$$ for all $\Phi$.
We collect some results of \cite{bkm} into the following Proposition. The authors of \cite{bkm} offer an algorithm for computing representatives of principal polarizations of any abelian variety inside a fixed isogeny class whenever \eqref{eqthm:552} is satisfied, using that \eqref{eqthm:552} can be checked algorithmically. We use this in Section \ref{sec:jacobians} to prove Theorem \ref{thm:supersingular_curves_dim3}.
\begin{prop} Let $\Phi_b$ be a CM-type for $L = \mathbb{Q}[x]/(h)$ for which $\Sigma_{\Phi_b}$ is non-empty and let $S\in \Sigma_{\Phi_b}$ be an order. For an abelian variety $B_0 \in \mathrm{AV}_{\mathbb{F}_p}(h)$ let $I = \mathcal{G}(B_0)$ and $T = (I:I)$, and let $\mathcal{T}$ be a transversal of $T^*/\<v\bar{v}: v\in T^*\>$ as above. If it holds that
\begin{equation}
\text{for every }\xi \in S_{\mathbb{R}}^* \text{ we have }\xi L^{+}\cap \mathcal{T} \neq \emptyset,
\label{eqthm:552}
\end{equation}
then the complete set of representatives of principal polarizations of $B_0$ up to isomorphism is in bijection with the set $\mathcal{P}^{1}_{\Phi_b}(I)$.
\label{thm:552}
\end{prop}
\begin{proof}
This follows from \cite{bkm}, Theorem 5.5, Theorem 4.10 and Proposition 5.2.
\end{proof}
\begin{rem}
\label{rmk:reflex_cond}
\cite{bkm}, Corollary 4.9 gives that $\Sigma_{\Phi}$ is non-empty under the assumption that $(L, \Phi)$ satisfies the \textit{generalized residue reflex condition} (see \cite{bkm}, Definition 2.14).
\end{rem}
\newpage
\section{Jacobian varieties over $\mathbb{F}_2$}
\label{sec:jacobians}
In this section, we describe the Jacobians of genus-$4$ curves of compact type over $\mathbb{F}_2$. By comparing the stack counts in certain $\mathbb{F}_2$-isogeny classes of principally polarized abelian varieties with the ones coming from Jacobians, i.e., by comparing $\mathbb{F}_2$-points of $\mathcal{S}_4 = \mathcal{S}_4 \otimes \overline{\mathbb{F}}_2$ and $\mathcal{J}_4 = \mathcal{J}_4 \otimes \overline{\mathbb{F}}_2$, we will conclude that the supersingular locus $\mathcal{S}_4$ is not contained in $\mathcal{J}_4$.
Let $C$ be a stable genus-$4$ curve of compact type, defined over $\mathbb{F}_2$. It is well known that the Jacobian of $C$ is isomorphic to the product of the Jacobians of its (non-rational) irreducible components. Write $C_{g, \mathbb{F}_{2^n}}^{(i)}$ for a smooth curve of genus $g\geq 1$ defined over $\mathbb{F}_{2^n}$ that is an irreducible component of $C$; the superscript $(i)$ distinguishes, in general, distinct curves. Up to isomorphism over $\mathbb{F}_2$, we have the following list of possibilities for the Jacobian of a genus-$4$ curve of compact type defined over $\mathbb{F}_2$.
\begin{enumerate}[(i)]
\item $\mathcal{J}_C = \mathcal{J}_{C_{4, \mathbb{F}_2}^{(0)}}$.
\item $\mathcal{J}_C = \mathcal{J}_{C_{3, \mathbb{F}_2}^{(0)}} \times\mathcal{J}_{C_{1, \mathbb{F}_2}^{(0)}}$.
\item $\mathcal{J}_C = \mathcal{J}_{C_{2, \mathbb{F}_2}^{(1)}}\times\mathcal{J}_{C_{2, \mathbb{F}_2}^{(2)}}$.
\item $\mathcal{J}_C = \mathcal{J}_{C_{1, \mathbb{F}_2}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_2}^{(2)}}\times \mathcal{J}_{C_{1, \mathbb{F}_2}^{(3)}}\times\mathcal{J}_{C_{1, \mathbb{F}_2}^{(4)}}$.
\item $\mathcal{J}_C = \mathcal{J}_{C_{2, \mathbb{F}_2}^{(0)}}\times\mathcal{J}_{C_{1, \mathbb{F}_2}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_2}^{(2)}}$.
\item $\mathcal{J}_C = \mathcal{J}_{C_{2, \mathbb{F}_4}^{(1)}}\times\mathcal{J}_{C_{2, \mathbb{F}_4}^{(2)}}$, where $C_{2, \mathbb{F}_4}^{(1)}$ and $C_{2, \mathbb{F}_4}^{(2)}$ are interchanged by the Frobenius, i.e., they are conjugate.
\item $\mathcal{J}_C = \mathcal{J}_{C_{2, \mathbb{F}_2}^{(0)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(2)}}$, where $C_{1, \mathbb{F}_4}^{(1)}$ and $C_{1, \mathbb{F}_4}^{(2)}$ are conjugate.
\item $\mathcal{J}_C = \mathcal{J}_{C_{1, \mathbb{F}_2}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_2}^{(2)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(3)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(4)}}$, where $C_{1, \mathbb{F}_4}^{(3)}$ and $C_{1, \mathbb{F}_4}^{(4)}$ are conjugate.
\item $\mathcal{J}_C = \mathcal{J}_{C_{1, \mathbb{F}_4}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(2)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(3)}}\times\mathcal{J}_{C_{1, \mathbb{F}_4}^{(4)}}$, where, without loss of generality, ${C_{1, \mathbb{F}_4}^{(1)}}$ and ${C_{1, \mathbb{F}_4}^{(2)}}$, as well as ${C_{1, \mathbb{F}_4}^{(3)}}$ and ${C_{1, \mathbb{F}_4}^{(4)}}$, are two pairs of mutually conjugate curves.
\item $\mathcal{J}_C = \mathcal{J}_{C_{1, \mathbb{F}_2}^{(0)}}\times\mathcal{J}_{C_{1, \mathbb{F}_8}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_8}^{(2)}}\times\mathcal{J}_{C_{1, \mathbb{F}_8}^{(3)}}$, where ${C_{1, \mathbb{F}_8}^{(1)}}, {C_{1, \mathbb{F}_8}^{(2)}}$, and ${C_{1, \mathbb{F}_8}^{(3)}}$ are three irreducible components of $C$ defined over $\mathbb{F}_8$, cyclically permuted by the Frobenius morphism.
\item $\mathcal{J}_C = \mathcal{J}_{C_{1, \mathbb{F}_{16}}^{(1)}}\times\mathcal{J}_{C_{1, \mathbb{F}_{16}}^{(2)}}\times\mathcal{J}_{C_{1, \mathbb{F}_{16}}^{(3)}}\times\mathcal{J}_{C_{1, \mathbb{F}_{16}}^{(4)}}$, where all $C_{1, \mathbb{F}_{16}}^{(1)}, C_{1, \mathbb{F}_{16}}^{(2)}, C_{1, \mathbb{F}_{16}}^{(3)}$, and $C_{1, \mathbb{F}_{16}}^{(4)}$ are irreducible components of $C$ defined over $\mathbb{F}_{16}$, cyclically permuted by the Frobenius.
\end{enumerate}
Let $C$ be a singular, stable genus-$4$ curve defined over $\mathbb{F}_2$ whose Jacobian variety has one of the types (vi)-(xi), and let $D, D'$ be two components of $C$ defined over $\mathbb{F}_{2^n}$ for $n \in \{2, 3, 4\}$ such that $D$ and $D'$ are conjugate under Frobenius. Since conjugate curves have the same zeta function, $D$ and $D'$ have the same number of $\mathbb{F}_{2^n}$- and $\mathbb{F}_{2^{2n}}$-points. In particular, they are in the same $\mathbb{F}_{2^n}$-isogeny class, so $(P_{D/\mathbb{F}_{2^n}})^2$, the square of the characteristic polynomial of $\mathcal{J}_D$ over $\mathbb{F}_{2^n}$, divides $P_{C/\mathbb{F}_{2^n}}$, the characteristic polynomial of $\mathcal{J}_C$ over $\mathbb{F}_{2^n}$.
\\
\noindent
From the database \cite{lmfdb}, we extracted all $65$ $\mathbb{F}_2$-isogeny classes of supersingular abelian fourfolds. In \cite{xarles}, Xarles determined all smooth genus-$4$ curves over $\mathbb{F}_2$ up to $\mathbb{F}_2$-isomorphism. We used the provided list of curves and their numbers of $\mathbb{F}_{2^r}$-points, for $r = 1, 2, 3, 4$, to deduce which of them are supersingular and to determine the $\mathbb{F}_2$-isogeny classes of their Jacobians. In addition, for each supersingular genus-$4$ curve $C$ over $\mathbb{F}_2$, we computed the size of $\mathrm{Aut}_{\mathbb{F}_2}(C)$.
First, we give an example of a smooth genus-$4$ curve over $\mathbb{F}_2$ that is supersingular.
\begin{exmp}
\label{ex:smooth_ss_curve}
Consider the genus-$4$ curve $C$ defined over $\mathbb{F}_2$ by the equations $$C:\left\{\begin{matrix}
XY + ZT = 0\\
X^2Z + Y^2Z + YZ^2 + X^2T + Y^2T + XT^2 = 0
\end{matrix}\right. \text{ in } \P^3.$$ It can be computed that $\#C(\mathbb{F}_2) = 7, \#C(\mathbb{F}_4) =9, \#C(\mathbb{F}_8) = 13$, and $\#C(\mathbb{F}_{16}) = 9$, and using \eqref{eq:weil_conj}, one obtains $P_{C/\mathbb{F}_2} = t^8 + 4t^7 + 10t^6 + 20t^5 + 32t^4 + 40t^3 + 40t^2 + 32t + 16.$ It follows that $C$ is a supersingular curve.
\end{exmp}
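The characteristic polynomial in Example \ref{ex:smooth_ss_curve} can be reproduced from the four point counts alone: the power sums $p_n = q^n + 1 - \#C(\mathbb{F}_{q^n})$ of the Frobenius eigenvalues give $a_1, \ldots, a_g$ via Newton's identities, and the functional equation $a_{2g-i} = q^{g-i}a_i$ supplies the remaining coefficients of $L(t)$. A sketch in plain Python (helper name ours; the arithmetic is exact for genuine point counts):

```python
from fractions import Fraction

def weil_polynomial(counts, q):
    # counts = [#C(F_q), #C(F_{q^2}), ..., #C(F_{q^g})] for a smooth genus-g curve;
    # returns the coefficients of P_{C/F_q}(t), from t^{2g} down to the constant term.
    g = len(counts)
    p = [q ** (n + 1) + 1 - counts[n] for n in range(g)]   # power sums of eigenvalues
    e = [Fraction(1)]                                       # elementary symmetric functions
    for k in range(1, g + 1):                               # Newton's identities
        e.append(sum(((-1) ** (i - 1)) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    a = [1] + [int((-1) ** k * e[k]) for k in range(1, g + 1)]   # L(t) = sum a_i t^i
    for j in range(g + 1, 2 * g + 1):                             # functional equation
        a.append(q ** (j - g) * a[2 * g - j])
    # P(t) = t^{2g} L(1/t), so the list a is exactly P in descending powers of t
    return a
```

Fed the counts $7, 9, 13, 9$ with $q = 2$, the routine returns the coefficients of $t^8 + 4t^7 + 10t^6 + 20t^5 + 32t^4 + 40t^3 + 40t^2 + 32t + 16$, as in the example.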
In the following two examples, we consider an $\mathbb{F}_2$-isogeny class of abelian fourfolds and describe its elements.
\begin{exmp}
Consider the $\mathbb{F}_2$-isogeny class $[A]$ of abelian varieties of dimension $4$ defined by the characteristic polynomial $$P_{A/\mathbb{F}_2} = t^8 + 2t^7 + 2t^6 - 4t^4 + 8t^2 + 16t + 16;$$ it is an irreducible (and therefore square-free) polynomial without real roots. Note that all abelian varieties inside $[A]$ are supersingular. Using \eqref{eq:base_ext_avs}, we find that the isogeny classes of the base extensions of abelian varieties in $[A]$ to $\mathbb{F}_4$ and $\mathbb{F}_8$ are defined by irreducible polynomials $P_{A/\mathbb{F}_4}$ and $P_{A/\mathbb{F}_8}$, while the $\mathbb{F}_{16}$-isogeny class of the base extension of $[A]$ to $\mathbb{F}_{16}$ is defined by $$P_{A/\mathbb{F}_{16}} = (t^4 - 4t^3 + 16t^2 - 64t + 256)^2 = (P_{S/\mathbb{F}_{16}})^2$$ with $P_{S/\mathbb{F}_{16}}$ irreducible.
The only possibility for $\mathcal{J}_C$ to lie in $[A]$, with $C$ a stable genus-$4$ curve of compact type defined over $\mathbb{F}_2$, is that $C$ is a smooth curve. Indeed, if there were at least two irreducible components of $C$ defined over $\mathbb{F}_{2^n}$ with $n \in \{1, 2, 3\}$, the polynomial $P_{C/\mathbb{F}_{2^n}}$ would have at least two irreducible factors. Similarly, if $C$ had four components $C^{(i)}_{1, \mathbb{F}_{16}}$, $i = 1, 2, 3, 4$, of genus $1$, then $P_{C/\mathbb{F}_{16}} = (P_{C^{(1)}_{1, \mathbb{F}_{16}}} )^4$. Neither is possible, so $C$ is necessarily smooth.
Using Xarles's data, we find only one smooth genus-$4$ curve over $\mathbb{F}_2$ whose Jacobian is in $[A]$. Namely, it is the hyperelliptic curve defined by the equation $$C: y^2 + y = x^9 + x^5, $$ for which we computed $|\mathrm{Aut}_{\mathbb{F}_2}(C)| = 4$. Lemma \ref{lem:curves_jacobians_autogp} gives that $[\mathcal{J}_C]$ is an $\mathbb{F}_2$-point of $\mathcal{A}_4$ weighted by $\frac{1}{4}$ lying inside the isogeny class $[A]$. Therefore, we obtain $$\sum_{\substack{\mathcal{J}_C \in [\mathcal{J}_4(\mathbb{F}_2)] \\ \mathcal{J}_C \text{ in } [A]}} \frac{1}{|\mathrm{Aut}_{\mathbb{F}_2}(\mathcal{J}_C)|} = \frac{1}{4}.$$
\label{ex:isog_class_ss}
\end{exmp}
\begin{exmp}
Consider the same $\mathbb{F}_2$-isogeny class $[A]$ as in Example \ref{ex:isog_class_ss}, and write $h = P_{A/\mathbb{F}_2}$. A Magma computation shows that there is a CM-type $\Phi$ of the endomorphism algebra $L = \mathbb{Q}[x]/(h)$ such that $(L, \Phi)$ satisfies the generalized residue reflex condition. By Remark \ref{rmk:reflex_cond}, it follows that $\Sigma_{\Phi}$ is non-empty.
We find six overorders $T_i$ of $R_h$ inside $\O_L$, of index $i$ in $\O_L$ for $i = 1, 4, 8, 16, 32, 64$, and all satisfy condition \eqref{eqthm:552} of Proposition \ref{thm:552}. Hence, for all $i \in \{1, 4, 8, 16, 32, 64\}$ and for each fractional ideal $I$ with $(I:I) = T_i$, we can compute $|\mathcal{P}_{\Phi}^1(I)|$. That gives us a description of polarized abelian varieties $B$ in $\mathrm{AV}_{\mathbb{F}_2}(h)$ corresponding to fractional ideals $I$ under the antiequivalence of categories \eqref{eq:CS_equiv}; moreover, for each such $B$, we can compute the number of its (polarized) automorphisms over $\mathbb{F}_2$.
Write $I_B = \mathcal{G}(B)$ for the image of $B \in \mathrm{AV}_{\mathbb{F}_2}(h)$ under \eqref{eq:CS_equiv}, and let $(I_B:I_B) = T_i$. The resulting description of all $B\in \mathrm{AV}_{\mathbb{F}_2}(h)$, in terms of $i$, is as follows.
\begin{itemize}
\item For $i = 1, 16, 32$, none of the abelian varieties $B$ in $\mathrm{AV}_{\mathbb{F}_2}(h)$ is principally polarizable.
\item For $i = 4$, there is a unique principally polarized abelian variety $B$ in $\mathrm{AV}_{\mathbb{F}_2}(h)$ and $|\mathrm{Aut}_{\mathbb{F}_2}(B)| = 4$.
\item For $i = 8$, there are two principally polarized abelian varieties $B$ in $\mathrm{AV}_{\mathbb{F}_2}(h)$, both with $|\mathrm{Aut}_{\mathbb{F}_2}(B)| = 4$.
\item For $i = 64$, there is a single abelian variety (considered without its polarization) in $\mathrm{AV}_{\mathbb{F}_2}(h)$, and it possesses two non-isomorphic principal polarizations. In other words, here we get two non-isomorphic principally polarized abelian varieties $B$. For both of them, it holds that $|\mathrm{Aut}_{\mathbb{F}_2}(B)| = 2$.
\end{itemize}
In particular, the stack count we get here is $$\sum_{\substack{B \in [\mathcal{A}_4(\mathbb{F}_2)] \\ B \text{ in } [A]}} \frac{1}{|\mathrm{Aut}_{\mathbb{F}_2}(B)|} = \frac{1}{4} + 2\cdot \frac{1}{4} + 2\cdot \frac{1}{2} = \frac{7}{4}.$$
\label{ex:isog_class_ss_bkm}
\end{exmp}
\begin{exmp} We obtain the same numerical outcome in terms of the stack counts as in Example \ref{ex:isog_class_ss_bkm} and Example \ref{ex:isog_class_ss}, when we take $$h = P_{A/\mathbb{F}_2} = t^8 - 2t^7 + 2t^6 - 4t^4 + 8t^2 - 16t + 16.$$ In that case, we find only one stable, genus-$4$ curve $C$ of compact type defined over $\mathbb{F}_2$, for which $\mathcal{J}_C$ is inside the $\mathbb{F}_2$-isogeny class defined by $h$. Namely, we have that $C$ is the smooth hyperelliptic curve defined over $\mathbb{F}_2$ by the standard equation $$y^2 + y = x^9 + x^5 + 1.$$ It is a twist of the curve considered in Example \ref{ex:isog_class_ss}.
\end{exmp}
Let now $\mathcal{A}_4 = \mathcal{A}_4 \otimes \overline{\mathbb{F}}_2$ and $\mathcal{J}_4 = \mathcal{J}_4 \otimes \overline{\mathbb{F}}_2$, and let $\mathcal{S}_4$, $\mathcal{N}_{\leq 1/3}$, and $\mathcal{N}_{\leq 1/4}$ be the Newton polygon strata inside $\mathcal{A}_4$ in characteristic two.
First, recall a result obtained earlier.
\begin{lem}
\label{lem:ss4}
In characteristic two, the supersingular locus $\mathcal{S}_4$ is irreducible in $\mathcal{A}_4$.
\end{lem}
\begin{proof}
See \cite{ddmt}, Theorem 3.5.
\end{proof}
Collecting the previous observations, we get the following result.
{\begin{thm} In characteristic two, the locus $\mathcal{S}_4\cap \mathcal{J}_4$ of supersingular Jacobians is pure of dimension three.
\label{thm:supersingular_curves_dim3}
\end{thm}
\begin{proof}
Let $\mathcal{W}$ be an irreducible component of $\mathcal{S}_4\cap \mathcal{J}_4$. Recall that $\mathcal{S}_4 = \mathcal{S}_4\otimes \overline{\mathbb{F}}_2$ is irreducible by Lemma \ref{lem:ss4}, proper, and has dimension $\lfloor 4^2/4 \rfloor = 4$, and that $\mathcal{J}_4$ is closed of codimension $1$ in $\mathcal{A}_4$. We find by Example \ref{ex:smooth_ss_curve} that $\mathcal{S}_4\cap \mathcal{J}_4$ is non-empty, and thus, the dimension of each component of $\mathcal{S}_4\cap \mathcal{J}_4$ is either $3$ or $4$.
Assume that $\dim \mathcal{W} = 4$. Then $\mathcal{W}$ coincides with $\mathcal{S}_4$, which would thus be contained in the Torelli locus $\mathcal{J}_4$. However, by Examples \ref{ex:isog_class_ss} and \ref{ex:isog_class_ss_bkm}, that is not possible: the inclusion of $\mathcal{S}_4$ in $\mathcal{J}_4$ would yield an inclusion of their $\mathbb{F}_2$-points, which would force the stack counts in those two examples to agree. Therefore, $\dim \mathcal{W} = 3$.
\end{proof}}
\begin{rem}
There is a recent result by Harashita describing certain irreducible components $\mathcal{W}$ of $(\mathcal{J}_4\cap \mathcal{S}_4)\otimes \overline{\mathbb{F}}_p$. Namely, if $\mathcal{W}$ contains an image of a non-hyperelliptic superspecial (smooth) curve, then by \cite{harashita21}, Theorem 2.6, $\mathcal{W}$ is three-dimensional. Note that we cannot use it for $p = 2$ since there are no superspecial genus-$g$ curves in characteristic two for $g\geq 2$.
\end{rem}
\begin{cor} In characteristic two, the Newton polygon strata $\mathcal{N}_{\leq 1/3}\cap \mathcal{J}_4$ and $\mathcal{N}_{\leq 1/4}\cap \mathcal{J}_4$ are of the expected codimensions $5$ and $4$, respectively, inside $\mathcal{J}_4$.
\label{cor:np_conclusion}
\end{cor}
\begin{proof}
For $\mathcal{N}_{\leq 1/4}\cap \mathcal{J}_4$, the result follows from \cite{fabervdgeer}, Theorem 2.3.
For the claim about $\mathcal{N}_{\leq 1/3}\cap \mathcal{J}_4$, consider, for example, the curve $C$ given by the equations $$C:\left\{\begin{matrix}
XY + ZT = 0\\
T^2X + TX^2 + Y^3 + X^2Z + XZ^2 = 0
\end{matrix}\right. \text{ in } \P^3. $$ It lies in $\mathcal{N}_{1/3}$ since $\#C(\mathbb{F}_2) = 5, \#C(\mathbb{F}_4) = 9, \#C(\mathbb{F}_8) = 11$, and $\#C(\mathbb{F}_{16}) = 17$. Furthermore, $\mathcal{S}_4\subseteq \mathcal{N}_{\leq 1/3}$, while $\mathcal{S}_4 \not\subseteq \mathcal{J}_4$ by Theorem \ref{thm:supersingular_curves_dim3}; thus $\mathcal{N}_{\leq 1/3}$ is not fully contained in the Torelli locus, and the result for $\mathcal{N}_{\leq 1/3}\cap \mathcal{J}_4$ follows.
\end{proof}
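As a consistency check for the proof, the slope computation for the exhibited curve can be carried out by combining the two recipes of Section 2: point counts to $L$-polynomial via Newton's identities, then the lower convex hull of $(i, \mathrm{val}_2(a_i))$. A self-contained sketch in plain Python (function names ours):

```python
from fractions import Fraction

def l_polynomial(counts, q):
    # [a_0, ..., a_{2g}] of L(t) from counts = [#C(F_{q^n})], n = 1..g
    # (Newton's identities plus the functional equation a_{2g-j} = q^{g-j} a_j)
    g = len(counts)
    p = [q ** (n + 1) + 1 - counts[n] for n in range(g)]
    e = [Fraction(1)]
    for k in range(1, g + 1):
        e.append(sum(((-1) ** (i - 1)) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    a = [1] + [int((-1) ** k * e[k]) for k in range(1, g + 1)]
    for j in range(g + 1, 2 * g + 1):
        a.append(q ** (j - g) * a[2 * g - j])
    return a

def val2(a):
    # 2-adic valuation of a nonzero integer
    v = 0
    while a % 2 == 0:
        a //= 2
        v += 1
    return v

def slopes(counts, q=2):
    # Newton polygon slopes: lower convex hull of (i, val2(a_i)), a_i != 0
    pts = [(i, val2(a)) for i, a in enumerate(l_polynomial(counts, q)) if a != 0]
    hull = []
    for pt in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (pt[1] - y1) - (y2 - y1) * (pt[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(pt)
    out = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        out += [Fraction(y2 - y1, x2 - x1)] * (x2 - x1)
    return out
```

For the counts $5, 9, 11, 17$ above, the slopes come out as $\frac13, \frac13, \frac13, \frac12, \frac12, \frac23, \frac23, \frac23$, i.e., the $\mathcal{N}_{1/3}$ sequence, while the counts $7, 9, 13, 9$ of Example \ref{ex:smooth_ss_curve} give eight slopes $\frac12$.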
\newpage
\label{sec:Oort}
\section{Ekedahl-Oort stratification for $g = 4$}
In this section, we mention two classification results about the Ekedahl-Oort stratification of $\mathcal{A}_4 = \mathcal{A}_4 \otimes \overline{\mathbb{F}}_p$. Using them, we will connect certain Ekedahl-Oort types and Newton polygons and describe the stable, genus-$4$ curves of compact type lying inside the loci determined by those invariants.
Ekedahl-Oort strata in $\mathcal{A}_g$ are defined by the classification of the $p$-torsion group schemes $A[p]$ of abelian varieties $A$ up to isomorphism (of $\mathrm{BT}_1$-group schemes), while the Newton polygon strata are defined by the classification of the $p$-divisible groups $A[p^{\infty}]$ of abelian varieties $A$ up to isogeny (of $p$-divisible groups). Recall the well-known facts about these stratifications.
\begin{itemize}
\item The Ekedahl-Oort strata in $\mathcal{A}_4$ of $p$-rank zero are precisely $Z_{\mu}$ for those $\mu$ appearing in the diagram below,
\begin{small}
\begin{center}
\begin{tikzcd}
&&& {[4, 2, 1]} \\
{[4, 3, 2, 1]} & {[4, 3, 2]} & {[4, 3, 1]} && {[4, 2]} & {[4, 1]} & {[4]}, \\
&&& {[4, 3]}
\arrow[from=2-1, to=2-2]
\arrow[from=2-2, to=2-3]
\arrow[from=2-3, to=1-4]
\arrow[from=2-3, to=3-4]
\arrow[from=3-4, to=2-5]
\arrow[from=1-4, to=2-5]
\arrow[from=2-5, to=2-6]
\arrow[from=2-6, to=2-7]
\end{tikzcd}
\end{center}
\end{small}
where there is an arrow $\mu_1 \to \mu_2$ if $Z_{\mu_1} \subseteq \overline{Z_{\mu_2}}$. For $\mu = [\mu_1, \mu_2, \ldots, \mu_n]$, the codimension of $Z_{\mu}$ in $\mathcal{A}_4$ equals $\sum_{i = 1}^n\mu_i$.
\item The Newton polygon strata inside the $p$-rank zero locus are the ones defined by the slope sequences $$\left (\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}\right ), \left (\frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \frac{1}{2}, \frac{1}{2}, \frac{2}{3}, \frac{2}{3}, \frac{2}{3}\right ), \text{ and } \left (\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{3}{4}, \frac{3}{4}, \frac{3}{4}, \frac{3}{4}\right ).$$ We denote these strata respectively by $\mathcal{S}_{4}$, $\mathcal{N}_{1/3}$, and $\mathcal{N}_{1/4}$.
\end{itemize}
In \cite{ibukiyamakaremakeryu}, Ibukiyama, Karemaker, and Yu obtain the following result, which tells us how the mentioned Ekedahl-Oort and Newton polygon loci are related.
\begin{prop} Let $p$ be a prime number and $\mathcal{A}_4 = \mathcal{A}_4 \otimes \overline{\mathbb{F}}_p$ the moduli space of principally polarized abelian varieties in characteristic $p$. The following results hold.
\begin{enumerate}
\item The strata $Z_{\mu}$, for $\mu \in \{[4, 3], [4, 3, 1], [4, 3, 2], [4, 3, 2, 1]\}$, are completely contained in $\mathcal{S}_4$, and we have
\begin{equation*}
\mathcal{S}_4 = (Z_{[4]}\cap \mathcal{S}_4) \sqcup (Z_{[4, 2]}\cap \mathcal{S}_4) \sqcup Z_{[4, 3]}\sqcup Z_{[4, 3, 1]}\sqcup Z_{[4, 3, 2]}\sqcup Z_{[4, 3, 2, 1]};
\end{equation*} $Z_{[4]}\cap \mathcal{S}_4$ is dense in $\mathcal{S}_4$.
\item The stratum $Z_{[4, 2, 1]}$ is completely contained in $\mathcal{N}_{1/3}$, and we have $$\mathcal{N}_{1/3} = (Z_{[4]}\cap \mathcal{N}_{1/3} )\sqcup (Z_{[4, 2]}\cap \mathcal{N}_{1/3} ) \sqcup Z_{[4, 2, 1]}; $$ $(Z_{[4]}\cap \mathcal{N}_{1/3} )$ is dense in $ \mathcal{N}_{1/3}$.
\item The stratum $Z_{[4, 1]}$ is completely contained in $\mathcal{N}_{1/4}$, and we have $$\mathcal{N}_{1/4} = (Z_{[4]}\cap \mathcal{N}_{1/4} )\sqcup Z_{[4, 1]}; $$ $(Z_{[4]}\cap \mathcal{N}_{1/4} )$ is dense in $ \mathcal{N}_{1/4}$.
\end{enumerate}
\label{prop:first_eo_prop}
\end{prop}
\begin{proof}
See \cite{ibukiyamakaremakeryu}, Proposition 5.13.
\end{proof}
\begin{rem}
By Proposition \ref{prop:first_eo_prop}, we have $Z_{[4, 3]}\subseteq \mathcal{S}_4$, $Z_{[4, 2]}\subseteq \mathcal{S}_4 \cup \mathcal{N}_{1/3}$, and $Z_{[4, 1]}\subseteq \mathcal{N}_{1/4}$. Therefore, for an abelian variety $A$, we have
\begin{equation*} a(A) = 2 \implies
\begin{cases}
A \in Z_{[4, 1]} & \text{ if } A \in \mathcal{N}_{1/4} \\
A \in Z_{[4, 2]} & \text{ if } A \in \mathcal{N}_{1/3} \\
A \in Z_{[4, 2]}\cup Z_{[4, 3]} & \text{ if } A \in \mathcal{S}_{4}
\end{cases}.
\end{equation*}
\label{rem:a_num_2_eo_np}
\end{rem}
Let $p = 2$. Using the characterization of the space of regular differentials in terms of theta characteristics by Stöhr and Voloch in \cite{stohrvoloch}, Proposition 3.1, for non-hyperelliptic, genus-$4$ smooth curves $C$, which lie on a quadric cone, we find that
\begin{equation}
2\text{-}\rank(C) = 0 \implies a(C) = 2.
\label{eq:anumberofcone}
\end{equation}
Let us now combine that result with the previous remark.
\begin{exmp} As an immediate application of the previous results, we find a smooth genus-$4$ curve over $\mathbb{F}_2$ with the Ekedahl-Oort type $[4, 1]$. Consider the curve
$$C:\left\{\begin{matrix}
XY + T^2 = 0\\
TX^2 + Y^3 + X^2Z + Z^3 = 0
\end{matrix}\right. \text{ in } \P^3.$$ Using Xarles's data, we find $$\#C(\mathbb{F}_2) = 5, \#C(\mathbb{F}_4) = 9, \#C(\mathbb{F}_8) = 11, \text{ and } \#C(\mathbb{F}_{16}) = 25, $$ and thus, we see that $\mathcal{J}_C \in \mathcal{N}_{1/4}$; in particular, $2\text{-}\rank(C) = 0$. Furthermore, $C$ is a smooth curve lying on a quadric cone, so \eqref{eq:anumberofcone} gives $a(\mathcal{J}_C) = a(C) = 2$. Now Remark \ref{rem:a_num_2_eo_np} yields $$\mathcal{J}_C \in Z_{[4, 1]}.$$
\label{exmp:EO41nonempty}
\end{exmp}
\begin{exmp} Xarles's data from \cite{xarles} show that there are no smooth genus-$4$ curves $C$ over $\mathbb{F}_2$ lying on a quadric cone such that $\mathcal{J}_C$ is in $\mathcal{N}_{1/3}\cup \mathcal{S}_4$.
\end{exmp}
Now we offer a characterization of the Ekedahl-Oort types in terms of indecomposable $\mathrm{BT}_1$ group schemes. Recall the construction from \cite{oort_eo_classification} by Oort.
Let $$\nu = \{\nu(1),\nu(2), \ldots, \nu(g) \}, \text{ }\text{ with }\text{ } \nu(2g - i) = \nu(i) + g - i,\text{ } 1\leq i \leq g, \text{ }\text{and } \nu(0) = 0$$ be the final type of a principally polarized abelian variety $A$ of dimension $g$ over $k$, let $$1\leq m_1 < m_2 < \ldots < m_g\leq 2g$$ be those $i$ with $\nu(i - 1)< \nu(i)$, and $$1\leq n_g < n_{g - 1} < \ldots < n_1\leq 2g$$ be the elements in the complement $\{1, 2, \ldots, 2g\} - \{m_1, m_2, \ldots, m_g\}$. By \cite{oort_eo_classification}, Theorem 9.4, the polarized covariant Dieudonn\'e module $\mathbb{D}(A[p])$ is isomorphic to the one with the basis $$\{Z_{1}, Z_{2}, \ldots, Z_{2g}\},
\text{ where } Z_{m_i} = X_i \text{ and } Z_{n_i} = Y_i, \text{ for } i \in \{1, 2, \ldots, g\}$$ and the relations $$\mathcal{F}(Z_{m_i}) = Z_{i}, \quad \mathcal{F}(Z_{n_i}) = 0, \quad \mathcal{V}(Z_{i}) = 0, \text{ and } \mathcal{V}(Z_{2g - i + 1}) = \begin{cases}
Z_{n_i} & \text{if } 2g - i + 1 \in \{n_1, \ldots, n_g\} \\
-Z_{n_i} & \text{otherwise }
\end{cases},$$ for any $i \in \{1, 2, \ldots, g\}$. The quasi-polarization is given by $$\<X_i, Y_j\> = \delta_{ij}, \quad \<X_i, X_j\> = 0, \quad \text{and}\quad \<Y_i, Y_j\> = 0.$$
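The combinatorics of this construction is easy to mechanize. The following Python sketch (a hypothetical helper written for this exposition, not taken from \cite{oort_eo_classification}) extends a final type $\nu$ to $\{0, 1, \ldots, 2g\}$, extracts the indices $m_i$ and $n_i$, and returns the labels of $Z_1, \ldots, Z_{2g}$:

```python
def oort_basis(nu):
    """nu = (nu(1), ..., nu(g)): the final type of a principally polarized
    abelian variety of dimension g.  Returns the labels of Oort's standard
    basis Z_1, ..., Z_{2g}, where Z_{m_i} = X_i and Z_{n_i} = Y_i."""
    g = len(nu)
    full = [0] + list(nu)                     # nu(0) = 0
    for j in range(g + 1, 2 * g + 1):         # nu(2g - i) = nu(i) + g - i,
        full.append(full[2 * g - j] + j - g)  # and j = 2g gives nu(2g) = g
    m = [i for i in range(1, 2 * g + 1) if full[i - 1] < full[i]]
    n = [i for i in range(1, 2 * g + 1) if i not in m]  # ascending: n_g, ..., n_1
    labels = {mi: f"X{i}" for i, mi in enumerate(m, start=1)}
    labels.update({ni: f"Y{g - i}" for i, ni in enumerate(n)})
    return [labels[j] for j in range(1, 2 * g + 1)]

# The final type {0, 1, 1} (Ekedahl-Oort type [3, 1]):
print(oort_basis((0, 1, 1)))   # ['Y3', 'X1', 'Y2', 'X2', 'Y1', 'X3']
```

Running it on $\{0, 1, 1\}$ recovers the basis $\{Y_3, X_1, Y_2, X_2, Y_1, X_3\}$ of $\mathbb{D}(I_{3, 2})$ used below, and $\{0, 1, 2, \ldots, g-1\}$ recovers the pattern described in Lemma \ref{lem:Ig1}.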
In the following examples, we will describe some $\mathrm{BT}_1$ group schemes $A[p]$ of abelian varieties $A$ inside the $p\text{-}\rank$ zero loci.
\begin{lem}
For any $g\geq 1$, there is a unique, indecomposable $\mathrm{BT}_1$ group scheme $I_{g, 1}$ of rank $p^{2g}$, $p$-rank $0$, and $a$-number $1$. In other words, for an abelian variety $A$ with $A[p]\cong I_{g, 1}$, we have $\mu_A = [g]$, i.e., $\nu_A = \{0, 1, 2, \ldots, g-1\}$.
\end{lem}
\begin{proof}
The proof follows from the fact that $\mathcal{V}$ is nilpotent on $I_{g, 1}$ and that $\mu_{A, 2} = 0$ uniquely determines $I_{g, 1}$; see \cite{pries_eo_classification}, Lemma 3.1.
The $\mathrm{BT}_1$ group scheme $I_{g, 1}$ is necessarily indecomposable as $p\text{-}\rank$ and $a$-number are additive functions.
Let us briefly describe the basis $\{Z_1, \ldots, Z_{2g}\}$ of $\mathbb{D}(I_{g, 1})$ and how it behaves under $\mathcal{F}$ and $\mathcal{V}$. We have $$Z_{2} = X_{1}, \ldots, Z_{g} = X_{g-1}, Z_{2g} = X_{g}, \quad \text{and}\quad Z_{1} = Y_g, Z_{g + 1} = Y_{g - 1}, \ldots, Z_{2g - 1} = Y_1, $$ with
$$\mathcal{F}(X_i) = \begin{cases}
X_{i - 1} & \text{ if } 2\leq i \leq g \\
Y_g & \text{ if } i = 1
\end{cases},\quad \text{and}\quad \mathcal{F}(Y_i) = 0 \text{ for }i\in \{1, \ldots, g\},$$ and $$\mathcal{V}(Y_i) = \begin{cases}
Y_{i + 1} & \text{ if } 1\leq i \leq g-1 \\
0 & \text{ if } i = g
\end{cases},\quad\text{and}\quad \mathcal{V}(X_i) = \begin{cases}
0 & \text{ if } 1\leq i \leq g-1 \\
-Y_1 & \text{ if } i = g
\end{cases}.$$
\label{lem:Ig1}
\end{proof}
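Because $\mathcal{F}$ and $\mathcal{V}$ send each basis vector of $\mathbb{D}(I_{g, 1})$ to another basis vector (up to sign) or to zero, the invariants of $I_{g, 1}$ can be verified by manipulating sets of basis labels, with the usual identification $a = \dim(\ker\mathcal{F}\cap\ker\mathcal{V})$ and the $p\text{-}\rank$ read off as the dimension of the stable image of $\mathcal{V}$. A small illustrative Python check (signs are dropped, as they do not affect dimension counts; this is an expository device, not a formal Dieudonn\'e-module implementation):

```python
def ig1_maps(g):
    """F and V on the basis {X_1..X_g, Y_1..Y_g} of D(I_{g,1}), following
    the tables in the lemma above; None encodes 0, and signs are dropped."""
    F = {f"Y{i}": None for i in range(1, g + 1)}
    F["X1"] = f"Y{g}"
    F.update({f"X{i}": f"X{i-1}" for i in range(2, g + 1)})
    V = {f"X{i}": None for i in range(1, g)}
    V[f"X{g}"] = "Y1"                    # V(X_g) = -Y_1, sign dropped
    V.update({f"Y{i}": f"Y{i+1}" for i in range(1, g)})
    V[f"Y{g}"] = None
    return F, V

def a_number(F, V):
    """a = dim(ker F ∩ ker V); computable on labels since the maps are monomial."""
    return len({b for b in F if F[b] is None} & {b for b in V if V[b] is None})

def p_rank(V):
    """p-rank = rank of a high iterate of V, i.e. the size of its stable image."""
    S = set(V)
    for _ in range(2 * len(V)):
        S = {V[b] for b in S if b in V} - {None}
    return len(S)

for g in range(1, 6):
    F, V = ig1_maps(g)
    assert a_number(F, V) == 1 and p_rank(V) == 0
```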
\begin{exmp}[Dimension 1]
In dimension $1$, the notions of being inside the $p\text{-}\rank$ zero locus, being supersingular, superspecial, and having the Ekedahl-Oort type $[1]$ coincide. By Lemma \ref{lem:Ig1}, an abelian variety $A$ having any of those properties satisfies $A[p]\cong I_{1, 1}$.
\end{exmp}
\begin{exmp}[Dimension 2]
There are two Ekedahl-Oort types of abelian surfaces $A$ inside the $p\text{-}\rank$ zero locus. It immediately follows that
\begin{itemize}
\item either $\mu_A = [2]$, when $A[p]\cong I_{2, 1}$ by Lemma \ref{lem:Ig1};
\item or $\mu_A = [2, 1]$, when $A$ is superspecial and $A[p]\cong I_{1, 1}^{\oplus 2}$.
\end{itemize}
\end{exmp}
\begin{exmp}[Dimension 3]
The Ekedahl-Oort types occurring for abelian threefolds $A$ inside the $p\text{-}\rank$ zero locus are $[3], [3, 1], [3, 2]$, and $[3, 2, 1]$.
\begin{itemize}
\item If $\mu_A = [3]$, then $A[p] \cong I_{3, 1}$ as explained in Lemma \ref{lem:Ig1}.
\item When $\mu_A = [3, 1]$, then $A[p]\cong I_{3,2}$ is an indecomposable $\mathrm{BT}_1$ group scheme; see Example \ref{exmp:dim4_eo_anum3} for the indecomposability argument.
Its covariant Dieudonn\'e module $D_6 = \mathbb{D}(I_{3, 2})$ has a basis $\{Y_3, X_1, Y_2, X_2, Y_1, X_3\}$, and $$\mathcal{F}: \bigl(\begin{smallmatrix}
Y_3& X_1& Y_2& X_2& Y_1& X_3 \\
0 & Y_3 &0 & X_1 &0 & Y_2
\end{smallmatrix}\bigr) \text{ and } \mathcal{V}: \bigl(\begin{smallmatrix}
Y_3& X_1& Y_2& X_2& Y_1& X_3 \\
0 & 0 &0 & -Y_3 &Y_2 & -Y_1
\end{smallmatrix}\bigr).$$ Therefore, we get $\mathcal{V}(D_6) = \<Y_3, Y_2, Y_1\> = D_3$, and $\mathcal{V}^2(D_6) = \<Y_2\> = D_1$. Then $\mathcal{F}^{-1}(D_1) = \<Y_3, Y_2, Y_1, X_3\> = D_4$, so $\mathcal{V}(D_4) = \<Y_2, Y_1\> = D_2$ and $\mathcal{V}(D_2) = \<Y_2\> = D_1$. From this, we clearly see that the final type of $A$ is $\nu_A = \{0, 1, 1\}$.
\item If $\mu_A = [3, 2]$, then $A[p]\cong I_{2, 1}\oplus I_{1, 1}$. To see this, use Lemma \ref{lem:Ig1} to compute the Ekedahl-Oort type of $I_{2, 1}\oplus I_{1, 1}$.
\item Lastly, if $\mu_A = [3, 2, 1]$, then $A$ is superspecial and $A[p]\cong I_{1, 1}^{\oplus 3}$.
\end{itemize}
\label{exmp:dim3_eo_anum2}
\end{exmp}
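Since $\mathcal{F}$ and $\mathcal{V}$ act monomially on the chosen basis of $D_6 = \mathbb{D}(I_{3, 2})$, the filtration computation in Example \ref{exmp:dim3_eo_anum2} can be traced with plain sets of basis labels. The following illustrative Python check (signs dropped, again as an expository device) reproduces the chain $\mathcal{V}(D_6) = D_3$, $\mathcal{V}^2(D_6) = D_1$, $\mathcal{F}^{-1}(D_1) = D_4$, $\mathcal{V}(D_4) = D_2$, and $\mathcal{V}(D_2) = D_1$:

```python
# F and V on the basis {Y3, X1, Y2, X2, Y1, X3} of D_6, per the tables above;
# None encodes 0 and signs are dropped.
F = {"Y3": None, "X1": "Y3", "Y2": None, "X2": "X1", "Y1": None, "X3": "Y2"}
V = {"Y3": None, "X1": None, "Y2": None, "X2": "Y3", "Y1": "Y2", "X3": "Y1"}

def image(phi, S):                 # phi(span S) as a set of basis labels
    return {phi[b] for b in S} - {None}

def preimage(phi, S):              # phi^{-1}(span S)
    return {b for b in phi if phi[b] is None or phi[b] in S}

D6 = set(F)
D3 = image(V, D6)                  # V(D6)   = <Y3, Y2, Y1>
D1 = image(V, D3)                  # V^2(D6) = <Y2>
D4 = preimage(F, D1)               # F^{-1}(D1) = <Y3, Y2, Y1, X3>
D2 = image(V, D4)                  # V(D4)   = <Y2, Y1>
assert (len(D3), len(D1), len(D4), len(D2)) == (3, 1, 4, 2)
assert image(V, D2) == D1          # V(D2) = D1, giving final type {0, 1, 1}
```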
Lastly, we classify the Ekedahl-Oort types of those abelian fourfolds that have $a$-number three and lie inside the $p\text{-}\rank$ zero locus.
\begin{exmp}
There are three Ekedahl-Oort types of abelian fourfolds with $p\text{-}\rank$ zero and $a$-number three. Namely, $\mu$ can be $$[4, 2, 1], [4, 3, 1], \text{ or }[4, 3, 2].$$ Using the additivity of the $p\text{-}\rank$ and the $a$-number, we see that two of them define decomposable $\mathrm{BT}_1$ group schemes, and consequently, the remaining one defines an indecomposable one.
Consider first $A[p]\cong I_{3, 2}\oplus I_{1, 1}$. Lemma \ref{lem:Ig1} and Example \ref{exmp:dim3_eo_anum2} give us that $$D_8 = \mathbb{D}(I_{3, 2}\oplus I_{1, 1}) = \mathbb{D}(I_{3, 2})\oplus \mathbb{D}(I_{1, 1}) = \<Y_3, X_1, Y_2, X_2, Y_1, X_3\>\oplus\<Y_1', X_1'\>.$$ Hence, $\mathcal{V}(D_8) = \<Y_3, Y_2, Y_1\>\oplus\<Y_1'\> = D_4$, and $\mathcal{V}^2(D_8) = \<Y_2\>\oplus\<0\> = D_1$. Next, since $\mathcal{F}^{-1}(D_1) = \<Y_3, Y_2, Y_1, X_3\>\oplus \<Y_1'\> = D_5$ we obtain that $\mathcal{V}(D_5) = \<Y_2, Y_1\>\oplus\<0\> = D_2$ and $\mathcal{V}(D_2) = \<Y_2\>\oplus \<0\> = D_1$. It follows that the final type of $A$ is $\nu_A = \{0, 1, 1, 1\}$ so $$\mu_A = [4, 2, 1].$$
For $A[p] \cong I_{2,1}\oplus (I_{1, 1})^{\oplus 2}$, we can perform similar computations using Lemma \ref{lem:Ig1} and Example \ref{exmp:dim3_eo_anum2} to get $$\mu_A = [4, 3, 2].$$
Therefore, we conclude that the Ekedahl-Oort type $[4, 3, 1]$ corresponds to an indecomposable $\mathrm{BT}_1$ group scheme, which we denote by $I_{4, 3}$.
\label{exmp:dim4_eo_anum3}
\end{exmp}
Combining the observations from the previous examples gives us the following Proposition.
\begin{prop} Let $A$ be a principally polarized abelian fourfold over $k$. Assume that $A$ lies in the $p\text{-}\rank$ zero locus of $\mathcal{A}_4$, and let $\mu_A$ be the Young type associated with the Ekedahl-Oort type of $A$. Then, there are the following possibilities for the $p$-torsion group scheme $A[p]$ of $A$:
\begin{itemize}
\item $A[p]\cong I_{4, 1}$, and then $\mu_A = [4]$ and $a(A) = 1$;
\item $A[p]\cong I_{4, 2}$, and then $\mu_A = [4, 1]$ and $a(A) = 2$;
\item $A[p]\cong I_{3, 1}\oplus I_{1, 1}$, and then $\mu_A = [4, 2]$ and $a(A) = 2$;
\item $A[p]\cong I_{2, 1}\oplus I_{2, 1}$, and then $\mu_A = [4, 3]$ and $a(A) = 2$;
\item $A[p]\cong I_{3, 2}\oplus I_{1, 1}$, and then $\mu_A = [4, 2, 1]$ and $a(A) = 3$;
\item $A[p]\cong I_{4, 3}$, and then $\mu_A = [4, 3, 1]$ and $a(A) = 3$;
\item $A[p]\cong I_{2, 1}\oplus (I_{1, 1})^{\oplus 2}$, and then $\mu_A = [4, 3, 2]$ and $a(A) = 3$;
\item $A[p]\cong (I_{1, 1})^{\oplus 4}$, and then $\mu_A = [4, 3, 2, 1]$ and $a(A) = 4$;
\end{itemize}
where $I_{r, a}$ denotes the unique indecomposable symmetric $\mathrm{BT}_1$ group scheme of rank $p^{2r}$, $p\text{-}\rank$ $0$, and $a$-number $a$.
\label{prop:eo_classification}
\end{prop}
\begin{proof}
Most of this was established in Lemma \ref{lem:Ig1} and the preceding examples. See \cite{oort_eo_classification}, Section 9, for the classification of symmetric $\mathrm{BT}_1$ group schemes, or \cite{pries_eo_classification}, 4.4, together with \cite{elkinpries}, Remark 5.13.
\end{proof}
We close this section with a result by Ekedahl and van der Geer on the irreducibility of certain Ekedahl-Oort strata.
\begin{prop}
If $\mu$ is an eligible Young type and $\mu \not \in \{[4, 3], [4, 3, 1], [4, 3, 2], [4, 3, 2, 1]\}$, then the Ekedahl-Oort stratum $Z_{\mu}$ is irreducible in $\mathcal{A}_4$.
\label{prop:ekedahlvdgeer}
\end{prop}
\begin{proof}
See \cite{ekedahlvdgeer}, Theorem 11.5.
\end{proof}
\newpage
\section{Ekedahl-Oort type $[4, 3]$}
Smooth genus-$4$ curves can be either hyperelliptic or trigonal. We focus on those over $k$, a field of characteristic two, lying inside the $2\text{-}\rank$ zero locus. By Proposition \ref{prop:first_eo_prop}, we see that investigating the locus of supersingular curves is related to understanding the Ekedahl-Oort strata defined by a Young type $\mu$ with $\mu \leq [4, 3]$.
Here we consider the type $[4, 3]$ and describe the curves inside the corresponding locus using explicit computations of the associated Hasse-Witt matrices. By the following result, we find there are no smooth hyperelliptic curves of genus four in characteristic two with the Ekedahl-Oort type $[4, 3]$.
\begin{prop}
Let $C$ be a smooth hyperelliptic curve of genus four defined over a field $k$ of characteristic two. If $2\text{-}\rank(C) = 0$, then the Ekedahl-Oort type of $C$ is $[4, 2]$.
\label{prop:eo43_he}
\end{prop}
\begin{proof}
See \cite{vdgeercycle}, Theorem 3.2, for a statement without proof. See also \cite{elkinpries}, Corollary 5.3.
\end{proof}
A non-hyperelliptic smooth curve $C$ of genus four is a trigonal curve. Its model in the canonical embedding in $\P^3$ is an intersection of a quadric and a cubic hypersurface. Over an algebraic closure $\bar{k}$ of $k$ and after a suitable choice of coordinates, we have that any such $C$ (or more precisely, its canonical model) lies either on a \textit{non-singular quadric} $$Q_{ns}: XY + ZT = 0,$$ or on a \textit{quadric cone} (a singular quadric) $$Q_{c}: XY + T^2 = 0.$$
For those lying on a quadric cone, we have the following result.
\begin{prop}
In characteristic two, no non-hyperelliptic smooth curves of genus four with the Ekedahl-Oort type $[4, 3]$ lie on a quadric cone.
\label{prop:eo43_cone}
\end{prop}
\begin{proof}
See \cite{ddmt}, Theorem 4.5.
\end{proof}
In the computations proving Proposition \ref{prop:eo43_cone}, it was enough to analyze the Hasse-Witt matrices of curves lying on $Q_c$ using the conditions equivalent to having the type $[4, 3]$. Let us now describe the Hasse-Witt matrices of smooth genus-$4$ curves in characteristic two that lie on the non-singular quadric $Q_{ns}$.

Let $k = \bar{k}$ be an algebraically closed field of characteristic two.
The \textit{Cartier operator} $\mathcal{C}$ on the space of regular differentials on a genus-$4$ curve $C$ over $k$ is by definition $$\mathcal{C}\left((f_0^2 + f_1^2x) \mathrm{d} x\right) = f_1\mathrm{d} x, $$ for a separating variable $x$ of $\kappa(C)$, $f_0, f_1 \in \kappa(C)$, and $\omega = (f_0^2 + f_1^2x) \mathrm{d} x \in H^{0}(C, \Omega^1_C)$.
This operator satisfies the following properties for any function $f \in \kappa(C)$ and regular differentials $\omega, \omega_1, \omega_2 \in H^0(C, \Omega_C^1)$:
$$\mathcal{C}(\omega_1 + \omega_2) = \mathcal{C}(\omega_1) + \mathcal{C}(\omega_2), \quad \mathcal{C}(f^2\omega) = f\mathcal{C}(\omega), \quad \mathcal{C}(\mathrm{d} f) = 0,$$ and
$$\mathcal{C}\left (\frac{\mathrm{d} f}{f} \right ) = \frac{\mathrm{d} f}{f}.$$
Given a basis $\{\omega_1, \omega_2, \omega_3, \omega_4\}$ of $H^0(C, \Omega_C^1)$ and the relations $$\mathcal{C}(\omega_i) = \sum_{j = 1}^4h_{i j}\cdot \omega_j, \text{ for }i = 1, 2, 3, 4 $$ with $h_{ij} \in k$, the \textit{Hasse-Witt matrix} of $C$ is $$H_C = \left (h_{ij}^2 \right )_{1\leq i, j \leq 4}.$$ Note that, by definition, the rank of $H_C$ equals the rank of the Cartier operator $\mathcal{C}$. Furthermore, using the Cartier operator, we can compute the $2\text{-}\rank$ and the $a\text{-number}$ of a curve $C$ as $$2\text{-}\rank(C) = \mathrm{rk}(\mathcal{C}^4) = \dim(\mathrm{Im}(\mathcal{C}^4)),\quad\text{ and}\quad a(C) = \dim(\ker(\mathcal{C})) = 4 - \mathrm{rk}(\mathcal{C}).$$
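For a Hasse-Witt matrix with entries in the prime field $\mathbb{F}_2$, the Frobenius twists occurring in the iterates of $\mathcal{C}$ act trivially on the entries, so both invariants reduce to ordinary linear algebra mod $2$: $a(C) = 4 - \mathrm{rk}(H_C)$ and $2\text{-}\rank(C) = \mathrm{rk}(H_C^4)$. A small illustrative Python sketch (the sample matrix is the one computed in Example \ref{exmp:ss_prank0_anum1}):

```python
def rank_gf2(M):
    """Rank over F_2 by Gaussian elimination on 0/1 rows."""
    rows = [list(r) for r in M]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def mul_gf2(A, B):
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def a_number(H):
    return len(H) - rank_gf2(H)        # a(C) = g - rk(C)

def two_rank(H):
    """2-rank = rk(C^g); with entries in F_2 the twists are trivial, so use H^g."""
    P = H
    for _ in range(len(H) - 1):
        P = mul_gf2(P, H)
    return rank_gf2(P)

H = [[0, 1, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 1, 0]]
print(a_number(H), two_rank(H))    # 1 0
```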
In \cite{stohrvoloch}, Section 2, St\"ohr and Voloch gave formulas for computing the Hasse-Witt matrices of non-hyperelliptic smooth genus-$4$ curves over $k$. Consider the Segre embedding
\begin{equation}
\mathbb{P}^1\times \mathbb{P}^1 \overset{\cong}{\to} Q_{ns} = \{(X:Y:Z:T)\in \mathbb{P}^3: XY = ZT\},
\label{eq:segre_splitquad}
\end{equation}
and note that $Q_{ns}$ contains the affine plane $\{(a:b:ab:1) \in Q_{ns}: a, b\in k\}$. For any non-hyperelliptic curve lying on $Q_{ns}$, we get an affine equation of the form $$C: f(x, y) = \sum_{i, j = 0}^3a_{i,j}x^iy^j = 0, $$ for which a basis of regular differentials $\{\omega_1, \omega_2, \omega_3, \omega_4\}$ for $C$ can be given as $$\omega_1 = \frac{1}{\partial f/\partial y} \mathrm{d} x,\quad \omega_2 = \frac{x}{\partial f/\partial y} \mathrm{d} x,\quad \omega_3 = \frac{y}{\partial f/\partial y} \mathrm{d} x, \quad\text{and}\quad \omega_4 = \frac{xy}{\partial f/\partial y} \mathrm{d} x, $$ and the Hasse-Witt matrix expressed as
\begin{equation}
\label{eq:hassewitt_split_quadric}
H_C = \begin{pmatrix}
a_{11} & a_{31} & a_{13} & a_{33}\\
a_{01} & a_{21} & a_{03} & a_{23}\\
a_{10} & a_{30} & a_{12} & a_{32}\\
a_{00} & a_{20} & a_{02} & a_{22}
\end{pmatrix}.
\end{equation}
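The coefficient pattern of \eqref{eq:hassewitt_split_quadric} is easy to mechanize. As an illustrative check (not part of the cited source), the following Python snippet assembles $H_C$ from the coefficients of $f(x, y)$; for the curve of Example \ref{ex:smooth_ss_curve}, substituting $(X, Y, Z, T) = (x, y, xy, 1)$ into its cubic gives $f(x, y) = x + x^2 + y^2 + x^3y + xy^3 + x^2y^3$:

```python
def hasse_witt(a):
    """Build H_C from the coefficients a[(i, j)] of f(x, y) = sum a_ij x^i y^j,
    following the index pattern of the displayed matrix; over F_2 the
    entrywise squaring in the definition is the identity."""
    pattern = [[(1, 1), (3, 1), (1, 3), (3, 3)],
               [(0, 1), (2, 1), (0, 3), (2, 3)],
               [(1, 0), (3, 0), (1, 2), (3, 2)],
               [(0, 0), (2, 0), (0, 2), (2, 2)]]
    return [[a.get(ij, 0) for ij in row] for row in pattern]

# f(x, y) = x + x^2 + y^2 + x^3 y + x y^3 + x^2 y^3  (coefficients in F_2)
coeffs = {(1, 0): 1, (2, 0): 1, (0, 2): 1, (3, 1): 1, (1, 3): 1, (2, 3): 1}
H = hasse_witt(coeffs)
print(H)   # [[0, 1, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 1, 0]]
```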
Note that the group $G = C_2\ltimes (\mathrm{PGL}_2(k)\times \mathrm{PGL}_2(k))$ acts on $\P^1\times \P^1$, where $C_2$ is the cyclic group of order two which acts by interchanging the $\P^1$'s. Using the isomorphism \eqref{eq:segre_splitquad}, explicitly given by $$((x:y),(z:t))\mapsto (xz: yt:yz:xt),$$ we see that each element of $G$ induces a projective automorphism preserving $Q_{ns}$. See also \cite{savitt_ptsgenF8}, Section 3.
\begin{lem}
Let $k$ be an algebraically closed field of characteristic two. Up to isomorphism, any trigonal, smooth genus-$4$ curve $C$ over $k$ that lies on the non-singular quadric can be written as $V(XY + ZT, q)$ in $\P^3$, with $q\in k[X, Y, Z, T]$ a cubic of the form:
\begin{enumerate}
\item[a)] $q = 1\cdot X^3 + 1\cdot Y^3 + a_{21}X^2Y + a_{31}X^2Z + a_{20}X^2T + a_{12}XY^2 + a_{22}XYZ + a_{11}XYT + a_{32}XZ^2 + a_{10}XT^2 + a_{13}Y^2Z + a_{02}Y^2T + a_{23}YZ^2 + a_{01}YT^2$, or
\item[b)] $q = 1\cdot X^3 + a_{21}X^2Y + a_{31}X^2Z + a_{20}X^2T + a_{12}XY^2 + a_{22}XYZ + a_{11}XYT + a_{32}XZ^2 + a_{10}XT^2 + a_{13}Y^2Z + a_{02}Y^2T + a_{23}YZ^2 + a_{01}YT^2$.
\end{enumerate}
Moreover, the coefficients $a_{ij} \in k$ appearing in $a)$ and $b)$ are the ones occurring in \eqref{eq:hassewitt_split_quadric} with $a_{00} = a_{33} = 0$, and $a_{30}$ and $a_{03}$ the coefficients of $X^3$ and $Y^3$ respectively.
\label{lem:split_quad_nice_eqs}
\end{lem}
\begin{proof}
Any $C$ lying on $Q_{ns}\cap V(q)$, where $q$ contains a monomial divisible by $ZT$ with a non-zero coefficient, also lies on $Q_{ns}\cap V(q')$ for a cubic $q'$ that contains no such monomials, obtained from $q$ by substituting $XY$ for $ZT$.
Then, consider the following three types of projective automorphisms preserving $Q_{ns}$ induced by $G$:
\begin{enumerate}
\item ${\bigl(\begin{smallmatrix}
X & Y & Z & T\\
X + aZ & Y & Z & T + aY
\end{smallmatrix}\bigr), \bigl(\begin{smallmatrix}
X & Y & Z & T\\
X & Y + aT & Z + aX & T
\end{smallmatrix}\bigr), \bigl(\begin{smallmatrix}
X & Y & Z & T\\
X + aT & Y & Z + aY & T
\end{smallmatrix}\bigr)}$, and ${\bigl(\begin{smallmatrix}
X & Y & Z & T\\
X & Y + aZ& Z & T + aX
\end{smallmatrix}\bigr),}$ with $a\in k$;
\item ${\bigl(\begin{smallmatrix}
X & Y & Z & T\\
X & aY & aZ & T
\end{smallmatrix}\bigr), \bigl(\begin{smallmatrix}
X & Y & Z & T\\
aX & Y & Z & aT
\end{smallmatrix}\bigr), \bigl(\begin{smallmatrix}
X & Y & Z & T\\
aX& Y & aZ & T
\end{smallmatrix}\bigr),}$ and ${\bigl(\begin{smallmatrix}
X & Y & Z & T\\
X & aY & Z & aT
\end{smallmatrix}\bigr),}$ with $a\in k$;
\item ${\bigl(\begin{smallmatrix}
X & Y & Z & T\\
X & Y & T & Z
\end{smallmatrix}\bigr), \bigl(\begin{smallmatrix}
X & Y & Z & T\\
Z & T & X & Y
\end{smallmatrix}\bigr),}$ and $\bigl(\begin{smallmatrix}
X & Y & Z & T\\
T & Z & X & Y
\end{smallmatrix}\bigr)$.
\end{enumerate}
First, assume that at least one of the coefficients of $X^3, Y^3, Z^3$, and $T^3$ is non-zero. After applying a transformation of type $3$, we may assume that the coefficient $a_{30}$ of $X^3$ is non-zero, and $a_{30} = 1$ after scaling. Possibly after using the first and then the third transformation of type $1$, we eliminate $Z^3$ and $T^3$ from $q$. If the coefficient of $Y^3$ is zero, $q$ is of type $b)$. Otherwise, applying the first transformation of type $2$, we get a $q$ of type $a)$.
Finally, assume that all the coefficients of $X^3, Y^3, Z^3$, and $T^3$ are zero. Since $q$ is irreducible, at least one of the coefficients of $X^2Z, X^2T, Y^2Z$, and $Y^2T$ is non-zero. After applying a suitable transformation of type $1$, we reduce to a case already discussed.
\end{proof}
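As a sanity check on the transformations of type $1$ (an illustrative script, not part of the cited argument), one can verify over $\mathbb{F}_2[a]$ that they fix $XY + ZT$. A polynomial with coefficients in $\mathbb{F}_2$ is represented as the set of its monomials, encoded as exponent tuples in $(X, Y, Z, T, a)$; addition is symmetric difference of sets:

```python
def pmul(p, q):
    """Product of two F_2-polynomials given as sets of exponent tuples."""
    out = set()
    for m in p:
        for n in q:
            t = tuple(i + j for i, j in zip(m, n))
            out ^= {t}          # coefficients live in F_2
    return out

# Variables X, Y, Z, T and the parameter a as exponent tuples.
X, Y, Z, T, A = ({(1, 0, 0, 0, 0)}, {(0, 1, 0, 0, 0)}, {(0, 0, 1, 0, 0)},
                 {(0, 0, 0, 1, 0)}, {(0, 0, 0, 0, 1)})
Q = pmul(X, Y) ^ pmul(Z, T)                      # the quadric XY + ZT

# (X, Y, Z, T) -> (X + aZ, Y, Z, T + aY): the aYZ-terms cancel in char 2.
Q1 = pmul(X ^ pmul(A, Z), Y) ^ pmul(Z, T ^ pmul(A, Y))
# (X, Y, Z, T) -> (X + aT, Y, Z + aY, T): likewise.
Q2 = pmul(X ^ pmul(A, T), Y) ^ pmul(Z ^ pmul(A, Y), T)
assert Q1 == Q and Q2 == Q
```

The remaining two substitutions of type $1$ are obtained from these by exchanging the roles of $X, Y$ and of $Z, T$, and check out the same way.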
In addition to the preceding, we have the following.
\begin{lem} Let $C = V(XY + ZT, q)$ in $\P^3$ be as in Lemma \ref{lem:split_quad_nice_eqs} with $q \in k[X, Y, Z, T]$ a cubic of type $a)$ or $b)$. The smoothness of $C$ implies $(a_{10}, a_{01}) \neq (0, 0)$ and $(a_{23}, a_{32}) \neq (0, 0)$.
\label{lem:easy_singularity}
\end{lem}
\begin{proof}
If $a_{10}= a_{01} = 0$, then $(0:0:0:1)$ is a singular point of $C$, and if $a_{23} = a_{32} = 0$ then $(0:0:1:0)$ is a singular point of $C$.
\end{proof}
Using the previous Lemmas, we can now prove the following Proposition.
\begin{prop}
In characteristic two, no non-hyperelliptic smooth curves of genus four with the Ekedahl-Oort type $[4, 3]$ lie on a non-singular quadric $Q_{ns}$.
\label{prop:eo43_smthquad}
\end{prop}
\begin{proof}
We analyze, case by case, the possible Hasse-Witt matrices $H_C$ of such a curve $C$ and conclude that there are no smooth curves of type $[4, 3]$. Being of type $[4,3]$ is equivalent to the conditions $\mathrm{rank}_k H_C = 2$ (i.e., $\<\mathcal{C}\omega_i: i = 1, 2, 3, 4\>$ is two-dimensional) and $\mathcal{C}^2\omega_i = 0$ for all $i$. By Lemma \ref{lem:split_quad_nice_eqs}, there are two possible forms of $H_C$.
Let $C = V(XY + ZT, q)$ with $q$ of type $a)$, when we have $$H_C = \begin{pmatrix}
a_{11} & a_{31} & a_{13} & 0\\
a_{01} & a_{21} & 1 & a_{23}\\
a_{10} & 1 & a_{12} & a_{32}\\
0 & a_{20} & a_{02} & a_{22}
\end{pmatrix}.$$
\noindent
Assume first $a_{11} = a_{22} = 0$. There are two possibilities.
\begin{itemize}
\item $\mathcal{C}\omega_2$ and $\mathcal{C}\omega_3$ are linearly independent. Then from $\mathcal{C}^2\omega_1 = \mathcal{C}^2\omega_4 = 0$ we obtain $a_{31} = a_{13} = a_{20} = a_{02} = 0$. Those are in contradiction with $\mathcal{C}^2\omega_2 = 0$.
\item $\mathcal{C}\omega_2$ and $\mathcal{C}\omega_3$ are linearly dependent. Then $\mathcal{C}\omega_1$ and $\mathcal{C}\omega_4$ have to be linearly dependent too. Otherwise, the rank condition would imply $a_{10} = a_{01} = a_{23} = a_{32} = 0$, so $C$ would be singular by Lemma \ref{lem:easy_singularity}. Then, it follows that $a_{21}^3 = 1$. The assumption $a_{01} = 0$ implies $a_{10} = 0$, and $a_{23} = 0$ implies $a_{32} = 0$. In both cases, $C$ is singular by Lemma \ref{lem:easy_singularity}, so we may assume $a_{01}\neq 0$, $a_{23}\neq 0$. However, then $C$ is singular at $(x:\sqrt{a_{21}}x:\sqrt{a_{21}}x^2:1)$, for the choice $x = \left ( {a_{01}}/{a_{23}a_{21}} \right )^{1/4}$.
\end{itemize}
Next, assume $a_{11} \neq 0$ and $a_{22} = 0$; similarly for $a_{11} = 0, a_{22} \neq 0$. If $(a_{20}, a_{02})\neq 0$, then $\mathcal{C}\omega_2$ and $\mathcal{C}\omega_3$, and moreover $\mathcal{C}\omega_1, \mathcal{C}\omega_2, \mathcal{C}\omega_3$ are linearly dependent since $\mathcal{C}^2\omega_1 = 0$, so $a_{23} = a_{32} = 0$ and $C$ is singular by Lemma \ref{lem:easy_singularity}. Therefore $a_{20} = a_{02} = 0$. After writing $\mathcal{C}\omega_1$ in terms of $\mathcal{C}\omega_2$ and $\mathcal{C}\omega_3$ and considering $\mathcal{C}^2\omega_2 = \mathcal{C}^2\omega_3 = 0$, we can express all the coefficients $a_{ij}\neq a_{23}, a_{32}$ in terms of $a_{01}, a_{10}$, and $a_{11}$. It is not hard to see that $(\sqrt{a_{10}}:0:0:1)$ will always be a singular point of such a $C$.
The final possibility is $a_{11} \neq 0$ and $a_{22} \neq 0$. Consider the relations
\begin{equation}
\mathcal{C}\omega_1 = \sqrt[4]{\frac{a_{31}}{a_{11}}}\mathcal{C}\omega_2 + \sqrt[4]{\frac{a_{13}}{a_{11}}}\mathcal{C}\omega_3, \quad \text{and} \quad \mathcal{C}\omega_4 = \sqrt[4]{\frac{a_{20}}{a_{22}}}\mathcal{C}\omega_2 + \sqrt[4]{\frac{a_{02}}{a_{22}}}\mathcal{C}\omega_3.
\label{eqn:Car_relations1}
\end{equation}
From them and the rank condition, it follows that $\mathcal{C}\omega_2$ and $\mathcal{C}\omega_3$ are linearly independent, and we get eight relations
\begin{equation}
\left\{\begin{matrix}
a_{11} =& \sqrt{\alpha}\cdot a_{01} + \sqrt{\beta}\cdot a_{10}\\
\alpha\cdot a_{11} =& \sqrt{\alpha}\cdot a_{21} + \sqrt{\beta}\\
\beta\cdot a_{11} =& \sqrt{\alpha} + \sqrt{\beta}\cdot a_{12}\\
0 =& \sqrt{\alpha}\cdot a_{23} + \sqrt{\beta}\cdot a_{32}\\
\end{matrix}\right., \quad \left\{\begin{matrix}
0 =& \sqrt{\gamma}\cdot a_{01} + \sqrt{\delta}\cdot a_{10}\\
\gamma\cdot a_{22} =& \sqrt{\gamma}\cdot a_{21} + \sqrt{\delta}\\
\delta\cdot a_{22} =& \sqrt{\gamma} + \sqrt{\delta}\cdot a_{12}\\
a_{22} =& \sqrt{\gamma}\cdot a_{23} + \sqrt{\delta}\cdot a_{32}\\
\end{matrix}\right.;
\label{eqn:Car_relations2}
\end{equation}
where $$a_{31} = \alpha a_{11},\quad a_{13} = \beta a_{11},\quad a_{20} = \gamma a_{22}, \text{ }\text{ and } \text{ } a_{02} = \delta a_{22},$$ for some $\alpha, \beta, \gamma, \delta
\in k$. Note that $\alpha \neq 0, \beta \neq 0, \gamma \neq 0$, and $\delta \neq 0$ because of \eqref{eqn:Car_relations1} and $\mathcal{C}^2\omega_1 = \mathcal{C}^2\omega_4 = 0$, while $\mathcal{C}\omega_1 \neq 0$ and $\mathcal{C}\omega_4\neq 0$.
If we write down $\mathcal{C}^2\omega_2 = \mathcal{C}^2\omega_3 = 0$ in terms of generators $\mathcal{C}\omega_2$ and $\mathcal{C}\omega_3$, we get another four relations $$\left\{\begin{matrix}
a_{21} =& {\alpha}\cdot a_{01} + {\gamma}\cdot a_{23}\\
1 =& {\alpha}\cdot a_{10} + {\gamma}\cdot a_{32}\\
1 =& {\beta}\cdot a_{01} + {\delta}\cdot a_{23}\\
a_{12} =& {\beta}\cdot a_{10} + {\delta}\cdot a_{32}\\
\end{matrix}\right..$$
Denote $\Delta = \det
\begin{pmatrix}
\sqrt{\alpha} & \sqrt{\beta}\\
\sqrt{\gamma} & \sqrt{\delta}
\end{pmatrix}$, and assume $\Delta \neq 0$. Solving the equations leads to $$a_{01} = \frac{a_{11}\sqrt{\delta}}{\Delta},\quad a_{10} = \frac{a_{11}\sqrt{\gamma}}{\Delta},\quad a_{23} = \frac{a_{22}\sqrt{\beta}}{\Delta} ,\quad a_{32} = \frac{a_{22}\sqrt{\alpha}}{\Delta},$$ $$a_{21} = \frac{\alpha\sqrt{\delta}a_{11} + \gamma\sqrt{\beta}a_{22}}{\Delta},\quad a_{12} = \frac{\sqrt{\alpha}{\delta}a_{22} + \sqrt{\gamma}{\beta}a_{11}}{\Delta}, \text{ }\text{ and}$$ $$ \frac{\alpha\sqrt{\gamma} a_{11} + \sqrt{\alpha}\gamma a_{22}}{\Delta} = 1 = \frac{\beta\sqrt{\delta} a_{11} + \sqrt{\beta}\delta a_{22}}{\Delta}.$$
It is not hard to see that the point $P = (x: y: xy: 1)$, with $(x, y) \in k^2$ a solution to the system $\left\{\begin{matrix}
1 + \alpha x^2 + \beta y^2 = 0\\
\gamma x^2 + \delta y^2 + x^2y^2 = 0
\end{matrix}\right.$, is a singular point of such a $C$. Otherwise, $\Delta = 0$, so there is some $\lambda \in k^*$ such that $\gamma = \lambda \alpha$ and $\delta = \lambda \beta$.
The top relations in the left and right columns of \eqref{eqn:Car_relations2} imply that this cannot happen.
Let us now discuss whether a smooth genus-$4$ curve $C = V(XY + ZT, q)$ with $q$ of type $b)$ can be of the type $[4, 3]$. The Hasse-Witt matrix of $C$ is now $$H_C = \begin{pmatrix}
a_{11} & a_{31} & a_{13} & 0\\
a_{01} & a_{21} & 0 & a_{23}\\
a_{10} & 1 & a_{12} & a_{32}\\
0 & a_{20} & a_{02} & a_{22}
\end{pmatrix}.$$
By arguments similar to those above, the cases $a_{11} = 0$ or $a_{22} = 0$ cannot occur for such a smooth curve $C$. As previously, we assume $a_{11} \neq 0, a_{22} \neq 0$, and write $$a_{31} = \alpha a_{11}, \quad a_{13} = \beta a_{11},\quad a_{20} = \gamma a_{22}, \text{ }\text{ and }\text{ } a_{02} = \delta a_{22}, $$ for some $\alpha, \beta, \gamma, \delta \in k$. Since $\mathcal{C}^2\omega_1 = \mathcal{C}^2\omega_4 = 0$, while $\mathcal{C}\omega_1 \neq 0$ and $\mathcal{C}\omega_4 \neq 0$, it follows that $\alpha\neq 0$ and $\gamma \neq 0$. Moreover, we can assume $\Delta \neq 0$ with $\Delta = \det
\begin{pmatrix}
\sqrt{\alpha} & \sqrt{\beta}\\
\sqrt{\gamma} & \sqrt{\delta}
\end{pmatrix}$ since $a_{11}$ would be zero otherwise (by discussing the relations similar to the ones in \eqref{eqn:Car_relations2}), as well as $\beta\neq 0$ and $\delta \neq 0$ since it is not hard to see that $C$ would be singular otherwise (by looking at the charts $\{Z \neq 0\}$ and $\{T \neq 0\}$).
From the relations \eqref{eqn:Car_relations1}, and $\mathcal{C}^2\omega_2 = \mathcal{C}^2\omega_3 = 0$ we find that $$ a_{01}= \frac{a_{11}\sqrt{\delta}}{\Delta},\quad a_{10}= \frac{a_{11}\sqrt{\gamma}}{\Delta},\quad a_{23}= \frac{a_{22}\sqrt{\beta}}{\Delta},\quad a_{32}= \frac{a_{22}\sqrt{\alpha}}{\Delta}, $$
$$a_{12} = \sqrt{\beta}\cdot a_{11} = \sqrt{\delta}\cdot a_{22},
\quad a_{21} = \frac{a_{11}\Delta}{\sqrt{\delta}} = \frac{a_{22}\Delta}{\sqrt{\beta}}, \quad a_{11} = \sqrt{\frac{\delta}{\alpha\gamma}}, \text{ }\text{ and } \text{ } a_{22} = \sqrt{\frac{\beta}{\alpha\gamma}}.$$
Arguing as in the first part of this proof and using all of the obtained formulas, we find that the point $P = (x: y: xy: 1)$, with $(x, y) \in k^2$ a solution to the system $$\left\{\begin{matrix}
1 + \alpha x^2 + \beta y^2 = 0\\
\gamma x^2 + \delta y^2 + x^2y^2 = 0
\end{matrix}\right.$$ is a singular point of such a $C$. That concludes our case-by-case analysis.
\end{proof}
\begin{rem}
Let $\alpha, \beta, \gamma, \delta, \Delta, a_{11}$, and $a_{22}$ be non-zero elements of $k$ and $q = q(X, Y, Z, T)$ a cubic polynomial as above. Denote $f(x, y) = q(x, y, xy, 1)$. In the proof of both parts of the previous Proposition, we used that a point $P = (p_x:p_y:p_xp_y:1)\in \P^3$ is a singular point of $C$ if and only if $$\frac{\partial f}{\partial x}(p_x, p_y) = 0, \quad \frac{\partial f}{\partial y}(p_x, p_y) = 0, \text{ }\text{ and }\text{ }(f + x\frac{\partial f}{\partial x}+ y\frac{\partial f}{\partial y})(p_x, p_y) = 0.$$ Note that, since in characteristic two only the monomials of $f$ of even total degree survive in this last combination, we have $$f + x\frac{\partial f}{\partial x}+ y\frac{\partial f}{\partial y} = a_{11}\, xy\,(1 + \alpha x^2 + \beta y^2) + a_{22}(\gamma x^2 + \delta y^2 + x^2y^2).$$
\end{rem}
As a consequence of the results presented in this section, we have the following Corollary.
\begin{cor}
In characteristic two, there are no smooth curves of genus four with the Ekedahl-Oort type $[4, 3]$.
\label{cor:no_43_eo_smth_curves}
\end{cor}
\begin{proof}
This follows from Proposition \ref{prop:eo43_he}, Proposition \ref{prop:eo43_cone}, and Proposition \ref{prop:eo43_smthquad}.
\end{proof}
We end this section with two remarks on the result obtained in Corollary \ref{cor:no_43_eo_smth_curves}.
\begin{rem}
In contrast to Corollary \ref{cor:no_43_eo_smth_curves}, in \cite{zhou_genus4}, Theorem 1.2, Zhou shows the existence of a smooth genus-$4$ curve over $\overline{\mathbb{F}}_p$ with the Ekedahl-Oort type $[4, 3]$ for any odd prime number $p$ with $p \equiv \pm 2 \mod 5$.
\end{rem}
\begin{rem}
\label{rem:eo43_implies_ss3dim}
Note that $Z_{[4, 3]}\cap j(\partial {\mathcal{M}_g^{ct}})$ is a $2$-dimensional locus: by Proposition \ref{prop:eo_classification}, the only singular, stable genus-$4$ curves with the type $[4, 3]$ are those whose two components are supersingular genus-$2$ curves. We can combine that observation with Corollary \ref{cor:no_43_eo_smth_curves} and Example \ref{ex:smooth_ss_curve} to give a second proof of Theorem \ref{thm:supersingular_curves_dim3}, using that $Z_{[4, 3]}\subseteq \mathcal{S}_4$ is a $3$-dimensional locus.
\end{rem}
\newpage
\section{Generic $a$-number}
Let $k$ be an algebraically closed field of characteristic $p>0$, and consider $\mathcal{M}_g = \mathcal{M}_g\otimes \overline{\mathbb{F}}_p$. In \cite{pries_a_number}, using induction and intersections with the boundary strata, Pries shows that the generic point of any component of the $p\text{-}\rank \leq f$ locus of smooth genus-$g$ curves $$V_{f} \mathcal{M}_g,$$ for $f = g- 2$ or $f = g- 3$, has $a$-number one. Given Pries's inductive argument, which we present below, it was enough to show that the generic points of all components of $V_0 \mathcal{M}_2$ and $V_0 \mathcal{M}_3$ have $a$-number one.
\begin{prop}
Let $g\geq 2$, $1\leq f <g$, and $\mathcal{M}_g = \mathcal{M}_g \otimes \overline{\mathbb{F}}_p$. If the generic point $X$ of any component of $V_f \mathcal{M}_g$ has $a(X) = 1$, then the generic point $Y$ of any component of $V_{f + 1} \mathcal{M}_{g + 1}$ has $a(Y) = 1$.
In particular, if the generic point $X$ of any component of the locus $V_0 \mathcal{M}_g$ has $a(X) = 1$, then for any $h\geq g$, the generic point $Y$ of any component of $V_{h-g} \mathcal{M}_h$ has $a(Y) = 1$.
\label{prop:pries_induction}
\end{prop}
\begin{proof}
See \cite{pries_a_number}, Proposition 3.7.
\end{proof}
Here, we show that for $p = 2$, the generic point of any component of $V_0\cap \mathcal{J}_4$ has $a$-number one. We start by showing that for $p = 2$, there is a smooth genus-$4$ curve $C$ with $p\text{-}\rank(C) = 0$ and $a(C) = 1$.
\begin{exmp}
The curve $C:\left\{\begin{matrix}
XY + ZT = 0\\
X^2Z + Y^2Z + YZ^2 + X^2T + Y^2T + XT^2 = 0
\end{matrix}\right. \text{ in } \P^3 $ from Example \ref{ex:smooth_ss_curve} has $a$-number $a(C) = 1$.
That follows from \eqref{eq:hassewitt_split_quadric}, since $$H_C = \begin{pmatrix}
0 & 1 & 1 & 0\\
0 & 0 &0 & 1\\
1 & 0 & 0 & 0\\
0 &1 & 1 & 0
\end{pmatrix},$$ and thus $a(C) = 4 - \mathrm{rank} H_C = 1$.
\label{exmp:ss_prank0_anum1}
\end{exmp}
\noindent A specific feature of working with curves $C$ in positive characteristic is that there are certain non-trivial constraints among $p$, $g = g(C)$, and $a = a(C)$ that must be satisfied for such curves $C$ to exist. For example, we have the following result by Zhou.
\begin{prop}
Let $C$ be a smooth genus-$g$ curve defined over a field of characteristic $p$. If the rank of the Cartier operator of $C$ equals $1$, i.e., if $a(C) = g - 1$, then $$g \leq p + \frac{p(p-1)}{2}.$$
\label{prop:zhou_a}
\end{prop}
\begin{proof}
See \cite{zhou_a_number}, Theorem 1.1.
\end{proof}
Using the above results in the case $p = 2$, we extract information about the $a$-numbers of some $2\text{-}\rank$ strata.
\begin{thm} Let $\mathcal{J}_4 = \mathcal{J}_4\otimes \overline{\mathbb{F}}_2$ and let $V_0\subseteq \mathcal{A}_4=\mathcal{A}_4 \otimes \overline{\mathbb{F}}_2$ be the $2\text{-}\rank$ zero locus. Let $X$ be a generic point of a component of $V_0\cap \mathcal{J}_4$, the $2\text{-}\rank$ zero locus of Jacobians. Then $a(X) = 1$.
\label{thm:generic_a_num}
\end{thm}
\begin{proof}
Since there is a smooth genus-$4$ curve over $\overline{\mathbb{F}}_2$ with $p\text{-}\rank$ $0$ and $a$-number $1$, by Example \ref{exmp:ss_prank0_anum1}, we see that $Z_{[4]}\cap \mathcal{J}_4\neq \o$, and thus that the Ekedahl-Oort stratum $Z_{[4]}\cap \mathcal{J}_4$ must have dimension $$\dim(Z_{[4]}\cap \mathcal{J}_4) = \dim(V_0\cap \mathcal{J}_4) = 5.$$ Therefore, to get the result, we need to show that all components of $\overline{Z_{[4, 1]}}\cap \mathcal{J}_4$ have dimension $\leq 4$.
For a contradiction, assume that $\overline{Z_{[4, 1]}}\cap \mathcal{J}_4$ has dimension $5$; since $Z_{[4, 1]}$ is irreducible by Proposition \ref{prop:ekedahlvdgeer} and $\dim \overline{Z_{[4, 1]}} = 5$, this implies $\overline{Z_{[4, 1]}}\subseteq \mathcal{J}_4$. Then all the Ekedahl-Oort strata $Z_{\mu}$ for $\mu \leq [4, 1]$ are contained in $\mathcal{J}_{4}$. In particular, $Z_{[4, 3, 1]}$ is a two-dimensional non-empty subvariety of $\mathcal{J}_4$. By Proposition \ref{prop:eo_classification}, and using that $I_{4, 3}$ is an indecomposable symmetric $\mathrm{BT}_1$ group scheme, it follows that $Z_{[4, 3, 1]}$ is contained in the open Torelli locus $\mathcal{J}_4^0$. However, by Proposition \ref{prop:zhou_a}, there are no smooth genus-$4$ curves over $k$ with $a$-number $3$, a contradiction. Hence $\dim \overline{Z_{[4, 1]}}\cap \mathcal{J}_4 \leq 4$.
\end{proof}
\begin{rem}
\label{rem:supersingular_locus_3dim}
The way we used the type $[4, 3, 1]$ in the previous argument can also be employed, together with Example \ref{ex:smooth_ss_curve}, to give the third proof of Theorem \ref{thm:supersingular_curves_dim3}.
\end{rem}
As a consequence of the previous argument, we obtain the following result.
\begin{cor} Let $\mathcal{J}_4 = \mathcal{J}_4 \otimes \overline{\mathbb{F}}_2$. Then the Ekedahl-Oort strata $Z_{[4]}\cap \mathcal{J}_4$, $Z_{[4, 1]}\cap \mathcal{J}_4$, and $Z_{[4, 2]}\cap \mathcal{J}_4$ are respectively of the expected codimensions $4, 5,$ and $6$ in $\mathcal{J}_4$, while $Z_{\mu}\cap \mathcal{J}_4^0 = \o$ exactly for $\mu \in \{[4, 3], [4, 2, 1], [4, 3, 1], [4, 3, 2], [4, 3, 2, 1]\}$.
\label{cor:eo_conclusion}
\end{cor}
\begin{proof}
Note that $Z_{[4, 2]}\cap \mathcal{J}_4 \neq \o$ since the hyperelliptic curves of $2\text{-}\rank = 0$ have the type $[4, 2]$, and the curve $$C: y^2 + y = x^9 + x^5$$ considered in Example \ref{ex:isog_class_ss} is a supersingular curve, so a curve of $2\text{-}\rank$ zero. The result for the type $[4, 2]$ follows by the argument analogous to the one above. A similar argument can also be used for the type $[4, 1]$ since, by Example \ref{exmp:EO41nonempty}, we see that the curve $C$ given by the equations $$C:\left\{\begin{matrix}
XY + T^2 = 0\\
TX^2 + Y^3 + X^2Z + Z^3 = 0
\end{matrix}\right. \text{ in } \P^3,$$ has the type $[4, 1]$. The conclusion for $\mu = [4]$ was already made in the proof of Theorem \ref{thm:generic_a_num}. Lastly, the result for $\mu \leq [4, 2, 1]$ follows from Proposition \ref{prop:zhou_a}, and for $\mu = [4, 3]$, it follows from Corollary \ref{cor:no_43_eo_smth_curves}.
\end{proof}
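For bookkeeping, the dimension counts used in these arguments can be collected in one display. Assuming the standard codimension formula for Ekedahl-Oort strata, $\operatorname{codim}_{\mathcal{A}_4} Z_{\mu} = \sum_i \mu_i$, together with $\dim \mathcal{A}_4 = 10$ and $\dim \mathcal{J}_4 = 9$, we have
\[
\dim Z_{\mu} \;=\; 10 - \sum_i \mu_i
\qquad\text{and}\qquad
\dim\!\left(Z_{\mu}\cap \mathcal{J}_4\right) \;=\; 9 - \sum_i \mu_i \quad\text{(expected)},
\]
so that $\mu = [4], [4, 1], [4, 2]$ give the expected dimensions $5, 4, 3$ (codimensions $4, 5, 6$) in $\mathcal{J}_4$, while $Z_{[4, 3, 1]}$ is $10 - 8 = 2$-dimensional in $\mathcal{A}_4$, consistent with the proof of Theorem \ref{thm:generic_a_num}.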
The argument offered in the proof of Theorem \ref{thm:generic_a_num} gives us the following observation.
\begin{rem}
Let $p>0$ be an arbitrary prime number and let $[4, 3, 1]$ be the Ekedahl-Oort type in characteristic $p$ and dimension $4$, and $\mathcal{J}_4^0 = \mathcal{J}_4^0 \otimes \overline{\mathbb{F}}_p$. Assume that $Z_{[4, 3, 1]} \not \subseteq \mathcal{J}_4^0$ (e.g., that the locus of smooth genus-$4$ curves with the Ekedahl-Oort type $[4, 3, 1]$ in characteristic $p$ is not $2$-dimensional). Then the analog of Theorem \ref{thm:generic_a_num} holds: for any generic point $X$ of a component $(V_0\cap \mathcal{J}_4)\otimes \overline{\mathbb{F}}_p$, we have $a(X) = 1$. Moreover, that implies the existence of a smooth genus-$4$ curve in characteristic $p$ with the Ekedahl-Oort type $[4]$.
\end{rem}
\begin{rem}
By Theorem \ref{thm:supersingular_curves_dim3}, Corollary \ref{cor:np_conclusion}, and Corollary \ref{cor:eo_conclusion}, we know the dimensions (of any component) of the intersection of $\mathcal{J}_4$ with the Newton and Ekedahl-Oort strata inside $V_0$ in characteristic two. However, we cannot say much about their irreducibility or the number of irreducible components. In fact, we do not even know this for $V_0 \cap \mathcal{J}_4$ itself. For now, we only know that $V_0 \cap \mathcal{J}_g$ is connected for any $g\geq 2$ in any characteristic $p>0$, by Achter and Pries (\cite{achterpries_prankconn}, Corollary 4.5, which relies on \cite{achterpries_monodromy}, Proposition 3.4).
\end{rem}
\normalsize
\section{Introduction}
Subdwarf B (sdB) stars are helium core burning stars with very thin
hydrogen envelopes that lie at the blue end of the horizontal branch and hence
are identified with the extreme horizontal branch (EHB) stars \citep[see a recent review by][]{heb2009}.
\citet{dcr1996} showed that a high mass-loss rate on the red giant branch
produces a thin hydrogen envelope and prevents the star from ascending the
asymptotic giant branch.
The evolution of single EHB stars from zero-age to helium exhaustion may be
followed on a series of tracks narrowly centred on 0.475 $M_\odot$ \citep{dor1993}.
After helium exhaustion these objects evolve directly onto
the white dwarf sequence.
On the other hand, following the original proposal of \citet{men1976} for the
formation of sdB stars through binary evolution, it has been found that
a significant fraction of these stars reside
in close binary systems \citep[e.g.,][]{max2001,mor2003}.
The onset of a common envelope phase or Roche lobe overflow contributes to the
removal of the hydrogen envelope and directs the star toward the EHB.
\citet{han2002,han2003} propose three channels for the formation
of sdB stars through binary interaction, involving either common envelope (CE) phases,
episodes of Roche lobe overflow (RLOF), or the merger of two helium white dwarfs.
The CE scenario involving primary stars that experience a
helium flash accompanied by a low-mass or white dwarf secondary star is expected
to create short-period binaries ($\log{P(d)}\approx -1$ to 1) and a
final primary mass distribution narrowly centred on 0.46 $M_\odot$.
The CE scenario with primary stars massive enough to avoid a helium flash
is expected to achieve a much lower final mass for the primary (0.33-0.35 $M_\odot$).
On the other hand, the RLOF scenario creates longer period binaries and a wider distribution
of primary final masses. Studies of the binary components, and an
estimate of the frequency of such systems are required to constrain these models and
determine the relative contribution of these formation channels to the sdB population.
In this context, we have initiated a program to identify new hot
subdwarf candidates (Vennes et al., in preparation).
We combined ultraviolet photometric measurements from the {\it Galaxy Ultraviolet Explorer} ({\it GALEX}) all-sky survey
and photographic visual magnitudes from the Guide Star Catalog, Version 2.3.2 (GSC2.3.2), to build a list of blue stellar candidates, while
follow-up spectroscopic measurements further constrained the properties of the candidates. In Section 2,
we present available spectroscopic and photometric measurements of two new, hot sdB stars.
Section 3 presents our analysis of the radial velocity measurements, and in
Section 4 we constrain the properties of the binary components. We summarise and conclude
in Section 5.
\section{Observations}
The ultraviolet sources
GALEX~J234947.7$+$384440 (hereafter GALEX~J2349$+$3844)
and GALEX~J032139.8+472716 (hereafter GALEX~J0321$+$4727) were originally selected
based on the colour index $NUV-V<0.5$ and the brightness limit $NUV<14$, where $NUV$ is the {\it GALEX} near ultraviolet
bandpass, and $V$ is the GSC photographic magnitude.
The sources were also identified with optical counterparts in the Tycho-2
catalogue \citep{hog2000} and the Third U.S. Naval Observatory CCD Astrograph Catalog \citep[UCAC3,][]{zac2010}.
GALEX~J0321$+$4727 is also known as a disqualified member (No. 488) of the cluster Melotte 20 \citep[see][]{mer2008,van2009}.
\citet{hec1956} listed a spectral type of B7
but excluded it as a possible cluster member based on its proper motion.
GALEX~J2349$+$3844 was independently identified in the
First Byurakan Survey of Blue Stellar Objects as FBS~2347+385 \citep{mic2008}.
\begin{table}
\centering
\begin{minipage}{\columnwidth}
\caption{Astrometry and photometry \label{tbl_phot}}
\begin{tabular}{@{}lcc@{}}
\hline
& J0321$+$4727 & J2349$+$3844 \\
\hline
RA (2000) & 03 21 39.629 & 23 49 47.645 \\
Dec (2000) & +47 27 18.79 & +38 44 41.57 \\
$\mu_{\alpha}\cos{\delta}$ (Tycho-2) & 57.2$\pm$1.9 mas yr$^{-1}$ & $-$7.5$\pm$3.5 mas yr$^{-1}$ \\
$\mu_{\delta}$ (Tycho-2) & $-$8.5$\pm$1.8 mas yr$^{-1}$ & 1.6$\pm$3.2 mas yr$^{-1}$ \\
$\mu_{\alpha}\cos{\delta}$ (UCAC3) & 58.0$\pm$1.0 mas yr$^{-1}$ & $-$4.0$\pm$2.3 mas yr$^{-1}$ \\
$\mu_{\delta}$ (UCAC3) & $-$8.4$\pm$1.0 mas yr$^{-1}$ & $-$1.4$\pm$1.3 mas yr$^{-1}$ \\
{\it FUV} & $12.441\pm0.019$ & $11.261\pm0.021$ \\
{\it NUV} & $11.913\pm0.007$ & $11.310\pm0.005$ \\
$B$ & $11.53\pm0.11$ & $11.66\pm0.11$ \\
$V$ & $11.72\pm0.16$ & $11.72\pm0.15$ \\
2MASS {\it J} & $11.807\pm0.023$ & $12.040\pm0.024$ \\
2MASS {\it H} & $11.859\pm0.030$ & $12.156\pm0.031$ \\
2MASS {\it K} & $11.893\pm0.029$ & $12.184\pm0.024$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\subsection{Photometry and astrometry}
We extracted the ultraviolet photometry from the
{\it GALEX} all-sky survey using CasJobs at the Multimission Archive at STScI (MAST). {\it GALEX} obtained photometric
measurements in the FUV and NUV bands with effective wavelengths of 1528 \AA\ and 2271 \AA,
respectively. We corrected the photometry for non-linearity
\citep{mor2007}. Although the statistical errors are
small ($<0.02$ mag), we estimate the total errors to be $\ga 0.2$ mag because of large uncertainties in the
linearity corrections for bright sources \citep[see][]{mor2007}. In the present case, the $NUV$ measurements are more
reliable than the $FUV$ measurements.
We also obtained infrared photometry from the Two Micron All Sky
Survey \citep[{\it 2MASS},][]{skr2006} and optical photometry from the Tycho catalogue\footnote{Accessed at VizieR \citep{och2000}.}.
We transformed the Tycho $B_T$ and $V_T$ photometric magnitudes
to the Johnson $B$ and $V$ magnitudes using the recommended transformation equations
\citep{per1997}.
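As an illustration, the conversion can be sketched as follows; the linear coefficients below are the transformation equations recommended in the Tycho catalogue documentation \citep{per1997}, and the input magnitudes in the example are illustrative rather than catalogued values.

```python
def tycho_to_johnson(bt, vt):
    """Approximate Johnson (B, V) from Tycho (B_T, V_T).

    Linear transformation recommended for the Tycho catalogue:
      V     = V_T - 0.090 (B_T - V_T)
      B - V = 0.850 (B_T - V_T)
    """
    d = bt - vt                  # Tycho colour index B_T - V_T
    v = vt - 0.090 * d
    b = v + 0.850 * d
    return b, v

# Illustrative (not catalogued) Tycho magnitudes:
b, v = tycho_to_johnson(11.60, 11.75)
```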
Table~\ref{tbl_phot} lists the available photometry
for the two sources and astrometric measurements from Tycho-2 and UCAC3. The tabulated coordinates (epoch and equinox 2000)
are the averages of the Tycho-2 and UCAC3 coordinates. Both optical counterparts lie within $\sim 3\arcsec$ of the
ultraviolet sources.
Finally, we have extracted photometry from the Northern Sky Variability Survey
(NSVS). The photometric bandpass is very broad, ranging from 4500 to 10000 \AA,
with an effective wavelength close to the Johnson $R$ band \citep{woz2004}.
The modified Julian dates supplied by NSVS were converted to barycentric
Julian dates. The time series comprise 173 and 240 good measurements for
GALEX~J0321$+$4727 and GALEX~J2349$+$3844, respectively, and allow the
examination of our objects for variability. We also obtained a photometric
series (167 good measurements) of the known eclipsing and variable sdB+dM
binary 2M~1533$+$3759 \citep{for2010} to test our methodology.
\subsection{Spectroscopy}
We observed GALEX~J2349$+$3844 and
GALEX~J0321$+$4727 using the spectrograph at the coud\'e focus of the 2m
telescope at Ond\v{r}ejov Observatory \citep{sle2002}. We obtained the spectroscopic series using the 830.77
lines per mm grating with a SITe $2030\times 800$ CCD that delivered
a spectral resolution $R = 13\, 000$ and a spectral range from 6254 \AA\ to 6763 \AA.
The exposure time for both targets is 30 minutes, with each exposure immediately
followed by a ThAr comparison arc.
The fast rotating B star HR 7880 was observed each
night to help remove telluric features from the spectra.
We verified the stability of the wavelength scale by measuring the wavelength centroids
of O{\sc i} sky lines. The velocity scale remained stable within 1\,km~s$^{-1}$.
We also obtained two low dispersion spectra of GALEX~J0321$+$4727 using the
R-C spectrograph attached to the 4m telescope at Kitt Peak National Observatory
(KPNO) on UT 2010 March 23.
We used the KPC-10A grating (316 lines/mm) with the
WG360 order blocking filter. The slitwidth was set to 1.5 arcseconds to provide
a resolution of FWHM $= 5.5$\AA. A HeNeAr comparison spectrum was obtained following
the target spectrum.
We exposed GALEX~J0321$+$4727 for 60 and 180 s and we co-added the spectra
weighted by the exposure times.
All spectra were reduced using standard procedures within IRAF.
\section{Binary Parameters}
\subsection{Radial velocity variations}
We measured the radial velocities by fitting a Gaussian function to the
H$\alpha$ core and by measuring the shifts relative to the rest wavelength.
The shifts were then converted into radial velocities and adjusted to the
solar system barycentre. Tables~\ref{tbl_gal0321} and \ref{tbl_gal2349} list
the barycentric Julian dates, radial velocities, and signal-to-noise ratios
of the spectra for GALEX~J0321$+$4727 and GALEX~J2349$+$3844,
respectively. The accuracy of individual measurements varied from
1\,km~s$^{-1}$ in high signal-to-noise spectra to 10\,km~s$^{-1}$
in lower quality spectra.
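The centroid measurement can be sketched as follows: an inverted Gaussian is fitted to a synthetic H$\alpha$ core, and the centroid shift is converted to a velocity. The line parameters and the injected 50 km~s$^{-1}$ shift are illustrative; the barycentric correction applied to the real data is a separate step not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458          # speed of light in km/s
LAM0 = 6562.80              # H-alpha rest wavelength (Angstrom)

def gauss_core(lam, depth, lam_c, sigma, cont):
    """Continuum minus a Gaussian absorption core."""
    return cont - depth * np.exp(-0.5 * ((lam - lam_c) / sigma) ** 2)

# Synthetic H-alpha core shifted by an injected velocity (illustrative values).
v_true = 50.0                                # km/s
lam = np.linspace(6550.0, 6576.0, 400)
lam_shift = LAM0 * (1.0 + v_true / C_KMS)
flux = gauss_core(lam, 0.4, lam_shift, 1.5, 1.0)

# Fit the core and convert the centroid shift to a radial velocity.
popt, _ = curve_fit(gauss_core, lam, flux, p0=[0.3, LAM0, 2.0, 1.0])
v_meas = C_KMS * (popt[1] - LAM0) / LAM0     # km/s
```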
\begin{table}
\centering
\begin{minipage}{\columnwidth}
\caption{Radial velocities of GALEX~J0321$+$4727. \label{tbl_gal0321}}
\begin{tabular}{@{}ccc|ccc@{}}
\hline
BJD & v & S/N & BJD & v & S/N \\
(2455000+) & (km s$^{-1}$) & & (2455000+) & (km s$^{-1}$) & \\
\hline
45.54317 & +24.9 & 29 & 75.61381 & +38.6 & 13 \\
45.57495 & +61.4 & 32 & 76.40849 & +66.1 & 20 \\
59.49449 & +119.6 & 25 & 76.43166 & +86.7 & 21 \\
62.58712 & +58.9 & 15 & 76.45473 & +112.2 & 22 \\
62.61018 & +95.5 & 10 & 76.47782 & +120.4 & 22 \\
63.55702 & +45.3 & 10 & 76.49589 & +120.9 & 11 \\
63.57998 & +13.8 & 15 & 76.52687 & +95.7 & 14 \\
63.60293 & +15.1 & 17 & 76.54989 & +80.6 & 15 \\
63.61545 & +13.7 & 12 & 76.57303 & +49.1 & 14 \\
75.41035 & +143.4 & 15 & 84.56626 & +35.5 & 13 \\
75.43322 & +131.9 & 18 & 84.59055 & +9.4 & 26 \\
75.48607 & +82.6 & 15 & 84.61363 & +12.2 & 24 \\
75.50993 & +46.0 & 13 & 84.63684 & +35.0 & 25 \\
75.53328 & +14.5 & 16 & 98.38356 & +18.2 & 8 \\
75.59048 & +28.1 & 13 & & & \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{table}
\centering
\begin{minipage}{\columnwidth}
\caption{Radial velocities of GALEX~J2349$+$3844. \label{tbl_gal2349}}
\begin{tabular}{@{}ccc|ccc@{}}
\hline
BJD & v & S/N & BJD & v & S/N \\
(2455000+) & (km s$^{-1}$) & & (2455000+) & (km s$^{-1}$) & \\
\hline
45.48574 & -32.6 & 21 & 75.56357 & -4.7 & 12 \\
45.50700 & +2.0 & 22 & 76.31684 & -47.6 & 17 \\
59.40594 & +29.0 & 25 & 76.33994 & -71.0 & 18 \\
62.49909 & -62.8 & 8 & 76.36123 & -82.4 & 16 \\
63.46112 & -76.3 & 15 & 76.38463 & -79.2 & 20 \\
63.48409 & -70.0 & 18 & 84.28986 & -74.6 & 25 \\
63.51335 & -41.2 & 17 & 84.31270 & -61.3 & 26 \\
63.53634 & -10.2 & 17 & 84.33575 & -38.1 & 27 \\
68.42225 & -19.7 & 14 & 84.35898 & -8.4 & 25 \\
68.46553 & -55.4 & 12 & 84.38179 & +38.6 & 27 \\
69.35158 & -31.8 & 14 & 84.40464 & +54.1 & 27 \\
69.37813 & -52.8 & 19 & 84.42747 & +78.3 & 30 \\
69.40226 & -67.9 & 9 & 84.45030 & +86.0 & 23 \\
70.50880 & +23.4 & 18 & 84.47349 & +86.9 & 19 \\
70.53208 & +56.3 & 27 & 84.49679 & +86.4 & 22 \\
74.53854 & -97.6 & 13 & 84.52089 & +68.0 & 24 \\
75.31818 & +26.7 & 18 & 84.54375 & +53.4 & 25 \\
75.34117 & +2.9 & 15 & 98.29111 & +59.4 & 17 \\
75.36393 & -20.4 & 18 & 98.31403 & +76.7 & 16 \\
75.38821 & -38.7 & 15 & 98.33709 & +90.4 & 17 \\
75.46518 & -81.3 & 11 & 98.36017 & +78.5 & 10 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
The orbital parameters were determined by
fitting to the velocity series a sinusoidal function of the form
\begin{displaymath}
v(t) = \gamma + K\sin{(2\pi[t-T_0]/P)},
\end{displaymath}
where $P$ is the period, $\gamma$ is the systemic velocity, $K$ is the
velocity semi-amplitude, and $T_0$ is the initial epoch. The initial epoch
$T_0$ corresponds to the inferior conjunction of the sdB ($\Phi = 0$).
We applied a $\chi^2$ minimisation technique with each velocity measurement
weighted proportionally to the signal-to-noise ratio achieved in the
corresponding spectrum. Figures~\ref{fig_vel_gal0321} and \ref{fig_vel_gal2349}
show the periodograms and best-fit radial velocity curves for
GALEX~J0321$+$4727 and GALEX~J2349$+$3844, respectively. The residual scatter
about the best-fit radial velocity curve is $\approx 6$ km\,s$^{-1}$ for both data sets.
Table~\ref{tbl_bin} lists the corresponding binary parameters and the
calculated mass functions. The measured radial velocities were also employed to
apply Doppler corrections to individual spectra and build phase-averaged
spectra for each star (Section 4).
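The fitting step can be sketched on synthetic data as follows (parameters loosely based on the GALEX~J0321$+$4727 solution; the real fit additionally weights each point by its signal-to-noise ratio, which is omitted here).

```python
import numpy as np
from scipy.optimize import curve_fit

def rv_model(t, gamma, K, P, T0):
    """Circular-orbit radial velocity curve v(t) = gamma + K sin(2 pi (t - T0)/P)."""
    return gamma + K * np.sin(2.0 * np.pi * (t - T0) / P)

# Synthetic, noiseless series with J0321+4727-like parameters (illustrative).
gamma_t, K_t, P_t, T0_t = 70.5, 59.8, 0.26584, 45.582
t = 45.5 + np.sort(np.random.default_rng(1).uniform(0.0, 20.0, 60))
v = rv_model(t, gamma_t, K_t, P_t, T0_t)

# Least-squares fit; a good initial period (e.g. from a periodogram) is
# needed because chi^2 is highly multimodal in P.
popt, _ = curve_fit(rv_model, t, v, p0=[60.0, 50.0, 0.266, 45.6])
gamma_f, K_f, P_f, T0_f = popt
```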
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{({\it Top}) Period analysis of GALEX~J0321$+$4727 showing a single
significant period. ({\it Middle}) Radial velocity measurements folded on the
orbital period and best-fit sine curve. ({\it Bottom}) Residual of the
velocities relative to the best-fit sine curve.\label{fig_vel_gal0321}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{Same as Figure~\ref{fig_vel_gal0321} but for GALEX~J2349$+$3844.
\label{fig_vel_gal2349}}
\end{figure}
\begin{table}
\centering
\begin{minipage}{\columnwidth}
\caption{Binary parameters. \label{tbl_bin}}
\begin{tabular}{@{}lcc@{}}
\hline
Parameter & J0321$+$4727 & J2349$+$3844 \\
\hline
Period (d) & $0.26584\pm0.00004$ & $0.46249\pm0.00007$ \\
$T_0$ (BJD 2455000+) & $45.582\pm0.011$ & $45.511\pm0.013$ \\
K (km~s$^{-1}$) & $59.8\pm4.5$ & $87.9\pm2.2$ \\
$\gamma$ (km~s$^{-1}$) & $70.5\pm2.2$ & $2.0\pm1.0$ \\
$f(M_{\rm sec})$ ($M_\odot$) & $0.00589\pm0.00015$ & $0.03254\pm0.00044$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
Our new systemic velocity for GALEX~J0321$+$4727 ($\gamma=70.5$ km~s$^{-1}$) also clearly rules out membership in
to the cluster Melotte 20 \citep[$\gamma=-1.4$ km~s$^{-1}$,][]{mer2008}.
\subsection{Photometric variations}
We investigated possible variability in GALEX~J0321$+$4727 and GALEX~J2349$+$3844
using the NSVS light curves. We analyse the light curves using a Lomb periodogram
for unevenly sampled time series \citep{pre1992}. The power spectra (Fig.~\ref{fig_fft})
show a peak signal at a frequency close to the orbital period in GALEX~J0321$+$4727 (Table~\ref{tbl_bin})
and 2M~1533$+$3759 \citep{for2010}, but not in GALEX~J2349$+$3844 (Table~\ref{tbl_bin}). We estimated the probability of
a given frequency relative to the probability of the peak frequency following \citet{pre1992} and determined
the 1$\sigma$ (66\%) error bars on the period of the photometric variations for GALEX~J0321$+$4727:
\begin{displaymath}
P=0.26586\pm0.00003\ {\rm d},
\end{displaymath}
and 2M~1533$+$3759:
\begin{displaymath}
P=0.16177\pm0.00003\ {\rm d}.
\end{displaymath}
For both stars, the period of photometric variations is equal, within error bars, to the measured orbital period,
thereby validating the method.
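A minimal version of this period search, on a synthetic unevenly sampled light curve with the GALEX~J0321$+$4727 amplitude and period, might look like this (the epochs and noise-free magnitudes are illustrative):

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled synthetic light curve at the J0321+4727 photometric period.
P_true = 0.26586                              # days
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 200))      # observation epochs (days)
mag = 12.034 + 0.061 * np.sin(2.0 * np.pi * t / P_true)

# Lomb periodogram on a grid of angular frequencies around the expected peak.
omega = np.linspace(20.0, 28.0, 8001)         # rad / day
power = lombscargle(t, mag - mag.mean(), omega)
P_best = 2.0 * np.pi / omega[np.argmax(power)]
```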
Figure~\ref{fig_phot} shows the NSVS light curves for GALEX~J0321$+$4727 and GALEX~J2349$+$3844 folded
on the orbital period with an arbitrary phase adjustment so that $\Phi=0$ corresponds to the
inferior conjunction of the primary star (sdB). The distant NSVS epoch ($\sim$1999) precluded phasing with the current
orbital ephemeris (Table~\ref{tbl_bin}). The light curve of GALEX~J0321$+$4727 is fitted with the sine curve:
\begin{displaymath}
m = (12.034\pm0.003) + (0.061\pm0.004)\ \sin{2\pi\Phi},
\end{displaymath}
that we interpret as a reflection effect on a late-type secondary star (Section 4). The variations in GALEX~J2349$+$3844
are not significant, with a mean magnitude of $\langle m\rangle=12.281\pm0.003$ and a semi-amplitude of $\Delta m/2=0.009\pm0.004$.
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{Lomb periodograms of GALEX~J0321$+$4727 and GALEX~J2349$+$3844,
and the test target 2M~1533$+$3759.}
\label{fig_fft}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{NSVS lightcurves of GALEX~J0321$+$4727 and GALEX~J2349$+$3844 compared to best-fit
sine curves with semi-amplitudes of $0.061\pm0.004$ and $0.009\pm0.004$ mag, respectively.}
\label{fig_phot}
\end{figure}
\section{Properties of the components}
Our analysis of the spectroscopic observations of the primary stars is based on
a grid of non-LTE models and synthetic spectra calculated using TLUSTY/SYNSPEC
\citep{hub1995,lan1995}. The grid covers the effective temperature from
$T_{\rm eff}=21000$ to 35000 K (in steps of 1000 K), the surface gravity from
$\log{g}=4.75$ to 6.25 (in steps of 0.25), and the helium abundance from
$\log{(n_{\rm He}/n_{\rm H})} = -4.0$ to $-1.0$ (in steps of 0.5). The neutral
hydrogen and helium atoms include 9 ($n\le9$) and 24 ($n\le8$) energy levels,
respectively, and the ionized helium atom includes 20 ($n\le20$) energy levels.
\subsection{Spectral energy distribution}
Figure~\ref{fig_sed_gal2349} shows a preliminary analysis of the atmospheric
properties of the two sdB stars using their observed spectral energy
distribution (SED) from the infrared to the ultraviolet, and using
representative sdB models at $T_{\rm eff}=29000$ K, $\log{g}=5.5$, and
$\log{(n_{\rm He}/n_{\rm H})}=-2.65$ (GALEX~J0321$+$4727) and $-3.25$
(GALEX~J2349$+$3844). The model corresponds to a hot sdB star with absolute
magnitude $M_V=4.1$ and a canonical mass of $0.47\,M_\odot$
\citep[see][]{dor1993}. We corrected the model spectra for interstellar
extinction using a variable extinction coefficient $E(B-V)$ and a parametrised
extinction law \citep[$R=3.2$,][]{car1989}. GALEX~J0321$+$4727 lies close to
the plane of the Galaxy ($l=147.5, b=-8.1$) and the total extinction in the
line of sight is $E(B-V) = 0.61$ \citep{sch1998}. Excluding the FUV band, the
observed SED is well matched by the model assuming $E(B-V) = 0.23$. A much
higher coefficient ($\approx 0.4$) is required to match the FUV band although
the accuracy of the FUV photometry is most probably affected by non-linearity
(see Section 2.1). GALEX~J2349$+$3844 is located farther below the plane
($l=110.0, b=-22.6$) and the total extinction is lower, $E(B-V) = 0.17$. In this
case, the SED is well matched assuming $E(B-V) = 0.13$. These estimates appear
reasonable if we locate both stars at a distance of $\sim 330$ pc by assuming
$M_V\sim4.1$ and $V=11.7$ for both stars. Taking the scale height for dust in
the Galactic plane as $h\approx 150$ pc, the total distance across the dust
layer is $\approx 1060$ and 390 pc at $|b|=8.1$ and 22.6$^\circ$, respectively,
so that the path toward GALEX~J0321$+$4727 covers $\sim31$\% of the total
distance and $\sim85$\% for GALEX~J2349$+$3844. According to this simple
calculation, the scaled $E(B-V)$ indices are predicted to be $0.19$ and $0.15$
for GALEX~J0321$+$4727 and GALEX~J2349$+$3844,
respectively, and are similar to our estimates based on the SED.
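This plane-parallel scaling is easy to reproduce; the function below encodes the calculation with the numbers quoted in the text (dust scale height, Galactic latitudes, adopted distance, and total line-of-sight reddenings).

```python
import math

def scaled_reddening(ebv_total, b_deg, d_star, h_dust=150.0):
    """Reddening accumulated by a star at distance d_star (pc), for a
    plane-parallel dust layer of scale height h_dust (pc) at Galactic
    latitude b_deg (simplified model used in the text)."""
    d_layer = h_dust / math.sin(math.radians(abs(b_deg)))  # path across layer
    frac = min(d_star / d_layer, 1.0)
    return frac * ebv_total, d_layer

# Values quoted in the text for the two sight lines, d ~ 330 pc:
ebv_0321, d_0321 = scaled_reddening(0.61, -8.1, 330.0)    # GALEX J0321+4727
ebv_2349, d_2349 = scaled_reddening(0.17, -22.6, 330.0)   # GALEX J2349+3844
```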
\begin{figure}
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{({\it Top}) Spectral energy distribution of GALEX~J0321$+$4727
compared to a model spectrum that was corrected for interstellar extinction
assuming $E(B-V) = 0.0, 0.23, 0.4$, and $R_V = 3.2$. ({\it Bottom}) Same but
for GALEX~J2349$+$3844 and assuming $E(B-V) = 0.0, 0.13, 0.2$.
\label{fig_sed_gal2349}}
\end{figure}
Taking into account the effect of interstellar reddening, the entire spectral energy distribution
of both systems
is dominated by the sdB stars.
\subsection{Line profile analysis}
We measured the sdB effective temperature ($T_{\rm eff}$), surface gravity
($\log{g}$), and helium abundance ($\log{(n_{\rm He}/n_{\rm H})}$) by
fitting the observed line profiles with our grid of model spectra.
We employed $\chi^2$ minimisation techniques to find the best-fit parameters
and draw contours at 66, 90, and 99\% significance.
The model spectra were convolved with Gaussian profiles
with a $FWHM = 5.5$ \AA\
for the analysis of the KPNO spectra, while we adopted a $FWHM= 0.66$ \AA\ that includes
the effect of orbital smearing for the Ond\v{r}ejov coud{\'e} spectra.
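Schematically, each grid comparison degrades a model to the instrumental resolution and accumulates $\chi^2$; the toy one-parameter example below (a single Gaussian line depth standing in for the $(T_{\rm eff}, \log{g}, n_{\rm He}/n_{\rm H})$ grid) illustrates the mechanics only.

```python
import numpy as np

def broaden(flux, dlam, fwhm):
    """Convolve a model spectrum with a Gaussian instrumental profile."""
    sigma = fwhm / 2.3548 / dlam          # FWHM -> sigma, in pixels
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(flux, kernel, mode="same")

# Toy 1-parameter grid search: recover the depth of a synthetic line.
lam = np.arange(6550.0, 6576.0, 0.25)
def model(depth):
    return broaden(1.0 - depth * np.exp(-0.5 * ((lam - 6562.8) / 1.5) ** 2),
                   0.25, 5.5)

obs = model(0.40)                          # "observed" spectrum
grid = np.arange(0.20, 0.60, 0.01)
chi2 = [np.sum((obs - model(d)) ** 2) for d in grid]
best = grid[int(np.argmin(chi2))]
```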
First we analysed the KPNO spectrum of GALEX~J0321$+$4727 (Fig.~\ref{fig_spec}).
We included in the analysis the Balmer line spectrum from H$\alpha$ to H11 and
five blue He{\sc i} lines normally dominant in sdB stars. We repeated the
analysis with H$\alpha$ excluded and obtained the same atmospheric parameters
as the analysis that included H$\alpha$.
The mid-exposure time of the co-added KPNO spectrum is BJD~2455278.59501
corresponding to an orbital phase $\Phi=0.51\pm0.03$, i.e., close to the inferior
conjunction of the secondary star. This phase also corresponds to minimum
contamination to the sdB spectrum due to the reflection effect.
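The quoted phase follows directly from the orbital ephemeris of Table~\ref{tbl_bin}:

```python
# Orbital phase of the co-added KPNO exposure from the Table of binary
# parameters (GALEX J0321+4727; phase 0 = inferior conjunction of the sdB).
P, T0 = 0.26584, 45.582            # period (d) and epoch (BJD - 2455000)
t_mid = 278.59501                  # mid-exposure time (BJD - 2455000)
phase = ((t_mid - T0) / P) % 1.0   # ~0.52, consistent with 0.51 +/- 0.03
```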
\begin{figure}
\includegraphics[width=\columnwidth]{fig6.eps}
\caption{Model atmosphere analysis of the low-dispersion KPNO spectrum of
GALEX~J0321$+$4727. The {\it top} panels show the line profiles and best fit
models, and the {\it lower} panels show the $\chi^2$ contours drawn at 66, 90,
and 99\%.}
\label{fig_spec}
\end{figure}
Next, we analysed phase-resolved H$\alpha$ spectra of GALEX~J0321$+$4727 to
take into account the light contamination due to the reflection effect on the
temperature measurements. This effect was notable in the analysis of
similar systems such as HS2333+3927 \citep{heb2004} and 2M~1533+3759
\citep{for2010}. Variations of $\approx 6000$ K were observed in HS2333+3927,
while weaker variations of $\approx 1000$ K were observed in 2M~1533+3759. To
investigate this effect in GALEX~J0321$+$4727 we built three spectra inclusive
of phases $0.0-1.0$ (average), $0.35-0.65$ (minimum reflection effect), and
$0.85-0.15$ (maximum reflection effect).
Figure~\ref{fig_fit} shows our analysis of the
H$\alpha$ and He I$\lambda 6678$ \AA\ line profiles in the co-added ($0.0-1.0$) coud{\'e}
spectra of GALEX~J0321$+$4727 and GALEX~J2349$+$3844.
\begin{figure}
\includegraphics[width=\columnwidth]{fig7.eps}
\caption{Co-added coud{\'e} spectra of GALEX~J0321$+$4727 ({\it top}) and
GALEX~J2349$+$3844 ({\it bottom}) showing the H$\alpha$ ({\it left}) and HeI
$\lambda 6678$ \AA\ ({\it right}) lines and the best-fit models.
\label{fig_fit}}
\end{figure}
Table~\ref{tbl_sdb} summarises our measurements of the hot subdwarf atmospheric
parameters. Our measurements show that variability in GALEX~J0321$+$4727 is
affecting the temperature and abundance measurements but that the surface
gravity does not vary significantly. In our limited phase resolution, the
peak-to-peak temperature variation reaches $\approx 4000$ K. The Ond\v{r}ejov
coud{\'e} and KPNO temperature measurements at phase 0.5 agree but the surface
gravity measurements differ by 0.5 dex. This is most likely due to a systematic
effect in the model spectra themselves with the H$\alpha$ line profile analysis
overestimating the surface gravity relative to an analysis involving the
complete series. The shape and strength of the upper Balmer
lines are very sensitive to surface gravity and offer a more reliable surface
gravity diagnostic. In summary, we estimate the
parameters representative of the sdB GALEX~J0321$+$4727 at phase 0.5. The
effective temperature and helium abundance were estimated by taking
the weighted average of the results from the H$\alpha$-H11 and H$\alpha$
(0.35 - 0.65) fits and for the surface gravity we adopted the result from the
H$\alpha$-H11 analysis:
\begin{displaymath}
T_{\rm eff}=29200\pm300,\ \log{g}=5.5\pm0.1,\
\end{displaymath}
and
\begin{displaymath}
\log{n_{\rm He}/n_{\rm H}}=-2.6\pm0.1.
\end{displaymath}
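The adopted temperature is the inverse-variance weighted mean of the two phase-0.5 estimates in Table~\ref{tbl_sdb}; as a sketch:

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal error."""
    w = [1.0 / e ** 2 for e in errors]
    mean = sum(v * wi for v, wi in zip(values, w)) / sum(w)
    return mean, sum(w) ** -0.5

# H-alpha (0.35-0.65) and H-alpha-H11 temperatures for GALEX J0321+4727:
teff, teff_err = weighted_mean([29550.0, 29100.0], [650.0, 350.0])
```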
Applying a correction of $-0.5\pm0.2$ to the surface gravity measurement of GALEX~J2349$+$3844
that is based on H$\alpha$ alone we conservatively estimate the sdB parameters:
\begin{displaymath}
T_{\rm eff}=28400\pm400,\ \log{g}=5.4\pm0.3,\
\end{displaymath}
and
\begin{displaymath}
\log{n_{\rm He}/n_{\rm H}}=-3.2\pm0.1.
\end{displaymath}
Our analysis assumes a hydrogen/helium composition and the inclusion of heavy
elements in the model atmospheres is likely to affect our results
\citep[see][]{heb2000,ede2003}. A self consistent analysis including abundance
measurements \citep[e.g.,][]{oto2006,ahm2007} awaits high signal-to-noise and
resolution ultraviolet and optical spectroscopy.
\begin{table}
\centering
\begin{minipage}{\columnwidth}
\caption{Atmospheric parameters of the sdB stars. \label{tbl_sdb}}
\begin{tabular}{ccccc}
\hline
Range & Phase & $T_{\rm eff}$ & $\log{g}$ & $\log{n_{\rm He}/n_{\rm H}}$ \\
& & (K) & (cgs) & \\
\hline
\multicolumn{5}{c}{J0321$+$4727} \\
\hline
H$\alpha$ & 0.85-0.15 & 33750$\pm$350 & 5.88$\pm$0.07 & $-$2.10$\pm$0.10\\
H$\alpha$ & 0.00-1.00 & 32100$\pm$250 & 5.95$\pm$0.05 & $-$2.32$\pm$0.07\\
H$\alpha$ & 0.35-0.65 & 29550$\pm$650 & 5.98$\pm$0.09 & $-$2.68$\pm$0.20\\
H$\alpha$-H11 & 0.50 & 29100$\pm$350 & 5.47$\pm$0.08 & $-2.52_{-0.31}^{+0.22}$\\
\hline
\multicolumn{5}{c}{J2349$+$3844} \\
\hline
H$\alpha$ & 0.00-1.00 & 28400$\pm$400 & 5.84$\pm$0.06 & $-$3.23$\pm$0.09 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\subsection{Nature of the companions}
Using the mass function (Table~\ref{tbl_bin})
and assuming a mass of $0.5\ M_\odot$ for the hot subdwarf GALEX~J0321$+$4727,
we calculate a minimum secondary mass of $0.13\ M_\odot$ that
corresponds to a spectral type of M5 \citep{kir1994}. Assuming an absolute $J$
magnitude of 3.9 for the hot subdwarf, an M5 star with
$M_J = 8.8$ would be outshone by the hot subdwarf.
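The minimum companion masses follow from inverting the mass function; a sketch with the Table~\ref{tbl_bin} values (circular orbit, $\sin{i} = 1$):

```python
import math
from scipy.optimize import brentq

G = 6.674e-11          # gravitational constant (SI)
MSUN = 1.989e30        # solar mass (kg)
DAY = 86400.0          # seconds per day

def mass_function(P_days, K_kms):
    """f(M) = P K^3 / (2 pi G) in solar masses, for a circular orbit."""
    return (P_days * DAY) * (K_kms * 1e3) ** 3 / (2.0 * math.pi * G) / MSUN

def min_secondary(fm, m1):
    """Solve f(M) = (M2 sin i)^3 / (M1 + M2)^2 for M2 at sin i = 1."""
    return brentq(lambda m2: m2 ** 3 / (m1 + m2) ** 2 - fm, 1e-4, 10.0)

fm_0321 = mass_function(0.26584, 59.8)     # GALEX J0321+4727
fm_2349 = mass_function(0.46249, 87.9)     # GALEX J2349+3844
m2_0321 = min_secondary(fm_0321, 0.5)      # assuming a 0.5 Msun sdB
m2_2349 = min_secondary(fm_2349, 0.5)
```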
The NSVS lightcurve
shows GALEX~J0321$+$4727 to be variable and
the search for a period in the photometric data
resulted in a best-fit period consistent with the orbital period. The
observed variations are most probably caused by irradiation of the atmosphere of the cool
companion by the hot subdwarf.
Using the observed semi-amplitude of
0.061 mag we may constrain the binary parameters further. We
estimated the system inclination and secondary mass using the reflection model of \citet{max2002} and
assuming two different masses for
the sdB star. For
a sdB mass of $0.4\ M_\odot$, the inclination is predicted to be between
63$^\circ$ and 71$^\circ$ and the secondary mass between $0.124\ M_\odot$ and
$0.133\ M_\odot$. For a sdB mass of $0.5\ M_\odot$, we obtain an
inclination ranging from 65$^\circ$ to 70$^\circ$, and a secondary mass between
$0.143\ M_\odot$ and $0.149\ M_\odot$.
Again, using the mass function and assuming a mass of $0.5\ M_\odot$ for the
hot subdwarf GALEX~J2349$+$3844, the minimum secondary mass is $0.27\ M_\odot$
that corresponds to a spectral type of M4 \citep{kir1994}. Assuming an absolute
$J$ magnitude of 5.6 for the hot subdwarf, an M4 star with
$M_J = 8.6$ would also be outshone by the hot subdwarf.
However, the NSVS time series do not show variations down to a limit of
$\Delta m = 0.009$. Illumination of a $0.3\ M_\odot$ star, which is the
suggested mass at a high inclination, would cause a variation of
$\Delta m \sim 0.4$ magnitudes. Lower inclinations would require larger
companions causing even larger variations that are incompatible with the
observations. The lack of variability suggests that the companion is most
likely a white dwarf \citep[see][]{max2004}.
\section{Summary and Conclusions}
We show that GALEX~J0321$+$4727 and GALEX~J2349$+$3844 are hot hydrogen-rich
subdwarfs in close binaries. Based on a preliminary analysis of periodic light
variations in GALEX~J0321$+$4727 we infer that its companion is a low-mass star
($M\sim0.13\,M_\odot$). The secondary star in GALEX~J2349$+$3844 is probably a
white dwarf with $M\ga 0.3\, M_\odot$. The two new systems are post-CE systems
with a hot subdwarf primary. Their orbital periods locate them close to the
peak of the period distribution for such systems \citep[see][]{heb2009}. A
future study of GALEX~J0321$+$4727 will involve phase-resolved high
signal-to-noise ratio spectroscopic and photometric observations aimed at
resolving the nature of the companion. Searches for close binaries in the sdB
population have a relatively high yield \citep[69\%, see][]{max2001},
and, therefore, we expect that many new systems remain to be discovered in our
GALEX/GSC catalogue of EHB stars.
\section{Acknowledgements}
S.V. and A.K. are supported by GA AV grant numbers IAA300030908 and IAA301630901, respectively, and by GA \v{C}R grant number P209/10/0967.
A.K. also acknowledges support from the Centre for Theoretical Astrophysics (LC06014).
Some of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts.
\section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn
\else \newpage \fi \thispagestyle{empty}\c@page\z@
\def\arabic{footnote}{\fnsymbol{footnote}} }
\def\endtitlepage{\if@restonecol\twocolumn \else \fi
\def\arabic{footnote}{\arabic{footnote}}
\setcounter{footnote}{0}}
\relax
\hybrid
\parskip=0.4em
\makeatletter
\newdimen\normalarrayskip
\newdimen\minarrayskip
\normalarrayskip\baselineskip \minarrayskip\jot
\newif\ifold \oldtrue \def\oldfalse{\oldfalse}
\def\arraymode{\ifold\relax\else\displaystyle\fi
\def\eqnumphantom{\phantom{(\theequation)}}}
\def\@arrayskip{\ifold\baselineskip\z@\lineskip\z@
\else
\baselineskip\minarrayskip\lineskip1\baselineskip\fi}
\def\@arrayclassz{\ifcase \@lastchclass \@acolampacol \or
\@ampacol \or \or \or \@addamp \or
\@acolampacol \or \@firstampfalse \@acol \fi
\edef\@preamble{\@preamble
\ifcase \@chnum
\hfil$\relax\arraymode\@sharp$\hfil
\or $\relax\arraymode\@sharp$\hfil
\or \hfil$\relax\arraymode\@sharp$\fi}}
\def\@array[#1]#2{\setbox\@arstrutbox=\hbox{\vrule
height\arraystretch \ht\strutbox
depth\arraystretch \dp\strutbox
width\z@}\@mkpream{#2}\edef\@preamble{\halign \noexpand\@halignto
\bgroup \tabskip\z@ \@arstrut \@preamble \tabskip\z@ \cr}%
\let\@startpbox\@@startpbox \let\@endpbox\@@endpbox
\if #1t\vtop \else \if#1b\vbox \else \vcenter \fi\fi
\bgroup \let\par\relax
\let\@sharp##\let\protect\relax
\@arrayskip\@preamble}
\def\eqnarray{\stepcounter{equation}%
\let\@currentlabel=\theequation
\global\@eqnswtrue
\global\@eqcnt\z@
\tabskip\@centering
\let\\=\@eqncr
$$%
\halign to \displaywidth \bgroup
\eqnumphantom \@eqnsel
\hskip\@centering
$\displaystyle \tabskip\z@ {##}$%
&\global\@eqcnt\@ne \hskip 2\arraycolsep
$ \displaystyle \arraymode{##}$\hfil
&\global\@eqcnt\tw@ \hskip 2\arraycolsep
$\displaystyle\tabskip\z@{##}$\hfil
\tabskip\@centering
&{##}\tabskip\z@\cr}
\makeatother
\def\<{\langle}
\def\>{\rangle}
\newtheorem{te}{Theorem}[section]
\newtheorem{de}{Definition}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{cor}{Corollary}[section]
\newtheorem{lem}{Lemma}[section]
\newtheorem{ex}{Example}[section]
\newtheorem{rem}{Remark}[section]
\newtheorem{conj}{Conjecture}[section]
\newtheorem{prob}{Problem}[section]
\newtheorem{quest}{Question}[section]
\newcommand\bqa{\begin{eqnarray}}
\newcommand\eqa{\end{eqnarray}}
\def\bse{\begin{subequations}}
\def\square{\hfill{\vrule height6pt width6pt
depth1pt} \break \vspace{.01cm}}
\newcommand\cod{\operatorname{codim}}
\newcommand\im{\operatorname{im}}
\newcommand\Inv{\operatorname{Inv}}
\newcommand\id{\operatorname{id}}
\newcommand\coim{\operatorname{coim}}
\newcommand\rk{\operatorname{rank}}
\newcommand\ann{\operatorname{ann}}
\newcounter{pac}[section]
\newcommand{\npa}{\addtocounter{pac}{1} \noindent {\bf
\arabic{section}.\arabic{pac}}\,\,\,}
\newcounter{pacc}[subsection]
\newcommand{\npaa}{\addtocounter{pacc}{1} \noindent {\bf
\arabic{section}.\arabic{subsection}.\arabic{pacc}}\,\,\,}
\setcounter{pac}{0}
\setcounter{footnote}0
\begin{document}
\title{\bf On quantum $\mathfrak{osp}(1|2\ell)$-Toda chain
\footnote{Talk given by the second author at the ``Polivanov-90''
conference,
16--17 December 2020, Steklov Mathematical Institute of the Russian Academy of Sciences.}}
\author{A.A. Gerasimov, D.R. Lebedev and S.V. Oblezin}
\date{}
\maketitle
\renewcommand{\abstractname}{}
\begin{abstract}
\noindent {\bf Abstract}. The orthosymplectic superalgebra
$\mathfrak{osp}(1|\,2\ell)$ is the closest analog of standard Lie
algebras in the world of super Lie algebras. We demonstrate that the
corresponding $\mathfrak{osp}(1|\,2\ell)$-Toda chain turns out to be an
instance of a $BC_\ell$-Toda chain. The underlying reason for this
relation is discussed.
\end{abstract}
\vspace{5 mm}
\section{Introduction}
Representation theory is an essential tool in finding explicit
solutions of known quantum integrable systems as well as in
constructing new ones. An important class of finite-dimensional
quantum integrable systems allowing a representation theory
interpretation is provided by Toda chains. It is known that
integrable Toda chains are classified by a class of root systems
that include root systems of finite dimensional Lie algebras as well
as their affine counterparts. For the Toda chains associated with
the root systems of finite dimensional Lie algebras the
corresponding integrable systems can be solved explicitly by
representation theory methods \cite{K1}, \cite{GW} (see \cite{STS}
for a review). Precisely, the eigenfunctions of the quantum
Hamiltonians are given by special matrix elements of principal
series representations of the totally split real form of the
corresponding Lie group. The resulting functions should be
considered as generalized Whittaker functions associated with the
corresponding finite-dimensional Lie algebras \cite{K1}, \cite{Ha}.
These functions allow quite explicit integral representations (see
e.g. \cite{GLO1}).
The class of integrable Toda chains is, however, somewhat larger than
the class of finite/affine Lie algebras and includes in particular
the non-reduced root systems $BC_\ell$, combining the $B_\ell$ and
$C_\ell$ root systems. The corresponding $BC_\ell$-Toda system is an
important element of the web of Toda-type theories connected by
various intertwining relations \cite{GLO2}. Although the $BC_\ell$
root system fits naturally into the classification of root systems of
finite Lie algebras, the problem of constructing a Lie algebra type
object corresponding to the non-reduced $BC_\ell$ root systems has
not yet received a satisfactory resolution. One should recall,
however, that $BC_\ell$ root systems appear in the Cartan
classification of symmetric spaces \cite{H}, \cite{L}. Still, the
$BC_\ell$-Toda chain can be solved via representation theory methods
using a generalization of the $C_\ell$-Whittaker functions (see e.g.
\cite{J} for $\ell=1$ and a remark in \cite{RS} relevant to the
classical $BC_\ell$ Toda system). This, unfortunately, does not
elucidate the question of the interpretation of the $BC_\ell$-Toda
eigenfunctions as standard Whittaker functions for some group-like
object. One should add that the integrability of the quantum
$BC_\ell$-Toda chain for generic coupling constants was proven
independently in \cite{S} using Yangian representation theory (also
known as the quantum inverse scattering method). However, this also
does not clarify the question of the existence of a group-like
structure behind the $BC_\ell$ root systems.
In this note we consider quantum Toda chains associated with the
super Lie algebras $\mathfrak{osp}(1|2\ell)$. This series of super Lie
algebras occupies a special place in the world of super Lie algebras.
In particular, it is the only instance of simple super Lie algebras
for which the corresponding category of finite-dimensional
representations is semisimple, and thus it allows direct analogs of
the standard constructions of representation theory of semisimple Lie
algebras \cite{Kac1}. In this connection one should mention that
$\mathfrak{osp}(1|2\ell)$ is the unique super Lie algebra
with finitely generated center of its universal enveloping algebra.
These special properties of $\mathfrak{osp}(1|2\ell)$ make it natural to consider
the associated quantum integrable systems.
In this note we demonstrate that the $\mathfrak{osp}(1|2\ell)$-Toda chain
may also be considered as a Toda chain associated with the $BC_\ell$
root system. This allows us to solve the $BC_\ell$-Toda chain
by standard representation theory methods, i.e.\ by identifying
the corresponding eigenfunctions with $\mathfrak{osp}(1|2\ell)$-Whittaker functions.
The underlying reason for the appearance of the $BC_\ell$
root structure in the $\mathfrak{osp}(1|2\ell)$-Toda chain becomes clear by
comparing the $BC_\ell$ root data with those of the super Lie algebra
$\mathfrak{osp}(1|2\ell)$. Actually, the only difference is the opposite parity
of the eigenspaces of the maximal commutative subalgebra in the
Cartan decomposition corresponding to the short roots
of the non-reduced $BC_\ell$ root system. This difference, however, does not
affect the expressions for the quantum Hamiltonians of the
corresponding Toda chain.
The exposition of the paper goes as follows. In Section 2 we provide
basic facts on the structure of the orthosymplectic super Lie algebra
$\mathfrak{osp}(1|2\ell)$. In Section 3 we construct the $\mathfrak{osp}(1|2\ell)$-Whittaker
functions associated with representations of the
super Lie algebra $\mathfrak{osp}(1|2\ell)$ and demonstrate that these functions are
eigenfunctions of the quadratic quantum Hamiltonian of the $BC_\ell$-Toda
chain for special values of the coupling constants. Finally,
in Section 4 we discuss the structure of the root system of
$\mathfrak{osp}(1|2\ell)$ versus the $BC_\ell$ root system and provide an
explanation of the apparent identification of the
quadratic Hamiltonian of the $\mathfrak{osp}(1|2\ell)$-Toda chain with
that of the $BC_\ell$-Toda chain.
{\it Acknowledgments:} The research of the second (D.R.L.) and
third (S.V.O.) authors was supported by RSF grant 16-11-10075.
\section{Basic facts on the super Lie algebra $\mathfrak{osp}(1|2\ell)$}
We start with the basic definition of a Lie superalgebra structure,
and then we describe the algebra $\mathfrak{osp}(1|2\ell)$ explicitly and in
detail. This is standard material that can be found in standard
sources on superalgebras, e.g. \cite{Kac1}, \cite{Kac2}.
The notion of a Lie superalgebra is a direct generalization of the
notion of a Lie algebra to the category of vector superspaces. A vector
superspace $V=V_{\bar{0}}\oplus V_{\bar{1}}$ is a $\mathbb{Z}_2$-graded vector space
with the parity $p$ taking values $0$ and $1$ on $V_{\bar{0}}$ and
$V_{\bar{1}}$, respectively. The tensor product structure is given by
twisting the standard tensor product structure in the category of
vector spaces, \begin{eqnarray}\new\begin{array}{cc} v\otimes w=(-1)^{p(v)\cdot p(w)}\,w\otimes
v,\qquad v\in V, \quad w\in W, \end{array}\end{eqnarray} where $v$ and $w$ are homogeneous
elements with respect to the $\mathbb{Z}_2$-grading.
\begin{de} The structure of super Lie algebra on super vector space
$\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ is given by
a bilinear operation $[\cdot,\,\cdot]$, called the bracket, so
that for any homogeneous elements $X,\,Y,\,Z\in \mathfrak{g}$ the following hold:
\begin{eqnarray}\new\begin{array}{cc}
p\bigl([X,\,Y]\bigr)\,=\,p(X)\,+\,p(Y)\,,
\end{array}\end{eqnarray}
\begin{eqnarray}\new\begin{array}{cc}
\,[X,\,Y]\,=\,-(-1)^{p(X)\cdot p(Y)}[Y,\,X]\,,
\end{array}\end{eqnarray}
\begin{eqnarray}\new\begin{array}{cc}
\,[X,\,[Y,\,Z]](-1)^{p(X)\cdot p(Z)}\,
+\,[Z,\,[X,\,Y]](-1)^{p(Y)\cdot p(Z)}\\
+\,[Y,\,[Z,\,X]](-1)^{p(X)\cdot p(Y)}\,=\,0\,.
\end{array}\end{eqnarray}
\end{de}
We will be interested in a special instance of super Lie algebras,
the ortho-symplectic super Lie algebra $\mathfrak{osp}(1|2\ell)$. To define
this algebra let us first introduce the super Lie algebra
$\mathfrak{gl}(1|2\ell)$.
\begin{de}\label{GL1} The super Lie algebra $\mathfrak{gl}(1|\,2\ell)$ is generated by
\begin{eqnarray}\new\begin{array}{cc}
E_{0,\,i}\,,\quad E_{i,\,0}\,,\quad
p(E_{0,\,i})\,=\,p(E_{i,\,0})\,=\,1\,,\qquad1\leq i\leq2\ell\,,\\
\text{and}\qquad E_{kl}\,,\qquad p(E_{kl})\,=\,0\,,\qquad1\leq
k,\,l\leq2\ell\,,
\end{array}\end{eqnarray}
subject to the following relations:
\begin{eqnarray}\new\begin{array}{cc}\label{glbracket}
\bigl[E_{ij},\,E_{kl}\bigr]\,
=\,\delta_{jk}E_{il}\,-(-1)^{p(E_{ij})\,p(E_{kl})}\,\delta_{il}E_{kj}\,,\qquad
0\leq i,\,j\leq2\ell\,,\quad
0\leq k,\,l\leq2\ell\,.
\end{de}
The super Lie algebra $\mathfrak{gl}(1|2\ell)$ may be identified with the super
Lie algebra structure $\bigl({\rm End}(V),\,[\cdot,\,\cdot]\bigr)$
on the space ${\rm End}(V)$ of endomorphisms of the superspace
\begin{eqnarray}\new\begin{array}{cc}
V\,=\,\mathbb{R}^{1|2\ell}\,=\,V_{\bar{0}}\,\oplus\,V_{\bar{1}}\,,\qquad
V_{\bar{0}}\,=\,\mathbb{R}^{0|2\ell}\,,\qquad V_{\bar{1}}\,=\,\mathbb{R}^{1|0}\,,
\end{array}\end{eqnarray}
in the following way.
Any zero parity linear endomorphism $A\in{\rm End}(V)$ is given by the matrix of the
following shape:
\begin{eqnarray}\new\begin{array}{cc}\label{shape}
A\, =\,\left(\begin{array}{cc}
A_{11} & A_{12}\\A_{21} & A_{22}
\end{array}\right)\,,\quad
\begin{array}{cc}
A_{11}:\,V_{\bar{1}}\,\longrightarrow\,V_{\bar{1}}\,, &
A_{12}:\,V_{\bar{0}}\,\longrightarrow\,V_{\bar{1}}\,,\\
A_{21}:\,V_{\bar{1}}\,\longrightarrow\,V_{\bar{0}}\,, &
A_{22}:\,V_{\bar{0}}\,\longrightarrow\,V_{\bar{0}}\,,
\end{array}
\end{array}\end{eqnarray}
where entries of blocks $A_{11},\,A_{22}$ are even while the
entries of $A_{12}\,,A_{21}$ are odd so that
\begin{eqnarray}\new\begin{array}{cc}
{\rm End}(V)_{\bar{0}}\,=\,\Big\{\Big(\begin{smallmatrix}
A_{11}&0\\ 0&A_{22}\end{smallmatrix}\Big)\Big\}\,,\qquad
{\rm End}(V)_{\bar{1}}\,=\,\Big\{\Big(\begin{smallmatrix}
0&A_{12}\\ A_{21}&0\end{smallmatrix}\Big)\Big\}\,.
\end{array}\end{eqnarray}
The super brackets on ${\rm End}(V)$ are defined on homogeneous elements $X,\,Y\in{\rm End}(V)$
as follows
\begin{eqnarray}\new\begin{array}{cc}\label{bracket}
[X,\,Y]\,=\,X\circ Y\,-\,(-1)^{p(X)\cdot p(Y)}Y\circ X\,.
\end{array}\end{eqnarray}
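As an illustration (ours, not part of the original text), the super bracket \eqref{bracket} and the graded Jacobi identity from the Definition can be checked numerically for ${\rm End}(\mathbb{R}^{1|2})$, modelling homogeneous elements by plain real matrices with the block pattern described above; all helper names below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 1, 2  # dim V_1bar = 1 (index 0) and dim V_0bar = 2, as in the text

def random_homogeneous(parity):
    """Random homogeneous element of End(R^{1|2}); in this toy model an
    even element is block-diagonal and an odd one is block-off-diagonal."""
    A = np.zeros((M + N, M + N))
    if parity == 0:
        A[:M, :M] = rng.normal(size=(M, M))
        A[M:, M:] = rng.normal(size=(N, N))
    else:
        A[:M, M:] = rng.normal(size=(M, N))
        A[M:, :M] = rng.normal(size=(N, M))
    return A

def sbr(X, pX, Y, pY):
    """Super bracket [X, Y] = XY - (-1)^{p(X)p(Y)} YX."""
    return X @ Y - (-1) ** (pX * pY) * Y @ X

def jacobi_defect(X, pX, Y, pY, Z, pZ):
    """Left-hand side of the graded Jacobi identity; should vanish."""
    t1 = (-1) ** (pX * pZ) * sbr(X, pX, sbr(Y, pY, Z, pZ), (pY + pZ) % 2)
    t2 = (-1) ** (pY * pZ) * sbr(Z, pZ, sbr(X, pX, Y, pY), (pX + pY) % 2)
    t3 = (-1) ** (pX * pY) * sbr(Y, pY, sbr(Z, pZ, X, pX), (pZ + pX) % 2)
    return t1 + t2 + t3
```

In this model the bracket of two odd elements is block-diagonal (even), and the Jacobi defect vanishes identically, since the graded commutator of any $\mathbb{Z}_2$-graded associative algebra satisfies the graded Jacobi identity.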
The description of $\mathfrak{gl}(1|2\ell)$ given in Definition
\ref{GL1} is then obtained by fixing a basis in $V$
\{\varepsilon_0,\,\varepsilon_1,\,\ldots,\,\varepsilon_{2\ell}\}\subset\mathbb{R}^{1|2\ell}\,,\qquad
p(\varepsilon_0)=1\,,\quad p(\varepsilon_k)=0,\quad1\leq k\leq2\ell\,.
\end{array}\end{eqnarray}
The generators $E_{ij}$ are identified with the elementary matrices
in ${\rm End}(V)$ with the only non-zero entry in the $i$-th row
and the $j$-th column.
\begin{de} The super transposition of a matrix $A\in{\rm End}(V)$ is
defined by
\begin{eqnarray}\new\begin{array}{cc}\label{sTransp}
A^{\top}\,
=\,\left(\begin{array}{cc}
A_{11} & A_{12}\\A_{21} & A_{22}
\end{array}\right)^{\top}\,
=\,\left(\begin{array}{cc}
A_{11}^t & -A_{21}^t\\A_{12}^t & A_{22}^t
\end{array}\right)\,,
\end{array}\end{eqnarray}
where $X^t$ is the standard transposition of a matrix $X$.
\end{de}
\begin{lem} Super transposition \eqref{sTransp} possesses the
following properties:
\begin{eqnarray}\new\begin{array}{cc}
(A\,v)^t\,=\,v^tA^{\top}\,,\qquad A\in{\rm End}(V)\,,\quad v\in V\,,
\end{array}\end{eqnarray}
\begin{eqnarray}\new\begin{array}{cc}
(A\cdot B)^{\top}\,=\,B^{\top}\cdot A^{\top}\,,
\end{array}\end{eqnarray}
\begin{eqnarray}\new\begin{array}{cc}
(A^{\top})^{\top}\,=\Pi\,A\,\Pi^{-1}\,,
\end{array}\end{eqnarray}
where $\Pi$ is the parity operator with the matrix
$\Big(\begin{smallmatrix}-1&0\\ 0&{\rm Id}_{2\ell}\end{smallmatrix}\Big)$.
\end{lem}
\noindent {\it Proof} : Given $v\in V$ let us write it down in the basis
$\{\varepsilon_0,\,\varepsilon_1,\ldots,\varepsilon_{2\ell}\}$:
\begin{eqnarray}\new\begin{array}{cc}
v\,=\,\xi\varepsilon_0\,+\,\sum_{i=1}^{2\ell}v_i\varepsilon_i\,,
\end{array}\end{eqnarray}
with odd Grassmann coordinate $\xi$, and even coordinates $v_i$.
Then we have
\begin{eqnarray}\new\begin{array}{cc}
(A\,v)^t_i\,=\,a_{i0}\xi\,+\,a_{i1}v_1\,+\ldots+\,a_{i,\,2\ell}v_{2\ell}\,,
\end{array}\end{eqnarray}
and on the other hand,
\begin{eqnarray}\new\begin{array}{cc}
(v^tA^{\top})_0\,
=\,\xi a_{00}\,+\,v_1a_{0,1}\,+\ldots+\,v_{2\ell}a_{0,2\ell}\,,\\
(v^tA^{\top})_k\,=\,-\xi
a_{k0}\,+\,v_1a_{k,1}\,+\ldots+\,v_{2\ell}a_{k,2\ell}\,,\quad1\leq k\leq2\ell\,.
\end{array}\end{eqnarray}
Taking into account that
\begin{eqnarray}\new\begin{array}{cc}
\xi a_{00}\,=\,a_{00}\xi\,,\qquad-\xi
a_{k,0}\,=\,a_{k,0}\xi\,,\quad1\leq k\leq2\ell\,,
\end{array}\end{eqnarray}
we deduce the first assertion. The second assertion can be verified
by straightforward computation. The third assertion follows from the
definition: on the one hand, we have
\begin{eqnarray}\new\begin{array}{cc}
(A^{\top})^{\top}\,
=\,\left(\begin{array}{cc}
A_{11}^t & -A_{21}^t\\A_{12}^t & A_{22}^t
\end{array}\right)^{\top}\,
=\,\left(\begin{array}{cc}
A_{11} & -A_{12}\\-A_{21} & A_{22}
\end{array}\right)\,;
\end{array}\end{eqnarray}
on the other hand, in the standard basis \eqref{basis} the matrix of
parity operator reads
\begin{eqnarray}\new\begin{array}{cc}
\Pi\,=\,\Big(\begin{smallmatrix}
-1 & 0\\ 0 & {\rm Id}_{2\ell}
\end{smallmatrix}\Big)\,,
\end{array}\end{eqnarray}
so the assertion easily follows. $\Box$
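A quick numerical sketch (our own, not from the text): with plain real entries the purely matrix identity $(A^{\top})^{\top}=\Pi A\Pi^{-1}$ can be verified directly. The first two properties of the lemma involve signs of Grassmann-odd entries and are not captured by this naive numeric model.

```python
import numpy as np

M, N = 1, 4  # End(R^{1|4}), i.e. l = 2 in the conventions of the text

def stranspose(A):
    """Super transposition: [[A11, A12], [A21, A22]] ->
    [[A11^t, -A21^t], [A12^t, A22^t]]."""
    B = np.zeros_like(A)
    B[:M, :M] = A[:M, :M].T
    B[:M, M:] = -A[M:, :M].T
    B[M:, :M] = A[:M, M:].T
    B[M:, M:] = A[M:, M:].T
    return B

Pi = np.diag([-1.0] + [1.0] * N)  # parity operator
```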
Now $\mathfrak{osp}(1|2\ell)$ may be defined as a subalgebra of the general linear
superalgebra $\mathfrak{gl}(1|2\ell)$. Introduce the following involution:
\begin{eqnarray}\new\begin{array}{cc}
\theta\,:\quad\mathfrak{gl}(1|2\ell)\,\longrightarrow\,\mathfrak{gl}(1|2\ell)\,,\qquad
X\,\longmapsto\,X^{\theta}\,:=\,-JX^{\top}J^{-1}\,,
\end{array}\end{eqnarray}
where
\begin{eqnarray}\new\begin{array}{cc}
J\,
=\,\Big(\begin{smallmatrix}
1 & 0 & 0\\ 0 & 0 & -{\rm Id}_{\ell}\\
0 & {\rm Id}_{\ell} & 0
\end{smallmatrix}\Big)\,\in\,{\rm End}(V)_{\bar{0}}\,.
\end{array}\end{eqnarray}
\begin{de} The orthosymplectic super Lie algebra $\mathfrak{osp}(1|2\ell)$ is defined as
the $\theta$-invariant subalgebra of $\mathfrak{gl}(1|2\ell)$:
\begin{eqnarray}\new\begin{array}{cc}\label{ospNrep}
\mathfrak{osp}(1|\,2\ell)\,
=\Big\{X\,\in\,\mathfrak{gl}(1|2\ell)\,:\quad X^{\theta}\,=\,X\Big\}\\
=\,\Big\{X\,
=\,\Big(\begin{smallmatrix}
0 & x & y\\
y^t & A & B\\
-x^t & C & -A^t
\end{smallmatrix}\Big)\,:\quad
B^t\,=\,B\,,\quad
C^t\,=\,C\Big\}\,\subset\,\mathfrak{gl}(1|\,2\ell)\,.
\end{array}\end{eqnarray}
\end{de}
According to the classification of simple super Lie algebras
\cite{Kac1}, one associates the root system $B_{0,\ell}$ with the super
Lie algebra $\mathfrak{osp}(1|2\ell)$. Let
$\{\epsilon_1,\ldots,\,\epsilon_{\ell}\}\subset\mathbb{R}^{\ell}$ be an orthogonal
basis in $\mathbb{R}^{\ell}$ with respect to the scalar product $(\,,\,)$.
Then the simple root system ${}^s\Delta^+(B_{0,\ell})$ of type
$B_{0,\,\ell}$ consists of the even simple positive roots
${}^s\Delta^+_{\bar{0}}$ and the odd simple positive root
${}^s\Delta^+_{\bar{1}}$:
\begin{eqnarray}\new\begin{array}{cc}\label{OPSroot}
{}^s\Delta^+_{\bar{0}}(B_{0,\,\ell})\,=\,
\bigl\{{\alpha}_k\,=\,\epsilon_{\ell+1-k}-\epsilon_{\ell+2-k}\,,\quad1<k\leq\ell\bigr\}\,\,,\\
{}^s\Delta^+_{\bar{1}}(B_{0,\,\ell})\,=\,\bigl\{{\alpha}_1=\epsilon_{\ell}\bigr\}\,,
\end{array}\end{eqnarray}
indexed by $I=\{1,\,\ldots,\,\ell\}$. The simple co-roots
${\alpha}_i^{\vee},\,i\in I$ are defined in a standard way:
$$
{\alpha}_i^{\vee}\,:=\,\frac{2{\alpha}_i}{({\alpha}_i,{\alpha}_i)}\,,\quad i\in I\,.
$$
Note that the set $\Delta^+(B_{0,\ell})$ of positive roots contains the
sub-system of even positive roots of $C_\ell$ root system with the
corresponding set of simple roots:
\begin{eqnarray}\new\begin{array}{cc}
{}^s\Delta^+(C_{\ell})\,=\,\bigl\{2{\alpha}_1=2\epsilon_{\ell}\,,\quad
{\alpha}_k\,=\,\epsilon_{\ell+1-k}-\epsilon_{\ell+2-k}\,,\,\,1<k\leq\ell\bigr\}\,
\subset\,\Delta^+(B_{0,\ell})\,.
\end{array}\end{eqnarray}
The Cartan matrix $A=\|A_{ij}\|$ associated with the simple
root system \eqref{OPSroot} is defined by the standard
formula
\begin{eqnarray}\new\begin{array}{cc}
A_{ij}\,=\,\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}\,,\qquad i,j\in I\,.
\end{array}\end{eqnarray}
Thus the Cartan matrix of ${}^s\Delta^+(B_{0,\ell})$ coincides with
the standard $B_{\ell}$-type Cartan matrix
\begin{eqnarray}\new\begin{array}{cc}\label{OSPcar}
A\,
=\,\left(\begin{array}{c|cccc}
2&-2&0& \ldots &0\\
\hline
-1&2&-1&\ddots&\vdots\\
0&\ddots&\ddots&\ddots&0\\
\vdots&\ddots&-1&2&-1\\
0&\ldots&0&-1&2
\end{array}\right).
\end{array}\end{eqnarray}
The Cartan decomposition for $\mathfrak{osp}(1|2\ell)$ reads
\begin{eqnarray}\new\begin{array}{cc}\label{ospNCartan}
\mathfrak{osp}(1|2\ell)(\mathbb{C})\,=\,\bigoplus_{i\in I}\mathbb{C}
h_i\,\oplus\,\bigoplus_{{\alpha}\in\Delta^+_{\bar{0}}}\bigl(\mathbb{C}
X_{{\alpha}}\,\oplus\,\mathbb{C} X_{-{\alpha}}\bigr)\,\oplus\,\bigoplus_{\beta\in\Delta^+_{\bar{1}}}\bigl(\mathbb{C}
X_{\beta}\,\oplus\,\mathbb{C} X_{-\beta}\bigr)\,,\\
\Delta^+_{\bar{0}}\,=\,\bigl\{2\epsilon_i\,;\qquad\epsilon_i\pm\epsilon_j\,,\quad
i<j\,,\quad i,\,j\in I\bigr\}\,,\qquad
\Delta^+_{\bar{1}}\,=\,\bigl\{\epsilon_i\,,\quad i\in I\bigr\}\,.
\end{array}\end{eqnarray}
and the Cartan-Weyl relations are the following:
\begin{eqnarray}\new\begin{array}{cc}\label{osprel}
\bigl[X_{\epsilon_i},\,X_{\epsilon_j}\bigr]\,=\,(1+\delta_{ij})X_{\epsilon_i+\epsilon_j}\,,\qquad
\bigl[X_{-\epsilon_i},\,X_{-\epsilon_j}\bigr]\,=\,-(1+\delta_{ij})X_{-\epsilon_i-\epsilon_j}\,,\\
\bigl[X_{\epsilon_i},\,X_{-\epsilon_i}\bigr]\,=\,h_i\,,\qquad i\in I\,;\\
\bigl[X_{\epsilon_i-\epsilon_j},\,X_{\epsilon_j}\bigr]\,=\,X_{\epsilon_i}\,,\qquad
\bigl[X_{\epsilon_i-\epsilon_j},\,X_{-\epsilon_i}\bigr]\,=\,-X_{-\epsilon_j}\,,\\
\bigl[X_{\epsilon_i},\,X_{-\epsilon_i-\epsilon_j}\bigr]\,=\,X_{-\epsilon_j}\,,\qquad
\bigl[X_{-\epsilon_i},\,X_{\epsilon_i+\epsilon_j}\bigr]\,=\,X_{\epsilon_j}\,,\qquad i<j\,;\\
\bigl[X_{{\alpha}},\,X_{-{\alpha}}\bigr]\,=\,h_{{\alpha}^{\vee}}\,
=\,\sum_{i\in I}\<{\alpha}^{\vee},\,\epsilon_i\>h_i\,,\\
\bigl[h_{{\alpha}^{\vee}},\,X_{{\gamma}}\bigr]\,=\,{\alpha}^{\vee}({\gamma})X_{{\gamma}}\,,\qquad
{\alpha},\,{\gamma}\in\Delta^+\,.
\end{array}\end{eqnarray}
The Serre relations on $X_{{\alpha}_i},\,{\alpha}_i\in{}^s\Delta^+(B_{0,\ell})$
have the following form:
\begin{eqnarray}\new\begin{array}{cc}\label{Serre}
{\rm ad}_{X_{{\alpha}_1}}^2(X_{{\alpha}_1})\,=\,0\,,\qquad
{\rm ad}_{X_{-{\alpha}_1}}^2(X_{-{\alpha}_1})\,=\,0\,,\\
{\rm ad}_{X_{{\alpha}_i}}^{1-a_{ij}}(X_{{\alpha}_j})\,=\,0\,,\quad
{\rm ad}_{X_{-{\alpha}_i}}^{1-a_{ij}}(X_{-{\alpha}_j})\,=\,0\,,\qquad
i\neq j\,,\quad i,j\in I\,\,.
\end{array}\end{eqnarray}
The Cartan-Weyl generators $X_\alpha$ may be represented via matrix
embedding \eqref{ospNrep} of
$\mathfrak{osp}(1|2\ell)$ as follows:
\begin{eqnarray}\new\begin{array}{cc}\label{ospNgen1}
X_{\epsilon_i}\,=\,E_{i,\,0}\,+\,E_{0,\,\ell+i}\,,\qquad
X_{-\epsilon_i}\,=\,E_{0,\,i}\,-\,E_{\ell+i,\,0}\,;
\end{array}\end{eqnarray}
\begin{eqnarray}\new\begin{array}{cc}\label{ospNgen0}
X_{\epsilon_i-\epsilon_j}\,=\,E_{ij}-E_{\ell+j,\,\ell+i}\,,\\
X_{-\epsilon_i+\epsilon_j}\,=\,E_{ji}-E_{\ell+i,\,\ell+j}\,,\\
X_{\epsilon_i+\epsilon_j}\,=\,E_{i,\,\ell+j}+E_{j,\,\ell+i}\,,\quad
X_{-\epsilon_i-\epsilon_j}\,=\,E_{\ell+j,\,i}+E_{\ell+i,\,j}\,,\quad
i<j\,,\\
X_{2\epsilon_i}\,=\,E_{i,\,\ell+i}\,,\qquad
X_{-2\epsilon_i}\,=\,E_{\ell+i,\,i}\,,\qquad i\in I\,.
\end{array}\end{eqnarray}
The Cartan subalgebra $\mathfrak{h}\subset\mathfrak{osp}(1|\,2\ell)$ is spanned by
\begin{eqnarray}\new\begin{array}{cc}\label{ospNgen2}
h_i\,=\,E_{ii}\,-\,E_{\ell+i,\,\ell+i}\,,\qquad i\in I\,.
\end{array}\end{eqnarray}
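The matrix realization above can be checked mechanically. The sketch below is ours (the helper names and the choice $\ell=2$ are for illustration only); it builds the generators in NumPy and verifies a sample of the Cartan--Weyl relations \eqref{osprel} together with the $\theta$-invariance $X^{\theta}=X$ defining \eqref{ospNrep}.

```python
import numpy as np

l = 2
d = 1 + 2 * l  # indices 0..2l, as in the text

def E(i, j):
    """Elementary matrix E_{ij}."""
    A = np.zeros((d, d)); A[i, j] = 1.0; return A

# generators of eqs. (ospNgen1), (ospNgen2) and a sample of (ospNgen0)
y = {i: E(i, 0) + E(0, l + i) for i in (1, 2)}       # X_{eps_i}, odd
x = {i: E(0, i) - E(l + i, 0) for i in (1, 2)}       # X_{-eps_i}, odd
h = {i: E(i, i) - E(l + i, l + i) for i in (1, 2)}   # Cartan generators
a12 = E(1, 2) - E(l + 2, l + 1)                      # X_{eps_1 - eps_2}
b12 = E(1, l + 2) + E(2, l + 1)                      # X_{eps_1 + eps_2}

def sbr(X, pX, Y, pY):
    """Super bracket on End(V)."""
    return X @ Y - (-1) ** (pX * pY) * Y @ X

def stranspose(A, m=1):
    """Super transposition with a 1-dimensional odd block (index 0)."""
    B = np.zeros_like(A)
    B[:m, :m] = A[:m, :m].T; B[:m, m:] = -A[m:, :m].T
    B[m:, :m] = A[:m, m:].T; B[m:, m:] = A[m:, m:].T
    return B

# the matrix J of the involution theta, in block form [[1,0,0],[0,0,-Id],[0,Id,0]]
J = E(0, 0) - sum(E(i, l + i) for i in (1, 2)) + sum(E(l + i, i) for i in (1, 2))
theta = lambda X: -J @ stranspose(X) @ np.linalg.inv(J)
```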
For a class of super Lie algebras $\mathfrak{g}$ allowing a
non-degenerate invariant pairing $(\,|\,)$ there is a canonical
construction of the quadratic Casimir element
$C_2\in\mathcal{Z}(\mathcal{U}(\mathfrak{g}))$ of the center of the universal enveloping
algebra $\mathcal{U}(\mathfrak{g})$ (see e.g.
\cite{Kac1}). Let us choose a pair $\{u_i,\,i\in I\}$,
$\{u^i,\,i\in I\}$ of dual bases
in the Cartan subalgebra $\mathfrak{h}\subset\mathfrak{g}$, and let
$\{X_{{\alpha}},\,X^{{\alpha}},\,{\alpha}\in\Delta^+\}$ be the Cartan-Weyl generators
normalized by $(X^{{\alpha}}|\,X_{{\alpha}})=1$. Then the quadratic Casimir element
$C_2$ allows for the following presentation:
\begin{eqnarray}\new\begin{array}{cc}\label{casimir}
C_2\,
=\,\sum_{i\in I}u^iu_i\,
+\,\sum_{{\alpha}\in\Delta^+}\bigl((-1)^{p({\alpha})}X_{{\alpha}}X^{{\alpha}}\,+\,X^{{\alpha}}X_{{\alpha}}\bigr)\,.
\end{array}\end{eqnarray}
To define the quadratic Casimir element for $\mathfrak{osp}(1|2\ell)$ we shall
first introduce a non-degenerate invariant pairing.
The super Lie algebra $\mathfrak{osp}(1|2\ell)$ admits the invariant scalar
product defined as follows:
\begin{eqnarray}\new\begin{array}{cc}\label{ISP2}
(X|Y)\,:=\,\frac{1}{2}{\rm str}\,\bigl(\rho_t(X)\circ
\rho_t(Y)\bigr)\,,\qquad X,\,Y\in \mathfrak{osp}(1|2\ell)\,,
\end{array}\end{eqnarray}
where $\rho_t: \mathfrak{osp}(1|2\ell)\to {\rm End}(\mathbb{C}^{1|2\ell})$ is the
tautological representation of $\mathfrak{osp}(1|2\ell)$ in $\mathbb{C}^{1|2\ell}$.
The supertrace of $A\in{\rm End}(\mathbb{C}^{1|2\ell})$ of the shape
\eqref{shape} is given by
\begin{eqnarray}\new\begin{array}{cc}
{\rm str}\,(A)\,=\,{\rm str}\,\left(\begin{array}{cc}
A_{11} & A_{12}\\A_{21} & A_{22}
\end{array}\right)\,
=\,-{\rm tr}\,(A_{11})\,+\,{\rm tr}\,(A_{22})\,.
\end{array}\end{eqnarray}
The explicit form of the invariant scalar product \eqref{ISP2}
may be directly derived using the matrix representation
\eqref{ospNrep}.
\begin{lem} The invariant scalar product \eqref{ISP2} on super Lie
algebra $\mathfrak{osp}(1|2\ell)$ is as follows
\begin{eqnarray}\new\begin{array}{cc}\label{ISP}
(h_i|h_i)=1\,,\quad i\in I\,;\\
(X_\alpha|X_{-\alpha})=\frac{2}{(\alpha,\alpha)}\,,\quad
\alpha\in \Delta^+_{\bar{0}}\,;
\qquad (X_\beta|X_{-\beta})=1\,,
\quad \beta\in \Delta^+_{\bar{1}}\,,
\end{array}\end{eqnarray}
with the rest of the products being zero.
\end{lem}
\noindent {\it Proof} : Validity of \eqref{ISP} may be checked directly. For
example, using \eqref{ospNgen2} we have
\begin{eqnarray}\new\begin{array}{cc}
(h_i|h_j)=\frac{1}{2}{\rm str}\,\,\Big(\rho_t(E_{ii}-E_{\ell+i,\ell+i})
\rho_t(E_{jj}-E_{\ell+j,\ell+j})\Big)\,
=\,\delta_{ij}\,.
\end{array}\end{eqnarray}
Similarly, using \eqref{ospNgen1} we obtain
\begin{eqnarray}\new\begin{array}{cc}
(X_{\epsilon_i}|X_{-\epsilon_j})\,
=\,\frac{1}{2}{\rm str}\,\Big(\rho_t(E_{i,0}+E_{0,\ell+i})
\rho_t(E_{0,j}-E_{\ell+j,0})\Big)\,
=\,\delta_{ij}\,.
\end{array}\end{eqnarray}
Similarly one may check the expressions for the remaining products. $\Box$
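These computations are easy to reproduce numerically. The following sketch (our own, for $\ell=2$; the names are ours) evaluates the supertrace pairing \eqref{ISP2} on the matrix generators and recovers the values \eqref{ISP}.

```python
import numpy as np

l = 2
d = 1 + 2 * l

def E(i, j):
    A = np.zeros((d, d)); A[i, j] = 1.0; return A

def strace(A):
    """Supertrace: -tr over the odd (index 0) block + tr over the even block."""
    return -A[0, 0] + np.trace(A[1:, 1:])

pair = lambda X, Y: 0.5 * strace(X @ Y)   # the pairing (X|Y) of eq. (ISP2)

h1 = E(1, 1) - E(3, 3); h2 = E(2, 2) - E(4, 4)
y1 = E(1, 0) + E(0, 3)                    # X_{eps_1}
x1 = E(0, 1) - E(3, 0); x2 = E(0, 2) - E(4, 0)
b11, c11 = E(1, 3), E(3, 1)               # X_{2 eps_1}, X_{-2 eps_1}
b12 = E(1, 4) + E(2, 3); c12 = E(4, 1) + E(3, 2)
```

The values $1/2$ and $1$ for the long and middle even roots match $2/(\alpha,\alpha)$ with $(\alpha,\alpha)=4$ and $2$, respectively.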
\begin{prop} The following expression provides the quadratic
Casimir element \eqref{casimir} for $\mathfrak{osp}(1|2\ell)$:
\begin{eqnarray}\new\begin{array}{cc}\label{Gcasimir}
C_2\,
=\,\sum_{i\in I}\Big(h_{i}^2\,
-\,X_{\epsilon_i}X_{-\epsilon_i}\,+\,X_{-\epsilon_i}X_{\epsilon_i}\Big)\\
+\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}\frac{({\alpha},\,{\alpha})}{2}\bigl(X_{{\alpha}}X_{-{\alpha}}\,
+\,X_{-{\alpha}}X_{{\alpha}}\bigr)\,.
\end{array}\end{eqnarray}
\end{prop}
\noindent {\it Proof} : Using the expressions \eqref{ISP} for the invariant pairing
we have
\begin{eqnarray}\new\begin{array}{cc}\label{ISP1}
h^i\,=\,h_i\,,\quad i\in I\,;\\
X^{\alpha}\,=\,\frac{(\alpha,\alpha)}{2}\,X_{-\alpha}\,,\quad
\alpha\in \Delta^+_{\bar{0}}\,;\qquad
X^{\beta}\,=\,X_{-\beta}\,,\quad \beta\in \Delta^+_{\bar{1}}\,.
\end{array}\end{eqnarray}
Substituting \eqref{ISP1} into \eqref{casimir} we arrive at
\eqref{Gcasimir}, and complete the proof. $\Box$
In the following it will be more convenient to use
another set of notations for the generators, adapted to the
matrix form \eqref{ospNrep}:
\begin{eqnarray}\new\begin{array}{cc}
y_i\,=X_{\epsilon_i}\,,\qquad x_i\,=\,X_{-\epsilon_i}\,\,;\\
a_{ii}\,=\,h_i\,,
\quad b_{ii}\,=\,X_{2\epsilon_i}\,,\quad c_{ii}\,=\,X_{-2\epsilon_i}\,,\qquad i\in I\,;\\
a_{ij}\,=\,X_{\epsilon_i-\epsilon_j}\,\qquad a_{ji}\,=\,X_{-\epsilon_i+\epsilon_j}\,,\\
b_{ij}\,=\,X_{\epsilon_i+\epsilon_j}\,,\quad c_{ij}\,=\,X_{-\epsilon_i-\epsilon_j},\quad
i<j\,\,.
\end{array}\end{eqnarray}
In addition to \eqref{osprel} the even part of $\mathfrak{osp}(1|2\ell)$
satisfies the following relations:
\begin{eqnarray}\new\begin{array}{cc}\label{spNrels}
[b_{ij},b_{kl}]=0, \qquad
[c_{ij},c_{kl}]=0, \qquad
[b_{ij},c_{kl}]=\delta_{jk}a_{il}+\delta_{jl}a_{ik}+\delta_{ik}a_{jl}+\delta_{il}a_{jk},\\
\,[a_{ii},b_{kl}]=(\delta_{ik}+\delta_{il})b_{kl}, \qquad
[a_{ii},c_{kl}]=-(\delta_{ik}+\delta_{il})c_{kl}\,.
\end{array}\end{eqnarray}
Using these notations the quadratic Casimir element \eqref{Gcasimir}
may be written as follows:
\begin{eqnarray}\new\begin{array}{cc}\label{Gcasimir1}
C_2\,
=\,\sum_{i=1}^{\ell}\Big(a_{ii}^2+x_iy_i-y_ix_i\,
+\,2(c_{ii}b_{ii}+b_{ii}c_{ii})\Big)\\
+\,\sum_{i<j}\Big(a_{ij}a_{ji}+a_{ji}a_{ij}\,+\,b_{ij}c_{ij}+c_{ij}b_{ij}\Big)\,.
\end{array}\end{eqnarray}
From now on we will consider the real form $\mathfrak{osp}(1|2\ell)(\mathbb{R})$ of
the orthosymplectic super Lie algebra, such that the generators
$a_{ii},\,b_{ii},\,c_{ii},\,i\in I$,
$a_{ij},\,a_{ji},\,b_{ij},\,c_{ij},\,i<j$ as well as $x_i$ and $y_i$
are defined to be real.
\section{The $\mathfrak{osp}(1|\,2\ell)$-Whittaker function}
In this section we construct the Whittaker function associated with
the super Lie algebra $\mathfrak{osp}(1|2\ell)$. There is a classical approach
to the construction of Whittaker functions associated with
semisimple Lie algebras \cite{J}, \cite{K1}, \cite{K2}, \cite{H}.
Below we give a modified version of this construction due to
Kazhdan-Kostant (see \cite{E}).
Given a super Lie algebra ${\mathfrak g}$, let $\mathcal{U}({\mathfrak g})$ be the corresponding
universal enveloping algebra and let $\mathcal{Z}(\mathcal{U}(\mathfrak{g}))\subset\mathcal{U}({\mathfrak g})$
be its center. A $\mathcal{U}(\mathfrak{g})$-module $\mathcal{V}$ admits an infinitesimal
character $\zeta$ if there is a homomorphism $\zeta:
\mathcal{Z}(\mathcal{U}(\mathfrak{g}))\rightarrow \mathbb{C}$ such that $zv=\zeta(z)v$ for all
$z\!\in\!\mathcal{Z}(\mathcal{U}(\mathfrak{g}))$ and $v\in\mathcal{V}$. Given a character $\chi$ of a
nilpotent super Lie subalgebra $\mathfrak{n}\subset \mathfrak{g}$,
\begin{eqnarray}\new\begin{array}{cc}
\chi\,:\quad\mathfrak{n}\longrightarrow\,\mathbb{C}^{1|1}\,;
\end{array}\end{eqnarray}
we define a Whittaker vector $\psi\in\mathcal{V}$ by the following
relations:
\begin{eqnarray}\new\begin{array}{cc}
X\cdot\psi\,=\,\chi(X)\,\psi\,,\qquad\forall
X\in\mathfrak{n}\subset\mathfrak{g}\,.
\end{array}\end{eqnarray}
The Whittaker vector $\psi\in\mathcal{V}$ is called cyclic, if it
generates $\mathcal{V}$: $\mathcal{U}({\mathfrak g})\,\cdot\psi=\mathcal{V}$. A $\mathcal{U}({\mathfrak g})$-module
$\mathcal{V}$ is called a Whittaker module if it contains a cyclic Whittaker
vector. A pair of $\mathcal{U}({\mathfrak g})$-modules $\mathcal{V}$ and $\mathcal{V}'$ is called
dual if there exists a non-degenerate pairing
\begin{eqnarray}\new\begin{array}{cc}
\<\,,\,\>\,:\quad\mathcal{V}\times\mathcal{V}'\,\longrightarrow\,\mathbb{C}^{1|1}\,,
\end{array}\end{eqnarray}
which is $\mathbb{C}$-antilinear in the first
variable and $\mathbb{C}$-linear in the second one, and such that
\begin{eqnarray}\new\begin{array}{cc}\label{hermitean}
\<X\cdot v',\,v\>\,=\,-(-1)^{p(v')\cdot p(X)}\<v'\,,X\cdot
v\>\,,\qquad v\in\mathcal{V},\,\, v'\in\mathcal{V}'\,,\quad X\in\mathfrak{g}\,.
\end{array}\end{eqnarray}
Now we restrict ourselves to the case of the orthosymplectic super
Lie algebra $\mathfrak{osp}(1|2\ell)$. Let $\mathcal{V}_\lambda$ be a $\mathcal{U}(\mathfrak{osp}(1|\,2\ell))$-module
with an infinitesimal character, containing
a vector $v_{\lambda}\in\mathcal{V}_{\lambda}$ defined by (see the notations
of \eqref{ospNCartan}, \eqref{osprel}):
\begin{eqnarray}\new\begin{array}{cc}\label{hwvec}
h_{{\alpha}^{\vee}}\cdot v_{\lambda}\,=\,{\alpha}^{\vee}(\lambda)v_{\lambda}\,,\quad
X_{{\alpha}}\cdot v_{\lambda}\,=\,0\,,\qquad\forall
X_{{\alpha}}\in\mathfrak{n}_+\subset\mathfrak{osp}(1|2\ell)\,,\quad{\alpha}\in\Delta^+\,,
\end{array}\end{eqnarray}
where $\lambda$ is an element of the dual of the Cartan subalgebra
$\mathfrak{h}\subset \mathfrak{osp}(1|2\ell)$.
The value of the quadratic Casimir element $C_2$ on $\mathcal{V}_{\lambda}$ is
uniquely determined by \eqref{hwvec}. Indeed, let us rewrite the
Casimir element $C_2$ from \eqref{Gcasimir} as follows:
\begin{eqnarray}\new\begin{array}{cc}\label{Gcasimir11}
C_2\,=\,\sum_{i\in I}\Big(a_{ii}^2\,
-\,a_{ii}\,+\,2X_{-\epsilon_i}X_{\epsilon_i}\Big)\,
+\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}\frac{({\alpha},\,{\alpha})}{2}\Big(h_{{\alpha}^{\vee}}\,
+\,2X_{-{\alpha}}X_{{\alpha}}\Big)\\
=\,\sum_{i\in I}\Big(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\,
+\,2X_{-\epsilon_i}X_{\epsilon_i}\Big)\,
+\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\,X_{-{\alpha}}X_{{\alpha}}\,,
\end{array}\end{eqnarray}
where
\begin{eqnarray}\new\begin{array}{cc}\label{rho1}
\rho(q)\,
=\,\frac{1}{2}\Big(\sum_{{\alpha}\in\Delta^+_{\bar{0}}}{\alpha}(q)\,
-\,\sum_{\beta\in\Delta^+_{\bar{1}}}\beta(q)\Big)\,.
\end{array}\end{eqnarray}
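As a sanity check of \eqref{rho1} (not needed for the argument), one can enumerate the positive roots of $\mathfrak{osp}(1|2\ell)$ — the odd roots $\epsilon_i$ and the even roots $2\epsilon_i$, $\epsilon_i\pm\epsilon_j$, $i<j$ — and verify that, with the $\epsilon_i$ taken orthonormal, $\rho(\epsilon_k)=\ell-k+\tfrac12$. A minimal Python sketch (all names are ours):

```python
from fractions import Fraction

def rho_coeffs(l):
    """Coefficients of rho = (1/2)(sum of even positive roots
    - sum of odd positive roots) of osp(1|2l) in the basis
    eps_1, ..., eps_l, assuming the eps_i are orthonormal."""
    even = []
    # long even roots 2*eps_i
    for i in range(l):
        v = [0] * l
        v[i] = 2
        even.append(v)
    # even roots eps_i - eps_j and eps_i + eps_j, i < j
    for i in range(l):
        for j in range(i + 1, l):
            for s in (+1, -1):
                v = [0] * l
                v[i], v[j] = 1, s
                even.append(v)
    # odd positive roots eps_i
    odd = []
    for i in range(l):
        v = [0] * l
        v[i] = 1
        odd.append(v)
    return [Fraction(sum(r[k] for r in even) - sum(r[k] for r in odd), 2)
            for k in range(l)]

# rho(eps_k) = l - k + 1/2 for k = 1, ..., l
for l in (1, 2, 3, 5):
    assert rho_coeffs(l) == [l - k + Fraction(1, 2) for k in range(1, l + 1)]
```

In particular, the coefficient $2\rho(\epsilon_i)$ appearing in \eqref{Gcasimir11} equals the odd integer $2(\ell-i)+1$ under this normalization.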
Thus using \eqref{hwvec} $C_2$ takes the following value on
$v_{\lambda}\in\mathcal{V}_{\lambda}$:
\begin{eqnarray}\new\begin{array}{cc}\label{CasimirValue}
C_2(v_{\lambda})\,=\,\sum_{i\in
I}\bigl(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\bigr)(v_{\lambda})\,
=\,(\lambda,\lambda+2\rho)\,v_{\lambda}\,.
\end{array}\end{eqnarray}
In the following we consider those $\mathcal{V}_\lambda$ that admit the
structure of a Whittaker module and also allow the integration of
the action of the Cartan subalgebra $\mathfrak{h}\subset\mathfrak{osp}(1|2\ell)$ to an
action of the corresponding maximal torus $H$. Precisely, let
$\mathcal{V}_\lambda$ and $\mathcal{V}'_\lambda$ be a dual pair of Whittaker
modules cyclically generated by
Whittaker vectors $\psi_R\in\mathcal{V}_{\lambda}$ and $\psi_L\in\mathcal{V}_{\lambda}'$.
Explicitly, the Whittaker vectors $\psi_R\in\mathcal{V}_{\lambda}$ and
$\psi_L\in\mathcal{V}'_\lambda$ are defined by the following conditions
\begin{eqnarray}\new\begin{array}{cc}\label{whittchar}
X_{{\alpha}}\cdot\psi_R\,=\,\chi_R(X_{{\alpha}})\,\psi_R\,,\qquad
X_{-{\alpha}}\cdot\psi_L\,=\,\chi_L(X_{-{\alpha}})\,\psi_L\,,\qquad
\forall
{\alpha}\in\Delta^+\,,
\end{array}\end{eqnarray}
where $\chi_R:\,\mathfrak{n}_+\to\mathbb{C}^{1|1}$ and $\chi_L:\,\mathfrak{n}_-\to\mathbb{C}^{1|1}$
are the characters of the opposite nilpotent super Lie subalgebras
$\mathfrak{n}_{\pm}\subset\mathfrak{osp}(1|2\ell)$:
\begin{eqnarray}\new\begin{array}{cc}
\chi_R\,:\quad\mathfrak{n}_+\,=\,\Big(\bigoplus_{{\alpha}\in\Delta^+_{\bar{0}}}\mathbb{C}
X_{{\alpha}}\,\oplus\,\bigoplus_{\beta\in\Delta^+_{\bar{1}}}\mathbb{C}
X_{\beta}\Big)\,\longrightarrow\,\mathbb{C}^{1|1}\,,\\
\chi_L\,:\quad\mathfrak{n}_-\,=\,\Big(\bigoplus_{{\alpha}\in\Delta^+_{\bar{0}}}\mathbb{C}
X_{-{\alpha}}\,\oplus\,\bigoplus_{\beta\in\Delta^+_{\bar{1}}}\mathbb{C}
X_{-\beta}\Big)\,\longrightarrow\,\mathbb{C}^{1|1}\,.
\end{array}\end{eqnarray}
\begin{lem}\label{Nchar}
\begin{itemize}
\item[(i)] The function $\chi_R:\,\mathfrak{n}_+\to\mathbb{C}^{1|1}$ defined by
\begin{eqnarray}\new\begin{array}{cc}\label{whittvecR}
\chi_R\bigl(X_{\epsilon_{\ell}}\bigr)\,=\,\imath^{3/2}\xi_{{\alpha}_1}^+\,\in\,
\imath^{3/2}\mathbb{R}^{1|0}\,,\quad
\chi_R\bigl(X_{2\epsilon_{\ell}}\bigr)\,=\,\imath(\xi_{{\alpha}_1}^+)^2\,\in\,\imath\mathbb{R}^{0|1}\,,\\
\chi_R\bigl(X_{\epsilon_{\ell+1-k}-\epsilon_{\ell+2-k}}\bigr)\,=\,
\imath\xi_{{\alpha}_k}^+\,\in\,\imath\mathbb{R}^{0|1}\,,
\quad1<k\leq\ell\,,\\
\chi_R\bigl(X_{\epsilon_k}\bigr)\,=\,\chi_R\bigl(X_{{\alpha}}\bigr)\,=\,0\,,\qquad
1\leq k<\ell\,,\quad{\alpha}\in\Delta^+_{\bar{0}}\setminus{}^s\Delta^+_{\bar{0}}\,,
\end{array}\end{eqnarray}
is a character of the super Lie subalgebra
$\mathfrak{n}_+\subset\mathfrak{osp}(1|\,2\ell)$.
\item[(ii)] Similarly, the function $\chi_L:\,\mathfrak{n}_-\to\mathbb{C}^{1|1}$ defined by
\begin{eqnarray}\new\begin{array}{cc}\label{whittvecL}
\chi_L\bigl(X_{-\epsilon_{\ell}}\bigr)\,=\,\imath^{3/2}\xi_{{\alpha}_1}^-\,\in\,
\imath^{3/2}\mathbb{R}^{1|0}\,,\qquad
\chi_L\bigl(X_{-2\epsilon_{\ell}}\bigr)\,=\,\imath(\xi_{{\alpha}_1}^-)^2\,\in\,\imath\mathbb{R}^{0|1}\,,\\
\chi_L\bigl(X_{-\epsilon_{\ell+1-k}+\epsilon_{\ell+2-k}}\bigr)
\,=\,\imath\xi_{{\alpha}_k}^-\,\in\,\imath\mathbb{R}^{0|1}\,,\quad1<k\leq\ell\,,\\
\chi_L\bigl(X_{-\epsilon_k}\bigr)\,=\,\chi_L\bigl(X_{-{\alpha}}\bigr)\,=\,0\,,\qquad
1\leq k<\ell\,,\quad{\alpha}\in\Delta^+_{\bar{0}}\setminus{}^s\Delta^+_{\bar{0}}\,,
\end{array}\end{eqnarray}
is a character of the super Lie subalgebra
$\mathfrak{n}_-\subset\mathfrak{osp}(1|\,2\ell)$.
\end{itemize}
\end{lem}
\noindent {\it Proof} : We provide the proof in the case of $\chi_R$ while the case of
$\chi_L$ can be treated in a similar way. Let us verify that
\eqref{whittvecR} defines a character $\chi_R$ of the super
subalgebra $\mathfrak{n}_+\subset\mathfrak{osp}(1|2\ell)$ by checking the compatibility
of \eqref{whittvecR} with the appropriate Cartan-Weyl relations
\eqref{osprel}:
\begin{eqnarray}\new\begin{array}{cc}\label{osprelN}
[X_{\epsilon_i},\,X_{\epsilon_i}]\,=\,2X_{\epsilon_i}^2\,=\,2X_{2\epsilon_i}\,,\quad
[X_{\epsilon_i},\,X_{\epsilon_j}]\,=\,X_{\epsilon_i+\epsilon_j}\,,\qquad i,j\in I\,,\\
\bigl[X_{\epsilon_i-\epsilon_{i+1}},X_{\epsilon_{i+1}}\bigr]=X_{\epsilon_i},\quad
\bigl[X_{\epsilon_i-\epsilon_{i+1}},[X_{\epsilon_i-\epsilon_{i+1}},X_{2\epsilon_{i+1}}]\bigr]
=2X_{2\epsilon_i},\quad1\leq
i<\ell\,,\\
\bigl[X_{\epsilon_i-\epsilon_j},\,X_{2\epsilon_j}\bigr]\,=\,X_{\epsilon_i+\epsilon_j}\,,\qquad i<j\,,\\
\bigl[X_{\epsilon_i-\epsilon_j},\,X_{\epsilon_j-\epsilon_k}\bigr]\,=\,X_{\epsilon_i-\epsilon_k}\,,\qquad
i<j<k\,,
\end{array}\end{eqnarray}
and with the Serre relations \eqref{Serre}:
\begin{eqnarray}\new\begin{array}{cc}\label{SerreN}
{\rm ad}_{X_{\epsilon_{\ell}}}^2(X_{\epsilon_{\ell}})\,=\,0\,,\quad
{\rm ad}_{X_{\epsilon_{\ell}}}^3(X_{\epsilon_{\ell-1}-\epsilon_{\ell}})\,
=\,{\rm ad}_{X_{\epsilon_{\ell-1}-\epsilon_{\ell}}}^2(X_{\epsilon_{\ell}})\,=\,0\,,\\
{\rm ad}_{X_{2\epsilon_{\ell}}}^2(X_{\epsilon_{\ell-1}-\epsilon_{\ell}})\,
=\,{\rm ad}_{X_{\epsilon_{\ell-1}-\epsilon_{\ell}}}^3(X_{2\epsilon_{\ell}})\,=\,0\,,\\
{\rm ad}_{X_{\epsilon_{i-1}-\epsilon_i}}^2(X_{\epsilon_i-\epsilon_{i+1}})\,
=\,{\rm ad}_{X_{\epsilon_i-\epsilon_{i+1}}}^2(X_{\epsilon_{i-1}-\epsilon_i})\,=\,0\,,\quad1<i<\ell\,.
\end{array}\end{eqnarray}
From the defining relations \eqref{whittvecR} we see that $\chi_R$
takes non-zero values only on the simple root generators
$X_{\epsilon_{\ell}}$, $X_{\epsilon_k-\epsilon_{k+1}},\,1\leq k<\ell$ and on the
special non-simple root generator $X_{2\epsilon_{\ell}}$. The latter
follows from the first relation from \eqref{osprelN} for $i=\ell$:
\begin{eqnarray}\new\begin{array}{cc}
[X_{\epsilon_{\ell}},\,X_{\epsilon_{\ell}}]\,=\,2X_{\epsilon_{\ell}}^2\,
=\,2X_{2\epsilon_{\ell}}\,.
\end{array}\end{eqnarray}
Indeed, given $X_{\epsilon_{\ell}}\cdot\psi_R=\imath^{3/2}\,\xi_{{\alpha}_1}^+\psi_R$ one
readily deduces
\begin{eqnarray}\new\begin{array}{cc}
2X_{\epsilon_{\ell}}^2\cdot\psi_R\,=\,2X_{\epsilon_{\ell}}\cdot(X_{\epsilon_{\ell}}\cdot\psi_R)\,
=\,2X_{\epsilon_{\ell}}\cdot(\imath^{3/2}\xi_{{\alpha}_1}^+\psi_R)\,
=\,-2\imath^{3/2}\xi_{{\alpha}_1}^+(X_{\epsilon_{\ell}}\cdot\psi_R)\\
=\,2\imath(\xi_{{\alpha}_1}^+)^2\psi_R\,,
\end{array}\end{eqnarray}
which matches with
$2X_{2\epsilon_{\ell}}\cdot\psi_R\,=\,2\imath(\xi_{{\alpha}_1}^+)^2\psi_R$.
Similarly, the first relation from \eqref{osprelN} for $1\leq
i<\ell$ yields
\begin{eqnarray}\new\begin{array}{cc}
\chi_R(X_{2\epsilon_i})\,=\,\frac{1}{2}\chi_R(X_{\epsilon_i})^2\,
=\,0\,,\qquad1\leq i<\ell\,,
\end{array}\end{eqnarray}
and the other relation in the first line of \eqref{osprelN}
entails
\begin{eqnarray}\new\begin{array}{cc}
\chi_R(X_{\epsilon_i+\epsilon_j})\,=\,\chi_R(X_{\epsilon_i}X_{\epsilon_j}+X_{\epsilon_j}X_{\epsilon_i})\,
=\,0\,,\qquad i<j\,.
\end{array}\end{eqnarray}
The Serre relations imply that
$\dim\mathfrak{n}_+=|\Delta^+|=\ell^2+\ell=\ell(\ell+1)$. Thus the rest of
the defining relations \eqref{whittvecR} are provided by the fact
that given $X_{{\alpha}}\in\mathfrak{n}_+$ and $X_{\beta},X_{{\gamma}}\in\mathfrak{n}_+$, such
that ${\alpha}=\beta+{\gamma}$ and not both $\beta,{\gamma}$ are odd, we have
\begin{eqnarray}\new\begin{array}{cc}
\chi_R(X_{{\alpha}})\,
=\,\chi_R\bigl(X_{\beta}X_{{\gamma}}-X_{{\gamma}}X_{\beta}\bigr)\,=\,0\,.
\end{array}\end{eqnarray}
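The dimension count $\dim\mathfrak{n}_+=\ell(\ell+1)$ used above can be confirmed by direct enumeration of the positive roots (a sketch, assuming the root system with odd roots $\epsilon_i$ and even roots $2\epsilon_i$, $\epsilon_i\pm\epsilon_j$, $i<j$, used throughout this section):

```python
def n_plus_dim(l):
    """Number of positive roots of osp(1|2l), i.e. dim of n_+."""
    odd = l                           # odd roots eps_i
    even = l + 2 * (l * (l - 1) // 2) # 2*eps_i and eps_i ± eps_j, i < j
    return odd + even

for l in range(1, 10):
    assert n_plus_dim(l) == l * (l + 1)
```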
Namely, the last line of \eqref{osprelN} for each
$1\leq i<j<k\leq\ell$ implies
\begin{eqnarray}\new\begin{array}{cc}
\chi_R(X_{\epsilon_i-\epsilon_k})\,
=\,\chi_R\bigl(X_{\epsilon_i-\epsilon_j}X_{\epsilon_j-\epsilon_k}
-X_{\epsilon_j-\epsilon_k}X_{\epsilon_i-\epsilon_j}\bigr)\,
=\,0\,.
\end{array}\end{eqnarray}
Then by the second and the third lines of \eqref{osprelN}, for each
$1\leq i<k\leq\ell$ we have
\begin{eqnarray}\new\begin{array}{cc}
\chi_R(X_{\epsilon_i})\,
=\,\chi_R\bigl(X_{\epsilon_i-\epsilon_k}X_{\epsilon_k}-X_{\epsilon_k}X_{\epsilon_i-\epsilon_k}\bigr)\,=\,0\,.
\end{array}\end{eqnarray}
Finally, we check that the remaining relations
\begin{eqnarray}\new\begin{array}{cc}
\bigl[X_{\epsilon_i-\epsilon_j},\,X_{2\epsilon_j}\bigr]\,=\,X_{\epsilon_i+\epsilon_j}\,,\\
{\rm ad}_{X_{\epsilon_i-\epsilon_j}}^2(X_{2\epsilon_j})\,
=\,\bigl[X_{\epsilon_i-\epsilon_j},[X_{\epsilon_i-\epsilon_j},X_{2\epsilon_j}]\bigr]\,
=\,X_{2\epsilon_i}\,,\qquad i<j\,,
\end{array}\end{eqnarray}
are consistent with the defining relations \eqref{whittvecR}. This
completes our proof. $\Box$
\begin{rem} Our choice of the characters \eqref{whittvecL}, \eqref{whittvecR}
in Lemma \ref{Nchar} is
compatible with the notion of unitary operators in the case of
super Hilbert spaces (see \cite{DM}). Note however that in our case we do not require
the Hilbert space structure but only an invariant pairing.
\end{rem}
\begin{de} Let $\mathcal{V}_\lambda$ and $\mathcal{V}'_\lambda$ be a dual pair of
cyclic Whittaker modules with the action of the Casimir element given
by \eqref{CasimirValue}. The $\mathfrak{osp}(1|2\ell)$-Whittaker function is defined by
\begin{eqnarray}\new\begin{array}{cc}\label{Gwhittaker}
\Psi_{\lambda}(e^q)\,
=\,e^{-\rho(q)}\<\psi_L\,,\,e^{-h_q} \cdot \psi_R\>\,,\qquad
h_q\,=\,\sum_{i\in I}q_ia_{ii}\,,
\end{array}\end{eqnarray}
where $\rho$ is the half-sum of positive even roots minus the half-sum
of positive odd roots, given by \eqref{rho1}.
\end{de}
\begin{prop} The $\mathfrak{osp}(1|\,2\ell)$-Whittaker function \eqref{Gwhittaker} is
a solution to the following eigenvalue problem:
\begin{eqnarray}\new\begin{array}{cc}\label{eigenvalue}
\mathcal{H}_2^{\mathfrak{osp}(1|2\ell)}\cdot\Psi_{\lambda}(e^q)
\,=\,-(\lambda+\rho)^2\,\Psi_{\lambda}(e^q)\,,
\end{array}\end{eqnarray}
\begin{eqnarray}\new\begin{array}{cc}\label{BCham}
\begin{array}{c}
\mathcal{H}_2^{\mathfrak{osp}(1|2\ell)}\,
=-\,\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,
+\,2\!\!\sum_{{\alpha}_i\in{}^s\!\Delta^+}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)}\,
+\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2\,e^{2{\alpha}_1(q)}\,,
\end{array}
\end{array}\end{eqnarray}
where $\rho$ is given by \eqref{rho1}, and
${}^s\!\Delta^+={}^s\!\Delta^+(B_{0,\ell})$ is defined in
\eqref{OPSroot}.
\end{prop}
\noindent {\it Proof} : On the one hand, by our construction we read from
\eqref{CasimirValue}:
\begin{eqnarray}\new\begin{array}{cc}\label{CasimirValue1}
\<\psi_L\,,e^{-h_q}\,C_2\,\psi_R\>\,
=\,(\lambda,\lambda+2\rho)\,\<\psi_L\,,e^{-h_q}\,\psi_R\>\,.
\end{array}\end{eqnarray}
On the other hand, the action of the Casimir element
$C_2\in\mathcal{Z}(\mathcal{U}(\mathfrak{osp}(1|2\ell)))$ is equivalent to the action on
\eqref{Gwhittaker} of a certain second-order differential operator.
Namely, from \eqref{Gcasimir11} we take
\begin{eqnarray}\new\begin{array}{cc}
C_2\,
=\,\sum_{i\in I}\Big(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\,
+\,2X_{-\epsilon_i}X_{\epsilon_i}\Big)\,
+\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\,X_{-{\alpha}}X_{{\alpha}}\,,
\end{array}\end{eqnarray}
and substituting this into \eqref{CasimirValue1} we obtain:
\begin{eqnarray}\new\begin{array}{cc}
\sum_{i\in I}
\bigl\<\psi_L\,,e^{-h_q}\bigl(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\bigr)\,\psi_R\bigr\>\\
=\,\sum_{i\in I}\Big\{\frac{\partial^2}{\partial q_i^2}\,
-\,2\rho(\epsilon_i)\frac{\partial}{\partial
q_i}\Big\}\<\psi_L\,,e^{-h_q}\cdot\psi_R\>\,.
\end{array}\end{eqnarray}
Taking into account the defining equations \eqref{whittvecL},
\eqref{whittvecR} and
the hermitian property \eqref{hermitean} of $\<\,,\,\>$ we find
\begin{eqnarray}\new\begin{array}{cc}
2\sum_{i\in I}\bigl\<\psi_L\,,e^{-h_q}X_{-\epsilon_i}X_{\epsilon_i}\,\psi_R\bigr\>\\
=\,-2\sum_{i\in I}e^{q_i}
(-1)^{p(X_{-\epsilon_i})\,p(\psi_L)}\bigl\<X_{-\epsilon_i}\psi_L\,\,,e^{-h_q}\,X_{\epsilon_i}\,
\psi_R\bigr\>\,\\
=\,-2(-1)^{p(X_{-\epsilon_{\ell}})\,p(\psi_L)}e^{q_{\ell}}
\bigl\<\imath^{3/2}\xi_{{\alpha}_1}^-\psi_L\,,e^{-h_q}\imath^{3/2}\xi_{{\alpha}_1}^+\psi_R\bigr\>\,\\
=\,-2(-1)^{p(X_{-\epsilon_{\ell}})\,p(\psi_L)}(-1)^{p(\xi_{{\alpha}_1}^+)\cdot p(\psi_L)}
\imath^{-3/2}\imath^{3/2}\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+
e^{{\alpha}_1(q)}\bigl\<\psi_L\,,e^{-h_q}\cdot\psi_R\bigr\>\,\\
=\,-2\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+e^{\alpha_1(q)}\,\bigl\<\psi_L\,,e^{-h_q}\cdot
\psi_R\bigr\>\,.
\end{array}\end{eqnarray}
Here we use the fact that $\<\,,\,\>$ is $\mathbb{C}$-antilinear in the
first variable and it is $\mathbb{C}$-linear in the second variable. In a similar way we
derive
\begin{eqnarray}\new\begin{array}{cc}
\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\bigl\<\psi_L\,,e^{-h_q}X_{-{\alpha}}X_{{\alpha}}
\psi_R\bigr\>\\
=\,-\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\,e^{{\alpha}(q)}
\bigl\<X_{-{\alpha}}\,\psi_L\,,
e^{-h_q}X_{{\alpha}}\psi_R\bigr\>\\
=\,-2\sum_{i=2}^{\ell}\imath^{-1}\imath\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+e^{{\alpha}_i(q)}
\bigl\<\psi_L\,,e^{-h_q}\psi_R\bigr\>\\
-\,4\imath^{-1}\imath(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2e^{2{\alpha}_1(q)}
\bigl\<\psi_L\,,e^{-h_q}\psi_R\bigr\>\,.
\end{array}\end{eqnarray}
Collecting the contributions above we obtain the following:
\begin{eqnarray}\new\begin{array}{cc}
\<\psi_L\,,e^{-h_q}\,C_2\,\psi_R\>\,
=\,\Big\{\sum_{i\in I}\Big(\frac{\partial^2}{\partial q_i^2}\,
-\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\Big)\\
-\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2e^{2{\alpha}_1(q)}\,
-\,2\sum_{i=1}^{\ell}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)}
\Big\}
\<\psi_L\,,e^{-h_q}\cdot\psi_R\>\,.
\end{array}\end{eqnarray}
Now we observe that
\begin{eqnarray}\new\begin{array}{cc}
e^{-\rho(q)}\frac{\partial}{\partial q_i}e^{\rho(q)}\,
=\,\frac{\partial}{\partial q_i}\,+\,\rho(q)'_{q_i}\,
=\,\frac{\partial}{\partial q_i}\,+\,\rho(\epsilon_i)\,,\\
e^{-\rho(q)}\frac{\partial^2}{\partial q_i^2}e^{\rho(q)}\,
=\,\frac{\partial^2}{\partial q_i^2}\,+\,2\rho(q)'_{q_i}\frac{\partial}{\partial q_i}\,
+\,\bigl(\rho(q)'_{q_i}\bigr)^2\\
=\,\frac{\partial^2}{\partial q_i^2}\,+\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\,
+\,\rho(\epsilon_i)^2\,,
\end{array}\end{eqnarray}
hence we deduce the following:
\begin{eqnarray}\new\begin{array}{cc}
\sum_{i\in I}e^{-\rho(q)}\Big\{\frac{\partial^2}{\partial q_i^2}\,
-\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\Big\}e^{\rho(q)}\\
=\,\sum_{i\in I}\Big\{\frac{\partial^2}{\partial q_i^2}\,+\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\,
+\,\rho(\epsilon_i)^2\,
-\,2\rho(\epsilon_i)\Big(\frac{\partial}{\partial q_i}\,+\,\rho(\epsilon_i)\Big)\Big\}\\
=\,\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,-\,\rho^2\,.
\end{array}\end{eqnarray}
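The two conjugation identities above hold for any linear function $\rho(q)=\sum_i\rho(\epsilon_i)q_i$ and can be verified symbolically; a small sympy sketch, with $r_i$ standing for $\rho(\epsilon_i)$ (the names are ours):

```python
import sympy as sp

l = 3
q = sp.symbols('q1:%d' % (l + 1))
r = sp.symbols('r1:%d' % (l + 1))  # r[i] plays the role of rho(eps_i)
rho_q = sum(ri * qi for ri, qi in zip(r, q))  # linear function rho(q)
f = sp.Function('f')(*q)

for i in range(l):
    # e^{-rho(q)} d/dq_i e^{rho(q)} = d/dq_i + rho(eps_i)
    lhs1 = sp.exp(-rho_q) * sp.diff(sp.exp(rho_q) * f, q[i])
    assert sp.simplify(lhs1 - (sp.diff(f, q[i]) + r[i] * f)) == 0
    # e^{-rho(q)} d^2/dq_i^2 e^{rho(q)}
    #   = d^2/dq_i^2 + 2 rho(eps_i) d/dq_i + rho(eps_i)^2
    lhs2 = sp.exp(-rho_q) * sp.diff(sp.exp(rho_q) * f, q[i], 2)
    assert sp.simplify(lhs2 - (sp.diff(f, q[i], 2)
                               + 2 * r[i] * sp.diff(f, q[i])
                               + r[i]**2 * f)) == 0
```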
Finally, we collect all the contributions and substitute them into
\eqref{CasimirValue1} to deduce the following:
\begin{eqnarray}\new\begin{array}{cc}
\Big\{\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,
-\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2e^{2{\alpha}_1(q)}\,
-\,2\sum_{{\alpha}_i\in{}^s\!\Delta^+}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)}
-\,\rho^2\Big\}\cdot\Psi_{\lambda}(e^q)\\
=\,(\lambda,\lambda+2\rho)\,\Psi_{\lambda}(e^q)\,,
\end{array}\end{eqnarray}
where ${}^s\!\Delta^+={}^s\!\Delta^+(B_{0,\ell})$. This easily
entails the assertion \eqref{eigenvalue}. $\Box$
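The last step uses only the elementary identity $(\lambda,\lambda+2\rho)+(\rho,\rho)=(\lambda+\rho,\lambda+\rho)$, which converts the eigenvalue $(\lambda,\lambda+2\rho)$ of $C_2$ into the eigenvalue $-(\lambda+\rho)^2$ in \eqref{eigenvalue}; a quick numerical check (with $\lambda$, $\rho$ expanded in an orthonormal basis):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(100):
    l = random.randint(1, 6)
    lam = [random.uniform(-5, 5) for _ in range(l)]
    rho = [random.uniform(-5, 5) for _ in range(l)]
    # (lambda, lambda + 2 rho) + (rho, rho) ...
    lhs = dot(lam, [a + 2 * b for a, b in zip(lam, rho)]) + dot(rho, rho)
    # ... equals (lambda + rho, lambda + rho)
    s = [a + b for a, b in zip(lam, rho)]
    assert abs(lhs - dot(s, s)) < 1e-9
```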
\begin{rem} In the special case $\lambda=\imath\mu-\rho$, the eigenvalue equation
\eqref{eigenvalue} reads
\begin{eqnarray}\new\begin{array}{cc}\label{BCham1}
\mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\,\Psi_{\lambda}(e^q)\,
=\,\mu^2\Psi_{\lambda}(e^q)\,,\\
\mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\,
=\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,
+\,2\!\!\sum_{{\alpha}_i\in{}^s\Delta^+}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)}\,
+\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2\,e^{2{\alpha}_1(q)}\,.
\end{array}\end{eqnarray}
\end{rem}
Let us introduce the corresponding couplings:
\begin{eqnarray}\new\begin{array}{cc}
g_i^2\,=\,\xi^-_{{\alpha}_i}\,\xi^+_{{\alpha}_i}\,, \qquad i\in I\,.
\end{array}\end{eqnarray}
\begin{lem}\label{INDEP} The $\mathfrak{osp}(1|2\ell)$-Whittaker function \eqref{Gwhittaker}
depends on $\xi^{\pm}_{{\alpha}_i}$ only via $g_{i}^2$, $i\in I$.
\end{lem}
\noindent {\it Proof} : The $\mathfrak{osp}(1|2\ell)$-Whittaker function \eqref{Gwhittaker}
\begin{eqnarray}\new\begin{array}{cc} \Psi_{\lambda}(e^q|\xi^{\pm}_{{\alpha}_i})\,=\,
e^{-\rho(q)}\<\psi_L\,,\,e^{-h_q}\cdot\psi_R\>\,,\qquad
h_q\,=\,\sum_{i\in I}q_ia_{ii}\,,
\end{array}\end{eqnarray}
satisfies the following obvious relation: given
$Q=\exp\bigl\{\sum\limits_{i=1}^\ell \theta_i h_i\bigr\}\in H$
\begin{eqnarray}\new\begin{array}{cc}
\<\psi_L\,,\,Qe^{-h_q}Q^{-1}\cdot\psi_R\>\,
=\,\<\psi_L\,,\,e^{-h_q}\cdot\psi_R\>\,.
\end{array}\end{eqnarray}
The adjoint action of $Q$ on the left and right Whittaker vectors
$\psi_R,\,\psi_L$ \eqref{whittvecL}, \eqref{whittvecR} changes them,
so that the values $\xi_{{\alpha}_i}^{\pm}$ of the corresponding
$\mathfrak{n}_{\pm}$-characters transform as follows:
\begin{eqnarray}\new\begin{array}{cc}
\xi_{{\alpha}_1}^{\pm}\longrightarrow \xi_{{\alpha}_1}^{\pm}
e^{\pm\theta_{\ell}}\,,\qquad
\xi_{{\alpha}_i}^{\pm}\longrightarrow \xi_{{\alpha}_i}^{\pm}
e^{\pm(\theta_{\ell+1-i}-\theta_{\ell+2-i})}\,,\quad1<i\leq\ell\,.
\end{array}\end{eqnarray}
The invariance of the Whittaker function under this transformation
implies that the $\mathfrak{osp}(1|2\ell)$-Whittaker function depends on
$\xi_{{\alpha}_i}^{\pm}$ only via quadratic combinations
$\xi_{{\alpha}_i}^+\xi_{{\alpha}_i}^-$. $\Box$
\begin{lem} Let us consider a specialization of the
$\mathfrak{osp}(1|2\ell)$-Toda chain
obtained by fixing the couplings at arbitrary nonzero values
$g_i^2=\kappa_i^2\in \mathbb{R}$. Then by a linear change of the variables $q_i$ one can bring the quadratic
Hamiltonian
\begin{eqnarray}\new\begin{array}{cc}
\begin{array}{c}
\mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\,
=\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,
+\,2\sum_{i=2}^{\ell} \kappa_i^2\,e^{{\alpha}_i(q)}\,
+\,2\kappa_1^2\,e^{{\alpha}_1(q)}\,
+\,4\kappa_1^4\,e^{2{\alpha}_1(q)}\,,
\end{array}
\end{array}\end{eqnarray}
to the following canonical form:
\begin{eqnarray}\new\begin{array}{cc}\label{BCcan}
\mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\,
=\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,
+\,\sum_{i=2}^{\ell}e^{{\alpha}_i(q)}\,
+\,e^{{\alpha}_1(q)}\,+\,e^{2{\alpha}_1(q)}.
\end{array}\end{eqnarray}
\end{lem}
\noindent {\it Proof} : Indeed it is easy to check that the following transformation
of variables,
\begin{eqnarray}\new\begin{array}{cc}
q_{\ell}\,\longmapsto\,q_{\ell}\,-\,\ln2\,-\,\ln \kappa_1^2\,,\\
q_k\,\longmapsto\,q_k\,-\,(\ell+1-k)\ln2\,
-\,\sum_{i=1}^{\ell+1-k}\ln \kappa^2_{i}\,,\qquad1\leq k<\ell\,,
\end{array}\end{eqnarray}
applied to the Hamiltonian above gives \eqref{BCcan}. $\Box$
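For $\ell=2$ (so $\alpha_1=\epsilon_2$, $\alpha_2=\epsilon_1-\epsilon_2$) the normalization of the couplings can be illustrated numerically; in this sketch the shifts $c_1,c_2$ are solved for directly rather than read off from the formulas in the proof:

```python
import math

# couplings (arbitrary nonzero values)
k1, k2 = 0.7, 1.9

# shifts of q_2 (attached to alpha_1 = eps_2) and q_1,
# chosen so that every exponential term acquires coefficient 1
c2 = -math.log(2 * k1 ** 2)
c1 = c2 - math.log(2 * k2 ** 2)

for q1, q2 in [(0.3, -1.2), (2.0, 0.5)]:
    # potential of the osp(1|4)-Toda chain after shifting q_i -> q_i + c_i ...
    shifted = (2 * k2 ** 2 * math.exp((q1 + c1) - (q2 + c2))
               + 2 * k1 ** 2 * math.exp(q2 + c2)
               + 4 * k1 ** 4 * math.exp(2 * (q2 + c2)))
    # ... equals the canonical potential with unit coefficients
    canonical = math.exp(q1 - q2) + math.exp(q2) + math.exp(2 * q2)
    assert abs(shifted - canonical) < 1e-12
```

Note that the shift normalizing the $e^{\alpha_1(q)}$ term automatically normalizes the $e^{2\alpha_1(q)}$ term as well, since $4\kappa_1^4=(2\kappa_1^2)^2$.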
\section{On $\mathfrak{osp}(1|2\ell)$ as a Lie algebra of type $BC_\ell$}
Let us recall the construction of the Toda chain associated with the
general root system. Let $\Delta$ be a rank $\ell$ root system
realized as a set of vectors in $V=\mathbb{R}^{\ell}$. Choose an orthogonal
basis $\{\epsilon_i,\,i\in I\}$ in $V$ and the dual basis $\{\epsilon^i,\,i\in
I\}$ in $V^*$, both indexed by $I=\{1,\ldots,\ell\}$. Then elements
$q\in V^*$ admit the decomposition $q=\sum\limits_{i=1}^\ell q_i\epsilon^i$.
Let ${}^s\!\Delta^+$ be the set of simple positive roots in
$\Delta$. The quadratic quantum Hamiltonian of the Toda chain
associated with the root system $\Delta$ is given by
\begin{eqnarray}\new\begin{array}{cc}\label{qHam}
\mathcal{H}_2^{\Delta^+}\,
=\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}+ \sum_{\alpha\in{}^s\!\Delta^+}\,g^2_\alpha
\,e^{\alpha(q)}
\end{array}\end{eqnarray}
with the coupling constants $g_\alpha^2$. Note that the Hamiltonian
depends only on the structure of simple positive roots
${}^s\!\Delta^+$.
Now let us specialize this expression to the case of the $BC_\ell$ root
system, the unique non-reduced root system satisfying the basic
axioms of root systems of finite-dimensional Lie algebras (for a
description of the $BC_\ell$ root system see e.g. \cite{H}, \cite{L}).
The set of simple positive roots of the $BC_{\ell}$ root system is
given by
\begin{eqnarray}\new\begin{array}{cc}\label{BC2}
{}^s\Delta^+(BC_{\ell})\,
=\,\bigl\{2\epsilon_{\ell};\,\, \epsilon_{\ell}\,,\quad \epsilon_{i}-\epsilon_{i+1}\,,\,\, 1\leq
i<\ell\bigr\}\,.
\end{array}\end{eqnarray}
The Cartan matrix $A=\|A_{ij}\|$ associated with the set of simple
positive roots is defined via the standard formula
\begin{eqnarray}\new\begin{array}{cc}\label{CAM}
A_{ij}=\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}\,,\qquad i,j\in I\, .
\end{array}\end{eqnarray}
Note that the Cartan matrix corresponding to $BC_\ell$ is degenerate.
For example, the Cartan matrix for $\ell=3$ (compare with
\eqref{OSPcar}) is given by:
\begin{eqnarray}\new\begin{array}{cc}
A\,= \left(\begin{smallmatrix}
2&4&-2&0\\
1&2&-1&0\\
-1&-2&2&-1\\
0&0&-1&2
\end{smallmatrix}\right).
\end{array}\end{eqnarray}
Using the general formula \eqref{qHam} for the $BC_\ell$ root system we
arrive at the following:
\begin{eqnarray}\new\begin{array}{cc}\label{BCcanG}
\mathcal{H}_2^{BC_\ell}\,
=\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,
+\,\sum_{i=1}^{\ell-1}g_i^2\,e^{q_i-q_{i+1}}\,
+\,g_\ell^2\,e^{q_{\ell}}\,
+\,g_{\ell+1}^2\,e^{2q_{\ell}}.
\end{array}\end{eqnarray}
It is clear that the specialization of the quadratic Hamiltonian
\eqref{BCcanG} of the $BC_\ell$-Toda chain to the case of the
coupling constants $g_i^2=1$, $i=1,\ldots,\ell+1$, coincides with the quadratic
Hamiltonian \eqref{BCcan} of the $\mathfrak{osp}(1|2\ell)$-Toda chain. On the
other hand, for generic values of $g_i$ in \eqref{BCcanG} it is not
possible to transform the Hamiltonian \eqref{BCcanG} into the
Hamiltonian with $g_i^2=1$ by a linear change of the variables $q_i$.
Thus the $\mathfrak{osp}(1|2\ell)$-Toda chain realizes a special class of
$BC_\ell$-Toda chains.
A natural question concerns the underlying reason for this
phenomenon. It is easy to see that the simple
positive roots \eqref{BC2} of $BC_\ell$ and that of $B_{0,\ell}$
\eqref{OPSroot} are closely related. There are however two
differences. First, the short simple root of $\mathfrak{osp}(1|2\ell)$ has odd
parity while in $BC_\ell$ root system it is an even root. Second,
while in the case of super Lie algebra $\mathfrak{osp}(1|2\ell)$ the
corresponding root system includes the roots $\pm 2\epsilon_\ell$, these
roots are not simple and thus do not enter the expression for the
corresponding Cartan matrix. If however we formally add the root
$2\epsilon_\ell$ to the set of positive simple roots then the
corresponding Cartan matrix constructed according to \eqref{CAM}
precisely coincides with the Cartan matrix of $BC_\ell$ root system.
The fact that in the case of $\mathfrak{osp}(1|2\ell)$ the terms of the
Cartan decomposition \eqref{ospNCartan} corresponding to short
roots are odd actually does not manifest itself in the expressions
for the Hamiltonians of the corresponding Toda chain.
Indeed, according to Lemma \ref{INDEP} the eigenvalues
$\xi_{{\alpha}_i}^{\pm}$ in \eqref{whittvecR}, \eqref{whittvecL} enter the
expressions for quantum Hamiltonians only via combinations
$g_i^2=\xi_{{\alpha}_i}^+\xi_{{\alpha}_i}^-$. Therefore $B_{0,\ell}$-Toda
chains turns out to be a special case of $BC_\ell$-Toda chain. We
have checked this explicitly for quadratic Hamiltonian in the
previous Section 3.
It is natural to wonder whether we might treat the super Lie
algebra $\mathfrak{osp}(1|2\ell)$ as a proper candidate for the Lie algebra
structure associated with $BC_\ell$ root system. Such identification
has at least one obvious caveat. The root system $BC_\ell$ allows
embeddings of the root systems $B_\ell$ and $C_\ell$ having isomorphic
Weyl groups $W_{B_\ell}\simeq W_{C_\ell}$. It would be natural to
expect the same property for the corresponding Lie algebras i.e. a
candidate for the Lie algebra associated with $BC_\ell$ should allow
an embedding of the Lie algebras $\mathfrak{so}_{2\ell+1}$ and
$\mathfrak{sp}_{2\ell}$ associated with the root systems $B_\ell$
and $C_\ell$, respectively. While there indeed exists an embedding
$\mathfrak{sp}_{2\ell}\subset \mathfrak{osp}(1|2\ell)$ the super Lie algebra
$\mathfrak{osp}(1|2\ell)$ does not allow an embedding of
$\mathfrak{so}_{2\ell+1}$.
\section{\label{sec:Intro} Introduction}
\input{01-Intro.tex}
\section{Data}\label{sec:Data}
\input{02-Data.tex}
\section{The Algorithms}\label{sec:Method}
\input{03-Method.tex}
\section{Results on parameter space optimization}\label{sec:ps}
\input{04-ResultsPSopt.tex}
\section{The impact of X-ray flux, photometry and morphology in the quality of photo-z}\label{sec:tomo}
\input{05-photozTESTS.tex}
\section{Photo-z estimation and reliability}\label{sec:pointEst}
\input{06-ResultsPhotoz.tex}
\section{PDZ estimation and reliability }\label{sec:pdzEst}
\input{07-ResultsPdz.tex}
\section{Summary \& Conclusions}\label{sec:Conclusions}
\input{08-Conclusions.tex}
\section*{Acknowledgements}
The authors thank the anonymous referee, whose specific comments improved the manuscript.
MB acknowledges the \textit{INAF PRIN-SKA 2017 program 1.05.01.88.04} and the funding from \textit{MIUR Premiale 2016: MITIC}.
MB and GL acknowledge the H2020-MSCA-ITN-2016 SUNDIAL (\textit{SUrvey Network for Deep Imaging Analysis and Learning}), financed within the Call
H2020-EU.1.3.1.
SC acknowledges support from the project ``Quasars at high redshift: physics and Cosmology'' financed by the ASI/INAF agreement 2017-14-H.0.
Topcat \citep{Taylor05} and STILTS \citep{Taylor06} have been used for this work.
This material is based upon work supported by the National Science Foundation under Grant Number AST-1715512.
DAMEWARE has been used for ML experiments \citep{DAMEWARE}. C$^3$ has been used for catalogue cross-matching \citep{Riccio17}.
The SDSS Web Site is \url{http://www.sdss.org/}.
The scientific results reported in this article are based in part on data obtained from the Chandra and XMM Data Archive.
\bibliographystyle{mnras}
\subsection{PHOTOMETRY}\label{sec:photo}
The photometry used in this work is extracted from the catalogue presented in \citetalias{Ananna2017}, which lists the multi-wavelength properties of the counterparts to the X-ray sources detected in Stripe 82X. Compared to the previous version of the catalogue presented in \cite{LaMassa16}, this new catalogue uses deeper multi-wavelength data for the identification of the counterparts and for the computation of the photometric redshifts via SED fitting. Although the catalogue of \citetalias{Ananna2017} includes $6,187$ X-ray sources, we focus here only on the $5,990$ sources for which a reliable counterpart was identified.
All the details on the properties of the photometric dataset are exhaustively presented in \citetalias{Ananna2017}. Here we provide only the list of data that we use for this paper, focusing on the central area of Stripe 82X, observed by {\it Spitzer}, as shown in Fig.~\ref{fig:radec}.
In particular we considered:
\begin{itemize}
\item FUV and NUV magnitudes and corresponding errors from GALEX all-sky survey \citep[]{Martin05}. They were not used in this work due
to the shallowness of the data;
\item {\it u,g,r,i,z} SDSS AUTO magnitudes and corresponding errors from \citet[][]{Fliri16};
\item J, H, K magnitudes and corresponding errors
from VISTA \citep[]{Irwin04}. As shown in \citetalias{Ananna2017}, additional J$_{\rm UK}$, H$_{\rm UK}$, K$_{\rm UK}$ data from UKIDSS \citep[]{Lawrence07} are available for the same area but were not used in this paper;
\item $3.6$ and $4.5~\mu$m magnitudes and corresponding errors from IRAC. Here two complementary surveys are used: SpIES \citep[][]{Timlin} and SHELA \citep{Papovich}. Given the similarity of the two surveys, we do not differentiate sources belonging to one or the other;
\item W1, W2, W3, W4 magnitudes and corresponding errors from AllWISE \citep{Wright10}.
\end{itemize}
In the first column of Table \ref{tab:depth}, we report the nominal depth of each photometric band considered.
The original catalogue is complemented by Soft, Hard and Full band X-ray fluxes from {\it XMM-Newton} and {\it Chandra} \citep[see][for details]{LaMassa16}. It also includes morphological information on the extension and variability of the sources in the optical band. We retain this information, as it has already been demonstrated in the literature that such properties affect the accuracy of photo-z for AGN and can be used as priors to improve performance \citep[e.g.,][]{Salvato09}. While these data are not used directly in the computation of the photo-z, they are employed in various experiments by creating sub-samples in X-ray flux and morphology.
\begin{figure*}
\centering
\includegraphics[]{radec.pdf}
\caption{Map of the original multi-wavelength coverage of Stripe 82X area discussed in \citetalias{Ananna2017}. The total area extends for $\sim 2.5\degree$ in Declination and $120\degree$ in Right Ascension. The dots represent X-ray sources, respectively, from XMM-Newton AO13 (red), AO10 (blue), archival XMM-Newton sources (yellow) and Chandra sources (black). While standard photo-z are generated for the entire area (in red), the selection of the best features discussed in the first part of the paper is obtained considering only the sources in the yellow area.}
\label{fig:radec}
\end{figure*}
\begin{table*}
\centering \resizebox{\textwidth}{!}{
\begin{tabular}{lcccccccccr}
Filter & \multicolumn{9}{c}{BAND DEPTH}\\
\hline
& \multirow{2}{*}{NOMINAL} &\multirow{2}{*}{BEST} &\multirow{2}{*}{SDSS} & SDSS \& & SDSS \& & SDSS \& & SDSS & SDSS & SDSS VHS \\
& & & & VHS & IRAC & WISE & VHS \& IRAC & VHS \& WISE & IRAC \& WISE\\
\hline
\hline
u & 31.22 & 28.54 & 28.54 & 28.54 & 28.54 & 28.54 & 28.54 & 28.54 & 28.54 \\
g & 28.77 & 24.20 & 24.39 & 24.20 & 24.39 & 24.39 & 24.20 & 24.20 & 24.20\\
r & 27.13 & 23.25 & 23.43 & 23.25 & 23.43 & 23.43 & 23.25 & 23.25 & 23.25\\
i & 27.21 & 22.35 & 23.49 & 22.64 & 23.49 & 22.45 & 22.64 & 22.35 & 22.35\\
z & 30.46 & 22.42 & 23.35 & 22.46 & 22.99 & 22.42 & 22.46 & 22.42 & 22.08\\
J & 24.74 & 21.64 & --- & 24.64 & --- & --- & 21.64 & 21.64 & 21.51 \\
H & 24.15 & 22.87 & --- & 22.87 & --- & --- & 21.61 & 22.87 & 21.61\\
K & 22.60 & 21.63 & --- & 21.63 & --- & --- & 21.63 & 21.63 & 21.63\\
CH1\_SPIES & 24.27 & \multirow{2}{*}{20.82$^\dag$} & --- & --- & \multirow{2}{*}{21.64$^\dag$} & --- & \multirow{2}{*}{21.06$^\dag$} & --- & \multirow{2}{*}{20.49$^\dag$} \\
CH1\_SHELA & 22.80 & & --- & --- & & --- & & --- & \\
CH2\_SPIES & 22.88 & \multirow{2}{*}{20.49$^\dag$} & --- & --- & \multirow{2}{*}{21.41$^\dag$} & --- & \multirow{2}{*}{21.07$^\dag$} & --- & \multirow{2}{*}{20.22$^\dag$}\\
CH2\_SHELA & 23.88 & & --- & --- & & --- & & --- & \\
W1 & 21.16 & 20.71 & --- & --- & --- & 20.71 & --- & 20.71 & 20.61\\
W2 & 20.74 & 20.59 & --- & --- & --- & 20.63 & --- & 20.63 & 20.59\\
W3 & 18.20 & 18.04 & --- & --- & --- & 18.11 & --- & 18.11 & 18.04\\
W4 & 16.15 & 16.06 & --- & --- & --- & 16.13 & --- & 16.13 & 15.94\\
\hline
N. of sources & 5990 & 2290 & 4855 & 3218 & 2293 & 3291 & 1620 & 2696 & 1380 \\
\hline
N. of sources & \multirow{2}{*}{2933}& \multirow{2}{*}{1686}& \multirow{2}{*}{2793} & \multirow{2}{*}{2218} & \multirow{2}{*}{1596} & \multirow{2}{*}{2160} & \multirow{2}{*}{1279} & \multirow{2}{*}{1935} &\multirow{2}{*}{1121} \\
w/ z$_{\rm spec}$ & & & & & & & & & \\
\hline
N. of sources & \multirow{2}{*}{ 2351}& \multirow{2}{*}{1249 }& \multirow{2}{*}{ 2025} & \multirow{2}{*}{ 1649} & \multirow{2}{*}{1051} & \multirow{2}{*}{1619} & \multirow{2}{*}{888} & \multirow{2}{*}{1445} &\multirow{2}{*}{793 } \\
w/ F$_X>10^{-14}$ & & & & & & & & & \\
\hline
N. of sources & & & & & & & & & \\
w/ F$_X>10^{-14}$ & 1550& 1025& 1483& 1309 & 857 & 1256 & 758 & 1174 & 683 \\
and z$_{\rm spec}$& & & & & & & & & \\
\hline\hline
\end{tabular}
}
\caption{Summary table of depths, numbers of sources and redshift coverage.
The first column refers to the nominal depth of the entire sample of reliable counterparts in Stripe 82X, as presented in \citetalias{Ananna2017}. The following columns refer to the magnitudes reached in the various experiments, i.e., the faintest magnitude reported in the Stripe 82X catalogue for the various sub-samples for which the photo-z have been computed. The values in the column $BEST$ are the faintest magnitudes of the sub-sample of sources in the yellow area of Fig.~\ref{fig:radec}, used for the feature analysis performed with $\Phi$LAB (Sec.~\ref{sec:philab}). Bands marked with a --- symbol have been discarded from that specific experiment.\newline$^\dag$ SPIES and SHELA have been used together (Sec.~\ref{sec:photo}). The last rows list, for each of the sub-samples, the total number of sources, the number with a spectroscopic redshift available, the number with an X-ray flux brighter than $10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$ (the final depth expected for the eROSITA all-sky survey), and the number satisfying both conditions. Note that here we use only the sources for which the determination of the counterpart is secure, i.e. (SDSS,VHS,IRAC)\_REL\_CLASS==SECURE in the catalogue of \citetalias{Ananna2017}.}
\label{tab:depth}
\end{table*}
\subsection{SPECTROSCOPY}
The spectroscopic coverage of the field (see Fig.~\ref{fig:zs_mag}) is ideal for assessing the performance of photo-z for X-ray selected sources via ML. Over the lifetime of SDSS, the spectroscopic surveys BOSS \citep{Dawson13} and eBOSS \citep{Delubac17} have provided reliable redshifts for about 50\% of the sources ($2,962/5,990$).
In addition, Stripe 82X was targeted by a dedicated spectroscopic program during SDSS-IV, aimed specifically at the counterparts of X-ray sources \citep[]{LaMassa19}. There, the exposure time was at least two hours, allowing the determination of redshifts also for faint sources.
The training and testing samples are formed only by sources with a redshift available at the time of the publication of \citetalias{Ananna2017}. However, as an additional blind test, we also checked the accuracy of the photo-z using this newly available spectroscopic sample of $257$ sources (see Sec.~\ref{sec:pointEst}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{zspec_mag.pdf}
\caption{Redshift and magnitude distribution for the sources with spectroscopic redshift. The blue sources were presented in \citetalias{Ananna2017} and have been used in this work as training and blind test samples. The $258$ yellow sources are on average fainter and were recently presented in \protect\cite{LaMassa19}. They are used as an additional blind test sample.}
\label{fig:zs_mag}
\end{figure}
\subsection{\mathinhead{\Phi}{Phi}LAB}\label{sec:philab}
Recently, in \cite{delliveneri19}, we presented a novel method for the deep analysis and optimization of any parameter space, which provides as output the selection of the most relevant features. The algorithm, developed by our group, is called $\Phi$LAB (PhiLAB, Parameter handling investigation LABoratory).
It is a hybrid approach, combining properties of both the wrapper and embedded feature-selection categories \citep{Tangaro:2015}, based on two joined concepts: \textit{shadow features} \citep{Kursa10} and \textit{Na\"{\i}ve LASSO} (Least Absolute Shrinkage and Selection Operator) statistics \citep{tibshirani12}.
Shadow features are randomly noised versions of the real features, and their importance percentage is used as a threshold to identify the most relevant among the real ones.
Afterwards, the two algorithms based on LASSO and integrated into $\Phi$LAB perform a regularisation, based on the standard $L_1$ norm,
of a ridge regression on the residual set of weakly relevant features (i.e. a shrinking of large regression coefficients to avoid overfitting).
This has the net effect of \emph{sparsifying} the weights of the features, effectively turning off the least informative ones.
LASSO acts by conditioning the likelihood with a penalty on the entries of the covariance matrix, and this penalty plays two important
roles: first, it reduces the effective number of parameters and, second, it produces a sparse estimate.
Having a regularization technique as part of the regression minimization law is the most evident difference with respect to more
traditional parameter-space exploration methods, such as PCA \citep{Pearson2010}. The latter is a technique based on the decomposition of the
feature covariance matrix, in which the principal components are retained instead of the original features. The two concepts, \textit{shadow features}
and \textit{Na\"{\i}ve LASSO}, are then combined within the proposed method by extracting the list of candidate most relevant features through
the noise threshold imposed by the shadow features, and by filtering the set of residual weakly relevant features through the LASSO statistics.
$\Phi$LAB is detailed in \cite{delliveneri19}, where the method has been used to investigate the parameter space in the case of the photometric determination of star formation rates in the SDSS.
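The two-step logic described above (shadow-feature thresholding followed by $L_1$ shrinkage) can be sketched as follows. This is a minimal stand-in built with scikit-learn, not the actual $\Phi$LAB code: the toy data, the random-forest importance estimator and all variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # 8 toy photometric features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)  # 2 informative

# Step 1: shadow features -- each column shuffled independently, so any
# importance they carry is pure noise and acts as a relevance threshold.
X_shadow = rng.permuted(X, axis=0)
X_aug = np.hstack([X, X_shadow])

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_aug, y)
importances = forest.feature_importances_
threshold = importances[X.shape[1]:].max()       # best shadow importance
relevant = np.where(importances[:X.shape[1]] > threshold)[0]

# Step 2: L1 (LASSO) shrinkage on the survivors sparsifies the weights,
# turning off the weakly relevant features.
lasso = Lasso(alpha=0.05).fit(X[:, relevant], y)
selected = relevant[np.abs(lasso.coef_) > 1e-6]
print(sorted(selected))
```

In this toy setup, the two genuinely informative features should survive both the shadow threshold and the LASSO shrinkage.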
\subsection{MLPQNA}\label{sec:mlpqna}
MLPQNA (Multi Layer Perceptron trained with Quasi Newton Algorithm) is a Multi Layer Perceptron (MLP; \citealt{Rosenblatt61}) neural network trained by a learning rule based on the Quasi Newton Algorithm (QNA). The MLP is among the most widely used feed-forward neural networks in a large variety of scientific and social contexts, such as electricity price forecasting \citep{AGGARWAL200913}, detection of premature ventricular contractions \citep{EBRAHIMZADEH2010103}, forecasting of stock-exchange movements \citep{MOSTAFA20106302}, and landslide susceptibility mapping \citep{Zare13}.\\
Furthermore, it has been successfully applied several times in the context of photometric redshifts (see, for instance, \citealt{Biviano13,Brescia13,Brescia14a,Cavuoti15,de_Jong:17,Nicastro18}). The analytical description of the method has been discussed in the contexts of both classification \citep{Brescia20153893} and regression \citep{Cavuoti12, Brescia13}.
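A rough analogue of such a network can be sketched with scikit-learn's \texttt{MLPRegressor}, whose \texttt{lbfgs} solver is a limited-memory quasi-Newton optimiser. This is only a conceptual stand-in for MLPQNA (whose actual implementation differs), and the toy photometry and redshift law are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
mags = rng.uniform(18, 24, size=(1000, 5))       # toy u,g,r,i,z magnitudes
# Synthetic redshift: a colour term plus a magnitude term, with small noise.
z = 0.1 * (mags[:, 1] - mags[:, 3]) + 0.05 * mags[:, 2] - 0.8
z = np.clip(z + rng.normal(scale=0.02, size=1000), 0, None)

# Standardise the inputs so the quasi-Newton solver converges quickly.
X = (mags - mags.mean(axis=0)) / mags.std(axis=0)
X_tr, X_te, z_tr, z_te = train_test_split(X, z, random_state=0)

# 'lbfgs' is a limited-memory quasi-Newton method, conceptually close to
# the QNA learning rule used by MLPQNA.
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X_tr, z_tr)
zphot = mlp.predict(X_te)
scatter = float(np.std((zphot - z_te) / (1 + z_te)))
print(round(scatter, 3))
```

On this smooth synthetic relation the blind-test scatter is small; real AGN photometry is of course far noisier and degenerate.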
\subsection{METAPHOR}\label{SEC:METAPHOR}
METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts; \citealt{Cavuoti17}) is a modular workflow, designed to produce the redshift PDZs through ML. Its internal engine is the MLPQNA already described in Sec.~\ref{sec:mlpqna}.
The core of METAPHOR lies in a series of perturbations of the photometry, designed to explore the parameter space of the data and to capture the uncertainty due to the photometric errors.
In practice, the procedure to determine the PDZ of individual sources can be summarized as follows: we train the MLPQNA model and perturb the photometry of the given blind test set to obtain an arbitrary number $N$ of test sets with variable photometric noise contamination. We then submit the $N + 1$ test sets (i.e., the $N$ perturbed sets plus the original one) to the trained model, thus obtaining $N + 1$ photo-z estimates. With these $N + 1$ values we perform a binning in photo-z ($0.01$ for the experiments described here), calculating for each bin the probability that the photo-z of the source falls within it. In this work we used $N = 999$, for a total of $1000$ photo-z estimates per source.
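The perturbation scheme just described can be sketched as follows. The function name is hypothetical and the simple Gaussian perturbation law is an assumption for illustration; the actual METAPHOR perturbation law is more elaborate (see \citealt{Cavuoti17}).

```python
import numpy as np

def pdz_from_perturbations(model, mags, mag_err, n_pert=999,
                           bin_width=0.01, zmax=4.0, rng=None):
    """Perturb the photometry of one source n_pert times, predict a photo-z
    for each realisation plus the original, and histogram the estimates
    into a normalised PDZ."""
    if rng is None:
        rng = np.random.default_rng(0)
    # N perturbed realisations: here a simple Gaussian draw per band.
    realisations = mags + rng.normal(scale=mag_err, size=(n_pert, mags.size))
    stacked = np.vstack([mags, realisations])        # the N + 1 test vectors
    zphot = model.predict(stacked)
    bins = np.arange(0.0, zmax + bin_width, bin_width)
    counts, _ = np.histogram(zphot, bins=bins)
    return bins[:-1], counts / counts.sum()          # normalised PDZ
```

Here `model` stands for any trained regressor exposing a `predict` method, such as the trained MLPQNA engine.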
\subsection{Statistical Estimators}\label{SEC:statindicators}
For brevity, we define $\Delta z$ as:
\begin{equation}
\Delta z = (\mbox{{$z_{\rm phot}$}}-\mbox{{$z_{\rm spec}$}})/(1+\mbox{{$z_{\rm spec}$}})
\end{equation}
Then, in order to compare the accuracy with that reported for other surveys and methods in the literature, we use the classical statistical estimators, applied to $\Delta z$, defined as follows:\\
$\bullet$ mean (or bias);\\
$\bullet$ standard deviation $\sigma$;\\
$\bullet$ $\sigma_{NMAD} = 1.4826 \times \mathrm{median}(|\Delta z|)$;\\
$\bullet$ $\sigma_{68}$, the width within which $68$\% of the $\Delta z$ distribution falls;\\
$\bullet$ $\eta$, the fraction (percentage) of outliers, i.e. sources for which $|\Delta z| > 0.15$.\\
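These estimators can be computed directly from the two redshift arrays. The helper below is a hypothetical sketch following the definitions above; $\sigma_{68}$ is taken here as the 68th percentile of $|\Delta z|$, one common convention.

```python
import numpy as np

def photoz_stats(z_phot, z_spec, outlier_cut=0.15):
    """Standard photo-z quality estimators on dz = (zp - zs) / (1 + zs)."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    return {
        "bias": float(np.mean(dz)),
        "sigma": float(np.std(dz)),
        "sigma_nmad": float(1.4826 * np.median(np.abs(dz))),
        "sigma_68": float(np.percentile(np.abs(dz), 68.0)),
        "eta_percent": float(100.0 * np.mean(np.abs(dz) > outlier_cut)),
    }
```

For example, a source with $z_{\rm spec}=3$ and $z_{\rm phot}=4$ has $|\Delta z| = 0.25 > 0.15$ and counts as an outlier.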
Due to the limited number of data samples, a canonical splitting of the dataset (or knowledge base) into training and blind test sets cannot be applied. In order to circumvent this problem, the training+test process involves a k-fold cross validation \citep{hastie09,Kohavi95astudy}: the knowledge base has been manually split into $4$ non-overlapping sub-sets. By taking each time $3$ of these sub-sets as training set and leaving the fourth as blind test set, an overall blind test on the entire knowledge base can be performed, i.e., each object of the available data sample has been evaluated in a blind way (never used in the training phase).
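The 4-fold blind-test scheme can be sketched as follows, assuming a scikit-learn-style model; `KFold` and the factory pattern here are stand-ins for the manual split described above.

```python
import numpy as np
from sklearn.model_selection import KFold

def blind_photoz(model_factory, X, z, n_splits=4):
    """Train on 3 folds, predict blindly on the 4th, and rotate, so every
    object receives a photo-z from a model that never saw it in training."""
    zphot = np.empty_like(z, dtype=float)
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kfold.split(X):
        model = model_factory().fit(X[train_idx], z[train_idx])
        zphot[test_idx] = model.predict(X[test_idx])
    return zphot
```

The concatenated predictions then feed the statistical estimators over the entire knowledge base.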
\subsection{Impact of feature analysis on photo-z}
The identification and consequent rejection of the non-relevant features allows us to obtain more accurate photo-z.
This is demonstrated in Table~\ref{tab:preliminaryResults}, where we report the accuracy and fraction of outliers for the photo-z computed with MLPQNA for the $BEST$ sample, with and without the removal of unimportant features, for both $BESTmagopt$ and $BESTmagcolopt$. Table~\ref{tab:preliminaryResultsXray} is the same as Table~\ref{tab:preliminaryResults}, but with the metrics computed by limiting the samples to the sources that eROSITA will be able to detect. The comparison between the two tables points out something already well documented in the literature for photo-z computed via SED fitting: namely, the accuracy of photo-z for AGN increases when the sample includes faint AGN, which are dominated by their host galaxies and easier to model.
As soon as the sample is limited to bright AGN, the quality of the photo-z decreases.
\begin{table}
\centering
\begin{tabular}{ccccc}
ID & $BEST$ & $BESTmagopt$& $BESTmagcolopt$\\\hline
N. of Sources & 1686 & 1686 &1686\\
bands & 14 & 12 & 20\\
|bias| & 0.0159 & 0.0105 &0.0102\\
$\sigma$ & 0.141 & 0.135 &0.121\\
$\sigma_{NMAD}$ & 0.079 & 0.074 &0.056\\
$\sigma_{68}$ & 0.091 & 0.083 &0.069\\
$\eta$ & 16.09 & 13.88 &12.74\\\hline
\end{tabular}
\caption{Accuracy of photo-z computed with MLPQNA on the \textit{BEST}, \textit{BESTmagopt} and \textit{BESTmagcolopt} samples, after the optimization of the parameter spaces with the feature analysis and selection performed with $\Phi$LAB. All quantities are calculated on blind test sets only.} \label{tab:preliminaryResults}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccc}
ID & $BEST$ & $BESTmagopt$& $BESTmagcolopt$ \\\hline
N. of Sources & 1029 & 1029 & 1029 \\
bands & 14 & 12 & 20 \\
|bias| & 0.0157 & 0.0183 & 0.0130 \\
$\sigma$ & 0.144 & 0.138 & 0.122 \\
$\sigma_{NMAD}$ & 0.078 & 0.072 & 0.057 \\
$\sigma_{68}$ & 0.092 & 0.077 & 0.074 \\
$\eta$ & 15.90 & 12.31 & 13.12 \\\hline
\end{tabular}
\caption{Same as Table~\ref{tab:preliminaryResults}, but considering only objects with F$_X>10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$.} \label{tab:preliminaryResultsXray}
\end{table}
\subsection{Impact of Photometric Errors}\label{sec:magerr}
The photometric catalogues used in Stripe 82X are relatively shallow, implying that the fainter sources generally have large photometric errors. Unlike SED-fitting, until recently ML techniques
could not handle the errors associated with the measurements, and the same weight was incorrectly assumed for each photometric value \citep[but see][for a counter-example application]{Reis19}. We assessed how this impacts the results by reducing $BESTmagcolopt$ to sub-samples of decreasing photometric errors in all the bands. More specifically, we considered only sources with a magnitude error
smaller than $0.3$, $0.25$ and $0.2$, thus reducing the original sample by $\sim15$\%, $\sim19$\% and $\sim24$\%, respectively (see Table~\ref{tab:magcut}).
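The error-based sub-sampling can be sketched as a simple per-band cut; the function name and array layout are hypothetical.

```python
import numpy as np

def error_limited_subsample(mags, mag_errs, max_err=0.3):
    """Keep only sources whose photometric error is below max_err
    in *every* band; returns the surviving rows and the boolean mask."""
    keep = np.all(mag_errs < max_err, axis=1)
    return mags[keep], keep
```

Tightening `max_err` from 0.3 to 0.2 reproduces the progressively smaller sub-samples discussed above.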
\begin{table}\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{ccccc}
& \multirow{2}{*}{$BESTmagcolopt$} & \multicolumn{3}{c}{with Mag\_err limit}\\
& & 0.3 & 0.25 & 0.2 \\\hline
N. of sources & 1686 & 1442 & 1372 & 1275 \\
|bias| & 0.0102 & 0.0152 & 0.0126 & 0.0122 \\
$\sigma$ & 0.121 & 0.134 & 0.157 & 0.151 \\
$\sigma_{NMAD}$ & 0.056 & 0.056 & 0.059 & 0.054 \\
$\sigma_{68}$ & 0.069 & 0.065 & 0.069 & 0.065 \\
$\eta$ & 12.74 & 11.93 & 12.90 & 12.24 \\
\end{tabular}}
\caption{Statistical results of the \textit{BESTmagcolopt} photo-z estimation experiments after having removed objects with photometric errors larger than $0.3$, $0.25$ and $0.2$ respectively.}\label{tab:magcut}
\end{table}
Considering only sources with small photometric errors provides the best accuracy ($\sigma_{NMAD}=0.054$), but the bias increases with respect to the original sample. The best trade-off is obtained by keeping sources with a photometric error smaller than $0.3$\,magnitudes.
\subsection{X-ray depth}\label{sec:xdepth}
The general experiment on Stripe 82X has demonstrated that photo-z of the same quality as, or even better than, those computed via SED fitting can be obtained for X-ray detected AGN also with ML, as long as a large number of photometric points is available and the spectroscopic sample is representative. We are also interested in evaluating the accuracy obtained for sub-samples of the sources, such as the brightest or the faintest detected in X-rays, and in particular the accuracy that can be expected for eROSITA. The final depth of the all-sky survey after four years of observations will be $\sim 10^{-14}$\,erg\,s$^{-1}$\,cm$^{-2}$. The survey will also detect AGN at high redshift, but it will be dominated by bright, nearby AGN, for which the computation of photo-z is typically more challenging.
Table~\ref{tab:preliminaryResultsXray} already provided a partial answer to this question: the photo-z accuracy for X-ray bright AGN is worse than for the entire sample. This means that the good results obtained in the second column are driven by the fact that faint AGN, easier to fit because they are galaxy-dominated, are more numerous than bright AGN. However, in that experiment the training was done using all the sources in $BESTmagcolopt$, with the cut in X-ray flux applied \textit{a posteriori} to the output.
In the following experiment, the training sample is instead also limited to the bright sources that eROSITA will detect. The result of this test is presented in Table~\ref{tab:xray}. By comparing the last column of that table with the last column of Table~\ref{tab:preliminaryResultsXray}, we see that, while the bias and $\sigma_{\rm NMAD}$ remain unchanged, the fraction of outliers decreases. This means that we could improve our results if, in addition to good photometry for the entire sample, we could enlarge the training sample of bright objects by the time the eROSITA survey becomes available.
Photo-z computed via ML for X-ray selected sources in 3XMM-DR6 and 3XMM-DR7 were recently presented also in \citet{Ruiz18} and \citet{Meshcheryakov18}, respectively. While we are in overall agreement with the former, our results are less optimistic than those obtained by the latter group. However, this is not surprising when noting that their results are obtained specifically for QSO or type $1$ sources only, having as targets sources in ROSAT and 3XMM-DR7 that are present in the spectroscopic catalogue SDSS-DR14Q \citep{Paris18}. In our work there is no pre-selection, and the sample includes QSO, type $1$ and type $2$ AGN, and galaxies.
\begin{table}
\centering
\begin{tabular}{ccc}
& $BESTmagcolopt$ & at eROSITA depth\\ \hline
N. of sources & 1686 & 1029 \\
|bias| & 0.010 & 0.013 \\
$\sigma$ & 0.121 & 0.142 \\
$\sigma_{NMAD}$ & 0.056 & 0.064 \\
$\sigma_{68}$ & 0.069 & 0.075 \\
$\eta$ & 12.74 & 12.73 \\
\end{tabular}
\caption{Comparison between statistics for the complete best sample and for the sub-sample limited to eROSITA flux also in the training sample.\label{tab:xray}}
\end{table}
\subsection{Point-Like vs Extended}\label{sec:objtypes}
Given the resolution of the ground-based optical imaging, extended sources can only be found at low redshift (z$_{\mbox{spec}}$ $\leq 1$), where the host galaxy contributes significantly to the emission. In contrast, point-like sources dominate the high-redshift regime, where the emission is due to the nuclear component.
This is taken into account when computing photo-z via SED-fitting by adopting a prior in absolute magnitude \citep[e.g.,][ \citetalias{Ananna2017}]{Salvato09,Salvato11,Fotopoulou12,Hsu14}. More recently, the separation of the sources in these two subgroups is becoming the standard also when computing photo-z via ML \citep[e.g.,][]{Mountrichas17, Ruiz18}.
One limitation of this method is that it relies on images affected by the quality of the seeing, which can alter the morphological classification of the sources.
This has been demonstrated in \citet[]{Hsu14}, where the authors showed how sources can change classification (and thus their photo-z value) depending on whether the images used are from HST or from the ground.
In Stripe 82X, out of $1469$ sources at z$>$1, $77$ ($\sim$5\%) are classified as extended. In the $BESTmagcolopt$ sample the fraction is approximately the same ($27/704$, $\sim$4\%). Given the resolution of SDSS, this is clearly nonphysical.\\
In this section we first measure separately the accuracy for the point-like and the extended sources in the $BESTmagcolopt$ sample, using a mixed training sample. In our second approach we created two training samples: one including only sources classified as "extended" and with redshift smaller than one, and a second including only sources classified as "point-like" and/or at redshift larger than one. The sources in the test samples were separated accordingly.\\
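The two training selections just described can be sketched as boolean masks; the morphology labels used here are hypothetical placeholders for the catalogue's actual flags.

```python
import numpy as np

def split_training(morph, z_spec):
    """Two training selections as in the text: 'extended' sources below
    z = 1, and 'point-like' sources and/or anything at z >= 1."""
    extended = (morph == "extended") & (z_spec < 1.0)
    pointlike = (morph == "pointlike") | (z_spec >= 1.0)
    return extended, pointlike
```

An extended source at z $>$ 1 (nonphysical, as argued above) is thus assigned to the point-like training set.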
The resulting statistics are shown in Table~\ref{tab:objtype}, with the first column reporting for convenience the first one of the two previous tables.
The second and third columns show the accuracy for the same sources of the $BESTmagcolopt$ sample, but this time divided according to their extension.\\
As expected, the photo-z for extended sources are more reliable, with $50$\% fewer outliers than for the point-like sources. Columns four and five show the extreme case, in which the training sample is split in two \textit{ab initio}. In this case, the photo-z for extended sources have the same accuracy as the best photo-z for normal galaxies, virtually without outliers or bias.\\
In contrast, the photo-z for point-like sources show about $20$\% of outliers. This is clearly understood, again in terms of the SEDs of these objects, by comparing
Col3 with Col5 of the table. It suggests that the training sample for point-like sources must also include sources with a contribution from the host. The last column of the table shows the precision achieved for the entire sample with the two dedicated training samples (one specific for extended, nearby sources and a generic one for the rest).
\begin{table*}
\centering
\begin{tabular}{cccc||ccc}
& $BESTmagcolopt$ & \multicolumn{2}{c||}{limited to:} & \multicolumn{3}{c}{with specific training:}\\
& & Extended & Point Like & Extended & Point Like & Combined\\
\hline
N. of sources & 1686 & 598 & 1088 & 598 & 1088 & 1686 \\
|bias| & 0.0102 & 0.0097 & 0.0107 & 0.0005 & 0.0168 & 0.0099 \\
$\sigma$ & 0.121 & 0.089 & 0.136 & 0.051 & 0.142 & 0.109 \\
$\sigma_{NMAD}$ & 0.056 & 0.042 & 0.068 & 0.029 & 0.082 & 0.053 \\
$\sigma_{68}$ & 0.069 & 0.050 & 0.082 & 0.032 & 0.096 & 0.071 \\
$\eta$ & 12.74 & 7.69 & 15.57 & 1.67 & 19.40 & 12.59 \\
\end{tabular}
\caption{Photo-z estimation accuracy for the sources in $BESTmagcolopt$ using a unique training sample (Col1), afterwards divided between Extended
(Col2) and Point-Like (Col3). The photo-z are also computed by splitting the sources between the two groups and training them separately.
For this case the accuracy for Extended and Point-Like are presented in Col4 and Col5. In Col6 we recombine the sample. The improvement can be
seen by comparing the column \textit{Combined} with \textit{$BESTmagcolopt$}. All quantities are calculated on the blind test set extracted from the \textit{BESTmagcolopt} sample.} \label{tab:objtype}
\end{table*}
\subsection{Catalogue Release} \label{app:catalog}
The produced catalogue of photo-z, obtained from the different cross-matches among the available surveys as well as their final best combination,
is made publicly available at \href{https://academic.oup.com/mnras/article/489/1/663/5549519#supplementary-data}{MNRAS} online. A sample of its internal structure is shown in Table~\ref{tab:catalogue}.
The catalogue is indexed on the first column, which can be used to retrieve all other information about spectroscopic redshifts and X-ray
source counterparts by cross-matching this catalogue with the one referred to in \citetalias{Ananna2017}. The other columns, from left to right,
are, respectively, the RA and DEC of the optical counterparts, followed by the photo-z estimates obtained from all the discussed combinations of surveys,
i.e. SDSS, VHS, IRAC and WISE. The last column reports the best photo-z obtained from all the previous combinations, as explained in
the main text.